Search Results: "Iustin Pop"

20 March 2024

Iustin Pop: Corydalis 2024.12.0 released

I've been working on Corydalis for the past few weeks, and was in no hurry to make a release, but last evening I found the explanation for a really, really, really annoying issue: unintended zooming on touch interfaces in the image viewer. Or more precisely, I found this post from 2015 (9 years ago!): https://webkit.org/blog/5610/more-responsive-tapping-on-ios/ and I finally understood things. And decided this was the best occasion for cutting a new release. Of course, the release contains more things; see the changelog on the release page: https://github.com/iustin/corydalis/releases/tag/v2024.12.0. And of course, it's up on http://demo.corydalis.io. After putting out the new release, I saw that release tagging in the pre-built binaries is still broken, and found the reason at https://github.com/actions/checkout/issues/290. Will fix for the next release. The stream of bugs never ends.

9 March 2024

Iustin Pop: Finally learning some Rust - hello photo-backlog-exporter!

After 4? 5? or so years of wanting to learn Rust, over the past 4 or so months I finally bit the bullet and found the motivation to write some Rust. And the subject. And I was, and still am, thoroughly surprised. It's like someone took Haskell, simplified it to some extent, and wrote a systems language out of it. Writing Rust after Haskell seems easy and pleasant, though there are downsides too; overall, however, one can clearly see there's more movement in Rust, and the quality of some parts of the toolchain is better (looking at you, rust-analyzer, compared to HLS). So, with that, I've just tagged photo-backlog-exporter v0.1.0. It's a port of a Python script that was run as a textfile collector, which meant updates every ~15 minutes, since it was a bit slow to start; I then rewrote it in Go (but I don't like Go the language, plus the GC - if I have to deal with a GC, I'd rather write Haskell), then finally rewrote it in Rust.

What does this do? It exports metrics for Prometheus based on the count, age and distribution of files in a directory. These files being, for me, the pictures I still have to sort, cull and process, because I never have enough free time to clear out the backlog. The script is kind of designed to work together with Corydalis, but since it doesn't care about file content, it can also double (easily) as a simple file count/age exporter. And to my surprise, writing in Rust is so pleasant that the feature list is greater than the original Python script's, and - compared to that untested script - I've rather easily achieved a very high coverage ratio. Rust has multiple types of tests, and the combination allows getting pretty far down into the details of testing; I had to combine a (large) number of testing crates to get it expressive enough, but it was worth the effort. The last find from yesterday, assert_cmd, is excellent for describing tests/assertions in Rust itself, rather than via a separate, new DSL, like I was using shelltest for, in Haskell.

To some extent, I feel like I found the missing arrow in the quiver. Haskell is good, quite very good for some types of workloads, but of course not all, and Rust complements that very nicely, with lots of overlap (as expected). Python can fill in any quick-and-dirty scripting needed. And I just need to learn more frontend, specifically TypeScript (the language, not referring to any specific libraries/frameworks), and I'll be ready for AI to take over coding. So, for now, I'll need to split my free time coding between all of the above, and keep exercising my skills. But so glad to have found a good new language!

3 March 2024

Iustin Pop: New corydalis 2024.9.0 release!

Obligatory and misused quote: "It's not dead, Jim!" I've kind of dropped the ball lately on organising my own photo collection, but February was a pretty good month and I managed to write some more code for Corydalis, ending up with the aforementioned new release. The release is not a big one, but I did manage to solve one thing that was annoying me greatly: the lack of ability to play videos inline in one of the two picture viewing modes (in my preferred mode, in fact). Now, whether you're browsing through pictures, or looking at pictures one-by-one, you can in both cases play videos easily and, to some extent, "as it should be". No user docs for that, yet (I actually need to split the manual into user/admin/developer parts). I did some more internal cleanups, and I've enabled building release zips (since that's how GitHub Actions creates artifacts), which means it should be 10% easier to test this. The remaining 90% is configuring it and pointing it to picture folders and and and, so this is definitely not plug-and-play. The diff summary between 2023.44.0 and 2024.9.0 is: 56 files changed, 1412 insertions(+), 700 deletions(-). Which is not bad, but also not too much. The biggest churn was, as expected, in the viewer (due to the aforementioned video playing). The "scary" part is that the TypeScript code is now at 7.9% (and a tiny bit more JS, which I can't convert yet due to lack of type definitions upstream). I say "scary" in quotes, because I would actually like to know TypeScript better, but no time. The new release can be seen in action on demo.corydalis.io, and as always, just after release I found two minor issues. Well, there will be future releases. For now, I've made an open-source package release, which I didn't do in a while, so I'm happy. See you!

18 February 2024

Iustin Pop: New skis, new fun!

As I wrote a bit back, I had a really, really bad fourth quarter in 2023. As the new year approached, and we were getting ready to go on a ski trip, I wasn't even sure if and how much I'd be able to ski. And I felt so out of it that I didn't even buy a ski pass for the whole week, just bought one day to see if a) I still like it, and b) my knee can deal with it. And, of course, it was good. It was good enough that I ended up skiing the entire week, and my knee got better during the week. WTH?! I don't understand this anymore, but it was good. Good enough that this early year trip put me back on track and I started doing sports again.

But the main point is that during this ski week, talking to the teacher, I realised that my ski equipment is getting a bit old. I bought everything roughly ten years ago, and while it still holds up OK, my ski skills have improved since then. I said to myself: 10 years is a good run, I'll replace the skis this year, the boots & helmet next year, etc. I didn't expect much from new skis - I mean, yes, better skis, but what does "better" mean? Well, once I'd read enough forum posts, apparently the skis I selected were "that good", which to me meant "not bad". Oh my, how wrong I was! Double, triple wrong! Rather than fighting with the skis, it's enough to think about what I want to do, and the skis do it. Before, I felt OK-ish, maybe 10% limited by my previous skis, but the new skis are really good, and I also know that I'm only at 30% or so of what they can do - so, room to grow. For now, I am able to ski faster, longer, and I feel less tired than before. I've actually compared, and I can do twice the distance in a day and feel slightly less tired at the end. I've moved from "this black is cool but a bit difficult, I'll do another run later in the day when I've recovered" to "how cool, this black is quite empty of people, let's stay here for 2-3 more rounds". The skis are new, and I haven't used them in all the places I'm familiar with - but the upgrade is huge. The people on the ski forum were actually not exaggerating, I realise now. Stöckli++, and they're also made in Switzerland. Can't wait to get back to Saas Fee and to the couple of slopes that were killing me before, to see how they feel now. So, very happy with this choice. I'd be even happier if my legs were less damaged, but well, you can't win them all. And not least, the skis are also very cool looking.

31 December 2023

Iustin Pop: Happy New Year!

Happy New Year everyone! Goodbye 2023: you were a difficult year, along multiple axes. Learned new things, learned not-pleasant things, and mostly failed at becoming better. Hello 2024: I'm hoping I can do better in the coming year. We'll see. My goal list is quite long, and ambitious. But all plans meet reality at one point, so who knows where 2024 will end. In any case - wishing all good people health, wisdom, and a good year.

3 December 2023

Iustin Pop: Life, getting sick and unfit

Like clockwork, like every autumn, I got sick again. Just a flu, actually probably two in a row. I don't really understand it - from January to September I'm feeling really awesome, and I manage to do sports five days a week, or more. Then September comes, and things start degrading, and then in October or November I get sick and it takes me ~3 weeks to recover, during which I'm not even reliably managing 5K steps a day (from walking), not even talking about running or swimming or biking. My Garmin statistics for November compared to October (which wasn't a good month either) are depressing. Let's not even bring up, for example, July. I can see three potential pathways; basically all three are similar, I'm just not sure what the root cause is (vs. a trigger). I just did a vitamin D test, and despite regularly taking supplements, it was below the normal range. So clearly, somewhere there, lack of vitamin D is a problem. I probably should do a yearly check-up at the end of September? Sigh, I still haven't found the detailed user manual for Homo Sapiens. If anyone has it, specifically for version "late 1900s", I'd be thankful. Till next time, hopefully by then things will get better.

5 November 2023

Iustin Pop: Corydalis: new release and switching to date-versioning

After 4 years, I finally managed to tag a new release of Corydalis. There's nothing special about this specific point in time, but there was also nothing special in the last four years, so I gave up on trying to do any kind of usual versioned release, and simply switched to CalVer. So, Corydalis 2023.44.0 is up and running on https://demo.corydalis.io. I am 100% sure I'm the only one using it, but doing it open-source is nicer, and I still can't imagine another way of managing/browsing my photo library (that keeps it under my own control), so I keep doing 10-20 commits per year to it. There's a lot of bugs to fix and functionality to improve (main thing - a real video player), but until I can find a chunk of free time, it is what it is.

31 October 2023

Iustin Pop: Raspberry PI OS: upgrading and cross-grading

One of the downsides of running Raspberry PI OS is the fact that - not having the resources of pure Debian - upgrades are not recommended, and cross-grades (migrating between armhf and arm64) are not even mentioned. Is this really true? It is, after all, a Debian-based system, so it should in theory be doable. Let's try!

Upgrading

The recently announced release based on Debian Bookworm here says:
We have always said that for a major version upgrade, you should re-image your SD card and start again with a clean image. In the past, we have suggested procedures for updating an existing image to the new version, but always with the caveat that we do not recommend it, and you do this at your own risk. This time, because the changes to the underlying architecture are so significant, we are not suggesting any procedure for upgrading a Bullseye image to Bookworm; any attempt to do this will almost certainly end up with a non-booting desktop and data loss. The only way to get Bookworm is either to create an SD card using Raspberry Pi Imager, or to download and flash a Bookworm image from here with your tool of choice.
Which means, it's time to actually try it. And it turns out it's actually trivial, if you use RPIs as headless servers. I had only three issues (the first two fixes are collected into a short sketch after the list):
  • if using an initrd, the new initrd-building scripts/hooks are looking for some binaries in /usr/bin, and not in /bin; solution: manually install the usrmerge package, and then re-run dpkg --configure -a;
  • also if using an initrd, the scripts are looking for the kernel config file in /boot/config-$(uname -r), and the Raspberry PI kernel package doesn't provide this; workaround: modprobe configs && zcat /proc/config.gz > /boot/config-$(uname -r);
  • and finally, on normal RPI systems that don't use manual configuration of interfaces in /etc/network/interfaces, migrating from the previous dhcpcd to NetworkManager will break network connectivity, and require you to log in locally and fix things.
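To put the first two fixes in one place, the commands from the items above amount to roughly this:
# fix 1: provide the merged-/usr layout that the new initrd hooks expect
apt install usrmerge
dpkg --configure -a
# fix 2: give the initrd scripts a kernel config file to read
modprobe configs
zcat /proc/config.gz > "/boot/config-$(uname -r)"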
I expect most people to hit only the 3rd issue, and almost no-one to use an initrd on a Raspberry PI. But, overall, aside from these issues and a couple of cosmetic ones (login.defs being rewritten from scratch and showing a baffling diff, for example), it was easy. Is it worth doing? Definitely. I had no data loss, and no non-booting system.

Cross-grading (32 bit to 64 bit userland)

This one is actually painful. Internet searches go from "it's possible, I think" to "it's definitely not worth trying"; examples abound. Aside from these, there are a gazillion other posts about switching the kernel to 64 bit. And that's worth doing on its own, but it's only half the way. So, armed with two different systems - a RPI4 4GB and a RPI Zero W2 - I tried to do this. And while it can be done, it takes many hours - the first system took about 6 hours, the second the same, and a third RPI4 probably took only ~3 hours since I knew the problematic issues by then. So, what are the steps? Basically:
  • install devscripts, since you will need dget
  • enable new architecture in dpkg: dpkg --add-architecture arm64
  • switch over apt sources to include the 64 bit repos, which are different than the 32 bit ones (Raspberry PI OS did a migration here; normally a single repository has all architectures, of course)
  • downgrade all custom rpi packages/libraries to the standard bookworm/bullseye version, since dpkg won't usually allow a single library package to have different versions (I think it's possible to override, but I didn't bother)
  • install libc for the arm64 arch (this takes some effort, it's actually a set of 3-4 packages)
  • once the above is done, install whiptail:arm64 and rejoice at running a 64-bit binary!
  • then painfully go through sets of packages and migrate the set to arm64:
    • sometimes this works via apt, sometimes you'll need to use dget and dpkg -i
    • make sure you download both the armhf and arm64 versions before doing dpkg -i, since you'll need to roll back some installs
  • at one point, you'll be able to switch over dpkg and apt to arm64, at which point the default architecture flips over; from here, if you've done it at the right moment, it becomes very easy; you'll probably need an apt install --fix-broken, though, at first
  • and then, finish by replacing all packages with arm64 versions
  • and then, dpkg --remove-architecture armhf, reboot, and profit!
But it's tears and blood to get to that point.
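As a rough sketch only - package names and ordering will likely need adjusting per system, and the repository switch above must already be done - the very first steps look something like this:
# enable the second architecture and refresh the package lists
dpkg --add-architecture arm64
apt update
# install the 64 bit C library (in practice a small set of packages gets pulled in)
apt install libc6:arm64
# first native 64 bit binary, as a sanity check
apt install whiptail:arm64
file "$(command -v whiptail)"   # should now report a 64-bit ARM executable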

Pain point 1: RPI custom versions of packages

Since the 32 bit armhf architecture is a bit weird - having many variations - it turns out that Raspberry PI OS has many packages that are very slightly tweaked to disable a compilation flag or work around build/test failures, or whatnot. Since we're talking here about 64-bit capable processors, almost none of these are needed, but they do make life harder since the 64 bit versions don't have those overrides. So what is needed would be to say "downgrade all armhf packages to the version in the Debian upstream repo", but I couldn't find the right apt pinning incantation to do that. So what I did was to remove the 32 bit repos, then use apt-show-versions to see which packages have versions that are no longer in any repo, then downgrade them. There's a further, minor complication: there were about 3-4 packages with the same version but a different hash (!), which simply needed apt install --reinstall, I think.
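Roughly, that step looked like this - the grep pattern is an approximation of what apt-show-versions prints for packages without an archive candidate, and the package name/version in the downgrade are just placeholders:
# list installed packages whose exact version is no longer in any configured repo
apt-show-versions | grep 'No available version'
# then downgrade each of them to the plain Debian version, e.g.:
apt install --allow-downgrades some-rpi-tweaked-package=1.2.3-4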

Pain point 2: architecture independent packages

There is one very big issue with dpkg in all this story, and it is the one that makes things very problematic: while you can have a library package installed multiple times for different architectures, as the files live in different paths, a non-library package can only be installed once (usually). For binary packages (arch:any), that is fine. But architecture-independent packages (arch:all) are problematic, since usually they depend on a binary package, but they always depend on the default architecture version! Hrmm, and I just realise I don't have logs from this, so I'm only ~80% confident. But basically:
  • vim-solarized (arch:all) depends on vim (arch:any)
  • if you replace vim armhf with vim arm64, this will break vim-solarized, until the default architecture becomes arm64
So you need to keep track of which packages apt will de-install, for later re-installation. It is possible that Multi-Arch: foreign solves this, per the Debian wiki, which says:
Note that even though Architecture: all and Multi-Arch: foreign may look like similar concepts, they are not. The former means that the same binary package can be installed on different architectures. Yet, after installation such packages are treated as if they were native architecture (by definition the architecture of the dpkg package) packages. Thus Architecture: all packages cannot satisfy dependencies from other architectures without being marked Multi-Arch foreign.
It also has warnings about how to properly use this. But, in general, not many packages have it, so it is a problem.

Pain point 3: remove + install vs overwrite

It seems that depending on how the solver computes a solution, when migrating a package from 32 to 64 bit, it can choose either to:
  • overwrite in place the package (akin to dpkg -i)
  • remove + install later
The former is OK, the latter is not. Or, actually, it might be that apt can never do this; for example (edited for brevity):
# apt install systemd:arm64 --no-install-recommends
The following packages will be REMOVED:
  systemd
The following NEW packages will be installed:
  systemd:arm64
0 upgraded, 1 newly installed, 1 to remove and 35 not upgraded.
Do you want to continue? [Y/n] y
dpkg: systemd: dependency problems, but removing anyway as you requested:
 systemd-sysv depends on systemd.
Removing systemd (247.3-7+deb11u2) ...
systemd is the active init system, please switch to another before removing systemd.
dpkg: error processing package systemd (--remove):
 installed systemd package pre-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
 systemd
Processing was halted because there were too many errors.
But at the same time, overwriting in place is all good - via dpkg -i from /var/cache/apt/archives. In this case it manifested via a prerm script; in other cases it manifests via dependencies that are no longer satisfied for packages that can't be removed, etc. etc. So you will have to resort to dpkg -i a lot.
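For illustration, the overwrite-in-place route is roughly this (the file name glob is only a sketch; match it to whatever version apt actually downloaded):
# fetch the arm64 package into the apt cache, without installing or removing anything
apt install --download-only systemd:arm64
# then overwrite the installed armhf package in place
dpkg -i /var/cache/apt/archives/systemd_*_arm64.deb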

Pain point 4: lib- packages that are not lib

During the whole process, it is very tempting to just go ahead and install the corresponding arm64 package for all armhf lib packages, in one go, since these can coexist. Well, this simple plan is complicated by the fact that some packages are named libfoo-bar, but are actually holding (e.g.) the bar binary for the libfoo package. Examples:
  • libmagic-mgc contains /usr/lib/file/magic.mgc, which conflicts between the 32 and 64 bit versions; of course, it's the exact same file, so this should be an arch:all package, but…
  • libpam-modules-bin and liblockfile-bin actually contain binaries (per the -bin suffix)
It's possible to work around all this, but it changes a 1 minute command:
# apt install $(dpkg -l | grep ^ii | awk '{print $2}' | grep :armhf | sed -e 's/:armhf/:arm64/')
into a 10-20 minute fight with packages (like most other steps).

Is it worth doing?

Compared to the simple bullseye to bookworm upgrade, I'm not sure about this. The result? Yes, definitely: the system feels - weirdly - much more responsive, logged in over SSH. I guess the arm64 base architecture has some more efficient ops than the lowest common denominator "armhf", so to say (e.g. there was in the 32 bit version some rpi-custom package with string ops), and thus migrating to 64 bit makes more things "faster", but this is subjective so it might actually not be true. But from the point of view of the effort? Unless you like to play with dpkg and apt, and understand how these work and break, I'd rather say: migrate to ansible and automate the deployment. It's doable, sure, and by the third system I got this nailed down pretty well, but it was a lot of time spent. The good aspect is that I did 3 migrations:
  • rpi zero w2: bullseye 32 bit to 64 bit, then bullseye to bookworm
  • rpi 4: bullseye to bookworm, then bookworm 32bit to 64 bit
  • same, again, for a more important system
And all three worked well, with no data loss. But I'm really glad I have this behind me; I probably wouldn't do a fourth system, even if forced. And now, waiting for the RPI 5 to be available. See you!

24 October 2023

Iustin Pop: OS updates are damn easy nowadays!

I'm baffled at how simple and reliable operating system updates have become. Upgraded Debian bullseye to bookworm, across a few systems: easy. On VMs, it's even so fast that installing the base system from scratch probably takes the same time. But Linux/Debian of course works well. Shall we look at MacOS? Takes longer, but it just runs, reboots a couple of times, and then, bam, it's up and with windows restored. Surely Windows is the outlier? Nah, I finally said yes to the "Upgrade to Win 11?" prompt, and it took a while to download (why is Win/Mac so heavy and slow to download? Debian just flies!), then rebooted a few times, and again, bam, it's up and GoG and Steam still work. I swear, there was a time when updating the OS felt like an accomplishment. Now, except for Raspberry Pi OS ("upgrades not supported, reinstall!" - but I bet they also work), upgrading an actual OS is just like a new Android/iOS version. And yes, get off my lawn! I still have a lower digit count Slashdot ID.

20 October 2023

Iustin Pop: How to set a per-app locale in MacOS

After spending ~20+ years with a Linux desktop, I'm trying to expand my desktop setup to include MacOS (well, desktop/laptop, I mean end user in general). And to my surprise, there's no clear repository of MacOS info. Man pages yes, some StackOverflow, some Apple forums, but no canonical version. Or, I didn't find it - please enlighten me! Another issue is that Apple apparently changes behaviour without clearly documenting it. In this specific case, the region part of the locale went through significant churn lately. So, my goal: a per-app locale. In Linux, this would simply mean running the app with the correct environment variables, but MacOS deprecated this a while back (it used to work). After reading what I could, the solution is quite easy, just not obvious:
% defaults read .GlobalPreferences | grep en_
    AKLastLocale = "en_CH";
    AppleLocale = "en_CH";
% defaults read -app FooBar
(has no AppleLocale key)
% defaults write -app FooBar AppleLocale en_US
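And, should you want to go back to the global setting later, deleting the key should (I presume) suffice:
% defaults delete -app FooBar AppleLocale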
And that's it. Now, the defaults man page says the global-global is NSGlobalDomain; I don't know where I got the .GlobalPreferences from. But I only needed to know the key name (in this case, AppleLocale - of course it couldn't be LC_ALL/LANG). One day I'll know MacOS better, but I've been trying to learn more for 2+ years now, and it's not a smooth ride. Old dog, new tricks, right?

11 October 2023

Iustin Pop: Not-quite-announcement: this blog is not entirely dead

Something, something, then something else, and it's been another six months since I last wrote anything. The world is crazy, somehow there's no time for anything, and yet life moves forward, inexorably. (Well, without going into gory side-notes about how evil stops life in various parts of the world.) As proof that this blog is not dead, I've finally implemented proper per-page keywords support, rather than the hard-coded keywords that were present before. Well, those defaults are still present in all pages that don't declare keywords; I will probably remove them at one point. Implementing this in Hakyll is not hard at all, if one semi-regularly writes code. There are a number of examples out there, though some details were confusing. Anyway, after "2 files changed, 23 insertions(+), 8 deletions(-)", this page is the first one to have any keywords. And the funny thing is, after initially struggling to write a single line of Haskell, the final version of the keywords handling is something like this:
 
    where renderKeywords =
            return . escapeHtml . intercalate "," . map trim . splitAll ","
And I wrote that as-is, it compiled successfully, and it did the right thing on the first try. I only write some Haskell code 2-3 times per year, but man, I love this language. Until next time - hopefully before 2024.

16 April 2023

Iustin Pop: Quick note: nftables and TCP MSS clamping

Another short note to myself, and to whomever cares/searches later for nft or nftables TCP MSS clamping. Somewhat surprisingly, many/most of the instructions found by Google are still related to iptables. I guess people stopped writing blog posts by the time nftables became widely used? The only official documentation I can find is in the official wiki, but it doesn't list/explain exactly how this works or under which conditions. I think this results in posts like this one that suggest additionally limiting the packets it acts on using a size limiter, in order to prevent changing small packets. Looking at the code that actually implements this, in net/netfilter/xt_TCPMSS.c (and not in the lower-case-named file, which is about matching, TIL), in the function tcpmss_mangle_packet, first there is this comment:
/* Never increase MSS, even when setting it, as
 * doing so results in problems for hosts that rely
 * on MSS being set correctly.
 */
So at least the intent is that this always does the right thing (only decrease). Second, the code does correctly look at both directions of the packet when using auto-clamping (set rt mtu rather than set 1452), in the if branch for XT_TCPMSS_CLAMP_PMTU. This means it's safer to use auto-clamping, rather than manually setting the value. And finally, there is handling of some corner cases as well (a syn packet with data, or a syn packet without the MSS option - unlikely for modern stacks - in which case it defaults to minimal values). All in all, it seems to me that it should always be correct to simply do what the wiki recommends, setting this for all packets traversing the host:
nft add rule ip filter forward tcp flags syn tcp option maxseg size set rt mtu
Of course, if you'd rather not do it always, but only for external interfaces, make sure you set it in both directions:
nft add rule ip filter forward iifname ppp0 tcp flags syn tcp option maxseg size set rt mtu
nft add rule ip filter forward oifname ppp0 tcp flags syn tcp option maxseg size set rt mtu
And that should be it. Well, use iifgroup/oifgroup for better rules.
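If you want to double-check what actually got installed, listing the chain should show the clamping rules (assuming the same ip filter forward chain as above):
nft list chain ip filter forward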

21 August 2022

Iustin Pop: Note to self: Don't forget Qemu's discard option

This is just a short note to myself, and to anyone who might run VMs via home-grown scripts (or systemd units). I expect modern VM managers to do this automatically, but for myself, I have just a few hacked-together scripts. By default, QEMU (at least as of version 7.0) does not honour/pass discard requests from block devices to the underlying storage. This is a sane default (like lvm's default setting), but with long-lived VMs it can lead to lots of wasted disk space. I keep my VMs on SSDs, which is limited space for me, so savings here are important. Older Debian versions did not trim automatically, but nowadays they do (which is why this is worth enabling for all VMs), so all you need is to pass the discard option on the drive - see the sketch at the end of this note. And the next trim should save lots of disk space. It doesn't matter much if you use raw or qcow2, both will know to unmap the unused disk, leading to less disk space used. This part seems to me safe security-wise, as long as you trust the host. If you have pass-through to the actual hardware, it will also do proper discard at the SSD level (with the potential security issues arising from that). I'm happy with the freed up disk space!

Note: If you have (like I do) Windows VMs as well, using paravirt block devices, make sure the driver is recent enough. One interesting behaviour from Windows: it looks like the default cluster size is quite high (64K), which with many small files will lead to significant overhead. But, either I misunderstand, or Windows actually knows how to unmap the unused part of a cluster (although it takes a while). So in the end, the backing file for the VM (19G) is smaller than the disk used as reported in Windows (23-24G), but higher than the "size on disk" of all the files (17.2G). Seems legit, and it still boots. Most Linux file systems have much smaller block sizes (usually 4K), so this is not a problem for them.
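For reference, a minimal sketch of what this looks like in one of those hand-rolled scripts - memory size, paths and device type here are just placeholders, the relevant bit is discard=unmap on the -drive option:
# discards from the guest are passed through to guest.qcow2
qemu-system-x86_64 \
  -m 2048 \
  -drive file=/var/lib/vms/guest.qcow2,format=qcow2,if=virtio,discard=unmap \
  -nographic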

20 June 2022

Iustin Pop: Experiment: A week of running

My sports friends know that I haven't been able to really run in many, many years, due to a recurring injury that was not fully diagnosed and which, after many sessions with the doctor, ended up with an OK-ish state for day-to-day life but also with these words: "Maybe running is just not for you?" The year 2012 was my "running year". I went to a number of races, wrote blog posts, then slowly started running only rarely; a few years later I was really only running once in a while, and coupled with a number of bad ideas of the type "let's run today after a long break, but a lot", I started injuring my foot. Add a few more years, some more kilograms on my body, one event of jumping with a kid on my shoulders and landing on my bad foot, and the setup was complete. Doctor visits, therapy, slow improvements, but not really solving the problem. 6 month breaks, small attempts at running, pain again, repeat, pain again, etc. It ended up with me acknowledging that yes, maybe running is not for me, and I should really give it up.

Incidentally, in 2021, as part of me trying to improve my health/diet, I tried something that is not important for this post, and for the first time in a long time, I was fully, 100% pain-free in my leg during day-to-day activities. Huh, maybe this is not purely related to running? From that point on, my foot became, very slowly, better. I started doing short runs (2-3km), especially on holidays where I can't bike, and if I was careful, it didn't go too badly. But I "knew" I can't run, so these were rare events. In April this year, on vacation, I ran a couple of times - 20km distance. In May, 12km. Then, there was a Garmin badge I really wanted, so against my good judgement, I did a run/walk (2:1 ratio) the previous weekend, and to my surprise, no unwanted side-effects. And I got an idea: what if I do short run/walks an entire week? When does my foot "break"? I mean, by now I knew that a short (3-4, maybe 5km) run that has pauses doesn't negatively impact my foot. What about the 2nd one? Or the 3rd one? When does it break? Is it distance, or something else?

The other problem was - when to run? I mean, on top of the hybrid work model. When working from home, all good, but when working from the office? So the other, somewhat more impossible task for me, was to wake up early and run before 8 AM. Clearly destined to fail! But, the following day (Monday), I did wake up and run 3km. Then Tuesday again, 3.3km (and later, one hour of biking). Wed - 3.3km. Thu - 4.40km, at a 4:1 ratio (2m:30s). Friday, 3.7km (4:1), plus a very long for me (112km) bike ride. By this time, I was physically dead. Not my foot, just my entire body. On Saturday morning, Training Peaks said my form is -52, and it starts warning below -15. I woke up late and groggy, and I had to extra motivate myself to go for the last, 5.3km run, to round up the week. On Friday and Saturday, my problem leg did start to - how to say - remind me it is problematic? But not like previously, no waking up in the morning with a stiff tendon. No, just not fully happy. And, to my surprise, correlated again with my consumption of problematic food (I was getting hungrier and hungrier, and eating too much of things I should keep an eye on).

At this point, with the week behind me: did my experiment make me wiser? Not really. Happier? Yes, 100%. I plan to buy some new running clothes, my current ones are really old. But did I really understand how my body functions? A loud no. Sigh.
The next challenge will be how to manage my time across multiple sports (and work, and family, and other hobbies). Still, knowing that I can go out anytime for 25-35 minutes of running, without preparation, is very reassuring. Freedom, health and injury-free sports to everyone!

12 June 2022

Iustin Pop: Somewhat committing to a new sport

Quite a few years ago - 4, to be precise, so in 2018 - I did a couple of SUP trainings, organised by a colleague. That was enjoyable, but not really matching with me (asymmetric paddling, ugh!), so I also did learn some kayaking, which I really love, but that's way higher overhead - no sea around in Switzerland, and lakes are generally too small. So I basically postponed any more "water sports", until sometime in the future when I'll finally decide what I want to do (and in what setup). I did a couple of one-off SUP rides in various places (2019, 2021), but I really was out of practice, so it wasn't really enjoyable. But with family, SUP offers a much easier way to carry a passenger (than a kayak), so slowly I started thinking more about doing it more seriously. So last week, after much deliberation, I bought an inflatable board, a paddle and various other accessories, and on Saturday went to try it out, in excellent weather (completely flat water) and hot but not overly so. The board choosing in itself was something I like to do (researching options), so for a bit I was concerned whether I'm more interested in the gear, or the actual paddling itself. To my surprise, it went way better than I feared - last time I tried it, I paddled 30 minutes on my knees (knee-paddling?!), since I didn't dare stand up. But this time, I launched and then did stand up, and while very shaky, I didn't fall in. Neither by myself, nor with an extra passenger. An hour later, my initial shakiness went away, with the trainings slowly coming back to mind. Another half hour, and - for completely flat water - I felt quite confident. The view was awesome, the weather nice, the water cold enough to be refreshing, and the only question on my mind was - why didn't I do this 2, 3 years ago? Well, Corona aside. I forgot how much I love just being on the water. It definitely pays off the cost of going somewhere, unpacking the stuff, pumping up the board (that's a bit of a sport in itself), because the blue-green-light-blue colour palette is just how things should be:
Small lake, but beautiful view
Well, approximately blue. This being a small lake, it's more blue-green than proper blue. That's next level, since bigger lakes mean waves, and more traffic. Of course, this could also turn out like many other things I tried (a device in a corner that's not used anymore), but at least for yesterday, I was a happy paddler!

10 June 2022

Iustin Pop: Still alive, 2022 version

Still alive, despite the blog being silent for more than a year. Nothing bad happened, but there was always something more important (or interesting) to do than write a post. And I did say, many, many times, "Oh, I should write a post about this thing I just did or learned about", but I never followed up. And I was close to forgetting entirely about blogging (ahem, it's a bit much calling it "blogging"), until someone I follow posted something along the lines of "I have this half-written post for many months that I can't finish, here's some pictures instead". And from that followed an interesting discussion, and the similarities in why we hadn't blogged recently were very interesting, despite different countries, continents, etc. So yes, I don't know what happened - besides the chaos that even the end of Covid caused in our lives, and the psychological impact of the Ukraine invasion, but all this is relatively recent - that I couldn't muster the energy to write posts again. I even had a half-written post in late June last year, never finished. Sigh. I won't even bring up open-source work, since I haven't done that either. Life. Sometimes things just happen. But yes, I did get many Garmin badges in the last 12 months. Oh, and Top Gun: Maverick is awesome. A movie, but an awesome movie. See you!

17 January 2022

Wouter Verhelst: Different types of Backups

In my previous post, I explained how I recently set up backups for my home server to be synced using Amazon's services. I received a (correct) comment on that by Iustin Pop which pointed out that while it is reasonably cheap to upload data into Amazon's offering, the reverse -- extracting data -- is not as cheap. He is right, in that extracting data from S3 Glacier Deep Archive costs over an order of magnitude more than it costs to store it there on a monthly basis -- in my case, I expect to have to pay somewhere in the vicinity of 300-400 USD for a full restore. However, I do not consider this to be a major problem, as these backups are only to fulfill the rarer of the two types of backup cases.

There are two reasons why you should have backups. The first is the most common one: "oops, I shouldn't have deleted that file". This happens reasonably often; people will occasionally delete or edit a file that they did not mean to, and then they will want to recover their data. At my first job, a significant part of my job was to handle recovery requests from users who had accidentally deleted a file that they still needed. Ideally, backups to handle this type of situation are easily accessible to end users, and are performed reasonably frequently. A system that automatically creates and deletes filesystem snapshots (such as the zfsnap script for ZFS snapshots, which I use on my server) works well. The crucial bit here is to ensure that it is easier to copy an older version of a file than it is to start again from scratch -- if a user must file a support request that may or may not be answered within a day or so, it is likely they will not do so for a file they were working on for only half a day, which means they lose half a day of work in such a case. If, on the other hand, they can just go into the snapshots directory themselves and it takes them all of two minutes to copy their file, then they will also do that for files they only created half an hour ago, so they don't even lose half an hour of work and can get right back to it. This means that backup strategies to mitigate the "oops I lost a file" case ideally do not involve off-site file storage, and instead are performed online.

The second case is the much rarer one, but (when required) has the much bigger impact: "oops the building burned down". Variants of this can involve things like lightning strikes, thieves, earth quakes, and the like; in all cases, the point is that you want to be able to recover all your files, even if every piece of equipment you own is no longer usable. That being the case, you will first need to replace that equipment, which is not going to be cheap, and it is also not going to be an overnight thing. In order to still be useful after you lost all your equipment, they must also be stored off-site, and should preferably be offline backups, too. Since replacing your equipment is going to cost you time and money, it's fine if restoring the backups is going to take a while -- you can't really restore from backup any time soon anyway. And since you will lose a number of days of content that you can't create when you can only fall back on your off-site backups, it's fine if you also lose a few days of content that you will have to re-create.
All in all, the two types of backups have opposing requirements: "oops I lost a file" backups should be performed often and should be easily available; "oops I lost my building" backups should not be easily available, and are ideally done less often, so you don't pay a high amount of money for storage of your off-sites. In my opinion, if you have good "lost my file" backups, then it's also fine if the recovery of your backups is a bit more expensive. You don't expect to have to ever pay for these; you may end up in a situation where you don't have a choice, and then you'll be happy that the choice is there, but as long as you can reasonably pay for the worst case scenario of a full restore, it's not a case you should be worried about much. As such, and given that a full restore from Amazon Storage Gateway is going to be somewhere between 300 and 400 USD for my case -- a price I can afford, although it's not something I want to pay every day -- I don't think it's a major issue that extracting data is significantly more expensive than uploading data. But of course, this is something everyone should consider for themselves...

6 June 2021

Iustin Pop: Goodbye Travis, hello GitHub Actions

My very cyclical open-source work

For some reason, I only manage to do coding at home every few months - mostly 6 months apart, so twice a year (even worse than my blogging frequency :P). As such, I missed the whole discussion about travis-ci (the .org version) going away, etc. So when I finally did some work on Corydalis a few weeks ago, and had a Travis build failure (restoring a large cache simply timed out; either Travis or S3 had some hiccup - it cleared by itself a day later), I opened the travis-ci interface to see the scary banner ("travis-ci.org is shutting down"), and asked myself what's happening. The deadline was in less than a week, even. Long story short, Travis was a good home for many years, but they were bought and are doing significant changes to their OSS support, so it's time to move on. I had anyway wanted to learn GitHub Actions for a while (ahem, free time intervened), so this was a good (forced) opportunity.

Proper composable CI

The advantage of the Travis infrastructure was that the build configuration was really simple. It had very few primitives: pre-steps (before_install), install steps, the actual things to do to test, post-steps (after_success) and a few other small helpers (caching, apt packages, etc.). This made it really easy to just pick it up and write a config, plus it had the advantage of allowing one to test configs from the web UI without needing to push. This simplicity was unfortunately also its significant limiter: the way to do complex things in steps was simply to add more shell commands. GitHub Actions, together with its marketplace, changes this entirely. There are no built-in actions; the language just defines the build/job/step hierarchy, and one glues together whatever steps they want. This has the disadvantage that even checking out the code needs to be explicitly written in all workflows (so, boilerplate, if you don't need customisation), but it opens up a huge opportunity for composition, since it allows people to publish actions (steps) that you just import, encapsulating all the work. So, after learning how to write a moderately complicated workflow (complicated as in multiple Python versions, some of them needing a different OS version, and multi-OS), it was straightforward to port this to all my projects - just somewhat tedious. I've now shut down all builds on Travis; I just can't find a way to delete my account.

Better multi-OS, worse (missing) multi-arch

In theory, Travis supports Linux, MacOS, FreeBSD and Windows, but I've found that support for non-Linux is not quite as good. Maybe I missed things, but multi-version Python builds on MacOS were not as nicely supported as on Linux; Windows is quite early, and very limited; and I haven't tested FreeBSD. GitHub is more restrictive - Linux, MacOS and Windows - but I found support for MacOS and Windows better for my use cases. If your use case is testing multiple MacOS versions, Travis wins; if it's more varied languages/etc. on the single available MacOS version, GitHub works better. On the multi-arch side, Travis wins hands-down. Four different native architectures, and enabling one more is just adding another key to the arch list. With GitHub, if I understand right, you either have to use docker+emulation, or use self-hosted runners. So here it really matters what is more important to you. Maybe in the future GitHub will support more arches, but right now, Travis wins for this use-case.

Summary

For my specific use-case, GitHub Actions is a better fit right now. The marketplace has advantages (I'll explain better in a future post), the actions are a very nice way to encapsulate functionality, and it's still available/free (up to a limit) for open source projects. I don't know what the future of Travis for OSS will be, but all I heard so far is very concerning. However, I'll still miss a few things. For example, an overall dashboard for all my projects, like this one:
Travis dashboard
I couldn't find any such thing on GitHub, so I just use my set of badges. Then, cache management. Travis allows you to clear the cache, and it does auto-update the cache. GitHub caches are immutable once built, so you have to:
  • watch if changed dependencies/dependency chains result in things that are no longer cached;
  • if so, need to manually bump the cache key, resulting in a commit for purely administrative purposes.
For languages where you have a clean full chain of dependencies recorded (e.g. node's package-lock.json, stack's stack.yaml.lock), this is trivial to achieve, but it gets complicated if you add OS dependencies, languages which don't record all this, etc. Hmm, maybe I should just embed the year/month in the cache name - cheating, but automated cheating. With all said and done, I think GHA is much less refined, but with more potential. Plus, the pace of innovation on the Travis side was quite slow (likely money problems, hence them being bought, etc.). So one TODO done: "learn GitHub Actions", even if a bit unplanned.

25 January 2021

Iustin Pop: Raspbian/Raspberry PI OS with initrd

Background

While Raspbian, ahem, Raspberry PI OS is mostly Debian, the biggest difference is the kernel, both in terms of code and packaging. The packaging is weird since it needs to deal with the fact that there's no bootloader per se: the firmware parses /boot/config.txt and, depending on the setting of 64bit and/or the kernel line, it loads a specific file - normally one of kernel7.img, kernel7l.img or kernel8.img. While this configuration file supports an initrd, it doesn't have a clean way to associate an initrd with a kernel; rather, you have to (like for the actual kernel) settle on a hard-coded initrd name. Due to this, the normal way of using an initrd doesn't work, and one has to do a few things:
  • enable building initrds at all
  • settle on the naming for the initrd
  • ensure the initrd is updated correctly
There are quite a few forum threads about this, but there's no official support for it. The best link I found was this Stack Exchange post, which goes most of the way, but fails at the third point above.

My trivial solution

Instead of the naming tricks the above post suggests, I settled on having a fixed name. It risks boot failure when the kernel architecture changes, which could be worked around with a hard-coded kernel name too, but I haven't done that yet. First, enable initrd creation/update in /etc/default/raspberrypi-kernel, like the post says: uncomment INITRD=yes, but not RPI_INITRD. This will enable creating/updating the initrd when the kernel package is installed and/or other packages trigger it via hooks. Second, naming: choose an initrd name. I simply have this in my config.txt:
initramfs initrd.img followkernel
So the value is fully hard-coded, and the actual work is done in the next part. Last, add an initramfs hook in (e.g.) /etc/initramfs/post-update.d/rpi-initrd. Note that, by default (unless other packages have created it), the /etc/initramfs directory doesn't exist. It's not the confusingly-named /etc/initramfs-tools/ directory, which is related to building the initrd; this one is rather for doing things with the (already built) initrd. This directory is briefly explained in the Debian kernel-handbook guide. This is my content:
#!/bin/bash

ABI="$1"
INITRD="$2"
BOOTDIR="$(dirname "$INITRD")"

# Note: the match below _must_ be synced with the boot kernel
if [[ "$ABI" == *-v8+ ]]; then
        echo "Building v8l+ image, updating top initrd"
        cp -p "$INITRD" "$BOOTDIR/initrd.img"
fi
This script seems to me much simpler, so in principle less chance of bugs, than all the renaming and config.txt editing I saw in that Stack Exchange post. As far as I know, all one needs to care about is to sync the ABI matched by this hook with the kernel one is running, and since there is only one kernel version installed at a time on Raspbian (as there's no versioning in the kernel names), this should work correctly. With this hook, the update also works correctly when packages trigger initrd updates, not only when new kernels are installed. Note that the cp there is needed since the boot partition is FAT, so no symbolic links/hard links allowed. Happy initrd-ing!
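PS: to check that the hook fires without waiting for the next kernel upgrade, forcing an initramfs rebuild should be enough (assuming initramfs-tools is what drives the updates, as above):
update-initramfs -u -k "$(uname -r)"
ls -l /boot/initrd.img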

10 January 2021

Iustin Pop: Dealing with evil ads

Background

I usually don't mind ads, as long as they're not very intrusive. I get that the current media model is basically ad-funded, and that unless I want to pay $1/month or so to 50 web sites, I have to accept ads, so I don't run an ad-blocker. Sure, sometimes they're annoying (hey YT, mid-roll ads are borderline), but I've also seen many good ads, as in interesting or even funny. Well, I don't think I ever bought anything as a direct result of ads, so I don't know how useful ads are for the companies, but hey, what do I care. Except there are a few ad networks that run what I would say are basically revolting ads. Things I don't want to ever accidentally see while eating, or things that really make you go WTF? Maybe you know them, maybe you don't, but I guess there are people who don't know how to clean their ears, or people for whom a fast 7 day weight loss routine actually works. Thankfully, most of the time I don't browse sites which use these networks, but randomly they do leak even to sites I do browse. If I'm not very stressed already, I can ignore them; otherwise they really, really annoy me. Case in point: I was on Slashdot, and because I was logged in and recently had mod points, the right side column had a check-box "disable ads". That sidebar had some relatively meaningful ads, like a VPN subscription (not that I would use it, but it is a tech thing), or even a book about Kali Linux, etc. etc. So I click "disable ads", and the right column goes away. I scroll down happily, only to be met, at the bottom, by "the best way to clean your ear", "the 50 most useless planes ever built" (which had a drawing of something that was for sure never ever built outside of movies), "you won't believe how this child actor looks today", etc.

Solving the problem

The above really, really pissed me off, so I went to search for how to block an ad network. To my surprise, the fix was not that simple, for standard users at least.

Method 1: hosts file

The hosts file is reasonable as it is relatively cross-platform (Linux and Windows and Mac, I think), but how the heck do you edit hosts on your phone? And furthermore, it has some significant downsides. First, /etc/hosts lists individual hosts, so for an entire ad network the example I had took two screens of host names. This is really unmaintainable, since rotating host names, or having a gazillion of them, is trivial. Second, it cannot return negative answers. I.e. you have to give each of those hosts a valid IPv4/IPv6, and have something either reply with 404 or another 4xx response, or not listen on port 80/443. Too annoying. And finally, it's a client-side solution, so one would have to replicate it across all clients in a home, and keep it in sync.

Method 2: ad-blockers

I dislike ad-blockers on principle, since they need wide permissions on all pages, but it is a recommended solution. However, to my surprise, one finds threads saying "ad-blocker foo has whitelisted ad network bar", at which point you're like, WTF? Why do I use an ad-blocker if they get paid by the lowest of the ad networks to show the ads? And again, it's a client-side solution, and one would have to deploy it across N clients, and keep them in sync, etc.

Method 3: HTTP proxy blocking

To my surprise, I didn't find this mentioned in a quick internet search. Well, HTTP proxies have long gone the way of the dodo due to "HTTPS everywhere", and while one can still use them even with HTTPS, it's not that convenient:
  • you need to tunnel all traffic through them, which might result in bottlenecks (especially for media playing/maybe video-conference/etc.).
  • or even worse, there might be protocol issues/incompatibilities due to 100% tunneling.
  • running a proxy opens up some potential security issues on the internal network, so you need to harden the proxy as well, and maintain it.
  • you need to configure all clients to know about the proxy (via DHCP or manually), which might or might not work well, since it s client-dependent.
  • you can only block at CONNECT level (host name), and you have to build up regexes for the host name.
On the good side, the actual blocking configuration is centralised, and the only distributed configuration is pointing the clients through the proxy. While I used to run a proxy back in HTTP times, the gains were significant back then (media element caching, download caching, all with a slow pipe, etc.), but today it's not worth it, so I've stopped and won't bring a proxy back just for this.

Method 4: DNS resolver filtering

After thinking through all the options, I thought - hey, a caching/recursive DNS resolver is what most people with a local network run, right? How difficult is it to block at the resolver level? And oh my, it is so trivial, for some resolvers at least. And yes, I didn't know about this a week ago.

Response Policy Zones

Now, of course, there is a standard for this, called Response Policy Zone, which is supported across multiple resolvers. There are many tutorials on how to use RPZs to configure things, some of them quite detailed - e.g. this one, or a simpler/more straightforward one here. The upstream BIND documentation also explains things quite well here, so you can go that route as well. It looks a bit hairy to me though, but it works, and since it is a standard, it can be more easily deployed. There are many discussions on the internet about how to configure RPZs, how to not even resolve the names (if you're going to return something explicitly/statically), etc., so there are docs, but again it seems a bit overdone.

Resolver hooks

There's another way too, if your resolver allows scripting. For example, the PowerDNS resolver allows Lua scripting, and has a relatively simple API - at least, to me it looks way, way simpler than the RPZ equivalent. After 20 minutes of reading the docs, I ended up with this, IMO trivial, solution (in a file named e.g. rules.lua):
ads = newDS()
ads:add( 'evilads.com', 'evilads.well-known-cdn.com', 'moreads.net' )

function preresolve(dq)
  if ads:check(dq.qname) then
    dq.rcode = pdns.NXDOMAIN
    return true;
  end
  return false;
end
and that's it. Well, enable it/load the file in the configuration (see below), but nothing else. The syntax is pretty straightforward, matching by suffix here, and if you need more complex stuff, you can of course do it; it's just Lua and a simple API. I don't see any immediate equivalent in BIND, so there's that, but if you can use PowerDNS, then the above solution seems simple for simple cases, and could be extended if needed (not sure in which cases). The only other thing one needs to do is to serve the local/custom resolver to all clients, whether desktop or mobile, and that's it. A DNS server is bread-and-butter in DHCP, so better supported than a proxy, and once the host name has been (mis)resolved, nothing is involved anymore in the communication path. True, your name server might get higher CPU usage, but for a home network, this should not be a problem. Can this filtering method (either RPZ or hooks) be worked around by ad networks? Sure, like anything. But changing the base domain is not fun. DNSSEC might break it (note BIND RPZ can be configured to ignore DNSSEC), but I'm more worried about DNS-over-HTTPS, which I initially thought was done for the user, but now I'm not so sure anymore. Not being in control even of your own DNS resolver seems "evil", but what do I know.
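The "enable it" part is, if I remember the setting name right, a single line in the recursor configuration (recursor.conf), pointing at whatever file name you chose above:
lua-dns-script=/etc/powerdns/rules.lua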

Combined authoritative + recursive solution

This solution was provided by Guillem Jover, who uses unbound, which is a combined authoritative name server and recursive resolver in one, and dnsmasq (which is even more things, I think):
For my LANs I use unbound, and then block this kind of thing in /etc/unbound/unbound.conf.d/block.conf, with stuff like:
server:
 local-zone: adsite.example.com refuse
But then for things that are mobile, and might get out of the LAN, such as laptops, I also block with dnsmasq in /etc/dnsmasq.d/block.conf, with stuff like:
 address=/adsite.example.com/
I still use ublock-origin to block stuff at the browser level, though, for yet an extra layer of noise suppression. :)
Thanks for the info!

Happy browsing!

10 lines of Lua, and now for sure I'm going to get even fatter without the "this natural method will melt your belly fat in 7 days" information. Or I will just throw away banana peels without knowing what I could do with them. After a few days, I asked myself "but ads are not so bad, why did I…" and then realised that yes, ads are not so bad anymore. And Slashdot actually loads faster. So, happy browsing!
