11 January 2026

Otto Kekäläinen: Stop using MySQL in 2026, it is not true open source

If you care about supporting open source software and still use MySQL in 2026, you should switch to MariaDB, as so many others have already done. The number of git commits on github.com/mysql/mysql-server declined significantly in 2025. The screenshot below shows the state of git commits as of this writing in January 2026, and the picture should be alarming to anyone who cares about software being open source.

[Image: MySQL GitHub commit activity decreasing drastically]

This is not surprising: Oracle should not be trusted as the steward of open source projects. When Oracle acquired Sun Microsystems, and MySQL along with it, back in 2009, the European Commission almost blocked the deal due to concerns that Oracle's goal was simply to stifle competition. The deal went through after Oracle committed to keep MySQL going and not kill it, but (to nobody's surprise) Oracle has not been a good steward of MySQL as an open source project, and the community around it has been withering away for years. All development is done behind closed doors. The publicly visible bug tracker is not the real one Oracle staff actually use for MySQL development, and the few people who try to contribute to MySQL see their Pull Requests and patch submissions marked as received with mostly no feedback. Those changes may or may not appear in the next MySQL release, often rewritten, and with only Oracle staff in the git author/committer fields; the real author only gets a small mention in a blog post. When I was the engineering manager for the core team working on RDS MySQL and RDS MariaDB at Amazon Web Services, I oversaw my engineers' contributions to both MySQL and MariaDB (the latter being a fork of MySQL by the original MySQL author, Michael Widenius). All the software developers in my org disliked submitting code to MySQL because of how poorly Oracle received their contributions. MariaDB is the stark opposite: all development takes place in real time on github.com/mariadb/server, anyone can submit a Pull Request and get a review, all bugs are openly discussed at jira.mariadb.org, and so forth, just as one would expect from a true open source project. MySQL is open source only by license (GPL v2), but not as a project.

MySQL's technical decline in recent years

Despite not being a good open source steward, Oracle should be given credit for keeping the MySQL organization alive, allowing it to exist fairly independently and to continue developing and releasing new MySQL versions well over a decade after the acquisition. I have no insight into how many customers they had, but I assume the MySQL business was fairly profitable and financially useful to Oracle, at least as long as it didn't gain enough features to threaten Oracle's own main database business. I don't know why, perhaps because too many talented people had left the organization, but from a technical point of view MySQL clearly started to deteriorate from 2022 onward. When MySQL 8.0.29 was released with the default ALTER TABLE method switched to run in-place, it had a lot of corner cases that didn't work, causing the database to crash and data to corrupt for many users. The issue wasn't fully fixed until a year later, in MySQL 8.0.32. To many users' annoyance, Oracle declared the 8.0 series evergreen and introduced features and changes in the minor releases, instead of just doing bug fixes and security fixes as users had historically learnt to expect from these x.y.Z maintenance releases. There was no new major MySQL version for six years: after MySQL 8.0 in 2018 it wasn't until 2023 that MySQL 8.1 was released, and it was just a short-term preview release. The first actual new major release, MySQL 8.4 LTS, came out in 2024. Even though it was a new major release, many users were disappointed, as it had barely any new features. Many also reported degraded performance with newer MySQL versions; for example, the benchmark below by famous MySQL performance expert Mark Callaghan shows that on write-heavy workloads MySQL 9.5 throughput is typically 15% less than in 8.0.
[Image: Benchmark showing newer MySQL versions being slower than the old]

Because newer MySQL versions deprecated many features, a lot of users also complained about significant struggles with both the MySQL 5.7->8.0 and 8.0->8.4 upgrades. With few new features and a heavy focus on code base cleanup and feature deprecation, it became obvious to many that Oracle had decided to keep MySQL barely alive and put all relevant new features (e.g. vector search) into HeatWave, Oracle's closed-source and cloud-only service for MySQL customers. As it was evident that Oracle wasn't investing in MySQL, Percona's Peter Zaitsev wrote "Is Oracle Finally Killing MySQL" in June 2024. By this time MySQL's popularity as ranked by DB-Engines had also started to tank hard, a trend that will likely accelerate in 2026.

[Image: MySQL dropping significantly in the DB-Engines ranking]

In September 2025 news outlets reported that Oracle was reducing its workforce and that the MySQL staff was being heavily cut. Obviously this does not bode well for MySQL's future, and in November Peter Zaitsev posted stats showing that the latest MySQL maintenance release contained fewer bug fixes than before.

Open source is more than ideology: it has very real effects on software security and sovereignty

Some say they don't care whether MySQL is truly open source, or whether it has a future in the coming years, as long as it still works now. I am afraid people who think so are taking a huge risk. The database is often the most critical part of a software application stack, and any flaw or problem in operations, let alone a security issue, has immediate consequences; not caring will eventually get people fired or sued. In open source, problems are discussed openly, and the bigger the problem, the more people and companies will contribute to fixing it. Open source as a development methodology is similar to the scientific method, with a free flow of ideas that are constantly contested and where only the ones with the most compelling evidence win. Not being open means more obscurity, more risk and more of a "just trust us, bro" attitude. This open vs. closed divide is very visible, for example, in how Oracle handles security issues. In 2025 alone MySQL published 123 CVEs about security issues, while MariaDB had 8; there were 117 CVEs that affected only MySQL and not MariaDB in 2025. I haven't read them all, but typically the CVEs hardly contain any real details. As an example, the most recent one, CVE-2025-53067, states "Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server." There is no information a security researcher or auditor could use to verify whether any original issue actually existed, whether it was fixed, or whether the fix was sufficient and fully mitigated the issue. MySQL users just have to take Oracle's word that it is all good now. Handling security issues like this stands in stark contrast to other open source projects, where all security issues and their code fixes are open for full scrutiny after the initial embargo is over and the CVE is made public.
There are also various forms of enshittification going on that one would not see in a true open source project: everything about MySQL as software, documentation and website pushes users to stop using the open source version and move to the closed MySQL versions, in particular to HeatWave, which is not only closed-source but also puts Oracle in full control of the customer's database contents. Of course, some could say this is how Oracle makes money and is able to provide a better product. But stories on Reddit and elsewhere suggest that what is going on is more like Oracle aggressively milking the last remaining MySQL customers, who are forced to pay more and more while getting less and less.

There are options and migrating is easy, just do it

A large part of the MySQL user base switched to MariaDB already in the mid-2010s, in particular everyone who cared deeply about their database software staying truly open source. That included large installations such as Wikipedia, and Linux distributions such as Fedora and Debian. Because it's open source and there is no centralized machine collecting statistics, nobody knows what the exact market shares look like. There are, however, some application-specific stats, such as that 57% of WordPress sites around the world run MariaDB, while the share for MySQL is 42%. For anyone running a classic LAMP stack application such as WordPress, Drupal, MediaWiki, Nextcloud or Magento, switching the old MySQL database to MariaDB is straightforward. As MariaDB is a fork of MySQL and mostly backwards compatible with it, swapping out MySQL for MariaDB can be done without changing any of the existing connectors or database clients; they will continue to work with MariaDB as if it were MySQL. For those running custom applications who have the freedom to change how and what database is used, there are dozens of mature and well-functioning open source databases to choose from, with PostgreSQL being the most popular general-purpose database. If your application was built from the start for MySQL, switching to PostgreSQL may however require a lot of work, and the MySQL/MariaDB architecture and its InnoDB storage engine may still offer an edge in, e.g., online services where high performance, scalability and solid replication features are the highest priority. For a quick and easy migration, MariaDB is probably the best option. Switching from MySQL to Percona Server is also very easy, as it closely tracks all changes in MySQL and deviates from it only by a small number of improvements made by Percona.
However, precisely because it is basically just a customized version of the MySQL Server, Percona Server is not a viable long-term solution for those trying to fully ditch the dependency on Oracle. There are also several open source databases that have no common ancestry with MySQL but strive to be MySQL-compatible, so most apps built for MySQL can simply switch to them without needing their SQL statements rewritten. One such database is TiDB, which has been designed from scratch specifically for highly scalable and large systems, and is so good that even Amazon's latest database offering, DSQL, borrowed many ideas from TiDB. However, TiDB only really shines in larger distributed setups, so for the vast majority of regular small- and mid-scale applications currently using MySQL, the most practical solution is probably to just switch to MariaDB, which on most Linux distributions can be installed simply by running apt/dnf/brew install mariadb-server. Whatever you end up choosing, as long as it is not Oracle, you will be better off.
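The swap to MariaDB can be sketched as a short shell script. This is a minimal sketch for a Debian-style system, not an official migration procedure: the dump path is a placeholder, and the script defaults to a dry run that only prints the commands it would execute, so you can review them before setting DRY_RUN=0.

```shell
#!/bin/sh
# Sketch of a MySQL -> MariaDB swap on a Debian-style system.
# Defaults to a dry run (prints commands only); set DRY_RUN=0 to execute.
set -eu
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "+ $*"        # dry run: show the command instead of running it
  else
    "$@"
  fi
}

# 1. Dump everything while MySQL is still running (path is a placeholder).
run mysqldump --all-databases --routines --events --result-file=/root/all-databases.sql

# 2. Swap the server packages. Existing connectors and clients keep
#    working, since MariaDB speaks the MySQL protocol.
run systemctl stop mysql
run apt-get install -y mariadb-server

# 3. The package swap normally reuses the existing datadir; the dump
#    from step 1 is the fallback if anything looks wrong.

# 4. Let MariaDB update its system tables, then check the service.
run mariadb-upgrade
run systemctl status mariadb --no-pager
```

Always take and verify a backup first, and check the MariaDB upgrade notes for the specific versions involved.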

5 January 2026

Russell Coker: Phone Charging Speeds With Debian/Trixie

One of the problems I encountered with the PinePhone Pro (PPP) when I tried using it as a daily driver [1] was the charge speed, both slow charging and a bad ratio of charge speed to discharge speed. I also tried using a OnePlus 6 (OP6), which had a better charge speed and battery life, but I never got VoLTE to work [2], and VoLTE is a requirement for use in Australia and an increasing number of other countries. In my tests with the Librem 5 from Purism I had similar issues with charge speed [3]. What I want is an acceptable ratio of charge time to use time for a free software phone. I don't necessarily object to a phone that can't last an 8 hour day on a charge, but I can't use a phone that needs to be on charge for 4 hours during the day. For this part I'm testing the charge speed; I will test the discharge speed when I have solved some issues with excessive CPU use. I tested with a cheap USB power monitoring device that sits inline between the power cable and the phone. The device has no way of exporting data, so I just watched it, and when the numbers fluctuated I tried to estimate the average. I only give the results to two significant digits, which is about all the accuracy that is available; as I copied the numbers separately, the V*A might not exactly equal the W. I idly considered rounding off voltages to the nearest volt and currents to the half amp, but the way the PC USB ports' voltage drops at higher currents is interesting. This post should be useful for people who want to try out FOSS phones but don't want to buy the range of phones and chargers that I have bought.

Phones Tested

I have seen claims about improvements in charging speed on the Librem 5 with recent updates, so I decided to compare a number of phones running Debian/Trixie as well as some Android phones.
I'm comparing an old Samsung phone (which I tried running Droidian on, but which is now back on Android) and a couple of Pixel phones with the three phones that I currently have running Debian for these charging tests.

Chargers Tested

HP Z640

The Librem 5 had problems with charging on a port of the HP ML110 Gen9 I was using as a workstation. I have sold the ML110 and can't repeat that exact test, but I tested on the HP z640 that I use now. The z640 is a much better workstation (quieter and with better support for audio and other desktop features) and is also sold as a workstation. The z640 documentation says that of the front USB ports, the top one can do fast charge (up to 1.5A) with "USB Battery Charging Specification 1.2". The only phone that would draw 1.5A on that port was the Librem 5, but the computer would only supply 4.4V at that current, which is poor. For every phone I tested, the bottom port on the front (which apparently doesn't have USB-BC or USB-PD) charged at least as fast as the top port, and every phone other than the OP6 charged faster on the bottom port. The Librem 5 also had its fastest charge rate on the bottom port. So the rumours about the Librem 5 being updated to address the charge speed on PC ports seem to be correct. The Wikipedia page about USB hardware says that the only way to get more than 1.5A from a USB port while operating within specifications is via USB-PD, so as USB 3.0 ports, the bottom 3 ports should be limited to 5V at 0.9A for 4.5W. The Librem 5 takes 2.0A and the voltage drops to 4.6V, so that gives 9.2W. This shows that the z640 doesn't correctly limit power output and that the Librem 5 will take considerably more power than the specs allow. It would be really interesting to get a powerful PSU and see how much power a Librem 5 will take without negotiating USB-PD, and it would also be interesting to see what happens when you short circuit a USB port on a HP z640. But I recommend not doing such tests on hardware you plan to keep using!
Of the phones I tested, the only one that stayed within specifications on the bottom port of the z640 was the OP6. I think that says more about it charging slowly in every test than about it conforming to specs.

Monitor

The next test target is my 5120*2160 Kogan monitor with a USB-C port [4]. This worked quite well: apart from being a few percent slower on the PPP, it outperformed the PC ports for every device, due to using USB-PD (the only way to get more than 5V) and to just having a more powerful PSU that doesn't have a voltage drop when more than 1A is drawn.

Ali Charger

The Ali Charger is a 240W GaN charger that I bought from AliExpress, supporting multiple USB-PD devices. I tested with the top USB-C port, which can supply 100W to laptops. The Librem 5 has charging cutting off repeatedly on the Ali charger and doesn't charge properly. It's also the only charger for which the Librem 5 requests a voltage higher than 5V, so it seems that the Librem 5 has some issues with USB-PD. It would be interesting to know why this problem happens, but I expect that a USB signal debugger is needed to find that out. On AliExpress, USB 2.0 sniffers go for about $50 each, and with a quick search I couldn't see a USB 3.x or USB-C sniffer. So I'm not going to spend my own money on a sniffer, but if anyone in Melbourne, Australia owns a sniffer and wants to visit me and try it out, then let me know. I'll also bring it to Everything Open 2026. Generally the Ali charger was about the best charger in my collection, apart from the case of the Librem 5.

Dell Dock

I got a number of free Dell WD15 (aka K17A) USB-C powered docks, as they are obsolete. They have VGA ports among other connections, and for the HDMI and DisplayPort ports they don't support resolutions higher than FullHD if both ports are in use, or 4K if a single port is in use. The resolutions aren't directly relevant to the charging, but they do indicate the age of the design.
The Dell dock seems to not support any voltages other than 5V for phones and 19V (20V requested) for laptops, and certainly not the 9V requested by the Pixel 7 Pro and Pixel 8 phones. I wonder if not supporting most fast charging speeds for phones was part of the reason why other people didn't want those docks and I got some for free. I hope that the newer Dell docks support 9V; a phone running Samsung DeX will display 4K output on a Dell dock and can productively use a keyboard and mouse, and getting functionality equivalent to DeX working properly on Debian phones is something I'm interested in.

Battery

The battery I tested with is a Chinese battery for charging phones and laptops, allegedly capable of 67W USB-PD supply, but so far all I've seen it supply is 20V 2.5A for my laptop. I bought the 67W battery just in case I need it for other laptops in the future; the Thinkpad X1 Carbon I'm using now will charge from a 30W battery. There seems to be an overall trend of the most shonky devices giving the best charging speeds. Dell and HP make quality gear, although my tests show that some HP ports exceed specs. Kogan doesn't make monitors, they just put their brand on something cheap. Having bought one of the cheapest chargers from AliExpress and one of the cheaper batteries from China, I don't expect the highest quality, and I am slightly relieved to have done enough tests with both of them that a fire now seems extremely unlikely. But it seems that the battery is one of the fastest charging devices I own, and with the exception of the Librem 5 (which charges slowly on all ports and unreliably on several), the Ali charger is also one of the fastest. The Kogan monitor isn't far behind.

Conclusion

Voltage and Age

The Samsung Galaxy Note 9 was released in 2018, as was the OP6.
The PPP was first released in 2022 and the Librem 5 in 2020, but I think they are both at a similar technology level to the Note 9 and OP6, as the companies that specialise in phones have a pipeline for bringing new features to market. The Pixel phones are newer and support USB-PD voltage selection, while the other phones either don't support USB-PD or support it but only request 5V, apart from the Librem 5, which requests a higher voltage but runs it at a low current and repeatedly disconnects.

Idle Power

One of the major problems I had in the past, which prevented me from using a Debian phone as my daily driver, is the ratio of idle power use to charging power. Now that the phones seem to charge faster, if I can get the idle power use under control then they will be usable. Currently the Librem 5 running Trixie uses 6% CPU time (24% of a core) while idle with the screen off (but with Caffeine mode enabled, so no deep sleep). On the PPP the CPU use varies between about 2% and 20% (12% to 120% of one core), mainly from plasmashell and kwin_wayland. The OP6 has idle CPU use a bit under 1% CPU time, which means a bit under 8% of one core. The Librem 5 and PPP seem to have configuration issues with KDE Mobile and PipeWire that result in needless CPU use. With those issues addressed, I might be able to make a Librem 5 or PPP a usable phone, provided I have a battery to charge it. The OP6 is an interesting point of comparison as a Debian phone but is not a viable option as a daily driver due to the problems with VoLTE and also some instability: it sometimes crashes or drops off Wifi. The Librem 5 charges at 9.2W from a PC that doesn't obey specs and 10W from a battery. That's a reasonable charge rate, and the fact that it can request 12V (unsuccessfully) opens the possibility of higher charge rates in the future. That could allow a reasonable ratio of charge time to use time.
The PPP has lower charging speeds than the Librem 5 but works more consistently, as there was no charger I found that wouldn't work well with it. This is useful for the common case of charging from a random device in the office. But the fact that the Librem 5 takes 10W from the battery while the PPP only takes 6.3W would be an issue if using the phone while charging. Now that I know the charge rates for different scenarios, I can work on getting the phones to use significantly less power than that on average.

Specifics for a Usable Phone

The 67W battery, or something equivalent, is something I think I will always need to have around when using a PPP or Librem 5 as a daily driver. The ability to charge fast while at a desk is also an important criterion. The charge speed from my home PC is good in that regard, and the charge speed from my monitor is even better; getting something equivalent at a desk in an office I work in is a possibility. Improving the Debian distribution for phones is necessary. That's something I plan to work on, although the code is complex and in many cases I'll have to just file upstream bug reports. I have also ordered a FuriLabs FLX1s [5], which I believe will be better in some ways. I will blog about it when it arrives.
Phone       | Top z640        | Bottom z640     | Monitor         | Ali Charger     | Dell Dock        | Battery         | Best            | Worst
Note9       | 4.8V 1.0A 5.2W  | 4.8V 1.6A 7.5W  | 4.9V 2.0A 9.5W  | 5.1V 1.9A 9.7W  | 4.8V 2.1A 10W    | 5.1V 2.1A 10W   | 5.1V 2.1A 10W   | 4.8V 1.0A 5.2W
Pixel 7 Pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W  | 9.1V 1.3A 12W   | 9.1V 1.2A 11W   | 4.9V 1.8A 8.7W   | 9.0V 1.3A 12W   | 9.1V 1.3A 12W   | 4.9V 0.80A 4.2W
Pixel 8     | 4.7V 1.2A 5.4W  | 4.7V 1.5A 7.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W  | 9.1V 2.6A 24W   | 9.1V 2.7A 24W   | 4.7V 1.2A 5.4W
PPP         | 4.7V 1.2A 6.0W  | 4.8V 1.3A 6.8W  | 4.9V 1.4A 6.6W  | 5.0V 1.2A 5.8W  | 4.9V 1.4A 5.9W   | 5.1V 1.2A 6.3W  | 4.8V 1.3A 6.8W  | 5.0V 1.2A 5.8W
Librem 5    | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W  | 5.0V 0.56A 2.7W  | 5.1V 2.0A 10W   | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W
OnePlus 6   | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W  | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W
Best        | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W  | 9.1V 2.6A 24W   |                 |
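As noted above, V*A may not exactly equal the reported W, since the readings were copied off the meter separately and rounded to two significant digits. A small awk sketch (the three rows are illustrative picks from the table) recomputes the products to show the size of that rounding effect:

```shell
# Recompute power as V*A for a few rows of the charging table and
# compare with the W value that was read off the meter separately.
awk 'BEGIN {
  print row("Librem 5 / bottom z640", 4.6, 2.0, 9.2)
  print row("Pixel 8 / Ali Charger",  9.1, 2.7, 24)
  print row("PPP / top z640",         4.7, 1.2, 6.0)
}
function row(name, v, a, w) {
  # v*a is the computed power; w is the separately observed reading
  return sprintf("%-24s V*A = %4.1fW, reported %4.1fW", name, v * a, w)
}'
```

The Librem 5 row matches exactly, while the PPP row shows a roughly 0.4W gap between the computed and displayed values, consistent with the two-significant-digit accuracy claimed for the meter.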

4 January 2026

Sahil Dhiman: Full /8 IPv4 Address Block Holders in 2025

Here, I tried to collect entities that owned full /8 IPv4 address block(s) in 2025. A more comprehensive and historical list can be found at https://en.wikipedia.org/wiki/List_of_assigned_/8_IPv4_address_blocks.

Some notable mentions. Note: HE's list counts addresses originated, not just owned. Providers also advertise prefixes owned by their customers in Bring Your Own IP (BYOIP) setups, so the address space an entity originates can include customer-owned space on top of what the entity itself owns.
  • There s also 126.0.0.0/8 which is mostly Softbank.
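For a sense of scale, the size of any /8 follows directly from the prefix length: an IPv4 prefix of length p contains 2^(32-p) addresses. A one-liner sketch of the arithmetic:

```shell
# Addresses in an IPv4 prefix of length p: 2^(32-p).
prefix=8
addresses=$((1 << (32 - prefix)))
echo "A /$prefix contains $addresses addresses"   # 16777216, i.e. 1/256 of the IPv4 space
```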

Matthew Garrett: What is a PC compatible?

Wikipedia says "An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models". But what does this actually mean? The obvious literal interpretation is that for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet? Before we dig into that, let's go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual. Anyone could buy the same parts from Intel and build a compatible board. They'd still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who'd turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn't run elsewhere. CP/M's BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn't need to care about the underlying hardware and would run on all systems that had a working CP/M port. By 1979, boards based on the 8086, Intel's successor to the 8080, were hitting the market. The 8086 wasn't machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code.
Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming PC, based on the 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one), a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM's hardware, and the rest is history. But one key part of this was that despite what was now MS-DOS existing only to support IBM's hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn't include all the code needed to run on a PC - you needed IBM's BIOS. To begin with this wasn't obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn't clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be, even if fixed in ROM - this ended up having broader industry impact in Apple v. Franklin, and it became clear that clone machines making use of the original vendor's ROM code weren't going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way. And here's where things diverge somewhat. Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM's functionality or didn't implement all the same behavioural quirks, and compatibility was restricted.
In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you'd think wouldn't be necessary given that's what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era when vendors even shipped systems based on the Intel 80186, an improved 8086 that was both faster than the 8086 at the same clock speed and available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it. You'd think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn't maintain compatibility with them. As long as everything went via the BIOS this shouldn't have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn't offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them. And that's what happened. IBM was the biggest player, so people targeted IBM's platform. When BIOS interfaces weren't sufficient they hit the hardware directly - and even if they weren't doing that, they'd end up depending on behavioural quirks of IBM's BIOS implementation. The market for DOS-compatible but not PC-compatible machines mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point when the PC platform was basically restricted to ASCII or minor variants thereof. So, things remained fairly stable for some time.
The underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things, except IBM came up with an utterly terrifying hack that bit me back in 2009, and which ended up sufficiently codified into Intel's designs that it was one mechanism for breaking the original Xbox security. The first 286 PC even introduced a new keyboard controller that supported better keyboards but remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform, with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you'd need some additional media for hardware-specific drivers. It's something that still distinguishes the PC market from the ARM desktop market. But it's not as true as it used to be, and it's interesting to think about whether it ever was as true as people thought. Let's take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don't implement the legacy BIOS. The entire abstraction layer that DOS relies on isn't there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module, a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems[1]. Is this system PC compatible? By the strictest of definitions, no. OK. But the hardware is broadly the same, right?
There are projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits the BIOS should work just fine. And, well, yes: assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn't? Old software is going to expect that my Sound Blaster is on a limited set of IRQs, and is going to assume that it can install its own interrupt handler and ACK those on the interrupt controller itself, and that's really not going to work when you have a PCI card that's been mapped onto some APIC vector. Also, if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it's calling into UEFI to get the actual data) but trying to read the keyboard controller directly won't[2], so you're still actually relying on the firmware to do the right thing, but it's not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks, and while you are important and vital and I love you all, you're not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work. But imagine you are, or imagine you're the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha, no, of course not. Yes, you can probably make sure that the PCI Sound Blaster that's plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be, so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you're trying to run something built with IBM Pascal 1.0?
There's a risk that it'll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it'll break. It'd work fine on an actual PC, and it won't work here, so are we PC compatible? That's a very interesting abstract question and I'm going to entirely ignore it. Let's talk about PC graphics[3]. The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels, so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware. Things got worse from there. CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr, intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once[4], and software that depended on that wouldn't display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn't display correctly on any future PCs either. This is going to become a theme. There's never been a properly specified PC graphics platform. BIOS support for advanced graphics modes[5] ended up specified by VESA rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn't until Microsoft specced DirectX that anything was broadly usable even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you're going to have a bad time.
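The 1MB wraparound mentioned earlier can be illustrated numerically. This is a toy sketch (not firmware code, and not from the original post): on an 8086/8088, a real-mode address is formed as (segment << 4) + offset, and the 20-bit address bus silently truncates anything past 1MB, whereas a 286 with the A20 line enabled does not.

```python
def phys_addr(segment: int, offset: int, a20_enabled: bool) -> int:
    """Compute the physical address for a real-mode segment:offset pair."""
    addr = (segment << 4) + offset  # 8086-style address formation (21 bits possible)
    if not a20_enabled:
        addr &= 0xFFFFF             # 20-bit bus: addresses past 1MB wrap, as on an 8088
    return addr

# FFFF:0010 points 0x10 bytes past the 1MB boundary.
print(hex(phys_addr(0xFFFF, 0x0010, a20_enabled=False)))  # wraps to 0x0
print(hex(phys_addr(0xFFFF, 0x0010, a20_enabled=True)))   # 0x100000 on a 286+
```

Software that relied on the wrap (as the IBM Pascal example may have) sees different data at the same segment:offset depending on which behaviour the machine provides.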
This isn't even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we're not entirely CGA compatible. Is an IBM PC/AT with EGA PC compatible? You'd likely say "yes", but there's software written for the original PC that won't work there. And, well, let's go even more basic. The original PC had a well-defined CPU frequency and a well-defined CPU that would take a well-defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It's fine, we'd later end up with Windows crashing on fast machines, because hardware details will absolutely bleed through. So, what's a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it'll run most old software, as long as it doesn't have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it'll potentially be unusable or crash, because time is hard. The truth is that there's no way we can technically describe a "PC Compatible" now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn't run Flight Simulator. "PC Compatible" is a socially defined construct, just like "Woman". We can get hung up on the details or we can just chill.

  1. Windows 7 is entirely happy to boot on UEFI systems except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven to make that work on modern systems that don't provide BIOS compatibility
  2. Back in the 90s and early 2000s operating systems didn't necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting them into System Management Mode, where some software invisible to the OS would speak to the USB controller and then fake a response. Anyway, that's how I made a laptop that could boot unmodified MacOS X
  3. (my name will not be Wolfwings Shadowflight)
  4. Yes yes ok 8088 MPH demonstrates that if you really want to you can do better than that on CGA
  5. and by "advanced" we're still talking about the 90s, don't get excited

Valhalla's Things: And now for something completely different

Posted on January 4, 2026
Tags: topic:walking
Warning
mention of bodies being bodies and minds being minds, and not in the perfectly working sense.
One side of Porto Ceresio along the lake: there is a small strip of houses, and the hills behind them are covered in woods. A boat is parked around the middle of the picture. A lot of the youtube channels I follow tend to involve somebody making things, so of course one of the videos my SO and I watched a few days ago was about walking around San Francisco Bay, and that recalled my desire to go places by foot. Now, for health-related reasons doing it properly would be problematic, and thus I've never trained for that, but during this Christmas holiday-heavy time I suggested to my very patient SO the next best thing: instead of our usual 1.5-hour uphill walk in the woods, a 2-hours-and-a-bit, mostly flat walk on paved streets, plus some train, to a nearby town: Porto Ceresio, on the Italian side of Lake Lugano. I started to prepare for it on the day before, by deciding it was a good time to upgrade my PinePhone, and wait, I'm still on Trixie? I could try Forky, what could possibly go wrong? And well, the phone was no longer able to boot, and reinstalling from the latest weekly gave a system where the on-screen keyboard didn't appear, and I didn't want to bother finding out why, so I re-installed another time from the 13.0 image, and between that, and distracting myself with widelands while waiting for the downloads and uploads and reboots etc., all of the afternoon and the best part of the evening disappeared. So, in a hurry, between the evening and the next morning I prepared a nice healthy lunch, full of all the important nutrients such as sugar, salt, mercury and arsenic. Tuna (mercury) soboro (sugar and salt) on rice, and since I was in a hurry I didn't prepare any vegetables, but used pickles (more salt) and shio kombu (arsenic and various heavy metals, sugar and salt). Plus a green tea mochi for dessert, in case we felt low on sugar.
:D Then on the day of the walk we woke up a bit later than usual, and then my body decided it was a good day for my belly to not exactly hurt, but not not-hurt either, so I took an executive decision to wear a corset, because if something feels like it wants to burst open, wrapping it in a steel-reinforced cage will make it stop. (I'm not joking. It does. At least in those specific circumstances.) This was followed by hurrying through the things I had to do before leaving the house, having a brief anxiety attack and feeling feverish (it wasn't fever), and finally being able to leave the house just half an hour late. A stream running on rocks with the woods to both sides. And then, 10 minutes after we had left, realizing that I had written down the password for the train website, since it was no longer saved on the phone, but had forgotten the bit of paper at home. We could have gone back to take it, but decided not to bother, as we could also hopefully buy paper-ish tickets at the train station (we could). Later on, I realized I had also forgotten my GPS tracker, so I have no record of where exactly we went (though it's not hard to recognize it on a map) nor of what the temperature was. It's a shame, but by that point it was way too late to go back. Anyway, that was probably when Murphy felt we had paid our respects, and from then on everything went lovingly well! Routing had been done on the OpenStreetMap website, with OSRM, and it looked pretty easy to follow, but we also had access to an Android phone, so we used OSMAnd to check that we were still on track. It tried to lead us to the Statale (i.e. the most important and most trafficked road) a few times, but we ignored it, and after a few turns and a few changes of the precise destination point we managed to get it to cooperate.
At one point a helpful person, having seen us looking at the phone, asked if we needed help, and gave us directions at the next fork (that way to Cuasso al Piano, that way to Porto Ceresio), but it was pretty easy, since the way was also clearly marked for cars. Then we started to notice red and white markings on poles and other places, and at the next fork there was a signpost for hiking routes with our destination on it, and we decided to follow it instead of the sign for cars. I knew that from our starting point to our destination there was also a hiking route through the hills, uphill both ways :) , about 5 or 6 hours instead of two, but this sign was pointing downhill and we didn't expect too long of a detour. A wide and flat unpaved track passing through a flat grassy area with trees to the sides and rolling hills in the background. And indeed, after a short while the paved road ended, but the path continued on a wide and flat track, and was a welcome detour through what looked like water works to prevent flood damage from a stream. In a warmer season, with longer grass and ticks, the fact that I was wearing a long skirt might have been an issue, but in winter it was just fine. And soon afterwards, we were in Porto Ceresio. I think I had been there as a child, but I had no memory of it. On the other hand, it was about as I expected: a tiny town with a lakeside street full of houses built in the early 1900s, when the area was an important tourism destination, with older buildings a bit higher up on the hills (because streams in this area will flood). And of course, getting there by foot rather than by train we also saw the parts where real people live (but not work: that's cross-border commuter country). Dried winter grass with two strips of frost, exactly under the shade of a fence.
Soon after arriving in Porto Ceresio we stopped to eat our lunch on a bench at the lakeside; up to then we had been pretty comfortable in the clothing we had decided to wear: there was plenty of frost on the ground in the shade, but the sun was warm and the temperatures were clearly above freezing. Removing our gloves to eat, however, resulted in quite cold hands, and we didn't want to stay still for longer than strictly necessary. So we spent another hour and a bit walking around Porto Ceresio like proper tourists and taking pictures. There was an exhibition of nativity scenes all around the streets, but to get a map one had to go to either facebook or instagram, or wait for the opening hours of an office that were later than the train we planned to take back home, so we only saw maybe half of them as we walked around: some were quite nice, some were nativity scenes, and some showed that the school children must have had some fun making them. Three gnome-adjacent creatures made of branches of evergreen trees, with a pointy hat made of moss, big white moustaches and red Christmas tree balls at the end of the hat and as a nose. They are wrapped in LED strings, and the lake can be seen in the background. Another Christmas decoration were groups of creatures made of evergreen branches that dotted the sidewalks around the lake: I took pictures of the first couple of groups, and then after seeing a few more something clicked in my brain, and I noticed that they were wrapped in green LED strings, like chains, and they had a red ball that was supposed to be the nose, but could just as well be around the mouth area, and suddenly I felt the need to play a certain chord to release them, but sadly I didn't have a weaponized guitar on me :D A bench in the shape of an open book, half of the pages folded in a reversed U to make the seat and half of the pages standing straight to form the backrest. It has the title page and beginning of the Constitution of the Italian Republic.
Another thing we noticed were some benches in the shape of books, with book quotations on them; most were on reading-related topics, but the one with the Constitution felt worth taking a picture of, especially these days. And then our train was waiting at the station, and we had to go back home for the afternoon; it was a nice outing, if a bit brief, and we agreed to do it again, possibly with a bit of a detour to make the walk a bit longer. And then maybe one day we'll train to do the whole 5-6 hour thing through the hills.

1 January 2026

Daniel Baumann: Pǔ'ěr chá storage (in Western Europe)

The majority of Pǔ'ěr chá (普洱茶) is stored in Asia (predominantly China, some in Malaysia and Taiwan) because of the climatic conditions there:
  • warm (~30°C)
  • humid (~75% RH)
  • stable (comparatively small day-to-day, day-to-night, and seasonal differences)
These are ideal for ageing and allow efficient storage in simple storehouses, usually without the need for any air conditioning. The climate in Western European countries is significantly colder, drier, and more variable. However, residential houses offer a warm (~20°C), humid (~50% RH), and stable baseline throughout the year to start with. High quality, long term storage and ageing is possible here too, with some of the Asian procedures slightly adjusted for the local conditions. Nevertheless, fast accelerated ageing still doesn't work here (even with massive climate control). Personally I prefer balanced, natural storage over the classical 'wet' or 'dry', or the mixed 'traditional' storage (of course all these storage types are not that meaningful, as they are all relative terms anyway). I don't like strong fermentation either (Pǔ'ěr is not my favourite tea; I only drink it in the 3 months of winter). Therefore, my intention is primarily to preserve the tea while it continues to age normally, keep it in optimal drinking condition, and not rush its fermentation. Correct storage is of great importance and has a big effect on Pǔ'ěr, yet it is often overlooked and underrated. Here's a short summary on how to store Pǔ'ěr chá (in Western Europe).
Pǔ'ěr chá storage (in Western Europe) Image: some Dà Yì Pǔ'ěr chá stored at home
1. Location
1.1 No light Choose a location with neither direct nor indirect sunlight (= ultraviolet radiation) exposure:
  • direct sunlight damages organic material ('bleaching')
  • indirect sunlight heats up the containers and unbalances the microclimate ('suppression')
1.2 No odors Choose a location with neither direct nor indirect odor exposure:
  • direct odor immissions (like incense, food, air pollution, etc.) make the tea undrinkable
  • indirect odor immissions (especially other teas stored next to it) taint the taste and dilute the flavours
Always use individual containers for each tea; store only identical tea of the same year and batch in the same container. Ideally never store one cake, brick or tuo on its own; buy multiples and store them together for better ageing.
2. Climate
2.1 Consistent temperature Use a location with no interference from any devices or structures next to it:
  • use a regular indoor location for mild temperature curves (i.e. no attics with large day/night temperature differences)
  • aim for >= 20°C average temperature for natural storage (i.e. no cold basements)
  • don't use a place next to any heat-dissipating devices (like radiators, computers, etc.)
  • don't use a place next to an outside-facing wall
  • always leave 5 to 10 cm distance to a wall for air to circulate (this generally prevents mold on the wall, but also isolates the containers further from possible immissions)
A temperature as consistent as possible allows even and steady fermentation. However, neither air conditioning nor filtering is needed. Regular day-to-day, day-to-night, and season-to-season fluctuations are balanced out fine with otherwise correct indoor storage. Humidity control is much more important for storage and ageing, and much less forgiving, than temperature control.
2.2 Consistent humidity Use humidity control packs to ensure a stable humidity level:
  • aim for ~65% RH
Lower than 60% RH will (completely) dry out the tea; higher than 70% RH increases the chance of mold.
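As a quick illustration of these windows (the >= 20°C and 60-70% RH figures come from this post; the function itself is a hypothetical sketch, not part of any monitoring product):

```python
def check_storage(temp_c: float, rh_percent: float) -> list[str]:
    """Flag readings outside the ranges recommended above (illustrative only)."""
    warnings = []
    if temp_c < 20:
        warnings.append("too cold: fermentation slows below ~20 C")
    if rh_percent < 60:
        warnings.append("too dry: below 60% RH the tea dries out")
    elif rh_percent > 70:
        warnings.append("too humid: above 70% RH mold becomes likely")
    return warnings

print(check_storage(21.0, 65.0))  # [] -- within range
print(check_storage(16.5, 55.0))  # two warnings: too cold and too dry
```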
3. Equipment
3.1 Proper containers Use containers that are available long-term and that are easily stackable, both in form and dimensions as well as load-bearing capacity. They should be inexpensive; otherwise it's most likely a scam (there are people selling snake-oil containers specifically for tea storage). For long term storage and ageing:
  • never use plastic: it leaches chemicals over time (no 'tupperdors', mylar or zip-lock bags, etc.)
  • never use cardboard or bamboo: they let too much air in, absorb too much humidity, and dry too slowly
  • never use wood: it emits odors (no humidors)
  • never use clay: it absorbs all humidity and dries out the tea (glazed or porcelain urns are acceptable for short term storage)
Always use sealed but not air-tight, cleaned steel cans (i.e. with no oil residues from manufacturing; such as these).
3.2 Proper humidity control Use two-way humidity control packs to either absorb or add moisture depending on the needs:
  • use humidity control pack(s) in every container
  • use one 60g pack (such as these) per 500g of tea
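The sizing rule above can be turned into a quick calculation (a sketch with hypothetical names; it assumes one 60g pack per started 500g of tea per container, as recommended):

```python
import math

def packs_needed(tea_grams: float, grams_per_pack: float = 500.0) -> int:
    """One 60g humidity pack per 500g of tea in a container, rounded up."""
    return max(1, math.ceil(tea_grams / grams_per_pack))

# A container holding 3 x 357g cakes (1071g) needs 3 packs:
print(packs_needed(3 * 357))
```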
3.3 Proper labels Put labels on the outside of the containers so you don't need to open them (and disturb the ageing) to know what is inside:
  • write at least manufacturer and name, pressing year, and batch number on the label (such as these)
  • if desired and not already tracked elsewhere, add additional information such as the number of items, weights, previous storage location/type, date of purchase, vendor, price, etc.
3.4 Proper monitoring Measuring and checking temperature and humidity regularly prevents bad storage parameters from going unnoticed:
  • put temperature and humidity measurement sensors (such as these) inside some of the containers (e.g. at least one in every different size/type of container)
  • keep at least one temperature and humidity measurement sensor next to the containers on the outside to monitor the storage location
4. Storage
4.1 Continued maintenance Before initially storing a new tea, let it acclimatize for 2h after unpacking from transport (or longer if the temperature difference between indoors and outdoors is high, i.e. in winter). Then, continuously:
  • once a month briefly air all containers for a minute or two
  • once a month check for mold, just to be safe
  • every 3 to 6 months check all humidity control packs for need of replacement
  • monitor battery life of temperature and humidity measurement sensors
Humidity control packs can be recharged (usually voiding any warranty) by submerging them in distilled water for 2 days and drying them on a paper towel for 4 hours afterwards. Check with a weight scale after recharging; they should regain their original weight (e.g. 60g plus ~2 to 5g for the packaging).
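The weight check after recharging can be sketched as follows (the 60g nominal fill and ~2-5g packaging figures are from this post; the function name is made up for illustration):

```python
def recharge_ok(measured_g: float, nominal_g: float = 60.0,
                packaging_min_g: float = 2.0, packaging_max_g: float = 5.0) -> bool:
    """A fully recharged pack should weigh its nominal fill plus packaging."""
    return nominal_g + packaging_min_g <= measured_g <= nominal_g + packaging_max_g

print(recharge_ok(63.5))  # True -- within 62-65g
print(recharge_ok(48.0))  # False -- did not fully recharge
```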
Finally With correct Pǔ'ěr chá storage:
  • begin to drink Shēng Pǔ'ěr (生普洱) after roughly 10-15 years, or later
  • begin to drink Shóu Pǔ'ěr (熟普洱) after roughly 3-5 years, or later
Prepare the tea by breaking it into, or breaking off, pieces of ~5 to 10g about 1 to 3 months ahead of drinking, and consider increasing humidity to 70% RH during that time.

31 December 2025

Ravi Dwivedi: Transit through Kuala Lumpur

In my last post, Badri and I reached Kuala Lumpur - the capital of Malaysia - on the 7th of December 2024. We stayed in Bukit Bintang, the entertainment district of the city. Our accommodation was pre-booked at 'Manor by Mingle', a hostel where I had stayed for a couple of nights in a dormitory room earlier in February 2024. We paid 4937 rupees (the payment was online, so we paid in Indian rupees) for 3 nights in a private room. From the Terminal Bersepadu Selatan (TBS) bus station, we took the metro to the Plaza Rakyat LRT station, which was around 500 meters from the hostel. Upon arriving at the hostel, we presented our passports at their request, followed by a 20 ringgit (400 rupee) deposit which would be refunded once we returned the room keys at checkout.
Outside view of the hostel Manor by Mingle Manor by Mingle - the hostel where we stayed at during our KL transit. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Our room was upstairs and it had a bunk bed. I had seen bunk beds in dormitories before, but this was my first time seeing a bunk bed in a private room. The room did not have any toilets, so we had to use the shared ones. Unusually, the hostel was equipped with a pool. It also had a washing machine with dryers - this was one of the reasons we chose this hostel, because we were traveling light and hadn't packed too many clothes. The machine and dryer cost 10 ringgits (200 rupees) per use, and we only used them once. The hostel provided complimentary breakfast, which included coffee. Outside of breakfast hours, there was also a paid coffee machine. During our stay, we visited a gurdwara - a place of worship for Sikhs - which was within walking distance of our hostel. The name of the gurdwara was Gurdwara Sahib Mainduab. However, it wasn't as lively as I had thought. The gurdwara was locked from the inside, and we had to knock on the gate and call for someone to open it. A man opened the gate and invited us in. The gurdwara was small, and there was only one other visitor - a man worshipping upstairs. We went upstairs briefly, then settled down on the first floor. We had some conversations with the person downstairs, who kindly made chai for us. They mentioned that the langar (community meal) is organized every Friday, unlike the gurdwaras I have been to, where langar is served every day. We were there for an hour before we left. We also went to Adyar Ananda Bhavan (a restaurant chain) near our hostel to try the chain in Malaysia. The chain is famous in Southern India and also known by its short name A2B. We ordered
A dosa Dosa served at Adyar Ananda Bhavan. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
All this came to around 33 ringgits (including taxes), i.e. around 660 rupees. We also purchased some snacks such as murukku from there for our trip. We had planned a day trip to Malacca, but had to cancel it due to rain. We didn't do a lot in Kuala Lumpur, and it ended up acting as a transit point for us to other destinations: flights from Kuala Lumpur were cheaper than from Singapore, and in one case a flight via Kuala Lumpur was even cheaper than a direct flight! We paid 15,000 rupees in total for the following three flights:
  1. Kuala Lumpur to Brunei,
  2. Brunei to Kuala Lumpur, and
  3. Kuala Lumpur to Ho Chi Minh City (Vietnam).
These were all AirAsia flights. The cheap tickets, however, did not include any checked-in luggage, and the cabin luggage weight limit was 7 kg. We had also bought quite a lot of stuff in Kuala Lumpur and Singapore, increasing the weight of our luggage. We estimated that it would be cheaper for us to take only essential items such as clothes, cameras, and laptops, and to leave souvenirs and other non-essentials behind in lockers at the TBS bus stand in Kuala Lumpur, than to pay for checked-in luggage. It would have cost 140 ringgits to add a checked-in bag from Kuala Lumpur to Bandar Seri Begawan and back, while the lockers cost 55 ringgits at the rate of 5 ringgits every six hours. We had seen these lockers when we alighted at the bus stand while coming from Johor Bahru. There might have been lockers at the airport itself as well, which would have been more convenient as we were planning to fly back in soon, but we weren't sure about finding lockers at the airport and didn't want to waste time looking. We had an early morning flight to Brunei on the 10th of December. We checked out of our hostel on the night of the 9th of December and left for TBS to take a bus to the airport. We took the metro from the nearest station to TBS. Upon reaching it, we put our luggage in the lockers. The lockers were automated and there was no staff there to guide us.
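The locker-versus-checked-bag trade-off above works out roughly like this (a back-of-the-envelope sketch; the 5- and 140-ringgit figures are from the post, and the 66-hour duration is implied by the 55-ringgit total):

```python
import math

def locker_cost(hours: float, rate: float = 5.0, period_hours: float = 6.0) -> float:
    """TBS lockers charge per started 6-hour period (figures from the post)."""
    return math.ceil(hours / period_hours) * rate

checked_bag = 140.0           # round trip with a checked-in bag, in ringgits
locker = locker_cost(66)      # 55 ringgits = eleven 6-hour periods = 66 hours
print(locker, locker < checked_bag)
```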
Lockers at TBS bus station Lockers at TBS bus station. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
We bought tickets for the airport bus from a counter at TBS for 26 ringgits for both of us. In order to issue the tickets, the person at the counter asked for our passports, and we handed them over promptly. Since paying in cash did not provide any extra anonymity, I would advise others to book these buses online. In Malaysia, you also need a boarding pass for buses. The bus terminal had kiosks for printing these, but they were broken and we had to go to a counter to obtain them. The boarding pass mentioned our gate number and other details such as our names and the departure time of the bus. The company was Jet Bus.
My boarding pass for the bus to the airport in Kuala Lumpur My boarding pass for the bus to the airport in Kuala Lumpur. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
To get to our boarding gate, we had to scan our boarding passes to open the AFC gates. Then we went downstairs into the waiting area. It had departure boards listing the bus timings and their respective gates. We boarded our bus around 10 minutes before the departure time - 00:00 hours. It departed on schedule and took 45 minutes to reach KL Airport Terminal 2, where we alighted. We arrived 6 hours before our flight's departure time of 06:30. We stopped at a convenience store at the airport to have some snacks. Then we weighed our bags at a weighing machine to check whether we were within the weight limit. It turned out that we were. We went to an AirAsia counter to get our boarding passes. The lady at the counter checked our Brunei visas carefully and looked for any Brunei stamps on our passports to verify whether we had used the visas in the past. However, she didn't weigh our bags to check whether they were within the limit, and gave us our boarding passes. We had more than 4 hours to go before our flight. This was the downside of booking an early morning flight - we weren't able to get a full night's sleep. A couple of hours before our flight time, we were hanging around our boarding gate. The place was crowded, so there were no seats available. There were no charging points. There was a Burger King outlet there which had some seating space and charging points. As we were hungry, we ordered two cups of cappuccino (15.9 ringgits) and one large french fries (8.9 ringgits) from Burger King. The total amount was 24 ringgits. When it was time to board, we went to the waiting area for our boarding gate. Soon, we boarded the plane. It took 2.5 hours to reach Brunei International Airport in the capital city of Bandar Seri Begawan.
View of Kuala Lumpur from the aeroplane View of Kuala Lumpur from the aeroplane. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Stay tuned for our experiences in Brunei! Credits: Thanks to Badri, Benson and Contrapunctus for reviewing the draft.

Chris Lamb: Favourites of 2025

Here are my favourite books and movies that I read and watched throughout 2025.

Books

  • Eliza Clark: Boy Parts (2020)
  • Rachel Cusk: The Outline Trilogy (2014-2018)
  • Edith Wharton: The House of Mirth (1905)
  • Michael Finkel: The Art Thief (2023)
  • Tony Judt: When the Facts Change: Essays 1995-2010 (2010)
  • Jennette McCurdy: I'm Glad My Mom Died (2022)
  • Joan Didion: The Year of Magical Thinking (2005)
  • Jill Lepore: These Truths: A History of the United States (2018)

Films

Recent releases

Disappointments this year included 28 Years Later (Danny Boyle, 2025), Cover-Up (Laura Poitras & Mark Obenhaus, 2025), Bugonia (Yorgos Lanthimos, 2025) and Caught Stealing (Darren Aronofsky, 2025).
Older releases i.e. films released before 2024, and not including rewatches from previous years. Distinctly unenjoyable watches included War of the Worlds (Rich Lee, 2025), Highest 2 Lowest (Spike Lee, 2025), Elizabethtown (Cameron Crowe, 2005), Crazy Rich Asians (Jon M. Chu, 2018) and Spinal Tap II: The End Continues (Rob Reiner, 2025). On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Chinatown (Roman Polanski, 1974), Koyaanisqatsi (Godfrey Reggio, 1982), Heat (Michael Mann, 1995) and Night of the Hunter (Charles Laughton, 1955).

kpcyrd: 2025 wrapped

Same as last year, this is a summary of what I've been up to throughout the year. See also the recap/retrospection published by my friends (antiz, jvoisin, orhun). Thanks to everybody who has been part of my human experience, past or present. Especially those who've been closest.

22 December 2025

C.J. Collier: I'm learning about perlguts today.


im-learning-about-perlguts-today.png


## 0.23	2025-12-20
commit be15aa25dea40aea66a8534143fb81b29d2e6c08
Author: C.J. Collier 
Date:   Sat Dec 20 22:40:44 2025 +0000
    Fixes C-level test infrastructure and adds more test cases for upb_to_sv conversions.
    
    - **Makefile.PL:**
        - Allow `extra_src` in `c_test_config.json` to be an array.
        - Add ASan flags to CCFLAGS and LDDLFLAGS for better debugging.
        - Corrected echo newlines in `test_c` target.
    - **c_test_config.json:**
        - Added missing type test files to `deps` and `extra_src` for `convert/sv_to_upb` and `convert/upb_to_sv` test runners.
    - **t/c/convert/upb_to_sv.c:**
        - Fixed a double free of `test_pool`.
        - Added missing includes for type test headers.
        - Updated test plan counts.
    - **t/c/convert/sv_to_upb.c:**
        - Added missing includes for type test headers.
        - Updated test plan counts.
        - Corrected Perl interpreter initialization.
    - **t/c/convert/types/**:
        - Added missing `test_util.h` include in new type test headers.
        - Completed the set of `upb_to_sv` test cases for all scalar types by adding optional and repeated tests for `sfixed32`, `sfixed64`, `sint32`, and `sint64`, and adding repeated tests to the remaining scalar type files.
    - **Documentation:**
        - Updated `01-xs-testing.md` with more debugging tips, including ASan usage and checking for double frees and typos.
        - Updated `xs_learnings.md` with details from the recent segfault.
        - Updated `llm-plan-execution-instructions.md` to emphasize debugging steps.
## 0.22	2025-12-19
commit 2c171d9a5027e0150eae629729c9104e7f6b9d2b
Author: C.J. Collier 
Date:   Fri Dec 19 23:41:02 2025 +0000
    feat(perl,testing): Initialize C test framework and build system
    
    This commit sets up the foundation for the C-level tests and the build system for the Perl Protobuf module:
    
    1.  **Makefile.PL Enhancements:**
        *   Integrates `Devel::PPPort` to generate `ppport.h` for better portability.
        *   Object files now retain their path structure (e.g., `xs/convert/sv_to_upb.o`) instead of being flattened, improving build clarity.
        *   The `MY::postamble` is significantly revamped to dynamically generate build rules for all C tests located in `t/c/` based on the `t/c/c_test_config.json` file.
        *   C tests are linked against `libprotobuf_common.a` and use `ExtUtils::Embed` flags.
        *   Added `JSON::MaybeXS` to `PREREQ_PM`.
        *   The `test` target now also depends on the `test_c` target.

    2.  **C Test Infrastructure (`t/c/`):**
        *   Introduced `t/c/c_test_config.json` to configure individual C test builds, specifying dependencies and extra source files.
        *   Created `t/c/convert/test_util.c` and `.h` for shared test functions like loading descriptors.
        *   Initial `t/c/convert/upb_to_sv.c` and `t/c/convert/sv_to_upb.c` test runners.
        *   Basic `t/c/integration/030_protobuf_coro.c` for Coro safety testing on core utils using `libcoro`.
        *   Basic `t/c/integration/035_croak_test.c` for testing exception handling.
        *   Basic `t/c/integration/050_convert.c` for integration testing conversions.

    3.  **Test Proto:** Updated `t/data/test.proto` with more field types for conversion testing and regenerated `test_descriptor.bin`.

    4.  **XS Test Harness (`t/c/upb-perl-test.h`):** Added `like_n` macro for length-aware regex matching.

    5.  **Documentation:** Updated architecture and plan documents to reflect the C test structure.
    6.  **ERRSV Testing:** Note that the C tests (`t/c/`) will primarily check *if* a `croak` occurs (i.e., that the exception path is taken), but will not assert on the string content of `ERRSV`. Reliably testing `$@` content requires the full Perl test environment with `Test::More`, which will be done in the `.t` files when testing the Perl API.
    
    This provides a solid base for developing and testing the XS and C components of the module.
## 0.21	2025-12-18
commit a8b6b6100b2cf29c6df1358adddb291537d979bc
Author: C.J. Collier 
Date:   Thu Dec 18 04:20:47 2025 +0000
    test(C): Add integration tests for Milestone 2 components
    
    - Created t/c/integration/030_protobuf.c to test interactions
      between obj_cache, arena, and utils.
    - Added this test to t/c/c_test_config.json.
    - Verified that all C tests for Milestones 2 and 3 pass,
      including the libcoro-based stress test.
## 0.20	2025-12-18
commit 0fcad68680b1f700a83972a7c1c48bf3a6958695
Author: C.J. Collier 
Date:   Thu Dec 18 04:14:04 2025 +0000
    docs(plan): Add guideline review reminders to milestones
    
    - Added a "[ ] REFRESH: Review all documents in @perl/doc/guidelines/**"
      checklist item to the start of each component implementation
      milestone (C and Perl layers).
    - This excludes Integration Test milestones.
## 0.19	2025-12-18
commit 987126c4b09fcdf06967a98fa3adb63d7de59a34
Author: C.J. Collier 
Date:   Thu Dec 18 04:05:53 2025 +0000
    docs(plan): Add C-level and Perl-level Coro tests to milestones
    
    - Added checklist items for  libcoro -based C tests
      (e.g.,  t/c/integration/050_convert_coro.c ) to all C layer
      integration milestones (050 through 220).
    - Updated  030_Integration_Protobuf.md  to standardise checklist
      items for the existing  030_protobuf_coro.c  test.
    - Removed the single  xt/author/coro-safe.t  item from
       010_Build.md .
    - Added checklist items for Perl-level  Coro  tests
      (e.g.,  xt/coro/240_arena.t ) to each Perl layer
      integration milestone (240 through 400).
    - Created  perl/t/c/c_test_config.json  to manage C test
      configurations externally.
    - Updated  perl/doc/architecture/testing/01-xs-testing.md  to describe
      both C-level  libcoro  and Perl-level  Coro  testing strategies.
## 0.18	2025-12-18
commit 6095a5a610401a6035a81429d0ccb9884d53687b
Author: C.J. Collier 
Date:   Thu Dec 18 02:34:31 2025 +0000
    added coro testing to c layer milestones
## 0.17	2025-12-18
commit cc0aae78b1f7f675fc8a1e99aa876c0764ea1cce
Author: C.J. Collier 
Date:   Thu Dec 18 02:26:59 2025 +0000
    docs(plan): Refine test coverage checklist items for SMARTness
    
    - Updated the "Tests provide full coverage" checklist items in
      C layer plan files (020, 040, 060, 080, 100, 120, 140, 160, 180, 200)
      to explicitly mention testing all public functions in the
      corresponding header files.
    - Expanded placeholder checklists in 140, 160, 180, 200.
    - Updated the "Tests provide full coverage" and "Add coverage checks"
      checklist items in Perl layer plan files (230, 250, 270, 290, 310, 330,
      350, 370, 390) to be more specific about the scope of testing
      and the use of  Test::TestCoverage .
    - Expanded Well-Known Types milestone (350) to detail each type.
## 0.16	2025-12-18
commit e4b601f14e3817a17b0f4a38698d981dd4cb2818
Author: C.J. Collier 
Date:   Thu Dec 18 02:07:35 2025 +0000
    docs(plan): Full refactoring of C and Perl plan files
    
    - Split both ProtobufPlan-C.md and ProtobufPlan-Perl.md into
      per-milestone files under the  perl/doc/plan/  directory.
    - Introduced Integration Test milestones after each component
      milestone in both C and Perl plans.
    - Numbered milestone files sequentially (e.g., 010_Build.md,
      230_Perl_Arena.md).
    - Updated main ProtobufPlan-C.md and ProtobufPlan-Perl.md to
      act as Tables of Contents.
    - Ensured consistent naming for integration test files
      (e.g.,  t/c/integration/030_protobuf.c ,  t/integration/260_descriptor_pool.t ).
    - Added architecture review steps to the end of all milestones.
    - Moved Coro safety test to C layer Milestone 1.
    - Updated Makefile.PL to support new test structure and added Coro.
    - Moved and split t/c/convert.c into t/c/convert/*.c.
    - Moved other t/c/*.c tests into t/c/protobuf/*.c.
    - Deleted old t/c/convert.c.
## 0.15	2025-12-17
commit 649cbacf03abb5e7293e3038bb451c0406e9d0ce
Author: C.J. Collier 
Date:   Wed Dec 17 23:51:22 2025 +0000
    docs(plan): Refactor and reset ProtobufPlan.md
    
    - Split the plan into ProtobufPlan-C.md and ProtobufPlan-Perl.md.
    - Reorganized milestones to clearly separate C layer and Perl layer development.
    - Added more granular checkboxes for each component:
      - C Layer: Create test, Test coverage, Implement, Tests pass.
      - Perl Layer: Create test, Test coverage, Implement Module/XS, Tests pass, C-Layer adjustments.
    - Reset all checkboxes to  [ ]  to prepare for a full audit.
    - Updated status in architecture/api and architecture/core documents to "Not Started".
    
    feat(obj_cache): Add unregister function and enhance tests
    
    - Added  protobuf_unregister_object  to  xs/protobuf/obj_cache.c .
    - Updated  xs/protobuf/obj_cache.h  with the new function declaration.
    - Expanded tests in  t/c/protobuf_obj_cache.c  to cover unregistering,
      overwriting keys, and unregistering non-existent keys.
    - Corrected the test plan count in  t/c/protobuf_obj_cache.c  to 17.
## 0.14	2025-12-17
commit 40b6ad14ca32cf16958d490bb575962f88d868a1
Author: C.J. Collier 
Date:   Wed Dec 17 23:18:27 2025 +0000
    feat(arena): Complete C layer for Arena wrapper
    
    This commit finalizes the C-level implementation for the Protobuf::Arena wrapper.
    
    - Adds  PerlUpb_Arena_Destroy  for proper cleanup from Perl's DEMOLISH.
    - Enhances error checking in  PerlUpb_Arena_Get .
    - Expands C-level tests in  t/c/protobuf_arena.c  to cover memory allocation
      on the arena and lifecycle through  PerlUpb_Arena_Destroy .
    - Corrects embedded Perl initialization in the C test.
    
    docs(plan): Refactor ProtobufPlan.md
    
    - Restructures the development plan to clearly separate "C Layer" and
      "Perl Layer" tasks within each milestone.
    - This aligns the plan with the "C-First Implementation Strategy" and improves progress tracking.
## 0.13	2025-12-17
commit c1e566c25f62d0ae9f195a6df43b895682652c71
Author: C.J. Collier 
Date:   Wed Dec 17 22:00:40 2025 +0000
    refactor(perl): Rename C tests and enhance Makefile.PL
    
    - Renamed test files in  t/c/  to better match the  xs  module structure:
        -  01-cache.c  ->  protobuf_obj_cache.c 
        -  02-arena.c  ->  protobuf_arena.c 
        -  03-utils.c  ->  protobuf_utils.c 
        -  04-convert.c  ->  convert.c 
        -  load_test.c  ->  upb_descriptor_load.c 
    - Updated  perl/Makefile.PL  to reflect the new test names in  MY::postamble 's  $c_test_config .
    - Refactored the  $c_test_config  generation in  Makefile.PL  to reduce repetition by using a default flags hash and common dependencies array.
    - Added a  fail()  macro to  perl/t/c/upb-perl-test.h  for consistency.
    - Modified  t/c/upb_descriptor_load.c  to use the  t/c/upb-perl-test.h  macros, making its output consistent with other C tests.
    - Added a skeleton for  t/c/convert.c  to test the conversion functions.
    - Updated documentation in  ProtobufPlan.md  and  architecture/testing/01-xs-testing.md  to reflect new test names.
## 0.12	2025-12-17
commit d8cb5dd415c6c129e71cd452f78e29de398a82c9
Author: C.J. Collier 
Date:   Wed Dec 17 20:47:38 2025 +0000
    feat(perl): Refactor XS code into subdirectories
    
    This commit reorganizes the C code in the  perl/xs/  directory into subdirectories, mirroring the structure of the Python UPB extension. This enhances modularity and maintainability.
    
    - Created subdirectories for each major component:  convert ,  descriptor ,  descriptor_containers ,  descriptor_pool ,  extension_dict ,  map ,  message ,  protobuf ,  repeated , and  unknown_fields .
    - Created skeleton  .h  and  .c  files within each subdirectory to house the component-specific logic.
    - Updated top-level component headers (e.g.,  perl/xs/descriptor.h ) to include the new sub-headers.
    - Updated top-level component source files (e.g.,  perl/xs/descriptor.c ) to include their main header and added stub initialization functions (e.g.,  PerlUpb_InitDescriptor ).
    - Moved code from the original  perl/xs/protobuf.c  to new files in  perl/xs/protobuf/  (arena, obj_cache, utils).
    - Moved code from the original  perl/xs/convert.c  to new files in  perl/xs/convert/  (upb_to_sv, sv_to_upb).
    - Updated  perl/Makefile.PL  to use a glob ( xs/*/*.c ) to find the new C source files in the subdirectories.
    - Added  perl/doc/architecture/core/07-xs-file-organization.md  to document the new structure.
    - Updated  perl/doc/ProtobufPlan.md  and other architecture documents to reference the new organization.
    - Corrected self-referential includes in the newly created .c files.
    
    This restructuring provides a solid foundation for further development and makes it easier to port logic from the Python implementation.
## 0.11	2025-12-17
commit cdedcd13ded4511b0464f5d3bdd72ce6d34e73fc
Author: C.J. Collier 
Date:   Wed Dec 17 19:57:52 2025 +0000
    feat(perl): Implement C-first testing and core XS infrastructure
    
    This commit introduces a significant refactoring of the Perl XS extension, adopting a C-first development approach to ensure a robust foundation.
    
    Key changes include:
    
    -   **C-Level Testing Framework:** Established a C-level testing system in  t/c/  with a dedicated Makefile, using an embedded Perl interpreter. Initial tests cover the object cache ( 01-cache.c ), arena wrapper ( 02-arena.c ), and utility functions ( 03-utils.c ).
    -   **Core XS Infrastructure:**
        -   Implemented a global object cache ( xs/protobuf.c ) to manage Perl wrappers for UPB objects, using weak references.
        -   Created an  upb_Arena  wrapper ( xs/protobuf.c ).
        -   Consolidated common XS helper functions into  xs/protobuf.h  and  xs/protobuf.c .
    -   **Makefile.PL Enhancements:** Updated to support building and linking C tests, incorporating flags from  ExtUtils::Embed , and handling both  .c  and  .cc  source files.
    -   **XS File Reorganization:** Restructured XS files to mirror the Python UPB extension's layout (e.g.,  message.c ,  descriptor.c ). Removed older, monolithic  .xs  files.
    -   **Typemap Expansion:** Added extensive typemap entries in  perl/typemap  to handle conversions between Perl objects and various  const upb_*Def*  pointers.
    -   **Descriptor Tests:** Added a new test suite  t/02-descriptor.t  to validate descriptor loading and accessor methods.
    -   **Documentation:** Updated development plans and guidelines ( ProtobufPlan.md ,  xs_learnings.md , etc.) to reflect the C-first strategy, new testing methods, and lessons learned.
    -   **Build Cleanup:** Removed  ppport.h  from  .gitignore  as it's no longer used, due to  -DPERL_NO_PPPORT  being set in  Makefile.PL .
    
    This C-first approach allows for more isolated and reliable testing of the core logic interacting with the UPB library before higher-level Perl APIs are built upon it.
## 0.10	2025-12-17
commit 1ef20ade24603573905cb0376670945f1ab5d829
Author: C.J. Collier 
Date:   Wed Dec 17 07:08:29 2025 +0000
    feat(perl): Implement C-level tests and core XS utils
    
    This commit introduces a C-level testing framework for the XS layer and implements key components:
    
    1.  **C-Level Tests ( t/c/ )**:
        *   Added  t/c/Makefile  to build standalone C tests.
        *   Created  t/c/upb-perl-test.h  with macros for TAP-compliant C tests ( plan ,  ok ,  is ,  is_string ,  diag ).
        *   Implemented  t/c/01-cache.c  to test the object cache.
        *   Implemented  t/c/02-arena.c  to test  Protobuf::Arena  wrappers.
        *   Implemented  t/c/03-utils.c  to test string utility functions.
        *   Corrected include paths and diagnostic messages in C tests.
    
    2.  **XS Object Cache ( xs/protobuf.c )**:
        *   Switched to using stringified pointers ( %p ) as hash keys for stability.
        *   Fixed a critical double-free bug in  PerlUpb_ObjCache_Delete  by removing an extra  SvREFCNT_dec  on the lookup key.
    
    3.  **XS Arena Wrapper ( xs/protobuf.c )**:
        *   Corrected  PerlUpb_Arena_New  to use  newSVrv  and  PTR2IV  for opaque object wrapping.
        *   Corrected  PerlUpb_Arena_Get  to safely unwrap the arena pointer.
    
    4.  **Makefile.PL ( perl/Makefile.PL )**:
        *   Added  -Ixs  to  INC  to allow C tests to find  t/c/upb-perl-test.h  and  xs/protobuf.h .
        *   Added  LIBS  to link  libprotobuf_common.a  into the main  Protobuf.so .
        *   Added C test targets  01-cache ,  02-arena ,  03-utils  to the test config in  MY::postamble .
    
    5.  **Protobuf.pm ( perl/lib/Protobuf.pm )**:
        *   Added  use XSLoader;  to load the compiled XS code.
    
    6.  **New File ( xs/util.h )**:
        *   Added initial type conversion function.
    
    These changes establish a foundation for testing the C-level interface with UPB and fix crucial bugs in the object cache implementation.
## 0.09	2025-12-17
commit 07d61652b032b32790ca2d3848243f9d75ea98f4
Author: C.J. Collier 
Date:   Wed Dec 17 04:53:34 2025 +0000
    feat(perl): Build system and C cache test for Perl XS
    
    This commit introduces the foundational pieces for the Perl XS implementation, focusing on the build system and a C-level test for the object cache.
    
    -   **Makefile.PL:**
        -   Refactored C test compilation rules in  MY::postamble  to use a hash ( $c_test_config ) for better organization and test-specific flags.
        -   Integrated  ExtUtils::Embed  to provide necessary compiler and linker flags for embedding the Perl interpreter, specifically for the  t/c/01-cache.c  test.
        -   Correctly constructs the path to the versioned Perl library ( libperl.so.X.Y.Z ) using  $Config{archlib}  and  $Config{libperl}  to ensure portability.
        -   Removed  VERSION_FROM  and  ABSTRACT_FROM  to avoid dependency on  .pm  files for now.
    
    -   **C Cache Test (t/c/01-cache.c):**
        -   Added a C test to exercise the object cache functions implemented in  xs/protobuf.c .
        -   Includes tests for adding, getting, deleting, and weak reference behavior.
    
    -   **XS Cache Implementation (xs/protobuf.c, xs/protobuf.h):**
        -   Implemented  PerlUpb_ObjCache_Init ,  PerlUpb_ObjCache_Add ,  PerlUpb_ObjCache_Get ,  PerlUpb_ObjCache_Delete , and  PerlUpb_ObjCache_Destroy .
        -   Uses a Perl hash ( HV* ) for the cache.
        -   Keys are string representations of the C pointers, created using  snprintf  with  "%llx" .
        -   Values are weak references ( sv_rvweaken ) to the Perl objects ( SV* ).
        -    PerlUpb_ObjCache_Get  now correctly returns an incremented reference to the original SV, not a copy.
        -    PerlUpb_ObjCache_Destroy  now clears the hash before decrementing its refcount.
    
    -   **t/c/upb-perl-test.h:**
        -   Updated  is_sv  to perform direct pointer comparison ( got == expected ).
    
    -   **Minor:** Added  util.h  (currently empty), updated  typemap .
    
    These changes establish a working C-level test environment for the XS components.
## 0.08	2025-12-17
commit d131fd22ea3ed8158acb9b0b1fe6efd856dc380e
Author: C.J. Collier 
Date:   Wed Dec 17 02:57:48 2025 +0000
    feat(perl): Update docs and core XS files
    
    - Explicitly add TDD cycle to ProtobufPlan.md.
    - Clarify mirroring of Python implementation in upb-interfacing.md for both C and Perl layers.
    - Branch and adapt python/protobuf.h and python/protobuf.c to perl/xs/protobuf.h and perl/xs/protobuf.c, including the object cache implementation. Removed old cache.* files.
    - Create initial C test for the object cache in t/c/01-cache.c.
## 0.07	2025-12-17
commit 56fd6862732c423736a2f9a9fb1a2816fc59e9b0
Author: C.J. Collier 
Date:   Wed Dec 17 01:09:18 2025 +0000
    feat(perl): Align Perl UPB architecture docs with Python
    
    Updates the Perl Protobuf architecture documents to more closely align with the design and implementation strategies used in the Python UPB extension.
    
    Key changes:
    
    -   **Object Caching:** Mandates a global, per-interpreter cache using weak references for all UPB-derived objects, mirroring Python's  PyUpb_ObjCache .
    -   **Descriptor Containers:** Introduces a new document outlining the plan to use generic XS container types (Sequence, ByNameMap, ByNumberMap) with vtables to handle collections of descriptors, similar to Python's  descriptor_containers.c .
    -   **Testing:** Adds a note to the testing strategy to port relevant test cases from the Python implementation to ensure feature parity.
## 0.06	2025-12-17
commit 6009ce6ab64eccce5c48729128e5adf3ef98e9ae
Author: C.J. Collier 
Date:   Wed Dec 17 00:28:20 2025 +0000
    feat(perl): Implement object caching and fix build
    
    This commit introduces several key improvements to the Perl XS build system and core functionality:
    
    1.  **Object Caching:**
        *   Introduces  xs/protobuf.c  and  xs/protobuf.h  to implement a caching mechanism ( protobuf_c_to_perl_obj ) for wrapping UPB C pointers into Perl objects. This uses a hash and weak references to ensure object identity and prevent memory leaks.
        *   Updates the  typemap  to use  protobuf_c_to_perl_obj  for  upb_MessageDef *  output, ensuring descriptor objects are cached.
        *   Corrected  sv_weaken  to the correct  sv_rvweaken  function.
    
    2.  **Makefile.PL Enhancements:**
        *   Switched to using the Bazel-generated UPB descriptor sources from  bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/ .
        *   Updated  INC  paths to correctly locate the generated headers.
        *   Refactored  MY::dynamic_lib  to ensure the static library  libprotobuf_common.a  is correctly linked into each generated  .so  module, resolving undefined symbol errors.
        *   Overrode  MY::test  to use  prove -b -j$(nproc) t/*.t xt/*.t  for running tests.
        *   Cleaned up  LIBS  and  LDDLFLAGS  usage.
    
    3.  **Documentation:**
        *   Updated  ProtobufPlan.md  to reflect the current status and design decisions.
        *   Reorganized architecture documents into subdirectories.
        *   Added  object-caching.md  and  c-perl-interface.md .
        *   Updated  llm-guidance.md  with notes on  upb/upb.h  and  sv_rvweaken .
    
    4.  **Testing:**
        *   Fixed  xt/03-moo_immutable.t  to skip tests if no Moo modules are found.
    
    This resolves the build issues and makes the core test suite pass.
## 0.05	2025-12-16
commit 177d2f3b2608b9d9c415994e076a77d8560423b8
Author: C.J. Collier 
Date:   Tue Dec 16 19:51:36 2025 +0000
    Refactor: Rename namespace to Protobuf, build system and doc updates
    
    This commit refactors the primary namespace from  ProtoBuf  to  Protobuf 
    to align with the style guide. This involves renaming files, directories,
    and updating package names within all Perl and XS files.
    
    **Namespace Changes:**
    
    *   Renamed  perl/lib/ProtoBuf  to  perl/lib/Protobuf .
    *   Moved and updated  ProtoBuf.pm  to  Protobuf.pm .
    *   Moved and updated  ProtoBuf::Descriptor  to  Protobuf::Descriptor  (.pm & .xs).
    *   Removed other  ProtoBuf::*  stubs (Arena, DescriptorPool, Message).
    *   Updated  MODULE  and  PACKAGE  in  Descriptor.xs .
    *   Updated  NAME ,  *_FROM  in  perl/Makefile.PL .
    *   Replaced  ProtoBuf  with  Protobuf  throughout  perl/typemap .
    *   Updated namespaces in test files  t/01-load-protobuf-descriptor.t  and  t/02-descriptor.t .
    *   Updated namespaces in all documentation files under  perl/doc/ .
    *   Updated paths in  perl/.gitignore .
    
    **Build System Enhancements (Makefile.PL):**
    
    *   Included  xs/*.c  files in the common object files list.
    *   Added  -I.  to the  INC  paths.
    *   Switched from  MYEXTLIB  to  LIBS => ['-L$(CURDIR) -lprotobuf_common']  for linking.
    *   Removed custom keys passed to  WriteMakefile  for postamble.
    *    MY::postamble  now sources variables directly from the main script scope.
    *   Added  all :: $common_lib  dependency in  MY::postamble .
    *   Added  t/c/load_test.c  compilation rule in  MY::postamble .
    *   Updated  clean  target to include  blib .
    *   Added more modules to  TEST_REQUIRES .
    *   Removed the explicit  PM  and  XS  keys from  WriteMakefile , relying on  XSMULTI => 1 .
    
    **New Files:**
    
    *    perl/lib/Protobuf.pm 
    *    perl/lib/Protobuf/Descriptor.pm 
    *    perl/lib/Protobuf/Descriptor.xs 
    *    perl/t/01-load-protobuf-descriptor.t 
    *    perl/t/02-descriptor.t 
    *    perl/t/c/load_test.c : Standalone C test for UPB.
    *    perl/xs/types.c  &  perl/xs/types.h : For Perl/C type conversions.
    *    perl/doc/architecture/upb-interfacing.md 
    *    perl/xt/03-moo_immutable.t : Test for Moo immutability.
    
    **Deletions:**
    
    *   Old test files:  t/00_load.t ,  t/01_basic.t ,  t/02_serialize.t ,  t/03_message.t ,  t/04_descriptor_pool.t ,  t/05_arena.t ,  t/05_message.t .
    *   Removed  lib/ProtoBuf.xs  as it's not needed with  XSMULTI .
    
    **Other:**
    
    *   Updated  test_descriptor.bin  (binary change).
    *   Significant content updates to markdown documentation files in  perl/doc/architecture  and  perl/doc/internal  reflecting the new architecture and learnings.
## 0.04	2025-12-14
commit 92de5d482c8deb9af228f4b5ce31715d3664d6ee
Author: C.J. Collier 
Date:   Sun Dec 14 21:28:19 2025 +0000
    feat(perl): Implement Message object creation and fix lifecycles
    
    This commit introduces the basic structure for  ProtoBuf::Message  object
    creation, linking it with  ProtoBuf::Descriptor  and  ProtoBuf::DescriptorPool ,
    and crucially resolves a SEGV by fixing object lifecycle management.
    
    Key Changes:
    
    1.  ** ProtoBuf::Descriptor :** Added  _pool  attribute to hold a strong
        reference to the parent  ProtoBuf::DescriptorPool . This is essential to
        prevent the pool and its C  upb_DefPool  from being garbage collected
        while a descriptor is still in use.
    
    2.  ** ProtoBuf::DescriptorPool :**
        *    find_message_by_name : Now passes the  $self  (the pool object) to the
             ProtoBuf::Descriptor  constructor to establish the lifecycle link.
        *   XSUB  pb_dp_find_message_by_name : Updated to accept the pool  SV*  and
            store it in the descriptor's  _pool  attribute.
        *   XSUB  _load_serialized_descriptor_set : Renamed to avoid clashing with the
            Perl method name. The Perl wrapper now correctly calls this internal XSUB.
        *    DEMOLISH : Made safer by checking for attribute existence.
    
    3.  ** ProtoBuf::Message :**
        *   Implemented using Moo with lazy builders for  _upb_arena  and
             _upb_message .
        *    _descriptor  is a required argument to  new() .
        *   XS functions added for creating the arena ( pb_msg_create_arena ) and
            the  upb_Message  ( pb_msg_create_upb_message ).
        *    pb_msg_create_upb_message  now extracts the  upb_MessageDef*  from the
            descriptor and uses  upb_MessageDef_MiniTable()  to get the minitable
            for  upb_Message_New() .
        *    DEMOLISH : Added to free the message's arena.
    
    4.  ** Makefile.PL :**
        *   Added  -g  to  CCFLAGS  for debugging symbols.
        *   Added Perl CORE include path to  MY::postamble 's  base_flags .
    
    5.  **Tests:**
        *    t/04_descriptor_pool.t : Updated to check the structure of the
            returned  ProtoBuf::Descriptor .
        *    t/05_message.t : Now uses a descriptor obtained from a real pool to
            test  ProtoBuf::Message->new() .
    
    6.  **Documentation:**
        *   Updated  ProtobufPlan.md  to reflect progress.
        *   Updated several files in  doc/architecture/  to match the current
            implementation details, especially regarding arena management and object
            lifecycles.
        *   Added  doc/internal/development_cycle.md  and  doc/internal/xs_learnings.md .
    
    With these changes, the SEGV is resolved, and message objects can be successfully
    created from descriptors.
## 0.03	2025-12-14
commit 6537ad23e93680c2385e1b571d84ed8dbe2f68e8
Author: C.J. Collier 
Date:   Sun Dec 14 20:23:41 2025 +0000
    Refactor(perl): Object-Oriented DescriptorPool with Moo
    
    This commit refactors the  ProtoBuf::DescriptorPool  to be fully object-oriented using Moo, and resolves several issues related to XS, typemaps, and test data.
    
    Key Changes:
    
    1.  **Moo Object:**  ProtoBuf::DescriptorPool.pm  now uses  Moo  to define the class. The  upb_DefPool  pointer is stored as a lazy attribute  _upb_defpool .
    2.  **XS Lifecycle:**  DescriptorPool.xs  now has  pb_dp_create_pool  called by the Moo builder and  pb_dp_free_pool  called from  DEMOLISH  to manage the  upb_DefPool  lifecycle per object.
    3.  **Typemap:** The  perl/typemap  file has been significantly updated to handle the conversion between the  ProtoBuf::DescriptorPool  Perl object and the  upb_DefPool *  C pointer. This includes:
        *   Mapping  upb_DefPool *  to  T_PTR .
        *   An  INPUT  section for  ProtoBuf::DescriptorPool  to extract the pointer from the object's hash, triggering the lazy builder if needed via  call_method .
        *   An  OUTPUT  section for  upb_DefPool *  to convert the pointer back to a Perl integer, used by the builder.
    4.  **Method Renaming:**  add_file_descriptor_set_binary  is now  load_serialized_descriptor_set .
    5.  **Test Data:**
        *   Added  perl/t/data/test.proto  with a sample message and enum.
        *   Generated  perl/t/data/test_descriptor.bin  using  protoc .
        *   Removed  t/data/  from  .gitignore  to ensure test data is versioned.
    6.  **Test Update:**  t/04_descriptor_pool.t  is updated to use the new OO interface, load the generated descriptor set, and check for message definitions.
    7.  **Build Fixes:**
        *   Corrected  #include  paths in  DescriptorPool.xs  to be relative to the  upb/  directory (e.g.,  upb/wire/decode.h ).
        *   Added  -I../upb  to  CCFLAGS  in  Makefile.PL .
        *   Reordered  INC  paths in  Makefile.PL  to prioritize local headers.
    
    **Note:** While tests now pass in some environments, a SEGV issue persists in  make test  runs, indicating a potential memory or lifecycle issue within the XS layer that needs further investigation.
## 0.02	2025-12-14
commit 6c9a6f1a5f774dae176beff02219f504ea3a6e07
Author: C.J. Collier 
Date:   Sun Dec 14 20:13:09 2025 +0000
    Fix(perl): Correct UPB build integration and generated file handling
    
    This commit resolves several issues to achieve a successful build of the Perl extension:
    
    1.  **Use Bazel Generated Files:** Switched from compiling UPB's stage0 descriptor.upb.c to using the Bazel-generated  descriptor.upb.c  and  descriptor.upb_minitable.c  located in  bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/ .
    2.  **Updated Include Paths:** Added the  bazel-bin  path to  INC  in  WriteMakefile  and to  base_flags  in  MY::postamble  to ensure the generated headers are found during both XS and static library compilation.
    3.  **Removed Stage0:** Removed references to  UPB_STAGE0_DIR  and no longer include headers or source files from  upb/reflection/stage0/ .
    4.  **-fPIC:** Explicitly added  -fPIC  to  CCFLAGS  in  WriteMakefile  and ensured  $(CCFLAGS)  is used in the custom compilation rules in  MY::postamble . This guarantees all object files in the static library are compiled with position-independent code, resolving linker errors when creating the shared objects for the XS modules.
    5.  **Refined UPB Sources:** Used  File::Find  to recursively find UPB C sources, excluding  /conformance/  and  /reflection/stage0/  to avoid conflicts and unnecessary compilations.
    6.  **Arena Constructor:** Modified  ProtoBuf::Arena::pb_arena_new  XSUB to accept the class name argument passed from Perl, making it a proper constructor.
    7.  **.gitignore:** Added patterns to  perl/.gitignore  to ignore generated C files from XS ( lib/*.c ,  lib/ProtoBuf/*.c ), the copied  src_google_protobuf_descriptor.pb.cc , and the  t/data  directory.
    8.  **Build Documentation:** Updated  perl/doc/architecture/upb-build-integration.md  to reflect the new build process, including the Bazel prerequisite, include paths,  -fPIC  usage, and  File::Find .
    
    Build Steps:
    1.   bazel build //src/google/protobuf:descriptor_upb_proto  (from repo root)
    2.   cd perl 
    3.   perl Makefile.PL 
    4.   make 
    5.   make test  (Currently has expected failures due to missing test data implementation).
## 0.01	2025-12-14
commit 3e237e8a26442558c94075766e0d4456daaeb71d
Author: C.J. Collier 
Date:   Sun Dec 14 19:34:28 2025 +0000
    feat(perl): Initialize Perl extension scaffold and build system
    
    This commit introduces the  perl/  directory, laying the groundwork for the Perl Protocol Buffers extension. It includes the essential build files, linters, formatter configurations, and a vendored Devel::PPPort for XS portability.
    
    Key components added:
    
    *   ** Makefile.PL **: The core  ExtUtils::MakeMaker  build script. It's configured to:
        *   Build a static library ( libprotobuf_common.a ) from UPB, UTF8_Range, and generated protobuf C/C++ sources.
        *   Utilize  XSMULTI => 1  to create separate shared objects for  ProtoBuf ,  ProtoBuf::Arena , and  ProtoBuf::DescriptorPool .
        *   Link each XS module against the common static library.
        *   Define custom compilation rules in  MY::postamble  to handle C vs. C++ flags and build the static library.
        *   Set up include paths for the project root, UPB, and other dependencies.
    
    *   **XS Stubs ( .xs  files)**:
        *    lib/ProtoBuf.xs : Placeholder for the main module's XS functions.
        *    lib/ProtoBuf/Arena.xs : XS interface for  upb_Arena  management.
        *    lib/ProtoBuf/DescriptorPool.xs : XS interface for  upb_DefPool  management.
    
    *   **Perl Module Stubs ( .pm  files)**:
        *    lib/ProtoBuf.pm : Main module, loads XS.
        *    lib/ProtoBuf/Arena.pm : Perl class for Arenas.
        *    lib/ProtoBuf/DescriptorPool.pm : Perl class for Descriptor Pools.
        *    lib/ProtoBuf/Message.pm : Base class for messages (TBD).
    
    *   **Configuration Files**:
        *   `.gitignore`: Ignores build artifacts, editor files, etc.
        *   `.perlcriticrc`: Configures Perl::Critic for static analysis.
        *   `.perltidyrc`: Configures perltidy for code formatting.
    
    *   **`Devel::PPPort`**: Vendored version 3.72 to generate `ppport.h` for XS compatibility across different Perl versions.
    
    *   **`typemap`**: Custom typemap for XS argument/result conversion.
    
    *   **Documentation (`doc/`)**: Initial architecture and plan documents.
    
    This provides a solid foundation for developing the UPB-based Perl extension.

19 December 2025

Otto Kekäläinen: Backtesting trailing stop-loss strategies with Python and market data

Featured image of post Backtesting trailing stop-loss strategies with Python and market dataIn January 2024 I wrote about the insanity of the Magnificent Seven dominating the MSCI World Index, and I wondered how long the number can continue to go up? It has continued to surge upward at an accelerating pace, which makes me worry that a crash is likely closer. As a software professional, I decided to analyze whether using stop-loss orders could reliably automate avoiding deep drawdowns. As everyone with some savings in the stock market (hopefully) knows, the stock market eventually experiences crashes. It is just a matter of when and how deep the crash will be. Staying on the sidelines for years is not a good investment strategy, as inflation will erode the value of your savings. Assuming the current true inflation rate is around 7%, a restaurant dinner that costs 20 euros today will cost 24.50 euros in three years. Savings of 1000 euros today would drop in purchasing power from 50 dinners to only 40 dinners in three years. Hence, if you intend to retain the value of your hard-earned savings, they need to be invested in something that grows in value. Most people try to beat inflation by buying shares in stable companies, directly or via broad market ETFs. These historically grow faster than inflation during normal years, but likely drop in value during recessions.
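The inflation arithmetic above is easy to verify in a couple of lines (a quick sketch; the 7% rate is, of course, just the post's assumption):

```python
inflation = 0.07  # assumed annual rate from the text
dinner_today = 20.0
savings = 1000.0

dinner_in_3_years = dinner_today * (1 + inflation) ** 3
print(round(dinner_in_3_years, 2))       # 24.5 euros per dinner
print(savings / dinner_today)            # 50.0 dinners today
print(int(savings / dinner_in_3_years))  # 40 dinners in three years
```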

What is a trailing stop-loss order? What if you could buy stocks to benefit from their value increasing without having to worry about a potential crash? All modern online stock brokers have a feature called stop-loss, where you can enter a price at which your stocks automatically get sold if they drop down to that price. A trailing stop-loss order is similar, but instead of a fixed price, you enter a margin (e.g. 10%). If the stock price rises, the stop-loss price will trail upwards by that margin. For example, if you buy a share at 100 euros and it has risen to 110 euros, you can set a 10% trailing stop-loss order which automatically sells it if the price drops 10% from the peak of 110 euros, at 99 euros. Thus, no matter what happens, you only lost 1 euro. And if the stock price continues to rise to 150 euros, the trailing stop-loss would automatically readjust to 150 euros minus 10%, which is 135 euros (150-15=135). If the price dropped to 135 euros, you would lock in a gain of 35 euros, which is not the peak price of 150 euros, but still better than whatever the price fell down to as a result of a large crash. In the simple case above, it obviously makes sense in theory, but it might not make sense in practice. Prices constantly oscillate, so you don't want a margin that is too small, otherwise you exit too early. Conversely, having a large margin may result in too large a drawdown before exiting. If markets crash rapidly, it might be that nobody buys your stocks at the stop-loss price, and shares have to be sold at an even lower price. Also, what will you do once the position is sold? The reason you invested in the stock market was to avoid holding cash, so would you buy the same stock back when the crash bottoms? But how will you know when the bottom has been reached?
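The trailing-stop mechanics described above can be expressed as a few lines of Python (a simplified sketch, not the post's actual backtest code: it assumes the order fills exactly at the stop price, ignoring slippage and gaps):

```python
def trailing_stop_exit(prices, margin=0.10):
    """Return (exit_price, index) where a trailing stop-loss would trigger,
    or (None, None) if it never does."""
    peak = prices[0]
    for i, price in enumerate(prices):
        peak = max(peak, price)       # the stop trails the highest price seen
        stop = peak * (1 - margin)
        if price <= stop:
            return stop, i
    return None, None

# The example from the text: bought at 100, peak of 110, 10% margin -> exit at 99
exit_price, idx = trailing_stop_exit([100, 105, 110, 104, 99, 95])
print(exit_price)  # 99.0
```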

Backtesting stock market strategies with Python, YFinance, Pandas and Lightweight Charts I am not a professional investor, and nobody should take investment advice from me. However, I know what backtesting is and how to leverage open source software. So, I wrote a Python script to test if the trading strategy of using trailing stop-loss orders with specific margin values would have worked for a particular stock. First you need to have data. YFinance is a handy Python library that can be used to download the historic price data for any stock ticker on Yahoo.com. Then you need to manipulate the data. Pandas is the Python data analysis library with advanced data structures for working with relational or labeled data. Finally, to visualize the results, I used Lightweight Charts, which is a fast, interactive library for rendering financial charts, allowing you to plot the stock price, the trailing stop-loss line, and the points where trades would have occurred. I really like how the zoom is implemented in Lightweight Charts, which makes drilling into the data points feel effortless. The full solution is not polished enough to be published for others to use, but you can piece together your own by reusing some of the key snippets. To avoid re-downloading the same data repeatedly, I implemented a small caching wrapper that saves the data locally (as Parquet files):
python
from datetime import datetime
from pathlib import Path

import pandas
import yfinance

# Configuration (example values matching the sample output below)
TICKER = "BNP.PA"
START_DATE = "2014-01-01"
CACHE_DIR = Path("cache")

CACHE_DIR.mkdir(parents=True, exist_ok=True)
end_date = datetime.today().strftime("%Y-%m-%d")
cache_file = CACHE_DIR / f"{TICKER}-{START_DATE}--{end_date}.parquet"

if cache_file.is_file():
    dataframe = pandas.read_parquet(cache_file)
    print(f"Loaded price data from cache: {cache_file}")
else:
    dataframe = yfinance.download(
        TICKER,
        start=START_DATE,
        end=end_date,
        progress=False,
        auto_adjust=False,
    )
    dataframe.to_parquet(cache_file)
    print(f"Fetched new price data from Yahoo Finance and cached to: {cache_file}")
The dataframe is a Pandas object with a powerful API. For example, to print a snippet from the beginning and the end of the dataframe to see what the data looks like, you can use:
python
print("First 5 rows of the raw data:")
print(dataframe.head())
print("Last 5 rows of the raw data:")
print(dataframe.tail())
Example output:
First 5 rows of the raw data:
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2014-01-02 29.956285 55.540001 56.910000 55.349998 56.700001 316552
2014-01-03 30.031801 55.680000 55.990002 55.290001 55.580002 210044
2014-01-06 30.080338 55.770000 56.230000 55.529999 55.560001 185142
2014-01-07 30.943321 57.369999 57.619999 55.790001 55.880001 370397
2014-01-08 31.385597 58.189999 59.209999 57.750000 57.790001 489940
Last 5 rows of the raw data:
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2025-12-11 78.669998 78.669998 78.919998 76.900002 76.919998 357918
2025-12-12 78.089996 78.089996 80.269997 78.089996 79.470001 280477
2025-12-15 79.080002 79.080002 79.449997 78.559998 78.559998 233852
2025-12-16 78.860001 78.860001 79.980003 78.809998 79.430000 283057
2025-12-17 80.080002 80.080002 80.150002 79.080002 79.199997 262818
Adding new columns to the dataframe is easy. For example, I used a custom function to calculate the Relative Strength Index (RSI). To add a new column RSI with a value for every row based on the price from that row, only one line of code is needed, without custom loops:
python
dataframe["RSI"] = compute_rsi(dataframe["price"], period=14)
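The post does not include the custom function itself; a minimal pandas implementation of the Wilder-style RSI might look like this (a sketch under my own assumptions, not the author's actual code):

```python
import pandas


def compute_rsi(prices: pandas.Series, period: int = 14) -> pandas.Series:
    """Relative Strength Index using Wilder's exponential smoothing."""
    delta = prices.diff()
    gains = delta.clip(lower=0)
    losses = -delta.clip(upper=0)
    # Exponential moving averages of gains and losses
    avg_gain = gains.ewm(alpha=1 / period, min_periods=period).mean()
    avg_loss = losses.ewm(alpha=1 / period, min_periods=period).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)
```

A steadily rising series drives the RSI toward 100 and a falling one toward 0, which is a handy smoke test for any implementation.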
After manipulating the data, the series can be converted into an array structure and printed as JSON into a placeholder in an HTML template:
python
baseline_series = [
    {"time": ts, "value": val}
    for ts, val in df_plot[["timestamp", BASELINE_LABEL]].itertuples(index=False)
]

baseline_json = json.dumps(baseline_series)
with open("template.html", encoding="utf-8") as f:
    template = jinja2.Template(f.read())
rendered_html = template.render(
    title=title,
    heading=heading,
    description=description_html,
    ...
    baseline_json=baseline_json,
    ...
)

with open("report.html", "w", encoding="utf-8") as f:
    f.write(rendered_html)
print("Report generated!")
In the HTML template, the marker variable in Jinja syntax gets replaced with the actual JSON:
html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>{{ title }}</title>
  ...
</head>
<body>
  <h1>{{ heading }}</h1>
  <div id="chart"></div>
  <script>
  // Ensure the DOM is ready before we initialise the chart
  document.addEventListener('DOMContentLoaded', () => {
    // Parse the JSON data passed from Python
    const baselineData = {{ baseline_json | safe }};
    const strategyData = {{ strategy_json | safe }};
    const markersData = {{ markers_json | safe }};

    // Create the chart
    const chart = LightweightCharts.createChart(document.getElementById('chart'), {
      width: document.getElementById('chart').clientWidth,
      height: 500,
      layout: {
        background: { color: "#222" },
        textColor: "#ccc"
      },
      grid: {
        vertLines: { color: "#555" },
        horzLines: { color: "#555" }
      }
    });

    // Add baseline series
    const baselineSeries = chart.addLineSeries({
      title: '{{ baseline_label }}',
      lastValueVisible: false,
      priceLineVisible: false,
      priceLineWidth: 1
    });
    baselineSeries.setData(baselineData);

    baselineSeries.priceScale().applyOptions({
      entireTextOnly: true
    });

    // Add strategy series
    const strategySeries = chart.addLineSeries({
      title: '{{ strategy_label }}',
      lastValueVisible: false,
      priceLineVisible: false,
      color: '#FF6D00'
    });
    strategySeries.setData(strategyData);

    // Add buy/sell markers to the strategy series
    strategySeries.setMarkers(markersData);

    // Fit the chart to show the full data range (full zoom)
    chart.timeScale().fitContent();
  });
  </script>
</body>
</html>
There are also Python libraries built specifically for backtesting investment strategies, such as Backtrader and Zipline, but they do not seem to be actively maintained, and they probably have more features and complexity than I needed for this simple test. The screenshot below shows an example of backtesting a strategy on the Waste Management Inc stock from January 2015 to December 2025. The baseline buy-and-hold scenario is shown as the blue line and fully tracks the stock price, while the orange line shows how the strategy would have performed, with markers for the sells and buys along the way. Backtest run example

Results I experimented with multiple strategies and tested them with various parameters, but I don't think I found a strategy that was consistently and clearly better than just buy-and-hold. It basically boils down to the fact that I was not able to find any way to calculate when the crash has bottomed based on historical data. You can only know in hindsight that the price has stopped dropping and is on a steady path to recovery, but at that point it is already too late to buy in. In my testing, most strategies underperformed buy-and-hold because they sold when the crash started, but bought back after it recovered at a slightly higher price. In particular, when using narrow margins and selling on a 3-6% drawdown, the strategy performed very badly, as those small dips tend to recover in a few days. Essentially, the strategy was repeating the pattern of selling 100 shares at a 6% discount, then being able to buy back only 94 shares the next day, then again selling 94 shares at a 6% discount, and only being able to buy back maybe 90 shares after recovery, and so forth, never catching up to the buy-and-hold. The strategy worked better in large market crashes as they tended to last longer, and there were higher chances of buying back the shares while the price was still low. For example, in the 2020 crash selling at a 20% drawdown was a good strategy, as the stock I tested dropped nearly 50% and remained low for several weeks; thus, the strategy bought back the stocks while the price was still low and had not yet started to climb significantly. But that was just a lucky coincidence, as the delta between the trailing stop-loss margin of 20% and the total crash of 50% was large enough. If the crash had been only 25%, the strategy would have missed the rebound and ended up buying back the stocks at a slightly higher price. Also, note that the simulation assumes that the trade itself is too small to affect the price formation.
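The compounding effect described above is easy to see numerically (a sketch with made-up round numbers, using the text's 6% example and assuming the price fully recovers to the same level each time):

```python
shares = 100.0
price = 100.0  # assume the price recovers to the same level after each dip

# Each round trip sells at a 6% drawdown and buys back after recovery,
# so the cash from the discounted sale buys ~6% fewer shares every time.
for _ in range(3):
    cash = shares * price * 0.94  # stopped out 6% below the recovery price
    shares = cash / price         # buy back at the recovered price
print(round(shares, 1))  # 83.1 -- three round trips eat ~17% of the position
```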
We should keep in mind that in reality, if many people have stop-loss orders in place, a large price drop would trigger all of them, creating a flood of sell orders, which in turn would affect the price and drive it lower even faster and deeper. Luckily, it seems that stop-loss orders are generally not a good strategy, and we don't need to fear that too many people will be using them.

Conclusion Even though using a trailing stop-loss strategy does not seem to help in getting consistently higher returns based on my backtesting, I would still say it is useful in protecting from the downside of stock investing. It can act as a kind of insurance policy to considerably decrease the chances of losing big while increasing the chances of losing a little bit. If you are risk-averse, which I think I probably am, this tradeoff can make sense. I'd rather miss out on an initial 50% loss and an overall 3% gain on recovery than have to sit through weeks or months with a 50% loss before the price recovers to prior levels. Most notably, the trailing stop-loss strategy works best if used only once. If it is repeated multiple times, the small losses in gains will compound into big losses overall. Thus, I think I might actually put this automation in place at least on the stocks in my portfolio that have had the highest gains. If they keep going up, I will ride along, but once the crash happens, I will be out of those particular stocks permanently. Do you have a favorite open source investment tool or are you aware of any strategy that actually works? Comment below!

27 November 2025

Valhalla's Things: PDF Planners 2026

Posted on November 27, 2025
Tags: madeof:atoms, madeof:bits, craft:bookbinding
A few years ago I wrote some planner generating code to make myself a custom planner; in November 2023 I generated a few, and posted them here on the blog, in case somebody was interested in using them. In 2024 I tried to do the same, and ended up being even later, to the point where I didn't generate any (uooops). I did, however, start to write a Makefile to automate the generation (and got stuck on the fact that there wasn't an easy way to deduce the correct options needed from just the template name); this year, with the same promptness as in 2023 I got back to the Makefile and finished it, so maybe next year I will be able to post them early enough for people to print and bind them? maybe :) Anyway, these are all of the variants I currently generate, for 2026. The files with -book in the name have been imposed on A4 paper for a 16-page signature. All of the fonts have been converted to paths, for ease of printing (yes, this means that customizing the font requires running the script, but the alternative also had its drawbacks). In English:
daily-95×186-en.pdf
blank daily pages, 95 mm × 186 mm;
daily-A5-en.pdf daily-A5-en-book.pdf
blank daily pages, A5;
daily-A6-en.pdf daily-A6-en-book.pdf
blank daily pages, A6;
daily-graph-A5-en.pdf daily-graph-A5-en-book.pdf
graph paper (4 mm) daily pages, A5;
daily-points4mm-A5-en.pdf daily-points4mm-A5-en-book.pdf
pointed paper (4 mm), A5;
daily-ruled-A5-en.pdf daily-ruled-A5-en-book.pdf
ruled paper daily pages, A5;
week_on_two_pages-A6-en.pdf week_on_two_pages-A6-en-book.pdf
weekly planner, one week on two pages, A6;
week_on_one_page-A6-en.pdf week_on_one_page-A6-en-book.pdf
weekly planner, one week per page, A6;
week_on_one_page_dots-A6-en.pdf week_on_one_page_dots-A6-en-book.pdf
weekly planner, one week per page with 4 mm dots, A6;
week_health-A6-en.pdf week_health-A6-en-book.pdf
weekly health tracker, one week per page with 4 mm dots, A6;
month-A6-en.pdf month-A6-en-book.pdf
monthly planner, A6;
And the same planners, in Italian:
daily-95×186-it.pdf
blank daily pages, 95 mm × 186 mm;
daily-A5-it.pdf daily-A5-it-book.pdf
blank daily pages, A5;
daily-A6-it.pdf daily-A6-it-book.pdf
blank daily pages, A6;
daily-graph-A5-it.pdf daily-graph-A5-it-book.pdf
graph paper (4 mm) daily pages, A5;
daily-points4mm-A5-it.pdf daily-points4mm-A5-it-book.pdf
pointed paper (4 mm), A5;
daily-ruled-A5-it.pdf daily-ruled-A5-it-book.pdf
ruled paper daily pages, A5;
week_on_two_pages-A6-it.pdf week_on_two_pages-A6-it-book.pdf
weekly planner, one week on two pages, A6;
week_on_one_page-A6-it.pdf week_on_one_page-A6-it-book.pdf
weekly planner, one week per page, A6;
week_on_one_page_dots-A6-it.pdf week_on_one_page_dots-A6-it-book.pdf
weekly planner, one week per page with 4 mm dots, A6;
week_health-A6-it.pdf week_health-A6-it-book.pdf
weekly health tracker, one week per page with 4 mm dots, A6;
month-A6-it.pdf month-A6-it-book.pdf
monthly planner, A6;
Some of the planners include ephemerides and moon phase data: these have been calculated for the town of Como, and specifically for geo:45.81478,9.07522?z=17, because that's what everybody needs, right? If you need the ephemerides for a different location and can't run the script yourself (it depends on pdfjam, i.e. various GB of LaTeX, and a few python modules such as dateutil, pypdf and jinja2), feel free to ask: unless I receive too many requests to make this sustainable, I'll generate them and add them to this post. I hereby release all the PDFs linked in this blog post under the CC0 license. You may notice that I haven't decided on a license for the code dump repository; again, if you need it for something (that is compatible with its unsupported status) other than running it for personal use (for which afaik there is an implicit license), let me know and I'll push "decide on a license" higher on the stack of things to do :D Finishing the Makefile meant that I had to add a tiny feature to one of the scripts involved, which required me to add a dependency on pypdf: up to now I have been doing the page manipulations with pdfjam, which is pretty convenient to use, but also uses LaTeX, and apparently not every computer comes with texlive installed (shocking, I know). If I'm not mistaken, pypdf can do all of the things I'm doing with pdfjam, so maybe for next year I could convert my script to use that one instead. But then the planners 2027 will be quick and easy, and I will be able to publish them promptly, right?

25 November 2025

Russell Coker: EDID and my 8K TV

I previously blogged about buying a refurbished Hisense 65u80g 8K TV with the aim of making it a large monitor [1] and about searching for a suitable video card for 8K [2]. After writing the second post I bought an Intel Arc B580, which also did a maximum of 4096×2160 resolution. This post covers many attempts to get the TV to work correctly, and it doesn't have good answers. The best answer might be to not buy Hisense devices, but I still lack data. Attempts to Force 8K I posted on Lemmy again about this [3] and got a single response, which is OK as it was a good response. They didn't give me the answer on a silver platter but pointed me in the right direction of EDID [4]. I installed the Debian packages read-edid, wxedid, and edid-decode. The command get-edid > out.edid saves the binary form of the EDID to a file. The command wxedid out.edid allows graphical analysis of the EDID data. The command edid-decode out.edid dumps a plain text representation of the output, and the command edid-decode out.edid | grep VIC | cut -d: -f2 | sort -n shows an ordered list of video modes; in my case the highest resolution is 4096×2160, which is the highest that Linux had allowed me to set with two different video cards and a selection of different cables (both HDMI and DisplayPort).
xrandr --newmode 7680x4320 1042.63  7680 7984 7760 7824  4320 4353 4323 4328
xrandr --addmode HDMI-3 7680x4320
xrandr --output HDMI-3 --mode 7680x4320
I ran the above commands and got the below error:
xrandr: Configure crtc 0 failed
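One cheap sanity check before blaming the hardware: in a modeline, the horizontal and vertical timing values must each be non-decreasing (display ≤ sync-start ≤ sync-end ≤ total). A tiny sketch (the helper name is my own):

```python
def modeline_ok(h, v):
    """h and v are (display, sync_start, sync_end, total) tuples;
    a well-formed modeline needs each sequence to be non-decreasing."""
    return (all(a <= b for a, b in zip(h, h[1:]))
            and all(a <= b for a, b in zip(v, v[1:])))

# The timings tried above: both the horizontal (7984 > 7760) and vertical
# (4353 > 4323) sync values are out of order, so the mode is malformed
# regardless of what the TV or card supports.
print(modeline_ok((7680, 7984, 7760, 7824), (4320, 4353, 4323, 4328)))  # False
```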
At this time I don't know how much of this is due to the video card and how much is due to the TV. The parameters for xrandr came from an LLM because I couldn't find any Google results on what 8K parameters to use. As an aside, if you have a working 8K TV or monitor connected to a computer, please publish the EDID data, xrandr, and everything else you can think of. I found a GitHub repository for EDID data [5] but that didn't have an entry for my TV and didn't appear to have any other entry for an 8K device I could use. Resolution for Web Browsing I installed a browser on the TV; Chrome and Firefox aren't available for a TV and the Play Store program tells you that (but without providing a reason) when you search for them. I tried the site CodeShack What is my Screen Resolution [6] which said that my laptop is 2460*1353 while the laptop display is actually 2560*1440. So apparently I have 100 pixels used for the KDE panel at the left of the screen and 87 pixels used by the Chrome tabs and URL bar, which seems about right. My Note 9 phone reports 384*661 out of its 2960*1440 display, so it seems that Chrome on my phone is running web sites at 4/15 of the native resolution, and about 16% of the height of the screen is used by the system notification bar, the back/home/tasklist buttons (I choose buttons instead of swipe for navigation in system settings), and the URL bar when I have Screen zoom in system settings at 1/4. When I changed Screen zoom to 0/4 the claimed resolution changed to 411*717 (2/7 of the native resolution). Font size changes didn't change the claimed resolution. The claimed Browser Viewport Size by CodeShack is 1280*720, which is 1/6 of the real horizontal resolution and slightly more than 1/6 of the vertical resolution. It claims that the Pixel Density is 2* and a screen resolution of 970*540, which implies that the browser is only working at 1920*1080 resolution!
Netflix When I view Netflix shows using the Netflix app running on the TV, it reports 4K, which doesn't happen on Linux PCs (as they restrict 4K content to platforms with DRM), and in the Device setting it reports the Device Model as Hisense_SmartTV 8K FFM, so the Netflix app knows all about 4K content and knows the text string "8K". YouTube When I view a YouTube video that's described as being 8K I don't get a request to pay for YouTube Premium, which is apparently what happens nowadays when you try to play actual 8K video. I turned on Stats for Nerds, and one line has Viewport / Frames 1920x1080*2.00 and another has Current / Optimal Res 3840x2160@60 / 3840x2160@60, so it seems that the YouTube app is seeing the screen as 4K but choosing to only display FullHD even when I have Quality set to "2160p60 HDR". It declares the network speed to be over 100mbit most of the time, and the lowest it gets is 60mbit, while 50mbit is allegedly what's required for 8K. I installed a few Android apps to report hardware capabilities and they reported the screen resolution to be 1920*1080. Have I Been Ripped Off? It looks like I might have been ripped off by this. I can't get any app other than Netflix to display 4K content. My PC will only connect to it at 4K. Android apps (including YouTube) regard it as 1920*1080. The AI Upscaling isn't really that great, and in most ways it seems at best equivalent to a 4K TV, and less than a 4K TV that runs Android apps with an actual 4K display buffer. Next Steps The next things I plan to do are to continue attempts to get the TV to do what it's claimed to be capable of; either an Android app that can display 8K content or an HDMI input of 8K content will do. Running a VNC client on the TV would be an acceptable way of getting an 8K display from a Linux PC. I need to get a somewhat portable device that can give 8K signal output. Maybe a mini PC with a powerful GPU, or maybe one of those ARM boards that's designed to drive an 8K sign.
Then I can hunt for stores that have 8K TVs on display. It would be nice if someone made a USB device that does 8K video output, NOT a USB-C DisplayPort alternate mode that uses the video hardware on the laptop. Then I could take a laptop to any place that has an 8K display to show and connect my laptop to it. The one thing I haven't done yet is testing 8K MP4 files on a USB stick. That's mainly due to a lack of content and the fact that none of the phone cameras I have access to can do 8K video. I will try displaying 8K PNG and JPEG files from a USB stick. Most people would give up about now. But I am determined to solve this, and buying another large TV isn't out of the question.

19 November 2025

Gunnar Wolf: While it is cold-ish season in the North hemisphere...

Last week, our university held a "Mega Vaccination Center". Things cannot be small or regular with my university, ever! According to the official information, during last week 31,000 people were given a total of 74,000 vaccine doses against influenza, COVID-19, pneumococcal disease and measles (specific vaccines for each person selected according to an age profile). I was a tiny blip in said numbers. One person, three shots. Took me three hours, but I am quite happy to have been among the huge crowd. Long, long line (photo credit: La Jornada, 2025.11.14) Really vaccinated! And why am I bringing this up? Because I have long been involved in organizing DebConf, the best conference ever, naturally devoted to improving Debian GNU/Linux. And last year, our COVID reaction procedures ended up hurting people we care about. We, as organizers, are taking it seriously to shape a humane COVID handling policy that is, at the same time, responsible and respectful towards people who are (reasonably!) afraid of catching the infection. No, COVID did not disappear in 2022, and its effects are not something we can turn a blind eye to. Next year, DebConf will take place in Santa Fe, Argentina, in July. This means it will be a Winter DebConf. And while you can catch COVID (or influenza, or just a bad cold) at any time of year, the odds are a bit higher. I know not every country still administers free COVID or influenza vaccines to anybody who requests them. And I know that any protection I might have got now will be quite weaker by July. But I feel it necessary to ask everyone who can get it to get a shot. Most Northern Hemisphere countries will have a vaccination campaign (or at least higher vaccine availability) before Winter. If you plan to attend DebConf (hell, if you plan to attend any massive gathering of people travelling from all over the world to sit in a crowded auditorium) during the next year, please: Act responsibly. For yourself and for those surrounding you. Get vaccinated.
It won't absolutely save you from catching it, but it will reduce the probability. And if you do catch it, you will probably have a much milder version. And thus, you will spread it less during the first days until (and if!) you start developing symptoms.

29 September 2025

Russ Allbery: Review: The Incandescent

Review: The Incandescent, by Emily Tesh
Publisher: Tor
Copyright: 2025
ISBN: 1-250-83502-X
Format: Kindle
Pages: 417
The Incandescent is a stand-alone magical boarding school fantasy.
Your students forgot you. It was natural for them to forget you. You were a brief cameo in their lives, a walk-on character from the prologue. For every sentimental my teacher changed my life story you heard, there were dozens of my teacher made me moderately bored a few times a week and then I got through the year and moved on with my life and never thought about them again. They forgot you. But you did not forget them.
Doctor Saffy Walden is Director of Magic at Chetwood, an elite boarding school for prospective British magicians. She has a collection of impressive degrees in academic magic, a specialization in demonic invocation, and a history of vague but lucrative government job offers that go with that specialty. She turned them down to be a teacher, and although she's now in a mostly administrative position, she's a good teacher, with the usual crop of promising, lazy, irritating, and nervous students. As the story opens, Walden's primary problem is Nikki Conway. Or, rather, Walden's primary problem is protecting Nikki Conway from the Marshals, and the infuriating Laura Kenning in particular. When Nikki was seven, she summoned a demon who killed her entire family and left her a ward of the school. To Laura Kenning, that makes her a risk who should ideally be kept far away from invocation. To Walden, that makes Nikki a prodigious natural talent who is developing into a brilliant student and who needs careful, professional training before she's tempted into trying to learn on her own. Most novels with this setup would become Nikki's story. This one does not. The Incandescent is Walden's story. There have been a lot of young-adult magical boarding school novels since Harry Potter became a mass phenomenon, but most of them focus on the students and the inevitable coming-of-age story. This is a story about the teachers: the paperwork, the faculty meetings, the funding challenges, the students who repeat in endless variations, and the frustrations and joys of attempting to grab the interest of a young mind. It's also about the temptation of higher-paying, higher-status, and less ethical work, which however firmly dismissed still nibbles around the edges. Even if you didn't know Emily Tesh is herself a teacher, you would guess that before you get far into this novel. 
There is a vividness and a depth of characterization that comes from being deeply immersed in the nuance and tedium of the life that your characters are living. Walden's exasperated fondness for her students was the emotional backbone of this book for me. She likes teenagers without idealizing the process of being a teenager, which I think is harder to pull off in a novel than it sounds.
It was hard to quantify the difference between a merely very intelligent student and a brilliant one. It didn't show up in a list of exam results. Sometimes, in fact, brilliance could be a disadvantage when all you needed to do was neatly jump the hoop of an examiner's grading rubric without ever asking why. It was the teachers who knew, the teachers who could feel the difference. A few times in your career, you would have the privilege of teaching someone truly remarkable; someone who was hard work to teach because they made you work harder, who asked you questions that had never occurred to you before, who stretched you to the very edge of your own abilities. If you were lucky as Walden, this time, had been lucky your remarkable student's chief interest was in your discipline: and then you could have the extraordinary, humbling experience of teaching a child whom you knew would one day totally surpass you.
I also loved the world-building, and I say this as someone who is generally not a fan of demons. The demons themselves are a bit of a disappointment and mostly hew to one of the stock demon conventions, but the rest of the magic system is deep enough to have practitioners who approach it from different angles and meaty enough to have some satisfying layered complexity. This is magic, not magical science, so don't expect a fully fleshed-out set of laws, but the magical system felt substantial and satisfying to me. Tesh's first novel, Some Desperate Glory, was by far my favorite science fiction novel of 2023. This is a much different book, which says good things about Tesh's range and the potential of her work yet to come: adult rather than YA, fantasy rather than science fiction, restrained and subtle in places where Some Desperate Glory was forceful and pointed. One thing the books do have in common, though, is some structural similarity, particularly the false climax near the midpoint of the book. I like the feeling of uncertainty and possibility that gives both books, but in the case of The Incandescent, I was not quite in the mood for the second half of the story. My problem with this book is more of a reader preference than an objective critique: I was in the mood for a story about a confident, capable protagonist who was being underestimated, and Tesh was writing a novel with a more complicated and fraught emotional arc. (I'm being intentionally vague to avoid spoilers.) There's nothing wrong with the story that Tesh wanted to tell, and I admire the skill with which she did it, but I got a tight feeling in my stomach when I realized where she was going. There is a satisfying ending, and I'm still very happy I read this book, but be warned that this might not be the novel to read if you're in the mood for a purer competence porn experience.
Recommended, and I am once again eagerly awaiting the next thing Emily Tesh writes (and reminding myself to go back and read her novellas). Content warnings: Grievous physical harm, mind control, and some body horror. Rating: 8 out of 10

21 September 2025

Joey Hess: cheap DIY solar fence design

A year ago I installed a 4 kilowatt solar fence. I'm revisiting it this Sun Day to share the design, now that I have proved it out.
The solar fence and some other ground and pole mount solar panels, seen through leaves.
Solar fencing manufacturers have some good simple designs, but it's hard to buy for a small installation; they mostly sell to utility-scale solar. Those fences are also installed by driving metal beams into the ground, which requires heavy machinery. Since I have experience with Ironridge rails for roof mount solar, I decided to adapt that system for a vertical mount, which is something it was not designed for. I combined the Ironridge hardware with regular parts from the hardware store. The cost of mounting solar panels nowadays is often higher than the cost of the panels. I hoped to match the cost, and I nearly did: the solar panels cost $100 each, and the fence cost $110 per solar panel. This fence was significantly cheaper than the conventional ground mount arrays that I considered as alternatives, and made better use of a difficult hillside location. I used 7 foot long Ironridge XR-10 rails, which fit 2 solar panels per rail. (Longer rails would need a center post anyway, and the 7 foot rails have cheaper shipping, since they do not need to be shipped freight.) For the fence posts, I used regular 4x4" treated posts, 12 foot long, set in 3 foot deep post holes, with 3x 50 lb bags of concrete per hole and 6 inches of gravel on the bottom.
detail of how the rails are mounted to the posts, and the panels to the rails
To connect the Ironridge rails to the fence posts, I used the Ironridge LFT-03-M1 slotted L-foot bracket, screwed into the post with a 5/8 x 3 inch hot-dipped galvanized lag screw. Since a treated post can react badly with an aluminum bracket, there needs to be some flashing between the post and bracket; I used Shurtape PW-100 tape for that, and I see no sign of corrosion after 1 year. The rest of the Ironridge system is a T-bolt that connects the rail to the L-foot (part BHW-SQ-02-A1), plus Ironridge solar panel fasteners (UFO-CL-01-A1 and UFO-STP-40MM-M1), XR-10 end caps, and wire clips. Since the Ironridge hardware is not designed to hold a solar panel at a 90 degree angle, I was concerned that the panels might slide downward over time. To help prevent that, I added some additional support brackets under the bottom of the panels; so far, sliding does not seem to have been a problem. I installed Aptos 370 watt solar panels on the fence. They are bifacial, and while the posts partially block the back, there is still bifacial gain on cloudy days. I left enough space under the solar panels to be able to run a push mower under them.
Me standing in front of the solar fence at end of construction
I put pairs of posts next to one another, so each 7 foot segment of fence had its own 2 posts. This is the least elegant part of the design, but fitting 2 brackets next to one another on a single post isn't feasible. I bolted the pairs of posts together with some spacers. A side benefit of doing it this way is that treated lumber can warp as it dries, and this prevented much twisting of the posts. Using separate posts for each segment also means that the fence can traverse a hill easily, and it does not need to be perfectly straight. In fact, my fence has a 30 degree bend in the middle, which means it has both south facing and south-west facing panels, so it can catch the light for longer during the day. After building the fence, I noticed there was a slight bit of sway at the top, since 9 feet of wooden post is not entirely rigid. My worry was that a gusty wind could rattle the solar panels. While I did not actually observe that happening, I added some diagonal back bracing for peace of mind.
view of rear upper corner of solar fence, showing back bracing connection
Inspecting the fence today, I find no problems after the first year. I hope it will last 30 years, with the lifespan of the treated lumber being the likely determining factor. As part of my larger (and still ongoing) ground mount solar install, the solar fence has consistently provided great power. The vertical orientation works well at latitude 36. It also turned out that the back of the fence was useful to hang conduit, wiring, and solar equipment, and so it turned into the electrical backbone of my whole solar field. But that's another story...
Solar fence parts list
quantity  cost per unit  description
10        $27.89         7 foot Ironridge XR-10 rail
12        $20.18         12 foot treated 4x4
30        $4.86          Ironridge UFO-CL-01-A1
20        $0.87          Ironridge UFO-STP-40MM-M1
1         $12.62         Ironridge XR-10 end caps (20 pack)
20        $2.63          Ironridge LFT-03-M1
20        $1.69          Ironridge BHW-SQ-02-A1
22        $2.65          5/8 x 3 inch hot-dipped galvanized lag screw
10        $0.50          6" gravel per post
30        $6.91          50 lb bags of Quikrete
N/A       $30            other bolts and hardware (approximate)
$1100 total (Does not include cost of panels, wiring, or electrical hardware.)
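As a sanity check, the quantity and unit-cost pairs in the parts list can be totalled directly. The sum comes out just under the quoted $1100, and the per-panel figure matches the $110 quoted earlier, assuming 10 panels (my reading of the build: five rail pairs of two panels each; the $30 hardware line is approximate by the author's own note):

```shell
# Recompute the parts-list total from the quantity x unit-cost pairs above.
awk 'BEGIN {
  total = 10*27.89 + 12*20.18 + 30*4.86 + 20*0.87 + 1*12.62 \
        + 20*2.63 + 20*1.69 + 22*2.65 + 10*0.50 + 30*6.91 + 30
  printf "total: $%.2f\n", total              # rounds up to the quoted $1100
  printf "per panel (10 panels): $%.2f\n", total/10
}'
```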

10 September 2025

John Goerzen: ARM is great, ARM is terrible (and so is RISC-V)

I've long been interested in new and different platforms. I ran Debian on an Alpha back in the late 1990s and was part of the Alpha port team; then I helped bootstrap Debian on amd64. I've got somewhere around 8 Raspberry Pi devices in active use right now, and the free NNCPNET Internet email service I manage runs on an ARM instance at a cloud provider. ARM-based devices are cheap in a lot of ways: they use little power and there are many inexpensive single-board computers based on them. My 8-year-old's computer is a Raspberry Pi 400, in fact. So I like ARM. I've been looking for ARM devices that have accelerated AES (the Raspberry Pi 4 doesn't) so I can use full-disk encryption with them. There are a number of options, since ARM devices are starting to go more mid-range. Radxa's ROCK 5 series of SBCs goes up to 32GB RAM. The Orange Pi 5 Max and Ultra have up to 16GB RAM, as does the Raspberry Pi 5. Pine64's Quartz64 has up to 8GB of RAM. I believe all of these have the ARM cryptographic extensions. They're all small and most are economical. But I also dislike ARM. There is a terrible lack of standardization in the ARM community. They say their devices run Linux, but the default there is that every vendor has its own custom Debian fork, and quite likely a kernel fork as well. Most don't maintain them very well. Imagine if you were buying x86 hardware and had to manage AcerOS, Dellbian, HPian, etc. Most of them have no security support (particularly for the kernel). Some are based on Debian 11 (released in 2021), some on Debian 12 (released in 2023), and none on Debian 13 (released a month ago). That is exactly the situation we have on ARM. 
While the Raspberry Pi 4 and below can run Debian trixie directly, Raspberry Pi has not bothered to upstream support for the Pi 5 yet, and Raspberry Pi OS is only based on Debian bookworm (released in 2023) and very explicitly does not support a key Debian feature: you can't upgrade from one Raspberry Pi OS release to the next, so it's a complete reinstall every 2 years instead of just an upgrade. OrangePiOS only supports Debian bookworm, but notably, their kernel is mostly stuck at 5.10 for every image they have (bookworm shipped with 6.1 and bookworm-backports supports 6.12). Radxa has a page on running Debian on one specific board, but they seem to actually not support Debian directly, rather their fork Radxa OS. There's a different installer for every board; for instance, this one for the Rock 4D. Looking at it, I can see that it uses files from here and here, with a custom kernel, gstreamer, and u-boot, and they put zfs in main for some reason. From Pine64, the Quartz64 seems to be based on an ancient 4.6 or 4.19 kernel. Perhaps, though, one might be able to use Debian's Pine A64+ instructions on it. Trixie doesn't have a u-boot image for the Quartz64, but it does have device tree files for it. RISC-V seems to be even worse; not only do we have this same issue there, but support in trixie is more limited, and so is performance among the supported boards. The alternative is x86-based mini PCs. There are a bunch based on the N100, N150, or Celeron. Many of them support AES-NI, and the prices are roughly in line with the higher-end ARM units. There are some interesting items out there; for instance, the Radxa X4 SBC features both an N100 and a RP2040. Fanless mini PCs are available from a number of vendors, and companies like ZimaBoard have interesting options like the ZimaBlade. The difference in power draw is becoming less significant; the newer ARM boards seem to need 20W or 30W power supplies, which may put them in the range of the mini PCs. 
As for cost, the newer ARM boards need a heat sink and fan, so by the time you add the SBC, fan, storage, etc., you're starting to get into the price range of the mini PCs. It is great to see all the options of small SBCs with ARM and RISC-V processors, but at some point you've got to throw up your hands, say "this ecosystem has a lot of problems", and consider just going back to x86. I'm not sure if I'm quite there yet, but I'm getting close. Update 2025-09-11: I found a performant encryption option for the Pi 4, but was stymied by serial console problems; see the update post.
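On the AES point above: whether a board's CPU advertises the ARMv8 cryptographic extensions can be checked from the Features line of /proc/cpuinfo. A minimal sketch, run here against a sample Features line (the line shown is an assumption of what a crypto-capable arm64 SoC reports; on real hardware, read /proc/cpuinfo directly):

```shell
# Does this CPU advertise the ARMv8 AES instructions? On a real board,
# replace the sample line with: grep '^Features' /proc/cpuinfo
features='Features : fp asimd evtstrm aes pmull sha1 sha2 crc32'
if printf '%s\n' "$features" | grep -qw aes; then
    echo "AES instructions present: full-disk encryption should be fast"
else
    echo "no AES instructions (e.g. Raspberry Pi 4): expect slow LUKS"
fi
```

On real hardware, `cryptsetup benchmark` gives the practical throughput numbers for the available ciphers.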

9 September 2025

John Goerzen: btrfs on a Raspberry Pi

I'm something of a filesystem geek, I guess. I first wrote about ZFS on Linux 14 years ago, and even before I used ZFS, I had used ext2/3/4, jfs, reiserfs, xfs, and no doubt some others. I've also used btrfs. I last posted about it in 2014, when I noted it had some advantages over ZFS, but also some drawbacks, including a lot of kernel panics. Since that comparison, ZFS has gained trim support and btrfs has stabilized. The btrfs status page gives you an accurate idea of what is good to use on btrfs. Background: Moving towards ZFS and btrfs I have been trying to move everything away from ext4 and onto either ZFS or btrfs. There are generally several reasons for that:
  1. The checksums for every block help detect potential silent data corruption
  2. Instant snapshots make consistent backups of live systems a lot easier, and without the hassle and wasted space of LVM snapshots
  3. Transparent compression and dedup can save a lot of space in storage-constrained environments
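Reason 2 can be made concrete: with btrfs, a consistent backup of a live system can be driven from an atomic read-only snapshot. A minimal dry-run sketch (the run() wrapper only prints the commands, and the subvolume paths are illustrative, not the author's actual layout):

```shell
# Dry-run sketch of a snapshot-based live backup (reason 2 above).
# run() only prints the commands; subvolume paths are illustrative.
run() { echo "+ $*"; }

snap="/.snapshots/root-$(date -u +%Y%m%dT%H%M%S)"
run btrfs subvolume snapshot -r / "$snap"   # atomic, read-only snapshot
run btrfs send "$snap"                      # stream it to a backup target
run btrfs subvolume delete "$snap"          # drop it once the backup lands
```

Unlike an LVM snapshot, this reserves no fixed copy-on-write space up front, which is the "wasted space" point above.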
For any machine with at least 32GB of RAM (plus my backup server, which has only 8GB), I run ZFS. While it lacks some of the flexibility of btrfs, it has polish: zfs list -o space shows useful space accounting, and zvols can sit behind VMs. With my project simplesnap, I can easily send hourly backups with ZFS, and I choose to send them over NNCP in most cases. I have a few VMs in the cloud (running Debian, of course) that I use to host things like this blog, my website, my gopher site, the quux NNCP public relay, and various other things. In these environments, storage space can be expensive. For that matter, so can RAM, and ZFS is RAM-hungry, so that rules out ZFS. I've been running btrfs in those environments for a few years now, and it's worked out well. I do async dedup, lzo or zstd compression depending on the needs, and the occasional balance and defrag. Filesystems on the Raspberry Pi I run Debian trixie on all my Raspberry Pis; not Raspbian or Raspberry Pi OS, for a number of reasons. My 8-yr-old uses a Raspberry Pi 400 as her primary computer and loves it! She doesn't do web browsing, but plays Tuxpaint and some old DOS games like Math Blaster via dosbox, and uses Thunderbird for a locked-down email account. But it was SLOW. Just really, glacially, slow, especially for Thunderbird. My first step to address that was to get a faster MicroSD card to hold the OS. That was a dramatic improvement; it's still slow, but a lot faster. Then I thought: maybe I could use btrfs with LZO compression to reduce the amount of I/O and speed things up further? Analysis showed things were mostly slow due to I/O, not CPU, constraints. The conversion Rather than use the btrfs in-place conversion from ext4, I opted to dar it up (dar is similar to tar), run mkfs.btrfs on the SD card, then unpack the archive back onto it. Easy enough, right? Well, not so fast. The MicroSD card is 128GB, and the entire filesystem is 6.2GB. But after unpacking 100MB onto it, I got an out of space error. 
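The archive/reformat/unpack conversion amounts to only a handful of commands. A dry-run sketch, with tar standing in for dar, and a placeholder device and mount point (nothing here is the author's exact procedure):

```shell
# Dry-run sketch of the ext4 -> btrfs conversion; run() only prints the
# commands. /dev/sdX2 and /mnt/sd are placeholders; tar stands in for dar.
run() { echo "+ $*"; }

run tar -C /mnt/sd -cpf /var/tmp/sdcard.tar .    # archive the ext4 contents
run umount /mnt/sd
run mkfs.btrfs -f -L piroot /dev/sdX2            # reformat the SD card
run mount -o noatime,compress=lzo /dev/sdX2 /mnt/sd
run tar -C /mnt/sd -xpf /var/tmp/sdcard.tar      # unpack the archive again
```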
btrfs has this notion of block groups. By default, each block group is dedicated to either data or metadata; btrfs fi df and btrfs fi usage will show you details about the block groups. btrfs allocates block groups greedily (the ssd_spread mount option I use may have exacerbated this). What happened was that it allocated almost the entire drive to data block groups, trying to spread the data across it. It so happened that dar archived some larger files first (maybe /boot), so btrfs was allocating data and metadata block groups assuming few large files. But then it started unpacking one of the directories in /usr with lots of small files (maybe /usr/share/locale). It quickly filled up the metadata block group, and since the entire SD card had already been allocated to block groups, I got ENOSPC. Deleting a few files and running btrfs balance resolved it; now it allocated 1GB to metadata, which was plenty. I re-ran the dar extract and everything was fine. See more details on btrfs balance and block groups. This was the only btrfs problem I encountered. Benchmarks I timed two things prior to switching to btrfs: how long it takes to boot (measured from the moment I turn on the power until the moment the XFCE login box is displayed), and how long it takes to start Thunderbird. After switching to btrfs with LZO compression, somewhat to my surprise, both measures were exactly the same! Why might this be? It turns out that SD cards are understood to be pathologically bad at random reads. Boot and Thunderbird are both likely doing a lot of small random reads, not large streaming reads. Therefore, it may be that even though I have reduced the total I/O needed, the impact is insubstantial because the real bottleneck is the seeks across the disk. Still, I gain the better backup support and silent data corruption prevention, so I kept btrfs. SSD mount options and MicroSD endurance btrfs has several mount options specifically relevant to SSDs. 
Aside from the obvious trim support, they are ssd and ssd_spread. The documentation on this is vague, and my attempts to learn more about it found a lot of information that was outdated or unsubstantiated folklore. Some reports suggest that older SSDs will benefit from ssd_spread, but that it may have no effect, or even a harmful effect, on newer ones, and can at times cause fragmentation or write amplification. I could find nothing to back this up, though. And it seems particularly difficult to figure out what kind of wear leveling SSD firmware does. MicroSD firmware is likely to be on the less-advanced side, but still, I have no idea what it might do. In any case, with btrfs not updating blocks in-place, it should be better than ext4 in the most naive case (no wear leveling at all) but may have somewhat more write traffic in the pathological worst case (frequent updates of small portions of large files). One anecdotal report, which I read and can't find anymore, was from a person who had set up a sort of torture test for SD cards; they reported that ext4 lasted a few weeks or months before the MicroSDs failed, while btrfs lasted years. If you are looking for a MicroSD card, by the way, The Great MicroSD Card Survey is a nice place to start. For longevity: I mount all my filesystems with noatime already, so I continue to recommend that. You can also consider limiting the log size in /etc/systemd/journald.conf and running daily fstrim (which may be more successful than live trims in all filesystems). Conclusion I've been pretty pleased with btrfs. The concerns I have today relate to block groups and maintenance (periodic balance and maybe a periodic defrag). I'm not sure I'd be ready to say "put btrfs on the computer you send to someone that isn't Linux-savvy", because the chances of running into issues are higher than with ext4. Still, for people with some tech savvy, btrfs can improve reliability and performance in other ways.
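For reference, the longevity tweaks mentioned above boil down to a few small configuration changes. A sketch, with illustrative values (the UUID is a placeholder, and the 64M journal cap is an assumption, not a figure from the post):

```shell
# /etc/fstab: mount the btrfs root with noatime (and LZO compression,
# as used in the post); the UUID is a placeholder.
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  noatime,compress=lzo  0  0

# /etc/systemd/journald.conf: cap journal growth on a small MicroSD.
#   [Journal]
#   SystemMaxUse=64M

# Batched trims instead of the "discard" mount option; the timer is
# weekly by default, so a drop-in is needed for daily runs:
#   systemctl enable --now fstrim.timer
```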

6 September 2025

Reproducible Builds: Reproducible Builds in August 2025

Welcome to the August 2025 report from the Reproducible Builds project! These monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:

  1. Reproducible Builds Summit 2025
  2. Reproducible Builds and live-bootstrap at WHY2025
  3. DALEQ: Explainable Equivalence for Java Bytecode
  4. Reproducibility regression identifies issue with AppArmor security policies
  5. Rust toolchain fixes
  6. Distribution work
  7. diffoscope
  8. Website updates
  9. Reproducibility testing framework
  10. Upstream patches

Reproducible Builds Summit 2025 Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th 2025 in Vienna, Austria! We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. If you're interested in joining us this year, please make sure to read the event page, which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!

Reproducible Builds and live-bootstrap at WHY2025 WHY2025 (What Hackers Yearn) is a nonprofit outdoors hacker camp that takes place in Geestmerambacht in the Netherlands (approximately 40km north of Amsterdam). The event is organised for and by volunteers from the worldwide hacker community, and "knowledge sharing, technological advancement, experimentation, connecting with your hacker peers, forging friendships and hacking are at the core of this event". At this year's event, Frans Faase gave a talk on live-bootstrap, an attempt to provide "a reproducible, automatic, complete end-to-end bootstrap from a minimal number of binary seeds to a supported fully functioning operating system". Frans' talk is available to watch on video and his slides are available as well.

DALEQ: Explainable Equivalence for Java Bytecode Jens Dietrich of the Victoria University of Wellington, New Zealand, and Behnaz Hassanshahi of Oracle Labs, Australia, published an article this month entitled DALEQ: Explainable Equivalence for Java Bytecode, which explores the options and difficulties when Java binaries are not identical despite being built from the same sources, and what avenues are available for proving equivalence despite the lack of bitwise correlation:
[Java] binaries are often not bitwise identical; however, in most cases, the differences can be attributed to variations in the build environment, and the binaries can still be considered equivalent. Establishing such equivalence, however, is a labor-intensive and error-prone process.
Jens and Behnaz therefore propose a tool called DALEQ, which:
disassembles Java byte code into a relational database, and can normalise this database by applying Datalog rules. Those databases can then be used to infer equivalence between two classes. Notably, equivalence statements are accompanied with Datalog proofs recording the normalisation process. We demonstrate the impact of DALEQ in an industrial context through a large-scale evaluation involving 2,714 pairs of jars, comprising 265,690 class pairs. In this evaluation, DALEQ is compared to two existing bytecode transformation tools. Our findings reveal a significant reduction in the manual effort required to assess non-bitwise equivalent artifacts, which would otherwise demand intensive human inspection. Furthermore, the results show that DALEQ outperforms existing tools by identifying more artifacts rebuilt from the same code as equivalent, even when no behavioral differences are present.
Jens also posted this news to our mailing list.

Reproducibility regression identifies issue with AppArmor security policies Tails developer intrigeri has tracked a reproducibility regression in the generation of AppArmor policy caches, and has identified an issue with the 4.1.0 version of AppArmor. Although initially tracked on the Tails issue tracker, intrigeri filed an issue on the upstream bug tracker. AppArmor developer John Johansen replied, confirming that they could reproduce the issue, and went to work on a draft patch. Through this, John revealed that it was caused by an actual underlying security bug in AppArmor (that is to say, it resulted in permissions not always matching what the policy intends) and, crucially, not merely a cache reproducibility issue. Work on the fix is ongoing at the time of writing.

Rust toolchain fixes Rust Clippy is a linting tool for the Rust programming language. It provides a collection of lints (rules) designed to identify common mistakes, stylistic issues, potential performance problems and unidiomatic code patterns in Rust projects. This month, however, Sosthène Guédon filed a new issue on GitHub requesting a new check that would lint against non-deterministic operations in proc-macros, such as iterating over a HashMap.

Distribution work In Debian this month: Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions, 303, 304 and 305 to Debian:
  • Improvements:
    • Use sed(1) backreferences when generating debian/tests/control to avoid duplicating ourselves. [ ]
    • Move from a mono-utils dependency to versioned mono-devel mono-utils dependency, taking care to maintain the [!riscv64] architecture restriction. [ ]
    • Use sed over awk to avoid mangling dependency lines containing = (equals) symbols such as version restrictions. [ ]
  • Bug fixes:
    • Fix a test after the upload of systemd-ukify version 258~rc3. [ ]
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). [ ]
    • Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2) time. [ ]
    • Don't check for PyPDF version 3 specifically; check for >= 3. [ ]
  • Misc:
    • Update copyright years. [ ][ ]
In addition, Martin Joerg fixed an issue with the HTML presenter to avoid a crash when the page limit is None [ ] and Zbigniew Jędrzejewski-Szmek fixed compatibility with RPM 6 [ ]. Lastly, John Sirois fixed a missing requests dependency in the trydiffoscope tool. [ ]

Website updates Once again, there were a number of improvements made to our website this month including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, however, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Run 4 workers on the o4 node again in order to speed up testing. [ ][ ][ ][ ]
    • Also test trixie-proposed-updates and trixie-updates etc. [ ][ ]
    • Gather separate statistics for each tested release. [ ]
    • Support sources from all Debian suites. [ ]
    • Run new code from the prototype database rework branch for the amd64-pull184 pseudo-architecture. [ ][ ]
    • Add a number of helpful links. [ ][ ][ ][ ][ ][ ][ ][ ][ ]
    • Temporarily call debrebuild without the --cache argument to experiment with a new version of devscripts. [ ][ ][ ]
    • Update public TODO. [ ]
  • Installation tests:
    • Add comments to explain structure. [ ]
    • Mark more old jobs as "old" or "dead". [ ][ ][ ]
    • Turn the maintenance job into a no-op. [ ]
  • Jenkins node maintenance:
    • Increase penalties if the osuosl5 or ionos7 nodes are down. [ ]
    • Stop trying to fix network automatically. [ ]
    • Correctly mark ppc64el architecture nodes when down. [ ]
    • Upgrade the remaining arm64 nodes to Debian trixie in anticipation of the release. [ ][ ]
    • Allow higher SSD temperatures on the riscv64 architecture. [ ]
  • Debian-related:
    • Drop the armhf architecture; many thanks to Vagrant for physically hosting the nodes for ten years. [ ][ ]
    • Add Debian forky, and archive bullseye. [ ][ ][ ][ ][ ][ ][ ]
    • Document the filesystem space savings from dropping the armhf architecture. [ ]
    • Exclude i386 and armhf from JSON results. [ ]
    • Update TODOs for when Debian trixie and forky have been released. [ ][ ]
  • tests.reproducible-builds.org-related:
  • Misc:
    • Detect errors with openQA erroring out. [ ]
    • Drop the long-disabled openwrt_rebuilder jobs. [ ]
    • Use qa-jenkins-dev@alioth-lists.debian.net as the contact for jenkins.debian.net. [ ]
    • Redirect reproducible-builds.org/vienna25 to reproducible-builds.org/vienna2025. [ ]
    • Disable all OpenWrt reproducible CI jobs, in coordination with the OpenWrt community. [ ][ ]
    • Make reproduce.debian.net accessible via IPv6. [ ]
    • Ignore that the megacli RAID controller requires packages from Debian bookworm. [ ]
In addition,
  • James Addison migrated away from the deprecated toplevel deb822 Python module in favour of debian.deb822 in the bin/reproducible_scheduler.py script [ ] and removed a note on reproduce.debian.net after the release of Debian trixie [ ].
  • Jochen Sprickerhof made a huge number of improvements to the reproduce.debian.net statistics calculation [ ][ ][ ][ ][ ][ ] as well as to the reproduce.debian.net service more generally [ ][ ][ ][ ][ ][ ][ ][ ].
  • Mattia Rizzolo performed a lot of work migrating scripts to SQLAlchemy version 2.0 [ ][ ][ ][ ][ ][ ] in addition to making some changes to the way openSUSE reproducibility tests are handled internally. [ ]
  • Lastly, Roland Clobus updated the Debian Live packages after the release of Debian trixie. [ ][ ]

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

19 August 2025

Russell Coker: Colmi P80 SmartWatch First Look

I just bought a Colmi P80 SmartWatch from Aliexpress for $26.11, based on this blog post reviewing it [1]. The main thing I was after was a larger, higher resolution screen, because my vision has apparently deteriorated during the time I've been wearing a Pinetime [2] and I now can't read messages on it when not wearing my reading glasses. The watch hardware is quite OK. It has a larger and higher resolution screen and looks good. The review said that GadgetBridge (the FOSS SmartWatch software in the F-Droid repository) connected when told that the watch was a P79, and in a recent release got support for sending notifications. In my tests with GadgetBridge it doesn't set the time, can't seem to send notifications, can't read the battery level, and seems not to do anything other than just say "connected". So I installed the proprietary app; as an aside, it's a neat feature to have the watch display a QR code for installing the app, and maybe InfiniTime should have a similar QR code for getting GadgetBridge from the F-Droid repository. The proprietary app is quite OK for the basic functionality, and a less technical relative who is using one is happy. For my use, though, the proprietary app is utterly broken. One of my main uses is to get notifications of Jabber messages from the Conversations app (that's in F-Droid). I have Conversations configured to always have a notification of how many accounts are connected, which prevents Android from killing it. With GadgetBridge that notification isn't reported but the actual message contents are (I don't know how/why that happens), but with the Colmi app I get repeated notification messages on the watch about the accounts being connected. Also, the proprietary app has on/off settings for messages to go to the watch for a hard coded list of 16 common apps and an "Others" setting for the rest. 
GadgetBridge lists the applications that are actually installed, so I can configure it not to notify me about Reddit, connecting to my car audio, and many other less common notifications. I prefer the GadgetBridge option to have an allow-list for apps that I want notifications from, but it also has a configuration option to use a deny list, so you could have everything other than the apps that give lots of low value notifications. The proprietary app has a wide range of watch faces that it can send to the watch, which is a nice feature that would be good to have in InfiniTime and GadgetBridge. The P80 doesn't display a code on screen when it is paired via Bluetooth, so if you have multiple smart watches then you are at risk of connecting to the wrong one, and there doesn't seem to be anything stopping a hostile party from connecting to one. Note that hostile parties are not restricted to the normal maximum transmission power and can use a high gain antenna for reception, so they can connect from longer distances than normal Bluetooth devices. Conclusion The Colmi P80 hardware is quite decent; the only downside is that the vibration has an annoying tinny feel. Strangely, it has a rotation sensor for a rotating button (similar to analogue watches) but doesn't seem to have a use for it, as the touch screen does everything. The watch firmware is quite OK (not great, but adequate), but lacking a password for pairing is a significant omission. The Colmi Android app has some serious issues that make it unusable for what I do, and the release version of GadgetBridge doesn't work with it, so I have gone back to the PineTime for actual use. The PineTime cost twice as much and has fewer features (no sensor for O2 level in blood), but seems more solidly constructed. I plan to continue using the P80 with GadgetBridge and Debian based SmartWatch software to help develop the Debian Mobile project. 
I expect that at some future time GadgetBridge and the programs written for non-Android Linux distributions will support the P80, and I will transition to it then. I am confident that it will work well for me at some future time and that I will get $26.11 of value from it. For now, I recommend that people who do the sort of things I do get one of each, and that less technical people get a Colmi P80.
