The seventeenth release of the qlcal package
arrived at CRAN today, once
again following a QuantLib
release as 1.40 came out this morning.
qlcal
delivers the calendaring parts of QuantLib. It is provided (for the R
package) as a set of included files, so the package is self-contained
and does not depend on an external QuantLib library (which can be
demanding to build). qlcal covers
over sixty country / market calendars and can compute holiday lists, their
complement (i.e. business day lists) and much more. Examples
are in the README at the repository, the package page,
and of course at the CRAN package
page.
This release mainly synchronizes qlcal with
the QuantLib release 1.40. Only
one country calendar got updated; the diffstat
looks larger as the URL part of the copyright headers was updated throughout. We
also updated the URL for the GPL-2 badge: when CRAN checks this, they always hit
a timeout, as the FSF server possibly keeps track of incoming requests;
we now link to the version from the R Licenses page to avoid
this.
Changes in version 0.0.17
(2025-07-14)
Synchronized with QuantLib 1.40 released today
Calendar updates for Singapore
URL update in README.md
Courtesy of my CRANberries, there
is a diffstat report for this
release. See the project page
and package documentation for more details, and more examples.
Updating old Debian Printing software to meet C23 requirements, by Thorsten Alteholz
The work of Thorsten fell under the motto "gcc15". Due to the introduction of
gcc15 in Debian, the default language version was changed to C23. This means
that for example, function declarations without parameters are no longer allowed.
As old software, which was created with ANSI C (or C89) syntax, made use of such
function declarations, it was a busy month. One could have used something like
-std=c17 as a compile flag, but this would have just postponed the task. As a
result, Thorsten uploaded modernized versions of ink, nm2ppa and rlpr for the
Debian printing team.
Work done to decommission packages.qa.debian.org, by Raphaël Hertzog
Raphaël worked to decommission the old package tracking system
(packages.qa.debian.org). After figuring out
that it was still receiving emails from the bug tracking system
(bugs.debian.org), from multiple Debian lists and from
some release team tools, he reached out to the respective teams to either drop
those emails or adjust them so that they are sent to the current Debian Package
Tracker (tracker.debian.org).
rebootstrap uses *-for-host, by Helmut Grohne
Architecture cross bootstrapping is an ongoing effort that has shaped Debian in
various ways over the years. A longer effort to express
toolchain dependencies now bears fruit. When cross compiling, it becomes
important to express what architecture one is compiling for in Build-Depends.
As these packages have become available in trixie, more and more packages add
this extra information and in August, the libtool package
gained
a gfortran-for-host dependency. It was the first package in the essential
build closure to adopt this and required putting the pieces together in
rebootstrap that now has to
build gcc-defaults early on. There still are
hundreds of packages whose dependencies need to be updated
though.
Miscellaneous contributions
Raphaël dropped the Build Log Scan integration in tracker.debian.org
since it had been showing stale data for a while, as the underlying service has been
discontinued.
Emilio updated pixman to 0.46.4.
Emilio coordinated several transitions, and NMUed guestfs-tools to unblock one.
Stefano uploaded Python 3.14rc3 to Debian unstable. It's not yet used by any
packages, but it allows testing of the level of support in packages to begin.
Stefano upgraded almost all of the debian-social infrastructure to Debian trixie.
Stefano attended the Debian Technical Committee meeting.
Stefano uploaded routine upstream updates for a handful of Python packages
(pycparser, beautifulsoup4, platformdirs, python-authlib,
python-cffi, python-mitogen, python-resolvelib, python-super-collections,
twine).
Stefano reviewed and responded to DebConf 25 feedback.
Stefano investigated and fixed a request visibility bug in debian-reimbursements
(for admin-altered requests).
Lucas reviewed a couple of merge requests from external contributors for Go
and Ruby packages.
Lucas updated some Ruby packages to their latest upstream versions (thin and
passenger; puma is still WIP).
Lucas set up the build environment to run rebuilds of reverse dependencies of
ruby using ruby3.4. As an alternative, he is looking into personal repositories
provided by Debusine to perform this task more easily. This is in preparation
for the transition to ruby3.4 as the default in Debian.
Lucas helped with the next round of the Outreachy internship program.
Helmut sent patches for 30 cross build failures and responded to cross
building support questions on the mailing list.
Helmut continued to maintain rebootstrap.
As gcc version 15 became the default, test jobs for version 14 had to be dropped.
A fair number of patches had been applied to their packages and could be dropped.
Helmut resumed removing RC-buggy packages from unstable and sponsored a
termrec upload to avoid its deletion. This work was paused to give packages
some time to migrate to forky.
While doing some new upstream release updates, thanks to Debusine's
reverse dependencies autopkgtest
checks, Santiago discovered that paramiko 4.0 would introduce a
regression in libcloud through its dropping of support
for the obsolete DSA keys. Santiago finally uploaded to unstable both
paramiko 4.0
and a regression fix for libcloud.
Santiago has taken part in different discussions and meetings for the
preparation of DebConf 26. The DebConf 26 local team aims to prepare for the
conference well in advance.
Carles kept working on the missing-package-relations and reporting missing
Recommends. He improved the tooling to detect and report bugs, creating
269 bugs
and following up on comments. 37 bugs have been resolved; others were acknowledged.
The missing Recommends are a mixture of packages that are gone from Debian,
packages that changed name, typos and also packages that were recommended but
are not packaged in Debian.
Carles improved missing-package-relations to report broken Suggests only
for packages that used to be in Debian but have since been removed. No bugs have
been created yet for this case, but 1320 instances have been identified.
Colin spent much of the month chasing down build/test regressions in various
Python packages due to other upgrades, particularly relating to pydantic,
python-pytest-asyncio, and rust-pyo3.
About 90% of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay or GitHub
Sponsors.
Some months I feel like I'm pedalling furiously just to keep everything in a
roughly working state. This was one of those months.
Python team
I upgraded these packages to new upstream versions:
I had to spend a fair bit of time this month chasing down build/test
regressions in various packages due to some other upgrades, particularly to
pydantic, python-pytest-asyncio, and rust-pyo3:
I updated dh-python to suppress generated dependencies that would be
satisfied by python3 >=
3.11.
pkg_resources is
deprecated. In most cases
replacing it is a relatively simple matter of porting to
importlib.resources,
but packages that used its old namespace package support need more
complicated work to port them to implicit namespace
packages. We had quite a few bugs about
this on zope.* packages, but fortunately upstream did the hard part of
this recently. I went
round and cleaned up most of the remaining loose ends, with some help from
Alexandre Detiste. Some of these aren't completely done yet as they're
awaiting new upstream releases:
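For the simple (non-namespace) cases, the port usually amounts to swapping a single call. A minimal sketch, assuming a package that bundles a text resource; the helper name and paths here are mine, not from any of the packages mentioned:

```python
from importlib import resources


def read_resource_text(package: str, name: str) -> str:
    """Read a text resource bundled with *package*.

    Stands in for the legacy pkg_resources.resource_string(package, name)
    call, using only the stdlib importlib.resources API (Python 3.9+).
    """
    # files() returns a Traversable rooted at the package; joining with /
    # and calling read_text() replaces the old resource-access helpers.
    return (resources.files(package) / name).read_text()
```

For a byte-for-byte replacement of resource_string() (which returned bytes), use read_bytes() instead; the old namespace-package support that caused the zope.* bugs needs the more involved porting described above.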
I fixed
jupyter-client
so that its autopkgtests would work in Debusine.
I fixed waitress to build with the
nocheck profile.
I fixed several other build/test failures:
Welcome to post 51 in the R4 series.
A while back I realized I should really just post a little more, as
not all posts have to be as deep and introspective as, for example, the
recent-ish two
cultures post #49.
So this post is a neat little trick I (somewhat belatedly) realized
recently. The context is the ongoing transition from
(Rcpp)Armadillo 14.6.3 and earlier to (Rcpp)Armadillo 15.0.2 or later.
(I need to write a bit more about that, but that may require a bit more
time.) (And there are a total of seven (!!) issue tickets managing the
transition with issue
#475 being the main parent issue; please see there for more
details.)
In brief, the newer and current Armadillo no longer allows C++11
(which also means it no longer allows suppression of deprecation
warnings). It so happens that around a decade ago packages were
actively encouraged to move towards C++11 so many either set an
explicit SystemRequirements: for it, or set CXX_STD=CXX11
in src/Makevars or src/Makevars.win. CRAN has for some time now issued
NOTEs asking for this to be removed, and more recently enforced this
with actual deadlines. In RcppArmadillo I opted to accommodate old(er)
packages (using this by-now anti-pattern) and flip to Armadillo 14.6.3
during a transition period. That is what the package does now: It gives
you either Armadillo 14.6.3 in case C++11 was detected (or this legacy
version was actively selected via a compile-time #define),
or it uses Armadillo 15.0.2 or later.
So this means we can have either one of two versions, and may want to
know which one we have. Armadillo carries its own version macros, as
many libraries or projects do (R of course included). Many many years
ago (git blame points to sixteen and twelve years ago for its revisions)
we added the following helper function to the package (full source here,
we show it here without the full roxygen2 comment header)
// [[Rcpp::export]]
Rcpp::IntegerVector armadillo_version(bool single) {
    // These are declared as constexpr in Armadillo which actually does not define them
    // They are also defined as macros in arma_version.hpp so we just use that
    const unsigned int major = ARMA_VERSION_MAJOR;
    const unsigned int minor = ARMA_VERSION_MINOR;
    const unsigned int patch = ARMA_VERSION_PATCH;
    if (single)
        return Rcpp::wrap(10000 * major + 100 * minor + patch);
    else
        return Rcpp::IntegerVector::create(Rcpp::Named("major") = major,
                                           Rcpp::Named("minor") = minor,
                                           Rcpp::Named("patch") = patch);
}
It either returns a (named) vector of the standard major, minor,
patch form of the common package versioning pattern, or a single
integer which can be used more easily in C(++) via preprocessor macros. And
this being an Rcpp-using package, we can of course access either easily
from R:
> library(RcppArmadillo)
> armadillo_version(FALSE)
major minor patch
   15     0     2
> armadillo_version(TRUE)
[1] 150002
>
Perfectly valid and truthful. But cumbersome at the R level. So
when preparing for these (Rcpp)Armadillo changes in one of my packages, I
realized I could alter such a function and set the S3 type to
package_version. (Full version of one such variant here)
// [[Rcpp::export]]
Rcpp::List armadilloVersion() {
    // create a vector of major, minor, patch
    auto ver = Rcpp::IntegerVector::create(ARMA_VERSION_MAJOR,
                                           ARMA_VERSION_MINOR,
                                           ARMA_VERSION_PATCH);
    // and place it in a list (as e.g. packageVersion() in R returns)
    auto lst = Rcpp::List::create(ver);
    // and class it as 'package_version' accessing print() etc methods
    lst.attr("class") = Rcpp::CharacterVector::create("package_version",
                                                      "numeric_version");
    return lst;
}
Three statements each to
create the integer vector of known dimensions and compile-time
known values,
embed it in a list (as that is what the R type expects),
set the S3 class, which is easy because Rcpp can access attributes and
create character vectors,
and return the value. And now in R we can operate more easily on this
(using three dots as I didn't export it from this package):
An object of class package_version inheriting from
numeric_version can directly be compared against a (human- but
not normally machine-readable) string like "15.0.0" because the simple
S3 class defines appropriate operators, as well as print()
/ format() methods, as the first expression shows. It is
these little things that make working with R so smooth, and we can
easily (three statements!!) do so from Rcpp-based packages too.
The underlying object really is merely a list containing a
vector:
but the S3 glue around it makes it behave nicely.
So next time you are working with an object you plan to return to R,
consider classing it to take advantage of existing infrastructure (if it
exists, of course). It's easy enough to do, and may smooth the
experience on the R side.
During 2025-03-21-another-home-outage, I reflected upon what's a
properly run service and blurted out what turned out to be something
important I want to outline more. So here it is, again, on its own
for my own future reference.
Typically, I tend to think of a properly functioning service as having
four things:
backups
documentation
monitoring
automation
high availability (HA)
Yes, I miscounted. This is why you need high availability.
A service doesn't properly exist if it doesn't at least have the first
3 of those. It will be harder to maintain without automation, and
inevitably suffer prolonged outages without HA.
The five components of a proper service
Backups
Duh. If data is maliciously or accidentally destroyed, you need a copy
somewhere. Preferably in a way that malicious Joe can't get to.
This is harder than you think.
Documentation
You probably know this is hard, and this is why you're not doing
it. Do it anyways, you'll think it sucks, it will grow out of sync
with reality, but you'll be really grateful for whatever scraps you
wrote when you're in trouble.
Any docs, in other words, are better than no docs, but they are no excuse
for not doing the work properly.
Monitoring
If you don't have monitoring, you'll know it fails too late, and you
won't know it recovers. Consider high availability, work hard to
reduce noise, and don't have machines wake people up; that's literally
torture and is against the Geneva convention.
Consider predictive algorithms to prevent failures, like "add storage
within 2 weeks before this disk fills up".
This is also harder than you think.
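A "disk fills up in N days" check of that sort can be sketched as a least-squares extrapolation over recent usage samples. This is a toy illustration with invented numbers, not the configuration of any particular monitoring system:

```python
def days_until_full(samples, capacity):
    """Estimate days until *capacity* is reached.

    samples: list of (t_days, bytes_used) pairs, at least two distinct times.
    Returns None when usage is flat or shrinking.
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    # least-squares slope: growth in bytes per day
    slope = sum((t - mean_t) * (u - mean_u) for t, u in samples) / \
        sum((t - mean_t) ** 2 for t, _ in samples)
    if slope <= 0:
        return None
    last_t, last_u = samples[-1]
    return (capacity - last_u) / slope


# Toy data: usage grows 10 units/day with 30 units of headroom left.
remaining = days_until_full([(0, 50), (1, 60), (2, 70)], capacity=100)
if remaining is not None and remaining <= 14:
    print(f"add storage: disk projected full in ~{remaining:.0f} days")
```

A real deployment would feed this from time-series data (e.g. what Prometheus scrapes) and alert on the two-week threshold rather than print.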
Automation
Make it easy to redeploy the service elsewhere.
Yes, I know you have backups. That is not enough: that typically
restores data and while it can also include configuration, you're
going to need to change things when you restore, which is what
automation (or call it "configuration management" if you will) will do
for you anyways.
This also means you can do unit tests on your configuration, otherwise
you're building legacy.
This is probably as hard as you think.
High availability
Make it not fail when one part goes down.
Eliminate single points of failures.
This is easier than you think, except for storage and DNS ("naming
things" not "HA DNS", that is easy), which, I guess, means it's
harder than you think too.
Assessment
In the above 5 items, I currently check two in my lab:
backups
documentation
And barely: I'm not happy about the offsite backups, and my
documentation is much better at work than at home (and even there, I
have a 15-year backlog to catch up on).
I barely have monitoring: Prometheus is scraping parts of the infra,
but I don't have any sort of alerting -- by which I don't mean
"electrocute myself when something goes wrong", I mean "there's a set
of thresholds and conditions that define an outage and I can look at
it".
Automation is wildly incomplete. My home server is a random collection
of old experiments and technologies, ranging from Apache with Perl and
CGI scripts to Docker containers running Golang applications. Most of
it is not Puppetized (but the ratio is growing). Puppet itself
introduces a huge attack vector with kind of catastrophic lateral
movement if the Puppet server gets compromised.
And, fundamentally, I am not sure I can provide high availability in
the lab. I'm just this one guy running my home network, and I'm
growing older. I'm thinking more about winding things down than
building things now, and that's just really sad, because I feel
we're losing (well that escalated
quickly).
Side note about Tor
The above applies to my personal home lab, not work!
At work, of course, it's another (much better) story:
all services have backups
lots of services are well documented, but not all
most services have at least basic monitoring
most services are Puppetized, but not crucial parts (DNS, LDAP,
Puppet itself), and there are important chunks of legacy coupling
between various services that make the whole system brittle
most websites, DNS and large parts of email are highly available,
but key services like the Forum, GitLab and similar
applications are not HA, although most services run under
replicated VMs that can trivially survive a total, single-node
hardware failure (through Ganeti and DRBD)
Updates on FAIme service: Linux Mint 22.2 and trixie backports available
The FAIme service [1] now offers to
build customized installation images for the Xfce edition of Linux Mint 22.2 'Zara'.
For Debian 13 installations, you can select the kernel from backports for
the trixie release, which is currently version 6.16. This will support
newer hardware.
The Incandescent is a stand-alone magical boarding school fantasy.
Your students forgot you. It was natural for them to forget you. You
were a brief cameo in their lives, a walk-on character from the
prologue. For every sentimental "my teacher changed my life"
story you heard, there were dozens of "my teacher made me
moderately bored a few times a week and then I got through the year
and moved on with my life and never thought about them again."
They forgot you. But you did not forget them.
Doctor Saffy Walden is Director of Magic at Chetwood, an elite boarding
school for prospective British magicians. She has a collection of
impressive degrees in academic magic, a specialization in demonic
invocation, and a history of vague but lucrative government job offers
that go with that specialty. She turned them down to be a teacher, and
although she's now in a mostly administrative position, she's a good
teacher, with the usual crop of promising, lazy, irritating, and nervous
students.
As the story opens, Walden's primary problem is Nikki Conway. Or, rather,
Walden's primary problem is protecting Nikki Conway from the Marshals, and
the infuriating Laura Kenning in particular.
When Nikki was seven, she summoned a demon who killed her entire family
and left her a ward of the school. To Laura Kenning, that makes her a risk
who should ideally be kept far away from invocation. To Walden, that makes
Nikki a prodigious natural talent who is developing into a brilliant
student and who needs careful, professional training before she's tempted
into trying to learn on her own.
Most novels with this setup would become Nikki's story. This one does not.
The Incandescent is Walden's story.
There have been a lot of young-adult magical boarding school novels since
Harry Potter became a mass phenomenon, but most of them focus on
the students and the inevitable coming-of-age story. This is a story about
the teachers: the paperwork, the faculty meetings, the funding challenges,
the students who repeat in endless variations, and the frustrations and
joys of attempting to grab the interest of a young mind. It's also about
the temptation of higher-paying, higher-status, and less ethical work,
which however firmly dismissed still nibbles around the edges.
Even if you didn't know Emily Tesh is herself a teacher, you would guess
that before you get far into this novel. There is a vividness and a depth
of characterization that comes from being deeply immersed in the nuance
and tedium of the life that your characters are living. Walden's
exasperated fondness for her students was the emotional backbone of this
book for me. She likes teenagers without idealizing the process of
being a teenager, which I think is harder to pull off in a novel than it
sounds.
It was hard to quantify the difference between a merely very
intelligent student and a brilliant one. It didn't show up in a list
of exam results. Sometimes, in fact, brilliance could be a
disadvantage when all you needed to do was neatly jump the hoop of
an examiner's grading rubric without ever asking why. It was the
teachers who knew, the teachers who could feel the difference. A few
times in your career, you would have the privilege of teaching someone
truly remarkable; someone who was hard work to teach because they made
you work harder, who asked you questions that had never
occurred to you before, who stretched you to the very edge of your own
abilities. If you were lucky (as Walden, this time, had been lucky)
your remarkable student's chief interest was in your discipline: and
then you could have the extraordinary, humbling experience of teaching
a child whom you knew would one day totally surpass you.
I also loved the world-building, and I say this as someone who is
generally not a fan of demons. The demons themselves are a bit of a
disappointment and mostly hew to one of the stock demon conventions, but
the rest of the magic system is deep enough to have practitioners who
approach it from different angles and meaty enough to have some satisfying
layered complexity. This is magic, not magical science, so don't expect a
fully fleshed-out set of laws, but the magical system felt substantial and
satisfying to me.
Tesh's first novel, Some Desperate
Glory, was by far my favorite science fiction novel of 2023. This is a
much different book, which says good things about Tesh's range and the
potential of her work yet to come: adult rather than YA, fantasy rather
than science fiction, restrained and subtle in places where Some
Desperate Glory was forceful and pointed. One thing the books do have in
common, though, is their structure, particularly the false climax near the
midpoint of the book. I like the feeling of uncertainty and possibility
that gives both books, but in the case of The Incandescent, I was
not quite in the mood for the second half of the story.
My problem with this book is more of a reader preference than an objective
critique: I was in the mood for a story about a confident, capable
protagonist who was being underestimated, and Tesh was writing a novel
with a more complicated and fraught emotional arc. (I'm being
intentionally vague to avoid spoilers.) There's nothing wrong with the
story that Tesh wanted to tell, and I admire the skill with which she did
it, but I got a tight feeling in my stomach when I realized where she was
going. There is a satisfying ending, and I'm still very happy I read this
book, but be warned that this might not be the novel to read if you're in
the mood for a purer competence porn experience.
Recommended, and I am once again eagerly awaiting the next thing Emily
Tesh writes (and reminding myself to go back and read her novellas).
Content warnings: Grievous physical harm, mind control, and some body
horror.
Rating: 8 out of 10
In December 2024, I went on a trip through four countries - Singapore, Malaysia, Brunei, and Vietnam - with my friend Badri. This post covers our experiences in Singapore.
I took an IndiGo flight from Delhi to Singapore, with a layover in Chennai. At the Chennai airport, I was joined by Badri. We had an early morning flight from Chennai that would land in Singapore in the afternoon. Within 48 hours of our scheduled arrival in Singapore, we submitted an arrival card online. At immigration, we simply needed to scan our passports at the gates, which opened automatically to let us through, and then give our address to an official nearby. The process was quick and smooth, but it unfortunately meant that we didn't get our passports stamped by Singapore.
Before I left the airport, I wanted to visit the nature-themed park with a fountain I saw in pictures online. It is called Jewel Changi, and it took quite some walking to get there. After reaching the park, we saw a fountain that could be seen from all the levels. We roamed around for a couple of hours, then proceeded to the airport metro station to get to our hotel.
A shot of Jewel Changi. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
There were four ATMs on the way to the metro station, but none of them provided us with any cash. This was the first country (outside India, of course!) where my card didn't work at ATMs.
To use the metro, one can tap the EZ-Link card or bank cards at the AFC gates to get in. You cannot buy tickets using cash. Before boarding the metro, I used my credit card to get Badri an EZ-Link card from a vending machine. It was 10 Singapore dollars (₹630) - 5 for the card, and 5 for the balance. I had planned to use my Visa credit card to pay for my own fare. I was relieved to see that my card worked, and I passed through the AFC gates.
We had booked our stay at a hostel named Campbell's Inn, which was the cheapest we could find in Singapore. It was ₹1500 per night for dorm beds. The hostel was located in Little India. While Little India has an eponymous metro station, the one closest to our hostel was Rochor.
On the way to the hostel, we found out that our booking had been canceled.
We had booked from the Hostelworld website, opting to pay the deposit in advance and to pay the balance amount in person upon arrival. However, Hostelworld still tried to charge Badri's card again before our arrival. When the unauthorized charge failed, they sent an automatic message saying they had tried to charge the card and that we should contact them soon to avoid cancellation, which we couldn't do as we were on the plane.
Despite this, we went to the hostel to check the status of our booking.
The trip from the airport to Rochor required a couple of transfers. It was 2 Singapore dollars (approx. ₹130) and took approximately an hour.
Upon reaching the hostel, we were informed that our booking had indeed been canceled, and we were not given any reason for the cancellation. Furthermore, no beds were available at the hostel for us to book on the spot.
We decided to roam around and look for accommodation at other hostels in the area. Soon, we found a hostel by the name of Snooze Inn, which had two beds available. It was 36 Singapore dollars per person (around ₹2300) for a dormitory bed. Snooze Inn advertised supporting RuPay cards and UPI. Some other places in that area did the same. We paid using my card. We checked in and slept for a couple of hours after taking a shower.
By the time we woke up, it was dark. We met Praveen's friend Sabeel to get my FLX1 phone. We also went to Mustafa Center nearby to exchange Indian rupees for Singapore dollars. Mustafa Center also had a shopping center with shops selling electronic items and souvenirs, among other things. When we were dropping off Sabeel at a bus stop, we discovered that the bus stops in Singapore had a digital board mentioning the bus routes for the stop and the number of minutes each bus was going to take.
In addition to an organized bus system, Singapore had good pedestrian infrastructure. There were traffic lights and zebra crossings for pedestrians to cross the roads. Unlike in Indian cities, rules were being followed. Cars would stop for pedestrians at unmanaged zebra crossings; pedestrians would in turn wait for their crossing signal to turn green before attempting to walk across. Therefore, walking in Singapore was easy.
Traffic rules were taken so seriously in Singapore that I (as a pedestrian) was afraid of unintentionally breaking them, which could get me in trouble, as breaking rules is dealt with through heavy fines in the country. For example, crossing roads without using a marked crossing (while being within 50 meters of one) - also known as jaywalking - is an offence in Singapore.
Moreover, the streets were litter-free, and cleanliness seemed like an obsession.
After exploring Mustafa Center, we went to a nearby 7-Eleven to top up Badri's EZ-Link card. He gave 20 Singapore dollars for the recharge, which credited the card by 19.40 Singapore dollars (0.60 dollars being the recharge fee).
When I was planning this trip, I discovered that the World Chess Championship match was being held in Singapore. I seized the opportunity and bought a ticket in advance. The next day - the 5th of December - I went to watch the 9th game between Gukesh Dommaraju of India and Ding Liren of China. The venue was a hotel on Sentosa Island, and the ticket was 70 Singapore dollars, which was around ₹4000 at the time.
We checked out from our hostel in the morning, as we were planning to stay with Badri's aunt that night. We had breakfast at a place in Little India. Then we took a couple of buses, followed by a walk to Sentosa Island. Paying the fare for the buses was similar to the metro - I tapped my credit card in the bus, while Badri tapped his EZ-Link card. We also had to tap it while getting off.
If you are tapping your credit card to use public transport in Singapore, keep in mind that the total amount of all the trips taken on a day is deducted at the end. This makes it hard to determine the cost of individual trips. For example, I could take a bus and get off after tapping my card, but I would have no way to determine how much this journey cost.
When you tap in, the maximum fare amount gets deducted. When you tap out, the balance amount gets refunded (if it's a shorter journey than the maximum fare one). So, there is an incentive for passengers not to get off without tapping out. Going by your card statement, it looks like all that happens virtually, and only one statement comes in at the end. Maybe this combining only happens for international cards.
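The settlement just described can be modelled in a few lines; a toy sketch in which the maximum fare and the trip amounts are invented for illustration (they are not actual EZ-Link figures):

```python
def daily_statement(trip_fares, max_fare=2.50):
    """Combined end-of-day charge for a day's trips.

    max_fare is held at each tap-in, the unused part is refunded at
    tap-out, and only the netted daily total reaches the statement.
    """
    total = 0.0
    for fare in trip_fares:
        held = max_fare             # deducted when tapping in
        refund = max_fare - fare    # refunded when tapping out
        total += held - refund      # nets out to the actual trip fare
    return round(total, 2)


# Two trips show up as one combined charge on the statement:
print(daily_statement([1.20, 2.00]))
```

A missed tap-out is the case where the refund never happens, i.e. you pay max_fare for the trip, which is exactly the incentive to tap out mentioned above.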
We got off the bus a kilometer away from Sentosa Island and walked the rest of the way. We went on the Sentosa Boardwalk, which is itself a tourist attraction. I was using Organic Maps to navigate to the hotel Resorts World Sentosa, but Organic Maps' route led us through an amusement park. I tried asking the locals (people working in shops) for directions, but it was a Chinese-speaking region, and they didn't understand English. Fortunately, we managed to find a local who helped us with the directions.
A shot of Sentosa Boardwalk. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Following the directions, we somehow ended up having to walk on a road which did not have pedestrian paths. Singapore is a country with strict laws, so we did not want to walk on that road. Avoiding that road led us to the Michael Hotel. There was a person standing at the entrance, and I asked him for directions to Resorts World Sentosa. The person told me that the bus (which was standing at the entrance) would drop me there! The bus was a free service for getting to Resorts World Sentosa. Here I parted ways with Badri, who went to his aunt's place.
I got to Resorts World Sentosa and showed my ticket to get in. There were two zones inside - the first was a room with a glass wall separating the audience and the players. This was the room to watch the game physically, and it resembled a zoo or an aquarium. :) The room was also a silent room, which means talking or making noise was prohibited. Audience members were only allowed to have mobile phones for the first 30 minutes of the game - since I arrived late, I could not bring my phone inside that room.
The other zone was outside this room. It had a big TV on which the game was being broadcast along with commentary by David Howell and Jovanka Houska - the official FIDE commentators for the event. If you don't already know, FIDE is the authoritative international chess body.
I spent most of the time outside that silent room, giving me an opportunity to socialize. A lot of people were from Singapore. I saw there were many Indians there as well. Moreover, I had a good time with Vasudevan, a journalist from Tamil Nadu who was covering the match. He also asked Gukesh questions during the post-match conference. His questions were in Tamil to lift Gukesh's spirits, as Gukesh is a Tamil speaker.
Tea and coffee were free for the audience. I also bought a T-shirt from their stall as a souvenir.
After the game, I took a shuttle bus from Resorts World Sentosa to a metro station, then travelled by metro to Pasir Ris, where Badri was staying with his aunt. I thought of getting something to eat, but could not find any cafés or restaurants while walking from the Pasir Ris metro station to my destination, and was positively starving when I got there.
Badri's aunt's place was an apartment in a gated community. At the gate was a security guard who asked me the address of the apartment. Inside, there were many buildings. To enter a building, you need to dial the number of the apartment you want to visit and speak to its occupants. I had seen that in the TV show Seinfeld, where Jerry's friends used to dial Jerry to get into his building.
I was afraid they might not have anything to eat, because I had told them I was planning to get something on the way. Fortunately, that was not the case, and I was relieved not to have to sleep on an empty stomach.
Badri's uncle gave us an idea of how safe Singapore is. He said that even if you forget your laptop in a public space, you can go back the next day and find it in the same spot. I also learned that owning cars is discouraged in Singapore - the government imposes a high registration fee on them, while also making public transport easy to use and affordable. I also found out that 7-Eleven is not that popular among residents in Singapore, unlike in Malaysia or Thailand.
The next day was our third and final day in Singapore. We had a bus in the evening to Johor Bahru in Malaysia. We got up early, had breakfast, and checked out from Badri's aunt's home. A store by the name of Cat Socrates was our first stop for the day, as Badri wanted to buy some stationery. The plan was to take the metro, followed by the bus, so we got to Pasir Ris metro station. Next to the metro station was a mall. In the mall, Badri found an ATM where our cards worked, and we got some Singapore dollars.
It was noon when we reached the stationery shop mentioned above. We had to walk a kilometer from where the bus dropped us. It was a hot, sunny day in Singapore, so walking was not comfortable. The route took us through residential areas, so we saw some non-touristy parts of Singapore.
After we were done with the stationery shop, we went to a hawker center to get lunch. Hawker centers are unique to Singapore. They have a lot of shops that sell local food at cheap prices. It is similar to a food court. However, unlike the food courts in malls, hawker centers are open-air and can get quite hot.
This is the hawker center we went to. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
To eat something, you just buy it from one of the shops and find a table. After you are done, you put your tray in one of the tray-collecting spots. I had kaya toast with chai, since there weren't many vegetarian options. I also bought a persimmon from a nearby fruit vendor. Badri, on the other hand, sampled some local non-vegetarian dishes.
Table littering at the hawker center was prohibited by law. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Next, we took a metro to Raffles Place, as we wanted to visit the Merlion, the icon of Singapore. It is a statue with the head of a lion and the body of a fish. While getting through the AFC gates, my card was declined. Therefore, I had to buy an EZ-Link card, which I had been avoiding because the card itself costs 5 Singapore dollars.
From the Raffles Place metro station, we walked to the Merlion. The place also gave a nice view of Marina Bay Sands. It was filled with tourists clicking pictures, and we did the same.
Merlion from behind, giving a good view of Marina Bay Sands. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
After this, we went to the bus stop to catch our bus to the border city of Johor Bahru, Malaysia. The bus was more than an hour late, and we worried that we had missed the bus. I asked an Indian woman at the stop who also planned to take the same bus, and she told us that the bus was late. Finally, our bus arrived, and we set off for Johor Bahru.
Before I finish, let me give you an idea of my expenditure. Singapore is an expensive country, and I realized that expenses can add up pretty quickly. Overall, my stay in Singapore for 3 days and 2 nights cost approx. 5500 rupees - and that was with one night spent at Badri's aunt's place (so we didn't have to pay for accommodation that night) and a couple of meals we didn't have to pay for. This amount doesn't include the ticket for the chess game, but does include the costs of getting there. If you are in Singapore, it is likely you will pay a visit to Sentosa Island anyway.
Stay tuned for our experiences in Malaysia!
Credits: Thanks to Dione, Sahil, Badri and Contrapunctus for reviewing the draft. Thanks to Bhe for spotting a duplicate sentence.
When this book was presented as available for review, I jumped on it. After
all, who doesn't love reading a nice bit of computing history, as told by a
well-known author (affectionately known as "Uncle Bob"), one who has been
immersed in computing since forever? What's not to like there?
Reading on, the book does not disappoint. Much to the contrary, it digs
into details absent in most computer history books that, being an operating
systems and computer architecture geek, I absolutely enjoyed. But let me
first address the book s organization.
The book is split into four parts. Part 1, Setting the Stage, is a short
introduction, answering the question "Who are we?" ("we" being the
programmers, of course). It describes the fascination many of us felt when
we realized that the computer was there to obey us, to do our bidding, and
we could absolutely control it.
Part 2 talks about the giants of the computing world, on whose shoulders
we stand. It digs in with a level of detail I have never seen before,
discussing their personal lives and technical contributions (as well as the
hoops they had to jump through to get their work done). Nine chapters cover
these giants, ranging chronologically from Charles Babbage and Ada Lovelace
to Ken Thompson, Dennis Ritchie, and Brian Kernighan (understandably, giants
who worked together are grouped in the same chapter). This is the part with
the most historically overlooked technical details. For example, what was
the word size in the first computers, before even the concept of a byte
had been brought into regular use? What was the register structure of early
central processing units (CPUs), and why did it lead to requiring
self-modifying code to be able to execute loops?
Then, just as Unix and C get invented, Part 3 skips to computer history as
seen through the eyes of Uncle Bob. I must admit that, while the change of
rhythm initially startled me, it ends up working quite well. The focus is
no longer on the giants of the field, but on one particular person (who
casts a very long shadow). The narrative follows the author's life: a boy
with access to electronics due to his father's line of work; a computing
industry leader, in the early 2000s, with extreme programming; one of the
first producers of training materials in video format, a role that today
might be recognized as that of an influencer. This first-person narrative
reaches the year 2023.
But the book is not just a historical overview of the computing world, of
course. Uncle Bob includes a final section with his thoughts on the future
of computing. As this is a book for programmers, it is fitting to start
with the changes in programming languages that we should expect to see and
where such changes are likely to take place. The unavoidable topic of
artificial intelligence is presented next: What is it and what does it
spell for computing, and in particular for programming? Interesting (and
sometimes surprising) questions follow: What does the future of hardware
development look like? What is the likely evolution of the World Wide
Web? What is the future of programming and programmers?
At just under 500 pages, the book is a volume to be taken seriously. But
the space is very well used. The material is easy to read, often
funny and always informative. If you enjoy computer history and
understanding the little details in the implementations, it might very well
be the book you want.
The solar fence and some other ground and pole mount solar panels, seen through leaves.
Solar fencing manufacturers have some good, simple designs, but it's hard
to buy for a small installation: they mostly sell to utility-scale solar
projects, and those are installed by driving metal beams into the ground,
which requires heavy machinery.
Since I have experience with Ironridge rails for roof mount solar, I
decided to adapt that system for a vertical mount, something it was not
designed for. I combined the Ironridge hardware with regular parts
from the hardware store.
The cost of mounting solar panels nowadays is often higher than the cost of
the panels. I hoped to match the cost, and I nearly did. The solar panels cost
$100 each, and the fence cost $110 per solar panel. This fence was
significantly cheaper than the conventional ground mount arrays that I
considered as alternatives, and made better use of a difficult hillside
location.
I used 7 foot long Ironridge XR-10 rails, which fit 2 solar panels per rail.
(Longer rails would need a center post anyway, and the 7 foot long rails
have cheaper shipping, since they do not need to be shipped freight.)
For the fence posts, I used regular 4x4" treated posts. 12 foot long, set
in 3 foot deep post holes, with 3x 50 lb bags of concrete per hole and 6
inches of gravel on the bottom.
detail of how the rails are mounted to the posts, and the panels to the rails
To connect the Ironridge rails to the fence posts, I used the Ironridge
LFT-03-M1 slotted L-foot bracket, screwed into the post with a 5/8 x 3
inch hot-dipped galvanized lag screw. Since a treated post can react badly
with an aluminum bracket, there needs to be some flashing between the post
and bracket. I used Shurtape PW-100 tape for that. I see no sign of
corrosion after 1 year.
The rest of the Ironridge system is a T-bolt that connects the rail to the
L-foot (part BHW-SQ-02-A1), and Ironridge solar panel fasteners
(UFO-CL-01-A1 and UFO-STP-40MM-M1). Also XR-10 end caps and wire clips.
Since the Ironridge hardware is not designed to hold a solar panel at a 90
degree angle, I was concerned that the panels might slide downward over
time. To help prevent that, I added some additional support brackets under
the bottom of the panels. So far, that does not seem to have been a problem
though.
I installed Aptos 370 watt solar panels on the fence. They are bifacial,
and while the posts block the back partially, there is still bifacial
gain on cloudy days. I left enough space under the solar panels to be able
to run a push mower under them.
Me standing in front of the solar fence at end of construction
I put pairs of posts next to one another, so each 7 foot segment of fence
had its own 2 posts. This is the least elegant part of the design, but
fitting 2 brackets next to one another on a single post isn't feasible.
I bolted the pairs of posts together with some spacers. A side benefit of
doing it this way is that treated lumber can warp as it dries, and this
prevented much twisting of the posts.
Using separate posts for each segment also means that the fence can
traverse a hill easily. And it does not need to be perfectly straight. In
fact, my fence has a 30 degree bend in the middle. This means it has both
south facing and south-west facing panels, so can catch the light for
longer during the day.
After building the fence, I noticed there was a slight bit of sway at the
top, since 9 feet of wooden post is not entirely rigid. My worry was that a
gusty wind could rattle the solar panels. While I did not actually observe
that happening, I added some diagonal back bracing for peace of mind.
view of rear upper corner of solar fence, showing back bracing connection
Inspecting the fence today, I find no problems after the first year. I hope
it will last 30 years, with the lifespan of the treated lumber
being the likely determining factor.
As part of my larger (and still ongoing) ground mount solar install, the
solar fence has consistently provided great power. The vertical orientation
works well at latitude 36. It also turned out that the back of the fence was
useful to hang conduit and wiring and solar equipment, and so it turned into
the electrical backbone of my whole solar field. But that's another story..
solar fence parts list
DebConf26 is already in the air in Argentina. Organizing DebConf26 gives us the
opportunity to talk about Debian in our country again. This is not the first
time that Debian has come here: Argentina previously hosted DebConf 8 in Mar
del Plata.
In August, Nattie Mayer-Hutchings and Stefano Rivera from the DebConf Committee
visited the venue where the next DebConf will take place. They came to Argentina
in order to see what it is like to travel from Buenos Aires to Santa Fe (the
venue of the next DebConf). In addition, they were able to observe the layout
and size of the classrooms and halls, as well as the infrastructure available at
the venue, which will be useful for the Video Team.
But before going to Santa Fe, on August 27th, we organized a meetup in
Buenos Aires at GCoop, where we hosted some talks:
¿Qué es Debian? (What is Debian?) - Pablo Gonzalez (sultanovich) / Emmanuel Arias
On August 28th, we had the opportunity to get to know the Venue. We walked around
the city and, obviously, sampled some of the beers from Santa Fe.
On August 29th we met with representatives of the University and local government
who were all very supportive. We are very grateful to them for opening
their doors to DebConf.
In the afternoon we met some of the local free software community at an event we
held in ATE Santa Fe. The event included several talks:
¿Qué es Debian? (What is Debian?) - Pablo (sultanovich) / Emmanuel Arias
Ciberrestauradores: Gestores de basura electrónica (Cyber-restorers: e-waste managers) - Programa RAEES Acutis
Debian and DebConf (Stefano Rivera/Nattie Mayer-Hutchings)
Thanks to Debian Argentina, and all the people who will make DebConf26
possible.
Thanks to Nattie Mayer-Hutchings and Stefano Rivera for reviewing an earlier
version of this article.
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language and is
widely used by (currently) 1261 other packages on CRAN, downloaded 41.4 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 647 times according
to Google Scholar.
This version updates the 15.0.2-1
release from last week. Following fairly extensive email discussions
with CRAN, we are now
accelerating the transition to the newer Armadillo. When C++14 or newer
is used (which after all is the default since R 4.1.0 released May 2021,
see WRE
Section 1.2.4), or when opted into, the newer Armadillo is selected.
If on the other hand either C++11 is still forced, or the legacy version
is explicitly selected (which currently one package at CRAN does), then
Armadillo 14.6.3 is selected.
Most packages will not see a difference and automatically switch to
the newer Armadillo. However, some packages will see one or two types of
warning. First, if C++11 is still actively selected via, for example,
CXX_STD, then CRAN will nudge a change to a newer
compilation standard (as they have been doing for some time already).
Preferably the change should be to simply remove the constraint and let
R pick the standard based on its version and compiler availability.
These days that gives us C++17 in most cases; see WRE
Section 1.2.4 for details. (Some packages may need C++14 or C++17 or
C++20 explicitly and can also do so.)
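As a sketch, a package's src/Makevars that currently pins the standard would simply drop the constraint (the file contents below are a generic example, not taken from any particular package):

```make
## src/Makevars: remove a line like the following and let R pick the standard
# CXX_STD = CXX11

## typical remaining flags for a package linking against RcppArmadillo
PKG_LIBS = $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)
```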
Second, some packages may see a deprecation warning. Up until
Armadillo 14.6.3, the package suppressed these and you can still get
that effect by opting into that version by setting
-DARMA_USE_LEGACY. (However this route will be sunset
eventually too.) But one really should update the code to the
non-deprecated version. In a large number of cases this simply means
switching from using arma::is_finite() (typically called on
a scalar double) to calling std::isfinite().
But there are some other cases, and we will help as needed. If you
maintain a package showing deprecation warnings, and are lost here and
cannot work out the conversion to current coding styles, please open an
issue at the RcppArmadillo repository (i.e. here) or in
your own repository and tag me. I will also reach out to the maintainers
of a smaller set of packages with more than one reverse dependency.
A few small changes have been made to the internal packaging and
documentation, along with a small synchronization with upstream for two
commits since the 15.0.2 release, as well as a link to the ldlasb2
repository and its demonstration regarding some ill-stated benchmarks
done elsewhere.
The detailed changes since the last CRAN release follow.
Changes in
RcppArmadillo version 15.0.2-2 (2025-09-18)
Here, in classic Goerzen deep dive fashion, is more information than you knew you wanted about a topic you've probably never thought of. I found it pretty interesting, because it took me down a rabbit hole of subsystems I've never worked with much and a mishmash of 1980s and 2020s tech.
I had previously tried and failed to get an actual 80x25 Linux console, but I've since figured it out!
This post is about the Linux text console, not X or Wayland. We're going to get the console right without using those systems. These instructions are for Debian trixie, but should be broadly applicable elsewhere also. The end result can look like this:
(That's a Wifi Retromodem that I got at VCFMW last year in the Hayes modem case)
What's a pixel?
How would you define a pixel these days? Probably something like "a uniquely-addressable square dot in a two-dimensional grid".
In the world of VGA and CRTs, that was just a logical abstraction. We got an API centered around that because it was convenient. But, down the VGA cable and on the device, that's not what a pixel was.
A pixel, back then, was a time interval. On a multisync monitor (common except in the very early days of VGA), the timings could be adjusted, which produced logical pixels of different sizes. Those screens often had a maximum resolution, but not necessarily a native resolution in the sense that an LCD panel has one. Different timings produced different-sized pixels with equal clarity (or, on cheaper monitors, equal fuzziness).
A side effect of this was that pixels need not be square. And, in fact, in the standard DOS VGA 80x25 text mode, they weren't.
You might be seeing why DVI, DisplayPort, and HDMI replaced VGA for LCD monitors: with a VGA cable, you did a pixel-to-analog-timings conversion, then the display did a timings-to-pixels conversion, and this process could be a bit lossy. (Hence why you sometimes needed to fill the screen with an image and push the center button on those older LCD screens.)
(Note to the pedantically-inclined: yes, I am aware that I have simplified several things here; for instance, a color LCD pixel is made up of approximately 3 sub-dots of varying colors, and things like color eInk displays have two pixel grids with different sizes of pixels layered atop each other, and printers are another confusing thing altogether, and and and... MOST PEOPLE THINK OF A PIXEL AS A DOT THESE DAYS, OK?)
What was DOS text mode?
We think of this as the standard display: 80 columns wide and 25 rows tall, 80x25. By the time Linux came along, the standard Linux console was VGA text mode, something like the 4th incarnation of text modes on PCs (after CGA, MDA, and EGA). VGA also supported certain other character sizes, giving certain other text dimensions, but if I cover all of those, this will explode into an even more ridiculously massive page than it already is.
So to display text on an 80x25 DOS VGA system, ultimately characters and attributes were written into the text buffer in memory. The VGA system then rendered it to the display as a 720x400 image (at 70Hz) with non-square pixels such that the result was approximately a 4:3 aspect ratio.
The font used for this rendering was a bitmapped one using 8x16 cells. You might do some math here and point out that 8 * 80 is only 640, and you'd be correct. The fonts were 8x16, but the rendered cells were 9x16. The extra pixel was normally used for spacing between characters. However, in line graphics mode, characters 0xC0 through 0xDF repeated the 8th column in the position of the 9th, allowing the continuous line-drawing characters we're used to from TUIs.
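The arithmetic behind the non-square pixels can be checked with a quick back-of-the-envelope sketch (assuming the nominal 4:3 display aspect ratio):

```python
cols, rows = 80, 25
cell_w, cell_h = 9, 16   # rendered cell size, including the spacing column

width, height = cols * cell_w, rows * cell_h
assert (width, height) == (720, 400)

# On a 4:3 display, 720x400 only fills the screen if each pixel is
# taller than it is wide:
pixel_aspect = (4 / 3) / (width / height)   # width:height of one pixel
print(f"pixel aspect ~ {pixel_aspect:.3f}")  # roughly 0.741, i.e. pixels ~35% taller than wide
```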
Problems rendering DOS fonts on modern systems
By now, you're probably seeing some of the issues we have rendering DOS screens on more modern systems. These aren't new at all; I remember some of them from back in the days when I ran OS/2, and I think I also saw them on various terminals and consoles in OS/2 and Windows.
Some issues you'd encounter would be:
Incorrect aspect ratio caused by using the original font and rendering it using 1:1 square pixels (resulting in a squashed appearance)
Incorrect aspect ratio for ANOTHER reason, caused by failing to render column 9, resulting in text that is overall too narrow
Characters appearing to be touching each other when they shouldn't (failing to render column 9; looking at you, dosbox)
Gaps between line drawing characters that should be continuous, caused by rendering column 9 as empty space in all cases
Character set issues
DOS was around long before Unicode was. In the DOS world, there were codepages that selected the glyphs for roughly the high half of the 256 possible characters. CP437 was the standard for the USA; others existed for other locations that needed different characters. On Unix, the USA pre-Unicode standard was Latin-1. Same concept, but with different character mappings.
Nowadays, just about everything is based on UTF-8. So, we need some way to map our CP437 glyphs into Unicode space. If we are displaying DOS-based content, we'll also need a way to map CP437 characters to Unicode for display later, and we need these maps to match so that everything comes out right. Whew.
So, let's get on with setting this up!
Selecting the proper video mode
As explained in my previous post, proper hardware support for DOS text mode is limited to x86 machines that do not use UEFI. Non-x86 machines, or x86 machines with UEFI, simply do not contain the necessary support for it. As these are now standard, most of the time, the text console you see on Linux is actually the kernel driving the video hardware in graphics mode, and doing the text rendering in software.
That's all well and good, but it makes it quite difficult to actually get an 80x25 console.
First, we need to be running at 720x400. This is where I ran into difficulty last time. I realized that my laptop's LCD didn't advertise any video modes other than its own native resolution. However, almost all external monitors will, and 720x400@70 is a standard VGA mode from way back, so it should be well-supported.
You need to find the Linux device name for your device. You can look at the possible devices with ls -l /sys/class/drm. If you also have a GUI, xrandr may help too. But in any case, each directory under /sys/class/drm has a file named modes, and if you cat them all, you will eventually come across one with a bunch of modes defined. Drop the leading card0 or whatever from the directory name, and that s your device. (Verify that 720x400 is in modes while you re at it.)
Now, you're going to edit /etc/default/grub and add something like this to GRUB_CMDLINE_LINUX_DEFAULT:
video=DP-1:720x400@70
Of course, replace DP-1 with whatever your device is.
Now you can run update-grub and reboot. You should have a 720x400 display.
At first, I thought I had succeeded by using Linux's built-in VGA font with that mode. But it looked too tall. After noticing that repeated 0s were touching, I got suspicious about the missing 9th column in the cells. stty -a showed that my screen was 90x25, which is exactly what it would show if I was using 8x16 instead of 9x16 cells. Sooo... I need to prepare a 9x16 font.
Building it yourself
First, install some necessary software: apt-get install fontforge bdf2psf
Start by going to the Oldschool PC Font Pack Download page. Download oldschool_pc_font_pack_v2.2_FULL.zip and unpack it.
The file we're interested in is otb - Bm (linux bitmap)/Bm437_IBM_VGA_9x16.otb. Open it in fontforge by running fontforge Bm437_IBM_VGA_9x16.otb. When it asks if you will load the bitmap fonts, hit select all, then yes. Go to File -> Generate Fonts. Save as BDF; no need for outlines, and use guess for resolution.
Now you have a file such as Bm437_IBM_VGA_9x16-16.bdf. Excellent.
Now we need to generate a Unicode map file. We will make sure this matches the system s by enumerating every character from 0x00 to 0xFF, converting it from CP437 to Unicode, and writing the appropriate map.
Here s a Python script to do that:
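A minimal sketch of such a script follows (assumptions: the one-codepoint-per-line U+XXXX symbol-file format and the output filename cp437.sym are illustrative and should be checked against bdf2psf(1); note also that Python's cp437 codec maps bytes 0x00-0x1F to ASCII control codes rather than the DOS dingbat glyphs):

```python
#!/usr/bin/env python3
# Write a symbol file mapping glyph positions 0x00-0xFF to the Unicode
# codepoints of their CP437 equivalents.
with open("cp437.sym", "w") as out:
    for i in range(256):
        ch = bytes([i]).decode("cp437")     # CP437 byte -> Unicode character
        out.write(f"U+{ord(ch):04X}\n")     # e.g. byte 0xC4 -> U+2500
```

The .psf itself then comes from bdf2psf, along the lines of bdf2psf Bm437_IBM_VGA_9x16-16.bdf /dev/null cp437.sym 256 CP437-VGA.psf (verify the argument order against the bdf2psf(1) man page).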
By convention, we normally store these files gzipped, so gzip CP437-VGA.psf.
You can test it on the console with setfont CP437-VGA.psf.gz.
Now copy this file into /usr/local/etc.
Activating the font
Now, edit /etc/default/console-setup. It should look like this:
# CONFIGURATION FILE FOR SETUPCON
# Consult the console-setup(5) manual page.
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="Lat15"
FONTFACE="VGA"
FONTSIZE="8x16"
FONT=/usr/local/etc/CP437-VGA.psf.gz
VIDEOMODE=
# The following is an example how to use a braille font
# FONT='lat9w-08.psf.gz brl-8x8.psf'
At this point, you should be able to reboot. You should have a proper 80x25 display! Log in and run stty -a to verify it is indeed 80x25.
Using and testing CP437
Part of the point of CP437 is to be able to access BBSs, ANSI art, and similar.
Now, remember, the Linux console is still in UTF-8 mode, so we have to translate CP437 to UTF-8, then let our font map translate it back to CP437. A weird trip, but it works.
Let's test it using the Textfiles ANSI art collection. In the artworks section, I randomly grabbed a file near the top: borgman.ans. Download that, and display it with:
clear; iconv -f CP437 -t UTF-8 < borgman.ans
You should see something similar to, but actually more accurate than, the textfiles PNG rendering of it, which you'll note has an incorrect aspect ratio and some rendering issues. I spot-checked a few others and they seemed to look good. belinda.ans in particular tries quite a few characters and should give you a good sense of whether it is working.
Use with interactive programs
That's all well and good, but you're probably going to want to actually use this with some interactive program that expects CP437. Maybe Minicom, Kermit, or even just telnet?
For this, you'll want to apt-get install luit. luit maps CP437 (or any other encoding) to UTF-8 for display, and then of course the Linux console maps UTF-8 back to the CP437 font.
Here's a way you can repeat the earlier experiment using luit to run the cat program:
clear; luit -encoding CP437 cat borgman.ans
You can run any command under luit. You can even run luit -encoding CP437 bash if you like. If you do this, it is probably a good idea to follow my instructions on generating locales in my post on serial terminals, and then, within luit, set LANG=en_US.IBM437. But note especially that you can run programs like minicom and others for accessing BBSs under luit.
Final words
This gave you a nice DOS-type console. Although it doesn't have glyphs for many codepoints, it does run in UTF-8 mode and therefore is compatible with modern software.
You can achieve greater compatibility with more UTF-8 codepoints with the DOS font, at the expense of accuracy of character rendering (especially for the double-line drawing characters) by using /usr/share/bdf2psf/standard.equivalents instead of /dev/null in the bdf2psf command.
Or you could go for another challenge, such as using the DEC vt-series fonts for coverage of ISO-8859-1. But just using fonts extracted from DEC ROMs won't work properly, because DEC terminals had even more strangeness going on than DOS fonts.
Locking down database access is probably the single most important thing a system administrator or software developer can do to prevent their application from leaking its data. As MariaDB 11.8 is the first long-term supported version with a few key new security features, let's recap the most important things every DBA should know about MariaDB in 2025.
Back in the old days, MySQL administrators had a habit of running the clumsy mysql_secure_installation script, but it has long been obsolete. A modern MariaDB database server is already secure by default and locked down out of the box, and no such extra scripts are needed. On the contrary, the database administrator is expected to open up access to MariaDB according to the specific needs of each server. Therefore, it is important that the DBA understands and correctly configures three things:
Creating separate application-specific users with granular permissions that allow only necessary access and no more
Distributing and storing passwords and credentials securely
Ensuring all remote connections are properly encrypted
For holistic security, one should also consider proper auditing, logging, backups, regular security updates and more, but in this post we will focus only on the above aspects related to securing database access.
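A sketch of the first of the three points above in MariaDB SQL (the account name, host pattern, database name, and password are all placeholders):

```sql
-- Hypothetical application account with only the privileges it needs.
CREATE USER 'appuser'@'10.0.0.%' IDENTIFIED BY 'a-long-random-secret';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'10.0.0.%';
-- Deliberately no GRANT OPTION, no DDL privileges, no access to other schemas.
```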
How encrypting database connections with TLS differs from web server HTTP(S)
Even though MariaDB (and other databases) use the same SSL/TLS protocol for encrypting remote connections as web servers and HTTPS, the way it is implemented is significantly different, and the different security assumptions are important for a database administrator to grasp.
Firstly, most HTTP requests to a web server are unauthenticated, meaning the web server serves public web pages and does not require users to log in. Traditionally, when a user logged in over an HTTP connection, the username and password were transmitted in plaintext in an HTTP POST request. Modern TLS, which was previously called SSL, does not change how HTTP works but simply encapsulates it. When using HTTPS, a web browser and a web server will start an encrypted TLS connection as the very first thing, and only once it is established do they send HTTP requests and responses inside it. There are no passwords or other shared secrets needed to form the TLS connection. Instead, the web server relies on a trusted third party, a Certificate Authority (CA), to vouch that the TLS certificate offered by the web server can be trusted by the web browser.
For a database server like MariaDB, the situation is quite different. All users need to authenticate and log in to the server before being allowed to run any SQL or get any data out of the server. The database server and client programs have built-in authentication methods, and passwords are not, and have never been, sent in plaintext. Over the years, MySQL and its successor, MariaDB, have had multiple password authentication methods: the original SHA-1-based hashing, the later double-SHA-1-based mysql_native_password, followed by sha256_password and caching_sha2_password in MySQL and ed25519 in MariaDB. The MariaDB.org blog post by Sergei Golubchik recaps the history of these well.
Even though most modern MariaDB installations should be using TLS to encrypt all remote connections in 2025, having the authentication method be as secure as possible still matters, because authentication is done before the TLS connection is fully established.
To further harden authentication against man-in-the-middle attacks, a new password-based authentication method, PARSEC, was introduced in MariaDB 11.8. It builds upon the previous ed25519 public-key-based verification (similar to what modern SSH does), and combines PBKDF2 key derivation (using SHA-512 and SHA-256) with a high iteration count.
At first it may seem like a disadvantage not to wrap all connections in a TLS tunnel the way HTTPS does. But having authentication done in a MitM-resistant way regardless of the connection's encryption status enables a clever extra capability that is now available in MariaDB: since the database server and client already have a shared secret that the server uses to authenticate the user, the client can also use it to validate the server's TLS certificate, so no third parties such as CAs or root certificates are needed. MariaDB 11.8 was the first LTS version to ship with this capability for zero-configuration TLS.
Note that the zero-configuration TLS also works on older password authentication methods and does not require users to have PARSEC enabled. As PARSEC is not yet the default authentication method in MariaDB, it is recommended to enable it in installations that use zero-configuration TLS encryption to maximize the security of the TLS certificate validation.
Why the root user in MariaDB has no password and how it makes the database more secure
Relying on passwords for security is problematic, as there is always a risk that they could leak, and a malicious user could access the system using the leaked password. It is unfortunately far too common for database passwords to be stored in plaintext in configuration files that are accidentally committed into version control and published on GitHub and similar platforms. Every application or administrative password that exists should be tracked to ensure only people who need it know it, and rotated at regular intervals so that former employees and the like cannot keep using old passwords. This password management is complex and error-prone.
Replacing passwords with other authentication methods is always advisable when possible. On a database server, whoever installed the database by running e.g. apt install mariadb-server, and configured it with e.g. nano /etc/mysql/mariadb.cnf, already has full root access to the operating system, and asking them for a password to access the MariaDB database shell is moot, since they could circumvent any checks by directly accessing the files on the system anyway. Therefore, since version 10.4, MariaDB no longer requires the root user to enter a password when connecting locally; instead, it uses socket authentication to check whether the connecting user is the operating-system root user or equivalent (e.g. running sudo). This is an elegant way to get rid of a password that was unnecessary to begin with. As there is no root password anymore, the risk of an external user accessing the database as root with a leaked password is fully eliminated.
Note that socket authentication only works for local connections on the same server. If you want to access a MariaDB server remotely as the root user, you would need to configure a password for it first. This is not generally recommended, as explained in the next section.
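To see what this looks like in practice, you can inspect how the local root account authenticates — a quick check, assuming default settings on MariaDB 10.4 or later:

```sql
-- On a default MariaDB >= 10.4 install, the local root account
-- authenticates via the unix_socket plugin, not a password:
SHOW CREATE USER 'root'@'localhost';
```

With socket authentication in place, running sudo mariadb drops you straight into the database shell without any password prompt.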
Create separate database users for normal use and keep root for administrative use only
Out of the box a MariaDB installation is already secure by default, and only the local root user can connect to it. This account is intended for administrative use only, and for regular daily use you should create separate database users with access limited to the databases they need and the permissions required.
The most typical commands needed to create a new database for an app and a user the app can use to connect to the database would be the following:
CREATE DATABASE app_db;
CREATE USER 'app_user'@'%' IDENTIFIED BY 'your_secure_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
Alternatively, if you want to use the parsec authentication method, run this to create the user:
CREATE OR REPLACE USER 'app_user'@'%'
IDENTIFIED VIA parsec
USING PASSWORD('your_secure_password');
Note that the plugin auth_parsec is not enabled by default. If you see the error message ERROR 1524 (HY000): Plugin 'parsec' is not loaded, fix this by running INSTALL SONAME 'auth_parsec';.
In the CREATE USER statements, the @'%' means that the user is allowed to connect from any host. This needs to be defined, as MariaDB always checks permissions based on both the username and the remote IP address or hostname of the user, combined with the authentication method. Note that it is possible to have multiple user@remote combinations, and they can have different authentication methods. A user could, for example, be allowed to log in locally using the socket authentication and over the network using a password.
If you are running a custom application and you know exactly what permissions are sufficient for the database users, replace the ALL PRIVILEGES with a subset of privileges listed in the MariaDB documentation.
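For a typical CRUD-style application, such a subset might look like the following (the exact privilege list here is an illustrative assumption; adjust it to what your app actually does):

```sql
-- Replace the broad ALL PRIVILEGES grant with only what the app needs:
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'%';
-- Verify what the user ended up with:
SHOW GRANTS FOR 'app_user'@'%';
```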
For new permissions to take effect, restart the database or run FLUSH PRIVILEGES.
Allow MariaDB to accept remote connections and enforce TLS
Using the above 'app_user'@'%' is not enough on its own to allow remote connections to MariaDB. The MariaDB server also needs to be configured to listen on a network interface to accept remote connections. As MariaDB is secure by default, it only accepts connections from localhost until the administrator updates its configuration. On a typical Debian/Ubuntu system, the recommended way is to drop a new custom config in e.g. /etc/mysql/mariadb.conf.d/99-server-customizations.cnf, with the contents:
[mariadbd]
# Listen for connections from anywhere
bind-address = 0.0.0.0
# Only allow TLS encrypted connections
require-secure-transport = on
For settings to take effect, restart the server with systemctl restart mariadb. After this, the server will accept connections on any network interface. If the system is using a firewall, the port 3306 would additionally need to be allow-listed.
To confirm that the settings took effect, run e.g. mariadb -e "SHOW VARIABLES LIKE 'bind_address';", which should now show 0.0.0.0.
When allowing remote connections, it is important to also always define require-secure-transport = on to enforce that only TLS-encrypted connections are allowed. If the server is running MariaDB 11.8 and the clients are also MariaDB 11.8 or newer, no additional configuration is needed thanks to MariaDB automatically providing TLS certificates and appropriate certificate validation in recent versions.
On older long-term-supported versions of the MariaDB server, one had to manually create the certificates, configure the ssl_key, ssl_cert and ssl_ca values on the server, and distribute the certificate to the clients as well, which was cumbersome; it is good that this is no longer required. In MariaDB 11.8 the only additional related setting that might still be worth configuring is tls_version = TLSv1.3 to ensure only the latest TLS protocol version is used.
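Once connected, you can also confirm from within a session that TLS is actually in effect, using standard status variables:

```sql
-- Non-empty values confirm the session is TLS-encrypted:
SHOW SESSION STATUS LIKE 'Ssl_version';
SHOW SESSION STATUS LIKE 'Ssl_cipher';
```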
Finally, test connections to ensure they work and to confirm that TLS is used by running e.g.:
--------------
mariadb from 11.8.3-MariaDB, client 15.2 for debian-linux-gnu (x86_64)
...
Current user: app_user@192.168.1.66
SSL: Cipher in use is TLS_AES_256_GCM_SHA384, cert is OK
...
--------------
If running a Debian/Ubuntu system, see the bundled README with zcat /usr/share/doc/mariadb-server/README.Debian.gz to read more configuration tips.
Should TLS encryption be used also on internal networks?
If a database server and app are running on the same private network, the chances that the connection gets eavesdropped on or man-in-the-middle attacked by a malicious user are low. However, the risk is not zero, and if it happens, it can be difficult to detect, or to prove that it didn't happen. The benefit of using end-to-end encryption is that both the database server and the client can validate the certificates and keys used, log it, and later have logs audited to prove that connections were indeed encrypted and show how they were encrypted.
If all the computers on an internal network already have centralized user account management and centralized log collection that includes all database sessions, reusing existing SSH connections, SOCKS proxies, dedicated HTTPS tunnels, point-to-point VPNs, or similar solutions might also be a practical option. Note that the zero-configuration TLS only works with password validation methods. This means that systems configured to use PAM or Kerberos/GSSAPI can't use it, but again those systems are typically part of a centrally configured network anyway and are likely to have certificate authorities and key distribution or network encryption facilities already set up.
In a typical software app stack, however, the simplest solution is often the best, and I recommend DBAs use the end-to-end TLS encryption in MariaDB 11.8 in most cases.
Hopefully with these tips you can enjoy having your MariaDB deployments both simpler and more secure than before!
My trip to pgday.at started Wednesday at the airport in Düsseldorf. I was there on time, and the plane took off with an estimated flight time of about 90 minutes. About half an hour into the flight, the captain announced that we would be landing in 30 minutes - in Düsseldorf, because of some unspecified technical problems. Three hours after the original departure time, the plane made another attempt, and we made it to Vienna.
On the plane I had already met Dirk Krautschick, who had the great honor of bringing Slonik (in the form of a big extra bag) to the conference, and we took a taxi to the hotel. In the taxi, the next surprise happened: Hans-Jürgen Schönig unfortunately couldn't make it to the conference, and his talks had to be replaced. I had submitted a talk to the conference, but it was not accepted, nor queued on the reserve list. But two speakers on the reserve list had cancelled, and another was already giving a talk in parallel to the slot that had to be filled, so Pavlo messaged me asking if I could give the talk - well, of course I could. I didn't have any specific plans for the evening yet, but suddenly I was a speaker, so I joined the folks going to the speakers' dinner at the Wiener Grill Haus two corners from the hotel. It was a very nice evening, chatting with a lot of folks from the PostgreSQL community that I had not seen for a while.
Thursday was the conference day. The hotel was a short walk from the venue, the Apothekertrakt in Vienna's Schloss Schönbrunn. The courtyard was already filled with visitors registering for the conference. Since I originally didn't have a talk scheduled, I had signed up to volunteer for a shift as room host. We got our badges and swag bags, and I changed into the "crew" T-shirt.
The opening and sponsor keynotes took place in the main room, the Orangerie. We were over 100 people in the room, but apparently still not enough to really fill it, so the acoustics with some echo made it a bit difficult to understand everything. I hope that part can be improved for next time (which is planned to happen!).
I was host for the Maximilian room, where the sponsor sessions were scheduled in the morning. The first talk was by our Peter Hofer, also replacing the absent Hans. He had only joined the company at the beginning of the same week, and was already tasked to give Hans' talk on PostgreSQL as Open Source. Of course he did well.
Next was Tanmay Sinha from Readyset. They are building a system that caches expensive SQL queries and selectively invalidates the cache whenever any data used by these queries changes. Whenever actually fixing the application isn't feasible, that system looks like an interesting alternative to manually maintaining materialized views, or perhaps using pg_ivm.
After lunch, I went to Federico Campoli's Mastering Index Performance, but really spent the time polishing the slides for my talk. I had given the original version at pgconf.de in Berlin in May, and the slides were still in German, so I had to do some translating. Luckily, most slides are just git commit messages, so the effort was manageable.
The next slot was mine, talking about Modern VACUUM. I started with a recap of MVCC, vacuum and freezing in PostgreSQL, and then showed how over the past years, the system was updated to be more focused (the PostgreSQL 8.4 visibility map tells vacuum which pages to visit), faster (12 made autovacuum run 10 times faster by default), less scary (14 has an emergency mode where freezing switches to maximum speed if it runs out of time; 16 makes freezing create much less WAL) and more performant (17 makes vacuum use much less memory). In summary, there is still room for the DBA to tune some knobs (for example, the default autovacuum_max_workers=3 isn't much), but the vacuum default settings are pretty much okay these days for average workloads. Specific workloads still have a whopping 31 postgresql.conf settings at their disposal just for vacuum.
Right after my talk, there was another vacuum talk: When Autovacuum Met FinOps by Mayuresh Bagayatkar. He added practical advice on tuning the performance in cloud environments. Luckily, our contents did not overlap.
After the coffee break, I was again room host, now for Floor Drees and Contributing to Postgres beyond code. She presented the various ways in which PostgreSQL is more than just the code in the Git repository: translators, web site, system administration, conference organizers, speakers, bloggers, advocates. As a member of the PostgreSQL Contributors Committee, I could only approve, and we should cooperate more closely in the future to make people's contributions to PostgreSQL more visible and give them the recognition they deserve.
That was already the end of the main talks and everyone rushed to the Orangerie for the lightning talks. My highlight was the Sheldrick Wildlife Trust. Tickets for the conference had included the option to donate for the elephants in Kenya, and the talk presented the trust's work in the elephant orphanage there.
After the conference had officially closed, there was a bonus track: the Celebrity DB Deathmatch, aptly presented by Boriss Mejias. PostgreSQL, MongoDB, CloudDB and Oracle were competing for the grace of a developer. MongoDB couldn't stand the JSON workload, CloudDB was dismissed for handing out new invoices all the time, and Oracle had even brought a lawyer to the stage, but then lost control over a literally 10 meter long contract with too much fine print. In the end, PostgreSQL (played by Floor) won the love of the developer (played by our Svitlana Lytvynenko).
The day closed with a gathering at the Brandauer Schlossbräu - just at the other end of the castle grounds, but still a 15-minute walk away. We enjoyed good beer and Kaiserschmarrn. I went back to the hotel a bit before midnight, but some stayed quite a bit longer.
On Friday, my flight back was only in the afternoon, so I spent some time in the morning in the Technikmuseum just next to the hotel, enjoying some old steam engines and a live demonstration of Tesla coils. This time, the flight actually went to the destination, and I was back in Düsseldorf in the late afternoon.
In summary, pgday.at was a very nice event in a classy location. Thanks to the organizers for putting in all the work - and next year, Hans will hopefully be present in person!
The post A Trip To Vienna With Surprises appeared first on CYBERTEC PostgreSQL Services & Support.
Just wanted to share that I enjoy reading George V. Neville-Neil's Kode Vicious
column,
which regularly appears in some of the ACM's publications I follow, such as
ACM Queue or
Communications.
Today I was very pleasantly surprised while reading the column titled
Can't we have nice things.
Kode Vicious answers a question on why computing has nothing comparable to
the beauty of ancient physics laboratories turned into museums
(e.g. Faraday's laboratory) by giving a great hat tip to a project that stemmed
from Debian, and where many of my good Debian friends spend a lot of their
energies: Reproducible Builds. KV says:
Once the proper measurement points are known, we want to constrain the
system such that what it does is simple enough to understand and easy to
repeat. It is quite telling that the push for software that enables
reproducible builds only really took off after an embarrassing widespread
security issue ended up affecting the entire Internet. That there had
already been 50 years of software development before anyone thought that
introducing a few constraints might be a good idea is, well, let's just
say it generates many emotions, none of them happy, fuzzy ones.
Yes, KV is a seasoned free software author. But I found it heartwarming
that the Reproducible Builds project is mentioned without needing to
introduce it (assuming familiarity across the computing industry and
academia), recognized as game-changing as we understood it would be over
ten years ago when it was first announced, and enabling of beauty in
computing.
Congratulations to all of you who have made this possible!
Preparing for setup.py install deprecation, by Colin Watson
setuptools upstream will be removing the setup.py install command
on 31 October. While this may not trickle down immediately into Debian, it does
mean that in the near future nearly all Python packages will have to use
pybuild-plugin-pyproject (though they don't necessarily have to use
pyproject.toml; this is just a question of how the packaging runs the build
system). Some of the Python team talked about this a bit at DebConf, and Colin
volunteered to write up some notes
on cases where this isn t straightforward. This page will likely grow as the
team works on this problem.
Salsa CI, by Santiago Ruano Rincón
Santiago fixed some pending issues in the MR that moves the pipeline to sbuild+unshare,
and after several months, Santiago was able to mark the MR as ready. Part of the
recent fixes include handling external repositories,
honoring the RELEASE autodetection from d/changelog
(thanks to Ahmed Siam for spotting the main cause of the issue), and fixing a
regression about the apt resolver for *-backports releases.
Santiago is currently waiting for a final review and approval from other members
of the Salsa CI team before being able to merge it. Thanks to all the folks who
have helped testing the changes or provided feedback so far. If you want to test
the current MR, you need to include the following pipeline definition in your
project's CI config file:
As a reminder, this MR will make the Salsa CI pipeline build packages in a way
more similar to how they are built by the official Debian builders. This will also
save some resources, since the default pipeline will have one stage fewer (the
provisioning stage), and will make it possible for more projects to be built on
salsa.debian.org (including large projects and
those from the OCaml ecosystem), etc. See the different issues being fixed in
the MR description.
Debian 13 trixie release, by Emilio Pozuelo Monfort
On August 9th, Debian 13 trixie was released, building on two years worth of
updates and bug fixes from hundreds of developers. Emilio helped coordinate the
release, communicating with several teams involved in the process.
DebConf 26 Site Visit, by Stefano Rivera
Stefano visited Santa Fe, Argentina, the site for DebConf 26
next year. The aim of the visit was to help build a local team and see the
conference venue first-hand. Stefano and Nattie represented the DebConf
Committee. The local team organized Debian meetups in Buenos Aires and Santa Fe,
where Stefano presented a talk
on Debian and DebConf. Venues were scouted
and the team met with the university management and local authorities.
Miscellaneous contributions
Raphaël updated tracker.debian.org after the
trixie release to add the new forky release in the set of monitored
distributions.
He also reviewed and deployed the work of Scott Talbert
showing open merge requests from salsa in the action needed panel.
Raphaël reviewed some DEP-3 changes
to modernize the embedded examples in light of the broad git adoption.
Raphaël configured new workflows
on debusine.debian.net to upload to trixie and
trixie-security, and officially announced the service
on debian-devel-announce, inviting Debian developers to try the service for
their next upload to unstable.
Carles created a merge request
for django-compressor upstream to fix an error when concurrent node processing
happened. This will allow removing a workaround
added in openstack-dashboard and avoid the same bug in other projects that use
django-compressor.
Carles prepared a system to detect packages that Recommend packages which
don't exist in unstable. He processed 16% of the reports (either reporting
them or ignoring them due to mis-detected or temporary problems) and will
continue next month.
Carles got familiar and gave feedback for the freedict-wikdict package.
Planned contributions with the maintainer to improve the package.
Helmut responded to queries related to /usr-move.
Helmut adapted crossqa.d.n to the release of
trixie.
Helmut diagnosed sufficient failures in rebootstrap
to make it work with gcc-15.
Faidon discovered that the Multi-Arch hinter would emit confusing hints about
:any annotations. Helmut identified the root cause to be the handling of
virtual packages and fixed it.
Colin upgraded about 70 Python packages to new upstream versions, which is
around 10% of the backlog; this included a complicated Pydantic upgrade in
collaboration with the Rust team.
Colin fixed
a bug in debbugs that caused incoming emails to bugs.debian.org with certain
header contents to go missing.
Thorsten uploaded sane-airscan, which was already in experimental, to unstable.
Thorsten created a script to automate the upload of new upstream versions of
foomatic-db. The database contains information about printers and regularly gets
an update. Now it is possible to keep the package more up to date in Debian.
Stefano prepared updates to almost all of his packages that had new versions
waiting to upload to unstable. (beautifulsoup4, hatch-vcs, mkdocs-macros-plugin,
pypy3, python-authlib, python-cffi, python-mitogen, python-pip, python-pipx,
python-progress, python-truststore, python-virtualenv, re2, snowball, soupsieve).
Stefano uploaded two new python3.13 point releases to unstable.
Stefano updated distro-info-data in stable releases, to document the trixie
release and expected EoL dates.
Stefano did some debian.social sysadmin work (keeping up quotas with growing
databases and filesystems).
Stefano supported the Debian treasurers in processing some of the DebConf 25
reimbursements.
Lucas uploaded ruby3.4 to experimental. It was already approved by FTP masters.
Lucas uploaded ruby-defaults to experimental to add support for ruby3.4. It
will allow us to start triggering test rebuilds and catch any FTBFS with ruby3.4.
Lucas did some administrative work for Google Summer of Code (GSoC) and
replied to some queries from mentors and students.
Anupa helped to organize release parties for Debian 13 and Debian Day events.
Anupa did the live coverage for the Debian 13 release and prepared the Bits
post for the release announcement and 32nd Debian Day as part of the Debian
Publicity team.
Anupa attended a Debian Day event
organized by FOSS club SSET as a speaker.
Release 0.2.9 of our RcppSMC package arrived at
CRAN today. RcppSMC
provides Rcpp-based bindings to R for the Sequential Monte Carlo
Template Classes (SMCTC) by Adam
Johansen, described in his JSS article. Sequential
Monte Carlo is also referred to as Particle Filter
in some contexts. The package now also features the Google Summer of Code
work by Leah South
in 2017, and by Ilya Zarubin in
2021.
This release is again entirely internal. It updates the code for the
just-released RcppArmadillo
15.0.2-1, in particular opting into Armadillo 15.0.2. It also makes one
small tweak to the continuous integration setup, switching to the
r-ci action.
The release is summarized below.
Changes in RcppSMC
version 0.2.9 (2025-09-09)
Adjust to RcppArmadillo 15.0.* by
setting ARMA_USE_CURRENT and updating two expressions from
deprecated code
Rely on r-ci GitHub Action which includes the bootstrap
step
I'm something of a filesystem geek, I guess. I first wrote about ZFS on Linux 14 years ago, and even before I used ZFS, I had used ext2/3/4, jfs, reiserfs, xfs, and no doubt some others.
I've also used btrfs. I last posted about it in 2014, when I noted it has some advantages over ZFS, but also some drawbacks, including a lot of kernel panics.
Since that comparison, ZFS has gained trim support and btrfs has stabilized. The btrfs status page gives you an accurate idea of what is good to use on btrfs.
Background: Moving towards ZFS and btrfs
I have been trying to move everything away from ext4 and onto either ZFS or btrfs. There are generally several reasons for that:
The checksums for every block help detect potential silent data corruption
Instant snapshots make consistent backups of live systems a lot easier, and without the hassle and wasted space of LVM snapshots
Transparent compression and dedup can save a lot of space in storage-constrained environments
For any machine with at least 32GB of RAM (plus my backup server, which has only 8GB), I run ZFS. While it lacks some of the flexibility of btrfs, it has polish. zfs list -o space shows useful space accounting. zvols can back VMs. With my project simplesnap, I can easily send hourly backups with ZFS, and I choose to send them over NNCP in most cases.
I have a few VMs in the cloud (running Debian, of course) that I use to host things like this blog, my website, my gopher site, the quux NNCP public relay, and various other things.
In these environments, storage space can be expensive. For that matter, so can RAM. ZFS is RAM-hungry, so that rules it out. I've been running btrfs in those environments for a few years now, and it's worked out well. I do async dedup, lzo or zstd compression depending on the needs, and the occasional balance and defrag.
Filesystems on the Raspberry Pi
I run Debian trixie on all my Raspberry Pis; not Raspbian or Raspberry Pi OS, for a number of reasons. My 8-yr-old uses a Raspberry Pi 400 as her primary computer and loves it! She doesn't do web browsing, but plays Tuxpaint, some old DOS games like Math Blaster via dosbox, and uses Thunderbird for a locked-down email account.
But it was SLOW. Just really, glacially, slow, especially for Thunderbird.
My first step to address that was to get a faster MicroSD card to hold the OS. That was a dramatic improvement. It's still slow, but a lot faster.
Then, I thought, maybe I could use btrfs with LZO compression to reduce the amount of I/O and speed things up further? Analysis showed things were mostly slow due to I/O, not CPU, constraints.
The conversion
Rather than use the btrfs in-place conversion from ext4, I opted to dar it up (like tar), run mkfs.btrfs on the SD card, then unpack the archive back onto it. Easy enough, right?
Well, not so fast. The MicroSD card is 128GB, and the entire filesystem is 6.2GB. But after unpacking 100MB onto it, I got an out of space error.
btrfs has this notion of block groups. By default, each block group is dedicated to either data or metadata. btrfs fi df and btrfs fi usage will show you details about the block groups.
btrfs allocates block groups greedily (the ssd_spread mount option I use may have exacerbated this). What happened was that it allocated almost the entire drive to data block groups, trying to spread the data across it. It so happened that dar archived some larger files first (maybe /boot), so btrfs was allocating data and metadata block groups assuming few large files. But then it started unpacking one of the directories in /usr with lots of small files (maybe /usr/share/locale). It quickly filled up the metadata block group, and since the entire SD card had been allocated to different block groups, I got ENOSPC.
Deleting a few files and running btrfs balance resolved it; now it allocated 1GB to metadata, which was plenty. I re-ran the dar extract and now everything was fine. See more details on btrfs balance and block groups.
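For reference, a minimal sketch of the commands involved; the usage threshold here is an illustrative value, not what I used, so tune it to your filesystem:

```shell
# Show how space is split into data/metadata block groups:
btrfs filesystem df /
btrfs filesystem usage /

# Reclaim mostly-empty data block groups so metadata can grow;
# -dusage=25 only touches data block groups that are <= 25% used:
btrfs balance start -dusage=25 /
```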
This was the only btrfs problem I encountered.
Benchmarks
I timed two things prior to switching to btrfs: how long it takes to boot (measured from the moment I turn on the power until the moment the XFCE login box is displayed), and how long it takes to start Thunderbird.
After switching to btrfs with LZO compression, somewhat to my surprise, both measures were exactly the same!
Why might this be?
It turns out that SD cards are understood to be pathologically bad at random read performance. Boot and Thunderbird both are likely doing a lot of small random reads, not large streaming reads. Therefore, it may be that even though I have reduced the total I/O needed, the impact is insubstantial because the real bottleneck is the seeks across the disk.
Still, I gain the better backup support and silent data corruption prevention, so I kept btrfs.
SSD mount options and MicroSD endurance
btrfs has several mount options specifically relevant to SSDs. Aside from the obvious trim support, they are ssd and ssd_spread. The documentation on this is vague and my attempts to learn more about it found a lot of information that was outdated or unsubstantiated folklore.
Some reports suggest that older SSDs will benefit from ssd_spread, but that it may have no effect or even a harmful effect on newer ones, and can at times cause fragmentation or write amplification. I could find nothing to back this up, though. And it seems particularly difficult to figure out what kind of wear leveling SSD firmware does. MicroSD firmware is likely to be on the less-advanced side, but still, I have no idea what it might do. In any case, with btrfs not updating blocks in-place, it should be better than ext4 in the most naive case (no wear leveling at all) but may have somewhat more write traffic for the pathological worst case (frequent updates of small portions of large files).
One anecdotal report I read (and can't find anymore) was from someone who had set up a sort of torture test for SD cards; it reported that ext4 lasted a few weeks or months before the MicroSDs failed, while btrfs lasted years.
If you are looking for a MicroSD card, by the way, The Great MicroSD Card Survey is a nice place to start.
For longevity: I mount all my filesystems with noatime already, so I continue to recommend that. You can also consider limiting the log size in /etc/systemd/journald.conf and running a daily fstrim (a scheduled fstrim may be more successful than live trims across all filesystems).
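As a sketch of those tweaks — the size cap and mount options below are illustrative examples, not recommendations for every system:

```ini
# /etc/systemd/journald.conf -- cap journal growth on the SD card:
[Journal]
SystemMaxUse=64M
```

Pair that with a noatime mount (e.g. noatime,compress=lzo in /etc/fstab for btrfs), and enable scheduled trims with systemctl enable --now fstrim.timer on systemd systems.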
Conclusion
I've been pretty pleased with btrfs. The concerns I have today relate to block groups and maintenance (periodic balance and maybe a periodic defrag). I'm not sure I'd be ready to say "put btrfs on the computer you send to someone that isn't Linux-savvy", because the chances of running into issues are higher than with ext4. Still, for people that have some tech savvy, btrfs can improve reliability and performance in other ways.
Here on a summer night in the grass and lilac smell
Drunk on the crickets and the starry sky,
Oh what fine stories we could tell
With this moonlight to tell them by.
A summer night, and you, and paradise,
So lovely and so filled with grace,
Above your head, the universe has hung its lights,
And I reach out my hand and touch your face.
I sit outside today, at the picnic table on our side porch. I was called out here; in late summer, the cicadas and insects of the plains are so loud that I can hear them from inside our old farmhouse.
I sit and hear the call and response of buzzing cicadas, the chirp of crickets during their intermission. The wind rustles off and on through the treetops. And now our old cat has heard me, and she comes over, spreading tan cat hair across my screen. But I don't mind; I hear her purr as she comes over to relax nearby.
Aside from the gentle clack of my keyboard as I type, I hear no sounds of humans. Occasionally I hear the distant drone of a small piston airplane, and sometimes the faint horn of a train, 6 miles away.
As I look up, I see grass, the harvested wheat field, the trees, and our gravel driveway. Our road is on the other side of a hill. I see no evidence of it from here, but I know it's there. Maybe 2 or 3 vehicles will pass on a day like today; if they're tall delivery trucks, I'll see their roof glide silently down the road, and know the road is there. The nearest paved road is several miles away, so not much comes out here.
I reflect on those times years ago, when this was grandpa's house, and the family would gather on Easter. Grandpa hid not just Easter eggs, but Easter bags all over the yard. This yard. Here's the tree that had a nice V-shaped spot to hide things in; there's the other hiding spot.
I reflect on the wildlife. This afternoon, it's the insects that I hear. On a foggy, cool, damp morning, the birds will be singing from all the trees, the fog enveloping me with unseen musical joy. On a quiet evening, the crickets chirp and the coyotes howl in the distance.
Now the old cat has found my lap. She sits there purring, tail swishing. 12 years ago when she was a kitten, our daughter hadn't yet been born. She is old and limps, and is blind in one eye, but beloved by all. Perfectly content with life, she stretches and relaxes.
I have visited many wonderful cities in this world. I've seen Aida at the Metropolitan Opera, taken trains all over Europe, wandered the streets of San Francisco and Brussels and Lindos, visited the Christmas markets in the lightly-snowy evenings in Regensburg, felt the rumble of the Underground beneath me in London. But rarely do the city people come here.
Oh, some of them think they've visited the country. But no, my friends, no; if you don't venture beyond the blacktop roads, you've not experienced it yet. You've not gone to a restaurant "in town", recognized by several old friends. You've not stopped by the mechanic, the third generation of that family fixing cars that belong to yours, who more often than not tells you that you don't need to fix that something just yet. You've not sat outside, in this land where regular people each live in their own quiet Central Park. You've not seen the sunset, with its majestic reds and oranges and purples and blues and grays, stretching across the giant IMAX dome of the troposphere, suspended above the hills and trees to the west. You've not visited the grocery store, with your car unlocked and keys in the ignition, unconcerned about vehicle theft. You've not struggled with words when someone asks "what city are you from" and you lack the vocabulary to help them understand what it means when you say "none".
Out there in the land of paved roads and bright lights, the problems of the world churn. The problems near and far: physical and mental health challenges among people we know, global problems with politics and climate.
But here, this lazy summer afternoon, I forget about the land of the paved roads and bright lights. As it should be; they've forgotten the land of the buzzing cicadas and muddy roads.
I believe in impulse, in all that is green,
In the foolish vision that comes out true.
I believe that all that is essential is unseen,
And for this lifetime, I believe in you.
All of the lovers and the love they made:
Nothing that was between them was a mistake.
All that we did for love's sake,
Was not wasted and will never fade.
All who have loved will be forever young
And walk in grandeur on a summer night
Along the avenue.
They live in every song that is sung,
In every painting of pure light,
In every pas de deux.
O love that shines from every star,
Love reflected in the silver moon;
It is not here, but it is not far.
Not yet, but it will be here soon.
No two days are alike. But this day comes whenever I pause to let it.
May you find the buzzing cicadas and muddy roads near you, wherever you may be.
Poetry from A Summer Night by Garrison Keillor