Ancestral Night is a far-future space opera novel and the first of
a series. It shares a universe with Bear's earlier
Jacob's Ladder trilogy, and there is a passing
reference to the events of Grail that
would be a spoiler if you put the pieces together, but it's easy to miss.
You do not need to read the earlier series to read this book (although
it's a good series and you might enjoy it).
Halmey Dz is a member of the vast interstellar federation called the
Synarche, which has put an end to war and other large-scale anti-social
behavior through a process called rightminding. Every person has a neural
implant that can serve as supplemental memory, off-load some thought
processes, and, crucially, regulate neurotransmitters and hormones to help
people stay on an even keel. It works, mostly.
One could argue Halmey is an exception. Raised in a clade that took
rightminding to an extreme of suppression of individual personality into a
sort of hive mind, she became involved with a terrorist during her legally
mandated time outside of her all-consuming family before she could make an
adult decision to stay with them (essentially a rumspringa). The
result was a tragedy that Halmey doesn't like to think about, one that's
left deep emotional scars. But Halmey herself would argue she's not an
exception: She's put her history behind her, found partners that she
trusts, and is a well-adjusted member of the Synarche.
Eventually, I realized that I was wasting my time, and if I wanted to
hide from humanity in a bottle, I was better off making it a titanium
one with a warp drive and a couple of carefully selected companions.
Halmey does salvage: finding ships lost in white space and retrieving
them. One of her partners is Connla, a pilot originally from a somewhat
atavistic world called Spartacus. The other is their salvage tug.
The boat didn't have a name.
He wasn't deemed significant enough to need a name by the
authorities and registries that govern such things. He had a
registration number, 657-2929-04, Human/Terra, and he had a class,
salvage tug, but he didn't have a name.
Officially.
We called him Singer. If Singer had an opinion on the issue,
he'd never registered it, but he never complained. Singer was the
shipmind as well as the ship, or at least, he inhabited the ship's
virtual spaces the same way we inhabited the physical ones, but my
partner Connla and I didn't own him. You can't own a sentience in
civilized space.
As Ancestral Night opens, the three of them are investigating a tip
about a white space anomaly well off the beaten path. They thought it might
be a lost ship that failed a transition. What they find instead is a dead
Ativahika and a mysterious ship equipped with artificial gravity.
The Ativahikas are a presumed sentient race of living ships that are on
the most alien outskirts of the Synarche confederation. They don't
communicate, at least so far as Halmey is aware. She also wasn't aware
they died, but this one is thoroughly dead, next to an apparently
abandoned ship of unknown origin with a piece of technology beyond the
capabilities of the Synarche.
The three salvagers get very little time to absorb this scene before they
are attacked by pirates.
I have always liked Bear's science fiction better than her fantasy, and
this is no exception. This was great stuff. Halmey is a talkative,
opinionated infodumper, which is a great first-person protagonist to have
in a fictional universe this rich with delightful corners. There are some
Big Dumb Object vibes (one of my favorite parts of salvage stories), solid
character work, a mysterious past that has some satisfying heft once it's
revealed, and a whole lot more moral philosophy than I was expecting from
the setup. All of it is woven together with experienced skill,
unsurprising given Bear's long and prolific career. And it's full of
delightful world-building bits: Halmey's afthands (a surgical adaptation
for zero gravity work) and grumpiness at the sheer amount of
gravity she has to deal with over the course of this book, the
Culture-style ship names, and a faster-than-light travel system that of
course won't pass physics muster but provides a satisfying quantity of
hooky bits for plot to attach to.
The backbone of this book is an ancient artifact mystery crossed with a
murder investigation. Who killed the Ativahika? Where did the gravity
generator come from? Those are good questions with interesting answers.
But the heart of the book is a philosophical conflict: What are the
boundaries between identity and society? How much power should society
have to reshape who we are? If you deny parts of yourself to fit in with
society, is this necessarily a form of oppression?
I wrote a couple of paragraphs of elaboration, and then deleted them; on
further thought, I don't want to give any more details about what Bear is
doing in this book. I will only say that I was not expecting this level of
thoughtfulness about a notoriously complex and tricky philosophical topic
in a full-throated adventure science fiction novel. I think some people
may find the ending strange and disappointing. I loved it, and weeks after
finishing this book I'm still thinking about it.
Ancestral Night has some pacing problems. There is a long stretch
in the middle of the book that felt repetitive and strained, where Bear
holds the reader at a high level of alert and dread for long enough that I
found it enervating. There are also a few political cheap shots where Bear
picks the weakest form of an opposing argument instead of the strongest.
(Some of the cheap shots are rather satisfying, though.) The dramatic arc
of the book is... odd, in a way that I think was entirely intentional
given how well it works with the thematic message, but which is also
unsettling. You may not get the catharsis that you're expecting.
But all of this serves a purpose, and I thought that purpose was
interesting. Ancestral Night is one of those books that I
liked more a week after I finished it than I did when I finished it.
Epiphanies are wonderful. I'm really grateful that our brains do so
much processing outside the line of sight of our consciousnesses. Can
you imagine how downright boring thinking would be if you had to go
through all that stuff line by line?
Also, for once, I think Bear hit on exactly the right level of description
rather than leaving me trying to piece together clues and hope I
understood the plot. It helps that Halmey loves to explain things, so
there are a lot of miniature infodumps, but I found them interesting and a
satisfying throwback to an earlier style of science fiction that focused
more on world-building than on interpersonal drama. There is drama,
but most of it is internal, and I thought the balance was about right.
This is solid, well-crafted work and a good addition to the genre. I am
looking forward to the rest of the series.
Followed by Machine, which shifts to a different protagonist.
Rating: 8 out of 10
Yesterday, I had my first successful AI coding
experience.
I've used AI coding tools before and come away disappointed. The
results were underwhelming: low-quality code, inconsistent abstraction
levels, and subtle bugs that take longer to fix than it would take to
write the whole thing from scratch.
Those problems haven't vanished. The code quality this time was still
disappointing. As I asked the AI to refine its work, it would randomly
drop important constraints or refactor things in unhelpful ways. And
yet, this experience was different and genuinely valuable for
two reasons.
The first benefit was the obvious one: the AI helped me get over the
blank-page problem. It produced a workable skeleton for
the project: imperfect, but enough to start building on.
The second benefit was more surprising. I was working on a problem in
odds-ratio preference optimization: specifically,
finding a way to combine similar examples in datasets for AI training. I
wanted an ideal algorithm, one that extracted every ounce of
value from the data.
The AI misunderstood my description. Its first attempt was laughably
simple: it just concatenated two text strings. Thanks, but I can call
strcat or the Python equivalent without help.
However, the second attempt was different. It was still not what I
had asked for, but as I thought about it, I realized it was good enough.
The AI had created a simpler algorithm that would probably solve my
problem in practice.
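The post never shows the algorithm itself, but a "good enough" combiner in this spirit might look like the following sketch. Everything here is hypothetical and mine, not the author's: the normalization key, the example shape (a prompt with a list of completions), and the function names are illustrative assumptions.

```python
def normalize(text: str) -> str:
    """Cheap similarity key: lowercase and collapse whitespace.
    (A hypothetical stand-in for a real similarity measure.)"""
    return " ".join(text.lower().split())

def combine_similar(examples: list[dict]) -> list[dict]:
    """Merge examples whose prompts normalize to the same key,
    keeping every distinct completion. A simple, non-optimal pass
    instead of exhaustive pairwise comparison."""
    groups: dict[str, dict] = {}
    for ex in examples:
        key = normalize(ex["prompt"])
        if key in groups:
            groups[key]["completions"].extend(ex["completions"])
        else:
            groups[key] = {"prompt": ex["prompt"],
                           "completions": list(ex["completions"])}
    return list(groups.values())
```

The point of the sketch is the trade-off: a single linear pass with a crude key will miss some merges an "ideal" algorithm would find, but in practice it may capture most of the value.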
In trying too hard to make the algorithm perfect, I'd overlooked that
the simpler approach might be the right one. The AI, by
misunderstanding, helped me see that.
This experience reminded me of something that happened years ago when
I was mentoring a new developer. They came to me asking how to solve a
difficult problem. Rather than telling them it was impossible, I
explained what would be required: a complex authorization framework,
intricate system interactions, and a series of political and
organizational hurdles that would make deployment nearly impossible.
A few months later, they returned and said they'd found a solution. I
was astonished until I looked more closely. What they'd built wasn't the
full, organization-wide system I had envisioned. Instead, they'd
reframed the problem. By narrowing the scope, reducing the need for
global trust and deep integration, they'd built a local solution that
worked well enough within their project.
They succeeded precisely because they didn't see all the
constraints I did. Their inexperience freed them from assumptions that
had trapped me.
That's exactly what happened with the AI. It didn't know which
boundaries not to cross. In its simplicity, it found a path forward that
I had overlooked.
My conclusion isn't that AI coding is suddenly great. It's that
working with someone, or something, that thinks differently can
open new paths forward. Whether it's an AI, a peer, or a less
experienced engineer, that collaboration can bring fresh perspectives
that challenge your assumptions and reveal simpler, more practical ways
to solve problems.
Since it's spooky season, let me present to you the FrankenKeyboard!
8bitdo retro keyboard
For some reason I can't fathom, I was persuaded into buying an
8bitdo retro mechanical keyboard.
It was very reasonably priced, and has a few nice fun features:
built-in bluetooth and 2.4GHz wireless (with the supplied dongle);
colour scheme inspired by the Nintendo Famicom; fun to use knobs
for volume control; some basic macro support; and funky oversized
mashable macro keys (which work really well as "Copy" and "Paste").
The 8bitdo keyboards come with switch-types I had not previously
experienced: Kailh Box White v2. I'm used to Cherry MX Reds, but
I loved the feel of the Box White v2s. The 8bitdo keyboards all
have hot-swappable key switches.
It's relatively compact (comes without a numpad), but still larger
than my TEX Shura, which (at home) is my daily driver. I also
miss the trackpoint mouse on the Shura. Finally, the 8bitdo model
I bought has American ANSI key layout, which I can tolerate but is
not as nice as ISO. I later learned that they have a limited range
of ISO-layout keyboards too, but not (yet) in the Famicom colour
scheme I'd bought.
DIY Shura
My existing Shura's key switches are soldered on and can't be swapped
out. But I really preferred the Kailh white switches.
I decided to buy a second Shura, this time as a "DIY kit" which
accepts hot-swappable switches. I then moved the Kailh Box White v2
switches over from the 8bitdo keyboard.
keycaps
Part of justifying buying the DIY kit was the possibility that I could sell on
my older Shura with the Cherry MX Red switches. My existing Shura's key caps
are for the ISO-GB layout and have their legends printed onto them. After three
years the legends have faded in a few places.
The DIY kit comes with a set of ABS "double-shot" key caps (where the
key legends are plastic rather than printed). They look a lot nicer, but
I don't look at my keys. I'm considering applying the new, pristine key
caps to the old Shura board, to make it more attractive to buyers. One
problem is I'm not sure the new set of caps includes the ISO-UK specific
ones. Potential buyers might prefer to have used caps
with the correct legends rather than pristine ones which are mislabelled.
franken keyboard
Given I wasn't going to use the new key cap set, I borrowed most of the caps
from the 8bitdo keyboard. I had to retain the G, H and B keys from my older
Shura as they are specially molded to leave space for the trackpoint, and a
couple of the modifier keys which weren't the right size. Hence the odd look!
(It needs some tweaking. That left-ALT looks out of place. It may be that the
8bitdo caps are temporary. Left "cmd" is really Fn, and "Caps lock" is really
"Super". The right-hand red dot is a second "Super".)
Since taking the photo I've removed the "stabilisers" under the right-shift
and backspace keys, in order to squeeze a couple more keys in their place.
The new keycap set includes a regular-sized "BS" key, as the JIS keyboard
layout has a regular-sized backspace. (Everyone should have a BS key in my
opinion.)
I plan to map my new keys to "Copy" and "Paste" actions following the advice in
this article.
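I haven't settled on the exact method yet, but one common approach under X11 is to assign the spare keys the dedicated XF86Copy and XF86Paste keysyms, which most toolkits already understand. A minimal sketch (the keycodes below are placeholders; the linked article may describe a different method entirely):

```shell
# Find the actual keycodes of the spare keys first by pressing them in `xev`.
# 135 and 136 here are hypothetical examples.
xmodmap -e 'keycode 135 = XF86Copy'
xmodmap -e 'keycode 136 = XF86Paste'
```

On Wayland, the equivalent would typically go through an xkb option or a remapping daemon instead.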
Rory Stewart is a former British diplomat, non-profit executive, member of
Parliament, and cabinet minister. Politics on the Edge is a memoir
of his time in the UK Parliament from 2010 to 2019 as a Tory
(Conservative) representing the Penrith and The Border constituency in
northern England. It ends with his failed run against Boris Johnson for
leader of the Conservative Party and Prime Minister.
This book provoked many thoughts, only some of which are about the book.
You may want to get a beverage; this review will be long.
Since this is a memoir told in chronological order, a timeline may be
useful. After Stewart's time as a regional governor in occupied Iraq
(see The Prince of the Marshes), he
moved to Kabul to found and run an NGO to preserve traditional Afghan
arts and buildings (the
Turquoise Mountain Foundation, about which I know nothing except what
Stewart wrote in this book). By his telling, he found that work deeply
rewarding but thought the same politicians who turned Iraq into a mess
were going to do the same to Afghanistan. He started looking for ways to
influence the politics more directly, which led him first to Harvard and
then to stand for Parliament.
The bulk of this book covers Stewart's time as MP for Penrith and The
Border. The choice of constituency struck me as symbolic of Stewart's
entire career: He was not a resident and had no real connection to the
district, which he chose for political reasons and because it was the
nearest viable constituency to his actual home in Scotland. But once he
decided to run, he moved to the district and seems sincerely earnest in
his desire to understand it and become part of its community. After five
years as a backbencher, he joined David Cameron's government in a minor
role as Minister of State in the Department for Environment, Food, and
Rural Affairs. He then bounced through several minor cabinet positions
(more on this later) before being elevated to Secretary of State for
International Development under Theresa May. When May's government
collapsed during the fight over the Brexit agreement, he launched a
quixotic challenge to Boris Johnson for leader of the Conservative Party.
I have enjoyed Rory Stewart's writing ever since The Places in Between. This book is no exception. Whatever one's
other feelings about Stewart's politics (about which I'll have a great
deal more to say), he's a talented memoir writer with an understated and
contemplative style and a deft ability to shift from concrete description
to philosophical debate without bogging down a story. Politics on
the Edge is compelling reading at the prose level. I spent several
afternoons happily engrossed in this book and had great difficulty putting
it down.
I find Stewart intriguing since, despite being a political conservative,
he's neither a neoliberal nor any part of the new right. He is instead an
apparently-sincere throwback to a conservatism based on epistemic
humility, a veneration of rural life and long-standing traditions, and a
deep commitment to the concept of public service. Some of his principles
are baffling to me, and I think some of his political views are obvious
nonsense, but there were several things that struck me throughout this
book that I found admirable and depressingly rare in politics.
First, Stewart seems to learn from his mistakes. This goes beyond
admitting when he was wrong and appears to include a willingness to
rethink entire philosophical positions based on new experience.
I had entered Iraq supporting the war on the grounds that we could at
least produce a better society than Saddam Hussein's. It was one of
the greatest mistakes in my life. We attempted to impose programmes
made up by Washington think tanks, and reheated in air-conditioned
palaces in Baghdad: a new taxation system modelled on Hong Kong; a
system of ministers borrowed from Singapore; and free ports, modelled
on Dubai. But we did it ultimately at the point of a gun, and our
resources, our abstract jargon and optimistic platitudes could not
conceal how much Iraqis resented us, how much we were failing, and how
humiliating and degrading our work had become. Our mission was a
grotesque satire of every liberal aspiration for peace, growth and
democracy.
This quote comes from the beginning of this book and is a sentiment
Stewart already expressed in The Prince of the Marshes, but he
appears to have taken this so seriously that it becomes a theme of his
political career. He not only realized how wrong he was on Iraq, he
abandoned the entire neoliberal nation-building project without abandoning
his belief in the moral obligation of international aid. And he, I think
correctly, identified a key source of the error: an ignorant,
condescending superiority that dismissed the importance of deep expertise.
Neither they, nor indeed any of the 12,000 peacekeepers and policemen
who had been posted to South Sudan from sixty nations, had spent a
single night in a rural house, or could complete a sentence in Dinka,
Nuer, Azande or Bande. And the international development strategy
written jointly between the donor nations resembled a fading mission
statement found in a new space colony, whose occupants had all been
killed in an alien attack.
Second, Stewart sincerely likes ordinary people. This shone through
The Places in Between and recurs here in his descriptions of his
constituents. He has a profound appreciation for individual people who
have spent their life learning some trade or skill, expresses thoughtful
and observant appreciation for aspects of local culture, and appears to
deeply appreciate time spent around people from wildly different social
classes and cultures than his own. Every successful politician can at
least fake gregariousness, and perhaps that's all Stewart is doing, but
there is something specific and attentive about his descriptions of other
people, including long before he decided to enter politics, that makes me
think it goes deeper than political savvy.
Third, Stewart has a visceral hatred of incompetence. I think this is the
strongest through-line of his politics in this book: Jobs in government
are serious, important work; they should be done competently and well; and
if one is not capable of doing that, one should not be in government.
Stewart himself strikes me as an insecure overachiever: fiercely
ambitious, self-critical, a bit of a micromanager (I suspect he would be
difficult to work for), but holding himself to high standards and appalled
when others do not do the same. This book is scathing towards multiple
politicians, particularly Boris Johnson whom Stewart clearly despises, but
no one comes off worse than Liz Truss.
David Cameron, I was beginning to realise, had put in charge of
environment, food and rural affairs a Secretary of State who openly
rejected the idea of rural affairs and who had little interest in
landscape, farmers or the environment. I was beginning to wonder
whether he could have given her any role she was less suited to,
apart perhaps from making her Foreign Secretary. Still, I could also
sense why Cameron was mesmerised by her. Her genius lay in exaggerated
simplicity. Governing might be about critical thinking; but the new
style of politics, of which she was a leading exponent, was not. If
critical thinking required humility, this politics demanded absolute
confidence: in place of reality, it offered untethered hope; instead
of accuracy, vagueness. While critical thinking required scepticism,
open-mindedness and an instinct for complexity, the new politics
demanded loyalty, partisanship and slogans: not truth and reason but
power and manipulation. If Liz Truss worried about the consequences of
any of this for the way that government would work, she didn't reveal
it.
And finally, Stewart has a deeply-held belief in state capacity and
capability. He and I may disagree on the appropriate size and role of the
government in society, but no one would be more disgusted by an
intentional project to cripple government in order to shrink it than
Stewart.
One of his most-repeated criticisms of the UK political system in this
book is the way the cabinet is formed. All ministers and secretaries come
from members of Parliament and therefore branches of government are led by
people with no relevant expertise. This is made worse by constant cabinet
reshuffles that invalidate whatever small amounts of knowledge a minister
was able to gain in nine months or a year in post. The center portion of
this book records Stewart's time being shuffled from rural affairs to
international development to Africa to prisons, with each move
representing a complete reset of the political office and no transfer of
knowledge whatsoever.
A month earlier, they had been anticipating every nuance of Minister
Rogerson's diary, supporting him on shifts twenty-four hours a day,
seven days a week. But it was already clear that there would be no
pretence of a handover: no explanation of my predecessor's strategy
and uncompleted initiatives. The arrival of a new minister was
Groundhog Day. Dan Rogerson was not a ghost haunting my office, he was
an absence, whose former existence was suggested only by the black
plastic comb.
After each reshuffle, Stewart writes of trying to absorb briefings, do
research, and learn enough about his new responsibilities to have the hope
of making good decisions, while growing increasingly frustrated with the
system and the lack of interest by most of his colleagues in doing the
same. He wants government programs to be successful and believes success
requires expertise and careful management by the politicians, not only by
the civil servants, a position that to me both feels obviously correct and
entirely at odds with politics as currently practiced.
I found this a fascinating book to read during the accelerating collapse
of neoliberalism in the US and, to judge by current polling results, the
UK. I have a theory that the political press are so devoted to a
simplistic left-right political axis based on seating arrangements during
the French Revolution that they are missing a significant minority whose
primary political motivation is contempt for arrogant incompetence. They
could be convinced to vote for Sanders or Trump, for Polanski or Farage,
but will never vote for Biden, Starmer, Romney, or Sunak.
Such voters are incomprehensible to those who closely follow and debate
policies because their hostile reaction to the center is not about
policies. It's about lack of trust and a nebulous desire for justice.
They've been promised technocratic competence and the invisible hand of
market forces for most of their lives, and all of it looks like lies.
Everyday living is more precarious, more frustrating, more abusive and
dehumanizing, and more anxious, despite (or because of) this wholehearted
embrace of economic "freedom." They're sick of every complaint about the
increasing difficulty of life being met with accusations about their
ability and work ethic, and of being forced to endure another round of
austerity by people who then catch a helicopter ride to a party on some
billionaire's yacht.
Some of this is inherent in the deep structural weaknesses in neoliberal
ideology, but this is worse than an ideological failure. The degree to
which neoliberalism started as a project of sincere political thinkers is
arguable, but that is clearly not true today. The elite class in politics
and business is now thoroughly captured by people whose primary skill is
the marginal manipulation of complex systems for their own power and
benefit. They are less libertarian ideologues than narcissistic
mediocrities. We are governed by management consultants. They are firmly
convinced their organizational expertise is universal, and consider the
specific business of the company, or government department, irrelevant.
Given that context, I found Stewart's instinctive revulsion towards David
Cameron quite revealing. Stewart, later in the book, tries to give Cameron
some credit by citing several policy accomplishments and comparing him
favorably to Boris Johnson (which, true, is a bar Cameron probably flops
over). But I think Stewart's baffled astonishment at Cameron's vapidity
says a great deal about how we have ended up where we are. This last quote
is long, but I think it provides a good feel for Stewart's argument in
this book.
But Cameron, who was rumoured to be sceptical about nation-building
projects, only nodded, and then looking confidently up and down the
table said, "Well, at least we all agree on one extremely
straightforward and simple point, which is that our troops are doing
very difficult and important work and we should all support them."
It was an odd statement to make to civilians running humanitarian
operations on the ground. I felt I should speak. "No, with respect, we
do not agree with that. Insofar as we have focused on the troops, we
have just been explaining that what the troops are doing is often
futile, and in many cases making things worse." Two small red dots
appeared on his cheeks. Then his face formed back into a smile. He
thanked us, told us he was out of time, shook all our hands, and left
the room.
Later, I saw him repeat the same line in interviews: "the purpose of
this visit is straightforward... it is to show support for what our
troops are doing in Afghanistan". The line had been written, in
London, I assumed, and tested on focus groups. But he wanted to
convince himself it was also a position of principle.
"David has decided," one of his aides explained, when I met him later,
"that one cannot criticise a war when there are troops on the ground."
"Why?"
"Well... we have had that debate. But he feels it is a principle of
British government."
"But Churchill criticised the conduct of the Boer War; Pitt the war
with America. Why can't he criticise wars?"
"British soldiers are losing their lives in this war, and we can't
suggest they have died in vain."
"But more will die, if no one speaks up..."
"It is a principle thing. And he has made his decision. For him and
the party."
"Does this apply to Iraq too?"
"Yes. Again he understands what you are saying, but he voted to
support the Iraq War, and troops are on the ground."
"But surely he can say he's changed his mind?"
The aide didn't answer, but instead concentrated on his food. "It is
so difficult," he resumed, "to get any coverage of our trip." He
paused again. "If David writes a column about Afghanistan, we will
struggle to get it published."
"But what would he say in an article anyway?" I asked.
"We can talk about that later. But how do you get your articles on
Afghanistan published?"
I remembered how the US politicians and officials had shown their
mastery of strategy and detail. I remembered the earnestness of Gordon
Brown when I had briefed him on Iraq. Cameron seemed somehow less
serious. I wrote as much in a column in the New York Times,
saying that I was afraid the party of Churchill was becoming the party
of Bertie Wooster.
I don't know Stewart's reputation in Britain, or in the constituency that
he represented. I know he's been accused of being a self-aggrandizing
publicity hound, and to some extent this is probably true. It's hard to
find an ambitious politician who does not have that instinct. But whatever
Stewart's flaws, he can, at least, defend his politics with more substance
than a corporate motto. One gets the impression that he would respond
favorably to demonstrated competence linked to a careful argument, even if
he disagreed. Perhaps this is an illusion created by his writing, but even
if so, it's a step in the right direction.
When people become angry enough at a failing status quo, any option that
promises radical change and punishment for the current incompetents will
sound appealing. The default collapse is towards demagogues who are
skilled at expressing anger and disgust and are willing to promise simple
cures because they are indifferent to honesty. Much of the political
establishment in the US, and possibly (to the small degree that I can
analyze it from an occasional news article) in the UK, can identify the
peril of the demagogue, but they have no solution other than a return to
"politics as usual," represented by the amoral mediocrity of a McKinsey
consultant. The rare politicians who seem to believe in something, who
will argue for personal expertise and humility, who are disgusted by
incompetence and have no patience for facile platitudes, are a breath of
fresh air.
There are a lot of policies on which Stewart and I would disagree, and
perhaps some of his apparent humility is an affectation from the
rhetorical world of the 1800s that he clearly wishes he were inhabiting,
but he gives the strong impression of someone who would shoulder a
responsibility and attempt to execute it with competence and attention to
detail. He views government as a job, where coworkers should cooperate to
achieve defined goals, rather than a reality TV show. The arc of this
book, like the arc of current politics, is the victory of the reality TV
show over the workplace, and the story of Stewart's run against Boris
Johnson is hard reading because of it, but there's a portrayal here of a
different attitude towards politics that I found deeply rewarding.
If you liked Stewart's previous work, or if you want an inside look at
parliamentary politics, highly recommended. I will be thinking about this
book for a long time.
Rating: 9 out of 10
A couple of weeks ago there was an article on the Freexian blog about Using
JavaScript in Debusine without depending on
JavaScript. It
describes how JavaScript is used in the Debusine Django app, namely for
"progressive enhancement rather than core functionality".
This is an approach I also follow when implementing web interfaces and I think
developments in web technologies and standardization in recent years have made
this a lot easier.
One of the examples described in the post, the Bootstrap toast messages, was
something that I implemented myself recently, in a similar but slightly
different way.
In the main app I develop for my day job we also use the Bootstrap
framework. I have also used it for different
personal projects (for example, the GSOC project I did for Debian in 2018
was also a Django app that used
Bootstrap).
Bootstrap is still primarily a CSS framework, but it also comes with a
JavaScript library for some functionality. Previous versions of Bootstrap
depended on jQuery, but since version 5 of Bootstrap, you don't need jQuery
anymore. In my experience, two of the more commonly used JavaScript utilities
of Bootstrap are modals
(also called lightbox or popup, they are elements that are displayed above
the main content of a website) and
toasts (also called
alerts, they are little notification windows that often disappear after a
timeout). The thing is, Bootstrap 5 was released in 2021 and a lot has happened
since then regarding web technologies. I believe that both these UI components
can nowadays be implemented using standard HTML5 elements.
An eye-opening talk I watched was Stop using JS for
that from last year's JSConf(!).
In this talk the speaker argues that the Rule of least
power is one of the core
principles of web development, which means we should prefer HTML over CSS and CSS
over JavaScript. The speaker also presents some CSS rules and HTML elements
that were added recently and that help make that happen, one of them being the
dialog
element:
The <dialog> HTML element represents a modal or non-modal dialog box or other
interactive component, such as a dismissible alert, inspector, or subwindow.
The Dialog element at MDN
The baseline for this element is "widely available":
This feature is well established and works across many devices and browser
versions. It's been available across browsers since March 2022.
The Dialog element at MDN
This means there is an HTML element that does what a Bootstrap modal does!
Once I had watched that talk I removed all my Bootstrap modals and replaced
them with HTML <dialog> elements (JavaScript is still needed to .show() and
.close() the elements, but those are two methods instead of a full
library). This meant not only that I replaced code that depended on an external
library; I'm now also a lot more flexible regarding the styling of the
elements.
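A minimal sketch of that remaining JavaScript, assuming a dialog and two buttons with hypothetical ids of my own choosing:

```javascript
// Hypothetical markup: <dialog id="confirm-dialog">…</dialog>
const dialog = document.getElementById("confirm-dialog");

// showModal() opens the dialog modally (backdrop, focus trapping, Escape to
// dismiss); show() opens it non-modally; close() dismisses it again.
document.getElementById("open-button")?.addEventListener("click", () => dialog.showModal());
document.getElementById("close-button")?.addEventListener("click", () => dialog.close());
```

Note that showModal() rather than show() is what gives you the backdrop and Escape-key handling for free.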
When I started implementing notifications for our app, my first approach was to
use Bootstrap toasts, similar to how it is implemented in Debusine. But looking
at the amount of HTML code I had to write for a simple toast message, I thought
that it might be possible to also implement toasts with the <dialog> element.
I mean, basically it is the same; only the styling is a bit different. So what
I did was add a #snackbar area to the DOM of the app. This would be
the container for the toast messages. All the toast messages are simply
<dialog> elements with the open attribute, which means that they are
visible right away when the page loads.
<div id="snackbar">
  {% for message in messages %}
  <dialog class="mytoast alert alert-{{ message.tags }}" role="alert" open>
    {{ message }}
  </dialog>
  {% endfor %}
</div>
This looks a lot simpler than the Bootstrap toasts would have.
To make the <dialog> elements a little bit more fancy, I added some CSS to make
them fade in and out:
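A minimal sketch of such fade-in/fade-out CSS, assuming the mytoast class from the template above; the timings and keyframe names are illustrative:

```css
/* Illustrative sketch: fade the toast in, hold it, then fade it out */
dialog.mytoast {
  animation: toast-fade 5s ease-in-out forwards;
}

@keyframes toast-fade {
  0%   { opacity: 0; }
  10%  { opacity: 1; }  /* fade in quickly */
  85%  { opacity: 1; }  /* stay visible for most of the duration */
  100% { opacity: 0; }  /* fade out; JavaScript can then .close() the element */
}
```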
(If one wanted to use the same HTML code for both script and noscript users,
then the CSS should probably be adapted: the toast fades away, and if there is no
JavaScript to close the element, it stays visible after the animation is over.
One solution would be to add a close button and, for noscript users,
simply let the toast stay visible - this is also what happens with the noscript
messages in Debusine.)
So there are many new elements in HTML and a lot of new features in CSS. It
makes sense to sometimes ask ourselves whether, instead of the solutions we know (or
what a web search / some AI shows us as the most common solution), there might
be some newer solution that did not exist when the first choice was
created. Using standardized solutions instead of custom libraries makes the
software more maintainable. In web development I also prefer standardized
elements over a third-party library because they usually have better
accessibility and UX.
In How Functional Programming Shaped (and Twisted) Frontend
Development
the author writes:
Consider the humble modal dialog. The web has <dialog>, a native element with
built-in functionality: it manages focus trapping, handles Escape key
dismissal, provides a backdrop, controls scroll-locking on the body, and
integrates with the accessibility tree. It exists in the DOM but remains
hidden until opened. No JavaScript mounting required.
[…]
you've trained developers to not even look for native solutions. The platform
becomes invisible. When someone asks "how do I build a modal?", the answer is
"install a library" or "here's my custom hook", never "use <dialog>".
Ahmad Alfy
In "Could the XZ backdoor have been detected with better Git and Debian
packaging practices?",
Otto contrasts "git-buildpackage managed git repositories" with "dgit
managed repositories", saying that the dgit managed repositories "cannot
incorporate the upstream git history and are thus less useful for auditing
the full software supply-chain in git".
Otto does qualify this earlier with "a package that has not had the
history recorded in dgit earlier", but the last sentence of the section is a
misleading oversimplification. It's true for repositories that have been
synthesized by dgit (which indeed was the focus of that section of Otto's
article), but it's not true in general for repositories that are managed
by dgit.
I suspect this was just slightly unclear writing, so I don't want to nitpick
here, but rather to take the opportunity to try to clear up some
misconceptions around dgit
that I've often heard at conferences and seen on mailing lists.
I'm not a dgit developer, although I'm a happy user of it and I've tried to
help out in various design discussions over the years.
dgit and git-buildpackage sit at different layers
It seems very common for people to think of git-buildpackage and dgit as
alternatives, as the example I quoted at the start of this article suggests.
It's really better to think of dgit as a separate and orthogonal layer.
You can use dgit together with tools such as git-buildpackage. In that
case, git-buildpackage handles the general shape of your git history, such
as helping you to import new upstream versions, and dgit handles gatewaying
between the archive and git. The advantages become evident when you start
using tag2upload, in which case you
can just use git debpush to push a tag and the tag2upload service deals
with building the source package and uploading it to the archive for you.
This is true regardless of how you put your package's git history together.
(There's currently a wrinkle around pristine-tar support, so at the moment I personally tend to
use dgit push-source for new upstream versions and git debpush for
new Debian revisions, since I haven't yet convinced myself that I see no
remaining value in pristine upstream tarballs.)
dgit supports complete history
If the maintainer has never used dgit, and so dgit clone synthesizes a
repository based on the current contents of the Debian archive, then there's
indeed no useful history there; in that situation it doesn't go back and
import everything from the snapshot archive the way that gbp import-dscs
--debsnap does.
However, if the maintainer uses dgit, then dgit's view will include more
history, and it's absolutely possible for that to include complete upstream
git history as well. Try this:
$ dgit clone man-db
canonical suite name for unstable is sid
fetching existing git history
last upload to archive: specified git info (debian)
downloading http://ftp.debian.org/debian//pool/main/m/man-db/man-db_2.13.1.orig.tar.xz...
downloading http://ftp.debian.org/debian//pool/main/m/man-db/man-db_2.13.1.orig.tar.xz.asc...
HEAD is now at 167835b0 releasing package man-db version 2.13.1-1
dgit ok: ready for work in man-db
$ git -C man-db log --graph --oneline | head
* 167835b0 releasing package man-db version 2.13.1-1
*   f7910493 New upstream release (2.13.1)
|\
| *   3073b72e Import man-db_2.13.1.orig.tar.xz
| |\
| | * 349ce503 Release man-db 2.13.1
| | * 0d6635c1 Update Russian manual page translation
| | * cbf87caf Update Italian translation
| | * fb5c5017 Update German manual page translation
| | * dae2057b Update Brazilian Portuguese manual page translation
That package uses git-dpm, since I
prefer the way it represents patches. But it works fine with
git-buildpackage too:
$ dgit clone isort
canonical suite name for unstable is sid
fetching existing git history
last upload to archive: specified git info (debian)
downloading http://ftp.debian.org/debian//pool/main/i/isort/isort_7.0.0.orig.tar.gz...
HEAD is now at f812aae releasing package isort version 7.0.0-1
dgit ok: ready for work in isort
$ git -C isort log --graph --oneline | head
* f812aae releasing package isort version 7.0.0-1
*   efde62f Update upstream source from tag 'upstream/7.0.0'
|\
| * 9694f3d New upstream version 7.0.0
* 9cbfe0b releasing package isort version 6.1.0-1
* 5423ffe Mark isort and python3-isort Multi-Arch: foreign
*   5eaf5bf Update upstream source from tag 'upstream/6.1.0'
|\
| * edafbfc New upstream version 6.1.0
* aedfd25 Merge branch 'debian/master' into fix992793
If you look closely you'll see another difference here: the second example only
includes one commit representing the new upstream release, and doesn't have
complete upstream history. This doesn't represent a difference between
git-dpm and git-buildpackage. Both tools can operate in both ways: for
example, git-dpm import-new-upstream --parent and gbp import-orig
--upstream-vcs-tag do broadly similar things, and something like gbp
import-dscs --debsnap --upstream-vcs-tag='%(version)s' can be used to do a
bulk import provided that upstream's tags are named consistently enough.
This is not generally the default because adding complete upstream history
requires extra setup: the maintainer has to add an extra git remote pointing
to upstream and select the correct tag when importing a new version, and
some upstreams forget to push git tags or don't have the sort of consistency
you might want.
The Debian Python team's policy
says that "Complete upstream Git history should be avoided in the upstream
branch", which is why the isort history above looks the way it does. I
don't love this because I think the results are less useful, but I
understand why it's there: in a moderately large team maintaining thousands
of packages, getting everyone to have the right git remotes set up would be
a recipe for frustrating inconsistency.
However, in packages I maintain myself, I strongly value having complete
upstream history in order to make it easier to debug problems, and I think
it makes things a bit more transparent to auditors too, so I'm willing to go
to a little extra work to make that happen. Doing that is completely
compatible with using dgit.
The discovery of a backdoor in XZ Utils in the spring of 2024 shocked the open source community, raising critical questions about software supply chain security. This post explores whether better Debian packaging practices could have detected this threat, offering a guide to auditing packages and suggesting future improvements.
The XZ backdoor in versions 5.6.0/5.6.1 briefly made its way into many major Linux distributions such as Debian and Fedora, but luckily didn't reach that many actual users, as the backdoored releases were quickly removed thanks to the heroic diligence of Andres Freund. We are all extremely lucky that he detected a half-second performance regression in SSH, cared enough to trace it down, discovered malicious code in the XZ library loaded by SSH, and reported it promptly to various security teams for quick coordinated action.
This episode left software engineers pondering the following questions:
Why didn't any Linux distro packagers notice anything odd when importing the new XZ version 5.6.0/5.6.1 from upstream?
Is the current software supply-chain in the most popular Linux distros easy to audit?
Could we have similar backdoors lurking that haven't been detected yet?
As a Debian Developer, I decided to audit the xz package in Debian, share my methodology and findings in this post, and also suggest some improvements on how the software supply-chain security could be tightened in Debian specifically.
Note that the scope here is only to inspect how Debian imports software from its upstreams, and how the sources are distributed to Debian's users. This excludes the whole story of how to assess whether an upstream project is following software development security best practices. This post doesn't discuss how to operate an individual computer running Debian to ensure it remains untampered, as there are plenty of guides on that already.
Downloading Debian and upstream source packages
Let's start by working backwards from what the Debian package repositories offer for download. As auditing binaries is extremely complicated, we skip that and assume the Debian build hosts are trustworthy and reliably build binaries from the source packages; the focus should be on auditing the source packages.
As with everything in Debian, there are multiple tools and ways to do the same thing, but in this post only one (and hopefully the best) way to do something is presented for brevity.
The first step is to download the latest version and some past versions of the package from the Debian archive, which is easiest done with debsnap. The following command will download all Debian source packages of xz-utils from Debian release 5.2.4-1 onwards:
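A sketch of that invocation; the --first cutoff and --verbose flag are how I would express it, and debsnap's default destination directory is source-<package>:

```shell
# Illustrative: download all xz-utils source package versions from
# 5.2.4-1 onwards into the default ./source-xz-utils/ directory
debsnap --verbose --first 5.2.4-1 xz-utils
```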
Verifying authenticity of upstream and Debian sources using OpenPGP signatures
As seen in the output of debsnap, it already automatically verifies that the downloaded files match the OpenPGP signatures. To have full clarity on what files were authenticated with what keys, we should verify the Debian packager's signature with:
$ gpg --verify --auto-key-retrieve --keyserver hkps://keyring.debian.org xz-utils_5.8.1-2.dsc
gpg: Signature made Fri Oct 3 22:04:44 2025 UTC
gpg: using RSA key 57892E705233051337F6FDD105641F175712FA5B
gpg: requesting key 05641F175712FA5B from hkps://keyring.debian.org
gpg: key 7B96E8162A8CF5D1: public key "Sebastian Andrzej Siewior" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Sebastian Andrzej Siewior" [unknown]
gpg: aka "Sebastian Andrzej Siewior <bigeasy@linutronix.de>" [unknown]
gpg: aka "Sebastian Andrzej Siewior <sebastian@breakpoint.cc>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 6425 4695 FFF0 AA44 66CC 19E6 7B96 E816 2A8C F5D1
Subkey fingerprint: 5789 2E70 5233 0513 37F6 FDD1 0564 1F17 5712 FA5B
The upstream tarball signature (if available) can be verified with:
$ gpg --verify --auto-key-retrieve xz-utils_5.8.1.orig.tar.xz.asc
gpg: assuming signed data in 'xz-utils_5.8.1.orig.tar.xz'
gpg: Signature made Thu Apr 3 11:38:23 2025 UTC
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: key 38EE757D69184620: public key "Lasse Collin <lasse.collin@tukaani.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3690 C240 CE51 B467 0D30 AD1C 38EE 757D 6918 4620
Note that this only proves that there is a key that created a valid signature for this content. The authenticity of the keys themselves needs to be validated separately before trusting that they are in fact the keys of these people. That can be done by checking e.g. the upstream website for the key fingerprints they published, or the Debian keyring for Debian Developers and Maintainers, or by relying on the OpenPGP "web of trust".
Verifying authenticity of upstream sources by comparing checksums
In case the upstream in question does not publish release signatures, the second best way to verify the authenticity of the sources used in Debian is to download the sources directly from upstream and compare that the sha256 checksums match.
This should be done using the debian/watch file inside the Debian packaging, which defines where the upstream source is downloaded from. Continuing the example above, we can unpack the latest Debian sources and then run uscan to download:
$ tar xvf xz-utils_5.8.1-2.debian.tar.xz
...
debian/rules
debian/source/format
debian/source.lintian-overrides
debian/symbols
debian/tests/control
debian/tests/testsuite
debian/upstream/signing-key.asc
debian/watch
...
$ uscan --download-current-version --destdir /tmp
Newest version of xz-utils on remote site is 5.8.1, specified download version is 5.8.1
gpgv: Signature made Thu Apr 3 11:38:23 2025 UTC
gpgv: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpgv: Good signature from "Lasse Collin <lasse.collin@tukaani.org>"
Successfully symlinked /tmp/xz-5.8.1.tar.xz to /tmp/xz-utils_5.8.1.orig.tar.xz.
The original files downloaded from upstream are now in /tmp along with the files renamed to follow Debian conventions. Using everything downloaded so far, the sha256 checksums can be compared across the files and also to what the .dsc file advertised:
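A sketch of that comparison, with filenames taken from the steps above; the exact field layout of the .dsc may differ:

```shell
# Compare the tarball downloaded directly from upstream against the file
# used in the Debian source package
sha256sum /tmp/xz-5.8.1.tar.xz xz-utils_5.8.1.orig.tar.xz

# ...and against what the Debian packager's signed .dsc file advertises
grep -A 3 '^Checksums-Sha256:' xz-utils_5.8.1-2.dsc
```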
In the example above the checksum 0b54f79df85... is the same across the files, so it is a match.
Repackaged upstream sources can t be verified as easily
Note that uscan may in rare cases repackage some upstream sources, for example to exclude files that don't adhere to Debian's copyright and licensing requirements. Those files and paths would be listed under the Files-Excluded section in the debian/copyright file. There are also other situations where the file that represents the upstream sources in Debian isn't bit-by-bit identical to what upstream published. If checksums don't match, an experienced Debian Developer should review all package settings (e.g. debian/source/options) to see if there was a valid and intentional reason for the divergence.
Reviewing changes between two source packages using diffoscope
Diffoscope is an incredibly capable and handy tool for comparing arbitrary files. For example, to view a report in HTML format of the differences between two XZ releases, run:
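That invocation could look like the following; the version numbers come from the example above, and the output filename is my own:

```shell
# Write an HTML report of everything that changed between two upstream tarballs
diffoscope --html xz-5.8.0-to-5.8.1.html \
    xz-utils_5.8.0.orig.tar.xz xz-utils_5.8.1.orig.tar.xz
```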
If the changes are extensive, and you want to use an LLM to help spot potential security issues, generate reports of both the upstream and the Debian packaging differences in Markdown with:
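A sketch of those two invocations, again with assumed filenames:

```shell
# Markdown report of the upstream source changes...
diffoscope --markdown xz-upstream-diff.md \
    xz-utils_5.8.0.orig.tar.xz xz-utils_5.8.1.orig.tar.xz

# ...and of the Debian packaging changes
diffoscope --markdown xz-debian-diff.md \
    xz-utils_5.8.1-1.debian.tar.xz xz-utils_5.8.1-2.debian.tar.xz
```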
The Markdown files created above can then be passed to your favorite LLM, along with a prompt such as:
Based on the attached diffoscope output for a new Debian package version compared with the previous one, list all suspicious changes that might have introduced a backdoor, followed by other potential security issues. If there are none, list a short summary of changes as the conclusion.
Reviewing Debian source packages in version control
As of today, only 93% of all Debian source packages are tracked in git on Debian's GitLab instance at salsa.debian.org. Some key packages such as Coreutils and Bash are not using version control at all, as their maintainers apparently don't see value in using git for Debian packaging, and the Debian Policy does not require it. Thus, the only reliable and consistent way to audit changes in Debian packages is to compare the full versions from the archive as shown above.
However, for packages that are hosted on Salsa, one can view the git history to gain additional insight into what exactly changed, when, and why. For packages that are using version control, their location can be found in the Vcs-Git field in the debian/control file. For xz-utils the location is salsa.debian.org/debian/xz-utils.
Note that the Debian Policy does not state anything about how Salsa should be used, or what git repository layout or development practices to follow. In practice most packages follow the DEP-14 proposal and use git-buildpackage as the tool for managing changes and pushing and pulling them between upstream and salsa.debian.org.
To get the XZ Utils source, run:
$ gbp clone https://salsa.debian.org/debian/xz-utils.git
gbp:info: Cloning from 'https://salsa.debian.org/debian/xz-utils.git'
At the time of writing this post the git history shows:
$ git log --graph --oneline
* bb787585 (HEAD -> debian/unstable, origin/debian/unstable, origin/HEAD) Prepare 5.8.1-2
* 4b769547 d: Remove the symlinks from -dev package.
* a39f3428 Correct the nocheck build profile
* 1b806b8d Import Debian changes 5.8.1-1.1
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
*   2808ec2d Update upstream source from tag 'upstream/5.8.1'
|\
| * fa1e8796 (origin/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
| * a522a226 Bump version and soname for 5.8.1
| * 1c462c2a Add NEWS for 5.8.1
| * 513cabcf Tests: Call lzma_code() in smaller chunks in fuzz_common.h
| * 48440e24 Tests: Add a fuzzing target for the multithreaded .xz decoder
| * 0c80045a liblzma: mt dec: Fix lack of parallelization in single-shot decoding
| * 81880488 liblzma: mt dec: Don't modify thr->in_size in the worker thread
| * d5a2ffe4 liblzma: mt dec: Don't free the input buffer too early (CVE-2025-31115)
| * c0c83596 liblzma: mt dec: Simplify by removing the THR_STOP state
| * 831b55b9 liblzma: mt dec: Fix a comment
| * b9d168ee liblzma: Add assertions to lzma_bufcpy()
| * c8e0a489 DOS: Update Makefile to fix the build
| * 307c02ed sysdefs.h: Avoid <stdalign.h> even with C11 compilers
| * 7ce38b31 Update THANKS
| * 688e51bd Translations: Update the Croatian translation
* | a6b54dde Prepare 5.8.0-1.
* | 77d9470f Add 5.8 symbols.
* | 9268eb66 Import 5.8.0
* |   6f85ef4f Update upstream source from tag 'upstream/5.8.0'
|\ \
| * | afba662b New upstream version 5.8.0
| |/
| * 173fb5c6 doc/SHA256SUMS: Add 5.8.0
| * db9258e8 Bump version and soname for 5.8.0
| * bfb752a3 Add NEWS for 5.8.0
| * 6ccbb904 Translations: Run "make -C po update-po"
| * 891a5f05 Translations: Run po4a/update-po
| * 4f52e738 Translations: Partially fix overtranslation in Serbian man pages
| * ff5d9447 liblzma: Count the extra bytes in LZMA/LZMA2 decoder memory usage
| * 943b012d liblzma: Use SSE2 intrinsics instead of memcpy() in dict_repeat()
This shows both the changes on the debian/unstable branch as well as the intermediate upstream import branch, and the actual real upstream development branch. See my Debian source packages in git explainer for details of what these branches are used for.
To only view changes on the Debian branch, run git log --graph --oneline --first-parent or git log --graph --oneline -- debian.
The Debian branch should only have changes inside the debian/ subdirectory, which is easy to check with:
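One way to express that check, using git's standard exclude pathspec (branch selection is left implicit; run it on the Debian branch):

```shell
# List commits on the current branch that touch anything outside debian/.
# Any output here is a commit modifying upstream files and deserves review.
git log --first-parent --oneline -- . ':(exclude)debian'
```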
If the upstream in question signs commits or tags, they can be verified with e.g.:
$ git verify-tag v5.6.2
gpg: Signature made Wed 29 May 2024 09:39:42 AM PDT
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: issuer "lasse.collin@tukaani.org"
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [expired]
gpg: Note: This key has expired!
The main benefit of reviewing changes in git is the ability to see detailed information about each individual change, instead of just staring at a massive list of changes without any explanations. In this example, to view all the upstream commits since the previous import to Debian, one would view the commit range from afba662b ("New upstream version 5.8.0") to fa1e8796 ("New upstream version 5.8.1") with git log --reverse -p afba662b...fa1e8796. However, a far superior way to review changes is to browse this range in a visual git history viewer, such as gitk. Either way, looking at one code change at a time and reading the git commit message makes the review much easier.
Comparing Debian source packages to git contents
As stated at the beginning of the previous section, and worth repeating, there is no guarantee that the contents of the Debian packaging git repository match what was actually uploaded to Debian. While the tag2upload project in Debian is getting more and more popular, Debian is still far from having any system to enforce that the git repository is in sync with the Debian archive contents.
To detect such differences we can run diff across the Debian source packages downloaded with debsnap earlier (path source-xz-utils/xz-utils_5.8.1-2.debian) and the git repository cloned in the previous section (path xz-utils):
$ diff -u source-xz-utils/xz-utils_5.8.1-2.debian/ xz-utils/debian/
diff -u source-xz-utils/xz-utils_5.8.1-2.debian/changelog xz-utils/debian/changelog
--- debsnap/source-xz-utils/xz-utils_5.8.1-2.debian/changelog 2025-10-03 09:32:16.000000000 -0700
+++ xz-utils/debian/changelog 2025-10-12 12:18:04.623054758 -0700
@@ -5,7 +5,7 @@
* Remove the symlinks from -dev, pointing to the lib package.
(Closes: #1109354)
- -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:32:16 +0200
+ -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:36:59 +0200
In the case above, diff revealed that the timestamp in the changelog in the version uploaded to Debian is different from what was committed to git. This is not malicious, just a mistake by the maintainer, who probably didn't run gbp tag immediately after upload but instead some dch command, and ended up with a different timestamp in git compared to what was actually uploaded to Debian.
Creating synthetic Debian packaging git repositories
If no Debian packaging git repository exists, or if it is lagging behind what was uploaded to Debian's archive, one can use git-buildpackage's import-dscs feature to create synthetic git commits based on the files downloaded by debsnap, ensuring the git contents fully match what was uploaded to the archive. To import a single version there is gbp import-dsc (no s at the end), of which an example invocation would be:
$ gbp import-dsc --verbose ../source-xz-utils/xz-utils_5.8.1-2.dsc
Version '5.8.1-2' imported under '/home/otto/debian/xz-utils-2025-09-29'
Example commit history from a repository with commits added with gbp import-dsc:
An online example repository with only a few missing uploads added using gbp import-dsc can be viewed at salsa.debian.org/otto/xz-utils-2025-09-29/-/network/debian%2Funstable
An example repository that was fully crafted using gbp import-dscs can be viewed at salsa.debian.org/otto/xz-utils-gbp-import-dscs-debsnap-generated/-/network/debian%2Flatest.
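The bulk synthesis could be sketched as follows; gbp import-dscs can drive debsnap itself via --debsnap, though the exact flags here are illustrative:

```shell
# Create a git history synthesized from every xz-utils source package
# version that debsnap can fetch from the snapshot archive
gbp import-dscs --debsnap --verbose xz-utils
```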
There is also dgit, which in a similar way creates a synthetic git history to allow viewing the Debian archive contents via git tools. However, its focus is on producing new package versions, so fetching a package with dgit that has not had its history recorded in dgit earlier will only show the latest version:
$ dgit clone xz-utils
canonical suite name for unstable is sid
starting new git history
last upload to archive: NO git hash
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz.asc...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1-2.debian.tar.xz...
dpkg-source: info: extracting xz-utils in unpacked
dpkg-source: info: unpacking xz-utils_5.8.1.orig.tar.xz
dpkg-source: info: unpacking xz-utils_5.8.1-2.debian.tar.xz
synthesised git commit from .dsc 5.8.1-2
HEAD is now at f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium
dgit ok: ready for work in xz-utils
$ dgit/sid git log --graph --oneline
* f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium 9 days ago (HEAD -> dgit/sid, dgit/dgit/sid)
|\
* 11d3a62 Import xz-utils_5.8.1-2.debian.tar.xz 9 days ago
* 15dcd95 Import xz-utils_5.8.1.orig.tar.xz 6 months ago
Unlike git-buildpackage managed git repositories, the dgit managed repositories cannot incorporate the upstream git history and are thus less useful for auditing the full software supply-chain in git.
Comparing upstream source packages to git contents
Equally important to the note at the beginning of the previous section, one must also keep in mind that upstream release source packages, often called release tarballs, are not guaranteed to have the exact same contents as the upstream git repository. Projects might strip test data or extra development files from their release tarballs to avoid shipping unnecessary files to users, or might add documentation files or versioning information into the tarball that isn't stored in git. While a small minority, there are also upstreams that don't use git at all, so the plain files in a release tarball are still the lowest common denominator for all open source software projects, and exporting and importing source code needs to interface with them.
In the case of XZ, the release tarball has additional version info and also a sizeable amount of pregenerated compiler configuration files. Detecting and comparing differences between git contents and tarballs can of course be done manually by running diff across an unpacked tarball and a checked out git repository. If using git-buildpackage, the difference between the git contents and tarball contents can be made visible directly in the import commit.
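For illustration, the manual comparison could be sketched roughly like the following Python helper (a hypothetical function, not part of any of the tools above; it assumes the conventional single "pkg-version/" top-level directory inside the tarball and only compares file names, whereas a real audit would also diff file contents):

```python
import tarfile
from pathlib import Path

def tarball_only_files(tarball_path, repo_dir):
    """List files present in a release tarball but absent from a git checkout.

    Hypothetical helper for illustration: it assumes a single top-level
    "pkg-version/" directory inside the tarball, and it only compares
    file names, while a real audit would also diff file contents.
    """
    with tarfile.open(tarball_path) as tar:
        tar_files = {
            "/".join(member.name.split("/")[1:])  # strip "pkg-version/"
            for member in tar.getmembers()
            if member.isfile()
        }
    repo = Path(repo_dir)
    repo_files = {
        str(path.relative_to(repo))
        for path in repo.rglob("*")
        if path.is_file() and ".git" not in path.parts
    }
    return sorted(tar_files - repo_files)
```

Running this against an unpacked orig tarball and the matching upstream tag would surface pregenerated files like configure scripts that exist only in the release artifact.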
In this XZ example, consider this git history:
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
|\
* fa1e8796 (debian/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
* a522a226 (tag: v5.8.1) Bump version and soname for 5.8.1
* 1c462c2a Add NEWS for 5.8.1
The commit a522a226 was the upstream release commit, which upstream also tagged v5.8.1. The merge commit 2808ec2d applied the new upstream import branch contents on the Debian branch. Between these is the special commit fa1e8796 "New upstream version 5.8.1", tagged upstream/v5.8. This commit and tag exist only in the Debian packaging repository, and they show the contents imported into Debian. They are generated automatically by git-buildpackage when running gbp import-orig --uscan for Debian packages with the correct settings in debian/gbp.conf. By viewing this commit one can see exactly how the upstream release tarball differs from the upstream git contents (if at all).
In the case of XZ, the difference is substantial and very interesting to inspect in full.
To be able to easily inspect exactly what changed in the release tarball compared to the git release tag contents, the best tool for the job is Meld, invoked via git difftool --dir-diff fa1e8796^..fa1e8796.
To compare changes across the new and old upstream tarball, one would need to compare commits afba662b New upstream version 5.8.0 and fa1e8796 New upstream version 5.8.1 by running git difftool --dir-diff afba662b..fa1e8796.
With all the above tips you can now go and try to audit your own favorite package in Debian and see if it is identical with upstream, and if not, how it differs.
Should the XZ backdoor have been detected using these tools?
The famous XZ Utils backdoor (CVE-2024-3094) consisted of two parts: the actual backdoor inside two binary blobs masqueraded as test files (tests/files/bad-3-corrupt_lzma2.xz, tests/files/good-large_compressed.lzma), and a small modification in the build scripts (m4/build-to-host.m4) to extract the backdoor and plant it into the built binary. The build script was not tracked in version control; it was generated with GNU Autotools at release time and shipped only as an additional file in the release tarball.
The entire reason for me to write this post was to ponder whether a diligent engineer using git-buildpackage best practices could have reasonably spotted this while importing the new upstream release into Debian. The short answer is no. The malicious actor clearly anticipated all the typical ways anyone might inspect both the git commits and the release tarball contents, and masqueraded the changes very well and over a long timespan.
First of all, XZ has, for legitimate reasons, several carefully crafted .xz files as test data to help catch regressions in the decompression code path. The test files are shipped in the release so users can run the test suite and validate that the binary is built correctly and xz works properly. Debian famously runs massive amounts of testing in its CI and autopkgtest system across tens of thousands of packages to uphold high quality despite frequent upgrades of the build toolchain and while supporting more CPU architectures than any other distro. Test data is useful and should stay.
When git-buildpackage is used correctly, the upstream commits are visible in the Debian packaging for easy review, but the commit cf44e4b that introduced the test files does not deviate enough from regular sloppy coding practices to really stand out. It is unfortunately very common for a git commit to lack a message body explaining why the change was done, to not be properly atomic with test code and test data together in the same commit, and to be pushed directly to mainline without code review (the commit was not part of any PR in this case). Only another upstream developer could have spotted that this change was not on par with what the project expects, and that the test code was never added, only test data, and thus that this commit was not just sloppy but potentially malicious.
Secondly, the fact that a new Autotools file (m4/build-to-host.m4) appeared in XZ Utils 5.6.0 is not suspicious. This is perfectly normal for Autotools. In fact, starting from version 5.8.1, XZ Utils ships an m4/build-to-host.m4 file that it actually uses.
Spotting that anything is fishy is practically impossible by simply reading the code, as Autotools files are full of custom m4 syntax interwoven with shell script, and there are plenty of backticks (`) that spawn subshells and evals that execute variable contents further, which is just normal for Autotools. Russ Cox's XZ post explains exactly how the Autotools code fetched the actual backdoor from the test files and injected it into the build.
There is only one tiny thing that a very experienced Autotools user could potentially have noticed: the "serial 30" in the version header is way too high. In theory one could also have noticed that this Autotools file deviates from what other packages in Debian ship under the same filename, such as the "serial 3", "serial 5a" or "5b" versions. That would however require an insane amount of extra checking work, and is not something we should plan to start doing. A much simpler solution would be to strongly recommend that all open source projects stop using Autotools, to eventually get rid of it entirely.
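Purely as a thought experiment, the "serial number looks too high" heuristic could be mechanized; in the sketch below the threshold is an arbitrary assumption for illustration, not an established rule:

```python
import re

# Thought experiment only: mechanize the "serial number looks too high"
# heuristic. The threshold of 20 is an arbitrary assumption for
# illustration; serials of commonly shipped m4 macros are typically
# small numbers (single digits).
SERIAL_RE = re.compile(r"^#\s*serial\s+(\d+)", re.MULTILINE)

def suspicious_serial(m4_text, threshold=20):
    """Flag an Autotools .m4 file whose '# serial N' header exceeds threshold."""
    match = SERIAL_RE.search(m4_text)
    return match is not None and int(match.group(1)) > threshold
```

Even with such a scanner in place, a determined attacker would simply pick an inconspicuous serial, which is why this remains a curiosity rather than a defense.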
Not detectable with reasonable effort
While planting backdoors is evil, it is hard not to feel some respect for the level of skill and dedication of the people behind this. I've been involved in a number of security breach investigations during my IT career, and never have I seen anything this well executed.
If it hadn't slowed down SSH by ~500 milliseconds and been discovered because of that, it would most likely have stayed undetected for months or years. Hiding backdoors in closed source software is relatively trivial, but hiding backdoors in plain sight in a popular open source project requires an unusual amount of expertise and creativity, as shown above.
Is the software supply-chain in Debian easy to audit?
While maintaining a Debian package source with git-buildpackage can make the package history a lot easier to inspect, most packages have incomplete configurations in their debian/gbp.conf, so their development histories are not always correctly constructed, uniform, or easy to compare. The Debian Policy does not mandate git usage, and there are many important packages that do not use git at all. Additionally, the Debian Policy allows non-maintainers to upload new versions to Debian without committing anything to git, even for packages where the original maintainer wanted to use git. Uploads that bypass git unfortunately happen surprisingly often.
Because of this situation, I am afraid that multiple similar backdoors could be lurking that simply haven't been detected yet. More audits, which hopefully also get published openly, would be welcome! More people auditing the contents of the Debian archive would probably also help surface what tools and policies Debian might be missing to make the work easier, and thus help improve the security of Debian's users and trust in Debian.
Is Debian currently missing some software that could help detect similar things?
To my knowledge there is currently no system in place as part of Debian's QA or security infrastructure to verify that the upstream source packages in Debian are actually from upstream. I've come across a lot of packages where debian/watch or other configs are incorrect, and even cases where maintainers manually created upstream tarballs because it was easier than configuring automation. For those packages the source tarball now in Debian is obviously not at all the same as upstream. I am not aware of any malicious cases, though (if I were, I would of course report them).
I am also aware of packages in the Debian repository that are misconfigured as type 1.0 (native) packages, mixing the upstream files and debian/ contents and having patches applied, while they should actually be configured as 3.0 (quilt), which would not hide the true upstream sources. Debian should extend its QA tools to scan for such things. If I find a sponsor, I might build it myself as my next major contribution to Debian.
In addition to better tooling for finding mismatches in the source code, Debian could also have better tooling for tracking which source files went into built binaries, but solutions like Fraunhofer-AISEC's supply-graph or Sony's ESSTRA are not practical yet. Julien Malka's post about NixOS discusses the role of reproducible builds, which may help in some cases across all distros.
Or, is Debian missing some policies or practices to mitigate this?
Perhaps more importantly than more security scanning, the Debian Developer community should shift its general mindset from "anyone is free to do anything" to valuing shared workflows. The ability to audit anything is severely hampered by the fact that there are so many ways to do the same thing; distinguishing a normal deviation from a malicious one is too hard when "normal" can be almost anything.
Also, as there is no documented and recommended default workflow, people both old and new to Debian packaging might never learn any one optimal workflow, and end up doing many steps in the packaging process in a way that kind of works but is actually wrong or unnecessary, causing process deviations that look malicious but turn out to just be the result of not fully understanding the right way to do something.
In the long run, once individual developers workflows are more aligned, doing code reviews will become a lot easier and smoother as the excess noise of workflow differences diminishes and reviews will feel much more productive to all participants. Debian fostering a culture of code reviews would allow us to slowly move from the current practice of mainly solo packaging work towards true collaboration forming around those code reviews.
I have been promoting increased use of Merge Requests in Debian already for some time, for example by proposing DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are involved in Debian development, please give a thumbs up in dep-team/deps!21 if you want me to continue promoting it.
Can we trust open source software?
Yes, and I would argue that we can only trust open source software. There is no way to audit closed source software, and anyone using e.g. Windows or macOS just has to trust the vendor's word that there are no intentional or accidental backdoors in their software. And when news gets out that the systems of a closed source vendor were compromised, like CrowdStrike some weeks ago, we can't audit anything, and time after time we simply have to take their word that they have properly cleaned up their code base.
In theory, a vendor could give some kind of contractual or financial guarantee to its customers that there are no preventable security issues, but in practice that never happens. I am not aware of a single case where e.g. Microsoft or Oracle paid damages to their customers after a security flaw was found in their software. In theory you could also pay a vendor more to have them focus more effort on security, but since there is no way to verify what they did, or to get compensation when they didn't, any increased fees are likely just pocketed as increased profit.
Open source is clearly better overall. You can, if you are an individual with the time and skills, audit every step in the supply-chain, or you could as an organization make investments in open source security improvements and actually verify what changes were made and how security improved.
If your organisation is using Debian (or derivatives, such as Ubuntu) and you are interested in sponsoring my work to improve Debian, please reach out.
The seventeenth release of the qlcal package
arrived at CRAN today, once
again following a QuantLib
release as 1.40 came out this morning.
qlcal
delivers the calendaring parts of QuantLib. It is provided (for the R
package) as a set of included files, so the package is self-contained
and does not depend on an external QuantLib library (which can be
demanding to build). qlcal covers
over sixty country / market calendars and can compute holiday lists, its
complement (i.e. business day lists) and much more. Examples
are in the README at the repository, the package page,
and of course at the CRAN package
page.
This release mainly synchronizes qlcal with
the QuantLib release 1.40. Only
one country calendar got updated; the diffstat
looks larger as the URL part of the copyright headers got updated
throughout. We also updated the URL for the GPL-2 badge: when CRAN
checks it, they always hit a timeout, as the FSF server possibly keeps
track of incoming requests; we now link to the version from the R
Licenses page to avoid this.
Changes in version 0.0.17
(2025-07-14)
Synchronized with QuantLib 1.40 released today
Calendar updates for Singapore
URL update in README.md
Courtesy of my CRANberries, there
is a diffstat report for this
release. See the project page
and package documentation for more details, and more examples.
Updating old Debian Printing software to meet C23 requirements, by Thorsten Alteholz
The work of Thorsten fell under the motto "gcc15". Due to the introduction of
gcc15 in Debian, the default language version was changed to C23. This means
that for example, function declarations without parameters are no longer allowed.
As old software, which was created with ANSI C (or C89) syntax, made use of such
function declarations, it was a busy month. One could have used something like
-std=c17 as compile flags, but this would have just postponed the tasks. As a
result Thorsten uploaded modernized versions of ink, nm2ppa and rlpr for the
Debian printing team.
Work done to decommission packages.qa.debian.org, by Raphaël Hertzog
Raphaël worked to decommission the old package tracking system
(packages.qa.debian.org). After figuring out
that it was still receiving emails from the bug tracking system
(bugs.debian.org), from multiple debian lists and from
some release team tools, he reached out to the respective teams to either drop
those emails or adjust them so that they are sent to the current Debian Package
Tracker (tracker.debian.org).
rebootstrap uses *-for-host, by Helmut Grohne
Architecture cross bootstrapping is an ongoing effort that has shaped Debian in
various ways over the years. A longer effort to express
toolchain dependencies now bears fruit. When cross compiling, it becomes
important to express what architecture one is compiling for in Build-Depends.
As these packages have become available in trixie, more and more packages add
this extra information and in August, the libtool package
gained
a gfortran-for-host dependency. It was the first package in the essential
build closure to adopt this and required putting the pieces together in
rebootstrap that now has to
build gcc-defaults early on. There still are
hundreds of packages whose dependencies need to be updated
though.
Miscellaneous contributions
Rapha l dropped the Build Log Scan integration in tracker.debian.org
since it was showing stale data for a while as the underlying service has been
discontinued.
Emilio updated pixman to 0.46.4.
Emilio coordinated several transitions, and NMUed guestfs-tools to unblock one.
Stefano uploaded Python 3.14rc3 to Debian unstable. It's not yet used by any
packages, but it allows testing the level of support in packages to begin.
Stefano upgraded almost all of the debian-social infrastructure to Debian trixie.
Stefano attended the Debian Technical Committee meeting.
Stefano uploaded routine upstream updates for a handful of Python packages
(pycparser, beautifulsoup4, platformdirs, python-authlib,
python-cffi, python-mitogen, python-resolvelib, python-super-collections,
twine).
Stefano reviewed and responded to DebConf 25 feedback.
Stefano investigated and fixed a request visibility bug in debian-reimbursements
(for admin-altered requests).
Lucas reviewed a couple of merge requests from external contributors for Go
and Ruby packages.
Lucas updated some Ruby packages to their latest upstream versions (thin
and passenger; puma is still WIP).
Lucas set up the build environment to run rebuilds of reverse dependencies of
ruby using ruby3.4. As an alternative, he is looking for personal repositories
provided by Debusine to perform this task more easily. This is the preparation
for the transition to ruby3.4 as the default in Debian.
Lucas helped on the next round of the Outreachy internship program.
Helmut sent patches for 30 cross build failures and responded to cross
building support questions on the mailing list.
Helmut continued to maintain rebootstrap.
As gcc version 15 became the default, test jobs for version 14 had to be dropped.
A fair number of patches were applied to packages and could be dropped.
Helmut resumed removing RC-buggy packages from unstable and sponsored a
termrec upload to avoid its deletion. This work was paused to give packages
some time to migrate to forky.
While doing some new upstream release updates, thanks to Debusine's
reverse dependencies autopkgtest
checks, Santiago discovered that paramiko 4.0 will introduce a
regression in libcloud by the drop of support
for the obsolete DSA keys. Santiago finally uploaded to unstable both
paramiko 4.0,
and a regression fix for libcloud.
Santiago has taken part in different discussions and meetings for the
preparation of DebConf 26. The DebConf 26 local team aims to prepare for the
conference with enough time in advance.
Carles kept working on missing-package-relations and reporting missing
Recommends. He improved the tooling to detect and report bugs, creating
269 bugs,
and followed up on comments. 37 bugs have been resolved; others were acknowledged.
The missing Recommends are a mixture of packages that are gone from Debian,
packages that changed name, typos and also packages that were recommended but
are not packaged in Debian.
Carles improved missing-package-relations to report broken Suggests only
for packages that used to be in Debian but have since been removed. No bugs
have been created yet for this case, but 1320 of them have been identified.
Colin spent much of the month chasing down build/test regressions in various
Python packages due to other upgrades, particularly relating to pydantic,
python-pytest-asyncio, and rust-pyo3.
About 90% of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay or GitHub
Sponsors.
Some months I feel like I'm pedalling furiously just to keep everything in a
roughly working state. This was one of those months.
Python team
I upgraded these packages to new upstream versions:
I had to spend a fair bit of time this month chasing down build/test
regressions in various packages due to some other upgrades, particularly to
pydantic, python-pytest-asyncio, and rust-pyo3:
I updated dh-python to suppress generated dependencies that would be
satisfied by python3 >=
3.11.
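The idea behind that dh-python change can be illustrated with a toy sketch (hypothetical helper names and a simplified dependency syntax, not dh-python's actual implementation):

```python
import re

# Toy sketch of the idea (not dh-python's real code): drop generated
# dependencies that the distribution's python3 baseline already implies,
# so packages don't carry redundant versioned constraints.
BASELINE = (3, 11)  # assumed minimum python3 version in the target release

def is_redundant(dep):
    """True for dependencies like 'python3 (>= 3.9)' that the baseline satisfies."""
    m = re.fullmatch(r"python3 \(>= (\d+)\.(\d+)\)", dep)
    return bool(m) and (int(m.group(1)), int(m.group(2))) <= BASELINE

def prune(deps):
    return [d for d in deps if not is_redundant(d)]
```

With this, a generated list like ["python3 (>= 3.9)", "python3 (>= 3.12)", "python3-yaml"] would keep only the constraints the baseline does not already guarantee.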
pkg_resources is
deprecated. In most cases
replacing it is a relatively simple matter of porting to
importlib.resources,
but packages that used its old namespace package support need more
complicated work to port them to implicit namespace
packages. We had quite a few bugs about
this on zope.* packages, but fortunately upstream did the hard part of
this recently. I went
round and cleaned up most of the remaining loose ends, with some help from
Alexandre Detiste. Some of these aren't completely done yet as they're
awaiting new upstream releases:
I fixed
jupyter-client
so that its autopkgtests would work in Debusine.
I fixed waitress to build with the
nocheck profile.
I fixed several other build/test failures:
Welcome to post 51 in the R4 series.
A while back I realized I should really just post a little more, as
not all posts have to be as deep and introspective as, for example, the
recent-ish two
cultures post #49.
So this post is about a neat little trick I (somewhat belatedly)
realized recently. The context is the ongoing transition from
(Rcpp)Armadillo 14.6.3 and earlier to (Rcpp)Armadillo 15.0.2 or later.
(I need to write a bit more about that, but that may require a bit more
time.) (And there are a total of seven (!!) issue tickets managing the
transition with issue
#475 being the main parent issue, please see there for more
details.)
In brief, the newer and current Armadillo no longer allows C++11
(which also means it no longer allows suppression of deprecation
warnings). It so happens that around a decade ago packages were
actively encouraged to move towards C++11, so many either set an
explicit SystemRequirements: for it, or set CXX_STD=CXX11
in src/Makevars(.win). CRAN has for some time now issued
NOTEs asking for this to be removed, and more recently enforced this
with actual deadlines. In RcppArmadillo I opted to accommodate old(er)
packages (using this by-now anti-pattern) and flip to Armadillo 14.6.3
during a transition period. That is what the package does now: It gives
you either Armadillo 14.6.3 in case C++11 was detected (or this legacy
version was actively selected via a compile-time #define),
or it uses Armadillo 15.0.2 or later.
So this means we can have either one of two versions, and may want to
know which one we have. Armadillo carries its own version macros, as
many libraries or projects do (R of course included). Many years ago
(git blame dates the relevant revisions to sixteen and twelve years back)
we added the following helper function to the package (full source here,
shown without the full roxygen2 comment header):
// [[Rcpp::export]]
Rcpp::IntegerVector armadillo_version(bool single) {
    // These are declared as constexpr in Armadillo which actually does not define them
    // They are also defined as macros in arma_version.hpp so we just use that
    const unsigned int major = ARMA_VERSION_MAJOR;
    const unsigned int minor = ARMA_VERSION_MINOR;
    const unsigned int patch = ARMA_VERSION_PATCH;
    if (single)
        return Rcpp::wrap(10000 * major + 100 * minor + patch);
    else
        return Rcpp::IntegerVector::create(Rcpp::Named("major") = major,
                                           Rcpp::Named("minor") = minor,
                                           Rcpp::Named("patch") = patch);
}
It either returns a (named) vector in the standard major/minor/patch
form of the common package versioning pattern, or a single
integer which can be used more easily in C(++) via preprocessor macros. And
this being an Rcpp-using package, we can of course access either easily
from R:
> library(RcppArmadillo)
> armadillo_version(FALSE)
major minor patch
   15     0     2
> armadillo_version(TRUE)
[1] 150002
Perfectly valid and truthful. But cumbersome at the R level. So
when preparing for these (Rcpp)Armadillo changes in one of my packages, I
realized I could alter such a function and set the S3 type to
package_version. (Full version of one such variant here)
// [[Rcpp::export]]
Rcpp::List armadilloVersion() {
    // create a vector of major, minor, patch
    auto ver = Rcpp::IntegerVector::create(ARMA_VERSION_MAJOR,
                                           ARMA_VERSION_MINOR,
                                           ARMA_VERSION_PATCH);
    // and place it in a list (as e.g. packageVersion() in R returns)
    auto lst = Rcpp::List::create(ver);
    // and class it as 'package_version' accessing print() etc methods
    lst.attr("class") = Rcpp::CharacterVector::create("package_version",
                                                      "numeric_version");
    return lst;
}
Three statements each to
create the integer vector of known dimensions and compile-time
known value
embed it in a list (as that is what the R type expects)
set the S3 class, which is easy because Rcpp accesses attributes and
creates character vectors
and return the value. And now in R we can operate more easily on this
(using three colons as I didn't export it from this package):
An object of class package_version inheriting from
numeric_version can directly compare against a (human- but
not normally machine-readable) string like 15.0.0 because the simple
S3 class defines appropriate operators, as well as print()
/ format() methods as the first expression shows. It is
these little things that make working with R so smooth, and we can
easily (three statements !!) do so from Rcpp-based packages too.
The underlying object really is merely a list containing a
vector:
but the S3 glue around it makes it behave nicely.
So next time you are working with an object you plan to return to R,
consider classing it to take advantage of existing infrastructure (if it
exists, of course). It's easy enough to do, and may smooth the
experience at the R side.
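As a footnote, the single-integer form returned by armadillo_version(TRUE) is just positional packing with two decimal digits per component, easy to mimic in any language; a minimal Python illustration:

```python
def arma_version_single(major, minor, patch):
    # Same packing as armadillo_version(TRUE) above: two decimal digits
    # per component below the major version, so plain integer comparison
    # matches version ordering (as long as minor and patch stay below 100).
    return 10000 * major + 100 * minor + patch

assert arma_version_single(15, 0, 2) == 150002
assert arma_version_single(14, 6, 3) < arma_version_single(15, 0, 2)
```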
During 2025-03-21-another-home-outage, I reflected on what a
properly run service is and blurted out what turned out to be something
important I want to outline more. So here it is, again, on its own
for my own future reference.
Typically, I tend to think of a properly functioning service as having
four things:
backups
documentation
monitoring
automation
high availability (HA)
Yes, I miscounted. This is why you need high availability.
A service doesn't properly exist if it doesn't have at least the first
three of those. It will be harder to maintain without automation, and
will inevitably suffer prolonged outages without HA.
The five components of a proper service
Backups
Duh. If data is maliciously or accidentally destroyed, you need a copy
somewhere. Preferably in a way that malicious Joe can't get to.
This is harder than you think.
Documentation
You probably know this is hard, and this is why you're not doing
it. Do it anyways, you'll think it sucks, it will grow out of sync
with reality, but you'll be really grateful for whatever scraps you
wrote when you're in trouble.
Any docs, in other words, are better than no docs, but they are no
excuse for not doing the work correctly.
Monitoring
If you don't have monitoring, you'll find out about failures too late,
and you won't know when they recover. Consider high availability, work
hard to reduce noise, and don't have machines wake people up; that's
literally torture and is against the Geneva convention.
Consider predictive algorithms to prevent failures, like "add storage
within 2 weeks before this disk fills up".
This is also harder than you think.
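Such a "disk fills up in N days" check boils down to linear extrapolation; a minimal sketch with hypothetical function names (real systems, e.g. Prometheus's predict_linear(), fit a regression over a time window rather than trusting a single growth figure):

```python
def days_until_full(used_gb, capacity_gb, daily_growth_gb):
    """Naive linear extrapolation of days until a disk fills up.

    Hypothetical helper for illustration; real monitoring systems fit a
    regression over a window of samples instead of a single growth rate.
    """
    if daily_growth_gb <= 0:
        return float("inf")  # not growing: never fills at the current rate
    return (capacity_gb - used_gb) / daily_growth_gb

def should_alert(used_gb, capacity_gb, daily_growth_gb, horizon_days=14):
    # "add storage within 2 weeks before this disk fills up"
    return days_until_full(used_gb, capacity_gb, daily_growth_gb) <= horizon_days
```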
Automation
Make it easy to redeploy the service elsewhere.
Yes, I know you have backups. That is not enough: backups typically
restore data, and while they can also include configuration, you're
going to need to change things when you restore, which is what
automation (or call it "configuration management" if you will) will do
for you anyways.
This also means you can do unit tests on your configuration, otherwise
you're building legacy.
This is probably as hard as you think.
High availability
Make it not fail when one part goes down.
Eliminate single points of failures.
This is easier than you think, except for storage and DNS ("naming
things" not "HA DNS", that is easy), which, I guess, means it's
harder than you think too.
Assessment
In the above 5 items, I currently check two in my lab:
backups
documentation
And barely: I'm not happy about the offsite backups, and my
documentation is much better at work than at home (and even there, I
have a 15 year backlog to catch up on).
I barely have monitoring: Prometheus is scraping parts of the infra,
but I don't have any sort of alerting -- by which I don't mean
"electrocute myself when something goes wrong", I mean "there's a set
of thresholds and conditions that define an outage and I can look at
it".
Automation is wildly incomplete. My home server is a random collection
of old experiments and technologies, ranging from Apache with Perl and
CGI scripts to Docker containers running Golang applications. Most of
it is not Puppetized (but the ratio is growing). Puppet itself
introduces a huge attack vector with kind of catastrophic lateral
movement if the Puppet server gets compromised.
And, fundamentally, I am not sure I can provide high availability in
the lab. I'm just this one guy running my home network, and I'm
growing older. I'm thinking more about winding things down than
building things now, and that's just really sad, because I feel
we're losing (well that escalated
quickly).
Side note about Tor
The above applies to my personal home lab, not work!
At work, of course, it's another (much better) story:
all services have backups
lots of services are well documented, but not all
most services have at least basic monitoring
most services are Puppetized, but not crucial parts (DNS, LDAP,
Puppet itself), and there are important chunks of legacy coupling
between various services that make the whole system brittle
most websites, DNS and large parts of email are highly available,
but key services like the Forum, GitLab and similar
applications are not HA, although most services run under
replicated VMs that can trivially survive a total, single-node
hardware failure (through Ganeti and DRBD)
Updates on FAIme service: Linux Mint 22.2 and trixie backports available
The FAIme service [1] now offers to
build customized installation images for the Xfce edition of Linux Mint 22.2 'Zara'.
For Debian 13 installations, you can select the kernel from backports for
the trixie release, which is currently version 6.16. This will support
newer hardware.
The Incandescent is a stand-alone magical boarding school fantasy.
Your students forgot you. It was natural for them to forget you. You
were a brief cameo in their lives, a walk-on character from the
prologue. For every sentimental "my teacher changed my life"
story you heard, there were dozens of "my teacher made me
moderately bored a few times a week and then I got through the year
and moved on with my life and never thought about them again."
They forgot you. But you did not forget them.
Doctor Saffy Walden is Director of Magic at Chetwood, an elite boarding
school for prospective British magicians. She has a collection of
impressive degrees in academic magic, a specialization in demonic
invocation, and a history of vague but lucrative government job offers
that go with that specialty. She turned them down to be a teacher, and
although she's now in a mostly administrative position, she's a good
teacher, with the usual crop of promising, lazy, irritating, and nervous
students.
As the story opens, Walden's primary problem is Nikki Conway. Or, rather,
Walden's primary problem is protecting Nikki Conway from the Marshals, and
the infuriating Laura Kenning in particular.
When Nikki was seven, she summoned a demon who killed her entire family
and left her a ward of the school. To Laura Kenning, that makes her a risk
who should ideally be kept far away from invocation. To Walden, that makes
Nikki a prodigious natural talent who is developing into a brilliant
student and who needs careful, professional training before she's tempted
into trying to learn on her own.
Most novels with this setup would become Nikki's story. This one does not.
The Incandescent is Walden's story.
There have been a lot of young-adult magical boarding school novels since
Harry Potter became a mass phenomenon, but most of them focus on
the students and the inevitable coming-of-age story. This is a story about
the teachers: the paperwork, the faculty meetings, the funding challenges,
the students who repeat in endless variations, and the frustrations and
joys of attempting to grab the interest of a young mind. It's also about
the temptation of higher-paying, higher-status, and less ethical work,
which however firmly dismissed still nibbles around the edges.
Even if you didn't know Emily Tesh is herself a teacher, you would guess
that before you get far into this novel. There is a vividness and a depth
of characterization that comes from being deeply immersed in the nuance
and tedium of the life that your characters are living. Walden's
exasperated fondness for her students was the emotional backbone of this
book for me. She likes teenagers without idealizing the process of
being a teenager, which I think is harder to pull off in a novel than it
sounds.
It was hard to quantify the difference between a merely very
intelligent student and a brilliant one. It didn't show up in a list
of exam results. Sometimes, in fact, brilliance could be a
disadvantage when all you needed to do was neatly jump the hoop of
an examiner's grading rubric without ever asking why. It was the
teachers who knew, the teachers who could feel the difference. A few
times in your career, you would have the privilege of teaching someone
truly remarkable; someone who was hard work to teach because they made
you work harder, who asked you questions that had never
occurred to you before, who stretched you to the very edge of your own
abilities. If you were lucky (as Walden, this time, had been lucky),
your remarkable student's chief interest was in your discipline: and
then you could have the extraordinary, humbling experience of teaching
a child whom you knew would one day totally surpass you.
I also loved the world-building, and I say this as someone who is
generally not a fan of demons. The demons themselves are a bit of a
disappointment and mostly hew to one of the stock demon conventions, but
the rest of the magic system is deep enough to have practitioners who
approach it from different angles and meaty enough to have some satisfying
layered complexity. This is magic, not magical science, so don't expect a
fully fleshed-out set of laws, but the magical system felt substantial and
satisfying to me.
Tesh's first novel, Some Desperate
Glory, was by far my favorite science fiction novel of 2023. This is a
much different book, which says good things about Tesh's range and the
potential of her work yet to come: adult rather than YA, fantasy rather
than science fiction, restrained and subtle in places where Some
Desperate Glory was forceful and pointed. One thing the books do have in
common, though, is some of their structure, particularly the false climax
near the midpoint of the book. I like the feeling of uncertainty and possibility
that gives both books, but in the case of The Incandescent, I was
not quite in the mood for the second half of the story.
My problem with this book is more of a reader preference than an objective
critique: I was in the mood for a story about a confident, capable
protagonist who was being underestimated, and Tesh was writing a novel
with a more complicated and fraught emotional arc. (I'm being
intentionally vague to avoid spoilers.) There's nothing wrong with the
story that Tesh wanted to tell, and I admire the skill with which she did
it, but I got a tight feeling in my stomach when I realized where she was
going. There is a satisfying ending, and I'm still very happy I read this
book, but be warned that this might not be the novel to read if you're in
the mood for a purer competence porn experience.
Recommended, and I am once again eagerly awaiting the next thing Emily
Tesh writes (and reminding myself to go back and read her novellas).
Content warnings: Grievous physical harm, mind control, and some body
horror.
Rating: 8 out of 10
In December 2024, I went on a trip through four countries - Singapore, Malaysia, Brunei, and Vietnam - with my friend Badri. This post covers our experiences in Singapore.
I took an IndiGo flight from Delhi to Singapore, with a layover in Chennai. At the Chennai airport, I was joined by Badri. We had an early morning flight from Chennai that would land in Singapore in the afternoon. Within 48 hours of our scheduled arrival in Singapore, we submitted an arrival card online. At immigration, we simply needed to scan our passports at the gates, which opened automatically to let us through, and then give our address to an official nearby. The process was quick and smooth, but it unfortunately meant that we didn't get our passports stamped by Singapore.
Before I left the airport, I wanted to visit the nature-themed park with a fountain I saw in pictures online. It is called Jewel Changi, and it took quite some walking to get there. After reaching the park, we saw a fountain that could be seen from all the levels. We roamed around for a couple of hours, then proceeded to the airport metro station to get to our hotel.
A shot of Jewel Changi. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
There were four ATMs on the way to the metro station, but none of them provided us with any cash. This was the first country (outside India, of course!) where my card didn't work at ATMs.
To use the metro, one can tap the EZ-Link card or bank cards at the AFC gates to get in. You cannot buy tickets using cash. Before boarding the metro, I used my credit card to get Badri an EZ-Link card from a vending machine. It was 10 Singapore dollars (₹630) - 5 for the card, and 5 for the balance. I had planned to use my Visa credit card to pay for my own fare. I was relieved to see that my card worked, and I passed through the AFC gates.
We had booked our stay at a hostel named Campbell's Inn, which was the cheapest we could find in Singapore. It was ₹1500 per night for dorm beds. The hostel was located in Little India. While Little India has an eponymous metro station, the one closest to our hostel was Rochor.
On the way to the hostel, we found out that our booking had been canceled.
We had booked from the Hostelworld website, opting to pay the deposit in advance and to pay the balance amount in person upon reaching. However, Hostelworld still tried to charge Badri's card again before our arrival. When the unauthorized charge failed, they sent an automatic message saying "we tried to charge" and asking us to contact them soon to avoid cancellation, which we couldn't do as we were on the plane.
Despite this, we went to the hostel to check the status of our booking.
The trip from the airport to Rochor required a couple of transfers. It was 2 Singapore dollars (approx. ₹130) and took approximately an hour.
Upon reaching the hostel, we were informed that our booking had indeed been canceled, and were not given any reason for the cancellation. Furthermore, no beds were available at the hostel for us to book on the spot.
We decided to roam around and look for accommodation at other hostels in the area. Soon, we found a hostel by the name of Snooze Inn, which had two beds available. It was 36 Singapore dollars per person (around ₹2300) for a dormitory bed. Snooze Inn advertised supporting RuPay cards and UPI. Some other places in that area did the same. We paid using my card. We checked in and slept for a couple of hours after taking a shower.
By the time we woke up, it was dark. We met Praveen's friend Sabeel to get my FLX1 phone. We also went to Mustafa Center nearby to exchange Indian rupees for Singapore dollars. Mustafa Center also had a shopping center with shops selling electronic items and souvenirs, among other things. When we were dropping off Sabeel at a bus stop, we discovered that the bus stops in Singapore had a digital board mentioning the bus routes for the stop and the number of minutes each bus was going to take.
In addition to an organized bus system, Singapore had good pedestrian infrastructure. There were traffic lights and zebra crossings for pedestrians to cross the roads. Unlike in Indian cities, rules were being followed. Cars would stop for pedestrians at unmanaged zebra crossings; pedestrians would in turn wait for their crossing signal to turn green before attempting to walk across. Therefore, walking in Singapore was easy.
Traffic rules were taken so seriously in Singapore that I (as a pedestrian) was afraid of unintentionally breaking them, which could get me in trouble, as breaking rules is punished with heavy fines in the country. For example, crossing a road without using a marked crossing (while being within 50 meters of one) - also known as jaywalking - is an offence in Singapore.
Moreover, the streets were litter-free, and cleanliness seemed like an obsession.
After exploring Mustafa Center, we went to a nearby 7-Eleven to top up Badri's EZ-Link card. He gave 20 Singapore dollars for the recharge, which credited the card with 19.40 Singapore dollars (0.60 dollars being the recharge fee).
When I was planning this trip, I discovered that the World Chess Championship match was being held in Singapore. I seized the opportunity and bought a ticket in advance. The next day - the 5th of December - I went to watch the 9th game between Gukesh Dommaraju of India and Ding Liren of China. The venue was a hotel on Sentosa Island, and the ticket was 70 Singapore dollars, which was around ₹4000 at the time.
We checked out from our hostel in the morning, as we were planning to stay with Badri's aunt that night. We had breakfast at a place in Little India. Then we took a couple of buses, followed by a walk to Sentosa Island. Paying the fare for the buses was similar to the metro - I tapped my credit card in the bus, while Badri tapped his EZ-Link card. We also had to tap it while getting off.
If you are tapping your credit card to use public transport in Singapore, keep in mind that the total amount of all the trips taken on a day is deducted at the end. This makes it hard to determine the cost of individual trips. For example, I could take a bus and get off after tapping my card, but I would have no way to determine how much this journey cost.
When you tap in, the maximum fare amount gets deducted. When you tap out, the balance amount gets refunded (if it's a shorter journey than the maximum-fare one). So, there is an incentive for passengers not to get off without tapping out. Going by your card statement, it looks like all of that happens virtually, and only one combined charge comes in at the end. Maybe this combining only happens for international cards.
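The tap-in/tap-out mechanics described above can be sketched roughly like this. This is only an illustrative model: the maximum fare, trip fare, and names are my own assumptions, not Singapore's actual fare tables.

```cpp
#include <cassert>

// Illustrative model of the tap-in/tap-out scheme. All amounts are in
// Singapore cents; MAX_FARE is a made-up number, not a real fare value.
const int MAX_FARE = 250; // assumed maximum fare, held at tap-in

struct Card {
    int balance;
    int pending; // amount held after tap-in
};

// Tap-in: the maximum fare is deducted (held) up front.
void tap_in(Card &c) {
    c.balance -= MAX_FARE;
    c.pending = MAX_FARE;
}

// Tap-out: the difference between the held maximum and the actual trip
// fare is refunded. Skipping tap-out forfeits this refund entirely.
void tap_out(Card &c, int trip_fare) {
    c.balance += c.pending - trip_fare;
    c.pending = 0;
}
```

For an international credit card, the holds and refunds appear to be netted virtually, with a single combined charge per day.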
We got off the bus a kilometer away from Sentosa Island and walked the rest of the way. We went on the Sentosa Boardwalk, which is itself a tourist attraction. I was using Organic Maps to navigate to the hotel Resorts World Sentosa, but Organic Maps' route led us through an amusement park. I tried asking the locals (people working in shops) for directions, but it was a Chinese-speaking region, and they didn't understand English. Fortunately, we managed to find a local who helped us with the directions.
A shot of Sentosa Boardwalk. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Following the directions, we somehow ended up having to walk on a road which did not have pedestrian paths. Singapore is a country with strict laws, so we did not want to walk on that road. Avoiding that road led us to the Michael Hotel. There was a person standing at the entrance, and I asked him for directions to Resorts World Sentosa. The person told me that the bus (which was standing at the entrance) would drop me there! The bus was a free service for getting to Resorts World Sentosa. Here I parted ways with Badri, who went to his aunt's place.
I got to Resorts World Sentosa and showed my ticket to get in. There were two zones inside - the first was a room with a glass wall separating the audience and the players. This was the room to watch the game physically, and resembled a zoo or an aquarium. :) The room was also a silent room, which means talking or making noise was prohibited. Audiences were only allowed to have mobile phones for the first 30 minutes of the game - since I arrived late, I could not bring my phone inside that room.
The other zone was outside this room. It had a big TV on which the game was being broadcast along with commentary by David Howell and Jovanka Houska - the official FIDE commentators for the event. If you don't already know, FIDE is the authoritative international chess body.
I spent most of the time outside that silent room, giving me an opportunity to socialize. A lot of people were from Singapore. I saw there were many Indians there as well. Moreover, I had a good time with Vasudevan, a journalist from Tamil Nadu who was covering the match. He also asked Gukesh questions during the post-match conference. His questions were in Tamil to lift Gukesh's spirits, as Gukesh is a Tamil speaker.
Tea and coffee were free for the audience. I also bought a T-shirt from their stall as a souvenir.
After the game, I took a shuttle bus from Resorts World Sentosa to a metro station, then travelled to Pasir Ris by metro, where Badri was staying with his aunt. I thought of getting something to eat, but could not find any cafés or restaurants while I was walking from the Pasir Ris metro station to my destination, and was positively starving when I got there.
Badri's aunt's place was an apartment in a gated community. At the gate was a security guard who asked me the address of the apartment. Upon entering, there were many buildings. To enter a building, you need to dial the number of the apartment you want to go to and speak to them. I had seen that in the TV show Seinfeld, where Jerry's friends used to dial Jerry to get into his building.
I was afraid they might not have anything to eat because I told them I was planning to get something on the way. This was fortunately not the case, and I was relieved to not have to sleep with an empty stomach.
Badri's uncle gave us an idea of how safe Singapore is. He said that even if you forget your laptop in a public space, you can go back the next day to find it right there in the same spot. I also learned that owning cars was discouraged in Singapore - the government imposes a high registration fee on them, while also making public transport easy to use and affordable. I also found out that 7-Eleven was not that popular among residents in Singapore, unlike in Malaysia or Thailand.
The next day was our third and final day in Singapore. We had a bus in the evening to Johor Bahru in Malaysia. We got up early, had breakfast, and checked out from Badri's aunt's home. A store by the name of Cat Socrates was our first stop for the day, as Badri wanted to buy some stationery. The plan was to take the metro, followed by the bus. So we got to Pasir Ris metro station. Next to the metro station was a mall. In the mall, Badri found an ATM where our cards worked, and we got some Singapore dollars.
It was noon when we reached the stationery shop mentioned above. We had to walk a kilometer from where the bus dropped us. It was a hot, sunny day in Singapore, so walking was not comfortable. The walk took us through residential areas, showing us some non-touristy parts of Singapore.
After we were done with the stationery shop, we went to a hawker center to get lunch. Hawker centers are unique to Singapore. They have a lot of shops that sell local food at cheap prices. It is similar to a food court. However, unlike the food courts in malls, hawker centers are open-air and can get quite hot.
This is the hawker center we went to. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
To have something, you just need to buy it from one of the shops and find a table. After you are done, you need to put your tray in the tray-collecting spots. I had a kaya toast with chai, since there weren't many vegetarian options. I also bought a persimmon from a nearby fruit vendor. On the other hand, Badri sampled some local non-vegetarian dishes.
Table littering at the hawker center was prohibited by law. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Next, we took a metro to Raffles Place, as we wanted to visit Merlion, the icon of Singapore. It is a statue having the head of a lion and the body of a fish. While getting through the AFC gates, my card was declined. Therefore, I had to buy an EZ-Link card, which I had been avoiding because the card itself costs 5 Singapore dollars.
From the Raffles Place metro station, we walked to Merlion. The place also gave a nice view of Marina Bay Sands. It was filled with tourists clicking pictures, and we also did the same.
Merlion from behind, giving a good view of Marina Bay Sands. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
After this, we went to the bus stop to catch our bus to the border city of Johor Bahru, Malaysia. The bus was more than an hour late, and we worried that we had missed the bus. I asked an Indian woman at the stop who also planned to take the same bus, and she told us that the bus was late. Finally, our bus arrived, and we set off for Johor Bahru.
Before I finish, let me give you an idea of my expenditure. Singapore is an expensive country, and I realized that expenses could go up pretty quickly. Overall, my stay in Singapore for 3 days and 2 nights was approx. 5500 rupees. That too, when we stayed one night at Badri's aunt's place (so we didn't have to pay for accommodation for one of the nights) and didn't have to pay for a couple of meals. This amount doesn't include the ticket for the chess game, but includes the costs of getting there. If you are in Singapore, it is likely you will pay a visit to Sentosa Island anyway.
Stay tuned for our experiences in Malaysia!
Credits: Thanks to Dione, Sahil, Badri and Contrapunctus for reviewing the draft. Thanks to Bhe for spotting a duplicate sentence.
When this book was presented as available for review, I jumped on it. After
all, who doesn't love reading a nice bit of computing history, as told by a
well-known author (affectionately known as "Uncle Bob"), one who has been
immersed in computing since forever? What's not to like there?
Reading on, the book does not disappoint. Much to the contrary, it digs
into details absent in most computer history books that, being an operating
systems and computer architecture geek, I absolutely enjoyed. But let me
first address the book s organization.
The book is split into four parts. Part 1, Setting the Stage, is a short
introduction, answering the question "Who are we?" ("we" being the
programmers, of course). It describes the fascination many of us felt when
we realized that the computer was there to obey us, to do our bidding, and
we could absolutely control it.
Part 2 talks about the giants of the computing world, on whose shoulders
we stand. It digs in with a level of detail I have never seen before,
discussing their personal lives and technical contributions (as well as the
hoops they had to jump through to get their work done). Nine chapters cover
these giants, ranging chronologically from Charles Babbage and Ada Lovelace
to Ken Thompson, Dennis Ritchie, and Brian Kernighan (understandably, giants
who worked together are grouped in the same chapter). This is the part with
the most historically overlooked technical details. For example, what was
the word size in the first computers, before even the concept of a byte
had been brought into regular use? What was the register structure of early
central processing units (CPUs), and why did it lead to requiring
self-modifying code to be able to execute loops?
Then, just as Unix and C get invented, Part 3 skips to computer history as
seen through the eyes of Uncle Bob. I must admit, while the change of
rhythm initially startled me, it ends up working quite well. The focus is
no longer on the giants of the field, but on one particular person (who
casts a very long shadow). The narrative follows the author s life: a boy
with access to electronics due to his father s line of work; a computing
industry leader, in the early 2000s, with extreme programming; one of the
first producers of training materials in video format, a role that today
might be recognized as that of an influencer. This first-person narrative
reaches the year 2023.
But the book is not just a historical overview of the computing world, of
course. Uncle Bob includes a final section with his thoughts on the future
of computing. As this is a book for programmers, it is fitting to start
with the changes in programming languages that we should expect to see and
where such changes are likely to take place. The unavoidable topic of
artificial intelligence is presented next: What is it and what does it
spell for computing, and in particular for programming? Interesting (and
sometimes surprising) questions follow: What does the future of hardware
development look like? How is the World Wide Web likely to evolve? What is
the future of programming and programmers?
At just under 500 pages, the book is a volume to be taken seriously. But
its space is very well used. The material is easy to read, often
funny and always informative. If you enjoy computer history and
understanding the little details in the implementations, it might very well
be the book you want.
The solar fence and some other ground and pole mount solar panels, seen through leaves.
Solar fencing manufacturers have some good simple designs, but it's hard
to buy for a small installation. They are selling to utility scale solar
mostly. And those are installed by driving metal beams into the ground,
which requires heavy machinery.
Since I have experience with Ironridge rails for roof mount solar, I
decided to adapt that system for a vertical mount, which is something it
was not designed for. I combined the Ironridge hardware with regular parts
from the hardware store.
The cost of mounting solar panels nowadays is often higher than the cost of
the panels. I hoped to match the cost, and I nearly did. The solar panels cost
$100 each, and the fence cost $110 per solar panel. This fence was
significantly cheaper than conventional ground mount arrays that I
considered as alternatives, and made better use of a difficult hillside
location.
I used 7 foot long Ironridge XR-10 rails, which fit 2 solar panels per rail.
(Longer rails would need a center post anyway, and the 7 foot long rails
have cheaper shipping, since they do not need to be shipped freight.)
For the fence posts, I used regular 4x4" treated posts. 12 foot long, set
in 3 foot deep post holes, with 3x 50 lb bags of concrete per hole and 6
inches of gravel on the bottom.
detail of how the rails are mounted to the posts, and the panels to the rails
To connect the Ironridge rails to the fence posts, I used the Ironridge
LFT-03-M1 slotted L-foot bracket. Screwed into the post with a 5/8 x 3
inch hot-dipped galvanized lag screw. Since a treated post can react badly
with an aluminum bracket, there needs to be some flashing between the post
and bracket. I used Shurtape PW-100 tape for that. I see no sign of
corrosion after 1 year.
The rest of the Ironridge system is a T-bolt that connects the rail to the
L-foot (part BHW-SQ-02-A1), and Ironridge solar panel fasteners
(UFO-CL-01-A1 and UFO-STP-40MM-M1). Also XR-10 end caps and wire clips.
Since the Ironridge hardware is not designed to hold a solar panel at a 90
degree angle, I was concerned that the panels might slide downward over
time. To help prevent that, I added some additional support brackets under
the bottom of the panels. So far, that does not seem to have been a problem
though.
I installed Aptos 370 watt solar panels on the fence. They are bifacial,
and while the posts block the back partially, there is still bifacial
gain on cloudy days. I left enough space under the solar panels to be able
to run a push mower under them.
Me standing in front of the solar fence at end of construction
I put pairs of posts next to one another, so each 7 foot segment of fence
had its own 2 posts. This is the least elegant part of this design, but
fitting 2 brackets next to one another on a single post isn't feasible.
I bolted the pairs of posts together with some spacers. A side benefit of
doing it this way is that treated lumber can warp as it dries, and this
prevented much twisting of the posts.
Using separate posts for each segment also means that the fence can
traverse a hill easily. And it does not need to be perfectly straight. In
fact, my fence has a 30 degree bend in the middle. This means it has both
south facing and south-west facing panels, so can catch the light for
longer during the day.
After building the fence, I noticed there was a slight bit of sway at the
top, since 9 feet of wooden post is not entirely rigid. My worry was that a
gusty wind could rattle the solar panels. While I did not actually observe
that happening, I added some diagonal back bracing for peace of mind.
view of rear upper corner of solar fence, showing back bracing connection
Inspecting the fence today, I find no problems after the first year. I hope
it will last 30 years, with the lifespan of the treated lumber
being the likely determining factor.
As part of my larger (and still ongoing) ground mount solar install, the
solar fence has consistently provided great power. The vertical orientation
works well at latitude 36. It also turned out that the back of the fence was
useful to hang conduit and wiring and solar equipment, and so it turned into
the electrical backbone of my whole solar field. But that's another story...
solar fence parts list
DebConf26 is already in the air in Argentina. Organizing DebConf26 gives us the
opportunity to talk about Debian in our country again. This is not the first
time that Debian has come here: Argentina previously hosted DebConf 8 in Mar
del Plata.
In August, Nattie Mayer-Hutchings and Stefano Rivera from the DebConf Committee
visited the venue where the next DebConf will take place. They came to Argentina
in order to see what it is like to travel from Buenos Aires to Santa Fe (the
venue of the next DebConf). In addition, they were able to observe the layout
and size of the classrooms and halls, as well as the infrastructure available at
the venue, which will be useful for the Video Team.
But before going to Santa Fe, on August 27th, we organized a meetup in
Buenos Aires at GCoop, where we hosted some talks:
¿Qué es Debian? (What is Debian?) - Pablo Gonzalez (sultanovich) / Emmanuel Arias
On August 28th, we had the opportunity to get to know the venue. We walked around
the city and, obviously, sampled some of the beers from Santa Fe.
On August 29th we met with representatives of the University and local government
who were all very supportive. We are very grateful to them for opening
their doors to DebConf.
In the afternoon we met some of the local free software community at an event we
held in ATE Santa Fe. The event included several talks:
¿Qué es Debian? (What is Debian?) - Pablo (sultanovich) / Emmanuel Arias
Ciberrestauradores: Gestores de basura electrónica (Cyber-restorers: managers of electronic waste) - Programa RAEES Acutis
Debian and DebConf (Stefano Rivera/Nattie Mayer-Hutchings)
Thanks to Debian Argentina, and all the people who will make DebConf26
possible.
Thanks to Nattie Mayer-Hutchings and Stefano Rivera for reviewing an earlier
version of this article.
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language and is
widely used by (currently) 1261 other packages on CRAN, downloaded 41.4 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 647 times according
to Google Scholar.
This version updates the 15.0.2-1
release from last week. Following fairly extensive email discussions
with CRAN, we are now
accelerating the transition to Armadillo 15. When C++14 or newer
is used (which after all is the default since R 4.1.0 released May 2021,
see WRE
Section 1.2.4), or when opted into, the newer Armadillo is selected.
If on the other hand either C++11 is still forced, or the legacy version
is explicitly selected (which currently one package at CRAN does), then
Armadillo 14.6.3 is selected.
Most packages will not see a difference and automatically switch to
the newer Armadillo. However, some packages will see one or two types of
warning. First, if C++11 is still actively selected via, for example,
CXX_STD, then CRAN will nudge a change to a newer
compilation standard (as they have been doing for some time already).
Preferably the change should be to simply remove the constraint and let
R pick the standard based on its version and compiler availability.
These days that gives us C++17 in most cases; see WRE
Section 1.2.4 for details. (Some packages may need C++14 or C++17 or
C++20 explicitly and can also do so.)
Second, some packages may see a deprecation warning. Up until
Armadillo 14.6.3, the package suppressed these and you can still get
that effect by opting into that version by setting
-DARMA_USE_LEGACY. (However, this route will be sunset
eventually too.) But one really should update the code to the
non-deprecated version. In a large number of cases this simply means
switching from using arma::is_finite() (typically called on
a scalar double) to calling std::isfinite().
But there are some other cases, and we will help as needed. If you
maintain a package showing deprecation warnings, and are lost here and
cannot work out the conversion to current coding styles, please open an
issue at the RcppArmadillo repository (i.e. here) or in
your own repository and tag me. I will also reach out to the maintainers
of a smaller set of packages with more than one reverse dependency.
A few small changes have been made to internal packaging and
documentation, along with a small synchronization with upstream covering
two commits since the 15.0.2 release, as well as a link to the ldlasb2
repository and its demonstration regarding some ill-stated benchmarks
done elsewhere.
The detailed changes since the last CRAN release follow.
Changes in
RcppArmadillo version 15.0.2-2 (2025-09-18)
Here, in classic Goerzen deep dive fashion, is more information than you knew you wanted about a topic you've probably never thought of. I found it pretty interesting, because it took me down a rabbit hole of subsystems I've never worked with much and a mishmash of 1980s and 2020s tech.
I had previously tried and failed to get an actual 80x25 Linux console, but I've since figured it out!
This post is about the Linux text console, not X or Wayland. We're going to get the console right without using those systems. These instructions are for Debian trixie, but should be broadly applicable elsewhere also. The end result can look like this:
(That's a WiFi Retromodem that I got at VCFMW last year, in the Hayes modem case.)
What's a pixel?
How would you define a pixel these days? Probably something like "a uniquely-addressable square dot in a two-dimensional grid."
In the world of VGA and CRTs, that was just a logical abstraction. We got an API centered around that because it was convenient. But, down the VGA cable and on the device, that's not what a pixel was.
A pixel, back then, was a time interval. On multisync monitors, which were common except in the very early days of VGA, the timings could be adjusted, producing logical pixels of different sizes. Those screens often had a maximum resolution, but not necessarily a native resolution in the sense that an LCD panel has one. Different timings produced different-sized pixels with equal clarity (or, on cheaper monitors, equal fuzziness).
A side effect of this was that pixels need not be square. And, in fact, in the standard DOS VGA 80x25 text mode, they weren't.
You might be seeing why DVI, DisplayPort, and HDMI replaced VGA for LCD monitors: with a VGA cable, you did a pixel-to-analog-timings conversion, then the display did a timings-to-pixels conversion, and this process could be a bit lossy. (Hence why you sometimes needed to fill the screen with an image and push the center button on those older LCD screens)
(Note to the pedantically-inclined: yes, I am aware that I have simplified several things here; for instance, a color LCD pixel is made up of approximately 3 sub-dots of varying colors, things like color eInk displays have two pixel grids with different sizes of pixels layered atop each other, printers are another confusing thing altogether, and so on. MOST PEOPLE THINK OF A PIXEL AS A DOT THESE DAYS, OK?)
What was DOS text mode?
We think of this as the standard display: 80 columns wide and 25 rows tall. 80x25. By the time Linux came along, the standard Linux console was VGA text mode, something like the 4th incarnation of text modes on PCs (after CGA, MDA, and EGA). VGA also supported certain other sizes of characters, giving certain other text dimensions, but if I cover all of those, this will explode into a ridiculously more massive page than it already is.
So to display text on an 80x25 DOS VGA system, ultimately characters and attributes were written into the text buffer in memory. The VGA system then rendered it to the display as a 720x400 image (at 70Hz) with non-square pixels such that the result was approximately a 4:3 aspect ratio.
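The aspect-ratio arithmetic here is worth making explicit. A quick sketch (plain Python, just restating the numbers above):

```python
# 720x400 logical pixels rendered onto a ~4:3 display implies non-square
# pixels: display aspect ratio = pixel aspect ratio * (720/400).
dar = 4 / 3                  # target display aspect ratio
storage = 720 / 400          # ratio of the logical pixel grid (1.8)
par = dar / storage          # width/height of one logical pixel
print(round(par, 3))         # 0.741 -> pixels are taller than they are wide
```

So each logical pixel is roughly three-quarters as wide as it is tall, which is exactly why rendering the same grid with square pixels looks squashed.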
The font used for this rendering was a bitmapped one using 8x16 cells. You might do some math here and point out that 8 * 80 is only 640, and you'd be correct. The fonts were 8x16 but the rendered cells were 9x16. The extra pixel was normally used for spacing between characters. However, in line graphics mode, characters 0xC0 through 0xDF repeated the 8th column in the position of the 9th, allowing the continuous line-drawing characters we're used to from TUIs.
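That 9th-column rule is simple enough to state in code. A minimal sketch (my own helper, not from the original post), treating one font row as a byte with the leftmost pixel in the most significant bit:

```python
def expand_row(row_byte, codepoint):
    """Expand an 8-pixel font row to the 9-pixel cell VGA actually renders.

    In line-graphics mode, characters 0xC0-0xDF repeat the 8th (rightmost)
    column into the 9th; every other character gets a blank 9th column,
    which acts as inter-character spacing.
    """
    bits = [(row_byte >> (7 - i)) & 1 for i in range(8)]  # MSB = leftmost
    ninth = bits[7] if 0xC0 <= codepoint <= 0xDF else 0
    return bits + [ninth]

# 0xC4 is CP437's horizontal line: all 9 columns stay set, so lines connect.
print(expand_row(0xFF, 0xC4))
# An ordinary glyph gets a blank spacing column instead.
print(expand_row(0xFF, ord("A")))
```

This one rule explains both failure modes described below: drop the 9th column entirely and letters touch; render it as always-blank and line-drawing characters show gaps.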
Problems rendering DOS fonts on modern systems
By now, you're probably seeing some of the issues we have rendering DOS screens on more modern systems. These aren't new at all; I remember some of them from back in the days when I ran OS/2, and I think I also saw them in various terminals and consoles on OS/2 and Windows.
Some issues you'd encounter would be:
Incorrect aspect ratio caused by using the original font and rendering it using 1:1 square pixels (resulting in a squashed appearance)
Incorrect aspect ratio for ANOTHER reason, caused by failing to render column 9, resulting in text that is overall too narrow
Characters appearing to be touching each other when they shouldn't (failing to render column 9; looking at you, dosbox)
Gaps between line drawing characters that should be continuous, caused by rendering column 9 as empty space in all cases
Character set issues
DOS was around long before Unicode was. In the DOS world, there were codepages that selected the glyphs for roughly the high half of the 256 possible characters. CP437 was the standard for the USA; others existed for other locations that needed different characters. On Unix, the USA pre-Unicode standard was Latin-1. Same concept, but with different character mappings.
Nowadays, just about everything is based on UTF-8. So, we need some way to map our CP437 glyphs into Unicode space. If we are displaying DOS-based content, we'll also need a way to map CP437 characters to Unicode for display later, and we need these maps to match so that everything comes out right. Whew.
So, let's get on with setting this up!
Selecting the proper video mode
As explained in my previous post, proper hardware support for DOS text mode is limited to x86 machines that do not use UEFI. Non-x86 machines, or x86 machines with UEFI, simply do not contain the necessary support for it. As these are now standard, most of the time, the text console you see on Linux is actually the kernel driving the video hardware in graphics mode, and doing the text rendering in software.
That's all well and good, but it makes it quite difficult to actually get an 80x25 console.
First, we need to be running at 720x400. This is where I ran into difficulty last time. I realized that my laptop's LCD didn't advertise any video modes other than its own native resolution. However, almost all external monitors will, and 720x400@70 is a standard VGA mode from way back, so it should be well-supported.
You need to find the Linux device name for your device. You can look at the possible devices with ls -l /sys/class/drm. If you also have a GUI, xrandr may help too. But in any case, each directory under /sys/class/drm has a file named modes, and if you cat them all, you will eventually come across one with a bunch of modes defined. Drop the leading card0 or whatever from the directory name, and that's your device. (Verify that 720x400 is in modes while you're at it.)
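The /sys walk just described can be scripted. Here is a small sketch (the function name and the ability to pass a different base directory are my additions, for testability):

```python
from pathlib import Path

def connectors_with_modes(base="/sys/class/drm"):
    """Map connector name -> advertised modes, skipping empty connectors.

    Connector directories look like card0-DP-1; dropping the leading
    'cardN-' yields the name the video= kernel parameter expects.
    """
    found = {}
    for modes_file in Path(base).glob("card*-*/modes"):
        modes = modes_file.read_text().split()
        if modes:
            name = modes_file.parent.name.split("-", 1)[1]
            found[name] = modes
    return found

# e.g. it might print {'DP-1': [..., '720x400', ...]}; you want to see
# your connector listed with 720x400 among its modes.
```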
Now, you're going to edit /etc/default/grub and add something like this to GRUB_CMDLINE_LINUX_DEFAULT:
video=DP-1:720x400@70
Of course, replace DP-1 with whatever your device is.
Now you can run update-grub and reboot. You should have a 720x400 display.
At first, I thought I had succeeded by using Linux's built-in VGA font with that mode. But it looked too tall. After noticing that repeated 0s were touching, I got suspicious about the missing 9th column in the cells. stty -a showed that my screen was 90x25, which is exactly what it would show if I was using 8x16 instead of 9x16 cells. Sooo... I need to prepare a 9x16 font.
Building it yourself
First, install some necessary software: apt-get install fontforge bdf2psf
Start by going to the Oldschool PC Font Pack Download page. Download oldschool_pc_font_pack_v2.2_FULL.zip and unpack it.
The file we're interested in is otb - Bm (linux bitmap)/Bm437_IBM_VGA_9x16.otb. Open it in fontforge by running fontforge Bm437_IBM_VGA_9x16.otb from that directory. When it asks if you will load the bitmap fonts, hit select all, then yes. Go to File -> Generate Fonts. Save as a BDF (no need for outlines), and use guess for resolution.
Now you have a file such as Bm437_IBM_VGA_9x16-16.bdf. Excellent.
Now we need to generate a Unicode map file. We will make sure this matches the system's by enumerating every character from 0x00 to 0xFF, converting it from CP437 to Unicode, and writing the appropriate map.
Here's a Python script to do that:
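(The script itself did not survive in this copy of the post; the following is my reconstruction of the step just described. It leans on Python's built-in CP437 codec, and it assumes bdf2psf's symbol-set format of one U+XXXX entry per glyph slot; the output name cp437.set is my choice. The resulting set file would then be fed to bdf2psf together with the generated BDF, with /dev/null as the equivalents file, to produce CP437-VGA.psf; see bdf2psf(1) for the exact invocation.)

```python
# Reconstruction (not the original script): enumerate all 256 CP437 code
# positions, convert each through Python's cp437 codec, and emit one
# "U+XXXX" symbol entry per glyph slot for bdf2psf.

def cp437_set_lines():
    lines = []
    for byte in range(256):
        # Note: Python maps 0x00-0x1F to the control codes rather than to
        # CP437's picture glyphs (smiley faces etc.), which is what a
        # console font map wants anyway.
        ch = bytes([byte]).decode("cp437")
        lines.append(f"U+{ord(ch):04X}")
    return lines

if __name__ == "__main__":
    with open("cp437.set", "w") as f:
        f.write("\n".join(cp437_set_lines()) + "\n")
```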
By convention, we normally store these files gzipped, so gzip CP437-VGA.psf.
You can test it on the console with setfont CP437-VGA.psf.gz.
Now copy this file into /usr/local/etc.
Activating the font
Now, edit /etc/default/console-setup. It should look like this:
# CONFIGURATION FILE FOR SETUPCON
# Consult the console-setup(5) manual page.
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="Lat15"
FONTFACE="VGA"
FONTSIZE="8x16"
FONT=/usr/local/etc/CP437-VGA.psf.gz
VIDEOMODE=
# The following is an example how to use a braille font
# FONT='lat9w-08.psf.gz brl-8x8.psf'
At this point, you should be able to reboot. You should have a proper 80x25 display! Log in and run stty -a to verify it is indeed 80x25.
Using and testing CP437
Part of the point of CP437 is to be able to access BBSs, ANSI art, and similar.
Now, remember, the Linux console is still in UTF-8 mode, so we have to translate CP437 to UTF-8, then let our font map translate it back to CP437. A weird trip, but it works.
Let's test it using the Textfiles ANSI art collection. In the artworks section, I randomly grabbed a file near the top: borgman.ans. Download that, and display with:
clear; iconv -f CP437 -t UTF-8 < borgman.ans
You should see something similar to, but actually more accurate than, the textfiles PNG rendering of it, which you'll note has an incorrect aspect ratio and some rendering issues. I spot-checked with a few others and they seemed to look good. belinda.ans in particular tries quite a few characters and should give you a good sense of whether it is working.
Use with interactive programs
That's all well and good, but you're probably going to want to actually use this with some interactive program that expects CP437. Maybe Minicom, Kermit, or even just telnet?
For this, you'll want to apt-get install luit. luit maps CP437 (or any other encoding) to UTF-8 for display, and then of course the Linux console maps UTF-8 back to the CP437 font.
Here s a way you can repeat the earlier experiment using luit to run the cat program:
clear; luit -encoding CP437 cat borgman.ans
You can run any command under luit. You can even run luit -encoding CP437 bash if you like. If you do this, it is probably a good idea to follow my instructions on generating locales in my post on serial terminals, and then, within luit, set LANG=en_US.IBM437. But note especially that you can run programs like minicom and others for accessing BBSs under luit.
Final words
This gave you a nice DOS-type console. Although it doesn't have glyphs for many codepoints, it does run in UTF-8 mode and therefore is compatible with modern software.
You can achieve greater compatibility with more UTF-8 codepoints with the DOS font, at the expense of accuracy of character rendering (especially for the double-line drawing characters) by using /usr/share/bdf2psf/standard.equivalents instead of /dev/null in the bdf2psf command.
Or you could go for another challenge, such as using the DEC vt-series fonts for coverage of ISO-8859-1. But just using fonts extracted from DEC ROM won't work properly, because DEC terminals had even more strangeness going on than DOS fonts.