Valhalla's Things: Honeycomb shirt
Tags: madeof:atoms, craft:sewing, FreeSoftWear, GNU Terry Pratchett





Series: Finder Chronicles #3
Publisher: DAW
Copyright: 2021
ISBN: 0-7564-1516-0
Format: Kindle
Pages: 458
SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES AND INCONVENIENCES FOR COMMENDABLE SUMS. "Inconveniences sound just like my thing," Fergus said. "You two want to wait in the car while I check it out?" "Oh, no, I am not missing this," Isla said, and got out of the podcar. "I am uncertain," Ignatio said. "I would like some curiouses, but not any inconveniences. Please proceed while I decide, and if there is also murdering or calamity or raisins, you will yell right away, yes?"

Also, if your story setup requires a partly-understood alien artifact that the protagonist can get some explanations for but not have the mystery neatly solved for them, Ignatio's explanations are perfect.

"It is a door. A doorbell. A... peephole? A key. A control light. A signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map. A channel. A way," Ignatio said. "It is a problem to explain. To say a doorkey is best, and also wrong. If put together, a path may be opened." "And then?" "And then the bad things on the other side, who we were trying to lock away, will be free to travel through."

Second, the thing about Palmer's writing that continues to impress me is her ability to take a standard science fiction plot, one whose variations I've read probably dozens of times before, and still make it utterly engrossing. This book is literally a fetch quest. There are a bunch of scattered fragments, Fergus has to find them and keep them from being assembled, various other people are after the same fragments, and Fergus either has to get there first or get the fragments back from them. If you haven't read this book before, you've played the video game or watched the movie. The threat is basically a Stargate SG-1 plot. And yet, this was so much fun.

The characters are great. This book leans less on found family than the last one and a bit more on actual family. When I started reading this series, Fergus felt a bit bland in the way that adventure protagonists sometimes can, but he's fleshed out nicely as the series goes along. He's not someone who tends to indulge in big emotions, but now the reader can tell that's because he's the kind of person who finds things to do in order to keep from dwelling on things he doesn't want to think about. He's unflappable in a quietly competent way while still having a backstory and emotional baggage and a rich inner life that the reader sees in glancing fragments.

We get more of Fergus's backstory, particularly around Mars, but I like that it's told in anecdotes and small pieces. The last thing Fergus wants to do is wallow in his past trauma, so he doesn't and finds something to do instead. There's just enough detail around the edges to deepen his character without turning the book into a story about Fergus's emotions and childhood. It's a tricky balancing act that Palmer handles well.

There are also more sentient ships, and I am so in favor of more sentient ships.

"When I am adding a new skill, I import diagnostic and environmental information specific to my platform and topology, segregate the skill subroutines to a dedicated, protected logical space, run incremental testing on integration under all projected scenarios and variables, and then when I am persuaded the code is benevolent, an asset, and provides the functionality I was seeking, I roll it into my primary processing units," Whiro said. "You cannot do any of that, because if I may speak in purely objective terms you may incorrectly interpret as personal, you are made of squishy, unreliable goo."

We get the normal pieces of a well-done fetch quest: wildly varying locations, some great local characters (the US-based trauma surgeons on vacation in Australia were my favorites), and believable antagonists. There are two other groups looking for the fragments, and while one of them is the standard villain in this sort of story, the other is an apocalyptic cult whose members Fergus mostly feels sorry for and who add just the right amount of surreality to the story. The more we find out about them, the more believable they are, and the more they make this world feel like realistic messy chaos instead of the obvious (and boring) good versus evil patterns that a lot of adventure plots collapse into.

There are things about this book that I feel like I should be criticizing, but I just can't. Fetch quests are usually synonymous with lazy plotting, and yet it worked for me. The way Fergus gets dumped into the middle of this problem starts out feeling as arbitrary and unmotivated as some video game fetch quest stories, but by the end of the book it starts to make sense. The story could arguably be described as episodic and cliched, and yet I was thoroughly invested. There are a few pacing problems at the very end, but I was too invested to care that much. This feels like a book that's better than the sum of its parts.

Most of the story is future-Earth adventure with some heist elements. The ending goes in a rather different direction but stays at the center of the classic science fiction genre. The Scavenger Door reaches a satisfying conclusion, but there are a ton of unanswered questions that will send me on to the fourth (and reportedly final) novel in the series shortly.

This is great stuff. It's not going to win literary awards, but if you're in the mood for some classic science fiction with fun aliens and neat ideas, but also benefiting from the massive improvements in characterization the genre has seen in the past forty years, this series is perfect. Highly recommended.

Followed by Ghostdrift.

Rating: 9 out of 10
Publisher: Erewhon
Copyright: November 2024
ISBN: 1-64566-099-0
Format: Kindle
Pages: 443
Know I adore you. Look out over the glow. The cities sundered, their machines inverted, mountains split and prairies blazing, that long foreseen Hereafter crowning fast. This calamity is a promise made to you. A prayer to you, and to your shadow which has become my second self, tucked behind my eye and growing in tandem with me, pressing outwards through the pupil, the smarter, truer, almost bursting reason for our wrath. Do not doubt me. Just look. Watch us rise as the sun comes up over the beauty. The future stains the bleakness so pink. When my violence subsides, we will have nothing, and be champions.

Marney Honeycutt is twelve years old, a factory worker, and lustertouched. She works in the Yann I. Chauncey Ichorite Foundry in Ignavia City, alongside her family and her best friend, shaping the magical metal ichorite into the valuable industrial products of a new age of commerce and industry. She is the oldest of the lustertouched, the children born to factory workers and poisoned by the metal. It has made her allergic, prone to fits at any contact with ichorite, but also able to exert a strange control over the metal if she's willing to pay the price of spasms and hallucinations for hours afterwards.

As Metal from Heaven opens, the workers have declared a strike. Her older sister is the spokesperson, demanding shorter hours, safer working conditions, and an investigation into the health of the lustertouched children. Chauncey's response is to send enforcer snipers to kill the workers, including the entirety of her family.
The girl sang, "Unalone toward dawn we go, toward the glory of the new morning." An enforcer shot her in the belly, and when she did not fall, her head.

Marney survives, fleeing into the city, swearing an impossible personal revenge against Yann Chauncey. An act of charity gets her a ticket on a train into the countryside. The woman who bought her ticket is a bandit who is on the train to rob it. Marney's ability to control ichorite allows her to help the bandits in return, winning her a place with the Highwayman's Choir who have been preying on the shipments of the rich and powerful and then disappearing into the hills.

The Choir's secret is that the agoraphobic and paranoid Baron of the Fingerbluffs is dead and has been for years. He was killed by his staff, Hereafterist idealists, who have turned his remote territory into an anarchist commune and haven for pirates and bandits. This becomes Marney's home and the Choir becomes her family, but she never forgets her oath of revenge or the childhood friend she left behind in the piles of bodies and to whom this story is narrated.

First, Clarke's writing is absolutely gorgeous.
We scaled the viny mountain jags at Montrose Barony's legal edge, the place where land was and wasn't Ignavia, Royston, and Drustland alike. There was a border but it was diffuse and hallucinatory, even more so than most. On legal papers and state maps there were harsh lines that squashed topography and sanded down the mountains into even hills in planter's rows, but here among the jutting rocks and craggy heather, the ground was lineless.

The rhythm of it, the grasp of contrast and metaphor, the word choice! That climactic word "lineless," with its echo of limitless. So good.

Second, this is the rarest of books: a political fantasy that takes class and religion seriously and uses them for more than plot drivers. This is not at all our world, and the technology level is somewhat ambiguous, but the parallels to the Gilded Age and Progressive Era are unmistakable. The Hereafterists that Marney joins are political anarchists, not in the sense of alternative governance structures and political theory sanitized for middle-class liberals, but in the sense of Emma Goldman and Peter Kropotkin. The society they have built in the Fingerbluffs is temporary, threatened, and contingent, but it is sincere and wildly popular among the people who already lived there.

Even beyond politics, class is a tangible force in this book. Marney is a factory worker and the child of factory workers. She barely knows how to read and doesn't magically learn over the course of the book. She has friends who are clever in the sense rewarded by politics and nobility, who navigate bureaucracies and political nuance, but that is not Marney's world. When, towards the end of the book, she has to deal with a gathering of high-class women, the contrast is stark, and she navigates that gathering only by being entirely unexpected.

Perhaps the best illustration of the subtlety of this is the terminology in the book for lesbian. Marney is a crawly, which is a slur thrown at people like her (and one of the rare fictional slurs that work exactly as the author intended) but is also simply what she calls herself. Whether or not it functions as a slur depends on context, and the context is never hard to understand. The high-class lesbians she meets later are Lunarists, and react to crawly as a vile and insulting word. They use language to separate themselves from both the insult and from the social class that uses it. Language is an indication of culture and manners and therefore of morality, unlike deeds, which admit endless justifications.
Conversation was fleeting. Perdita managed with whomever stood near her, chipper about every prettiness she saw, the flitting butterflies, the dappled light between the leaves, the lushness and the fragrance of untamed land, and her walking companions took turns sharing in her delight. It was infectious, how happy she was. She was going to slaughter millions. She was going to skip like this all the while.

The handling of religion is perhaps even better. Marney was raised a Tullian, which sits alongside two other fleshed-out fictional religions and sketches of several more. Tullians tend to be conservative and patriarchal, and Marney has a realistically complicated relationship with faith: sticking with some Tullian worship practices and gestures because they're part of who she is, feeling a kinship to other Tullians, discarding beliefs that don't fit her, and revising others. Every major religion has a Hereafterist spin or reinterpretation that upends or reverses the parts of the religion that were used to prop up the existing social order and brings it more in line with Hereafterist ideals. We see the Tullian Hereafterist variation in detail, and as someone who has studied a lot of methods of reinterpreting Christianity, I was impressed by how well Clarke invents both a belief system and its revisionist rewrite. This is exactly how religions work in human history, but one almost never sees this subtlety in fantasy novels.

Marney's allergy to ichorite causes her internal dialogue to dissolve into hallucinatory synesthesia when she's manipulating or exposed to it. Since that's most of the book, substantial portions read like drug trips with growing body horror. I normally hate this type of narration, so it's a sign of just how good Clarke's writing is that I tolerated it and even enjoyed parts. It helps that the descriptions are irreverent and often surprising, full of unexpected metaphors and sudden turns. It's very hard not to quote paragraph after paragraph of this book.

Clarke is also doing a lot with gender that I don't feel qualified to comment in detail on, but it would not surprise me to see this book in the Otherwise Award recommendation list. I can think of three significant male characters, all of whom are well-done, but every other major character is female by at least some gender definition. Within that group, though, is huge gender diversity of the complicated and personal type that doesn't force people into defined boxes. Marney's sexuality is similarly unclassified and sometimes surprising. My one complaint is that I thought the sex scenes (which, to warn, are often graphic) fell into the literary fiction trap of being described so closely and physically that it didn't feel like anyone involved was actually enjoying themselves. (This is almost certainly a matter of personal taste.)

I had absolutely no idea how Clarke was going to end this book, and the last couple of chapters caught me by surprise. I'm still not sure what I think about the climax. It's not the ending that I wanted, but one of the merits of this book is that it never did what I thought I wanted and yet made me enjoy the journey anyway. It is, at least, a genre ending, not a literary ending: The reader gets a full explanation of what is going on, and the setting is not static the way that it so often is in literary fiction. The characters can change the world, for good or for ill.
The story felt frustrating and incomplete when I first finished it, but I haven't stopped thinking about this book and I think I like the shape of it a bit more now. It was certainly unexpected, at least by me. Clarke names Dhalgren as one of their influences in the acknowledgments, and yes, Metal from Heaven is that kind of book. This is the first 2024 novel I've read that felt like the kind of book that should be on award shortlists. I'm not sure it was entirely successful, and there are parts of it that I didn't like or that weren't for me, but it's trying to do something different and challenging and uncomfortable, and I think it mostly worked. And the writing is so good.
She looked like a mythic princess from the old woodcuts, who ruled nature by force of goodness and faith and had no legal power.

Metal from Heaven is not going to be everyone's taste. If you do not like literary fantasy, there is a real chance that you will hate this. I am very glad that I read it, and also am going to take a significant break from difficult books before I tackle another one. But then I'm probably going to try the Scapegracers series, because Clarke is an author I want to follow.

Content notes: Explicit sex, including sadomasochistic sex. Political violence, mostly by authorities. Murdered children, some body horror, and a lot of serious injuries and death.

Rating: 8 out of 10
Publisher: Tachyon
Copyright: 2024
ISBN: 1-61696-415-4
Format: Kindle
Pages: 394
Note that upstream Leapp expects a Red Hat environment with things like /etc/rhsm in place, which won't exist on systems not subscribed to Red Hat, so it needs a few tweaks to run on plain CentOS Stream.
I've used it to upgrade virt01.conova.theforeman.org to CentOS Stream 9.
I've also used it to upgrade a server at home that's responsible for running important containers like Home Assistant and UniFi.
So it's absolutely battle tested and production grade! It's also hungry for kittens.
As mentioned above, you can't just use upstream Leapp, but I have a Copr: evgeni/leapp.
# dnf copr enable evgeni/leapp
# dnf install leapp leapp-upgrade-el8toel9
# vim /etc/leapp/files/leapp_upgrade_repositories.repo
[c9-baseos]
name=CentOS Stream $releasever - BaseOS
metalink=https://mirrors.centos.org/metalink?repo=centos-baseos-9-stream&arch=$basearch&protocol=https,http
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1

[c9-appstream]
name=CentOS Stream $releasever - AppStream
metalink=https://mirrors.centos.org/metalink?repo=centos-appstream-9-stream&arch=$basearch&protocol=https,http
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1
Note that the $stream substitution is not used, as Leapp doesn't override that and you'd end up with CentOS Stream 8 repos again.
Once all that is in place, we can call leapp preupgrade
and let it analyze the system.
Ideally, the output will look like this:
# leapp preupgrade
============================================================
REPORT OVERVIEW
============================================================

Reports summary:
    Errors:                 0
    Inhibitors:             0
    HIGH severity reports:  0
    MEDIUM severity reports: 0
    LOW severity reports:   3
    INFO severity reports:  3

Before continuing consult the full report:
    A report has been generated at /var/log/leapp/leapp-report.json
    A report has been generated at /var/log/leapp/leapp-report.txt

============================================================
END OF REPORT OVERVIEW
============================================================
AllowZoneDrifting Is Unsupported
EL7 and EL8 shipped with AllowZoneDrifting=yes, but since EL9 this is not supported anymore.
As this can potentially break the networking of the system, the upgrade gets inhibited.
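Something like this should clear the inhibitor (a sketch of my own, assuming the stock /etc/firewalld/firewalld.conf location):
# sed -i 's/^AllowZoneDrifting=.*/AllowZoneDrifting=no/' /etc/firewalld/firewalld.conf
# grep AllowZoneDrifting /etc/firewalld/firewalld.conf
AllowZoneDrifting=no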
Newest installed kernel not in use
Admit it, you also don't reboot into every new kernel available!
Well, Leapp won't let that pass and inhibits the upgrade.
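The remedy is as boring as it sounds - compare what's running with what's installed, then reboot (my sketch):
# uname -r
# rpm -q --last kernel-core | head -n1
# reboot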
Cannot perform the VDO check of block devices
In EL8 there are two ways to manage VDO: using the dedicated vdo
tool and via LVM.
If your system uses LVM (it should!) but not VDO, you probably don't have the vdo
package installed.
But then Leapp can't check if your LVM devices really aren't VDO without the vdo
tooling and will inhibit the upgrade.
So you gotta install vdo for it to find out that you don't use VDO.
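In practice that just means installing the package and letting Leapp re-check:
# dnf install vdo
# leapp preupgrade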
LUKS encrypted partition detected
Yeah. Sorry.
Using LUKS? Straight into the inhibit corner!
But hey, if you don't use LUKS for /, you can probably get away with deleting the inhibitwhenluks actor.
That worked for me, but remember the kittens!
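For reference, something along these lines should do it - the actor's location can differ between leapp-repository versions, so locate it first:
# find /usr/share/leapp-repository -type d -name 'inhibitwhenluks'
# rm -rv <path printed above>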
Really upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp
The headings are getting silly, huh?
Anyway, once leapp preupgrade
is happy and doesn't throw any inhibitors anymore,
the actual (real?) upgrade can be done by calling leapp upgrade
.
This will download all necessary packages and create an intermediate initramfs that contains all the things needed for the upgrade and ask you to reboot.
Once booted, the upgrade itself takes somewhere between 5 and 10 minutes.
Then another minute or 5 to relabel your disks with the new SELinux policy.
And three reboots (into the upgrade initramfs, into SELinux relabel, into real OS) of a ProLiant DL325 - 5 minutes each?
And then for good measure another one, to flip SELinux from permissive to enforcing.
Are we done yet? Nope.
There are a few post-upgrade tasks you get to do yourself.
Yes, the switching of SELinux back to enforcing
is one of them.
Please don't forget it.
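If you need a reminder, flipping it back looks something like this (assuming the upgrade left SELINUX=permissive in /etc/selinux/config):
# sed -i 's/^SELINUX=permissive/SELINUX=enforcing/' /etc/selinux/config
# setenforce 1
# getenforce
Enforcing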
Using the system after the upgrade
A customer once said "We're not running those systems for the sake of running systems, but for the sake of running some application on top of them".
This is very true.
libvirt doesn't support Spice/QXL
In EL9, support for Spice/QXL was dropped, so if you try to boot a VM using it, libvirt will nicely error out with
Error starting domain: unsupported configuration: domain configuration does not support video model 'qxl'
This can't be fixed from virt-manager (at least the one in Fedora 39), as removing/fixing one part requires applying the new configuration, which is still invalid.
So virsh edit <vm>
it is!
Look for entries like

<channel type='spicevmc'>
  <target type='virtio' name='com.redhat.spice.0'/>
  <address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<graphics type='spice' autoport='yes'>
  <listen type='address'/>
</graphics>
<audio id='1' type='spice'/>
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
<redirdev bus='usb' type='spicevmc'>
  <address type='usb' bus='0' port='2'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
  <address type='usb' bus='0' port='3'/>
</redirdev>

and replace them with something like:

<graphics type='vnc' port='-1' autoport='yes'>
  <listen type='address'/>
</graphics>
<audio id='1' type='none'/>
<video>
  <model type='cirrus' vram='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
Some of my containers pull their images from ghcr.io.
After the CentOS Stream 9 upgrade (which included an upgrade to Podman 5), pulls stopped working with authentication/permission errors.
No idea what exactly happened, but a simple podman login
fixed this issue quickly.
$ echo ghp_token | podman login ghcr.io -u <user> --password-stdin
shim has an el8 tag
One of the documented post-upgrade tasks is to verify that no EL8 packages are installed, and to remove those if there are any.
However, when you do this, you'll notice that the shim-x64
package has an EL8 version: shim-x64-15-15.el8_2.x86_64
.
That's because the same build is used in both CentOS Stream 8 and CentOS Stream 9. Confusing, but it should really not be uninstalled if you want the machine to boot ;-)
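The check itself is a simple query - the exact filter here is my own sketch, just remember to leave shim-x64 alone:
# rpm -qa | grep '\.el8' | grep -v shim-x64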
Are we done yet?
Yes! That's it. Enjoy your CentOS Stream 9!
Adulthood is saying, 'But after this week things will slow down a bit' over and over until you die.

I can relate! With every task, crisis or deadline that appears, I think that once this is over, I'll have some more breathing space to get back to non-urgent, but important tasks. "Bits from the DPL" was something I really wanted to get right this last term, and I clearly failed spectacularly. I have two long Bits from the DPL drafts that I never finished, and I tended to prioritise the problems of the day over communication. With all the hindsight I have, I'm not sure which is better to prioritise; I do rate communication and transparency very highly and this is really the top thing that I wish I could've done better over the last four years.

On that note, thanks to the people who provided me with some kind words when I've mentioned this to them before. They pointed out that there are many other ways to communicate and be in touch with the community, and they mentioned that they thought that I did a good job with that.

Since I'm still on communication, I think we can all learn to be more effective at it, since it's really so important for the project. Every time I publicly spoke about us spending more money, we got more donations. People out there really like to see how we invest funds into Debian, instead of just letting them heap up. DSA just spent a nice chunk of money on hardware, but we don't have very good visibility on it. It's one thing having it on a public line item in SPI's reporting, but it would be much more exciting if DSA could provide a write-up on all the cool hardware they're buying and what impact it would have on developers, and post it somewhere prominent like debian-devel-announce, Planet Debian or Bits from Debian (from the publicity team). I don't want to single out DSA here; it's difficult and affects many other teams. The Salsa CI team also spent a lot of resources (time and money wise) to extend testing to AMD GPUs and other AMD hardware. It's fantastic and interesting work, and really more people within the project and in the outside world should know about it! I'm not going to push my agendas onto the next DPL, but I hope that they continue to encourage people to write about their work, and hopefully at some point we'll build enough excitement in doing so that it becomes a more normal part of our daily work.

Founding Debian as a standalone entity

This was my number one goal for the project this last term, which was a carried-over item from my previous terms. I'm tempted to write everything out here, including the problem statement and our current predicaments, what kind of ground work needs to happen, likely constitutional changes that need to happen, and the nature of the GR that would be needed to make such a thing happen, but if I start with that, I might not finish this mail. In short, I 100% believe that this is still a very high ranking issue for Debian, and perhaps after my term I'd be in a better position to spend more time on this (hmm, is this an instance of "the grass is always greener on the other side", or "next week will go better until I die"?). Anyway, I'm willing to work with any future DPL on this, and perhaps it can in itself be a delegation tasked to properly explore all the options and write up a report for the project that can lead to a GR. Overall, I'd rather have us take another few years and do this properly, rather than rush into something that is again difficult to change afterwards.
So while I very much wish this could've been achieved in the last term, I can't say that I have any regrets here either.

My terms in a nutshell

COVID-19 and Debian 11 era

My first term in 2020 started just as the COVID-19 pandemic became known to be spreading globally. It was a tough year for everyone, and Debian wasn't immune to its effects either. Many of our contributors got sick, some lost loved ones (my father passed away in March 2020 just after I became DPL), some lost their jobs (or other earners in their household did) and the effects of social distancing took a mental and even physical health toll on many. In Debian, we tend to do really well when we get together in person to solve problems, and when the in-person DebConf20 was cancelled, we understood that it was necessary, but it was still more bad news in a year that already had too much of it.

I can't remember if there was ever any kind of formal choice or discussion about this at any time, but the DebConf video team just kind of organically and spontaneously became the orga team for an online DebConf, and that led to our first ever completely online DebConf. This was great on so many levels. We got to see each other's faces again, even though it was on a screen. We had some teams talk to each other face to face for the first time in years, even though it was just on a Jitsi call. It had a lasting cultural change in Debian; some teams still have video meetings now where they didn't do that before, and I think it's a good supplement to our other methods of communication.

We also had a few online Mini-DebConfs that were fun, but DebConf21 was also online, and by then we had all developed online conference fatigue, and while it was another good online event overall, it did start to feel a bit like a zombieconf. After that, we had some really nice events from the Brazilians, but no big global online community events again. In my opinion online MiniDebConfs can be a great way to develop our community and we should spend some further energy on this, but hey! This isn't a platform, so let me back out of talking about the future as I see it...

Despite all the adversity that we faced together, the Debian 11 release ended up being quite good. It happened about a month or so later than we ideally would've liked, but it was a solid release nonetheless. It turns out that for quite a few people, staying inside for a few months to focus on Debian bugs was quite productive, and Debian 11 ended up being a very polished release.

During this time period we also had to deal with a previous Debian Developer who was expelled for his poor behaviour in Debian, and who continued to harass members of the Debian project and other free software communities after his expulsion. This ended up being quite a lot of work since we had to take legal action to protect our community, and eventually also get the police involved. I'm not going to give him the satisfaction of spending too much time talking about him, but you can read our official statement regarding Daniel Pocock here: https://www.debian.org/News/2021/20211117

In late 2021 and early 2022 we also discussed our general resolution process, and had two subsequent votes to address some issues that had affected past votes.

In my first term I addressed our delegations that were a bit behind; by the end of my last term all delegation requests are up to date. There's still some work to do, but I'm feeling good that I get to hand this over to the next DPL in a very decent state.
Delegation updates can be very deceiving: sometimes a delegation is completely re-written and it was just 1 or 2 hours of work; other times, a delegation update can contain one line that has changed, or a change in one team member, that was the result of days' worth of discussion and hashing out differences.

I also received quite a few requests either to host a service, or to pay a third party directly for hosting. This was quite an admin nightmare: it either meant we had to manually do monthly reimbursements to someone, or have our TOs create accounts/agreements at the multiple providers that people use. So, after talking to a few people about this, we founded the DebianNet team (we could've admittedly chosen a better name, but that can happen later on) for providing hosting at two different hosting providers that we have agreements with, so that people who host things under debian.net have an easy way to host them, and at the same time Debian also has more control if a site maintainer goes MIA. More info: https://wiki.debian.org/Teams/DebianNet

You might notice some OpenStack mentioned there; we had some intention to set up a Debian cloud for hosting these things, which could also be used for other additional Debiany things like archive rebuilds, but these have so far fallen through. We still consider it a good idea and hopefully it will work out some other time (if you're a large company who can sponsor a few racks and servers, please get in touch!)

DebConf22 and Debian 12 era

DebConf22 was the first time we returned to an in-person DebConf. It was a bit smaller than our usual DebConf - understandably so, considering that there were still COVID risks, and people who were at high risk or who had family with high risk factors did the sensible thing and stayed home.

After watching many MiniDebConfs online, I also attended my first ever MiniDebConf in Hamburg. It still feels odd typing that; it feels like I should've been at one before, but my location makes attending them difficult (on a side note, a few of us are working on bootstrapping a South African Debian community and hopefully we can pull off a MiniDebConf in South Africa later this year).

While I was at the MiniDebConf, I gave a talk where I covered the evolution of firmware, from the simple EPROMs that you'd find in old printers to the complicated firmware in modern GPUs that basically contains complete operating systems - complete with drivers for the device they're running on. I also showed my shiny new laptop, and explained that it's impossible to install that laptop without non-free firmware (you'd get a black display on d-i or Debian live). Also, you couldn't even use an accessibility mode with audio, since even that depends on non-free firmware these days.

Steve, from the image building team, had said for a while that we need a GR to vote on this, and after more discussion at DebConf, I kept nudging him to propose the GR, and we ended up voting in favour of it. I do believe that someone out there should be campaigning for more free firmware (unfortunately in Debian we just don't have the resources for this), but I'm glad that we have the firmware included. In the end, the choice comes down to whether we still want Debian to be installable on mainstream bare-metal hardware.
At this point, I'd like to give a special thanks to the ftpmasters, the image building team and the installer team, who worked really hard to get the changes done that were needed in order to make this happen for Debian 12, and for being really proactive about the remaining niggles, which were solved by the time Debian 12.1 was released.

The included firmware contributed to Debian 12 being a huge success, but it wasn't the only factor. I had a list of personal peeves, and as the hard freeze hit, I lost hope that these would be fixed and made peace with the fact that Debian 12 would release with those bugs. I'm glad that lots of people proved me wrong and also proved that it's never too late to fix bugs; everything on my list got eliminated by the time the final freeze hit, which was great! We usually aim to have a release ready about 2 years after the previous release; sometimes there are complications during a freeze and it can take a bit longer. But due to the excellent co-ordination of the release team and heavy lifting from many DDs, the Debian 12 release happened 21 months and 3 weeks after the Debian 11 release. I hope the work from the release team continues to pay off so that we can achieve their goals of having shorter and less painful freezes in the future!

Even though many things were going well, the ongoing usr-merge effort highlighted some social problems within our processes. I started typing out the whole history of usrmerge here, but it's going to be too long for the purpose of this mail. Important questions that did come out of this are: should core Debian packages be team maintained? And how far should the CTTE really be able to override a maintainer? We had lots of discussion about this at DebConf22, but didn't make much concrete progress. I think that at some point we'll probably have a GR about package maintenance. Also, thank you to Guillem, who very patiently explained a few things to me (after probably having had to do so many times for others already), and to Helmut, who did the same during the MiniDebConf in Hamburg. I think all the technical and social issues here are fixable; it will just take some time and patience, and I have lots of confidence in everyone involved. UsrMerge wiki page: https://wiki.debian.org/UsrMerge

DebConf 23 and Debian 13 era

DebConf23 took place in Kochi, India. At the end of my Bits from the DPL talk there, someone asked me what the most difficult thing I had to do was during my terms as DPL. I answered that nothing particular stood out, and even the most difficult tasks ended up being rewarding to work on. Little did I know that my most difficult period of being DPL was just about to follow. During the day trip, one of our contributors, Abraham Raji, passed away in a tragic accident. There's really not anything anyone could've done to predict or stop it, but it was devastating to many of us, especially the people closest to him. Quite a number of DebConf attendees went to his funeral, wearing the DebConf t-shirts he designed as a tribute. It still haunts me when I think of his mother screaming "He was my everything! He was my everything!"; this was by a large margin the hardest day I've ever had in Debian, and I really wasn't ok for even a few weeks after that, and I think the hurt will be with many of us for some time to come. So, a plea again to everyone: please take care of yourself! There's probably more people that love you than you realise.
A special thanks to the DebConf23 team, who did a really good job despite all the uphills they faced (and there were many!).

As DPL, I think that planning for a DebConf is near impossible; all you can do is show up and just jump into things. I planned to work with Enrico to finish up something that will hopefully save future DPLs some time, and that is a web-based DD certificate creator, instead of having the DPL do so manually using LaTeX. It already mostly works; you can see the work so far by visiting
https://nm.debian.org/person/ACCOUNTNAME/certificate/
and replacing
ACCOUNTNAME
with your Debian account name, and if you're a DD, you
should see your certificate. It still needs a few minor changes and a
DPL signature, but at this point I think that will be finished up when
the new DPL start. Thanks to Enrico for working on this!
Since my first term, I've been trying to find ways to improve all our
accounting/finance issues. Tracking what we spend on things, and
getting an annual overview is hard, especially over 3 trusted
organisations. The reimbursement process can also be really tedious,
especially when you have to provide files in a certain order and
combine them into a PDF. So, at DebConf22 we had a meeting along with
the treasurer team and Stefano Rivera who said that it might be
possible for him to work on a new system as part of his Freexian work.
It worked out, and Freexian funded the development of the system since
then, and after DebConf23 we handled the reimbursements for the
conference via the new reimbursements site:
https://reimbursements.debian.net/
It's still early days, but over time it should be linked to all our TOs
and we'll use the same category codes across the board. So, overall,
our reimbursement process becomes a lot simpler, and also we'll be able
to get information like how much money we've spent on any category in
any period. It will also help us to track how much money we have
available or how much we spend on recurring costs. Right now that needs
manual polling from our TOs. So I'm really glad that this is a big
long-standing problem in the project that is being fixed.
For Debian 13, we're waving goodbye to the KFreeBSD and mipsel ports.
But we're also gaining riscv64 and loongarch64 as release
architectures! I have 3 different RISC-V based machines on my desk here
that I haven't had much time to work with yet, you can expect some blog
posts about them soon after my DPL term ends!
As Debian is a unix-like system, we're affected by the
Year 2038 problem, where systems that use 32-bit time in seconds
since 1970 run out of available time and will wrap back to 1970 or have
other undefined behaviour. A detailed wiki page explains how this
works in Debian, and currently we're going through a rather large
transition to make this possible.
I believe this is the right time for Debian to be addressing this,
we're still a bit more than a year away from the Debian 13 release, and
this provides enough time to test the implementation before 2038 rolls
along.
Of course, big complicated transitions with dependency loops that
cause chaos for everyone would still be too easy, so this past weekend
(which is a holiday period in most of the west due to Easter weekend)
has been filled with dealing with an upstream bug in xz-utils, where a
backdoor was placed in this key piece of software. An Ars Technica
article covers it quite well, so I won't go into all the details here. I
mention it because I want to give yet another special thanks to
everyone involved in dealing with this on the Debian side. Everyone,
from the ftpmasters to the security team and others involved, was
super calm and professional and made quick, high quality decisions.
This also led to the archive being frozen on Saturday; this is the
first time I've seen this happen since I've been a DD, but I'm sure
next week will go better!
Looking forward
It's really been an honour for me to serve as DPL. It might well be my
biggest achievement in my life. Previous DPLs range from prominent
software engineers to game developers, or people who have done things
like completing an Ironman, running other huge open source projects and being
part of big consortiums. Ian Jackson even authored dpkg and is now
working on the very interesting tag2upload service!
I'm a relative nobody, just someone who grew up as a poor kid in South
Africa, who just really cares about Debian a lot. And, above all, I'm
really thankful that I didn't do anything major to screw up Debian for
good.
Not unlike learning how to use Debian, and also becoming a Debian
Developer, I've learned a lot from this and it's been a really valuable
growth experience for me.
I know I can't possibly give all the thanks to everyone who deserves
it, so here's a big big thanks to everyone who has worked so hard and
put in many, many hours to making Debian better. I consider
you all heroes!
-Jonathan
Publisher: Princeton University Press
Copyright: 2006, 2008
Printing: 2008
ISBN: 0-691-13640-8
Format: Trade paperback
Pages: 278
Photo by Pixabay
Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8GB of RAM) the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:
2x less disk space used (1,417MB vs 2,940MB, including initrd)
3x less peak RAM usage for the initrd boot (68MB vs 204MB)
0.5x increase in download size (949MB vs 600MB)
2.5x faster initrd generation (4.5s vs 11.3s)
approximately the same total time (103s vs 98s, hardware dependent)
For minimal cloud images that do not install either linux-firmware or modules extra the numbers are:
1.3x less disk space used (548MB vs 742MB)
2.2x less peak RAM usage for initrd boot (27MB vs 62MB)
0.4x increase in download size (207MB vs 146MB)
Hopefully, the compromise of download size, relative to the disk space & initrd savings is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the likely best saving is to receive air-gapped updates or skip updates.
This was achieved by precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze - whilst leveraging the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. Complete gains are only possible to experience on Mantic and later.
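To illustrate the split-cpio idea with a rough sketch (this is not the actual initramfs-tools implementation; EARLY and MAIN stand in for hypothetical staging directories): the kernel unpacks concatenated cpio segments in order, so the already-Zstd-compressed modules and firmware can live in an uncompressed archive, while only the userspace portion gets compressed:
cd "$EARLY"    # modules & firmware, already .zst-compressed at package build time
find . | cpio -o -H newc > /tmp/early.cpio             # left uncompressed on purpose
cd "$MAIN"     # regular userspace portion of the initramfs
find . | cpio -o -H newc | zstd -19 -T0 > /tmp/main.cpio.zst
cat /tmp/early.cpio /tmp/main.cpio.zst > /tmp/initrd.img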
The discovered bugs in kernel module loading code likely affect systems that use the LoadPin LSM with kernel-space module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or you know, just use Ubuntu kernels as they do get fixes and features like these first.
The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself, Dimitri John Ledkov, ensuring the most optimal solution is implemented, everything lands on time, and even implementing portions of the final solution.
Hi, It's me, I am a Staff Engineer at Canonical and we are hiring https://canonical.com/careers.
Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, and bikeshedding all the things below:
This post describes how to deploy cilium with docker on a Linux system, using k3d or kind to test it as CNI and Service Mesh.

I wrote some scripts to do a local installation and evaluate cilium to use it at work (in fact we are using cilium on an EKS cluster now), but I thought it would be a good idea to share my original scripts in this blog just in case they are useful to somebody, at least for playing a little with the technology.
I'm not going to explain cilium in detail here; I'm providing some links for the reader interested in reading about it:
The scripts use cilium as CNI, metallb for BGP (I tested the cilium options, but I wasn't able to configure them right) and nginx as the ingress controller (again, I tried to use cilium but something didn't work either).

To be able to use the previous components some default options have been disabled on k3d and kind and, in the case of k3d, a lot of k3s options (traefik, servicelb, kubeproxy, network-policy, ...) have also been disabled to avoid conflicts.
To use the scripts we need to install cilium, docker, helm, hubble, k3d, kind, kubectl and tmpl in our system. After cloning the repository, the sbin/tools.sh script can be used to do that on a linux-amd64 system:
$ git clone https://gitea.mixinet.net/blogops/cilium-docker.git
$ cd cilium-docker
$ ./sbin/tools.sh apps
To deploy the clusters with k3d (for kind, replace k3d by kind) we can use the sbin/cilium-install.sh script as follows:
$ # Deploy first k3d cluster with cilium & cluster-mesh
$ ./sbin/cilium-install.sh k3d 1 full
[...]
$ # Deploy second k3d cluster with cilium & cluster-mesh
$ ./sbin/cilium-install.sh k3d 2 full
[...]
$ # The 2nd cluster-mesh installation connects the clusters
If we run cilium status after the installation we should get an output similar to the one seen on the following screenshot:
The relevant configuration files and scripts are the following:

tmpl/k3d-config.yaml: configuration to deploy the k3d cluster.
tmpl/kind-config.yaml: configuration to deploy the kind cluster.
tmpl/metallb-crds.yaml and tmpl/ippols.yaml: configurations for the metallb deployment.
tmpl/cilium.yaml: values to deploy the cilium using the helm chart.

To remove the deployment we can use the sbin/cilium-remove.sh script.
The cilium deployment needs to mount the bpffs on /sys/fs/bpf and cgroupv2 on /run/cilium/cgroupv2; that is done automatically on kind, but fails on k3d because the image does not include bash (see this issue).

To fix it we mount a script on all the k3d containers that is executed each time they are started (the script is mounted as /bin/k3d-entrypoint-cilium.sh because the /bin/k3d-entrypoint.sh script executes the scripts that follow the pattern /bin/k3d-entrypoint-*.sh before launching the k3s daemon). The source code of the script is available here.
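For reference, a minimal sketch of what such an entrypoint script has to do (the real script is the one linked above; this is just the gist):
#!/bin/sh
# mount the filesystems cilium expects inside each k3d node container
mount bpffs /sys/fs/bpf -t bpf
mount --make-shared /sys/fs/bpf
mkdir -p /run/cilium/cgroupv2
mount -t cgroup2 none /run/cilium/cgroupv2
mount --make-shared /run/cilium/cgroupv2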
When using k3d we have found issues with open files; they look like they are related to inotify (see this page on the kind documentation); adding the following to the /etc/sysctl.conf file fixed the issue:

# fix inotify issues with docker & k3d
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
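To apply the new values without a reboot we can run something like:
$ sudo sysctl -p /etc/sysctl.conf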
We are not using cilium as the cluster ingress yet (it did not work, so it is no longer enabled) and we are also ignoring the gateway-api for now.

Initially I tried to use the cilium cli to do all the installations, but I noticed that following that route the current version does not work right with hubble (it messes up the TLS support, there are some notes about the problems on this cilium issue), so we are deploying with helm right now.

The problem with the helm approach is that there is no official documentation on how to install the cluster mesh with it (there is a request for documentation here), so we are using the cilium cli for now and it looks like it does not break the hubble configuration.
To test cilium we have used some scripts & additional config files that are available on the test sub directory of the repository:

cilium-connectivity.sh: a script that runs the cilium connectivity test for one cluster or in multi cluster mode (for mesh testing). If we export the variable HUBBLE_PF=true the script executes the command cilium hubble port-forward before launching the tests.

http-sw.sh: Simple tests for cilium policies from the cilium demo; the script deploys the Star Wars demo application and allows us to add the L3/L4 policy or the L3/L4/L7 policy, test the connectivity and view the policies.

ingress-basic.sh: This test is for checking the ingress controller; it is prepared to work against cilium and nginx, but as explained before the use of cilium as an ingress controller is not working as expected, so the idea is to call it with nginx always as the first argument for now.

mesh-test.sh: Tool to deploy a global service on two clusters, change the service affinity to local or remote, enable or disable if the service is shared and test how the tools respond.

The cilium-connectivity.sh script executes the standard cilium tests:
$ ./test/cilium-connectivity.sh k3d 12
Monitor aggregation detected, will skip some flow validation steps
[k3d-cilium1] Creating namespace cilium-test for connectivity check...
[k3d-cilium2] Creating namespace cilium-test for connectivity check...
[...]
All 33 tests (248 actions) successful, 2 tests skipped, 0 scenarios skipped.
An example run of the http-sw.sh script:
kubectx k3d-cilium2 # (just in case)
# Create test namespace and services
./test/http-sw.sh create
# Test without policies (exhaust-port fails by design)
./test/http-sw.sh test
# Create and view L3/L4 CiliumNetworkPolicy
./test/http-sw.sh policy-l34
# Test policy (no access from xwing, exhaust-port fails)
./test/http-sw.sh test
# Create and view L7 CiliumNetworkPolicy
./test/http-sw.sh policy-l7
# Test policy (no access from xwing, exhaust-port returns 403)
./test/http-sw.sh test
# Delete http-sw test
./test/http-sw.sh delete
And an example run of the mesh-test.sh script:
# Create services on both clusters and test
./test/mesh-test.sh k3d create
./test/mesh-test.sh k3d test
# Disable service sharing from cluster 1 and test
./test/mesh-test.sh k3d svc-shared-false
./test/mesh-test.sh k3d test
# Restore sharing, set local affinity and test
./test/mesh-test.sh k3d svc-shared-default
./test/mesh-test.sh k3d svc-affinity-local
./test/mesh-test.sh k3d test
# Delete deployment from cluster 1 and test
./test/mesh-test.sh k3d delete-deployment
./test/mesh-test.sh k3d test
# Delete test
./test/mesh-test.sh k3d delete
The token can also be used for SSH authentication with an ed25519-sk key. Here is a guide I recommend if
you plan on setting those up with a Solo V2.
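For reference, generating such a key is a single command (OpenSSH 8.2 or later will ask you to touch the token during the process):
$ ssh-keygen -t ed25519-sk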
The Bad and the Ugly
Sadly, the Solo V2 is far from being a perfect project. First of all, since the
crowdfunding campaign is still being fulfilled, it is not currently
commercially available. Chances are you won't be able to buy one directly
before at least Q4 2023.
I've also hit what seems to be a pretty big firmware bug, or at least, one that
affects my use case quite a bit. Invoking gpg
crashes the Solo V2 completely
if you also have scdaemon
installed. Since scdaemon
is necessary to use
gpg
with an OpenPGP smartcard, this means you cannot issue any gpg
commands
(like signing a git commit...) while the Solo V2 is plugged in.
Any gpg command that queries scdaemon, such as gpg --edit-card or
gpg --sign foo.txt, times out after about 20 seconds and leaves the token
unresponsive to both touch and CLI commands.
The way to "fix" this issue is to make sure scdaemon
does not interact with
the Solo V2 anymore, using the reader-port
argument:
To find out which reader identifier scdaemon sees, run the following command:

$ echo "scd getinfo reader_list" | gpg-connect-agent --decode | awk '/^D/ {print $2}'
20A0:4211:FSIJ-1.2.15-43211613:0

Then edit ~/.gnupg/scdaemon.conf with the following line: reader-port $YOUR_TOKEN_ID. For example, in my case I have:

reader-port 20A0:4211:FSIJ-1.2.15-43211613:0

Finally, reload scdaemon:

$ gpgconf --reload scdaemon

I prefer reloading rather than killing scdaemon again, as I've had previous issues with it.
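A quick check afterwards should confirm scdaemon still talks to your OpenPGP smartcard:
$ gpg --card-status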
Which leads me to my biggest gripe so far: it seems SoloKeys (the company)
isn't really fixing firmware issues anymore and doesn't seem to care. The last
firmware release is about a year old.
Although people are experiencing serious bugs, there is no official way to
report them, which leads to issues being seemingly ignored. For
example, the NFC feature is apparently killing keys (!!!), but no one
from the company seems to have acknowledged the issue. The same goes for my
GnuPG bug, which was flagged in September 2022.
For a project that mainly differentiates itself from its (superior) competition
by being "Open", it's not a very good look... Although SoloKeys is still an
unprofitable open source side business of its creators, this kind of
attitude certainly doesn't help foster trust.
Conclusion
If you want to have a nice, durable FIDO2 token, I would suggest you get one of
the many models Yubico offers. They are similarly priced, are readily
commercially available, are part of a nice and maintained software ecosystem
and have more features than the Solo V2 (OpenPGP support being the one I miss
the most). Yubikeys are the practical option.
What they are not is open-source hardware, whereas the Solo V2 is. As
bunnie very well explained on his blog in 2019, it does not mean
the latter is inherently more trustable than the former, but it does make the
Solo V2 the ideological option. Knowledge is power and it should be free.
As such, tread carefully with SoloKeys, but don't dismiss them altogether: the
Solo V2 is certainly functioning well enough for me.
Thank you to Holger for organising this event yet again!