Search Results: "rra"

11 June 2023

Michael Prokop: What to expect from Debian/bookworm #newinbookworm

[Bookworm banner, copyright 2022 Juliette Taka]

Debian v12 with codename bookworm was released as the new stable release on 10th of June 2023. Similar to what we had with #newinbullseye and previous releases, now it's time for #newinbookworm! At several of my customers I was the driving force behind being well prepared for bookworm. As usual with major upgrades, there are some things to be aware of, so here I'm starting my public notes on bookworm, which might also be useful for other folks. My focus is primarily on server systems, looking at things from a sysadmin perspective.

Further reading: As usual, start at the official Debian release notes; make sure to especially go through "What's new in Debian 12" and "Issues to be aware of for bookworm".

Package versions: As a starting point, let's look at some selected packages and their versions in bullseye vs. bookworm as of 2023-02-10 (mainly having amd64 in mind):
Package bullseye/v11 bookworm/v12
ansible 2.10.7 2.14.3
apache 2.4.56 2.4.57
apt 2.2.4 2.6.1
bash 5.1 5.2.15
ceph 14.2.21 16.2.11
docker 20.10.5 20.10.24
dovecot 2.3.13 2.3.19
dpkg 1.20.12 1.21.22
emacs 27.1 28.2
gcc 10.2.1 12.2.0
git 2.30.2 2.39.2
golang 1.15 1.19
libc 2.31 2.36
linux kernel 5.10 6.1
llvm 11.0 14.0
lxc 4.0.6 5.0.2
mariadb 10.5 10.11
nginx 1.18.0 1.22.1
nodejs 12.22 18.13
openjdk 11.0.18 + 17.0.6 17.0.6
openssh 8.4p1 9.2p1
openssl 1.1.1n 3.0.8-1
perl 5.32.1 5.36.0
php 7.4+76 8.2+93
podman 3.0.1 4.3.1
postfix 3.5.18 3.7.5
postgres 13 15
puppet 5.5.22 7.23.0
python2 2.7.18 (gone!)
python3 3.9.2 3.11.2
qemu/kvm 5.2 7.2
ruby 2.7+2 3.1
rust 1.48.0 1.63.0
samba 4.13.13 4.17.8
systemd 247.3 252.6
unattended-upgrades 2.8 2.9.1
util-linux 2.36.1 2.38.1
vagrant 2.2.14 2.3.4
vim 8.2.2434 9.0.1378
zsh 5.8 5.9
Linux Kernel

The bookworm release ships a Linux kernel based on version 6.1, whereas bullseye shipped kernel 5.10. As usual there are plenty of changes in the kernel area, including better hardware support, and this might warrant a separate blog entry; see Kernelnewbies.org for further changes between kernel versions.

Configuration management

puppet's upstream sadly still doesn't provide packages for bookworm (see PA-4995), though Debian provides puppet-agent and puppetserver packages, and even puppetdb is back again; see the release notes for further information. ansible is also available and made it into bookworm with version 2.14.

Prometheus stack

The Prometheus server was updated from v2.24.1 to v2.42.0, and all the exporters that got shipped with bullseye are still around (in more recent versions, of course).

Virtualization

docker (v20.10.24), ganeti (v3.0.2-3), libvirt (v9.0.0-4), lxc (v5.0.2-1), podman (v4.3.1), openstack (Zed), qemu/kvm (v7.2) and xen (v4.17.1) are all still around. Vagrant is available in version 2.3.4, and Vagrant upstream already provides their packages for bookworm. If you're relying on VirtualBox, be aware that upstream doesn't provide packages for bookworm yet (see ticket 21524), but thankfully version 7.0.8-dfsg-2 is available from Debian/unstable (as of 2023-06-10). (VirtualBox hasn't been shipped with stable releases for quite some time due to lack of cooperation from upstream on security support for older releases, see #794466.)

rsync

rsync was updated from v3.2.3 to v3.2.7, and we got a few new options.

OpenSSH

OpenSSH was updated from v8.4p1 to v9.2p1, so if you're interested in all the changes, check out the release notes between those versions (8.5, 8.6, 8.7, 8.8, 8.9, 9.0, 9.1 + 9.2). Let's highlight some notable new features. One important change you might want to be aware of is that as of OpenSSH v8.8, RSA signatures using the SHA-1 hash algorithm got disabled by default; RSA/SHA-256/512 AKA RSA-SHA2 gets used instead. OpenSSH has supported RFC 8332 RSA/SHA-256/512 signatures since release 7.2, and existing ssh-rsa keys will automatically use the stronger algorithm where possible. A good overview is also available at "SSH: Signature Algorithm ssh-rsa Error". Now tools/libraries that don't support RSA-SHA2 fail to connect to OpenSSH as present in bookworm. For example, python3-paramiko v2.7.2-1 as present in bullseye doesn't support RSA-SHA2. It tries to connect using the deprecated ssh-rsa (SHA-1) signature, which is no longer offered by default with OpenSSH as present in bookworm, and then fails. Support for RSA/SHA-256/512 signatures in Paramiko was requested e.g. at #1734, eventually got added to Paramiko, and in the end the change made it into Paramiko versions >= 2.9.0. Paramiko in bookworm works fine, and a backport by rebuilding the python3-paramiko package from bookworm for bullseye solves the problem (BTDT). (A small sketch illustrating this follows at the end of this post.)

Misc unsorted

Thanks to everyone involved in the release, happy upgrading to bookworm, and let's continue working towards Debian/trixie. :)
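Going back to the Paramiko note above, here is a minimal sketch of how the client side behaves against a bookworm sshd; the hostname, username and key path are placeholders (not from the original post), and it assumes key-based RSA authentication is already set up on the remote host.

```python
#!/usr/bin/env python3
"""Minimal sketch: check whether the locally installed Paramiko can
authenticate against an OpenSSH 9.x server as shipped in bookworm.
Host, user and key path below are placeholders, not from the original post."""

import os
import paramiko

print("paramiko version:", paramiko.__version__)  # >= 2.9 negotiates RSA-SHA2

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(
        "bookworm.example.org",                       # placeholder host
        username="admin",                             # placeholder user
        key_filename=os.path.expanduser("~/.ssh/id_rsa"),
    )
    _, stdout, _ = client.exec_command("cat /etc/debian_version")
    print("connected, remote says:", stdout.read().decode().strip())
except paramiko.SSHException as exc:
    # With paramiko < 2.9 (e.g. 2.7.2 from bullseye) against a default
    # bookworm sshd, authentication typically fails because only the
    # ssh-rsa (SHA-1) signature is offered, which the server refuses.
    print("connection/authentication failed:", exc)
finally:
    client.close()
```

With python3-paramiko >= 2.9 (as in bookworm, or the bullseye rebuild mentioned above) the same script negotiates rsa-sha2-256/512 automatically and the connection succeeds.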

6 June 2023

Shirish Agarwal: Odisha Train Crash and Coverup, Demonetization 2.0 & NHFS-6 Survey

Just a few days back we came to know about the horrific train crash that happened in Odisha (Orissa). There are some things that are known and some things that can be inferred by observation. Sadly, it seems the incident is going to be covered up. Some of the facts that have not been contested in the public domain are that there were three lines: one loop line on which the Goods Train was standing, plus an up and a down line. Apparently, the signalling system and the inter-locking system had issues, as highlighted by an official about a month back. That letter, thankfully, is in the public domain and I have downloaded it as well. It's a letter that runs to 4 pages. The RW is incensed that the letter got leaked and is in the public domain. They are blaming everyone and espousing conspiracy theories rather than taking the minister to task. Incidentally, the Minister currently holds three ministries: the Ministry of Communication, the Ministry of Electronics and Information Technology (MEIT), and the Railways Ministry. Each Ministry in itself is important and has revenues of more than 6 lakh crore rupees. How he is able to do justice to all three ministries is beyond me. The other thing is that funds for both safety and relaying of tracks have either not been sanctioned or have gone unutilized. In fact, the CAG and the Railway brass had shared how derailments have increased and vacancies remain unfulfilled, but they were given no importance. In fact, not talking about safety in the recently held Chintan Shivir (brainstorming session) tells you how serious the Govt. is about safety. Most of the programme was on high-speed rail, which is a white elephant. I have shared a whitepaper done by the RW in the U.S. that tells how high-speed rail doesn't make economic sense, and that is an economy that is 20+ times the size of the Indian economy. Even the Chinese are stopping with HSR as it doesn't make economic sense. Incidentally, air fares again went up 200% yesterday; somebody shared paying in the region of 20k+ for an air ticket from their place to Bangalore. Coming back to the story itself: the Goods Train was on the loop line. Some say it was a little bit on the outer, some say otherwise, but it is established that it was on the loop line. This is standard behavior on and around railway stations around the world; whether it was on the inner or the outer doesn't make much of a difference to what happened next. The first train that collided with the goods train was the 12864 (SMVB-HWH) Yashwantpur-Howrah Express, which got derailed onto the next track, where from the opposite direction the 12841 (Shalimar-Bangalore) Coromandel Express was coming. Now they have said that around 300 people have died, and that seems to be part of the cover-up. Both the trains are long trains, having 23-odd coaches each. Even if you have reserved tickets you have 80-odd people in a coach, and usually in most of these trains it is at least double that. A lot of money goes to the TC and then above (corruption). The Railway fares have gone up enormously, but that's a question for perhaps another time. So at the very least, we could be looking at more than 1000 people having died. The numbers are being under-reported so that nobody has to take responsibility. The Railways itself has said that it is unable to identify 80% of the people who have died. This means that 80% of them, or at least a majority, were unreserved ticket holders.
There have been disturbing images of how bodies have been flung onto tractors and whatnot, to be either buried or cremated without a thought. We are in peak summer season, so bodies will start to rot within 24-48 hours. No arrangements were made to cool the bodies and take down some information and identifying marks or whatever. The whole thing was done in a very callous manner, not giving dignity even to those who have died through no fault of their own. The dissent note also suggests that a cover-up is in the picture. Apparently, India doesn't have, nor does it feel the need for, something like the NTSB that the U.S. used when it hauled up both the plane manufacturer (Boeing) and the FAA after the 737 Max went down due to improper data collection and sharing of data with pilots. And with no accountability being fixed on the Minister or any of the senior staff, some junior staff person may be fired, perhaps the same official who actually told them about the signal failures almost 3 months back. There were and are also some reports that some jugaadu/temporary fixes were applied to the signalling and inter-locking just before this incident happened. I do not know, nor can I confirm one way or the other, whether the above happened. I can however point out that if such a thing happened, then usually a traffic block is announced and all traffic on those lines is stopped. This has been the practice I have known for decades; traveling between Mumbai and Pune multiple times over the years, I am aware of traffic blocks. If some repair work was going on and it wasn't possible to complete the work within the time-frame, then that may well have contributed to the accident. There is also a bit of muddying of the waters, where it is being said that one of the trains was 4 hours late; which one varies between conflicting stories. On top of the whole thing, they have handed the case over to the CBI to investigate, hinting at sabotage. They also tried to paint a religious structure as a mosque, which later turned out to be a temple. The RW says it was done by Muslims as it was a Friday, not taking into account, as shared before, that most Railway maintenance work is usually done between Friday and Monday. This is a practice followed not just in India but the world over. There has also been a move over the last decade to remove wooden sleepers and use concrete sleepers; unlike the wooden ones they do not expand and contract as much, and their life is much longer. Funds had been earmarked (although lower than in the last few years) but not yet spent. As we know, in the case of any accident, it happens when all the holes in the cheese line up. Fukushima is a great example of that: no sea wall even though Japan is no stranger to tsunamis, external power at the same level as the plant (10 meters above sea-level), and no training for cascading-failure scenarios, which is what happened. The Days mini-series shares some but not all of the faults that happened at Fukushima and the Govt. response to it. There is a difference though: the Japanese Prime Minister resigned on moral grounds. Here, neither the PM nor the Minister will be resigning, on moral grounds or otherwise :(. Zero accountability, and that was partly a natural disaster; here it's man-made. In fact, both the Minister and the Prime Minister arrived with their entourages and did a PR blitzkrieg showing how concerned they are. Within 50 hours, the lines were cleared. The part-time Railway Minister shared that he knows the root cause, and then a few hours later gave the case to the CBI. All are saying, wait for the inquiry report.
To date, not one of the accidents even under this Govt. has produced an investigation report. And even if it did, I am sure it would be a whitewash, as it was in the case of Adani, as I had shared before in the previous blog post. Incidentally, it is reported that Adani paid off some of its debt, but when questioned as to where they got the money, there was complete silence on that part :(. As can be seen, cover-up after cover-up. FWIW, the Coromandel Express is known as the Migrant train and so has a huge number of passengers; the other one, which was collided with, is known as the sick train, as a huge number of cancer patients use it to travel to Chennai and back.

Demonetization 2.0

A few days back, India announced demonetization 2.0. Surprised? Don't be. Apparently, the INR 2k/- note is being used for corruption and Mr. Modi is unhappy about it. He actually didn't like the INR 2k/- note but was told that it was needed; who told him, we are unaware of to date. At that time the RBI Governor was Mr. Urjit Patel, who hadn't said anything about an INR 2k/- note; he had said that a redesigned INR 1k/- note would come into the market. That has yet to happen. What has happened is that, just as with the INR 500/- and INR 1k/- notes, the RBI will no longer honor the INR 2k/- note. Obviously, this has made our neighbors angry, namely Nepal, Sri Lanka, Bhutan etc., who do some trading with us. Two Deccan Herald columns shed light on it. Apparently, India wants to be the world's reserve currency but doesn't want to play by the rules everyone else does. It was pointed out that both the U.S. and Singapore have retired currency notes, but they honor that promise even today. The Singapore example, being a bit closer (as it's in Asia), is perhaps a bit more relevant than the U.S. one. Singapore retired the SGD $10,000 as of 2014, but even in 2022 it remains legal tender. They also retired the SGD $1,000 in 2020, but it still remains legal tender.

So let's have a fictitious example to illustrate what is meant by what Singapore has done. Let's say I go to Singapore, rent a flat, and find a $1,000 note in that house somewhere. Both practically and theoretically, I could go down to any of the banks, get the amount transferred to my wallet, bank account etc., and nobody will question it, because they have promised the same. Interestingly, the Singapore Dollar has been pretty resilient against the USD for quite a number of years vis-a-vis other Asian currencies. Most of the INR 2k/- notes were also found and exchanged in Gujarat in just a few days (the PM and HM's home state). I am sure you can see the mental gymnastics that the RW indulge in :(. What is sadder is that most of the people who try to defend it can't make sense of it one way or the other, and start to name-call and get personal as they have nothing else.

Disability questions dropped in NHFS-6

Just came to know today that in the upcoming National Family Health Survey-6 the disability questions are being dropped. Why is this important? To put it simply, if you don't have the numbers, you won't and can't make policies for them. India is one of the worst countries to live in if you are disabled. The easiest example to draw attention to is that most Railway platforms are not at level with the trains. Just as Mick Lynch shares in the UK, the same is pretty much true for India too. Meanwhile in Europe, they do make an effort to keep things level so even disabled people have some dignity. If your public transport is sorted, then people would want much more and you would be obligated to provide for them, as they are citizens. Here, we have had many reports of women being sexually molested when being transferred from platform to coach, irrespective of their age or whatnot. The main takeaway is that if you do not have their voice, you won't make policies for them. They won't go away, but you will make life hell for them. One thing to keep in mind is that most people assume the disabled are disabled from birth. This may or may not be true. For example, in the above triple-train accident, there are bound to be newly disabled people who were healthy before the accident. The most common accidents are road accidents, some involving pedestrians, vehicles or both; the easiest reference is Ministry of Road Transport data, which says 4,00,000 people sustained injuries in road mishaps in 2021 alone. And this is in a country where even accidents are highly under-reported, for more than one reason. The biggest reason, especially for 2- and 4-wheelers, is the increased premium they would have to pay after an accident, so they usually compromise with the other party and pay off the Traffic Inspector. Sadly, I haven't read a new book, although there are a few books I'm looking forward to. People living in India and neighboring countries, please be careful as more heat waves are expected. Till later.

4 June 2023

Debian Brasil: Translation workshop for the Debian Administrator's Handbook on June 13

The Brazilian Portuguese Debian translation team will hold a translation workshop for the Debian Administrator's Handbook (The Debian Administrator's Handbook) on June 13, starting at 8 PM. The goal is to show beginners how to collaborate on translating this important material, which has existed since 2004 and has been translated into Portuguese over the years. The translation now needs to be updated for Debian version 12 (bookworm), which will be released this month. The tool used to translate the Handbook is the weblate website, so you can already create your account and visit the Debian Handbook project to get familiar with it. The workshop will take place online, and the link to join the jitsi room will be announced in the debl10nptBR group on Telegram and in the #debian-l10n-br IRC channel.

1 June 2023

Jamie McClelland: Enough about the AI Apocalypse Already

After watching Democracy Now's segment on artificial intelligence I started to wonder: am I out of step on this topic? When people claim artificial intelligence will surpass human intelligence and thus threaten humanity with extinction, they seem to be referring specifically to advances made with large language models. As I understand them, large language models are probability machines that have ingested massive amounts of text scraped from the Internet. They answer questions based on the probability of one series of words (their answer) following another series of words (the question). It seems like a stretch to call this intelligence, but if we accept that definition then it follows that this kind of intelligence is nothing remotely like human intelligence, which makes the claim that it will surpass human intelligence confusing. Hasn't this kind of machine learning surpassed us decades ago? Or when we say surpass, does that simply refer to fooling people into thinking an AI machine is a human via conversation? That is an important milestone, but I'm not ready to accept the Turing test as proof of equal intelligence. Furthermore, large language models hallucinate and also reflect the biases of their training data. The word hallucinate seems like a euphemism, as if it could be corrected with the right medication, when in fact it seems hard to avoid when your strategy is to correlate words based on probability. But even if you could solve the "here is a completely wrong answer presented with sociopathic confidence" problem, reflecting the biases of your data sources seems fairly intractable. In what world would a system with built-in bias be considered on the brink of surpassing human intelligence? The danger from LLMs seems to be their ability to convince people that their answers are correct, including their patently wrong and/or biased answers. Why do people think they are giving correct answers? Oh right, terrifying right-wing billionaires with terrifying agendas have been claiming AI will exceed human intelligence and threaten humanity, and every time they sign a hyperbolic statement they get front-page mainstream coverage. And even progressive news outlets are spreading this narrative with minimal space for contrary opinions (thank you Tawana Petty from the Algorithmic Justice League for providing the only glimpse of reason in the segment). The belief that artificial intelligence is or will soon become omnipotent has real-world harms today: specifically, it creates the misperception that current LLMs are accurate, which paves the way for greater adoption among police forces, social service agencies, medical facilities and other places where racial and economic biases have life and death consequences. When the CEO of OpenAI calls the technology dangerous and in need of regulation, he gets both free advertising promoting the power and supposed accuracy of his product and the possibility of freezing further developments in the field that might challenge OpenAI's current dominance. The real threat to humanity is not AI, it's massive inequality and the use of tactics ranging from mundane bureaucracy to deadly force and incarceration to segregate the affluent from the growing number of people unable to make ends meet. We have spent decades training bureaucrats, judges and cops to robotically follow biased laws to maintain this order without compassion or empathy. Replacing them with AI would make things worse and should be stopped.
But, let's be clear: the narrative that AI is poised to surpass human intelligence and make humanity extinct is a dangerous distraction that runs counter to a much more important story about "the very real and very present exploitative practices of the [companies building AI], who are rapidly centralizing power and increasing social inequities." Maybe we should talk about that instead?

31 May 2023

Russell Coker: Links May 2023

Petter Reinholdtsen wrote an interesting blog post about their work on packaging speech-to-text for Debian [1]. The work of the Debian Deep Learning Team seems really interesting and I look forward to playing with this sort of thing after the release of Bookworm (the packages in question will NOT go in Bookworm, but I'll run at least one system on Testing after Bookworm). It would be nice to get more information on the hardware used for running such programs; the minimum hardware needed for real-time speech-to-text would be interesting to know. Brian Krebs wrote an informative article about attacks involving supply chain compromise and fake LinkedIn profiles [2]. The attacks targeted Linux as well as Windows. Interesting video about the Illium cameras, a bit harsh though; they criticise Illium devices for being too low resolution, too expensive, and taking too much CPU time to process [3]. The Illium cameras still sell for decent prices on eBay; I wonder if it's because of curious people like me who would like to play with them and have money to spare, or whether some other interesting things are being done. I wonder how a 4*4 array of the rectangular cameras secured together with duct tape would go. The ideas of Illium should work better if implemented for multi-core CPUs or GPUs. Bruce Schneier with Henry Farrell and Nathan Sanders wrote an insightful blog post about how AI chatbots could improve democracy [4]. Wired has an interesting article about the way DJI drones transmit the location of the drone operator without encryption by design [5]. Apparently this has been used for targeting attacks on drone operators in Ukraine. This video about robot mice navigating mazes is interesting [6]. But I think it became less interesting when they got to the stage of milliseconds counting for the win; it's very optimised for one case, just like F1. I think it would be interesting if they had a rally contest where they go across grass or sand, 3D mazes both in air and water, and contests where tungsten weights have to be transported. They should push some of the other limits of engineering, as completing a maze quickly has been solved. The Guardian has an interesting article about a blood test for sleepy driving [7]. Once they have an objective test they can punish people for it. This GitHub repository listing public APIs is interesting [8]. Lots of fun ideas for phone apps there. Simon Josefsson wrote an insightful blog post about the threat model of security devices [9]. Unfortunately the security of most people is way below the level where this is an issue, but it's good to think about future steps needed for good security. Cory Doctorow wrote an interesting article, "The Swivel Eyed Loons have a Point" [10], about the fact that some of the nuttiest people are protesting about real issues, just in the wrong way.

30 May 2023

Russ Allbery: Review: The Mimicking of Known Successes

Review: The Mimicking of Known Successes, by Malka Older
Series: Mossa and Pleiti #1
Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-86051-2
Format: Kindle
Pages: 169
The Mimicking of Known Successes is a science fiction mystery novella, the first of an expected series. (The second novella is scheduled to be published in February of 2024.) Mossa is an Investigator, called in after a man disappears from the eastward platform on the 4°63' line. It's an isolated platform, five hours away from Mossa's base, and home to only four residential buildings and a pub. The most likely explanation is that the man jumped, but his behavior before he disappeared doesn't seem consistent with that theory. He was bragging about being from Valdegeld University, talking to anyone who would listen about the important work he was doing, not typically the behavior of someone who is suicidal. Valdegeld is the obvious next stop in the investigation. Pleiti is a Classics scholar at Valdegeld. She is also Mossa's ex-girlfriend, making her both an obvious and a fraught person to ask for investigative help. Mossa is the last person she expected to be waiting for her on the railcar platform when she returns from a trip to visit her parents. The Mimicking of Known Successes is mostly a mystery, following Mossa's attempts to untangle the story of what happened to the disappeared man, but as you might have guessed there's a substantial sapphic romance subplot. It's also at least adjacent to Sherlock Holmes: Mossa is brilliant, observant, somewhat monomaniacal, and very bad at human relationships. All of this story except for the prologue is told from Pleiti's perspective as she plays a bit of a Watson role, finding Mossa unreadable, attractive, frustrating, and charming in turn. Following more recent Holmes adaptations, Mossa is portrayed as probably neurodivergent, although the story doesn't attach any specific labels. I have no strong opinions about this novella. It was fine? There's a mystery with a few twists, there's a sapphic romance of the second chance variety, there's a bit of action and a bit of hurt/comfort after the action, and it all felt comfortably entertaining but kind of predictable. Susan Stepney has a "passes the time" review rating, and while that may be a bit harsh, that's about where I ended up. The most interesting part of the story is the science fiction setting. We're some indefinite period into the future. Humans have completely messed up Earth to the point of making it uninhabitable. We then took a shot at terraforming Mars and messed that planet up to the point of uninhabitability as well. Now, what's left of humanity (maybe not all of it; the story isn't clear) lives on platforms connected by rail lines high in the atmosphere of Jupiter. (Everyone in the story calls Jupiter "Giant" for reasons that I didn't follow, given that they didn't rename any of its moons.) Pleiti's position as a Classics scholar means that she studies Earth and its now-lost ecosystems, whereas the Modern faculty focus on their new platform life. This background does become relevant to the mystery, although exactly how is not clear at the start. I wouldn't call this a very realistic setting. One has to accept that people are living on platforms attached to artificial rings around the solar system's largest planet and walk around in shirt sleeves with only minor technological support due to "atmoshields" of some unspecified capability, and where the native atmosphere plays the role of London fog. Everything feels vaguely Edwardian, down to the occasional human porter and message runner, which matches the story concept but seems unlikely as a plausible future culture.
I also disbelieve in humanity's ability to do anything to Earth that would make it less inhabitable than the clouds of Jupiter. That said, the setting is a lot of fun, which is probably more important. It's fun to try to visualize, and it has that slightly off-balance, occasionally surprising feel of science fiction settings where everyone is recognizably human but the things they consider routine and unremarkable are unexpected by the reader. This novella also has a great title. The Mimicking of Known Successes is simultaneously a reference to a specific plot point from late in the story, a nod to the shape of the romance, and an acknowledgment of the Holmes pastiche, and all of those references work even better once you know what the plot point is. That was nicely done. This was not very memorable apart from the setting, but it was pleasant enough. I can't say that I'm inspired to pre-order the next novella in this series, but I also wouldn't object to reading it. If you're in the mood for gender-swapped Holmes in an exotic setting, you could do worse. Followed by The Imposition of Unnecessary Obstacles. Rating: 6 out of 10

29 May 2023

John Goerzen: Recommendations for Tools for Backing Up and Archiving to Removable Media

I have several TB worth of family photos, videos, and other data. This needs to be backed up and archived. Backups and archives are often thought of as similar, and indeed they may be done with the same tools at the same time. But the goals differ somewhat: backups are designed to recover from a disaster that you can fairly rapidly detect, while archives are designed to survive for many years, protecting against disaster not only impacting the original equipment but also the original person that created them. Reflecting on this, it implies that while a nice ZFS snapshot-based scheme that supports twice-hourly backups may be fantastic for that purpose, if you think about things like family members being able to access it if you are incapacitated, or accessibility in a few decades' time, it becomes much less appealing for archives. ZFS doesn't have the wide software support that NTFS, FAT, UDF, ISO-9660, etc. do. This post isn't about the pros and cons of the different storage media, nor is it about the pros and cons of cloud storage for archiving; those conversations can readily be found elsewhere. Let's assume, for the point of conversation, that we are considering BD-R optical discs as well as external HDDs, both of which are too small to hold the entire backup set. What would you use for archiving in these circumstances?

Establishing goals

I have a few specific goals in mind, and I would welcome your ideas for what to use. Below, I'll highlight different approaches I've looked into and how they stack up.

Basic copies of directories

The initial approach might be one of simply copying directories across. This would work well if the data set to be archived is smaller than the archival media. In that case, you could just burn or rsync a new copy with every update and be done. Unfortunately, this is much less convenient with data of the size I'm dealing with; rsync is unavailable in that case. With some datasets, you could manually design some rsyncs to store individual directories on individual devices, but that gets unwieldy fast and isn't scalable. You could use something like my datapacker program to split the data across multiple discs/drives efficiently (a toy sketch of that idea appears at the end of this post). However, updates will be a problem; you'd have to re-burn the entire set to get a consistent copy, or rely on external tools like mtree to reflect deletions. Not very convenient in any case. So I won't be using this.

tar or zip

While you can split tar and zip files across multiple media, they have a lot of issues. GNU tar's incremental mode is clunky and buggy; zip is even worse. tar files can't be read randomly, making it extremely time-consuming to extract just certain files out of a tar file. The only thing going for these formats (and especially zip) is the wide compatibility for restoration.

dar

Here we start to get into the more interesting tools. Dar is, in my opinion, one of the best Linux tools that few people know about. Since I first wrote about dar in 2008, it's added some interesting new features; among them, binary deltas and cloud storage support. So, dar has quite a few interesting features that I make use of in other ways, and it could also be quite helpful here. Additionally, dar comes with a dar_manager program. dar_manager makes a database out of dar catalogs (or archives). This can then be used to identify the precise archive containing a particular version of a particular file. All this combines to make a useful system for archiving.
Isolated catalogs are tiny, and it would be easy enough to include the isolated catalogs for the entire set of archives that came before (or even the dar_manager database file) with each new incremental archive. This would make restoration of a particular subset easy. The main thing to address with dar is that you do need dar to extract the archive. Every dar release comes with source code and a win64 build, and dar also supports building a statically-linked Linux binary. It would therefore be easy to include the win64 binary, Linux binary, and source with every archive run. dar is also a part of multiple Linux and BSD distributions, which are archived around the Internet. I think this provides reasonable future-proofing to make sure dar archives will still be readable in the future. The other challenge is user ability. While dar is highly portable, it is fundamentally a CLI tool and will require CLI abilities on the part of users. I suspect, though, that I could write up a few pages of instructions to include and make that a reasonably easy process. Not everyone can use a CLI, but I would expect a person who could follow those instructions could be readily enough found. One other benefit of dar is that it could easily be used with tapes. The LTO series is liked by various hobbyists, though it could pose formidable obstacles to non-hobbyists trying to access data in future decades. Additionally, since the archive is a big file, it lends itself to working with par2 to provide redundancy against certain amounts of data corruption.

git-annex

git-annex is an interesting program that is designed to facilitate managing large sets of data and moving them between repositories. git-annex has particular support for offline archive drives and tracks which drives contain which files. The idea would be to store the data to be archived in a git-annex repository. Then git-annex commands could generate filesystem trees on the external drives (or trees to be burned to read-only media). In a post about using git-annex for blu-ray backups, an earlier thread about DVD-Rs was mentioned. This has a few interesting properties. For one, with due care, the files can be stored on archival media as regular files. There are some different options for how to generate the archives; some of them would place the entire git-annex metadata on each drive/disc. With that arrangement, one could access the individual files without git-annex. With git-annex, one could reconstruct the final (or any intermediate) state of the archive appropriately, handling deletions, renames, etc. You would also easily be able to know where copies of your files are. The practice is somewhat more challenging. Hundreds of thousands of files (what I would consider a medium-sized archive) can pose some challenges, running into hours-long execution if used in conjunction with the directory special remote (but only minutes-long with a standard git-annex repo). Ruling out the directory special remote, I had thought I could maybe just work with my files in git-annex directly. However, I ran into some challenges with that approach as well. I am uncomfortable with git-annex mucking about with hard links in my source data. While it does try to preserve timestamps in the source data, these are lost on the clones. I wrote up my best effort to work around all this. In a forum post, the author of git-annex comments that "I don't think that CDs/DVDs are a particularly good fit for git-annex", but it seems a couple of users have gotten something working.
The page he references is "Managing a large number of files archived on many pieces of read-only medium." Some of that discussion is a bit dated (for instance, the directory special remote has the importtree feature that implements what was being asked for there), but it has some interesting tips. git-annex supplies win64 binaries, and git-annex is included with many distributions as well, so it should be nearly as accessible as dar in the future. Since git-annex would be required to restore a consistent recovery image, similar caveats as with dar apply; CLI experience would be needed, along with some written instructions.

Bacula and BareOS

Although primarily tape-based archivers, these do also nominally support drives and optical media. However, they are much more tailored as backup tools, especially with the ability to pull from multiple machines. They require a database and extensive configuration, making them a poor fit for both the creation and future extractability of this project.

Conclusions

I'm going to spend some more time with dar and git-annex, testing them out, and hope to write some future posts about my experiences.
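As a side note on the directory-splitting approach dismissed earlier: distributing a file tree across fixed-size media is essentially a bin-packing problem, which is what tools like datapacker automate. Below is a rough, hypothetical Python sketch of that idea (first-fit decreasing), not the actual datapacker algorithm; the 25 GB per-disc capacity is an assumption for BD-R media.

```python
#!/usr/bin/env python3
"""Rough sketch of splitting files across fixed-size media (the idea behind
tools like datapacker); first-fit decreasing, not the real implementation."""

import os
import sys

MEDIA_SIZE = 25 * 10**9  # assumption: roughly one single-layer BD-R disc


def collect_files(root):
    """Yield (path, size) for every regular file under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                yield path, os.path.getsize(path)


def pack(files, capacity):
    """First-fit decreasing: return a list of discs, each a list of paths."""
    bins = []  # each entry: [remaining_capacity, [paths]]
    for path, size in sorted(files, key=lambda item: item[1], reverse=True):
        if size > capacity:
            print(f"skipping {path}: larger than one medium", file=sys.stderr)
            continue
        for entry in bins:
            if entry[0] >= size:
                entry[0] -= size
                entry[1].append(path)
                break
        else:
            bins.append([capacity - size, [path]])
    return [paths for _remaining, paths in bins]


if __name__ == "__main__":
    file_list = list(collect_files(sys.argv[1]))
    for index, disc in enumerate(pack(file_list, MEDIA_SIZE), start=1):
        print(f"# disc {index}: {len(disc)} files")
        for path in disc:
            print(path)
```

The resulting per-disc file lists could then be fed to whatever burning or copying tool you prefer; as the post notes, updates and deletions are what make this approach inconvenient in practice.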

Russell Coker: Considering Convergence

What is Convergence

In 2013 Kyle Rankin (at the time Linux Journal columnist and CSO of Purism) wrote a Linux Journal article about Linux convergence [1] (which means using a phone and a dock to replace a desktop), featuring the Nokia N900 smart phone and a chroot environment on the Motorola Droid 4 Android phone. Both of them have very limited hardware even by the standards of the day, and neither was a system I'd consider using all the time. None of the Android phones I used at that time were at all comparable to any sort of desktop system I'd want to use.

Hardware for Convergence: Comparing a Phone to a Laptop

The first hardware issue for convergence is docks and other accessories to attach a small computer to hardware designed for larger computers. Laptop docks have been around for decades, and for decades I haven't been using them because they have all been expensive and specific to a particular model of laptop. Having an expensive dock at home and an expensive dock at the office and then replacing them both when the laptop is replaced may work well for some people but wasn't something I wanted to do. The USB-C interface supports data, power, and DisplayPort video over the same cable; now USB-C docks start at about $20 on eBay and dock functionality is built in to many new monitors. I can take a USB-C device to the office of any large company and know there's a good chance that there will be a USB-C dock ready for me to use. The fact that USB-C is a standard feature for phones gives obvious potential for convergence. The next issue is performance. The Passmark benchmark seems like a reasonable way to compare CPUs [2]. It may not be the best benchmark but it has an excellent set of published results for Intel and AMD CPUs. I ran that benchmark on my Librem5 [3] and got a result of 507 for the CPU score. At the end of 2017 I got a Thinkpad X301 [4] which rates 678 on Passmark. So the Librem5 has 3/4 the CPU power of a laptop that was OK for my use in 2018. Given that the X301 was about the minimum specs for a PC that I can use (for things other than serious compiles, running VMs, etc), the Librem 5 has 3/4 the CPU power, only 3G of RAM compared to 6G, and 32G of storage compared to 64G. Here is the Passmark page for my Librem5 [5]. As an aside, my Librem5 is apparently 25% faster than the other results for the same CPU; did the Purism people do something to make their device faster than most? For me the Librem5 would be at the very low end of what I would consider a usable desktop system. A friend's N900 (like the one Kyle used) won't complete the Passmark test, apparently due to the Extended Instructions (NEON) test failing. But of the rest of the tests most of them gave a result that was well below 10% of the result from the Librem5, and only the Compression and CPU Single Threaded tests managed to exceed 1/4 the speed of the Librem5. One thing to note when considering the specs of phones vs desktop systems is that the MicroSD cards designed for use in dashcams and other continuous recording devices have TBW ratings that compare well to SSDs designed for use in PCs, so swap to a MicroSD card should work reasonably well and be significantly faster than the hard disks I was using for swap in 2013! In 2013 I was using a Thinkpad T420 as my main system [6]; it had 8G of RAM (the same as my current laptop), although I noted that 4G was slow but usable at the time. Basically it seems that the Librem5 was about the sort of hardware I could have used for convergence in 2013.
But by today's standards, and with the need to drive 4K monitors etc., it's not that great. The N900 hardware specs seem very similar to the Thinkpads I was using from 1998 to about 2003. However, a device for convergence will usually do more things than a laptop (IE phone and camera functionality), and software became significantly more bloated in the 1998 to 2013 time period. A Linux desktop system performed reasonably with 32MB of RAM in 1998, but by 2013 even 2G was limiting.

Software Issues for Convergence

Jeremiah Foster (Director of PureOS at Purism) wrote an interesting overview of some of the software issues of convergence [7]. One of the most obvious is that the best app design for a small screen is often very different from that for a large screen. Phone apps usually have a single window that shows a view of only one part of the data that is being worked on (EG an email program that shows a list of messages or the contents of a single message but not both). Desktop apps of any complexity will either have support for multiple windows for different data (EG two messages displayed in different windows) or a single window with multiple different types of data (EG message list and a single message). What we ideally want is for all the important apps to support changing modes when the active display is changed to one of a different size/resolution. The Purism people are doing some really good work in this regard, but it is a large project that needs to involve a huge range of apps. The next thing that needs to be addressed is the OS interface for managing apps and metadata. On a phone you swipe from one part of the screen to get a list of apps, while on a desktop you will probably have a small section of a large monitor reserved for showing a window list. On a desktop you will typically have an app to manage a list of items copied to the clipboard, while on Android and iOS there is AFAIK no standard way to do that (there is a selection of apps in the Google Play Store to do this sort of thing). Purism has a blog post by Sebastian Krzyszkowiak about some of the development of the OS to make it work better for convergence and the status of getting it in Debian [8]. The limitations in phone hardware force changes to the software. Software needs to use less memory because phone RAM can't be upgraded. The OS needs to be configured for low RAM use, which includes technologies like the zram kernel memory compression feature.

Security

When mobile phones first came out they were used for less secret data. Loss of a phone was annoying and expensive but not a security problem. Now phone theft for the purpose of gaining access to resources stored on the phone is becoming a known crime; here is a news report about a thief stealing credit cards and phones to receive the SMS notifications from banks [9]. We should expect that trend to continue; stealing mobile devices for ssh keys, management tools for cloud services, etc. is something we should expect to happen. A problem with mobile phones in current use is that they have one login used for all access, from trivial things done in low security environments (EG paying for public transport) to sensitive things done in more secure environments (EG online banking and healthcare). Some applications take extra precautions for this, EG the Android app I use for online banking requires authentication before performing any operations. The Samsung version of Android has a system called Knox for running a separate secured workspace [10].
I don't think that the Knox approach would work well for a full Linux desktop environment, but something that provides some similar features would be a really good idea. Also, running apps in containers as much as possible would be a good security feature; this is done by default in Android, and desktop OSs could benefit from it. The Linux desktop security model of logging in to a single account and getting access to everything has been outdated for a long time, probably ever since single-user Linux systems became popular. We need to change this for many reasons, and convergence just makes it more urgent.

Conclusion

I have become convinced that convergence is the way of the future. It has the potential to make transporting computers easier, purchasing cheaper (buy just a phone and not desktop and laptop systems), and access to data more convenient. The Librem5 doesn't seem up to the task for my use due to being slow and having short battery life; the PinePhone Pro has more powerful hardware and allegedly better battery life [11], so it might work for my needs. The PinePhone Pro probably won't meet the desktop computing needs of most people, but hardware keeps getting faster and cheaper, so eventually most people could have their computing needs satisfied with a phone. The current state of software for convergence and for Linux desktop security needs some improvement. I have some experience with Linux security, so this is something I can help work on. To work on improving this I asked Linux Australia for a grant for me and a friend to get PinePhone Pro devices and a selection of accessories to go with them. Having both a Librem5 and a PinePhone Pro means that I can test software in different configurations, which will make developing software easier. Also having a friend who's working on similar things will help a lot, especially as he has some low-level hardware skills that I lack. Linux Australia awarded the grant and now the PinePhones are in transit. Hopefully I will have a PinePhone in a couple of weeks to start work on this.

26 May 2023

Valhalla's Things: Correspondence Book

Posted on May 26, 2023
[Image: a Coptic bound book open to the first page, with the title "Book of <space> Correspondence / Volume <space> Years <space>".]

I write letters. The kind that are written on paper with a dip pen [1] and ink, stamped and sent through the post, spend a few days or weeks maturing like good wine in a depot somewhere [2], and then get delivered to the recipient. Some of them (mostly cards) are to people who will receive them and thank me via xmpp (that sounds odd, but actually works out nicely), but others are proper letters with long texts that I exchange with penpals. Most of those are fountain pen frea^Wenthusiasts, so I usually use a different ink each time, and try to vary the paper, and I need to keep track of what I've used. Some time ago I read a Victorian book [3] which recommended keeping a correspondence book to register all mail received and sent, the topics, and whether it had been replied to or otherwise acted upon. I don't have the mail traffic of a Victorian lady (or even middle class woman), but this looked like something fun to do, and if I added fields for the inks and paper used it would also have a useful side effect.

[Image: a page with writing lines and the title of each field below them: a number, then date, sender / recipient (at the two ends of the same line), in reply to / replied, ink, paper, pen, topics / notes.]

So I headed over to the obvious program anybody would use for these things (XeLaTeX, of course) and quickly designed a page with fields for the basic things I want to record; it was a bit hurried, and I may improve on it the next time I make one, but I expect this one to last me two or three years, and it is good enough. I've decided to make it A6 sized, so that it doesn't require a lot of space on my busy desktop, and it could be carried inside a portable desktop, if I ever decide to finish the one for which I made a mockup years ago :)

[Image: the book open to the correspondent pages; the fields are name, letters sent, letters received, address and notes.]

I've also added a few pages for the addresses of my correspondents (and an index of the letters I've exchanged with them), and a few empty pages for other notes. Then I've used my a6_book.py script to rearrange the A6 pages into signatures and impose them on A4; to reduce later effort I've added an option to order the pages in such a way that if I then cut four A4 sheets in half at a time (the limit of my rotary cutter) the signatures are ready to be folded. It's not the default because it requires that the pages are a multiple of 32 rather than just 16 (and they are padded up with empty pages if they aren't). If you're also interested in making one, here are the files.

[Image: the book open to the page of letter two, which is repeated twice.]

After printing (an older version where some of the pages are repeated, whoops, but it only happened 4 times and it's not a big deal), it was time for binding this into a book. I've opted for Coptic stitch, so that the book will open completely flat and writing in it will be easier; the covers are 2 mm cardboard covered in linen-look bookbinding paper (sadly I no longer have a source for bookbinding cloth made from actual cloth).

[Image: the grey cover of the book with the word "correspondence", a stylised envelope and a border in blue.]
I tried to screenprint a simple design on the cover: the first attempt was unusable (the paper was smaller than the screen, so I couldn't keep it in the right place and it moved as I was screenprinting); on the second attempt I used some masking tape to keep the paper in place, and it came out a bit better, but I need more practice with the technique. Finally, I decided that for such a Victorian thing I would use an iron-gall ink, but it's Rohrer & Klingner Scabiosa, with a purple undertone, because life's too short to use blue-black ink :D And now, I'm off to write an actual letter, rather than writing online about things that are related to letter writing.

  1. not a quill! I'm a modern person who uses steel nibs!
  2. Milano Roserio, I'm looking at you. A month to deliver a postcard from Lombardy to Ticino? Not even a letter, which could have hidden contraband, but a postcard.
  3. I think. I've looked at some plausible candidates and couldn't find the source.

25 May 2023

Bits from Debian: New Debian Developers and Maintainers (March and April 2023)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

22 May 2023

Adnan Hodzic: rpi-microk8s-bootstrap: Automate RPI device conversion into Kubernetes cluster nodes with Terraform

Considering I've created my own private cloud in my home as part of: wp-k8s: WordPress on privately hosted Kubernetes cluster (Raspberry Pi 4 + Synology).... The post rpi-microk8s-bootstrap: Automate RPI device conversion into Kubernetes cluster nodes with Terraform appeared first on FoolControl: Phear the penguin.

Russ Allbery: Review: Tsalmoth

Review: Tsalmoth, by Steven Brust
Series: Vlad Taltos #16
Publisher: Tor
Copyright: 2023
ISBN: 1-4668-8970-5
Format: Kindle
Pages: 277
Tsalmoth is the sixteenth book in the Vlad Taltos series and (some fans of the series groan) yet another flashback novel to earlier in Vlad's life. It takes place between Yendi and the interludes in Dragon (or, perhaps more straightforwardly, between Yendi and Jhereg). Most of the books of this series stand alone to some extent, so you could read this book out of order and probably not be horribly confused, but I suspect it would also feel weirdly pointless outside of the context of the larger series. We're back to Vlad running a fairly small operation as a Jhereg, who are the Dragaeran version of organized crime. A Tsalmoth who owes Vlad eight hundred imperials has rudely gotten himself murdered, thoroughly enough that he can't be revived. That's a considerable amount of money, and Vlad would like it back, so he starts poking around. As you might expect if you've read any other book in this series, things then get a bit complicated. This time, they involve Jhereg politics, Tsalmoth house politics, and necromancy (which in this universe is more about dimensional travel than it is about resurrecting the dead). The main story is... fine. Kragar is around being unnoticeable as always, Vlad is being cocky and stubborn and bantering with everyone, and what appears to be a straightforward illegal business relationship turns out to involve Dragaeran magic and thus Vlad's highly-placed friends. As usual, they're intellectually curious about the magic and largely ambivalent to the rest of Vlad's endeavors. The most enjoyable part of the story is Vlad's insistence on getting his money back while everyone else in the story cannot believe he would be this persistent over eight hundred imperials and is certain he has some other motive. It's otherwise a fairly forgettable little adventure. The implications for the broader series, though, are significant, although essentially none of the payoff is here. Brust has been keeping a major secret about Vlad that's finally revealed here, one that has little impact on the plot of this book (although it causes Vlad a lot of angst) but which I suspect will become very important later in the series. That was intriguing but rather unsatisfying, since it stays only a future hook with an attached justification for why we're only finding out about it now. If one has read the rest of the series, it's also nice to see Vlad and Cawti working together, bantering with each other and playing off of each other's strengths. It's reminiscent of the best parts of Yendi. As with many of the books of this series, the chapter introductions tell a parallel story; this time, it's Vlad and Cawti's wedding. I think previous books already mentioned that Vlad is narrating this series into some sort of recording device, and a bit about why he's doing that, but this is made quite explicit here. We get as much of the surrounding frame as we've ever seen before. There are no obvious plot consequences from this (it's still all hints and guesswork), but I suspect this will also become important by the end of the series. If you've read this much of the series, you'll obviously want to read this one as well, but unfortunately don't get your hopes up for significant plot advancement. This is another station-keeping book, which is a bit of a disappointment. We haven't gotten major plot advancement since Hawk in 2014, and I'm getting impatient.
Thankfully, Lyorn has a release date already (April 9, 2024), and assuming all goes according to the grand plan, there are only two books left after Lyorn (Chreotha and The Last Contract). I'm getting hopeful that we're going to get to see the entire series. Meanwhile, I am very tempted to do a complete re-read of the series to date, probably in series chronological order rather than in publication order (as much as that's possible given the fractured timelines of Dragon and Tiassa) so that I can see how the pieces fit together. The constant jumping back and forth and allusions to events that have already happened but that we haven't seen yet is hard to keep track of. I'm very glad the Lyorn Records exists. Followed by Lyorn. Rating: 7 out of 10

15 May 2023

Sven Hoexter: GCP: Private Service Connect Forwarding Rules can not be Updated

PSA for those foolish enough to use Google Cloud and try to use private service connect: If you want to change the serviceAttachment your private service connect forwarding rule points at, you must delete the forwarding rule and create a new one. Updates are not supported. I've done that in the past via terraform, but lately encountered strange errors like this:
Error updating ForwardingRule: googleapi: Error 400: Invalid value for field 'target.target':
'<https://www.googleapis.com/compute/v1/projects/mydumbproject/regions/europe-west1/serviceAttachments/
k8s1-sa-xyz-abc>'. Unexpected resource collection 'serviceAttachments'., invalid
Worked around that with the help of terraform_data and lifecycle:
resource "terraform_data" "replacement"  
    input = var.gcp_psc_data["target"]
 
resource "google_compute_forwarding_rule" "this"  
    count   = length(var.gcp_psc_data["target"]) > 0 ? 1 : 0
    name    = "$ var.gcp_psc_name -psc"
    region  = var.gcp_region
    project = var.gcp_project
    target                = var.gcp_psc_data["target"]
    load_balancing_scheme = "" # need to override EXTERNAL default when target is a service attachment
    network               = var.gcp_network
    ip_address            = google_compute_address.this.id
    lifecycle  
        replace_triggered_by = [
            terraform_data.replacement
        ]
     
 
See also terraform data for replace_triggered_by.

12 May 2023

Dirk Eddelbuettel: crc32c 0.0.2 on CRAN: Build Fixes

A first follow-up to the initial announcement just days ago of the new crc32c package. The package offers cyclical checksum with parity in hardware-accelerated form on (recent enough) intel cpus as well as on arm64. This follow-up was needed because I missed, when switching to a default static library build, that newest compilers would complain if -fPIC was not set. gcc-12 on my box was happy, gcc-13 on recent Fedora as used at CRAN was not. A second error was assuming that saying SystemRequirements: cmake would suffice. But hold on whippersnapper: macOS always has a surprise for you! As described at the end of the appropriate section in Writing R Extensions, on that OS you have to go to the basement, open four cupboards, rearrange three shelves and then you get to use it. And then in doing so (from an added configure script) I failed to realize Windows needed a fallback. Gee. The NEWS entry for this (as well as the initial release) follows.

Changes in version 0.0.2 (2023-05-11)
  • Explicitly set cmake property for position-independent code
  • Help macOS find its cmake binary as detailed also in WRE
  • Help Windows with a non-conditional Makevars.win pointing at cmake
  • Add more badges to README.md

Changes in version 0.0.1 (2023-05-07)
  • Initial release version and CRAN upload

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 May 2023

Shirish Agarwal: India Press freedom, Profiteering, AMD issues in the wild.

India Press Freedom Just about a week back, India again slipped in the Press Freedom index, this time falling to 161 out of 180 countries. The RW again made a lot of noise, as they cannot fathom why this keeps happening. A recent news story gives some idea. Every year the NCRB (National Crime Records Bureau) puts out its statistics of crimes happening across the country. The report is in the public domain. Now, according to the report shared, around 40k women from Gujarat alone disappeared in the last five years. This is a state where the BJP has been ruling for the last 30-odd years. When this report went viral, in almost all national newspapers the news was censored/blacked out. For e.g., check out newindianexpress.com; likewise on TOI and other newspapers, the news has been 404'd. The only place you can get that news is in minority papers like Siasat. But the story didn't end there. While the NCW (National Commission for Women) pointed out similar things happening in J&K, Gujarat Police claimed they got almost 39k women back. Ideally, that should have gone into the NCRB data as an addendum, as the report can be challenged. But as this news went viral, nobody knows what in the above is true or false. What the BJP has been doing is, whenever they get questioned, they try to muddy the waters like that. And most of the time such news doesn't make it to court, so the party gets a freebie of sorts as they are not legally challenged. Even if somebody asks why Gujarat Police didn't do it, as the NCRB report is jointly made with the help of all states, and especially with the BJP both at the Center and in the States, they cannot give any excuse. The only excuse you see or hear is whataboutism, unfortunately.

Profiteering on I.T. Hardware I was chatting with a friend yesterday who is an enthusiast like me but has been more alert about what has been happening in the CPU, motherboard and RAM world. I was simply shocked to hear the prices of motherboards which are three years old, even for a middling motherboard. For e.g., the last time I bought a mobo, I spent about 6k, but that was for an ATX motherboard. Most ITX motherboards usually sold for around INR 4k/- or even lower. I remember Via especially, as their mobos were even cheaper, around INR 1.5-2k/-. Even before the pandemic, many motherboard manufacturers had closed down shop, leaving only a few in the market. As only a few remained, prices started going higher. The pandemic turned it into a seller's market overnight, as most people were stuck at home and needed good rigs for either work or leisure or both. The manufacturers of CPUs, motherboards, GPUs and power supplies (SMPS) named their prices and people bought them. So in 2023, high prices remained while warranty periods started coming down. Governments also upped customs and various other duties. So all are hand in glove in this situation. So, as shared before, what I have been offered is a 4-year-old motherboard with a CPU of that time. I haven't bought it, nor do I intend to in the short-term future, but I am extremely disappointed with the state of affairs.

AMD Issues It's just been a couple of hard weeks, apparently, for AMD. The first has been the TPM (Trusted Platform Module) issue that was shown by a couple of security researchers. From what is known, apparently with $200 worth of tools and some time you can hack into somebody's machine if you have physical access. Ironically, MS made a huge show about TPM and also made it sort of a requirement if a person wanted to have Windows 11. I remember Matthew Garrett sharing about TPM and issues with Lenovo laptops. While AMD has acknowledged the issue, its response has been somewhat wishy-washy. But this is not the only issue that has been plaguing AMD. There have been reports of AMD chips literally exploding, and again AMD issuing a somewhat wishy-washy response. Asus, though, made some changes, but whether it is for Zen4 or only 5 parts is not known. Most people are expecting a recession in I.T. hardware this year as well as next year due to high prices. No idea if things will change, if ever.

2 May 2023

Neil Williams: Carrying Grief

This isn't a book review, although the reason that I am typing this now is because of a book, You Are Not Alone: from the creator and host of Griefcast, Cariad Lloyd, ISBN: 978-1526621870 and I include a handful of quotes from Cariad where there is really no better way of describing things. Many people experience death for the first time as a child, often relating to a family pet. Death is universal but every experience of death is unique. One of the myths of grief is the idea of the Five Stages but this is a misinterpretation. Denial, Anger, Bargaining, Depression and Acceptance represent the five stage model of death and have nothing to do with grief. The five stages were developed from studying those who are terminally ill, the dying, not those who then grieve for the dead person and have to go on living without them. Grief is for those who loved the person who has died and it varies between each of those people just as people vary in how they love someone. The Five Stages end at the moment of death, grief is what comes next and most people do not grieve in stages, it can be more like a tangled knot. Death has a date and time, so that is why the last stage of the model is Acceptance. Grief has no timetable, those who grieve will carry that grief for the rest of their lives. Death starts the process of grief in those who go on living just as it ends the life of the person who is loved. "Grief eases and changes and returns but it never disappears.". I suspect many will have already stopped reading by this point. People do not talk about death and grief enough and this only adds to the burden of those who carry their grief. It can be of enormous comfort to those who have carried grief for some time to talk directly about the dead, not in vague pleasantries but with specific and strong memories. Find a safe place without distractions and talk with the person grieving face to face. Name the dead person. Go to places with strong memories and be there alongside. Talk about the times with that person before their death. Early on, everything about grief is painful and sad. It does ease but it remains unpredictable. Closing it away in a box inside your head (as I did at one point) is like cutting off a damaged limb but keeping the pain in a box on the shelf. You still miss the limb and eventually, the box starts leaking. For me, there were family pets which died but my first job out of university was to work in hospitals, helping the nurses manage the medication regimen and providing specialist advice as a pharmacist. It will not be long in that environment before everyone on the ward gets direct experience of the death of a person. In some ways, this helped me to separate the process of death from the process of grief. I cared for these people as patients but these were not my loved ones. Later, I worked in specialist terminal care units, including providing potential treatments as part of clinical trials. Here, it was not expected for any patient to be discharged alive. The more aggressive chemotherapies had already been tried and had failed, this was about pain relief, symptom management and helping the loved ones. Palliative care is not just about the patient, it involves helping the loved ones to accept what is happening as this provides comfort to the patient by closing the loop. Grief is stressful. One of the most common causes of personal stress is bereavement. The death of your loved one is outside of your control, it has happened, no amount of regret can change that. 
Then come all the other stresses, maybe about money or having somewhere to live as a result of what else has changed after the death or having to care for other loved ones. In the early stages, the first two years, I found it helpful to imagine my life as a box containing a ball and a button. The button triggers new waves of pain and loss each time it is hit. The ball bounces around the box and hits the button at random. Initially, the button is large and the ball is enormous, so the button is hit almost constantly. Over time, both the button and the ball change size. Starting off at maximum, initially there is only one direction of change. There are two problems with this analogy. First is that the grief ball has infinite energy which does not happen in reality. The ball may get smaller and the button harder to hit but the ball will continue bouncing. Secondly, the life box is not a predictable shape, so the pattern of movement of the ball is unpredictable. A single stress is one thing, but what has happened since has just kept adding more stress for me. Shortly before my father died 5 years ago now, I had moved house. Then, I was made redundant on the day of the first anniversary of my father's death. A year or so later, my long term relationship failed and a few months after that COVID-19 appeared. As the country eased out of the pandemic in 2021, my mother died (unrelated to COVID itself). A year after that, I had to take early retirement. My brother and sister, of course, share a lot of those stressors. My brother, in particular, took the responsibility for organising both funerals and did most of the visits to my mother before her death. The grief is different for each of the surviving family. Cariad's book helped me understand why I was getting frequent ideas about going back to visit places which my father and I both knew. My parents encouraged each of us to work hard to leave Port Talbot (or Pong Toilet locally) behind, in no small part due to the unrestrained pollution and deprivation that is common to small industrial towns across Wales, the midlands and the north of the UK. It wasn't that I wanted to move house back to our ancestral roots. It was my grief leaking out of the box. Yes, I long for mountains and the sea because I'm now living in a remorselessly flat and landlocked region after moving here for employment. However, it was my grief driving those longings - not for the physical surroundings but out of the shared memories with my father. I can visit those memories without moving house, I just need to arrange things so that I can be undisturbed and undistracted. I am not alone with my grief and I am grateful to my friends who have helped whilst carrying their own grief. It is necessary for everyone to think and talk about death and grief. In respect of your own death, no matter how far ahead that may be, consider Advance Care Planning and Expressions of Wish as well as your Will. Talk to people, document what you want. Your loved ones will be grateful and they deserve that much whilst they try to cope with the first onslaught of grief. Talk to your loved ones and get them to do the same for themselves. Normalise talking about death with your family, especially children. None of us are getting out of this alive and we will all leave behind people who will grieve.

1 May 2023

Gunnar Wolf: Scanning heaps of 8mm movies

After my father passed away, I brought home most of the personal items he had, both at home and at his office. Among many, many (many, many, many) other things, I brought two of his personal treasures: his photo collection and a box with the 8mm movies he shot approximately between 1956 and 1989, when he was forced into modernity and got a portable videocassette recorder. I have talked with several friends, as I really want to get it all into a digital format, and while I've been making slow but steady advances scanning the photo reels, I was particularly dismayed (even though it was mostly to be expected; most personal electronic devices aren't meant to last over 50 years) to find out the 8mm projector was no longer in working condition; the lamp and the fans work, but the spindles won't spin. Of course, it is quite likely it is easy to fix, but it is beyond my tinkering abilities, and finding photographic equipment repair shops is no longer easy. Anyway, even if I got it fixed, filming a movie from a screen, even with a decent camera, is a lousy way to get it digitized. But almost by mere chance, I got in contact with my cousin Daniel, who came to Mexico to visit his parents, and had precisely brought with him an 8mm/Super8 movie scanner! It is a much simpler piece of equipment than I had expected, and while it does present some minor glitches (i.e. the vertical framing slightly loses alignment over the course of a medium-length film scanning session, and no adjustment is possible while the scan is ongoing), this is something that can be decently fixed in post-processing, and a scanning session can be split with no ill effects. Anyway, it is quite uncommon that a mid-length (5min) film can be done without interrupting, i.e. to join a splice, mostly given my father didn't just film, but also edited a lot (this is, it's not just family pictures, but all different kinds of fiction and documentary work he did). So, Daniel lent me a great, brand new, entry-level film scanner; I rushed to scan as many movies as possible before his return to the USA this week, but he insisted he bought it to help preserve our family's memory, and given we are still several cousins living in Mexico, I could keep hold of it so any of the other cousins will find it more easily. Of course, I am thankful and delighted! So, this equipment is a Magnasonic FS81. It is entry-level, as it lacks some adjustment abilities a professional one would surely have, and I'm sure a better scanner would make the job faster, but it's infinitely superior to not having it! The scanner processes roughly two frames per second (while the nominal 8mm/Super8 speed is 24 frames per second), so a 3 minute film reel takes a bit over 35 minutes. And a long, ~20 minute film reel takes close to 4hr, if nothing gets in your way :- And yes, with longer reels, the probability of a splice breaking is way higher than with a short one, not only because there is simply a longer film to process, but also because, both at the unwinding and at the receiving reels, mechanics play their roles. The films don't advance smoothly, but jump to position each frame in the scanner's screen, so every bit of film gets its fair share of gentle tugs. My professional consultant on how and what to do is my good friend Chema Serralde, who has stopped me from doing several things I would regret later otherwise (such as joining spliced tapes with acidic chemical adhesives such as Kola Loka, a.k.a.
Krazy Glue; even if it's a bit trickier to do, he insisted I'd be best off using simple transparent tape if I'm not buying fancy things such as film-adhesive). Chema also explained to me the importance of the loopers (las Lupes in his technical Spanish translation), which I feared increased the likelihood of breaking a bit of old glue due to the angle in which the film gets pulled, but which, if skipped, result in films with too much jumping. Not all of the movies I have are for public sharing. Some of them are just family movies, with high personal value, but probably of very little interest to others. But some are! I have been uploading some of the movies, after minor post-processing, to the Internet Archive. Among them: Anyway, I have a long way forward for scanning. I have 20 3min reels, 19 5min reels, and 8 20min reels. I want to check the scanning quality, but I think my 20min reels are mostly processed (we paid for scanning them some years ago). I mostly finished the 3min reels, but might have to go over some of them again due to the learning process. And... well, I'm having quite a bit of fun in the process!

29 April 2023

Junichi Uekawa: Draw graph of number of commits.

Draw graph of number of commits. I thought I had lots of commits this month for Chrome OS. To count those, note that all merged commits have the committer Chromeos LUCI. To get the timestamps of the commits, git log --committer="Chromeos LUCI" --pretty=%ct gives me the list of timestamps. A UNIX timestamp can be parsed with datetime.fromtimestamp, and then that array can be processed with a density graph plot or a histogram plotting tool, such as plt.hist.
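A minimal sketch of that pipeline (my own illustration, not from the post beyond the git command itself): it assumes matplotlib is installed, that the script runs inside the relevant git checkout, and the per-day bucketing and output filename are arbitrary choices.

#!/usr/bin/env python3
# Histogram of commits merged by the "Chromeos LUCI" committer, bucketed by day of month.
import subprocess
from datetime import datetime

import matplotlib.pyplot as plt

# Ask git for committer timestamps (%ct = committer date as a UNIX timestamp).
out = subprocess.run(
    ["git", "log", "--committer=Chromeos LUCI", "--pretty=%ct"],
    capture_output=True, text=True, check=True,
).stdout

# Parse each UNIX timestamp and keep the day of the month it falls on.
days = [datetime.fromtimestamp(int(ts)).day for ts in out.split()]

# One bin per day; range(1, 33) gives bin edges 1..32, i.e. 31 daily buckets.
plt.hist(days, bins=range(1, 33))
plt.xlabel("day of month")
plt.ylabel("number of commits")
plt.tight_layout()
plt.savefig("commits.png")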

27 April 2023

Jonathan McDowell: Repurposing my C.H.I.P.

Way back at DebConf16 Gunnar managed to arrange for a number of Next Thing Co. C.H.I.P. boards to be distributed to those who were interested. I was lucky enough to be amongst those who received one, but I have to confess after some initial experimentation it ended up sitting in its box unused. The reasons for that were varied; partly about not being quite sure what best to do with it, partly due to a number of limitations it had, partly because NTC sadly went insolvent and there was less momentum around the hardware. I ve always meant to go back to it, poking it every now and then but never completing a project. I m finally almost there, and I figure I should write some of it up. TL;DR: My C.H.I.P. is currently running a mainline Linux 6.3 kernel with only a few DTS patches, an upstream u-boot v2022.1 with a couple of minor patches and an unmodified Debian bullseye armhf userspace.

Storage The main issue with the C.H.I.P. is that it uses MLC NAND, in particular mine has an 8MB H27QCG8T2E5R. That ended up unsupported in Linux, with the UBIFS folk disallowing operation on MLC devices. There s been subsequent work to enable an SLC emulation mode which makes the device more reliable at the cost of losing capacity by pairing up writes/reads in cells (AFAICT). Some of this hit for the H27UCG8T2ETR in 5.16 kernels, but I definitely did some experimentation with 5.17 without having much success. I should maybe go back and try again, but I ended up going a different route. It turned out that BytePorter had documented how to add a microSD slot to the NTC C.H.I.P., using just a microSD to full SD card adapter. Every microSD card I buy seems to come with one of these, so I had plenty lying around to test with. I started with ensuring the kernel could see it ok (by modifying the device tree), but once that was all confirmed I went further and built a more modern u-boot that talked to the SD card, and defaulted to booting off it. That meant no more relying on the internal NAND at all! I do see some flakiness with the SD card, which is possibly down to the dodgy way it s hooked up (I should probably do a basic PCB layout with JLCPCB instead). That s mostly been mitigated by forcing it into 1-bit mode instead of 4-bit mode (I tried lowering the frequency too, but that didn t make a difference). The problem manifests as:
sunxi-mmc 1c11000.mmc: data error, sending stop command
and then all storage access freezing (existing logins still work, if the program you re trying to run is in cache). I can t find a conclusive software solution to this; I m pretty sure it s the hardware, but I don t understand why the recovery doesn t generally work.

Random power offs After I had storage working I d see random hangs or power offs. It wasn t quite clear what was going on. So I started trying to work out how to find out the CPU temperature, in case it was overheating. It turns out the temperature sensor on the R8 is part of the touchscreen driver, and I d taken my usual approach of turning off all the drivers I didn t think I d need. Enabling it (CONFIG_TOUCHSCREEN_SUN4I) gave temperature readings and seemed to help somewhat with stability, though not completely. Next I ended up looking at the AXP209 PMIC. There were various scripts still installed (I d started out with the NTC Debian install and slowly upgraded it to bullseye while stripping away the obvious pieces I didn t need) and a start-up script called enable-no-limit. This turned out to not be running (some sort of expectation of i2c-dev being loaded and another failing check), but looking at the script and the data sheet revealed the issue. The AXP209 can cope with 3 power sources; an external DC source, a Li-battery, and finally a USB port. I was powering my board via the USB port, using a charger rated for 2A. It turns out that the AXP209 defaults to limiting USB current to 900mA, and that with wifi active and the CPU busy the C.H.I.P. can rise above that. At which point the AXP shuts everything down. Armed with that info I was able to understand what the power scripts were doing and which bit I needed - i2cset -f -y 0 0x34 0x30 0x03 to set no limit and disable the auto-power off. Additionally I also discovered that the AXP209 had a built in temperature sensor as well, so I added support for that via iio-hwmon.
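As a rough illustration only (not from the original post): the same register poke expressed with the python3-smbus module, which I'm assuming is available. Note that i2cset was run with -f because the kernel's AXP20x driver may already claim the device, and plain userspace SMBus access can be refused in that case, so the i2cset one-liner remains the practical route; treat this purely as a sketch of what the command does.

#!/usr/bin/env python3
# Sketch: mirror "i2cset -f -y 0 0x34 0x30 0x03" from the post.
# Bus 0, device address 0x34, register 0x30 and value 0x03 are taken from the post.
from smbus import SMBus

BUS = 0          # i2c bus the AXP209 sits on
AXP209 = 0x34    # PMIC i2c address
REG = 0x30       # VBUS path setting register discussed above

bus = SMBus(BUS)
before = bus.read_byte_data(AXP209, REG)
print(f"register 0x30 before: {before:#04x}")

# 0x03 removes the 900mA USB current limit and disables the auto power-off,
# as worked out from the AXP209 datasheet in the post.
bus.write_byte_data(AXP209, REG, 0x03)

after = bus.read_byte_data(AXP209, REG)
print(f"register 0x30 after:  {after:#04x}")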

WiFi WiFi on the C.H.I.P. is provided by an RTL8723BS SDIO attached device. It s terrible (and not just here, I had an x86 based device with one where it also sucked). Thankfully there s a driver in staging in the kernel these days, but I ve still found it can fall out with my house setup, end up connecting to a further away AP which then results in lots of retries, dropped frames and CPU consumption. Nailing it to the AP on the other side of the wall from where it is helps. I haven t done any serious testing with the Bluetooth other than checking it s detected and can scan ok.

Patches I patched u-boot v2022.01 (which shows you how long ago I was trying this out) with the following to enable boot from external SD:
u-boot C.H.I.P. external SD patch
diff --git a/arch/arm/dts/sun5i-r8-chip.dts b/arch/arm/dts/sun5i-r8-chip.dts
index 879a4b0f3b..1cb3a754d6 100644
--- a/arch/arm/dts/sun5i-r8-chip.dts
+++ b/arch/arm/dts/sun5i-r8-chip.dts
@@ -84,6 +84,13 @@
 		reset-gpios = <&pio 2 19 GPIO_ACTIVE_LOW>; /* PC19 */
 	};
 
+	mmc2_pins_e: mmc2@0 {
+		pins = "PE4", "PE5", "PE6", "PE7", "PE8", "PE9";
+		function = "mmc2";
+		drive-strength = <30>;
+		bias-pull-up;
+	};
+
 	onewire {
 		compatible = "w1-gpio";
 		gpios = <&pio 3 2 GPIO_ACTIVE_HIGH>; /* PD2 */
@@ -175,6 +182,16 @@
 	status = "okay";
 };
 
+&mmc2 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc2_pins_e>;
+	vmmc-supply = <&reg_vcc3v3>;
+	vqmmc-supply = <&reg_vcc3v3>;
+	bus-width = <4>;
+	broken-cd;
+	status = "okay";
+};
+
 &ohci0 {
 	status = "okay";
 };
diff --git a/arch/arm/include/asm/arch-sunxi/gpio.h b/arch/arm/include/asm/arch-sunxi/gpio.h
index f3ab1aea0e..c0dfd85a6c 100644
--- a/arch/arm/include/asm/arch-sunxi/gpio.h
+++ b/arch/arm/include/asm/arch-sunxi/gpio.h
@@ -167,6 +167,7 @@ enum sunxi_gpio_number {
 
 #define SUN8I_GPE_TWI2		3
 #define SUN50I_GPE_TWI2		3
+#define SUNXI_GPE_SDC2		4
 
 #define SUNXI_GPF_SDC0		2
 #define SUNXI_GPF_UART0		4
diff --git a/board/sunxi/board.c b/board/sunxi/board.c
index fdbcd40269..f538cb7e20 100644
--- a/board/sunxi/board.c
+++ b/board/sunxi/board.c
@@ -433,9 +433,9 @@ static void mmc_pinmux_setup(int sdc)
 			sunxi_gpio_set_drv(pin, 2);
 		}
 #elif defined(CONFIG_MACH_SUN5I)
-		/* SDC2: PC6-PC15 */
-		for (pin = SUNXI_GPC(6); pin <= SUNXI_GPC(15); pin++) {
-			sunxi_gpio_set_cfgpin(pin, SUNXI_GPC_SDC2);
+		/* SDC2: PE4-PE9 */
+		for (pin = SUNXI_GPE(4); pin <= SUNXI_GPE(9); pin++) {
+			sunxi_gpio_set_cfgpin(pin, SUNXI_GPE_SDC2);
 			sunxi_gpio_set_pull(pin, SUNXI_GPIO_PULL_UP);
 			sunxi_gpio_set_drv(pin, 2);
 		}

I've sent some patches for the kernel device tree upstream - there's an outstanding issue with the Bluetooth wake GPIO causing the serial port not to probe(!) that I need to resolve before sending a v2, but what's there works for me. The only remaining piece is a patch to enable the external SD for Linux; I don't think it's appropriate to send upstream, but it's fairly basic. This limits the bus to 1 bit rather than the 4 bits it's capable of, as mentioned above.
Linux C.H.I.P. external SD DTS patch
diff --git a/arch/arm/boot/dts/sun5i-r8-chip.dts b/arch/arm/boot/dts/sun5i-r8-chip.dts
index fd37bd1f3920..2b5aa4952620 100644
--- a/arch/arm/boot/dts/sun5i-r8-chip.dts
+++ b/arch/arm/boot/dts/sun5i-r8-chip.dts
@@ -163,6 +163,17 @@ &mmc0 {
 	status = "okay";
 };
 
+&mmc2 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc2_4bit_pe_pins>;
+	vmmc-supply = <&reg_vcc3v3>;
+	vqmmc-supply = <&reg_vcc3v3>;
+	bus-width = <1>;
+	non-removable;
+	disable-wp;
+	status = "okay";
+};
+
 &ohci0 {
 	status = "okay";
 };

As for what I'm doing with it, I think that'll have to be a separate post.

Arturo Borrero González: Kubecon and CloudNativeCon 2023 Europe summary

This post serves as a report from my attendance at Kubecon and CloudNativeCon 2023 Europe, which took place in Amsterdam in April 2023. It was my second time physically attending this conference; the first one was in Austin, Texas (USA) in 2017. I also attended once in a virtual fashion. The content here is mostly generated for the sake of my own recollection and learnings, and is written from the notes I took during the event.

The very first session was the opening keynote, which reunited the whole crowd to bootstrap the event and share the excitement about the days ahead. Some astonishing numbers were announced: there were more than 10,000 people attending, and apparently it could confidently be said that it was the largest open source technology conference taking place in Europe in recent times. It was also communicated that the next couple of iterations of the event will be held in China in September 2023 and Paris in March 2024. More numbers: the CNCF was hosting about 159 projects, involving 1300 maintainers and about 200,000 contributors. The cloud-native community is ever-increasing, and there seems to be a strong trend in the industry towards cloud-native technology adoption and all things related to PaaS and IaaS.

The event program had different tracks, and in each one there was an interesting mix of low-level and higher-level talks for a variety of audiences. On many occasions I found that reading the talk title alone was not enough to know in advance if a talk was a 101 kind of thing or one for experienced engineers. But unlike in previous editions, I didn't have the feeling that the purpose of the conference was to try to sell me anything. Obviously, speakers would make sure to mention, or highlight in a subtle way, the involvement of a given company in a given solution or piece of the ecosystem. But it was non-invasive and fair enough for me.

On a different note, I found the breakout rooms to be often small. I think there were only a couple of rooms that could accommodate more than 500 people, which is a fairly small allowance for 10k attendees. I realized with frustration that the more interesting talks were immediately fully booked, with people waiting in line some 45 minutes before the session time. Because of this, I missed a few important sessions that I'll hopefully watch online later.

Finally, on the more technical side, I learned many things that, instead of grouping by session, I'll group by topic, given how some subjects were mentioned in several talks.

On gitops and CI/CD pipelines Most of the mentions went to FluxCD and ArgoCD. At that point there were no doubts that gitops was a mature approach and both Flux and ArgoCD could do an excellent job. ArgoCD seemed a bit more over-engineered, as a more general-purpose CD pipeline, and Flux felt a bit more tailored for simpler gitops setups. I discovered that both have nice web user interfaces that I wasn't previously familiar with. However, in two different talks I got the impression that the initial setup of them is simple, but that migrating your current workflow to gitops could result in a bumpy ride. That is, the challenge is not deploying flux/argo itself, but moving everything into a state that both humans and flux/argo can understand. I also saw some curious mentions of the config drift that can happen in some cases, even if the goal of gitops is precisely for that to never happen. Such mentions were usually accompanied by some hints on how to handle the situation by hand.
Worth mentioning, I missed any practical information about one of the key pieces of this whole gitops story: building container images. Most of the showcased scenarios were using pre-built container images, so in that sense they were simple. Building and pushing to an image registry is one of the two key points we would need to solve in Toolforge Kubernetes if adopting gitops. In general, even if gitops was already on our radar for Toolforge Kubernetes, I think it climbed a few steps in my priority list after the conference. Another learning was this site: https://opengitops.dev/.

On etcd, performance and resource management I attended a talk focused on etcd performance tuning that was very encouraging. They were basically talking about the exact same problems we have had in Toolforge Kubernetes, like api-server and etcd failure modes, and how sensitive etcd is to disk latency, IO pressure and network throughput. Even though the Toolforge Kubernetes scale is small compared to other Kubernetes deployments out there, I found it very interesting to see others' approaches to the same set of challenges. I learned how most Kubernetes components and apps can overload the api-server, because even the api-server talks to itself. Simple things like kubectl may have a completely different impact on the API depending on usage, for example when listing the whole list of objects (very expensive) vs a single object. The conclusion was to try to avoid hitting the api-server with LIST calls, and to use ResourceVersion, which avoids full dumps from etcd (full dumps, by the way, being the default when using bare kubectl get calls); a small sketch of this is included at the end of this post. I already knew some of this, and for example the jobs-framework-emailer was already making use of this ResourceVersion functionality. There have been a lot of improvements on the performance side of Kubernetes in recent times, or more specifically, in how resources are managed and used by the system. I saw a review of resource management from the perspective of the container runtime and kubelet, and plans to support fancy things like topology-aware scheduling decisions and dynamic resource claims (changing the pod resource claims without re-defining/re-starting the pods).

On cluster management, bootstrapping and multi-tenancy I attended a couple of talks that mentioned kubeadm, and one in particular was from the maintainers themselves. This was of interest to me because as of today we use it for Toolforge. They shared all the latest developments and improvements, and the plans and roadmap for the future, with a special mention of something they called the kubeadm operator, apparently capable of auto-upgrading the cluster, auto-renewing certificates and such. I also saw a comparison between the different cluster bootstrappers, which to me confirmed that kubeadm was the best, from the point of view of being a well-established and well-known workflow, plus having a very active contributor base. The kubeadm developers invited the audience to submit feature requests, so I did. The different talks confirmed that the basic unit for multi-tenancy in Kubernetes is the namespace. Any serious multi-tenant usage should leverage this. There were some ongoing conversations, in official sessions and in the hallway, about the right tool to implement K8s-within-K8s, and vcluster was mentioned enough times for me to be convinced it was the right candidate. This was despite my impression that multiclusters / multicloud are regarded as hard topics in the general community.
I definitely would like to play with it sometime down the road.

On networking I attended a couple of basic sessions that served really well to understand how Kubernetes instruments the network to achieve its goal. The conference program had sessions covering topics ranging from network debugging recommendations and CNI implementations to IPv6 support. Also, one of the keynote sessions had a reference to how kube-proxy is not able to perform NAT for SIP connections, which is interesting because I believe Netfilter Conntrack could do it if properly configured. One of the conclusions on the CNI front was that Calico has massive community adoption (in Netfilter mode), which is reassuring, especially considering it is the one we use for Toolforge Kubernetes.

On jobs I attended a couple of talks that were related to HPC/grid-like usages of Kubernetes. I was truly impressed by some folks out there who were using Kubernetes Jobs on massive scales, such as to train machine learning models and other fancy AI projects. It is acknowledged in the community that the early implementation of things like Jobs and CronJobs had some limitations that are now gone, or at least greatly improved. Some new functionalities have been added as well. Indexed Jobs, for example, enable each Job to have a number (index) and process a chunk of a larger batch of data based on that index. This would allow for full grid-like features like sequential (or again, indexed) processing, coordination between Jobs, and more graceful Job restarts. My first reaction was: is that something we would like to enable in the Toolforge Jobs Framework?

On policy and security A surprisingly good number of sessions covered interesting topics related to policy and security. It was nice to learn two realities:
  1. kubernetes is capable of doing pretty much anything security-wise and create greatly secured environments.
  2. it does not by default. The defaults are not security-strict on purpose.
It kind of made sense to me: Kubernetes is used for a wide range of use cases, and the developers didn't know beforehand which particular setups the default security levels should accommodate. One session in particular covered the most basic security features that should be enabled for any Kubernetes system that would get exposed to random end users. In my opinion, the Toolforge Kubernetes setup was already doing a good job in that regard. To my joy, some sessions referred to the Pod Security Admission mechanism, which is one of the key security features we're about to adopt (when migrating away from Pod Security Policy). I also learned a bit more about Secret resources, their current implementation and how to leverage a combo of CSI and RBAC for a more secure usage of external secrets. Finally, one of the major takeaways from the conference was learning about kyverno and kubeaudit. I was previously aware of OPA Gatekeeper. From the several demos I saw, it seemed to me that kyverno could help us make Toolforge Kubernetes more sustainable by replacing all of our custom admission controllers with it. I already opened a ticket to track this idea, which I'll be proposing to my team soon.

Final notes In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten because of lack of daily exposure. I'm really happy that the cloud native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days.

List of sessions I attended on the first day:
List of sessions I attended on the second day:
List of sessions I attended on the third day:

The videos have been published on YouTube.
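As promised in the etcd section above, here is a minimal sketch of the ResourceVersion point, using the official Python Kubernetes client (the client library, a working kubeconfig and the choice of listing pods are my own assumptions, not something shown at the conference; resource_version="0" tells the api-server that a possibly stale answer from its watch cache is acceptable instead of a quorum read from etcd):

#!/usr/bin/env python3
# Sketch: listing objects without forcing a full quorum read from etcd.
# Assumes the "kubernetes" Python client and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A bare LIST (like a plain "kubectl get pods -A") asks for the most recent
# state, which is the expensive path discussed above.
pods = v1.list_pod_for_all_namespaces()

# Passing resource_version="0" lets the api-server answer from its own cache
# instead of doing a quorum read against etcd.
cached_pods = v1.list_pod_for_all_namespaces(resource_version="0")

print(f"pods (exact):  {len(pods.items)}")
print(f"pods (cached): {len(cached_pods.items)}")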
