Search Results: "ed"

26 May 2024

Russell Coker: USB-A vs USB-C

USB-A is the original socket for USB at the PC end. There are two variants of it: the first is for USB 1.1 to USB 2, and the second is for USB 3, which adds extra pins in a plug- and socket-compatible manner, so you can plug a USB-A device into a USB-A socket without worrying about the speeds of each end as long as you don't need USB 3 speeds. The differences between USB-A and USB-C are:
  1. USB-C has the same form factor as Thunderbolt and the Thunderbolt protocol can run over it if both ends support it.
  2. USB-C generally supports higher power modes for charging (like 130W for Dell laptops, monitors, and plugpacks) but there's no technical reason why USB-A couldn't do it. You can buy chargers that do 60W over USB-A which could power one of our laptops via a USB-A to USB-C cable. So high power USB-A is theoretically possible but generally you won't see it.
  3. USB-C has DisplayPort alternate mode which means using some of the wires for DisplayPort.
  4. USB-C sockets are more likely than USB-A sockets to support the highest speeds (SuperSpeed etc). This is not a difference in the standards, just a choice made by manufacturers.
While USB-C tends to support higher power delivery modes in actual implementations, for connecting to a PC the PC end seems to only support lower power modes regardless of port. I think it would be really good if workstations could connect to monitors via USB-C and provide power, DisplayPort, keyboard, mouse, etc over the same connection. But unfortunately the PC and monitor ends don't appear to support such things.

If you don't need any of the benefits in the list above (i.e. you are using USB for almost anything we do other than connecting a laptop to a dock/monitor/charger) then USB-A will do the job just as well as USB-C. The choice of which type to use should be based on price and which ports are available. E.g. my laptop has 2*USB-C ports and 2*USB-A ports, so given that one USB-C port is almost always used for the monitor or for charging I don't really want to use USB-C for anything else to avoid running out of ports.

When buying USB devices you can't always predict which systems you will need to connect them to. Currently there are a lot of systems without USB-C that are working well and have no need to be replaced. I haven't yet seen a system where the majority of ports are USB-C, but that will probably happen in the next few years. Maybe in 2027 there will be PCs on sale with only two USB-A sockets, forcing people who don't want to use a USB hub to save both of them for keyboard and mouse. Currently USB-C keyboards and mice are available on AliExpress, but they are expensive and I haven't seen them in Australian stores. Most computer users don't wear out keyboards or mice, so a lot of USB-A keyboards and mice will be in service for a long time. As an aside, there are still many PCs with PS/2 keyboard and mouse ports in service, so these things don't go away for a long time.

There is one corner case where USB-C is convenient: when you want to connect a mass storage device for system recovery or emergency backup, want high speed, and don't want to spend time figuring out which of the ports are SuperSpeed (which can be difficult at the back of a PC with poor lighting). With USB-C you can expect a speed of at least 5Gbit/s and don't have to worry about accidentally connecting to a USB 2 port, as is the situation with USB-A.

For my own use the only times that I prefer USB-C over USB-A are for devices to connect to phones. Eventually I'll get a laptop that only has USB-C ports and this will change, but even then adaptors are possible. For someone who doesn't know the details of how things work it's not unreasonable to just buy the newest stuff and assume it's better, as it usually is. But hopefully blog posts like this can help people make more informed decisions.

Guido Günther: Don't unblank in my back pack please

Since phoc 0.39.0 it is possible to configure which keys unidle your phone (which results in unblanking the screen). The current default is that all keys unblank, which is usually fine for e.g. laptops but not the desired result for phones and tablets, where this depends on the position and function of the keys. Volume keys and other exposed keys usually shouldn't unblank - maybe with the exception of some Home buttons on devices that have those.

25 May 2024

Gunnar Wolf: How computers make books - from graphics rendering, search algorithms, and functional programming to indexing and typesetting

This post is a review for Computing Reviews of "How computers make books - from graphics rendering, search algorithms, and functional programming to indexing and typesetting", a book published by Manning.
If we look at the age-old process of creating books, how many different areas can a computer help us with? And how can each of them be used to teach computer science (CS) fundamentals to a nontechnical audience? This is the premise of John Whitington's enticing book, and the result is quite amazing. The book immediately drew my attention when looking at the titles available for review. After all, my initiation into computing as a kid was learning the LaTeX typesetting system while my father worked on his first book on scientific language and typography [1].

Whitington picks 11 different technical aspects of book production, from how dots of ink are transferred to a white page and how they are made into controllable, recognizable shapes, all the way to forming beautiful typefaces and the nuances of properly addressing white-space to present aesthetically pleasing paragraphs, building it all into specific formats aimed at different ends. But if we dig beyond just the chapter titles, we will find a very interesting book on CS that, without ever using technical language or notation, presents aspects as varied as anti-aliasing, vector and raster images, character sets such as ASCII and Unicode, an introduction to programming, input methods for different writing systems, efficient encoding (compression) methods, both for text and images, lossless and lossy, and recursion and dithering methods. To my absolute surprise, while the author thankfully spared the reader the syntax usually associated with LISP-related languages, the programming examples clearly stem from the LISP school, presenting solutions based on tail recursion. Of course, it is no match for Donald Knuth's classic book on this same topic [2], but it could very well be a primer for readers to approach it.

The book is light and easy to read, and keeps a very informal, nontechnical tone throughout. My only complaint relates to reading it in PDF format; the topic of this book, and the care with which the images were provided by the author, warrant high resolution. The included images are not only decorative but an integral part of the book. Maybe this is specific to my review copy, but all of the raster images were in very low resolution. This book is quite different from what readers may usually expect, as it introduces several significant topics in the field. CS professors will enjoy it, of course, but also readers with a humanities background, students new to the field, or even those who are just interested in learning a bit more.

References
  1. Sánchez y Gándara, A.; Magariños Lamas, F.; Wolf, K. B., Manual de lenguaje y tipografía científica en castellano. Trillas, Mexico City, Mexico, 1986, https://www.fis.unam.mx/~bwolf/manual.html
  2. Knuth, D. E. Digital typography. CSLI Lecture Notes. CSLI Publications, Stanford, CA, 1999, https://www-cs-faculty.stanford.edu/~knuth/dt.html

24 May 2024

Julian Andres Klode: Observations in Debian dependency solving

In my previous blog, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs. You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don't actually have any pure literals in there). We can control it in a bunch of ways:
  1. We can mark packages as install or reject
  2. We can order actions/clauses. When backtracking, the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.
This is about all that we really want to do; we can't, if we reach a conflict, say "oh, but this conflict was introduced by that upgrade, and it seems more important, so let's not backtrack on the upgrade request but on this dependency instead." This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue, rather than introducing ad-hoc rules as in the old solver, which had a "which of these packages should I flip the opposite way to break the conflict" kind of thinking. Now our test suite has a whole bunch of these semantics encoded in it, and I'm going to share some problems and ideas for how to solve them. I can't wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let's be honest).
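To make these controls concrete, here is a deliberately tiny sketch of a chronological-backtracking solver (in Python, purely illustrative - this is not APT's actual implementation). Versions are written as hypothetical "pkg=ver" strings, two versions of the same package are assumed to conflict, and the three controls show up as the must_install/must_reject sets, the order of the clauses list, and the left-to-right order of alternatives within a clause.

def pkg(name):
    # "A=1" -> "A"
    return name.split("=")[0]

def solve(clauses, must_install=(), must_reject=()):
    """clauses: list of clauses; each clause is a list of "pkg=ver" alternatives,
    tried left to right. Returns a set of chosen versions, or None if unsatisfiable."""
    rejected = set(must_reject)

    def ok(alt, chosen):
        if alt in rejected:
            return False
        # at most one version of any given package may be chosen
        return all(pkg(alt) != pkg(c) or alt == c for c in chosen)

    def satisfy(i, chosen):
        if i == len(clauses):
            return chosen
        clause = clauses[i]
        if any(alt in chosen for alt in clause):   # clause already satisfied
            return satisfy(i + 1, chosen)
        for alt in clause:                         # choices tried left to right
            if ok(alt, chosen):
                result = satisfy(i + 1, chosen | {alt})
                if result is not None:
                    return result
        return None                                # chronological backtracking: undo the latest decision first

    return satisfy(0, set(must_install))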

apt upgrade is hard

The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but it (unlike its apt-get counterpart) allows the solver to install new packages. Now, consider that the following package is installed:
X Depends: A (= 1) | B
An upgrade from A=1 to A=2 is available. What should happen? The classic solver would choose to remove X in a dist-upgrade and then upgrade A, so its answer is quite clear: keep back the upgrade of A. The new solver however sees two possible solutions:
  1. Install B to satisfy X Depends: A (= 1) | B.
  2. Keep back the upgrade of A
Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically (the toy sketch after this list reproduces both orderings). So
  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends: A (= 2) | A (= 1), and sees it is satisfied already and is content.
  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.
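Using the toy solve() sketch from above (with all its caveats, and with made-up names), the two orderings can be reproduced directly:

dep = ["A=1", "B"]        # X's dependency, alternatives tried left to right
upgrade = ["A=2", "A=1"]  # prefer the new version, fall back to the installed one

print(solve([dep, upgrade]))  # dependency seen first -> {'A=1'}: upgrade kept back
print(solve([upgrade, dep]))  # upgrade seen first    -> {'A=2', 'B'}: B gets pulled in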
We have two ways to approach this issue:
  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade, a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.

Recommends are hard too

See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases. But let's explore what the behavior of an X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:
  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)
This essentially leaves us with the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, "promotions":
A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.
This neatly solves the problem for us. We will never break Recommends that are satisfied. Likewise, we already have a Recommends demotion rule:
A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).
Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn't autoremove them, but treat them as optional?
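Expressed as a tiny, purely illustrative predicate (not APT code), the promotion and demotion rules above amount to something like this; whether satisfied Suggests should get the same treatment is the open question:

def effective_relation(relation, currently_satisfied):
    # A satisfied Recommends is promoted to a hard dependency, an unsatisfied
    # one is demoted to Suggests-like handling; everything else is unchanged.
    if relation == "Recommends":
        return "Depends" if currently_satisfied else "Suggests"
    return relation

print(effective_relation("Recommends", True))   # Depends  - must stay satisfied
print(effective_relation("Recommends", False))  # Suggests - not further evaluated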

tightening of versioned dependencies

Another case of versioned dependencies with alternatives that has complex behavior is something like
X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B
In both cases, installing X should upgrade an A < 2 rather than install B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends it will switch to B. We can solve this again, as in the previous example, by ordering the "keep A installed" requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.

version narrowing instead of version choosing

A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2), we translate
Depends: A (>= 2)
into two rules:
  1. The package selection rule:
     Depends: A
    
    This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2) in an example with two versions of A).
  2. The version narrowing rule:
     Conflicts: A (<< 2)
    
    This would outright reject a choice of A (= 1).
So now we have 3 kinds of clauses:
  1. package selection
  2. version narrowing
  3. version selection
If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions. This still leaves one issue: what if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He'd expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we'd like to enqueue A and reject all choices other than 3 and 2. I think it's fair to say: "Don't do that, then" here.
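As a hypothetical illustration (again not APT code, using the same made-up "pkg=ver" notation as the earlier toy sketch), the translation of a versioned dependency into a selection clause plus a narrowing rule could look like this:

def translate_versioned_depends(package, known_versions, allowed):
    """known_versions: every known version of `package`; allowed: predicate on a version."""
    selection = [f"{package}={v}" for v in known_versions]                    # the "Depends: A" clause
    narrowing = [f"{package}={v}" for v in known_versions if not allowed(v)]  # the "Conflicts: A (<< 2)" rejects
    return selection, narrowing

selection, rejects = translate_versioned_depends("A", [1, 2, 3], lambda v: v >= 2)
print(selection)  # ['A=1', 'A=2', 'A=3'] - the version choice clause
print(rejects)    # ['A=1']               - outright rejected by narrowing

The rejects list is exactly the kind of thing the toy solver's must_reject set would express.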

Implementing strict pinning correctly

APT knows a single candidate version per package; this makes the solver relatively deterministic: it will only ever pick the candidate, or an installed version. This also happens to significantly reduce the search space, which is good - less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions. But of course, APT allows you to specify a non-candidate version of a package to install, for example:
apt install foo/oracular-proposed
The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you could see with apt-cache policy. The solver currently however validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem because the solver should not be dependent on the pkgDepCache, as the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt) and I'd really like to get rid of it. But currently the frontend does go via the pkgDepCache: it marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache.

The current implementation of "allowed version" works by reducing the search space, i.e. in every dependency we outright ignore any non-allowed versions. So if you have a version 3 of A that is ignored, a Depends: A would be translated into A (= 2) | A (= 1). However this has two disadvantages: (1) it means that if we show you why A could not be installed, you don't even see A (= 3) in the list of choices, and (2) you would need to keep the pkgDepCache around for the temporary overrides. So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is that we apply the allowed version rule by just marking every other version as not allowed when discovering the package in the from-depcache translation layer. This doesn't really increase the search space either, but it solves both our problem of making overrides work and that of giving you a reasonable error message that lists all versions of A.
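A small sketch of the difference between the two models (illustrative only; the names and structures are invented):

versions = ["A=1", "A=2", "A=3"]
allowed = {"A=1", "A=2"}               # A=3 is excluded by pinning/policy

# Model 1: filtering - disallowed versions simply vanish from the clause.
filtered_clause = [v for v in versions if v in allowed]

# Model 2: marking - every version stays in the clause, disallowed ones get reject marks.
full_clause = versions
marked_rejected = {v for v in versions if v not in allowed}

print(filtered_clause)                 # ['A=1', 'A=2'] - A=3 never shows up in explanations
print(full_clause, marked_rejected)    # ['A=1', 'A=2', 'A=3'] {'A=3'} - A=3 can appear in error messages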

pulling up common dependencies to minimize backtracking cost

One of the common issues we have is that when we have a dependency group
A | B | C | D
we try them in order, and if one fails, we undo everything it did and move on to the next one. However, this perhaps isn't the best choice of operation. I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared by all versions) when marking a package for install, but we don't do this here: we have already lowered the representation of the dependency group into a list of versions, so we'd need to extract the package back out of it. This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:
  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A
Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway). The caveat though is: it may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly #A * (#B+#C+#D). Each dependency group we need to check (i.e. is X | Y in B) meanwhile has linear cost: we need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X | Y and Y | X are different dependencies of course, but that is to be expected - they are. But any dependency of the same order will have the same memory layout. So really the cost is roughly N^4. This isn't nice. You can apply various heuristics here on how to improve that, or you can even apply binary logic:
  1. Enqueue common dependencies of A | B | C | D
  2. Move into the left half, enqueue common dependencies of A | B
  3. Again divide and conquer and select A.
This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one. Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.
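Here is a toy illustration (not APT code; the package and library names are invented) of pulling up the dependencies shared by every alternative in a group A | B | C | D, including the divide-and-conquer variant of enqueueing the left half before committing to A:

deps = {
    "A": {"libc", "libx", "liba"},
    "B": {"libc", "libx", "libb"},
    "C": {"libc", "libcc"},
    "D": {"libc", "libd"},
}

def common(alternatives):
    """Dependencies shared by every alternative in the group."""
    shared = set(deps[alternatives[0]])
    for alt in alternatives[1:]:
        shared &= deps[alt]
    return shared

print(common(["A", "B", "C", "D"]))  # {'libc'}         - survives any backtrack within the group
print(common(["A", "B"]))            # {'libc', 'libx'} - the left half, next decision level
print(deps["A"])                     # finally commit to A's own dependencies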

Freexian Collaborators: Discover release 0.3.0 of the debusine software factory (by Colin Watson)

Debusine is a Free Software project developed by Freexian to manage scheduling and distribution of Debian-related tasks to a network of worker machines. It was started some time back, but its development pace has recently increased significantly thanks to funding from the Sovereign Tech Fund. You can read more about it in its documentation. For more background, Enrico Zini and Carles Pina i Estany gave a talk on Debusine in November 2023 at the mini-DebConf in Cambridge. We described the work from our first funded milestone in a post to debian-devel-announce in March. We've recently finished work on our second funded milestone, culminating in releasing version 0.3.0 to unstable. Our focus on this milestone was on new building blocks to allow us to automatically orchestrate QA tasks in bulk. Full details are in our release history document. As usual, debusine.debian.net is up to date with our latest work.

Collections

In the previous milestone, debusine could store artifacts and run tasks against those artifacts. However, on its own this required the user to do a lot of manual work, because the only way to refer to an artifact was by its ID. We now have the concept of a collection, which can store references to other artifacts (or indeed to other collections) with some attached metadata. These are structured by category, so for example a debian:suite collection contains references to source and binary package artifacts with their names, versions, and architectures as metadata. This allows us to look up artifacts using a simple query language instead of just by ID. At the moment, the main visible effect of this is that our Getting started with debusine tutorial no longer needs users of debusine.debian.net to create their own build environments before being able to submit other work requests: they can refer to existing environments using something like debian/match:codename=trixie:variant=sbuild instead. We also have a basic user interface allowing you to browse existing collections, accessible via the relevant workspace (such as the default System workspace).

Workflows

We've always known that individual tasks were just a starting point: real-world requirements often involve chaining many tasks together, as many Debian developers already do using the Salsa CI pipeline. debusine intends to approach a similar problem from a different angle, defining common workflows that can be applied at the scale of a whole distribution without being tightly coupled to where each package's code is hosted. In time we intend to define a way for users to specify their own workflows, but rather than getting too bogged down in this we started by building a couple of predefined workflows into debusine. The update_environments workflow is used to create multiple build environments in bulk, while the sbuild workflow builds a source package for all the architectures that it supports and for which debusine has workers. (debusine.debian.net currently has amd64 and arm64 workers, supporting the amd64, arm64, armel, armhf, and i386 architectures between them.) Upcoming work will build on this by adding more workflows that chain tasks together in various ways, such as workflows that build a package and run QA tasks on the results, or a workflow that builds a package and uploads the result to an upload queue.

Next steps

Our next planned milestone involves expanding debusine's capability as a build daemon. For that, we already know that there are a number of specific extra workflow steps we need to add, and we've reached out to some members of Debian's buildd team to ask for feedback on what they consider necessary. We hope to be able to replace some of Freexian's own build infrastructure with debusine in the near future.

Reproducible Builds (diffoscope): diffoscope 268 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 268. This version includes the following changes:
[ Chris Lamb ]
* Drop apktool from Build-Depends; we can still test our APK code
  via autopkgtests. (Closes: #1071410)
* Fix tests for 7zip version 24.05.
* Add a versioned dependency for at least version 5.4.5 for the xz
  tests; they fail under (at least) xz 5.2.8.
  (Closes: reproducible-builds/diffoscope#374)
[ Vagrant Cascadian ]
* Relax Chris' versioned xz test dependency (5.4.5) to also allow
  version 5.4.1.
You can find out more by visiting the project homepage.

22 May 2024

Evgeni Golov: Upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

Warning to the Planet Debian readers: the following post might shock you, if you're used to Debian's smooth upgrades using only the package manager.

Leapp?!

Contrary to distributions like Debian and Fedora, RHEL can't be upgraded using the package manager alone. Instead there is a tool called Leapp that takes care of orchestrating the update and also includes a set of checks to determine whether a system can be upgraded at all. Have a look at the RHEL documentation about upgrading if you want more details on the process itself. You might have noticed that the title of this post says "CentOS Stream" but here I am talking about RHEL. This is mostly because Leapp was originally written with RHEL in mind.

Upgrading CentOS 7 to EL8

When people started pondering upgrading their CentOS 7 installations, AlmaLinux started the ELevate project to allow upgrading CentOS 7 to CentOS Stream 8 but also to AlmaLinux 8, Rocky 8 or Oracle Linux 8. ELevate was essentially Leapp with patches to allow working on CentOS, which has different package signature keys, different OS release versioning, etc. Sadly these patches were never merged back into Leapp.

Making Leapp work with CentOS Stream 8 (and other distributions)

At some point I noticed that things weren't moving and EL8 to EL9 upgrades were coming closer (and I had my own systems that I wanted to be able to upgrade in place). Annoyed-Evgeni-Development is best development? Not sure, but it produced a set of patches that allowed some movement: However, this is not yet the end of the story. At least "convert dot-less CentOS versions to X.999" is open, and another followup would be needed if we go that route. But I don't expect this to be merged soon, as the patch is technically wrong - yet it makes things mostly work. The big problem here is that CentOS Stream doesn't have X.Y versioning, just X, as it's a constant stream with no point releases. Leapp however relies on X.Y versioning to know which package changes it needs to perform. Pretending CentOS Stream 8 is "RHEL" 8.999 works if you assume that Stream is always ahead of RHEL. This is however a CentOS-only problem. I still need to properly test that, but I'd expect things to work fine with upstream Leapp on AlmaLinux/Rocky if you feed it the right signature and repository data.

Actually upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

Like I've already teased in my HPE rant, I've actually used that code to upgrade virt01.conova.theforeman.org to CentOS Stream 9. I've also used it to upgrade a server at home that's responsible for running important containers like Home Assistant and UniFi. So it's absolutely battle tested and production grade! It's also hungry for kittens. As mentioned above, you can't just use upstream Leapp, but I have a Copr: evgeni/leapp.
# dnf copr enable evgeni/leapp
# dnf install leapp leapp-upgrade-el8toel9
Apart from the software, we'll also need to tell it which repositories to use for the upgrade.
# vim /etc/leapp/files/leapp_upgrade_repositories.repo
[c9-baseos]
name=CentOS Stream $releasever - BaseOS
metalink=https://mirrors.centos.org/metalink?repo=centos-baseos-9-stream&arch=$basearch&protocol=https,http
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1
[c9-appstream]
name=CentOS Stream $releasever - AppStream
metalink=https://mirrors.centos.org/metalink?repo=centos-appstream-9-stream&arch=$basearch&protocol=https,http
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1
Depending on the setup and installed packages, more repositories might be needed. Just make sure that the $stream substitution is not used as Leapp doesn't override that and you'd end up with CentOS Stream 8 repos again. Once all that is in place, we can call leapp preupgrade and let it analyze the system. Ideally, the output will look like this:
# leapp preupgrade
 
============================================================
                      REPORT OVERVIEW                       
============================================================
Reports summary:
    Errors:                      0
    Inhibitors:                  0
    HIGH severity reports:       0
    MEDIUM severity reports:     0
    LOW severity reports:        3
    INFO severity reports:       3
Before continuing consult the full report:
    A report has been generated at /var/log/leapp/leapp-report.json
    A report has been generated at /var/log/leapp/leapp-report.txt
============================================================
                   END OF REPORT OVERVIEW                   
============================================================
But trust me, it won't ;-) As mentioned above, Leapp analyzes the system before the upgrade. Some checks can completely inhibit the upgrade, while others will just be logged as "you better should have a look".

Firewalld Configuration AllowZoneDrifting Is Unsupported

EL7 and EL8 shipped with AllowZoneDrifting=yes, but since EL9 this is not supported anymore. As this can potentially break the networking of the system, the upgrade gets inhibited.

Newest installed kernel not in use

Admit it, you also don't reboot into every new kernel available! Well, Leapp won't let that pass and inhibits the upgrade.

Cannot perform the VDO check of block devices

In EL8 there are two ways to manage VDO: using the dedicated vdo tool and via LVM. If your system uses LVM (it should!) but not VDO, you probably don't have the vdo package installed. But then Leapp can't check if your LVM devices really aren't VDO without the vdo tooling and will inhibit the upgrade. So you gotta install vdo for it to find out that you don't use VDO.

LUKS encrypted partition detected

Yeah. Sorry. Using LUKS? Straight into the inhibit corner! But hey, if you don't use LUKS for / you can probably get away by deleting the inhibitwhenluks actor. That worked for me, but remember the kittens!

Really upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

The headings are getting silly, huh? Anyway, once leapp preupgrade is happy and doesn't throw any inhibitors anymore, the actual (real?) upgrade can be done by calling leapp upgrade. This will download all necessary packages and create an intermediate initramfs that contains all the things needed for the upgrade, and ask you to reboot. Once booted, the upgrade itself takes somewhere between 5 and 10 minutes. Then another minute or 5 to relabel your disks with the new SELinux policy. And three reboots (into the upgrade initramfs, into SELinux relabel, into real OS) of a ProLiant DL325 - 5 minutes each? And then for good measure another one, to flip SELinux from permissive to enforcing. Are we done yet? Nope. There are a few post-upgrade tasks you get to do yourself. Yes, the switching of SELinux back to enforcing is one of them. Please don't forget it.

Using the system after the upgrade

A customer once said "We're not running those systems for the sake of running systems, but for the sake of running some application on top of them". This is very true.

libvirt doesn't support Spice/QXL

In EL9, support for Spice/QXL was dropped, so if you try to boot a VM using it, libvirt will nicely error out with
Error starting domain: unsupported configuration: domain configuration does not support video model 'qxl'
Interestingly, because multiple parts of the VM are invalid, you can't edit it in virt-manager (at least the one in Fedora 39) as removing/fixing one part requires applying the new configuration which is still invalid. So virsh edit <vm> it is! Look for entries like
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <audio id='1' type='spice'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'> 
      <address type='usb' bus='0' port='2'/> 
    </redirdev> 
    <redirdev bus='usb' type='spicevmc'> 
      <address type='usb' bus='0' port='3'/> 
    </redirdev>
and either just delete those entries or (better) replace them with VNC/cirrus:
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
Podman needs re-login to private registries

One of the machines I've updated runs Podman and pulls containers from GitHub which are marked as private. To do so, I have a personal access token that I've used to log in to ghcr.io. After the CentOS Stream 9 upgrade (which included an upgrade to Podman 5), pulls stopped working with authentication/permission errors. No idea what exactly happened, but a simple podman login fixed this issue quickly.
$ echo ghp_token | podman login ghcr.io -u <user> --password-stdin
shim has an el8 tag

One of the documented post-upgrade tasks is to verify that no EL8 packages are installed, and to remove those if there are any. However, when you do this, you'll notice that the shim-x64 package has an EL8 version: shim-x64-15-15.el8_2.x86_64. That's because the same build is used in both CentOS Stream 8 and CentOS Stream 9. Confusing, but should really not be uninstalled if you want the machine to boot ;-)

Are we done yet?

Yes! That's it. Enjoy your CentOS Stream 9!

21 May 2024

Michael Ablassmeier: lvm thin send/recv

A few days ago I found this mail on the LKML that introduces support for userspace access to LVM thin-provisioned metadata snapshots. I didn't know this was possible. Using the thin provisioning tools you can then export the metadata information for your LVM snapshots to track changed regions between them. The workflow is pretty straightforward, yet not really documented:
# lvcreate -ay -Ky --snapshot -n full_backup thingroup/vol1
  # dmsetup message /dev/mapper/thingroup-thinpool-tpool 0 reserve_metadata_snap
  # lvcreate -ay -Ky --snapshot -n inc_backup thingroup/vol1
  # thin_delta  -m --snap1 $(lvs --noheadings -o thin_id thingroup/full_backup) --snap2 $(lvs --noheadings -o thin_id thingroup/inc_backup) > delta_dump
  # dmsetup message /dev/mapper/thingroup-thinpool-tpool 0 release_metadata_snap
This has all already been implemented by a nice utility called thin-send-recv, which, based on this functionality, allows one to (incrementally) send LVM snapshots to remote systems, just like zfs send and zfs recv.

20 May 2024

Russell Coker: Respect and Children

I attended the school Yarra Valley Grammar (then Yarra Valley Anglican School, which I will refer to as "YV") and completed year 12 in 1990. The school is currently in the news for a spreadsheet some boys made rating girls, where "unrapeable" was one of the ratings. The school's PR team are now making claims like "Respect for each other is in the DNA of this school". I'd like to know when this DNA change allegedly occurred, because respect definitely wasn't in the school DNA in 1990! Before I go any further I have to note that if the school threatens legal action against me for this post it will be clear evidence that they don't believe in respect. The actions of that school have wronged me, several of my friends, many people who aren't friends but who I wish hadn't had to suffer and whose suffering I wish I hadn't had to witness, and presumably countless others that I didn't witness. If they have any decency they would not consider legal action, but I have learned that as an institution they have no decency, so I have to note that they should read the Wikipedia page about the Streisand Effect [1] and keep it in mind before deciding on a course of action.

I think it is possible to create a school where most kids enjoy being there and enjoy learning, where hardly any students find it a negative experience and almost no-one finds it traumatic. But it is not possible to do that with the way schools tend to be run. When I was at high school there was a general culture that minor sex crimes committed by boys against boys weren't a problem; this probably applied to all high schools. Things like ripping a boy's pants off (known as "dakking") were considered a big joke. If you accept that ripping the pants off an unwilling boy is a good thing (as was the case when I was at school) then that leads to thinking that describing girls as "unrapeable" is acceptable. The Wikipedia page for Pantsing [2] has a reference for this issue being raised as a serious problem by the British Secretary of State for Education and Skills Alan Johnson in 2007. So this has continued to be a widespread problem around the world. Has YV become better than other schools in dealing with it, or are dakking and wedgies as well accepted now as they were when I attended? There is talk about schools preparing kids for the workforce, but grabbing someone's underpants without consent will result in instant dismissal from almost all employment. There should be more tolerance for making mistakes at school than at work, but schools shouldn't tolerate what would be serious crimes in other contexts. For work environments there have been significant changes to what is accepted, so it doesn't seem unreasonable to expect that schools can have a similar change in culture. One would hope that spending 6 years wondering who's going to grab your underpants next would teach boys the importance of consent and some sympathy for victims of other forms of sexual assault. But that doesn't seem to happen; apparently it's often the opposite.

When I was young, Autism wasn't diagnosed for anyone who was capable of having a normal life. Teachers noticed that I wasn't like other kids; some were nice, but some encouraged other boys to attack me as a form of corporal punishment by proxy - not a punishment for doing anything wrong (detentions were adequate for that) but for being different. The lesson kids will take from that sort of thing is that if you are in a position of power you can mistreat other people and get away with it.
There was a girl in my year level at YV who would probably be diagnosed as Autistic by today's standards. The way I witnessed her being treated was considerably worse than what was described in the recent news reports, but it is quite likely that worse things have been done recently which haven't made the news yet. If this issue is declared to be over after 4 boys were expelled then I'll count that as evidence of a cover-up. These things don't happen in a vacuum; there's a culture that permits and encourages it. The word "respect" has different meanings: it can mean "treat a superior as the master" or "treat someone as a human being". The phrase "if you treat me with respect I'll treat you with respect" usually means "if you treat me as the boss then I'll treat you as a human being". The distinction is very important when discussing respect in schools. If teachers are considered the ultimate bosses whose behaviour can never be questioned, then many boys won't need much help from Andrew Tate in developing the belief that they should be the boss of girls in the same way. Do any schools have a process for having students review teachers? Does YV have an ombudsman to take reports of misbehaving teachers in the way that corporations typically have an ombudsman to take reports about bad managers? Any time you have people whose behaviour is beyond scrutiny or oversight you will inevitably have bad people apply for jobs, then bad things will happen and it will create a culture of bad behaviour. If teachers can treat kids badly then kids will treat other kids badly, and this generally ends with girls being treated badly by boys. My experience at YV was that kids barely had the status of people. It seemed that the school operated more as a caretaker of the property of parents than as an organisation that cares for people. The current YV website has a Whistleblower policy [3] that has only one occurrence of the word "student", and that is about issues that endanger the health or safety of students. Students are the people most vulnerable to reprisal for complaining, and not being listed as an eligible whistleblower shows their status. The web site also has a flowchart for complaints and grievances [4] which doesn't describe any policy for a complaint to be initiated by a student. One would hope that parents would advocate for their children, but that often isn't the case. When discussing the possibility of boys being bullied at school with parents I've had them say things like "my son wouldn't be so weak that he would be bullied"; no boy will tell his parents about being bullied if that's their attitude! I imagine that there are similar but different issues of parents victim-blaming when their daughter is bullied (presumably substituting immoral for weak) but I don't have direct knowledge of the topic. The experience of many kids is being disrespected by their parents, the school system, and often siblings too. A school can't solve all the world's problems but can ideally be a refuge for kids who have problems at home. When I was at school the culture in the country and the school was homophobic. One teacher, when discussing issues such as how students could tell him if they had psychological problems and no-one else to talk to, said some things like "the Village People make really good music", which was the only time any teacher said anything like "It's OK to be gay" (the Village People were the gayest pop group at the time). A lot of the bullying at school had a sexual component to it.
In addition to the wedgies and dakking (which, while not happening often, was something you had to constantly be aware of), I routinely avoided PE classes where a shower was necessary because of a thug who hung around by the showers and looked hungrily at my penis; I don't know if he had a particular liking to mine or if he stared at everyone that way. Flashing and perving was quite common in change rooms. Presumably such accepted boy-boy sexual misbehaviour led to boys mistreating girls. I currently work for a company that is active in telling its employees about the possibility of free psychological assistance. Any employee can phone a psychologist to discuss problems (whether or not they are work related) free of charge and without their manager or colleagues knowing. The company is billed and is only given a breakdown of the number of people who used the service and roughly what the issue was (work stress, family, friends, grief, etc). When something noteworthy happens employees are given reminders about this, such as "if you need help after seeing a homeless man try to steal a laptop from the office then feel free to call the assistance program". Do schools offer something similar? With the school fees paid to a school like YV they should be able to afford plenty of psychologist time. Every day I was at YV I saw something considerably worse than laptop theft; most days something was done to me. The problems with schools are part of larger problems with society. About half of the adults in Australia still support the Liberal party in spite of their support of Christian Porter, Cardinal Pell, and Bruce Lehrmann. It's not logical to expect such parents to discourage their sons from mistreating girls or to encourage their daughters to complain when they are mistreated. The Anglican church has recently changed its policy to suggesting that victims of sexual abuse can contact the police instead of or in addition to the church; previously they had encouraged victims to only contact the church, which facilitated cover-ups. One would hope that schools associated with the Anglican church have also changed their practices towards such things. I approve of the "respect is in our DNA" concept; it's like Google's former slogan of "Don't be evil", which is something that they can be bound to. Here's a list of questions that could be asked of schools (not just YV but all schools) by journalists when reporting on such things:
  1. Do you have a policy of not trying to silence past students who have been treated badly?
  2. Do you take all sexual assaults seriously including wedgies and dakking?
  3. Do you take all violence at school seriously? Even if there s no blood? Even if the victim says they don t want to make an issue of it?
  4. What are your procedures to deal with misbehaviour from teachers? Do the students all know how to file complaints? Do they know that they can file a complaint if they aren t the victim?
  5. Does the school have policies against homophobia and transphobia and are they enforced?
  6. Does the school offer free psychological assistance to students and staff who need it? NB This only applies to private schools like YV that have huge amounts of money, public schools can t afford that.
  7. Are serious incidents investigated by people who are independent of the school and who don t have a vested interest in keeping things quiet?
  8. Do you encourage students to seek external help from organisations like the ones on the resources list of the Grace Tame Foundation [5]? Having your own list of recommended external organisations would be good too.
Counter Arguments

I've had practice debating such things; here are some responses to common counter arguments.

Conclusion

I don't think that YV is necessarily worse than other schools, although I'm sure that representatives of other private schools are now working to assure parents of students and prospective students that they are. I don't think that all the people who were employed as teachers there when I attended were bad people; some of them were nice people who were competent teachers. But a few good people can't turn around a bad system. I will note that when I attended all the sports teachers were decent people; it was the only department I could say such things about. But sports involves situations that can lead to a bad result: issues started at other times and places can lead to violence or harassment in PE classes regardless of how good the teachers are. Teachers who know that there are problems need to be able to raise issues with the administration. When a teacher quits teaching to join the clergy and another teacher describes it as "a loss for the clergy but a gain for YV", it raises the question of why the bad teacher in question couldn't have been encouraged to leave earlier. A significant portion of the population will do whatever is permitted. If you say "no teacher would ever bully a student so we don't need to look out for that", then some teacher will do exactly that. I hope that this will lead to changes both in YV and in other schools. But if they declare this issue as resolved after expelling 4 students then something similar or worse will happen again. At least now students know that when this sort of thing happens they can send evidence to journalists to get some action.

18 May 2024

Russell Coker: Kogan 5120*2160 40" Monitor

I've just got a new Kogan 5120*2160 40" curved monitor. It cost $599 including shipping etc, which is much cheaper than the Dell monitor with similar specs selling for about $2500. For monitors with better than 4K resolution (by which I don't mean 5K*1440) this is the cheapest option. The nearest competitors are the 27" monitors that do 5120*2880 from Apple and some companies copying Apple's specs. While 5120*2880 is a significantly better resolution than what I got, it's probably not going to help me at 27" size.

I've had a Dell 32" 4K monitor since the 1st of July 2022 [1]. It is a really good monitor and I had no complaints at all about it. It was clearer than the Samsung 27" 4K monitor I used before it, and I'm not sure how much of that is due to better display technology (the Samsung was from 2017) and how much was due to larger size. But larger size was definitely a significant factor. I briefly owned a Phillips 43" 4K monitor [2] and determined that a 43" flat screen was definitely too big. At the time I thought that about 35" would have been ideal, but after a couple of years using a flat 32" screen I think that 32" is about the upper limit for a flat screen. This is the first curved monitor I've used, but I'm already thinking that maybe 40" is too big for a 21:9 aspect ratio even with a curved screen. Maybe if it was 4:4 or even 16:9 that would be ok. Otherwise the ideal for a curved screen for me would be something between about 36" and 38". Also 43" is awkward to move around my desk. But this is still quite close to ideal.

The first system I tested this on was a work laptop, a Dell Latitude 7400 2in1. On the Dell dock it did 4K resolution, and on a HDMI cable it did 1440p, which was a disappointment as that laptop has talked to many 4K monitors at native resolution on the HDMI port with the same cable. This isn't an impossible problem; as I work in the IT department I can just go through all the laptops in the store room until I find one that supports it. But the 2in1 is a very nice laptop, so I might even just keep using it in 4K resolution when WFH. The laptop in question is deemed an executive laptop, so I have to wait another 2 years for the executives to get new laptops before I can get a newer 2in1.

On my regular desktop I had the problem of the display going off for a few seconds every minute or so and also occasionally giving a white flicker. That was using 5120*2160 with a DisplayPort switch as described in the blog post about the Dell 32" monitor. When I ran it in 4K resolution with the DisplayPort switch from my desktop it was fine. I then used the DisplayPort cable that came with the monitor, directly connecting the video card to the display, and it was fine at 5120*2160 with 75Hz.

The monitor has the joystick thing that seems to have become some sort of standard for controlling modern monitors. It's annoying that pressing it in powers it off; I think there should be a separate button for that. Also the UI in general made me wonder if one of the vendors of expensive monitors had paid whoever designed it to make the UI suck. The monitor had a single dead pixel in the center of the screen, about 1/4 of the way down from the top, when I started writing this post. Now it's gone away, which is a concern as I don't know which pixels might have problems next or if the number of stuck pixels will increase. Also it would be good if there was a dark mode for the WordPress editor. I use dark mode wherever possible, so I didn't notice the dead pixel for several hours until I started writing this blog post.
I watched a movie on Netflix and it took the entire screen area. I don't know if they are storing movies in 64:27 ratio or if they clipped the top and bottom; it was probably clipped but still looked OK. The monitor has different screen modes which make it look different, but I can't see much benefit to the different modes. The standard mode is what I usually use and it's brighter, and the movie mode seems OK for the one movie I've watched so far.

In other news BenQ has just announced a 3840*2560 28" monitor specifically designed for programming [3]. This is the first time I've heard of a monitor with 3:2 ratio with modern resolution; we still aren't at the 4:3 type ratio that we were used to when 640*480 was high resolution, but it's a definite step in the right direction. It's also the only time I recall ever seeing a monitor advertised as being designed for programming. In the 80s there were home computers advertised as being computers for kids to program, but at that time it was either TV sets for monitors or monitors sold with computers. It was only after the IBM PC compatible market took off that having a choice of different monitors for one computer was a thing. In recent years monitors advertised as being for office use (meaning bright and expensive) have become common, as are monitors designed for gamer use (meaning high refresh rate). But BenQ seems to be the first to advertise a monitor for the purpose of programming. They have a desktop partition feature (which could be software or hardware, the article doesn't make it clear) to give some of the benefits of a tiled window manager to people who use OSs that don't support such things. The BenQ monitor is a bit small for my taste; I don't know if my vision is good enough to take advantage of 3840*2560 in a 28" monitor nowadays. I think at least 32" would be better. Google seems to be really into buying good monitors for their programmers; if every Google programmer got one of those BenQ monitors then that would be enough sales to make it worthwhile for them.

I had hoped that we would have 6K monitors become affordable this year and 8K become less expensive than most cars. Maybe that won't happen and we will instead have a wider range of products like the ultra wide monitor I just bought and the BenQ programmer's monitor. If so I don't think that will be a bad result. Now the question is whether I can use this monitor for 2 years before finding something else that makes me want to upgrade. I can afford to spend the equivalent of a bit under $1/day on monitor upgrades.

James Morrison: Goodbye Firefox

I've been on Chromebooks for a while. However, since I had to recently try a Mac, I figured it was time to give Firefox a try again. After two weeks of trying, I've given up. At least for myself, I figured I'd write down the reasons I've given up.

Reasons:

17 May 2024

Debian Brasil: MiniDebConf Belo Horizonte 2024 - a brief report

From April 27th to 30th, 2024, MiniDebConf Belo Horizonte 2024 was held at the Pampulha Campus of UFMG - Universidade Federal de Minas Gerais, in Belo Horizonte - MG.

MiniDebConf BH 2024 banners

This was the fifth time that a MiniDebConf (as an exclusively in-person event about Debian) took place in Brazil. The previous editions were in Curitiba (2016, 2017, and 2018) and in Brasília in 2023. We have had other MiniDebConf editions held within Free Software events such as FISL and Latinoware, and other online events. See our event history. In parallel with the MiniDebConf, on the 27th (Saturday) the FLISOL - Latin American Free Software Installation Festival took place, the largest Free Software promotion event in Latin America, held since 2005 simultaneously in several cities. MiniDebConf Belo Horizonte 2024 was a success (just like the previous editions) thanks to the participation of everyone, regardless of their level of knowledge about Debian. We value the presence both of beginner users who are getting familiar with the system and of the project's official developers. The spirit of welcome and collaboration was present at every moment.

MiniDebConf BH 2024 flisol

Numbers of the 2024 edition

During the four days of the event, several activities took place for all levels of users and contributors of the Debian project. The official program was composed of:

MiniDebConf BH 2024 palestra

The final numbers of MiniDebConf Belo Horizonte 2024 show that we had a record number of participants. Of the 224 participants, 15 were official Brazilian contributors, 10 of them DDs (Debian Developers) and 05 DMs (Debian Maintainers), in addition to several unofficial contributors. The organization was carried out by 14 people who started working as early as the end of 2023, among them Loïc Cerf of the Computing Department, who made the event at UFMG possible, and 37 volunteers who helped during the event. As the MiniDebConf was held on the premises of UFMG, we also had the help of more than 10 University staff members. See the list with the names of the people who helped in some way to make MiniDebConf Belo Horizonte 2024 happen. The difference between the number of registered people and the number of people present is probably explained by the fact that there is no registration fee, so if a person decides not to go to the event they suffer no financial loss. The 2024 edition of MiniDebConf Belo Horizonte was truly grand and shows the result of the constant efforts made over the last few years to attract more contributors to the Debian community in Brazil. With each edition the numbers only grow, with more participants, more activities, more rooms, and more sponsors/supporters.

MiniDebConf BH 2024 grupo

MiniDebConf BH 2024 grupo

Activities

The MiniDebConf program was intense and diverse. On the 27th, 29th and 30th (Saturday, Monday and Tuesday) we had talks, debates, workshops and many hands-on activities.

MiniDebConf BH 2024 palestra

On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing around the city. In the morning we left the hotel and went, on a chartered bus, to the Mercado Central of Belo Horizonte. People took the opportunity to buy various things such as cheeses, sweets, cachaças and souvenirs, as well as to try some local foods.

MiniDebConf BH 2024 mercado

After 2 hours at the Mercado, we got back on the bus and hit the road to have lunch at a restaurant serving typical food from Minas Gerais.

MiniDebConf BH 2024 palestra

With everyone well fed, we returned to Belo Horizonte to visit the city's main tourist attraction: the Pampulha Lake and the São Francisco de Assis Chapel, better known as the Igrejinha da Pampulha.

MiniDebConf BH 2024 palestra

We went back to the hotel and the day ended in the hacker space that we set up in the events room, for people to chat, do some packaging, and eat some pizzas.

MiniDebConf BH 2024 palestra

Crowdfunding

For the third time we ran a crowdfunding campaign, and it was amazing how people contributed! The initial goal was to raise the amount equivalent to a gold sponsorship tier, R$ 3,000.00. When we reached this goal, we set a new one, equivalent to a gold tier + a silver tier (R$ 5,000.00). And we reached that goal again. Then we proposed as the final goal the value of a gold + silver + bronze tier, which would be equivalent to R$ 6,000.00. The result was that we raised R$ 6,706.79 with the help of more than 100 people! Many thanks to the people who contributed any amount. As a way of saying thank you, we have listed the names of the people who donated.

MiniDebConf BH 2024 doadores

Food, accommodation and/or travel grants for participants

Each edition of the MiniDebConf has brought some innovation, or some different benefit for the participants. In this year's edition in Belo Horizonte, just as happens at DebConfs, we offered food, accommodation and/or travel grants to help those people who would like to come to the event but who would need some kind of help. In the registration form, we included the option for a person to request a food, accommodation and/or travel grant, but to do so they had to identify themselves as a (official or unofficial) contributor to Debian and write a justification for the request. Number of people benefited: The food grant provided lunch and dinner every day. The lunches included people who live in Belo Horizonte and the surrounding region, while the dinners were paid for participants who also received the accommodation and/or travel grant. The accommodation was at the Hotel BH Jaraguá. And the travel grants covered plane or bus tickets, or fuel (for those who came by car or motorcycle). A good part of the money to fund the grants came from the Debian Project, mainly for the travel costs. We sent a budget request to the then Debian leader, Jonathan Carter, and he promptly approved our request. In addition to this event budget, the leader also approved the individual requests sent by some DDs who preferred to ask him directly. The experience of offering the grants was really very good because it allowed several people from other cities to come.
Photo: MiniDebConf BH 2024 group
Photos and videos
You can watch the recordings of the talks at the links below: And see the photos taken by several collaborators at the links below:
Acknowledgements
We would like to thank all the participants, organizers, volunteers, sponsors, and supporters who contributed to the success of MiniDebConf Belo Horizonte 2024.
Photo: MiniDebConf BH 2024 group
Sponsors
Gold: Silver: Bronze:
Supporters
Organization

Reproducible Builds (diffoscope): diffoscope 267 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 267. This version includes the following changes:
[ Chris Lamb ]
* Include "xz --verbose --verbose" (ie. double --verbose) output, not just
  the single --verbose. (Closes: #1069329)
* Only include "xz --list" output if the xz has no other differences.
You can find out more by visiting the project homepage.
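For anyone who has not used the tool, a typical invocation looks like the following minimal sketch (the file names are hypothetical); when the compared .xz members differ, the report now carries the doubled --verbose output mentioned above:
# compare two xz-compressed tarballs and write a plain-text report
diffoscope --text report.txt old/foo.tar.xz new/foo.tar.xz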

16 May 2024

John Goerzen: Review of Reputable, Functional, and Secure Email Service

I last reviewed email services in 2019. That review focused a lot of attention on privacy. At the time, I selected mailbox.org as my provider, and have been using them for these 5 years since. However, both their service and their support have gone significantly downhill since, so it is time for me to look at other options. Here I am focusing strongly on email. Some of the providers mentioned here provide other services (IM, video calls, groupware, etc.), and to the extent they do, I am ignoring them.

What Matters in 2024
I want to start off by acknowledging that what you need in email probably depends on your circumstances and the country in which you live. For me, I begin by naming that the largest threat most of us face isn't from state actors but from criminals: hackers, ransomware gangs, etc. It is important to take as many steps as possible to secure one's account against that. Privacy and security are both part of the mix. I still value privacy but I am acknowledging, as Migadu does, that "Email as we know it and encryption are incompatible." Although some of these services strongly protect parts of the conversation, the reality is that most people will be emailing people using plain old email services which don't. For stronger security, something like Signal would be needed. (I wrote about Signal in 2021 also.) Interestingly, OpenPGP support seems to be something of a standard feature in the providers I reviewed by this point. All or almost all of them provide integration with browser-based encryption as well as server-side encryption if you prefer that. Although mailbox.org can automatically PGP-encrypt every message that arrives in plaintext, for general use, this is unwieldy; there isn't good tooling for searching mailboxes where every message is encrypted, etc. So I never enabled that feature at Mailbox. I still value security and privacy, but a pragmatic approach addresses the most pressing threats first.

My criteria
The basic requirements for an email service include:
  1. Ability to use my own domains
  2. Strong privacy policy
  3. Ability for me to use my own IMAP and SMTP clients on both desktop and mobile
  4. It must be extremely reliable
  5. It must not be free
  6. It must have excellent support for those rare occasions when it is needed
  7. Support for basic aliases
Why do I say it must not be free? Because if someone is providing a service with the quality I'm talking about here, and not charging for it, it implies something is fishy: either they are unscrupulous, are financially unstable, or the product is something else like ads. I am not aware of any provider that matches the other criteria with a free account anyhow. These providers range from about $30 to $90 per year, so cheaper than a Netflix subscription. Immediately, this rules out several options:
  • Proton doesn't let me use my own clients on mobile (their bridge is desktop-only)
  • Tuta also doesn't let me use my own clients
  • Posteo doesn't let me use my own domain
  • mxroute.com lacks a strong privacy policy, and its policy has numerous causes for concern (for instance, "If you repeatedly send email to invalid/unroutable recipients, they may be published on our GitHub")
I will have a bit more to say about a couple of these providers below. There are some additional criteria that are strongly desired but not absolutely required:
  1. Ability to set individual access passwords for every device/app
  2. Support for two-factor authentication (2FA/TFA/TOTP) for web-based access
  3. Support for basics in filtering: ability to filter on envelope recipient (so if I get BCC'd, I can still filter), and ability to execute more than one action on filter match (e.g., deliver to two folders, or deliver to a folder and forward to someone else)
IMAP and SMTP don't really support 2FA, so by setting individual passwords for every device, you can at least limit the blast radius and cut off a specific device if something is (or might be) compromised.

The candidates
I considered these providers: Startmail, Mailfence, Runbox, Fastmail, Kolab, Mailbox.org, and Migadu. I'll review each, and highlight the pricing of the plan I would most likely use. Each provider offers multiple plans; some may be more expensive and some may be cheaper than the one I reviewed. I included a link to each provider's full pricing information so you can compare for your needs. I set up trials with each of these (except Mailbox.org, with which I already had a paid account). It so happened that I had actual questions for support for each one, which gave me an opportunity to see how support responded. I did not fabricate questions, and would not have contacted support if I didn't have real ones. (This means that I asked different questions of each provider, because they were the REAL questions I had.) I'll jump to the spoiler right now: I eventually chose Migadu, with Fastmail and Mailfence as close seconds. I looked for providers myself, and also solicited recommendations in a Mastodon thread.

Mailbox.org
I begin with Mailbox, as it was my top choice in 2019 and the incumbent. Until this year, I had been quite happy with it. I had cause to reach their support less than once a year on average, and each time they replied the same day or next day. Now, however, they are failing on reliability and on support. Their spam filter has become overly aggressive. It has blocked quite a bit of legitimate mail. When contacting their support about a prior issue earlier this year, they initially took 4 days to reply, and then 6 days to reply after that. Ouch. They had me disable some spam settings. It didn't really help. I continue to lose mail. I don't know how much, because they block a lot of it before it even hits the spam folder. One of my friends texted to say mail was dropping. I raised a new ticket with Mailbox, which took them 5 days to reply to. Their reply was unhelpful. "As the Internet is not a static system, unforeseen events can always occur." Well yes, that's true, and I get it, false positives exist with email. But this was from an ISP's mail system with an address that had been established for years, and it was part of a larger pattern of rejecting quite a bit of legit mail. And every interaction with them recently hasn't resulted in them actually doing anything to resolve anything. It's just a paragraph or two of reply that does nothing and helps nothing. When I complained that it took 5 days to reply, they said "We have not been able to reply sooner as we are currently experiencing a high volume of customer enquiries." Even though their SLA for my account is a not-great 48-business-hour turnaround, they still missed it and their reason is "we're busy". I finally asked what RBL had caught the blocked email, since when I checked, the sender wasn't on any RBL. Mailbox's reply: they only keep their logs for 7 days, so next time I should contact them within 7 days. Which, of course, I DID; it was them that kept delaying. Ugh! It's like they've become a cable company. Even worse is how they have been blocking mail from GrapheneOS's discussion forum. See their thread about it. In short, Graphene's mail server has a clean reputation and Mailbox has no problem with it. But because one of Graphene's IPv6 webservers has an IPv6 allocation of a size Mailbox doesn't like, they drop mail. It's ridiculous, and Mailbox was dismissive of this well-known and well-regarded Open Source project. So if the likes of GrapheneOS can't get a good-faith effort to deliver their mail, what chance does an individual like me have? I'm sorry, but I'm literally paying you to deliver email for me and provide good support. If you can't do either of those, you don't get to push that problem down onto me. Hire appropriate staff. On the technical side, they support aliases, my own clients, and have a reasonable privacy policy. Their 2FA support exists for the web interface (though weirdly not the support site), though it is somewhat weird. They do not support app passwords. A somewhat unique feature is the @secure.mailbox.org domain. If you try to receive mail at that address, mailbox.org will block it unless it uses TLS. Same for sending. This isn't E2EE, but it does at least require things not be in plaintext for the last hop to Mailbox. Verdict: not recommended due to poor reliability and support. Mailbox.org summary:
  • Website: https://mailbox.org/en/
  • Reliability: iffy due to over-aggressive spam filtering
  • Support: Poor; takes 4-6 days for a reply and replies are unhelpful
  • Individual access passwords: No
  • 2FA: Yes, but with a PIN instead of a password as the other factor
  • Filtering: Full SIEVE feature set and GUI editor
  • Spam settings: greylisting on/off, reject some/all spam, etc. But they're insufficient to address Mailbox's overzealousness, which support says I cannot work around within the interface.
  • Server storage location: Germany
  • Plan as reviewed: standard [pricing link]
    • Cost per year: EUR 30 (about $33)
    • Mail storage included: 10GB
    • Limits on send/receive volume: none
    • Aliases: 50 on your domain name, 25 on mailbox.org
    • Additional mailboxes: Available; each one at the same fee as the primary mailbox

Startmail
I really wanted to like Startmail. Its vault is an interesting idea and should contribute to the security and privacy of an account. They clearly care about privacy. It falls down in filtering. They have no way to filter on envelope recipient (BCC or similar). Their support confirmed this to me and that's a showstopper. Startmail support was also as slow as Mailbox, taking 5 days to respond to me. Two showstoppers right there. Verdict: Not recommended due to slow support responsiveness and weak filtering. Startmail summary:
  • Website: https://www.startmail.com/
  • Reliability: Seems to be fine
  • Support: Mediocre; Took 5 days for a reply, but the reply was helpful
  • Individual app access passwords: Yes
  • 2FA: Yes
  • Filtering: Poor; cannot filter on envelope recipient, and can't build filters with multiple actions
  • Spam settings: None
  • Server storage location: The Netherlands
  • Plan as reviewed: Custom domain (trial was Personal), [pricing link]
    • Cost per year: $70
    • Mail storage included: 20GB
    • Limits on send/receive volume: none
    • Aliases: unlimited, with lots of features: can set expiration, etc.
    • Additional mailboxes: not available

Kolab
Kolab Now is mainly positioned as a full groupware service, but they do have an email-only option which I investigated. There isn't much documentation about it compared to other providers, and also not much in the way of settings. You can turn greylisting on or off. And... that's it. It has a full suite of filtering options. They set an X-Envelope-To header which you can use with the arbitrary header match to do the right thing even for BCC situations. Filters can have multiple conditions and multiple actions. It is SIEVE-based and you can download your SIEVE definitions. If you enable 2FA, you disable IMAP and SMTP; not great. Verdict: Not an impressive enough email featureset to justify going with it. Kolab Now summary:
  • Website: https://kolabnow.com/
  • Reliability: Seems to be fine
  • Support: Fine responsiveness (next day)
  • Individual app passwords: no
  • 2FA: Yes, but if you enable it, they disable IMAP and SMTP
  • Filtering: Excellent
  • Spam settings: Only greylisting on/off
  • Server storage location: Switzerland; they have lots of details on their setup
  • Plan as reviewed: Just email [pricing link]
    • Cost per year: CHF 60, about $66
    • Mail storage included: 5GB
    • Limitations on send/receive volume: None
    • Aliases: Yes. Not sure if there are limits.
    • Additional mailboxes: Yes if you set up a group account. Flexible pricing based on user count is not documented anywhere I could find.

Mailfence
Mailfence is another option, somewhat similar to Startmail but without the unique vault. I had some questions about filters, and support was quite responsive, responding in a couple of hours. Some of their copy on their website is a bit misleading, but support clarified when I asked them. They do not offer encryption at rest (like most of the entries here). Mailfence's filtering system is the kind I'd like to see. It allows multiple conditions and multiple actions for each rule, and has some unique actions as well (notify by SMS or XMPP). Support says that "Recipients" matches envelope recipients. However, one omission is that I can't match on arbitrary headers; only the canned list of headers they provide. They have only two spam settings:
  • spam filter on/off
  • whitelist
Given some recent complaints about their spam filter being overly aggressive, I find this lack of control somewhat concerning. (However, I discount complaints from people begging for more features in free accounts; free won't provide the kind of service I'm looking for with any provider.) There are generally just very few settings for email as well. Verdict: Responsive and helpful support; the filtering has the right structure but lacks arbitrary header match. Could be a good option. Mailfence summary:
  • Website: https://mailfence.com/
  • Reliability: Seems to be fine
  • Support: Excellent responsiveness and helpful replies (after some initial confusion about my question of greylisting)
  • Individual app access passwords: No. You can set a per-service password (e.g., an IMAP password), but those will be shared with all devices speaking that protocol.
  • 2FA: Yes
  • Filtering: Good; only misses the ability to filter on arbitrary headers
  • Spam settings: Very few
  • Server storage location: Belgium
  • Plan as reviewed: Entry [pricing link]
    • Cost per year: $42
    • Mail storage included: 10GB, with a maximum of 50,000 messages
    • Limits on send/receive volume: none
    • Aliases: 50. Aliases can't be deleted once created (there may be an exception to this for aliases on your own domain rather than mailfence.com)
    • Additional mailboxes: Their page on this is a bit confusing, and the pricing page lacks the information promised. It looks like you can pay the same $42/year for additional mailboxes, with a limit of up to 2 additional paid mailboxes and 2 additional free mailboxes tied to the account.

Runbox
This one came recommended in a Mastodon thread. I had some questions about it, and support response was fantastic: I heard from two people that were co-founders of the company! Even within hours, on a weekend. Incredible! This kind of response was only surpassed by Migadu. I initially wrote to Runbox with questions about the incoming and outgoing message limits, which I hadn't seen elsewhere, as well as the bandwidth limit. They said the bandwidth limit is no longer enforced on paid accounts. The incoming and outgoing limits are enforced, and all email (even spam) counts towards the limit. Notably the outgoing limit is per recipient, so if you send 10 messages to your 50-recipient family group, that's the limit. However, they also indicated a willingness to reset the limit if something happens. Unfortunately, hitting the limit results in a hard bounce (SMTP 5xx) rather than a temporary failure (SMTP 4xx), so it can result in lost mail. This means I'd be worried about some attack or other weirdness causing me to lose mail. Their filter is a pain point. Here are the challenges:
  • You can't directly match on a BCC recipient. Support advised to use a headers match, which will search for something anywhere in the headers. This works and is probably good enough since this data is in the Received: headers, but it is a little more imprecise.
  • They only have a "contains", not an "equals" operator. So, for instance, a pattern searching for "test@example.com" would also match "newtest@example.com". Support advised to put the email address in angle brackets to avoid this. That will work, mostly. Angle brackets aren't always required in headers.
  • There is no way to have multiple actions on the filter (there is just no way to file an incoming message into two folders). This was the ultimate showstopper for me.
Support advised they are planning to upgrade the filter system in the future, but these are the limitations today. Verdict: A good option if you don't need much from the filtering system. Lots of privacy emphasis. Runbox summary:
  • Website: https://runbox.com/
  • Reliability: Seems to be fine, except returning 5xx codes if per-day limits are exceeded
  • Support: Excellent responsiveness and replies from founders
  • Individual app passwords: Yes
  • 2FA: Yes
  • Filtering: Poor
  • Spam settings: Very few
  • Server storage location: Norway
  • Plan as reviewed: Mini [pricing link]
    • Cost per year: $35
    • Mail storage included: 10GB
    • Limits on send/receive volume: Receive 5000 messages/day, Send 500 recipients/day
    • Aliases: 100 on runbox.com; unlimited on your own domain
    • Additional mailboxes: $15/yr each, also with 10GB non-shared storage per mailbox

Fastmail
Fastmail came recommended to me by a friend I've known for decades. Here's the thing about Fastmail, compared to all the services listed above: It all just works. Everything. Filtering, spam prevention, it is all there, all feature-complete, and all just does the right thing as you'd hope. Their filtering system has a canned dropdown for "To/Cc/Bcc", it supports multiple conditions and multiple actions, and just does the right thing. (Delivering to multiple folders is a little cumbersome but possible.) It has a particularly strong feature set around administering multiple accounts, including things like whether users can prevent admins from reading their mail. The not-so-great part of the picture is around privacy. Fastmail is based in Australia, where the government has extensive power around spying on data, even to the point of forcing companies to add wiretap capabilities. Fastmail's privacy policy states user data may be held in Australia, USA, India, and Netherlands. By default, they share data with unidentified "spam companies", though you can disable this in settings. On the other hand, they do make a good effort towards privacy. I contacted support with some questions and got back a helpful response in three hours. However, one of the questions was about in which countries my particular data would be stored, and the support response said they would have to get back to me on that. It's been several days and no word back. Verdict: A featureful option that "just works", with a lot of features for managing family accounts and the like, but lacking in the privacy area. Fastmail summary:
  • Website: https://www.fastmail.com/
  • Reliability: Seems to be fine
  • Support: Good response time on most questions; dropped the ball on one that required research
  • Individual app access passwords: Yes
  • 2FA: Yes
  • Filtering: Excellent
  • Spam settings: Can set filter aggressiveness, decide whether to share spam data with spam-fighting companies , configure how to handle backscatter spam, and evaluate the personal learning filter.
  • Server storage locations: Australia, USA, India, and The Netherlands. Legal jurisdiction is Australia.
  • Plan as reviewed: Individual [pricing link]
    • Cost per year: $60
    • Mail storage included: 50GB
    • Limits on send/receive volume: 300/hour
    • Aliases: Unlimited from what I can see
    • Additional mailboxes: No; requires a different plan for that

Migadu
Migadu was a service I'd never heard of, but it came recommended to me on Mastodon. I listed Migadu last because it is in a class of its own compared to all the other options. Every other service is basically a webmail interface with a few extra settings tacked on. Migadu has a full-featured email admin console in addition. By that I mean you can:
  • View usage graphs (incoming, outgoing, storage) over time
  • Manage DNS (if you want Migadu to run your nameservers)
  • Manage multiple domains, and cross-domain relationships with mailboxes
  • View a limited set of logs
  • Configure accounts, reset their passwords if needed/authorized, etc.
  • Configure email address rewrite rules with wildcards and so forth
Basically, if you were the sort of person that ran your own mail servers back in the day, here is Migadu giving you most of that functionality. Effectively you have a web interface to do all the useful stuff, and they handle the boring and annoying bits. This is a really attractive model. Migadu support has been fantastic. They are quick to respond, and went above and beyond. I pointed out that their X-Envelope-To header, which is needed for filtering by BCC, wasn't being added on emails I sent myself. They replied 5 hours later indicating they had added the feature to add X-Envelope-To even for internal mails! Wow! I am impressed. With Migadu, you buy a pool of resources: storage space and incoming/outgoing traffic. What you do within that pool is up to you. You can set up users ("mailboxes"), aliases, domains, whatever you like. It all just shares the pool. You can restrict users further so that an individual user has access to only a subset of the pool resources. I was initially concerned about Migadu's daily send/receive message count limits, but in visiting with support and reading the documentation, what really comes out is that Migadu is a service with a personal touch. Hitting the incoming traffic limit will cause an SMTP temporary fail (4xx) response, so you won't lose legit mail, and support will work with you if it's a problem for legit uses. In other words, restrictions are soft and they are interpreted reasonably. One interesting thing about Migadu is that they do not offer accounts under their domain. That is, you MUST bring your own domain. That's pretty easy and cheap, of course. It also puts you in a position of power, because it is easy to migrate email from one provider to another if you own the domain. Filtering is done via SIEVE. There is a GUI editor which lets you accomplish most things, though it has an odd blind spot where you can't file a message into multiple folders. However, you can edit a SIEVE ruleset directly and you get the full SIEVE featureset, which is extensive (and does support filing a message into multiple folders). I note that the SIEVE :envelope match doesn't work, but Migadu adds an X-Envelope-To header which is just as good. I particularly love a company that tells you all the reasons you might not want to use them. Migadu's pro/con list is an honest drawbacks list (of course, their homepage highlights all the features!). Verdict: Fantastically powerful, excellent support, and good privacy. I chose this one. Migadu summary:
  • Website: https://migadu.com/
  • Reliability: Excellent
  • Support: Fantastic. Good response times and they added a feature (or fixed a bug?) a few hours after I requested it.
  • Individual access passwords: Yes. Create identities to support them.
  • 2FA: Yes, on both the admin interface and the webmail interface
  • Filtering: Excellent, based on SIEVE. GUI editor doesn t support multiple actions when filing into a folder, but full SIEVE functionality is exposed.
  • Spam settings:
    • On the domain level, filter aggressiveness, Greylisting on/off, black and white lists
    • On the mailbox level, filter aggressiveness, black and whitelists, action to take with spam; compatible with filters.
  • Server storage location: France; legal jurisdiction Switzerland
  • Plan as reviewed: mini [pricing link]
    • Cost per year: $90
    • Mail storage included: 30GB (soft quota)
    • Limits on send/receive volume: 1000 messages in/day, 100 messages out/day (soft quotas)
    • Aliases: Unlimited on an unlimited number of domains
    • Additional mailboxes: Unlimited and free; uses pooled quotas, but individual quotas can be set

Others
Here are a few others that I didn t think worthy of getting a trial:
  • mxroute was recommended by several. Lots of concerning things in their policy, such as:
    • if you repeatedly send mail to unroutable recipients, they may publish the addresses on Github
    • they will terminate your account if they think you are rude or want to contest a charge
    • they reserve the right to cancel your service at any time for any (or no) reason.
  • Proton keeps coming up, and I will not consider it so long as I am locked into their client on mobile.
  • Skiff comes up sometimes, but they were acquired by Norton.
  • Disroot comes up; this discussion highlights a number of reasons why I avoid them. Their Terms of Service (ToS) is inconsistent with a general-purpose email account (I guess for targeting nonprofits and activists, that could make sense). Particularly laughable is that they claim to be friends of Open Source, but then would take down your account if you upload copyrighted material. News flash: in order for an Open Source license to be meaningful, the underlying work is copyrighted. It is perfectly legal to upload copyrighted material when you wrote it or have the license to do so!

Conclusions
There are a lot of good options for email hosting today, and in particular I appreciate the excellent personal support from companies like Migadu and Runbox. Support small businesses!

15 May 2024

Evgeni Golov: Using HPONCFG on CentOS Stream 9 with OpenSSL 3.2

Today I've updated an HPE ProLiant DL325 G10 from CentOS Stream 8 to CentOS Stream 9 (details on that to follow) and realized that hponcfg was broken afterwards. As I do not have a support contract with HPE, I couldn't just yell at them in private, so I am doing this in public now ;-)
# hponcfg
HPE Lights-Out Online Configuration utility
Version 5.6.0 Date 11/30/2020 (c) 2005,2020 Hewlett Packard Enterprise Development LP
Error: Unable to locate SSL library.
       Install latest SSL library to use HPONCFG.
Welp, what the heck? But wait, 5.6.0 from 2020 looks old, let's update this first! hponcfg is part of the "Management Component Pack" (at least if you're not running RHEL or SLES where you get it via the "Service Pack for ProLiant" which requires a support contract) and can be downloaded from the Software Delivery Repository. The Software Delivery Repository tells you to configure it in /etc/yum.repos.d/mcp.repo as
[mcp]
name=Management Component Pack
baseurl=http://downloads.linux.hpe.com/repo/mcp/dist/dist_ver/arch/project_ver
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/GPG-KEY-mcp
gpgcheck=0? Suuure! Plain HTTP? Suuure! But it gets better! When you look at https://downloads.linux.hpe.com/repo/mcp/centos/ (you have to substitute dist with your distribution!) you'll see that there is no 9 folder and thus no packages for CentOS (Stream) 9. There are however folders for Oracle, Rocky and Alma. Phew. Let's take one of these!
[mcp]
name=Management Component Pack
baseurl=https://downloads.linux.hpe.com/repo/mcp/rocky/9/x86_64/current/
enabled=1
gpgcheck=1
gpgkey=https://downloads.linux.hpe.com/repo/mcp/GPG-KEY-mcp
dnf upgrade hponcfg updates it to hponcfg-6.0.0-0.x86_64 and:
# hponcfg
HPE Lights-Out Online Configuration utility
Version 6.0.0 Date 10/30/2022 (c) 2005,2022 Hewlett Packard Enterprise Development LP
Error: Unable to locate SSL library.
       Install latest SSL library to use HPONCFG.
Fuck. ldd doesn't show hponcfg being linked to libssl, do they dlopen() at runtime and fucked something up? ltrace to the rescue!
# ltrace hponcfg
 
popen("strings /bin/openssl   grep 'Ope"..., "r")            = 0x621700
fgets("OpenSSL 3.2.1 30 Jan 2024\n", 256, 0x621700)          = 0x7ffd870e2e10
strstr("OpenSSL 3.2.1 30 Jan 2024\n", "OpenSSL 3.0")         = nil
 
WAT? They run strings /bin/openssl | grep 'OpenSSL' and compare the result with "OpenSSL 3.0"?! Sure, OpenSSL 3.2 in EL9 is rather fresh and didn't hit RHEL/Oracle/Alma/Rocky yet, but surely there are better ways to check for a compatible version of OpenSSL than THIS?! Anyway, I am not going to downgrade my OpenSSL. Neither will I patch it to pretend to be 3.0. But I can patch the hponcfg binary!
# vim /sbin/hponcfg
<go to line 146>
<replace 3.0 with 3.2>
:x
Yes, I used vim. Yes, it works. No, I won't guarantee this won't kill a kitten somewhere.
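For the record, the same unsupported hack can be scripted instead of done in an editor. This is just a sketch of the vim edit above expressed as shell commands, and it only works because the replacement string has exactly the same length as the original, so no offsets inside the binary shift:
# work on a copy rather than the installed binary
cp /sbin/hponcfg ./hponcfg
# swap the version string the binary greps for; same length, so the layout is preserved
sed -i 's/OpenSSL 3\.0/OpenSSL 3.2/' ./hponcfg
chmod +x ./hponcfg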
# ./hponcfg
HPE Lights-Out Online Configuration utility
Version 6.0.0 Date 10/30/2022 (c) 2005,2022 Hewlett Packard Enterprise Development LP
Firmware Revision = 2.44 Device type = iLO 5 Driver name = hpilo
USAGE:
  hponcfg  -?
  hponcfg  -h
  hponcfg  -m minFw
  hponcfg  -r [-m minFw] [-u username] [-p password]
  hponcfg  -b [-m minFw] [-u username] [-p password]
  hponcfg  [-a] -w filename [-m minFw] [-u username] [-p password]
  hponcfg  -g [-m minFw] [-u username] [-p password]
  hponcfg  -f filename [-l filename] [-s namevaluepair] [-v] [-m minFw] [-u username] [-p password]
  hponcfg  -i [-l filename] [-s namevaluepair] [-v] [-m minFw] [-u username] [-p password]
  -h,  --help           Display this message
  -?                    Display this message
  -r,  --reset          Reset the Management Processor to factory defaults
  -b,  --reboot         Reboot Management Processor without changing any setting
  -f,  --file           Get/Set Management Processor configuration from "filename"
  -i,  --input          Get/Set Management Processor configuration from the XML input
                        received through the standard input stream.
  -w,  --writeconfig    Write the Management Processor configuration to "filename"
  -a,  --all            Capture complete Management Processor configuration to the file.
                        This should be used along with '-w' option
  -l,  --log            Log replies to "filename"
  -v,  --xmlverbose     Display all the responses from Management Processor
  -s,  --substitute     Substitute variables present in input config file
                        with values specified in "namevaluepairs"
  -g,  --get_hostinfo   Get the Host information
  -m,  --minfwlevel     Minimum firmware level
  -u,  --username       iLO Username
  -p,  --password       iLO Password
For comparison, here is the diff --text output:
# diff -u --text /sbin/hponcfg ./hponcfg
--- /sbin/hponcfg   2022-08-02 01:07:55.000000000 +0000
+++ ./hponcfg   2024-05-15 09:06:54.373121233 +0000
@@ -143,7 +143,7 @@
 helpget_hostinforesetwriteconfigallfileinputlogminfwlevelxmlverbosesubstitutetimeoutdbgverbosityrebootusernamepasswordlibpath%Ah*Ag7Ar=AwIAaMAfRAiXAl\AmgAvrAs At Ad Ab Au Ap Azhgrbaw:f:il:m:vs:t:d:z:u:p:tmpXMLinputFile%2d.xmlw+Error: Syntax Error - Invalid options present.
 =O@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@ M@ M@aQ@ M@aQ@ N@ M@ N@ P@aQ@aQ@ M@ M@aQ@aQ@LN@aQ@ M@ O@ M@ M@ M@ M@aQ@aQ@ M@<!----><LOGINUSER_LOGINPASSWORD<LOGIN USER_LOGIN="%s" PASSWORD="%s"ERROR: LOGIN tag is missing.
 >ERROR: LOGIN end tag is missing.
-strings    grep 'OpenSSL 1'   grep 'OpenSSL 3'OpenSSL 1.0OpenSSL 1.1OpenSSL 3.0which openssl 2>&1/usr/bin/opensslOpenSSL location - %s
+strings    grep 'OpenSSL 1'   grep 'OpenSSL 3'OpenSSL 1.0OpenSSL 1.1OpenSSL 3.2which openssl 2>&1/usr/bin/opensslOpenSSL location - %s
 Current version %s
 No response from command.
Pretty sure it won't apply like this with patch, but you get the idea. And yes, double-giggles for the fact that the error message says "Install latest SSL library to use HPONCFG" and the issue is that I have the latest SSL library installed.

14 May 2024

Dirk Eddelbuettel: RApiSerialize 0.1.3 on CRAN: Skipping XDR

A new bug fix release 0.1.3 of RApiSerialize got onto CRAN earlier today. This is the first release in well over a year, and it permits skipping the XDR serialization format, which is needed when transferring between big- and little-endian machines but comes at a certain run-time cost one can avoid on the (much more common) little-endian machines. This is a new option, and the old behavior is the default. Those who want to can now skip the step. The RApiSerialize package is used both by my RcppRedis package and by Travers' excellent qs package. We also addressed the recent nag by CRAN concerning "NO_REMAP".

Changes in version 0.1.3 (2024-05-13)
  • Add an xdr argument to disable XDR for an approx. threefold speed increase (Travers Ching and Dirk in #6)
  • Use R_NO_REMAP and Rf_* prefix for API calls
  • Minor continuous integration updates

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More details are at the RApiSerialize page; code, issue tickets etc. are at the GitHub repository. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Evgeni Golov: Using Packit to build RPMs for projects that depend on or vendor your code

I am a huge fan of Packit as it allows us to provide RPMs to our users and testers directly from a pull-request, thus massively tightening the feedback loop and involving people who otherwise might not be able to apply the changes (for whatever reason) and "quickly test" something out. It's also a great way to validate that a change actually builds in a production environment, where no unnecessary development and test dependencies are installed. You can also run tests of the built packages on Testing Farm and automate pushing releases into Fedora/CentOS Stream, but this is neither a (plain) Packit advertisement post, nor is that functionality something I can talk about with a certain level of experience. Adam recently asked why we don't have Packit builds for our Puppet modules and my first answer was: "well, puppet-* doesn't produce a thing we ship directly, so nobody dared to do it". My second answer was that I had blogged how to test a Puppet module PR with Packit, but I totally agree that the process was a tad cumbersome and could be improved. Now some madman did it and we all get to hear his story! ;-)
What is the problem anyway?
The Foreman Installer is a bit of Ruby code1 that provides a CLI to puppet apply based on a set of Puppet modules. As the Puppet modules can also be used outside the installer and have their own lifecycle, they live in separate git repositories and their releases get uploaded to the Puppet Forge. Users however do not want to (and should not have to) install the modules themselves. So we have to ship the modules inside the foreman-installer package. Packaging 25 modules for two packaging systems (we support Enterprise Linux and Debian/Ubuntu) seems like a lot of work. Especially if you consider that the main foreman-installer package would need to be rebuilt after each module change as it contains generated files based on the modules which are too expensive to generate at runtime. So we can ship the modules inside the foreman-installer source release, thus vendoring those modules into the installer release. To do so we use librarian-puppet with a Puppetfile and either a Puppetfile.lock for stable releases or by letting librarian-puppet fetch latest for nightly snapshots. This works beautifully for changes that land in the development and release branches of our repositories - regardless of whether it's foreman-installer.git or any of the puppet-*.git ones. It also works nicely for pull-requests against foreman-installer.git. But because the puppet-* repositories do not map to packages, we assumed it wouldn't work well for pull-requests against those.
How can we solve this?
Well, the "obvious" solution is to build the foreman-installer package via Packit also for pull-requests against the puppet-* repositories. However, as usual, the devil is in the details. Packit by default clones the repository of the pull-request and tries to create a source tarball from that using git archive. As this might be too simple for many projects, one can define a custom create-archive action that runs after the pull-request has been cloned and produces the tarball instead. We already use that in the Packit configuration for foreman-installer to run the pkg:generate_source rake target which executes librarian-puppet for us. But now the pull-request is against one of the Puppet modules, so Packit will clone that, not the installer. We gotta clone foreman-installer on our own. And then point librarian-puppet at the pull-request. Fun.
Cloning is relatively simple: call git clone -- sorry, Packit/Copr infrastructure. But the Puppet module pull-request? One can use :git => 'https://git.example.com/repo.git' in the Puppetfile to fetch a git repository. In fact, that's what we already do for our nightly snapshots. It also supports :ref => 'some_branch_or_tag_name', if the remote HEAD is not what you want. My brain first went "I know this! GitHub has this magic refs/pull/1/head and refs/pull/1/merge refs you can check out to get the contents of the pull-request without bothering to add a remote for the source of the pull-request". Well, this requires knowing the ID of the pull-request, and Packit does not expose that in the environment variables available during create-archive. Wait, but we already have a checkout. Can we just say :git => '../.git'? Cloning a .git folder is totally possible after all.
[Librarian]     --> fatal: repository '../.git' does not exist
Could not checkout ../.git: fatal: repository '../.git' does not exist
Seems librarian disagrees. Damn. (Yes, I checked, the path exists.) Does it maybe just not like relative paths?! Yepp, using an absolute path absolutely works! For some reason it ends up checking out the default HEAD of the "real" (GitHub) remote, not of ../. Luckily this can be fixed by explicitly passing :ref => 'origin/HEAD', which resolves to the branch Packit created for the pull-request. Now we just need to put all of that together and remember to execute all commands from inside the foreman-installer checkout as that is where all our vendoring recipes etc. live.
Putting it all together
Let's look at the diff between the packit.yaml for foreman-installer and the one I've proposed for puppet-pulpcore:
--- a/foreman-installer/.packit.yaml    2024-05-14 21:45:26.545260798 +0200
+++ b/puppet-pulpcore/.packit.yaml  2024-05-14 21:44:47.834162418 +0200
@@ -18,13 +18,15 @@
 actions:
   post-upstream-clone:
     - "wget https://raw.githubusercontent.com/theforeman/foreman-packaging/rpm/develop/packages/foreman/foreman-installer/foreman-installer.spec -O foreman-installer.spec"
+    - "git clone https://github.com/theforeman/foreman-installer"
+    - "sed -i '/theforeman.pulpcore/ s@:git.*@:git => \"# __dir__ /../.git\", :ref => \"origin/HEAD\"@' foreman-installer/Puppetfile"
   get-current-version:
-    - "sed 's/-develop//' VERSION"
+    - "sed 's/-develop//' foreman-installer/VERSION"
   create-archive:
-    - bundle config set --local path vendor/bundle
-    - bundle config set --local without development:test
-    - bundle install
-    - bundle exec rake pkg:generate_source
+    - bash -c "cd foreman-installer && bundle config set --local path vendor/bundle"
+    - bash -c "cd foreman-installer && bundle config set --local without development:test"
+    - bash -c "cd foreman-installer && bundle install"
+    - bash -c "cd foreman-installer && bundle exec rake pkg:generate_source"
  1. It clones foreman-installer (in post-upstream-clone, as that felt more natural after some thinking)
  2. It adjusts the Puppetfile to use #{__dir__}/../.git as the Git repository, abusing the fact that a Puppetfile is really just a Ruby script (sorry Ben!) and knows the __dir__ it lives in
  3. It fetches the version from the foreman-installer checkout, so it's sort-of reasonable
  4. It performs all building inside the foreman-installer checkout (a local, non-Packit equivalent of these steps is sketched below)
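If you want to reproduce the same flow locally, outside of Packit, the four steps above roughly translate to the following sketch. It assumes your current directory is a checkout of the Puppet module under test; the sed expression matches the puppet-pulpcore example from the diff and would need adjusting for a different module:
# run from within a checkout of the Puppet module whose change you want to test
git clone https://github.com/theforeman/foreman-installer
# point the installer's Puppetfile at the local module checkout, one directory up
sed -i '/theforeman.pulpcore/ s@:git.*@:git => "#{__dir__}/../.git", :ref => "origin/HEAD"@' foreman-installer/Puppetfile
cd foreman-installer
bundle config set --local path vendor/bundle
bundle config set --local without development:test
bundle install
bundle exec rake pkg:generate_source   # produces the vendored source tarball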
Can this be used in other scenarios? I hope so! Vendoring is not unheard of. And testing your "consumers" (dependents? naming is hard) is good style anyway!

  1. three Ruby modules in a trench coat, so to say

Julian Andres Klode: The new APT 3.0 solver

APT 2.9.3 introduces the first iteration of the new solver codenamed solver3, now available with the --solver 3.0 option. The new solver works fundamentally differently from the old one.

How does it work?
Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies. Deferring the choices is implemented in multiple ways: First, all install requests recursively mark dependencies with a single solution for install, and any packages that are being rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their "or" group cannot be solved by a different package. Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a | b before a | b | c. Not just by the number of solutions, though: one important point to note is that optional dependencies, that is, Recommends, always sort after mandatory dependencies. Do note that Recommended packages do not nest in backtracking - dependencies of a Recommended package themselves are not optional, so they will have to be resolved before the next Recommended package is seen in the queue. Another important step in deferring choices is extracting the common dependencies of a package across its versions and then installing them before we even decide which of its versions we want to install - one of the dependencies might cycle back to a specific version after all. Decisions about packages are recorded at a certain decision level; if we reach a conflict we backtrack to the previous decision level, mark the decision we made (install X) in the inverse (DO NOT INSTALL X), reset all the state of all decisions made at the higher level, and restore any dependencies that are no longer resolved to the work queue.

Comparison to SAT solver design
If you have studied SAT solver design, you'll find that essentially this is a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: first, negative pure literals (packages that everything conflicts with) do not exist, and positive pure literals (packages nothing conflicts with) we do not want to mark for install - we want to install as little as possible (well, subject to policy). As part of the solving phase, we also construct an implication graph, albeit a partial one: the first package installing another package is marked as the reason (A -> B), the same thing for conflicts (not A -> not B). Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning, where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.

What changes can you expect in behavior?
The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around; it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones, that is, if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that and remove the other. Implementing that policy is rather trivial: we just need to queue the obsolete package's replacement as a dependency to solve, rather than mark the obsolete package for install. Another critical difference is the change in the autoremove behavior: the new solver currently only knows the strongest dependency chain to each package, and hence it will not keep around any packages that are only reachable via weaker chains. A common example is when gcc-<version> packages accumulate on your system over the years. They all have Provides: c-compiler and the libtool Depends: gcc | c-compiler is enough to keep them around.

New features
The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0 dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there, but try to satisfy from the normal release as much as possible. The implication graph building allows us to implement an apt why command that, while not as nicely detailed as aptitude's, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first of course, since that is what we record.
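As a minimal sketch of how the pieces combine (foo is just the placeholder package name from the paragraph above), a dry run on APT 2.9.3 or later would look like this:
# simulate installing foo 2.0 with the new solver, allowing non-candidate versions
apt install foo=2.0 --no-strict-pinning --solver 3.0 -s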

What is left to do?
At the moment, error information is not stored across backtracking in any way, but we generally will want to show you the first conflict we reach as it is the most natural one; or all conflicts. Currently you get the last conflict which may not be particularly useful. Likewise, errors currently are just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely. The test suite is not passing yet; I haven't really started working on it. A challenge is that most packages in the test suite are manually installed as they are mocked, and the solver now doesn't remove those. We plan to implement the replacement logic such that foo can be replaced by foo2 Conflicts/Replaces/Provides foo without needing to be automatically installed. Improving the backtracking to be non-chronological conflict-driven clause learning would vastly enhance our backtracking performance. Not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades). A lot of that complexity you have normally is not there because the manually installed packages and resulting unit propagation (single-solution Depends/Reverse-Depends for Conflicts) already ground us fairly far in what changes we can actually make. Once all the stuff has landed, we need to start rolling it out and gather feedback. On Ubuntu I'd like automated feedback on regressions (running solver3 in parallel, checking if result is worse and then submitting an error to the error tracker), on Debian this could just be a role email address to send solver dumps to. At the same time, we can also incrementally start rolling this out. Like phased updates in Ubuntu, we can also roll out the new solver as the default to 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.

Matthew Palmer: "Is This Project Still Maintained?"

If you wander around a lot of open source repositories on the likes of GitHub, you'll invariably stumble over repos that have an issue (or more than one!) with a title like the above. Sometimes sitting open and unloved, often with a comment or two from the maintainer and a bunch of "I'll help out!" followups that never seemed to pan out. Very rarely, you'll find one that has been closed, with a happy ending. These issues always fascinate me, because they say a lot about what it means to maintain an open source project, the nature of succession (particularly in a post-Jia Tan world), and the expectations of users and the impedance mismatch between maintainers, contributors, and users. I've also recently been thinking about pre-empting this sort of issue, and opening my own issue that answers the question before it's even asked.

Why These Issues Are Created
As both a producer and consumer of open source software, I completely understand the reasons someone might want to know whether a project is abandoned. It's comforting to be able to believe that there's "someone on the other end of the line", and that if you have a problem, you can ask for help with a non-zero chance of someone answering you. There's also a better chance that, if the maintainer is still interested in the software, compatibility issues and at least show-stopper bugs might get fixed for you. But often there's more at play. There is a delusion that "maintained" open source software comes with entitlements: an expectation that your questions, bug reports, and feature requests will be attended to in some fashion. This comes about, I think, in part because there are a lot of open source projects that are energetically supported, where generous volunteers do answer questions, fix reported bugs, and implement things that they don't personally need, but which random Internet strangers ask for. If you've had that kind of user experience, it's not surprising that you might start to expect it from all open source projects. Of course, these wonders of cooperative collaboration are the exception, rather than the rule. In many (most?) cases, there is little practical difference between most projects that are "maintained" and those that are formally declared "unmaintained". The contributors (or, most often, contributor singular) are unlikely to have the time or inclination to respond to your questions in a timely and effective manner. If you find a problem with the software, you're going to be paddling your own canoe, even if the maintainer swears that they're still maintaining it.

A Thought Appears
With this in mind, I've been considering how to get ahead of the problem and answer the question for the software projects I've put out in the world. Nothing I've built has anything like what you'd call a "community"; most have never seen an external PR, or even an issue. The last commit date on them might be years ago. By most measures, almost all of my repos look "unmaintained". Yet, they don't feel unmaintained to me. I'm still using the code, sometimes as often as every day, and if something broke for me, I'd fix it. Anyone who needs the functionality I've developed can use the code, and be pretty confident that it'll do what it says in the README. I'm considering creating an issue in all my repos, titled "Is This Project Still Maintained?", pinning it to the issues list, and pasting in something I'm starting to think of as "The Open Source Maintainer's Manifesto". It goes something like this:

Is This Project Still Maintained?
Yes. Maybe. Actually, perhaps no. Well, really, it depends on what you mean by "maintained". I wrote the software in this repo for my own benefit to solve the problems I had, when I had them. While I could have kept the software to myself, I instead released it publicly, under the terms of an open licence, with the hope that it might be useful to others, but with no guarantees of any kind. Thanks to the generosity of others, it costs me literally nothing for you to use, modify, and redistribute this project, so have at it!

OK, Whatever. What About Maintenance?
In one sense, this software is "maintained", and always will be. I fix the bugs that annoy me, I upgrade dependencies when not doing so causes me problems, and I add features that I need. To the degree that any on-going development is happening, it's because I want that development to happen. However, if "maintained" to you means responses to questions, bug fixes, upgrades, or new features, you may be somewhat disappointed. That's not "maintenance", that's "support", and if you expect support, you'll probably want to have a "support contract", where we come to an agreement where you pay me money, and I help you with the things you need help with.

That Doesn't Sound Fair!
If it makes you feel better, there are several things you are entitled to:
  1. The ability to use, study, modify, and redistribute the contents of this repository, under the terms stated in the applicable licence(s).
  2. That any interactions you may have with myself, other contributors, and anyone else in this project's spaces will be in line with the published Code of Conduct, and any transgressions of the Code of Conduct will be dealt with appropriately.
  3. actually, that's it.
Things that you are not entitled to include an answer to your question, a fix for your bug, an implementation of your feature request, or a merge (or even review) of your pull request. Sometimes I may respond, either immediately or at some time long afterwards. You may luck out, and I'll think "hmm, yeah, that's an interesting thing" and I'll work on it, but if I do that in any particular instance, it does not create an entitlement that I will continue to do so, or that I will ever do so again in the future.

But I've Found a Huge and Terrible Bug!
You have my full and complete sympathy. It's reasonable to assume that I haven't come across the same bug, or at least that it doesn't bother me, otherwise I'd have fixed it for myself. Feel free to report it, if only to warn other people that there is a huge bug they might need to avoid (possibly by not using the software at all). Well-written bug reports are great contributions, and I appreciate the effort you've put in, but the work that you've done on your bug report still doesn't create any entitlement on me to fix it. If you really want that bug fixed, the source is available, and the licence gives you the right to modify it as you see fit. I encourage you to dig in and fix the bug. If you don't have the necessary skills to do so yourself, you can get someone else to fix it; everyone has the same entitlements to use, study, modify, and redistribute as you do. You may also decide to pay me for a support contract, and get the bug fixed that way. That gets the bug fixed for everyone, and gives you the bonus warm fuzzies of contributing to the digital commons, which is always nice.

But My PR is a Gift!
If you take the time and effort to make a PR, you're doing good work and I commend you for it. However, that doesn't mean I'll necessarily merge it into this repository, or even work with you to get it into a state suitable for merging. A PR is what is often called a "gift of work". I'll have to make sure that, at the very least, it doesn't make anything actively worse. That includes introducing bugs, or causing maintenance headaches in the future (which includes my getting irrationally angry at indenting, because I'm like that). Properly reviewing a PR takes me at least as much time as it would take me to write it from scratch, in almost all cases. So, if your PR languishes, it might not be that it's bad, or that the project is (dum dum dummmm!) "unmaintained", but just that I don't accept this particular gift of work at this particular time. Don't forget that the terms of licence include permission to redistribute modified versions of the code I've released. If you think your PR is all that and a bag of potato chips, fork away! I won't be offended if you decide to release a permanent fork of this software, as long as you comply with the terms of the licence(s) involved. (Note that I do not undertake support contracts solely to review and merge PRs; that reeks a little too much of "pay to play" for my liking)

Gee, You Sound Like an Asshole
I prefer to think of myself as "forthright" and "plain-speaking", but that brings to mind that third thing you're entitled to: your opinion. I've written this out because I feel like clarifying the reality we're living in, in the hope that it prevents misunderstandings. If what I've written makes you not want to use the software I've written, that's fine; you've probably avoided future disappointment.

Opinions Sought
What do you think? Too harsh? Too wishy-washy? Comment away!

Freexian Collaborators: Monthly report about Debian Long Term Support, April 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In April, 19 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 0.5h (out of 0.0h assigned and 14.0h from previous period), thus carrying over 13.5h to the next month.
  • Adrian Bunk did 35.75h (out of 17.25h assigned and 40.5h from previous period), thus carrying over 22.0h to the next month.
  • Bastien Roucariès did 25.0h (out of 25.0h assigned).
  • Ben Hutchings did 24.0h (out of 9.0h assigned and 15.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 10.0h (out of 10.0h assigned).
  • Emilio Pozuelo Monfort did 46.0h (out of 12.0h assigned and 34.0h from previous period).
  • Guilhem Moulin did 14.75h (out of 20.0h assigned), thus carrying over 5.25h to the next month.
  • Lee Garrett did 51.25h (out of 0.0h assigned and 60.0h from previous period), thus carrying over 8.75h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 22.5h (out of 19.5h assigned and 4.5h from previous period), thus carrying over 1.5h to the next month.
  • Roberto C. Sánchez did 11.0h (out of 9.25h assigned and 2.75h from previous period), thus carrying over 1.0h to the next month.
  • Santiago Ruano Rincón did 20.0h (out of 20.0h assigned).
  • Sean Whitton did 9.5h (out of 4.5h assigned and 5.5h from previous period), thus carrying over 0.5h to the next month.
  • Stefano Rivera did 1.5h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 8.5h to the next month.
  • Sylvain Beucler did 12.5h (out of 22.75h assigned and 35.0h from previous period), thus carrying over 45.25h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 10.0h (out of 12.0h assigned), thus carrying over 2.0h to the next month.
  • Utkarsh Gupta did 3.25h (out of 28.5h assigned and 29.25h from previous period), thus carrying over 54.5h to the next month.

Evolution of the situation In April, we released 28 DLAs. During the month of April, there was one particularly notable security update made in LTS. Guilhem Moulin prepared DLA-3782-1 for util-linux (part of the set of base packages, containing a number of important system utilities) to address a possible information disclosure vulnerability. Additionally, several contributors prepared updates for oldstable (bullseye), stable (bookworm), and unstable (sid), including:
  • ruby-rack: prepared for oldstable, stable, and unstable by Adrian Bunk
  • wpa: prepared for oldstable, stable, and unstable by Bastien Roucariès
  • zookeeper: prepared for stable by Bastien Roucariès
  • libjson-smart: prepared for unstable by Bastien Roucariès
  • ansible: prepared for stable and unstable, including autopkgtest fixes to increase future supportability, by Lee Garrett
  • wordpress: prepared for oldstable and stable by Markus Koschany
  • emacs and org-mode: prepared for oldstable and stable by Sean Whitton
  • qtbase-opensource-src: prepared for oldstable and stable by Thorsten Alteholz
  • libjwt: prepared for oldstable by Thorsten Alteholz
  • libmicrohttpd: prepared for oldstable by Thorsten Alteholz
These fixes were in addition to corresponding updates in LTS. Another item to highlight in this month's report is an update to the distro-info-data database by Stefano Rivera. This update ensures that Debian buster systems have the latest available end-of-life dates and other related information for all releases of Debian and Ubuntu. As announced on the debian-lts-announce mailing list, it is worth pointing out that we are getting close to the end of support of Debian 10 as LTS. After June 30th, no new security updates will be made available on security.debian.org. However, Freexian and its team of paid Debian contributors will continue to maintain Debian 10 going forward for the customers of the Extended LTS offer. If you still have Debian 10 servers to keep secure, it's time to subscribe!
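As a rough illustration of where that data ends up (this is not part of the report itself), the distro-info-data database is what the distro-info tools and libraries consult when asked which releases are still supported. The sketch below uses the Python bindings shipped as python3-distro-info; the class and method names reflect my understanding of that package's interface and should be treated as assumptions rather than a definitive reference.

    # Sketch (assumed API): reading release information backed by the
    # distro-info-data database, via the python3-distro-info bindings.
    from distro_info import DebianDistroInfo, UbuntuDistroInfo

    debian = DebianDistroInfo()
    print(debian.supported())  # codenames of Debian releases still within support
    print(debian.stable())     # codename of the current stable release

    ubuntu = UbuntuDistroInfo()
    print(ubuntu.lts())        # codename of the most recent Ubuntu LTS release

Nothing in this sketch is specific to the April update; it simply shows the kind of consumer that only gives correct answers when distro-info-data itself is kept current, which is why backporting the refreshed database to buster matters.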

Thanks to our sponsors Sponsors that joined recently are in bold.
