Search Results: "seb"

9 January 2024

Louis-Philippe Véronneau: 2023 A Musical Retrospective

I ended 2022 with a musical retrospective and very much enjoyed writing that blog post. As such, I have decided to do the same for 2023! From now on, this will probably be an annual thing :)

Albums In 2023, I added 73 new albums to my collection, about 1.4 albums a week! I listed them below in the order in which I acquired them. I purchased most of these albums when I could and borrowed the rest at libraries. If you want to browse through, I added links to the album covers pointing either to websites where you can buy them or to Discogs when digital copies weren't available. Once again this year, it seems that Punk (mostly Oi!) and Metal dominate my list, mostly fueled by Angry Metal Guy and the amazing Montréal Skinhead/Punk concert scene.

Concerts A trend I started in 2022 was to go to as many concerts of artists I like as possible. I'm happy to report I went to around 80% more concerts in 2023 than in 2022! Looking back at my list, April was quite a busy month... Here are the concerts I went to in 2023: Although metalfinder continues to work as intended, I'm very glad to have discovered that the Montréal underground scene has departed from Facebook/Instagram and adopted en masse Gancio, a FOSS community agenda that supports ActivityPub. Our local instance, askapunk.net, is pretty much all I could ask for :) That's it for 2023!

22 December 2023

Joachim Breitner: The Haskell Interlude Podcast

It was pointed out to me that I have not blogged about this, so better now than never: since 2021 I have been producing, together with four other hosts, a regular podcast about Haskell, the Haskell Interlude. Roughly every two weeks, two of us interview someone from the Haskell community, and we chat for approximately an hour about how they came to Haskell, what they are doing with it, why they are doing it, and what else is on their mind. Sometimes we talk to very famous people, like Simon Peyton Jones, and sometimes to people who maybe should be famous, but aren't quite yet. For most episodes we also have a transcript, so you can read the interviews instead if you prefer, and you should find the podcast on most podcast apps as well. I do not know how reliable these statistics are, but supposedly we regularly have around 1300 listeners. We don't get much feedback, however, so if you like the show, or dislike it, or have feedback, let us know (for example on the Haskell Discourse, which has a thread for each episode). At the time of writing, we have released 40 episodes. For the benefit of my (likely hypothetical) fans, or those who want to train an AI voice model for nefarious purposes, here is the list of episodes co-hosted by me: Can't decide where to start? The one with Ryan Trinkle might be my favorite. Thanks to the Haskell Foundation and its sponsors for supporting this podcast (hosting, editing, transcription).

14 December 2023

Dirk Eddelbuettel: RProtoBuf 0.4.21 on CRAN: Updated Upstream Support!

An exciting new release 0.4.21 of RProtoBuf arrived on CRAN earlier today. RProtoBuf provides R with bindings for the Google Protocol Buffers ("ProtoBuf") data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language- and operating-system-agnostic protocol. ProtoBuf development, following what seemed like a multi-year lull, all of a sudden picked up again with a vengeance a little while ago. And the library releases we rely on for convenience, as provided by the Linux distributions, are lagging. So last summer we received an excellent, and focussed, pull request #93 offering to update the package to the newer ProtoBuf 22.0 and beyond. (Aside: when a library ditches its numbering scheme you know changes are "for real". My Ubuntu 23.10 box is still at 3.21 "in a different counting scheme".) But it wasn't until last weekend that issue ticket #95 by Sebastian arrived: he had run into the same issue, but recognized it and included a container recipe! So now all of a sudden we were able to build under a newer ProtoBuf, which made accepting PR #93 much easier! We added this as an additional continuous unit test, and made a few other smaller updates to documentation and style. The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.21 (2023-12-13)
  • Package now builds with ProtoBuf >= 22.x thanks to Matteo Gianella (#93 addressing #92).
  • An Alpine 3.19-based workflow was added to test this in continuous integration thanks to a suggestion by Sebastian Meyer.
  • A large number of old-style .Call were updated (#96).
  • Several packaging, documentation and testing items were updated.

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the quick overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

9 December 2023

Dirk Eddelbuettel: RcppInt64 0.0.4 on CRAN: Minor Bugfix

The new-ish package RcppInt64 (announced earlier this fall in this post, with two small updates following) arrived on CRAN minutes ago as release 0.0.4. RcppInt64 collects some of the previous conversions between 64-bit integer values in R and C++, and regroups them in a single package. It offers two interfaces: both a more standard as<>() converter from R values along with its companion wrap() to return to R, as well as more dedicated functions "from" and "to". This release addresses an issue Sebastian reported a few hours ago, and which is flagged by newer, pickier compilers: we need to include <cstdint> so that int64_t is declared. CRAN was at its usual best, processing this efficiently including tests of the by now two reverse dependencies. Twenty-two minutes total, all automated. The brief NEWS entry follows:

Changes in version 0.0.4 (2023-12-09)
  • The cstdint header is now included (closes #1).

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

6 December 2023

Reproducible Builds: Reproducible Builds in November 2023

Welcome to the November 2023 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. As a rather rapid recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries (more).

Reproducible Builds Summit 2023 Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany! Amazingly, the agenda and all notes from all sessions are all online; many thanks to everyone who wrote notes from the sessions. As a followup to one idea started at the summit, Alexander Couzens and Holger Levsen started work on a cache (or tailored front-end) for the snapshot.debian.org service. The general idea is that, when rebuilding Debian, you do not actually need the whole ~140TB of data from snapshot.debian.org; rather, only a very small subset of the packages are ever used for building. It turns out that, for amd64, arm64, armhf, i386, ppc64el, riscv64 and s390 for Debian trixie, unstable and experimental, this is only around 500GB, i.e. less than 1%. Although the new service is not yet ready for usage, it has already provided a promising outlook in this regard. More information is available on https://rebuilder-snapshot.debian.net and we hope that this service becomes usable in the coming weeks. The adjacent picture shows a sticky note authored by Jan-Benedict Glaw at the summit in Hamburg, confirming Holger Levsen's theory that rebuilding all Debian packages needs a very small subset of packages; the text states that 69,200 packages (in Debian sid) list 24,850 packages in their .buildinfo files, in 8,0200 variations. This little piece of paper was the beginning of rebuilder-snapshot and is a direct outcome of the summit! The Reproducible Builds team would like to thank our event sponsors, who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.

Beyond Trusting FOSS presentation at SeaGL On November 4th, Vagrant Cascadian presented Beyond Trusting FOSS at SeaGL in Seattle, WA in the United States. Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free and open source software, hardware and culture. The summary of Vagrant's talk mentions that it will:
[…] introduce the concepts of Reproducible Builds, including best practices for developing and releasing software, the tools available to help diagnose issues, and touch on progress towards solving decades-old deeply pervasive fundamental security issues. Learn how to verify and demonstrate trust, rather than simply hoping everything is OK!
Germane to the contents of the talk, the slides for Vagrant's talk can be built reproducibly, resulting in a PDF with a SHA1 of cfde2f8a0b7e6ec9b85377eeac0661d728b70f34 when built on Debian bookworm and c21fab273232c550ce822c4b0d9988e6c49aa2c3 on Debian sid at the time of writing.

Human Factors in Software Supply Chain Security Marcel Fourné, Dominik Wermke, Sascha Fahl and Yasemin Acar have published an article in a Special Issue of the IEEE's Security & Privacy magazine. Entitled A Viewpoint on Human Factors in Software Supply Chain Security: A Research Agenda, the paper justifies the need for reproducible builds to reach developers and end-users specifically, and furthermore points out some under-researched topics that we have seen mentioned in interviews. An author pre-print of the article is available in PDF form.

Community updates On our mailing list this month:

openSUSE updates Bernhard M. Wiedemann has created a wiki page outlining a proposal to create a general-purpose Linux distribution which consists of 100% bit-reproducible packages, albeit minus the embedded signature within RPM files. It would be based on openSUSE Tumbleweed or, if available, its Slowroll variant. In addition, Bernhard posted another monthly update for his work elsewhere in openSUSE.

Ubuntu Launchpad now supports .buildinfo files Back in 2017, Steve Langasek filed a bug against Ubuntu's Launchpad code hosting platform to report that .changes files (artifacts of building Ubuntu and Debian packages) reference .buildinfo files that aren't actually exposed by Launchpad itself. This was causing issues when attempting to process .changes files with tools such as Lintian. However, it was noticed last month that, in early August of this year, Simon Quigley had resolved this issue, and .buildinfo files are now available from the Launchpad system.

PHP reproducibility updates There have been two updates from the PHP programming language this month. Firstly, the widely-deployed PHPUnit framework for the PHP programming language has recently released version 10.5.0, which introduces the inclusion of a composer.lock file, ensuring total reproducibility of the shipped binary file. Further details and the discussion that went into their particular implementation can be found on the associated GitHub pull request. In addition, the presentation Leveraging Nix in the PHP ecosystem was given in late October at the PHP International Conference in Munich by Pol Dellaiera. While the video replay is not yet available, the (reproducible) presentation slides and speaker notes are available.

diffoscope changes diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including:
  • Improving DOS/MBR extraction by adding support for 7z. [ ]
  • Adding a missing RequiredToolNotFound import. [ ]
  • As a UI/UX improvement, try and avoid printing an extended traceback if diffoscope runs out of memory. [ ]
  • Mark diffoscope as stable on PyPI.org. [ ]
  • Uploading version 252 to Debian unstable. [ ]

Website updates A huge number of notes were added to our website that were taken at our recent Reproducible Builds Summit held between October 31st and November 2nd in Hamburg, Germany. In particular, a big thanks to Arnout Engelen, Bernhard M. Wiedemann, Daan De Meyer, Evangelos Ribeiro Tzaras, Holger Levsen and Orhun Parmaksız. In addition to this, a number of other changes were made, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Track packages marked as Priority: important in a new package set. [ ][ ]
    • Stop scheduling packages that fail to build from source in bookworm [ ] and bullseye. [ ].
    • Add old releases dashboard link in web navigation. [ ]
    • Permit the pool_buildinfos script to be re-run for a specific year. [ ]
    • Grant jbglaw access to the osuosl4 node [ ][ ] along with lynxis [ ].
    • Increase RAM on the amd64 Ionos builders from 48 GiB to 64 GiB; thanks IONOS! [ ]
    • Move buster to archived suites. [ ][ ]
    • Reduce the number of arm64 architecture workers from 24 to 16 in order to improve stability [ ], reduce the workers for amd64 from 32 to 28 and, for i386, reduce from 12 down to 8 [ ].
    • Show the entire build history of each Debian package. [ ]
    • Stop scheduling already tested package/version combinations in Debian bookworm. [ ]
  • Snapshot service for rebuilders
    • Add an HTTP-based API endpoint. [ ][ ]
    • Add a Gunicorn instance to serve the HTTP API. [ ]
    • Add an NGINX config [ ][ ][ ][ ]
  • System-health:
    • Detect failures due to HTTP 503 Service Unavailable errors. [ ]
    • Detect failures to update package sets. [ ]
    • Detect unmet dependencies. (This usually occurs with builds of Debian live-build.) [ ]
  • Misc-related changes:
    • do install systemd-oomd on jenkins. [ ]
    • fix harmless typo in squid.conf for codethink04. [ ]
    • fixup: reproducible Debian: add gunicorn service to serve /api for rebuilder-snapshot.d.o. [ ]
    • Increase codethink04 s Squid cache_dir size setting to 16 GiB. [ ]
    • Don t install systemd-oomd as it unfortunately kills sshd [ ]
    • Use debootstrap from backports when commissioning nodes. [ ]
    • Add the live_build_debian_stretch_gnome, debsums-tests_buster and debsums-tests_buster jobs to the zombie list. [ ][ ]
    • Run jekyll build with the --watch argument when building the Reproducible Builds website. [ ]
    • Misc node maintenance. [ ][ ][ ]
Other changes were made as well, however, including Mattia Rizzolo fixing rc.local's Bash syntax so it can actually run [ ], commenting away some file cleanup code that is (potentially) deleting too much [ ] and fixing the html_breakages page for Debian package builds [ ]. Finally, the missing AddEncoding gzip .gz line in the tests.reproducible-builds.org Apache configuration was diagnosed and a patch submitted to add it, so that Gzip files aren't re-compressed as Gzip, which some clients can't deal with (as well as being a waste of time). [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

11 November 2023

Matthias Klumpp: AppStream 1.0 released!

Today, 12 years after the meeting where AppStream was first discussed and 11 years after I released a prototype implementation, I am excited to announce AppStream 1.0! Check it out on GitHub, or get the release tarball, or read the documentation or release notes!

Some nostalgic memories I was not in the original AppStream meeting, since in 2011 I was extremely busy with finals preparations and ball organization in high school, but I still vividly remember sitting at school in the students' lounge during a break and trying to catch the really choppy live stream from the meeting on my borrowed laptop (a futile exercise, I watched parts of the blurry recording later). I was extremely passionate about getting software deployment to work better on Linux and to improve the overall user experience, and spent many hours on the PackageKit IRC channel discussing things with many amazing people like Richard Hughes, Daniel Nicoletti, Sebastian Heinlein and others. At the time I was writing a software deployment tool called Listaller (this was before Linux containers were a thing), and building it was very tough due to technical and personal limitations (I had just learned C!). Then in university, when I intended to recreate this tool, but for real and better this time, as a new project called Limba, I needed a way to provide metadata for it, and AppStream fit right in! Meanwhile, Richard Hughes was tackling the UI side of things while creating GNOME Software and needed a solution as well. So I implemented a prototype and together we pretty much reshaped the early specification from the original meeting into what would become modern AppStream. Back then I saw AppStream as a necessary side-project for my actual project, and didn't even consider myself the maintainer of it for quite a while (I hadn't been at the meeting after all). All those years ago I had no idea that ultimately I was developing AppStream not for Limba, but for a new thing that would show up later, with an even more modern design, called Flatpak. I also had no idea how incredibly complex AppStream would become, how many features it would have, how much more maintenance work it would be, and also not how ubiquitous it would become. The modern Linux desktop uses AppStream everywhere now: it is supported by all major distributions, used by Flatpak for metadata, used for firmware metadata via Richard's fwupd/LVFS, runs on every Steam Deck, can be found in cars and possibly many places I do not know yet.

What is new in 1.0?

API breaks The most important thing that's new with the 1.0 release is a bunch of incompatible changes. For the shared libraries, all deprecated API elements have been removed and a bunch of other changes have been made to improve the overall API and especially make it more binding-friendly. That doesn't mean that the API is completely new and nothing looks like before, though: when possible, the previous API design was kept, and some changes that would have been too disruptive have not been made. Regardless of that, you will have to port your AppStream-using applications. For some larger ones I already submitted patches to build with both AppStream versions, the 0.16.x stable series as well as 1.0+. For the XML specification, some older compatibility for XML that had no or very few users has been removed as well. This affects, for example, release elements that reference downloadable data without an artifact block, which has not been supported for a while. For all of these, I checked to remove only things that had close to no users and that were a significant maintenance burden. So as a rule of thumb: if your XML validated with no warnings with the 0.16.x branch of AppStream, it will still be 100% valid with the 1.0 release. Another notable change is that the generated output of AppStream 1.0 will always be 1.0 compliant; you cannot make it generate data for versions below that (this greatly reduced the maintenance cost of the project).

Developer element For a long time, you could set the developer name using the top-level developer_name tag. With AppStream 1.0, this is changed a bit. There is now a developer tag with a name child (that can be translated unless the translate="no" attribute is set on it). This allows future extensibility, and also allows setting a machine-readable id attribute in the developer element. This permits software centers to group software by developer more easily, without having to use heuristics. If we decide to extend the developer information per-app in future, this is also now possible. Do not worry though: the developer_name tag is still read, so there is no high pressure to update. The old 0.16.x stable series also has this feature backported, so it can be available everywhere. Check out the developer tag specification for more details.
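
As a concrete illustration, here is a minimal sketch of how the new element sits next to the legacy tag, parsed with Python's standard library. This is my own example rather than an excerpt from the AppStream documentation, and the component and developer ids are invented:

    import xml.etree.ElementTree as ET

    metainfo = """
    <component type="desktop-application">
      <id>org.example.app</id>
      <!-- AppStream 1.0 style: a machine-readable id plus a (translatable) name child -->
      <developer id="org.example">
        <name translate="no">Example Project</name>
      </developer>
      <!-- legacy form, still read for backwards compatibility -->
      <developer_name>Example Project</developer_name>
    </component>
    """

    root = ET.fromstring(metainfo)
    dev = root.find("developer")
    print(dev.get("id"), dev.findtext("name"))  # org.example Example Project
    print(root.findtext("developer_name"))      # Example Project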

Scale factor for screenshots Screenshot images can now have a scale attribute, to indicate an (integer) scaling factor to apply. This feature was a breaking change and therefore we could not have it for the longest time, but it is now available. Please wait a bit for AppStream 1.0 to become deployed more widely though, as using it with older AppStream versions may lead to issues in some cases. Check out the screenshots tag specification for more details.

Screenshot environments It is now possible to indicate the environment a screenshot was recorded in (GNOME, GNOME Dark, KDE Plasma, Windows, etc.) via an environment attribute on the respective screenshot tag. This was also a breaking change, so use it carefully for now! If projects want to, they can use this feature to supply dedicated screenshots depending on the environment the application page is displayed in. Check out the screenshots tag specification for more details.
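
Putting the two screenshot additions together, a metainfo snippet could look roughly like the following. This is again my own sketch with invented URLs; the exact set of accepted environment names and the placement of the attributes are defined in the screenshots tag specification:

    import xml.etree.ElementTree as ET

    snippet = """
    <screenshots>
      <screenshot type="default" environment="gnome">
        <image type="source" scale="2">https://example.org/shots/main-gnome.png</image>
      </screenshot>
      <screenshot environment="plasma">
        <image type="source">https://example.org/shots/main-plasma.png</image>
      </screenshot>
    </screenshots>
    """

    for shot in ET.fromstring(snippet):
        img = shot.find("image")
        # scale defaults to 1 when the attribute is absent
        print(shot.get("environment"), img.get("scale", "1"), img.text.strip())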

References tag This is a feature more important for the scientific community and scientific applications. Using the references tag, you can associate the AppStream component with a DOI (Digital Object Identifier) or provide a link to a CFF file to provide citation information. It also allows linking to other scientific registries. Check out the references tag specification for more details.
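
A rough idea of what the DOI case could look like in a metainfo file follows. This is a guess at the shape based on the description above, with a placeholder DOI; the element and attribute names should be checked against the references tag specification:

    import xml.etree.ElementTree as ET

    snippet = """
    <references>
      <reference type="doi">10.1000/182</reference>
    </references>
    """

    ref = ET.fromstring(snippet).find("reference")
    print(ref.get("type"), ref.text)  # doi 10.1000/182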

Release tags Releases can have tags now, just like components. This is generally not a feature that I expect to be used much, but in certain instances it can become useful with a cooperating software center, for example to tag certain releases as long-term supported versions.

Multi-platform support Thanks to the interest and work of many volunteers, AppStream (mostly) runs on FreeBSD now, a NetBSD port exists, support for macOS was written and a Windows port is on its way! Thank you to everyone working on this!

Better compatibility checks For a long time I thought that the AppStream library should just be a thin layer above the XML and that software centers should just implement a lot of the actual logic. This has not been the case for a while, but there were still a lot of complex AppStream features that were hard for software centers to implement and where it makes sense to have one implementation that projects can just use. The validation of component relations is one such thing. This was implemented in 0.16.x as well, but 1.0 vastly improves upon the compatibility checks, so you can now just run as_component_check_relations and retrieve a detailed list of whether the current component will run well on the system. Besides better API for software developers, the appstreamcli utility also has much improved support for relation checks, and I wrote about these changes in a previous post. Check it out! With these changes, I hope this feature will be used much more, and beyond just drivers and firmware.

So much more! The changelog for the 1.0 release is huge, and there are many papercuts resolved and changes made that I did not talk about here, like us using gi-docgen (instead of gtkdoc) now for nice API documentation, or the many improvements that went into better binding support, or better search, or just plain bugfixes.

Outlook I expect the transition to 1.0 to take a bit of time. AppStream has not broken its API for many, many years (since 2016), so a bunch of places need to be touched even if the changes themselves are minor in many cases. In hindsight, I should have also released 1.0 much sooner and it should not have become such a mega-release, but that was mainly due to time constraints. So, what's in it for the future? Contrary to what I thought, AppStream does not really seem to be done and feature-complete at any point; there is always something to improve, and people come up with new use cases all the time. So, expect more of the same in future: bugfixes, validator improvements, documentation improvements, better tools and the occasional new feature. Onwards to 1.0.1!

7 November 2023

Matthew Palmer: PostgreSQL Encryption: The Available Options

On an episode of Postgres FM, the hosts had a (very brief) discussion of data encryption in PostgreSQL. While Postgres FM is a podcast well worth a subscribe, the hosts aren't data security experts, and so as someone who builds a queryable database encryption system, I found the coverage to be somewhat lacking. I figured I'd provide a more complete survey of the available options for PostgreSQL-related data encryption.

The Status Quo By default, when you install PostgreSQL, there is no data encryption at all. That means that anyone who gets access to any part of the system can read all the data they have access to. This is, of course, not peculiar to PostgreSQL: basically everything works much the same way. What's stopping an attacker from nicking off with all your data is the fact that they can't access the database at all. The things that are acting as protection are perimeter defences, like putting the physical equipment running the server in a secure datacenter, firewalls to prevent internet randos connecting to the database, and strong passwords. This is referred to as "tortoise security": it's tough on the outside, but soft on the inside. Once that outer shell is cracked, the delicious, delicious data is ripe for the picking, and there's absolutely nothing to stop a miscreant from going to town and making off with everything. It's a good idea to plan your defenses on the assumption you're going to get breached sooner or later. Having good defence-in-depth includes denying the attacker access to your data even if they compromise the database. This is where encryption comes in.

Storage-Layer Defences: Disk / Volume Encryption To protect against the compromise of the storage that your database uses (physical disks, EBS volumes, and the like), it's common to employ encryption-at-rest, such as full-disk encryption or volume encryption. These mechanisms protect against offline attacks, but provide no protection while the system is actually running. And therein lies the rub: your database is always running, so encryption at rest typically doesn't provide much value. If you're running physical systems, disk encryption is essential, but more to prevent accidental data loss, due to things like failing to wipe drives before disposing of them, rather than physical theft. In systems where volume encryption is only a tickbox away, it's also worth enabling, if only to prevent inane questions from your security auditors. Relying solely on storage-layer defences, though, is very unlikely to provide any appreciable value in preventing data loss.

Database-Layer Defences: Transparent Database Encryption If you've used proprietary database systems in high-security environments, you might have come across Transparent Database Encryption (TDE). There are also a couple of proprietary extensions for PostgreSQL that provide this functionality. TDE is essentially encryption-at-rest implemented inside the database server. As such, it has much the same drawbacks as disk encryption: few real-world attacks are thwarted by it. There is a very small amount of additional protection, in that physical-level backups (as produced by pg_basebackup) are protected, but the vast majority of attacks aren't stopped by TDE. Any attacker who can access the database while it's running can just ask for an SQL-level dump of the stored data, and they'll get the unencrypted data quick as you like.

Application-Layer Defences: Field Encryption If you want to take the database out of the threat landscape, you really need to encrypt sensitive data before it even gets near the database. This is the realm of field encryption, more commonly known as application-level encryption. This technique involves encrypting each field of data before it is sent to be stored in the database, and then decrypting it again after it's retrieved from the database. Anyone who gets the data from the database directly, whether via a backup or a direct connection, is out of luck: they can't decrypt the data, and therefore it's worthless. There are, of course, some limitations of this technique. For starters, every ORM and data mapper out there has rolled their own encryption format, meaning that there's basically zero interoperability. This isn't a problem if you build everything that accesses the database using a single framework, but if you ever feel the need to migrate, or use the database from multiple codebases, you're likely in for a rough time. The other big problem of traditional application-level encryption is that, when the database can't understand what data it's storing, it can't run queries against that data. So if you want to encrypt, say, your users' dates of birth, but you also need to be able to query on that field, you need to choose between one or the other: you can't have both at the same time. You may think to yourself, "but this isn't any good, an attacker that breaks into my application can still steal all my data!". That is true, but security is never binary. The name of the game is reducing the attack surface, making it harder for an attacker to succeed. If you leave all the data unencrypted in the database, an attacker can steal all your data by breaking into the database or by breaking into the application. Encrypting the data reduces the attacker's options, and allows you to focus your resources on hardening the application against attack, safe in the knowledge that an attacker who gets into the database directly isn't going to get anything valuable.
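
To make the trade-off concrete, here is a minimal sketch of application-level field encryption in Python, assuming the third-party cryptography and psycopg2 packages and a hypothetical users table with a bytea column; none of these names come from the post itself. The database only ever sees ciphertext, which is exactly why it can no longer index or filter on the field:

    from cryptography.fernet import Fernet
    import psycopg2

    key = Fernet.generate_key()  # in practice, load this from a secrets manager rather than generating per run
    f = Fernet(key)

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        # Encrypt in the application, before the value ever reaches PostgreSQL
        cur.execute("INSERT INTO users (name, dob_enc) VALUES (%s, %s)",
                    ("alice", f.encrypt(b"1990-04-02")))

        # Decrypt after retrieval; a server-side "WHERE dob_enc > ..." is impossible
        cur.execute("SELECT dob_enc FROM users WHERE name = %s", ("alice",))
        (stored,) = cur.fetchone()
        print(f.decrypt(bytes(stored)))  # b'1990-04-02'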

Sidenote: The Curious Case of pg_crypto PostgreSQL ships a contrib module called pg_crypto, which provides encryption and decryption functions. This sounds ideal to use for encrypting data within our applications, as it's available no matter what we're using to write our application. It avoids the problem of framework-specific cryptography, because you call the same PostgreSQL functions no matter what language you're using, which produces the same output. However, I don't recommend ever using pg_crypto's data encryption functions, and I doubt you will find many other cryptographic engineers who will, either. First up, and most horrifyingly, it requires you to pass the long-term keys to the database server. If there's an attacker actively in the database server, they can capture the keys as they come in, which means all the data encrypted using that key is exposed. Sending the keys can also result in the keys ending up in query logs, both on the client and server, which is obviously a terrible result. Less scary, but still very concerning, is that pg_crypto's available cryptography is, to put it mildly, antiquated. We have a lot of newer, safer, and faster techniques for data encryption that aren't available in pg_crypto. This means that if you do use it, you're leaving a lot on the table, and need to have skilled cryptographic engineers on hand to avoid the potential pitfalls. In short: friends don't let friends use pg_crypto.
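
For comparison, this is roughly what using the contrib module's symmetric functions (shipped as the pgcrypto extension) from an application looks like; the table, key and value below are made up for illustration. Note how the long-term key travels to the server as an ordinary query parameter, which is the core problem described above:

    import psycopg2

    KEY = "correct horse battery staple"  # sent to the server on every call; may end up in query logs

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS pgcrypto")
        # Both the plaintext and the key are visible to the database server here
        cur.execute("SELECT pgp_sym_encrypt(%s, %s)", ("1990-04-02", KEY))
        (ciphertext,) = cur.fetchone()
        cur.execute("SELECT pgp_sym_decrypt(%s, %s)", (ciphertext, KEY))
        print(cur.fetchone()[0])  # 1990-04-02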

The Future: Enquo All this brings us to the project I run: Enquo. It takes application-layer encryption to a new level, by providing a language- and framework-agnostic cryptosystem that also enables encrypted data to be efficiently queried by the database. So, you can encrypt your users' dates of birth, in such a way that anyone with the appropriate keys can query the database to return, say, all users over the age of 18, but an attacker just sees unintelligible gibberish. This should greatly increase the amount of data that can be encrypted, and as the Enquo project expands its available data types and supported languages, the coverage of encrypted data will grow and grow. My eventual goal is to encrypt all data, all the time. If this appeals to you, visit enquo.org to use or contribute to the open source project, or EnquoDB.com for commercial support and hosted database options.

22 September 2023

Ravi Dwivedi: Debconf23

Official logo of DebConf23

Introduction DebConf23, the 24th annual Debian Conference, was held in India in the city of Kochi, Kerala, from the 3rd to the 17th of September, 2023. Ever since I got to know about it (which was more than a year ago), I was excited to attend DebConf in my home country. This was my second DebConf, as I attended one last year in Kosovo. I was very happy that I didn't need to apply for a visa to attend. I got a full bursary to attend the event (thanks a lot to Debian for that!), which is always helpful in covering the expenses, especially if the venue is a five-star hotel :) For the conference, I submitted two talks. One, on Debian packaging for beginners, was suggested by Sahil; the other was suggested by Praveen, who opined that a talk covering broader topics about freedom in self-hosting services would be better when I started discussing submitting a talk about the prav app project. So I submitted one on Debian packaging for beginners and the other on ideas on sustainable solutions for self-hosting. My friend Suresh - who is enthusiastic about Debian and free software - wanted to attend the DebConf as well. When the registration started, I reminded him about applying. We landed in Kochi on the 28th of August 2023 during the festival of Onam. We celebrated Onam in Kochi, had a trip to Wayanad, and returned to Kochi. On the evening of the 3rd of September, we reached the venue - Four Points Hotel by Sheraton, at Infopark Kochi, Ernakulam, Kerala, India.
Suresh and me celebrating Onam in Kochi.

Hotel overview The hotel had 14 floors, and featured a swimming pool and gym (these were included in our package). The hotel gave us elevator access for only our floor, along with public spaces like the reception, gym, swimming pool, and dining areas. The temperature inside the hotel was pretty cold and I had to buy a jacket to survive. Perhaps the hotel was in cahoots with winterwear companies? :)
Four Points Hotel by Sheraton was the venue of DebConf23. Photo credits: Bilal
Photo of the pool. Photo credits: Andreas Tille.
View from the hotel window.

Meals On the first day, Suresh and I had dinner at the eatery on the third floor. At the entrance, a member of the hotel staff asked us how many people we wanted a table for. I told her that it's just the two of us at the moment, but (as we are attending a conference) we might be joined by others. Regardless, they gave us a table for just two. Within a few minutes, we were joined by Alper from Turkey and urbec from Germany. So we shifted to a larger table, but then we were joined by even more people, so we were busy adding more chairs to our table. urbec had already been in Kerala for the past 5-6 days and was, on one hand, very happy with the quality and taste of bananas in Kerala and, on the other, rather afraid of the spicy food :) Two days later, the lunch and dinner were shifted to the All Spice Restaurant on the 14th floor, but the breakfast was still served at the eatery. Since the eatery (on the 3rd floor) had a greater variety of food than the other venue, this move made breakfast the best meal for me and many others. Many attendees from outside India were not accustomed to the spicy food. It is difficult for locals to help them, because what we consider mild can be spicy for others. It is not easy to satisfy everyone at the dining table, but I think the organizing team did a very good job in the food department. (That said, it didn't matter for me after a point, and you will know why.) The pappadam were really good, and I liked the rice labelled "Kerala rice". I actually brought that exact rice and pappadam home during my last trip to Kochi and everyone at my home liked it too (thanks to Abhijit PA). I also wished to eat all types of payasams from Kerala and this really happened (thanks to Sruthi who designed the menu). Every meal had a different variety of payasam and it was awesome, although I didn't like some of them, mostly because they were very sweet. Meals were later shifted to the ground floor (taking away the best breakfast option, which was the eatery).
This place served as lunch and dinner place and later as hacklab during debconf. Photo credits: Bilal

The excellent Swag Bag The DebConf registration desk was on the second floor. We were given a very nice swag bag. They were available in multiple colors - grey, green, blue, red - and included an umbrella, a steel mug, a multiboot USB drive by Mostly Harmless, a thermal flask, a mug by Canonical, a paper coaster, and stickers. It rained almost every day in Kochi during our stay, so handing out an umbrella to every attendee was a good idea.
Picture of the awesome swag bag given at DebConf23. Photo credits: Ravi Dwivedi

A gift for Nattie During breakfast one day, Nattie (Belgium) expressed the desire to buy a coffee filter. The next time I went to the market, I bought a coffee filter for her as a gift. She seemed happy with the gift and was flattered to receive a gift from a young man :)

Being a mentor There were many newbies who were eager to learn and contribute to Debian. So, I mentored whoever came to me and was interested in learning. I conducted a packaging workshop in the bootcamp, but could only cover how to set up the Debian Unstable environment, and had to leave out how to package (but I covered that in my talk). Carlos (Brazil) gave a keysigning session in the bootcamp. Praveen was also mentoring in the bootcamp. I helped people understand why we sign GPG keys and how to sign them. I planned to take a workshop on it but cancelled it later.

My talk My Debian packaging talk was on the 10th of September, 2023. I had not prepared slides for my Debian packaging talk in advance - I thought that I could do it during the trip, but I didn't get the time, so I prepared them on the day before the talk. Since it was mostly a tutorial, the slides did not need much preparation. My thanks to Suresh, who helped me with the slides and made it possible to complete them in such a short time frame. My talk was well-received by the audience, going by their comments. I am glad that I could give an interesting presentation.
My presentation photo. Photo credits: Valessio

Visiting a saree shop After my talk, Suresh, Alper, and I went with Anisa and Kristi - who are both from Albania, and have a never-ending fascination for Indian culture :) - to buy them sarees. We took autos to Kakkanad market and found a shop with a great variety of sarees. I was slightly familiar with the area around the hotel, as I had been there for a week. Indian women usually don't try on sarees while buying - they just select the design. But Anisa wanted to put one on and take a few photos as well. The shop staff did not have a trial saree for this purpose, so they took a saree from a mannequin. It took about an hour for the lady at the shop to help Anisa put on that saree, but you could tell that she was in heaven wearing that saree, and she bought it immediately :) Alper also bought a saree to take back to Turkey for his mother. Suresh and I wanted to buy a kurta which would go well with the mundu we already had, but we could not find anything to our liking.
Selfie with Anisa and Kristi. Photo credits: Anisa.

Cheese and Wine Party On the 11th of September we had the Cheese and Wine Party, a tradition of every DebConf. I brought Kaju Samosa and Nankhatai from home. Many attendees expressed their appreciation for the samosas. During the party, I was with Abhas and had a lot of fun. Abhas brought packets of paan and served them at the Cheese and Wine Party. We discussed interesting things and ate burgers. But due to the restrictive alcohol laws in the state, it was less fun compared to the previous DebConfs - you could only drink alcohol served by the hotel in public places. If you bought your own alcohol, you could only drink in private places (such as in your room, or a friend's room), but not in public places.
Me helping with the Cheese and Wine Party.

Party at my room Last year, Joenio (Brazilian) brought pastis from France, which I liked. He brought the same alcoholic drink this year too. So I invited him to my room after the Cheese and Wine party to have pastis. My idea was to have it with my roommate Suresh and Joenio. But then we permitted Joenio to bring as many people as he wanted, and he ended up bringing some ten people. Suddenly, the room was crowded. I was having a good time at the party, serving them the snacks given to me by Abhas. The news of an alcohol party in my room spread like wildfire. Soon there were so many people that the AC became ineffective and I found myself sweating. I left the room and roamed around the hotel for some fresh air. I came back after about 1.5 hours - for the most part, I was sitting on the ground floor with TK Saurabh. And then I met Abraham near the gym (which was my last meeting with him). I came back to my room at around 2:30 AM. Nobody seemed to have realized that I was gone. They were thanking me for hosting such a good party. A lot of people left at that point and the remaining people were playing songs and dancing (everyone was dancing all along!). I had no energy left to dance or join them. They left around 03:00 AM. But I am glad that people enjoyed partying in my room.
This picture was taken when there were few people in my room for the party.

Sadhya Thali On the 12th of September, we had a sadhya thali for lunch. It is a vegetarian thali served on a banana leaf on the eve of Thiruvonam. It wasn't Thiruvonam on this day, but we got a special and filling lunch. The rasam and payasam were especially yummy.
Sadhya Thali: A vegetarian meal served on banana leaf. Payasam and rasam were especially yummy! Photo credits: Ravi Dwivedi.
Sadhya thali being served at debconf23. Photo credits: Bilal

Day trip On the 13th of September, we had a daytrip. I chose the daytrip houseboat in Allepey. Suresh chose the same, and we registered for it as soon as it was open. This was the most sought-after daytrip by the DebConf attendees - around 80 people registered for it. Our bus was set to leave at 9 AM on the 13th of September. Suresh and I woke up at 8:40 and hurried to get to the bus in time. It took two hours to reach the venue where we got the houseboat. The houseboat experience was good. The trip featured some good scenery. I got to experience the renowned Kerala backwaters. We were served food on the boat. We also stopped at a place and had coconut water. By evening, we came back to the place where we had boarded the boat.
Group photo of our daytrip. Photo credits: Radhika Jhalani

A good friend lost When we came back from the daytrip, we received news that Abraham Raji was involved in a fatal accident during a kayaking trip. Abraham Raji was a very good friend of mine. In my Albania-Kosovo-Dubai trip last year, he was my roommate at our Tirana apartment. I roamed around in Dubai with him, and we had many discussions during DebConf22 Kosovo. He was the one who took the photo of me on my homepage. I also met him in MiniDebConf22 Palakkad and MiniDebConf23 Tamil Nadu, and went to his flat in Kochi this year in June. We had many projects in common. He was a Free Software activist and was the designer of the DebConf23 logo, in addition to those for other Debian events in India.
A selfie in memory of Abraham.
We were all fairly shocked by the news. I was devastated. Food lost its taste, and it became difficult to sleep. That night, Anisa and Kristi cheered me up and gave me company. Thanks a lot to them. The next day, Joenio also tried to console me. I thank him for doing a great job. I thank everyone who helped me in coping with the difficult situation. On the next day (the 14th of September), the Debian project leader Jonathan Carter addressed and announced the news officially. The Debian project also mentioned it on their website. Abraham was supposed to give a talk, but following the incident, all talks were cancelled for the day. The conference dinner was also cancelled. As I write, 9 days have passed since his death, but even now I cannot come to terms with it.

Visiting Abraham's house On the 15th of September, the conference ran two buses from the hotel to Abraham's house in Kottayam (a 2-hour ride). I hopped on the first bus and my mood was not very good. Evangelos (Germany) was sitting opposite me, and he began conversing with me. The distraction helped and I was back to normal for a while. Thanks to Evangelos, as he supported me a lot on that trip. He was also very impressed by my use of the StreetComplete app, which I was using to edit OpenStreetMap. In two hours, we reached Abraham's house. I couldn't control myself and burst into tears. I went to see the body. I met his family (mother, father and sister), but I had nothing to say and I felt helpless. Owing to the loss of sleep and appetite over the past few days, I had no energy, and didn't think it was a good idea for me to stay there. I went back by taking the bus after one hour and had lunch at the hotel. I withdrew my talk scheduled for the 16th of September.

A Japanese gift I got a nice Japanese gift from Niibe Yutaka (Japan) - a folder to keep papers, which had ancient Japanese manga characters on it. He said he felt guilty as he had swapped his talk with me, and so it got rescheduled from the 12th of September to the 16th of September, which I later withdrew.
Thanks to Niibe Yutaka (the person towards your right hand) from Japan (FSIJ), who gave me a wonderful Japanese gift during debconf23: A folder to keep pages with ancient Japanese manga characters printed on it. I realized I immediately needed that :)
This is the Japanese gift I received.

Group photo On the 16th of September, we had a group photo. I am glad that this year I was more clear in this picture than in DebConf22.
Click to enlarge

Volunteer work and talks attended I attended the training session for the video team and worked as a camera operator. The Bits from the DPL talk was nice. I enjoyed Abhas' presentation on home automation. He basically demonstrated how he liberated Internet-enabled home devices. I also liked Kristi's presentation on ways to engage with the GNOME community.
Bits from the DPL. Photo credits: Bilal
Kristi on GNOME community. Photo credits: Ravi Dwivedi.
Abhas' talk on home automation. Photo credits: Ravi Dwivedi.
I also attended lightning talks on the last day. Badri, Wouter, and I gave a demo on how to register on the Prav app. Prav got a fair share of advertising during the last few days.
I was roaming around with a QR code on my T-shirt for downloading Prav.

The night of the 17th of September Suresh left the hotel and Badri joined me in my room. Thanks to the efforts of Abhijit PA, Kiran, and Ananthu, I wore a mundu.
Me in mundu. Picture credits: Abhijith PA
I then joined Kalyani, Mangesh, Ruchika, Anisa, Ananthu and Kiran. We took pictures and this marked the last night of DebConf23.

Departure day The 18th of September was the day of departure. Badri slept in my room and left early morning (06:30 AM). I dropped him off at the hotel gate. The breakfast was at the eatery (3rd floor) again, and it was good. Sahil, Saswata, Nilesh, and I hung out on the ground floor.
From left: Nilesh, Saswata, me, Sahil. Photo credits: Sahil.
I had an 8 PM flight from Kochi to Delhi, for which I took a cab with Rhonda (Austria), Michael (Nigeria) and Yash (India). We were joined by other DebConf23 attendees at the Kochi airport, where we took another selfie.
Ruchika (taking the selfie) and from left to right: Yash, Joost (Netherlands), me, Rhonda
Joost and I were on the same flight, and we sat next to each other. He then took a connecting flight from Delhi to the Netherlands, while I went with Yash to the New Delhi Railway Station, where we took our respective trains. I reached home on the morning of the 19th of September, 2023.
Joost and me going to Delhi. Photo credits: Ravi.

Big thanks to the organizers DebConf23 was hard to organize - strict alcohol laws, weird hotel rules, the death of a close friend (almost a family member), and a scary notice by the immigration bureau. The people from the team are my close friends and I am proud of them for organizing such a good event. None of this would have been possible without the organizers, who put in more than a year of voluntary effort to produce this. In the meanwhile, many of them had organized local events in the time leading up to DebConf. Kudos to them. The organizers also tried their best to get clearance for countries not approved by the ministry. I am also sad that people from China, Kosovo, and Iran could not join. In particular, I feel bad for people from Kosovo who wanted to attend but could not (as India does not consider their passport to be a valid travel document), considering how we Indians were so well-received in their country last year.

Note about myself I am writing this on the 22nd of September, 2023. It took me three days to put up this post - this was one of the most tragic and hardest posts for me to write. I have literally forced myself to write this. I have still not recovered from the loss of my friend. Thanks a lot to all those who helped me. PS: Credits to contrapunctus for making grammar, phrasing, and capitalization changes.

21 September 2023

Jonathan Carter: DebConf23

I very, very nearly didn't make it to DebConf this year; I had a bad cold/flu for a few days before I left, and after a negative covid-19 test just minutes before my flight, I decided to take the plunge and travel. This is just everything in chronological order, more or less; it's the only way I could write it.

DebCamp I planned to spend DebCamp working on various issues. Very few of them actually got done; I spent the first few days in bed further recovering, and took a covid-19 test when I arrived and another after I felt better, and both were negative, so I am not sure what exactly was wrong with me. Between that and catching up with other Debian duties, I couldn't make any progress on catching up on the packaging work I wanted to do. I'll still post what I intended here, and I'll try to take a few days to focus on these some time next month: Calamares / Debian Live stuff:
  • #980209 installation fails at the install boot loader phase
  • #1021156 calamares-settings-debian: Confusing/generic program names
  • #1037299 Install Debian -> Untrusted application launcher
  • #1037123 Minimal HD space required too small for some live images
  • #971003 Console auto-login doesn't work with sysvinit
At least Calamares has been trixiefied in testing, so there's that! Desktop stuff:
  • #1038660 please set a placeholder theme during development, different from any release
  • #1021816 breeze: Background image not shown any more
  • #956102 desktop-base: unwanted metadata within images
  • #605915 please make it a non-native package
  • #681025 Put old themes in a new package named desktop-base-extra
  • #941642 desktop-base: split theme data files and desktop integrations in separate packages
The Egg theme that I want to develop for testing/unstable is based on Juliette Taka's Homeworld theme that was used for Bullseye. Egg, as in, something that hasn't quite hatched yet. Get it? (for #1038660) Debian Social:
  • Set up Lemmy instance
    • I started setting up a Lemmy instance before DebCamp, and meant to finish it.
  • Migrate PeerTube to new server
    • We got a new physical server for our PeerTube instance, we should have more space for growth and it would help us fix the streaming feature on our platform.
Loopy: I intended to get the loop for DebConf in good shape before I left, so that we can spend some time during DebCamp making some really nice content, unfortunately this went very tumbly, but at least we ended up with a loopy that kind of worked and wasn't too horrible. There's always another DebConf to try again, right?
So DebCamp as a usual DebCamp was pretty much a wash (fitting with all the rain we had?) for me, at least it gave me enough time to recover a bit for DebConf proper, and I had enough time left to catch up on some critical DPL duties and put together a few slides for the Bits from the DPL talk.

DebConf Bits From the DPL I had very, very little available time to prepare something for Bits from the DPL, but I managed to put some slides together (available on my wiki page). I mostly covered:
  • A very quick introduction of myself (I've done this so many times, it feels redundant giving my history every time), and some introduction on what it is that the DPL does. I declared my intent not to run for DPL again, and the reasoning behind it, and a few bits of information for people who may intend to stand for DPL next year.
  • The sentiment out there for the Debian 12 release (which has been very positive). How we include firmware by default now, and that we're saying goodbye to both the GNU/kFreeBSD and mipsel architectures.
  • Debian Day and the 30th birthday party celebrations from local groups all over the world (and a reminder about the Local Groups BoF later in the week).
  • I looked forward to Debian 13 (trixie!), and how we're gaining riscv64 as a release architecture, as well as loongarch64, and that plans seem to be forming to fix 2k38 in Debian, hopefully largely done by the time the Trixie release comes by.
  • I made some comments about "Enterprise Linux", as people refer to the RHEL eco-system these days, how really bizarre some aspects of it are (like the kernel maintenance), and that some big vendors are choosing to support systems outside of that eco-system now (like CPanel now supporting Ubuntu too). I closed with the quote below from Ian Murdock, and assured the audience that if they want to go out and make money with Debian, they are more than welcome to.
Job Fair I walked through the hallway where the Job Fair was hosted, and enjoyed all the buzz. It's not always easy to get this right, but this year it was very active and energetic; I hope lots of people made some connections! Cheese & Wine Due to state laws and alcohol licenses, we couldn't consume alcohol from outside the state of Kerala in the common areas of the hotel (only in private rooms), so this wasn't quite as big or as fun as our usual C&W parties since we couldn't share as much from our individual countries and cultures, but we always knew that this was going to be the case for this DebConf, and it still ended up being alright. Day Trip I opted for the forest / waterfalls daytrip. It was really, really long with lots of time in the bus. I think our trip's organiser underestimated how long it would take between the points on the route (all in all it wasn't that far, but on a bus on a winding mountain road, it takes a long time). We left at 8:00 and only found our way back to the hotel around 23:30. Even though we arrived tired and hungry, we saw some beautiful scenery, animals and also met indigenous river people who talked about their struggles against being driven out of their place of living multiple times as government invests in new developments like dams and hydro power. Photos available in the DebConf23 public git repository. Losing a beloved Debian Developer during DebConf To our collective devastation, not everyone made it back from their day trips. Abraham Raji was out on the kayak day trip, and while swimming, got caught by a whirlpool from a drainage system. Even though all of us were properly exhausted and shocked in disbelief at this point, we had to stay up and make some tough decisions. Some initially felt that we had to cancel the rest of DebConf. We also had to figure out how to announce what happened asap, both to the larger project and at DebConf in an official manner, while ensuring that due diligence took place and that the family was informed by the police first before making anything public. We ended up cancelling all the talks for the following day, with an address from the DPL in the morning to explain what had happened. Of all the things I've ever had to do as DPL, this was by far the hardest. The day after that, talks were also cancelled for the morning so that we could attend his funeral. Dozens of DebConf attendees headed out by bus to go pay their final respects, many wearing the t-shirts that Abraham had designed for DebConf. A book of condolences was set up so that everyone who wished to could write a message on how they remembered him. The book will be kept by his family.
Today marks a week since his funeral, and I still feel very raw about it. And even though there was uncertainty whether DebConf should even continue after his death, in hindsight I'm glad that everyone pushed forward. While we were all heartbroken, it was also heartwarming to see people care for each other through all of this. If anything, I think I needed more time at DebConf just to be in that warm aura of emotional support for just a bit longer. There are many people who I wanted to talk to who I barely even had a chance to see. Abraham, or Abru as he was called by some people (which I like, because bru in Afrikaans is like bro in English, not sure if that's what it implied locally too), enjoyed artistic pursuits, but he was also passionate about knowledge transfer. He ran classes at DebConf both last year and this year (and I think at other local events too) where he taught people packaging via a quick course that he put together. His enthusiasm for Debian was contagious; a few of the people he was mentoring came up to me and told me that they were going to see it through and become a DD in honor of him. I can't even remember how I reacted to that, my brain was already so worn out, and stitching that together with the tragedy of what happened while at DebConf was just too much for me. I first met him in person last year in Kosovo; I already knew who he was, so I think we interacted during the online events the year before. He was just one of those people who showed so much promise, and I was curious to see what he'd achieve in the future. Unfortunately, he was taken away from us too soon.
Poetry Evening
Later in the week we had the poetry evening. This was the first time I had the courage to recite something. I read Ithaka by C.P. Cavafy (translated by Edmund Keeley). The first time I heard about this poem was in an interview with Julian Assange's wife, where she mentioned that he really loves this poem, and it caught my attention because I really like the Weezer song Return to Ithaka and always wondered what it was about, so needless to say, that was another rabbit hole at some point.
Group Photo
Our DebConf photographer organised another group photo for this event; links to high-res versions are available on Aigars' website.
BoFs
I didn't attend nearly as many talks this DebConf as I would've liked (fortunately I can catch up on video, which should be released soon), but I did make it to a few BoFs. In the Local Groups BoF, representatives from various local teams were present who introduced themselves and explained what they were doing. From memory (sorry if I left someone out), we had people from Belgium, Brazil, Taiwan and South Africa. We talked about the types of events a local group could do (BSPs, Mini DCs, sprints, Debian Day, etc.), how to help local groups get started, booth kits for conferences, and setting up some form of calendar that lists important Debian events in a way that makes it easier for people to plan and co-ordinate. There's a mailing list for co-ordination of local groups, and the IRC channel is #debian-localgroups on OFTC.
If you got one of these Cheese & Wine bags from DebConf, that's from the South African local group!
In the Debian.net BoF, we discussed the Debian.net hosting service, where Debian pays for VMs hosted for projects by individual DDs on Debian.net. The idea is that we start some form of census that monitors the services: whether they're still in use, whether the system is up to date, whether someone still cares for it, etc. We had some discussion about where the lines of responsibility are drawn, and we can probably make things a little bit clearer in the documentation. We also want to offer more in terms of backups and monitoring (currently DDs do get 500GB from rsync.net that could be used for backups of their services, though). The intention is also to deploy some form of configuration management for some essentials across the hosts. We should also look at getting some sponsored hosting for this.
In the Debian Social BoF, we discussed some services that need work / expansion. In particular, Matrix keeps growing at an increased rate as more users use it and more channels are bridged, so it will likely move to its own host with big disks soon. We might replace Pleroma with a fork called Akkoma; this will need some more homework and checking whether it's even feasible. Some services haven't really been used (like Writefreely and Plume), and it might be time to retire them. We might just have to help one or two users migrate some of their posts away if we do retire them. Mjolner seems to do a fine job at spam blocking; we haven't had any notable incidents yet. WordPress now has improved fediverse support, but it's unclear whether it works on a multi-site instance yet; I'll test it at some point soon and report back. For upcoming services, we are implementing Lemmy and probably also Mobilizon. A request was made that we also look into Loomio.
More Information Overload
There's so much that happens at DebConf that it's tough to take it all in, and also to find time to write about all of it, but I'll mention a few more things that are certainly worthy of note. During DebConf, we had some people from the Kite Linux team over. KITE supplies the ICT needs for the primary and secondary schools in the province of Kerala, where they all use Linux, and they decided to switch all of these to Debian. There was an ad-hoc BoF where locals were listening and fielding questions that the Kite Linux team had. It was great seeing all the energy and enthusiasm behind this effort; I hope someone will properly blog about this! I learned about the VGLUG Foundation, who are doing a tremendous job at promoting GNU/Linux in the country. They are also training up 50 people a year to be able to provide tech support for Debian. I came across the booth for Mostly Harmless, who liberate old hardware by installing free firmware on it. It was nice seeing all the devices out there that could be liberated, and how it can breathe new life into old hardware.
Some hopefully harmless soldering.
Overall, the community and their activities in India are very impressive, and I wish I had more time to get to know everyone better.
Food
Oh yes, one more thing: the food was great. I tasted more different kinds of curry than I ever did in my whole life up to this point. The lunch on banana leaves was interesting, and so was learning how to eat this food properly by hand (thanks to the locals who insisted on teaching me!), it was a fruitful experience? This might catch on at home too: fewer dishes to take care of!
Special thanks to the DebConf23 Team
I think this may have been one of the toughest DebConfs to organise yet, and I don't think many people outside of the DebConf team know about all the challenges and adversity this team has faced in organising it. Even just getting to the previous DebConf in Kosovo was a long and tedious and somewhat risky process. Through it all, they were absolute pros. Not once did I see them get angry or yell at each other; whenever a problem came up, they just dealt with it. They did a really stellar job, and I did make a point of telling them on the last day that everyone appreciated all the work that they did.
Back to my nest
I brought Dax a ball back from India; he seems to have forgiven me for not taking him along.
I'll probably take a few days soon to focus a bit on my bugs and catch up on my original DebCamp goals. If you made it this far, thanks for reading! And thanks to everyone for being such fantastic people.

10 September 2023

Freexian Collaborators: Debian Contributions: /usr-merge updates, Salsa CI progress, DebConf23 lead-up, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

/usr-merge work, by Helmut Grohne, et al. Given that we now have consensus on moving forward by moving aliased files from / to /usr, we will also run into the problems that the file move moratorium was meant to prevent. The way forward is detecting them early and applying workarounds on a per-package basis. Said detection is now automated using the Debian Usr Merge Analysis Tool. As problems are reported to the bug tracking system, they are connected to the reports if properly usertagged. Bugs and patches for problem categories DEP17-P2 and DEP17-P6 have been filed. After consensus has been reached on the bootstrapping matters, debootstrap has been changed to swap the initial unpack and merging to avoid unpack errors due to pre-existing links. This is a precondition for having base-files install the aliasing symbolic links eventually. It was identified that the root filesystem used by the Debian installer is still unmerged and a change has been proposed. debhelper was changed to recognize systemd units installed to /usr. A discussion with the CTTE and release team on repealing the moratorium has been initiated.
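As a concrete illustration of the aliasing being discussed (my own sketch, not part of the Debian Usr Merge Analysis Tool), the state of a given system can be seen by checking whether the historical top-level directories are symlinks into /usr:
#!/bin/sh
# Hypothetical check: on a merged-/usr system /bin, /sbin and the lib
# directories are symlinks into /usr, while an unmerged system still has
# real directories; the DEP17 problem categories arise from files moving
# between these two layouts.
for d in /bin /sbin /lib /lib32 /lib64 /libx32; do
    if [ -L "$d" ]; then
        printf '%s -> %s (aliased into /usr)\n' "$d" "$(readlink "$d")"
    elif [ -d "$d" ]; then
        printf '%s is a real directory (unmerged)\n' "$d"
    fi
done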

Salsa CI work, by Santiago Ruano Rincón August was a busy month in the Salsa CI world. Santiago reviewed and merged a bunch of MRs that have improved the project in different aspects: the aptly job got two MRs from Philip Hands. With the first one, aptly can now export a couple of variables in a dotenv file, and with the second, it can include packages from multiple artifact directories. These MRs lay the groundwork for improving how reverse dependencies are tested with Salsa CI; Santiago is working on documenting this. As a result of the mass bug filing done in August, Salsa CI now includes a job that tests whether a package builds twice in a row, thanks to MRs from Sebastiaan Couwenberg and Johannes Schauer Marin Rodrigues. Last but not least, Santiago helped Johannes Schauer Marin Rodrigues complete the support for arm64-only pipelines.
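For readers who have not used Salsa CI: a package usually opts in by shipping a small debian/salsa-ci.yml and pointing the Salsa project's CI configuration path at it. The following is a minimal, hypothetical setup using the include URLs the salsa-ci-team pipeline has long documented; check the pipeline's README for the currently recommended form.
# Hypothetical opt-in to the Salsa CI pipeline from within a packaging repo.
cat > debian/salsa-ci.yml <<'EOF'
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
EOF
git add debian/salsa-ci.yml
git commit -m "Enable the Salsa CI pipeline"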

DebConf23 lead-up, by Stefano Rivera Stefano wears a few hats in the DebConf organization, and in the lead-up to the conference in mid-September they've all been quite busy. As one of the treasurers of DebConf 23, there has been a final budget update, and quite a few payments to coordinate from Debian's Trusted Organizations. We try to close the books from the previous conference at the next one, so a push was made to get DebConf 22 account statements out of TOs and record them in the conference ledger. As a website developer, we had a number of registration-related tasks, emailing attendees and trying to estimate numbers for food and accommodation. As a conference committee member, the job was mostly taking calls and helping the local team to make decisions on urgent issues. For example, getting conference visas issued to attendees required getting political approval from the Indian government. We only discovered the full process for this too late to clear some complex cases, so this required some hard calls on skipping some countries from the application list, allowing everyone else to get visas in time. Unfortunate, but necessary.

Miscellaneous contributions
  • Raphaël Hertzog updated gnome-shell-extension-hamster to a new upstream git snapshot that is compatible with GNOME Shell 44 that was recently uploaded to Debian unstable/testing. This extension makes it easy to start/stop tracking time with Hamster Time Tracker. Very handy for consultants like us who are billing their work per hour.
  • Raphaël also updated zim to the latest upstream release (0.74.2). This is a desktop wiki that can be very useful as a note-taking tool to build your own personal knowledge base or even to manage your personal todo lists.
  • Utkarsh reviewed and sponsored some uploads from mentors.debian.net.
  • Utkarsh helped the local team and the bursary team with some more DebConf activities and helped finalize the data.
  • Thorsten tried to update the hplip package. Unfortunately, upstream added some new compressed files that need to appear uncompressed in the package. Even though this sounded like an easy task that seemed to be handled already by the current debian/rules, the new type of files broke this implementation and made the package no longer buildable. The problem has been solved and the upload will happen soon.
  • Helmut sent 7 patches for cross build failures. Since dpkg-buildflags now defaults to issuing arm64-specific compiler flags, more care is needed to distinguish between build architecture flags and host architecture flags than previously (a short illustration follows this list).
  • Stefano pushed the final bit of the tox 4 transition over the line in Debian, allowing dh-python and tox 4 to migrate to testing. We got caught up in a few unusual bugs in tox and the way we run it in Debian package building (which had to change with tox 4). This resulted in a couple of patches upstream.
  • Stefano visited Haifa, Israel, to see the proposed DebConf 24 venue and meet with the local team. While the venue isn't committed yet, we have high hopes for it.
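To illustrate the build- versus host-architecture distinction from the cross-build item above (my example, not one of Helmut's patches), dpkg-architecture can run dpkg-buildflags with the host architecture overridden, which makes the architecture-specific flags visible. I am assuming here that dpkg-buildflags honours the DEB_HOST_ARCH set by dpkg-architecture, which is my understanding of the interface.
# Flags for a native build on the current machine:
dpkg-buildflags --get CFLAGS
# Flags when the host architecture is arm64 (e.g. when cross building);
# any difference between the two is what cross-build-aware packaging
# has to keep separate.
dpkg-architecture -a arm64 -c dpkg-buildflags --get CFLAGS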

21 August 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 1)

TL;DR: Color is a visual perception. Human eyes can detect a broader range of colors than any devices in the graphics chain. Since each device can generate, capture or reproduce a specific subset of colors and tones, color management controls color conversion and calibration across devices to ensure a more accurate and consistent color representation. We can expose a GPU-accelerated display color management pipeline to support this process and enhance results, and this is what we are doing on Linux to improve color management on Gamescope/SteamDeck. Even with the challenges of being external developers, we have been working on mapping AMD GPU color capabilities to the Linux kernel color management interface, which is a combination of DRM and AMD driver-specific color properties. This more extensive color management pipeline includes pre-defined Transfer Functions, 1-Dimensional LookUp Tables (1D LUTs), and 3D LUTs before and after the plane composition/blending.
The study of color is well-established and has been explored for many years. Color science and research findings have also guided technology innovations. As a result, color in Computer Graphics is a very complex topic that I'm putting a lot of effort into becoming familiar with. I always find myself rereading all the materials I have collected about color space and operations since I started this journey (about one year ago). I also understand how hard it is to find consensus on some color subjects, as exemplified by all explanations around the 2015 online viral phenomenon of The Black and Blue Dress. Have you heard about it? What is the color of the dress for you? So, taking into account my skills with colors and building consensus, this blog post only focuses on GPU hardware capabilities to support color management :-D If you want to learn more about color concepts and color on Linux, you can find useful links at the end of this blog post.

Linux Kernel, show me the colors ;D DRM color management interface only exposes a small set of post-blending color properties. Proposals to enhance the DRM color API from different vendors have landed the subsystem mailing list over the last few years. On one hand, we got some suggestions to extend DRM post-blending/CRTC color API: DRM CRTC 3D LUT for R-Car (2020 version); DRM CRTC 3D LUT for Intel (draft - 2020); DRM CRTC 3D LUT for AMD by Igalia (v2 - 2023); DRM CRTC 3D LUT for R-Car (v2 - 2023). On the other hand, some proposals to extend DRM pre-blending/plane API: DRM plane colors for Intel (v2 - 2021); DRM plane API for AMD (v3 - 2021); DRM plane 3D LUT for AMD - 2021. Finally, Simon Ser sent the latest proposal in May 2023: Plane color pipeline KMS uAPI, from discussions in the 2023 Display/HDR Hackfest, and it is still under evaluation by the Linux Graphics community. All previous proposals seek a generic solution for expanding the API, but many seem to have stalled due to the uncertainty of matching well the hardware capabilities of all vendors. Meanwhile, the use of AMD color capabilities on Linux remained limited by the DRM interface, as the DCN 3.0 family color caps and mapping diagram below shows the Linux/DRM color interface without driver-specific color properties [*]: Bearing in mind that we need to know the variety of color pipelines in the subsystem to be clear about a generic solution, we decided to approach the issue from a different perspective and worked on enabling a set of Driver-Specific Color Properties for AMD Display Drivers. As a result, I recently sent another round of the AMD driver-specific color mgmt API. For those who have been following the AMD driver-specific proposal since the beginning (see [RFC][V1]), the main new features of the latest version [v2] are the addition of pre-blending Color Transformation Matrix (plane CTM) and the differentiation of Pre-defined Transfer Functions (TF) supported by color blocks. For those who just got here, I will recap this work in two blog posts. This one describes the current status of the AMD display driver in the Linux kernel/DRM subsystem and what changes with the driver-specific properties. In the next post, we go deeper to describe the features of each color block and provide a better picture of what is available in terms of color management for Linux.

The Linux kernel color management API and AMD hardware color capabilities Before discussing colors in the Linux kernel with AMD hardware, consider accessing the Linux kernel documentation (version 6.5.0-rc5). In the AMD Display documentation, you will find my previous work documenting AMD hardware color capabilities and the Color Management Properties. It describes how AMD Display Manager (DM) intermediates requests between the AMD Display Core component (DC) and the Linux/DRM kernel interface for color management features. It also describes the relevant function to call the AMD color module in building curves for content space transformations. A subsection also describes hardware color capabilities and how they evolve between versions. This subsection, DC Color Capabilities between DCN generations, is a good starting point to understand what we have been doing on the kernel side to provide a broader color management API with AMD driver-specific properties.

Why do we need more kernel color properties on Linux? Blending is the process of combining multiple planes (framebuffers abstraction) according to their mode settings. Before blending, we can manage the colors of various planes separately; after blending, we have combined those planes in only one output per CRTC. Color conversions after blending would be enough in a single-plane scenario or when dealing with planes in the same color space on the kernel side. Still, it cannot help to handle the blending of multiple planes with different color spaces and luminance levels. With plane color management properties, userspace can get a more accurate representation of colors to deal with the diversity of color profiles of devices in the graphics chain, bring a wide color gamut (WCG), convert High-Dynamic-Range (HDR) content to Standard-Dynamic-Range (SDR) content (and vice-versa). With a GPU-accelerated display color management pipeline, we can use hardware blocks for color conversions and color mapping and support advanced color management. The current DRM color management API enables us to perform some color conversions after blending, but there is no interface to calibrate input space by planes. Note that here I'm not considering some workarounds in the AMD display manager mapping of DRM CRTC de-gamma and DRM CRTC CTM property to pre-blending DC de-gamma and gamut remap block, respectively. So, in more detail, it only exposes three post-blending features:
  • DRM CRTC de-gamma: used to convert the framebuffer's colors to linear gamma;
  • DRM CRTC CTM: used for color space conversion;
  • DRM CRTC gamma: used to convert colors to the gamma space of the connected screen.

AMD driver-specific color management interface We can compare the Linux color management API with and without the driver-specific color properties. From now on, we denote driver-specific properties with the AMD prefix and generic properties with the DRM prefix. For visual comparison, I bring the DCN 3.0 family color caps and mapping diagram closer and present it here again: Mixing AMD driver-specific color properties with DRM generic color properties, we have a broader Linux color management system with the following features exposed by properties in the plane and CRTC interface, as summarized by this updated diagram: The blocks highlighted by red lines are the new properties in the driver-specific interface developed by me (Igalia) and Joshua (Valve). The red dashed lines are new links between API and AMD driver components implemented by us to connect the Linux/DRM interface to AMD hardware blocks, mapping components accordingly. In short, we have the following color management properties exposed by the DRM/AMD display driver (a quick way to check which of these a given kernel actually exposes is sketched after the list):
  • Pre-blending - AMD Display Pipe and Plane (DPP):
    • AMD plane de-gamma: 1D LUT and pre-defined transfer functions; used to linearize the input space of a plane;
    • AMD plane CTM: 3x4 matrix; used to convert plane color space;
    • AMD plane shaper: 1D LUT and pre-defined transfer functions; used to delinearize and/or normalize colors before applying 3D LUT;
    • AMD plane 3D LUT: 17x17x17 size with 12 bit-depth; three dimensional lookup table used for advanced color mapping;
    • AMD plane blend/out gamma: 1D LUT and pre-defined transfer functions; used to linearize back the color space after 3D LUT for blending.
  • Post-blending - AMD Multiple Pipe/Plane Combined (MPC):
    • DRM CRTC de-gamma: 1D LUT (can't be set together with plane de-gamma);
    • DRM CRTC CTM: 3x3 matrix (remapped to post-blending matrix);
    • DRM CRTC gamma: 1D LUT + AMD CRTC gamma TF; added to take advantage of driver pre-defined transfer functions;
Note: You can find more about AMD display blocks in the Display Core Next (DCN) - Linux kernel documentation, provided by Rodrigo Siqueira (Linux/AMD display developer) in a 2021 documentation series. In the next post, I'll revisit this topic, explaining display and color blocks in detail.
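As a rough way to see which of these properties a given kernel and driver actually expose (my own suggestion, not something from this series), the modetest utility shipped with libdrm dumps KMS objects together with their properties; on a kernel without the driver-specific patches, only the generic DRM ones will show up.
# Hypothetical inspection of the KMS color properties exposed by amdgpu;
# modetest comes with libdrm (the libdrm-tests package on Debian) and -p
# lists CRTCs and planes with their properties.
modetest -M amdgpu -p | grep -iE 'gamma|ctm|lut|shaper'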

How did we get a large set of color features from AMD display hardware? So, looking at AMD hardware color capabilities in the first diagram, we can see no post-blending (MPC) de-gamma block in any hardware families. We can also see that the AMD display driver maps CRTC/post-blending CTM to pre-blending (DPP) gamut_remap, but there is post-blending (MPC) gamut_remap (DRM CTM) from newer hardware versions that include SteamDeck hardware. You can find more details about hardware versions in the Linux kernel documentation/AMDGPU Product Information. I needed to rework these two mappings mentioned above to provide pre-blending/plane de-gamma and CTM for SteamDeck. I changed the DC mapping to detach stream gamut remap matrixes from the DPP gamut remap block. That means mapping AMD plane CTM directly to the DPP/pre-blending gamut remap block and DRM CRTC CTM to the MPC/post-blending gamut remap block. In this sense, I also limited plane CTM properties to those hardware versions with MPC/post-blending gamut_remap capabilities, since older versions cannot support this feature without clashes with DRM CRTC CTM. Unfortunately, I couldn't prevent conflict between AMD plane de-gamma and DRM CRTC de-gamma, since post-blending de-gamma isn't available in any AMD hardware versions until now. The fact is that a post-blending de-gamma makes little sense in the AMD color pipeline, where plane blending works better in a linear space, and there are enough color blocks to linearize content before blending. To deal with this conflict, the driver now rejects atomic commits if users try to set both AMD plane de-gamma and DRM CRTC de-gamma simultaneously. Finally, we had no other clashes when enabling other AMD driver-specific color properties for our use case, Gamescope/SteamDeck. Our main work for the remaining properties was understanding the data flow of each property, the hardware capabilities and limitations, and how to shape the data for programming the registers - AMD color block capabilities (and limitations) are the topics of the next blog post. Besides that, we fixed some driver bugs along the way, since it was the first Linux use case for most of the new color properties, and some behaviors are only exposed when exercising the engine. Take a look at the Gamescope/Steam Deck Color Pipeline[**], and see how Gamescope uses the new API to manage color space conversions and calibration (please click on the image for a better view): In the next blog post, I'll describe the implementation and technical details of each pre- and post-blending color block/property on the AMD display driver. * Thanks to Harry Wentland for helping with diagrams, color concepts and AMD capabilities. ** Thanks to Joshua Ashton for providing and explaining the Gamescope/Steam Deck color pipeline. *** Thanks to the Linux Graphics community - explicitly Harry, Joshua, Pekka, Simon, Sebastian, Siqueira, Alex H. and Ville - for all the learning during this Linux DRM/AMD color journey. Also, thanks to Carlos and Tomas for organizing the 2023 Display/HDR Hackfest, where we had a great and immersive opportunity to discuss Color & HDR on Linux.

9 July 2023

Russell Coker: Matrix

Introduction
In 2020 I first set up a Matrix [1] server. Matrix is a full featured instant messaging protocol which requires a less stringent definition of instant; messages being delayed for minutes aren't that uncommon in my experience. Matrix is a federated service where the servers all store copies of the room data, so when you connect your client to its home server it gets all the messages that were published while you were offline. It is widely regarded as being IRC but without a need to be connected all the time. One of its noteworthy features is support for end to end encryption (so the server can't access cleartext messages from users) as a core feature. Matrix was designed for bridging with other protocols, the most well known of which is IRC. The most common Matrix server software is Synapse, which is written in Python and uses a PostgreSQL database as its backend [2]. My tests have shown that a lightly loaded Synapse server with less than a dozen users and only one or two active users will have noticeable performance problems if the PostgreSQL database is stored on SATA hard drives. This seems like the type of software that wouldn't have been developed before SSDs became commonly affordable. The matrix-synapse package is in Debian/Unstable and the backports repositories for Bullseye and Buster. As Matrix is still being very actively developed you want to have a recent version of all related software, so Debian/Buster isn't a good platform for running it; Bullseye or Bookworm are the preferred platforms. Configuring Synapse isn't really hard, but there are some potential problems. The first thing to do is to choose the DNS name; you can never change it without dropping the database (fresh install of all software and no documented way of keeping user configuration) so you don't want to get it wrong. Generally you will want the Matrix addresses at the top level of the domain you choose. When setting up a Matrix server for my local LUG I chose the top level of their domain, luv.asn.au, as the DNS name for the server. If you don't want to run a server then there are many open servers offering free accounts.
Server Configuration
Part of doing this configuration required creating the URL https://luv.asn.au/.well-known/matrix/client with the following contents so clients know where to connect. Note that you should not set up Jitsi sections without first discussing it with the people who run the Jitsi server in question.
{
  "m.homeserver": {
    "base_url": "https://luv.asn.au"
  },
  "jitsi": {
    "preferredDomain": "jitsi.perthchat.org"
  },
  "im.vector.riot.jitsi": {
    "preferredDomain": "jitsi.perthchat.org"
  }
}
Also the URL https://luv.asn.au/.well-known/matrix/server for other servers to know where to connect:
{
  "m.server": "luv.asn.au:8448"
}
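A quick way to confirm that both documents are being served correctly is to fetch them from outside the server; this is just a generic sanity check of mine, and python3 -m json.tool is only used to validate and pretty-print the JSON.
curl -s https://luv.asn.au/.well-known/matrix/client | python3 -m json.tool
curl -s https://luv.asn.au/.well-known/matrix/server | python3 -m json.tool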
If the base_url or the m.server points to a name that isn't configured then you need to add it to the web server configuration. See section 3.1 of the documentation about well known Matrix client fields [3]. The SE Linux specific parts of the configuration are to run the following commands, as the Bookworm and Bullseye SE Linux policies have support for Synapse:
setsebool -P httpd_setrlimit 1
setsebool -P httpd_can_network_relay 1
setsebool -P matrix_postgresql_connect 1
To configure Apache you have to enable proxy mode and SSL with the command a2enmod proxy ssl proxy_http, add the line Listen 8448 to /etc/apache2/ports.conf, and restart Apache. The command chmod 700 /etc/matrix-synapse should probably be run to improve security; there's no reason for less restrictive permissions on that directory. In the /etc/matrix-synapse/homeserver.yaml file the macaroon_secret_key is a random key for generating tokens. To use the matrix.org server as a trusted key server and not receive warnings put the following line in the config file:
suppress_key_server_warning: true
A line like the following is needed to configure the baseurl:
public_baseurl: https://luv.asn.au:8448/
To have Synapse directly accept port 8448 connections you have to change bind_addresses in the first section of listeners to the global listen IPv6 and IPv4 addresses. The registration_shared_secret is a password for adding users. When you have set that you can write a shell script to add new users such as:
#!/bin/bash
# usage: matrix_new_user USER PASS
synapse_register_new_matrix_user -u "$1" -p "$2" -a -k THEPASSWORD
You need to set tls_certificate_path and tls_private_key_path to appropriate values, usually something like the following:
tls_certificate_path: "/etc/letsencrypt/live/www.luv.asn.au-0001/fullchain.pem"
tls_private_key_path: "/etc/letsencrypt/live/www.luv.asn.au-0001/privkey.pem"
For the database section you need something like the following which matches your PostgreSQL setup:
  name: "psycopg2"
  args:
    user: WWWWWW
    password: XXXXXXX
    database: YYYYYYY
    host: ZZZZZZ
    cp_min: 5
    cp_max: 10
You need to run psql commands like the following to set it up:
create role WWWWWW login password 'XXXXXXX';
create database YYYYYYY with owner WWWWWW ENCODING 'UTF8' LOCALE 'C' TEMPLATE 'template0';
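Assuming the same placeholder values as above, a one-off connection test with the standard PostgreSQL client confirms that the role and database work before Synapse is started (my suggestion, not from the Synapse documentation):
# Hypothetical smoke test using the credentials Synapse will use; the
# placeholders match the database section of homeserver.yaml above.
PGPASSWORD='XXXXXXX' psql -h ZZZZZZ -U WWWWWW -d YYYYYYY -c 'SELECT 1;'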
For the Apache configuration you need something like the following for the port 8448 web server:
<VirtualHost *:8448>
  SSLEngine on
...
  ServerName luv.asn.au
  AllowEncodedSlashes NoDecode
  ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
  ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
Also you must add the ProxyPass section to the port 443 configuration (the server that is probably doing other more directly user visible things) for most (all?) end-user clients:
  ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
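Once Apache and Synapse are both running, a quick check of my own is to request a known endpoint through each port; both requests should return JSON rather than an Apache error page.
# Client API via the normal HTTPS port:
curl -s https://luv.asn.au/_matrix/client/versions
# Federation API via port 8448:
curl -s https://luv.asn.au:8448/_matrix/federation/v1/version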
This web page can be used to test listing rooms via federation without logging in [4]. If it gives the error Can't find this server or its room list then you must set allow_public_rooms_without_auth and allow_public_rooms_over_federation to true in /etc/matrix-synapse/homeserver.yaml. The Matrix Federation Tester site [5] is good for testing new servers and for tests after network changes.
Clients
The Element (formerly known as Riot) client is the most common [6]. The following APT repository will allow you to install Element via apt install element-desktop on Debian/Buster.
deb https://packages.riot.im/debian/ default main
The Debian backports repository for Buster has the latest version of Quaternion; apt install quaternion should install that for you. Quaternion doesn't support end to end encryption (E2EE) and also doesn't seem to have good support for some other features like being invited to a room. My current favourite client is Schildi Chat on Android [7], which has a notification message 24*7 to reduce the incidence of Android killing it. Eventually I want to go to a PinePhone or Librem 5 for all my phone use, so I need to find a full featured Linux client that works on a small screen.
Comparing to Jabber
I plan to keep using Jabber for alerts because it really does instant messaging; it can reliably get the message to me within a matter of seconds. Also there is a selection of command-line clients for Jabber to allow sending messages from servers. When I first investigated Matrix there was no program suitable for sending messages from a script and the libraries for the protocol made it unreasonably difficult to write one. Now there is a Matrix client written in shell script [8] which might do that. But the delay in receiving messages is still a problem. Also the Matrix clients I've tried so far have UIs that are more suited to serious chat than to quickly reading a notification message.
Bridges
Here is a list of bridges between Matrix and other protocols [9]. You can run bridges yourself for many different messaging protocols including Slack, Discord, and Messenger. There are also bridges run for public use for most IRC channels. Here is a list of integrations with other services [10]; this is for interacting with things other than IM systems, such as RSS feeds, polls, and other things. This also has some frameworks for writing bots.
More Information
The Debian wiki page about Matrix is good [11]. The view.matrix.org site allows searching for public rooms [12].

13 June 2023

Matt Brown: Ventilation Monitoring Market Research

Over the last month I've performed some market research to better understand the potential for co2mon.nz and to help me decide whether the product I've built has a fit with the market or not. The key conclusions I've drawn from this work are: Keep reading to hear more about the results that led to those conclusions.

Survey The first piece of research I undertook was a survey covering three topics: views on indoor air quality, how respondents currently monitor indoor air quality and the desired features, including price, for a CO2 monitor. The survey was distributed to my extended personal network via social media, email and word of mouth. I offered respondents the opportunity to win a year of free monitoring as an incentive and received just under 70 responses overall - the lucky winner of that prize was Sam H of Auckland whose shiny new CO2 monitor will be in the mail shortly.

Views on indoor air quality
  • Nearly all respondents strongly agreed that clean, fresh indoor air is important for avoiding sickness and enabling our best work, learning and general cognitive performance, with not a single negative response.
  • 25% of respondents indicated they did not have a good understanding of the quality of the indoor air they were breathing versus 43% who indicated they had a good understanding of their indoor air quality.
  • Nearly 70% of respondents agreed (and greater than 40% strongly agreed) that real-time monitoring is beneficial and worth investing time and money in providing, with a similar distribution of responses agreeing it should be required in all shared indoor spaces.

Current ventilation monitoring approaches
  • For the home setting, using our senses was the most common method of understanding air quality, and only 6% of respondents were unhappy with their ability to monitor ventilation at home.
  • At work, trusting the owner of the building to monitor ventilation was the most common method, although using our senses and some personally collected data also featured for 20% of respondents. While the majority of respondents saw some room for improvement here, less than 20% of respondents were unsatisfied with the ability to monitor ventilation at work.
  • In shared public spaces using our senses and trusting the owner were equally popular with very little use of any data reported. The majority of respondents (40%) were unsatisfied with this situation with 34% seeing some room for improvement and very few being satisfied overall.

CO2 monitoring product features
  • A screen and WiFi were both strongly supported features with less than 10% of respondents seeing them as irrelevant and a large majority of answers skewing towards essential.
  • Coloured lights providing a quick indication were not viewed as important by 13% of respondents and while the majority of answers were towards essential there was also a large (22%) set of respondents who were indifferent to this feature.
  • The ability to access measurements and reports via a web interface was very mixed. Around 20% of respondents reported the feature as irrelevant, 20% essential, with the majority seeing it as useful but not essential.
  • Almost all respondents strongly indicated that additional air quality metrics beyond CO2 were important to collect.
  • Respondents mostly indicated the proposed prices are too high (64%), with essentially no responses suggesting they were too low and the balance (43%) in the middle. Only 5% of respondents indicated a preference for a rental option over a straight purchase.

Advertising In parallel with the survey, I worked with my cousin who runs a marketing agency, The Asset, to place some Facebook ads aiming to systematically evaluate what combination of images and text would draw the best response. It's been an interesting process - despite working for Google for 15 years, I know relatively little about the day to day practice of online advertising! I think we're about 50% of the way through that process of systematically building a funnel of traffic; it's been a steep learning curve and it's clear there's significantly more thought and time that would need to be invested into this were it to be the primary driver of sales for a business. It's interesting to see how what resonates or doesn't resonate with the audience is often completely different to what I expect, confirming the importance of having a process to evaluate and tweak how the advertising runs. After just under 2 weeks of advertising with a daily budget in the $20 - $30 range, my ads have had just under 17k impressions by 10k distinct people, resulting in 76 visits to the co2mon.nz website and zero sales. The ads themselves received 233 clicks, so there's clearly a lot of room for further improvement and revision of the ad text itself to present a more compelling message. Unfortunately the most common response and feedback to the ads themselves has been comments arguing that CO2 is wonderful, climate change is invented and all our problems would be solved if we had more CO2 everywhere. Tedious to deal with, but also a useful reminder about awareness and interest in the problem to contrast with the results from the survey of my extended personal network!

Feedback from other conversations In addition to the survey and advertising I've had conversations with some local air conditioning and ventilation businesses as well as a commercial building management firm - all providing similar feedback to the results from the survey: acknowledgement that air quality is important and relatively immaturely measured currently, but low urgency or pain to change or remedy that situation. Another interesting point that's come up in conversations with various small business owners is what to do if or when the monitoring shows a ventilation problem. The obvious answer of opening the windows more does not seem to be particularly well received. Without a compelling solution to offer to the potential problem that the monitoring might reveal, I often sense a reluctance from people to invest too much time and money in something which may create a problem in a space they don't currently see as urgent.

Conclusions The responses are interesting and surprising to me in a few ways (no interest in rental, favouring web interface over app), but at the end of the day lead to the two conclusions described above: Air quality is acknowledged as important, but monitoring it is not an urgent or pressing problem for most people. At home and work the majority of people are OK with relying on their senses or trusting someone else to maintain ventilation. They wouldn't object to improvements, but the feedback is that ventilation monitoring is not a problem people are actively looking to solve. The number of people who do see this as an urgent enough problem to invest money into solving is low - even within the biased sample of my extended network. There is a stronger set of evidence for the problem being seen as more urgent by the users of shared public spaces - but I've not been able to find any evidence that the owners and managers of those spaces feel the same urgency or duty of care towards their users to invest in this space. Most of the opportunity is in the hardware rather than the software service. This signal comes through in the feedback on the pricing (preferring outright purchase vs rental), but it's also been directly expressed in the free-form comments and other conversations I've had, and in the relative importance given to the physical product features over the web/app interfaces in the survey results.

Wrap Up I'm glad I finally spent the time doing this research, particularly the survey; these are good lessons to learn, even if I should have taken the time to learn them a year ago - so I can write that reminder (do your research before building a product) down as a key outcome of this process too! Stay tuned for more details on the other work I've been doing recently on the hardware side of co2mon.nz and what these results mean for my overall plans. As always, I'd love to hear from you if these results give you ideas or questions you'd like to discuss.

29 May 2023

Russell Coker: Considering Convergence

What is Convergence
In 2013 Kyle Rankin (at the time Linux Journal columnist and CSO of Purism) wrote a Linux Journal article about Linux convergence [1] (which means using a phone and a dock to replace a desktop) featuring the Nokia N900 smart phone and a chroot environment on the Motorola Droid 4 Android phone. Both of them had very limited hardware even by the standards of the day, and neither of them was a system I'd consider using all the time. None of the Android phones I used at that time were at all comparable to any sort of desktop system I'd want to use.
Hardware for Convergence: Comparing a Phone to a Laptop
The first hardware issue for convergence is docks and other accessories to attach a small computer to hardware designed for larger computers. Laptop docks have been around for decades, and for decades I haven't been using them because they have all been expensive and specific to a particular model of laptop. Having an expensive dock at home and an expensive dock at the office and then replacing them both when the laptop is replaced may work well for some people but wasn't something I wanted to do. The USB-C interface supports data, power, and DisplayPort video over the same cable; now USB-C docks start at about $20 on eBay and dock functionality is built in to many new monitors. I can take a USB-C device to the office of any large company and know there's a good chance that there will be a USB-C dock ready for me to use. The fact that USB-C is a standard feature for phones gives obvious potential for convergence. The next issue is performance. The Passmark benchmark seems like a reasonable way to compare CPUs [2]. It may not be the best benchmark but it has an excellent set of published results for Intel and AMD CPUs. I ran that benchmark on my Librem5 [3] and got a result of 507 for the CPU score. At the end of 2017 I got a Thinkpad X301 [4] which rates 678 on Passmark. So the Librem5 has 3/4 the CPU power of a laptop that was OK for my use in 2018. Given that the X301 was about the minimum specs for a PC that I can use (for things other than serious compiles, running VMs, etc), the Librem 5 has 3/4 the CPU power, only 3G of RAM compared to 6G, and 32G of storage compared to 64G. Here is the Passmark page for my Librem5 [5]. As an aside, my Librem5 is apparently 25% faster than the other results for the same CPU; did the Purism people do something to make their device faster than most? For me the Librem5 would be at the very low end of what I would consider a usable desktop system. A friend's N900 (like the one Kyle used) won't complete the Passmark test, apparently due to the Extended Instructions (NEON) test failing. But of the rest of the tests, most of them gave a result that was well below 10% of the result from the Librem5, and only the Compression and CPU Single Threaded tests managed to exceed 1/4 the speed of the Librem5. One thing to note when considering the specs of phones vs desktop systems is that the MicroSD cards designed for use in dashcams and other continuous recording devices have TBW ratings that compare well to SSDs designed for use in PCs, so swap to a MicroSD card should work reasonably well and be significantly faster than the hard disks I was using for swap in 2013! In 2013 I was using a Thinkpad T420 as my main system [6]; it had 8G of RAM (the same as my current laptop), although I noted that 4G was slow but usable at the time. Basically it seems that the Librem5 was about the sort of hardware I could have used for convergence in 2013.
But by today's standards, and with the need to drive 4K monitors etc, it's not that great. The N900 hardware specs seem very similar to the Thinkpads I was using from 1998 to about 2003. However a device for convergence will usually do more things than a laptop (IE phone and camera functionality) and software had become significantly more bloated in the 1998 to 2013 time period. A Linux desktop system performed reasonably with 32MB of RAM in 1998 but by 2013 even 2G was limiting.
Software Issues for Convergence
Jeremiah Foster (Director PureOS at Purism) wrote an interesting overview of some of the software issues of convergence [7]. One of the most obvious is that the best app design for a small screen is often very different from that for a large screen. Phone apps usually have a single window that shows a view of only one part of the data that is being worked on (EG an email program that shows a list of messages or the contents of a single message but not both). Desktop apps of any complexity will either have support for multiple windows for different data (EG two messages displayed in different windows) or a single window with multiple different types of data (EG message list and a single message). What we ideally want is all the important apps to support changing modes when the active display is changed to one of a different size/resolution. The Purism people are doing some really good work in this regard, but it is a large project that needs to involve a huge range of apps. The next thing that needs to be addressed is the OS interface for managing apps and metadata. On a phone you swipe from one part of the screen to get a list of apps, while on a desktop you will probably have a small section of a large monitor reserved for showing a window list. On a desktop you will typically have an app to manage a list of items copied to the clipboard, while on Android and iOS there is AFAIK no standard way to do that (there is a selection of apps in the Google Play Store to do this sort of thing). Purism has a blog post by Sebastian Krzyszkowiak about some of the development of the OS to make it work better for convergence and the status of getting it in Debian [8]. The limitations in phone hardware force changes to the software. Software needs to use less memory because phone RAM can't be upgraded. The OS needs to be configured for low RAM use, which includes technologies like the zram kernel memory compression feature.
Security
When mobile phones first came out they were used for less secret data. Loss of a phone was annoying and expensive but not a security problem. Now phone theft for the purpose of gaining access to resources stored on the phone is becoming a known crime; here is a news report about a thief stealing credit cards and phones to receive the SMS notifications from banks [9]. We should expect that trend to continue; stealing mobile devices for ssh keys, management tools for cloud services, etc is something we should expect to happen. A problem with mobile phones in current use is that they have one login used for all access, from trivial things done in low security environments (EG paying for public transport) to sensitive things done in more secure environments (EG online banking and healthcare). Some applications take extra precautions for this, EG the Android app I use for online banking requires authentication before performing any operations. The Samsung version of Android has a system called Knox for running a separate secured workspace [10].
I don't think that the Knox approach would work well for a full Linux desktop environment, but something that provides some similar features would be a really good idea. Also running apps in containers as much as possible would be a good security feature; this is done by default in Android and desktop OSs could benefit from it. The Linux desktop security model of logging in to a single account and getting access to everything has been outdated for a long time, probably ever since single-user Linux systems became popular. We need to change this for many reasons and convergence just makes it more urgent.
Conclusion
I have become convinced that convergence is the way of the future. It has the potential to make transporting computers easier, purchasing cheaper (buy just a phone and not buy desktop and laptop systems), and access to data more convenient. The Librem5 doesn't seem up to the task for my use due to being slow and having short battery life; the PinePhone Pro has more powerful hardware and allegedly has better battery life [11], so it might work for my needs. The PinePhone Pro probably won't meet the desktop computing needs of most people, but hardware keeps getting faster and cheaper so eventually most people could have their computing needs satisfied with a phone. The current state of software for convergence and for Linux desktop security needs some improvement. I have some experience with Linux security so this is something I can help work on. To work on improving this I asked Linux Australia for a grant for me and a friend to get PinePhone Pro devices and a selection of accessories to go with them. Having both a Librem5 and a PinePhone Pro means that I can test software in different configurations, which will make developing software easier. Also having a friend who's working on similar things will help a lot, especially as he has some low level hardware skills that I lack. Linux Australia awarded the grant and now the PinePhones are in transit. Hopefully I will have a PinePhone in a couple of weeks to start work on this.

25 May 2023

Dirk Eddelbuettel: qlcal 0.0.6 on CRAN: More updates from QuantLib

The sixth release of the still new-ish qlcal package arrived at CRAN today. qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more. This release brings updates to a few calendars which happened since the QuantLib 1.30 release, and also updates several of the (few) non-calendaring functions.

Changes in version 0.0.6 (2023-05-24)
  • Several calendars (India, Singapore, South Africa, South Korea) updated with post-QuantLib 1.30 changes (Sebastian Schmidt in #6)
  • Three now-unused scheduled files were removed (Dirk in #7)
  • A number of non-calendaring files used were synchronised with the current QuantLib repo (Dirk in #8)

Last release, we also added a quick little demo using xts to column-bind calendars produced from each of the different US sub-calendars. This is a slightly updated version of the sketch we tooted a few days ago. The output now is
> print(Reduce(cbind, Map(makeHol, cals)))
           LiborImpact NYSE GovernmentBond NERC FederalReserve
2023-01-02        TRUE TRUE           TRUE TRUE           TRUE
2023-01-16        TRUE TRUE           TRUE   NA           TRUE
2023-02-20        TRUE TRUE           TRUE   NA           TRUE
2023-04-07          NA TRUE             NA   NA             NA
2023-05-29        TRUE TRUE           TRUE TRUE           TRUE
2023-06-19        TRUE TRUE           TRUE   NA           TRUE
2023-07-04        TRUE TRUE           TRUE TRUE           TRUE
2023-09-04        TRUE TRUE           TRUE TRUE           TRUE
2023-10-09        TRUE   NA           TRUE   NA           TRUE
2023-11-10        TRUE   NA             NA   NA             NA
2023-11-23        TRUE TRUE           TRUE TRUE           TRUE
2023-12-25        TRUE TRUE           TRUE TRUE           TRUE
> 
Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

6 February 2023

Reproducible Builds: Reproducible Builds in January 2023

Welcome to the first report for 2023 from the Reproducible Builds project! In these reports we try and outline the most important things that we have been up to over the past month, as well as the most important things in/around the community. As a quick recap, the motivation behind the reproducible builds effort is to ensure no malicious flaws can be deliberately introduced during compilation and distribution of the software that we run on our devices. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.


News In a curious turn of events, GitHub first announced this month that the checksums of various Git archives may be subject to change, specifically that because:
the default compression for Git archives has recently changed. As a result, archives downloaded from GitHub may have different checksums even though the contents are completely unchanged.
This change (which was brought up on our mailing list last October) would have had quite wide-ranging implications for anyone wishing to validate and verify downloaded archives using cryptographic signatures. However, GitHub reversed this decision, updating their original announcement with a message that We are reverting this change for now. More details to follow. It appears that this was informed in part by an in-depth discussion in the GitHub Community issue tracker.
The Bundesamt für Sicherheit in der Informationstechnik (BSI) (trans: the Federal Office for Information Security) is the agency in charge of managing computer and communication security for the German federal government. They recently produced a report that touches on attacks on software supply-chains (Supply-Chain-Angriff). (German PDF)
Contributor Seb35 updated our website to fix broken links to Tails Git repository [ ][ ], and Holger updated a large number of pages around our recent summit in Venice [ ][ ][ ][ ].
Noak Jönsson has written an interesting paper entitled The State of Software Diversity in the Software Supply Chain of Ethereum Clients. As the paper outlines:
In this report, the software supply chains of the most popular Ethereum clients are cataloged and analyzed. The dependency graphs of Ethereum clients developed in Go, Rust, and Java, are studied. These clients are Geth, Prysm, OpenEthereum, Lighthouse, Besu, and Teku. To do so, their dependency graphs are transformed into a unified format. Quantitative metrics are used to depict the software supply chain of the blockchain. The results show a clear difference in the size of the software supply chain required for the execution layer and consensus layer of Ethereum.

Yongkui Han posted to our mailing list discussing making reproducible builds & GitBOM work together without gitBOM-ID embedding. GitBOM (now renamed to OmniBOR) is a project to enable automatic, verifiable artifact resolution across today's diverse software supply-chains [ ]. In addition, Fabian Keil wrote to us asking whether anyone in the community would be at Chemnitz Linux Days 2023, which is due to take place on 11th and 12th March (event info). Separate to this, Akihiro Suda posted to our mailing list just after the end of the month with a status report of bit-for-bit reproducible Docker/OCI images. As Akihiro mentions in their post, they will be giving a talk at FOSDEM in the Containers devroom titled Bit-for-bit reproducible builds with Dockerfile and that my talk will also mention how to pin the apt/dnf/apk/pacman packages with my repro-get tool.
The extremely popular Signal messenger app added upstream support for the SOURCE_DATE_EPOCH environment variable this month. This means that release tarballs of the Signal desktop client do not embed nondeterministic release information. [ ][ ]
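For readers unfamiliar with the variable: the SOURCE_DATE_EPOCH specification asks build tools to prefer that timestamp over the current clock whenever they would otherwise embed "now" into an artifact. A minimal sketch, not Signal's actual change, of how a Python build step typically honours it:
import os
import time

# Prefer SOURCE_DATE_EPOCH (seconds since the Unix epoch) when it is set, so
# that any embedded timestamps are deterministic; fall back to the clock.
build_timestamp = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_timestamp)))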

Distribution work

F-Droid & Android There was a very large number of changes in the F-Droid and wider Android ecosystem this month: On January 15th, a blog post entitled "Towards a reproducible F-Droid" was published on the F-Droid website, outlining the reasons why F-Droid signs published APKs with its own keys and how reproducible builds allow using upstream developers' keys instead. In particular:
In response to [ ] criticisms, we started encouraging new apps to enable reproducible builds. It turns out that reproducible builds are not so difficult to achieve for many apps. In the past few months we've gotten many more reproducible apps in F-Droid than before. Currently we can't highlight which apps are reproducible in the client, so maybe you haven't noticed that there are many new apps signed with upstream developers' keys.
(There was a discussion about this post on Hacker News.) In addition:
  • F-Droid added 13 apps published with reproducible builds this month. [ ]
  • FC Stegerman outlined a bug where baseline.profm files are nondeterministic, developed a workaround, and provided all the details required for a fix. As they note, this issue has now been fixed but the fix is not yet part of an official Android Gradle plugin release.
  • GitLab user Parwor discovered that the number of CPU cores can affect the reproducibility of .dex files. [ ]
  • FC Stegerman also announced the 0.2.0 and 0.2.1 releases of reproducible-apk-tools, a suite of tools to help make .apk files reproducible. Several new subcommands and scripts were added, and a number of bugs were fixed as well [ ][ ]. They also updated the F-Droid website to improve the reproducibility-related documentation. [ ][ ]
  • On the F-Droid issue tracker, FC Stegerman discussed reproducible builds with one of the developers of the Threema messenger app and reported that Android SDK build-tools 31.0.0 and 32.0.0 (unlike earlier and later versions) have a zipalign command that produces incorrect padding.
  • A number of bugs related to reproducibility were discovered in Android itself: firstly, the non-deterministic order of .zip entries in .apk files [ ], and secondly, newline differences between building on Windows versus Linux that can also make builds unreproducible. [ ] (Note that these links may require a Google account to view. A small sketch of deterministic .zip packing follows this list.)
  • And just before the end of the month, FC Stegerman started a thread on our mailing list on the topic of hiding data/code in APK embedded signatures which has been made possible by the Android APK Signature Scheme v2/v3. As part of this, they made an Android app that reads the APK Signing block of its own APK and extracts a payload in order to alter its behaviour called sigblock-code-poc.
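As a rough illustration of the .zip ordering and timestamp issues mentioned above (this is not the Android, Gradle or F-Droid code, just a sketch), a tool aiming for byte-identical archives has to pin both the member order and the member timestamps:
import os
import zipfile

def write_deterministic_zip(src_dir, out_path):
    # Walk the tree in a sorted, locale-independent order so the archive does
    # not depend on filesystem iteration order, and give every member a fixed
    # timestamp so it does not depend on build time either.
    paths = []
    for root, dirs, files in os.walk(src_dir):
        dirs.sort()
        paths.extend(os.path.join(root, name) for name in sorted(files))
    with zipfile.ZipFile(out_path, "w") as zf:
        for path in paths:
            info = zipfile.ZipInfo(os.path.relpath(path, src_dir),
                                   date_time=(1980, 1, 1, 0, 0, 0))
            with open(path, "rb") as fh:
                zf.writestr(info, fh.read(), zipfile.ZIP_DEFLATED)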

Debian As mentioned in last month's report, Vagrant Cascadian has been organising a series of online sprints in order to clear the huge backlog of reproducible builds patches submitted by performing NMUs (Non-Maintainer Uploads). During January, a sprint took place on the 10th, resulting in the following uploads: During this sprint, Holger Levsen filed Debian bug #1028615 to request that the tracker.debian.org service display results of reproducible rebuilds, not just reproducible CI results. Elsewhere in Debian, strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, version 1.13.1-1 was uploaded to Debian unstable by Holger Levsen, including a fix by FC Stegerman (obfusk) to update a regular expression for the latest version of file(1) [ ]. (#1028892) Lastly, 65 reviews of Debian packages were added, 21 were updated and 35 were removed this month, adding to our knowledge about identified issues.

Other distributions In other distributions:

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 231, 232, 233 and 234 to Debian:
  • No need for the from __future__ import print_function import anymore. [ ]
  • Comment and tidy the extras_require.json handling. [ ]
  • Split inline Python code to generate test Recommends into a separate Python script. [ ]
  • Update debian/tests/control after merging PyPDF support. [ ]
  • Correctly catch segfaulting cd-iccdump binary. [ ]
  • Drop some old debugging code. [ ]
  • Allow ICC tests to (temporarily) fail. [ ]
In addition, FC Stegerman (obfusk) made a number of changes, including:
  • Updating the test_text_proper_indentation test to support the latest version(s) of file(1). [ ]
  • Use an extras_require.json file to store some build/release metadata, instead of accessing the internet. [ ]
  • Updating an APK-related file(1) regular expression. [ ]
  • On the diffoscope.org website, de-duplicate contributors by e-mail. [ ]
Lastly, Sam James added support for PyPDF version 3 [ ] and Vagrant Cascadian updated a handful of tool references for GNU Guix. [ ][ ]
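For readers who have never run the tool: the basic workflow is to point diffoscope at two supposedly identical build artifacts and read the report it produces. A minimal sketch driving it from Python (assuming diffoscope is installed and on PATH; the .deb file names are placeholders):
import subprocess

# Compare two builds of the same package and write an HTML report.
# diffoscope exits non-zero when the inputs differ, so don't treat that
# as an error here.
subprocess.run(
    ["diffoscope", "--html", "report.html", "build1.deb", "build2.deb"],
    check=False,
)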

Upstream patches The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. This month, we wrote a large number of such patches, including:

Testing framework The Reproducible Builds project operates a comprehensive testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, the following changes were made by Holger Levsen:
  • Node changes:
  • Debian-related changes:
    • Only keep diffoscope's HTML output (i.e. no .json or .txt) for LTS suites and older in order to save disk space on the Jenkins host. [ ]
    • Re-create pbuilder base less frequently for the stretch, bookworm and experimental suites. [ ]
  • OpenWrt-related changes:
    • Add gcc-multilib to OPENWRT_HOST_PACKAGES and install it on the nodes that need it. [ ]
    • Detect more problems in the health check when failing to build OpenWrt. [ ]
  • Misc changes:
    • Update the chroot-run script to correctly manage /dev and /dev/pts. [ ][ ][ ]
    • Update the Jenkins shell monitor script to collect disk stats less frequently [ ] and to include various directory stats. [ ][ ]
    • Update the real year in the configuration in order to be able to detect whether a node is running in the future or not. [ ]
    • Bump copyright years in the default page footer. [ ]
In addition, Christian Marangi submitted a patch to build OpenWrt packages with the V=s flag to enable debugging. [ ]
If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can get in touch with us via:

9 January 2023

Russ Allbery: Review: Black Stars

Review: Black Stars, edited by Nisi Shawl & Latoya Peterson
Publisher: Amazon Original Stories
Copyright: August 2021
ISBN: 1-5420-3272-5
ISBN: 1-5420-3270-9
ISBN: 1-5420-3271-7
ISBN: 1-5420-3273-3
ISBN: 1-5420-3268-7
ISBN: 1-5420-3269-5
Format: Kindle
Pages: 168
This is a bit of an odd duck from a metadata standpoint. Black Stars is a series of short stories (maybe one creeps into novelette range) published by Amazon for Kindle and audiobook. Each one can be purchased separately (or "borrowed" with Amazon Prime), and they have separate ISBNs, so my normal practice would be to give each its own review. They're much too short for that, though, so I'm reviewing the whole group as an anthology. The cover in the sidebar is for the first story of the series. The other covers have similar designs. I think the one for "We Travel the Spaceways" was my favorite. Each story is by a Black author and most of them are science fiction. ("The Black Pages" is fantasy.) I would classify them as afrofuturism, although I don't have a firm grasp on its definition. This anthology included several authors I've been meaning to read and was conveniently available, so I gave it a try, even though I'm not much of a short fiction reader. That will be apparent in the forthcoming grumbling. "The Visit" by Chimamanda Ngozi Adichie: This is a me problem rather than a story problem, and I suspect it's partly because the story is not for me, but I am very done with gender-swapped sexism. I get the point of telling stories of our own society with enough alienation to force the reader to approach them from a fresh angle, but the problem with a story where women are sexist and condescending to men is that you're still reading a story of condescending sexism. That's particularly true when the analogies to our world are more obvious than the internal logic of the story world, as they are here. "The Visit" tells the story of a reunion between two college friends, one of whom is now a stay-at-home husband and the other of whom has stayed single. There's not much story beyond that, just obvious political metaphor (the Male Masturbatory Act to ensure no potential child is wasted, blatant harrassment of the two men by female cops) and depressing character studies. Everyone in this story is an ass except maybe Obinna's single friend Eze, which means there's nothing to focus on except the sexism. The writing is competent and effective, but I didn't care in the slightest about any of these people or anything that was happening in their awful, dreary world. (4) "The Black Pages" by Nnedi Okorafor: Issaka has been living in Chicago, but the story opens with him returning to Timbouctou where he grew up. His parents know he's coming for a visit, but he's a week early as a surprise. Unfortunately, he's arriving at the same time as an al-Qaeda attack on the library. They set it on fire, but most of the books they were trying to destroy were already saved by his father and are now in Issaka's childhood bedroom. Unbeknownst to al-Qaeda, one of the books they did burn was imprisoning a djinn. A djinn who is now free and resident in Issaka's iPad. This was a great first chapter of a novel. The combination of a modern setting and a djinn trapped in books with an instant affinity with technology was great. Issaka is an interesting character who is well-placed to introduce the reader to the setting, and I was fully invested in Issaka and Faro negotiating their relationship. Then the story just stopped. I didn't understand the ending, which was probably me being dim, but the real problem was that I was not at all ready for an ending. I would read the novel this was setting up, though. (6) "2043... 
(A Merman I Should Turn to Be)" by Nisi Shawl: This is another story that felt like the setup for a novel, although not as good of a novel. The premise is that the United States has developed biological engineering that allows humans to live underwater for extended periods (although they still have to surface occasionally for air, like whales). The use to which that technology is being put is a rerun of Liberia with less colonialism: Blacks are given the option to be modified into merpeople and live under the sea off the US coast as a solution. White supremacists are not happy, of course, and try to stop them from claiming their patch of ocean floor. This was fine, as far as it went, but I wasn't fond of the lead character and there wasn't much plot. There was some sort of semi-secret plan that the protagonist stumbles across and that never made much sense to me. The best parts of the story were the underwater setting and the semi-realistic details about the merman transformation. (6) "These Alien Skies" by C.T. Rwizi: In the far future, humans are expanding across the galaxy via automatically-constructed wormhole gates. Msizi's job is to be the first ship through a new wormhole to survey the system previously reached only by the AI construction ship. The wormhole is not supposed to explode shortly after he goes through, leaving him stranded in an alien system with only his companion Tariro, who is not who she seems to be. This was a classic SF plot, but I still hadn't guessed where it was going, or the relevance of some undiscussed bits of Tariro's past. Once the plot happens, it's a bit predictable, but I enjoyed it despite the depressed protagonist. (6) "Clap Back" by Nalo Hopkinson: Apart from "The Visit," this was the most directly political of the stories. It opens with Wenda, a protest artist, whose final class project uses nanotech to put racist tchotchkes to an unexpected use. This is intercut with news clippings about a (white and much richer) designer who has found a way to embed memories into clothing and is using this to spread quotes of rather pointed "forgiveness" from a Malawi quilt. This was one of the few entries in this anthology that fit the short story shape for me. Wenda's project and Burri's clothing interact fifty years later in a surprising way. This was the second-best story of the group. (7) "We Travel the Spaceways" by Victor LaValle: Grimace (so named because he wears a huge purple coat) is a homeless man in New York who talks to cans. Most of his life is about finding food, but the cans occasionally give him missions and provide minor assistance. Apart from his cans, he's very much alone, but when he comforts a woman in McDonalds (after getting caught thinking about stealing her cheeseburger), he hopes he may have found a partner. If, that is, she still likes him when she discovers the nature of the cans' missions. This was the best-written story of the six. Grimace is the first-person narrator, and LaValle's handling of characterization and voice is excellent. Grimace makes perfect sense from inside his head, but the reader can also see how unsettling he is to those around him. This could have been a disturbing, realistic story about a schitzophrenic man. As one may have guessed from the theme of the anthology, that's not what it is. I admired the craft of this story, but I found Grimace's missions too horrific to truly like it. There is an in-story justification for them; suffice it to say that I didn't find it believable. 
An expansion with considerably more detail and history might have bridged that gap, but alas, short fiction. (6) Rating: 6 out of 10

3 November 2022

Arturo Borrero González: New OpenPGP key and new email

I'm trying to replace my old OpenPGP key with a new one. The old key wasn't compromised or lost or anything bad; it is still valid, but I plan to get rid of it soon. It was created in 2013. The new key's fingerprint is: AA66280D4EF0BFCC6BFC2104DA5ECB231C8F04C4 I plan to use the new key for things like encrypted emails, uploads to the Debian archive, and more. Also, the new key includes an identity with a newer personal email address I plan to use soon: arturo.bg@arturo.bg. The new key has been uploaded to some public keyservers. If you would like to sign the new key, please follow the steps in the Debian wiki.
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGNjvX4BEADE4w5x0SQmxWLAI1R17RCC98ngTkD/FMyos0GF5xmv0VJeLYhw
x6oJRmiNGHY8+gjq7SyVCWmlwbLKBEPFNI1k5WcrTB+ClgGkWB5KBnbLKm6CSP4N
ccSbrUQrZW+zxk3Q5h3CJljZpmflB2dvRfnDMSSaw8zOc37EtszW3AVVKNYAu3wj
mXpfwI72/OSELhSvhkr51L+ZlEYUMCITeO+jpiWsnU+sA8oKKPjW4+X8cjrN4eFa
1PAPILDf+Omst5SKM2aV5LGZ8rBzb5wNJF6yDexDw2XmfbFWLOfYzFRY6GTXJz/p
8Fh6O1wkHM9RnwmesCXTtkaGQsVFiVsoqGFyzrkIdWPUruB3RG5EzOkapWi/cnbD
1sy7yrUgy99Ew5yzmLaZ40hmRyq/gBBw4yRkdQaddbkErx+9hT+2tJELa5wrmWkb
FtaVZ38xC6gacOZqRjp0Xqtr0jobI0vED8vzIyY0zJwWM0Hu6qqq4hkLWZHjCy8a
T5Oe/Cb78Kqwa2mzJfncDahPxcgxpnbkYdvKokRtNBDftLVEz+Do8Dczw7Me4BoK
HmU8wLyeGeDTmeoBXpxKH90T+rQokgsiiD13bWZ+nBxILun1tjOTVVONG6SHdP3f
unolq8SU3K+m67lLa+pWjyYcNRS2OTWGOz/1zsH2R39ZOyfGD09/10aAKwARAQAB
tC1BcnR1cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJnQGFydHVyby5iZz6J
AlQEEwEKAD4WIQSqZigNTvC/zGv8IQTaXssjHI8ExAUCY2O9fgIbAwUJA8JnAAUL
CQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRDaXssjHI8ExCZdD/9Z3vR4sV7vBED4
+mCjdNWWf/mw5YlkZo+XQiMVVss4HfQLdt7VxXgGdcOz5Hond9ax3+qeCEo4DdXq
TC0ACpSCu/TPil6vzbE/kO6i6a4oZjFyteAbbcMXP35stbtDM0U5EZH0adIKknfF
msIPTIdJ/dpkcshtBJIoPqjuuTEBa7bF3OYCajHVqwP4Wsgjy4TvDOwl3hy7bhrQ
ZZHqbh7kW40+alQYaJ8jDvbDh/jhN1/pEiZS9ETu0JfBAF3PYPRLW6XedvwZiPWd
jTXwJd0E+vN5LE1Go8OaYvZb9iitZ21UaYOUnFuhw7SEOSQGfEUBs39+41gBj6vW
05HKCEA6kda9NpfptMbUoSSU+hwRfNA5TdnlxtcRv4NqUigzqa1LoXLdxTsyus+K
BL7dRpKXc72JCrEA3vClisD2FgsxLLRCCSDVM8UM/it/YW7tv42XuhQkTW+okQX4
c5laMzTL+ZV8UOoshseTDOsQsdXhskdnWbnuSwAez2/Dd1gHczuN/+lPiiEnyaTF
XgH17K/F25+92MmwPQcFRVPQcYcbyx1VylA6aCgK6gOEqHCejlZv5XLouzbQh1j1
k6MjUR1ncz8vPV5xSuOMAISqozJ9GxUZT2O3o9Vc9pNg5UEzqTvyURgLOdie8yM4
T93S3nKuHVZ++ZVxEOlPnfEfbFP+xbQrQXJ0dXJvIEJvcnJlcm8gR29uemFsZXog
PGFydHVyb0BkZWJpYW4ub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY73LAhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEMKQQAIe18Np+jdhwxHEFZNppBQ69BtyrnPQg4K5VngZ0NUZdVi+/FU7q
Tc9Z1qNydnXgmav3dafL2/l5zDX9wz7mQD2F0a6luOxZwl1PE6iP5f3cUD7uC9zb
148i1bZGEJbO4iNZKTlJKlbNR9m1PG47pv964CHZnNGp6lsnEspxe2G8DJD48Pje
gbhYukgOtIhQ1CaB1fc8aVwZvXZVSbNBLAqp7pAGhTFJqzHE8/U0sn1/V/wPzFAd
TZtWzKfYAkIIFJI5Rr6LVApIwIe7nWymTdgH4crCd2GZkGR+d6ihPKVSxUAUfoAx
EJQUSJY8rYi39gSDhPuEoK8BYXS1nWFGJiNV1o8xaljQo8rNT9myCaeZuQBLX41/
LRzK4XrxYPvjZpKNucc7fSK+UFriQGzdcAaWtW45Kp/8GmAoLVyCD0DPZNWNJdxp
IORhB33aWakhvDKgaLQa16MJ8fSc3ytn/1lxWzDXA1j05i81y/AOKPtCwBKzQWPF
biuZs3kJgZagLq6L6VOQDHlKqf+jqfl1fWeo04iDg98e0TYKABUfiTz8/MdQcV/X
8VkCgtuZ8BcPPyYzBjvuXWZTvdu0n2pikqAPL4u2cbWfD8JIP2AVCJp9HMGKvENo
XcJgY4h6T3rrC/9EidxECfXlsDbUJxLq0WfJLik84+LRtde3kZiReaIRtC5BcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvQG5ldGZpbHRlci5vcmc+iQJUBBMB
CgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvd8CGwMFCQPCZwAFCwkIBwMF
FQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMSP/g/+MHmxCAi/X+NMHodg9Qou
wEG4Vf1uluAE6c+c1QECCdtSsRjBs1dZoJzGsA23t4LWqluyaptuLDWJQEz+EVKR
mG0bvvropNaoOEShnY069pg7lUHuO/GLeDRhfEH3KT45sIVbLly8QkoGaINSCDLe
RBNaHC6feIC8NfQzQEt72nbi4SgdSQUg0F3lj4WxxECVhXsw/YCqh1d3QYqwRVEE
lCGQ4EbavjtRhO8U7dcL1VwHemKHNq3XvM3PJf1OoPgxWqFW5rHbAdlXdN3WAI6u
DAy7kY+qihz3w6rIDTFq6I3YBTrZ44J+5mN21ZC2iDXAsa/C3Uam0vFsjs/pizuq
WgGI9Vmsyap+bOOjuRSX4hemZoOT4a2GC723fS1dFresYWo3MmwfA3sjgV5tK3ZN
XIpxYIvi6HAHLOAarDaE8Sha1GHvrmPwfZ+cEgTL0mqW3efSF3AFmGHduMB+agzK
rM9sksrRQhbY2fHnBLo1t06SQx3rmhlz5mD1ljQEIzna9D6QKleRu4hgImRLHnCB
CN3o+mZa1MHhaIFzViaD2i3Fv2+bYgT7vnS4QAneLW8O/ZgpAc2MUxMoci5JNyfJ
mWdae7Kbs4Z8rrt/mH2gYyioSB0po4VtVwKWEUW9cLtZusA6mFnMviFpfjakb9TX
MimBAv9hAYpxd+HdfHinmqS0MEFydHVybyBCb3JyZXJvIEdvbnphbGV6IDxhYm9y
cmVyb0B3aWtpbWVkaWEub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY735AhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEGooP/20PR5N34m7CNtyaO96H5W0ULuAuSNuoXaKWDo5LGU6zzDriXbIu
ryYtR66vWF5suf7fHZYX8Ufq4PEsG1UNYEGA9hnjPg3oVwGzBJI7f6Rl2P5Pc8wJ
Eq2kN/xKmfUKIrvgh1f5xgFqC4hzcLDkVlLsPowZWfep8dLY4mtVrsrCD1URhelw
zRDGZ3rTVHWXmfXbSHWR2bgZIIrCtVF8BHStg5b6HuAWpj4Oa0eMfBde0N2RZkLE
ye/r2y/lraHfpT7MXnRMcEmltrv8fic7yvj/Nh4ESWr7UmfbV+GiSw9dc/AlVMXM
ihaW0eXv4F5uMtLJOiqI7bv3UfWSvoqwf2a8EPnzOeBBHhQOOJN7O4UzKBK5GAO8
C3k0I1AV3cTmrXrqT/5yoYAHSekDFCIPES//6Y/pO0ITtCbXkA5e8vaulJbtyXpE
g0Z7I7M1kikL6reZ2PuzsR0psEb/x81bWXODIegyOJolPXMRAY7n9J0xpCnSW9yr
CN4j6YT3Oame04JslwX5Xg1cyheuiusotETYNSKRaGaYBCxYffOWoTLNIBa+RCGc
SVOzJq5pd8fVRM1h2ZZFnfpPJBUb62qPsbk6VwmesGoGevB70zcNQYEI+c35kRfM
IOuJWRIN3Wxx0rpxb5E3i/3TASHM86Dix1VW9vsC/atGU/cgaoTOiNVztDdBcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJvcnJlcm8uZ2xlekBnbWFpbC5j
b20+iQJUBBMBCgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvg8CGwMFCQPC
ZwAFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMS7NA/9F7OL/j7a
xnTDjxAHEiyrCzrBQc/DEAM/yim8E+0UBeTJSZR/bShtbvLbSukeL43tKksPhN/X
skjRF8sJ8KWUnpmSWjv1DQTh7AtkJqACnq7+VtQZq3yuKUCNRNpM8lSFxtmYDUqE
XXD4eMXKoJfdphQ+qpViba+RGXg6sd69Dq739zT/OFMuKZ33z8h7hVNXmoWGcBz6
txvN3cWVJhTLdiBvtn38/0dX7IupQLypLOtP0oZdjoUjkRxTo5biOxt3hUGnxS4x
97PPeRGc4j7lv5ADwFV8bo+g54ZMGRjOcyZmA7dlWFN51JrTx3udW2jgXkYqm7UM
xP4lNwDs9TmT3jan6wR08uwlDakOXfDm3gCQEviN+350sJs2tY+JKBN4QR7NpqeU
2aDFOo0G/0ggf0QbFsMkaTSozerVHRGXMdAi+pbYA6pPWPu8lHIkvvdoj4xUu+Ko
cHX0DCRxmL9mylTbZEanrp5gSpne79McrkbQX2/Yc8lWykCtL5/jHVTD4iNiO5Rf
IJYPAVmC2nlj2URfzwGjjoL5apTStZfng4H2Ccq+3cmhwOXI7pb+PsGeI5PND00A
qHFxe590HFhPxLHoftMIlspstoCvHYGcWQxHNbXW6ccmhHdNYT8Pn4ecKgfr6pCt
0ysilOD2ppPJ88hffKA4nTdtX2Tz2ZwOYwG5Ag0EY2O9fgEQALrapVuv1IcLDit8
9gejdA/Dtlufb2/baImVaQD+dTx2QdMxxEiNKl00a5OhMzXDj9tFrB1Lv4z0t8cY
iDJ+NuydDGgz3MlJgWW0GlpAz8yiul2iqTnkWl3cWeiI+VaX8wzL+acmmkPvlrN8
hM7I55BPr8uBWVIQ7VDmI+ts8gi73xE+Etzzrh13GSSnnYnezfGUQrNfYFcip7D0
hB3bpUIGiPdQ45vSZqXUQx/B6FlabiIGRau8Rt4vaEBGXGFZ9rIR+rMJWx6GqYX4
uY1KM2JZ3SKHk++MWGYdzHdM2oaP6xckZq+u/WiwutkYLLO2hnr03lcAu1IDT1C1
YNPrbTKfqUt+3r0oUK5BrG1Cjdc1mZqcXzYcexOLp79FJLb0t5wPdfgU8dT10kjE
uQxeSYiS4oSpikVQkKoFk++/U95d/z/y/81A6v+cfRus6mW+wRSFSwks7Q5ct7zW
UyKELLC4i4EDgnJXmavVcBD0TWzhH/rZpz9FsO4Mb18IYwbV1/144019/RjiPk5Z
MMNdsjorjV2MtrCIoeAGRgZhbFP2P7CcZOp6ZWzjj40ENlElbLp3VCfkYcTiPHJv
2iaiDz2Mhfmhb1Q/5d/a9tYTYINPmv2QVo+m5Zf+1/U29d2HZMRhD4aqDsivvgtd
GpAnKeus6ePSMqpwjO6v2bmQhjpbABEBAAGJAjwEGAEKACYWIQSqZigNTvC/zGv8
IQTaXssjHI8ExAUCY2O9fgIbDAUJA8JnAAAKCRDaXssjHI8ExA5AD/9VWS1/jHM9
aE3HKCDL4CpiXQPc4ds+3/ft6LXwuCMA/tkt8I4svKZGCCi/X5NfiQetVD+cSzVO
nmloctMt/24yjnGNNSFsDozkn/RqzZIhLJBI69gX4JWR4wpeh4kXMItNM5ZlYw3H
DmuLrf/ey8E2NzbFdzj1VQNoENuwtL2pIJrvK92AcS7acvP0FpiS8riLc5a933SW
oPgelQ1j/04WAH8cyKXB/pruq3OhtK0/b8ylIeI0f7a57dxQj5wysyBVKl+EJd/n
UhypVqMDRWL7N0FttGb9gZ6OVvQnt7iwbtS3tYqAK479+GZwi/Wh/RB2dCDyz8jk
zE0j6y7huP4XzpbBbPVntLDdVAYmpW6iIaTWYxlu79FEUw4JmZdY7hJoEDpHuDIz
ylo0YQgjnRfRfWSdnGCosFrY5UgThPVTaQAILCPtdVyWY4/6s1UaeNs3H0PRA5mz
UT4vDKxGq9gXHnE+qg3dfwMcLR3cDPPWUFVeTfNitZ3Y9eV7SdbQXt5NeOXzFadz
DBc9ZzNx3rBEyUUooU0MEmbltyUFM7R/hVcdpFxs12SgHrvgh13tuxVVVNBXTwwo
pSxmap42vHJERQ8ZJQ4lrvnxNZcuwLHSZK7xVzb0b/1wMooNnhw18vlStMWQJwKl
DiXs/L/ifab2amg9jshULAPgVSw7QeP2OQ==
=UABf
-----END PGP PUBLIC KEY BLOCK-----
If you are curious about what that long code block contains, check out https://cirw.in/gpg-decoder/. For the record, the old key fingerprint is: DD9861AB23DC3333892E07A968E713981D1515F8 Cheers!

3 September 2022

Shirish Agarwal: Fantasy, J.R.R. Tolkien

J.R.R. Tolkien Now unless you have been living under a rock cave, I am sure you know who Mr. Tolkien is. Apparently, the gentleman passed away on 2nd September 1973 at the sprightly age of 81. And this gives fans like me an occasion to talk about fantasy, fantasy authors, and the love-hate relationship we have with them. For a matter of record, I am currently reading Babylon Steel by Gaie Sebold. Now while I won't go into many details (I never like to; if I enjoy a book, I would want the book to remain mysterious rather than give praise, simply so that the next person enjoys it as much as I did without having any expectations), this book has plenty of sex, so I wouldn't recommend it for teenagers but rather for mature audiences, although for the life of me I couldn't find any rating on the book. I did come across Common Sense Media but unfortunately, it isn't well known beyond perhaps some people who use it. They sadly don't have a Google/Android app. And before anybody comments, I know that Android is no longer interested in supporting FOSS (their loss, not ours), but that is entirely a blog post/article in itself, so let's leave that aside for now.

Fantasy So before talking about Mr. Tolkien and his creations, let's talk and share a bit about fantasy. We know for a fact that the conscious mind functions at less than 5%, while the rest is handled by the subconscious and the unconscious mind (the three-mind model). So any thought or idea first germinates in either the unconscious or the subconscious part of the mind and then comes into the conscious mind. It is also the reason we dream: that's the subconscious and unconscious mind at work. While we mostly associate fantasy with books, it is all around us, not just in prose but in song, dance, and all sorts of creativity. Even sci-fi actually comes from fantasy. Unfortunately, for reasons best known to people, they split out sci-fi and even divided fantasy into high fantasy and low fantasy. I am not going to go much into that, but here's a helpful link for those who might want to look more into it. Now the question arises: why do people write? I have asked this question many a time of the authors I have met, and the answers are as varied as they come. Two of the most common answers are the need to write (an itch they can't control or won't control), and that it is extremely healing. In my own case, even writing mere blog posts I have found unburdening and cathartic. I believe this last part is what drove Mr. Tolkien and the story and arc that LOTR became. Tolkien, LOTR, World War I The casual reader might not know, but if you followed or were curious about Mr. Tolkien, you would have found out that he served in World War 1, or what is known as the Great War. It was supposed to be the war that ended all wars, but sadly didn't. One of the things that set Mr. Tolkien apart from many of his peers was that he was very candid about himself and corresponded with people far and wide. There is actually a book called The Letters of J.R.R. Tolkien that I hope to get at one of the used book depots. That book spans about 480 pages and gives all the answers as to why Mr. Tolkien made Middle-earth as it was made. I sadly haven't had the opportunity to get it, and it is somewhat expensive. But I'm sure that if World War 1 hadn't happened and Mr. Tolkien hadn't taken part and experienced what he experienced, we wouldn't have LOTR. I can bet that losing his friends and comrades, and the pain he felt for those around him, propelled him to write about a land and a race called Hobbits. I haven't done enough fantasy reading, but I do feel that his description of hobbits and the way they were and are is unique. The names and surnames he used were for humor as well as to make a statement about them. Having names such as Harfoots, Padfoot, Took and others just wouldn't be for fun alone, would it? Also, the responses and the behavior of the Hobbits in the four books are almost human-like. It is almost as if they are or were our cousins at one point in time, but we allowed ourselves to forget. Even the corruption of humans has been shown, as well as self-doubt. There is another part that I found and find fascinating: unlike most books where there is a single hero, in LOTR we have many heroes and heroines. This, again, I would attribute to Mr. Tolkien and the heroism he saw on the battlefield and beyond it. All the tender emotions he shares with readers like us are there because either he himself or others around him were subjected to grace and wonderment. This is all I derive from the books; those who have The Letters of J.R.R. Tolkien, feel free to correct me.
I was supposed to write this yesterday, but real life has its own way. I could go on and on, and perhaps at a later date or time I may expand on it, but it isn't a coincidence that Lord of the Rings: Rings of Power starts broadcasting on the same date that Mr. Tolkien died. In the very end, fantasy is something all humans have, no matter how rich or poor you are. If one were to look, artists like Michelangelo and many others, who often didn't have enough for two square meals a day, were still somehow inspired to sketch models of airplanes and flying machines shockingly similar to the real thing. Many may not know that almost all primates, as well as other animals such as squirrels and dolphins, dream, and all of them have elaborate, complex dreams just as we do. Sadly, this is not widely known; otherwise, we would be much more empathetic towards our cousins in the animal kingdom.

16 July 2022

Petter Reinholdtsen: Automatic LinuxCNC servo PID tuning?

While working on a CNC with servo motors controlled by the LinuxCNC PID controller, I recently had to learn how to tune the collection of values that control the mathematical machinery that a PID controller is. It proved to be a lot harder than I hoped, and I still have not succeeded in getting the Z PID controller to successfully defy gravity, nor X and Y to move accurately and reliably. But while climbing up this rather steep learning curve, I discovered that some motor control systems are able to tune their PID controllers automatically. I got the impression from the documentation that LinuxCNC was not among them. This proved not to be true. The LinuxCNC pid component is the recommended PID controller to use. It uses eight constants (Pgain, Igain, Dgain, bias, FF0, FF1, FF2 and FF3) to calculate the output value based on the current and wanted state, and all of these need to have a sensible value for the controller to behave properly. Note that there are even more values involved; these are just the most important ones. In my case I need the X, Y and Z axes to follow the requested path with little error. This has proved quite a challenge for someone who has never tuned a PID controller before, but there is at least some help to be found. I discovered that included in LinuxCNC was an old PID component, at_pid, claiming to have auto-tuning capabilities. Sadly, it had been neglected since 2011 and could not be used as a plug-in replacement for the default pid component. One would have to rewrite the LinuxCNC HAL setup to test at_pid. This was rather sad when I just wanted to quickly test auto tuning to see if it did a better job than me at figuring out good P, I and D values to use. I decided to have a look at whether the situation could be improved. This involved trying to understand the code and history of the pid and at_pid components. Apparently they had a common ancestor, as code structure, comments and variable names were quite close to each other. Sadly this was not reflected in the git history, making it hard to figure out what really happened. My guess is that the author of at_pid.c took a version of pid.c, rewrote it to follow the structure he wished pid.c to have, then added support for auto tuning and finally got it included into the LinuxCNC repository. The restructuring and lack of early history made it harder to figure out which parts of the code were relevant to the auto tuning, and which parts needed to be updated to work the same way as the current pid.c implementation. I started by trying to isolate relevant changes in pid.c and applying them to at_pid.c. My aim was to make sure the at_pid component could replace the pid component with a simple change in the HAL setup loadrt line, without having to "rewire" the rest of the HAL configuration. After a few hours following this approach, I had learned quite a lot about the code structure of both components, while concluding I was heading down the wrong rabbit hole and should get back to the surface and find a different path. For the second attempt, I decided to throw away all the PID control related parts of the original at_pid.c, and instead isolate and lift the auto-tuning part of the code and inject it into a copy of pid.c. This ensured compatibility with the current pid component, while adding auto tuning as a run-time option. To make it easier to identify the relevant parts in the future, I wrapped all the auto-tuning code in '#ifdef AUTO_TUNER'.
The end result behaves just like the current pid component by default, as that part of the code is identical. It entered the LinuxCNC master branch a few days ago. To enable auto tuning, one needs to set a few HAL pins in the PID component. The most important ones are tune-effort, tune-mode and tune-start. But let's take a step back and see what the auto-tuning code will do. I do not know the mathematical foundation of the at_pid algorithm, but from observation I can tell that the algorithm will, when enabled, produce a square wave pattern centered around the bias value on the output pin of the PID controller. This can be seen using the HAL Scope provided by LinuxCNC. In my case, this is translated into a voltage (+-10V) sent to the motor controller, which in turn is translated into motor speed. So at_pid will ask the motor to move the axis back and forth. The number of cycles in the pattern is controlled by the tune-cycles pin, and the extremes of the wave pattern are controlled by the tune-effort pin. Of course, trying to change the direction of a physical object instantly (as in going directly from a positive voltage to the equivalent negative voltage) does not change its velocity instantly; it takes some time for the object to slow down and move in the opposite direction. This results in a smoother movement waveform as the axis in question vibrates back and forth. When the axis reaches the target speed in the opposing direction, the auto tuner changes direction again. After several of these changes, the average time delay between the 'peaks' and 'valleys' of this movement graph is used to calculate proposed values for Pgain, Igain and Dgain, which are then inserted into the HAL model for use by the pid controller. The auto-tuned settings are not great, but they work a lot better than the values I had been able to cook up on my own, at least for the horizontal X and Y axes. But I had to use very small tune-effort values, as my motor controllers error out if the voltage changes too quickly. I've been less lucky with the Z axis, which is moving a heavy object up and down and seems to confuse the algorithm. The Z axis movement became a lot better when I introduced a bias value to counter the gravitational drag, but I will have to work a lot more on the Z axis PID values. Armed with this knowledge, it is time to look at how to do the tuning. Let's say the HAL configuration in question loads the PID component for X, Y and Z like this:
loadrt pid names=pid.x,pid.y,pid.z
Armed with the new and improved at_pid component, the new line will look like this:
loadrt at_pid names=pid.x,pid.y,pid.z
The rest of the HAL setup can stay the same. This works because the components are referenced by name. If the component had used count=3 instead, every use of pid.# would have had to be changed to at_pid.#. To start tuning the X axis, move the axis to the middle of its range, to make sure it does not hit anything when it starts moving back and forth. Next, set tune-effort to a low number in the output range. I used 0.1 as my initial value. Next, assign 1 to the tune-mode value. Note that this will disable the PID-controlling part and feed 0 to the output pin, which in my case initially caused a lot of drift. For X and Y it proved to be a good idea to tune the motor driver to make sure 0 voltage stopped the motor rotation. On the other hand, for the Z axis this proved to be a bad idea, so it will depend on your setup. It might help to set the bias value to an output value that reduces or eliminates the axis drift. Finally, after setting tune-mode, set tune-start to 1 to activate the auto tuning. If all goes well, your axis will vibrate for a few seconds, and when it is done, new values for Pgain, Igain and Dgain will be active. To test them, change tune-mode back to 0. Note that this might cause the machine to suddenly jerk as it brings the axis back to its commanded position, which it might have drifted away from during tuning. To summarize with some halcmd lines:
setp pid.x.tune-effort 0.1
setp pid.x.tune-mode 1
setp pid.x.tune-start 1
# wait for the tuning to complete
setp pid.x.tune-mode 0
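An aside on the mathematics the post above says it does not know: driving the axis with a square wave of amplitude tune-effort and measuring the resulting oscillation looks like classic relay auto-tuning. Whether at_pid uses exactly these formulas I do not know, so treat the following only as an illustrative sketch using the textbook Åström-Hägglund relay estimate and Ziegler-Nichols rules, mapped onto a parallel Pgain/Igain/Dgain form.
import math

def relay_ultimate_gain(relay_amplitude, oscillation_amplitude):
    # Astrom-Hagglund relay estimate of the ultimate gain: Ku = 4d / (pi * a).
    return 4.0 * relay_amplitude / (math.pi * oscillation_amplitude)

def ziegler_nichols_pid(ultimate_gain, oscillation_period):
    # Classic Ziegler-Nichols closed-loop rules: Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8.
    kp = 0.6 * ultimate_gain
    return {"Pgain": kp,
            "Igain": kp / (oscillation_period / 2.0),
            "Dgain": kp * (oscillation_period / 8.0)}

# Example: tune-effort 0.1 producing a 0.05-unit oscillation with a 0.4 s period.
ku = relay_ultimate_gain(0.1, 0.05)
print(ziegler_nichols_pid(ku, 0.4))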
After doing this task quite a few times while trying to figure out how to properly tune the PID controllers on the machine, I decided to figure out if this process could be automated, and wrote a script to do the entire tuning process from power-on. The end result will ensure the machine is powered on and ready to run, home all axes if that is not already done, check that the extra tuning pins are available, move the axis to its mid point, run the auto tuning and re-enable the pid controller when it is done. It can be run several times. Check out the run-auto-pid-tuner script on GitHub if you want to learn how it is done. My hope is that this little adventure can inspire someone who knows more about motor PID controller tuning to implement even better algorithms for automatic PID tuning in LinuxCNC, making life easier for both me and all the others who want to use LinuxCNC but lack the in-depth knowledge needed to tune PID controllers well. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
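Purely as an illustration of the idea behind such a script (this is not the run-auto-pid-tuner script itself, and the assumption that tune-start reads back as false once tuning completes is mine; verify it against the at_pid documentation), a minimal Python wrapper around the halcmd sequence above might look like this:
#!/usr/bin/env python3
# Illustrative sketch only. Assumes halcmd is on PATH, the machine is homed,
# and each axis has been moved to mid travel before tuning starts.
import subprocess
import time

def halcmd(*args):
    # Run one halcmd command and return its trimmed standard output.
    return subprocess.run(["halcmd", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

def autotune(axis, effort=0.1, timeout=60):
    pid = f"pid.{axis}"
    halcmd("setp", f"{pid}.tune-effort", str(effort))
    halcmd("setp", f"{pid}.tune-mode", "1")
    halcmd("setp", f"{pid}.tune-start", "1")
    deadline = time.time() + timeout
    # Assumption: tune-start drops back to 0/FALSE once the tuning run ends.
    while time.time() < deadline:
        if halcmd("getp", f"{pid}.tune-start").upper() in ("0", "FALSE"):
            break
        time.sleep(0.5)
    halcmd("setp", f"{pid}.tune-mode", "0")
    return {g: halcmd("getp", f"{pid}.{g}") for g in ("Pgain", "Igain", "Dgain")}

for axis in ("x", "y"):
    print(axis, autotune(axis))
The real script additionally powers on the machine, homes the axes and checks that the extra tuning pins exist, as described above.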
