
31 January 2025

Divine Attah-Ohiemi: Seeking Opportunities: Building a Career in Software Engineering and Beyond

My journey in CS has always been driven by curiosity, determination, and a deep love for understanding software solutions at their tiniest, most complex levels. Taking the ALX Africa Software Engineer track after high school was where it all started for me. During the 1-year intensive bootcamp, I delved into the intricacies of Linux programming and low-level programming with C, which solidified my foundational knowledge. This experience not only enhanced my technical skills but also taught me the importance of adaptability and self-directed learning. I discovered how to approach challenges with curiosity, igniting a passion for exploring software solutions in their most intricate forms. Each module pushed me to think critically and creatively, transforming my understanding of technology and its capabilities. Let's just say that I have always been drawn to asking, "How does this happen?" And I just go on and on until I find an answer eventually, and sometimes I don't, but that's okay. That curiosity, combined with a deep commitment to learning, has guided my journey.

Debian Webmaster

My drive has led me to get involved in open-source contributions, where I can put my knowledge to the test while helping my community. Engaging with real-world experts and learning from my mistakes has been invaluable. One of the highlights of this journey was joining the Debian Webmasters team as an intern through Outreachy. Here, I have the honor of working on redesigning and migrating the old Debian webpages to make them more user-friendly. This experience not only allows me to apply my skills in a practical setting but also deepens my understanding of collaborative software development.

Building My Skills: The Foundation of My Experience

Throughout my academic and professional journey, I have taken on many roles that have shaped my skills and prepared me for what's ahead, I believe. I am definitely not a one-trick pony, and maybe not completely a jack of all trades either, but I am a bit diverse, I'd like to think. Here are the key roles that have defined my journey so far:

Volunteer Developer at Yoris Africa (June 2022 - August 2023)

I began my career by volunteering at Yoris, where I collaborated with a talented team to design and build the frontend for a mobile app. My contributions extended beyond just the frontend; I also worked on backend solutions and microservices, gaining hands-on experience in full-stack development. This role was instrumental in shaping my understanding of software architecture, allowing me to contribute meaningfully to projects while learning from experienced developers in a dynamic environment.

Freelance Academic Software Developer (September 2023 - October 2024)

I freelanced as an academic software developer, where I pitched and developed software solutions for universities in my community. One of my most notable projects was creating a Computer-Based Testing (CBT) software for a medical school, which featured a unique questionnaire and scoring system tailored to their specific needs. This experience not only allowed me to apply my technical skills in a real-world setting but also deepened my understanding of educational software requirements and user experience, ultimately enhancing the learning process for students.

Open Source Intern at the Debian Webmaster Team (November 2024 - )

Perhaps the most transformative experience has been my role as an intern at Debian Webmasters. This opportunity allowed me to delve into the fascinating world of open source.
As an intern, I have the chance to work on a project where we are redesigning and migrating the Debian webpages to use a new and faster technology: Go templates with Hugo. For a detailed look at the work and progress I made during my internship, as well as information on this project and how to get involved, you can check out the wiki. My ultimate goal with this role is to build a vibrant community for Debian in Africa and, if given the chance, to host a debian-cd mirror for faster installations in my region.

You can connect with me on LinkedIn or X (formerly Twitter), or reach out via email.

29 January 2025

Keith Packard: picolibc-i18n

Internationalization support in Picolibc

There are two major internationalization APIs in the C library: locales and iconv. Iconv is an isolated component which only performs charset conversion in ways that don't interact with anything else in the library. Locales affect pretty much every API that deals with strings and cover charset conversion along with a huge range of localized information, from character classification to formatting of time, money, people's names, addresses and even standard paper sizes. Picolibc inherits its implementation of both of these from newlib. Given that embedded applications rarely need advanced functionality from either of these APIs, I hadn't spent much time exploring this space.

Newlib locale code

When run on Cygwin, newlib's locale support is quite complete as it leverages the underlying Windows locale support. Without Windows support, everything aside from charset conversion and character classification data is stubbed out at the bottom of the stack. Because the implementation can support full locale functionality, it is designed for that, with large data structures and lots of code.

Charset conversion and character classification data for locales is all built in; none of that can be loaded at runtime. There is support for all of the ISO-8859 charsets, three JIS variants, a bunch of Windows code pages and a few other single-byte encodings.

One oddity in this code is that when using a JIS locale, wide characters are stored in EUC-JP rather than Unicode. Every other locale uses Unicode. This means APIs like wctype are implemented by mapping the JIS-encoded character to Unicode and then using the underlying Unicode character classification tables. One consequence of this is that there isn't any Unicode to JIS mapping provided, as it isn't necessary.

When testing the charset conversion and Unicode character classification data, I found numerous minor errors and a couple of pretty significant ones. The JIS conversion code had the most serious issue I found: most of the conversions are in a 2d array which is manually indexed with the wrong value for the length of each row. This led to nearly every translated value being incorrect.

The charset conversion tables and Unicode classification data are now generated using Python charset support and the standard Unicode data files. In addition, tests have been added which compare Picolibc to the system C library for every supported charset.

Newlib iconv code

The iconv charset support is completely separate from the locale charset support, with a much wider range of supported targets. It also supports loading charset data from files at runtime, which reduces the size of application images.

Because the iconv and locale implementations are completely separate, the charset support isn't the same. Iconv supports a lot more charsets, but it doesn't support all of those available to locales. For example, iconv has Big5 support which locale lacks. Conversely, locale has Shift-JIS support which iconv does not. There's also a difference in how charset names are mapped in the two APIs. The locale code has a small fixed set of aliases, which doesn't include things like US-ASCII or ANSI X3.4. In contrast, the iconv code has an extensive database of charset aliases which are compiled into the library.

Picolibc has a few tests for the iconv API which verify charset names and perform some translations. Without an external reference, it's hard to know if the results are correct.
POSIX vs C internationalization

In addition to including the iconv API, POSIX extends locale support in a couple of ways:
  1. Exposing locale objects via the newlocale, uselocale, duplocale and freelocale APIs.
  2. uselocale sets a per-thread locale, rather than the process-wide locale.
Goals for Picolibc internationalization support

For charsets, supporting UTF-8 should cover the bulk of embedded application needs, and even that is probably more than what most applications require. Most (all?) compilers use Unicode for wide character and string constants. That means wchar_t needs to be Unicode in every locale.

Aside from charset support, the rest of the locale infrastructure is heavily focused on creating human-consumable strings. I don't think it's a stretch to say that none of this is very useful these days, even for systems with sophisticated user interactions. For picolibc, the cost to provide any of this would be high.

Having two completely separate charset conversion datasets makes for a confusing and error-prone experience for developers. Replacing iconv with code that leverages the existing locale support for translating between multi-byte and wide-character representations will save a bunch of source code and improve consistency.

Embedded systems can be very sensitive to memory usage, both read-only and read-write. Applications not using internationalization capabilities shouldn't pay a heavy premium even when the library binary is built with support. For the most sensitive targets, the library should be configurable to remove unnecessary functionality.

Picolibc needs to conform to at least the C language standard, and as much of POSIX as makes sense. Fortunately, the requirements for C are modest as it only includes a few locale-related APIs and doesn't include iconv.

Finally, picolibc should test these APIs to make sure they conform with relevant standards, especially character set translation and character classification. The easiest way to do this is to reference another implementation of the same API and compare results.

Switching to Unicode for JIS wchar_t

This involved ripping the JIS to Unicode translations out of all of the wide character APIs and inserting them into the translations between multi-byte and wide-char representations. The missing Unicode to JIS translation was kludged by iterating over all JIS code points until a matching Unicode value was found. That's an obvious place for a performance improvement, but at least it works.

Tiny locale

This is a minimal implementation of locales which conforms with the C language standard while providing only charset translation and character classification data. It handles all of the existing charsets, but splits things into three levels:
  1. ASCII
  2. UTF-8
  3. Extended, including any or all of:
     a. ISO 8859
     b. Windows code pages and other 8-bit encodings
     c. JIS (JIS, EUC-JP and Shift-JIS)
When built for ASCII-only, all of the locale support is short-circuited, except for error checking. In addition, support in printf and scanf for wide characters is removed by default (it can be re-enabled with the -Dio-wchar=true meson option). This offers the smallest code size. Because the wctype APIs (e.g. iswupper) are all locale-specific, this mode restricts them to ASCII-only, which means they become wrappers on top of the ctype APIs with added range checking.

When built for UTF-8, character classification for wide characters uses tables that provide the full Unicode range. setlocale now selects between two locales, "C" and "C.UTF-8". Any locale name other than "C" selects the UTF-8 version. If the locale name contains "." or "-", then the rest of the locale name is taken to be a charset name and matched against the list of supported charsets. In this mode, only "us_ascii", "ascii" and "utf-8" are recognized. Because a single byte of a UTF-8 string with the high bit set is not a complete character, all of the ctype APIs in this mode can use the same implementation as the ASCII-only mode. This means the small ctype implementation is available. Calling setlocale(LC_ALL, "C.UTF-8") will allow the application to use the APIs which translate between multi-byte and wide characters to deal with UTF-8 encoded strings. In addition, scanf and printf can read and write UTF-8 strings into wchar_t strings.

Locale names are converted into locale IDs, an enumeration which lists the available locales. Each ID implies a specific charset, as that's the only thing which differs between them. This means a locale can be encoded in a few bytes rather than an array of strings.

In terms of memory usage, applications not using locales and not using the wctype APIs should see only a small increase in code space. That's due to the wchar_t support added to printf and scanf, which need to translate between multi-byte and wide-character representations. There aren't any tables required as ASCII and UTF-8 are directly convertible to Unicode. On ARM-v7m, the added code in printf and scanf adds up to about 1kB and another 32 bytes of RAM is used.

The big difference when enabling extended charset support is that all of the charset conversion and character classification operations become table-driven and dependent on the locale. Depending on the extended charsets supported, these can be quite large. With all of the extended charsets included, this adds an additional 30kB of code and static data and uses another 56 bytes of RAM.

There are two known gaps in functionality compared with the newlib code:
  1. Locale strings that encode different locales for different categories. That's nominally required by POSIX as LC_ALL is supposed to return a string sufficient to restore the locale, but the only category which actually matters is LC_CTYPE.
  2. No nl_langinfo support. This would be fairly easy to add, returning appropriate constant values for each parameter.
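On the configurability side, these modes are selected with plain meson options when building picolibc. A hypothetical sketch of a build that keeps wide-character stdio (-Dio-wchar=true is the option named earlier; everything else is illustrative):

# Configure and build picolibc with wide-character printf/scanf enabled:
meson setup build -Dio-wchar=true
ninja -C build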
Tiny locale was merged to picolibc main in this PR.

Tiny iconv

Replacing the bulky newlib iconv code was far easier than swapping locale implementations. Essentially all that iconv does is compute two functions, one which maps from multi-byte to wide-char in one locale and another which maps from wide-char to multi-byte in another locale. Once the JIS locales were fixed to use Unicode, the new iconv implementation was straightforward. POSIX doesn't provide any _l version of mbrtowc or wcrtomb, so using standard C APIs would have been clunky. Instead, the implementation uses the internal APIs to compute the correct charset conversion functions. The entire implementation fits in under 200 lines of code.

Tiny iconv is in process in this PR.

Future directions

Right now, both of these new bits of code sit in the source tree parallel to the old versions. I'm not seeing any particular reason to keep the old versions around; they have provided a useful point of comparison in developing the new code, but I don't think they offer any compelling benefits going forward.
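As an aside on external references for testing: the host C library's iconv(1) command line tool makes a handy comparison point for individual conversions. A tiny sketch (the bytes are "こん" encoded in EUC-JP):

# Convert two EUC-JP characters to UTF-8 using the host's iconv:
printf '\xa4\xb3\xa4\xf3' | iconv -f EUC-JP -t UTF-8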

Sergio Talens-Oliag: Testing DeepSeek with Ollama and Open WebUI

With all the recent buzz about DeepSeek and its capabilities, I've decided to give it a try using Ollama and Open WebUI on my work laptop, which has an NVIDIA GPU:
$ lspci | grep NVIDIA
0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
             (rev a1)
For the installation, I initially looked into the approach suggested in this article, but after reviewing it I decided to go for a docker-only approach, as it leaves my system clean and makes updates easier.

Step 0: Install docker
I already had it on my machine, so nothing to do here.

Step 1: Install the nvidia-container-toolkit package
As it is needed to use the NVIDIA GPU with docker, I followed the instructions to install the package using apt from the NVIDIA website.
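For reference, the apt-based installation boils down to a few commands. This is a sketch roughly following NVIDIA's documentation at the time of writing, so verify against the current instructions before copying:

# Add NVIDIA's signing key and apt repository (paths per NVIDIA's docs):
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -sL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
# Register the NVIDIA runtime with docker and restart the daemon:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker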

Step 2: Run the Open WebUI container bundled with Ollama
I could install Ollama directly on Linux or run it on docker, but I found out that there is a container with Open WebUI bundled with Ollama, so I decided to use it instead. To start the container I've executed the following command:
docker run -d \
  -e OLLAMA_HOST="0.0.0.0:11434" -p 127.0.0.1:11434:11434 \
  -p 127.0.0.1:3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --gpus=all --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
Once the container is up and running, I can access the Open WebUI at http://localhost:3000/ and the Ollama API at http://localhost:11434/ (I added the OLLAMA_HOST environment variable to bind the server to all the container IPs, as it is needed to expose it to the host).
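Before pulling models it is worth checking that everything is wired up. The two commands below are a sketch: nvidia-smi inside the container confirms GPU passthrough, and /api/tags is the Ollama endpoint that lists installed models (container name as used above):

# GPU visible inside the container?
docker exec open-webui nvidia-smi
# Ollama API answering on the published port?
curl -s http://localhost:11434/api/tags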

Step 3: Install DeepSeek-R1 model from Open WebUI
After creating an admin user on the Open WebUI, I went to the "select a model" button and asked to download the deepseek-r1 model (at the time of writing this it matches version deepseek-r1:7b, which uses 4.7GB of space). After the download was completed I was able to start chatting with the model on my local machine.
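As an alternative to the web UI, the same model can be pulled with the bundled Ollama CLI; a sketch using the container name from Step 2 and the model tag mentioned above:

docker exec -it open-webui ollama pull deepseek-r1:7b
docker exec -it open-webui ollama list   # confirm the model and its size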

Step 4: Ask a question
The first question I asked was "Can you tell me what is Debian?" and the answer I got was:
Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.
### Key Features of Debian:

1. **History**:
   - Debian started as a project called "Pop!!" (Pootle) led by Lars Knutsen, a
     developer from Denmark.
   - It was initially forked from the Linux distribution Red Hat's RH-Debian to
     gain independence and reduce reliance on Red Hat.

2. **Use Cases**:
   - Debian is commonly used as an operating system for servers due to its
     reliability and ease of use in enterprise environments.
   - It is also used for individual user machines (laptops, desktops) because it
     offers a smooth and stable experience.
   - Debian provides the foundation for many popular derivatives like Ubuntu,
     which are designed for consumer use.

3. **Supported Architectures**:
   - Debian supports all major Linux architectures, including 32-bit and 64-bit
     systems, and runs on a wide range of hardware from old servers to modern
     smartphones.

4. **Package Management**:
   - Debian comes with a robust package management system called **Debian Policy
     Language (DPL)**, which allows users to easily install, update, and manage
     software packages.
   - This makes it easy for users to keep their systems up to date without
     needing to download multiple repositories or deal with version conflicts.

5. **Security Focus**:
   - Debian is known for its strong security practices. It has a reputation for
     being one of the most stable Linux distributions, making it a favorite
     among organizations that prioritize security.
   - The distribution regularly updates its software packages to patch
     vulnerabilities and protect against common attacks.

6. **Compatibility with Servers**:
   - Debian is widely used in data centers because it provides a consistent and
     reliable environment for running servers and applications.
   - Many open-source projects are compatible with Debian, making it an ideal
     choice for developers who want to work on servers or distributed systems.

7. **Popularity**:
   - Debian has a large and active community of users and contributors.
   - It is one of the most widely used Linux distributions in the world, with
     many enterprise organizations relying on it as their primary operating
     system.
### Comparison with Other Distributions:
- **Ubuntu**: Ubuntu is based on Debian but adds graphical user interfaces (like
  Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
  Microsoft services.
- **RHEL (Red Hat Enterprise Linux)**: RHEL is a derivative of Red Hat's
  distribution, which forked from Debian. It focuses more heavily on enterprise
  security and support compared to Debian.
### Why Debian?
Debian is popular because:
- It is free and open-source, making it accessible to anyone.
- It provides excellent documentation and tools for system administrators.
- It has a strong focus on stability and reliability.
- It supports a wide range of hardware and software needs.
If you're looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.
I haven't tried the same question with ChatGPT, but the History, Package Management and Comparison with Other Distributions sections are hilarious; they provided me with important data:
  • Now I know that APT is the real Debian Project Leader; what I don't know is why we have elections each year ;)
  • Linux distributions are recursive: Debian was a fork of Red Hat, which was a fork of Debian, which was a fork of Red Hat, ...
As everybody is testing the model, I will not talk more about the chat and the results; I just thought that this answer was really funny.

Step 5: Install the DeepSeek Coder and DeepSeek Coder v2 models from Open WebUI
As done before, to download the models I went to the "select a model" button and asked to download the deepseek-coder and deepseek-coder-v2 models (the default version of deepseek-coder is said to be really quick and small, while deepseek-coder-v2 is supposed to be better but slower and bigger, so I decided to install both for testing).

Step 6: Integrate Ollama with Neovim
For some months now I've been using GitHub Copilot with Neovim; I don't feel it has been very helpful in the general case, but it comes in handy when you need to perform repetitive tasks while programming. It seems that there are multiple Neovim plugins that support Ollama; for now I've installed and configured the codecompanion plugin in my config.lua file using packer:
require('packer').startup(function()
  [...]
  -- Codecompanion plugin
  use {
    "olimorris/codecompanion.nvim",
    requires = {
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
    },
  }
  [...]
end)
[...]
-- --------------------------------
-- BEG: Codecompanion configuration
-- --------------------------------
-- Module setup
local codecompanion = require('codecompanion').setup({
  adapters = {
    ollama = function()
      return require('codecompanion.adapters').extend('ollama', {
        schema = {
          model = {
            default = 'deepseek-coder-v2:latest',
          },
        },
      })
    end,
  },
  strategies = {
    chat = { adapter = 'ollama', },
    inline = { adapter = 'ollama', },
  },
})
-- --------------------------------
-- END: Codecompanion configuration
-- --------------------------------
I've tested it a little bit and it seems to work fine, but I'll have to test it more to see if it is really useful; I'll try to do that on future projects.

Conclusion
At a personal level I don't like nor trust AI systems, but as long as they are treated as tools, and not as a magical thing you must trust, they have their uses, and I'm happy to see open source tools like Ollama and models like DeepSeek available for everyone to use.

Russ Allbery: Review: The Sky Road

Review: The Sky Road, by Ken MacLeod
Series: Fall Revolution #4
Publisher: Tor
Copyright: 1999
Printing: August 2001
ISBN: 0-8125-7759-0
Format: Mass market
Pages: 406
The Sky Road is the fourth book in the Fall Revolution series, but it represents an alternate future that diverges after (or during?) the events of The Star Fraction. You probably want to read that book first, but I'm not sure reading The Stone Canal or The Cassini Division adds anything to this book other than frustration. Much more on that in a moment.

Clovis colha Gree is an aspiring doctoral student in history with a summer job as a welder. He works on the platform for the project, which the reader either slowly discovers from the book or quickly discovers from the cover is a rocket to get to orbit. As the story opens, he meets (or, as he describes it, is targeted by) a woman named Merrial, a tinker who works on the guidance system. The early chapters provide only a few hints about Clovis's world: a statue of the Deliverer on a horse that forms the backdrop of their meeting, the casual carrying of weapons, hints that tinkers are socially unacceptable, and some division between the white logic and the black logic in programming. Also, because this is a Ken MacLeod novel, everyone is obsessed with smoking and tobacco the way that the protagonists of erotica are obsessed with sex.

Clovis's story is one thread of this novel. The other, told in the alternating chapters, is the story of Myra Godwin-Davidova, chair of the governing Council of People's Commissars of the International Scientific and Technical Workers' Republic, a micronation embedded in post-Soviet Kazakhstan. Series readers will remember Myra's former lover, David Reid, as the villain of The Stone Canal and the head of the corporation Mutual Protection, which is using slave labor (sort of) to support a resurgent space movement and its attempt to take control of a balkanized Earth. The ISTWR is in decline and a minor power by all standards except one: They still have nuclear weapons.

So, first, we need to talk about the series divergence. I know from reading about this book on-line that The Sky Road is an alternate future that does not follow the events of The Stone Canal and The Cassini Division. I do not know this from the text of the book, which is completely silent about even being part of a series. More annoyingly, while the divergence in the Earth's future compared to The Cassini Division is obvious, I don't know what the Jonbar hinge is. Everything I can find on-line about this book is maddeningly coy. Wikipedia claims the divergence happens at the end of The Star Fraction. Other reviews and the Wikipedia talk page claim it happens in the middle of The Stone Canal. I do have a guess, but it's an unsatisfying one and I'm not sure how to test its correctness. I suppose I shouldn't care and instead take each of the books on their own terms, but this is the type of thing that my brain obsesses over, and I find it intensely irritating that MacLeod didn't explain it in the books themselves. It's the sort of authorial trick that makes me feel dumb, and books that gratuitously make me feel dumb are less enjoyable to read.

The second annoyance I have with this book is also only partly its fault. This series, and this book in particular, is frequently mentioned as good political science fiction that explores different ways of structuring human society. This was true of some of the earlier books in a surprisingly superficial way. Here, I would call it hogwash.
This book, or at least the Myra portion of it, is full of people doing politics in a tactical sense, but like the previous books of this series, that politics is mostly embedded in personal grudges and prior romantic relationships. Everyone involved is essentially an authoritarian whose ability to act as they wish is only contested by other authoritarians and is largely unconstrained by such things as persuasion, discussions, elections, or even theory. Myra and most of the people she meets are profoundly cynical and almost contemptuous of any true discussion of political systems. This is the trappings and mechanisms of politics without the intellectual debate or attempt at consensus, turning it into a zero-sum game won by whoever can threaten the others more effectively.

Given the glowing reviews I've seen in relatively political SF circles, presumably I am missing something that other people see in MacLeod's approach. Perhaps this level of pettiness and cynicism is an accurate depiction of what it's like inside left-wing political movements. (What an appalling condemnation of left-wing political movements, if so.) But many of the on-line reviews lead me to instead conclude that people's understanding of "political fiction" is stunted and superficial. For example, there is almost nothing Marxist about this book (it contains essentially no economic or class analysis whatsoever), but MacLeod uses a lot of Marxist terminology and sets half the book in an explicitly communist state, and this seems to be enough for large portions of the on-line commentariat to conclude that it's full of dangerous, radical ideas. I find this sadly hilarious given that MacLeod's societies tend, if anything, towards a low-grade libertarianism that would be at home in a Robert Heinlein novel. Apparently political labels are all that's needed to make political fiction; substance is optional.

So much for the politics. What's left in Clovis's sections is a classic science fiction adventure in which the protagonist has a radically different perspective from the reader and the fun lies in figuring out the world-building through the skewed perspective of the characters. This was somewhat enjoyable, but would have been more fun if Clovis had any discernible personality. Sadly he instead seems to be an empty receptacle for the prejudices and perspective of his society, which involve a lot of quasi-religious taboos and an essentially magical view of the world. Merrial is a more interesting character, although as always in this series the romance made absolutely no sense to me and seemed to be conjured by authorial fiat and weirdly instant sexual attraction.

Myra's portion of the story was the part I cared more about and was more invested in, aided by the fact that she's attempting to do something more interesting than launch a crewed space vehicle for no obvious reason. She at least faces some true moral challenges with no obviously correct response. It's all a bit depressing, though, and I found Myra's unwillingness to ground her decisions in a more comprehensive moral framework disappointing. If you're going to make a protagonist the ruler of a communist state, even an ironic one, I'd like to hear some real political philosophy, some theory of sociology and economics that she used to justify her decisions. The bits that rise above personal animosity and vibes were, I think, said better in The Cassini Division.

This series was disappointing, and I can't say I'm glad to have read it.
There is some small pleasure in finishing a set of award-winning genre books so that I can have a meaningful conversation about them, but the awards failed to find me better books to read than I would have found on my own. These aren't bad books, but the amount of enjoyment I got out of them didn't feel worth the frustration. Not recommended, I'm afraid.

Rating: 6 out of 10

26 January 2025

Russ Allbery: Review: Dark Matters

Review: Dark Matters, by Michelle Diener
Series: Class 5 #4
Publisher: Eclipse
Copyright: October 2019
ISBN: 0-6454658-6-0
Format: Kindle
Pages: 307
Dark Matters is the fourth book in the science fiction semi-romance Class 5 series. There are spoilers for all of the previous books, and although enough is explained that you could make sense of the story starting here, I wouldn't recommend it. As with the other books in the series, it follows new protagonists, but the previous protagonists make an appearance.

You will be unsurprised to hear that the Tecran kidnapped yet another Earth woman. The repetitiveness of the setup would be more annoying if the book took itself too seriously, but it doesn't, and so I mostly find it entertaining. I thought Diener was going to dodge the obvious series structure, but now I am wondering if we're going to end up with one woman per Class 5 ship after all.

Lucy is not on a ship, however, Tecran or otherwise. She is a captive in a military research facility on the Tecran home world. The Tecran are in very deep trouble given the events of the previous book and have decided that Lucy's existence is a liability. Only the intervention of some sympathetic Tecran scientists she partly befriended during her captivity lets her escape the facility before it's destroyed. Now she's alone, on an alien world, being hunted by the military.

It's not entirely the fault of this book that it didn't tell the story that I wanted to read. The setup for Dark Matters implies this book will see the arrival of consequences for the Tecran's blatant violations of the Sentient Beings Agreement. I was looking forward to a more political novel about how such consequences could be administered. This is the sort of problem that we struggle with in our politics: Collective punishment isn't acceptable, but there have to be consequences sufficient to ensure that a state doesn't repeat the outlawed behavior, and yet attempting to deliver those consequences feels like occupation and can set off worse social ruptures and even atrocities. I wasn't expecting that deep of political analysis of what is, after all, a lighthearted SF adventure series, but Diener has been willing to touch on hard problems. The ethics of violence has been an ongoing theme of the series.

Alas for me, this is not what we get. The arriving cavalry, in the form of a Class 5 and the inevitable Grih hunk to serve as the love interest du jour, quickly become more interested in helping Lucy elude pursuers (or escape captors) than in the delicate political situation. The conflict between the local population is a significant story element, but only as backdrop. Instead, this reads like a thriller or an action movie, complete with alien predators and a cinematic set piece finale. The political conflict between the Tecran and the United Council does reach a conclusion of sorts, but it's not that satisfying. Perhaps some of the political fallout will happen in future books, but here Diener simplifies the morality of the story in the climax and dodges out of the tricky ethical and social challenge of how to punish a sovereign nation. One of the things I like about this series is that it takes moral indignation seriously, but now that Diener has raised the (correct) complication that people have strong motivations to find excuses for the actions of their own side, I hope she can find a believable political resolution that isn't simple brute force.

This entry in the series wasn't bad, but it didn't grab me.
Lucy was fine as a protagonist; her ability to manipulate the Tecran into making mistakes fits the longer time she's had to study them and keeps her distinct from the other protagonists. But the small bit of politics we do see is unsatisfying and conveniently simplistic, and this book mostly degenerates into generic action sequences. Bane, the Class 5 ship featured in this story, is great when he's active, and I continue to be entertained by the obsession the Class 5 ships have with Earth women, but he's sidelined for too much of the story. I felt like Diener focused on the least interesting part of the story setup.

If you've read this far, there's nothing wrong with this entry. You'll probably want to keep reading. But it felt like a missed opportunity.

Followed in publication order by Dark Ambitions, a novella that returns to Rose to tell a side story. The next novel is Dark Class, in which we'll presumably see the last kidnapped Earth woman.

Rating: 6 out of 10

Otto Kekäläinen: 10 habits to help becoming a Debian Maintainer

Becoming a Debian maintainer is a journey that combines technical expertise, community collaboration, and continuous learning. In this post, I'll share 10 key habits that will both help you navigate the complexities of Debian packaging without getting lost, and also enable you to contribute more effectively to one of the world's largest open source projects.

1. Read and re-read the Debian Policy, the Developer's Reference and the git-buildpackage manual
Anyone learning Debian packaging and aspiring to become a Debian maintainer is likely to wade through a lot of documentation, only to realize that much of it is outdated or sometimes outright incorrect. Therefore, it is important to learn right from the start which sources are the most reliable and truly worth reading and re-reading. I recommend these documents, in order of importance:
  • The Debian Policy Manual: Describes the structure of the operating system, the package archive, and requirements for packages to be included in the Debian archive.
  • The Developer's Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
  • The git-buildpackage man pages: While the Policy focuses on the end result and is intentionally void of practical instructions on creating or maintaining Debian packages, the Developer's Reference goes into greater detail. However, it too lacks step-by-step instructions. For the exact commands, consult the man pages of git-buildpackage and its subcommands (e.g., gbp clone, gbp import-orig, gbp pq, gbp dch, gbp push); a minimal update cycle is sketched after this list. See also my post on Debian source package git branches and tags for easy-to-understand diagrams.
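A minimal sketch of such an update cycle with git-buildpackage (repository URL and options illustrative; the man pages are authoritative):

gbp clone https://salsa.debian.org/debian/hello.git && cd hello
gbp import-orig --uscan      # fetch and import the latest upstream release
gbp pq rebase                # refresh the patch queue against the new source
gbp dch                      # draft debian/changelog entries from git history
gbp buildpackage             # build the package; review, then gbp push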

2. Make reading man pages a habit
In addition to the above, try to make a habit of checking out the man page of every new tool you use to ensure you are using it as intended. The best place to read accurate and up-to-date documentation is manpages.debian.org. The manual pages are maintained alongside the tools by their developers, ensuring greater accuracy than any third-party documentation. If you are using a tool in the way the tool author documented, you can be confident you are doing the right thing, even if it wasn't explicitly mentioned in some third-party guide about Debian packaging best practices.

3. Read and write emails
While members of the Debian community have many channels of communication, the mailing lists are by far the most prominent. Asking questions on the appropriate list is a good way to get current advice from other people doing Debian packaging. Staying subscribed to lists of interest is also a good way to read about new developments as they happen. Note that every post is public and archived permanently, so the discussions on the mailing lists also form a body of documentation that can later be searched and referred to. Regularly writing short and well-structured emails on the mailing lists is great practice for improving technical communication skills, a useful ability in general. For Debian specifically, being active on mailing lists helps build a reputation that can later attract collaborators and supporters for more complex initiatives.

4. Create and use an OpenPGP key
Related to reputation and identity, OpenPGP keys play a central role in the Debian community. OpenPGP is used to various degrees to sign git commits and tags, sign and encrypt email, and most importantly to sign Debian packages so their origin can be verified. The process of becoming a Debian Maintainer and eventually a Debian Developer culminates in getting your OpenPGP key included in the Debian keyring, which is used to control who can upload packages into the Debian archive. The earlier you create a key and start using it to sign your work and build a reputation for that specific key, the better. Note that due to a recent schism in the OpenPGP standards working group, it is safest to create an OpenPGP key using GnuPG version 2.2.x (not 2.4.x), or using Sequoia-PGP.
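A minimal sketch of key creation with GnuPG 2.2.x (identity, algorithm and expiry choices are illustrative):

# Certification-capable ed25519 primary key with no expiry:
gpg --quick-generate-key "Jane Doe <jane@example.org>" ed25519 cert never
# Add a signing subkey (replace FPR with the new key's fingerprint):
gpg --quick-add-key FPR ed25519 sign never
gpg --list-secret-keys --keyid-format=long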

5. Integrate Salsa CI in all work
One reason Debian remains popular, even 30 years after its inception, is its culture of maintaining high standards. For a newcomer, learning all the quality assurance tools, such as Lintian, Piuparts, Adequate, various build variations, and reproducible builds, may be overwhelming. However, these tasks are easier to manage thanks to Salsa CI, the continuous integration pipeline in Debian that runs tests on every commit at salsa.debian.org. The earlier you activate Salsa CI in the package repository you are working on, the faster you will achieve high quality in your package with fewer missteps. You can also further customize a package-specific salsa-ci.yml to get more testing coverage.
[Image: Example Salsa CI pipeline with customizations]
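Activating it is typically a one-file change: commit a debian/salsa-ci.yml that includes the Salsa CI team's pipeline recipe, and point the project's CI configuration path at that file in the Salsa project settings. The include URL below is the commonly documented one; verify it against the current Salsa CI documentation:

cat > debian/salsa-ci.yml <<'EOF'
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
EOF
git add debian/salsa-ci.yml && git commit -m 'Enable Salsa CI'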

6. Fork on Salsa and use draft Merge Requests to solicit feedback
All modern Debian packages are hosted on salsa.debian.org. If you want to make a change to any package, it is easy to fork, make an initial attempt at the change, and publish it as a draft Merge Request (MR) on Salsa to solicit feedback. People might have surprising reasons to object to the change you propose, or they might need time to get used to the idea before agreeing to it. Also, some people might object to a vague idea out of suspicion but agree once they see the exact implementation. There may also be a surprising number of people supporting your idea, and if there is an MR, they have a place to show their support. Don't expect every Merge Request to be accepted. However, proposing an idea as running code in an MR is far more effective than raising the idea on a mailing list or in a bug report. Get into the habit of publishing plenty of merge requests to solicit feedback and drive discussions toward consensus.

7. Use git rebase frequently
Linear git history is much easier to read. The ease of reading git log and git blame output is vital in Debian, where packages often have updates from multiple people spanning many years, even decades. Debian packagers likely spend more time than the average software developer reading git history. Make sure you master git commands such as gitk --all, git citool --amend, git commit -a --fixup <commit id>, git rebase -i --autosquash <target branch>, git cherry-pick <commit id 1> <id 2> <id 3>, and git pull --rebase. If rebasing is not done on your initiative, rest assured others will ask you to do it. Thus, if the commands above are familiar, rebasing will be quick and easy for you.
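As a small worked example of the fixup-and-autosquash flow those commands enable (commit id and branch name illustrative):

git commit -a --fixup=1a2b3c4              # record a fix for an earlier commit
git rebase -i --autosquash debian/master   # fold fixup! commits into their targets
git push --force-with-lease                # update the MR branch safely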

8. Reviews: give some, get some
In open source, the larger a project becomes, the more it attracts contributions, and the bottleneck for its growth isn't how much code developers can create but how much code submissions can be properly reviewed. At the time of writing, the main Salsa group Debian has over 800 open merge requests pending reviews and approvals. Feel free to read and comment on any merge request you find. You don't have to be a subject matter expert to provide valuable feedback. Even if you don't have specific feedback, your comment as another human acknowledging that you read the MR and found no issues is viewed positively by the author. Besides, if you spend enough time reviewing MRs in a specific domain, you will eventually become an expert in it. Code reviews are not just about providing feedback to the submitter; they are also great learning opportunities for the reviewer. As a rule of thumb, you should review at least twice as many merge requests as you submit yourself.

9. Improve Debian by improving upstream
It is common that while packaging software for Debian, bugs are uncovered and patched in Debian. Do not forget to submit the fixes upstream, and add a Forwarded field to the patch file in debian/patches (an example header follows below)! As the person building and packaging something in Debian, you automatically become an authority on that software, and the upstream is likely glad to receive your improvements. While submitting patches upstream is a bit of work initially, getting improvements merged upstream eventually saves time for everyone and makes packaging in Debian easier, as there will be fewer patches to maintain with each new upstream release.
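For reference, a hypothetical DEP-3 style header at the top of a file in debian/patches, recording where the patch was forwarded (all values illustrative):

Description: Fix build failure with GCC 14
Author: Jane Doe <jane@example.org>
Forwarded: https://example.org/upstream/project/merge_requests/42
Last-Update: 2025-01-26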

10. Don't hold any habits too firmly
Last but not least: Once people learn a specific way of working, they tend to stick to it for decades. Learning how to create and maintain Debian packages requires significant effort, and people tend to stop learning once they feel they've reached a sufficient level. This tendency to get stuck in a local optimum is understandable and natural, but try to resist it. It is likely that better techniques will evolve over time, so stay humble and re-evaluate your beliefs and practices every few years.

Mastering these habits takes time, but each small step brings you closer to making a meaningful impact on Debian. By staying curious, collaborative, and adaptable, you can ensure your contributions stand the test of time, just like Debian itself. Good luck on your journey toward becoming a Debian Maintainer!

24 January 2025

Sam Hartman: Feeling Targeted: Executive Order Ending Wasteful DEIA Efforts

As most here know, I'm totally blind. One of my roles involves a contract for the US Government, under which I have a government email account. The department recently received a message talking about our work to end, to the maximum extent permitted by law, all diversity, equity, inclusion, and accessibility efforts in the government in accordance with the recently signed executive order. We are all reminded that if we timely identify the contracts and positions that are related to these efforts, there will be no consequences.

There are a lot of times in my life when I have felt marginalized: frustrated and angry that people weren't interested in working with me to make the small changes that would help me fit in. As an example with this government job, I asked to have access to a screen reader so that I could use my computer. My preferred adaptive software was not approved, even though it was thousands of dollars cheaper than the option the government wanted and could have been installed instantly rather than waiting for a multi-week ordering process. When the screen reader eventually became available, the government-provided installer was not accessible: a blind person could not use it. When I asked for help, the government added an additional multi-week delay because they weren't sure that the license management technology for the software they had chosen met the government's security and privacy policies. Which is to say that even with people actively working toward accessibility, sharing a commitment that accessibility is important, we have a lot of work to do.

I feel very targeted at the current time. Now we are removing as many of the resources that help me be effective and feel welcome as we can. Talking about the lack of consequences now is just a way to remind everyone that there will be consequences later and get the fear going. The witch hunt is coming, and if people do a good enough job of turning in all the people who could help me feel welcome, they won't face consequences. Yes, I understand that the Americans with Disabilities Act is still law, but its effectiveness will be very different in a climate where you need to eliminate accessibility positions to avoid consequences than in a climate where accessibility is a goal.


23 January 2025

Dirk Eddelbuettel: qlcal 0.0.14 on CRAN: Calendar Updates

The fourteenth release of the qlcal package arrived at CRAN today, following the QuantLib 1.37 release two days ago. qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e., business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page. This release synchronizes qlcal with QuantLib 1.37 (made this week), and moves a demo/ file to examples/.

Changes in version 0.0.14 (2025-01-23)
  • Synchronized with QuantLib 1.37 released two days ago
  • Calendar updates for United States and New Zealand
  • The demo/ file is now in inst/examples/

This update includes the January 9, 2025, holiday for the memorial of President Carter that was observed at the NYSE, as shown by the allUScalendars.R example:
edd@rob:~/git/qlcal-r/inst/examples(master)$ Rscript allUScalendars.R 
           LiborImpact NYSE GovernmentBond NERC FederalReserve SOFR
2025-01-01        TRUE TRUE           TRUE TRUE           TRUE TRUE
2025-01-09          NA TRUE             NA   NA             NA   NA
2025-01-20        TRUE TRUE           TRUE   NA           TRUE TRUE
2025-02-17        TRUE TRUE           TRUE   NA           TRUE TRUE
2025-04-18          NA TRUE           TRUE   NA             NA TRUE
2025-05-26        TRUE TRUE           TRUE TRUE           TRUE TRUE
2025-06-19        TRUE TRUE           TRUE   NA           TRUE TRUE
2025-07-04        TRUE TRUE           TRUE TRUE           TRUE TRUE
2025-09-01        TRUE TRUE           TRUE TRUE           TRUE TRUE
2025-10-13        TRUE   NA           TRUE   NA           TRUE TRUE
2025-11-11        TRUE   NA           TRUE   NA           TRUE TRUE
2025-11-27        TRUE TRUE           TRUE TRUE           TRUE TRUE
2025-12-25        TRUE TRUE           TRUE TRUE           TRUE TRUE
edd@rob:~/git/qlcal-r/inst/examples(master)$ 
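The same data can be pulled for a single calendar from the shell; a sketch using setCalendar() and getHolidays() as described in the qlcal README (verify the calendar name locally):

Rscript -e 'library(qlcal); setCalendar("UnitedStates/NYSE"); print(getHolidays(as.Date("2025-01-01"), as.Date("2025-12-31")))'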
Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 January 2025

Dominique Dumont: How we solved storage API throttling on our Azure Kubernetes clusters

Hi! This issue was quite puzzling, so I'm sharing how we investigated it. I hope it can be useful for you.

My client informed me that he was no longer able to install new instances of his application. k9s showed that only some pods could not be created: the ones that created persistent volumes (PVs). The description of these pods showed an HTTP error 429 when creating pods: new PVCs could not be created because we were throttled by the Azure storage API. This issue was confirmed by the Azure diagnostic console on Kubernetes (menu Diagnose and solve problems → Cluster and Control Plane Availability and Performance → Azure Resource Request Throttling). We had a lot of throttling:
[Image: 2025-01-18_11-01-k8s-throttles.png]
Which were explained by the high call rate:
[Image: 2025-01-18_11-01-k8s-calls.png]
The first clue was found at the bottom of Azure diagnostic page:
[Image: 2025-01-18_11-27-throttles-by-user-agent.png]
According to this page, throttling is done by services whose user agent is:
Go/go1.23.1 (amd64-linux) go-autorest/v14.2.1 Azure-SDK-For-Go/v68.0.0
storage/2021-09-01microsoft.com/aks-operat azsdk-go-armcompute/v1.0.0 (go1.22.3; linux)
The main information is Azure-SDK-For-Go, which means the program making all these calls to the storage API is written in Go. All our services are written in Typescript or Rust, so they are not suspect. That leaves the controllers running in the kube-system namespace. I could not find anything suspect in the logs of these services.

At that point I was convinced that a component in the Kubernetes control plane was making all those calls. Unfortunately, AKS is managed by Microsoft and I don't have access to the control plane logs. However, we realized that we had quite a lot of volumesnapshots that are created in our clusters using k8s-scheduled-volume-snapshotter. We suspected that the Kubernetes reconciliation loop was throttled when checking the status of all these snapshots. Maybe so, but we also had the same issues and throttle rates on preprod and prod, where the numbers of snapshots were quite different. We tried to get more information using the Azure console on our snapshot account, but it was also broken by the throttling issue.

We were so puzzled that we decided to try Léodagan's advice ("tout crémer pour repartir sur des bases saines", loosely translated as "burn everything down to start from scratch") and we destroyed our dev cluster piece by piece while checking whether the throttling stopped. First, we removed all our applications: no change. Then all ancillary components like rabbitmq and cert-manager were removed: no change. Then we tried to remove the namespace containing our applications. But we faced another issue: Kubernetes was unable to remove the namespace because it could not destroy some PVCs and volumesnapshots. That was actually good news, because it meant that we were close to the actual issue.

We managed to destroy the PVCs and volumesnapshots by removing their finalizers. Finalizers are a kind of marker that tells Kubernetes that something needs to be done before actually deleting a resource. The finalizers were removed with a command like:
kubectl patch volumesnapshots ${volumesnapshot} \
  -p '{"metadata":{"finalizers":null}}' --type merge
Then, we got the first progress: the throttling and high call rate stopped on our dev cluster. To make sure that the snapshots were the issue, we re-installed the ancillary components and our applications. Everything was copacetic. So, the problem was indeed with PVCs and snapshots.

Even though we have backups outside of Azure, we weren't really thrilled at trying Léodagan's method on our prod cluster, so we looked for a better fix to try on our preprod cluster. Poking around in PVCs and volumesnapshots, I finally found this error message in the description of a volumesnapshotcontent:
Code="ShareSnapshotCountExceeded" Message="The total number of snapshots
for the share is over the limit."
The number of snapshots found in our cluster was not that high, so I wanted to check the snapshots present in our storage account using the Azure console, which was still broken. Fortunately, the Azure CLI is able to retry HTTP calls when getting 429 errors. I managed to get a list of snapshots with:
az storage share list --account-name [redacted] --include-snapshots \
    | tee preprod-list.json
There, I found a lot of snapshots dating back to 2024. These were no longer managed by Kubernetes and should have been cleaned up. That was our smoking gun: I guess we had a chain of events where stale snapshots accumulated until the share snapshot limit was exceeded. To make things worse, k8s-scheduled-volume-snapshotter creates new snapshots when it cannot list the old ones, so we had 4 new snapshots per day instead of one. Once we understood the chain of events, fixing the issue was not too difficult (but quite long):
  1. stop k8s-scheduled-volume-snapshotter by disabling its cron job
  2. delete all volumesnapshots and volume snapshots contents from k8s.
  3. since Azure API was throttled, we also had to remove their finalizers
  4. delete all snapshots from Azure using the az command and a Perl script (this step took several hours; a sketch follows this list)
  5. re-enable k8s-scheduled-volume-snapshotter
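Step 4 can also be sketched with the Azure CLI alone. This is illustrative only: the actual cleanup used a Perl script, and the query syntax and flags should be checked against your az version before running anything destructive:

ACCOUNT=redacted
# List share snapshots and delete the stale 2024 ones, one by one:
az storage share list --account-name "$ACCOUNT" --include-snapshots \
    --query '[?snapshot].{name:name, snap:snapshot}' -o tsv |
while read -r name snap; do
  case "$snap" in 2024-*) ;; *) continue;; esac
  az storage share delete --account-name "$ACCOUNT" \
      --name "$name" --snapshot "$snap"
done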
After these steps, preprod was back to normal. I'm now applying the same recipe on prod. We still don't know why we had all these stale snapshots. It may have been a human error or a bug in k8s-scheduled-volume-snapshotter. Anyway, we will take steps to avoid this problem in the future.

My name is Dominique Dumont, and I'm a devops freelancer. You can find the devops and audit services I propose on my website, or reach out to me on LinkedIn.

All the best

17 January 2025

Russell Coker: Systemd Hardening and Sending Mail

A feature of systemd is the ability to reduce the access that daemons have to the system. The restrictions include access to certain directories, system calls, capabilities, and more. The systemd.exec(5) man page describes them all [1]. To see an overview of the security of daemons run "systemd-analyze security", and to get details of one particular daemon run a command like "systemd-analyze security mon.service". I created a Debian wiki page for a systemd-analyze security goal [2]. At this time release goals aren't a serious thing for Debian, so this won't result in release critical bug reports, but it is still something we can aim for.

For a simple daemon (e.g. BIND, dhcpd, and syslogd) this isn't difficult to do. It might be difficult to understand the implications of some changes (especially when restricting system calls), but you can do some quick tests. The functionality of such programs has a limited scope, and once you get it basically working it's done.

For some daemons it's harder. Network-Manager is one of the well known slightly more difficult cases, as it could do things like starting a VPN connection. The larger scope and the use of plugins makes it difficult to test the combinations. The systemd restrictions apply to child processes too, unlike restrictions by SE Linux and AppArmor which permit a child process to run in a different security context. The messages when a daemon fails due to systemd restrictions are usually unclear, which makes things harder to set up and makes it more important to get it right.

My mon package (which I forked upstream as etbe-mon [3]) is one of the difficult daemons, as local tests can involve probing large parts of the system. But I have got that working reasonably well for most cases.

I have a bug report about running mon with Exim [4]. The problem with this is that Exim has a single process model, which means that the process doing local delivery can be a child of the process that initially received the message. So the main mon process needs all the access for delivering mail (writing to /home etc). This also means that every other child of mon will get such access, including programs that receive untrusted data from the Internet. Most of the extra access needed by Exim is not a problem, but /home access is a potential risk. It also means that more effort is needed when reviewing the access control.

The problem with this Exim design is that it applies to many daemons. Every daemon that sends email, or that potentially could send email in some configuration, needs extra access to be granted. Can Exim be configured to have its sendmail -T type operation just write a file in a spool directory for another program to process? Do we need to grant permissions to most of the system just for Exim?
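As a concrete starting point, hardening a service usually means iterating on a drop-in and re-checking the score. The directives below are generic examples from systemd.exec(5), not a vetted profile for mon (which, as discussed, needs /home access when delivering mail via Exim):

systemd-analyze security mon.service    # initial score and findings
sudo systemctl edit mon.service         # opens a drop-in; for example:
#   [Service]
#   NoNewPrivileges=yes
#   ProtectSystem=strict
#   SystemCallFilter=@system-service
sudo systemctl restart mon.service
systemd-analyze security mon.service    # re-check after each change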

13 January 2025

Freexian Collaborators: Monthly report about Debian Long Term Support, December 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In December, 19 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 14.0h (out of 14.0h assigned).
  • Adrian Bunk did 47.75h (out of 53.0h assigned and 47.0h from previous period), thus carrying over 52.25h to the next month.
  • Andrej Shadura did 6.0h (out of 17.0h assigned and -7.0h from previous period after hours given back), thus carrying over 4.0h to the next month.
  • Bastien Roucariès did 22.0h (out of 22.0h assigned).
  • Ben Hutchings did 15.0h (out of 0.0h assigned and 18.0h from previous period), thus carrying over 3.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.0h (out of 17.0h assigned and 9.0h from previous period), thus carrying over 3.0h to the next month.
  • Emilio Pozuelo Monfort did 32.25h (out of 40.5h assigned and 19.5h from previous period), thus carrying over 27.75h to the next month.
  • Guilhem Moulin did 22.5h (out of 9.75h assigned and 12.75h from previous period).
  • Jochen Sprickerhof did 2.0h (out of 3.5h assigned and 6.5h from previous period), thus carrying over 8.0h to the next month.
  • Lee Garrett did 8.5h (out of 14.75h assigned and 45.25h from previous period), thus carrying over 51.5h to the next month.
  • Lucas Kanashiro did 32.0h (out of 10.0h assigned and 54.0h from previous period), thus carrying over 32.0h to the next month.
  • Markus Koschany did 40.0h (out of 20.0h assigned and 20.0h from previous period).
  • Roberto C. Sánchez did 13.5h (out of 6.75h assigned and 17.25h from previous period), thus carrying over 10.5h to the next month.
  • Santiago Ruano Rincón did 18.75h (out of 24.75h assigned and 0.25h from previous period), thus carrying over 6.25h to the next month.
  • Sean Whitton did 6.0h (out of 2.0h assigned and 4.0h from previous period).
  • Sylvain Beucler did 10.5h (out of 21.5h assigned and 38.5h from previous period), thus carrying over 49.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).

Evolution of the situation In December, we have released 29 DLAs. The LTS Team has published updates to several notable packages. Contributor Guilhem Moulin published an update of php7.4, a widely-used open source general purpose scripting language, which addressed denial of service, authorization bypass, and information disclosure vulnerabilities. Contributor Lucas Kanashiro published an update of clamav, an antivirus toolkit for Unix and Linux, which addressed denial of service and authorization bypass vulnerabilities. Finally, contributor Tobias Frost published an update of intel-microcode, the microcode for Intel microprocessors, which will help ensure that processor hardware is protected against several local privilege escalation and local denial of service vulnerabilities. Beyond our customary LTS package updates, the LTS Team has made contributions to Debian's stable bookworm release and its experimental section. Notably, contributor Lee Garrett published a stable update of dnsmasq. The LTS update was previously published in November, and in December Lee continued working to bring the same fixes (addressing the high profile KeyTrap and NSEC3 vulnerabilities) to the dnsmasq package in Debian bookworm. This package was accepted for inclusion in the Debian 12.9 point release scheduled for January 2025. Additionally, contributor Sean Whitton provided assistance, via upload sponsorships, to the Debian maintainers of xen. This assistance resulted in two uploads of xen into Debian's experimental section, which will contribute to the next Debian stable release having a version of xen with better long-term support from the upstream development team.

Thanks to our sponsors Sponsors that joined recently are in bold.

12 January 2025

Dirk Eddelbuettel: Rcpp 1.0.14 on CRAN: Regular Semi-Annual Update

rcpp logo The Rcpp Core Team is once again thrilled, pleased, and chuffed (am I doing this right for LinkedIn?) to announce a new release (now at 1.0.14) of the Rcpp package. It arrived on CRAN earlier today, and has since been uploaded to Debian. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow too. The release was only uploaded yesterday, and as always got flagged because of the grandfathered .Call(symbol) as well as for the URL to the Rcpp book (which has remained unchanged for years) "failing". My email reply was promptly dealt with under European morning hours and by the time I got up the submission was in state "waiting" over a single reverse-dependency failure which is also spurious, appears on some systems and not others, and is also not new. Imagine that: nearly 3000 reverse dependencies and only one (spurious) change to worse. Solid testing seems to help. My thanks as always to the CRAN team for responding promptly. This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. This time we also needed a one-off hotfix release 1.0.13-1: we had (accidentally) conditioned an upcoming R change on 4.5.0, but it already came with 4.4.2 so we needed to adjust our code. As a reminder, we do of course make interim snapshot dev or rc releases available via the Rcpp drat repo as well as the r-universe page and repo and strongly encourage their use and testing; I run my systems with these versions which tend to work just as well, and are also fully tested against all reverse-dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2977 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 60.8% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 93.7 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1947 (JSS, 2011) and 354 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 676. This release is primarily incremental as usual, generally preserving existing capabilities faithfully while smoothing out corners and / or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The move towards a more standardized approach for the C API of R once again led to a few changes; Kevin once again did most of these PRs. Other contributed PRs include Gábor permitting builds on yet another BSD variant, Simon Guest correcting sourceCpp() to work on read-only files, Marco Colombo correcting a (surprisingly large) number of vignette typos, Iñaki rebuilding some documentation files that tickled (false) alerts, and I took care of a number of other maintenance items along the way. The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.0.14 (2025-01-11)
  • Changes in Rcpp API:
    • Support for user-defined databases has been removed (Kevin in #1314 fixing #1313)
    • The SET_TYPEOF function and macro is no longer used (Kevin in #1315 fixing #1312)
    • An erroneous cast to int affecting large return objects has been removed (Dirk in #1335 fixing #1334)
    • Compilation on DragonFlyBSD is now supported (Gábor Csárdi in #1338)
    • Use read-only VECTOR_PTR and STRING_PTR only with R 4.5.0 or later (Kevin in #1342 fixing #1341)
  • Changes in Rcpp Attributes:
    • The sourceCpp() function can now handle input files with read-only modes (Simon Guest in #1346 fixing #1345)
  • Changes in Rcpp Deployment:
    • One unit test for arm64 macOS has been adjusted; a macOS continuous integration runner was added (Dirk in #1324)
    • Authors@R is now used in DESCRIPTION as mandated by CRAN, the Rcpp.package.skeleton() function also creates it (Dirk in #1325 and #1327)
    • A single datetime format test has been adjusted to match a change in R-devel (Dirk in #1348 fixing #1347)
  • Changes in Rcpp Documentation:
    • The Rcpp Modules vignette was extended slightly following #1322 (Dirk)
    • Pdf vignettes have been regenerated under Ghostscript 10.03.1 to avoid a false positive by a Windows virus scanner (Iñaki in #1331)
    • A (large) number of (old) typos have been corrected in the vignettes (Marco Colombo in #1344)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Sahil Dhiman: Prosody Certificate Management With Nginx and Certbot

I have a self-hosted XMPP chat server through Prosody. Earlier, I struggled with certificate renewal and generation for Prosody because I have Nginx (and a bunch of other services) running on the same server, which binds to Port 80. Due to this, Certbot wasn't able to auto-renew (through HTTP validation) for domains managed by Prosody. Now, I have cobbled together a solution to keep both Nginx and Prosody happy. This is how I did it:
server {
      listen 80;
      listen [::]:80;
      server_name PROSODY.DOMAIN;
      root <ANY_NGINX_WRITABLE_LOCATION>;
      location ~ /.well-known/acme-challenge {
         allow all;
      }
}
0 0 * * * prosodyctl --root cert import /etc/letsencrypt/live/PROSODY.DOMAIN
Explanation from Prosody docs:
Certificates and their keys are copied to /etc/prosody/certs (can be changed with the certificates option) and then it signals Prosody to reload itself. root lets prosodyctl write to paths that may not be writable by the prosody user, as is common with /etc/prosody.
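For completeness, the initial certificate can be requested through the same Nginx webroot; a hedged sketch of the Certbot invocation (the webroot plugin is my assumption, as the post does not show this step):
certbot certonly --webroot -w <ANY_NGINX_WRITABLE_LOCATION> -d PROSODY.DOMAIN
After that, the cron entry above keeps the certificate renewed and imported into Prosody.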

10 January 2025

Sergio Talens-Oliag: Testing New User Tools

In recent weeks I've had some time to scratch my own itch on matters related to tools I use daily on my computer, namely the desktop / window manager and my text editor of choice. This post is a summary of what I tried, how it worked out, and my short and medium-term plans related to them.

Desktop / WM
On the desktop / window manager front I've been using Cinnamon on Debian and Ubuntu systems since Gnome 3 was published (I never liked version 3, so I decided to move to something similar to Gnome 2, including the keyboard shortcuts). In fact I've never been a fan of Desktop environments; before Gnome I used OpenBox and IceWM because they were a lot faster than desktop systems on my hardware at the time, and I was using them only to place one or two windows on multiple workspaces using mainly the keyboard for my interactions (well, except for the web browsers and the image manipulation programs). Although I was comfortable using Cinnamon, some years ago I tried to move to i3, a tiling window manager for X11 that looked like a good choice for me, but I didn't have much time to play with it and never used it enough to make me productive with it (I didn't prepare a complete configuration nor had enough time to learn the new shortcuts, so I went back to Cinnamon and never tried again). Anyway, some weeks ago I updated my work machine OS (it was using Ubuntu 22.04 LTS and I updated it to the 24.04 LTS version) and the Cinnamon systray applet stopped working as it used to do (in fact I still have to restart Cinnamon after starting a session to make it work) and, as I had some time, I decided to try a tiling window manager again, but now I decided to go for SwayWM, as it uses Wayland instead of X11.

Sway configuration
On my ~/.config/sway/config I tuned some things:
  • Set fuzzel as the application launcher.
  • Installed manually the shikane application and created a configuration to be executed always when sway is started / reloaded (I adjusted my configuration with wdisplays and used shikanectl to save it).
  • Added support for running the xdg-desktop-portal-wlr service.
  • Enabled the swayidle command to lock the screen after some time of inactivity.
  • Adjusted the keyboard to use the es key map.
  • Added some keybindings to make my life easier, including the use of grim and swappy to take screenshots.
  • Configured waybar as the environment bar.
  • Added a shell script to start applications when sway is started (it uses swaymsg to execute background commands and the i3-toolwait script to wait for the application windows to appear before continuing; the script is shown below):
    #!/bin/sh
    # VARIABLES
    CHROMIUM_LOCAL_STATE="$HOME/.config/google-chrome/Local State"
    I3_TOOLWAIT="$HOME/.config/sway/scripts/i3-toolwait"
    # Functions
    chromium_profile_dir() {
      jq -r ".profile.info_cache | to_entries | map({(.value.name): .key}) | add | .\"$1\" // \"\"" "$CHROMIUM_LOCAL_STATE"
    }
    # MAIN
    IGZ_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@intelygenz.com")"
    OURO_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@nxr.global")"
    PERSONAL_PROFILE_DIR="$(chromium_profile_dir "stalens@gmail.com")"
    # Common programs
    swaymsg "exec nextcloud --background"
    swaymsg "exec nm-applet"
    # Run spotify on the first workspace (it is mapped to the laptop screen)
    swaymsg -q "workspace 1"
    ${I3_TOOLWAIT} "spotify"
    # Run tmux on the second workspace
    swaymsg -q "workspace 2"
    ${I3_TOOLWAIT} -- foot tmux a -dt sto
    wp_num="3"
    if [ "$OURO_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m ouro-browser -- google-chrome --profile-directory="$OURO_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    if [ "$IGZ_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m igz-browser -- google-chrome --profile-directory="$IGZ_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    if [ "$PERSONAL_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m personal-browser -- google-chrome --profile-directory="$PERSONAL_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    # Open the browser without setting the profile directory if none was found
    if [ "$wp_num" = "3" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} google-chrome
      wp_num="$((wp_num+1))"
    fi
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} evolution
    wp_num="$((wp_num+1))"
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} slack
    wp_num="$((wp_num+1))"
    # Open a private browser and a console in the last workspace
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} -- google-chrome --incognito
    ${I3_TOOLWAIT} foot
    # Go back to the second workspace for keepassxc
    swaymsg "workspace 2"
    ${I3_TOOLWAIT} keepassxc

Conclusion
After using Sway for some days I can confirm that it is a good choice for me, but some of the components needed to make it work as I want are too new and not available on the Ubuntu 24.04 LTS repositories, so I decided to go back to Cinnamon and try Sway again in the future, although I added more workspaces to my setup (now they are only available on the main monitor, the laptop screen is fixed while there is a big monitor connected), added some additional keyboard shortcuts and installed or updated some applets.

Text editor
When I started using Linux many years ago I used vi/vim and emacs as my text editors (vi for plain text and emacs for programming and editing HTML/XML), but eventually I moved to vim as my main text editor and I've been using it since (well, I moved to neovim some time ago, although I kept my old vim configuration). To be fair I'm not as expert as I could be with vim, but I'm productive with it and it has many plugins that make my life easier on my machines, while keeping my ability to edit text and configurations on any system that has a vi compatible editor installed. For work reasons I tried to use Visual Studio Code last year, but I've never really liked it and almost everything I do with it I can do with neovim (i.e. I even use copilot with it). Besides, I'm a heavy terminal user (I use tmux locally and via ssh) and I like to be able to use my text editor on my shell sessions, and code does not work like that. The only annoying thing about vim/neovim is its configuration (well, the problem is that I have a very old one and probably should spend some time fixing and updating it), but, as I said, it's been working well for me for a long time, so I never really had the motivation to do it. Anyway, after finishing my desktop tests I saw that I had the Helix editor installed for some time but I never tried it, so I decided to give it a try and see if it could be a good replacement for neovim on my environments (the only drawback is that as it is not vi compatible, I would need to switch back to vi mode when working on remote systems, but I guess I could live with that). I ran the helix tutorial and I liked it, so I decided to configure and install the Language Servers I can probably take advantage of on my daily work on my personal and work machines and see how it works.

Language server installations
A lot of manual installations are needed to get the language servers working; what I did on my machines is more or less the following:
# AWK
sudo npm i -g 'awk-language-server@>=0.5.2'
# BASH
sudo apt-get install shellcheck shfmt
sudo npm i -g bash-language-server
# C/C++
sudo apt-get install clangd
# CSS, HTML, ESLint, JSON, SCSS
sudo npm i -g vscode-langservers-extracted
# Docker
sudo npm install -g dockerfile-language-server-nodejs
# Docker compose
sudo npm install -g @microsoft/compose-language-service
# Helm
app="helm_ls_linux_amd64"
url="$(
  curl -s https://api.github.com/repos/mrjosh/helm-ls/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/helm_ls
sudo install /tmp/helm_ls /usr/local/bin
rm /tmp/helm_ls
# Markdown
app="marksman-linux-x64"
url="$(
  curl -s https://api.github.com/repos/artempyanykh/marksman/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/marksman
sudo install /tmp/marksman /usr/local/bin
rm /tmp/marksman
# Python
sudo npm i -g pyright
# Rust
rustup component add rust-analyzer
# SQL
sudo npm i -g sql-language-server
# Terraform
sudo apt-get install terraform-ls
# TOML
cargo install taplo-cli --locked --features lsp
# YAML
sudo npm install --global yaml-language-server
# JavaScript, TypeScript
sudo npm install -g typescript-language-server typescript
sudo npm install -g --save-dev --save-exact @biomejs/biome

Helix configuration
The helix configuration is done with a couple of toml files that are placed in the ~/.config/helix directory; the config.toml file I used is this one:
theme = "solarized_light"
[editor]
line-number = "relative"
mouse = false
[editor.statusline]
left = ["mode", "spinner"]
center = ["file-name"]
right = ["diagnostics", "selections", "position", "file-encoding", "file-line-ending", "file-type"]
separator = " "
mode.normal = "NORMAL"
mode.insert = "INSERT"
mode.select = "SELECT"
[editor.cursor-shape]
insert = "bar"
normal = "block"
select = "underline"
[editor.file-picker]
hidden = false
[editor.whitespace]
render = "all"
[editor.indent-guides]
render = true
character = " " # Some characters that work well: " ", " ", " ", " "
skip-levels = 1
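As an aside, once the language servers from the section above are installed, hx --health can confirm which ones Helix actually detects (a standard Helix command-line option; suggesting it here is mine, the post does not mention it):
hx --health
hx --health toml   # per-language view, e.g. for the taplo setup below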
And to configure the language servers I used the following language-servers.toml file:
[[language]]
name = "go"
auto-format = true
formatter = { command = "goimports" }
[[language]]
name = "javascript"
language-servers = [
  "typescript-language-server", # optional
  "vscode-eslint-language-server",
]
[language-server.rust-analyzer.config.check]
command = "clippy"
[language-server.sql-language-server]
command = "sql-language-server"
args = ["up", "--method", "stdio"]
[[language]]
name = "sql"
language-servers = [ "sql-language-server" ]
[[language]]
name = "hcl"
language-servers = [ "terraform-ls" ]
language-id = "terraform"
[[language]]
name = "tfvars"
language-servers = [ "terraform-ls" ]
language-id = "terraform-vars"
[language-server.terraform-ls]
command = "terraform-ls"
args = ["serve"]
[[language]]
name = "toml"
formatter = { command = "taplo", args = ["fmt", "-"] }
[[language]]
name = "typescript"
language-servers = [
  "typescript-language-server",
  "vscode-eslint-language-server",
]

Neovim configuration
After a little while I noticed that I was going to need some time to get used to helix, and the most interesting things for me were the easy configuration and the language server integrations; but as I am already comfortable with neovim and had just installed the language server support tools on my machines, I just needed to configure them for neovim and I can keep using it for a while. As I said, my configuration is old; to configure neovim I have the following init.vim file in my ~/.config/nvim folder:
set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath=&runtimepath
source ~/.vim/vimrc
" load lua configuration
lua require('config')
With that configuration I keep my old vimrc (it is a little bit messy, but it works) and I use a lua configuration file for the language servers and some additional neovim plugins in the ~/.config/nvim/lua/config.lua file:
-- -----------------------
-- BEG: LSP Configurations
-- -----------------------
-- AWK (awk_ls)
require'lspconfig'.awk_ls.setup{}
-- Bash (bashls)
require'lspconfig'.bashls.setup{}
-- C/C++ (clangd)
require'lspconfig'.clangd.setup{}
-- CSS (cssls)
require'lspconfig'.cssls.setup{}
-- Docker (dockerls)
require'lspconfig'.dockerls.setup{}
-- Docker Compose
require'lspconfig'.docker_compose_language_service.setup{}
-- Golang (gopls)
require'lspconfig'.gopls.setup{}
-- Helm (helm_ls)
require'lspconfig'.helm_ls.setup{}
-- Markdown
require'lspconfig'.marksman.setup{}
-- Python (pyright)
require'lspconfig'.pyright.setup{}
-- Rust (rust-analyzer)
require'lspconfig'.rust_analyzer.setup{}
-- SQL (sqlls)
require'lspconfig'.sqlls.setup{}
-- Terraform (terraformls)
require'lspconfig'.terraformls.setup{}
-- TOML (taplo)
require'lspconfig'.taplo.setup{}
-- Typescript (ts_ls)
require'lspconfig'.ts_ls.setup{}
-- YAML (yamlls)
require'lspconfig'.yamlls.setup{
  settings = {
    yaml = {
      customTags = { "!reference sequence" }
    }
  }
}
-- -----------------------
-- END: LSP Configurations
-- -----------------------
-- ---------------------------------
-- BEG: Autocompletion configuration
-- ---------------------------------
-- Ref: https://github.com/neovim/nvim-lspconfig/wiki/Autocompletion
--
-- Pre requisites:
--
--   # Packer
--   git clone --depth 1 https://github.com/wbthomason/packer.nvim \
--      ~/.local/share/nvim/site/pack/packer/start/packer.nvim
--
--   # Start nvim and run :PackerSync or :PackerUpdate
-- ---------------------------------
local use = require('packer').use
require('packer').startup(function()
  use 'wbthomason/packer.nvim' -- Packer, useful to avoid removing it with PackerSync / PackerUpdate
  use 'neovim/nvim-lspconfig' -- Collection of configurations for built-in LSP client
  use 'hrsh7th/nvim-cmp' -- Autocompletion plugin
  use 'hrsh7th/cmp-nvim-lsp' -- LSP source for nvim-cmp
  use 'saadparwaiz1/cmp_luasnip' -- Snippets source for nvim-cmp
  use 'L3MON4D3/LuaSnip' -- Snippets plugin
end)
-- Add additional capabilities supported by nvim-cmp
local capabilities = require("cmp_nvim_lsp").default_capabilities()
local lspconfig = require('lspconfig')
-- Enable some language servers with the additional completion capabilities offered by nvim-cmp
local servers = { 'clangd', 'rust_analyzer', 'pyright', 'ts_ls' }
for _, lsp in ipairs(servers) do
  lspconfig[lsp].setup {
    -- on_attach = my_custom_on_attach,
    capabilities = capabilities,
  }
end
-- luasnip setup
local luasnip = require 'luasnip'
-- nvim-cmp setup
local cmp = require 'cmp'
cmp.setup {
  snippet = {
    expand = function(args)
      luasnip.lsp_expand(args.body)
    end,
  },
  mapping = cmp.mapping.preset.insert({
    ['<C-u>'] = cmp.mapping.scroll_docs(-4), -- Up
    ['<C-d>'] = cmp.mapping.scroll_docs(4), -- Down
    -- C-b (back) C-f (forward) for snippet placeholder navigation.
    ['<C-Space>'] = cmp.mapping.complete(),
    ['<CR>'] = cmp.mapping.confirm {
      behavior = cmp.ConfirmBehavior.Replace,
      select = true,
    },
    ['<Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_next_item()
      elseif luasnip.expand_or_jumpable() then
        luasnip.expand_or_jump()
      else
        fallback()
      end
    end, { 'i', 's' }),
    ['<S-Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_prev_item()
      elseif luasnip.jumpable(-1) then
        luasnip.jump(-1)
      else
        fallback()
      end
    end, { 'i', 's' }),
  }),
  sources = {
    { name = 'nvim_lsp' },
    { name = 'luasnip' },
  },
}
-- ---------------------------------
-- END: Autocompletion configuration
-- ---------------------------------

Conclusion
I guess I'll keep helix installed and try it again on some of my personal projects to see if I can get used to it, but for now I'll stay with neovim as my main text editor and learn the shortcuts to use it with the language servers.

Valhalla's Things: Winter

Posted on January 10, 2025
Tags: madeof:atoms, craft:painting, medium:acrylic
An acrylic painting in blue and greenish grey vaguely resembling rocky slopes at the bottom right and a cloudy sky at the top left. A few days ago1 I wanted to paint, but I didn't know what to paint, so I did a few more colour tests to find out which green combinations I can get out of the available yellows and blues (and greens) acrylics I have (from a cheap student grade line). A sheet of colour tests with mixes of various yellows and blues. I liked the cool grey tones in the second to last line, from 200 Naples yellow (PY3 PY83 PW6) and 410 Ultramarine blue (PB29), so I went up a step on the scale of things to do when having a desire to paint, but the utter inability to actually paint something, and did a bit of a study with those two colours plus titanium white. The study above over a sheet of scrap paper: at one of the edges the excess paint has connected the two together. And btw, painting with acrylics on watercolour paper taped to a sheet of scrap paper works just fine for stuff like this that is not supposed to last, but the work will get glued to the paper below when (not if) the paint overflows :D

  1. i.e. almost three months, and then I wrote 95% of this post and forgot to finish and publish it.

9 January 2025

Reproducible Builds: Reproducible Builds in December 2024

Welcome to the December 2024 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As ever, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. reproduce.debian.net
  2. debian-repro-status
  3. On our mailing list
  4. Enhancing the Security of Software Supply Chains
  5. diffoscope
  6. Supply-chain attack in the Solana ecosystem
  7. Website updates
  8. Debian changes
  9. Other development news
  10. Upstream patches
  11. Reproducibility testing framework

reproduce.debian.net Last month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. This month, however, we are pleased to announce that not only does the service now produce graphs, the reproduce.debian.net homepage itself has become a start page of sorts, and the amd64.reproduce.debian.net and i386.reproduce.debian.net pages have emerged. The first of these rebuilds the amd64 architecture, naturally, but it is also building Debian packages that are marked with the no architecture label, all. The second builder is, however, only rebuilding the i386 architecture. Both of these services were also switched to reproduce the Debian trixie distribution instead of unstable, which started with 43% of the archive rebuilt, with 79.3% reproduced successfully. This is very much a work in progress, and we'll start reproducing Debian unstable soon. Our i386 hosts are very kindly sponsored by Infomaniak whilst the amd64 node is sponsored by OSUOSL. Thank you! Indeed, we are looking for more workers for more Debian architectures; please contact us if you are able to help.

debian-repro-status Reproducible builds developer kpcyrd has published a client program for reproduce.debian.net (see above) that queries the status of the locally installed packages and rates the system with a percentage score. This tool works analogously to arch-repro-status for the Arch Linux Reproducible Builds setup. The tool was packaged for Debian and is currently available in Debian trixie: it can be installed with apt install debian-repro-status.
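As a hedged usage sketch (the apt line is from the report above; invoking the tool with no arguments is my assumption about its default behaviour):
sudo apt install debian-repro-status
debian-repro-status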

On our mailing list On our mailing list this month:
  • Bernhard M. Wiedemann wrote a detailed post on his long journey towards a bit-reproducible Emacs package. In his interesting message, Bernhard goes into depth about the tools that they used and the lower-level technical details of, for instance, compatibility with the version of glibc within openSUSE.
  • Shivanand Kunijadar posed a question pertaining to the reproducibility issues with encrypted images. Shivanand explains that they must use a random IV for encryption with AES CBC. The resulting artifact is not reproducible due to the random IV used. The message resulted in a handful of replies, hopefully helpful!
  • User Danilo posted an interesting question related to their attempts to achieve reproducible builds for Threema Desktop 2.0. The question resulted in a number of replies attempting to find the right combination of compiler and linker flags (for example).
  • Longstanding contributor David A. Wheeler wrote to our list announcing the release of the Census III of Free and Open Source Software: Application Libraries report written by Frank Nagle, Kate Powell, Richie Zitomer and David himself. As David writes in his message, the report attempts to answer the question what is the most popular Free and Open Source Software (FOSS)? .
  • Lastly, kpcyrd followed up on a post from September 2024 which mentioned their desire for "someone to implement a hashset of allowed module hashes that is generated during the kernel build and then embedded in the kernel image", thus enabling a deterministic and reproducible build. However, they are now reporting that "somebody implemented the hash-based allow list feature and submitted it to the Linux kernel mailing list". Like kpcyrd, we hope it gets merged.

Enhancing the Security of Software Supply Chains: Methods and Practices Mehdi Keshani of the Delft University of Technology in the Netherlands has published their thesis on "Enhancing the Security of Software Supply Chains: Methods and Practices". Their introductory summary first begins with an outline of software supply chains and the importance of the Maven ecosystem before outlining the issues that it faces "that threaten its security and effectiveness". To address these:
First, we propose an automated approach for library reproducibility to enhance library security during the deployment phase. We then develop a scalable call graph generation technique to support various use cases, such as method-level vulnerability analysis and change impact analysis, which help mitigate security challenges within the ecosystem. Utilizing the generated call graphs, we explore the impact of libraries on their users. Finally, through empirical research and mining techniques, we investigate the current state of the Maven ecosystem, identify harmful practices, and propose recommendations to address them.
A PDF of Mehdi's entire thesis is available to download.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 283 and 284 to Debian:
  • Update copyright years. [ ]
  • Update tests to support file 5.46. [ ][ ]
  • Simplify tests_quines.py::test_{differences,differences_deb} to simply use assert_diff and not mangle the test fixture. [ ]

Supply-chain attack in the Solana ecosystem A significant supply-chain attack impacted Solana, an ecosystem for decentralised applications running on a blockchain. Hackers targeted the @solana/web3.js JavaScript library and embedded malicious code that extracted private keys and drained funds from cryptocurrency wallets. According to some reports, about $160,000 worth of assets were stolen, not including SOL tokens and other crypto assets.

Website updates Similar to last month, there was a large number of changes made to our website this month, including:
  • Chris Lamb:
    • Make the landing page hero look nicer when the vertical height component of the viewport is restricted, not just the horizontal width.
    • Rename the Buy-in page to Why Reproducible Builds? [ ]
    • Remove the top black border. [ ][ ]
  • Holger Levsen:
  • hulkoba:
    • Remove the sidebar-type layout and move to a static navigation element. [ ][ ][ ][ ]
    • Create and merge a new Success stories page, which highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds. These stories aim to enhance the technical resilience of the initiative by encouraging community involvement and inspiring new contributions. [ ]
    • Further changes to the homepage. [ ]
    • Remove the translation icon from the navigation bar. [ ]
    • Remove unused CSS styles pertaining to the sidebar. [ ]
    • Add sponsors to the global footer. [ ]
    • Add extra space on large screens on the Who page. [ ]
    • Hide the side navigation on small screens on the Documentation pages. [ ]

Debian changes There were a significant number of reproducibility-related changes within Debian this month, including:
  • Santiago Vila uploaded version 0.11+nmu4 of the dh-buildinfo package. In this release, dh_buildinfo becomes a no-op, i.e. it no longer does anything beyond warning the developer that the dh-buildinfo package is now obsolete. In his upload, Santiago wrote that: We still want packages to drop their [dependency] on dh-buildinfo, but now they will immediately benefit from this change after a simple rebuild.
  • Holger Levsen filed Debian bug #1091550 requesting a rebuild of a number of packages that were built with a very old version of dpkg.
  • Fay Stegerman contributed to an extensive thread on the debian-devel development mailing list on the topic of Supporting alternative zlib implementations. In particular, Fay wrote about her results experimenting with whether zlib-ng produces identical results or not.
  • kpcyrd uploaded a new rust-rebuilderd-worker, rust-derp, rust-in-toto and debian-repro-status to Debian, which passed successfully through the so-called NEW queue.
  • Gioele Barabucci filed a number of bugs against the debrebuild component/script of the devscripts package, including:
    • #1089087: Address a spurious extra subdirectory in the build path.
    • #1089201: Extra zero bytes added to .dynstr when rebuilding CMake projects.
    • #1089088: Some binNMUs have a 1-second offset in some timestamps.
  • Gioele Barabucci also filed a bug against the dh-r package to report that the Recommends and Suggests fields are missing from rebuilt R packages. At the time of writing, this bug has no patch and needs some help to make over 350 binary packages reproducible.
  • Lastly, 8 reviews of Debian packages were added, 11 were updated and 11 were removed this month, adding to our knowledge about identified issues.

Other development news In other ecosystem and distribution news:
  • Lastly, in openSUSE, Bernhard M. Wiedemann published another report for the distribution. There, Bernhard reports about the success of building "R-B-OS", a partial fork of openSUSE with only 100% bit-reproducible packages. This effort was sponsored by the NLNet NGI0 initiative.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add a new i386.reproduce.debian.net rebuilder. [ ][ ][ ][ ][ ][ ]
    • Make a number of updates to the documentation. [ ][ ][ ][ ][ ]
    • Run i386.reproduce.debian.net on a public port to allow external workers. [ ]
    • Add a link to the /api/v0/pkgs/list endpoint. [ ]
    • Add support for a statistics page. [ ][ ][ ][ ][ ][ ]
    • Limit build logs to 20 MiB and diffoscope output to 10 MiB. [ ]
    • Improve the frontpage. [ ][ ]
    • Explain that we re testing arch:any and arch:all on the amd64 architecture, but only arch:any on i386. [ ]
  • Misc:
    • Remove code for testing Arch Linux, which has moved to reproduce.archlinux.org. [ ][ ]
    • Don't install dstat on Jenkins nodes anymore as it's been removed from Debian trixie. [ ]
    • Prepare the infom08-i386 node to become another rebuilder. [ ]
    • Add debug date output for benchmarking the reproducible_pool_buildinfos.sh script. [ ]
    • Install installation-birthday everywhere. [ ]
    • Temporarily disable automatic updates of pool links on buildinfos.debian.net. [ ]
    • Install Recommends by default on Jenkins nodes. [ ]
    • Rename rebuilder_stats.py to rebuilderd_stats.py. [ ]
    • r.d.n/stats: minor formatting changes. [ ]
    • Install files under /etc/cron.d/ with the correct permissions. [ ]
and Jochen Sprickerhof made a number of changes as well. Lastly, Gioele Barabucci also classified packages affected by the 1-second offset issue filed as Debian bug #1089088 [ ][ ][ ][ ], Chris Hofstaedtler updated the URL for Grml's dpkg.selections file [ ], Roland Clobus updated the Jenkins log parser to parse warnings from diffoscope [ ] and Mattia Rizzolo banned a number of bots and crawlers from the service [ ][ ].
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

Valhalla's Things: Poor Man Media Server

Posted on January 9, 2025
Tags: madeof:bits
Some time ago I installed minidlna on our media server: it was pretty easy to do, but quite limited in its support for the formats I use most, so I ended up using other solutions such as mounting the directory with sshfs. Now, doing that from a phone, even a pinephone running debian, may not be as convenient as doing it from the laptop where I already have my ssh key :D and I needed to listen to music from the pinephone. So, in anger, I decided to configure a web server to serve the files. I installed lighttpd because I already had a role for this kind of configuration in my ansible directory, and configured it to serve the relevant directory in /etc/lighttpd/conf-available/20-music.conf:
$HTTP["host"] =~ "music.example.org" {
    server.name          = "music.example.org"
    server.document-root = "/path/to/music"
}
the domain was already configured in my local dns (since everything is only available to the local network), and I enabled both 20-music.conf and 10-dir-listing.conf. And. That's it. It works. I can play my CD rips on a single flac exactly in the same way as I was used to (by ssh-ing to the media server and using alsaplayer). Then this evening I was talking to normal people1, and they mentioned that they wouldn't mind being able to skip tracks and fancy things like those :D and I've found one possible improvement. For the directories with the generated single-track ogg files I've added some playlists with the command ls *.ogg > playlist.m3u, then in the directory above I've run ls */*.m3u > playlist.m3u and that also works. With vlc I can now open http://music.example.org/band/album/playlist.m3u to listen to an album that I have in ogg, being able to move between tracks, or I can open http://music.example.org/band/playlist.m3u and in the playlist view I can browse between the different albums. Left as an exercise to the reader2 are writing a bash script to generate all of the playlist.m3u files (and running it via some git hook when the files change) or writing a php script to generate them on the fly.
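For the first exercise, a minimal sketch assembled from the two ls commands above (the music root path is a placeholder):
#!/bin/sh
# Regenerate all playlist.m3u files for a band/album directory layout.
MUSIC_ROOT="/path/to/music"  # placeholder, adjust to the directory lighttpd serves
# One playlist per album directory, listing its ogg tracks.
for album in "$MUSIC_ROOT"/*/*/; do
  (cd "$album" && ls *.ogg > playlist.m3u 2>/dev/null)
done
# One playlist per band directory, listing the album playlists.
for band in "$MUSIC_ROOT"/*/; do
  (cd "$band" && ls */*.m3u > playlist.m3u 2>/dev/null)
done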
Update 2025-01-10: another reader3 wrote the php script and has authorized me to post it here.
<?php
define("MUSIC_FOLDER", __DIR__);
define("ID3v2", false);


function dd() {
    echo "<pre>"; call_user_func_array("var_dump", func_get_args());
    die();
}

function getinfo($file) {
    $cmd = 'id3info "' . MUSIC_FOLDER . "/" . $file . '"';
    exec($cmd, $output);
    $res = [];
    foreach($output as $line) {
        if (str_starts_with($line, "=== ")) {
            $key = explode(" ", $line)[1];
            $val = end(explode(": ", $line, 2));
            $res[$key] = $val;
        }
    }
    if (isset($res['TPE1']) || isset($res['TIT2']))
        echo "#EXTINF: , " . ($res['TPE1'] ?? "Unk") . " - " . ($res['TIT2'] ?? "Untl") . "\r\n";
    if (isset($res['TALB']))
        echo "#EXTALB: " . $res['TALB'] . "\r\n";
}


function pathencode($path, $name) {
    $path = urlencode($path);
    $path = str_replace("%2F", "/", $path);
    $name = urlencode($name);
    if ($path != "") $path = "/" . $path;
    return $path . "/" . $name;
}

function serve_playlist($path) {
    echo "#EXTM3U";
    echo "# PATH: $path\n\r";
    foreach (glob(MUSIC_FOLDER . "/$path/*") as $filename) {
        $name = basename($filename);
        if (is_dir($filename)) {
            echo pathencode($path, $name) . ".m3u\r\n";
        }
        $t = explode(".", $filename);
        $ext = array_pop($t);
        if (in_array($ext, ["mp3", "ogg", "flac", "mp4", "m4a"])) {
            if (ID3v2) {
                getinfo($path . "/" . $name);
            } else {
                echo "#EXTINF: , " . $path . "/" . $name . "\r\n";
            }
            echo pathencode($path, $name) . "\r\n";
        }
    }
    die();
}


$path = $_SERVER["REQUEST_URI"];
$path = urldecode($path);
$path = trim($path, "/");

if (str_ends_with($path, ".m3u")) {
    $path = str_replace(".m3u", "", $path);

    serve_playlist($path);
}

$path = MUSIC_FOLDER . "/" . $path;
if (file_exists($path) && is_file($path)) {
    header('Content-Description: File Transfer');
    header('Content-Type: application/octet-stream');
    header('Expires: 0');
    header('Cache-Control: must-revalidate');
    header('Pragma: public');
    header('Content-Length: ' . filesize($path));
    readfile($path);
}
It's php, so I assume no responsibility for it :D

  1. as much as the members of our LUG can be considered normal.
  2. i.e. the person in the LUG who wanted me to share what I had done.
  3. i.e. the other person in the LUG who was in that conversation and suggested the php script option.

5 January 2025

Jonathan McDowell: Free Software Activities for 2024

I tailed off on blog posts towards the end of the year; I blame a bunch of travel (personal + business), catching the flu, then December being its usual busy self. Anyway, to try and start off the year a bit better I thought I'd do my annual recap of my Free Software activities. For previous years see 2019, 2020, 2021, 2022 + 2023.

Conferences In 2024 I managed to make it to FOSDEM again. It's a hectic conference, and I know there are legitimate concerns about it being a super spreader event, but it has the advantage of being relatively close and having a lot of different groups of people I want to talk to / see talk at it. I'm already booked to go this year as well. I spoke at All Systems Go in Berlin about Using TPMs at scale for protecting keys. It was nice to actually be able to talk publicly about some of the work stuff my team and I have been working on. I'd a talk submission in for FOSDEM about our use of attestation and why it's not necessarily the evil some folk claim, but there were a lot of good talks submitted and I wasn't selected. Maybe I'll find somewhere else suitable to do it. BSides Belfast may or may not count - it's a security conference, but there's a lot of overlap with various bits of Free software, so I feel it deserves a mention. I skipped DebConf for 2024 for a variety of reasons, but I'm expecting to make DebConf25 in Brest, France in July.

Debian Most of my contributions to Free software continue to happen within Debian. In 2023 I'd done a bunch of work on retrogaming with Kodi on Debian, so I made an effort to try and keep those bits more up to date, even if I'm not actually regularly using them at present. RetroArch got 1.18.0+dfsg-1 and 1.19.1+dfsg-1 uploads. libretro-core-info got associated 1.18.0-1 and 1.19.0-1 uploads too. I note 1.20.0 has been released recently, so I'll have to find some time to build the appropriate DFSG tarball and update it. rcheevos saw 11.2.0-1, 11.5.0-1 + 11.6.0-1 uploaded. kodi-game-libretro itself had 20.2.7-1 uploaded, then 21.0.7-1. Latest upstream is 22.1.0, but that's tracking Kodi 22 and we're still on Kodi 21 so I plan to follow the Omega branch for now. Which I've just noticed had a 21.0.8 release this week. Finally in the games space I uploaded mgba 0.10.3+dfsg-1 and 0.10.3+dfsg-2 for Ryan Tandy, before realising he was already a Debian Maintainer and granting him the appropriate ACL access so he can upload it himself; I've had zero concerns about any of his packaging. The Debian Electronics Packaging Team continues to be home for a bunch of packages I care about. There was nothing big there, for me, in 2024, but a few bits of cleanup here and there. I seem to have become one of the main uploaders for sdcc - I have some interest in the space, and the sigrok firmware requires it to build, so I at least like to ensure it's in half decent state. I uploaded 4.4.0+dfsg-1, 4.4.0+dfsg-2, and, just in time to count for 2024, 4.4.0+dfsg-3. The sdcc 4.4 upload led to some compilation issues for sigrok-firmware-fx2lafw so I uploaded 0.1.7-2 fixing that, then 0.1.7-3 doing some further cleanups. OpenOCD had 0.12.0-2 uploaded to disable the libgpiod backend thanks to incompatible changes upstream. There were some in-discussion patches with OpenOCD upstream at the time, but they didn't seem to be ready yet so I held off on pulling them in. 0.12.0-3 fixed builds with more recent versions of jimtcl. It looks like the next upstream release is about a year away, so Trixie will in all probability ship with 0.12.0 as well. libjaylink had a new upstream release, so 0.4.0-1 was uploaded. libserialport also had a new upstream release, leading to 0.1.2-1. I finally cracked and uploaded sg3-utils 1.48-1 into experimental. I'm not the primary maintainer, but 1.46 is nearly 4 years old now and I wanted to get it updated in enough time to shake out any problems before we get to a Trixie freeze. Outside of team owned packages, libcli had compilation issues with GCC 14, leading to 1.10.7-2. I also added a new package, sedutil 1.20.0-2 back in April; it looks fairly unmaintained upstream (there's been some recent activity, but it doesn't seem to be release quality), but there was an outstanding ITP and I've some familiarity with the space as we've been using it at work as part of investigating TCG OPAL encryption. I continue to keep an eye on Debian New Members, even though I'm mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions. Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.24, 2023.09.22 + 2023.11.24.

Linux I'd a single kernel contribution this year, to Clean up TPM space after command failure. That was based on some issues we saw at work. I've another fix in progress that I hope to submit in 2025, but it's for an intermittent failure so confirming the fix is necessary + sufficient is taking a little while.

Personal projects I didn't end up doing much in the way of externally published personal project work in 2024. Despite the release of OpenPGP v6 in RFC 9580 I did not manage to really work on onak. I started on the v6 support, but have not had sufficient time to complete anything worth pushing externally yet. listadmin3 got some minor updates based on external feedback / MRs. It's nice to know it's useful to other folk even in its basic state. That wraps up 2024. I've got no particular goals for this year at present. Ideally I'd get v6 support into onak, and it would be nice to implement some of the wishlist items people have provided for listadmin3, but I'll settle for making sure all my Debian packages are in reasonable state for Trixie.

2 January 2025

Matthew Garrett: The GPU, not the TPM, is the root of hardware DRM

As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim:
Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control (from here).
This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.

I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.

What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.

Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.

The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.

The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.

In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).

Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to avoid the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.

The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.


31 December 2024

Russ Allbery: Review: Metal from Heaven

Review: Metal from Heaven, by August Clarke
Publisher: Erewhon
Copyright: November 2024
ISBN: 1-64566-099-0
Format: Kindle
Pages: 443
Metal from Heaven is industrial-era secondary-world fantasy with a literary bent. It is a complete story in one book, and I would be very surprised by a sequel. Clarke previously wrote the Scapegracers young-adult trilogy, which got excellent reviews and a few award nominations, as H.A. Clarke. This is his first adult novel.
Know I adore you. Look out over the glow. The cities sundered, their machines inverted, mountains split and prairies blazing, that long foreseen Hereafter crowning fast. This calamity is a promise made to you. A prayer to you, and to your shadow which has become my second self, tucked behind my eye and growing in tandem with me, pressing outwards through the pupil, the smarter, truer, almost bursting reason for our wrath. Do not doubt me. Just look. Watch us rise as the sun comes up over the beauty. The future stains the bleakness so pink. When my violence subsides, we will have nothing, and be champions.
Marney Honeycutt is twelve years old, a factory worker, and lustertouched. She works in the Yann I. Chauncey Ichorite Foundry in Ignavia City, alongside her family and her best friend, shaping the magical metal ichorite into the valuable industrial products of a new age of commerce and industry. She is the oldest of the lustertouched, the children born to factory workers and poisoned by the metal. It has made her allergic, prone to fits at any contact with ichorite, but also able to exert a strange control over the metal if she's willing to pay the price of spasms and hallucinations for hours afterwards. As Metal from Heaven opens, the workers have declared a strike. Her older sister is the spokesperson, demanding shorter hours, safer working conditions, and an investigation into the health of the lustertouched children. Chauncey's response is to send enforcer snipers to kill the workers, including the entirety of her family.
The girl sang, "Unalone toward dawn we go, toward the glory of the new morning." An enforcer shot her in the belly, and when she did not fall, her head.
Marney survives, fleeing into the city, swearing an impossible personal revenge against Yann Chauncey. An act of charity gets her a ticket on a train into the countryside. The woman who bought her ticket is a bandit who is on the train to rob it. Marney's ability to control ichorite allows her to help the bandits in return, winning her a place with the Highwayman's Choir, who have been preying on the shipments of the rich and powerful and then disappearing into the hills. The Choir's secret is that the agoraphobic and paranoid Baron of the Fingerbluffs is dead and has been for years. He was killed by his staff, Hereafterist idealists, who have turned his remote territory into an anarchist commune and haven for pirates and bandits. This becomes Marney's home and the Choir becomes her family, but she never forgets her oath of revenge or the childhood friend she left behind in the piles of bodies and to whom this story is narrated.

First, Clarke's writing is absolutely gorgeous.
We scaled the viny mountain jags at Montrose Barony's legal edge, the place where land was and wasn't Ignavia, Royston, and Drustland alike. There was a border but it was diffuse and hallucinatory, even more so than most. On legal papers and state maps there were harsh lines that squashed topography and sanded down the mountains into even hills in planter's rows, but here among the jutting rocks and craggy heather, the ground was lineless.
The rhythm of it, the grasp of contrast and metaphor, the word choice! That climactic word "lineless," with its echo of limitless. So good.

Second, this is the rarest of books: a political fantasy that takes class and religion seriously and uses them for more than plot drivers. This is not at all our world, and the technology level is somewhat ambiguous, but the parallels to the Gilded Age and Progressive Era are unmistakable. The Hereafterists that Marney joins are political anarchists, not in the sense of alternative governance structures and political theory sanitized for middle-class liberals, but in the sense of Emma Goldman and Peter Kropotkin. The society they have built in the Fingerbluffs is temporary, threatened, and contingent, but it is sincere and wildly popular among the people who already lived there.

Even beyond politics, class is a tangible force in this book. Marney is a factory worker and the child of factory workers. She barely knows how to read and doesn't magically learn over the course of the book. She has friends who are clever in the sense rewarded by politics and nobility, who navigate bureaucracies and political nuance, but that is not Marney's world. When, towards the end of the book, she has to deal with a gathering of high-class women, the contrast is stark, and she navigates that gathering only by being entirely unexpected.

Perhaps the best illustration of the subtlety of this is the terminology in the book for lesbian. Marney is a crawly, which is a slur thrown at people like her (and one of the rare fictional slurs that work exactly as the author intended) but is also simply what she calls herself. Whether or not it functions as a slur depends on context, and the context is never hard to understand. The high-class lesbians she meets later are Lunarists, and react to crawly as a vile and insulting word. They use language to separate themselves from both the insult and from the social class that uses it. Language is an indication of culture and manners and therefore of morality, unlike deeds, which admit endless justifications.
Conversation was fleeting. Perdita managed with whomever stood near her, chipper about every prettiness she saw, the flitting butterflies, the dappled light between the leaves, the lushness and the fragrance of untamed land, and her walking companions took turns sharing in her delight. It was infectious, how happy she was. She was going to slaughter millions. She was going to skip like this all the while.
The handling of religion is perhaps even better. Marney was raised a Tullian, which sits alongside two other fleshed-out fictional religions and sketches of several more. Tullians tend to be conservative and patriarchal, and Marney has a realistically complicated relationship with faith: sticking with some Tullian worship practices and gestures because they're part of who she is, feeling a kinship to other Tullians, discarding beliefs that don't fit her, and revising others. Every major religion has a Hereafterist spin or reinterpretation that upends or reverses the parts of the religion that were used to prop up the existing social order and brings it more in line with Hereafterist ideals. We see the Tullian Hereafterist variation in detail, and as someone who has studied a lot of methods of reinterpreting Christianity, I was impressed by how well Clarke invents both a belief system and its revisionist rewrite. This is exactly how religions work in human history, but one almost never sees this subtlety in fantasy novels.

Marney's allergy to ichorite causes her internal dialogue to dissolve into hallucinatory synesthesia when she's manipulating or exposed to it. Since that's most of the book, substantial portions read like drug trips with growing body horror. I normally hate this type of narration, so it's a sign of just how good Clarke's writing is that I tolerated it and even enjoyed parts. It helps that the descriptions are irreverent and often surprising, full of unexpected metaphors and sudden turns. It's very hard not to quote paragraph after paragraph of this book.

Clarke is also doing a lot with gender that I don't feel qualified to comment on in detail, but it would not surprise me to see this book on the Otherwise Award recommendation list. I can think of three significant male characters, all of whom are well-done, but every other major character is female by at least some gender definition. Within that group, though, is huge gender diversity of the complicated and personal type that doesn't force people into defined boxes. Marney's sexuality is similarly unclassified and sometimes surprising. My one complaint is that I thought the sex scenes (which, to warn, are often graphic) fell into the literary fiction trap of being described so closely and physically that it didn't feel like anyone involved was actually enjoying themselves. (This is almost certainly a matter of personal taste.)

I had absolutely no idea how Clarke was going to end this book, and the last couple of chapters caught me by surprise. I'm still not sure what I think about the climax. It's not the ending that I wanted, but one of the merits of this book is that it never did what I thought I wanted and yet made me enjoy the journey anyway. It is, at least, a genre ending, not a literary ending: The reader gets a full explanation of what is going on, and the setting is not static the way that it so often is in literary fiction. The characters can change the world, for good or for ill. The story felt frustrating and incomplete when I first finished it, but I haven't stopped thinking about this book and I think I like the shape of it a bit more now. It was certainly unexpected, at least by me.

Clarke names Dhalgren as one of their influences in the acknowledgments, and yes, Metal from Heaven is that kind of book. This is the first 2024 novel I've read that felt like the kind of book that should be on award shortlists.
I'm not sure it was entirely successful, and there are parts of it that I didn't like or that weren't for me, but it's trying to do something different and challenging and uncomfortable, and I think it mostly worked. And the writing is so good.
She looked like a mythic princess from the old woodcuts, who ruled nature by force of goodness and faith and had no legal power.
Metal from Heaven is not going to be to everyone's taste. If you do not like literary fantasy, there is a real chance that you will hate this. I am very glad that I read it, and also am going to take a significant break from difficult books before I tackle another one. But then I'm probably going to try the Scapegracers series, because Clarke is an author I want to follow.

Content notes: Explicit sex, including sadomasochistic sex. Political violence, mostly by authorities. Murdered children, some body horror, and a lot of serious injuries and death.

Rating: 8 out of 10
