
2 February 2025

Anuradha Weeraman: DeepSeek-R1, at the cusp of an open revolution

DeepSeek-R1, the new entrant to the Large Language Model wars, has created quite a splash over the last few weeks. Its entrance into a space dominated by the Big Corps, while pursuing asymmetric and novel strategies, has been a refreshing eye-opener. GPT AI improvement was starting to show signs of slowing down, and appears to be reaching a point of diminishing returns as it runs out of the data and compute required to train and fine-tune increasingly large models. This has turned the focus towards building "reasoning" models that are post-trained through reinforcement learning, and towards techniques such as inference-time and test-time scaling and search algorithms to make the models appear to think and reason better. OpenAI's o1-series models were the first to achieve this successfully with inference-time scaling and Chain-of-Thought reasoning.

Intelligence as an emergent property of Reinforcement Learning (RL)
Reinforcement Learning (RL) has been successfully used in the past by Google's DeepMind team to build highly intelligent and specialized systems, where intelligence is observed as an emergent property of a rewards-based training approach that yielded achievements like AlphaGo (see my post on it here - AlphaGo: a journey to machine intuition). DeepMind went on to build a series of Alpha* projects that achieved many notable feats using RL:
  • AlphaGo, which defeated the world champion Lee Sedol in the game of Go.
  • AlphaZero, a generalized system that learned to play games such as Chess, Shogi and Go without human input.
  • AlphaStar, achieved high performance in the complex real-time strategy game StarCraft II.
  • AlphaFold, a tool for predicting protein structures which significantly advanced computational biology.
  • AlphaCode, a model designed to generate computer programs, performing competitively in coding challenges.
  • AlphaDev, a system developed to discover novel algorithms, notably optimizing sorting algorithms beyond human-derived methods.
All of these systems achieved mastery in their own areas through self-training/self-play, optimizing and maximizing the cumulative reward over time by interacting with their environments, with intelligence observed as an emergent property of the system.
The RL feedback loop
RL mimics the process through which a baby would learn to walk, through trial, error and first principles.

R1 model training pipeline
At a technical level, DeepSeek-R1 leverages a combination of Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT) for its training pipeline:
DeepSeek-R1 Model Training Pipeline
Using RL and DeepSeek-v3, an interim reasoning model called DeepSeek-R1-Zero was built, based purely on RL without relying on SFT, which demonstrated superior reasoning capabilities that matched the performance of OpenAI's o1 in certain benchmarks such as AIME 2024. The model was however affected by poor readability and language-mixing, and is only an interim reasoning model built on RL principles and self-evolution. DeepSeek-R1-Zero was then used to generate SFT data, which was combined with supervised data from DeepSeek-v3 to re-train the DeepSeek-v3-Base model. The new DeepSeek-v3-Base model then underwent additional RL with prompts and scenarios to produce the DeepSeek-R1 model. The R1 model was then used to distill a number of smaller open source models such as Llama-8b and Qwen-7b/14b, which outperformed bigger models by a large margin, effectively making the smaller models more accessible and usable.

Key contributions of DeepSeek-R1
  1. RL without the need for SFT for emergent reasoning capabilities
R1 was the first open research project to validate the efficacy of RL directly on the base model without relying on SFT as a first step, which resulted in the model developing advanced reasoning capabilities purely through self-reflection and self-verification. Although the model degraded in its language capabilities during the process, its Chain-of-Thought (CoT) capabilities for solving complex problems were later used for further RL on the DeepSeek-v3-Base model, which became R1. This is a significant contribution back to the research community. The analysis below of DeepSeek-R1-Zero and OpenAI o1-0912 shows that it is viable to attain robust reasoning capabilities purely through RL alone, which can be further augmented with other techniques to deliver even better reasoning performance.
Source: https://github.com/deepseek-ai/DeepSeek-R1
It's quite interesting that the application of RL gives rise to seemingly human capabilities of "reflection" and arriving at "aha" moments, causing the model to pause, ponder and focus on a specific aspect of the problem, resulting in emergent capabilities to problem-solve as humans do.
  2. Model distillation
DeepSeek-R1 also demonstrated that larger models can be distilled into smaller models, which makes advanced capabilities accessible to resource-constrained environments, such as your laptop. While it's not possible to run a 671b model on a stock laptop, you can still run a distilled 14b model that performs better than most publicly available models out there. This enables intelligence to be brought closer to the edge, allowing faster inference at the point of experience (such as on a smartphone, or on a Raspberry Pi), which paves the way for more use cases and possibilities for innovation.
Source: https://github.com/deepseek-ai/DeepSeek-R1
Distilled models are very different to R1, which is a massive model with a completely different architecture than the distilled variants, and so they are not directly comparable in terms of capability; they are instead built to be smaller and more efficient for more constrained environments. This technique of distilling a larger model's capabilities down to a smaller model for portability, accessibility, speed, and cost will open up a lot of possibilities for applying artificial intelligence in places where it would otherwise not have been possible. This is another key contribution of this technology from DeepSeek, which I believe has even further potential for the democratization and accessibility of AI.
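To make that concrete, here is one way to try a distilled variant locally. This is a sketch assuming the ollama CLI is installed; the model tag is an assumption, so check the ollama model library for the exact names:
# Pull a distilled DeepSeek-R1 variant and run a one-off prompt locally
ollama pull deepseek-r1:14b
ollama run deepseek-r1:14b "Explain chain-of-thought reasoning in one paragraph."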

Why is this moment so significant?
DeepSeek-R1 was a pivotal contribution in many ways.
  1. The contributions to the state-of-the-art and the open research help move the field forward, benefiting everybody, not just a few highly funded AI labs building the next billion-dollar model.
  2. Open-sourcing and making the model freely available follows an asymmetric strategy to the prevailing closed nature of much of the model-sphere of the larger players. DeepSeek should be commended for making their contributions free and open.
  3. It reminds us that it's not just a one-horse race, and it incentivizes competition, which has already resulted in OpenAI o3-mini, a cost-effective reasoning model that now shows its Chain-of-Thought reasoning. Competition is a good thing.
  4. We stand at the cusp of an explosion of small models that are hyper-specialized and optimized for a specific use case, and that can be trained and deployed cheaply for solving problems at the edge. It raises a lot of exciting possibilities and is why DeepSeek-R1 is one of the most pivotal moments in tech history.
Truly exciting times. What will you build?

Dirk Eddelbuettel: RcppSpdlog 0.0.20 on CRAN: New Upstream, New Features

Version 0.0.20 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site. This release updates the code to version 1.15.1 of spdlog, which was released this morning as well. It also contains a contributed PR which illustrates logging in a multithreaded context. The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.20 (2025-02-01)
  • New multi-threaded logging example (Young Geun Kim and Dirk via #22)
  • Upgraded to upstream release spdlog 1.15.1
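To pick up the new release and confirm the installed version from the command line, something like this works (a quick sketch using Rscript):
# Install/upgrade from CRAN, then print the installed package version
Rscript -e 'install.packages("RcppSpdlog", repos="https://cloud.r-project.org")'
Rscript -e 'packageVersion("RcppSpdlog")'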

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

20 January 2025

Jonathan Dowland: dsafilter 20th Anniversary

Happy 20th birthday, dsafilter! dsafilter is a mail filter I wrote two decades ago to solve a problem I had: I was dutifully subscribed to debian-security-announce to learn of new security package updates, but most were not relevant to me. The filter creates a new, summarizing mail, reporting on whether the DSA was applicable to any package installed on the system running the filter, and attaches the original DSA mail for reference. Users can then choose to drop mails for packages that aren't relevant. In 2005 I'd been a Debian user for about 6 years, I'd met a few Debian developers in person, and I was interested in getting involved. I started my journey to Developer later that same year. I published dsafilter, and I think I sent an announcement to debian-devel, but I didn't do a great deal to socialise it, so I suspect nobody else is using it. That said, I have been using it for the two decades since, and I still am! What's notable to me about that is that I haven't had to modify the script at all to keep up with software changes, in particular from the interpreter. I wrote it as a Ruby script. If I had chosen Perl, it would probably be the same story, but if I'd chosen Python, there's no chance at all that it would still be working today. If it sounds interesting to you, please give it a try. I think it might be due some spring cleaning.
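For a flavour of the core relevance check, here is a minimal shell sketch of the idea (a hypothetical reimplementation, not the actual Ruby script):
#!/bin/sh
# Given a package name extracted from a DSA mail, report whether it is
# installed locally, i.e. whether the advisory is worth reading.
pkg="$1"
if dpkg-query -W -f='${Status}\n' "$pkg" 2>/dev/null | grep -q '^install ok installed$'; then
    echo "$pkg is installed: DSA is relevant"
else
    echo "$pkg is not installed: DSA can be dropped"
fi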

10 January 2025

Sergio Talens-Oliag: Testing New User Tools

In recent weeks I've had some time to scratch my own itch on matters related to tools I use daily on my computer, namely the desktop / window manager and my text editor of choice. This post is a summary of what I tried, how it worked out, and my short and medium-term plans related to them.

Desktop / WM
On the desktop / window manager front I've been using Cinnamon on Debian and Ubuntu systems since Gnome 3 was published (I never liked version 3, so I decided to move to something similar to Gnome 2, including the keyboard shortcuts). In fact I've never been a fan of desktop environments; before Gnome I used OpenBox and IceWM because they were a lot faster than desktop systems on my hardware at the time, and I was using them only to place one or two windows on multiple workspaces, using mainly the keyboard for my interactions (well, except for the web browsers and the image manipulation programs). Although I was comfortable using Cinnamon, some years ago I tried to move to i3, a tiling window manager for X11 that looked like a good choice for me, but I didn't have much time to play with it and never used it enough to become productive with it (I didn't prepare a complete configuration nor had enough time to learn the new shortcuts, so I went back to Cinnamon and never tried again). Anyway, some weeks ago I updated my work machine OS (it was using Ubuntu 22.04 LTS and I updated it to the 24.04 LTS version) and the Cinnamon systray applet stopped working as it used to (in fact I still have to restart Cinnamon after starting a session to make it work) and, as I had some time, I decided to try a tiling window manager again, but this time I went for SwayWM, as it uses Wayland instead of X11.

Sway configuration
In my ~/.config/sway/config I tuned some things:
  • Set fuzzel as the application launcher.
  • Installed manually the shikane application and created a configuration to be executed always when sway is started / reloaded (I adjusted my configuration with wdisplays and used shikanectl to save it).
  • Added support for running the xdg-desktop-portal-wlr service.
  • Enabled the swayidle command to lock the screen after some time of inactivity.
  • Adjusted the keyboard to use the es key map.
  • Added some keybindings to make my life easier, including the use of grim and swappy to take screenshots.
  • Configured waybar as the environment bar.
  • Added a shell script to start applications when sway is started (it uses swaymsg to execute background commands and the i3toolwait script to wait for the application windows to appear before continuing; a small config sketch for some of the settings above follows the script):
    #!/bin/sh
    # VARIABLES
    CHROMIUM_LOCAL_STATE="$HOME/.config/google-chrome/Local State"
    I3_TOOLWAIT="$HOME/.config/sway/scripts/i3-toolwait"
    # Functions
    chromium_profile_dir() {
      jq -r ".profile.info_cache | to_entries | map({(.value.name): .key}) | add | .\"$1\" // \"\"" "$CHROMIUM_LOCAL_STATE"
    }
    # MAIN
    IGZ_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@intelygenz.com")"
    OURO_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@nxr.global")"
    PERSONAL_PROFILE_DIR="$(chromium_profile_dir "stalens@gmail.com")"
    # Common programs
    swaymsg "exec nextcloud --background"
    swaymsg "exec nm-applet"
    # Run spotify on the first workspace (it is mapped to the laptop screen)
    swaymsg -q "workspace 1"
    ${I3_TOOLWAIT} "spotify"
    # Run tmux on the second workspace
    swaymsg -q "workspace 2"
    ${I3_TOOLWAIT} -- foot tmux a -dt sto
    wp_num="3"
    if [ "$OURO_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m ouro-browser -- google-chrome --profile-directory="$OURO_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    if [ "$IGZ_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m igz-browser -- google-chrome --profile-directory="$IGZ_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    if [ "$PERSONAL_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m personal-browser -- google-chrome --profile-directory="$PERSONAL_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    # Open the browser without setting the profile directory if none was found
    if [ "$wp_num" = "3" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} google-chrome
      wp_num="$((wp_num+1))"
    fi
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} evolution
    wp_num="$((wp_num+1))"
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} slack
    wp_num="$((wp_num+1))"
    # Open a private browser and a console in the last workspace
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} -- google-chrome --incognito
    ${I3_TOOLWAIT} foot
    # Go back to the second workspace for keepassxc
    swaymsg "workspace 2"
    ${I3_TOOLWAIT} keepassxc
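And here is a minimal sketch of how some of the settings listed above could look in ~/.config/sway/config. The keybindings and timeout are illustrative assumptions, not my actual configuration (grim is paired with slurp here for region selection):
# Application launcher
set $menu fuzzel
bindsym Mod4+d exec $menu
# Keyboard layout
input type:keyboard xkb_layout es
# Screenshot of a selected region with grim + swappy
bindsym Print exec grim -g "$(slurp)" - | swappy -f -
# Lock the screen after 5 minutes of inactivity
exec swayidle -w timeout 300 'swaylock -f'
# Environment bar
bar {
    swaybar_command waybar
}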

Conclusion
After using Sway for some days I can confirm that it is a good choice for me, but some of the components needed to make it work as I want are too new and not available in the Ubuntu 24.04 LTS repositories, so I decided to go back to Cinnamon and try Sway again in the future, although I added more workspaces to my setup (now they are only available on the main monitor; the laptop screen is fixed while there is a big monitor connected), added some additional keyboard shortcuts, and installed or updated some applets.

Text editor
When I started using Linux many years ago I used vi/vim and emacs as my text editors (vi for plain text and emacs for programming and editing HTML/XML), but eventually I moved to vim as my main text editor and I've been using it since (well, I moved to neovim some time ago, although I kept my old vim configuration). To be fair I'm not as expert as I could be with vim, but I'm productive with it and it has many plugins that make my life easier on my machines, while keeping my ability to edit text and configurations on any system that has a vi-compatible editor installed. For work reasons I tried to use Visual Studio Code last year, but I've never really liked it and almost everything I do with it I can do with neovim (e.g. I even use Copilot with it). Besides, I'm a heavy terminal user (I use tmux locally and via ssh) and I like to be able to use my text editor in my shell sessions, and Code does not work like that. The only annoying thing about vim/neovim is its configuration (well, the problem is that I have a very old one and probably should spend some time fixing and updating it), but, as I said, it's been working well for me for a long time, so I never really had the motivation to do it. Anyway, after finishing my desktop tests I saw that I had had the Helix editor installed for some time but had never tried it, so I decided to give it a try and see if it could be a good replacement for neovim in my environments (the only drawback is that, as it is not vi compatible, I would need to switch back to vi mode when working on remote systems, but I guess I could live with that). I ran the helix tutorial and I liked it, so I decided to configure it and install the Language Servers I can probably take advantage of in my daily work on my personal and work machines, and see how it works.

Language server installations
A lot of manual installation is needed to get the language servers working; what I did on my machines is more or less the following:
# AWK
sudo npm i -g 'awk-language-server@>=0.5.2'
# BASH
sudo apt-get install shellcheck shfmt
sudo npm i -g bash-language-server
# C/C++
sudo apt-get install clangd
# CSS, HTML, ESLint, JSON, SCSS
sudo npm i -g vscode-langservers-extracted
# Docker
sudo npm install -g dockerfile-language-server-nodejs
# Docker compose
sudo npm install -g @microsoft/compose-language-service
# Helm
app="helm_ls_linux_amd64"
url="$(
  curl -s https://api.github.com/repos/mrjosh/helm-ls/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/helm_ls
sudo install /tmp/helm_ls /usr/local/bin
rm /tmp/helm_ls
# Markdown
app="marksman-linux-x64"
url="$(
  curl -s https://api.github.com/repos/artempyanykh/marksman/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/marksman
sudo install /tmp/marksman /usr/local/bin
rm /tmp/marksman
# Python
sudo npm i -g pyright
# Rust
rustup component add rust-analyzer
# SQL
sudo npm i -g sql-language-server
# Terraform
sudo apt-get install terraform-ls
# TOML
cargo install taplo-cli --locked --features lsp
# YAML
sudo npm install --global yaml-language-server
# JavaScript, TypeScript
sudo npm install -g typescript-language-server typescript
sudo npm install -g --save-dev --save-exact @biomejs/biome
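With the binaries in place, helix itself can report which language servers it actually finds; a quick sanity check (assuming the helix binary is installed as hx, as it is on Debian and Ubuntu):
# Show the detected language servers, formatters and debuggers per language
hx --health
# Or restrict the report to a single language
hx --health python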

Helix configuration
The helix configuration is done in a couple of TOML files placed in the ~/.config/helix directory; the config.toml file I used is this one:
theme = "solarized_light"
[editor]
line-number = "relative"
mouse = false
[editor.statusline]
left = ["mode", "spinner"]
center = ["file-name"]
right = ["diagnostics", "selections", "position", "file-encoding", "file-line-ending", "file-type"]
separator = " "
mode.normal = "NORMAL"
mode.insert = "INSERT"
mode.select = "SELECT"
[editor.cursor-shape]
insert = "bar"
normal = "block"
select = "underline"
[editor.file-picker]
hidden = false
[editor.whitespace]
render = "all"
[editor.indent-guides]
render = true
character = " " # Some characters that work well: " ", " ", " ", " "
skip-levels = 1
And to configure the language servers I used the following languages.toml file:
[[language]]
name = "go"
auto-format = true
formatter = { command = "goimports" }
[[language]]
name = "javascript"
language-servers = [
  "typescript-language-server", # optional
  "vscode-eslint-language-server",
]
[language-server.rust-analyzer.config.check]
command = "clippy"
[language-server.sql-language-server]
command = "sql-language-server"
args = ["up", "--method", "stdio"]
[[language]]
name = "sql"
language-servers = [ "sql-language-server" ]
[[language]]
name = "hcl"
language-servers = [ "terraform-ls" ]
language-id = "terraform"
[[language]]
name = "tfvars"
language-servers = [ "terraform-ls" ]
language-id = "terraform-vars"
[language-server.terraform-ls]
command = "terraform-ls"
args = ["serve"]
[[language]]
name = "toml"
formatter = { command = "taplo", args = ["fmt", "-"] }
[[language]]
name = "typescript"
language-servers = [
  "typescript-language-server",
  "vscode-eslint-language-server",
]

Neovim configuration
After a little while I noticed that I was going to need some time to get used to helix, and that the most interesting things for me were the easy configuration and the language server integrations; but as I am already comfortable with neovim and had just installed the language server support tools on my machines, I just needed to configure them for neovim and could keep using it for a while. As I said, my configuration is old; to configure neovim I have the following init.vim file in my ~/.config/nvim folder:
set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath=&runtimepath
source ~/.vim/vimrc
" load lua configuration
lua require('config')
With that configuration I keep my old vimrc (it is a little bit messy, but it works) and I use a lua configuration file for the language servers and some additional neovim plugins in the ~/.config/nvim/lua/config.lua file:
-- -----------------------
-- BEG: LSP Configurations
-- -----------------------
-- AWK (awk_ls)
require'lspconfig'.awk_ls.setup{}
-- Bash (bashls)
require'lspconfig'.bashls.setup{}
-- C/C++ (clangd)
require'lspconfig'.clangd.setup{}
-- CSS (cssls)
require'lspconfig'.cssls.setup{}
-- Docker (dockerls)
require'lspconfig'.dockerls.setup{}
-- Docker Compose
require'lspconfig'.docker_compose_language_service.setup{}
-- Golang (gopls)
require'lspconfig'.gopls.setup{}
-- Helm (helm_ls)
require'lspconfig'.helm_ls.setup{}
-- Markdown
require'lspconfig'.marksman.setup{}
-- Python (pyright)
require'lspconfig'.pyright.setup{}
-- Rust (rust-analyzer)
require'lspconfig'.rust_analyzer.setup{}
-- SQL (sqlls)
require'lspconfig'.sqlls.setup{}
-- Terraform (terraformls)
require'lspconfig'.terraformls.setup{}
-- TOML (taplo)
require'lspconfig'.taplo.setup{}
-- Typescript (ts_ls)
require'lspconfig'.ts_ls.setup{}
-- YAML (yamlls)
require'lspconfig'.yamlls.setup{
  settings = {
    yaml = {
      customTags = { "!reference sequence" },
    },
  },
}
-- -----------------------
-- END: LSP Configurations
-- -----------------------
-- ---------------------------------
-- BEG: Autocompletion configuration
-- ---------------------------------
-- Ref: https://github.com/neovim/nvim-lspconfig/wiki/Autocompletion
--
-- Pre requisites:
--
--   # Packer
--   git clone --depth 1 https://github.com/wbthomason/packer.nvim \
--      ~/.local/share/nvim/site/pack/packer/start/packer.nvim
--
--   # Start nvim and run :PackerSync or :PackerUpdate
-- ---------------------------------
local use = require('packer').use
require('packer').startup(function()
  use 'wbthomason/packer.nvim' -- Packer, useful to avoid removing it with PackerSync / PackerUpdate
  use 'neovim/nvim-lspconfig' -- Collection of configurations for built-in LSP client
  use 'hrsh7th/nvim-cmp' -- Autocompletion plugin
  use 'hrsh7th/cmp-nvim-lsp' -- LSP source for nvim-cmp
  use 'saadparwaiz1/cmp_luasnip' -- Snippets source for nvim-cmp
  use 'L3MON4D3/LuaSnip' -- Snippets plugin
end)
-- Add additional capabilities supported by nvim-cmp
local capabilities = require("cmp_nvim_lsp").default_capabilities()
local lspconfig = require('lspconfig')
-- Enable some language servers with the additional completion capabilities offered by nvim-cmp
local servers = { 'clangd', 'rust_analyzer', 'pyright', 'ts_ls' }
for _, lsp in ipairs(servers) do
  lspconfig[lsp].setup {
    -- on_attach = my_custom_on_attach,
    capabilities = capabilities,
  }
end
-- luasnip setup
local luasnip = require 'luasnip'
-- nvim-cmp setup
local cmp = require 'cmp'
cmp.setup {
  snippet = {
    expand = function(args)
      luasnip.lsp_expand(args.body)
    end,
  },
  mapping = cmp.mapping.preset.insert({
    ['<C-u>'] = cmp.mapping.scroll_docs(-4), -- Up
    ['<C-d>'] = cmp.mapping.scroll_docs(4), -- Down
    -- C-b (back) C-f (forward) for snippet placeholder navigation.
    ['<C-Space>'] = cmp.mapping.complete(),
    ['<CR>'] = cmp.mapping.confirm {
      behavior = cmp.ConfirmBehavior.Replace,
      select = true,
    },
    ['<Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_next_item()
      elseif luasnip.expand_or_jumpable() then
        luasnip.expand_or_jump()
      else
        fallback()
      end
    end, { 'i', 's' }),
    ['<S-Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_prev_item()
      elseif luasnip.jumpable(-1) then
        luasnip.jump(-1)
      else
        fallback()
      end
    end, { 'i', 's' }),
  }),
  sources = {
    { name = 'nvim_lsp' },
    { name = 'luasnip' },
  },
}
-- ---------------------------------
-- END: Autocompletion configuration
-- ---------------------------------

Conclusion
I guess I'll keep helix installed and try it again on some of my personal projects to see if I can get used to it, but for now I'll stay with neovim as my main text editor and learn the shortcuts to use it with the language servers.

5 January 2025

Jonathan McDowell: Free Software Activities for 2024

I tailed off on blog posts towards the end of the year; I blame a bunch of travel (personal + business), catching the flu, then December being its usual busy self. Anyway, to try and start off the year a bit better I thought I'd do my annual recap of my Free Software activities. For previous years see 2019, 2020, 2021, 2022 + 2023.

Conferences
In 2024 I managed to make it to FOSDEM again. It's a hectic conference, and I know there are legitimate concerns about it being a super-spreader event, but it has the advantage of being relatively close and having a lot of different groups of people I want to talk to / see talk at it. I'm already booked to go this year as well. I spoke at All Systems Go in Berlin about Using TPMs at scale for protecting keys. It was nice to actually be able to talk publicly about some of the work stuff my team and I have been working on. I'd a talk submission in for FOSDEM about our use of attestation and why it's not necessarily the evil some folk claim, but there were a lot of good talks submitted and I wasn't selected. Maybe I'll find somewhere else suitable to do it. BSides Belfast may or may not count - it's a security conference, but there's a lot of overlap with various bits of Free software, so I feel it deserves a mention. I skipped DebConf for 2024 for a variety of reasons, but I'm expecting to make DebConf25 in Brest, France in July.

Debian
Most of my contributions to Free software continue to happen within Debian. In 2023 I'd done a bunch of work on retrogaming with Kodi on Debian, so I made an effort to try and keep those bits more up to date, even if I'm not actually regularly using them at present. RetroArch got 1.18.0+dfsg-1 and 1.19.1+dfsg-1 uploads. libretro-core-info got associated 1.18.0-1 and 1.19.0-1 uploads too. I note 1.20.0 has been released recently, so I'll have to find some time to build the appropriate DFSG tarball and update it. rcheevos saw 11.2.0-1, 11.5.0-1 + 11.6.0-1 uploaded. kodi-game-libretro itself had 20.2.7-1 uploaded, then 21.0.7-1. Latest upstream is 22.1.0, but that's tracking Kodi 22 and we're still on Kodi 21, so I plan to follow the Omega branch for now. Which I've just noticed had a 21.0.8 release this week. Finally in the games space I uploaded mgba 0.10.3+dfsg-1 and 0.10.3+dfsg-2 for Ryan Tandy, before realising he was already a Debian Maintainer and granting him the appropriate ACL access so he can upload it himself; I've had zero concerns about any of his packaging. The Debian Electronics Packaging Team continues to be home for a bunch of packages I care about. There was nothing big there, for me, in 2024, but a few bits of cleanup here and there. I seem to have become one of the main uploaders for sdcc - I have some interest in the space, and the sigrok firmware requires it to build, so I at least like to ensure it's in half-decent state. I uploaded 4.4.0+dfsg-1, 4.4.0+dfsg-2, and, just in time to count for 2024, 4.4.0+dfsg-3. The sdcc 4.4 upload led to some compilation issues for sigrok-firmware-fx2lafw, so I uploaded 0.1.7-2 fixing that, then 0.1.7-3 doing some further cleanups. OpenOCD had 0.12.0-2 uploaded to disable the libgpiod backend thanks to incompatible changes upstream. There were some in-discussion patches with OpenOCD upstream at the time, but they didn't seem to be ready yet so I held off on pulling them in. 0.12.0-3 fixed builds with more recent versions of jimtcl. It looks like the next upstream release is about a year away, so Trixie will in all probability ship with 0.12.0 as well. libjaylink had a new upstream release, so 0.4.0-1 was uploaded. libserialport also had a new upstream release, leading to 0.1.2-1. I finally cracked and uploaded sg3-utils 1.48-1 into experimental. I'm not the primary maintainer, but 1.46 is nearly 4 years old now and I wanted to get it updated in enough time to shake out any problems before we get to a Trixie freeze. Outside of team-owned packages, libcli had compilation issues with GCC 14, leading to 1.10.7-2. I also added a new package, sedutil 1.20.0-2, back in April; it looks fairly unmaintained upstream (there's been some recent activity, but it doesn't seem to be release quality), but there was an outstanding ITP and I've some familiarity with the space as we've been using it at work as part of investigating TCG OPAL encryption. I continue to keep an eye on Debian New Members, even though I'm mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions. Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.24, 2023.09.22 + 2023.11.24.

Linux
I'd a single kernel contribution this year, to Clean up TPM space after command failure. That was based on some issues we saw at work. I've another fix in progress that I hope to submit in 2025, but it's for an intermittent failure, so confirming the fix is necessary + sufficient is taking a little while.

Personal projects
I didn't end up doing much in the way of externally published personal project work in 2024. Despite the release of OpenPGP v6 in RFC 9580 I did not manage to really work on onak. I started on the v6 support, but have not had sufficient time to complete anything worth pushing externally yet. listadmin3 got some minor updates based on external feedback / MRs. It's nice to know it's useful to other folk even in its basic state. That wraps up 2024. I've got no particular goals for this year at present. Ideally I'd get v6 support into onak, and it would be nice to implement some of the wishlist items people have provided for listadmin3, but I'll settle for making sure all my Debian packages are in reasonable state for Trixie.

2 January 2025

Colin Watson: Free software activity in December 2024

Most of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via Liberapay (thanks!).
OpenSSH
I issued a bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange, which was quite broken in bookworm.
base-passwd
A few months ago, the adduser maintainer started a discussion with me (as the base-passwd maintainer) and the shadow maintainer about bringing all three source packages under one team, since they often need to cooperate on things like user and group names. I agreed, but hadn't got round to doing anything about it until recently. I've now officially moved it under team maintenance.
debconf
Gioele Barabucci has been working on eliminating duplicated code between debconf and cdebconf, ultimately with the goal of migrating to cdebconf (which I'm not sure I'm convinced of as a goal, but if we can make improvements to both packages as part of working towards it then there's no harm in that). I finally got round to reviewing and merging confmodule changes in each of debconf and cdebconf. This caused an installer regression due to a weirdness in cdebconf-udeb's packaging, which I fixed - sorry about that! I've also been dealing with a few patch submissions that had been in my queue for a long time, but more on that next month if all goes well.
CI issues
I noticed and fixed a problem with Restrictions: needs-sudo in autopkgtest. I fixed broken aptly images in the Salsa CI pipeline.
Python team
Last month, I mentioned some progress on sorting out the multipart vs. python-multipart name conflict in Debian (#1085728), and said that I thought we'd be able to finish it soon. I was right! We got it all done this month. The Python 3.13 transition continues, and last month we were able to add it to the supported Python versions in testing. (The next step will be to make it the default.) I fixed lots of problems in aid of this. Sphinx 8.0 removed some old intersphinx_mapping syntax which turned out to still be in use by many packages in Debian; the fixes for this were individually trivial, but there were a lot of them. I found that twisted 24.11.0 broke tests in buildbot and wokkel, and fixed those. I packaged python-flatdict, needed for a new upstream version of python-semantic-release. I tracked down a test failure in vdirsyncer (which I've been using for some years, but had never previously needed to modify) and contributed a fix upstream. I fixed some packages to tolerate future versions of dh-python that will drop their dependency on python3-setuptools. I fixed django-cte to remove a build-dependency on the obsolete python3-nose package. I added Django 5.1 support to django-polymorphic. (There are a number of other packages that still need work here.) I fixed various other build/test failures, and upgraded a number of packages to new upstream versions. I updated the team's library style guide to remove material related to Python 2 and early versions of Python 3, which is no longer relevant to any current Python packaging work.
Other Python upstream work
I happened to notice a Twisted upstream issue requesting the removal of the deprecated twisted.internet.defer.returnValue, realized it was still used in many places in Debian, and went on a PR-filing spree informed by codesearch to try to reduce the future impact of such a change on Debian.
Other small fixes
Santiago Vila has been building the archive with make --shuffle (also see its author's explanation).
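For anyone wanting to try the same kind of testing locally, GNU make 4.4's shuffle mode can be exercised directly (a quick sketch; see the GNU make manual for the full mode syntax):
# Randomize prerequisite ordering to flush out undeclared dependencies
make --shuffle
# Use a fixed seed (or reverse ordering) to make a failure reproducible
make --shuffle=1234
make --shuffle=reverse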
I fixed associated bugs in cccc (contributed upstream), groff, and spectemu. I backported an upstream patch to putty to fix undefined behaviour that affected use of the small keypad. I removed groff's Recommends: libpaper1 (#1091375, #1091376), since it isn't currently all that useful and was getting in the way of a transition to libpaper2. I filed an upstream bug suggesting better integration in this area.

1 January 2025

Louis-Philippe Véronneau: 2024 A Musical Retrospective

Another musical retrospective. If you enjoy this, I also did a 2022 and a 2023 one.
Albums
In 2024, I added 88 new albums to my collection; that's a lot! This year again, I bought the vast majority of my music on Bandcamp. To be honest, I'm quite distraught by what's become of that website. Although it stays a wonderful place to buy underground music, Songtradr, the new owner of the platform, has been shown to be viciously anti-union. Money continues to ruin the world, I guess.
Concerts
I continued to go to a lot of concerts in 2024 (25!). Over the past 3 years, I have been going to more and more concerts, and I think I've reached my "peak". A mean of a concert every two weeks is quite a lot :) If you also like music and concerts, but find yourself not going to as many as you would like, the real secret is not to be afraid to go to concerts alone. Going with friends is always fun, but if I restricted myself to only going to concerts in a group, I'd barely see a few each year. Another piece of good advice is to bring a book or something else [1] to pass the time between sets. It can often take 30-45 minutes between sets for the artists to get their instruments ready, which can get quite boring if you just stand there and wait. Anyway, here are the concerts I went to in 2024. Shout out to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there. See you all in 2025!

  1. I bought a Miyoo Mini Plus, a handheld Linux console running OnionOS, for that express reason. So far it's been great and I've been very happy to revisit some childhood classics.

31 December 2024

Chris Lamb: Favourites of 2024

Here are my favourite books and movies that I read and watched throughout 2024. It wasn't quite as stellar a year for books as previous years: few of those books that make you want to recommend and/or buy them for all your friends. In subconscious compensation, perhaps, I reread a few classics (e.g. True Grit, Solaris), and I'm almost finished with my second read of War and Peace.

Books

Elif Batuman: Either/Or (2022)
Stella Gibbons: Cold Comfort Farm (1932)
Michel Faber: Under The Skin (2000)
Wallace Stegner: Crossing to Safety (1987)
Gustave Flaubert: Madame Bovary (1857)
Rachel Cusk: Outline (2014)
Sara Gran: The Book of the Most Precious Substance (2022)
Anonymous: The Railway Traveller's Handy Book (1862)
Natalie Hodges: Uncommon Measure: A Journey Through Music, Performance, and the Science of Time (2022)
Gary K. Wolf: Who Censored Roger Rabbit? (1981)

Films
Recent releases

Disappointments this year included Blitz (Steve McQueen), Love Lies Bleeding (Rose Glass), The Room Next Door (Pedro Almodóvar) and Emilia Pérez (Jacques Audiard), whilst the worst new film this year was likely The Substance (Coralie Fargeat), followed by Megalopolis (Francis Ford Coppola), Unfrosted (Jerry Seinfeld) and Joker: Folie à Deux (Todd Phillips).
Older releases, i.e. films released before 2023, and not including rewatches from previous years. Distinctly unenjoyable watches included The Island of Dr. Moreau (John Frankenheimer, 1996), Southland Tales (Richard Kelly, 2006), Any Given Sunday (Oliver Stone, 1999) & The Hairdresser's Husband (Patrice Leconte, 1990). On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Solaris (Andrei Tarkovsky, 1972), Blade Runner (Ridley Scott, 1982), Apocalypse Now (Francis Ford Coppola, 1979) and Die Hard (John McTiernan, 1988).

29 December 2024

Emmanuel Kasper: Accessing Atari ST disk images on Linux

This post leverages support for Atari Hard Disk Interface Partition (AHDI) partition tables in the Linux kernel, activated by default in Debian, and in the parted partition editor.
Accessing the content of a partition using a user-mounted loop device
This is the easiest procedure and should be tried first. Depending on whether your Linux kernel has support for AHDI partition tables, and on the size of the FAT filesystem on the partition, this procedure might not work. In that case, try the procedure using mtools further below. Attach a disk image called hd80mb.image to a loop device:
$ udisksctl loop-setup --file hd80mb.image
Mapped file hd80mb.image as /dev/loop0
Notice how the kernel detected the partition table:
$ dmesg | grep loop0
[160892.151941] loop0: detected capacity change from 0 to 164138
[160892.171061]  loop0: AHDI p1 p2 p3 p4
Inspect the block devices created for each partition:
$ lsblk | grep loop0
If the partitions are not already mounted by udisks2 under /media/, mount them manually:
$ sudo mount /dev/loop0p1 /mnt/
$ ls /mnt/
SHDRIVER.SYS
When you are finished copying data, unmount the partition, and detach the loop device.
$ sudo umount /mnt
$ udisksctl loop-delete --block-device /dev/loop0
Accessing the content of a partition using mtools and parted
This procedure uses the mtools package and the support for the AHDI partition scheme in the parted partition editor. Display the partition table, with partition offsets in bytes:
$ parted st_mint-1.5.img -- unit B print
...
Partition Table: atari
Disk Flags: 
Number  Start       End         Size        Type     File system  Flags
 1      1024B       133170175B  133169152B  primary               boot
 2      133170176B  266339327B  133169152B  primary
 3      266339328B  399508479B  133169152B  primary
 4      399508480B  532676607B  133168128B  primary
Set some Atari-friendly mtools options:
$ export MTOOLS_SKIP_CHECK=1
$ export MTOOLS_NO_VFAT=1
List the content of the partition, passing as a parameter the offset in bytes of the partition. For instance, here we are interested in the second partition, and the parted output above indicates that this partition starts at byte offset 133170176 in the disk image.
$ mdir -s -i st_mint-1.5.img@@133170176
 Volume in drive : has no label
Directory for ::/
demodata          2024-08-27  11:43 
        1 file                    0 bytes
Directory for ::/demodata
We can also use the command mcopy with a similar syntax to copy data from and to the disk image. For instance, to copy a file named file.zip to the root directory of the second partition:
$ mcopy -s -i st_mint-1.5.img@@133170176 file.zip ::
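Copying in the other direction uses the same syntax; for instance, to extract the demodata directory listed above from the image into the current directory:
$ mcopy -s -i st_mint-1.5.img@@133170176 ::demodata .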
Recompiling mtools to access large partitions
With disk images having large AHDI partitions (well, considered large in 1992), you might encounter the error:
$ mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
init: sector size too big
Cannot initialize '::'
This error is caused by the non-standard large logical sectors that TOS uses for large FAT partitions (see the Atari Hard Disk Filesystem reference on page 41, TOS partitions size). We can inspect the logical sector size using the fsck tools:
$ udisksctl loop-setup --file cecile-falcon-singlepart-1GB.img
$ sudo fsck.fat -Anv /dev/loop0p1
fsck.fat 4.2 (2021-01-31)
...
Media byte 0xf8 (hard disk)
16384 bytes per logical sector
To access the partition, you need to patch mtools so that it supports a logical sector size of 16384 bytes. For this you need to change the MAX_SECTOR macro from 8192 to 16384 in msdos.h in the mtools distribution and recompile.
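A sketch of that edit and rebuild, assuming an unpacked mtools source tree (the exact form of the #define line may differ between versions, so check msdos.h first):
# Raise the maximum supported logical sector size, then rebuild and install
sed -i 's/^#define MAX_SECTOR.*/#define MAX_SECTOR 16384/' msdos.h
./configure
make
sudo make install    # installs into /usr/local by default
A rebuilt mtools is then able to access the partition: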
$ /usr/local/bin/mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
 Volume in drive : has no label
Directory for ::/
CECILE   SYS      8462 1998-03-27  22:42 
NEWDESK  INF       804 2024-09-09   9:23 
        2 files               9 266 bytes
                      1 072 463 872 bytes free

23 December 2024

Simon Josefsson: OpenSSH and Git on a Post-Quantum SPHINCS+

Are you aware that Git commits and tags may be signed using OpenSSH? Git signatures may be used to improve integrity and authentication of our software supply-chain. Popular signature algorithms include Ed25519, ECDSA and RSA. Did you consider that these algorithms may not be safe if someone builds a post-quantum computer? As you may recall, I have earlier blogged about the efficient post-quantum key agreement mechanism called Streamlined NTRU Prime and its use in SSH, and I have attempted to promote the conservatively designed Classic McEliece in a similar way, although it remains to be adopted. What post-quantum signature algorithms are available? There is an effort by NIST to standardize post-quantum algorithms, and they have a category for signature algorithms. According to Wikipedia, after round three the selected algorithms are CRYSTALS-Dilithium, FALCON and SPHINCS+. Of these, SPHINCS+ appears to be a conservative choice suitable for long-term digital signatures. Can we get this to work? Recall that Git uses the ssh-keygen tool from OpenSSH to perform signing and verification. To refresh your memory, let's study the commands that Git uses under the hood for Ed25519. First generate an Ed25519 private key:
jas@kaka:~$ ssh-keygen -t ed25519 -f my_ed25519_key -P ""
Generating public/private ed25519 key pair.
Your identification has been saved in my_ed25519_key
Your public key has been saved in my_ed25519_key.pub
The key fingerprint is:
SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ jas@kaka
The key's randomart image is:
+--[ED25519 256]--+
     .+=.E ..      
      oo=.ooo      
     . =o=+o .     
      =oO+o .      
      .=+S.=       
       oo.o o      
      . o  .       
     ...o.+..      
    .o.o.=**.      
+----[SHA256]-----+
jas@kaka:~$ cat my_ed25519_key
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQAAAJCeDotOng6L
TgAAAAtzc2gtZWQyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQ
AAAEBFRvzgcD3YItl9AMmVK4xDKj8NTg4h2Sluj0/x7aSPlhY/9pnyHM3RY1ExKmPNuBbW
0lc13a/r92dsppC3uIgFAAAACGphc0BrYWthAQIDBAU=
-----END OPENSSH PRIVATE KEY-----
jas@kaka:~$ cat my_ed25519_key.pub 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF jas@kaka
jas@kaka:~$ 
Then let's sign something with this key:
jas@kaka:~$ echo "Hello world!" > msg
jas@kaka:~$ ssh-keygen -Y sign -f my_ed25519_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
jas@kaka:~$ cat msg.sig 
-----BEGIN SSH SIGNATURE-----
U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAgFj/2mfIczdFjUTEqY824FtbSVz
Xdr+v3Z2ymkLe4iAUAAAAMbXktbmFtZXNwYWNlAAAAAAAAAAZzaGE1MTIAAABTAAAAC3Nz
aC1lZDI1NTE5AAAAQLmWsq05tqOOZIJqjxy5ZP/YRFoaX30lfIllmfyoeM5lpVnxJ3ZxU8
SF0KodDr8Rtukg2N3Xo80NGvZOzbG/9Aw=
-----END SSH SIGNATURE-----
jas@kaka:~$
Now let's create a list of trusted public keys and associated identities:
jas@kaka:~$ echo 'my.name@example.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF' > allowed-signers
jas@kaka:~$ 
Then let's verify the message we just signed:
jas@kaka:~$ cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with ED25519 key SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ
jas@kaka:~$ 
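For completeness, wiring such a key into Git itself takes only a few settings; a sketch using the files generated above (the paths are assumptions, so adjust to wherever you keep the key and allowed signers list):
# Use SSH keys instead of OpenPGP for Git signatures
git config --global gpg.format ssh
git config --global user.signingkey ~/my_ed25519_key.pub
git config --global gpg.ssh.allowedSignersFile ~/allowed-signers
# Sign a commit and inspect the result
git commit -S -m "signed commit"
git log --show-signature -1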
I have implemented support for SPHINCS+ in OpenSSH. This is early work, but I wanted to announce it to get discussion of some of the details going and to make people aware of it. What better way to demonstrate SPHINCS+ support in OpenSSH than by validating the Git commit that implements it using itself? Here is how to proceed; first get a suitable development environment up and running. I'm using a Debian container launched in a protected environment using podman.
jas@kaka:~$ podman run -it --rm debian:stable
Then install the necessary build dependencies for OpenSSH.
# apt-get update 
# apt-get install git build-essential autoconf libz-dev libssl-dev
Now clone my OpenSSH branch with the SPHINCS+ implementation and build it. You may browse the commit on GitHub first if you are curious.
# cd
# git clone https://github.com/jas4711/openssh-portable.git -b sphincsp
# cd openssh-portable
# autoreconf -fvi
# ./configure
# make
Configure a Git allowed signers list with my SPHINCS+ public key (make sure to keep the public key on one line with the whitespace being one ASCII SPC character):
# mkdir -pv ~/.ssh
# echo 'simon@josefsson.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAECI6eacTxjB36xcPtP0ZyxJNIGCN350GluLD5h0KjKDsZLNmNaPSFH2ynWyKZKOF5eRPIMMKSCIV75y+KP9d6w3' > ~/.ssh/allowed_signers
# git config gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
Then verify the commit using the newly built ssh-keygen binary:
# PATH=$PWD:$PATH
# git log -1 --show-signature
commit ce0b590071e2dc845373734655192241a4ace94b (HEAD -> sphincsp, origin/sphincsp)
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
Author: Simon Josefsson <simon@josefsson.org>
Date:   Tue Dec 3 18:44:25 2024 +0100
    Add SPHINCS+.
# git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
# 
Yay! So what are some considerations? SPHINCS+ comes in many different variants. First it comes with three security levels, approximately matching 128/192/256-bit symmetric key strengths. The second choice is between the SHA2-256, SHAKE256 (SHA-3) and Haraka hash algorithms. The final choice is between a robust and a simple variant, with different security and performance characteristics. To get going, I picked the sphincss256sha256robust SPHINCS+ implementation from SUPERCOP 20241022. There is a good size comparison table in the sphincsplus implementation, if you want to consider alternative variants. SPHINCS+ public keys are really small, as you can see in the allowed signers file. This is really good because they are handled by humans and often by cut'n'paste. What about private keys? They are slightly longer than Ed25519 private keys, but shorter than typical RSA private keys.
# ssh-keygen -t sphincsplus -f my_sphincsplus_key -P ""
Generating public/private sphincsplus key pair.
Your identification has been saved in my_sphincsplus_key
Your public key has been saved in my_sphincsplus_key.pub
The key fingerprint is:
SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg root@ad600ff56253
The key's randomart image is:
+[SPHINCSPLUS 256-+
  .  .o            
 o . oo.           
  = .o.. o         
 o o  o o . .   o  
 .+    = S o   o . 
 Eo=  . + . . .. . 
 =*.+  o . . oo .  
 B+=    o o.o. .   
 o*o   ... .oo.    
+----[SHA256]-----+
# cat my_sphincsplus_key.pub 
ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7 root@ad600ff56253
# cat my_sphincsplus_key 
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAYwAAABtzc2gtc3
BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9slu
L/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAQidiIwanYiMGgAAAB
tzc2gtc3BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1
Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAIAbwBxEhA
NYzITN6VeCMqUyvw/59JM+WOLXBlRbu3R8qS7ljc4qFVWUtmhy8B3t9e4jrhdO6w0n5I4l
mnLnBi2hJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpS
vYgZvUkB2WVWGXXZBCfRdQ+wAAABFyb290QGFkNjAwZmY1NjI1MwECAwQ=
-----END OPENSSH PRIVATE KEY-----
# 
Signature size? Now here is the challenge: for this variant the size is around 29kb, or close to 600 lines of base64 data:
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | head -10
tree ede42093e7d5acd37fde02065a4a19ac1f418703
parent 826483d51a9fee60703298bbf839d9ce37943474
author Simon Josefsson <simon@josefsson.org> 1733247865 +0100
committer Simon Josefsson <simon@josefsson.org> 1734907869 +0100
gpgsig -----BEGIN SSH SIGNATURE-----
 U1NIU0lHAAAAAQAAAGMAAAAbc3NoLXNwaGluY3NwbHVzQG9wZW5zc2guY29tAAAAQIjp5p
 xPGMHfrFw+0/RnLEk0gYI3fnQaW4sPmHQqMoOxks2Y1o9IUfbKdbIpko4Xl5E8gwwpIIhX
 vnL4o/13rDcAAAADZ2l0AAAAAAAAAAZzaGE1MTIAAHSDAAAAG3NzaC1zcGhpbmNzcGx1c0
 BvcGVuc3NoLmNvbQAAdGDHlobgfgkKKQBo3UHmnEnNXczCMNdzJmeYJau67QM6xZcAU+d+
 2mvhbksm5D34m75DWEngzBb3usJTqWJeeDdplHHRe3BKVCQ05LHqRYzcSdN6eoeZqoOBvR
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | tail -5
 ChvXUk4jfiNp85RDZ1kljVecfdB2/6CHFRtxrKHJRDiIavYjucgHF1bjz0fqaOSGa90UYL
 RZjZ0OhdHOQjNP5QErlIOcZeqcnwi0+RtCJ1D1wH2psuXIQEyr1mCA==
 -----END SSH SIGNATURE-----
Add SPHINCS+.
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | wc -l
579
# 
What about performance? Verification is really fast:
# time git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
real	0m0.010s
user	0m0.005s
sys	0m0.005s
# 
On this machine, verifying an Ed25519 signature is a couple of times slower, and needs around 0.07 seconds. Signing is slower; it takes a bit over 2 seconds on my laptop.
# echo "Hello world!" > msg
# time ssh-keygen -Y sign -f my_sphincsplus_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
real	0m2.226s
user	0m2.226s
sys	0m0.000s
# echo 'my.name@example.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7' > allowed-signers
# cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with SPHINCSPLUS key SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg
# 
Welcome to our new world of Post-Quantum safe digital signatures of Git commits, and Happy Hacking!

11 December 2024

Divine Attah-Ohiemi: From Sisterly Wisdom to Debian Dreams: My Outreachy Journey

Discovering Open Source: How I Got Introduced
Hey there! I'm Divine Attah-Ohiemi, a sophomore studying Computer Science. My journey into the world of open source was anything but grand. It all started with a simple question to my sister: How do people get jobs without experience? Her answer? Open source! I dove into this vibrant community, and it felt like discovering a hidden treasure chest filled with knowledge and opportunities.
Choosing Debian: Why This Community?
Why Debian, you ask? Well, I applied to Outreachy twice, and both times, I chose Debian. It's not just my first operating system; it feels like home. The Debian community is incredibly welcoming, like a big family gathering where everyone supports each other. Whether I was updating my distro or poring over documentation, the care and consideration in this community were palpable. It reminded me of the warmth of homeschooling with relatives. Plus, knowing that Debian's name comes from its creator Ian and his wife Debra adds a personal touch that makes me feel even more honored to contribute to making the website better!
Why I Applied to Outreachy: What Inspired Me
Outreachy is my golden ticket to the open source world! As a 19-year-old, I see this internship as a unique opportunity to gain invaluable experience while contributing to something meaningful. It's the perfect platform for me to learn, grow, and connect with like-minded individuals who share my passion for technology and community. I'm excited for this journey and can't wait to see where it takes me!

5 December 2024

Reproducible Builds: Reproducible Builds in November 2024

Welcome to the November 2024 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security where relevant. As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. Reproducible Builds mourns the passing of Lunar
  2. Introducing reproduce.debian.net
  3. New landing page design
  4. SBOMs for Python packages
  5. Debian updates
  6. Reproducible builds by default in Maven 4
  7. PyPI now supports digital attestations
  8. Dependency Challenges in OSS Package Registries
  9. Zig programming language demonstrated reproducible builds
  10. Website updates
  11. Upstream patches
  12. Misc development news
  13. Reproducibility testing framework

Reproducible Builds mourns the passing of Lunar
The Reproducible Builds community sadly announced it has lost its founding member, Lunar. Jérémy Bobbio, aka Lunar, passed away on Friday November 8th in palliative care in Rennes, France. Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. He was the author of our earliest status reports, and many of our key tools in use today are based on his design. Lunar's creativity, insight and kindness were often noted. You can view our full tribute elsewhere on our website. He will be greatly missed.

Introducing reproduce.debian.net
In happier news, this month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. In November, reproduce.debian.net began rebuilding Debian unstable on the amd64 architecture, and throughout the MiniDebConf it attempted to rebuild 66% of the official archive. From this, it could be determined that it is currently possible to bit-for-bit reproduce and corroborate approximately 78% of the actual binaries distributed by Debian, that is, using the .buildinfo files hosted by Debian itself. reproduce.debian.net also contains instructions on how to set up one's own rebuilderd instance, and we very much invite everyone with a machine to spare to set up their own version and to share the results. Whilst rebuilderd is still in development, it has been used to reproduce Arch Linux since 2019. We are especially looking for installations targeting Debian architectures other than i386 and amd64.

New landing page design As part of a very productive partnership with the Sovereign Tech Fund and Neighbourhoodie, we are pleased to unveil our new homepage/landing page. We are very happy with our collaboration with both STF and Neighbourhoodie (including many changes not directly related to the website), and look forward to working with them in the future.

SBOMs for Python packages The Python Software Foundation has announced a new cross-functional project for "SBOMs and Python packages". Seth Michael Larson writes that the project is specifically looking to solve these issues:
  • Enable Python users that require SBOM documents (likely due to regulations like CRA or SSDF) to self-serve using existing SBOM generation tools.
  • Solve the phantom dependency problem, where non-Python software is bundled in Python packages but not recorded in any metadata. This makes the job of software composition analysis (SCA) tools difficult or impossible.
  • Make the adoption work required of relevant projects, such as build backends and auditwheel-esque tools, as minimal as possible. Empower users who are interested in having better SBOM data for the Python projects they are using to be able to contribute engineering time towards that goal.
A GitHub repository for the initiative is available, and there are a number of queries, comments and remarks on Seth's Discourse forum post.
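As a taste of the self-serve workflow the project wants to enable, here is a minimal sketch using the third-party syft tool (my choice for illustration; it is not part of the PSF initiative) to emit a CycloneDX SBOM for a Python virtual environment:
# Create a demo environment and generate a CycloneDX SBOM for it
# (environment name and installed package are illustrative):
python3 -m venv demo && ./demo/bin/pip install requests
syft dir:./demo -o cyclonedx-json > sbom.json
Note that a generic scanner like this is exactly where the "phantom dependency" problem bites: bundled non-Python code that is absent from the package metadata will not show up in the resulting document.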

Debian updates There was significant development within Debian this month. Firstly, at the recent MiniDebConf in Toulouse, France, Holger Levsen gave a Debian-specific talk on rebuilding packages distributed from ftp.debian.org, that is to say, how to reproduce the results from the official Debian build servers. Holger described the talk as follows:
For more than ten years, the Reproducible Builds project has worked towards reproducible builds of many projects, and for ten years now we have built Debian packages twice with maximal variations applied to see if they can still be built reproducibly. Since about a month ago, we've also been rebuilding trying to exactly match the builds being distributed via ftp.debian.org. This talk will describe the setup and the lessons learned so far, and why the results currently are what they are (spoiler: they are less than 30% reproducible), and what we can do to fix that.
The Debian Project Leader, Andreas Tille, was present at the talk and remarked later in his Bits from the DPL update that:
It might be unfair to single out a specific talk from Toulouse, but I'd like to highlight the one on reproducible builds. Beyond its technical focus, the talk also addressed the recent loss of Lunar, whom we mourn deeply. It served as a tribute to Lunar's contributions and legacy. Personally, I've encountered packages maintained by Lunar and bugs he had filed. I believe that taking over his packages and addressing the bugs he reported is a meaningful way to honor his memory and acknowledge the value of his work.
Holger's slides and video in .webm format are available.
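To make the "twice with maximal variations" idea concrete, here is a toy sketch, not the project's actual tooling: build the same source tree twice under a different timezone and locale, then compare the two installs bit-for-bit with diffoscope:
# Toy illustration only: vary timezone and locale between two builds
# of the same tree, then diff the results.
mkdir -p b1 b2
( make clean; TZ=UTC LANG=C make; make DESTDIR="$PWD/b1" install )
( make clean; TZ=Asia/Tokyo LANG=fr_FR.UTF-8 make; make DESTDIR="$PWD/b2" install )
diffoscope b1 b2   # any difference is a reproducibility issue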
Next, rebuilderd is the server used to monitor package repositories of Linux distributions and attempt to reproduce the observed results. This month, version 0.21.0 was released, most notably with improved support for binNMUs by Jochen Sprickerhof and an update of the rebuilderd-debian.sh integration to the latest debrebuild version by Holger Levsen. There has also been significant work to get the rebuilderd package into the Debian archive; in particular, both rust-rebuilderd-common version 0.20.0-1 and rust-rust-lzma version 0.6.0-1 were packaged by kpcyrd and uploaded by Holger Levsen. Related to this, Holger Levsen submitted three additional issues against rebuilderd as well:
  • rebuildctl should be more verbose when encountering issues. [ ]
  • Please add an option to use randomised queues. [ ]
  • Scheduling and re-scheduling multiple packages at once. [ ]
and lastly, Jochen Sprickerhof submitted an issue requesting that rebuilderd download the source package in addition to the .buildinfo file [ ], and kpcyrd submitted and fixed an issue surrounding dependencies and clarifying the license. [ ]
Separate to this, back in 2018, Chris Lamb filed a bug report against the sphinx-gallery package as it generates unreproducible content in various ways. This month, however, Dmitry Shachnev finally closed the bug, listing the multiple sub-issues that were part of the problem and how they were resolved.
Elsewhere, Roland Clobus posted to our mailing list this month, asking for input on a bug in Debian's ca-certificates-java package. The issue is that the Java key management tools embed timestamps in their output, and this output ends up in the /etc/ssl/certs/java/cacerts file on the generated ISO images. A discussion resulted from Roland's post suggesting some short- and medium-term solutions to the problem.
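To see the kind of nondeterminism Roland describes, one can import the same certificate into two keystores a few seconds apart and compare the results; a minimal sketch (example.crt, the alias and the password are illustrative, and this is not Roland's actual test case):
# Two imports of identical input, a couple of seconds apart:
keytool -importcert -noprompt -alias demo -file example.crt \
    -keystore cacerts1 -storepass changeit
sleep 2
keytool -importcert -noprompt -alias demo -file example.crt \
    -keystore cacerts2 -storepass changeit
# The embedded creation timestamps make the two stores differ:
diffoscope cacerts1 cacerts2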
Holger Levsen uploaded some packages with reproducibility-related changes:
Lastly, 12 reviews of Debian packages were added, 5 were updated and 21 were removed this month, adding to our knowledge about identified issues in Debian.

Reproducible builds by default in Maven 4 On our mailing list this month, Hervé Boutemy reported that the latest release of Maven (4.0.0-beta-5) has reproducible builds enabled by default. In his mailing list post, Hervé mentions that this story started during our Reproducible Builds summit in Hamburg, where he created the upstream issue that builds on a multi-year effort to have Maven builds configured for reproducibility.
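Maven's own reproducible-builds guide documents a way to verify this yourself: build and install the project once, then rebuild it and compare the new artifacts against the installed reference using the maven-artifact-plugin:
# Build and install a reference copy, then rebuild and diff the outputs:
mvn clean install
mvn clean verify artifact:compare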

PyPI now supports digital attestations Elsewhere in the Python ecosystem and as reported on LWN and elsewhere, the Python Package Index (PyPI) has announced that it has finalised support for PEP 740 ("Index support for digital attestations"). Trail of Bits, who performed much of the development work, has an in-depth blog post about the work and its adoption, as well as what is left undone:
One thing is notably missing from all of this work: downstream verification. [ ] This isn't an acceptable end state (cryptographic attestations have defensive properties only insofar as they're actually verified), so we're looking into ways to bring verification to individual installing clients. In particular, we're currently working on a plugin architecture for pip that will enable users to load verification logic directly into their pip install flows.
There was an in-depth discussion on LWN's announcement page, as well as on Hacker News.

Dependency Challenges in OSS Package Registries At BENEVOL, the Belgium-Netherlands Software Evolution workshop in Namur, Belgium, Tom Mens and Alexandre Decan presented their paper, "An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries". The abstract of their paper is as follows:
While open-source software has enabled significant levels of reuse to speed up software development, it has also given rise to the dreadful dependency hell that all software practitioners face on a regular basis. This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries. The catalogue is based on the scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges. [ ]
A PDF of the paper is available online.

Zig programming language demonstrated to be reproducible Motiejus Jakštys posted an interesting and practical blog post on his successful attempt to reproduce the Zig programming language without using the pre-compiled binaries checked into the repository, and despite the circular dependency inherent in its bootstrapping process. As a summary, Motiejus concludes that:
I can now confidently say (and you can also check, you don't need to trust me) that there is nothing hiding in zig1.wasm [the checked-in binary] that hasn't been checked in as a source file.
The full post is full of practical details, and includes a few open questions.
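The final check boils down to a byte-for-byte comparison of the checked-in seed against an independently rebuilt one; a sketch of the idea (zig1-rebuilt.wasm is a hypothetical name for the rebuilt binary):
# stage1/zig1.wasm is the binary checked into the Zig repository:
sha256sum stage1/zig1.wasm zig1-rebuilt.wasm
cmp stage1/zig1.wasm zig1-rebuilt.wasm && echo "bit-for-bit identical"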

Website updates Notwithstanding the significant change to the landing page (screenshot above), there were an enormous number of changes made to our website this month. This included:
  • Alex Feyerke and Mariano Gim nez:
    • Dramatically overhaul the website s landing page with new benefit cards tailored to the expected visitors to our website and a reworking of the visual hierarchy and design. [ ][ ][ ][ ][ ][ ][ ][ ][ ][ ]
  • Bernhard M. Wiedemann:
    • Update the System images page to document the e2fsprogs approach. [ ]
  • Chris Lamb:
  • FC (Fay) Stegerman:
    • Replace more inline markdown with HTML on the Success stories page. [ ]
    • Add some links, fix some other links and correct some spelling errors on the Tools page. [ ]
  • Holger Levsen:
    • Add a historical presentation ("Reproducible builds everywhere, e.g. in Debian, OpenWrt and LEDE") from October 2016. [ ]
    • Add jochensp and Oejet to the list of known contributors. [ ][ ]
  • Julia Krüger:
  • Ninette Adhikari & hulkoba:
    • Add/rework the list of success stories into a new page that clearly shows milestones in Reproducible Builds. [ ][ ][ ][ ][ ][ ]
  • Philip Rinn:
    • Import 47 historical weekly reports. [ ]
  • hulkoba:
    • Add alt text to almost all images (!). [ ][ ]
    • Fix a number of links on the Talks page. [ ][ ]
    • Avoid so-called ghost buttons by not using <button> elements as links, as the affordance of a <button> implies an action with (potentially) a side effect. [ ][ ]
    • Center the sponsor logos on the homepage. [ ]
    • Move publications and generate them instead from a data.yml file with an improved layout. [ ][ ]
    • Make a large number of small but impactful styling changes. [ ][ ][ ][ ]
    • Expand the Tools page to include a number of missing tools, fix some styling issues and fix a number of stale/broken links. [ ][ ][ ][ ][ ][ ]

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Misc development news

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related changes:
    • Create and introduce a new reproduce.debian.net service and subdomain [ ]
    • Make a large number of documentation changes relevant to rebuilderd. [ ][ ][ ][ ][ ]
    • Explain a temporary workaround for a specific issue in rebuilderd. [ ]
    • Setup another rebuilderd instance on the o4 node and update installation documentation to match. [ ][ ]
    • Make a number of helpful/cosmetic changes to the interface, such as clarifying terms and adding links. [ ][ ][ ][ ][ ]
    • Deploy configuration to the /opt and /var directories. [ ][ ]
    • Add an "infancy" (or "alpha") disclaimer. [ ][ ]
    • Add more notes to the temporary rebuilderd documentation. [ ]
    • Commit an nginx configuration file for reproduce.debian.net's Stats page. [ ]
    • Commit a rebuilder-worker.conf configuration for the o5 node. [ ]
  • Debian-related changes:
    • Grant jspricke and jochensp access to the o5 node. [ ][ ]
    • Build the qemu package with the nocheck build flag (see the example after these lists). [ ]
  • Misc changes:
    • Adapt the update_jdn.sh script for new Debian trixie systems. [ ]
    • Stop installing the PostgreSQL database engine on the o4 and o5 nodes. [ ]
    • Prevent accidental reboots of the o4 node because of a long-running job owned by josch. [ ][ ]
In addition, Mattia Rizzolo addressed a number of issues with reproduce.debian.net [ ][ ][ ][ ]. And lastly, both Holger Levsen [ ][ ][ ][ ] and Vagrant Cascadian [ ][ ][ ][ ] performed node maintenance.
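For reference, the nocheck flag mentioned in the Debian-related changes above is passed to a build via the DEB_BUILD_OPTIONS environment variable, e.g.:
# Skip the package's test suite during a build:
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -us -uc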
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

2 December 2024

Jonathan Dowland: jungle/acid/etc

I thought it had been a full year since I last shared a playlist, but it's been two! I had a plan to produce more, but it seems I haven't. Instead here's a few tracks I've discovered recently which share a common theme. In August I stumbled across a Sound on Sound video interviewing Pete Cannon, who creates authentically old-school Jungle music using tools and techniques from the time, including AKAI samplers and the Commodore Amiga computer.
Here's three tracks that I found since then: some 8-bit Amiga-jungle, some slower-paced acid house from someone ostensibly based in Whitley Bay, and a darker piece I heard on the radio.

29 November 2024

Russ Allbery: Review: The Duke Who Didn't

Review: The Duke Who Didn't, by Courtney Milan
Series: Wedgeford Trials #1
Publisher: Femtopress
Copyright: September 2020
ASIN: B08G4QC3JC
Format: Kindle
Pages: 334
The Duke Who Didn't is a Victorian romance novel, the first of a loosely-connected trilogy in the romance sense of switching protagonists between books. It's self-published, but by Courtney Milan, so the quality of the editing and publishing is about as high as you will see for a self-published novel.
Chloe Fong has a goal: to make her father's sauce the success that it should be. His previous version of the recipe was stolen by White and Whistler and is now wildly popular as Pure English Sauce. His current version is much better. In a few days, tourists will come from all over England to the annual festival of the Wedgeford Trials, and this will be Chloe's opportunity to give the sauce a proper debut and marketing push. There is only the small matter of making enough sauce and coming up with a good name. Chloe is very busy and absolutely does not have time for nonsense. Particularly nonsense in the form of Jeremy Yu.
Jeremy started coming to the Wedgeford Trials at the age of twelve. He was obviously from money and society, obviously enough that the villagers gave him the nickname Posh Jim after his participation in the central game of the trials. Exactly how wealthy and exactly which society, however, is something that he never quite explained, at first because he was having too much fun and then because he felt he'd waited too long. The village of Wedgeford was thriving under the benevolent neglect of its absent duke and uncollected taxes, and no one who loved it had any desire for that to change. Including Jeremy, the absent duke in question.
Jeremy had been in love with Chloe for years, but the last time he came to the Trials, Chloe told him to stop pursuing her unless he could be serious. That was three years and three Trials ago, and Chloe was certain Jeremy had made his choice by his absence. But Jeremy never forgot her, and despite his utter failure to become a more serious person, he is determined to convince her that he is serious about her. And also determined to finally reveal his identity without breaking everything he loves about the village. Somehow.
I have mentioned in other reviews that I mostly read sapphic instead of heterosexual romance because the gender roles in heterosexual romance are much more likely to irritate me. It occurred to me that I was probably being unfair to the heterosexual romance genre, I hadn't read nearly widely enough to draw any real conclusions, and I needed to find better examples. I've followed Courtney Milan occasionally on social media (for reasons unrelated to her novels) for long enough to know that she was unlikely to go for gender essentialism, and I'd been meaning to try one of her books for a while. Hence this novel.
It is indeed not gender-essentialist. Neither Chloe nor Jeremy fit into obvious gender boxes. Chloe is the motivating force in the novel and many of their interactions were utterly charming. But, despite that, the gender roles still annoyed me in ways that are entirely not the fault of this book. I'm not sure I can even put a finger on something specific. It's a low-grade, pervasive feeling that men do one type of thing and women do a different type of thing, and even if these characters don't stick to that closely, it saturates the vibes. (Admittedly, a Victorian romance was probably not the best choice when I knew this was my biggest problem with genre heterosexual romance. It was just what I had on hand.)
The conceit of the Wedgeford Trials series is that the small village of Wedgeford in England, through historical accident, ended up with an unusually large number of residents with Chinese ancestry. This is what I would call a "believable outlier": there was not such a village so far as I know, but there could well have been. At the least, there were way more people with non-English ancestry, including east Asian ancestry, in Victorian England than modern readers might think. There is quite a lot in this novel about family history, cultural traditions, immigration, and colonialism that I'm wholly unqualified to comment on but that was fascinating to read about and seemed (as one would expect from Milan) adroitly written.
As for the rest of the story, The Duke Who Didn't is absolutely full of banter. If your idea of a good time with a romance novel is teasing, word play, mock irritation, and endless verbal fencing as a way to avoid directly confronting difficult topics, you will be in heaven. Jeremy is one of those people who is way too much in his own head and has turned his problems into a giant ball of anxiety, but who is good at being the class clown, and therefore leans heavily on banter and making people laugh (or blush) as a way of avoiding whatever he's anxious about. I thought the characterization was quite good, but I admit I still got a bit tired of it. 350 pages is a lot of banter, particularly when the characters have some serious communication problems they need to resolve, and to fully enjoy this book you have to have a lot of patience for Jeremy's near-pathological inability to be forthright with Chloe.
Chloe's most charming characteristic is that she makes lists, particularly to-do lists. Her ideal days proceed as an orderly process of crossing things off of lists, and her way to approach any problem is to make a list. This is a great hook, and extremely relatable, but if you're going to talk this much about her lists, I want to see the lists! Chloe is all about details; show me the details! This book does not contain anywhere close to enough of Chloe's lists. I'm not sure there was a single list in this book that the reader both got to see the details of and that made it to more than three items. I think Chloe would agree that it's pointless to talk about the concept of lists; one needs to commit oneself to making an actual list.
This book I would unquestioningly classify as romantic comedy (which given my utter lack of familiarity with romance subgenres probably means that it isn't). Jeremy's standard interaction style with anyone is self-deprecating humor, and Chloe is the sort of character who is extremely serious in ways that strike other people as funny. Towards the end of the book, there is a hilarious self-aware subversion of a major romance novel trope that even I caught, despite my general lack of familiarity with the genre. The eventual resolution of Jeremy's problem of hidden identity caught me by surprise in that way where I should have seen it all along, and was both beautifully handled and quite entertaining. All the pieces are here for a great time, and I think a lot of people would love this book. Somehow, it still wasn't quite my thing; I thoroughly enjoyed parts of it, but I don't find myself eager to read another. I'm kind of annoyed at myself that it didn't pull me in, since if I'd liked this I know where to find lots more like it. But ah well.
If you like banter-heavy heterosexual romance that is very self-aware about its genre without devolving into metafiction, this is at least worth a try. Followed in the romance series way by The Marquis Who Mustn't, but this is a complete story with a satisfying ending. Rating: 7 out of 10

18 November 2024

C.J. Collier: Managing HPE SAS Controllers

Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I'd include them here, too. hpacucli is the wrong command. Use ssacli instead.
$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
  882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
  57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
  476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
    curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
      | gpg --no-default-keyring --keyring "${KR}" --import ; \
  done
$ gpg --list-keys --no-default-keyring --keyring "${KR}"
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)
pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1
pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25 
$ echo "deb [signed-by=$ KR ] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
    sudo dd of=/etc/apt/sources.list.d status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status
HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK
$ sudo ssacli ctrl all show detail
HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature  (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None
$ sudo ssacli ctrl all show config
HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)
   Internal Drive Cage at Port 1I, Box 2, OK
   Internal Drive Cage at Port 2I, Box 2, OK
   Port Name: 1I (Mixed)
   Port Name: 2I (Mixed)
   Array A (SAS, Unused Space: 0  MB)
      logicaldrive 1 (1.64 TB, RAID 6, OK)
      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)
   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)
$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail
HPE Smart Array P408i-p SR Gen10 in Slot 3
   Array A
      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST    HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238
2024-12-09 update
I was experiencing a disk failure or predictive failure when I walked into my office this afternoon. I checked this blog post and found the command I used to check the status of the SAS array.
cjac@server0:~$ sudo ssacli ctrl all show config
HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)
   Internal Drive Cage at Port 1I, Box 2, OK
   Internal Drive Cage at Port 2I, Box 2, OK
   Port Name: 1I (Mixed)
   Port Name: 2I (Mixed)
   Array A (SAS, Unused Space: 0  MB)
      logicaldrive 1 (1.64 TB, RAID 6, OK)
      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, Predictive Failure)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 300 GB, OK)
   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)
At this point I engaged the understudy to have the flashing disk replaced:
cjac@server0:~$ sudo ssacli ctrl all show config
[sudo] password for cjac: 
HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)
   Internal Drive Cage at Port 1I, Box 2, OK
   Internal Drive Cage at Port 2I, Box 2, OK
   Port Name: 1I (Mixed)
   Port Name: 2I (Mixed)
   Array A (SAS, Unused Space: 0  MB)
      logicaldrive 1 (1.64 TB, RAID 6, Recovering, 56.43% complete)
      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 300 GB, Rebuilding)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 300 GB, OK)
   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)
In the next and future outputs, I will elide duplicate lines:
cjac@server0:~$ date ; sudo ssacli ctrl all show config
Mon Dec  9 07:36:44 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 66.85% complete)...
Mon Dec  9 07:39:25 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 68.97% complete) ...
cjac@server0:~$ date ; sudo ssacli ctrl all show config | grep Recovering
Mon Dec  9 07:43:05 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 71.85% complete)
Mon Dec  9 07:43:57 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 72.55% complete)
cjac@server0:~$ sleep 5m ; date ; sudo ssacli ctrl all show config | grep Recovering
Mon Dec  9 07:49:43 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 77.00% complete)
cjac@server0:~$ date ; sudo ssacli ctrl all show config | grep -e 1I:2:2 -e logicaldrive
Mon Dec  9 11:36:14 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, OK)
physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 300 GB, OK)
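Rather than re-running the command by hand, a watch(1) loop polls the rebuild for you; a small sketch, run from a root shell so sudo doesn't prompt inside the loop (the slot number matches the output above):
# Poll the logical-drive status once a minute during a rebuild:
watch -n 60 "ssacli ctrl slot=3 ld all show status"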

11 November 2024

Dirk Eddelbuettel: RcppSpdlog 0.0.19 on CRAN: New Upstream, New Features

Version 0.0.19 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site. This release updates the code to version 1.15.0 of spdlog, which was released on Saturday, and contains fmt 11.0.2. It also contains a contributed PR which allows using std::format under C++20, bypassing fmt (with some post-merge polish too), and another PR correcting a documentation double-entry. The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.19 (2024-11-10)
  • Support use of std::format under C++20 via opt-in define instead of fmt (Xanthos Xanthopoulos in #19)
  • An erroneous duplicate log=level documentation entry was removed (Contantinos Giachalis in #20)
  • Upgraded to upstream release spdlog 1.15.0 (Dirk in #21)
  • Partially revert / simplify src/formatter.cpp accommodating both #19 and previous state (Dirk in #21)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Vincent Bernat: Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins [1] to add features like rate limiting, caching, and Docker integration. While Caddy is available in Nixpkgs, adding extra plugins is not simple. [2] The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.
{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}
Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:
{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}
To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy's source code, with some plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.
pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';
  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}
With a fixed-output derivation, it is up to us to ensure the output is always the same. [3] You can use this derivation to override the src attribute in pkgs.caddy:
pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
})
Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:
{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}
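To sanity-check the result, you can build the flake's default package and ask the resulting binary for its module list; a small sketch, assuming flakes are enabled on your system:
nix build .#default
./result/bin/caddy list-modules | grep powerdns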

Update (2024-11) This flake won't work with Nixpkgs 24.05 or older because it relies on this commit to properly override the vendorHash attribute.


  1. This article uses the term plugins, though Caddy documentation also refers to them as modules since they are implemented as Go modules.
  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different and I have proposed it in another pull request.
  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail.

10 November 2024

Reproducible Builds: Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project. Our reports attempt to outline what we've been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. Beyond bitwise equality for Reproducible Builds?
  2. Two Ways to Trustworthy at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds? Jens Dietrich and Tim White, of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled "Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds":
The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: "Does build A confirm the integrity of build B?" or "Can build A reveal a compromised build B?". To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.
A PDF of the paper is freely available.

Two Ways to Trustworthy at SeaGL 2024 On Friday 8th November, Vagrant Cascadian will present a talk entitled Two Ways to Trustworthy at SeaGL in Seattle, WA. Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant's talk:
[ ] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust. Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.

Number of cores affected Android compiler output Fay Stegerman wrote that the cause of the Android toolchain bug from September's report that she reported to the Android issue tracker has been found and the bug has been fixed.
the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class's static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.
To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.

On our mailing list On our mailing list this month:

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:
  • Ignore errors when listing .ar archives (#1085257). [ ]
  • Don t try and test with systemd-ukify in the Debian stable distribution. [ ]
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). [ ]
In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [ ][ ][ ] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [ ][ ]
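If you have not used it before, diffoscope's basic invocation takes two artifacts and can optionally render an HTML report; the file names here are illustrative:
# Compare two builds of the same package and write an HTML report:
diffoscope --html report.html one.deb two.deb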

IzzyOnDroid passed 25% reproducible apps The IzzyOnDroid project has passed a good milestone: over 25% of the ~1,200 Android apps provided by their repository (of official APKs built by the original application developers) have now been confirmed to be reproducible by a rebuilder.

Distribution work In Debian this month:
  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild's unshare backend; a minimal usage sketch follows this list). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results [ ]
  • Recently, a news entry was added to snapshot.debian.org s homepage, describing the recent changes that made the system stable again:
    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it's time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VMs at LeaseWeb.
    The entry lists a number of specific updates surrounding the API endpoints and rate limiting.
  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month, adding to our knowledge about identified issues.
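The minimal debrebuild usage sketch referenced above; the package, version and file name are illustrative, and the --builder option is documented in the debrebuild manpage shipped with devscripts:
# Rebuild a package exactly as described by its .buildinfo file,
# using sbuild as the build backend:
debrebuild --builder=sbuild hello_2.10-3_amd64.buildinfo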
Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage. Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.

Website updates There were an enormous number of improvements made to our website this month, including:
  • Alba Herrerias:
    • Improve consistency across distribution-specific guides. [ ]
    • Fix a number of links on the Contribute page. [ ]
  • Chris Lamb:
  • hulkoba
  • James Addison:
    • Huge and significant work on a (as-yet-unmerged) quickstart guide to be linked from the homepage [ ][ ][ ][ ][ ]
    • On the homepage, link directly to the Projects subpage. [ ]
    • Relocate dependency-drift notes to the Volatile inputs page. [ ]
  • Ninette Adhikari:
    • Add a brand new Success stories page that highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds. [ ][ ][ ][ ][ ][ ]
  • Pol Dellaiera:
    • Update the website s README page for building the website under NixOS. [ ][ ][ ][ ][ ]
    • Add a new academic paper citation. [ ]
Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:
  • Add a basic index.html for rebuilderd. [ ]
  • Update the nginx.conf configuration file for rebuilderd. [ ]
  • Document how to use a rescue system for Infomaniak's OpenStack cloud. [ ]
  • Update usage info for two particular nodes. [ ]
  • Fix up a version skew check to fix the name of the riscv64 architecture. [ ]
  • Update the rebuilderd-related TODO. [ ]
In addition, Mattia Rizzolo added a new IP address for the inos5 node [ ] and Vagrant Cascadian brought 4 virt nodes back online [ ].

Supply-chain security at Open Source Summit EU The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

9 November 2024

Jonathan Dowland: Progressively enhancing CGI apps with htmx

I was interested in learning about htmx, so I used it to improve the experience of posting comments on my blog. It seems much of modern web development is structured around having a JavaScript program on the front-end (browser) which exchanges data encoded in JSON asynchronously with the back-end servers. htmx uses a novel (or throwback) approach: it asynchronously fetches snippets of HTML from the back-end, and splices the results into the live page. For example, a htmx-powered button may request a URI on the server, receive HTML in response, and then the button itself would be replaced by the resulting HTML, within the page. I experimented with incorporating it into an existing, old-school CGI web app: IkiWiki, which I became a co-maintainer of this year, and powers my blog. Throughout this project I referred to the excellent book Server-Driven Web Apps with htmx.
Comment posting workflow
I really value blog comments, but the UX for posting them on my blog was a bit clunky. It went like this:
  1. you load a given page (such as this blog post), which is a static HTML document. There's a link to add a comment to the page.
  2. The link loads a new page which is generated dynamically and served back to you via CGI. This contains an HTML form for you to write your comment.
  3. The form submits to the server via HTTP POST. IkiWiki validates the form content. Various static pages (in particular the one you started on, in Step 1) are regenerated.
  4. the server's response to the request in (3) is an HTTP 302 redirect, instructing the browser to go back to the page in Step 1.
First step: fetching a comment form
First, I wanted the "add a comment" link to present the edit box in the current page. This step was easiest: add four attributes to the "comment on this page" anchor tag:
  • hx-get="<CGI ENDPOINT GOES HERE>" suppresses the normal behaviour of the tag, so clicking on it doesn't load a new page, and instead issues an asynchronous HTTP GET to the CGI end-point, which returns the full HTML document for the comment edit form.
  • hx-select=".editcomment form" extracts the edit-comment form from within that document.
  • hx-swap=beforeend and hx-target=".addcomment" append (courtesy of beforeend) the form into the source page after the "add comment" anchor tag (.addcomment).
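What the hx-get/hx-select pair does can be mimicked from the shell; this sketch uses the third-party htmlq tool as a stand-in for htmx's CSS-selector filtering, and the URL is illustrative:
# Fetch the CGI-generated page and keep only the comment form, the same
# fragment that hx-select=".editcomment form" extracts in the browser:
curl -s 'https://example.org/cgi-bin/ikiwiki.cgi?do=comment&page=posts/example' \
  | htmlq '.editcomment form'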
Now, clicking "comment on this page" loads in the edit-comment box below it without moving you away from the source page. All that without writing any new code!
Second step: handling previews
The old Preview Comment page
In the traditional workflow, clicking on "Preview" loaded a new page containing the edit form (but not the original page or any existing comments) with a rendering of the comment-in-progress below it. I wasn't originally interested in supporting the "Preview" feature, but I needed to for reasons I'll explain later. Rather than load new pages, I wanted "Preview" to insert a rendering of the comment-in-progress into the current page's list of comments, marked up to indicate that it's a preview. IkiWiki provides some templates which you can override to customise your site. I've long overridden page.tmpl, the template used for all pages. I needed to add a new empty div tag in order to have a "hook" to target with the previewed comment. The rest of this was achieved with htmx attributes on the "Preview" button, similar to in the last step: hx-post to define a target URI when you click the button (and specify HTTP POST); hx-select to filter the resulting HTML and extract the comment; hx-target to specify where to insert it. Now, clicking "Preview" does not leave the current page, but fetches a rendering of your comment-in-progress, and splices it into the comment list, appropriately marked up to be clear it's a preview.
Third step: handling submitted comments
IkiWiki is highly configurable, and many different things could happen once you post a comment. On my personal blog, all comments are held for moderation before they are published. The page you were served after submitting a comment was rather bare-bones: a status message "Your comment will be posted after moderator review", without the original page content or comments. I wanted your comment to appear in the page immediately, albeit marked up to indicate it was awaiting review. Since the traditional workflow didn't render or present your comment to you, I had to cheat.
Handling moderated comments
Moderation message upon submitting a comment
One of my goals with this project was not to modify IkiWiki itself. I had to break this rule for moderated comments. When returning the "comment is moderated" page, IkiWiki uses HTTP status code 200, the same as for other scenarios. I wrote a tiny patch to return HTTP 202 (Accepted, but not processed) instead. I now have to write some actual JavaScript. htmx emits the htmx:beforeSwap event after an AJAX call returns, but before the corresponding swap is performed. I wrote a function that is triggered on this event, filters for HTTP 202 responses, triggers the "Preview" button, and then alters the result to indicate a moderated, rather than previewed, comment. (That's why I bothered to implement previews). You can read the full function here: jon.js.
Summary
I've done barely any front-end web development for years and I found working with htmx to be an enjoyable experience. You can leave a comment on this very blog post if you want to see it in action. I couldn't resist adding an easter egg: Brownie points if you can figure out what it is. Adding htmx to an existing CGI-based website let me improve one of the workflows in a gracefully-degrading way (without JavaScript, the old method will continue to work fine) without modifying the existing application itself (well, almost) and without having to write very much code of my own at all: nearly all of the configuration was declarative.

8 November 2024

Thomas Lange: Using NIS (Network Information Service) in 2024

The topic of this posting already tells you that an old Unix guy tells stories about old techniques. I'm a happy NIS (formerly YP) user and have been for 30+ years. I started using it with SunOS 4.0, later with Solaris, and with Linux since 1999. In the past, a colleague wasn't happy using NIS+ when he couldn't log in as root after a short time because of some well-known bugs and wrong configs. NIS+ was also much slower than my NIS setup. I know organisations using NIS for more than 80,000 user accounts in 2024. I know the security implications of NIS, but I can live with them, because I manage all computers in the network that have access to the NIS maps. And NIS on Linux offers shadow maps, which are only accessible to the root account. My users are forced to use very long passwords.
Unfortunately NIS support for the PAM modules was removed in Debian in pam 1.4.0-13, which means Debian 12 (bookworm) is lacking NIS support in PAM, but otherwise NIS is still supported. This only affects changing the NIS password via passwd. You can still authenticate users and use other NIS maps. But yppasswd is deprecated and you should not use it! If you use yppasswd, it may generate a new password hash using the old DES crypt algorithm, which is very weak and only uses the first 8 chars of your password. Do not use yppasswd any more! yppasswd only detects DES, MD5, SHA256 and SHA512 hashes, but for me and some colleagues it only creates weak DES hashes after a password change. yescrypt hashes, which are the default in Debian 12, are not supported at all.
The solution is to use the plain passwd program. On the NIS master, you should set up your NIS configuration to use /etc/shadow and /etc/passwd even if your other NIS maps are in /var/yp/src or similar. Make sure to have these lines in your /var/yp/Makefile:
PASSWD      = /etc/passwd
SHADOW      = /etc/shadow
Call make once, and it will generate the shadow and passwd map. You may want to set the variable MINUID which defines which entries are not put into the NIS maps. On all NIS clients you still need the entries (for passwd, shadow, group,...) that point to the nis service. E.g.:
passwd:         files nis systemd
group:          files nis systemd
shadow:         files nis
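After pointing nsswitch.conf at NIS, a few quick checks on a client confirm the maps are being served (the username is illustrative):
ypwhich               # which NIS server the client is bound to
ypcat passwd | head   # the passwd map as served by NIS
getent passwd alice   # resolution via files + nis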
You can remove all occurrences of "nis" in your /etc/pam.d/common-password file. Then you can use the plain passwd program to change your password on the NIS master. But this does not call make in /var/yp for updating the NIS shadow map. Let's use inotify(7) for that. First, create a small shell script /usr/local/sbin/shadow-change:
#! /bin/sh
PATH=/usr/sbin:/usr/bin
# only watch the /etc/shadow file
if [ "$2" != "shadow" ]; then
  exit 0
fi
cd /var/yp || exit 3
sleep 2
make
Then install the package incron.
# apt install incron
# echo root >> /etc/incron.allow
# incrontab -e
Add this line:
/etc    IN_MOVED_TO     /usr/local/sbin/shadow-change $@ $# $%
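Before relying on incron, the handler can be exercised once by hand; it should rebuild the shadow map just as a real IN_MOVED_TO event would:
# Simulate the incron invocation ($1=/etc, $2=shadow, $3=event name):
sudo /usr/local/sbin/shadow-change /etc shadow IN_MOVED_TO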
It's not possible to use IN_MODIFY or watch other events on /etc/shadow directly, because the passwd command creates a /etc/nshadow file, deletes /etc/shadow and then moves nshadow to shadow. inotify on a file does not work after the file was removed. You can see the logs from incrond by using:
# journalctl _COMM=incrond
e.g.
Oct 01 12:21:56 kueppers incrond[6588]: starting service (version 0.5.12, built on Jan 27 2023 23:08:49)
Oct 01 13:43:55 kueppers incrond[6589]: table for user root created, loading
Oct 01 13:45:42 kueppers incrond[6589]: PATH (/etc) FILE (shadow) EVENT (IN_MOVED_TO)
Oct 01 13:45:42 kueppers incrond[6589]: (root) CMD ( /usr/local/sbin/shadow-change /etc shadow IN_MOVED_TO)
I've disabled the execution of yppasswd using dpkg-divert
# dpkg-divert --local --rename --divert /usr/bin/yppasswd-disable /usr/bin/yppasswd
# chmod a-rwx /usr/bin/yppasswd-disable
Do not forget to limit the access to the shadow.byname map in ypserv.conf and general access to NIS in ypserv.securenets. I've also discovered the package pamtester, which is a nice tool for testing your PAM configs.
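For example, pamtester can drive an individual PAM service and operation directly, without logging in for real (service names and the username are illustrative):
pamtester login alice authenticate   # test the authentication stack
pamtester passwd alice chauthtok     # test the password-change stack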
