
10 January 2025

Sergio Talens-Oliag: Testing New User Tools

In recent weeks I've had some time to scratch my own itch on matters related to the tools I use daily on my computer, namely the desktop / window manager and my text editor of choice. This post is a summary of what I tried, how it worked out, and my short- and medium-term plans related to them.

Desktop / WM

On the desktop / window manager front I've been using Cinnamon on Debian and Ubuntu systems since Gnome 3 was published (I never liked version 3, so I decided to move to something similar to Gnome 2, including the keyboard shortcuts). In fact I've never been a fan of desktop environments: before Gnome I used OpenBox and IceWM because they were a lot faster than desktop systems on my hardware at the time, and I was using them only to place one or two windows on multiple workspaces, using mainly the keyboard for my interactions (well, except for the web browsers and the image manipulation programs). Although I was comfortable using Cinnamon, some years ago I tried to move to i3, a tiling window manager for X11 that looked like a good choice for me, but I didn't have much time to play with it and never used it enough to become productive with it (I didn't prepare a complete configuration nor had enough time to learn the new shortcuts, so I went back to Cinnamon and never tried again). Anyway, some weeks ago I updated my work machine's OS (it was using Ubuntu 22.04 LTS and I updated it to the 24.04 LTS version) and the Cinnamon systray applet stopped working as it used to (in fact I still have to restart Cinnamon after starting a session to make it work) and, as I had some time, I decided to try a tiling window manager again, but now I decided to go for SwayWM, as it uses Wayland instead of X11.

Sway configuration

On my ~/.config/sway/config I tuned some things:
  • Set fuzzel as the application launcher.
  • Manually installed the shikane application and created a configuration to be executed whenever sway is started / reloaded (I adjusted my configuration with wdisplays and used shikanectl to save it).
  • Added support for running the xdg-desktop-portal-wlr service.
  • Enabled the swayidle command to lock the screen after some time of inactivity.
  • Adjusted the keyboard to use the es key map.
  • Added some keybindings to make my life easier, including the use of grim and swappy to take screenshots (a command sketch follows the startup script below).
  • Configured waybar as the environment bar.
  • Added a shell script to start applications when sway is started (it uses swaymsg to execute background commands and the i3toolwait script to wait for each application's window to appear before moving on):
    #!/bin/sh
    # VARIABLES
    CHROMIUM_LOCAL_STATE="$HOME/.config/google-chrome/Local State"
    I3_TOOLWAIT="$HOME/.config/sway/scripts/i3-toolwait"
    # Functions
    chromium_profile_dir() {
      jq -r ".profile.info_cache | to_entries | map({(.value.name): .key}) | add | .\"$1\" // \"\"" "$CHROMIUM_LOCAL_STATE"
    }
    # MAIN
    IGZ_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@intelygenz.com")"
    OURO_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@nxr.global")"
    PERSONAL_PROFILE_DIR="$(chromium_profile_dir "stalens@gmail.com")"
    # Common programs
    swaymsg "exec nextcloud --background"
    swaymsg "exec nm-applet"
    # Run spotify on the first workspace (it is mapped to the laptop screen)
    swaymsg -q "workspace 1"
    $I3_TOOLWAIT "spotify"
    # Run tmux on the second workspace
    swaymsg -q "workspace 2"
    $I3_TOOLWAIT -- foot tmux a -dt sto
    wp_num="3"
    if [ "$OURO_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      $I3_TOOLWAIT -m ouro-browser -- google-chrome --profile-directory="$OURO_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    if [ "$IGZ_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      $I3_TOOLWAIT -m igz-browser -- google-chrome --profile-directory="$IGZ_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    if [ "$PERSONAL_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      $I3_TOOLWAIT -m personal-browser -- google-chrome --profile-directory="$PERSONAL_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    # Open the browser without setting the profile directory if none was found
    if [ "$wp_num" = "3" ]; then
      swaymsg -q "workspace $wp_num"
      $I3_TOOLWAIT google-chrome
      wp_num="$((wp_num+1))"
    fi
    swaymsg -q "workspace $wp_num"
    $I3_TOOLWAIT evolution
    wp_num="$((wp_num+1))"
    swaymsg -q "workspace $wp_num"
    $I3_TOOLWAIT slack
    wp_num="$((wp_num+1))"
    # Open a private browser and a console in the last workspace
    swaymsg -q "workspace $wp_num"
    $I3_TOOLWAIT -- google-chrome --incognito
    $I3_TOOLWAIT foot
    # Go back to the second workspace for keepassxc
    swaymsg "workspace 2"
    $I3_TOOLWAIT keepassxc
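For reference, the screenshot keybinding and the idle lock mentioned in the list above boil down to commands like the following (a hedged sketch: slurp as the region selector, the lock command and the timeouts are assumptions, not copied from my actual configuration):

# Region screenshot: select an area with slurp, capture it with grim,
# then open it in swappy for annotation
grim -g "$(slurp)" - | swappy -f -
# Idle handling: lock after 5 minutes of inactivity, power the outputs
# off after 10, and power them back on when activity resumes
swayidle -w \
  timeout 300 'swaylock -f' \
  timeout 600 'swaymsg "output * power off"' \
  resume 'swaymsg "output * power on"'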

Conclusion

After using Sway for some days I can confirm that it is a good choice for me, but some of the components needed to make it work as I want are too new and not available in the Ubuntu 24.04 LTS repositories, so I decided to go back to Cinnamon and try Sway again in the future, although I added more workspaces to my setup (now they are only available on the main monitor; the laptop screen is fixed while there is a big monitor connected), added some additional keyboard shortcuts and installed or updated some applets.

Text editor

When I started using Linux many years ago I used vi/vim and emacs as my text editors (vi for plain text and emacs for programming and editing HTML/XML), but eventually I moved to vim as my main text editor and I've been using it since (well, I moved to neovim some time ago, although I kept my old vim configuration). To be fair I'm not as expert as I could be with vim, but I'm productive with it and it has many plugins that make my life easier on my machines, while keeping my ability to edit text and configurations on any system that has a vi-compatible editor installed. For work reasons I tried to use Visual Studio Code last year, but I've never really liked it and almost everything I do with it I can do with neovim (i.e. I even use copilot with it). Besides, I'm a heavy terminal user (I use tmux locally and via ssh) and I like to be able to use my text editor in my shell sessions, and code does not work like that. The only annoying thing about vim/neovim is its configuration (well, the problem is that I have a very old one and probably should spend some time fixing and updating it), but, as I said, it's been working well for me for a long time, so I never really had the motivation to do it. Anyway, after finishing my desktop tests I saw that I had had the Helix editor installed for some time but had never tried it, so I decided to give it a try and see if it could be a good replacement for neovim in my environments (the only drawback is that, as it is not vi compatible, I would need to switch back to vi mode when working on remote systems, but I guess I could live with that). I ran the helix tutorial and I liked it, so I decided to configure it and install the Language Servers I can probably take advantage of in my daily work on my personal and work machines and see how it works.

Language server installations

A lot of manual installation is needed to get the language servers working; what I did on my machines is more or less the following:
# AWK
sudo npm i -g 'awk-language-server@>=0.5.2'
# BASH
sudo apt-get install shellcheck shfmt
sudo npm i -g bash-language-server
# C/C++
sudo apt-get install clangd
# CSS, HTML, ESLint, JSON, SCSS
sudo npm i -g vscode-langservers-extracted
# Docker
sudo npm install -g dockerfile-language-server-nodejs
# Docker compose
sudo npm install -g @microsoft/compose-language-service
# Helm
app="helm_ls_linux_amd64"
url="$(
  curl -s https://api.github.com/repos/mrjosh/helm-ls/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/helm_ls
sudo install /tmp/helm_ls /usr/local/bin
rm /tmp/helm_ls
# Markdown
app="marksman-linux-x64"
url="$(
  curl -s https://api.github.com/repos/artempyanykh/marksman/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/marksman
sudo install /tmp/marksman /usr/local/bin
rm /tmp/marksman
# Python
sudo npm i -g pyright
# Rust
rustup component add rust-analyzer
# SQL
sudo npm i -g sql-language-server
# Terraform
sudo apt-get install terraform-ls
# TOML
cargo install taplo-cli --locked --features lsp
# YAML
sudo npm install --global yaml-language-server
# JavaScript, TypeScript
sudo npm install -g typescript-language-server typescript
sudo npm install -g --save-dev --save-exact @biomejs/biome
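The helm-ls and marksman installs above repeat the same "fetch the latest GitHub release asset" steps, so a small helper function (my own sketch, not part of the original instructions) can factor the pattern out:

# Download the latest release asset <asset> from GitHub repo <repo> and
# install it into /usr/local/bin under the name <name>
github_latest_asset() {
  repo="$1" asset="$2" name="$3"
  url="$(
    curl -s "https://api.github.com/repos/$repo/releases/latest" |
      jq -r ".assets[] | select(.name == \"$asset\") | .browser_download_url"
  )"
  curl -L "$url" --output "/tmp/$name"
  sudo install "/tmp/$name" /usr/local/bin
  rm -f "/tmp/$name"
}
# Equivalent to the two manual installs above:
github_latest_asset mrjosh/helm-ls helm_ls_linux_amd64 helm_ls
github_latest_asset artempyanykh/marksman marksman-linux-x64 marksman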

Helix configuration

The helix configuration is done in a couple of toml files placed in the ~/.config/helix directory; the config.toml file I used is this one:
theme = "solarized_light"
[editor]
line-number = "relative"
mouse = false
[editor.statusline]
left = ["mode", "spinner"]
center = ["file-name"]
right = ["diagnostics", "selections", "position", "file-encoding", "file-line-ending", "file-type"]
separator = " "
mode.normal = "NORMAL"
mode.insert = "INSERT"
mode.select = "SELECT"
[editor.cursor-shape]
insert = "bar"
normal = "block"
select = "underline"
[editor.file-picker]
hidden = false
[editor.whitespace]
render = "all"
[editor.indent-guides]
render = true
character = "╎" # Some characters that work well: "▏", "┆", "┊", "⸽"
skip-levels = 1
And to configure the language servers I used the following language-servers.toml file:
[[language]]
name = "go"
auto-format = true
formatter = { command = "goimports" }
[[language]]
name = "javascript"
language-servers = [
  "typescript-language-server", # optional
  "vscode-eslint-language-server",
]
[language-server.rust-analyzer.config.check]
command = "clippy"
[language-server.sql-language-server]
command = "sql-language-server"
args = ["up", "--method", "stdio"]
[[language]]
name = "sql"
language-servers = [ "sql-language-server" ]
[[language]]
name = "hcl"
language-servers = [ "terraform-ls" ]
language-id = "terraform"
[[language]]
name = "tfvars"
language-servers = [ "terraform-ls" ]
language-id = "terraform-vars"
[language-server.terraform-ls]
command = "terraform-ls"
args = ["serve"]
[[language]]
name = "toml"
formatter = { command = "taplo", args = ["fmt", "-"] }
[[language]]
name = "typescript"
language-servers = [
  "typescript-language-server",
  "vscode-eslint-language-server",
]
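Once the servers are installed, Helix itself can report which ones it finds: hx --health lists the detected language servers per language, and accepts a language name for more detail:

hx --health
hx --health typescript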

Neovim configuration

After a little while I noticed that I was going to need some time to get used to helix, and the most interesting things for me were the easy configuration and the language server integrations; but as I am already comfortable with neovim and had just installed the language server support tools on my machines, I just needed to configure them for neovim and could keep using it for a while. As I said, my configuration is old; to configure neovim I have the following init.vim file in my ~/.config/nvim folder:
set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath=&runtimepath
source ~/.vim/vimrc
" load lua configuration
lua require('config')
With that configuration I keep my old vimrc (it is a little bit messy, but it works) and I use a lua configuration file for the language servers and some additional neovim plugins in the ~/.config/nvim/lua/config.lua file:
-- -----------------------
-- BEG: LSP Configurations
-- -----------------------
-- AWK (awk_ls)
require'lspconfig'.awk_ls.setup{}
-- Bash (bashls)
require'lspconfig'.bashls.setup{}
-- C/C++ (clangd)
require'lspconfig'.clangd.setup{}
-- CSS (cssls)
require'lspconfig'.cssls.setup{}
-- Docker (dockerls)
require'lspconfig'.dockerls.setup{}
-- Docker Compose
require'lspconfig'.docker_compose_language_service.setup{}
-- Golang (gopls)
require'lspconfig'.gopls.setup{}
-- Helm (helm_ls)
require'lspconfig'.helm_ls.setup{}
-- Markdown (marksman)
require'lspconfig'.marksman.setup{}
-- Python (pyright)
require'lspconfig'.pyright.setup{}
-- Rust (rust_analyzer)
require'lspconfig'.rust_analyzer.setup{}
-- SQL (sqlls)
require'lspconfig'.sqlls.setup{}
-- Terraform (terraformls)
require'lspconfig'.terraformls.setup{}
-- TOML (taplo)
require'lspconfig'.taplo.setup{}
-- Typescript (ts_ls)
require'lspconfig'.ts_ls.setup{}
-- YAML (yamlls)
require'lspconfig'.yamlls.setup{
  settings = {
    yaml = {
      customTags = { "!reference sequence" }
    }
  }
}
-- -----------------------
-- END: LSP Configurations
-- -----------------------
-- ---------------------------------
-- BEG: Autocompletion configuration
-- ---------------------------------
-- Ref: https://github.com/neovim/nvim-lspconfig/wiki/Autocompletion
--
-- Pre requisites:
--
--   # Packer
--   git clone --depth 1 https://github.com/wbthomason/packer.nvim \
--      ~/.local/share/nvim/site/pack/packer/start/packer.nvim
--
--   # Start nvim and run :PackerSync or :PackerUpdate
-- ---------------------------------
local use = require('packer').use
require('packer').startup(function()
  use 'wbthomason/packer.nvim' -- Packer, useful to avoid removing it with PackerSync / PackerUpdate
  use 'neovim/nvim-lspconfig' -- Collection of configurations for built-in LSP client
  use 'hrsh7th/nvim-cmp' -- Autocompletion plugin
  use 'hrsh7th/cmp-nvim-lsp' -- LSP source for nvim-cmp
  use 'saadparwaiz1/cmp_luasnip' -- Snippets source for nvim-cmp
  use 'L3MON4D3/LuaSnip' -- Snippets plugin
end)
-- Add additional capabilities supported by nvim-cmp
local capabilities = require("cmp_nvim_lsp").default_capabilities()
local lspconfig = require('lspconfig')
-- Enable some language servers with the additional completion capabilities offered by nvim-cmp
local servers = { 'clangd', 'rust_analyzer', 'pyright', 'ts_ls' }
for _, lsp in ipairs(servers) do
  lspconfig[lsp].setup {
    -- on_attach = my_custom_on_attach,
    capabilities = capabilities,
  }
end
-- luasnip setup
local luasnip = require 'luasnip'
-- nvim-cmp setup
local cmp = require 'cmp'
cmp.setup {
  snippet = {
    expand = function(args)
      luasnip.lsp_expand(args.body)
    end,
  },
  mapping = cmp.mapping.preset.insert({
    ['<C-u>'] = cmp.mapping.scroll_docs(-4), -- Up
    ['<C-d>'] = cmp.mapping.scroll_docs(4), -- Down
    -- C-b (back) C-f (forward) for snippet placeholder navigation.
    ['<C-Space>'] = cmp.mapping.complete(),
    ['<CR>'] = cmp.mapping.confirm {
      behavior = cmp.ConfirmBehavior.Replace,
      select = true,
    },
    ['<Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_next_item()
      elseif luasnip.expand_or_jumpable() then
        luasnip.expand_or_jump()
      else
        fallback()
      end
    end, { 'i', 's' }),
    ['<S-Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_prev_item()
      elseif luasnip.jumpable(-1) then
        luasnip.jump(-1)
      else
        fallback()
      end
    end, { 'i', 's' }),
  }),
  sources = {
    { name = 'nvim_lsp' },
    { name = 'luasnip' },
  },
}
-- ---------------------------------
-- END: Autocompletion configuration
-- ---------------------------------
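Since all of the above only works when the language-server binaries are actually installed, a quick shell check saves some head-scratching (the binary names are my best guess from the packages installed earlier; some npm packages install commands whose names differ from the package name):

# Report any language-server binary that is missing from $PATH
for ls in awk-language-server bash-language-server clangd docker-langserver \
          gopls helm_ls marksman pyright-langserver rust-analyzer \
          sql-language-server terraform-ls taplo typescript-language-server \
          yaml-language-server; do
  command -v "$ls" >/dev/null || echo "missing: $ls"
done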

Conclusion

I guess I'll keep helix installed and try it again on some of my personal projects to see if I can get used to it, but for now I'll stay with neovim as my main text editor and learn the shortcuts to use it with the language servers.

Valhalla's Things: Winter

Posted on January 10, 2025
Tags: madeof:atoms, craft:painting, medium:acrylic
An acrylic painting in blue and greenish grey vaguely resembling rocky slopes at the bottom right and a cloudy sky at the top left. A few days ago1 I wanted to paint, but I didn't know what to paint, so I did a few more colour tests to find out which green combinations I can get out of the available yellows and blues (and greens) acrylics I have (from a cheap student grade line). A sheet of colour tests with mixes of various yellows and blues. I liked the cool grey tones in the second to last line, from 200 Naples yellow (PY3 PY83 PW6) and 410 Ultramarine blue (PB29), so I went up a step on the scale of things to do when having a desire to paint, but the utter inability to actually paint something, and did a bit of a study with those two colours plus titanium white. The study above over a sheet of scrap paper: at one of the edges the excess paint has connected the two together. And btw, painting with acrylics on watercolour paper taped to a sheet of scrap paper works just fine for stuff like this that is not supposed to last, but the work will get glued to the paper below when (not if) the paint overflows :D

  1. i.e. almost three months, and then I wrote 95% of this post and forgot to finish and publish it.

8 January 2025

John Goerzen: Censorship Is Complicated: What Internet History Says about Meta/Facebook

In light of this week's announcement by Meta (Facebook, Instagram, Threads, etc), I have been pondering this question: Why am I, a person that has long been a staunch advocate of free speech and encryption, leery of sites that talk about being free speech-oriented? And, more to the point, why am I, a person that has been censored by Facebook for mentioning the open source social network Mastodon, not cheering a "lighter touch"? The answers are complicated, and take me back to the early days of social networking. Yes, I mean the 1980s and 1990s. Before digital communications, there were barriers to reaching a lot of people. Especially money. This led to a sort of self-censorship: it may be legal to write certain things, but would a newspaper publish a letter to the editor containing expletives? Probably not. As digital communications started to happen, suddenly people could have their own communities. Not just free from the same kinds of monetary pressures, but free from outside oversight (parents, teachers, peers, community, etc.) When you have a community that the majority of people lack the equipment to access, and wouldn't understand how to access even if they had the equipment, you have a place where self-expression can be unleashed. And, as J. C. Herz covers in what is now an unintentional history (her book Surfing on the Internet was published in 1995), self-expression WAS unleashed. She enjoyed the wit and expression of everything from odd corners of Usenet to the text-based open world of MOOs and MUDs. She even talks about groups dedicated to insults (flaming) in positive terms. But as I've seen time and again, if there are absolutely no rules, then whenever a group gets big enough (more than a few dozen people, say) there are troublemakers that ruin it for everyone. Maybe it's trolling, maybe it's vicious attacks, you name it: it will arrive and it will be poisonous. I remember the debates within the Debian community about this. Debian is one of the pillars of the Internet today, a nonprofit project with free speech in its DNA. And yet there were inevitably the poisonous people. Debian took too long to learn that allowing those people to run rampant was causing more harm than good, because having a well-worn Delete key and a tolerance for insults became a requirement for being a Debian developer, and that drove away people that had no desire to deal with such things. (I should note that Debian strikes a much better balance today.) But in reality, there were never absolutely no rules. If you joined a BBS, you used it at the whim of the owner (the sysop or system operator). The sysop may be a 16-yr-old running it from their bedroom, or a retired programmer, but in any case they were letting you use their resources for free and they could kick you off for any or no reason at all. So if you caused trouble, or perhaps insulted their cat, you're banned. But, in all but the smallest towns, there were other options you could try. On the other hand, sysops enjoyed having people call their BBSs and didn't want to drive everyone off, so there was a natural balance at play. As networks like Fidonet developed, a sort of uneasy approach kicked in: don't be excessively annoying, and don't be easily annoyed. Like it or not, it seemed to generally work. A BBS that repeatedly failed to deal with troublemakers could risk removal from Fidonet. On the more institutional Usenet, you generally got access through your university (or, in a few cases, employer). Most universities didn't really even know they were running a Usenet server, and you were generally left alone. Until you did something that annoyed somebody enough that they tracked down the phone number for your dean, in which case real-world consequences would kick in. A site could face the Usenet Death Penalty (delinking from the network) if they repeatedly failed to prevent malicious content from flowing through their site. Some BBSs let people from minority communities such as LGBTQ+ thrive in a place of peace from tormentors. A lot of them let people be themselves in a way they couldn't be in "real life". And yes, some harbored trolls and flamers. The point I am trying to make here is that each BBS, or Usenet site, set their own policies about what their own users could do. These had to be harmonized to a certain extent with the global community, but in a certain sense, with BBSs especially, you could just use a different one if you didn't like what the vibe was at a certain place. That this free speech ethos survived was never inevitable. There were many attempts to regulate the Internet, and it was thanks to the advocacy of groups like the EFF that we have things like strong encryption and a degree of freedom online. With the rise of the very large platforms (and here I mean CompuServe and AOL at first, and then Facebook, Twitter, and the like later) the low-friction option of just choosing a different place started to decline. You could participate on a Fidonet forum from any of thousands of BBSs, but you could only participate in an AOL forum from AOL. The same goes for Facebook, Twitter, and so forth. Not only that, but as social media became conceived of as very large sites, it became impossible for a person with enough skill, funds, and time to just start a site themselves. Instead of needing a few thousand dollars of equipment, you'd need tens or hundreds of millions of dollars of equipment and employees. All that means you can't really run Facebook as a nonprofit. It is a business. It should be absolutely clear to everyone that Facebook's mission is not the one they say it is: "[to] give people the power to build community and bring the world closer together." If that was their goal, they wouldn't be creating AI users and AI spam and all the rest. Zuck isn't showing courage; he's sucking up to Trump, and those that will pay the price are those that always do: women and minorities. Really, the point of any large social network isn't to build community. It's to make the owners their next billion. They do that by convincing people to look at ads on their site. Zuck is as much a windsock as anyone else; he will adjust policies in whichever direction he thinks the wind is blowing so as to let him keep putting ads in front of eyeballs, and stomp all over principles (even free speech) doing it. Don't expect anything different from any large commercial social network either. Bluesky is going to follow the same trajectory as all the others. The problem with a one-size-fits-all content policy is that the world isn't that kind of place. For instance, I am a pacifist. There is a place for a group where pacifists can hang out with each other, free from the noise of the debate about pacifism. And there is a place for the debate. Forcing everyone that signs up for the conversation to sign up for the debate is harmful. Preventing the debate is often also harmful. One company can't square this circle. Beyond that, the fact that we care so much about one company is a problem on two levels. First, it indicates how susceptible people are to misinformation and such. I don't have much to offer on that point. Secondly, it indicates that we are too centralized. We have a solution there: Mastodon. Mastodon is a modern, open source, decentralized social network. You can join any instance, easily migrate your account from one server to another, and so forth. You pick an instance that suits you. There are thousands of others you can choose from. Some aggressively defederate with instances known to harbor poisonous people; some don't. And, to harken back to the BBS era, if you have some time, some skill, and a few bucks, you can run your own Mastodon instance. Personally, I still visit Facebook on occasion because some people I care about are mainly there. But it is such a terrible experience that I rarely do. Meta is becoming irrelevant to me. They are on a path to becoming irrelevant to many more as well. Maybe this is the moment to go "shrug, this sucks" and try something better. (And when you do, feel free to say hi to me at @jgoerzen@floss.social on Mastodon.)

1 January 2025

Louis-Philippe Véronneau: 2024 - A Musical Retrospective

Another musical retrospective. If you enjoy this, I also did a 2022 and a 2023 one.

Albums

In 2024, I added 88 new albums to my collection; that's a lot! This year again, I bought the vast majority of my music on Bandcamp. To be honest, I'm quite distraught by what's become of that website. Although it stays a wonderful place to buy underground music, Songtradr, the new owner of the platform, has been shown to be viciously anti-union. Money continues to ruin the world, I guess.

Concerts

I continued to go to a lot of concerts in 2024 (25!). Over the past 3 years, I have been going to more and more concerts, and I think I've reached my "peak". An average of a concert every two weeks is quite a lot :) If you also like music and concerts, but find yourself not going to as many as you would like, the real secret is not to be afraid to go to concerts alone. Going with friends is always fun, but if I restricted myself to only going to concerts in a group, I'd barely see a few each year. Another good piece of advice is to bring a book or something else1 to pass the time between sets. It can often take 30-45 minutes between sets for the artists to get their instruments ready, which can get quite boring if you just stand there and wait. Anyway, here are the concerts I went to in 2024: Shout out to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there. See you all in 2025!

  1. I bought a Miyoo Mini Plus, a handheld Linux console running OnionOS, for that express reason. So far it's been great and I've been very happy to revisit some childhood classics.

23 December 2024

Simon Josefsson: OpenSSH and Git on a Post-Quantum SPHINCS+

Are you aware that Git commits and tags may be signed using OpenSSH? Git signatures may be used to improve integrity and authentication of our software supply-chain. Popular signature algorithms include Ed25519, ECDSA and RSA. Did you consider that these algorithms may not be safe if someone builds a post-quantum computer? As you may recall, I have earlier blogged about the efficient post-quantum key agreement mechanism called Streamlined NTRU Prime and its use in SSH, and I have attempted to promote the conservatively designed Classic McEliece in a similar way, although it remains to be adopted. What post-quantum signature algorithms are available? There is an effort by NIST to standardize post-quantum algorithms, and they have a category for signature algorithms. According to Wikipedia, after round three the selected algorithms are CRYSTALS-Dilithium, FALCON and SPHINCS+. Of these, SPHINCS+ appears to be a conservative choice suitable for long-term digital signatures. Can we get this to work? Recall that Git uses the ssh-keygen tool from OpenSSH to perform signing and verification. To refresh your memory, let's study the commands that Git uses under the hood for Ed25519. First generate an Ed25519 private key:
jas@kaka:~$ ssh-keygen -t ed25519 -f my_ed25519_key -P ""
Generating public/private ed25519 key pair.
Your identification has been saved in my_ed25519_key
Your public key has been saved in my_ed25519_key.pub
The key fingerprint is:
SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ jas@kaka
The key's randomart image is:
+--[ED25519 256]--+
     .+=.E ..      
      oo=.ooo      
     . =o=+o .     
      =oO+o .      
      .=+S.=       
       oo.o o      
      . o  .       
     ...o.+..      
    .o.o.=**.      
+----[SHA256]-----+
jas@kaka:~$ cat my_ed25519_key
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQAAAJCeDotOng6L
TgAAAAtzc2gtZWQyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQ
AAAEBFRvzgcD3YItl9AMmVK4xDKj8NTg4h2Sluj0/x7aSPlhY/9pnyHM3RY1ExKmPNuBbW
0lc13a/r92dsppC3uIgFAAAACGphc0BrYWthAQIDBAU=
-----END OPENSSH PRIVATE KEY-----
jas@kaka:~$ cat my_ed25519_key.pub 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF jas@kaka
jas@kaka:~$ 
Then let's sign something with this key:
jas@kaka:~$ echo "Hello world!" > msg
jas@kaka:~$ ssh-keygen -Y sign -f my_ed25519_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
jas@kaka:~$ cat msg.sig 
-----BEGIN SSH SIGNATURE-----
U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAgFj/2mfIczdFjUTEqY824FtbSVz
Xdr+v3Z2ymkLe4iAUAAAAMbXktbmFtZXNwYWNlAAAAAAAAAAZzaGE1MTIAAABTAAAAC3Nz
aC1lZDI1NTE5AAAAQLmWsq05tqOOZIJqjxy5ZP/YRFoaX30lfIllmfyoeM5lpVnxJ3ZxU8
SF0KodDr8Rtukg2N3Xo80NGvZOzbG/9Aw=
-----END SSH SIGNATURE-----
jas@kaka:~$
Now let's create a list of trusted public keys and associated identities:
jas@kaka:~$ echo 'my.name@example.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF' > allowed-signers
jas@kaka:~$ 
Then let's verify the message we just signed:
jas@kaka:~$ cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with ED25519 key SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ
jas@kaka:~$ 
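For context, these are the stock Git settings that make Git sign commits with an SSH key like the one above (standard git-config options, shown here for completeness rather than taken from this walkthrough):

git config --global gpg.format ssh
git config --global user.signingKey "$HOME/.ssh/my_ed25519_key.pub"
git config --global commit.gpgsign true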
I have implemented support for SPHINCS+ in OpenSSH. This is early work, but I wanted to announce it to get discussion of some of the details going and to make people aware of it. What better way to demonstrate SPHINCS+ support in OpenSSH than by validating the Git commit that implements it using itself? Here is how to proceed: first get a suitable development environment up and running. I'm using a Debian container launched in a protected environment using podman.
jas@kaka:~$ podman run -it --rm debian:stable
Then install the necessary build dependencies for OpenSSH.
# apt-get update 
# apt-get install git build-essential autoconf libz-dev libssl-dev
Now clone my OpenSSH branch with the SPHINCS+ implementation and build it. You may browse the commit on GitHub first if you are curious.
# cd
# git clone https://github.com/jas4711/openssh-portable.git -b sphincsp
# cd openssh-portable
# autoreconf -fvi
# ./configure
# make
Configure a Git allowed signers list with my SPHINCS+ public key (make sure to keep the public key on one line with the whitespace being one ASCII SPC character):
# mkdir -pv ~/.ssh
# echo 'simon@josefsson.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAECI6eacTxjB36xcPtP0ZyxJNIGCN350GluLD5h0KjKDsZLNmNaPSFH2ynWyKZKOF5eRPIMMKSCIV75y+KP9d6w3' > ~/.ssh/allowed_signers
# git config gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
Then verify the commit using the newly built ssh-keygen binary:
# PATH=$PWD:$PATH
# git log -1 --show-signature
commit ce0b590071e2dc845373734655192241a4ace94b (HEAD -> sphincsp, origin/sphincsp)
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
Author: Simon Josefsson <simon@josefsson.org>
Date:   Tue Dec 3 18:44:25 2024 +0100
    Add SPHINCS+.
# git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
# 
Yay! So what are some considerations? SPHINCS+ comes in many different variants. First, it comes with three security levels approximately matching 128/192/256-bit symmetric key strengths. The second choice is between the SHA2-256, SHAKE256 (SHA-3) and Haraka hash algorithms. The final choice is between a robust and a simple variant with different security and performance characteristics. To get going, I picked the sphincss256sha256robust SPHINCS+ implementation from SUPERCOP 20241022. There is a good size comparison table in the sphincsplus implementation, if you want to consider alternative variants. SPHINCS+ public keys are really small, as you can see in the allowed signers file. This is really good because they are handled by humans and often by cut'n'paste. What about private keys? They are slightly longer than Ed25519 private keys but shorter than typical RSA private keys.
# ssh-keygen -t sphincsplus -f my_sphincsplus_key -P ""
Generating public/private sphincsplus key pair.
Your identification has been saved in my_sphincsplus_key
Your public key has been saved in my_sphincsplus_key.pub
The key fingerprint is:
SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg root@ad600ff56253
The key's randomart image is:
+[SPHINCSPLUS 256-+
  .  .o            
 o . oo.           
  = .o.. o         
 o o  o o . .   o  
 .+    = S o   o . 
 Eo=  . + . . .. . 
 =*.+  o . . oo .  
 B+=    o o.o. .   
 o*o   ... .oo.    
+----[SHA256]-----+
# cat my_sphincsplus_key.pub 
ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7 root@ad600ff56253
# cat my_sphincsplus_key 
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAYwAAABtzc2gtc3
BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9slu
L/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAQidiIwanYiMGgAAAB
tzc2gtc3BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1
Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAIAbwBxEhA
NYzITN6VeCMqUyvw/59JM+WOLXBlRbu3R8qS7ljc4qFVWUtmhy8B3t9e4jrhdO6w0n5I4l
mnLnBi2hJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpS
vYgZvUkB2WVWGXXZBCfRdQ+wAAABFyb290QGFkNjAwZmY1NjI1MwECAwQ=
-----END OPENSSH PRIVATE KEY-----
# 
Signature size? Now here is the challenge: for this variant the size is around 29kb, or close to 600 lines of base64 data:
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | head -10
tree ede42093e7d5acd37fde02065a4a19ac1f418703
parent 826483d51a9fee60703298bbf839d9ce37943474
author Simon Josefsson <simon@josefsson.org> 1733247865 +0100
committer Simon Josefsson <simon@josefsson.org> 1734907869 +0100
gpgsig -----BEGIN SSH SIGNATURE-----
 U1NIU0lHAAAAAQAAAGMAAAAbc3NoLXNwaGluY3NwbHVzQG9wZW5zc2guY29tAAAAQIjp5p
 xPGMHfrFw+0/RnLEk0gYI3fnQaW4sPmHQqMoOxks2Y1o9IUfbKdbIpko4Xl5E8gwwpIIhX
 vnL4o/13rDcAAAADZ2l0AAAAAAAAAAZzaGE1MTIAAHSDAAAAG3NzaC1zcGhpbmNzcGx1c0
 BvcGVuc3NoLmNvbQAAdGDHlobgfgkKKQBo3UHmnEnNXczCMNdzJmeYJau67QM6xZcAU+d+
 2mvhbksm5D34m75DWEngzBb3usJTqWJeeDdplHHRe3BKVCQ05LHqRYzcSdN6eoeZqoOBvR
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | tail -5
 ChvXUk4jfiNp85RDZ1kljVecfdB2/6CHFRtxrKHJRDiIavYjucgHF1bjz0fqaOSGa90UYL
 RZjZ0OhdHOQjNP5QErlIOcZeqcnwi0+RtCJ1D1wH2psuXIQEyr1mCA==
 -----END SSH SIGNATURE-----
Add SPHINCS+.
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | wc -l
579
# 
What about performance? Verification is really fast:
# time git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
real	0m0.010s
user	0m0.005s
sys	0m0.005s
# 
On this machine, verifying an Ed25519 signature is a couple of times slower, and needs around 0.07 seconds. Signing is slower; it takes a bit over 2 seconds on my laptop.
# echo "Hello world!" > msg
# time ssh-keygen -Y sign -f my_sphincsplus_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
real	0m2.226s
user	0m2.226s
sys	0m0.000s
# echo 'my.name@example.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7' > allowed-signers
# cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with SPHINCSPLUS key SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg
# 
Welcome to our new world of Post-Quantum safe digital signatures of Git commits, and Happy Hacking!

21 December 2024

Joey Hess: aiming at December

I have been working all year on a solar upgrade aimed at December. Now here it is, midwinter, and my electric car is charging on a cloudy day from my offgrid solar fence. I lived happily enough with 1 kilowatt of solar that I installed in 2017. Meanwhile, solar panel prices came down massively, incentives increased and everything came together: This was the year. In the spring I started clearing forest trees that were leaning over the house, making both a firebreak and a solar field. In June I picked up a pallet of panels in a box truck.
a porch with a a bunch of solar panels, stacked on edge leaning up against the wall. A black and white cat is sprawled in front of them.
In August I bought the EV and was able to charge it offgrid from my old solar system... a few miles per day on the most sunny days. In September and October I built a solar fence, of my own design.
Me standing in front of the solar fence, which is 10 panels long
For the past several weeks I have been installing additional solar panels on ballasted ground mounts full of gravel. At this point I'm halfway through installing my 30 panel upgrade. The design goal of my 12 kilowatt system is to produce 1 kilowatt of power all day on a cloudy day in midwinter, which allows swapping between major loads (EV charger, hot water heater, etc.) on a cloudy day and running everything on a sunny day. So the size of the battery bank doesn't matter much. Batteries are getting cheaper fast too, but they are a wear item, so it's better to oversize the solar system and minimize the battery. A lot of this is nonstandard and experimental. And that makes sense with the price of solar panels: it costs more to mount solar panels now than the panels are worth. And non-ideal panel orientation isn't a problem when the system is massively overpaneled. I'm hoping to finish up the install before the end of winter. I have more trees to clear, more ballasted ground mounts to install, and need to come up with something even more experimental for a half dozen or so panels. Using solar panels as mounts for solar panels? Hanging them from trees? Soon the wan light will fade, time to head off to the solstice party to enjoy the long night, and a bonfire.
Solar fence with some ballasted ground mounts in front of it, late evening light. Old pole mounted solar panels in the foreground are from the 90's.

17 December 2024

Gunnar Wolf: The science of detecting LLM-generated text

This post is a review for Computing Reviews of "The science of detecting LLM-generated text", an article published in Communications of the ACM.
While artificial intelligence (AI) applications for natural language processing (NLP) are no longer something new or unexpected, nobody can deny the revolution and hype that started, in late 2022, with the announcement of the first public version of ChatGPT. By then, synthetic translation was well established and regularly used, many chatbots had started attending users' requests on different websites, voice recognition personal assistants such as Alexa and Siri had been widely deployed, and complaints of news sites filling their space with AI-generated articles were already commonplace. However, the ease of prompting ChatGPT or other large language models (LLMs) and getting extensive answers (their text generation quality is so high that it is often hard to discern whether a given text was written by an LLM or by a human) has sparked significant concern in many different fields. This article was written to present and compare the current approaches to detecting human- or LLM-authorship in texts. The article presents several different ways LLM-generated text can be detected. The first, and main, taxonomy followed by the authors is whether the detection can be done aided by the LLM's own functions ("white-box detection") or only by evaluating the generated text via a public application programming interface (API) ("black-box detection"). For black-box detection, the authors suggest training a classifier to discern the origin of a given text. Although this works at first, this task is doomed from its onset to be highly vulnerable to new LLMs generating text that will not follow the same patterns, and thus will probably evade recognition. The authors report that human evaluators find human-authored text to be more emotional and less objective, and that humans use grammar to indicate the tone of the sentiment that should be used when reading the text, a trait that has not been picked up by LLMs yet. Human-authored text also tends to have higher sentence-level coherence, with less term repetition in a given paragraph. The frequency distribution for more and less common words is much more homogeneous in LLM-generated texts than in human-written ones. White-box detection includes strategies whereby the LLMs will cooperate in identifying themselves in ways that are not obvious to the casual reader. This can include watermarking, be it rule based or neural based; in this case, both processes become a case of steganography, as the involvement of an LLM is explicitly hidden and spread throughout the full generated text, aiming at having low detectability and high recoverability even when parts of the text are edited. The article closes by listing the authors' concerns about all of the above-mentioned technologies. Detecting an LLM, be it with or without the collaboration of the LLM's designers, is more of an art than a science, and methods deemed robust today will not last forever. We also cannot assume that LLMs will continue to be dominated by the same core players; LLM technology has been deeply studied, and good LLM engines are available as free/open-source software, so users needing to do so can readily modify their behavior. This article presents itself as merely a survey of methods available today, while also acknowledging the rapid progress in the field. It is timely and interesting, and easy to follow for the informed reader coming from a different subfield.

15 December 2024

Russell Coker: Hisense 65U80G 65 Inch 8K ULED Android TV (2021)

The Aim

I just bought a Hisense 65U80G 65 Inch 8K ULED Android TV (2021 model) for $1,568 including delivery. I got that deal by googling "refurbished 8K TVs" and finding the cheapest one I could buy. Amazon and eBay didn't have any good prices on second hand 8K TVs and new ones start at $3,000 on special. I didn't assess how Hisense compares to other TVs; as far as I could determine there was only one model of 8K TV on sale in Australia in the price range I was prepared to pay. So I won't review how this TV compares to other models but how refurbished TVs compare to other display options. I bought this because the highest resolution monitor in my price range is 5120*2160 [1]. While I could get a 5120*2880 monitor for around $1,500, paying 3* the money for 33% more pixels is bad value for money. Getting 4* the pixels for under 3* the price is good value even when it's a TV with the lower display quality that involves. Before buying this TV I read this blog post by Daniel Lawrence about using an 8K TV as a primary monitor [2]. While he has an interesting setup with a 65″ TV on a large desk, it's not what I plan to do at this time.

My Plans for Use

I don't plan to make it a main monitor. While 5120*2160 isn't as much as I'd like on my desk, it's bearable and the quality of the display is high. High resolution isn't needed for all tasks; for example I'm writing this blog post on my laptop while watching a movie on the 8K TV. One thing I'd like to do with the 8K TV when I get it working as a monitor is to share the screen for team programming projects. I don't have any specific plans other than team coding projects at the moment. But it will be interesting to experiment with it when I get it working.

Technical Issues with High Resolution Monitors

Hardware Needed

A lot of the graphics hardware out there doesn't support resolutions higher than 5120*2880. It seems that most laptops don't support resolutions higher than that, and resolutions higher than 4K are difficult. Only quite recent and high end video cards will do 8K. Apparently the RTX 2080 is one of the oldest ones that does, and that's $400 on ebay. Strangely the GPU chipset spec pages don't list the maximum resolution, and there's the additional complication that the other chips might not support the resolutions that the GPU itself can support. As an aside, I don't use NVidia cards for regular workstations due to reliability problems. But they are good for ML work and for special purpose systems.

Interface Versions

To do 8K video it seems that you need HDMI 2.1 (or maybe 2.0 with 4:2:0 chroma subsampling) or DisplayPort 1.3 for 30Hz with 24bit color, and 2.0 for higher refresh rates. But using a particular version of the interface doesn't require supporting all the resolutions that it might support. This TV has HDMI 2.1 inputs, and I've bought an adaptor cable that does DisplayPort 1.4 to HDMI 2.1 at 8K resolution. So I need a video card that does DisplayPort 1.4 or HDMI 2.1 output. That doesn't mean that the card will work, but it could work. It's a pity that no-one has made a USB-C video controller that has a basic frame-buffer supporting 8K and minimal GPU capabilities. The consensus of opinion is that no games will run well at 8K at this time, so anyone using 8K resolution doesn't need GPU power unless it's for ML stuff. I'm thinking of making a system that can be used as an ML server and X/Wayland server, so a GPU with a decent amount of RAM and compute power would be good. I'm not particularly interested in spending $1,500+ to get a GPU that can drive a $1,568 TV. I'm looking into getting an RTX A2000 with 12G of RAM, which should be adequate for ML experiments and can handle 8K@60Hz output. I've ordered a DisplayPort to HDMI converter cable so if I get a DisplayPort card it will work.

Software Support

When I first got started with 4K monitors I had significant problems in adjusting the UI to be usable. The support for scaling software is much better now than it was then, and 8K at 65″ has a lower DPI than 4K at 32″. So I hope this won't be an issue.

Progress So Far

My first Hisense 8K TV stopped working properly. It would change to a mostly white screen after being used for some time. The screen would change in ways that correlated with changes in what should appear, but not in a way that was usable; it was just a different pattern of white blobs when I changed to a menu view, not anything that allowed using it. I presume that this was the problem that drove the need for refurbishment, as when I first got the TV it was still signed in to Google accounts for YouTube and to NetFlix. Best Buy Electrical was good about providing a quick replacement; they took away the old TV and delivered a new one on the same visit, and it's now working well. I've obtained an NVidia card that can allegedly do 8K output and a combination of cables that might be able to carry an 8K signal. Now I just need to get the NVidia drivers to not cause a kernel panic to get things to work.

30 November 2024

Enrico Zini: New laptop setup

My new laptop Framework (Framework Laptop 13 DIY Edition (AMD Ryzen 7040 Series)) arrived, all the hardware works out of the box on Debian Stable, and I'm very happy indeed. This post has the notes of all the provisioning steps, so that I can replicate them again if needed.

Installing Debian 12

Debian 12's installer just worked, with Secure Boot enabled no less, which was nice. The only glitch was an argument with the guided partitioner, which was uncooperative: I have been hit before by a /boot partition that was too small, and I wanted 1G of EFI and 1G of boot, while the partitioner decided that 512Mb were good enough. Frustratingly, there was no way of changing that, nor did I find how to get more than 1G of swap, as I wanted enough swap to fit RAM for hibernation. I let it install the way it pleased, then I booted into grml for a round of gparted. The tricky part of that was resizing the root btrfs filesystem, which is in an LV, which is in a VG, which is in a PV, which is in LUKS. Here's a cheatsheet. Shrink partitions: note that I used an increasing size because I don't trust that each tool has a way of representing sizes that aligns to the byte. I'd be happy to find out that they do, but didn't want to find out the hard way that they didn't. Resize with gparted: move and resize partitions at will. Shrinking first means it all takes a reasonable time, and you won't have to wait almost an hour for a terabyte-sized empty partition to be carefully moved around. Don't ask me why I know. Regrow partitions.

Setup gnome

When I get a new laptop I have a tradition of trying to make it work with Gnome and Wayland, which normally ended up in frustration and a swift move to X11 and Xfce: I have a lot of long-time muscle memory involved in how I use a computer, and it needs to fit like prosthetics. I can learn to do a thing or two in a different way, but any papercut that makes me break flow and that I cannot fix will soon become a dealbreaker. This applies to Gnome as present in Debian Stable.

General Gnome settings tips

I can list all available settings with:
gsettings list-recursively
which is handy for grepping things like hotkeys. I can manually set a value with:
gsettings set <schema> <key> <value>
and I can reset it to its default with:
gsettings reset <schema> <key>
Some applications like Gnome Terminal use "relocatable schemas", and in those cases you also need to specify a path, which can be discovered using dconf-editor:
gsettings set <schema>:<path> <key> <value>
Install appindicators

First things first: apt install gnome-shell-extension-appindicator, then log out and in again: the Gnome Extension manager won't see the extension as available until you restart the whole session. I have no idea why that is so, and I have no idea why a notification area is not present in Gnome by default, but at least now I can get one.

Fix font sizes across monitors

My laptop screen and monitor have significantly different DPIs, so:
gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
And in Settings/Displays, set a reasonable scaling factor for each display.

Disable Alt/Super as hotkey for the Overlay

Seeing all my screen reorganize and reshuffle every time I accidentally press Alt leaves me disoriented and seasick:
gsettings set org.gnome.mutter overlay-key ''
Focus-follows-mouse and Raise-or-lower

My desktop is like my desk: messy and cluttered. I have lots of overlapping windows and I switch between them by moving the focus with the mouse, and when the visible part is not enough I have a handy hotkey mapped to raise-or-lower to bring forward what I need and send back what I don't need anymore. Thankfully Gnome can be configured that way, with some work. This almost worked, but sometimes it didn't do what I wanted: I expected to find a window brought to the front, but another window disappeared instead. I eventually figured out that by default Gnome delays focus changes by a perceivable amount, which is evidently too slow for the way I move around windows. The amount cannot be shortened, but it can be removed with:
gsettings set org.gnome.shell.overrides focus-change-on-pointer-rest false
Mouse and keyboard shortcuts

Gnome has lots of preconfigured sounds, shortcuts, animations and other distractions that I do not need. They also either interfere with key combinations I want to use in terminals, or cause accidental window moves or resizes that make me break flow, or otherwise provide sensory overstimulation that really does not work for me. It was a lot of work, and these are the steps I used to get rid of most of them. Disable Super+N combinations that accidentally launch a questionable choice of programs:
for i in $(seq 1 9); do gsettings set org.gnome.shell.keybindings switch-to-application-$i '[]'; done
Gnome-Shell settings: gnome-tweak-tool settings: Gnome Terminal settings: thankfully 10 years ago I took notes on how to customize Gnome Terminal, and they're still mostly valid. Other hotkeys that got in my way and that I had to disable the hard way:
for n in $(seq 1 12); do gsettings set org.gnome.mutter.wayland.keybindings switch-to-session-$n '[]'; done
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-down '[]'
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-up '[]'
gsettings set org.gnome.desktop.wm.keybindings panel-main-menu '[]'
gsettings set org.gnome.desktop.interface menubar-accel '[]'
Note that even after removing F10 from being bound to menubar-accel, no gsettings binding to a plain F10 remains:
$ gsettings list-recursively | grep F10
org.gnome.Terminal.Legacy.Keybindings switch-to-tab-10 '<Alt>F10'
I still cannot quit Midnight Commander using F10 in a terminal, as that moves the focus to the window title bar. This looks like a Gnome bug, and a very frustrating one for me.

Appearance

Gnome-Shell settings: gnome-tweak-tool settings: Gnome Terminal settings:

Other decluttering and tweaks

Gnome Shell Settings: Set a delay between screen blank and lock: when the screen goes blank, it is important for me to be able to say "nope, don't blank yet!", and maybe switch on caffeine mode during a presentation without needing to type my password in front of cameras. No UI for this, but at least gsettings has it:
gsettings set org.gnome.desktop.screensaver lock-delay 30
Extensions

I enabled the Applications Menu extension, since it's impossible to find less famous applications in the Overview without knowing in advance how they're named in the desktop. This stole a precious hotkey, which I had to disable in gsettings:
gsettings set org.gnome.shell.extensions.apps-menu apps-menu-toggle-menu '[]'
I also enabled: I didn't go and look for Gnome Shell extensions outside what is packaged in Debian, as I'm very wary about running JavaScript code randomly downloaded from the internet with full access over my data and desktop interaction. I also took care of checking that the Gnome Shell Extensions web page complains about the lacking "GNOME Shell integration" browser extension, because the web browser shouldn't be allowed to download random JavaScript from the internet and run it with full local access. Yuck.

Run program dialog

The default run program dialog is almost, but not quite, totally useless to me, as it does not provide completion, not even just for executable names in the path, and so it ends up being faster to open a new terminal window and type in there. It's possible, in Gnome Shell settings, to bind a custom command to a key. The resulting keybinding will now show up in gsettings, though it can be located in a more circuitous way by grepping first, and then looking up the resulting path in dconf-editor:
gsettings list-recursively | grep custom-key
org.gnome.settings-daemon.plugins.media-keys custom-keybindings ['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']
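For completeness, such a custom keybinding can also be created entirely from the command line; the name, command and key below are illustrative, not the ones from my setup:

kb=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['$kb']"
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$kb name 'Run dialog'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$kb command 'xfrun4'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$kb binding '<Super>r'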
I tried out several run dialogs present in Debian, with sad results, possibly due to most of them not being tested on wayland: both gmrun and xfrun4 seem like workable options, with xfrun4 being customizable with convenient shortcut prefixes, so xfrun4 it is.

TODO

I'll try to update these notes as I investigate.

Conclusion so far

I now have something that seems to work for me. A few papercuts to figure out still, but they seem manageable. It all feels a lot harder than it should be: for something intended to be minimal, Gnome defaults feel horribly cluttered and noisy to me, continuously getting in the way of getting things done until tamed into being out of the way unless called for. It felt like a device that boots into flashy demo mode, which needs to be switched off before actual use. Thankfully it can be switched off, and now I have notes to do it again if needed. gsettings oddly feels to me like a better UI than the interactive settings managers: it's more comprehensive, more discoverable, more scriptable, and more stable across releases. Most of the Q&A I found on the internet with guidance given on the UI was obsolete, while when given with gsettings command lines it kept being relevant. I also have the feeling that these notes would be easier to understand and follow if given as gsettings invocations instead of descriptions of UI navigation paths. At some point I'll upgrade to Trixie and reevaluate things, and these notes will be a useful checklist for that. Fingers crossed that this time I'll manage to stay on Wayland. If not, I know that Xfce is still there for me, and I can trust it to be both helpful and good at not getting in the way of my work.

29 November 2024

Russ Allbery: Review: The Duke Who Didn't

Review: The Duke Who Didn't, by Courtney Milan
Series: Wedgeford Trials #1
Publisher: Femtopress
Copyright: September 2020
ASIN: B08G4QC3JC
Format: Kindle
Pages: 334
The Duke Who Didn't is a Victorian romance novel, the first of a loosely-connected trilogy in the romance sense of switching protagonists between books. It's self-published, but by Courtney Milan, so the quality of the editing and publishing is about as high as you will see for a self-published novel.
Chloe Fong has a goal: to make her father's sauce the success that it should be. His previous version of the recipe was stolen by White and Whistler and is now wildly popular as Pure English Sauce. His current version is much better. In a few days, tourists will come from all over England to the annual festival of the Wedgeford Trials, and this will be Chloe's opportunity to give the sauce a proper debut and marketing push. There is only the small matter of making enough sauce and coming up with a good name. Chloe is very busy and absolutely does not have time for nonsense. Particularly nonsense in the form of Jeremy Yu.
Jeremy started coming to the Wedgeford Trials at the age of twelve. He was obviously from money and society, obviously enough that the villagers gave him the nickname Posh Jim after his participation in the central game of the trials. Exactly how wealthy and exactly which society, however, is something that he never quite explained, at first because he was having too much fun and then because he felt he'd waited too long. The village of Wedgeford was thriving under the benevolent neglect of its absent duke and uncollected taxes, and no one who loved it had any desire for that to change. Including Jeremy, the absent duke in question.
Jeremy had been in love with Chloe for years, but the last time he came to the Trials, Chloe told him to stop pursuing her unless he could be serious. That was three years and three Trials ago, and Chloe was certain Jeremy had made his choice by his absence. But Jeremy never forgot her, and despite his utter failure to become a more serious person, he is determined to convince her that he is serious about her. And also determined to finally reveal his identity without breaking everything he loves about the village. Somehow.
I have mentioned in other reviews that I mostly read sapphic instead of heterosexual romance because the gender roles in heterosexual romance are much more likely to irritate me. It occurred to me that I was probably being unfair to the heterosexual romance genre, I hadn't read nearly widely enough to draw any real conclusions, and I needed to find better examples. I've followed Courtney Milan occasionally on social media (for reasons unrelated to her novels) for long enough to know that she was unlikely to go for gender essentialism, and I'd been meaning to try one of her books for a while. Hence this novel.
It is indeed not gender-essentialist. Neither Chloe nor Jeremy fit into obvious gender boxes. Chloe is the motivating force in the novel and many of their interactions were utterly charming. But, despite that, the gender roles still annoyed me in ways that are entirely not the fault of this book. I'm not sure I can even put a finger on something specific. It's a low-grade, pervasive feeling that men do one type of thing and women do a different type of thing, and even if these characters don't stick to that closely, it saturates the vibes. (Admittedly, a Victorian romance was probably not the best choice when I knew this was my biggest problem with genre heterosexual romance. It was just what I had on hand.)
The conceit of the Wedgeford Trials series is that the small village of Wedgeford in England, through historical accident, ended up with an unusually large number of residents with Chinese ancestry. This is what I would call a "believable outlier": there was not such a village so far as I know, but there could well have been. At the least, there were way more people with non-English ancestry, including east Asian ancestry, in Victorian England than modern readers might think. There is quite a lot in this novel about family history, cultural traditions, immigration, and colonialism that I'm wholly unqualified to comment on but that was fascinating to read about and seemed (as one would expect from Milan) adroitly written.
As for the rest of the story, The Duke Who Didn't is absolutely full of banter. If your idea of a good time with a romance novel is teasing, word play, mock irritation, and endless verbal fencing as a way to avoid directly confronting difficult topics, you will be in heaven. Jeremy is one of those people who is way too much in his own head and has turned his problems into a giant ball of anxiety, but who is good at being the class clown, and therefore leans heavily on banter and making people laugh (or blush) as a way of avoiding whatever he's anxious about. I thought the characterization was quite good, but I admit I still got a bit tired of it. 350 pages is a lot of banter, particularly when the characters have some serious communication problems they need to resolve, and to fully enjoy this book you have to have a lot of patience for Jeremy's near-pathological inability to be forthright with Chloe.
Chloe's most charming characteristic is that she makes lists, particularly to-do lists. Her ideal days proceed as an orderly process of crossing things off of lists, and her way to approach any problem is to make a list. This is a great hook, and extremely relatable, but if you're going to talk this much about her lists, I want to see the lists! Chloe is all about details; show me the details! This book does not contain anywhere close to enough of Chloe's lists. I'm not sure there was a single list in this book that the reader both got to see the details of and that made it to more than three items. I think Chloe would agree that it's pointless to talk about the concept of lists; one needs to commit oneself to making an actual list.
This book I would unquestioningly classify as romantic comedy (which given my utter lack of familiarity with romance subgenres probably means that it isn't). Jeremy's standard interaction style with anyone is self-deprecating humor, and Chloe is the sort of character who is extremely serious in ways that strike other people as funny. Towards the end of the book, there is a hilarious self-aware subversion of a major romance novel trope that even I caught, despite my general lack of familiarity with the genre. The eventual resolution of Jeremy's problem of hidden identity caught me by surprise in that way where I should have seen it all along, and was both beautifully handled and quite entertaining.
All the pieces are here for a great time, and I think a lot of people would love this book. Somehow, it still wasn't quite my thing; I thoroughly enjoyed parts of it, but I don't find myself eager to read another. I'm kind of annoyed at myself that it didn't pull me in, since if I'd liked this I know where to find lots more like it. But ah well.
If you like banter-heavy heterosexual romance that is very self-aware about its genre without devolving into metafiction, this is at least worth a try. Followed in the romance series way by The Marquis Who Mustn't, but this is a complete story with a satisfying ending. Rating: 7 out of 10

10 November 2024

Reproducible Builds: Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project. Our reports attempt to outline what we've been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. Beyond bitwise equality for Reproducible Builds?
  2. Two Ways to Trustworthy at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds? Jens Dietrich and Tim White of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled "Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds":
The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: "Does build A confirm the integrity of build B?" or "Can build A reveal a compromised build B?". To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.
A PDF of the paper is freely available.
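As a concrete illustration of the strictest of those levels, bitwise equality is exactly what a plain checksum comparison tests; anything weaker needs more content-aware tooling (the file names below are placeholders):
# Bitwise equality: any difference at all, even in metadata, breaks it
sha256sum first-build/hello_1.0_amd64.deb second-build/hello_1.0_amd64.deb
cmp -s first-build/hello_1.0_amd64.deb second-build/hello_1.0_amd64.deb \
  && echo "bitwise identical" || echo "differs; a weaker equivalence level may still hold"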

Two Ways to Trustworthy at SeaGL 2024 On Friday 8th November, Vagrant Cascadian will present a talk entitled "Two Ways to Trustworthy" at SeaGL in Seattle, WA. Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant's talk:
[...] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust. Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.

Number of cores affected Android compiler output Fay Stegerman wrote that the cause of the Android toolchain bug from September's report that she reported to the Android issue tracker has been found and the bug has been fixed.
the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class's static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.
To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.

On our mailing list On our mailing list this month:

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:
  • Ignore errors when listing .ar archives (#1085257). [ ]
  • Don't try and test with systemd-ukify in the Debian stable distribution. [ ]
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). [ ]
In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [ ][ ][ ] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [ ][ ]
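As a quick usage sketch (the file names here are illustrative), diffoscope can be pointed at any two files or directories and can write out a browsable report:
# Compare two builds of the same package and write an HTML report
diffoscope --html report.html hello_1.0_amd64.deb hello_1.0_amd64.rebuild.deb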

IzzyOnDroid passed 25% reproducible apps The IzzyOnDroid project has reached a good milestone: over 25% of the ~1,200 Android apps provided by their repository (of official APKs built by the original application developers) have now been confirmed to be reproducible by a rebuilder.

Distribution work In Debian this month:
  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works, using sbuild's unshare backend (see the sketch just after this list). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results. [ ]
  • Recently, a news entry was added to snapshot.debian.org's homepage, describing the recent changes that made the system stable again:
    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it's time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VMs at LeaseWeb.
    The entry lists a number of specific updates concerning the API endpoints and rate limiting.
  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month, adding to our knowledge about identified issues.
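As a rough sketch of what a debrebuild run looks like in practice (the .buildinfo file name is a placeholder, and the exact flags may differ between devscripts versions):
# Rebuild a package from its .buildinfo file using the sbuild unshare backend
debrebuild --buildresult=./rebuild --builder=sbuild hello_1.0_amd64.buildinfo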
Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage. Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.

Website updates There were an enormous number of improvements made to our website this month, including:
  • Alba Herrerias:
    • Improve consistency across distribution-specific guides. [ ]
    • Fix a number of links on the Contribute page. [ ]
  • Chris Lamb:
  • hulkoba
  • James Addison:
    • Huge and significant work on an (as-yet-unmerged) quickstart guide to be linked from the homepage [ ][ ][ ][ ][ ]
    • On the homepage, link directly to the Projects subpage. [ ]
    • Relocate dependency-drift notes to the Volatile inputs page. [ ]
  • Ninette Adhikari:
    • Add a brand new Success stories page that "highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds". [ ][ ][ ][ ][ ][ ]
  • Pol Dellaiera:
    • Update the website's README page for building the website under NixOS. [ ][ ][ ][ ][ ]
    • Add a new academic paper citation. [ ]
Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:
  • Add a basic index.html for rebuilderd. [ ]
  • Update the nginx.conf configuration file for rebuilderd. [ ]
  • Document how to use a rescue system for Infomaniak's OpenStack cloud. [ ]
  • Update usage info for two particular nodes. [ ]
  • Fix up a version skew check to fix the name of the riscv64 architecture. [ ]
  • Update the rebuilderd-related TODO. [ ]
In addition, Mattia Rizzolo added a new IP address for the inos5 node [ ] and Vagrant Cascadian brought 4 virt nodes back online [ ].

Supply-chain security at Open Source Summit EU The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

4 November 2024

Ravi Dwivedi: Asante Kenya for a Good Time

In September of this year, I visited Kenya to attend the State of the Map conference. I spent six nights in the capital Nairobi, two nights in Mombasa, and one night on a train. I was very happy with the visa process being smooth and quick. Furthermore, I stayed at the Nairobi Transit Hotel with other attendees, with Ibtehal from Bangladesh as my roommate. One of the memorable moments was the time I spent at a local coffee shop nearby. We used to go there at midnight, despite the gratings on the shops suggesting such adventures were unsafe. Fortunately, nothing bad happened, and we were rewarded with a fun time with the locals.
The coffee shop Ibtehal and I used to visit at midnight
Grating at a chemist shop in Mombasa, Kenya
The country lies on the equator, which might give the impression of extremely hot temperatures. However, Nairobi was on the cooler side (10-25 degrees Celsius), and I found myself needing a hoodie, which I bought the next day. It also served as a nice souvenir, as it had an outline of the African map printed on it. I also bought a Safaricom SIM card for 100 shillings and recharged it with 1000 shillings for 8 GB internet with 5G speeds and 400 minutes talk time.

A visit to Nairobi's Historic Cricket Ground On this trip, I got a unique souvenir that can't be purchased from the market: a cricket jersey worn in an ODI match by a player. The story goes as follows: I was roaming around the market with my friend Benson from Nairobi to buy a Kenyan cricket jersey for myself, but we couldn't find any. So, Benson had the idea of visiting the Nairobi Gymkhana Club, which used to be Kenya's main cricket ground. It has hosted some historic matches, including the 2003 World Cup match in which Kenya beat the mighty Sri Lankans and the record for the fastest ODI century, scored by Shahid Afridi in just 37 balls in 1996. Although entry to the club was exclusively for members, I was warmly welcomed by the staff. Upon reaching the cricket ground, I met some Indian players who played in Kenyan leagues, as well as Lucas Oluoch and Dominic Wesonga, who have represented Kenya in ODIs. When I expressed interest in getting a jersey, Dominic agreed to send me pictures of his jersey. I liked his jersey and collected it from him. I gave him 2000 shillings, an amount suggested by those Indian players.
Me with players at the Nairobi Gymkhana Club
Cricket pitch at the Nairobi Gymkhana Club
A view of the cricket ground inside the Nairobi Gymkhana Club
Scoreboard at the Nairobi Gymkhana cricket ground

Giraffe Center in Nairobi Kenya is known for its safaris and has no shortage of national parks. In fact, Nairobi is the only capital in the world with a national park. I decided not to visit one, as most of them were expensive and offered multi-day tours, and I didn't want to spend that much time around wildlife. Instead, I went to the Giraffe Center in Nairobi with Pragya and Rabina. The ticket cost 1500 Kenyan shillings (1000 Indian rupees). In Kenya, matatus (shared vans, usually decorated with portraits of famous people and playing rap songs) are the most popular means of public transport. Reaching the Giraffe Center from our hotel required taking five matatus, which cost a total of 150 shillings, and a 2 km walk. The journey back was 90 shillings, suggesting that we didn't find the most efficient route to get there. At the Giraffe Center, we fed giraffes and took photos.
A matatu with a Notorious BIG portrait.
Inside the Giraffe Center

Train ride from Nairobi to Mombasa I took a train from Nairobi to Mombasa. The train is known as the SGR Train, where SGR refers to Standard Gauge Railway. The journey was around 500 km. M-Pesa was the only way to make payment for pre-booking the train ticket, and I didn't have an M-Pesa account. Pragya's friend Mary helped facilitate the payment. I booked a second-class ticket, which cost 1500 shillings (1000 Indian rupees). The train was scheduled to depart from Nairobi at 08:00 hours in the morning and arrive in Mombasa at 14:00 hours. The security check at the station required scanning our bags and having them sniffed by sniffer dogs. I also fell victim to a scam by a security official who offered to help me get my ticket printed, only to later ask me to get him some coffee, which I politely declined. Before boarding the train, I was treated to some stunning views at the Nairobi Terminus station. It was a seating train, but I wished it were a sleeper train, as I was sleep-deprived. The train was neat and clean, with good toilets. The train reached Mombasa on time at around 14:00 hours.
SGR train at Nairobi Terminus.
Interior of the SGR train

Arrival in Mombasa
Mombasa Terminus station.
Mombasa was a bit hotter than Nairobi, with temperatures reaching around 30 degrees Celsius. However, that's not too hot for me, as I am used to higher temperatures in India. I had booked a hostel in the Old Town and was searching for a hitchhike from the Mombasa Terminus station. After trying for more than half an hour, I took a matatu that dropped me 3 km from my hostel for 200 shillings (140 Indian rupees). I tried to hitchhike again but couldn't find a ride. I think I know why I couldn't get a ride in either case. In the first case, the Mombasa Terminus was in an isolated place, so most of the vehicles were taxis or matatus, while any noncommercial cars were there to pick up friends and family. If the station were in the middle of the city, there would be many more car/truck drivers passing by, increasing my chances of getting a ride. In the second case, my hostel was at the end of the city, and nobody was going towards that side. In fact, many drivers told me they would love to give me a ride, but they were going in some other direction. Finally, I took a tuktuk for 70 shillings to reach my hostel, Tulia Backpackers. It was 11 USD (1400 shillings) for one night. The balcony gave a nice view of the Indian Ocean. The rooms had fans, but there was no air conditioning. Each bed also had a mosquito net. The place was within walking distance of the famous Fort Jesus. Mombasa has had more Islamic influence compared to Nairobi and also has many Hindu temples.
The balcony at Tulia Backpackers Hostel had a nice view of the ocean.
A room inside the hostel with fans and mosquito nets on the beds

Visiting White Sandy Beaches and Getting a Hitchhike Visiting Nyali beach marked my first time ever at a white sand beach. It was like 10 km from the hostel. The next day, I visited Diani Beach, which was 30 km from the hostel. Going to Diani Beach required crossing a river, for which there's a free ferry service every few minutes, followed by taking a matatu to Ukunda and then a tuk-tuk. The journey gave me a glimpse of the beautiful countryside of Kenya.
Nyali beach is a white sand beach
This is the ferry service for crossing the river.
During my return from Diani Beach to the hostel, I was successful in hitchhiking. However, it was only a 4 km ride and not sufficient to reach Ukunda, so I tried to get another ride. When a truck stopped for me, I asked for a ride to Ukunda. Later, I learned that they were going in the same direction as me, so I got off within walking distance from my hostel. The ride was around 30 km. I also learned the difference between a truck ride and a matatu or car ride. For instance, matatus and cars are much faster and cooler due to air conditioning, while trucks tend to be warmer because they lack it. Further, the truck was stopped at many checkpoints by the police for inspections as it carried goods, which is not the case with matatus. Anyways, it was a nice experience, and I am grateful for the ride. I had a nice conversation with the truck drivers about Indian movies and my experiences in Kenya.
Diani beach is a popular beach in Kenya. It is a white sand beach.
Selfie with truck drivers who gave me the free ride

Back to Nairobi I took the SGR train from Mombasa back to Nairobi. This time I took the night train, which departs at 22:00 hours, reaching Nairobi at around 04:00 in the morning. I could not sleep comfortably since the train only had seats, not sleeper berths. I had booked the Zarita Hotel in Nairobi and had already confirmed that they allowed early-morning check-in. Usually, hotels have a fixed checkout time, say 11:00 in the morning, and you are not allowed to stay beyond that regardless of the time you checked in. But this hotel checked me in for 24 hours. Here, I paid in US dollars, and the cost was 12 USD.

Almost Got Stuck in Kenya Two days before my scheduled flight from Nairobi back to India, I heard the news that the airports in Kenya were closed due to the strikes. Rabina and Pragya had their flight back to Nepal canceled that day, which left them stuck in Nairobi for two additional days. I called Sahil in India and found out during the conversation that the strike was called off in the evening. It was a big relief for me, and I was fortunate to be able to fly back to India without any changes to my plans.
Newspapers at a stand in Kenya covering news on the airport closure

Experience with locals I had no problems communicating with Kenyans, as everyone I met knew English to an extent that easily surpassed what you would find in big cities in India. Additionally, I learned a few words of Kenya's most popular local language, Swahili, such as "Asante", meaning thank you, "Jambo" for hello, and "Karibu" for welcome. Knowing a few words in the local language went a long way. I am not sure what's up with haggling in Kenya. It wasn't easy to bring the price of souvenirs down. I bought a fridge magnet for 200 shillings, which was the quoted price. On the other hand, it was much easier to bargain with taxis/tuktuks/motorbikes. I stayed at three hotels/hostels in Kenya. None of them had air conditioners. Two of the places were in Nairobi, and they didn't even have fans in the rooms, while the one in Mombasa had only fans. All of them had good Wi-Fi, except Tulia, where the internet overall was a bit shaky. My experience with the hotel staff was great. For instance, we requested that the Nairobi Transit Hotel cancel the included breakfast in order to reduce the room costs, but later realized that it was not a good idea. The hotel allowed us to revert and even offered one of our missing breakfasts during dinner. The staff at Tulia Backpackers in Mombasa facilitated the ticket payment for my train from Mombasa to Nairobi. One of the staff members also gave me a lift to the place where I could catch a matatu to Nyali Beach. They even added an extra tea bag to my tea when I requested it to be stronger.

Food At the Nairobi Transit Hotel, a Spanish omelette with tea was served for breakfast. I noticed that Spanish omelette appeared on the menus of many restaurants, suggesting that it is popular in Kenya. This was my first time having this dish. The milk tea in Kenya, referred to by locals as white tea, is lighter than Indian tea (they don't use a lot of tea leaves).
Spanish omelette served at breakfast at the Nairobi Transit Hotel
I also sampled ugali with eggs. In Mombasa, I visited an Indian restaurant called New Chetna and had a buffet thali there twice.
Ugali with eggs.

Tips for Exchanging Money In Kenya, I exchanged my money at forex shops a couple of times. I received good exchange rates for bills larger than 50 USD. For instance, when 1 USD was 129 shillings on xe.com, I got 128.3 shillings per USD (a total of 12,830 shillings) for two 50 USD notes at an exchange in Nairobi, compared to 127 shillings, the highest rate offered by the banks. On the other hand, for smaller bills such as a one US dollar note, I would have got only 125 shillings. A passport was the only document required for the exchange, and they also provided a receipt. A good piece of advice for travelers is to keep 50 USD or larger bills for exchanging into the local currency while saving the smaller US dollar bills for accommodation, as many hotels and hostels accept payment in US dollars (in addition to Kenyan shillings).

Missed Malindi and Lamu There were more places on my to-visit list in Kenya, but I simply didn't have time to cover them, as I don't like rushing through places, especially in a foreign country where there is a chance of me underestimating the amount of time it takes during transit. I would have liked to visit at least one of the Kilifi, Watamu or Malindi beaches. Further, Lamu seemed like a unique place to visit, as it has no cars or motorized transport; the only options for transport are boats and donkeys. That's it for now. Meet you in the next one :)

24 October 2024

Valhalla's Things: Asemic Writing, a Zine

Posted on October 24, 2024
Tags: madeof:atoms, madeof:bits, craft:zine
An open booklet with lines that look like some kind of cursive non-alphabetic script, framed by a border in the same script and four symbols in the corners. I have no idea either. The front of that booklet, with three lines of fake text in different sizes and a circle of the same. Happy Maladay[1] to those who celebrate it, I guess.
A template on white paper with pencil lines where text is supposed to go. Multiple A4 sheets of tracing paper with fake text, plus an A6 sheet and a white A6 sheet with a stamp impression. If you care about the how, it started as china ink on tracing paper, with the help of a template (and a correction sheet for one page where I used the wrong line on the template). A rubber stamp was carved with the author's signature and stamped on white paper, because the ink from the pad wasn't working well on tracing paper. Then everything was scanned (with the correction on top of the wrong page): asemic_zine_scans.tar. Imported in Inkscape and traced: asemic_zine_svg.tar. Printed, cut in half, folded and stapled. The magenta lines weren't by design, but are there because my printer is currently[2] cursed. And finally, asemic_zine.pdf was created, joining the pages together with pdfjam, for convenience in case somebody wants to download the full thing. All the .tar and .pdf downloads from this page are released under the WTFPL, or All Rites Reversed.

  1. it's still technically Maladay when I write this, even if by the time you'll get this it's probably the 6th of The Aftermath.
  2. I mean, all printers are always cursed, but at different times they can be cursed in different and novel ways.

10 October 2024

Gunnar Wolf: Started a guide to writing FUSE filesystems in Python

As DebConf22 was coming to an end, in Kosovo, talking with Eeveelweezel they invited me to prepare a talk to give for the Chicago Python User Group. I replied that I'm not really that much of a Python guy... but would think about a topic. Two years passed. I met Eeveelweezel again for DebConf24 in Busan, South Korea. And the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it's the language I find my students can best cope with. But delivering a talk to ChiPy?
On the other hand, I have long used a very simplistic and limited filesystem I've designed as an implementation project at class: FIUnamFS (for "Facultad de Ingeniería, Universidad Nacional Autónoma de México": the Engineering Faculty of Mexico's National University, where I teach. Sorry, the link is in Spanish, but you will find several implementations of it from the students). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for sub-directories, and is often limited to the size of a 1.44MB floppy disk. As I give this filesystem as a project to my students (and not as mere homework), I always ask them to try and provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be if they provide support for FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: use FUSE.
Python FUSE But, in the six semesters I've used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation. Maybe this is because it's not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I've seen several implementation examples and even nice web pages (i.e. the examples shipped with the python-fuse module, Stavros' passthrough filesystem, Dave's filesystem based upon (and further explaining) Stavros', and several others) explaining how to provide basic functionality. I found a particularly useful presentation by Matteo Bertozzi presented ~15 years ago at PyCon4. But none of those is IMO followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?). And of course, there isn't a single interface to work from. In Python only, we can find python-fuse, Pyfuse, Fusepy... Where to start from?
So I set out to try and help. Over the past couple of weeks, I have been slowly working on my own version, and presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but maybe my documentation ends up obfuscating the intent? I hope not and, read on, I've provided some remediation). I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to add to this also filesystems backed on pseudo-block-devices (for implementations such as my FIUnamFS). So far, I have added five stepwise pieces, starting from the barest possible empty filesystem, and adding system calls (and functionality) until (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
I think providing fun or useful examples is also a good way to get students to use what I'm teaching, so I've added some ideas I've had: a DNS filesystem, an on-the-fly markdown compiling filesystem, an unzip filesystem and an uncomment filesystem. They all provide something that could be seen as useful, in a way that's easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically! So I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk virtually at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week, and also in November, in person, at nerdear.la, which will be held in Mexico City for the first time. Of course, I will also share this project with my students in the next couple of weeks, and hope it manages to lure them into implementing FUSE in Python. At some point, I shall report! Update: After delivering my ChiPy talk, I have uploaded it to YouTube: A hand-holding guide to writing FUSE-based filesystems in Python, and after presenting at Jornadas Regionales, I present you the video in Spanish here: Aprendiendo y enseñando a escribir sistemas de archivo en espacio de usuario con FUSE y Python.
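As a quick way to poke at any of the example filesystems from a shell (the script and mountpoint names below are hypothetical, not the actual file names from the guide):
mkdir -p /tmp/mnt
python3 ramfs_example.py /tmp/mnt &             # mount one of the RAM-backed examples
ls -la /tmp/mnt                                 # exercises readdir() and stat()
echo hello > /tmp/mnt/test; cat /tmp/mnt/test   # exercises create(), write() and read()
fusermount -u /tmp/mnt                          # unmount cleanly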

1 October 2024

Ravi Dwivedi: State of the Map Conference in Kenya

Last month, I traveled to Kenya to attend a conference called State of the Map 2024 ("SotM" for short), which is an annual meetup of OpenStreetMap contributors from all over the world. It was held at the University of Nairobi Towers in Nairobi, from the 6th to the 8th of September.
University of Nairobi.
I have been contributing to OpenStreetMap for the last three years, and this conference seemed like a great opportunity to network with others in the community. As soon as I came across the travel grant announcement, I jumped in and filled the form immediately. I was elated when I was selected for the grant and couldn't wait to attend. The grant had an upper limit of 1200 and covered food, accommodation, travel and miscellaneous expenses such as the visa fee. Pre-travel tasks included obtaining Kenya's eTA and getting a yellow fever vaccine. Before the conference, Mikko from the Humanitarian OpenStreetMap Team introduced me to Rabina and Pragya from Nepal, Ibtehal from Bangladesh, and Sajeevini from Sri Lanka. We all booked the Nairobi Transit Hotel, which was within walking distance of the conference venue. Pragya, Rabina, and I traveled together from Delhi to Nairobi, while Ibtehal was my roommate in the hotel.
Our group at the conference.
The venue, University of Nairobi Towers, was a tall building and the conference was held on the fourth, fifth and sixth floors. The open area on the fifth floor of the building had a nice view of Nairobi's skyline and was a perfect spot for taking pictures. Interestingly, the university had a wing dedicated to Mahatma Gandhi, who is regarded in India as the Father of the Nation.
View of Nairobi's skyline from the open area on the fifth floor.
A library in Mahatma Gandhi wing of the University of Nairobi.
The diversity of the participants was mind-blowing, with people coming from a whopping 54 countries. I was surprised to notice that I was the only participant traveling from India, despite India having a large OpenStreetMap community. That said, there were two other Indian participants who traveled from other countries. I finally got to meet Arnalie (from the Philippines) and Letwin (from Zimbabwe), both of whom I had only met online before. I had met Anisa (from Albania) earlier during DebConf 2023. But I missed Mikko and Honey from the Humanitarian OpenStreetMap Team, whom I knew from the Open Mapping Guru program. I learned about the extent of OSM use through Pragya and Rabina's talk; about the logistics of running the OSM Board in the OSMF (OpenStreetMap Foundation) session; about the Youth Mappers from Sajeevini; about the OSM activities in Malawi from Priscilla Kapolo; and about mapping in Zimbabwe from Letwin. However, I missed Ibtehal's lightning session. The ratio of women speakers and participants at the conference was impressive, and I hope we can get such gender representation in our Delhi/NCR mapping parties.
One of the conference halls where talks took place.
Outside of talks, the conference also had lunch and snack breaks, giving ample time for networking with others. In the food department, there were many options for a lacto-ovo vegetarian like myself, including potatoes, rice, beans, chips etc. I found out that the milk tea in Kenya (referred to as "white tea") is usually not as strong as in India, so I switched to coffee (which is also called "white coffee" when taken with milk). The food wasn't spicy, but I can't complain :) Fruit juices served as a nice addition to lunch.
One of the lunch meals served during the conference.
At the end of the second day of the conference, there was a surprise in store for us: a bus ride to the Bao Box restaurant. The ride gave us the experience of a typical Kenyan matatu (privately-owned minibuses used as share taxis), complete with loud rap music. I remember one of the songs being Kraff's Nursery Rhymes. That day, I was wearing an original Kenyan cricket jersey, one that belonged to Dominic Wesonga, who represented Kenya in four ODIs. This confused Priscilla Kapolo, who asked if I was from Kenya! Anyway, while it served as a good conversation starter, it didn't attract as much attention as I expected :) I had some pizza and chips there, and later some drinks with Ibtehal. After the party, Piyush went with us to our hotel and we played a few games of UNO.
Minibus which took us from the university to Bao Box restaurant.
This minibus in the picture gave a sense of a real matatu.
I am grateful to the organizers Laura and Dorothea for introducing me to Nikhil when I was searching for a companion for my post-conference trip. Nikhil was one of the aforementioned Indian participants, and a wildlife lover. We had some nice conversations; he wanted to go to the Maasai Mara National Reserve, but it was too expensive for me. In addition, all the safaris were multi-day affairs, and I wasn't keen on being around wildlife for that long. Eventually I chose to go my own way, exploring the coastal side and visiting Mombasa.
While most of the work regarding the conference was done using free software (including the reimbursement form and Mastodon announcements), I was disappointed by the use of WhatsApp for coordination with the participants. I don't use WhatsApp and so was left out. WhatsApp is proprietary software (they do not provide the source code) and users don't control it. It is common to highlight that OpenStreetMap is controlled by users and the community, rather than a company; this should apply to WhatsApp as well. My suggestion is to use XMPP, which shares similar principles with OpenStreetMap, as it is privacy-respecting, controlled by users, and powered by free software. I understand the concern that there might not be many participants using XMPP already. Although it is a good idea to onboard people to free software like XMPP, we can also create a Matrix group, and bridge it with both the XMPP group and the Telegram group. In fact, using Matrix and bridging it with Telegram is how I communicated with the South Asian participants. While it's not ideal, as Telegram's servers are proprietary and centralized, it's certainly much better than creating a WhatsApp-only group. The setup can be bridged with IRC as well. On the other hand, self-hosted mailing lists for participants are also a good idea.
Finally, I would like to thank SotM for the generous grant, enabling me to attend this conference, meet the diverse community behind OSM and visit the beautiful country of Kenya. Stay tuned for the blog post on the Kenya trip. Thanks to Sahilister, Contrapunctus, Snehal and Badri for reviewing the draft of this blog post before publishing.

11 September 2024

Valhalla's Things: Two Linen Hoods

Posted on September 11, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear
A woman wearing a white hood with a deep shoulder covering that ends in a point at about waist height. The hood is not fitted, has a slight point because of the rectangular construction, and on the torso part one can see the big square gusset that gives it fullness. I've been influenced again into feeling the need for a garment. It was again a case of multiple sources conspiring in the same direction for unrelated reasons, but I decided I absolutely needed a linen hood, made from the heavy white linen I knew I had in my stash. Why? I don't know. I do like the feeling of wearing a hood, and the white linen should give decent protection from the sun, but I don't know how often I'm going to wear these instead of just a hat. On the other hand the linen was already there and I needed something small to sew. A woman wearing a hood in the same shape, but made in tartan-print flannel. It looks slightly smaller than the linen one. My first idea was to make a square hood: some time ago I had already made one out of some leftovers of duvet cover, vaguely inspired by the S…, because I have a long-term plan of making one a bit more from scratch[1]. I like the fact that this pattern is completely made out of squares and rectangles, and while the flannel one is quite fitting, as suitable for a warm garment, I felt that by making it just a cm or two wider it would have worked nicely for a warm weather one, and indeed it did. A woman wearing some sort of white veil that covers the head and has two scarf-like appendages, one wrapped around the shoulders and falling on the back, and one falling on the front side, kept in place with one arm. Except, before I even started on the square hood, I started to think that the same square top would also be good for a hood-scarf, one of those long flowy garments that sit on the head, wrap around the neck and fall down, moving with the wind and the movements of the person. Because, let's be honest: worn in a way that looks like a veil they feel nice, it's true. But with the help of a couple of pins you can do this. A woman wearing what looks like a deep hood over some sort of fabric face mask, showing only the eyes, in a way that resembles characters in a video game. There are again two scarf-like edges that fall on the front and back. And no, I've never played that game[2], and I'm not even 100% sure what it is about, other than killing people, climbing buildings and petting cats[3], but that's not really an issue when making a bit of casual cosplay of something, right? Anyway, should anybody feel the need to make themselves a hood or ten, the patterns have been released as usual as #FreeSoftWear: square hood and hood scarf.

  1. I'm not going to raise the sheep :D I'm actually not even going to wash and comb the wool, I'll start from the step just after those :D
  2. because proprietary software, because somewhat underpowered computers and other related reasons that are somewhat incidental to the game itself.
  3. at least two out of three things that make it look like a perfectly enjoyable activity.

4 September 2024

Samuel Henrique: DebConf24 was fun!: Security, curl, wcurl, Debian's quality

A picture of a badger2040w with Samuel's badge and the curl manpage PCB on the side

tl;dr DebConf24 was fun! A playlist of all of my talks, with subtitles (en, pt-br) and chapters is available on YouTube.

Overview DebConf24 was held in Busan, South Korea, between Sunday July 28th and Sunday August 4th 2024. As usual for DebConfs, I had a great time meeting my friends, but also met new people and got to learn a bit about the interesting things they're working on. I ended up getting too excited during the talk submission stage of the conference and as a result I presented 5 different activities (3 talks, 1 BoF and 1 lightning talk). Since I was too busy with the presentations, I did not have a lot of time to actually hang out with folks, or even to go out in the city; I guess I've learned my lesson for next time. The main purpose of this post is to write about all of the things I presented at the conference. I did want to list some of the interesting talks I watched, but I realized I would not be able to be fair, as I'm sure I would miss some. You can get the schedule and the recordings of any talks from the conference's website: https://debconf24.debconf.org/schedule/

wcurl Lightning Talk The most fun of my presentations: during the second-to-last day of the conference, I asked Sergio Durigan Junior <sergiodj> for help setting up a URL containing whitespace that redirects to wcurl's manpage. I then did a little demo to showcase why I (and a lot of others) struggle with downloading things with curl, and how wcurl solves that. https://www.youtube.com/watch?v=eM8M5qa4pPM
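The gist of that pain, sketched from memory with a placeholder URL: plain curl makes you opt in to behaviour most downloads want, while wcurl picks sensible defaults:
# Plain curl: remember to opt in to redirects, remote file naming, failure handling and retries
curl --fail --location --remote-name --retry 5 'https://example.com/release.tar.gz'
# wcurl: just download it (it also percent-encodes whitespace in URLs for you)
wcurl 'https://example.com/some file.tar.gz'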

Fixing CVEs on Debian: Everything you probably know already I've always felt like DebConf was missing security-related talks, so I decided to do something about it and presented a few of the things I've learned when fixing CVEs for Debian. This is an area where we don't get a lot of new contributors; I'm trying to change that, and this talk can be used to introduce newcomers to it. https://www.youtube.com/watch?v=XzNVVILVyUM

The secret sauce of Debian Debian is not very vocal about all of the nice things it has regarding quality assurance, testing, or CI, even though it's at the state of the art for a lot of things. This talk is an initial step towards making people aware of the cool things happening behind the scenes. Ideally we should have it well-documented somewhere. https://www.youtube.com/watch?v=x_X2IBnpjic

"I use Debian BTW": fzf, tmux, zoxide and friends One of my earliest good memories of Debian was when it started coming with a colored PS1 by default, I still remember the feeling of relief whenever I jumped into a Debian server and didn't have to deal with a black and white PS1. There's still a lot of room for Debian to ship better defaults, and I think some of them can actually happen. This talk is a bit of a silly one where I'm just making people aware of the existence of a few Golang/Rust CLI tools, and also some dotfiles configurations that should probably be the default. https://www.youtube.com/watch?v=tfto3Seokn4

curl The curl project does such a great job with their security advisories that it will likely never receive the amount of praise it deserves, but I did my best at mentioning it throughout my CVEs talk. Maybe I will write more extensively about this someday, but in case I don't:
There's no other project which so consistently mentions the exact range of commits that are affected by a given CVE. Forget about whether the versions are EOL: curl doesn't have LTS releases, yet they do such a great job at clearly documenting their CVEs that I would take that over having LTS releases anytime (that's for curl at least; I acknowledge some types of projects have a different need for LTS releases). Not only that, but they are also always careful about explaining alternative mitigations such as configuration changes, build flags that defuse the exploitation, or parameters that you should not use.
Just like we tend to do every time we meet, the other Debian curl maintainers and I spent the first 2 or 3 days of the conference talking about how we wanted to eventually meet up to discuss the package. It was going to be informal, maybe during the Cheese and Wine party, but then I realized we should make it part of the official schedule, which would also give us the recordings for later. And so the "curl maintainers BoF" happened, where we spoke about HTTP3, GnuTLS, wcurl and other things. https://www.youtube.com/watch?v=fL7hSypUTdM

wcurl Right after that BoF, Daniel Stenberg asked if we were interested in having wcurl adopted into curl, which we definitely were, so wcurl is now part of the curl project. Daniel was also kind enough to design a logo for the project, which makes me especially happy because I can stop with my own approach at a logo (which I had to redo every few days): A laptop with a curl and a GoHorse sticker, there's a 'w' handwritten with a marker on the right side of the curl sticker, making it 'wcurl' And here is the new logo: 'wcurl' written with the same font and colors as the curl logo, with the 'w' being green instead of blue, and a download icon at the end Much better, I would say :)

curl Swag DebConf24 was my chance at forwarding some curl swag items to the other curl maintainers, so both Sergio Durigan Junior <sergiodj> and Carlos Henrique Lima Melara <charles> got the curl-up t-shirt and the very cool curl PCB coaster, both gifted by Daniel Stenberg. Unfortunately I didn't have any of that for DebConf attendees, but I did drop loads of curl stickers at the stickers table, they were gone very quickly. A table full of different stickers, curl stickers can be seen over the whole table

For the future I used to think the most humbling experience you could have as someone who presented a talk was to have to watch it yourself: you notice a lot of mistakes and you instantly think about things that should be done differently. It turns out the most humbling thing to do is actually to write subtitles for your talks; I noticed every single mistake, often multiple times. So after spending more than 30 hours writing the subtitles for both English and Brazilian Portuguese for my talks, I feel like it's going to be much easier to avoid committing the same mistakes again. After some time you stop feeling shame about those mistakes and you're just left with feelings of annoyance, and at that point it becomes easier to consciously avoid them. I am collecting a list of things I wish I had done differently in all of those talks, so if I end up presenting any one of them again, it will be an improved version. A picture from the top of a group of conference attendees, there's about 150 people in the picture

31 July 2024

Matthew Palmer: Health Industry Company Sues to Prevent Certificate Revocation

It's not often that a company is willing to make a sworn statement to a court about how its IT practices are incompatible with the needs of the Internet, but when they do, it's popcorn time.

The Combatants In the red corner, weighing in at... nah, I'm not going to do that schtick. The plaintiff in the case is Alegeus Technologies, LLC, a Delaware Corporation that, according to their filings, is a "leading provider of a business-to-business, white-label funding and payment platform for healthcare carriers and third-party administrators to administer consumer-directed employee benefit programs". Not being subject to the bonkers US health care system, I have only a passing familiarity with the sorts of things they do, but presumably it involves moving a lot of money around, which is sometimes important. The defendant is DigiCert, a CA which, based on analysis I've done previously, is the second-largest issuer of WebPKI certificates by volume.

The History According to a recently opened Mozilla CA bug, DigiCert found an issue in their domain control validation workflow that meant it may have been possible for a miscreant to have certificates issued to them that they weren't legitimately entitled to. Given that validating domain names is basically the YOU HAD ONE JOB! of a CA, this is a big deal. The CA/Browser Forum Baseline Requirements (BRs) (which all CAs are required to adhere to, by virtue of their being included in various browser and OS trust stores) say that revocation is required within 24 hours when "[t]he CA obtains evidence that the validation of domain authorization or control for any Fully Qualified Domain Name or IP address in the Certificate should not be relied upon" (section 4.9.1.1, point 5). DigiCert appears to have at least tried to do the right thing, by opening the above Mozilla bug giving some details of the problem, and notifying their customers that their certificates were going to be revoked. One may quibble about how fast they're doing it, but they're giving it a decent shot, at least. A complicating factor in all this is that, only a touch over a month ago, Google Chrome announced the removal of another CA, Entrust, from its own trust store program, citing "a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports". Many of these compliance failures were failures to revoke certificates in a timely manner. One imagines that DigiCert would not like to gain a reputation for tardy revocation, particularly at the moment.

The Legal Action Now we come to Alegeus Technologies. They've opened a civil case whose first action is to request the issuance of a Temporary Restraining Order (TRO) that prevents DigiCert from revoking certificates issued to Alegeus (which the court has issued). This is a big deal, because TROs are legal instruments that, if not obeyed, constitute contempt of court (or something similar), and courts do not like people who disregard their instructions. That means that, in the short term, those certificates aren't getting revoked, despite the requirement imposed by root stores on DigiCert that the certificates must be revoked. DigiCert is in a real rock / hard place situation here: revoke and get punished by the courts, or don't revoke and potentially (though almost certainly not, in the circumstances) face removal from trust stores (which would kill, or at least massively hurt, their business).
The reason that Alegeus gives for requesting the restraining order is that "[t]o Reissue and Reinstall the Security Certificates, Alegeus must work with and coordinate with its Clients, who are required to take steps to rectify the certificates. Alegeus has hundreds of such Clients. Alegeus is generally required by contract to give its clients much longer than 24 hours notice before executing such a change regarding certification." In the filing, Alegeus does acknowledge that DigiCert is a voluntary member of the Certification Authority Browser Forum (CABF), which has bylaws stating that certificates with an issue in their domain validation must be revoked within 24 hours. This is a misstatement of the facts, though. It is the BRs, not the CABF bylaws, that require revocation, and the BRs apply to all CAs that wish to be included in browser and OS trust stores, not just those that are members of the CABF. In any event, given that Alegeus was aware that DigiCert is required to revoke certificates within 24 hours, one wonders why Alegeus went ahead and signed agreements with their customers that required a lengthy notice period before changing certificates.
What complicates the situation is that there is apparently a Master Services Agreement (MSA) that states that it constitutes the entire agreement between the parties, and that MSA doesn't mention certificate revocation anywhere relevant. That means that it's not quite so cut-and-dried that DigiCert does, in fact, have the right to revoke those certificates. I'd expect a lot of "update to your Master Services Agreement" emails to be going out from DigiCert (and other CAs) in the near future to clarify this point. Not being a lawyer, I can't imagine which way this case might go, but there's one thing we can be sure of: some lawyers are going to be able to afford that trip to a tropical paradise this year.

The Security Issues

The requirement for revocation within 24 hours is an important security control in the WebPKI ecosystem. If a certificate is misissued to a malicious party, or is otherwise compromised, it needs to be marked as untrustworthy as soon as possible. While revocation is far from perfect, it is the best tool we have.

In this court filing, Alegeus has claimed that they are unable to switch certificates with less than 24 hours notice (due to "contractual SLAs"). This is a pretty big problem, because there are lots of reasons why a certificate might need to be switched out Very Quickly. As a practical example, someone with access to the private key for your SSL certificate might decide to use it in a blog post. Letting that sort of problem linger for an extended period of time might end up being a Pretty Big Problem of its own. An organisation that cannot respond within hours to a compromised certificate is playing chicken with their security.
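
For the curious, here's a rough sketch of how you can ask a CA whether a certificate has been revoked, using OpenSSL's OCSP client. It's a minimal illustration rather than a robust monitoring tool: the host name is just an example, and it assumes the CA publishes both an AIA "CA Issuers" URL and an OCSP responder (as most public CAs did at the time of writing).

#!/bin/sh
# Rough sketch: check a live certificate's revocation status over OCSP.
# HOST is just an example; adjust to taste.
HOST="www.example.com"
# Fetch the leaf certificate the server presents
openssl s_client -connect "$HOST:443" -servername "$HOST" </dev/null 2>/dev/null \
  | openssl x509 -outform PEM >leaf.pem
# Download the issuer certificate from the AIA extension (commonly DER-encoded)
aia_url="$(openssl x509 -in leaf.pem -noout -text | sed -n 's/.*CA Issuers - URI://p' | head -n1)"
curl -s "$aia_url" | openssl x509 -inform DER -outform PEM >issuer.pem
# Ask the CA's OCSP responder what it thinks of the certificate; trusting the
# issuer to verify the response signature works for many (not all) CAs
ocsp_url="$(openssl x509 -in leaf.pem -noout -ocsp_uri)"
openssl ocsp -issuer issuer.pem -cert leaf.pem -url "$ocsp_url" -CAfile issuer.pem

If all is well, the last command reports something like "leaf.pem: good"; a revoked certificate reports "revoked" along with the revocation time, which is exactly the signal a revocation event hands to every one of your clients at once.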

The Takeaways

Contractual obligations that require you to notify anyone else of a certificate (or private key) changing are bonkers, and completely antithetical to the needs of the WebPKI. If you have to have them, you're going to want to start transitioning to a private PKI, wherein you can do whatever you darn well please with revocation (or not). As these sorts of problems keep happening, trust stores (and hence CAs) are going to crack down on this sort of thing, so you may as well move sooner rather than later.

If you are an organisation that uses WebPKI certificates, you've got to be able to deal with any kind of certificate revocation event within hours, not days. This basically boils down to automated issuance and lifecycle management, because having someone manually request and install certificates is terrible on many levels.

There isn't currently a completed standard for notifying subscribers if their certificates need premature renewal (say, due to needing to be revoked), but the ACME Renewal Information Extension is currently being developed to fill that need. Ask your CA if they're tracking this standards development, and when they intend to have the extension available for use. (Pro-tip: if they say "we'll start doing development when the RFC is published", run for the hills; that's not how responsible organisations work on the Internet.)
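
To give a flavour of what ARI looks like in practice, here's a minimal sketch of querying a CA's renewal information endpoint. It assumes your ACME CA advertises a renewalInfo URL in its directory (Let's Encrypt's production directory is used purely as an example), and the certificate identifier shown is the illustrative one from the draft; a real client derives it from the certificate's Authority Key Identifier and serial number.

#!/bin/sh
# Rough sketch of an ACME Renewal Information (ARI) query, per draft-ietf-acme-ari.
# Assumes the CA's ACME directory advertises a "renewalInfo" endpoint.
directory_url="https://acme-v02.api.letsencrypt.org/directory"
renewal_info_url="$(curl -s "$directory_url" | jq -r '.renewalInfo')"
# Illustrative certificate ID from the draft: base64url(AKI) "." base64url(serial)
cert_id="aYhba4dGQEHhs3uEe6CuLN4ByNQ.AIdlQyE"
# The response contains a suggestedWindow; a window entirely in the past means
# "renew (or replace) this certificate right now"
curl -s "$renewal_info_url/$cert_id" | jq .

The nice part of this design is that it's an unauthenticated GET, so your renewal tooling can poll it cheaply and discover it needs to reissue before the revocation deadline hits, rather than finding out from angry users.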

The Givings

If you've found this helpful, consider shouting me a refreshing beverage. Reading through legal filings is thirsty work!

14 July 2024

Ravi Dwivedi: Kenya Visa Process

Prior to arrival in Kenya, you need to apply for an Electronic Travel Authorization (eTA) on their website by uploading all the required documents. This system has been in place since January 2024, when the country abolished its visa system and implemented the eTA portal. The required documents depend on the purpose of your visit, which, in my case, was to attend a conference. Here is the list of documents I submitted for my eTA: "Reservation" means I didn't book the flights and hotels, but rather reserved them. Additionally, "optional" means that those documents were not mandatory to submit, but I submitted them in the "Other Documents" section in order to support my application. After submitting the eTA, I had to make a payment of around 35 US Dollars (approximately 3,000 Indian Rupees). It took 40 hours for me to receive an email from Kenya stating that my eTA had been approved, along with an attached PDF, making this one of my smoothest experiences of obtaining travel documents for a country :). An eTA is technically not a visa, but I put the word "visa" in the title due to familiarity with the term.

3 July 2024

Mike Gabriel: Polis - a FLOSS Tool for Civic Participation -- Initial Evaluation and Adaptation (episode 2/5)

Here comes the 2nd article of the 5-episode blog post series written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH. Enjoy also this read on Guido's work on Polis,
Mike
Table of Contents of the Blog Post Series
  1. Introduction
  2. Initial evaluation and adaptation (this article)
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap
Polis - Initial evaluation and adaptation

The Polis code base consists of a number of components: the administration and participation interfaces, a common web backend, and a statistics processing server. Both frontends and the backend are written in a mixture of JavaScript and TypeScript; only the statistics processing server is written in Clojure. In the case of self-hosting, the preferred method of deployment is via Docker containers, using Docker Compose or any other orchestrator (a rough sketch of such a deployment follows at the end of this article). The participation frontend for conversations can either be used as a standalone web page or be embedded via an iframe.

For our planned use case we initially defined a number of goals. After a preliminary evaluation of our own, and after consulting with Policy Lab UK, who were also evaluating and testing Polis and had already made a range of improvements related to self-hosting as well as bug fixes and modernization changes, we decided to take their work as a base for our adaptations, with the intent of submitting generally useful changes back to the Polis project. Subsequently, a number of changes were implemented, including the removal of hardcoded domain names, the elimination of unnecessary cookies and third-party requests, support for an alternative email sending service, and the option of disabling Facebook and X integration.

For the branding, our approach was to add an option allowing websites which embed conversations in an iframe to load an alternative stylesheet overriding the native Polis branding. For this to be practical we intended to use CSS custom properties for defining branding-related styles such as colors and fonts. That approach turned out to be problematic: although the Polis participation frontend stylesheet is generated from SCSS, and some of the colors are parameterized, the parameters are not used consistently throughout the SCSS stylesheets. In addition, the frontend templates contain a large amount of hardcoded style attributes. While we succeeded in implementing user-defined stylesheets, it took a disproportionate amount of development resources to parameterize all used colors and fonts via CSS custom properties, aggravated by the fact that the SCSS and template files are huge and contain many unused rules and code.
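
As a rough illustration of the deployment method mentioned above, the following sketch brings up a self-hosted Polis instance with Docker Compose. It assumes the upstream compdemocracy/polis repository and the example.env file its documentation refers to; the repository layout and configuration variables may differ in your checkout, so treat this as an outline rather than a recipe.

#!/bin/sh
# Rough sketch: bring up a self-hosted Polis instance with Docker Compose.
# Repository URL and env file name are assumptions based on the upstream docs.
git clone https://github.com/compdemocracy/polis.git
cd polis || exit 1
# Copy the example environment and adjust domain names, secrets, SMTP, etc.
cp example.env .env
# Build and start the frontends, the API server, and the statistics service
docker compose up --build -d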
