Search Results: "thk"

18 April 2024

Thomas Koch: Minimal overhead VMs with Nix and MicroVM

Posted on March 17, 2024
Joachim Breitner wrote about a Convenient sandboxed development environment and thus reminded me to blog about MicroVM. I've toyed around with it a little but not yet seriously used it, as I'm currently not coding. MicroVM is a Nix-based project to configure and run minimal VMs. It can mount and thus reuse the host's nix store inside the VM, giving it a very small disk footprint. I use MicroVM on a Debian system with the nix package manager. The MicroVM author uses the project to host production services. I also consider it a nice way to learn about NixOS after having started with the nix package manager and before making the big step to NixOS as one's main system. The guest's root filesystem is a tmpdir, so one must explicitly define folders that should be mounted from the host and thus be persistent across VM reboots. I defined the VM as a nix flake, since this is how I started from the MicroVM project's example:
 
  description = "Haskell dev MicroVM";
  inputs.impermanence.url = "github:nix-community/impermanence";
  inputs.microvm.url = "github:astro/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";
  outputs =   self, impermanence, microvm, nixpkgs  :
    let
      persistencePath = "/persistent";
      system = "x86_64-linux";
      user = "thk";
      vmname = "haskell";
      nixosConfiguration = nixpkgs.lib.nixosSystem  
          inherit system;
          modules = [
            microvm.nixosModules.microvm
            impermanence.nixosModules.impermanence
            ( pkgs, ...  :  
            environment.persistence.$ persistencePath  =  
                hideMounts = true;
                users.$ user  =  
                  directories = [
                    "git" ".stack"
                  ];
                 ;
               ;
              environment.sessionVariables =  
                TERM = "screen-256color";
               ;
              environment.systemPackages = with pkgs; [
                ghc
                git
                (haskell-language-server.override   supportedGhcVersions = [ "94" ];  )
                htop
                stack
                tmux
                tree
                vcsh
                zsh
              ];
              fileSystems.$ persistencePath .neededForBoot = nixpkgs.lib.mkForce true;
              microvm =  
                forwardPorts = [
                    from = "host"; host.port = 2222; guest.port = 22;  
                    from = "guest"; host.port = 5432; guest.port = 5432;   # postgresql
                ];
                hypervisor = "qemu";
                interfaces = [
                    type = "user"; id = "usernet"; mac = "00:00:00:00:00:02";  
                ];
                mem = 4096;
                shares = [  
                  # use "virtiofs" for MicroVMs that are started by systemd
                  proto = "9p";
                  tag = "ro-store";
                  # a host's /nix/store will be picked up so that no
                  # squashfs/erofs will be built for it.
                  source = "/nix/store";
                  mountPoint = "/nix/.ro-store";
                   
                  proto = "virtiofs";
                  tag = "persistent";
                  source = "~/.local/share/microvm/vms/$ vmname /persistent";
                  mountPoint = persistencePath;
                  socket = "/run/user/1000/microvm-$ vmname -persistent";
                 
                ];
                socket = "/run/user/1000/microvm-control.socket";
                vcpu = 3;
                volumes = [];
                writableStoreOverlay = "/nix/.rwstore";
               ;
              networking.hostName = vmname;
              nix.enable = true;
              nix.nixPath = ["nixpkgs=$ builtins.storePath <nixpkgs> "];
              nix.settings =  
                extra-experimental-features = ["nix-command" "flakes"];
                trusted-users = [user];
               ;
              security.sudo =  
                enable = true;
                wheelNeedsPassword = false;
               ;
              services.getty.autologinUser = user;
              services.openssh =  
                enable = true;
               ;
              system.stateVersion = "24.11";
              systemd.services.loadnixdb =  
                description = "import hosts nix database";
                path = [pkgs.nix];
                wantedBy = ["multi-user.target"];
                requires = ["nix-daemon.service"];
                script = "cat $ persistencePath /nix-store-db-dump nix-store --load-db";
               ;
              time.timeZone = nixpkgs.lib.mkDefault "Europe/Berlin";
              users.users.$ user  =  
                extraGroups = [ "wheel" "video" ];
                group = "user";
                isNormalUser = true;
                openssh.authorizedKeys.keys = [
                  "ssh-rsa REDACTED"
                ];
                password = "";
               ;
              users.users.root.password = "";
              users.groups.user =  ;
             )
          ];
         ;
    in  
      packages.$ system .default = nixosConfiguration.config.microvm.declaredRunner;
     ;
 
I start the microVM with a templated systemd user service:
[Unit]
Description=MicroVM for Haskell development
Requires=microvm-virtiofsd-persistent@.service
After=microvm-virtiofsd-persistent@.service
AssertFileNotEmpty=%h/.local/share/microvm/vms/%i/flake/flake.nix
[Service]
Type=forking
ExecStartPre=/usr/bin/sh -c "[ /nix/var/nix/db/db.sqlite -ot %h/.local/share/microvm/nix-store-db-dump ] || nix-store --dump-db >%h/.local/share/microvm/nix-store-db-dump"
ExecStartPre=ln -f -t %h/.local/share/microvm/vms/%i/persistent/ %h/.local/share/microvm/nix-store-db-dump
ExecStartPre=-%h/.local/state/nix/profile/bin/tmux new -s microvm -d
ExecStart=%h/.local/state/nix/profile/bin/tmux new-window -t microvm: -n "%i" "exec %h/.local/state/nix/profile/bin/nix run --impure %h/.local/share/microvm/vms/%i/flake"
The above service definition creates a dump of the host's nix store db so that it can be imported in the guest. This is necessary so that the guest can actually use what is available in /nix/store. There is an effort for an overlaid nix store that would be preferable to this hack. Finally, the MicroVM is started inside a tmux session named "microvm". This way I can use the VM with SSH or through the console and also access the qemu console. And for completeness, the virtiofsd service:
[Unit]
Description=serve host persistent folder for dev VM
AssertPathIsDirectory=%h/.local/share/microvm/vms/%i/persistent
[Service]
ExecStart=%h/.local/state/nix/profile/bin/virtiofsd \
 --socket-path=${XDG_RUNTIME_DIR}/microvm-%i-persistent \
 --shared-dir=%h/.local/share/microvm/vms/%i/persistent \
 --gid-map :995:%G:1: \
 --uid-map :1000:%U:1:

Thomas Koch: Using nix package manager in Debian

Posted on January 16, 2024
The nix package manager has been available in Debian since May 2020. Why would one use it in Debian? Especially the last point nagged me every time I set up a new Debian installation: my emacs configuration and my desktop setup expect certain software to be installed. Please be aware that I'm a beginner with nix and that my config might not follow best practice. Additionally, many nix users are already using the new flakes feature of nix that I'm still learning about. So I've got this file at .config/nixpkgs/config.nix[1]:
with (import <nixpkgs> {});
{
  packageOverrides = pkgs: with pkgs; {
    thk-emacsWithPackages = (pkgs.emacsPackagesFor emacs-gtk).emacsWithPackages (
      epkgs:
      (with epkgs.elpaPackages; [
        ace-window
        company
        org
        use-package
      ]) ++ (with epkgs.melpaPackages; [
        editorconfig
        flycheck
        haskell-mode
        magit
        nix-mode
        paredit
        rainbow-delimiters
        treemacs
        visual-fill-column
        yasnippet-snippets
      ]) ++ [    # From main packages set
      ]
    );

    userPackages = buildEnv {
      extraOutputsToInstall = [ "doc" "info" "man" ];
      name = "user-packages";
      paths = [
        ghc
        git
        (pkgs.haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
        nix
        stack
        thk-emacsWithPackages
        tmux
        vcsh
        virtiofsd
      ];
    };
  };
}
Every time I change the file or want to receive updates, I do:
nix-env --install --attr nixpkgs.userPackages --remove-all
You can see that I install nix with nix. This gives me a newer version than the one available in Debian stable. However, the nix-daemon still runs as the older binary from Debian. My dirty hack is to put this override in /etc/systemd/system/nix-daemon.service.d/override.conf:
[Service]
ExecStart=
ExecStart=@/home/thk/.local/state/nix/profile/bin/nix-daemon nix-daemon --daemon
I'm not too interested in a cleaner way since I hope to fully migrate to Nix anyway.

  1. Note the nixpkgs in the path. This is not a config file for nix, the package manager, but for the nix package collection. See the nixpkgs manual.

9 August 2023

Antoine Beaupré: OpenPGP key transition

This is a short announcement to say that I have changed my main OpenPGP key. A signed statement is available with the cryptographic details but, in short, the reason is that I stopped using my old YubiKey NEO that I have worn on my keyring since 2015. I now have a YubiKey 5, which supports ED25519; that means much shorter keys and faster decryption. It allowed me to move all my secret subkeys onto the key (including encryption keys) while retaining reasonable performance. I have written extensive documentation on how to do that: OpenPGP key rotation and also YubiKey OpenPGP operations.

Warning on storing encryption keys on a YubiKey
People wishing to move their private encryption keys to such a security token should be very careful, as there are special precautions to take for disaster recovery. I am toying with the idea of writing an article specifically about disaster recovery for secrets and backups, dealing specifically with cases of death or disabilities.

Autocrypt changes
One nice change is the impact on Autocrypt headers, which are considerably shorter. Before, the header didn't even fit on a single line in an email; it overflowed to five lines:
Autocrypt: addr=anarcat@torproject.org; prefer-encrypt=nopreference;
 keydata=xsFNBEogKJ4BEADHRk8dXcT3VmnEZQQdiAaNw8pmnoRG2QkoAvv42q9Ua+DRVe/yAEUd03EOXbMJl++YKWpVuzSFr7IlZ+/lJHOCqDeSsBD6LKBSx/7uH2EOIDizGwfZNF3u7X+gVBMy2V7rTClDJM1eT9QuLMfMakpZkIe2PpGE4g5zbGZixn9er+wEmzk2mt20RImMeLK3jyd6vPb1/Ph9+bTEuEXi6/WDxJ6+b5peWydKOdY1tSbkWZgdi+Bup72DLUGZATE3+Ju5+rFXtb/1/po5dZirhaSRZjZA6sQhyFM/ZhIj92mUM8JJrhkeAC0iJejn4SW8ps2NoPm0kAfVu6apgVACaNmFb4nBAb2k1KWru+UMQnV+VxDVdxhpV628Tn9+8oDg6c+dO3RCCmw+nUUPjeGU0k19S6fNIbNPRlElS31QGL4H0IazZqnE+kw6ojn4Q44h8u7iOfpeanVumtp0lJs6dE2nRw0EdAlt535iQbxHIOy2x5m9IdJ6q1wWFFQDskG+ybN2Qy7SZMQtjjOqM+CmdeAnQGVwxowSDPbHfFpYeCEb+Wzya337Jy9yJwkfa+V7e7Lkv9/OysEsV4hJrOh8YXu9a4qBWZvZHnIO7zRbz7cqVBKmdrL2iGqpEUv/x5onjNQwpjSVX5S+ZRBZTzah0w186IpXVxsU8dSk0yeQskblrwARAQABzSlBbnRvaW5lIEJlYXVwcsOpIDxhbmFyY2F0QHRvcnByb2plY3Qub3JnPsLBlAQTAQgAPgIbAwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgBYhBI3JAc5kFGwEitUPu3khUlJ7dZIeBQJihnFIBQkacFLiAAoJEHkhUlJ7dZIeXNAP/RsX+27l9K5uGspEaMH6jabAFTQVWD8Ch1om9YvrBgfYtq2k/m4WlkMh9IpT89Ahmlf0eq+V1Vph4wwXBS5McK0dzoFuHXJa1WHThNMaexgHhqJOs
 S60bWyLH4QnGxNaOoQvuAXiCYV4amKl7hSuDVZEn/9etDgm/UhGn2KS3yg0XFsqI7V/3RopHiDT+k7+zpAKd3st2V74w6ht+EFp2Gj0sNTBoCdbmIkRhiLyH9S4B+0Z5dUCUEopGIKKOSbQwyD5jILXEi7VTZhN0CrwIcCuqNo7OXI6e8gJd8McymqK4JrVoCipJbLzyOLxZMxGz8Ki0b9O844/DTzwcYcg9I1qogCsGmZfgVze2XtGxY+9zwSpeCLeef6QOPQ0uxsEYSfVgS+onCesSRCgwAPmppPiva+UlGuIMun87gPpQpV2fqFg/V8zBxRvs6YTGcfcQjfMoBHmZTGb+jk1//QAgnXMO7fGG38YH7iQSSzkmodrH2s27ZKgUTHVxpBL85ptftuRqbR7MzIKXZsKdA88kjIKKXwMmez9L1VbJkM4k+1Kzc5KdVydwi+ujpNegF6ZU8KDNFiN9TbDOlRxK5R+AjwdS8ZOIa4nci77KbNF9OZuO3l/FZwiKp8IFJ1nK7uiKUjmCukL0od/6X2rJtAzJmO5Co93ZVrd5r48oqUvjklzzsBNBFmeC3oBCADEV28RKzbv3dEbOocOsJQWr1R0EHUcbS270CrQZfb9VCZWkFlQ/1ypqFFQSjmmUGbNX2CG5mivVsW6Vgm7gg8HEnVCqzL02BPY4OmylskYMFI5Bra2wRNNQBgjg39L9XU4866q3BQzJp3r0fLRVH8gHM54Jf0FVmTyHotR/Xiw5YavNy2qaQXesqqUv8HBIha0rFblbuYI/cFwOtJ47gu0QmgrU0ytDjlnmDNx4rfsNylwTIHS0Oc7Pezp7MzLmZxnTM9b5VMprAXnQr4rewXCOUKBSto+j4rD5/77DzXw96bbueNruaupb2Iy2OHXNGkB0vKFD3xHsXE2x75NBovtABEBAAHCwqwEGAEIACAWIQSNyQHOZBRsBIrVD7t5IVJSe3WSHgUCWZ4LegIbAgFACRB5IV
 JSe3WSHsB0IAQZAQgAHRYhBHsWQgTQlnI7AZY1qz6h3d2yYdl7BQJZngt6AAoJED6h3d2yYdl7CowH/Rp7GHEoPZTSUK8Ss7crwRmuAIDGBbSPkZbGmm4bOTaNs/gealc2tsVYpoMx7aYgqUW+t+84XciKHT+bjRv8uBnHescKZgDaomDuDKc2JVyx6samGFYuYPcGFReRcdmH0FOoPCn7bMW5mTPztV/wIA80LZD9kPKIXanfUyI3HLP0BPwZG4WTpKzJaalR1BNwu2oF6kEK0ymH3LfDiJ5Sr6emI2jrm4gH+/19ux/x+ST4tvm2PmH3BSQOPzgiqDiFd7RZoAIhmwr3FW4epsK9LtSxsi9gZ2vATBKO1oKtb6olW/keQT6uQCjqPSGojwzGRT2thEANH+5t6Vh0oDPZhrKUXRAAxHMBNHEaoo/M0sjZo+5OF3Ig1rMnI6XbKskLv6hu13cCymW0w/5E4XuYnyQ1cNC3pLvqDQbDx5mAPfBVHuqxJdRLQ3yDM/D2QIsxnkzQwi0FsJuni4vuJzWK/NHHDCvxMCh0YmSgbptUtgW8/niatd2Y6MbfRGxUHoctKtzqzivC8hKMTFrj4AbZhg/e9QVCsh5zSXtpWP0qFDJsxRMx0/432n9d4XUiy4U672r9Q09SsynB3QN6nTaCTWCIxGxjIb+8kJrRqTGwy/PElHX6kF0vQUWZNf2ITV1sd6LK/s/7sH+x4rzgUEHrsKr/qPvY3rUY/dQLd+owXesY83ANOu6oMWhSJnPMksbNa4tIKKbjmw3CFIOfoYHOWf3FtnydHNXoXfj4nBX8oSnkfhLILTJgf6JDFXfw6mTsv/jMzIfDs7PO1LK2oMK0+prSvSoM8bP9dmVEGIurzsTGjhTOBcb0zgyCmYVD3S48vZlTgHszAes1zwaCyt3/tOwrzU5JsRJVns+B/TUYaR/u3oIDMDygvE5ObWxXaFVnCC59r+zl0FazZ0ouyk2AYIR
 zHf+n1n98HCngRO4FRel2yzGDYO2rLPkXRm+NHCRvUA/i4zGkJs2AV0hsKK9/x8uMkBjHAdAheXhY+CsizGzsKjjfwvgqf84LwAzSDdZqLVE2yGTOwU0ESiArJwEQAJhtnC6pScWjzvvQ6rCTGAai6hrRiN6VLVVFLIMaMnlUp92EtgVSNpw6kANtRTpKXUB5fIPZVUrVdfEN06t96/6LE42tgifDAFyFTZY5FdHHri1GG/Cr39MpW2VqCDCtTTPVWHTUlU1ZG631BJ+9NB+ce58TmLr6wBTQrT+W367eRFBC54EsLNb7zQAspCn9pw1xf1XNHOGnrAQ4r9BXhOW5B8CzRd4nLRQwVgtw/c5M/bjemAOoq2WkwN+0mfJe4TSfHwFUozXuN274X+0Gr10fhp8xEDYuQM0qu6W3aDXMBBwIu0jTNudEELsTzhKUbqpsBc9WjwNMCZoCuSw/RTpFBV35mXbqQoQgbcU7uWZslLl9Wvv/C6rjXgd+GeX8SGBjTqq1ZkTv5UXLHTNQzPnbkNEExzqToi/QdSjFMIACnakeOSxc0ckfnsd9pfGv1PUyPyiwrHiqWFzBijzGIZEHxhNGFxAkXwTJR7Pd40a7RDxwbO6p/TSIIum41JtteehLHwTRDdQNMoyfLxuNLEtNYS0uR2jYI1EPQfCNWXCdT2ZK/l6GVP6jyB/olHBIOr+oVXqJh+48ki8cATPczhq3fUr7UivmguGwD67/4omZ4PCKtz1hNndnyYFS9QldEGo+AsB3AoUpVIA0XfQVkxD9IZr+Zu6aJ6nWq4M2bsoxABEBAAHCwXYEGAEIACACGwwWIQSNyQHOZBRsBIrVD7t5IVJSe3WSHgUCWPerZAAKCRB5IVJSe3WSHkIgEACTpxdn/FKrwH0/LDpZDTKWEWm4416l13RjhSt9CUhZ/Gm2GNfXcVTfoF/jKXXgjHcV1DHjfLUPmPVwMdqlf5ACOiFqIUM2ag/OEARh356w
 YG7YEobMjX0CThKe6AV2118XNzRBw/S2IO1LWnL5qaGYPZONUa9Pj0OaErdKIk/V1wge8Zoav2fQPautBcRLW5VA33PH1ggoqKQ4ES1hc9HC6SYKzTCGixu97mu/vjOa8DYgM+33TosLyNy+bCzw62zJkMf89X0tTSdaJSj5Op0SrRvfgjbC2YpJOnXxHr9qaXFbBZQhLjemZi6zRzUNeJ6A3Nzs+gIc4H7s/bYBtcd4ugPEhDeCGffdS3TppH9PnvRXfoa5zj5bsKFgjqjWolCyAmEvd15tXz5yNXtvrpgDhjF5ozPiNp/1EeWX4DxbH2i17drVu4fXwauFZ6lcsAcJxnvCA28RlQlmEQu/gFOx1axVXf6GIuXnQSjQN6qJbByUYrdc/cFCxPO2/lGuUxnufN9Tvb51Qh54laPgGLrlD2huQeSD9Sxa0MNUjNY0qLqaReT99Ygb2LPYGSLoFVx9iZz6sZNt07LqCx9qNgsJwsdmwYsNpMuFbc7nkWjtlEqzsXZHTvYN654p43S+hcAhmmOzQZcew6h71fAJLciiqsPBnCEdgCGFAWhZZdPkMA==
After the change, the entire key fits on a single line, neat!
Autocrypt: addr=anarcat@torproject.org; prefer-encrypt=nopreference;
 keydata=xjMEZHZPzhYJKwYBBAHaRw8BAQdAWdVzOFRW6FYVpeVaDo3sC4aJ2kUW4ukdEZ36UJLAHd7NKUFudG9pbmUgQmVhdXByw6kgPGFuYXJjYXRAdG9ycHJvamVjdC5vcmc+wpUEExYIAD4WIQS7ts1MmNdOE1inUqYCKTpvpOU0cwUCZHZgvwIbAwUJAeEzgAULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRACKTpvpOU0c47SAPdEqfeHtFDx9UPhElZf7nSM69KyvPWXMocu9Kcu/sw1AQD5QkPzK5oxierims6/KUkIKDHdt8UcNp234V+UdD/ZB844BGR2UM4SCisGAQQBl1UBBQEBB0CYZha2IMY54WFXMG4S9/Smef54Pgon99LJ/hJ885p0ZAMBCAfCdwQYFggAIBYhBLu2zUyY104TWKdSpgIpOm+k5TRzBQJkdlDOAhsMAAoJEAIpOm+k5TRzBg0A+IbcsZhLx6FRIqBJCdfYMo7qovEo+vX0HZsUPRlq4HkBAIctCzmH3WyfOD/aUTeOF3tY+tIGUxxjQLGsNQZeGrQI
Note that I have implemented my own kind of ridiculous Autocrypt support for the Notmuch Emacs email client I use; see this elisp code. To import keys, I pipe the message into this script, which is basically just:
sq autocrypt decode | gpg --import
... thanks to Sequoia's best-of-class Autocrypt support.

Note on OpenPGP usage
While some have claimed OpenPGP's death, I believe those claims are overstated. Maybe it's just me, but I still use OpenPGP for my password management, to authenticate users and messages, and it's the interface to my YubiKey for authenticating with SSH servers. I understand people feel that OpenPGP is possibly insecure, counter-intuitive and full of problems, but I think most of those problems should instead be attributed to its current flagship implementation, GnuPG. I have tried to work with GnuPG for years, and it keeps surprising me with evilness and oddities. I have high hopes that the Sequoia project can bring some sanity into this space, and I also hope that RFC4880bis can eventually get somewhere so we have a more solid specification with more robust crypto. It's kind of a shame that this has dragged on for so long. (Update: there's a separate draft called openpgp-crypto-refresh that might actually be adopted as the "OpenPGP RFC" soon!) And it doesn't keep real work from happening in Sequoia and other implementations. Thunderbird rewrote their OpenPGP implementation with RNP (which was, granted, a bumpy road because it lost compatibility with GnuPG), and Sequoia now has a certificate store with trust management (but still no secret storage), preliminary OpenPGP card support and even a basic GnuPG compatibility layer. I'm also curious to try out the OpenPGP CA capabilities. So maybe it's just because I'm becoming an old fart that doesn't want to change tools, but so far I haven't seen a good incentive in switching away from OpenPGP, and haven't found a good set of tools that completely replace it. Maybe OpenSSH's keys and CA can eventually replace it, but I suspect they will end up rebuilding most of OpenPGP anyway, just more slowly. If they do, let's hope they avoid the mistakes our community has made in the past, at least...

20 December 2022

Russ Allbery: Review: Tess of the Road

Review: Tess of the Road, by Rachel Hartman
Series: Tess of the Road #1
Publisher: Random House
Copyright: 2018
Printing: 2022
ISBN: 1-101-93130-2
Format: Kindle
Pages: 536
Tess of the Road is the first book of a YA fantasy duology set in the same universe as Seraphina and Shadow Scale. It's hard to decide what to say about reading order (and I now appreciate the ambiguous answers I got). Tess of the Road is a sequel to Seraphina and Shadow Scale in the sense that there are numerous references to the previous duology, but it has a different protagonist and different concerns. You don't need to read the other duology first, but Tess of the Road will significantly spoil the resolution of the romance plot in Seraphina, and it will be obvious that you've skipped over background material. That said, Shadow Scale is not a very good book, and this is a much better book. I guess the summary is this: if you're going to read the first duology, read it first, but don't feel obligated to do so. Tess was always a curious, adventurous, and some would say unmanageable girl, nothing like her twin. Jeanne is modest, obedient, virtuous, and practically perfect in every way. Tess is not; after a teenage love affair resulting in an out-of-wedlock child and a boy who disappeared rather than marry her, their mother sees no alternative but to lie about which of the twins is older. If Jeanne can get a good match among the nobility, the family finances may be salvaged. Tess's only remaining use is to help her sister find a match, and then she can be shuffled off to a convent. Tess throws herself into court politics and does exactly what she's supposed to. She engineers not only a match, but someone Jeanne sincerely likes. Tess has never lacked competence. But this changes nothing about her mother's view of her, and Tess is depressed, worn, and desperately miserable in Jeanne's moment of triumph. Jeanne wants Tess to stay and become the governess of her eventual children, retaining their twin bond of the two of them against the world. Their older sister Seraphina, more perceptively, tries to help her join an explorer's expedition. Tess, in a drunken spiral of misery, insults everyone and runs away, with only a new pair of boots and a pack of food. This is going to be one of those reviews where the things I didn't like are exactly the things other readers liked. I see why people loved this book, and I wish I had loved it too. Instead, I liked parts of it a great deal and found other parts frustrating or a bit too off-putting. Mostly this is a preference problem rather than a book problem. My most objective complaint is the pacing, which was also my primary complaint about Shadow Scale. It was not hard to see where Hartman was going with the story, I like that story, I was on board with going there, but getting there took for-EV-er. This is a 536 page book that I would have edited to less than 300 pages. It takes nearly a hundred pages to get Tess on the road, and while some of that setup is necessary, I did not want to wallow in Tess's misery and appalling home life for quite that long. A closely related problem is that Hartman continues to love flashbacks. Even after Tess has made her escape, we get the entire history of her awful teenage years slowly dribbled out over most of the book. Sometimes this is revelatory; mostly it's depressing. I had guessed the outlines of what had happened early in the book (it's not hard), and that was more than enough motivation for me, but Hartman was determined to make the reader watch every crisis and awful moment in detail. This is exactly what some readers want, and sometimes it's even what I want, but not here. 
I found the middle of the book, where the story is mostly flashbacks and flailing, to be an emotional slog. Part of the problem is that Tess has an abusive mother and goes through the standard abuse victim process of being sure that she's the one who's wrong and that her mother is justified in her criticism. This is certainly realistic, and it eventually led to some satisfying catharsis as Tess lets go of her negative self-image. But Tess's mother is a narcissistic religious fanatic with a persecution complex and not a single redeeming quality whatsoever, and I loathed reading about her, let alone reading Tess tiptoeing around making excuses for her. The point of this in the story is for Tess to rebuild her self-image, and I get it, and I'm sure this will work for some readers, but I wanted Tess's mother (and the rest of her family except her sisters) to be eaten by dragons. I do not like the emotional experience of hating a character in a book this much.
Where Tess of the Road is on firmer ground is when Tess has an opportunity to show her best qualities, such as befriending a quigutl in childhood and, in the sort of impulsive decision that shows her at her best, learning their language. (For those who haven't read the previous books, quigutls are a dog-sized subspecies of dragon that everyone usually treats like intelligent animals, although they're clearly more than that.) Her childhood quigutl friend Pathka becomes her companion on the road, which both gives her wanderings some direction and adds some useful character interaction.
Pathka comes with a plot that is another one of those elements that I think will work for some readers but didn't work for me. He's in search of a Great Serpent, a part of quigutl mythology that neither humans nor dragons pay attention to. That becomes the major plot of the novel apart from Tess's emotional growth. Pathka also has a fraught relationship with his own family, which I think was supposed to parallel Tess's relationships but never clicked for me. I liked Tess's side of this relationship, but Pathka is weirdly incomprehensible and apparently fickle in ways that I found unsatisfying. I think Hartman was going for an alien tone that didn't quite work for me.
This is a book that gets considerably better as it goes along, and the last third of the book was great. I didn't like being dragged through the setup, but I loved the character Tess became. Once she reaches the road crew, this was a book full of things that I love reading about. The contrast between her at the start of the book and the end is satisfying and rewarding. Tess's relationship with her twin Jeanne deserves special mention; their interaction late in the book is note-perfect and much better than I had expected. Unfortunately, Tess of the Road doesn't have a real resolution. It's only the first half of Tess's story, which comes back to that pacing problem. Ah well.
I enjoyed this but I didn't love it. The destination was mostly worth the journey, but I thought the journey was much too long and I had to spend too much time in the company of people I hated far more intensely than was comfortable. I also thought the middle of the book sagged, a problem I have now had with two out of three of Hartman's books. But I can see why other readers with slightly different preferences loved it. I'm probably invested enough to read the sequel, although I'm a bit grumbly that the sequel is necessary. Followed by In the Serpent's Wake. Rating: 7 out of 10

15 September 2022

Joachim Breitner: rec-def: Dominators case study

More ICFP-inspired experiments using the rec-def library: In Norman Ramsey's very nice talk about his Functional Pearl "Beyond Relooper: Recursive Translation of Unstructured Control Flow to Structured Control Flow", he had the following slide showing the equation for the dominators of a node in a graph:
Norman Ramsey shows a formula
He said "it's ICFP and I wanted to say the dominance relation has a beautiful set of equations; you can read all these algorithms how to compute this, but the concept is simple". This made me wonder: if the concept is simple and this formula is beautiful, shouldn't this be sufficient for the Haskell programmer to obtain the dominator relation, without reading all those algorithms? Before we start, we have to clarify the formula (in essence, dom(v) = {v} ∪ ⋂ { dom(p) | p ∈ preds(v) }) a bit: if a node is an entry node (no predecessors), then the big intersection is over the empty set, and that is not a well-defined concept. For these nodes, we need that big intersection to return the empty set, as entry nodes are not dominated by any other node. (Let's assume that the entry nodes are exactly those with no predecessors.) Let's try, first using plain Haskell data structures. We begin by implementing this big intersection operator on Data.Set, and also a function to find the predecessors of a node in a graph. With those, we can write down the formula that Norman gave quite elegantly (see the sketch after the list below). Does this work? On an acyclic graph, it seems it does. But, not surprisingly if you have read my previous blog posts, it falls over once we have recursion. So let us reimplement it with Data.Recursive.Set. The hope is that we can simply replace the operations, and that now it can suddenly handle cyclic graphs as well. And it does! Well, it does return a result, but it looks strange: clearly nodes 3 and 4 are also dominated by 1, but the result does not reflect that. Yet the result is a solution to Norman's equation. Was the equation wrong? No, but we failed to notice that the desired solution is the largest one, not the smallest. And Data.Recursive.Set calculates, as documented, the least fixed point. What now? Until the library has code for RDualSet a, we can work around this by using the dual formula to calculate the non-dominators. To do this, we
  • use union instead of intersection
  • delete instead of insert,
  • instead of S.empty, use the set of all nodes (which requires some extra plumbing)
  • subtract the result from the set of all nodes to get the dominators
and thus the code turns into:
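Here is a minimal sketch of all three versions. The helper names (nodes, preds, intersections) and the function names domintors1/2/3 are my guesses, and I am assuming Data.Recursive.Set exposes RS.empty, RS.insert, RS.intersection, RS.union, RS.delete, RS.get and a constant constructor RS.mk; check the rec-def documentation for the real API.

import qualified Data.Map as M
import qualified Data.Set as S
import qualified Data.Recursive.Set as RS  -- from the rec-def package

type Graph = [(Int, Int)]

nodes :: Graph -> S.Set Int
nodes g = S.fromList $ concat [ [u, v] | (u, v) <- g ]

preds :: Graph -> Int -> [Int]
preds g v = [ u | (u, v') <- g, v' == v ]

-- The big intersection; over the empty set it returns the empty set,
-- as discussed above for entry nodes.
intersections :: Ord a => [S.Set a] -> S.Set a
intersections [] = S.empty
intersections xs = foldr1 S.intersection xs

-- Plain Data.Set: fine on acyclic graphs, loops forever on cycles.
domintors1 :: Graph -> M.Map Int [Int]
domintors1 g = M.map S.toList m
  where
    m = M.fromSet doms (nodes g)
    doms v = S.insert v (intersections [ m M.! p | p <- preds g v ])

-- Data.Recursive.Set: terminates on cycles, but computes the least
-- solution, which is not the one we want (nodes 3 and 4 lose their
-- dominator 1).
domintors2 :: Graph -> M.Map Int [Int]
domintors2 g = M.map (S.toList . RS.get) m
  where
    m = M.fromSet doms (nodes g)
    doms v = RS.insert v (ints [ m M.! p | p <- preds g v ])
    ints [] = RS.empty
    ints xs = foldr1 RS.intersection xs

-- The dual formula: compute the non-dominators with union/delete,
-- seeding the empty union with the full node set (RS.mk is my assumed
-- name for lifting a constant set), then complement at the end.
domintors3 :: Graph -> M.Map Int [Int]
domintors3 g = M.map (S.toList . S.difference allNodes . RS.get) m
  where
    allNodes = nodes g
    m = M.fromSet nonDoms (nodes g)
    nonDoms v = RS.delete v (unions [ m M.! p | p <- preds g v ])
    unions [] = RS.mk allNodes
    unions xs = foldr1 RS.union xs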
And with this, now we do get the correct result:
ghci> domintors3 [(1,2),(1,3),(2,4),(3,4),(4,3)]
fromList [(1,[1]),(2,[1,2]),(3,[1,3]),(4,[1,4])]
We worked a little bit on how to express the beautiful formula in Haskell, but at no point did we have to think about how to solve it. To me, this is the essence of declarative programming.

3 September 2022

Joachim Breitner: More recursive definitions

Haskell is a pure and lazy programming language, and the laziness allows us to write some algorithms very elegantly, by recursively referring to already calculated values. A typical example is the following definition of the Fibonacci numbers, as an infinite stream:
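That definition is the classic self-referential stream; a sketch:

-- Each element past the first two is the sum of the stream with
-- its own tail, computed lazily on demand.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- ghci> take 10 fibs
-- [0,1,1,2,3,5,8,13,21,34]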

Elegant graph traversals
A maybe more practical example is the following calculation of the transitive closure of a graph: We represent graphs as maps from each vertex to its successor vertices, and define the resulting map sets recursively: the set of vertices reachable from a vertex v is v itself, plus those reachable from its successors vs, for which we query sets. And, despite this apparently self-referential recursion, it works! A sketch follows.
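A minimal sketch of such a transitive1, assuming graphs are maps from Int vertices to successor lists (the concrete types here are my choice):

import qualified Data.Map as M
import qualified Data.Set as S

-- The result map `sets` is defined in terms of itself: map values
-- are lazy, so this knot-tying terminates on acyclic graphs.
transitive1 :: M.Map Int [Int] -> M.Map Int [Int]
transitive1 g = M.map S.toList sets
  where
    sets = M.mapWithKey
      (\v vs -> S.insert v (S.unions [ sets M.! v' | v' <- vs ]))
      g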

Cyclic graphs ruin it all
These tricks can be very impressive, until someone tries to use them on a cyclic graph and the program just hangs until we abort it. At this point we are thrown back to implement a more pedestrian graph traversal, typically keeping explicit track of vertices that we have seen already. I have written that seen/todo recursion idiom so often in the past, I can almost write it blindly. And indeed, this code handles cyclic graphs just fine:
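A sketch of that idiom, continuing the code above and assuming the name transitive2 used in the transcript below:

-- Explicit worklist: `seen` accumulates visited vertices, `todo`
-- holds vertices still to expand.
transitive2 :: M.Map Int [Int] -> M.Map Int [Int]
transitive2 g = M.fromList [ (v, S.toList (go S.empty [v])) | v <- M.keys g ]
  where
    go seen []          = seen
    go seen (x : todo)
      | x `S.member` seen = go seen todo
      | otherwise         = go (S.insert x seen) ((g M.! x) ++ todo)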
ghci> transitive2 $ M.fromList [(1,[2,3]),(2,[1,3]),(3,[])]
fromList [(1,[1,2,3]),(2,[1,2,3]),(3,[3])]
But this is a bit anticlimactic: Haskell is supposed to be a declarative language, and transitive1 declares my intent just fine!

We can have it all
It seems there actually is a way to write essentially the code in transitive1, and still get the right result in all cases, and I have just published a possible implementation as rec-def. In the module Data.Recursive.Set we find an API that resembles that of Set, with a type RSet a, and in addition to conversion functions from and to sets, we find the two operations that we needed in transitive1 (a sketch follows). Let's try that. And indeed it works! Magic!
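A sketch of that variant, under the assumption that the module exports RS.insert, RS.unions and RS.get; it is transitive1 with the Set operations swapped out. I give it the fresh name transitive3 here, though the transcript below reuses the name transitive2 for it:

import qualified Data.Recursive.Set as RS

-- Same knot-tying shape as transitive1, but over RSet, so cycles
-- are resolved by the library's fixed-point machinery.
transitive3 :: M.Map Int [Int] -> M.Map Int [Int]
transitive3 g = M.map (S.toList . RS.get) sets
  where
    sets = M.mapWithKey
      (\v vs -> RS.insert v (RS.unions [ sets M.! v' | v' <- vs ]))
      g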
ghci> transitive2 $ M.fromList [(1,[3]),(2,[1,3]),(3,[])]
fromList [(1,[1,3]),(2,[1,2,3]),(3,[3])]
ghci> transitive2 $ M.fromList [(1,[2,3]),(2,[1,3]),(3,[])]
fromList [(1,[1,2,3]),(2,[1,2,3]),(3,[3])]
To show off some more, here are small examples:
ghci> let s = RS.insert 42 s in RS.get s
fromList [42]
ghci> :{
  let s1 = RS.insert 23 s2
      s2 = RS.insert 42 s1
  in RS.get s1
:}
fromList [23,42]

How is that possible? Is it still Haskell?
The internal workings of the RSet a type will be the topic of a future blog post; let me just briefly mention that it uses unsafe features under the hood, and just keeps applying the equations you gave until a fixed point is reached. Because it starts with the empty set and all operations provided by Data.Recursive.Set are monotone (e.g. no difference), it will eventually find the least fixed point. Despite the unsafe machinery under the hood, I claim that Data.Recursive.Set is itself nicely safe, and does not destroy Haskell's nice properties like purity, referential transparency and equational reasoning. If you disagree, I'd like to hear about it (here, on Twitter, Reddit or Discourse)! There is a brief discussion at the end of the tutorial in Data.Recursive.Example.

More than sets
The library also provides Data.Recursive.Bool for recursive equations with booleans (preferring False) and Data.Recursive.DualBool (preferring True), and some operations like member :: Ord a => a -> RSet a -> RBool can actually connect different types. I plan to add other data types (natural numbers, maps, Maybe, with suitable orders) as demand arises and as I come across nice small example use-cases for the documentation (e.g. finding shortest paths in a graph). I believe this idiom is practically useful in a wide range of applications (which of course all have some underlying graph structure; but then, almost everything in Computer Science is a graph). My original motivation was a program analysis. Imagine you want to find out from where in your program you can run into a division by zero. As long as your program does not have recursion, you can simply keep track of a boolean flag while you traverse the program, keeping a mapping from function names to whether they can divide by zero; all nice and elegant. But once you allow mutually recursive functions, things become tricky. Unless you use RBool! Simply use laziness, pass the analysis result down when analyzing the functions' right-hand sides, and it just works! A sketch follows.
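To make that concrete, here is a minimal sketch of such an analysis; the Body type is a made-up toy representation, and the names RB.true, RB.false, RB.or and RB.get are my assumptions about the Data.Recursive.Bool API (the real names may differ):

import qualified Data.Map as M
import qualified Data.Recursive.Bool as RB

-- Toy program representation: does a function body divide directly,
-- and which functions does it call?
data Body = Body { divides :: Bool, calls :: [String] }

-- Which functions can (transitively) reach a division? Mutual
-- recursion is fine: RBool finds the least fixed point, preferring
-- False, so only genuinely reachable divisions come out True.
canDivideByZero :: M.Map String Body -> M.Map String Bool
canDivideByZero prog = M.map RB.get results
  where
    results = M.map analyze prog
    analyze b = RB.or $
      (if divides b then RB.true else RB.false)
        : [ results M.! f | f <- calls b ]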

21 March 2022

Gunnar Wolf: Long, long, long live Emacs after 39 years

Reading Planet Debian (see, Sam, we are still having a conversation over there?), I read Anarcat's 20+ years of Emacs. And... well, should I brag, er, contribute to the discussion? Of course, why not? Emacs is the first computer program I can name that I ever learnt to use to do something minimally useful. 39 years ago.
From the Space Cadet keyboard that (obviously) influenced Emacs' early design
The Emacs editor was born, according to Wikipedia, in 1976, same year as myself. I am clearly not among its first users. It was already a well-established citizen when I first learnt it; I am fortunate to be the son of a Physics researcher at UNAM. My father used to take me to his institute after he noticed how I was attracted to computers; we would usually spend some hours there between 7 and 11PM on Friday nights. His institute had a computer room where they had very sweet gear: some 10 Heathkit terminals quite similar to this one: The terminals were connected (via individual switches) to both a PDP-11 and a Foonly F2 computer. The room also had a beautiful thermal printer, a beautiful Tektronix vectorial graphics output terminal, and some other stuff. The main use for my father was to typeset some books; he had recently (1979) published Integral Transforms in Science and Engineering (that must be my first mention in scientific literature), and I remember he was working on the proceedings of a conference he held in Oaxtepec (the account he used in the system was oax, not his usual kbw, which he lent me). He was also working on Manual de Lenguaje y Tipografía Científica en Castellano, where you can see some examples of TeX; due to a hardware crash, the book has the rare privilege of being a direct copy of the output of the thermal printer: it was not possible to produce a higher resolution copy for several years. But it is fun and interesting to see what we were able to produce with in-house tools back in 1985! So, what could he teach me so I could use the computers while he worked? TeX, of course. No, not LaTeX (that was published in 1984). LaTeX is a set of macros developed initially by Leslie Lamport, used to make TeX easier; TeX was developed by Donald Knuth, and if I have this information correct, it was Knuth himself who installed and demonstrated TeX on the Foonly computer, during a visit to UNAM. Now, after 39 years hammering at Emacs buffers, have I grown extra fingers? Nope. I cannot even write decent elisp code, and can barely read it. I do use org-mode (a lot!) and love it; I have written basically five books, many articles and lots of presentations and minor documents with it. But I don't read my mail or handle my git from Emacs. I could say I'm a relative newbie after almost four decades. Four decades... When we got a PC in 1986, my father got the people at the Institute to get him memacs (micro-emacs). There was probably a ten-year period when I barely used any emacs, but I always recognized it. My fingers have memorized a dozen or so movement commands, and a similar number of file management commands. And yes, Emacs and TeX are still the main tools I use day to day.

23 July 2021

Evgeni Golov: It's not *always* DNS

Two weeks ago, I had the pleasure to play with Foreman's Kerberos integration and iron out a few long-standing kinks. It all started with a user reminding us that Kerberos authentication is broken when Foreman is deployed on CentOS 8, as there is no more mod_auth_kerb available. Given mod_auth_kerb hasn't seen a release since 2013, this is quite understandable. Thankfully, there is a replacement available, mod_auth_gssapi. Even better, it's available in CentOS 7 and 8 and in Debian and Ubuntu too! So I quickly whipped up a PR to completely replace mod_auth_kerb with mod_auth_gssapi in our installer and successfully tested that it still works in CentOS 7 (even if upgrading from a mod_auth_kerb installation) and CentOS 8. Yay, the issue at hand seemed fixed. But just writing a post about that would've been boring, huh? Well, and then I dared to test the same on Debian. Turns out, our installer was using the wrong path to the Apache configuration and the wrong username Apache runs under while trying to set up Kerberos, so it could not have ever worked. Luckily Ewoud and I were able to fix that too. And yet the installer was still unable to fetch the keytab from my FreeIPA server. Let's dig deeper! To fetch the keytab, the installer does roughly this:
# kinit -k
# ipa-getkeytab -k http.keytab -p HTTP/foreman.example.com
And if one executes that by hand to see the actual error, you get:
# kinit -k
kinit: Cannot determine realm for host (principal host/foreman@)
Well, yeah, the principal looks kinda weird (no realm) and the interwebs say for "kinit: Cannot determine realm for host":
  • Kerberos cannot determine the realm name for the host. (Well, duh, that's what it said?!)
  • Make sure that there is a default realm name, or that the domain name mappings are set up in the Kerberos configuration file (krb5.conf)
And guess what, all of these are perfectly set by ipa-client-install when joining the realm. But there must be something, right? Looking at the principal in the error, it's missing both the domain of the host and the realm. I was pretty sure that my DNS and config were right, but what about gethostname(2)?
# hostname
foreman
Bingo! Let's see what happens if we force that to be an FQDN?
# hostname foreman.example.com
# kinit -k
NO ERRORS! NICE! We're doing science here, right? And I still have the CentOS 8 box I had for the previous round of tests. What happens if we set that to have a shortname? Nothing. It keeps working fine. And what about CentOS 7? VMs are cheap. Well, that breaks like on Debian, if we force the hostname to be short. Interesting. Is it a version difference between the systems?
  • Debian 10 has krb5 1.17-3+deb10u1
  • CentOS 7 has krb5 1.15.1-50.el7
  • CentOS 8 has krb5 1.18.2-8.el8
So, something changed in 1.18? Looking at the krb5 1.18 changelog, the following entry jumps out: "Expand single-component hostnames in host-based principal names when DNS canonicalization is not used, adding the system's first DNS search path as a suffix." Given Debian 11 has krb5 1.18.3-5 (well, testing has, so let's pretend bullseye will too), we can retry the experiment there, and it shows that it works with both short and full hostnames. So yeah, it seems krb5 "does the right thing" since 1.18, and before that gethostname(2) must return an FQDN. I've documented that for our users and can now sleep a bit better. At least it wasn't DNS, right?! Btw, freeipa won't be in bullseye, which makes me a bit sad, as that means that Foreman won't be able to automatically join FreeIPA realms if deployed on Debian 11.

10 April 2020

Utkarsh Gupta: Debian Activities for March 2020

Here's my (sixth) monthly update about the activities I've done in Debian this March.

Debian LTS
This was my sixth month as a Debian LTS paid contributor.
I was assigned 24.00 hours and worked on the following things:

CVE Fixes and Announcements:
  • Issued DLA 2131-1, fixing CVE-2014-6262, for rrdtool.
    For Debian 8 "Jessie", this problem has been fixed in version 1.4.8-1.2+deb8u1.
  • Issued DLA 2131-2, fixing a regression caused by DLA 2131-1, for rrdtool.
    For Debian 8 "Jessie", this problem has been fixed in version 1.4.8-1.2+deb8u2.
  • Issued DLA 2135-1, fixing CVE-2020-9546, CVE-2020-9547, and CVE-2020-9548, for jackson-databind.
    For Debian 8 "Jessie", these problems have been fixed in version 2.4.2-2+deb8u12.
  • Issued DLA 2137-1, fixing CVE-2020-10232, for sleuthkit.
    For Debian 8 "Jessie", this problem has been fixed in version 4.1.3-4+deb8u2.
  • Issued DLA 2139-1, fixing CVE-2020-5258 and CVE-2020-5259, for dojo.
    For Debian 8 "Jessie", these problems have been fixed in version 1.10.2+dfsg-1+deb8u3.
  • Issued DLA 2141-1, fixing CVE-2020-10184 and CVE-2020-10185, for yubikey-val.
    For Debian 8 "Jessie", these problems have been fixed in version 2.27-1+deb8u1.
  • Issued DLA 2146-1, fixing CVE-2019-15690, for libvncserver.
    For Debian 8 "Jessie", this problem has been fixed in version 0.9.9+dfsg2-6.1+deb8u7.
  • Issued DLA 2147-1, fixing CVE-2019-17546, for gdal.
    For Debian 8 "Jessie", this problem has been fixed in version 1.10.1+dfsg-8+deb8u2.
  • Issued DLA 2149-1, fixing CVE-2020-5267, for rails.
    For Debian 8 "Jessie", this problem has been fixed in version 2:4.1.8-1+deb8u6.
  • Issued DLA 2153-1, fixing CVE-2020-10672 and CVE-2020-10673, for jackson-databind.
    For Debian 8 "Jessie", these problems have been fixed in version 2.4.2-2+deb8u13.
  • Issued DLA 2154-1, fixing CVE-2020-10802 and CVE-2020-10803, for phpmyadmin.
    For Debian 8 "Jessie", these problems have been fixed in version 4:4.2.12-2+deb8u9.

Other LTS Work:

Debian Work

Uploads to the Archive:
  • micro (2.0.2-1~bpo10+1) to buster-backports.
  • rails (2:5.2.4.1+dfsg-1) to unstable.
  • ruby-rack (2.0.8-1) to unstable.
  • ruby-grape (1.3.0-1) to experimental.
  • libgit2 (0.28.4+dfsg.1-3) to unstable.
  • micro (2.0.2-2) to unstable.
  • ruby-octokit (4.17.0-1) to unstable.
  • ruby-power-assert (1.1.6-1) to unstable.
  • rails (2:5.2.4.1+dfsg-2) to unstable.
  • ruby-octokit (4.17.0-2) to unstable.
  • ruby-method-source (1.0.0-1) to unstable.
  • libwebservice-ils-perl (0.18-1) to unstable.
  • libdata-hal-perl (1.001-1) to unstable.
  • rails (2:4.2.7.1-1+deb9u2) to stretch.
  • rails (2:5.2.2.1+dfsg-1+deb10u1) to buster.
  • libgit2 (0.28.4+dfsg.1-4) to unstable.
  • ruby-grape (1.3.1+git20200320.c8fd21b-1) to experimental.
  • ruby-grape-logging (1.8.3-1) to unstable.
  • ruby-grape (1.3.1+git20200320.c8fd21b-2) to unstable.
  • ruby-dry-equalizer (0.3.0-2) to unstable.
  • ruby-dry-core (0.4.9-2) to unstable.
  • ruby-dry-logic (1.0.5-2) to unstable.
  • ruby-dry-inflector (0.2.0-2) to unstable.
  • ruby-dry-container (0.7.2-2) to unstable.
  • ruby-dry-configurable (0.9.0-2) to unstable.
  • ruby-dry-types (1.2.2-2) to unstable.
  • micro (2.0.2-2~bpo10+1) to buster-backports.
  • golang-vbom-util (0.0~git20180919.efcd4e0-2) to unstable.
  • golang-github-tonistiigi-units (0.0~git20180711.6950e57-2) to unstable.
  • golang-github-jaguilar-vt100 (0.0~git20150826.2703a27-2) to unstable.
  • golang-github-grpc-ecosystem-grpc-opentracing (0.0~git20180507.8e809c8-2) to unstable.
  • rails (2:6.0.2.1+dfsg-3) to experimental.
  • libgit2 (0.99.0+dfsg.1-1) to experimental.
  • golang-github-goji-param (0.0~git20160927.d7f49fd-5) to unstable.
  • phpmyadmin-sql-parser (4.6.1-2) to unstable.
  • mariadb-mysql-kbs (1.2.10-2) to unstable.
  • golang-github-aleksi-pointer (1.1.0-1) to unstable.
  • golang-github-andreyvit-diff (0.0~git20170406.c7f18ee-2) to unstable.
  • golang-github-audriusbutkevicius-go-nat-pmp (0.0~git20160522.452c976-2) to unstable.
  • ruby-power-assert (1.1.7-1) to unstable.
  • ruby-test-unit (3.3.5-1) to unstable.
  • ruby-omniauth (1.9.1-1) to unstable.
  • ruby-warden (1.2.8-1) to unstable.
  • python-libais (0.17+git.20190917.master.e464cf8-2) to unstable.
  • lolcat (100.0.1-3) to unstable.
  • ruby-vips (2.0.17-1) to unstable.

Bug Fixes:
  • #836206 for lolcat.
  • #940338 for golang-github-audriusbutkevicius-go-nat-pmp.
  • #940335 for golang-github-andreyvit-diff.
  • #940334 for golang-github-aleksi-pointer.
  • #940362 for golang-github-goji-param.
  • #952025 for ruby-grape.
  • #867027 for ruby-grape.
  • #954529 for libgit2.
  • #954304 for rails (CVE-2020-5267) buster-pu.
  • #954304 for rails (CVE-2020-5267) stretch-pu.
  • #954304 for rails (CVE-2020-5267) unstable.
  • #953400 for micro.
  • #927889 for libgit2.
  • #952111 for micro.

Miscellaneous:
  • Sponsored a lot of uploads :)
  • Outreachy mentoring for GitLab project for Sakshi Sangwan.
  • Opened PRs & MRs upstream.

Until next time.
:wq for today.

17 September 2016

Jonas Meurer: data recovery

Data recovery with ddrescue, testdisk and sleuthkit
From time to time I need to recover data from disks. Reasons can be broken flash/hard disks as well as accidentally deleted files. Fortunately, this doesn't happen too often, which on the downside means that I usually don't remember the details about best practice. Now that a good friend asked me to recover very important data from a broken flash disk, I take the opportunity to write down what I did and hopefully don't need to read the same docs again next time :) Disclaimer: I didn't take the time to read through the full documentation. This is rather a brief summary of best practice to my knowledge, not a sophisticated and detailed explanation of data recovery techniques.
Create image with ddrescue
First and most important rule for recovery tasks: don't work on the original, use a copied image instead. This way you can do whatever you want without risking further data loss. The perfect tool for this is GNU ddrescue. Contrary to dd, it doesn't reiterate over a broken sector with I/O errors again and again while copying. Instead, it remembers the broken sector for later and goes on to the next sector first. That way, all sectors that can be read without errors are copied first. This is particularly important as every extra attempt to read a broken sector can further damage the source device, causing even more data loss. In Debian, ddrescue is available in the gddrescue package:
apt-get install gddrescue
Copying the raw disk content to an image with ddrescue is as easy as:
ddrescue /dev/disk disk-backup.img disk.log
Giving a logfile as the third argument has the great advantage that you can interrupt ddrescue at any time and continue the copy process later, possibly with different options. In the case of very large disks where only the first part was in use, it might be useful to start by copying the beginning only:
ddrescue -i0 -s20MiB /dev/disk disk-backup.img disk.log
In case of errors after the first run, you should start ddrescue again with direct read access (-d) and tell it to retry bad sectors three times (-r3):
ddrescue -d -r3 /dev/disk disk-backup.img disk.log
If some sectors are still missing afterwards, it might help to run ddrescue with infinite retries for some time (e.g. one night):
ddrescue -d -r-1  /dev/disk disk-backup.img disk.log

Inspect the image
Now that you have an image of the raw disk, you can take a first look at what it contains. If ddrescue was able to recover all sectors, chances are high that no further magic is required and all data is there. If the raw disk contains (or used to contain) a partition table, take a first look with mmls from sleuthkit:
mmls disk-backup.img
In case of an intact partition table, you can try to create device maps with kpartx after setting up a loop device for the image file:
losetup /dev/loop0 disk-backup.img
kpartx -a /dev/loop0
If kpartx finds partitions, they will be made available at /dev/mapper/loop0p1, /dev/mapper/loop0p2 and so on. Search for filesystems on the partitions with fsstat from sleuthkit on the partition device map:
fsstat /dev/mapper/loop0p1
Or run fsstat directly on the image file, with the offset discovered by mmls earlier; this also might work in case the partition device maps are not available:
fsstat -o 8064 disk-backup.img
The offset obviously is not needed if the image contains a partition dump (without partition table):
fsstat disk-backup.img
In case a filesystem is found, simply try to mount it:
mount -t <fstype> -o ro /dev/mapper/loop0p1 /mnt
or
losetup -o 8064 /dev/loop1 disk-backup.img
mount -t <fstype> -o ro /dev/loop1 /mnt

Recover partition table
If the partition table is broken, try to recover it with testdisk. But first, create a second copy of the image, as you will alter it now:
ddrescue disk-backup.img disk-backup2.img
testdisk disk-backup2.img
In testdisk, select a media (e.g. Disk disk-backup2.img) and proceed, then select the partition table type (usually Intel or EFI GPT) and analyze -> quick search. If partitions are found, select one or more and write the partition structure to disk.

Recover files
Finally, let's try to recover the actual files from the image.
testdisk
If the partition table recovery was successful, try to undelete files from within testdisk. Go back to the main menu and select advanced -> undelete.
photorec
Another option is to use the photorec tool that comes with testdisk. It searches the image for known file structures directly, ignoring possible filesystems:
photorec sdb2.img
You have to select either a particular partition or the whole disk, a file system (ext2/ext3 vs. other) and a destination for recovered files. Last time, photorec was my last resort, as the fat32 filesystem was so damaged that testdisk detected only an empty filesystem.
sleuthkit
sleuthkit also ships with tools to undelete files. I tried fls and icat. fls searches for and lists files and directories in the image, searching for parts of the former filesystem. icat copies files by their inode number. Last time I tried, fls and icat didn't recover any new files compared to photorec. Still, for the sake of completeness, I document what I did. First, I invoked fls in order to search for files:
fls -f fat32 -o 8064 -pr disk-backup.img
Then, I tried to backup one particular file from the list:
icat -f fat32 -o 8064 disk-backup.img <INODE>
Finally, I used the recoup.pl script from Dave Henk in order to batch-recover all discovered files:
wget http://davehenk.googlepages.com/recoup.pl
chmod +x recoup.pl
vim recoup.pl
[...]
my $fullpath="~/recovery/sleuthkit/";
my $FLS="/usr/bin/fls";
my @FLS_OPT=("-f","fat32","-o","8064","-pr","-m $fullpath","-s 0");
my $FLS_IMG="~/recovery/disk-image.img";
my $ICAT_LOG="~/recovery/icat.log";
my $ICAT="/usr/bin/icat";
my @ICAT_OPT=("-f","fat32","-o","8064");
[...]
Further down, the double quotes around $fullfile needed to be replaced by single quotes (at least in my case, as $fullfile contained a subdir called '$OrphanFiles'):
system("$ICAT @ICAT_OPT $ICAT_IMG $inode > \'$fullfile\' 2>> $ICAT_LOG") if ($inode != 0);
That's it for now. Feel free to comment with suggestions on how to further improve the process of recovering data from broken disks.

17 January 2014

Steve Kemp: So I found a job.

Just to recap my life since December:
I had worked with Bytemark for seven years and left for reasons which made sense. I started working for "big corp" with a job that on paper sounded good, but ultimately turned out to be a poor fit for my tastes. I spent a month trying to decide "Is this bad, or is this just not what I'm used to?", because I was aware that there would obviously be big differences as well as little ones. At the point I realized some of the niggles could be fixed but most couldn't, I resigned rather than prolong the initial probationary training period - because I knew I wouldn't stay, and it seemed unfair and misleading to stay for the full duration of the probationary period knowing full well I'd leave the moment it concluded - and the notice period switched from seven days to one month.
A couple of people were kind enough to get in touch and discuss potential offers, both locally, remotely in the UK, and from abroad (the latter surprised me, but pleased me too). I spent a couple of days "contracting", by which I really mean doing a few favours for friends, some of whom paid me in Amazon vouchers, and some of whom paid me in beer.
e.g. I tweaked the upcoming death Knight site to handle 3000 simultaneous HTTP connections, then I upgraded some servers from Squeeze to Wheezy for some other folk.
That aside I've largely been idle for about 10 days and have now picked the company to work for - so I'm going to be a contractor with a day-rate for an American firm for the next couple of months. If that goes well then I'll become a full-time employee, hopefully.

29 January 2013

Jonathan Carter: Ubuntu Developer Summit for 13.04 (Raring)

The War on Time
Whoosh! I've been incredibly quiet on my blog for the last 2-3 months. It's been a crazy time, but I'll catch up and explain everything over the next few entries. Firstly, I'd like to get out a few details about the last Ubuntu Developer Summit that took place in Copenhagen, Denmark in October. I'm usually really good at getting my blog post out by the end of UDS or a day or two after, but this time it just flew by so incredibly fast for me that I couldn't keep up. It was a bit shorter than usual at 4 days, as opposed to the usual 5. The reason I heard for that was that people commented in previous post-UDS surveys that 5 days were too long, which is especially understandable for Canonical staff who are often in sprints (away from home) for the week before the UDS as well. I think the shorter period works well, though it might need a bit more fine-tuning; I think the summary session at the end wasn't that useful because, like me, there wasn't enough time for people to process the vast amount of data generated during UDS and give nice summaries on it. Overall, it was a great get-together of people who care about Ubuntu and also many areas of interest outside of Ubuntu.
Copenhagen, Denmark
I didn't take many photos this UDS, my camera is broken and only takes blurry pics (not my fault, I swear!). So I just ended up taking a few pictures with my phone. Go tag yourself on Google+ if you were there. One of the first interesting things I saw when arriving in Copenhagen was the hotel we stayed in. The origami-like design reminded me of the design of the Quantal Quetzal logo that is used for the current stable Ubuntu release.
The Road ahead for Edubuntu to 14.04 and beyond
Stéphane previously posted about the vision we share for Edubuntu 14.04 and beyond; this was what was mostly discussed during UDS, and how we'll approach those goals for the 13.04 release. This release will mostly focus on the Edubuntu Server aspect. If everything works out, you will be able to use the standard Edubuntu DVD to also install an Edubuntu Server system that will act as a Linux container host as well as an Active Directory compatible directory server using Samba 4. The catch with Samba 4 is that it doesn't have many administration tools for Linux yet. Stéphane has started work on a web interface for Edubuntu Server that looks quite nice already. I'm supposed to do some CSS work on it, but I have to say it looks really nice already; it's based on the MAAS service theme and Stéphane did some colour changes and fixes on it already. From the Edubuntu installer, you'll be able to choose whether this machine should act as a domain server, or whether you would like to join an existing domain. Since Edubuntu Server is highly compatible with Microsoft Active Directory, the installer will connect to it regardless of whether it's a Windows Domain or Edubuntu Domain. This should make it really easy for administrators in schools with mixed environments and where complete infrastructure migrations are planned. You will be able to connect to the same domain whether you're using Edubuntu on thin clients, desktops or tablets, and everything is controllable using the Epoptes administration tool. Many people are asking whether this is planned for Ubuntu / Ubuntu Server as well, since this could be incredibly useful in other organisations who have a domain infrastructure.
It's currently meant to be easily rebrandable, and the aim is to have it available as a general solution for Ubuntu once all the pieces work together.
Empowering Ubuntu Flavours
This cycle, Ubuntu is making some changes to the release schedule. One of the biggest changes made this cycle is that the alpha and beta releases are being dropped for the main Ubuntu product. This session was about establishing how much divergence and change the Ubuntu Flavours (Ubuntu Studio, Mythbuntu, Kubuntu, Lubuntu and Edubuntu) could have from the main release cycle. Edubuntu and Kubuntu decided to be a bit more conservative and maintain the snapshot releases. For Edubuntu it has certainly helped so far in identifying and finding some early bugs, and I'm already glad that we did that. Mythbuntu is also a notable exception, since it will now only do LTS releases. We're tempted to change Edubuntu's official policy so that the LTS releases are the main releases and treat the releases in between more like technology previews for the next LTS. It's already not such a far stretch from the truth, but we'll need to properly review and communicate that at some point.
Valve at UDS and Steam for Linux
One of the first plenaries was from Valve, where Drew Bliss talked about Steam on Linux. Steam is one of the most popular publishing and distribution systems for games, and up until recently it has only been available on Windows and Mac. Valve (the company behind Steam and many popular games such as Half-Life and Portal) are actively working on porting games to run natively on Linux as well. Some people have asked me what I think about it, since the system is essentially using a free software platform to promote a lot of non-free software. My views on this are pretty simple: I think it's an overwhelmingly good thing for Linux desktop adoption, and it's been proven to be a good thing for people who don't even play games. Since the announcement from Valve, Nvidia has already doubled performance in many cases for its Linux drivers. AMD, who have been slacking on Linux support the last few years, have beefed up their support drastically with the announcement of new drivers that were released earlier this month. This new collection of AMD drivers also adds support for a range of cards where the drivers were completely discontinued, giving new life to many older laptops and machines which would otherwise be destined for the dumpster. This benefits not only gamers, but everyone from an average office worker who wants snappy office suite performance and fast web browsing to designers who work with graphics, videos and computer aided design. Also, it means that many home users who prefer Linux-based systems would no longer need to dual-boot to Windows or OS X for their games. While Steam will actively be promoting non-free software, it more than makes up for that by the enablement it does for the free software eco-system. I think anyone who disagrees with that is somewhat of a purist and should be more willing to make compromises in order to make progress.
Ubuntu Release Changes
Last week, there was a lot of media noise stating that Ubuntu will no longer do releases and will become a rolling release except for the LTS releases. This is certainly not the case, at least not any time soon. One meme that I've noticed increasingly over the last UDSs was that there's an increasing desire to improve the LTS releases and to use the usual Ubuntu releases more and more for experimentation purposes.
I think there s more and more consensus that the current 6 month cycle isn t really optimal and that there must be a better way to get Ubuntu to the masses, it s just the details of what the better way is that leaves a lot to be figured out. There s a desire between developers to provide better support (better SRUs and backports) for the LTS releases to make it easier for people to stick with it and still have access to new features and hardware support. Having less versions between LTS releases will certainly make that easier. In my opinion it will probably take at least another 2 cycles worth of looking at all the factors from different angles and getting feedback from all the stakeholders before a good plan will have formed for the future of Ubuntu releases. I m glad to see that there is so much enthusiastic discussion around this and I m eager to see how Ubuntu s releases will continue to evolve. Lightning Talks Lightning talks are a lot like punk-rock songs. When it s good, it s really, really amazingly good and fun. When it s bad, at least it will be over soon :) Unfortunately, since it s been a few months since the UDS, I can t remember all the details of the lightning talks, but one thing that I find worth mentioning is that they re not just awesome for the topic they aim to produce (for example, the one lightning talks session I attended was on the topic of Tests in your software ), but since they are more demo-like than presentation-like, you get to learn a lot of neat tricks and cool things that you didn t know before. Every few minutes someone would do something and I d hear someone say something like Awesome! I didn t know you could do that with apt-daemon! . It s fun and educational and I hope lightning talks will continue to be a tradition at future UDSs. Social Stefano Rivera (fellow MOTU, Debianista, Capetonian, Clugger) wins the prize for person I ve seen in the most countries in one year. In 2012, I saw him in Cape Town for Scaleconf, Managua during Debconf, Oakland for a previous UDS and Copenhagen for this UDS. Sometimes when I look at silly little statistics like that I realise what a great adventure the year was! Between the meet n greet, an evening of lightning talks and the closing party (which was viking themed and pretty awesome) there was just one free evening left. I used it to gather with the Debian folk who were at UDS. It was great to see how many Debian people were attending, I think we had around a dozen or so people at the dinner and there were even more who couldn t make it since they work for Canonical or Linaro and had to attend team dinners the same evening. It was as usual, great to put some more faces to names and get to know some people better. It was also great to have a UDS with many strong technical community folk present who is willing to engage in discussion. There were still a few people who felt missing but it was less than at some previous UDSs. I also discovered my face on a few puzzles! They were a *great* idea, I saw a few people come and go to work on them during the week, they seem to have acted as good menial activities for people to fix their brains when they got fried during sessions :) 2012-10-31_14-32-28_374 Overall, this was a good and punchy UDS. I ll probably not make the next one in Oakland due to many changes in my life currently taking place (although I will remotely participate), but will probably make the one later this year, especially if it s in Europe. 
I ll also make a point of live-blogging a bit more, it s just so hard remembering all the details a few months after the fact. Thanks to everyone who contributed their piece in making it a great week!

13 January 2013

Bernhard R. Link: some signature basics

While almost everyone has already worked with cryptographic signatures, they are usually only used as black boxes, without taking a closer look. This article intends to shed some light on what happens behind the scenes. Let's take a look at a signature. In ascii-armoured form, or behind a clearsigned message, one often only sees something like this:
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAABAgAGBQJQ8qxQAAoJEH8RcgMj+wLc1QwP+gLQFEvNSwVonSwSCq/Dn2Zy
fHofviINC1z2d/voYea3YFENNqFE+Vw/KMEBw+l4kIdJ7rii1DqRegsWQ2ftpno4
BFhXo74vzkFkTVjo1s05Hmj+kGy+v9aofnX7CA9D/x4RRImzkYzqWKQPLrAEUxpa
xWIije/XlD/INuhmx71xdj954MHjDSCI+9yqfl64xK00+8NFUqEh5oYmOC24NjO1
qqyMXvUO1Thkt6pLKYUtDrnA2GurttK2maodWpNBUHfx9MIMGwOa66U7CbMHReY8
nkLa/1SMp0fHCjpzjvOs95LJv2nlS3xhgw+40LtxJBW6xI3JvMbrNYlVrMhC/p6U
AL+ZcJprcUlVi/LCVWuSYLvUdNQOhv/Z+ZYLDGNROmuciKnvqHb7n/Jai9D89HM7
NUXu4CLdpEEwpzclMG1qwHuywLpDLAgfAGp6+0OJS5hUYCAZiE0Gst0sEvg2OyL5
dq/ggUS6GDxI0qUJisBpR2Wct64r7fyvEoT2Asb8zQ+0gQvOvikBxPej2WhwWxqC
FBYLuz+ToVxdVBgCvIfMi/2JEE3x8MaGzqnBicxNPycTZqIXjiPAGkODkiQ6lMbK
bXnR+mPGInAAbelQKmfsNQQN5DZ5fLu+kQRd1HJ7zNyUmzutpjqJ7nynHr7OAeqa
ybdIb5QeGDP+CTyNbsPa
=kHtn
-----END PGP SIGNATURE-----
This is actually just a base64-encoded data stream. It can be translated to the actual byte stream using gpg's --enarmor and --dearmour commands (which can be quite useful if some tool only expects one BEGIN SIGNATURE/END SIGNATURE block but you want to include multiple signatures and cannot generate them with a single gpg invocation because the keys are stored, securely, in different places). Reading byte streams manually is not much fun, so I wrote gpg2txt some years ago, which can give you some more information. The above signature looks like the following:
89 02 1C -- packet type 2 (signature) length 540
        04 00 -- version 4 sigclass 0
        01 -- pubkey 1 (RSA)
        02 -- digest 2 (SHA1)
        00 06 -- hashed data of 6 bytes
                05 02 -- subpacket type 2 (signature creation time) length 4
                        50 F2 AC 50 -- created 1358081104 (2013-01-13 12:45:04)
        00 0A -- unhashed data of 10 bytes
                09 10 -- subpacket type 16 (issuer key ID) length 8
                        7F 11 72 03 23 FB 02 DC -- issuer 7F11720323FB02DC
        D5 0C -- digeststart 213,12
        0F FA -- integer with 4090 bits
                02 D0 [....]
Now, what does this mean? First, all gpg data (signatures, keyrings, ...) is stored as a series of blocks (which makes it trivial to concatenate public keys, keyrings or signatures). Each block has a type and a length. A single signature is a single block. If you create multiple signatures at once (by giving multiple -u to gpg), there are simply multiple blocks one after the other.

Then there is a version and a signature class. Version 4 is the current format; some really old stuff (or things wanting to be compatible with very old stuff) sometimes still uses version 3. The signature class denotes what kind of signature it is. There are roughly two signature classes: a verbatim signature (like this one), or a signature over a clearsigned message. With a clearsigned message, it is not the file itself that is hashed, but a normalized form that is supposed to be invariant under the usual modifications by mailers. (This is done so people can still read the text of a mail while the recipient can still verify it, even if there were some slight distortions on the way.)

Then come the type of the key used and the digest algorithm used for creating this signature. The digest algorithm (together with the signature class, see above) describes how the message is hashed. (You never sign a message, you only sign a hashsum; otherwise your signature would be as big as your message, and it would take ages to create a signature, as asymmetric cryptography is necessarily very slow.) This example uses SHA1, which is no longer recommended: as SHA1 has shown some weaknesses, it may get broken in the not too distant future, and then it might be possible to take this signature and claim it is the signature of something else. (If your signatures are still using SHA1, you might want to edit your key preferences and/or set a digest algorithm to use in your ~/.gnupg/gpg.conf.)

Then there is some more information about this signature: the time it was generated and the key it was generated with. Then, after the first 2 bytes of the message digest (I suppose they were added in cleartext to allow checking whether the message is OK before starting with the expensive cryptographic stuff, but they might not be checked anywhere at all), there is the actual signature. Format-wise the signature itself is the most boring part: it's simply one big number for RSA, or two smaller numbers for DSA.

One little detail is still missing: what is this "hashed data" and "unhashed data" about? If the signed digest were only a digest of the message text, then having a timestamp in the signature would not make much sense, as anyone could edit it without making the signature invalid. That's why the digest covers not only the signed message, but also parts of the information about the signature (those are the hashed parts), though not all of it (not the unhashed parts).
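
To make the packet framing concrete, here is a minimal Python sketch (my own illustration, not part of gpg2txt; the function name is made up) that parses an old-format OpenPGP packet header as described in RFC 4880, section 4.2:

def read_packet_header(data: bytes):
    """Parse an old-format OpenPGP packet header (RFC 4880, section 4.2).

    Returns (packet_type, body_length, header_length).
    """
    tag = data[0]
    if not tag & 0x80:
        raise ValueError("not an OpenPGP packet")
    if tag & 0x40:
        raise ValueError("new-format header, not handled in this sketch")
    packet_type = (tag >> 2) & 0x0F      # 2 = signature, 6 = public key, ...
    length_type = tag & 0x03
    if length_type == 0:                 # one-octet body length
        return packet_type, data[1], 2
    if length_type == 1:                 # two-octet body length
        return packet_type, (data[1] << 8) | data[2], 3
    if length_type == 2:                 # four-octet body length
        return packet_type, int.from_bytes(data[1:5], "big"), 5
    raise ValueError("indeterminate length")

# First three bytes of the signature above: 89 02 1C
print(read_packet_header(bytes([0x89, 0x02, 0x1C])))  # -> (2, 540, 3)

Running it on the first bytes of the dearmoured signature prints (2, 540, 3): packet type 2 (a signature) with a 540-byte body, matching the first line of the dump.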

4 September 2012

Thomas Koch: leaving facebook

I've never really used Facebook, so it's not too hard for me to delete my account there. The only thing that's a pity is that I found (or was found by) many people that I once met and whom I'd love to meet again some day. So I'll go through my Facebook contacts list and try to copy all contact data, especially the email addresses, and I plan to write emails to everybody every quarter or half a year about the happenings in our lives. I'd be glad to receive such emails from others too! They'll surely be read more carefully than my Facebook timeline!

This gets me to another point: I still don't have a satisfying free (as in freedom) solution to manage my contacts across devices and have them available on my web server. The same goes for image galleries. So I can understand how convenient Facebook is for many people. Still, Facebook is evil, and providing a better alternative is an important and ongoing project.

It'll be harder for me to leave Twitter. That one comes next. If only people would switch to the free and privacy-aware alternative identi.ca...

And then I finally have to provide my family with a better alternative than Google Plus for sharing baby pictures... Leaving Xing and LinkedIn doesn't worry me either; I just have to collect the addresses of my important contacts there. Less pressing is leaving Couchsurfing for bewelcome.org.

So if you want to hear news from me, just pass by www.koch.ro from time to time or put my blog's feed into your feed reader. I'll also keep my identi.ca account.

update: One tiny example of Facebook's evilness: they don't show me your real email address (anymore), only an @facebook address. So if I don't know your real email address from some other source, I would not be able to send you messages without sending them over Facebook's servers. I have read that you can change the address shown back to your real email address somewhere in the options.

16 June 2012

Vincent Bernat: GPG Key Transition Statement 2012

I am transitioning my GPG key from an old 1024-bit DSA key to a new 4096-bit RSA key. The old key will continue to be valid for some time, but I prefer all new correspondence to be encrypted with the new key. I will be making all signatures going forward with the new key. I have followed the excellent tutorial from Daniel Kahn Gillmor, which also explains why this migration is needed. The only step that I did not execute is issuing a new certification for keys I have signed in the past: I did not find any search engine to tell me which keys I have signed. Here is the signed transition statement (I have stolen it from Zack):
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256,SHA1
I am transitioning GPG keys from an old 1024-bit DSA key to a new
4096-bit RSA key.  The old key will continue to be valid for some
time, but I prefer all new correspondance to be encrypted in the new
key, and will be making all signatures going forward with the new key.
This transition document is signed with both keys to validate the
transition.
If you have signed my old key, I would appreciate signatures on my new
key as well, provided that your signing policy permits that without
reauthenticating me.
The old key, which I am transitional away from, is:
  pub   1024D/F22A794E 2001-03-23
      Key fingerprint = 5854 AF2B 65B2 0E96 2161  E32B 285B D7A1 F22A 794E
The new key, to which I am transitioning, is:
  pub   4096R/353525F9 2012-06-16 [expires: 2014-06-16]
      Key fingerprint = AEF2 3487 66F3 71C6 89A7  3600 95A4 2FE8 3535 25F9
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver keys.gnupg.net --recv-key 95A42FE8353525F9
If you have already validated my old key, you can then validate that
the new key is signed by my old key:
  gpg --check-sigs 95A42FE8353525F9
If you then want to sign my new key, a simple and safe way to do that
is by using caff (shipped in Debian as part of the "signing-party"
package) as follows:
  caff 95A42FE8353525F9
Please contact me via e-mail at <vincent@bernat.im> if you have any
questions about this document or this transition.
  Vincent Bernat
  vincent@bernat.im
  16-06-2012
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAEBCAAGBQJP3LchAAoJEJWkL+g1NSX5fV0P/iEjcLp7EOky/AVkbsHxiV30
KId7aYmcZRLJpvLZPz0xxThZq2MTVhX+SdiPcrSTa8avY8Kay6gWjEK0FtB+72du
3RxhVYDqEQtrhUmIY2jOVyw9c0vMJh4189J+8iJ5HGQo9SjFEuRrP9xxNTv3OQD5
fRTMUBMC3q1/KcuhPA8ULp4L1OS0xTksRfvs6852XDfSJIZhsYxYODWpWqLsGEcu
DhQ7KHtbOUwjwsoiURGnjwdiFpbb6/9cwXeD3/GAY9uNHxac6Ufi4J64bealuPXi
O4GgG9cEreBTkPrUsyrHtCYzg43X0q4B7TSDg27j0xm+xd+jW/d/0AlBHPXcXemc
b+pw09qLOwQWbsd6d4bx22VXI75btSFs8HwR9hKHBeOAagMHz+AVl5pLXo2rYoiH
34fR1HWqyRdT3bCt19Ys1N+d0fznsZNFOMC+l23QyptOoMz7t7vZ6GbB20ExafrW
+gi7r1sV/6tb9sYMcVV2S3XT003Uwg8PXajyOnFHxPsMoX9zsk1ejo3lxkkTZs0H
yLZtUj3iZ3yX9e2yfv3eOxitR4+bIntEbMecnTI9xJn+33QTz/pWBqg9uDosqzUo
UoQtc6WVn9x3Zsi7aneDYcp06ZdphgsyWhgiLIhQG9MAK9wKthKiZv8DqGYDOsKt
WwpQFvns33e5x4SM4KxXiEYEARECAAYFAk/ctyEACgkQKFvXofIqeU5YLwCdFhEL
P7vpUJA2zv9+dpPN5GLfBlcAn0mDGJcjJpYZl/+aXEnP/8cE0day
=0QnC
-----END PGP SIGNATURE-----
For easier access, I have also published it in text format. You can check it with:
$ gpg --keyserver keys.gnupg.net --recv-key 95A42FE8353525F9
gpg: requesting key 353525F9 from hkp server keys.gnupg.net
gpg: key 353525F9: "Vincent Bernat <bernat@luffy.cx>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
$ curl http://vincent.bernat.im/media/files/key-transition-2012.txt | \
>       gpg --verify
To avoid signing/encrypting with the old key, which shares the same email addresses as the new one, I have saved it, removed it from the keyring and added it back. The new key now comes first in both the secret and the public keyrings and will be used whenever the matching email address is requested.
$ gpg --export-secret-keys F22A794E > ~/tmp/secret
$ gpg --export F22A794E > ~/tmp/public
$ gpg --delete-secret-key F22A794E
sec  1024D/F22A794E 2001-03-23 Vincent Bernat <bernat@luffy.cx>
Delete this key from the keyring? (y/N) y
This is a secret key! - really delete? (y/N) y
$ gpg --delete-key F22A794E
pub  1024D/F22A794E 2001-03-23 Vincent Bernat <bernat@luffy.cx>
Delete this key from the keyring? (y/N) y
$ gpg --import ~/tmp/public
gpg: key F22A794E: public key "Vincent Bernat <bernat@luffy.cx>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: 3 marginal(s) needed, 1 complete(s) needed, classic trust model
gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: next trustdb check due at 2014-06-16
$ gpg --import ~/tmp/secret
gpg: key F22A794E: secret key imported
gpg: key F22A794E: "Vincent Bernat <bernat@luffy.cx>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg:       secret keys read: 1
gpg:   secret keys imported: 1
$ rm ~/tmp/public ~/tmp/secret
$ gpg --edit-key F22A794E
[...]
gpg> trust
[...]
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
I now need to gather some signatures for the new key. If this is appropriate for you, please sign the new key if you signed the old one.

2 January 2012

Thomas Koch: Perils of not switching to Git

Somebody has probably already recommended that you switch to Git, because it's the best VCS. I'd like to go a step further now and talk about the risk you're taking if you don't switch soon.
By still using SVN (if you're using CVS you're doomed anyway), you communicate the following:
Be aware that good developers today will not consider working with or for you if you're still using SVN. - And that's the risk.
Until recently I thought that Mercurial would be an acceptable alternative to Git. Then I used Mercurial for some time. It is not.

Update: The comments are not too approving, to say the least. Let's see what time will tell. In the meantime I'll attach links to this blogpost:

13 September 2010

Robert Collins: What do I do @ work?

I recently moved within Canonical from being a paid developer of Bazaar to take on a larger challenge: Technical Architect for Launchpad. It's been two months now, and it's time to put my head up out of the coal face, have a look around and regroup.

When I worked on Bazaar, every day when I started work I was working on a tool anyone can use, designed for collaboration upon sourcecode, for people writing software. This is a toolchain component right at the heart of the free software world. Bazaar and tools like it get used every day to manage, distribute and collaborate on the sourcecode that makes up the components of Ubuntu, Debian, Fedora and so forth. Every time someone new starts using Bazaar for a new free or open source project, well, I felt happy that in my small part I was helping with this revolution we're carrying out.

Launchpad is pretty similar to Bazaar in some ways. Obviously they are both free software, both are written in Python, and both are sponsored by Canonical, my employer. And they are both designed to assist in collaboration and communication between free software developers, albeit in rather different ways. Bazaar is a tool anyone can install locally, run as a command line tool, GUI, or local webserver, and use to share code either centrally (e.g. by pushing to Launchpad) or in a peer-to-peer fashion, acting as their own server. Launchpad, by contrast, is a website which (usually) folk will use as a service in their browser, or from the command line: FTP (for package building), ssh (for Bazaar branch pushing or pulling), or even local GUI programs using the Launchpad API service. This makes it more approachable for first-time collaborators, but it's less able to be used offline, and it has all the usual caveats of websites: it needs a username and password, and its availability depends on the operators, the team I'm part of. So there's a lot less room for error: if we do something wrong, the system is unavailable, and users can't just apt-get install an older release.

With Launchpad, our goal is to get all the infrastructure that open source needs out of the way, so that people can focus on their code, collaboration within their team and, almost uniquely, collaboration with other teams. As well as being open source, Launchpad is free for all open source projects to use. Ubuntu is our single biggest user: they use it for all bugtracking, translation and package building, and have a huge fraction of the total storage overhead in the database. Launchpad is a pretty nice system, so people use it, and as a result (on a technical basis) it is suffering from its own success: small corner cases in the code turn up every day or two, and code written years ago to deal with a relatively small data set now has to deal with data sets a thousand or more times larger (one table, for instance, has over 600,000,000 rows in it).

For the last two months, then, I've been working on Launchpad. As Technical Architect, I need to ensure that the things that we (users, stakeholders and developers of Launchpad) want to do are supported by the structure of the system: the platform(s) we're building on, the way we approach problems, coding standards and diagnostic tools. That sounds pretty dry and hands-off, but I'm finding it's actually very balanced. I wrote a presentation when I started the job which encapsulated the challenges I saw in front of the team on the purely technical front, and what I thought I needed to do.

I think I was about right in my expectations. On a typical day, I'll be hands-on in a problem helping get it diagnosed, talking long-term structural changes with someone around how to make things more efficient / flexible / maintainable, and writing a small patch here or there to help move things along. In the two months since I took on this challenge, we've made significant headway on the problem of performance for Launchpad: many inefficient code paths have been identified and removed, some new infrastructure has been created and is being rolled out to make individual pages faster, and we've massively increased the diagnostic data we get when things go wrong. We've introduced facilities for responding more rapidly to issues in the software (though they still have to be rolled out across the system), and I hope that over the next 4 months we'll reach the first of my performance goals: that rendering completes within the target time for any webpage in Launchpad, 99% of the time. (Note that we already meet this goal if you measure the whole system, but that measure is biased by some pages being both very frequently hit and very small.)

13 June 2010

Gunnar Wolf: World Naked Bike Ride 2010 Mexico

For the second time (the first time was in 2008; I didn't join in 2009 as I travelled to Nicaragua on that date), I took part in the World Naked Bike Ride. The WNBR is a global effort where people in ~150 cities all over the world go cycling nude on the streets of their towns, with varied demands, including: I love my bike!

One of the things I most like about the WNBR is its diversity. Not everybody goes for the same reasons. As people who read me often will know, I took part because I believe (and act accordingly!) that the bicycle is the best, most efficient vehicle in by far most of the situations we face day to day, but we need to raise awareness in everybody that the bicycle is just one more vehicle: on one side, we have the right to ride safely on the streets, like any other vehicle; on the other side, we must be responsible, safe drivers, just as we want car drivers to be. Ok, and I will admit it before anybody complains that I sound too idealistic: I took part in the WNBR because it is _tons_ of fun.

This year, we were between 300 and 500 people (depending on whom you ask). Compared to 2008, I felt less tension, more integration, more respect within the group. Of course, it is only natural in the society I live in that most of the participants were men, but the proportion of women really tends to even out. Also, many more people joined fully or partially in the nude (as nudity is not required, it is just an invitation). There was a great display of creativity: people painted with all kinds of interesting phrases and designs, some really beautiful.

Oh, one more point, important to me: this is one of the best ways to show that we bikers are not athletes or anything like that. We were people ranging from very thin to quite fat, from very young to quite old. And that is even more striking when we show our whole equipment. If we can all bike around... so can you!

Some links, with obvious nudity warnings in case you are offended by looking at innocent butts and similar stuff:

As for the sad, stupid note: 19 cyclists were placed under arrest in Morelia, Michoacán because of "faltas a la moral" (transgressions against morality), an ill-defined and often abused concept. Also, by far, most of the comments I have read from people in the media, as well as most questions we got from reporters before or after the ride, were either "why are you going nude?" (because that's the only way I'll get your attention!) or "But many people were not nude!" (nudity is not a requirement but only an option).

16 May 2010

Thomas Koch: tnt is not topgit

As I've already written, I'm working on an alternative to topgit. I made a first attempt in perl some weeks ago, but gave up after some frustrating hours. Yesterday I started again in python and had a very nice time putting together the groundwork and the first two commands.
It may be noted that I have no previous programming experience in either perl or python!
By now, I can create a patchset branch and add a patch branch to it. There's still a lot to do. For my talk at the Debian Mini Conference in Berlin next month I'd like to be able to update patch branches, export patchsets and give a status summary.
Maybe I can already find somebody who's interested in joining me on this project? The code is in my github account; however, the name will most probably change.
One reason that I've been much faster in python is the fantastic python-git library. I can only recommend it!
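
As a taste of why it speeds things up, here is a minimal sketch of creating a patch branch with it; the repository path and branch names are made up for illustration, and the calls use the current GitPython API, which may differ in detail from the python-git version of that time:

import git  # GitPython, packaged in Debian as python-git

# Open an existing repository and branch a new patch branch off a root.
repo = git.Repo(".")
root = repo.commit("upstream")                       # hypothetical root branch
patch = repo.create_head("patches/fix-build", root)  # hypothetical branch name
print(patch.name, patch.commit.hexsha)
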
In other news: I'm searching for a couch to surf in Berlin from June 7th to 12th. I prefer couchsurfing over hotels mostly to get to know nice people around the world. Please contact me if you'd like to host me for a night or two. (thomas at koch punkt ro)

2 April 2010

Thomas Koch: Design document for a patch management system on a DVCS

Dear friends of Debian,

this is my first post to Planet Debian. - The planet with the most geeky registration procedure in the known universe!

I proposed an alternative to topgit some days ago on the vcs-pkg.org list. Martin asked (and encouraged) me to give a better explanation of the idea, which I'll hereby try. Sorry for not giving any drawings, but I'm totally incapable of anything graphical.

Hopefully, I'll manage to come to the Debian Miniconf in Berlin. Then we could discuss the idea further and maybe even start implementing it. (Somebody would need to help me with my first steps in Perl then...)

The following text is available on github. Please help me expand it!

Design document for a patch management system on a DVCS
Requirements

The system to implement manages patchsets. A patchset is a set of patches with a tree-ish dependency graph between the patches. There's one distinct root of this dependency graph. Patches are managed as branches, with each branch representing a patch. Modification of a patch is done by a commit to the respective branch. A branch representing a patch as part of a patchset is called a patch branch. The patch of a patch branch is created as the diff between the root of the patch branch and its head (see the sketch after the lists below). The most important management methods are:
  • Export a patchset in different formats
    • quilt
    • a merged commit of all patches
    • a line of commits with each commit representing one patch
  • Update a patchset against an updated root.
  • Copy a patchset
  • Delete a patchset from direct visibility while preserving all history about it
  • Hide and unhide a patchset from direct visibility
Additional requirements:
  • The system should be implementable on top of GIT, Mercurial and eventually Bazaar.
  • The system must easily cope with multiple different and independent patchsets.
  • All information about a patchset must be encoded in one distinct branch. Publishing this one branch must be sufficient to allow somebody else to recreate the patchset with all of its patchbranches.
  • The system should not rely on the presence of hooks.
  • The system should not require the addition of management files in patch branches (like .topmsg and .topdeps in topgit)
  • The system must be easy to understand for a regular user of the underlying DVCS.
  • The implementation may allow a patchset to depend on another patchset(s).
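
The following minimal GitPython sketch illustrates the patch-creation rule stated above; the patch branch name is taken from the example branches file further down, and the root name is a hypothetical stand-in:

import git

repo = git.Repo(".")
root = repo.commit("upstream")  # hypothetical patchset root
branch = repo.heads["debian/use-debian-jars-in-build-xml"]

# The patch is the diff from the point where the patch branch forked off
# the root up to the branch head.
base = repo.merge_base(root, branch.commit)[0]
print(repo.git.diff(base.hexsha, branch.commit.hexsha))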

implementation
patchset meta branch

A patchset meta branch holds all information about one patchset. First, it holds references to the top commits of all patch branches, in the form of parent references of commits. Thus pushing the patchset meta branch automatically also pushes all commits of all patch branches. Secondly, the patchset meta branch contains meta information about the patchset. This meta information is:
  • The names of all patch branches together with the most recent commit identifier of a particular patch branch. Let's save this information in a file called branches.
  • A message for each patch branch that explains the patch. These messages can be saved in the file tree as msg/${PATCH-BRANCH-NAME}
  • References to the dependencies of the patch (other patches of the same patchset or the root of the patchset). This is also encoded in the file branches.
Since the patchset meta branch holds all this information, it is possible to delete all patch branches and recreate them from it. Although the commits of the patchset meta branch hold references to the patch branches, its file tree does not need to contain any files from the referenced patches. This may confuse the underlying DVCS, but the patchset meta branch is not meant to be directly inspected.
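
A minimal sketch of how such a meta commit could be created with GitPython (the patch branch names come from the example branches file below; the "patchset-meta" branch name and the overall flow are illustrative assumptions, not a prescribed implementation):

import git
from git import Commit

repo = git.Repo(".")

# Stage the meta files (branches, msg/...) and write them as a tree.
repo.index.add(["branches"])
tree = repo.index.write_tree()

# Parents: the previous meta commit plus the head of every patch branch,
# so pushing the meta branch transitively pushes all patch branch commits.
parents = [repo.heads["patchset-meta"].commit,
           repo.heads["upstream-jira/HDFS-1234"].commit,
           repo.heads["upstream-jira/MAP-REDUCE-007"].commit]
meta = Commit.create_from_tree(repo, tree, "update patchset meta", parents)
repo.heads["patchset-meta"].commit = meta  # advance the meta branch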

The branches file

A branches file for a fictitious patchset could look like this:
# patch branches without an explicit dependency depend on the root of the
# patchset tree
# A root can be given as either a fixed commit (seen here), a branch or a tag.
# A fixed commit or tag is useful to maintain a patchset against an older
# upstream version
ROOT: 6a8589de32d490806ab86432a3181370e65953ca
# A tag as a dependency
#ROOT: upstream/0.1.2
# A branch as a dependency
#ROOT: upstream
# A regular patch with its name and last commit
BRANCH: debian/use-debian-jars-in-build-xml 4bab542c261ff1a1ae87151c3536f19ef02d7937
# two other regular patches
BRANCH: upstream-jira/HDFS-1234 a8e4af13106582ca1bfbbcaeb0537f73faf46d87
BRANCH: upstream-jira/MAP-REDUCE-007 e3426bcbcb2537478f851edcf6eb04b34f6c7106
# This patch depends on the above two patches
# The sha1 below the dependency patches references a merge commit of the two
# dependencies
BRANCH: upstream-jira/HDFS-008 517851aa829d77e09bc5e59985fed1b0aa339cc6
DEPENDENCIES:
  upstream-jira/HDFS-1234
  upstream-jira/MAP-REDUCE-007
    cc294f2e4773c4ff71efb83648a0e16835fca841
# A patch branch that belongs to the patchset, but won't get exported (yet)
BRANCH: upstream-jira/HDFS-9999 74257905azgsa4689bc5e59985fed1b0aa339cc6
BRANCH-FLAGS: noexport


