Search Results: "Sean Whitton"

20 June 2021

Sean Whitton: transient-caps-lock

If you're writing a lot of Common Lisp and you want to follow the convention of using all uppercase to refer to symbols in docstrings, comments etc., you really need something better than the shift key. Similarly if you're writing C and you have VARIOUS_LONG_ENUMS. The traditional way is a caps lock key. But that means giving up a whole keyboard key, all of the time, just for block capitalisation, which one hardly uses outside of programming. So a better alternative is to come up with some Emacs thing to get block capitalisation, as Emacs key binding is much more flexible than system keyboard layouts, and can let us get block capitalisation without giving up a whole key. The simplest thing would be to bind some sequence of keys to just toggle caps lock. But I came up with something a bit fancier. With the following, you can type M-C, and then you get block caps until the point at which you've probably finished typing your symbol or enum name.
(defun spw/transient-caps-self-insert (&optional n)
  (interactive "p")
  (insert-char (upcase last-command-event) n))
(defun spw/activate-transient-caps ()
  "Activate caps lock while typing the current whitespace-delimited word(s).
This is useful for typing Lisp symbols and C enums which consist
of several all-uppercase words separated by hyphens and
underscores, such that M-- M-u after typing will not upcase the
whole thing."
  (interactive)
  (let* ((map (make-sparse-keymap))
         (deletion-commands '(delete-backward-char
                              paredit-backward-delete
                              backward-kill-word
                              paredit-backward-kill-word
                              spw/unix-word-rubout
                              spw/paredit-unix-word-rubout))
         (typing-commands (cons 'spw/transient-caps-self-insert
                                deletion-commands)))
    (substitute-key-definition 'self-insert-command
                               #'spw/transient-caps-self-insert
                               map
                               (current-global-map))
    (set-transient-map
     map
     (lambda ()
       ;; try to determine whether we are probably still about to try to type
       ;; something in all-uppercase
       (and (member this-command typing-commands)
            (not (and (eq this-command 'spw/transient-caps-self-insert)
                      (= (char-syntax last-command-event) ?\ )))
            (not (and (or (bolp) (= (char-syntax (char-before)) ?\ ))
                      (member this-command deletion-commands))))))))
(global-set-key "\M-C" #&aposspw/activate-transient-caps)

15 May 2021

Sean Whitton: pinebookpro

I recently bought a Pinebook Pro. This was mainly out of general interest, but also because I wanted to have a spare portable computer. When I was recently having some difficulty with my laptop not charging, I realised that I am dependent on having access to Emacs, notmuch.el and my usual git repositories in the way that most people are dependent on their smartphones: all the info I need to get things done is in there, and it's very disabling not to have it. So, good to have a spare.

I decided to get the machine running the hard way, and have been working to add a facility to install the device-specific bootloader to Consfigurator. It has been good to learn about how ARM machines boot. The only really hard part turned out to be coming up with the right abstractions within Consfigurator, thanks to the hard work of the Debian U-Boot maintainers. This left me with a chroot and a corresponding disk image, properly partitioned and with the bootloader installed.

It was only then that the difficulties began: getting a kernel and initrd combination which can output to the Pinebook Pro's screen and take input from its keyboard is not really straightforward yet, but that's required for inputting disk encryption passwords, which are required on portable devices. I don't have the right hardware to make a serial connection to the machine, so all this took a lot of trial and error. I've ended up using Manjaro's patched upstream kernel build for now, because that compiles in the right drivers, and debugging an initrd without a serial connection is far too inefficient.

What I keep having to remind myself is that this device isn't really a laptop in the usual sense: it's a single board computer that's powering several pieces of hardware which together roughly constitute a laptop. I think something which epitomises this is how the power light doesn't come on when you hit the power button, but only when the bootloader or operating system kernel thinks to turn on the LED. You start up this SBC and it loads up some software, and then once it has got itself going, several seconds later, that software starts turning on the screen, keyboard, power LEDs etc. Whereas on an ordinary laptop, it's more that you turn on the keyboard, screen, power LEDs etc. all at once, and then /they/ go off and load some software. Of course this description is nothing like what's actually going on, but it's my attempt to capture how it feels as a user, who is installing operating systems, but otherwise treating the laptop's hardware, including things like boot ROMs, as a black box. There are tangible differences between what it is like to do that with an ordinary laptop and with the Pinebook Pro.

Thanks to Vagrant Cascadian for all the work on U-Boot in Debian and for help on IRC, Cyril Brulebois for help with crossbuilding, and Birger Schacht for a useful blog post.

8 April 2021

Sean Whitton: consfigurator-live-build

One of my goals for Consfigurator is to make it capable of installing Debian to my laptop, so that I can stop booting to GRML and manually partitioning and debootstrapping a basic system, only to then turn to configuration management to set everything else up. My configuration management should be able to handle the partitioning and debootstrapping, too.

The first stage was to make Consfigurator capable of debootstrapping a basic system, chrooting into it, and applying other arbitrary configuration, such as installing packages. That's been in place for some weeks now. It's sophisticated enough to avoid starting up newly installed services, but I still need to add some bind mounting.

Another significant piece is teaching Consfigurator how to partition block devices. That's quite tricky to do in a sufficiently general way: I want to cleanly support various combinations of LUKS, LVM and regular partitions, including populating /etc/crypttab and /etc/fstab. I have some ideas about how to do it, but it'll probably take a few tries to get the abstractions right.

Let's imagine that code is all in place, such that Consfigurator can be pointed at a block device and it will install a bootable Debian system to it. Then to install Debian to my laptop I'd just need to take my laptop's disk drive out and plug it into another system, and run Consfigurator on that system, as root, pointed at the block device representing my laptop's disk drive. For virtual machines, it would be easy to write code which loop-mounts an empty disk image, and then Consfigurator could be pointed at the loop-mounted block device, thereby making the disk image file bootable. This is adequate for virtual machines, or small single-board computers with tiny storage devices (not that I actually use any of those, but I want Consfigurator to be able to make disk images for them!). But it's not much good for my laptop. I casually referred to taking out my laptop's disk drive and connecting it to another computer, but this would void my laptop's warranty. And Consfigurator would not be able to update my laptop's NVRAM, as is needed on UEFI systems.

What's wanted here is a live system which can run Consfigurator directly on the laptop, pointed at the block device representing its physical disk drive. Ideally this live system comes with a chroot with the root filesystem for the new Debian install already built, so that network access is not required, and all Consfigurator has to do is partition the drive and copy in the contents of the chroot. The live system could be set up to automatically start doing that upon boot, but another option is to just make Consfigurator itself available to be used interactively. The user boots the live system, starts up Emacs, starts up Lisp, and executes a Consfigurator deployment, supplying the block device representing the laptop's disk drive as an argument to the deployment. Consfigurator goes off and partitions that drive, copies in the contents of the chroot, and executes grub-install to make the laptop bootable. This is also much easier to debug than a live system which tries to start partitioning upon boot. It would look something like this:
    ;; melete.silentflame.com is a Consfigurator host object representing the
    ;; laptop, including information about the partitions it should have
    (deploy-these :local ...
      (chroot:partitioned-and-installed
        melete.silentflame.com "/srv/chroot/melete" "/dev/nvme0n1"))
Now, building live systems is a fair bit more involved than installing Debian to a disk drive and making it bootable, it turns out. While I want Consfigurator to be able to completely replace the Debian Installer, I decided that it is not worth trying to reimplement the relevant parts of the Debian Live tool suite, because I do not need to make arbitrary customisations to any live systems. I just need to have some packages installed and some files in place. Nevertheless, it is worth teaching Consfigurator how to invoke Debian Live, so that the customisation of the chroot which isn't just a matter of passing options to lb_config(1) can be done with Consfigurator. This is what I've ended up with in Consfigurator's source code:
(defpropspec image-built :lisp (config dir properties)
  "Build an image under DIR using live-build(7), where the resulting live
system has PROPERTIES, which should contain, at a minimum, a property from
CONSFIGURATOR.PROPERTY.OS setting the Debian suite and architecture.  CONFIG
is a list of arguments to pass to lb_config(1), not including the '-a' and
'-d' options, which Consfigurator will supply based on PROPERTIES.
This property runs the lb_config(1), lb_bootstrap(1), lb_chroot(1) and
lb_binary(1) commands to build or rebuild the image.  Rebuilding occurs only
when changes to CONFIG or PROPERTIES mean that the image is potentially
out-of-date; e.g. if you just add some new items to PROPERTIES then in most
cases only lb_chroot(1) and lb_binary(1) will be re-run.
Note that lb_chroot(1) and lb_binary(1) both run after applying PROPERTIES,
and might undo some of their effects.  For example, to configure
/etc/apt/sources.list, you will need to use CONFIG not PROPERTIES."
  (:desc (declare (ignore config properties))
         #?"Debian Live image built in $ dir ")
  (let* (...)
    ;; ...
     (eseqprops
      ;; ...
      (on-change
          (eseqprops
           (on-change
               (file:has-content ,auto/config ,(auto/config config) :mode #o755)
             (file:does-not-exist ,@clean)
             (%lbconfig ,dir)
             (%lbbootstrap t ,dir))
           (%lbbootstrap nil ,dir)
           (deploys ((:chroot :into ,chroot)) ,host))
        (%lbchroot ,dir)
        (%lbbinary ,dir)))))
Here, %lbconfig is a property running lb_config(1), %lbbootstrap one which runs lb_bootstrap(1), etc. Those properties all just change directory to the right place and run the command, essentially, with a little extra code to handle failed debootstraps and the like. The ON-CHANGE and ESEQPROPS combinators work together to sequence the interaction of the Debian Live suite and Consfigurator. This way, we only rebuild the chroot if the configuration changed, and we only rebuild the image if the chroot changed. Now over in my personal consfig:
(try-register-data-source
 :git-snapshot :name "consfig" :repo #P"src/cl/consfig/" ...)
(defproplist hybrid-live-iso-built :lisp ()
  "Build a Debian Live system in /srv/live/spw.
Typically this property is not applied in a DEFHOST form, but rather run as
needed at the REPL.  The reason for this is that otherwise the whole image will
get rebuilt each time a commit is made to my dotfiles repo or to my consfig."
  (:desc "Sean's Debian Live system image built")
  (live-build:image-built.
      '("--archive-areas" "main contrib non-free" ...)
      "/srv/live/spw"
    (os:debian-stable "buster" :amd64)
    (basic-props)
    (apt:installed "whatever" "you" "want")
    (git:snapshot-extracted "/etc/skel/src" "dotfiles")
    (file:is-copy-of "/etc/skel/.bashrc" "/etc/skel/src/dotfiles/.bashrc")
    (git:snapshot-extracted "/root/src/cl" "consfig")))
The first argument to LIVE-BUILD:IMAGE-BUILT. is additional arguments to lb_config(1). The third argument onwards are the properties for the live system. The cool thing is GIT:SNAPSHOT-EXTRACTED: the calls to this ensure that a copy of my Emacs configuration and my consfig end up in the live image, ready to be used interactively to install Debian, as described above. I'll need to add something like (chroot:host-chroot-bootstrapped melete.silentflame.com "/srv/chroot/melete") too. As with everything Consfigurator-related, Joey Hess's Propellor is the giant upon whose shoulders I'm standing.

23 March 2021

Sean Whitton: rmsopenletter

I was shocked to learn today that Richard Stallman has been reinstated as a member of the board of the Free Software Foundation. I think this is plain inappropriate, but I cannot see how anyone who doesn't think that could fail to see the reinstatement as counterproductive. As Bradley M. Kuhn put it,
The question is whether an organization should have a designated leader who is on a sustained, public campaign advocating about an unrelated issue that many consider controversial. It really doesn't matter what your view about the controversial issue is; a leader who refuses to stop talking loudly about unrelated issues eventually creates an untenable distraction from the radical activism you're actively trying to advance. The message of universal software freedom is a radical cause; it's basically impossible for one individual to effectively push forward two unrelated controversial agendas at once. In short, the radical message of software freedom became overshadowed by RMS' radical views about sexual morality.
There is an open letter calling for the removal of the entire Board of the Free Software Foundation in response. I haven't signed the letter because the Free Software Foundation Board's vote to reinstate Stallman was not unanimous, so the call to remove all of them does not make sense to me. I agree with the open letter's call to remove Stallman from other positions of leadership. I hope that this whole situation can be resolved quickly.

13 March 2021

Sean Whitton: lisp-use-for-transpose-chars

I had thought that Emacs C-t was mainly about correcting typos. It turns out to be extremely useful when working on Lisp macros which themselves write macros. This typically involves nested quasiquotation, where you can have multiple alternating sequences of open parentheses and backticks, or of commas, quotation marks and ampersats. While you're working on it you often need to reorder these character sequences and C-t does a great job.
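To make the problem concrete, here is a small illustration (not from the original post; define-frobnicator and frob-%s are made-up names): a macro which writes a macro, producing exactly the kind of character runs described above.

;; A made-up macro-writing macro: note runs like ,', and `(, where
;; backticks, commas and quotes pile up and often need reordering.
(defmacro define-frobnicator (name)
  `(defmacro ,(intern (format "frob-%s" name)) (x)
     `(message "frobbing %s: %S" ,',(symbol-name name) ',x)))

;; (define-frobnicator foo)
;; (frob-foo (+ 1 2)) ; prints "frobbing foo: (+ 1 2)"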

10 March 2021

Sean Whitton: consfigurator 0.3.1

I'd like to briefly introduce my new project, Consfigurator:
Consfigurator is a system for declarative configuration management using Common Lisp. You can use it to configure hosts as root, deploy services as unprivileged users, build and deploy containers, and produce disc images. [not all of these are implemented yet, but the design permits them to be]

Consfigurator's design gives you a great deal of flexibility about how to control the hosts you want to configure. If there is a command you can run which will obtain input and output streams attached to an interactive POSIX sh running on the target host/container, then with a little glue code, you can use much of Consfigurator's functionality to configure that host/container. But if it is possible to get an implementation of Common Lisp started up on the host, then Consfigurator can transparently execute your deployment code over on the remote side, rather than exchanging information via POSIX sh. This lets you use the full power of Common Lisp to deploy your configuration.

Consfigurator has convenient abstractions for combining these different ways to execute your configuration on hosts with different ways of connecting to them. Connections can be arbitrarily nested. For example, to combine SSHing to a Debian machine as an unprivileged user, using sudo to become root, and then starting up a Lisp image to execute your deployment code, you would evaluate
    (deploy ((:ssh (:sudo :as "spwhitton@athena.example.com") :debian-sbcl)) athena.example.com)
Declarative configuration management systems like Consfigurator and Propellor share a number of goals with projects like the GNU Guix System and NixOS. However, tools like Consfigurator and Propellor try to layer the power of declarative and reproducible configuration on top of traditional, battle-tested unix system administration infrastructure like apt, dpkg, yum, and distro package archives, rather than seeking to replace any of those. Let's get as much as we can out of all that existing distro policy-compliant work!
Please check out the user's manual, which includes a tutorial/quick start guide, and come join us in #consfigurator on irc.oftc.net. It's early days but you can already do a fair few things with Consfigurator. It's a good time to come help get all the basic properties defined!

4 March 2021

Sean Whitton: waylandmar21

While struggling with Pfizer second dose side effects yesterday, with little ability to do anything serious (so surreal to have a fever yet also certainty you're not actually ill[1]), I thought I'd try building the branch of Emacs with native Wayland support, and try starting up Sway instead of i3. I recently upgraded my laptop to Debian bullseye, as I usually do at this stage of our pre-release freeze, and was wondering whether bullseye would be the release which would enable me to switch to Wayland.

Why might I want to do this? I don't care about screen tearing and don't have any fancy monitors with absurd numbers of pixels. Previously, I had been hoping to cling on to my X11 setup for as long as possible, and switch to Wayland only once things I want to use started working worse on X11, because all the developers of those things have stopped using X11. But then after upgrading to bullseye, I found I had to forward-port an old patch to xfce4-session to prevent it from resetting SSH_AUTH_SOCK to the wrong value, and I thought to myself, maybe I could cut out some of the layers here, and maybe it'll be a bit less annoying. I have a pile of little scripts trying to glue together xfce4 and i3 to get all the functionality I need, but since there have been people who use their computers for similar purposes to me trying to make Sway useful for quite some time now, maybe there are more integrated solutions available. I have also been getting tired of things which have only ever half-worked under X, like toggling autolock off when there isn't fullscreen video playing (when I'm video conferencing on another device, I often want to prevent my laptop's screen from locking, and it works most of the time, but sometimes still locks, sigh). I have a "normalise desktop" keybinding which tries to fix recurrent issues by doing things like restarting ibus, and it would be nice to drop something so hackish.

And indeed, a lot of basic things do work way better under Sway. swayidle is clean and sane, and I was easily able to add a keybinding which inhibits locking the screen when a certain window is visible. I could bind brightness up/down keys without having to invoke xfce4-power-manager (never managed that before) and, excitingly, I could have those keys bound such that they still work when the screen is locked. I still need two old scripts which interact with the i3/sway IPC, but those two are reasonable ways to extend functionality, rather than bad hacks.

The main thing that I could not figure out is IME: various people claim online to have got typing their Asian languages working natively under Sway, not just into Xwayland windows, but I couldn't, and there is no standard way to do it yet, it would seem. Also, it seems Qt in Debian doesn't support Wayland natively, so far as I could tell. And there isn't really a drop-in replacement for dmenu yet, so you have to run that under Xwayland. A lot of this is probably going to be fixed during 2021, but the thing is, I'll be on Debian bullseye, so how it works now is probably as good as it is going to get for the next two years or so. So, wanting to get back to doing something more useful today, I reluctantly booted back into X11. I'm really looking forward to switching to Sway, and getting rid of some of my hacks, but I think I am probably going to have to wait for Debian bookworm, unless I completely run out of patience with the various X11 annoyances described above, and start furiously backporting things.
Update, later that afternoon: Newer versions of fcitx5 packages hit Debian testing within the past few days, it turns out, and the IME problem is solved! So it looks like I am slowly going to be able to migrate to Sway during the Debian bullseye lifecycle after all. How nice. Many thanks to various upstreams and those who have been working on these packages in Debian.

[1] Okay, I suppose I could have caught the disease a few days ago and it became symptomatic at the same time I was experiencing the side effects.

31 January 2021

Sean Whitton: gemini:// space

Recently I have become curious about the Gemini Project and the content that people have made available to be retrieved over the gemini:// protocol. I'm not convinced by the arguments for not just using http, and mostly it's just that I typically find more things that I am interested in casually reading through on people's gemlogs than I would on, say, reddit, and similar aggregators. But presumably advocates of gemini:// and the text/gemini format would argue that it's the various respects in which it differs from the web that make geminispace conducive to the production of the sort of content you find there. So I'm remaining open minded about the possibility that having a completely separate protocol is important, and not just an annoyance because rss2email doesn't work and I had to spend time writing gmi2email.

I now have a games console at home for the first time in some years, which I bought in response to the ongoing pandemic, and one thing that I have noticed is that using it feels like being offline in a way that playing games on a regular computer never would. It has a WiFi connection but it doesn't have a web browser, and I am glad that using it provides an opportunity to be disconnected from the usual streams of information. And perhaps something similar ought to be said in favour of how the Gemini project does not just use http. There is, perhaps, a positive psychological effect induced by making the boundary between text/gemini and the web as hard as it is made by using gemini:// rather than http.

Something about which I find myself much more sceptical is how the specification for gemini:// and text/gemini is not extensible. Advocates of Gemini have this idea that they can't include, say, a version number in the protocol, because the extensibility of the web is what has led to the problems they think it has, so they want to make it impossible. Now on the one hand perhaps the people behind Gemini are in the best position that anyone is in to come up with a spec which they will finalise and render effectively unchangeable, because a lot of them have been using Gopher for decades, and so they have enough experience to be able to say exactly what Gopher is missing, and be confident that they've not missed anything. But on the other hand, Gemini is one technological piece in attempts to make a version of the Internet which is healthier for humans, the so-called small Internet movement, and maybe there will be new ideas about how the small Internet should be which would benefit from a new version of the Gemini specification. So it seems risky to lock in to one version. Comments on Hacker News.

8 November 2020

Sean Whitton: Combining repeat and repeat-complex-command

In Emacs, you can use C-x z to repeat the last command you input, and subsequently you can keep tapping the z key to execute that command again and again. If the command took minibuffer input, however, you'll be asked for that input again. For example, suppose you type M-z : to delete through the next colon character. If you want to keep going and delete through the next few colons, you would need to use C-x z : z : z : etc., which is pretty inconvenient. So there's also C-x ESC ESC RET or C-x M-: RET, which will repeat the last command which took minibuffer input, as if you'd given it the same minibuffer input. So you could use M-z : C-x M-: RET C-x M-: RET etc., but then you might as well just keep typing M-z : over and over. It's also quite inconvenient to have to remember whether you need to use C-x z or C-x M-: RET. I wanted to come up with a single command which would choose the correct repetition method. It turns out it's a bit involved, but here's what I came up with. You can use this under the GPL-3 or any later version published by the FSF. Assumes lexical binding is turned on for the file you have this in.
;; Adapted from `repeat-complex-command' as of November 2020
(autoload 'repeat-message "repeat")
(defun spw/repeat-complex-command-immediately (arg)
  "Like `repeat-complex-command' followed immediately by RET."
  (interactive "p")
  (if-let ((newcmd (nth (1- arg) command-history)))
      (progn
        (add-to-history 'command-history newcmd)
        (repeat-message "Repeating %S" newcmd)
        (apply #'funcall-interactively
               (car newcmd)
               (mapcar (lambda (e) (eval e t)) (cdr newcmd))))
    (if command-history
        (error "Argument %d is beyond length of command history" arg)
      (error "There are no previous complex commands to repeat"))))
(let (real-last-repeatable-command)
  (defun spw/repeat-or-repeat-complex-command-immediately ()
    "Call  repeat&apos or  spw/repeat-complex-command-immediately&apos as appropriate.

Note that no prefix argument is accepted because this has
different meanings for `repeat' and for
`spw/repeat-complex-command-immediately', so that might cause surprises."
    (interactive)
    (if (eq last-repeatable-command this-command)
        (setq last-repeatable-command real-last-repeatable-command)
      (setq real-last-repeatable-command last-repeatable-command))
    (if (eq last-repeatable-command (caar command-history))
        (spw/repeat-complex-command-immediately 1)
      (repeat nil))))
;; `suspend-frame' is bound to both C-x C-z and C-z
(global-set-key "\C-z" #'spw/repeat-or-repeat-complex-command-immediately)

23 July 2020

Sean Whitton: keyboardingupdates

Marks and mark rings in GNU Emacs

I recently attempted to answer the question of whether experienced Emacs users should consider partially or fully disabling Transient Mark mode, which is (and should be) the default in modern GNU Emacs. That blog post was meant to be as information-dense as I could make it, but now I'd like to describe the experience I have been having after switching to my custom pseudo-Transient Mark mode, which is labelled mitigation #2 in my older post.

In summary: I feel like I've uncovered a whole editing paradigm lying just beneath the surface of the editor I've already been using for years. That is cool and enjoyable in itself, but I think it's also helped me understand other design decisions about the basics of the Emacs UI better than before; in particular, the ideas behind how Emacs chooses where to display buffers, which were very frustrating to me in the past. I am now regularly using relatively obscure commands like C-x 4 C-o. I see it! It all makes sense now! I would encourage everyone who has never used Emacs without Transient Mark mode to try turning it off for a while, either fully or partially, just to see what you can learn. It's fascinating how it can come to seem more convenient and natural to pop the mark just to go back to the end of the current line after fixing up something earlier in the line, even though doing so requires pressing two modified keys instead of just C-e.

Eshell

I was amused to learn some years ago that someone was trying to make Emacs work as an X11 window manager. I was amazed and impressed to learn, more recently, that the project is still going and a fair number of people are using it. Kudos! I suspect that the basic motivation for such projects is that Emacs is a virtual Lisp machine, and it has a certain way of managing visible windows, and people would like to be able to bring both of those to their X11 window management. However, I am beginning to suspect that the intrinsic properties of Emacs buffers are tightly connected to the ways in which Emacs manages visible windows, and the intrinsic properties of Emacs buffers are at least as fundamental as its status as a virtual Lisp machine. Thus I am not convinced by the idea of trying to use Emacs' ways of handling visible windows to handle windows which do not contain Emacs buffers. (but it's certainly nice to learn it's working out for others)

The more general point is this. Emacs buffers are as fundamental to Emacs as anything else is, so it seems unlikely to be particularly fruitful to move something typically done outside of Emacs into Emacs, unless that activity fits naturally into an Emacs buffer or buffers. Being suited to run on a virtual Lisp machine is not enough. What could be more suited to an Emacs buffer, however, than a typical Unix command shell session? By this I mean things like running commands which produce text output, and piping this output between commands and into and out of files. Typically the commands one enters are sort of like tiny programs in themselves, even if there are no pipes involved, because you have to spend time determining just what options to pass to achieve what you want. It is great to have all your input and output available as ordinary buffer text, navigable just like all your other Emacs buffers.

Full screen text user interfaces, like top(1), are not the sort of thing I have in mind here. These are suited to terminal emulators, and an Emacs buffer makes a poor terminal emulator: what you end up with is a sort of terminal emulator emulator. Emacs buffers and terminal emulators are just different things. These sorts of thoughts lead one to Eshell, the Emacs Shell. Quoting from its documentation:
The shell's role is to make [system] functionality accessible to the user in an unformed state. Very roughly, it associates kernel functionality with textual commands, allowing the user to interact with the operating system via linguistic constructs. Process invocation is perhaps the most significant form this takes, using the kernel's `fork' and `exec' functions. Emacs is a user application, but it does make the functionality of the kernel accessible through an interpreted language, namely, Lisp. For that reason, there is little preventing Emacs from serving the same role as a modern shell. It too can manipulate the kernel in an unpredetermined way to cause system changes. All it's missing is the shell-ish linguistic model.
Eshell has been working very well for me for the past month or so, for, at least, Debian packaging work, which is very command shell-oriented (think tools like dch(1)). The other respects in which Eshell is tightly integrated with the rest of Emacs are icing on the cake. In particular, Eshell can transparently operate on remote hosts, using TRAMP. So when I need to execute commands on Debian's ftp-master server to process package removal requests, I just cd /ssh:fasolo: in Eshell. Emacs takes care of disconnecting and connecting to the server when needed; there is no need to maintain a fragile SSH connection and a shell process (or anything else) running on the remote end. Or I can cd /ssh:athena\|sudo:root@athena: to run commands as root on the webserver hosting this blog, and, again, the text of the session survives on my laptop, and may be continued at my leisure, no matter whether athena reboots, or I shut my laptop and open it up again the next morning. And of course you can easily edit files on the remote host.

Sean Whitton: Kinesis Advantage 2 for heavy Emacs users

A little under two months ago I invested in an expensive ergonomic keyboard, a Kinesis Advantage 2, and set about figuring out how to use it most effectively with Emacs. The default layout for the keyboard is great for strong typists who control their computer mostly with their mouse, but less good for Emacs users, who are strong typists that control their computer mostly with their keyboard. It took me several tries to figure out where to put the ctrl, alt, backspace, delete, return and spacebar keys, and aside from one forum post I ran into, I haven't found anyone online who came up with anything much like what I've come up with, so I thought I should probably write up a blog post.

The mappings

Sequences of two modified keys on different halves of the keyboard

It is desirable to input sequences like C-x C-o without switching which hand is holding the control key. This requires one-handed chording, but this is treacherous when the modifier keys are not under the thumbs, because you might need to press the modified key with the same finger that's holding the modifier! Fortunately, most or all sequences of two keys modified by ctrl or alt/meta, where each of the two modifier keys is typed by a different hand, begin with C-c, C-x or M-g, and the left hand can handle each of these on its own. This leaves the right hand completely free to hit the second modified key while the left hand continues to hold down the modifier.

My rebindings for ordinary keyboards

I have some rebindings to make Emacs usage more ergonomic on an ordinary keyboard. So far, my Kinesis Advantage setup is close enough to that setup that I'm not having difficulty switching back and forth from my laptop keyboard. The main difference is for sequences of two modified keys on different halves of the keyboard: which of the two modified keys is easiest to type as a one-handed chord is different on the Kinesis Advantage than on my laptop keyboard. At this point, I'm executing these sequences without any special thought, and they're rare enough that I don't think I need to try to determine what would be the most ergonomic way to handle them.

5 June 2020

Sean Whitton: spacecadetrebindings

I've been less good at taking adequate typing breaks during the lockdown, and I've become concerned about how much chording my left hand does on its own during typical Emacs usage, with caps lock rebound to control, as I've had it for years. I thought that now was as good a time as any to do something drastic about this.

Here are my rebindings:

Optional extras:

This has the following advantages:

The main disadvantage, aside from an adjustment period when I feel that someone has inserted a massive marshmallow between me and my computer, is that Ctrl-Alt combinations are a bit difficult; in Emacs, C-M-SPC is hard to do without. However I think I've found a decent way to do it (thumb on control, curled ring finger on alt, possibly little finger on shift for Emacs' infamous C-M-S-v standard binding).

30 May 2020

Sean Whitton: GNU Emacs' Transient Mark mode

Something I've found myself doing as the pandemic rolls on is picking out and (re-)reading through sections of the GNU Emacs manual and the GNU Emacs Lisp reference manual. This has got me (too) interested in some of the recent history of Emacs development, and I did some digging into archives of emacs-devel from 2008 (15M mbox) regarding the change to turn Transient Mark mode on by default and set mark-even-if-inactive to true by default in Emacs 23.1. It's not always clear which objections to turning on Transient Mark mode by default take into account the mark-even-if-inactive change. I think that turning on Transient Mark mode along with mark-even-if-inactive is a good default. The question that remains is whether the disadvantages of Transient Mark mode are significant enough that experienced Emacs users should consider altering Emacs' default behaviour to mitigate them. Here's one popular blog arguing for some mitigations.

How might Transient Mark mode be disadvantageous? The suggestion is that it makes using the mark for navigation rather than for acting on regions less convenient:
  1. setting a mark just so you can jump back to it (i) is a distinct operation you have to think of separately; and (ii) requires two keypresses, C-SPC C-SPC, rather than just one keypress
  2. using exchange-point-and-mark activates the region, so to use it for navigation you need to use either C-u C-x C-x or C-x C-x C-g, neither of which are convenient to type, or else it will be difficult to set regions at the place you've just jumped to because you'll already have one active.
There are two other disadvantages that people bring up which I am disregarding. The first is that it makes it harder for new users to learn useful ways in which to use the mark when it's deactivated. This happened to me, but it can be mitigated without making any behavioural changes to Emacs. The second is that the visual highlighting of the region can be distracting. So far as I can tell, this is only a problem with exchange-point-and-mark, and it's subsumed by the problem of that command actually activating the region. The rest of the time, Emacs' automatic deactivation of the region seems sufficient.

How might disabling Transient Mark mode be disadvantageous? When Transient Mark mode is on, many commands will do something usefully different when the mark is active. The number of commands in Emacs which work this way is only going to increase now that Transient Mark mode is the default. If you disable Transient Mark mode, then to use those features you need to temporarily activate Transient Mark mode. This can be fiddly and/or require a lot of keypresses, depending on exactly where you want to put the region.

Without being able to see the region, it might be harder to know where it is. Indeed, this is one of the main reasons for wanting Transient Mark mode to be the default, to avoid confusing new users. I don't think this is likely to affect experienced Emacs users often, however, and on occasions when more precision is really needed, C-u C-x C-x will make the region visible. So I'm not counting this as a disadvantage.

How might we mitigate these two sets of disadvantages? Here are the two middle grounds I'm considering.

Mitigation #1: Transient Mark mode, but hack C-x C-x behaviour
(defun spw/exchange-point-and-mark (arg)
  "Exchange point and mark, but reactivate mark a bit less often.

Specifically, invert the meaning of ARG in the case where
Transient Mark mode is on but the region is inactive."
  (interactive "P")
  (exchange-point-and-mark
   (if (and transient-mark-mode (not mark-active))
       (not arg)
     arg)))
(global-set-key [remap exchange-point-and-mark] 'spw/exchange-point-and-mark)
We avoid turning Transient Mark mode off, but mitigate the second of the two disadvantages given above. I can't figure out why it was thought to be a good idea to make C-x C-x reactivate the mark and require C-u C-x C-x to use the action of exchanging point and mark as a means of navigation. There needs to be a binding to reactivate the mark, but in roughly ten years of having Transient Mark mode turned on, I've found that the need to reactivate the mark doesn't come up often, so the shorter and longer bindings seem the wrong way around. Not sure what I'm missing here.

Mitigation #2: disable Transient Mark mode, but enable it temporarily more often
(setq transient-mark-mode nil)
(defun spw/remap-mark-command (command &optional map)
  "Remap a mark-* command to temporarily activate Transient Mark mode."
  (let* ((cmd (symbol-name command))
         (fun (intern (concat "spw/" cmd)))
         (doc (concat "Call  "
                      cmd
                      "&apos and temporarily activate Transient Mark mode.")))
    (fset fun  (lambda ()
                 ,doc
                 (interactive)
                 (call-interactively #&apos,command)
                 (activate-mark)))
    (if map
        (define-key map (vector &aposremap command) fun)
      (global-set-key (vector &aposremap command) fun))))
(dolist (command '(mark-word
                   mark-sexp
                   mark-paragraph
                   mark-defun
                   mark-page
                   mark-whole-buffer))
  (spw/remap-mark-command command))
(with-eval-after-load 'org
  (spw/remap-mark-command 'org-mark-element org-mode-map)
  (spw/remap-mark-command 'org-mark-subtree org-mode-map))
;; optional
(global-set-key "\M-=" (lambda () (interactive) (activate-mark)))
;; resettle the previous occupant
(global-set-key "\C-cw" &aposcount-words-region)
Here we remove both of the disadvantages of Transient Mark mode given above, and mitigate the main disadvantage of not activating Transient Mark mode by making it more convenient to activate it temporarily. For example, this enables using C-M-SPC C-M-SPC M-( to wrap the following two function arguments in parentheses. And you can hit M-h a few times to mark some blocks of text or code, then operate on them with commands like M-% and C-/ which behave differently when the region is active.[1]

Comparing these mitigations

Both of these mitigations handle the second of the two disadvantages of Transient Mark mode given above. What remains, then, is
  1. under the effects of mitigation #1, how much of a barrier to using marks for navigational purposes is it to have to press C-SPC C-SPC instead of having a single binding, C-SPC, for all manual mark setting[2]
  2. under the effects of mitigation #2, how much of a barrier to taking advantage of commands which act differently when the region is active is it to have to temporarily enable Transient Mark mode with C-SPC C-SPC, M-= or one of the mark-* commands?
These are unknowns.[3] So I'm going to have to experiment, I think, to determine which mitigation to use, if either. In particular, I don't know whether it's really significant that setting a mark for navigational purposes and for region marking purposes are distinct operations under mitigation #1. My plan is to start with mitigation #2, because that has the additional advantage of allowing me to confirm or disconfirm my belief that not being able to see where the region is will only rarely get in my way.

  1. The idea of making the mark-* commands activate the mark comes from an emacs-devel post by Stefan Monnier in the archives linked above.
  2. One remaining possibility I'm not considering is mitigation #1 plus binding something else to do the same as C-SPC C-SPC. I don't believe there are any easily rebindable keys which are easier to type than typing C-SPC twice. And this does not deal with the two distinct mark-setting operations problem.
  3. Another way to look at this is the question of which of setting a mark for navigational purposes and activating a mark should get C-SPC and which should get C-SPC C-SPC.

3 May 2020

Sean Whitton: Processing mail from discussion lists differently

I realised this week that for some years I have been applying Inbox Zero indiscriminately to all e-mail that I receive, but this does not make sense, and has some downsides. My version of Inbox Zero works very well when applied to mail addressed directly to me, and for mail to certain mailing lists, where each post to the list might as well be addressed directly to me, in addition to the list. However, I also receive by e-mail the following things to which I now believe Inbox Zero should not be applied:

I believe that applying Inbox Zero to these sorts of things is not only incorrect but is actually harming my engagement with these mediums. Let us distinguish
  1. processing e-mail: this means applying the Inbox Zero decision procedure to incoming messages, at set times during the day (I do it once, around 4pm)
  2. browsing, and sometimes catching up, RSS articles and list mail: looking through unread items, replying if I think I have something useful to say, leaving things for later, and occasionally marking all as read if I've not had time for that group of lists lately.
These should be kept apart. The easy case to see is why you shouldn't apply the browsing/catching up approach to e-mail which should rather be processed: that's just the original wisdom of Inbox Zero. And clearly you don't want ever to have to resort to just marking all mail addressed directly to you as read.

What goes wrong, then, if you misapply the processing mentality to e-mail which should, rather, be browsed/caught up? Well, the core of Inbox Zero is deciding whether to read and reply to something right now, add it to your todo list to handle later, or decide the mail cannot be dealt with quickly but is not important enough to go on that todo list. If you apply this to RSS articles and discussion list mail, then the third option is basically ruled out, because almost nothing on mailing lists or on blogs that I follow is important enough to go on my list of Real Tasks. But then you're faced with either reading the article/post right now, or discarding it. There is no option to leave the item marked as unread and then maybe come back to it, or postpone discarding it until you've decided to catch up the group. Applying Inbox Zero to discussion lists and RSS feeds creates a false sense of urgency.

I've realised that my implicit response to this has been reluctance to subscribe to new mailing lists and feeds, because I don't have things set up to allow me to read them in a leisurely way. But then I'm missing out on discussions and writing that might be relevant and beneficial to me if I could only approach them when I'm in a frame of mind other than "time to get my inbox down to zero". This also means that subscribing to new mailing lists just in order to post something has an unreasonably high cost: introducing all mail on those lists into my Inbox Zero processing window.

So, what's needed is to make the virtual folder views which I use to read new mail correspond cleanly to the distinction between mail to be processed and mail to be browsed. I.e. for each virtual folder it should be clear whether mail there is to be processed or to be browsed. Then I can continue to use my processing views once per day, and access the browsing views at leisure. (Indeed, I've created a keybinding to cycle through the new browsing views, and another to catch them up.)

As I mentioned, some list mail ought to be processed. For example, I want to process rather than browse the debian-policy mailing list, as one of the Policy Editors. With notmuch, it's easy to include this mail in the relevant processing views (I've long had one processing view for weekdays and another for weekends). Additionally, I've used a bit of Emacs Lisp to create a dynamic uncategorised unread view which catches mail to be browsed which (i) doesn't show up in one of the other virtual folders for mail to be browsed, and (ii) doesn't show up in one of the processing views (there is a sketch of this sort of setup at the end of this post). So now subscribing to a mailing list is cheap: mail will end up in the uncategorised view, and I can decide whether to leave it there for a low traffic list; create a new virtual folder for the list (trivial as it's all just more Emacs Lisp, no actual moving of files required); or unsubscribe.

I've been experiencing nostalgia for my time in secondary school when my friends and I had all sorts of interesting stuff flowing into our mailstores, from RSS feeds to each other's blogs where we'd post both our own content and links elsewhere, RSS feeds to strangers' blogs, and e-mails sent to each other with links.
We didn't have to worry about processing anything back then, as our e-mail accounts were used for intellectual enrichment but not the completion of tasks. I think I can recapture something of that with my new virtual folders for browsing.
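As a rough sketch of the kind of virtual folder setup described above (not Sean's actual configuration; the tag names are hypothetical), notmuch.el's saved searches can encode the processing/browsing split, with the uncategorised view defined by exclusion:

;; Hypothetical tag names: "tome" for mail addressed to me, plus one
;; tag per group of lists.  The uncategorised view catches unread
;; mail matched by none of the other views.
(setq notmuch-saved-searches
      '((:name "process" :key "p"
         :query "tag:unread and tag:tome")
        (:name "browse/debian" :key "d"
         :query "tag:unread and tag:debian-lists")
        (:name "browse/feeds" :key "f"
         :query "tag:unread and tag:feeds")
        (:name "uncategorised" :key "u"
         :query "tag:unread and not (tag:tome or tag:debian-lists or tag:feeds)")))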

3 April 2020

Sean Whitton: Manifest to run Debian pre-upload tests on builds.sr.ht

Before uploading stuff to Debian, I build in a clean chroot, and then run piuparts, autopkgtest and lintian. For some of my packages this can take around an hour on my laptop, which is fairly old. Normally I don't mind waiting, but sometimes I want to put my laptop away, and then it would be good for things to be faster. It occurred to me that I could make use of my builds.sr.ht account to run these tests on more powerful hardware. This build manifest seems to work:
# BEGIN CONFIGURABLE
sources:
  - https://salsa.debian.org/perl-team/modules/packages/libgit-annex-perl.git
environment:
  source: libgit-annex-perl
  quilt:  auto
# END CONFIGURABLE
image: debian/unstable
packages:
  - autopkgtest
  - devscripts
  - dgit
  - lintian
  - piuparts
  - sbuild
tasks:
  - setup: |
      cd $source
      source_version=$(dpkg-parsechangelog -SVersion)
      echo "source_version=$source_version" >>~/.buildenv
      git deborig || origtargz
      sudo sbuild-createchroot --command-prefix=eatmydata --include=eatmydata unstable /srv/chroot/unstable-amd64-sbuild
      sudo sbuild-adduser $USER
  - build: |
      cd $source
      dgit --quilt=$quilt sbuild -d unstable --no-run-lintian
  - lintian: |
      lintian ${source}_${source_version}_multi.changes
  - piuparts: |
      sudo piuparts --no-eatmydata --schroot unstable-amd64-sbuild ${source}_${source_version}_multi.changes
  - autopkgtest: |
      autopkgtest ${source}_${source_version}_multi.changes -- schroot unstable-amd64-sbuild
And here's my script.

23 November 2017

Sean Whitton: Using Propellor to provision your Debian development laptop

sbuild is a tool used by those maintaining packages in Debian, and derived distributions such as Ubuntu. When used correctly, it can catch a lot of categories of bugs before packages are uploaded. It does this by building the package in a clean environment, and then running the package through the Lintian, piuparts, adequate and autopkgtest tools. However, configuring sbuild so that it makes use of all of these tools is cumbersome. In response to this complexity, I wrote a module for the Propellor configuration management system to prepare a system such that a user can just go ahead and run the sbuild(1) command.

This module is useful on one's development laptop: if you need to reinstall your OS, you don't have to look up the instructions for setting up sbuild again. But it's also useful on throwaway build boxes. I can instruct propellor to provision a new virtual machine to build packages with sbuild, and all the different tools mentioned above will be connected together for me.

I just uploaded Propellor version 5.1.0 to Debian unstable. The version overhauls the API and internals of the Sbuild module to take better advantage of Propellor's design. I won't get into those details in this post. What I'd like to do is demonstrate how you can set up sbuild on your own machines, using Propellor.

Getting started with Propellor

apt-get install propellor, and then propellor --init. You'll be offered two setups, options A and B. I suggest starting with option B. If you never use Propellor for anything other than provisioning sbuild, you can stick with option B. If this tutorial makes you want to check out more features of Propellor, you might consider switching to option A and importing your old configuration. Open ~/.propellor/config.hs. You will see something like this:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ mybox
        ]
-- An example host.
mybox :: Host
mybox = host "mybox.example.com" $ props
        & osDebian Unstable X86_64
        & Apt.stdSourcesList
        & Apt.unattendedUpgrades
        & Apt.installed ["etckeeper"]
        & Apt.installed ["ssh"]
        & User.hasSomePassword (User "root")
        & File.dirExists "/var/www"
        & Cron.runPropellor (Cron.Times "30 * * * *")
You'll want to customise this so that it reflects your computer. My laptop is called iris, so I might replace the above with this:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ iris
        ]
-- My laptop.
iris :: Host
iris = host "iris.silentflame.com" $ props
        & osDebian Testing X86_64
The list of lines beginning with & are the properties of the host iris. Here, I ve removed all properties except the osDebian property, which informs propellor that iris runs Debian testing and has the amd64 architecture. The effect of this is that Propellor will not try to change anything about iris. In this tutorial, we are not going to let Propellor configure anything about iris other than setting up sbuild. (The osDebian property is a pure info property, which means that it tells Propellor information about the host to which other properties might refer, but it doesn t itself change anything about iris.) Telling Propellor to configure sbuild First, add to the import lines at the top of config.hs the lines:
import qualified Propellor.Property.Sbuild as Sbuild
import qualified Propellor.Property.Schroot as Schroot
to enable use of the Sbuild module. Here is the full config for iris, which I'll go through line-by-line:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ iris
        ]
-- My laptop.
iris :: Host
iris = host "iris.silentflame.com" $ props
        & osDebian Testing X86_64
        & Apt.useLocalCacher
        & sidSchrootBuilt
        & Sbuild.usableBy (User "spwhitton")
        & Schroot.overlaysInTmpfs
        & Cron.runPropellor (Cron.Times "30 * * * *")
  where
        sidSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian Unstable X86_64
                & Sbuild.update  period  Daily
                & Sbuild.useHostProxy iris
Running Propellor to configure your laptop

Run propellor iris.silentflame.com. In this configuration, you don't need to worry about whether the hostname iris.silentflame.com actually resolves to your laptop. However, it must be possible to ssh root@localhost. This should be enough that spwhitton can:
$ sbuild -A --run-lintian --run-autopkgtest --run-piuparts foo.dsc
Further configuration

It is easy to add new schroots; for example, for building backports:
        ...
        & stretchSchrootBuilt
        ...
  where
        ...
        stretchSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian (Stable "stretch") X86_64
                & Sbuild.update  period  Daily
                & Sbuild.useHostProxy iris
You can also add additional properties to configure your chroot. Perhaps on your LAN you need sbuild to install packages via https, and you already have an apt cacher available. You can replace the apt-cacher-ng configuration like this:
  where
        sidSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian Unstable X86_64
                & Sbuild.update  period  Daily
                & Apt.mirror "https://foo.mirror/debian/"
                & Apt.installed ["apt-transport-https"]
Thanks

Thanks to Propellor's author, Joey Hess, for help navigating Propellor's type system while performing the overhaul included in version 5.1.0. Also for a conversation at DebConf17 which enabled this work by clearing some misconceptions of mine.

22 October 2017

Sean Whitton: Debian Policy call for participation -- October 2017

Here are some of the bugs against the Debian Policy Manual. In particular, there really are quite a few patches needing seconds from DDs.

Consensus has been reached and help is needed to write a patch:

#759316 Document the use of /etc/default for cron jobs
#761219 document versioned Provides
#767839 Linking documentation of arch:any package to arch:all
#770440 policy should mention systemd timers
#773557 Avoid unsafe RPATH/RUNPATH
#780725 PATH used for building is not specified
#793499 The Installed-Size algorithm is out-of-date
#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption
#823256 Update maintscript arguments with dpkg >= 1.18.5
#833401 virtual packages: dbus-session-bus, dbus-default-session-bus
#835451 Building as root should be discouraged
#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus
#845715 Please document that packages are not allowed to write outside thei
#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai
#874019 Note that the -e argument to x-terminal-emulator works like
#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#688251 Built-Using description too aggressive
#737796 copyright-format: support Files: paragraph with both abbreviated na
#756835 Extension of the syntax of the Packages-List field.
#786470 [copyright-format] Add an optional License-Grant field
#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption
#835451 Building as root should be discouraged
#845255 Include best practices for packaging database applications
#846970 Proposal for a Build-Indep-Architecture: control file field
#850729 Documenting special version number suffixes
#864615 please update version of posix standard for scripts (section 10.4)
#874090 Clarify wording of some passages
#874095 copyright-format: Use the synopsis term established in the de

Merged for the next release:

#683495 perl scripts: #!/usr/bin/perl MUST or SHOULD?
#877674 [debian-policy] update links to the pdf and other formats of the do
#878523 [PATCH] Spelling fixes

2 October 2017

Antoine Beaupré: Strategies for offline PGP key storage

While the adoption of OpenPGP by the general population is marginal at best, it is a critical component for the security community and particularly for Linux distributions. For example, every package uploaded into Debian is verified by the central repository using the maintainer's OpenPGP keys and the repository itself is, in turn, signed using a separate key. If upstream packages also use such signatures, this creates a complete trust path from the original upstream developer to users. Beyond that, pull requests for the Linux kernel are verified using signatures as well. Therefore, the stakes are high: a compromise of the release key, or even of a single maintainer's key, could enable devastating attacks against many machines. That has led the Debian community to develop a good grasp of best practices for cryptographic signatures (which are typically handled using GNU Privacy Guard, also known as GnuPG or GPG). For example, weak (less than 2048 bits) and vulnerable PGPv3 keys were removed from the keyring in 2015, and there is a strong culture of cross-signing keys between Debian members at in-person meetings. Yet even Debian developers (DDs) do not seem to have established practices on how to actually store critical private key material, as we can see in this discussion on the debian-project mailing list. That email boiled down to a simple request: can I have a "key dongles for dummies" tutorial? Key dongles, or keycards as we'll call them here, are small devices that allow users to store keys on an offline device and provide one possible solution for protecting private key material. In this article, I hope to use my experience in this domain to clarify the issue of how to store those precious private keys that, if compromised, could enable arbitrary code execution on millions of machines all over the world.

Why store keys offline?

Before we go into details about storing keys offline, it may be useful to do a small reminder of how the OpenPGP standard works. OpenPGP keys are made of a main public/private key pair, the certification key, used to sign user identifiers and subkeys. My public key, shown below, has the usual main certification/signature key (marked SC) but also an encryption subkey (marked E), a separate signature key (S), and two authentication keys (marked A) which I use as RSA keys to log into servers using SSH, thanks to the Monkeysphere project.
    pub   rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
      8DC901CE64146C048AD50FBB792152527B75921E
    uid                 [ultimate] Antoine Beaupré <anarcat@anarc.at>
    uid                 [ultimate] Antoine Beaupré <anarcat@koumbit.org>
    uid                 [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
    uid                 [ultimate] Antoine Beaupré <anarcat@debian.org>
    sub   rsa2048/B7F648FED2DF2587 2012-07-18 [A]
    sub   rsa2048/604E4B3EEE02855A 2012-07-20 [A]
    sub   rsa4096/A51D5B109C5A5581 2009-05-29 [E]
    sub   rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (sub) and identities (uid) are bound by the main certification key using cryptographic self-signatures. So while an attacker stealing a private subkey can spoof signatures in my name or authenticate to other servers, that key can always be revoked by the main certification key. But if the certification key gets stolen, all bets are off: the attacker can create or revoke identities or subkeys as they wish. In a catastrophic scenario, an attacker could even steal the key and remove your copies, taking complete control of the key, without any possibility of recovery. Incidentally, this is why it is so important to generate a revocation certificate and store it offline. So by moving the certification key offline, we reduce the attack surface on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or signature) can stay online but if they get stolen, the certification key can revoke those keys without having to revoke the main certification key as well. Note that a stolen encryption key is a different problem: even if we revoke the encryption subkey, this will only affect future encrypted messages. Previous messages will be readable by the attacker with the stolen subkey even if that subkey gets revoked, so the benefits of revoking encryption certificates are more limited.
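Generating that revocation certificate is a single GnuPG command; a minimal sketch, using the example key ID from the listing above:

    $ gpg --output revoke.asc --gen-revoke 792152527B75921E

The resulting revoke.asc should then be stored offline, separately from the key itself.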

Common strategies for offline key storage

Considering the security tradeoffs, some propose storing those critical keys offline to reduce those threats. But where exactly? In an attempt to answer that question, Jonathan McDowell, a member of the Debian keyring maintenance team, said that there are three options: use an external LUKS-encrypted volume, an air-gapped system, or a keycard.

Full-disk encryption like LUKS adds an extra layer of security by hiding the content of the key from an attacker. Even though private keyrings are usually protected by a passphrase, they are easily identifiable as a keyring. But when a volume is fully encrypted, it's not immediately obvious to an attacker there is private key material on the device. According to Sean Whitton, another advantage of LUKS over plain GnuPG keyring encryption is that you can pass the --iter-time argument when creating a LUKS partition to increase key-derivation delay, which makes brute-forcing much harder. Indeed, GnuPG 2.x doesn't have a run-time option to configure the key-derivation algorithm, although a patch was introduced recently to make the delay configurable at compile time in gpg-agent, which is now responsible for all secret key operations.

The downside of external volumes is complexity: GnuPG makes it difficult to extract secrets out of its keyring, which makes the first setup tricky and error-prone. This is easier in the 2.x series thanks to the new storage system and the associated keygrip files, but it still requires arcane knowledge of GPG internals. It is also inconvenient to use secret keys stored outside your main keyring when you actually do need to use them, as GPG doesn't know where to find those keys anymore.

Another option is to set up a separate air-gapped system to perform certification operations. An example is the PGP clean room project, which is a live system based on Debian and designed by DD Daniel Pocock to operate an OpenPGP and X.509 certificate authority using commodity hardware. The basic principle is to store the secrets on a different machine that is never connected to the network and, therefore, not exposed to attacks, at least in theory.

I have personally discarded that approach because I feel air-gapped systems provide a false sense of security: data eventually does need to come in and out of the system, somehow, even if only to propagate signatures out of the system, which exposes the system to attacks. System updates are similarly problematic: to keep the system secure, timely security updates need to be deployed to the air-gapped system. A common use pattern is to share data through USB keys, which introduce a vulnerability where attacks like BadUSB can infect the air-gapped system. From there, there is a multitude of exotic ways of exfiltrating the data using LEDs, infrared cameras, or the good old TEMPEST attack. I therefore concluded the complexity tradeoffs of an air-gapped system are not worth it. Furthermore, the workflow for air-gapped systems is complex: even though PGP clean room went a long way, it's still lacking even simple scripts that allow signing or transferring keys, which is a problem shared by the external LUKS storage approach.
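To make the --iter-time point concrete, here is a hedged sketch of preparing such a volume; the device path and the five-second delay are placeholders:

    # format with a deliberately slow key derivation (5000 ms)
    cryptsetup luksFormat --iter-time 5000 /dev/sdX1
    # open the volume and put a filesystem on it for the keyring
    cryptsetup luksOpen /dev/sdX1 offline-keys
    mkfs.ext4 /dev/mapper/offline-keys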

Keycard advantages

The approach I have chosen is to use a cryptographic keycard: an external device, usually connected through the USB port, that stores the private key material and performs critical cryptographic operations on behalf of the host. For example, the FST-01 keycard can perform RSA and ECC public-key decryption without ever exposing the private key material to the host. In effect, a keycard is a miniature computer that performs restricted computations for another host. Keycards usually support multiple "slots" to store subkeys. The OpenPGP standard specifies there are three subkeys available by default: for signature, authentication, and encryption. Finally, keycards can have an actual physical keypad to enter passwords so a potential keylogger cannot capture them, although the keycards I have access to do not feature such a keypad.

We could easily draw a parallel between keycards and an air-gapped system; in effect, a keycard is a miniaturized air-gapped computer and suffers from similar problems. An attacker can intercept data on the host system and attack the device in the same way, if not more easily, because a keycard is actually "online" (i.e. clearly not air-gapped) when connected. The advantage over a fully-fledged air-gapped computer, however, is that the keycard implements only a restricted set of operations. So it is easier to create an open hardware and software design that is audited and verified, which is much harder to accomplish for a general-purpose computer.

Like air-gapped systems, keycards address the scenario where an attacker wants to get the private key material. While an attacker could fool the keycard into signing or decrypting some data, this is possible only while the key is physically connected, and the keycard software will prompt the user for a password before doing the operation, though the keycard can cache the password for some time. In effect, it thwarts offline attacks: to brute-force the key's password, the attacker needs to be on the target system and try to guess the keycard's password, which will lock itself after a limited number of tries. It also provides for a clean and standard interface to store keys offline: a single GnuPG command moves private key material to a keycard (the keytocard command in the --edit-key interface), whereas moving private key material to a LUKS-encrypted device or air-gapped computer is more complex.

Keycards are also useful if you operate on multiple computers. A common problem when using GnuPG on multiple machines is how to safely copy and synchronize private key material among different devices, which introduces new security problems. Indeed, a "good rule of thumb in a forensics lab", according to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum personal data possible on your systems". Keycards provide the best of both worlds here: you can use your private key on multiple computers without actually storing it in multiple places. In fact, Mike Gerwitz went as far as saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.

Keycard tradeoffs

As Gerwitz hinted, there are multiple downsides to using a keycard. Another DD, Wouter Verhelst, clearly expressed the tradeoffs:
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it).

Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin cardreader, you also need to fish the reader from your backpack or wherever, etc.
"Smartcards" here refer to older OpenPGP cards that relied on the IEC 7816 smartcard connectors and therefore needed a specially-built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop and I have yet to hear of someone shoving a full laptop into a washing machine. When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But, even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key a rather expensive operation as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificates and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically. Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But, for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume. Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle. A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have them on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.

Quick keycard "howto" Getting keys onto a keycard is easy enough:
  1. Start with a temporary key to test the procedure:
        export GNUPGHOME=$(mktemp -d)
        gpg --generate-key
    
  2. Edit the key using its user ID (UID):
        gpg --edit-key UID
    
  3. Use the key command to select the first subkey, then copy it to the keycard (you can also use the addcardkey command to just generate a new subkey directly on the keycard):
        gpg> key 1
        gpg> keytocard
    
  4. If you want to move the subkey, use the save command, which will remove the local copy of the private key, so the keycard will be the only copy of the secret key. Otherwise use the quit command to save the key on the keycard, but keep the secret key in your normal keyring; answer "n" to "save changes?" and "y" to "quit without saving?" . This way the keycard is a backup of your secret key.
  5. Once you are satisfied with the results, repeat steps 1 through 4 with your normal keyring (unset GNUPGHOME first)
When a key is moved to a keycard, --list-secret-keys will show it as sec> (or ssb> for subkeys) instead of the usual sec keyword. If the key is completely missing (for example, if you moved it to a LUKS container), the # sign is used instead. If you need to use a key from a keycard backup, you simply do gpg --card-edit with the key plugged in, then type the fetch command at the prompt to fetch the public key that corresponds to the private key on the keycard (which stays on the keycard). This is the same procedure as the one to use the secret key on another computer.
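A hedged sketch of that procedure on a second computer, with the keycard plugged in (the fetch command relies on a public-key URL having been stored on the card):

    $ gpg --card-edit
    gpg/card> fetch
    gpg/card> quit
    $ gpg --card-status    # confirm the card's key slots are visible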

Conclusion

There are already informal OpenPGP best-practices guides out there and some recommend storing keys offline, but they rarely explain what exactly that means. Storing your primary secret key offline is important in dealing with possible compromises and we examined the main ways of doing so: an air-gapped system, a LUKS-encrypted keyring, or a keycard. Each approach has its own tradeoffs, but I recommend getting familiar with keycards if you use multiple computers and want a standardized interface with minimal configuration trouble. And of course, those approaches can be combined. This tutorial, for example, uses a keycard on an air-gapped computer, which neatly resolves the question of how to transmit signatures between the air-gapped system and the world. It is definitely not for the faint of heart, however. Once one has decided to use a keycard, the next order of business is to choose a specific device. That choice will be addressed in a followup article, where I will look at performance, physical design, and other considerations.
This article first appeared in the Linux Weekly News.

28 September 2017

Sean Whitton: Debian Policy 4.1.1.0 released

I just released Debian Policy version 4.1.1.0. There are only two normative changes, and neither is very important. The main thing is that this upload fixes a lot of packaging bugs that were found since we converted to build with Sphinx. There are still some issues remaining; I hope to submit some patches to the www-team's scripts to fix those.
