Search Results: "Sean Whitton"

22 October 2017

Sean Whitton: Debian Policy call for participation -- October 2017

Here are some of the bugs against the Debian Policy Manual. In particular, there really are quite a few patches needing seconds from DDs.

Consensus has been reached and help is needed to write a patch:

  #759316 Document the use of /etc/default for cron jobs
  #761219 document versioned Provides
  #767839 Linking documentation of arch:any package to arch:all
  #770440 policy should mention systemd timers
  #773557 Avoid unsafe RPATH/RUNPATH
  #780725 PATH used for building is not specified
  #793499 The Installed-Size algorithm is out-of-date
  #810381 Update wording of 5.6.26 VCS-* fields to recommend encryption
  #823256 Update maintscript arguments with dpkg >= 1.18.5
  #833401 virtual packages: dbus-session-bus, dbus-default-session-bus
  #835451 Building as root should be discouraged
  #838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus
  #845715 Please document that packages are not allowed to write outside thei
  #853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai
  #874019 Note that the -e argument to x-terminal-emulator works like
  #874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs:

  #688251 Built-Using description too aggressive
  #737796 copyright-format: support Files: paragraph with both abbreviated na
  #756835 Extension of the syntax of the Packages-List field.
  #786470 [copyright-format] Add an optional License-Grant field
  #810381 Update wording of 5.6.26 VCS-* fields to recommend encryption
  #835451 Building as root should be discouraged
  #845255 Include best practices for packaging database applications
  #846970 Proposal for a Build-Indep-Architecture: control file field
  #850729 Documenting special version number suffixes
  #864615 please update version of posix standard for scripts (section 10.4)
  #874090 Clarify wording of some passages
  #874095 copyright-format: Use the synopsis term established in the de

Merged for the next release:

  #683495 perl scripts: #!/usr/bin/perl MUST or SHOULD?
  #877674 [debian-policy] update links to the pdf and other formats of the do
  #878523 [PATCH] Spelling fixes

2 October 2017

Antoine Beaupré: Strategies for offline PGP key storage

While the adoption of OpenPGP by the general population is marginal at best, it is a critical component for the security community and particularly for Linux distributions. For example, every package uploaded into Debian is verified by the central repository using the maintainer's OpenPGP keys and the repository itself is, in turn, signed using a separate key. If upstream packages also use such signatures, this creates a complete trust path from the original upstream developer to users. Beyond that, pull requests for the Linux kernel are verified using signatures as well. Therefore, the stakes are high: a compromise of the release key, or even of a single maintainer's key, could enable devastating attacks against many machines. That has led the Debian community to develop a good grasp of best practices for cryptographic signatures (which are typically handled using GNU Privacy Guard, also known as GnuPG or GPG). For example, weak (less than 2048 bits) and vulnerable PGPv3 keys were removed from the keyring in 2015, and there is a strong culture of cross-signing keys between Debian members at in-person meetings. Yet even Debian developers (DDs) do not seem to have established practices on how to actually store critical private key material, as we can see in this discussion on the debian-project mailing list. That email boiled down to a simple request: can I have a "key dongles for dummies" tutorial? Key dongles, or keycards as we'll call them here, are small devices that allow users to store keys on an offline device and provide one possible solution for protecting private key material. In this article, I hope to use my experience in this domain to clarify the issue of how to store those precious private keys that, if compromised, could enable arbitrary code execution on millions of machines all over the world.

Why store keys offline?

Before we go into the details of storing keys offline, it may be useful to briefly review how the OpenPGP standard works. OpenPGP keys are made of a main public/private key pair, the certification key, which is used to sign user identifiers and subkeys. My public key, shown below, has the usual main certification/signature key (marked SC) but also an encryption subkey (marked E), a separate signature key (S), and two authentication keys (marked A), which I use as RSA keys to log into servers using SSH, thanks to the Monkeysphere project.
    pub   rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
      8DC901CE64146C048AD50FBB792152527B75921E
    uid                 [ultimate] Antoine Beaupré <anarcat@anarc.at>
    uid                 [ultimate] Antoine Beaupré <anarcat@koumbit.org>
    uid                 [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
    uid                 [ultimate] Antoine Beaupré <anarcat@debian.org>
    sub   rsa2048/B7F648FED2DF2587 2012-07-18 [A]
    sub   rsa2048/604E4B3EEE02855A 2012-07-20 [A]
    sub   rsa4096/A51D5B109C5A5581 2009-05-29 [E]
    sub   rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (sub) and identities (uid) are bound by the main certification key using cryptographic self-signatures. So while an attacker stealing a private subkey can spoof signatures in my name or authenticate to other servers, that key can always be revoked by the main certification key. But if the certification key gets stolen, all bets are off: the attacker can create or revoke identities or subkeys as they wish. In a catastrophic scenario, an attacker could even steal the key and remove your copies, taking complete control of the key, without any possibility of recovery. Incidentally, this is why it is so important to generate a revocation certificate and store it offline. So by moving the certification key offline, we reduce the attack surface on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or signature) can stay online but if they get stolen, the certification key can revoke those keys without having to revoke the main certification key as well. Note that a stolen encryption key is a different problem: even if we revoke the encryption subkey, this will only affect future encrypted messages. Previous messages will be readable by the attacker with the stolen subkey even if that subkey gets revoked, so the benefits of revoking encryption certificates are more limited.
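As a minimal sketch of the revocation-certificate step, assuming GnuPG 2.1 or later (the user ID below is a throwaway for illustration): modern GnuPG writes a revocation certificate automatically at key-creation time, so it only needs to be copied to offline storage.

```shell
# Generate a disposable key in a temporary keyring (illustrative
# user ID; a real key would live in your normal GNUPGHOME).
export GNUPGHOME=$(mktemp -d)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Throwaway <demo@example.invalid>" rsa2048 cert never
# GnuPG 2.1+ automatically stores a revocation certificate here;
# copy it to offline storage (paper, a separate USB key, etc.):
ls "$GNUPGHOME/openpgp-revocs.d/"
```

An explicit certificate can also be produced interactively with gpg --gen-revoke, which is useful on older GnuPG versions.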

Common strategies for offline key storage

Considering the security tradeoffs, some propose storing those critical keys offline to reduce these threats. But where exactly? In an attempt to answer that question, Jonathan McDowell, a member of the Debian keyring maintenance team, said that there are three options: use an external LUKS-encrypted volume, an air-gapped system, or a keycard. Full-disk encryption like LUKS adds an extra layer of security by hiding the content of the key from an attacker. Even though private keyrings are usually protected by a passphrase, they are easily identifiable as a keyring. But when a volume is fully encrypted, it's not immediately obvious to an attacker that there is private key material on the device. According to Sean Whitton, another advantage of LUKS over plain GnuPG keyring encryption is that you can pass the --iter-time argument when creating a LUKS partition to increase the key-derivation delay, which makes brute-forcing much harder. Indeed, GnuPG 2.x doesn't have a run-time option to configure the key-derivation algorithm, although a patch was introduced recently to make the delay configurable at compile time in gpg-agent, which is now responsible for all secret key operations. The downside of external volumes is complexity: GnuPG makes it difficult to extract secrets out of its keyring, which makes the first setup tricky and error-prone. This is easier in the 2.x series thanks to the new storage system and the associated keygrip files, but it still requires arcane knowledge of GPG internals. It is also inconvenient to use secret keys stored outside your main keyring when you actually do need to use them, as GPG doesn't know where to find those keys anymore. Another option is to set up a separate air-gapped system to perform certification operations.
An example is the PGP clean room project, which is a live system based on Debian and designed by DD Daniel Pocock to operate an OpenPGP and X.509 certificate authority using commodity hardware. The basic principle is to store the secrets on a different machine that is never connected to the network and, therefore, not exposed to attacks, at least in theory. I have personally discarded that approach because I feel air-gapped systems provide a false sense of security: data eventually does need to come in and out of the system, somehow, even if only to propagate signatures out of the system, which exposes the system to attacks. System updates are similarly problematic: to keep the system secure, timely security updates need to be deployed to the air-gapped system. A common use pattern is to share data through USB keys, which introduce a vulnerability where attacks like BadUSB can infect the air-gapped system. From there, there is a multitude of exotic ways of exfiltrating the data using LEDs, infrared cameras, or the good old TEMPEST attack. I therefore concluded the complexity tradeoffs of an air-gapped system are not worth it. Furthermore, the workflow for air-gapped systems is complex: even though PGP clean room went a long way, it's still lacking even simple scripts that allow signing or transferring keys, which is a problem shared by the external LUKS storage approach.
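To make the LUKS option concrete, a setup along the lines Whitton describes might look like the following sketch. The file name, vault size, and five-second --iter-time are all illustrative, and the unlock step requires root; a real setup might format a dedicated USB device instead of a file.

```shell
# Create a small file-backed container to hold the keyring
# (size and name are illustrative).
dd if=/dev/zero of=keyvault.img bs=1M count=64

# Format it as LUKS with ~5 seconds of key-derivation time, making
# passphrase brute-forcing proportionally more expensive.
cryptsetup luksFormat --iter-time 5000 keyvault.img

# Later, unlock it (requires root) and mount the filesystem inside
# to reach the keyring:
sudo cryptsetup open keyvault.img keyvault
```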

Keycard advantages

The approach I have chosen is to use a cryptographic keycard: an external device, usually connected through the USB port, that stores the private key material and performs critical cryptographic operations on behalf of the host. For example, the FST-01 keycard can perform RSA and ECC public-key decryption without ever exposing the private key material to the host. In effect, a keycard is a miniature computer that performs restricted computations for another host. Keycards usually support multiple "slots" to store subkeys. The OpenPGP card standard specifies three subkey slots by default: for signature, authentication, and encryption. Finally, keycards can have an actual physical keypad to enter passwords so a potential keylogger cannot capture them, although the keycards I have access to do not feature such a keypad. We could easily draw a parallel between keycards and an air-gapped system; in effect, a keycard is a miniaturized air-gapped computer and suffers from similar problems. An attacker can intercept data on the host system and attack the device in the same way, if not more easily, because a keycard is actually "online" (i.e. clearly not air-gapped) when connected. The advantage over a fully-fledged air-gapped computer, however, is that the keycard implements only a restricted set of operations. So it is easier to create an open hardware and software design that is audited and verified, which is much harder to accomplish for a general-purpose computer. Like air-gapped systems, keycards address the scenario where an attacker wants to get the private key material. While an attacker could fool the keycard into signing or decrypting some data, this is possible only while the key is physically connected, and the keycard software will prompt the user for a password before doing the operation, though the keycard can cache the password for some time.
In effect, it thwarts offline attacks: to brute-force the key's password, the attacker needs to be on the target system and try to guess the keycard's password, which will lock itself after a limited number of tries. It also provides for a clean and standard interface to store keys offline: a single GnuPG command moves private key material to a keycard (the keytocard command in the --edit-key interface), whereas moving private key material to a LUKS-encrypted device or air-gapped computer is more complex. Keycards are also useful if you operate on multiple computers. A common problem when using GnuPG on multiple machines is how to safely copy and synchronize private key material among different devices, which introduces new security problems. Indeed, a "good rule of thumb in a forensics lab", according to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum personal data possible on your systems". Keycards provide the best of both worlds here: you can use your private key on multiple computers without actually storing it in multiple places. In fact, Mike Gerwitz went as far as saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.

Keycard tradeoffs

As Gerwitz hinted, however, there are multiple downsides to using a keycard. Another DD, Wouter Verhelst, clearly expressed the tradeoffs:
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it). Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin cardreader, you also need to fish the reader from your backpack or wherever, etc.
"Smartcards" here refer to older OpenPGP cards that relied on the ISO/IEC 7816 smartcard connectors and therefore needed a specially-built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop, and I have yet to hear of someone shoving a full laptop into a washing machine. When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key, a rather expensive operation, as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificate and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically. Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem, as they do not need to be backed up (except encryption keys). But for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume.
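Combining approaches might look like the following sketch, which exports the full secret key to an armored file destined for the encrypted backup volume before keytocard destroys the local copy. A throwaway key stands in for your real one here, and the output file name is illustrative; with a real key, you would pass your own key ID and write directly to the mounted LUKS volume.

```shell
# Disposable demonstration key in a temporary keyring (GnuPG 2.1+);
# in real use, skip this and use your existing key.
export GNUPGHOME=$(mktemp -d)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Backup Demo <demo@example.invalid>" rsa2048 default never

# Export the secret key (certification key plus subkeys) to an
# armored file for the offline backup volume.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --export-secret-keys --armor "Backup Demo" > certification-key-backup.asc
head -n 1 certification-key-backup.asc   # -----BEGIN PGP PRIVATE KEY BLOCK-----
```

The armored file can later be re-imported with gpg --import if the keycard is lost.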
Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the card's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle. A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have it on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.

Quick keycard "howto"

Getting keys onto a keycard is easy enough:
  1. Start with a temporary key to test the procedure:
        export GNUPGHOME=$(mktemp -d)
        gpg --generate-key
    
  2. Edit the key using its user ID (UID):
        gpg --edit-key UID
    
  3. Use the key command to select the first subkey, then copy it to the keycard (you can also use the addcardkey command to just generate a new subkey directly on the keycard):
        gpg> key 1
        gpg> keytocard
    
  4. If you want to move the subkey, use the save command, which will remove the local copy of the private key, so the keycard will be the only copy of the secret key. Otherwise use the quit command to save the key on the keycard but keep the secret key in your normal keyring; answer "n" to "save changes?" and "y" to "quit without saving?". This way the keycard is a backup of your secret key.
  5. Once you are satisfied with the results, repeat steps 1 through 4 with your normal keyring (unset GNUPGHOME first).
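For scripted testing, step 1 also has a non-interactive equivalent on GnuPG 2.1 and later; the user ID below is illustrative.

```shell
# Batch equivalent of step 1: create a disposable test key in a
# temporary keyring, ready for the --edit-key / keytocard dry run.
export GNUPGHOME=$(mktemp -d)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Card Test <card-test@example.invalid>" rsa2048 default never
gpg --list-secret-keys
```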
When a key is moved to a keycard, --list-secret-keys will show it as sec> (or ssb> for subkeys) instead of the usual sec keyword. If the key is completely missing (for example, if you moved it to a LUKS container), the # sign is used instead. If you need to use a key from a keycard backup, you simply do gpg --card-edit with the key plugged in, then type the fetch command at the prompt to fetch the public key that corresponds to the private key on the keycard (which stays on the keycard). This is the same procedure as the one to use the secret key on another computer.

Conclusion

There are already informal OpenPGP best-practices guides out there and some recommend storing keys offline, but they rarely explain what exactly that means. Storing your primary secret key offline is important in dealing with possible compromises, and we examined the main ways of doing so: an air-gapped system, a LUKS-encrypted keyring, or a keycard. Each approach has its own tradeoffs, but I recommend getting familiar with keycards if you use multiple computers and want a standardized interface with minimal configuration trouble. And of course, those approaches can be combined. This tutorial, for example, uses a keycard on an air-gapped computer, which neatly resolves the question of how to transmit signatures between the air-gapped system and the world. It is definitely not for the faint of heart, however. Once one has decided to use a keycard, the next order of business is to choose a specific device. That choice will be addressed in a followup article, where I will look at performance, physical design, and other considerations.
This article first appeared in the Linux Weekly News.

Antoine Beaupr : Strategies for offline PGP key storage

While the adoption of OpenPGP by the general population is marginal at best, it is a critical component for the security community and particularly for Linux distributions. For example, every package uploaded into Debian is verified by the central repository using the maintainer's OpenPGP keys and the repository itself is, in turn, signed using a separate key. If upstream packages also use such signatures, this creates a complete trust path from the original upstream developer to users. Beyond that, pull requests for the Linux kernel are verified using signatures as well. Therefore, the stakes are high: a compromise of the release key, or even of a single maintainer's key, could enable devastating attacks against many machines. That has led the Debian community to develop a good grasp of best practices for cryptographic signatures (which are typically handled using GNU Privacy Guard, also known as GnuPG or GPG). For example, weak (less than 2048 bits) and vulnerable PGPv3 keys were removed from the keyring in 2015, and there is a strong culture of cross-signing keys between Debian members at in-person meetings. Yet even Debian developers (DDs) do not seem to have established practices on how to actually store critical private key material, as we can see in this discussion on the debian-project mailing list. That email boiled down to a simple request: can I have a "key dongles for dummies" tutorial? Key dongles, or keycards as we'll call them here, are small devices that allow users to store keys on an offline device and provide one possible solution for protecting private key material. In this article, I hope to use my experience in this domain to clarify the issue of how to store those precious private keys that, if compromised, could enable arbitrary code execution on millions of machines all over the world.

Why store keys offline? Before we go into details about storing keys offline, it may be useful to do a small reminder of how the OpenPGP standard works. OpenPGP keys are made of a main public/private key pair, the certification key, used to sign user identifiers and subkeys. My public key, shown below, has the usual main certification/signature key (marked SC) but also an encryption subkey (marked E), a separate signature key (S), and two authentication keys (marked A) which I use as RSA keys to log into servers using SSH, thanks to the Monkeysphere project.
    pub   rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
      8DC901CE64146C048AD50FBB792152527B75921E
    uid                 [ultimate] Antoine Beaupr  <anarcat@anarc.at>
    uid                 [ultimate] Antoine Beaupr  <anarcat@koumbit.org>
    uid                 [ultimate] Antoine Beaupr  <anarcat@orangeseeds.org>
    uid                 [ultimate] Antoine Beaupr  <anarcat@debian.org>
    sub   rsa2048/B7F648FED2DF2587 2012-07-18 [A]
    sub   rsa2048/604E4B3EEE02855A 2012-07-20 [A]
    sub   rsa4096/A51D5B109C5A5581 2009-05-29 [E]
    sub   rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (sub) and identities (uid) are bound by the main certification key using cryptographic self-signatures. So while an attacker stealing a private subkey can spoof signatures in my name or authenticate to other servers, that key can always be revoked by the main certification key. But if the certification key gets stolen, all bets are off: the attacker can create or revoke identities or subkeys as they wish. In a catastrophic scenario, an attacker could even steal the key and remove your copies, taking complete control of the key, without any possibility of recovery. Incidentally, this is why it is so important to generate a revocation certificate and store it offline. So by moving the certification key offline, we reduce the attack surface on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or signature) can stay online but if they get stolen, the certification key can revoke those keys without having to revoke the main certification key as well. Note that a stolen encryption key is a different problem: even if we revoke the encryption subkey, this will only affect future encrypted messages. Previous messages will be readable by the attacker with the stolen subkey even if that subkey gets revoked, so the benefits of revoking encryption certificates are more limited.

Common strategies for offline key storage Considering the security tradeoffs, some propose storing those critical keys offline to reduce those threats. But where exactly? In an attempt to answer that question, Jonathan McDowell, a member of the Debian keyring maintenance team, said that there are three options: use an external LUKS-encrypted volume, an air-gapped system, or a keycard. Full-disk encryption like LUKS adds an extra layer of security by hiding the content of the key from an attacker. Even though private keyrings are usually protected by a passphrase, they are easily identifiable as a keyring. But when a volume is fully encrypted, it's not immediately obvious to an attacker there is private key material on the device. According to Sean Whitton, another advantage of LUKS over plain GnuPG keyring encryption is that you can pass the --iter-time argument when creating a LUKS partition to increase key-derivation delay, which makes brute-forcing much harder. Indeed, GnuPG 2.x doesn't have a run-time option to configure the key-derivation algorithm, although a patch was introduced recently to make the delay configurable at compile time in gpg-agent, which is now responsible for all secret key operations. The downside of external volumes is complexity: GnuPG makes it difficult to extract secrets out of its keyring, which makes the first setup tricky and error-prone. This is easier in the 2.x series thanks to the new storage system and the associated keygrip files, but it still requires arcane knowledge of GPG internals. It is also inconvenient to use secret keys stored outside your main keyring when you actually do need to use them, as GPG doesn't know where to find those keys anymore. Another option is to set up a separate air-gapped system to perform certification operations. 
An example is the PGP clean room project, which is a live system based on Debian and designed by DD Daniel Pocock to operate an OpenPGP and X.509 certificate authority using commodity hardware. The basic principle is to store the secrets on a different machine that is never connected to the network and, therefore, not exposed to attacks, at least in theory. I have personally discarded that approach because I feel air-gapped systems provide a false sense of security: data eventually does need to come in and out of the system, somehow, even if only to propagate signatures out of the system, which exposes the system to attacks. System updates are similarly problematic: to keep the system secure, timely security updates need to be deployed to the air-gapped system. A common use pattern is to share data through USB keys, which introduce a vulnerability where attacks like BadUSB can infect the air-gapped system. From there, there is a multitude of exotic ways of exfiltrating the data using LEDs, infrared cameras, or the good old TEMPEST attack. I therefore concluded the complexity tradeoffs of an air-gapped system are not worth it. Furthermore, the workflow for air-gapped systems is complex: even though PGP clean room went a long way, it's still lacking even simple scripts that allow signing or transferring keys, which is a problem shared by the external LUKS storage approach.

Keycard advantages The approach I have chosen is to use a cryptographic keycard: an external device, usually connected through the USB port, that stores the private key material and performs critical cryptographic operations on the behalf of the host. For example, the FST-01 keycard can perform RSA and ECC public-key decryption without ever exposing the private key material to the host. In effect, a keycard is a miniature computer that performs restricted computations for another host. Keycards usually support multiple "slots" to store subkeys. The OpenPGP standard specifies there are three subkeys available by default: for signature, authentication, and encryption. Finally, keycards can have an actual physical keypad to enter passwords so a potential keylogger cannot capture them, although the keycards I have access to do not feature such a keypad. We could easily draw a parallel between keycards and an air-gapped system; in effect, a keycard is a miniaturized air-gapped computer and suffers from similar problems. An attacker can intercept data on the host system and attack the device in the same way, if not more easily, because a keycard is actually "online" (i.e. clearly not air-gapped) when connected. The advantage over a fully-fledged air-gapped computer, however, is that the keycard implements only a restricted set of operations. So it is easier to create an open hardware and software design that is audited and verified, which is much harder to accomplish for a general-purpose computer. Like air-gapped systems, keycards address the scenario where an attacker wants to get the private key material. While an attacker could fool the keycard into signing or decrypting some data, this is possible only while the key is physically connected, and the keycard software will prompt the user for a password before doing the operation, though the keycard can cache the password for some time. 
In effect, it thwarts offline attacks: to brute-force the key's password, the attacker needs to be on the target system and try to guess the keycard's password, which will lock itself after a limited number of tries. It also provides for a clean and standard interface to store keys offline: a single GnuPG command moves private key material to a keycard (the keytocard command in the --edit-key interface), whereas moving private key material to a LUKS-encrypted device or air-gapped computer is more complex. Keycards are also useful if you operate on multiple computers. A common problem when using GnuPG on multiple machines is how to safely copy and synchronize private key material among different devices, which introduces new security problems. Indeed, a "good rule of thumb in a forensics lab", according to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum personal data possible on your systems". Keycards provide the best of both worlds here: you can use your private key on multiple computers without actually storing it in multiple places. In fact, Mike Gerwitz went as far as saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.

Keycard tradeoffs As Gerwitz hinted, there are multiple downsides to using a keycard, however. Another DD, Wouter Verhelst clearly expressed the tradeoffs:
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it). Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin cardreader, you also need to fish the reader from your backpack or wherever, etc.
"Smartcards" here refer to older OpenPGP cards that relied on the IEC 7816 smartcard connectors and therefore needed a specially-built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop and I have yet to hear of someone shoving a full laptop into a washing machine. When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But, even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key a rather expensive operation as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificates and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically. Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But, for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume. 
Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle. A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have them on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.

Quick keycard "howto"

Getting keys onto a keycard is easy enough:
  1. Start with a temporary key to test the procedure:
        export GNUPGHOME=$(mktemp -d)
        gpg --generate-key
    
  2. Edit the key using its user ID (UID):
        gpg --edit-key UID
    
  3. Use the key command to select the first subkey, then copy it to the keycard (you can also use the addcardkey command to just generate a new subkey directly on the keycard):
        gpg> key 1
        gpg> keytocard
    
  4. If you want to move the subkey, use the save command, which will remove the local copy of the private key, so the keycard will be the only copy of the secret key. Otherwise use the quit command to save the key on the keycard, but keep the secret key in your normal keyring; answer "n" to "save changes?" and "y" to "quit without saving?". This way the keycard is a backup of your secret key.
  5. Once you are satisfied with the results, repeat steps 1 through 4 with your normal keyring (unset GNUPGHOME first).
When a key is moved to a keycard, --list-secret-keys will show it as sec> (or ssb> for subkeys) instead of the usual sec keyword. If the key is completely missing (for example, if you moved it to a LUKS container), the # sign is used instead. If you need to use a key from a keycard backup, you simply do gpg --card-edit with the key plugged in, then type the fetch command at the prompt to fetch the public key that corresponds to the private key on the keycard (which stays on the keycard). This is the same procedure as the one to use the secret key on another computer.

Conclusion

There are already informal OpenPGP best-practices guides out there and some recommend storing keys offline, but they rarely explain what exactly that means. Storing your primary secret key offline is important in dealing with possible compromises, and we have examined the main ways of doing so: an air-gapped system, a LUKS-encrypted keyring, or a keycard. Each approach has its own tradeoffs, but I recommend getting familiar with keycards if you use multiple computers and want a standardized interface with minimal configuration trouble. And of course, these approaches can be combined. This tutorial, for example, uses a keycard on an air-gapped computer, which neatly resolves the question of how to transmit signatures between the air-gapped system and the world. It is definitely not for the faint of heart, however. Once one has decided to use a keycard, the next order of business is to choose a specific device. That choice will be addressed in a followup article, where I will look at performance, physical design, and other considerations.
This article first appeared in the Linux Weekly News.

28 September 2017

Sean Whitton: Debian Policy 4.1.1.0 released

I just released Debian Policy version 4.1.1.0. There are only two normative changes, and neither is very important. The main thing is that this upload fixes a lot of packaging bugs that were found since we converted to building with Sphinx. There are still some issues remaining; I hope to submit some patches to the www-team's scripts to fix those.

17 September 2017

Sean Whitton: Debian Policy call for participation -- September 2017

Here's a summary of the bugs against the Debian Policy Manual. Please consider getting involved, whether or not you're an existing contributor.

Consensus has been reached and help is needed to write a patch:

#172436 BROWSER and sensible-browser standardization
#273093 document interactions of multiple clashing package diversions
#299007 Transitioning perms of /usr/local
#314808 Web applications should use /usr/share/package, not /usr/share/doc/
#425523 Describe error unwind when unpacking a package fails
#452393 Clarify difference between required and important priorities
#476810 Please clarify 12.5, Copyright information
#484673 file permissions for files potentially including credential informa
#491318 init scripts should support start/stop/restart/force-reload - why
#556015 Clarify requirements for linked doc directories
#568313 Suggestion: forbid the use of dpkg-statoverride when uid and gid ar
#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.
#582109 document triggers where appropriate
#587991 perl-policy: /etc/perl missing from Module Path
#592610 Clarify when Conflicts + Replaces et al are appropriate
#613046 please update example in 4.9.1 (debian/rules and DEB_BUILD_OPTIONS)
#614807 Please document autobuilder-imposed build-dependency alternative re
#628515 recommending verbose build logs
#664257 document Architecture name definitions
#682347 mark editor virtual package name as obsolete
#685506 copyright-format: new Files-Excluded field
#685746 debian-policy Consider clarifying the use of recommends
#688251 Built-Using description too aggressive
#749826 [multiarch] please document the use of Multi-Arch field in debian/c
#757760 please document build profiles
#759316 Document the use of /etc/default for cron jobs
#761219 document versioned Provides
#767839 Linking documentation of arch:any package to arch:all
#770440 policy should mention systemd timers
#773557 Avoid unsafe RPATH/RUNPATH
#780725 PATH used for building is not specified
#793499 The Installed-Size algorithm is out-of-date
#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption
#823256 Update maintscript arguments with dpkg >= 1.18.5
#833401 virtual packages: dbus-session-bus, dbus-default-session-bus
#835451 Building as root should be discouraged
#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus
#845715 Please document that packages are not allowed to write outside thei
#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai
#874019 Note that the -e argument to x-terminal-emulator works like
#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#515856 remove get-orig-source
#542288 Versions for native packages, NMUs, and binary only uploads
#582109 document triggers where appropriate
#610083 Remove requirement to document upstream source location in debian/c
#645696 [copyright-format] clearer definitions and more consistent License:
#649530 [copyright-format] clearer definitions and more consistent License:
#662998 stripping static libraries
#682347 mark editor virtual package name as obsolete
#683222 say explicitly that debian/changelog is required in source packages
#688251 Built-Using description too aggressive
#737796 copyright-format: support Files: paragraph with both abbreviated na
#756835 Extension of the syntax of the Packages-List field.
#786470 [copyright-format] Add an optional License-Grant field
#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption
#835451 Building as root should be discouraged
#845255 Include best practices for packaging database applications
#850729 Documenting special version number suffixes
#874090 Clarify wording of some passages
#874095 copyright-format: Use the synopsis term established in the de

Merged for the next release:

#661928 recipe for determining shlib package name
#679751 please clarify package account and home directory location in policy
#683222 say explicitly that debian/changelog is required in source packages
#870915 [5.6.30] Testsuite: There are much more defined values
#872893 Chapters, sections, appendices and numbering
#872895 Include multi-page HTML in package
#872896 An html.tar.gz has leaked into the .deb?
#872900 Very generic info file name
#872950 Too much indirection in info file menus
#873819 upgrading-checklist.txt: typo pgpsignurlmangle in section 4.11 of V
#874411 missing line breaks in summary of ways maintainers scripts are call

29 August 2017

Sean Whitton: Nourishment

This semester I am taking JPN 530, "Haruki Murakami and the Literature of Modern Japan". My department are letting me count it for the Philosophy Ph.D., and in fact my supervisor is joining me for the class. I have no idea what the actual class sessions will be like (first one this afternoon) and I'm anxious about writing a literature term paper. But I already know that my weekends this semester are going to be great, because I'll be reading Murakami's novels. What's particularly wonderful about this, and what I wanted to write about, is how nourishing I find reading literary fiction to be. For example, this weekend I read
This was something sure to be crammed full of warm secrets, like an antique clock built when peace filled the world. Potentiality knocks at the door of my heart.
and I was fed for the day. All my perceived needs dropped away; that's all it takes. This stands in stark contrast to reading philosophy, which is almost always draining rather than nourishing, even philosophy I really want to read, and especially philosophy I have to read at the weekend. (The quotation is from "On Seeing the 100% Perfect Girl on a Beautiful April Morning".)

19 August 2017

Sean Whitton: Debian Policy call for participation -- August 2017

At the Debian Policy BoF at DebConf17, Solveig suggested that we could post summaries of recent activity in policy bugs to Planet Debian, as a kind of call for participation. Russ Allbery had written a script to generate such a summary some time ago, but it couldn't handle the usertags the policy team uses to progress bugs through the policy changes process. Today I enhanced the script to handle usertags, and I'm pleased to be able to post a summary of our bugs.

Consensus has been reached and help is needed to write a patch:

#172436 BROWSER and sensible-browser standardization
#273093 document interactions of multiple clashing package diversions
#299007 Transitioning perms of /usr/local
#314808 Web applications should use /usr/share/package, not /usr/share/doc/
#425523 Describe error unwind when unpacking a package fails
#452393 Clarify difference between required and important priorities
#476810 Please clarify 12.5, Copyright information
#484673 file permissions for files potentially including credential informa
#491318 init scripts should support start/stop/restart/force-reload - why
#556015 Clarify requirements for linked doc directories
#568313 Suggestion: forbid the use of dpkg-statoverride when uid and gid ar
#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.
#582109 document triggers where appropriate
#587279 Clarify restrictions on main to non-free dependencies
#587991 perl-policy: /etc/perl missing from Module Path
#592610 Clarify when Conflicts + Replaces et al are appropriate
#613046 please update example in 4.9.1 (debian/rules and DEB_BUILD_OPTIONS)
#614807 Please document autobuilder-imposed build-dependency alternative re
#616462 clarify wording of parenthetical in section 2.2.1
#628515 recommending verbose build logs
#661928 recipe for determining shlib package name
#664257 document Architecture name definitions
#682347 mark editor virtual package name as obsolete
#683222 say explicitly that debian/changelog is required in source packages
#685506 copyright-format: new Files-Excluded field
#685746 debian-policy Consider clarifying the use of recommends
#688251 Built-Using description too aggressive
#757760 please document build profiles
#759316 Document the use of /etc/default for cron jobs
#761219 document versioned Provides
#767839 Linking documentation of arch:any package to arch:all
#770440 policy should mention systemd timers
#773557 Avoid unsafe RPATH/RUNPATH
#780725 PATH used for building is not specified
#793499 The Installed-Size algorithm is out-of-date
#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption
#823256 Update maintscript arguments with dpkg >= 1.18.5
#833401 virtual packages: dbus-session-bus, dbus-default-session-bus
#835451 Building as root should be discouraged
#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus
#845715 Please document that packages are not allowed to write outside thei
#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#542288 Versions for native packages, NMUs, and binary only uploads
#582109 document triggers where appropriate
#630174 forbid installation into /lib64
#645696 [copyright-format] clearer definitions and more consistent License:
#648271 11.8.3 Packages providing a terminal emulator says xterm passes -
#649530 [copyright-format] clearer definitions and more consistent License:
#662998 stripping static libraries
#732445 debian-policy should encourage verification of upstream cryptograph
#737796 copyright-format: support Files: paragraph with both abbreviated na
#756835 Extension of the syntax of the Packages-List field.
#786470 [copyright-format] Add an optional License-Grant field
#835451 Building as root should be discouraged
#844431 Packages should be reproducible
#845255 Include best practices for packaging database applications
#850729 Documenting special version number suffixes

Merged for the next release:

#587279 Clarify restrictions on main to non-free dependencies
#616462 clarify wording of parenthetical in section 2.2.1
#732445 debian-policy should encourage verification of upstream cryptograph
#844431 Packages should be reproducible

18 August 2017

Sean Whitton: The knowledge that one has an unread message is equivalent to a 10 point drop in one's IQ

According to Daniel Pocock's talk at DebConf17's Open Day, hearing a ping from your messaging or e-mail app, or seeing a visual notification of a new unread message, has an effect on your ability to concentrate equivalent to a 10 point drop in your IQ. This effect is probably at least somewhat mitigated by reading the message, but that is a context switch, and we all know what those do to your concentration. So if you want to get anything done, be sure to turn off notifications.

17 August 2017

Sean Whitton: DebCamp/DebConf17: reports on sprints and BoFs

In addition to my personal reflections on DebCamp/DebConf17, here is a brief summary of the activities that I had a hand in co-ordinating. I won't discuss here many other small items of work and valuable conversations that I had during the two weeks; hopefully the fruits of these will show themselves in my uploads to the archive over the next year.

Debian Policy sprint & BoF

Debian Emacs Team meeting/sprint: Unfortunately we didn't make any significant progress towards converting all addons to use dh_elpa, as the work is not that much fun. Might be worth a more focused sprint next year. (Report on team website.)

Git for Debian packaging BoF & follow-up conversations: The BoF was far more about dgit than I had wanted; however, I think that this was mostly because people had questions about dgit, rather than any unintended lecturing by me. I believe that several people came away from DebConf thinking that starting to use dgit would improve Debian for themselves and for users of their packages.

Sean Whitton: I went all the way to Montréal for DebConf17, and all I got was a new MUA

This year's group photo (by Aigars Mahinovs). I really like the tagline.
On Sunday night I got back from Montréal, where I attended both DebCamp17 and DebConf17. It was a wonderful two weeks. All I really did was work on Debian for roughly eight hours per day, interspersed with getting to know both people I've been working with since I first began contributing to Debian in late 2015, and people I didn't yet know. But this was all I really needed to be doing; there was no need to distract myself. I enjoyed the first week more. There were sufficiently few people present that you could know at least all of their faces, and interesting-sounding talks didn't interrupt making progress on one's own work or unblocking other people's work. In the second week it was great to meet people who were only present for the second week, but it felt more like regular Debian, in that I was often waiting on other people or they were waiting on me. While I spent one morning actually writing fresh code, and I did a fair amount of pure packaging work, the majority of my time was poured into (i) Debian Policy; (ii) discussions within the Emacs team; and (iii) discussions about dgit. This was as I expected. During DebConf, it's not that useful to seclude oneself and sufficiently reacquaint oneself with a codebase that one can start producing patches, because that can be done anywhere in the world, without everyone else around. It's far more useful to bring different people together to get projects unblocked. I did some of that for my own work, and also tried to help other people's, including those who weren't able to attend the conference. In my ordinary life, taking a step back from the methods by which I protect my PGP keys and other personal data, I can appear to myself as a paranoid extremist, or some kind of data hoarder.
It was comforting to find at DebConf plenty of people who go way further than me: multiple user accounts on their laptop, with separate X servers, for tasks of different security levels; PGP keys on smartcards; refusal to sign my PGP key based on government-issued ID alone; use of Qubes OS. One thing that did surprise me was to find myself in a minority for using the GNOME desktop; I had previously assumed that most people deep in Debian development didn't bother with tiling window managers. Turns out they are enthusiastic to talk about the trade-offs between window managers while riding the subway train back to our accommodation at midnight (who knew such people existed? I was pleased to find them). One evening, I received a tag-teamed live tutorial in using i3's core keybindings, and the next morning GNOME seemed deeply inelegant. The insinuation began, but I was immediately embroiled in inner struggle over the fact that i3 is a very popular tiling window manager, so it wouldn't be very cool if I were to start using it. This difficulty was compounded when I learned that the Haskell team lead still uses xmonad. The struggle continues. I hope that I've been influenced by the highly non-judgemental and tolerant attitudes of the attendees of the conference. While most people at the conference were pretty ordinary (aside from wanting to talk about the details of Debian packaging and processes!), there were several people who rather visibly rejected social norms about how to present themselves. Around these people there was nothing of the usual tension. Further, in contrast with my environment as a graduate student, everyone was extremely relaxed about how everyone was spending their time. People drinking beer in the evenings were sitting at tables where other people were continuing to silently work on Debian. It is nice to have my experience in Montréal as a reference to check my own judgemental tendencies.
I came away with a lot more than a new MUA: a certainty that I want to try to get to next year's conference; friends; a real life community behind what was hitherto mostly a hobby; a long list of tasks and the belief that I can accomplish them; a list of PGP fingerprints to sign; a new perspective on the arguments that occur on Debian mailing lists; an awareness of the risk of unconsciously manipulating other community members into getting work done. With regard to the MUA, I should say that I did not waste a lot of DebConf time messing with its configuration. I had actually worked out a notmuch configuration some months ago, but couldn't use it because I couldn't figure out how to incorporate my old mail archives into its index. Fortunately notmuch's maintainer is also on the Emacs team; he was able to confirm that the crazy solution I'd come up with was not likely to break notmuch's operating assumptions, and so I was able to spend about half an hour copying and pasting the configuration and scripts I'd previously developed into my homedir, and then start using notmuch for the remainder of the conference. The main reason for wanting to use notmuch was to handle Debian mailing list volume more effectively than I'm able to with mutt, so I was very happy to have the opportunity to pester David with newbie questions. Many, many thanks to all the volunteers whose efforts made DebCamp17 and DebConf17 possible.

22 July 2017

Sean Whitton: I'm going to DebCamp17, Montréal, Canada

Here's what I'm planning to work on; please get in touch if you want to get involved with any of these items. In rough order of priority/likelihood of completion:

12 July 2017

Reproducible builds folks: Reproducible Builds: week 115 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday July 2 and Saturday July 8 2017: Reproducible work in other projects Ed Maste pointed to a thread on the LLVM developer mailing list about container iteration being the main source of non-determinism in LLVM, together with discussion on how to solve this. Ignoring build path issues, container iteration order was also the main issue with rustc, which was fixed by using a fixed-order hash map for certain compiler structures. (It was unclear from the thread whether LLVM's builds are truly path-independent or rather that they haven't done comparisons between builds run under different paths.) Bugs filed Patches submitted upstream: Reviews of unreproducible packages 52 package reviews have been added, 62 have been updated and 20 have been removed in this week, adding to our knowledge about identified issues. No issue types were updated or added this week. Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: diffoscope development Development continued in git with contributions from: With these changes, we are able to generate a dynamically loaded HTML diff for GCC-6 that can be displayed in a normal web browser. For more details see this mailing list post. Misc. This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
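The rustc fix mentioned above replaced hash-order iteration with a fixed-order map. The underlying idea can be sketched in Python (an illustration of the technique, not the actual rustc code):

```python
# Iterating over a hash-based container can differ from run to run
# (hash seeding, insertion order, pointer values). Emitting output in a
# fixed total order, e.g. by sorting keys first, makes the build output
# deterministic regardless of how the container was populated.

symbols = {"zeta", "alpha", "mu"}

# Non-deterministic in general: iteration order of a set is unspecified.
# emitted = list(symbols)

# Deterministic: a fixed order, independent of hash state.
emitted = sorted(symbols)
print(emitted)  # ['alpha', 'mu', 'zeta']
```

The same effect is achieved in Rust by switching from HashMap to an order-preserving or sorted map for any structure whose iteration order reaches the output.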

5 June 2017

Sean Whitton: skiessequel

Hey Sega, Why the Hell is There No Skies of Arcadia Sequel? USgamer
There was still plenty of world left to be explored, if it came to that
I don't understand this remark; there really wasn't much left. They'd have to make changes in order to squash more in, as I've found when trying to convert the world of Arcadia to D&D.

30 May 2017

Sean Whitton: Corbyn and May

Since arriving back in the UK I've found myself appreciating Sheffield, and indeed British life more generally, far more than I expected, and far more than I have on any previous return during the time I've been working and now studying abroad. On Sunday, John Prescott came to give a speech to those of us campaigning for Labour, before we set to work. A heckler came over and shouted at Prescott: how could he vote for Labour with Corbyn in charge? Prescott did not break his stride, shouting something in response to the man and then returning to his speech, and someone went over to the man and said: he came here to speak to us, please don't interrupt; come over here and let's talk about Corbyn. And the man did. Real democracy on a street corner, where people are able to fully express themselves without watching their words, or being told they're being uncivil, and without any hint of police or security (note, for those outside the UK reading this post, that John Prescott was the Deputy Prime Minister for 8 years; he arrived in a squat people carrier). I think that living in the US had made me believe that this kind of engagement with politics was over. Since I value these battles for ideas so highly, it makes me want to leave Arizona sooner rather than later. In last night's "Corbyn vs. May", in which each of the two answered audience questions and was then interviewed by the aggressive Jeremy Paxman (May has refused to engage in a head-to-head debate), we saw Corbyn at his best. I don't think that there was a clear loser, but there was an opportunity to see that Corbyn is quite capable of oratory. For me, there were two highlights. A small businessman asked Corbyn how he could vote for someone who was raising both corporation tax and the minimum wage. Without showing a grain of disrespect, Corbyn challenged him to reconsider his position on the grounds that we are all better off if everyone is better off.
The second highlight was Corbyn's firm response to Paxman going on and on about why abolishing the monarchy was not in the manifesto, while Corbyn is a known republican: we're not going to abolish the monarchy because I'm fighting this election for social justice (paraphrased). This is the slightly old-fashioned sense of "social justice": truly universal entitlement to health and education, because that is the mark of a civilised nation. What a privilege it is to be able to both campaign and vote for such a man. I've been thinking about the responses we should make to neo-liberals who say that pouring money into health and education for those who can already afford it results in inefficiency and waste, rendering everyone worse off. There are many such people in the Arizona philosophy department. I do not believe that this economic argument has yet been won by the neo-liberals. A different response, though, is to think about the opportunities for the development of virtue that are lost when we introduce markets. I think that fear is one of the greatest barriers to the development of the virtues. It closes us down. Fundamentally, social justice is about the removal of fear, so that people are able to flourish. The neo-liberals would rather encourage and exploit fear, in all strata of society (they want themselves to be afraid of being a bit less rich, and respond accordingly).

11 April 2017

Sean Whitton: coffeereddit

Most people I know can handle a single coffee per day, sometimes even forgetting to drink it. I never could understand how they did it. Talking about this with a therapist I realised that the problem isn't necessarily the caffeine, it's my low tolerance of less than razor-sharp focus. Most people accept they have slumps in their focus and just work through them. (binarybear on reddit)

3 April 2017

Sean Whitton: A different reason there are so few tenure-track jobs in philosophy

Recently I heard a different reason suggested as to why there are fewer and fewer tenure-track jobs in philosophy. University administrators are taking control of the tenure review process; previously, departments made decisions and the administrators rubber-stamped them. The result of this is that it is easier to get tenure, because university administrators grant tenure based on quantitatively-measurable achievements, rather than a qualitative assessment of the candidate qua philosopher. If a department thought that someone shouldn't get tenure, the administration might turn around and say that they are going to grant it because the candidate has fulfilled such-and-such requirements. Since it is easier to get tenure, hiring someone at the assistant professor level is much riskier for a philosophy department: they have to assume the candidate will get tenure. So the pre-tenure phase is no longer a probationary period; that probation is being pushed onto post-docs and graduate students, which results in the intellectual maturity of published work going down. There are various assumptions in the above that could be questioned, but what's interesting is that it takes a lot of the blame for the current situation off the shoulders of faculty members (there have been accusations that they are not doing enough). If tenure-track hires are a bigger risk for the quality of the academic philosophers who end up with permanent jobs, it is good that departments are averse to that risk.

13 March 2017

Sean Whitton: Initial views of 5th edition DnD

I've been playing in a 5e campaign for around two months now. In the past ten days or so I've been reading various source books and Internet threads regarding the design of 5th edition. I'd like to draw some comparisons and contrasts between 5th edition and the 3rd edition family of games (D&D 3.5e and Paizo's Pathfinder, which may be thought of as 3.75e). The first thing I'd like to discuss is that wizards and clerics are no longer Vancian spellcasters. In rules terms, this is the idea that individual spells are pieces of ammunition. Spellcasters have a list of individual spells stored in their heads, and as they cast spells from that list, they cross off each item. Barring special rules about spontaneously converting prepared spells to healing spells, for clerics, the only way to add items back to the list is to take a night's rest. Contrast this with spending points from a pool of energy in order to, say, cast a fireball: the limiting factor on using spells is then having enough points in your mana pool, not having further castings of the spell waiting in memory. One of the design goals of 5th edition was to reduce the dominance of spellcasters at higher levels of play. The article to which I linked in the previous paragraph argues that this rebalancing requires the removal of Vancian magic. The idea, to the extent that I've understood it, is that Vancian magic is not an effective restriction on spellcaster power levels, so it is to be replaced with other restrictions; adding new restrictions while retaining the restrictions inherent in Vancian magic would leave spellcasters crippled. A further reason for removing Vancian magic was to defeat the so-called "five minute adventuring day". The combat ability of a party that contains higher-level Vancian spellcasters drops significantly once they've fired off their most powerful combat spells.
So adventuring groups would find themselves getting into a fight, and then immediately retreating to fully rest up in order to get their spells back. This removes interesting strategic and roleplaying possibilities involving the careful allocation of resources, and continuing to fight as hit points run low. There are some other related changes. Spell components are no longer used up when casting a spell, so you can use one piece of bat guano for every fireball your character ever casts, instead of each casting requiring a new piece. Correspondingly, you can use a spell focus, such as a cool wand, instead of a pouch full of material components: since the pouch never runs out, there's no mechanical change if a wizard uses an arcane focus instead. 0th-level spells may now be cast at will (although Pathfinder had this too), and there are decent 0th-level attack spells, so a spellcaster need not carry a crossbow or shortbow in order to have something to do on rounds when it would not be optimal to fire off one of their precious spells. I am very much in favour of these design goals. The five minute adventuring day gets old fast, and I want it to be possible for the party to rely on the cool abilities of non-spellcasters to deal with the challenges they face. However, I am concerned about the flavour changes that result from the removal of Vancian magic. These affect wizards and clerics differently, so I'll take each case in turn. Firstly, consider wizards. In third edition, a wizard had to prepare and cast Read Magic (the only spell they could prepare without a spellbook), and then set about working through their spellbook. This involved casting the spells they wanted to prepare, up until the last few triggering words or gestures that would cause the effect of the spell to manifest. They would commit these final parts of the spell to memory.
When it came to casting the spell, the wizard would say the final few words, make the required gestures, and bring out the relevant material components from their component pouch. The completed spell would be ripped out of their mind, to manifest its effect in the world. We see that the casting of a spell is a highly mentally draining activity (it rips the spell out of the caster's memory!), not to be undertaken lightly. Thus it is natural that a wizard would learn to use a crossbow for basic damage-dealing. Magic is not something that comes very naturally to the wizard, to be deployed in combat as readily as the fighter swings their sword. They are not a superhero or video game character, "pew pew"-ing their way to victory. This is a very cool starting point upon which to roleplay an academic spellcaster, not really available outside of tabletop games. I see it as a distinction between magical abilities and real magic. Secondly, consider clerics. Most of the remarks in the previous paragraph apply, suitably reworked to be in terms of requesting certain abilities from the deity to whom the cleric is devoted. Additionally, there is the downgrading of the importance of the cleric's healing magic in 5th edition. Characters can heal themselves by taking short and long rests. Previously, natural healing was very slow, so a cleric would need to convert all their remaining magic to healing spells at the end of the day, and hope that it was enough to bring the party up to fighting shape. Again, this made the party of adventurers seem less like superheroes or video game characters. Magic had a special, important and unique role that couldn't be replaced by the abilities of other classes.
There are some rules in the back of the DMG ("Slow Natural Healing", "Healing Kit Dependency", "Lingering Wounds") which can be used to make healing magic more important. I'm not sure how well they would work without changes to the cleric class.
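The mechanical contrast between Vancian casting and a spell-point pool can be sketched in a few lines of Python. This is only an illustrative model of the two resource systems, not any edition's actual rules; the class names, spells and numbers are all invented for the example.

```python
# Illustrative contrast between Vancian casting and spell points.
# Not any edition's actual rules; names and numbers are invented.

class VancianCaster:
    """Each prepared spell is a piece of ammunition: cast it and it's gone."""
    def __init__(self, prepared):
        self.prepared = list(prepared)  # e.g. ["fireball", "fireball", "sleep"]

    def cast(self, spell):
        if spell not in self.prepared:
            raise ValueError(f"{spell} is not (or no longer) prepared")
        self.prepared.remove(spell)  # crossed off until the next night's rest

    def long_rest(self, prepared):
        self.prepared = list(prepared)  # a night's rest refills the list


class PointCaster:
    """Casting draws from a single pool; any known spell can be repeated."""
    COSTS = {"sleep": 1, "fireball": 3}

    def __init__(self, points):
        self.points = points

    def cast(self, spell):
        cost = self.COSTS[spell]
        if cost > self.points:
            raise ValueError("not enough points in the pool")
        self.points -= cost


v = VancianCaster(["fireball", "sleep"])
v.cast("fireball")
# v.cast("fireball") would now fail: no second casting was prepared.

p = PointCaster(points=7)
p.cast("fireball")
p.cast("fireball")  # fine: the pool, not a prepared list, is the limit
```

The limiting resource is different in kind: the Vancian caster is constrained per spell, the point-based caster only in aggregate, which is why the latter feels more like "magical abilities" than "real magic".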
I would like to find ways to restore the feel and flavour of Vancian clerics and wizards to 5th edition, without sacrificing the improvements that have been made to let other party members do cool stuff too. I hope it is possible to keep magic cool and unique without making it dominate the game. It would be easy to forbid the use of arcane foci, and to say that material component pouches run out if the party do not visit a suitable marketplace often enough. This would not have a significant mechanical effect, and could enhance roleplaying possibilities. I am not sure how I could deal with the other issues I've discussed without breaking the game. The second thing I would like to discuss is bounded accuracy. Under this design principle, the modifiers to dice rolls grow much more slowly, while the gain of hit points remains unbounded. Under third edition, it was mechanically impossible for a low-level monster to land a hit on a higher-level adventurer, rendering such monsters totally useless even in overwhelming numbers. With bounded accuracy, it's always possible for a low-level monster to hit a PC, even if it does insignificant damage. That means that multiple low-level monsters pose a threat. This change opens up many roleplaying opportunities: low-level character abilities stay relevant, and monster types can remain involved in stories without being given implausible new abilities to stop them falling too far behind the PCs. However, I'm a little worried that it might make high level player characters feel a lot less powerful to play. I want to cease to be a fragile adventurer and become a world-changing hero at later levels, rather than forever remain vulnerable to the things that I was vulnerable to at the start of the game. This desire might just be the result of the video games which I played growing up. In the JRPGs I played and in Diablo II, enemies in earlier areas of the map were no threat at all once you'd levelled up by conquering higher-level areas.
My concerns about bounded accuracy might just be that it clashes with my own expectations of how fantasy heroes work. A good DM might be able to avoid these worries entirely. The final thing I'd like to discuss is the various simplifications in the rules of 5th edition, when it is compared with 3rd edition and Pathfinder. Attacks of opportunity are only provoked when leaving a threatened square; you can go ahead and cast a spell when in melee with someone. There is a very short list of skills, and party members are much closer to each other in skills, now that you can't pump more and more ranks into one or two abilities. Feats as a whole are an optional rule. At first I was worried about these simplifications. I thought that they might make character building and tactics in combat a lot less fun. However, I am now broadly in favour of all of these changes, for two reasons. Firstly, they make the game much more accessible, and make it far more viable to play without relying on a computer program to fill in the boxes on your character sheet. In my 5th edition group, two of us have played 3rd edition games, and the other four have never played any tabletop games before. But nobody has any problems figuring out their modifiers, because it is always simply your ability bonus or penalty, plus your proficiency bonus if relevant. And advantage and disadvantage are so much more fun than getting an additional plus or minus two. Secondly, these simplifications downplay the importance of the maths, which means it is far less likely to be broken. It is easier to ensure that a smaller core of rules is balanced than it is to keep in check a larger mass of rules, constantly being supplemented by more and more add-on books containing more and more feats and prestige classes. That means that players make their characters cool by roleplaying them in interesting ways, not by coming up with ability combos and synergies in advance of actually sitting down to play.
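The arithmetic behind bounded accuracy and advantage is easy to check directly. This is my own back-of-the-envelope calculation, not anything from the rulebooks; the attack bonus and armour class figures are invented for illustration.

```python
from fractions import Fraction

def hit_chance(bonus, ac):
    """Chance that d20 + bonus meets or beats an armour class."""
    hits = sum(1 for roll in range(1, 21) if roll + bonus >= ac)
    return Fraction(hits, 20)

def adv_chance(bonus, ac):
    """With advantage, roll two d20s and keep the higher: hit unless both miss."""
    miss = 1 - hit_chance(bonus, ac)
    return 1 - miss * miss

# Under bounded accuracy a lowly +3 attacker still threatens a
# high-level PC with AC 18: a 30% chance per swing, so a mob of
# such monsters stays genuinely dangerous.
print(hit_chance(3, 18))      # 3/10

# Advantage vs. a flat +2 bonus, at the same target number:
print(hit_chance(3 + 2, 18))  # flat +2: 2/5
print(adv_chance(3, 18))      # advantage: 51/100
```

In the middle of the die's range, advantage is worth rather more than +2, and rolling two dice is simply more fun than adding a situational modifier.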
Similarly, DMs can focus on flavouring monsters, rather than writing up longer stat blocks. I think that this last point reflects what I find most worthwhile about tabletop RPGs. I like characters to encounter cool NPCs and cool situations, and then react in cool ways. I don't care that much about character creation. (I used to care more about this, but I think it was mainly because of interesting options for magic items, which haven't gone away.) The most important thing is exercising group creativity while actually playing the game, rather than players and DMs having to spend a lot of time preparing the maths in advance of playing. Fifth edition enables this by preventing the rules from getting in the way, whether because they're broken or because they're overly complex. I think this is why I love Exalted: stunting is vital, and there is social combat. I hope to be able to work out a way to restore Vancian magic, but even without that, on balance, fifth edition seems like a better way to do group storytelling about fantasy heroes. Hopefully I will have an opportunity to DM a 5th edition campaign. I am considering disallowing all homebrew content, and classes and races from supplemental books: stick to the well-balanced core rules, and do everything else by means of roleplaying and flavour. This is far less gimmicky, if more work for unimaginative players (such as myself!). Some further interesting reading:

7 February 2017

Sean Whitton: reclaiming conversation

On Friday night I attended a talk by Sherry Turkle called "Reclaiming Conversation: The Power of Talk in a Digital Age". Here are my notes.
Turkle is an anthropologist who interviews people from different generations about their communication habits. She has observed cross-generational changes thanks to (a) the proliferation of instant messaging apps such as WhatsApp and Facebook Messenger; and (b) fast web searching from smartphones. Her main concern is that conversation is being trivialised. Consider six or seven college students eating a meal together. Turkle's research has shown that the etiquette among such a group has shifted: so long as at least three people are engaged in conversation, others at the table feel comfortable turning their attention to their smartphones. But then the topics of verbal conversation will tend away from serious issues; you wouldn't talk about your mother's recent death if anyone at the table was texting. There are also studies that purport to show that the visibility of someone's smartphone causes them to take a conversation less seriously. The hypothesis is that the smartphone is a reminder of all the other places they could be, instead of with the person they are with. A related cause of the trivialisation of conversation is that people are far less willing to make themselves emotionally vulnerable by talking about serious matters. People have a high degree of control over the interactions that take place electronically (they can think about their reply for much longer, for example). Texting is not open-ended in the way a face-to-face conversation is. People are unwilling to give up this control, so they choose texting over talking. What is the upshot of these two respects in which conversation is being trivialised? Firstly, there are psycho-social effects on individuals, because people are missing out on opportunities to build relationships. But secondly, there are political effects. Disagreeing about politics immediately makes a conversation quite serious, and people just aren't having those conversations. This contributes to polarisation.
Note that this is quite distinct from the problems of fake news and the bubbling effects of search engine algorithms, including Facebook's news feed. It would be much easier to tackle fake news if people talked about it with people around them who would be likely to disagree with them. Turkle understands connection as a capacity for solitude and also for conversation. The drip feed of information from the Internet prevents us from using our capacity for solitude. But then we fail to develop a sense of self. Then when we finally do meet other people in real life, we can't hear them, because we just use them to try to establish a sense of self. Turkle wants us to be more aware of the effects that our smartphones can have on conversations. People very rarely take their phone out during a conversation because they want to escape from that conversation. Instead, they think that the phone will contribute to that conversation, by sharing some photos, or looking up some information online. But once the phone has come out, the conversation almost always takes a turn for the worse. If we were more aware of this, we would have access to deeper interactions. A further respect in which the importance of conversation is being downplayed is in the relationships between teachers and students. Students would prefer to get answers by e-mail than to build a relationship with their professors, but of course they are expecting far too much of e-mail, which can't teach them in the way interpersonal contact can. All of the above is, as I said, cross-generational. Something that is unique to millennials and below is that we seek validation for the way that we feel using social media. A millennial is not sure how they feel until they send a text or make a broadcast (this makes them awfully dependent on others). Older generations feel something, and then seek out social interaction (presumably to share it, but not in the social media sense of "share"). What does Turkle think we can do about all this?
She had one positive suggestion and one negative suggestion. In response to student or colleague e-mails asking for something that ought to be discussed face-to-face, reply "I'm thinking", and you'll find they come to you. She doesn't want anyone to write empathy apps in response to her findings. For once, more tech is definitely not the answer. Turkle also made reference to the study reported here and here and here.

31 January 2017

Sean Whitton: sorcererscode

"The Sorcerer's Code": this is a well-written biographical piece on Stallman.

9 January 2017

Sean Whitton: jan17vcspkg

There have been two long threads on the debian-devel mailing list about the representation of the changes to upstream source code made by Debian maintainers. Here are a few notes for my own reference. I spent a lot of time defending the workflow I described in dgit-maint-merge(7) (which was inspired by this blog post). However, I came to be convinced that there is a case for a manually curated series of patches for certain classes of package. It will depend on how upstream uses git (rebasing or merging) and on whether the Debian delta from upstream is significant and/or long-standing. I still think that we should be using dgit-maint-merge(7) for leaf or near-leaf packages, because it saves so much volunteer time that can be better spent on other things. When upstream does use a merging workflow, one advantage of the dgit-maint-merge(7) workflow is that Debian's packaging is just another branch of development. Now consider packages where we do want a manually curated patch series. It is very hard to represent such a series in git. The only natural way to do it is to continually rebase the patch series against an upstream branch, but public branches that get rebased are not a good idea. The solution that many people have adopted is to represent their patch series as a folder full of .diff files, and then use gbp pq to convert this into a rebasing branch. This branch is not shared. It is edited, rebased, and then converted back to the folder of .diff files, the changes to which are then committed to git. One of the advantages of dgit is that there now exists an official, non-rebasing git history of uploads to the archive. It would be nice if we could represent curated patch series as branches in the dgit repos, rather than as folders full of .diff files. But as I just described, this is very hard. However, Ian Jackson has the beginnings of a workflow that just might fit the bill.
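The gbp pq round trip described above looks roughly like this. It is only a workflow sketch, assuming a conventional package with a debian/patches/series file, to be run inside the package's git checkout; the commit message is invented.

```shell
gbp pq import            # build a temporary, unshared patch-queue branch
                         # from the files in debian/patches
# ... edit the commits: reorder, squash, amend ...
gbp pq rebase            # rebase the patch queue onto the updated Debian branch
gbp pq export            # write the commits back out as debian/patches files
git add debian/patches
git commit -m "Refresh patch series"
# The patch-queue branch itself is never pushed anywhere public,
# so its constant rebasing does no harm.
```

The folder of .diff files committed to git is thus the canonical representation; the rebasing branch is a disposable editing convenience.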
