Sean Whitton: Debian Policy call for participation -- October 2017

SC) but also an encryption subkey (marked E), a separate signature key
(S), and two authentication keys (marked A), which I use as RSA keys to
log into servers using SSH, thanks to the Monkeysphere project.
pub rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
8DC901CE64146C048AD50FBB792152527B75921E
uid [ultimate] Antoine Beaupré <anarcat@anarc.at>
uid [ultimate] Antoine Beaupré <anarcat@koumbit.org>
uid [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
uid [ultimate] Antoine Beaupré <anarcat@debian.org>
sub rsa2048/B7F648FED2DF2587 2012-07-18 [A]
sub rsa2048/604E4B3EEE02855A 2012-07-20 [A]
sub rsa4096/A51D5B109C5A5581 2009-05-29 [E]
sub rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (sub) and identities (uid) are bound by the main
certification key using cryptographic self-signatures. So while an
attacker stealing a private subkey can spoof signatures in my name or
authenticate to other servers, that key can always be revoked by the
main certification key. But if the certification key gets stolen, all
bets are off: the attacker can create or revoke identities or subkeys as
they wish. In a catastrophic scenario, an attacker could even steal the
key and remove your copies, taking complete control of the key, without
any possibility of recovery. Incidentally, this is why it is so
important to generate a revocation certificate and store it offline.
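Generating that revocation certificate is a one-liner; the key ID here is the one from the listing above, and revoke.asc is an arbitrary output name:

```shell
# Create an ASCII-armored revocation certificate for the primary key
# and store it offline (printed on paper, or on media kept in a safe).
# gpg will interactively ask for a revocation reason and a confirmation.
gpg --output revoke.asc --gen-revoke 0x792152527B75921E
```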
So by moving the certification key offline, we reduce the attack surface
on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or
signature) can stay online but if they get stolen, the certification key
can revoke those keys without having to revoke the main certification
key as well. Note that a stolen encryption key is a different problem:
even if we revoke the encryption subkey, this will only affect future
encrypted messages. Previous messages will be readable by the attacker
with the stolen subkey even if that subkey gets revoked, so the benefits
of revoking encryption subkeys are more limited.
--iter-time argument when creating a LUKS partition to increase the
key-derivation delay, which makes brute-forcing much harder. Indeed,
GnuPG 2.x doesn't have a run-time option to configure the
key-derivation algorithm, although a patch was introduced recently to
make the delay configurable at compile time in gpg-agent, which is now
responsible for all secret key operations.
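As a sketch, increasing the iteration time when formatting a LUKS volume looks like this (the device name is a placeholder):

```shell
# Format a partition as LUKS, asking cryptsetup to benchmark the
# key-derivation function for 5000 milliseconds instead of the default,
# which slows down offline brute-force attacks on the passphrase.
# /dev/sdb1 stands in for your actual external device; this command
# destroys any existing data on it.
cryptsetup luksFormat --iter-time 5000 /dev/sdb1
```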
The downside of external volumes is complexity: GnuPG makes it difficult
to extract secrets out of its keyring, which makes the first setup
tricky and error-prone. This is easier in the 2.x series thanks to the
new storage system and the associated keygrip files, but it still
requires arcane knowledge of GPG internals. It is also inconvenient to
use secret keys stored outside your main keyring when you actually do
need to use them, as GPG doesn't know where to find those keys anymore.
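For reference, the keygrips that tie a key in the 2.x keyring to its on-disk secret material can be listed like this (the --with-keygrip option assumes a reasonably recent GnuPG 2.x):

```shell
# List secret keys along with their keygrips; each keygrip names a file
# under ~/.gnupg/private-keys-v1.d/ holding that key's secret material,
# which is what you would move to (or from) an external encrypted volume.
gpg --list-secret-keys --with-keygrip
```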
Another option is to set up a separate air-gapped system to perform
certification operations. An example is the PGP clean
room project,
which is a live system based on Debian and designed by DD Daniel Pocock
to operate an OpenPGP and X.509 certificate authority using commodity
hardware. The basic principle is to store the secrets on a different
machine that is never connected to the network and, therefore, not
exposed to attacks, at least in theory. I have personally discarded that
approach because I feel air-gapped systems provide a false sense of
security: data eventually does need to come in and out of the system,
somehow, even if only to propagate signatures out of the system, which
exposes the system to attacks.
System updates are similarly problematic: to keep the system secure,
timely security updates need to be deployed to the air-gapped system. A
common use pattern is to share data through USB keys, which introduce a
vulnerability where attacks like
BadUSB can infect the air-gapped
system. From there, there is a multitude of exotic ways of exfiltrating
the data using
LEDs,
infrared
cameras,
or the good old
TEMPEST
attack. I therefore concluded the complexity tradeoffs of an air-gapped
system are not worth it. Furthermore, the workflow for air-gapped
systems is complex: even though PGP clean room went a long way, it's
still lacking even simple scripts that allow signing or transferring
keys, which is a problem shared by the external LUKS storage approach.
keytocard command in the --edit-key interface), whereas moving private
key material to a LUKS-encrypted device or air-gapped computer is more
complex.
Keycards are also useful if you operate on multiple computers. A common
problem when using GnuPG on multiple machines is how to safely copy and
synchronize private key material among different devices, which
introduces new security problems. Indeed, a "good rule of thumb in a
forensics lab", according to Robert J. Hansen on the GnuPG mailing
list, is to "store the minimum
personal data possible on your systems". Keycards provide the best of
both worlds here: you can use your private key on multiple computers
without actually storing it in multiple places. In fact, Mike Gerwitz
went as far as
saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it). Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin card reader, you also need to fish the reader from your backpack or wherever, etc.

"Smartcards" here refer to older OpenPGP cards that relied on the ISO/IEC 7816 smartcard connectors and therefore needed a specially built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop, and I have yet to hear of someone shoving a full laptop into a washing machine.

When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key, a rather expensive operation, as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificate and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically.
Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But, for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume.

Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle.

A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have it on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.
To try this out, you can first create a test key in a temporary GPG
home directory:

export GNUPGHOME=$(mktemp -d)
gpg --generate-key
gpg --edit-key UID

Then use the key command to select the first subkey and copy it to
the keycard (you can also use the addcardkey command to just
generate a new subkey directly on the keycard):
gpg> key 1
gpg> keytocard
Then you can finish with the save command, which will remove the local
copy of the private key, so the keycard will be the only copy of the
secret key. Otherwise, use the quit command to save the key on the
keycard but keep the secret key in your normal keyring: answer "n" to
"save changes?" and "y" to "quit without saving?". This way the
keycard is a backup of your secret key.

A key that has been moved to a keycard is easy to spot in listings:
--list-secret-keys will show it as sec> (or ssb> for subkeys) instead
of the usual sec keyword. If the key is completely missing (for
example, if you moved it to a LUKS container), the # sign is used
instead. If you need to use a key from a keycard backup, you simply do
gpg --card-edit with the key plugged in, then type the fetch command
at the prompt to fetch the public key that corresponds to the private
key on the keycard (which stays on the keycard). This is the same
procedure as the one to use the secret key on another computer.
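To confirm that the move worked, a quick check of the card is handy (the exact output varies by card model):

```shell
# Show the keycard's status: serial number, cardholder, PIN retry
# counters, and which key slots (signature, encryption, authentication)
# are populated. A freshly moved subkey should appear in its slot.
gpg --card-status
```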
This article first appeared in the Linux Weekly News.
This was something sure to be crammed full of warm secrets, like an antique clock built when peace filled the world. Potentiality knocks at the door of my heart.

And I was fed for the day. All my perceived needs dropped away; that's all it takes. This stands in stark contrast to reading philosophy, which is almost always draining rather than nourishing, even philosophy I really want to read. Especially having to read philosophy at the weekend. (The quotation is from On Seeing the 100% Perfect Girl on a Beautiful April Morning.)
SOURCE_DATE_EPOCH
support:
"There was still plenty of world left to be explored, if it came to that." I don't understand this remark: there really wasn't much left. They'd have to make changes in order to squash more in, as I've found when trying to convert the world of Arcadia to D&D.
Most people I know can handle a single coffee per day, sometimes even forgetting to drink it. I never could understand how they did it. Talking about this with a therapist I realised that the problem isn't necessarily the caffeine, it's my low tolerance of less than razor-sharp focus. Most people accept they have slumps in their focus and just work through them. (binarybear on reddit)
debian-devel mailing list about the representation of the changes to
upstream source code made by Debian maintainers. Here are a few notes
for my own reference.
I spent a lot of time defending the workflow I described in
dgit-maint-merge(7)
(which was inspired by this blog post).
However, I came to be convinced that there is a case for a manually
curated series of patches for certain classes of package. It will
depend on how upstream uses git (rebasing or merging) and on whether
the Debian delta from upstream is significant and/or long-standing. I
still think that we should be using dgit-maint-merge(7)
for leaf or
near-leaf packages, because it saves so much volunteer time that can
be better spent on other things.
When upstream does use a merging workflow, one advantage of the
dgit-maint-merge(7) workflow is that Debian's packaging is just
another branch of development.
Now consider packages where we do want a manually curated patch
series. It is very hard to represent such a series in git. The only
natural way to do it is to continually rebase the patch series against
an upstream branch, but public branches that get rebased are not a
good idea. The solution that many people have adopted is to represent
their patch series as a folder full of .diff files, and then use
gbp pq to convert this into a rebasing branch. This branch is not
shared. It is edited, rebased, and then converted back to the folder
of .diff files, the changes to which are then committed to git.
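As a sketch, that round trip looks like this (branch names are illustrative; gbp pq derives the patch-queue branch name from whatever branch you start on):

```shell
# Import debian/patches/ as individual commits on a patch-queue branch
# (e.g. patch-queue/master when starting from master) and switch to it.
gbp pq import

# Edit commits and rebase against the new upstream as needed; the
# upstream branch name here is an assumption, not a gbp convention.
git rebase upstream

# Regenerate debian/patches/ from the commits and switch back to the
# packaging branch; the updated .diff files can then be committed.
gbp pq export
```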
One of the advantages of dgit is that there now exists an official,
non-rebasing git history of uploads to the archive.
It would be nice if we could represent curated patch series as
branches in the dgit repos, rather than as folders full of .diff
files. But as I just described, this is very hard. However, Ian
Jackson has the beginnings of a workflow that just might
fit the bill.