Search Results: "dkg"

9 March 2024

Reproducible Builds: Reproducible Builds in February 2024

Welcome to the February 2024 report from the Reproducible Builds project! In our reports, we try to outline what we have been up to over the past month as well as mentioning some of the important things happening in software supply-chain security.

Reproducible Builds at FOSDEM 2024 Core Reproducible Builds developer Holger Levsen presented at the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium. However, that wasn't the only talk related to Reproducible Builds; please see our comprehensive FOSDEM 2024 news post for the full details and links.

Maintainer Perspectives on Open Source Software Security Bernhard M. Wiedemann spotted that a recent report entitled Maintainer Perspectives on Open Source Software Security, written by Stephen Hendrick and Ashwin Ramaswami of the Linux Foundation, sports an infographic which mentions that "56% of [polled] projects support reproducible builds".

Mailing list highlights From our mailing list this month:

Distribution work In Debian this month, 5 reviews of Debian packages were added, 22 were updated and 8 were removed, adding to Debian's knowledge about identified issues. A number of issue types were updated as well. [ ][ ][ ][ ] In addition, Roland Clobus posted his 23rd update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours), and that there will likely be further work at a MiniDebCamp in Hamburg. Furthermore, Roland also responded in depth to a query about a previous report.
Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build that attempts to reproduce an existing package within a koji build environment. Although the project's README file lists a number of fields that will always or almost always vary, and there is a non-zero list of other known issues, this is an excellent first step towards full Fedora reproducibility.
Jelle van der Waa introduced a new linter rule for Arch Linux packages in order to detect cache files left over by the Sphinx documentation generator, which are unreproducible by nature and should not be packaged. At the time of writing, 7 packages in the Arch repository are affected by this.
Elsewhere, Bernhard M. Wiedemann posted another monthly update covering his reproducibility work in openSUSE.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 256, 257 and 258 to Debian and made the following additional changes:
  • Use a deterministic name instead of trusting gpg's --use-embedded-filenames. Many thanks to Daniel Kahn Gillmor <dkg@debian.org> for reporting this issue and providing feedback. [ ][ ]
  • Don't error-out with a traceback if we encounter struct.unpack-related errors when parsing Python .pyc files. (#1064973). [ ]
  • Don't try and compare rdb_expected_diff on non-GNU systems as %p formatting can vary, especially with respect to MacOS. [ ]
  • Fix compatibility with pytest 8.0. [ ]
  • Temporarily fix support for Python 3.11.8. [ ]
  • Use the 7zip package (over p7zip-full) after a Debian package transition. (#1063559). [ ]
  • Bump the minimum Black source code reformatter requirement to 24.1.1+. [ ]
  • Expand an older changelog entry with a CVE reference. [ ]
  • Make test_zip black clean. [ ]
In addition, James Addison contributed a patch to correctly parse the headers generated by diff(1) [ ][ ] thanks! And lastly, Vagrant Cascadian pushed diffoscope updates in GNU Guix to versions 255, 256 and 258, and updated trydiffoscope to 67.0.6.
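For readers who have not used the tool before, a minimal invocation simply compares two artifacts and can write an HTML report (the filenames here are illustrative):
diffoscope --html report.html package_1.0_amd64.deb package_1.0_amd64.rebuild.deb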

reprotest reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes, including:
  • Create a (working) proof of concept for enabling a specific number of CPUs. [ ][ ]
  • Consistently use 398 days for time variation rather than choosing randomly and update README.rst to match. [ ][ ]
  • Support a new --vary=build_path.path option. [ ][ ][ ][ ]
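To give a flavour of how reprotest is driven, here is a minimal sketch of a run that varies only the build path between the two builds (the positional arguments and exact --vary syntax here are from memory and may differ between reprotest versions; consult its manual page):
reprotest --vary=-all,build_path . -- null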

Website updates A number of improvements were made to our website this month, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Temporarily disable upgrading/bootstrapping Debian unstable and experimental as they are currently broken. [ ][ ]
    • Use the 64-bit amd64 kernel on all i386 nodes; no more 686 PAE kernels. [ ]
    • Add an Erlang package set. [ ]
  • Other changes:
    • Grant Jan-Benedict Glaw shell access to the Jenkins node. [ ]
    • Enable debugging for NetBSD reproducibility testing. [ ]
    • Use /usr/bin/du --apparent-size in the Jenkins shell monitor. [ ]
    • Revert "reproducible nodes: mark osuosl2 as down". [ ]
    • Thanks again to Codethink for doubling the RAM on our arm64 nodes. [ ]
    • Only set /proc/$pid/oom_score_adj to -1000 if it has not already been done. [ ]
    • Add the opemwrt-target-tegra and jtx task to the list of zombie jobs. [ ][ ]
Vagrant Cascadian also made the following changes:
  • Overhaul the handling of OpenSSH configuration files after updating from Debian bookworm. [ ][ ][ ]
  • Add two new armhf architecture build nodes, virt32z and virt64z, and insert them into the Munin monitoring. [ ][ ] [ ][ ]
In addition, Alexander Couzens updated the OpenWrt configuration in order to replace the tegra target with mpc85xx [ ], Jan-Benedict Glaw updated the NetBSD build script to use a separate $TMPDIR to mitigate out of space issues on a tmpfs-backed /tmp [ ] and Zheng Junjie added a link to the GNU Guix tests [ ]. Lastly, node maintenance was performed by Holger Levsen [ ][ ][ ][ ][ ][ ] and Vagrant Cascadian [ ][ ][ ][ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

9 February 2024

Reproducible Builds (diffoscope): diffoscope 256 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 256. This version includes the following changes:
* CVE-2024-25711: Use a deterministic name when extracting content from GPG
  artifacts instead of trusting the value of gpg's --use-embedded-filenames.
  This prevents a potential information disclosure vulnerability that could
  have been exploited by providing a specially-crafted GPG file with an
  embedded filename of, say, "../../.ssh/id_rsa".
  Many thanks to Daniel Kahn Gillmor <dkg@debian.org> for reporting this
  issue and providing feedback.
  (Closes: reproducible-builds/diffoscope#361)
* Temporarily fix support for Python 3.11.8 re. a potential regression
  with the handling of ZIP files. (See reproducible-builds/diffoscope#362)
You can find out more by visiting the project homepage.
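To illustrate the class of problem addressed by CVE-2024-25711 using plain GnuPG (a generic sketch, not diffoscope's internal code; the filenames are illustrative):
gpg --use-embedded-filename suspicious.gpg            # risky: the output filename comes from the attacker-controlled artifact
gpg --output extracted.bin --decrypt suspicious.gpg   # safer: the caller chooses a deterministic name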

7 December 2023

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, December 2023

dkg's New OpenPGP certificate in December 2023 In December of 2023, I'm moving to a new OpenPGP certificate. You might know my old OpenPGP certificate, which had a fingerprint of C29F8A0C01F35E34D816AA5CE092EB3A5CA10DBA. My new OpenPGP certificate has a fingerprint of: D477040C70C2156A5C298549BB7E9101495E6BF7. Both certificates have the same set of User IDs:
  • Daniel Kahn Gillmor
  • <dkg@debian.org>
  • <dkg@fifthhorseman.net>
You can find a version of this transition statement signed by both the old and new certificates at: https://dkg.fifthhorseman.net/2023-dkg-openpgp-transition.txt The new OpenPGP certificate is:
-----BEGIN PGP PUBLIC KEY BLOCK-----
xjMEZXEJyxYJKwYBBAHaRw8BAQdA5BpbW0bpl5qCng/RiqwhQINrplDMSS5JsO/Y
O+5Zi7HCwAsEHxYKAH0FgmVxCcsDCwkHCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0
QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmfUAgfN9tyTSxpxhmHA1r63GiI4v6NQ
mrrWVLOBRJYuhQMVCggCmwECHgEWIQTUdwQMcMIValwphUm7fpEBSV5r9wAAmaEA
/3MvYJMxQdLhIG4UDNMVd2bsovwdcTrReJhLYyFulBrwAQD/j/RS+AXQIVtkcO9b
l6zZTAO9x6yfkOZbv0g3eNyrAs0QPGRrZ0BkZWJpYW4ub3JnPsLACwQTFgoAfQWC
ZXEJywMLCQcJELt+kQFJXmv3RxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNlcXVv
aWEtcGdwLm9yZ4l+Z3i19Uwjw3CfTNFCDjRsoufMoPOM7vM8HoOEdn/vAxUKCAKb
AQIeARYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AAALZQEAhJsgouepQVV98BHUH6Sv
WvcKrb8dQEZOvHFbZQQPNWgA/A/DHkjYKnUkCg8Zc+FonqOS/35sHhNA8CwqSQFr
tN4KzRc8ZGtnQGZpZnRoaG9yc2VtYW4ubmV0PsLACgQTFgoAfQWCZXEJywMLCQcJ
ELt+kQFJXmv3RxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNlcXVvaWEtcGdwLm9y
ZxLvwkgnslsAuo+IoSa9rv8+nXpbBdab2Ft7n4H9S+d/AxUKCAKbAQIeARYhBNR3
BAxwwhVqXCmFSbt+kQFJXmv3AAAtFgD4wqcUfQl7nGLQOcAEHhx8V0Bg8v9ov8Gs
Y1ei1BEFwAD/cxmxmDSO0/tA+x4pd5yIvzgfGYHSTxKS0Ww3hzjuZA7NE0Rhbmll
bCBLYWhuIEdpbGxtb3LCwA4EExYKAIAFgmVxCcsDCwkHCRC7fpEBSV5r90cUAAAA
AAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmd7X4TgiINwnzh4jar0
Pf/b5hgxFPngCFxJSmtr/f0YiQMVCggCmQECmwECHgEWIQTUdwQMcMIValwphUm7
fpEBSV5r9wAAMuwBAPtMonKbhGOhOy+8miAb/knJ1cIPBjLupJbjM+NUE1WyAQD1
nyGW+XwwMrprMwc320mdJH9B0jdokJZBiN7++0NoBM4zBGVxCcsWCSsGAQQB2kcP
AQEHQI19uRatkPSFBXh8usgciEDwZxTnnRZYrhIgiFMybBDQwsC/BBgWCgExBYJl
cQnLCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBn
cC5vcmfCopazDnq6hZUsgVyztl5wmDCmxI169YLNu+IpDzJEtQKbAr6gBBkWCgBv
BYJlcQnLCRB3LRYeNc1LgUcUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lh
LXBncC5vcmcQglI7G7DbL9QmaDkzcEuk3QliM4NmleIRUW7VvIBHMxYhBHS8BMQ9
hghL6GcsBnctFh41zUuBAACwfwEAqDULksr8PulKRcIP6N9NI/4KoznyIcuOHi8q
Gk4qxMkBAIeV20SPEnWSw9MWAb0eKEcfupzr/C+8vDvsRMynCWsDFiEE1HcEDHDC
FWpcKYVJu36RAUlea/cAAFD1AP0YsE3Eeig1tkWaeyrvvMf5Kl1tt2LekTNWDnB+
FUG9SgD+Ka8vfPR8wuV8D3y5Y9Qq9xGO+QkEBCW0U1qNypg65QHOOARlcQnLEgor
BgEEAZdVAQUBAQdAWTLEa0WmnhUmDBdWXX0ZlYAa4g1CK/fXg0NPOQSteA4DAQgH
wsAABBgWCgByBYJlcQnLCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9u
cy5zZXF1b2lhLXBncC5vcmexrMBZe0QdQ+ZJOZxFkAiwCw2I7yTSF2Ox9GVFWKmA
mAKbDBYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AABcJQD/f4ltpSvLBOBEh/C2dIYa
dgSuqkCqq0B4WOhFRkWJZlcA/AxqLWG4o8UrrmwrmM42FhgxKtEXwCSHE00u8wR4
Up8G
=9Yc8
-----END PGP PUBLIC KEY BLOCK-----
When I have some reasonable number of certifications, i'll update the certificate associated with my e-mail addresses on https://keys.openpgp.org, in DANE, and in WKD. Until then, those lookups should continue to provide the old certificate.
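If you want to check the statement yourself, here is a minimal sketch, assuming the armored block above has been saved as dkg-new.asc (an illustrative filename) and that the statement is clearsigned:
gpg --import dkg-new.asc
wget https://dkg.fifthhorseman.net/2023-dkg-openpgp-transition.txt
gpg --verify 2023-dkg-openpgp-transition.txt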

27 September 2023

Jonathan McDowell: onak 0.6.3 released

Yesterday I tagged a new version of onak, my OpenPGP compatible keyserver. I'd spent a bit of time during DebConf doing some minor cleanups, in particular an annoying systemd socket activation issue I'd been seeing. That turned out to be due to completely failing to compile in the systemd support, even when it was detected. There was also a signature verification issue with certain Ed25519 signatures (thanks to Antoine Beaupré for making me dig into that one), along with various code cleanups. I also worked on Stateless OpenPGP CLI support, which is something I talked about when I released 0.6.2. It isn't something that's suitable for release, but it is sufficient to allow running the OpenPGP interoperability test suite verification tests, which I'm pleased to say all now pass. For the next release I'm hoping the OpenPGP crypto refresh process will have completed, which at the very least will mean adding support for v6 packet types and fingerprints. The PostgreSQL DB backend could also use some love, and I might see if performance with SQLite3 has improved any. Anyway. Available locally or via GitHub.
0.6.3 - 26th September 2023
  • Fix systemd detection + socket activation
  • Add CMake checking for Berkeley DB
  • Minor improvements to keyd logging
  • Fix decoding of signature creation time
  • Relax version check on parsing signature + key packets
  • Improve HTML escaping
  • Handle failed database initialisation more gracefully
  • Fix bug with EDDSA signatures with top 8+ bits unset

29 June 2023

C.J. Collier: Converting a windows install to a libvirt VM

Reduce the size of your c: partition to the smallest it can be and then turn off Windows, with the understanding that you will never boot this system on the iron ever again.
Boot into a netinst installer image (no GUI). Hold alt and press left arrow a few times until you get to a prompt to press enter. Press enter. In this example, /dev/sda is your Windows disk, which contains the c: partition,
and /dev/disk/by-id/usb0 is the USB-3 attached SATA controller that you have your SSD attached to (please find an example attached). This SSD should be equal to or larger than the Windows disk for best compatibility. [Photo of a USB-3 attached SATA controller] To find the literal path names of your detected drives you can run fdisk -l. Pay attention to the names of the partitions and the sizes of the drives to help determine which is which. Once you have a shell in the netinst installer, you should be able to run a command like the following. This will duplicate the disk located at if (in file) to the disk located at of (out file) while showing progress as the status.
dd if=/dev/sda of=/dev/disk/by-id/usb0 status=progress
If you confirm that dd is available on the netinst image and the previous command runs successfully, test that your Windows partition is visible in the new disk's partition table. The start block of the Windows partition on each should match, as should the partition size.
fdisk -l /dev/disk/by-id/usb0
fdisk -l /dev/sda
If the output from the first is the same as the output from the second, then you are probably safe to proceed. Once you confirm that you have made and tested a full copy of the blocks from your windows drive saved on your usb disk, nuke your windows partition table from orbit.
dd if=/dev/zero of=/dev/sda bs=1M count=42
You can press alt-f1 to return to the Debian installer now. Follow the instructions to install Debian. Don't forget to remove all attached USB drives. Once you install Debian, press ctrl-alt-f3 to get a root shell. Add your user to the sudoers group:
# adduser cjac sudoers
log out
# exit
log in as your user and confirm that you have sudo
$ sudo ls
Don't forget to read the spider man advice and enter your password. You'll need to install virt-manager; I think this should help:
$ sudo apt-get install virt-manager libvirt-daemon-driver-qemu qemu-system-x86
Insert the USB drive. You can now create a qcow2 file for your virtual machine.
$ sudo qemu-img convert -O qcow2 \
/dev/disk/by-id/usb0 \
/var/lib/libvirt/images/windows.qcow2
I personally create a volume group called /dev/vg00 for the stuff I want to run raw; instead of converting to qcow2 like all of the other users do, I write the image to a new logical volume.
sudo lvcreate /dev/vg00 -n windows -L 42G # or however large your drive was
sudo dd if=/dev/disk/by-id/usb0 of=/dev/vg00/windows status=progress
Now that you've got the qcow2 file created, press alt-left until you return to your GDM session. The apt-get install command above installed virt-manager, so log in to your system if you haven't already and open up gnome-terminal by pressing the windows key or moving your mouse/gesture to the top left of your screen. Type in gnome-terminal and either press enter or click/tap on the icon. I like to run this full screen so that I feel like I'm in a space ship. If you like to feel like you're in a spaceship, too, press F11. You can start virt-manager from this shell or you can press the windows key and type in virt-manager and press enter. You'll want the shell to run commands such as virsh console windows or virsh list. When virt-manager starts, right click on QEMU/KVM and select New.
In the New VM window, select Import existing disk image
When prompted for the path to the image, use the one we created with sudo qemu-img convert above.
Select the version of Windows you want.
Select memory and CPUs to allocate to the VM.
Tick the Customize configuration before install box
If you're prompted to enable the default network, do so now.
The default hardware layout should probably suffice. Get it as close to the underlying hardware as it is convenient to do. But Windows is pretty lenient these days about virtualizing licensed Windows instances so long as they're not running in more than one place at a time. Good luck! Leave comments if you have questions.
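Once the VM exists, the virsh commands mentioned above cover most day-to-day management, for example:
virsh list --all        # show defined VMs and their state
virsh start windows     # boot the guest
virsh console windows   # attach to its console (useful if a serial console is configured)
virsh shutdown windows  # ask the guest to shut down cleanly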

29 November 2022

Jonathan McDowell: onak 0.6.2 released

Over the weekend I released a new version of onak, my OpenPGP compatible keyserver. At 2 years since the last release that means I've at least managed to speed up a bit, but it's fair to say its development isn't a high priority for me at present. This release is largely driven by a collection of minor fixes that have built up, and the knowledge that a Debian freeze is coming in the new year. The fixes largely revolve around the signature verification that was introduced in 0.6.0, which makes it a bit safer to run a keyserver by only accepting key material that can be validated. All of the major items I wanted to work on post 0.6.0 remain outstanding. For the next release I'd like to get some basic Stateless OpenPGP Command Line Interface support integrated. That would then allow onak to be tested with the OpenPGP interoperability test suite, which has recently added support for verification-only OpenPGP implementations. I realise most people like to dismiss OpenPGP, and the tooling has been fairly dreadful for as long as I've been using it, but I do think it fills a space that no competing system has bothered to try and replicate. And that's the web of trust, which helps provide some ability to verify keys without relying on (but also without preventing) a central authority to do so. Anyway. Available locally or via GitHub.
0.6.2 - 27th November 2022
  • Don't take creation time from unhashed subpackets
  • Fix ECDSA/SHA1 signature check
  • Fix handling of other signature requirement
  • Fix deletion of keys with PostgreSQL backend
  • Add support for verifying v3 signature packets

3 November 2022

Arturo Borrero González: New OpenPGP key and new email

I'm trying to replace my old OpenPGP key with a new one. The old key wasn't compromised or lost or anything bad; it is still valid, but I plan to get rid of it soon. It was created in 2013. The new key's fingerprint is: AA66280D4EF0BFCC6BFC2104DA5ECB231C8F04C4 I plan to use the new key for things like encrypted emails, uploads to the Debian archive, and more. Also, the new key includes an identity with a newer personal email address I plan to use soon: arturo.bg@arturo.bg The new key has been uploaded to some public keyservers. If you would like to sign the new key, please follow the steps in the Debian wiki.
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGNjvX4BEADE4w5x0SQmxWLAI1R17RCC98ngTkD/FMyos0GF5xmv0VJeLYhw
x6oJRmiNGHY8+gjq7SyVCWmlwbLKBEPFNI1k5WcrTB+ClgGkWB5KBnbLKm6CSP4N
ccSbrUQrZW+zxk3Q5h3CJljZpmflB2dvRfnDMSSaw8zOc37EtszW3AVVKNYAu3wj
mXpfwI72/OSELhSvhkr51L+ZlEYUMCITeO+jpiWsnU+sA8oKKPjW4+X8cjrN4eFa
1PAPILDf+Omst5SKM2aV5LGZ8rBzb5wNJF6yDexDw2XmfbFWLOfYzFRY6GTXJz/p
8Fh6O1wkHM9RnwmesCXTtkaGQsVFiVsoqGFyzrkIdWPUruB3RG5EzOkapWi/cnbD
1sy7yrUgy99Ew5yzmLaZ40hmRyq/gBBw4yRkdQaddbkErx+9hT+2tJELa5wrmWkb
FtaVZ38xC6gacOZqRjp0Xqtr0jobI0vED8vzIyY0zJwWM0Hu6qqq4hkLWZHjCy8a
T5Oe/Cb78Kqwa2mzJfncDahPxcgxpnbkYdvKokRtNBDftLVEz+Do8Dczw7Me4BoK
HmU8wLyeGeDTmeoBXpxKH90T+rQokgsiiD13bWZ+nBxILun1tjOTVVONG6SHdP3f
unolq8SU3K+m67lLa+pWjyYcNRS2OTWGOz/1zsH2R39ZOyfGD09/10aAKwARAQAB
tC1BcnR1cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJnQGFydHVyby5iZz6J
AlQEEwEKAD4WIQSqZigNTvC/zGv8IQTaXssjHI8ExAUCY2O9fgIbAwUJA8JnAAUL
CQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRDaXssjHI8ExCZdD/9Z3vR4sV7vBED4
+mCjdNWWf/mw5YlkZo+XQiMVVss4HfQLdt7VxXgGdcOz5Hond9ax3+qeCEo4DdXq
TC0ACpSCu/TPil6vzbE/kO6i6a4oZjFyteAbbcMXP35stbtDM0U5EZH0adIKknfF
msIPTIdJ/dpkcshtBJIoPqjuuTEBa7bF3OYCajHVqwP4Wsgjy4TvDOwl3hy7bhrQ
ZZHqbh7kW40+alQYaJ8jDvbDh/jhN1/pEiZS9ETu0JfBAF3PYPRLW6XedvwZiPWd
jTXwJd0E+vN5LE1Go8OaYvZb9iitZ21UaYOUnFuhw7SEOSQGfEUBs39+41gBj6vW
05HKCEA6kda9NpfptMbUoSSU+hwRfNA5TdnlxtcRv4NqUigzqa1LoXLdxTsyus+K
BL7dRpKXc72JCrEA3vClisD2FgsxLLRCCSDVM8UM/it/YW7tv42XuhQkTW+okQX4
c5laMzTL+ZV8UOoshseTDOsQsdXhskdnWbnuSwAez2/Dd1gHczuN/+lPiiEnyaTF
XgH17K/F25+92MmwPQcFRVPQcYcbyx1VylA6aCgK6gOEqHCejlZv5XLouzbQh1j1
k6MjUR1ncz8vPV5xSuOMAISqozJ9GxUZT2O3o9Vc9pNg5UEzqTvyURgLOdie8yM4
T93S3nKuHVZ++ZVxEOlPnfEfbFP+xbQrQXJ0dXJvIEJvcnJlcm8gR29uemFsZXog
PGFydHVyb0BkZWJpYW4ub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY73LAhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEMKQQAIe18Np+jdhwxHEFZNppBQ69BtyrnPQg4K5VngZ0NUZdVi+/FU7q
Tc9Z1qNydnXgmav3dafL2/l5zDX9wz7mQD2F0a6luOxZwl1PE6iP5f3cUD7uC9zb
148i1bZGEJbO4iNZKTlJKlbNR9m1PG47pv964CHZnNGp6lsnEspxe2G8DJD48Pje
gbhYukgOtIhQ1CaB1fc8aVwZvXZVSbNBLAqp7pAGhTFJqzHE8/U0sn1/V/wPzFAd
TZtWzKfYAkIIFJI5Rr6LVApIwIe7nWymTdgH4crCd2GZkGR+d6ihPKVSxUAUfoAx
EJQUSJY8rYi39gSDhPuEoK8BYXS1nWFGJiNV1o8xaljQo8rNT9myCaeZuQBLX41/
LRzK4XrxYPvjZpKNucc7fSK+UFriQGzdcAaWtW45Kp/8GmAoLVyCD0DPZNWNJdxp
IORhB33aWakhvDKgaLQa16MJ8fSc3ytn/1lxWzDXA1j05i81y/AOKPtCwBKzQWPF
biuZs3kJgZagLq6L6VOQDHlKqf+jqfl1fWeo04iDg98e0TYKABUfiTz8/MdQcV/X
8VkCgtuZ8BcPPyYzBjvuXWZTvdu0n2pikqAPL4u2cbWfD8JIP2AVCJp9HMGKvENo
XcJgY4h6T3rrC/9EidxECfXlsDbUJxLq0WfJLik84+LRtde3kZiReaIRtC5BcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvQG5ldGZpbHRlci5vcmc+iQJUBBMB
CgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvd8CGwMFCQPCZwAFCwkIBwMF
FQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMSP/g/+MHmxCAi/X+NMHodg9Qou
wEG4Vf1uluAE6c+c1QECCdtSsRjBs1dZoJzGsA23t4LWqluyaptuLDWJQEz+EVKR
mG0bvvropNaoOEShnY069pg7lUHuO/GLeDRhfEH3KT45sIVbLly8QkoGaINSCDLe
RBNaHC6feIC8NfQzQEt72nbi4SgdSQUg0F3lj4WxxECVhXsw/YCqh1d3QYqwRVEE
lCGQ4EbavjtRhO8U7dcL1VwHemKHNq3XvM3PJf1OoPgxWqFW5rHbAdlXdN3WAI6u
DAy7kY+qihz3w6rIDTFq6I3YBTrZ44J+5mN21ZC2iDXAsa/C3Uam0vFsjs/pizuq
WgGI9Vmsyap+bOOjuRSX4hemZoOT4a2GC723fS1dFresYWo3MmwfA3sjgV5tK3ZN
XIpxYIvi6HAHLOAarDaE8Sha1GHvrmPwfZ+cEgTL0mqW3efSF3AFmGHduMB+agzK
rM9sksrRQhbY2fHnBLo1t06SQx3rmhlz5mD1ljQEIzna9D6QKleRu4hgImRLHnCB
CN3o+mZa1MHhaIFzViaD2i3Fv2+bYgT7vnS4QAneLW8O/ZgpAc2MUxMoci5JNyfJ
mWdae7Kbs4Z8rrt/mH2gYyioSB0po4VtVwKWEUW9cLtZusA6mFnMviFpfjakb9TX
MimBAv9hAYpxd+HdfHinmqS0MEFydHVybyBCb3JyZXJvIEdvbnphbGV6IDxhYm9y
cmVyb0B3aWtpbWVkaWEub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY735AhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEGooP/20PR5N34m7CNtyaO96H5W0ULuAuSNuoXaKWDo5LGU6zzDriXbIu
ryYtR66vWF5suf7fHZYX8Ufq4PEsG1UNYEGA9hnjPg3oVwGzBJI7f6Rl2P5Pc8wJ
Eq2kN/xKmfUKIrvgh1f5xgFqC4hzcLDkVlLsPowZWfep8dLY4mtVrsrCD1URhelw
zRDGZ3rTVHWXmfXbSHWR2bgZIIrCtVF8BHStg5b6HuAWpj4Oa0eMfBde0N2RZkLE
ye/r2y/lraHfpT7MXnRMcEmltrv8fic7yvj/Nh4ESWr7UmfbV+GiSw9dc/AlVMXM
ihaW0eXv4F5uMtLJOiqI7bv3UfWSvoqwf2a8EPnzOeBBHhQOOJN7O4UzKBK5GAO8
C3k0I1AV3cTmrXrqT/5yoYAHSekDFCIPES//6Y/pO0ITtCbXkA5e8vaulJbtyXpE
g0Z7I7M1kikL6reZ2PuzsR0psEb/x81bWXODIegyOJolPXMRAY7n9J0xpCnSW9yr
CN4j6YT3Oame04JslwX5Xg1cyheuiusotETYNSKRaGaYBCxYffOWoTLNIBa+RCGc
SVOzJq5pd8fVRM1h2ZZFnfpPJBUb62qPsbk6VwmesGoGevB70zcNQYEI+c35kRfM
IOuJWRIN3Wxx0rpxb5E3i/3TASHM86Dix1VW9vsC/atGU/cgaoTOiNVztDdBcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJvcnJlcm8uZ2xlekBnbWFpbC5j
b20+iQJUBBMBCgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvg8CGwMFCQPC
ZwAFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMS7NA/9F7OL/j7a
xnTDjxAHEiyrCzrBQc/DEAM/yim8E+0UBeTJSZR/bShtbvLbSukeL43tKksPhN/X
skjRF8sJ8KWUnpmSWjv1DQTh7AtkJqACnq7+VtQZq3yuKUCNRNpM8lSFxtmYDUqE
XXD4eMXKoJfdphQ+qpViba+RGXg6sd69Dq739zT/OFMuKZ33z8h7hVNXmoWGcBz6
txvN3cWVJhTLdiBvtn38/0dX7IupQLypLOtP0oZdjoUjkRxTo5biOxt3hUGnxS4x
97PPeRGc4j7lv5ADwFV8bo+g54ZMGRjOcyZmA7dlWFN51JrTx3udW2jgXkYqm7UM
xP4lNwDs9TmT3jan6wR08uwlDakOXfDm3gCQEviN+350sJs2tY+JKBN4QR7NpqeU
2aDFOo0G/0ggf0QbFsMkaTSozerVHRGXMdAi+pbYA6pPWPu8lHIkvvdoj4xUu+Ko
cHX0DCRxmL9mylTbZEanrp5gSpne79McrkbQX2/Yc8lWykCtL5/jHVTD4iNiO5Rf
IJYPAVmC2nlj2URfzwGjjoL5apTStZfng4H2Ccq+3cmhwOXI7pb+PsGeI5PND00A
qHFxe590HFhPxLHoftMIlspstoCvHYGcWQxHNbXW6ccmhHdNYT8Pn4ecKgfr6pCt
0ysilOD2ppPJ88hffKA4nTdtX2Tz2ZwOYwG5Ag0EY2O9fgEQALrapVuv1IcLDit8
9gejdA/Dtlufb2/baImVaQD+dTx2QdMxxEiNKl00a5OhMzXDj9tFrB1Lv4z0t8cY
iDJ+NuydDGgz3MlJgWW0GlpAz8yiul2iqTnkWl3cWeiI+VaX8wzL+acmmkPvlrN8
hM7I55BPr8uBWVIQ7VDmI+ts8gi73xE+Etzzrh13GSSnnYnezfGUQrNfYFcip7D0
hB3bpUIGiPdQ45vSZqXUQx/B6FlabiIGRau8Rt4vaEBGXGFZ9rIR+rMJWx6GqYX4
uY1KM2JZ3SKHk++MWGYdzHdM2oaP6xckZq+u/WiwutkYLLO2hnr03lcAu1IDT1C1
YNPrbTKfqUt+3r0oUK5BrG1Cjdc1mZqcXzYcexOLp79FJLb0t5wPdfgU8dT10kjE
uQxeSYiS4oSpikVQkKoFk++/U95d/z/y/81A6v+cfRus6mW+wRSFSwks7Q5ct7zW
UyKELLC4i4EDgnJXmavVcBD0TWzhH/rZpz9FsO4Mb18IYwbV1/144019/RjiPk5Z
MMNdsjorjV2MtrCIoeAGRgZhbFP2P7CcZOp6ZWzjj40ENlElbLp3VCfkYcTiPHJv
2iaiDz2Mhfmhb1Q/5d/a9tYTYINPmv2QVo+m5Zf+1/U29d2HZMRhD4aqDsivvgtd
GpAnKeus6ePSMqpwjO6v2bmQhjpbABEBAAGJAjwEGAEKACYWIQSqZigNTvC/zGv8
IQTaXssjHI8ExAUCY2O9fgIbDAUJA8JnAAAKCRDaXssjHI8ExA5AD/9VWS1/jHM9
aE3HKCDL4CpiXQPc4ds+3/ft6LXwuCMA/tkt8I4svKZGCCi/X5NfiQetVD+cSzVO
nmloctMt/24yjnGNNSFsDozkn/RqzZIhLJBI69gX4JWR4wpeh4kXMItNM5ZlYw3H
DmuLrf/ey8E2NzbFdzj1VQNoENuwtL2pIJrvK92AcS7acvP0FpiS8riLc5a933SW
oPgelQ1j/04WAH8cyKXB/pruq3OhtK0/b8ylIeI0f7a57dxQj5wysyBVKl+EJd/n
UhypVqMDRWL7N0FttGb9gZ6OVvQnt7iwbtS3tYqAK479+GZwi/Wh/RB2dCDyz8jk
zE0j6y7huP4XzpbBbPVntLDdVAYmpW6iIaTWYxlu79FEUw4JmZdY7hJoEDpHuDIz
ylo0YQgjnRfRfWSdnGCosFrY5UgThPVTaQAILCPtdVyWY4/6s1UaeNs3H0PRA5mz
UT4vDKxGq9gXHnE+qg3dfwMcLR3cDPPWUFVeTfNitZ3Y9eV7SdbQXt5NeOXzFadz
DBc9ZzNx3rBEyUUooU0MEmbltyUFM7R/hVcdpFxs12SgHrvgh13tuxVVVNBXTwwo
pSxmap42vHJERQ8ZJQ4lrvnxNZcuwLHSZK7xVzb0b/1wMooNnhw18vlStMWQJwKl
DiXs/L/ifab2amg9jshULAPgVSw7QeP2OQ==
=UABf
-----END PGP PUBLIC KEY BLOCK-----
If you are curious about what that long code block contains, check https://cirw.in/gpg-decoder/. For the record, the old key fingerprint is: DD9861AB23DC3333892E07A968E713981D1515F8 Cheers!

10 May 2022

Daniel Kahn Gillmor: 2022 Digital Rights Job Fair

I'm lucky enough to work at the intersection between information communications technology and civil rights/civil liberties. I get to combine technical interests and social/political interests. I've talked with many folks over the years who are interested in doing similar work. Some come from a technical background, and some from an activist background (and some from both). Are you one of them? Are you someone who works as an activist or in a technical field who wants to look into different ways of merging these interests? Some great organizers maintain a job board for Digital Rights. Next month they'll host a Digital Rights Job Fair, which offers an opportunity to talk with good people at organizations that fight in different ways for a better world. You need to RSVP to attend. Digital Rights Job Fair

14 April 2022

Daniel Kahn Gillmor: Bitstream Vera Must Die

Bitstream Vera must die.

14 February 2021

François Marier: Creating a Kodi media PC using a Raspberry Pi 4

Here's how I set up a media PC using Kodi (formerly XMBC) and a Raspberry Pi 4.

Hardware The hardware is fairly straightforward, but here's what I ended up getting: You'll probably want to add a remote control to that setup. I used an old Streamzap I had lying around.

Installing the OS on the SD card Plug the SD card into a computer using a USB adapter. Download the imager and use it to install Raspbian on the SD card. Then you can simply plug the SD card into the Pi and boot.

System configuration Using sudo raspi-config, I changed the following:
  • Set hostname (System Options)
  • Wait for network at boot (System Options): needed for NFS
  • Disable screen blanking (Display Options)
  • Enable ssh (Interface Options)
  • Configure locale, timezone and keyboard (Localisation Options)
  • Set WiFi country (Localisation Options)
Then I enabled automatic updates:
apt install unattended-upgrades anacron
echo 'Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        "origin=Raspbian,codename=${distro_codename},label=Raspbian";
        "origin=Raspberry Pi Foundation,codename=${distro_codename},label=Raspberry Pi Foundation";
};' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-raspbian

Headless setup Should you need to do the setup without a monitor, you can enable ssh by inserting the SD card into a computer and then creating an empty file called ssh in the boot partition. Plug it into your router and boot it up. Check the IP that it received by looking at the active DHCP leases in your router's admin panel. Then login:
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no pi@192.168.1.xxx
using the default password of raspberry.
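For completeness, creating that file looks like this, assuming the boot partition is mounted at /media/$USER/boot (the mount point will vary):
touch /media/$USER/boot/ssh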

Hardening In order to secure the Pi, I followed most of the steps I usually take when setting up a new Linux server. I created a new user account for admin and ssh access:
adduser francois
addgroup sshuser
adduser francois sshuser
adduser francois sudo
and changed the pi user password to a random one:
pwgen -sy 32
sudo passwd pi
before removing its admin permissions:
deluser pi adm
deluser pi sudo
deluser pi dialout
deluser pi cdrom
deluser pi lpadmin
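The steps above don't show what the sshuser group is for; presumably it restricts SSH logins to members of that group, which would be a change along these lines in /etc/ssh/sshd_config (an assumption, not part of the original recipe):
echo 'AllowGroups sshuser' >> /etc/ssh/sshd_config
systemctl restart ssh.service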
Finally, I enabled the Uncomplicated Firewall by installing its package:
apt install ufw
and only allowing ssh connections. After starting ufw using systemctl start ufw.service, you can check that it's configured as expected using ufw status. It should display the following:
Status: active
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
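The allow rule itself isn't shown above; restricting the firewall to SSH amounts to something like:
ufw allow 22/tcp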

Installing Kodi Kodi is very straightforward to install since it's now part of the Raspbian repositories:
apt install kodi
To make it start at boot/login, while still being able to exit and use other apps if needed:
cp /etc/xdg/lxsession/LXDE-pi/autostart ~/.config/lxsession/LXDE-pi/
echo "@kodi" >> ~/.config/lxsession/LXDE-pi/autostart

Network File System In order to avoid having all media storage connected directly to the Pi via USB, I set up an NFS share over my local network. First, give static IP allocations to the server and the Pi in your DHCP server, then add it to the /etc/hosts file on your NFS server:
192.168.1.3    pi
Install the NFS server package:
apt install nfs-kernel-server
Setup the directories to share in /etc/exports:
/pub/movies    pi(ro,insecure,all_squash,subtree_check)
/pub/tv_shows  pi(ro,insecure,all_squash,subtree_check)
Open the right ports on your firewall by putting this in /etc/network/iptables.up.rules:
-A INPUT -s 192.168.1.3 -p udp -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 111 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 111 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 123 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 600:1124 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 600:1124 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 2049 -j ACCEPT
Finally, apply all of these changes:
iptables-apply
systemctl restart nfs-kernel-server.service
On the Pi, put the server's static IP in /etc/hosts:
192.168.1.2    fileserver
and this in /etc/fstab:
fileserver:/data/movies  /kodi/movies  nfs  ro,bg,hard,noatime,async,nolock  0  0
fileserver:/data/tv      /kodi/tv      nfs  ro,bg,hard,noatime,async,nolock  0  0
Then create the mount points and mount everything:
mkdir -p /kodi/movies
mkdir /kodi/tv
mount /kodi/movies
mount /kodi/tv
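To confirm the shares are exported and mounted as expected, a quick check from the Pi (showmount is part of the nfs-common package):
showmount -e fileserver
df -h /kodi/movies /kodi/tv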

26 January 2021

Gunnar Wolf: Back to school... As a student

Although it was a much larger step when I made a similar announcement seven years ago, when I started my Specialization, it is still a big challenge ahead, and I am very happy to pursue this: I have been admitted to a PhD program at UNAM, the university I have worked at for almost 20 years, and one of the top universities in Latin America. What program will I be part of? Doctorado en Ciencia e Ingeniería de la Computación (Computer Science and Engineering Doctorate; quite a broad program name, yes, sounds like anything goes). I am happy to say I managed to do as I hoped seven years ago. As that blog post says, I managed to keep an eye on my keyring-maint duties as well. I will even try to link that work with what I do at school. Over the years I spent pursuing my Specialization and Masters degrees at IPN ESIME, I managed to publish two academic papers on the keyring-maint work: Strengthening a Curated Web of Trust in a Geographically Distributed Project and Insights on the large-scale deployment of a curated Web-of-Trust: the Debian project's cryptographic keyring. Since that time, several relevant things have happened. Mainly, the SKS keyserver panorama started looking quite bleak: various attacks such as poisoned certificates or certificate flooding have been mounted against the keyserver network, having as a direct outcome the questioning of the future of the decentralized transitional trust model we take for granted in the OpenPGP world. The global SKS keyserver network has quickly eroded, and its continued functioning is no longer something we can take as a given. Different methods have come up, attempting to answer this situation, such as WKD and DANE, but they all lose something that can be seen as the essence, almost the heart, of the distributed, decentralized Web-of-Trust paradigm: the ability to carry the full certificates for the keys. And that's the problem I will try to tackle with my work: how can we, in the light of the known weaknesses, keep a working, decentralized, distributed trust scheme?

1 January 2021

Andrej Shadura: Transitioning to a new OpenPGP key

Following dkg's example, I decided to finally transition to my new ed25519/cv25519 key. Unlike Daniel, I'm not yet trying to split identities, but I'm using this chance to drop old identities I no longer use. My new key only has my main email address and the Debian one, and only those versions of my name I still want around. My old PGP key (at the moment in the Debian keyring) is:
pub   rsa4096/0x6EA4D2311A2D268D 2010-10-13 [SC] [expires: 2021-11-11]
      782130B4C9944247977B82FD6EA4D2311A2D268D
uid                   [ultimate] Andrej Shadura <andrew@shadura.me>
uid                   [ultimate] Andrew Shadura <andrew@shadura.me>
uid                   [ultimate] Andrew Shadura <andrewsh@debian.org>
uid                   [ultimate] Andrew O. Shadoura <Andrew.Shadoura@gmail.com>
uid                   [ultimate] Andrej Shadura <andrewsh@debian.org>
sub   rsa4096/0xB2C0FE967C940749 2010-10-13 [E]
sub   rsa3072/0xC8C5F253DD61FECD 2018-03-02 [S] [expires: 2021-11-11]
sub   rsa2048/0x5E408CD91CD839D2 2018-03-10 [S] [expires: 2021-11-11]
This is the key I've been using in Debian from the very beginning, and its copies on the SKS keyserver network still have my first DD signature from angdraug:
sig  sig   85EE3E0E 2010-12-03 __________ __________ Dmitry Borodaenko <angdraug@mail.ru>
sig  sig   CB4D38A9 2010-12-03 __________ __________ Dmitry Borodaenko <angdraug@debian.org>
My new PGP key is:
pub   ed25519/0xE8446B4AC8C77261 2016-06-13 [SC] [expires: 2022-06-25]
      83DCD17F44B22CC83656EDA1E8446B4AC8C77261
uid                   [ultimate] Andrej Shadura <andrew@shadura.me>
uid                   [ultimate] Andrew Shadura <andrew@shadura.me>
uid                   [ultimate] Andrej Shadura <andrewsh@debian.org>
uid                   [ultimate] Andrew Shadura <andrewsh@debian.org>
uid                   [ultimate] Andrei Shadura <andrew@shadura.me>
sub   cv25519/0xD5A55606B6539A87 2016-06-13 [E] [expires: 2022-06-25]
sub   ed25519/0x52E0EA6F91F1DB8A 2016-06-13 [A] [expires: 2022-06-25]
If you signed my old key and are reading this, please consider signing my new key; if you feel you need to re-confirm this, feel free to contact me; otherwise, a copy of this statement signed by both the old and new keys is available here. I have uploaded this new key to keys.openpgp.org, and also published it through WKD and the SKS network. Both keys can also be downloaded from my website.

31 December 2020

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, 2021

dkg's 2021 OpenPGP transition As 2021 begins, I'm changing to a new OpenPGP certificate. I did a similar transition two years ago, and a fair amount has changed since then. You might know my old OpenPGP certificate as:
pub   ed25519 2019-01-19 [C] [expires: 2021-01-18]
      C4BC2DDB38CCE96485EBE9C2F20691179038E5C6
uid          Daniel Kahn Gillmor <dkg@fifthhorseman.net>
uid          Daniel Kahn Gillmor <dkg@debian.org>
My new OpenPGP certificate is:
pub   ed25519 2020-12-27 [C] [expires: 2023-12-24]
      C29F8A0C01F35E34D816AA5CE092EB3A5CA10DBA
uid           [ unknown] Daniel Kahn Gillmor
uid           [ unknown] <dkg@debian.org>
uid           [ unknown] <dkg@fifthhorseman.net>
You can find a signed transition statement if you're into that sort of thing. If you're interested in the rationale for why I'm making this transition, read on.

Dangers of Offline Primary Secret Keys There are several reasons for transitioning, but one i simply couldn't argue with was my own technical failure. I put the primary secret key into offline storage some time ago for "safety", and used ext4's filesystem-level encryption layered on top of dm-crypt for additional security. But either the tools changed out from under me, or there were failures on the storage medium, or I've failed to remember my passphrase correctly, because I am unable to regain access to the cleartext of the secret key. In particular, I find myself unable to use e4crypt add_key with the passphrase I know to get a usable working directory. I confess I still find e4crypt pretty difficult to use and I don't use it often, so the problem may entirely be user error (either now, or two years ago when I did the initial setup). Anyway, lesson learned: don't use cryptosystems that you're not comfortable with to encrypt data that you care about recovering. This is a lesson I'm pretty sure I've learned before, sigh, but it's a good reminder.

Split User IDs I'm trying to split out my User IDs again -- this way if you know me by e-mail address, you don't have to think/worry about certifying my name, and if you know me by name, you don't have to think/worry about certifying my e-mail address. I think that's simpler and more sensible. It's also nice because e-mail address-only User IDs can be used effectively in contexts like Autocrypt, which I think are increasingly important if we want to have usable encrypted e-mail. Last time around I initially tried split User IDs but rolled them back and I think most of the bugs I discovered then have been fixed.

Certificate Flooding Another reason for making a transition to a new certificate is that my older certificate is one of the ones that was "flooded" on the SKS keyserver network last year, which was one of the final straws for that teetering project. Transitioning to a new certificate lets that old flooded cert expire and people can just simply move on from it, ideally deleting it from their local keyrings. Hopefully as a community we can move on from SKS to key distribution mechanisms like WKD, Autocrypt, DANE, and keys.openpgp.org, all of which address some of the known problems with keyserver abuse.

Trying New Tools Finally, I'm also interested in thinking about how key and certificate management might be handled in different ways. While I'm reasonably competent in handling GnuPG, the larger OpenPGP community (which I'm a part of) has done a lot of thinking and a lot of work about how people can use OpenPGP. I'm particularly happy with the collaborative work that has gone into the Stateless OpenPGP CLI (aka sop), which helps to generate a powerful interoperability test suite. While sop doesn't offer the level of certificate management I'd need to use it to manage this new certificate in full, I wish something like it would! Starting from a fresh certificate and actually using it helps me to think through what I might actually need from a tool that is roughly as straightforward and opinionated as sop is. If you're a software developer who might use or implement OpenPGP, or a protocol designer, and you haven't played around with any of the various implementations of sop yet, I recommend taking a look. And feedback on the specification is always welcome, too, including ideas for new functionality (maybe even like certificate management).

Next Steps If you're the kind of person who's into making OpenPGP certifications, feel free to check in with me via whatever channels you're used to using to verify that this transition is legit. If you think it is, and you're comfortable, please send me (e-mail is probably best) your certifications over the new certificate. I'll keep on working to make OpenPGP more usable and acceptable. Hopefully, 2021 will be a better year ahead for all of us.

23 October 2020

Birger Schacht: An Analysis of 5 Million OpenPGP Keys

In July I finished my Bachelor's Degree in IT Security at the University of Applied Sciences in St. Pölten. During the studies I did some elective courses, one of which was about Data Analysis using Python, Pandas and Jupyter Notebooks. I found it very interesting to do calculations on different data sets and to visualize them. Towards the end of the Bachelor I had to find a topic for my Bachelor Thesis, and as a long time user of OpenPGP I thought it would be interesting to do an analysis of the collection of OpenPGP keys that are available on the keyservers of the SKS keyserver network. So in June 2019 I fetched a copy of one of the key dumps of one of the keyservers (some keyservers publish these copies of their key database so people who want to join the SKS keyserver network can do an initial import). At that time the copy of the key database contained 5,499,675 keys and was around 12GB. Using the hockeypuck keyserver software I imported the keys into a PostgreSQL database. Hockeypuck uses a table called keys to store the keys, and in there the column doc stores the OpenPGP keys in JSON format (always with a data field containing the original unparsed data). For the thesis I split the analysis in three parts, first looking at the Public Key packets, then analysing the User ID packets and finally studying the Signature Packets. To analyse the respective packets I used SQL to export the data to CSV files and then used the pandas read_csv method to create a dataframe of the values. In a couple of cases I did some parsing before converting to a DataFrame to make the analysis step faster. The parsing was done using the pgpdump python library. Together with my advisor I decided to submit the thesis to a journal, so we revised and compressed the whole paper and the outcome was now

PUBLISHED in the Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA). I think the work gives some valuable insight into the development of the use of OpenPGP over the last 30 years. Looking at the public key packets we were able to compare the different public key algorithms and, for example, visualize how DSA was the most used algorithm until around 2010, when it was replaced by RSA. When looking at the less used algorithms, a trend towards ECC-based cryptography is visible. What we also noticed was an increase of RSA keys with algorithm ID 3 (RSA Sign-Only), which are deprecated. When we took a deeper look at those keys we realized that most of them used a specific User ID string in the User ID packets, which allowed us to attribute those keys to two software projects, both using the Bouncy Castle Java Cryptographic API (resp. the Spongy Castle version for Android). We also stumbled over a tutorial on how to create RSA keys with Bouncycastle which describes code that produces RSA Sign-Only keys. In one of those projects, this was then fixed. By looking at the User ID packets we did some statistics about the most used email providers used by OpenPGP users. One domain stood out, because it is not the domain of an email provider: tellfinder.com is a domain used in around 45,000 keys. Tellfinder is a Big Data analysis software and the UID of all but two of those keys is TellFinder Page Archiver- Signing Key <support@tellfinder.com>. We also looked at the comments used in OpenPGP User ID fields. In 2013 Daniel Kahn Gillmor published a blog post titled OpenPGP User ID Comments considered harmful in which he pointed out that most of the comments in the User ID field of OpenPGP keys duplicate information that is already present somewhere in the User ID or the key itself. In our dataset 3,133 comments were exactly the same as the name, 3,346 were the same as the domain and 18,246 comments were similar to the local part of the email address. Last but not least we looked at the signature subpackets and the development of some of the preferences (Preferred Symmetric Algorithm, Preferred Hash Algorithm) that are being published using signature packets. Analysing this huge dataset of cryptographic keys from the last 20 to 30 years was very interesting and I learned a lot about the history of PGP resp. OpenPGP and the evolution of cryptography overall. I think it would be interesting to look at even more properties of OpenPGP keys and I also think it would be valuable for the OpenPGP ecosystem if these kinds of analyses could be done regularly. An approach like Tor Metrics could lead to interesting findings and could also help to back decisions regarding future developments of the OpenPGP standard.
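As a rough sketch of the export step described above (the keys table and its doc column are as described in the post; the database name hkp and the output filename are illustrative only):
psql hkp -c "\copy (SELECT doc FROM keys) TO 'keys.csv' WITH CSV"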

14 September 2020

Jonathan McDowell: onak 0.6.1 released

Yesterday I did the first release of my OpenPGP compatible keyserver, onak, in 4 years. Actually, 2 releases, because I discovered my detection for various versions of libnettle needed some fixing. It was largely driven by the need to get an updated package sorted for Debian due to the removal of dh-systemd, but it should have come sooner. This release has a number of clean-ups for dealing with the hostility shown to the keyserver network in recent years. In particular it implements some of dkg's Abuse-Resistant OpenPGP Keystores, and finally adds support for verifying signatures fully. That opens up the ability to run a keyserver that will only allow verifiable updates to keys. This doesn't tie in with folk who want to run PGP based systems because of the anonymity, but for those of us who think PGP's strength is in the web of trust it's pretty handy. And it's all configurable to taste; you can turn off all the verification if you want, or verify everything but not require any signatures, or even enable v3 keys if you feel like it. The main reason this release didn't come sooner is that I'm painfully aware of the bits that are missing. In particular: Anyway. Available locally or via GitHub.
0.6.0 - 13th September 2020
  • Move to CMake over autoconf
  • Add support for issuer fingerprint subpackets
  • Add experimental support for v5 keys
  • Add read-only OpenPGP keyring backed DB backend
  • Move various bits into their own subdirectories in the source tree
  • Add support for full signature verification
  • Drop v3 keys by default when cleaning keys
  • Various code cleanups
  • Implement pieces of draft-dkg-openpgp-abuse-resistant-keystore-03
  • Add support for a fingerprint blacklist (e.g. Evil32)
  • Deprecate the .conf configuration file format
  • Drop version info from armored output
  • Add option to deny new keys and only allow updates to existing keys
  • Various pieces of work removing support for 32 bit key IDs and coping with colliding 64 bit key IDs.
  • Remove support for libnettle versions that lack the full SHA2 suite
0.6.1 - 13th September 2020
  • Fixes for compilation without nettle + with later releases of nettle

17 April 2020

Daniel Kahn Gillmor: Tech-assisted Contact-Tracing against the COVID-19 pandemic

Today at the ACLU, we released a whitepaper discussing how to evaluate some novel cryptographic schemes that are being considered to provide technology-assisted contact-tracing in the face of the COVID-19 pandemic. The document offers guidelines for thinking about potential schemes like this, and what kinds of safeguards we need to expect and demand from these systems so that we might try to address the (hopefully temporary) crisis of the pandemic without also creating a permanent crisis for civil liberties. The proposals that we're seeing (including PACT, DP^3T, TCN, and the Apple/Google proposal) work in pretty similar ways, and the challenges and tradeoffs there are remarkably similar to Internet protocol design decisions. Only now in addition to bytes and packets and questions of efficiency, privacy, and control, we're also dealing directly with risk of physical harm (who gets sick?), society-wide allocation of scarce and critical resources (who gets tested? who gets treatment?), and potentially serious means of exercising powerful social control (who gets forced into quarantine?). My ACLU colleague Jon Callas and i will be doing a Reddit AMA in r/Coronavirus tomorrow starting at 2020-04-17T19:00:00Z (that's 3pm Friday in TZ=America/New_York) about this very topic, if that's the kind of thing you're into.

18 March 2020

Antoine Beaupré: How can I trust this git repository?

Join me in the rabbit hole of git repository verification, and how we could improve it.

Problem statement As part of my work on automating install procedures at Tor, I ended up doing things like:
git clone REPO
./REPO/bootstrap.sh
... something eerily similar to the infamous curl pipe bash method which I often decry. As a short-term workaround, I relied on the SHA-1 checksum of the repository to make sure I have the right code, by running this both on a "trusted" (ie. "local") repository and the remote, then visually comparing the output:
$ git show-ref master
9f9a9d70dd1f1e84dec69a12ebc536c1f05aed1c refs/heads/master
One problem with this approach is that SHA-1 is now considered as flawed as MD5, so it can't be used as an authentication mechanism anymore. It's also fundamentally difficult for humans to compare hashes. The other flaw with comparing local and remote checksums is that we assume we trust the local repository. But how can I trust that repository? I can either:
  1. audit all the code present and all the changes done to it after
  2. or trust someone else to do so
The first option here is not practical in most cases. In this specific use case, I have audited the source code -- I'm the author, even -- what I need is to transfer that code over to another server. (Note that I am replacing those procedures with Fabric, which makes this use case moot for now as the trust path narrows to "trust the SSH server" which I already had anyways. But it's still important for my fellow Tor developers who worry about trusting the git server, especially now that we're moving to GitLab.) But anyways, in most cases, I do need to trust some other fellow developer I collaborate with. To do this, I would need to trust the entire chain between me and them:
  1. the git client
  2. the operating system
  3. the hardware
  4. the network (HTTPS and the CA cartel, specifically)
  5. then the hosting provider (and that hardware/software stack)
  6. and then backwards all the way back to that other person's computer
I want to shorten that chain as much as possible, make it "peer to peer", so to speak. Concretely, it would eliminate the hosting provider and the network as potential attackers.

OpenPGP verification My first reaction is (perhaps perversely) to "use OpenPGP" for this. I figured that if I sign every commit, then I can just check the latest commit and see if the signature is good. The first problem here is that this is surprisingly hard. Let's pick some arbitrary commit I did recently:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching  test  by default. We inline
    tests inside the source code directly, so hijack that.
diff --git a/fabric_tpa/pytest.ini b/fabric_tpa/pytest.ini
new file mode 100644
index 0000000..71004ea
--- /dev/null
+++ b/fabric_tpa/pytest.ini
@@ -0,0 +1,3 @@
+[pytest]
+# we inline tests directly in the source code
+python_files = *.py
That's the output of git log -p in my local repository. I signed that commit, yet git log is not telling me anything special. To check the signature, I need something special: --show-signature, which looks like this:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature faite le lun 16 mar 2020 14:37:53 EDT
gpg:                avec la clef RSA 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Bonne signature de « Antoine Beaupré <anarcat@orangeseeds.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@torproject.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@anarc.at> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@koumbit.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@debian.org> » [ultime]
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching  test  by default. We inline
    tests inside the source code directly, so hijack that.
Can you tell if this is a valid signature? If you speak a little French, maybe you can! But even if you do, you are unlikely to see that output on your own computer. What you would see instead is:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature made Mon Mar 16 14:37:53 2020 EDT
gpg:                using RSA key 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Can't check signature: No public key
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching  test  by default. We inline
    tests inside the source code directly, so hijack that.
Important part: Can't check signature: No public key. No public key. Because of course you would see that. Why would you have my key lying around, unless you're me? Or, to put it another way, why would that server I'm installing from scratch have a copy of my OpenPGP certificate? Because I'm a Debian developer, my key is actually part of the 800 keys in the debian-keyring package, signed by the APT repositories. So I have a trust path. But that won't work for someone who is not a Debian developer. It will also stop working when my key expires in that repository, as it already has on Debian buster (current stable). So I can't assume I have a trust path there either. That said, one could work with a trusted keyring like we do in the Tor and Debian projects, and only work inside those projects. But I still feel uncomfortable with those commands. Both git log and git show will happily succeed (return code 0 in the shell) even though the signature verification failed on the commits. Same with git pull and git merge, which will happily move your branch ahead even if the remote has unsigned or badly signed commits. To actually verify commits (or tags), you need the git verify-commit (or git verify-tag) command, which seems to do the right thing:
$ LANG=C.UTF-8 git verify-commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature made Mon Mar 16 14:37:53 2020 EDT
gpg:                using RSA key 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Can't check signature: No public key
[1]$
At least it fails with some error code (1, above). But it's not flexible: I can't use it to verify that a "trusted" developer (say one that is in a trusted keyring) signed a given commit. Also, it is not clear what a failure means. Is a signature by an expired certificate okay? What if the key is signed by some random key in my personal keyring? Why should that be trusted?
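One way to approximate the trusted-keyring check I'd want is to point GnuPG at a keyring containing only the keys I'm willing to trust for this purpose, for example the one shipped in the debian-keyring package mentioned earlier; a sketch:
export GNUPGHOME=$(mktemp -d)
gpg --import /usr/share/keyrings/debian-keyring.gpg
git verify-commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
That narrows down who counts as a valid signer, but it does nothing for the exit-code behaviour or the SHA-1 problem discussed below.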

Worrying about git and GnuPG In general, I'm worried about git's implementation of OpenPGP signatures. There have been numerous cases of interoperability problems with GnuPG specifically that led to security issues, like EFAIL or SigSpoof. It would be surprising if such a vulnerability did not exist in git. Even if git did everything "just right" (which I have myself found impossible to do when writing code that talks with GnuPG), what does it actually verify? The commit's SHA-1 checksum? The tree's checksum? The entire archive as a zip file? I would bet it signs the commit's SHA-1 sum, but I just don't know, off the top of my head, and neither git-commit nor git-verify-commit say exactly what is happening.
<anarcat> i'd like to integrate pgp signing into tor's coding practices more, but so far, my approach has been "sign commits" and the verify step was "TBD"
<dkg> that's the main reason i've been reluctant to sign git commits. i haven't heard anyone offer a better subsequent step. if torproject could outline something useful, then i'd be less averse to the practice. i'm also pretty sad that git remains stuck on sha1, esp. given the recent demonstrations. all the fancy strong signatures you can make in git won't matter if the underlying git repo gets changed out from under the signature due to sha1's weakness
In other words, even if git implements the arcane GnuPG dialect just so, and would allow us to set up the trust chain just right, and would give us meaningful and workable error messages, it still would fail because it's still stuck in SHA-1. There is work underway to fix that, but in February 2020, Jonathan Corbet described that work as being in a "relatively unstable state", which is hardly something I would like to trust to verify code. Also, when you clone a fresh new repository, you might get an entirely different repository, with a different root and set of commits. The concept of "validity" of a commit, in itself, is hard to establish in this case, because a hostile server could put you backwards in time, on a different branch, or even on an entirely different repository. Git will warn you about a different repository root with warning: no common commits but that's easy to miss. And complete branch switches, rebases and resets from upstream are hardly more noticeable: only a tiny plus sign (+) instead of a star (*) will tell you that a reset happened, along with a warning (forced update) on the same line. Miss those and your git history can be compromised.

Possible ways forward I don't consider the current implementation of OpenPGP signatures in git to be sufficient. Maybe, eventually, it will mature away from SHA-1 and the interface will be more reasonable, but I don't see that happening in the short term. So what do we do?

git evtag The git-evtag extension is a replacement for git tag -s. It's not designed to sign commits (it only signs and verifies tags) but at least it uses a stronger algorithm (SHA-512) to checksum the tree, and will include everything in that tree, including blobs. If that sounds expensive to you, don't worry too much: it takes about 5 seconds to tag the Linux kernel, according to the author. Unfortunately, that checksum is then signed with GnuPG, in a manner similar to git itself, in that it exposes GnuPG output (which can be confusing) and is likely similarly vulnerable to mis-implementation of the GnuPG dialect as git itself. It also does not allow you to specify a keyring to verify against, so you need to trust GnuPG to make sense of the garbage that lives in your personal keyring (and, trust me, it doesn't). And besides, git-evtag is fundamentally the same as signed git tags: checksum everything and sign with GnuPG. The difference is it uses SHA-512 instead of SHA-1, but that's something git will eventually fix itself anyway.
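Its interface is at least straightforward; something like the following (a sketch based on the project's README, which installs the tool as a git subcommand) signs and then verifies a tag:
$ git evtag sign v1.0      # checksums the full tree with SHA-512 and signs the result with GnuPG
$ git evtag verify v1.0    # recomputes the checksum and verifies the OpenPGP signature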

kernel patch attestations The kernel also faces this problem. Linus Torvalds signs the releases with GnuPG, but patches fly all over mailing lists without any form of verification apart from clear-text email. So Konstantin Ryabitsev has proposed a new protocol to sign git patches which uses SHA-256 to checksum the patch metadata, commit message and the patch itself, and then signs that with GnuPG. It's unclear to me what this solves, if anything at all. As dkg argues, it would seem better to add OpenPGP support to git-send-email and teach git tools (e.g. git-am) to recognize it, at least if you're going to keep using OpenPGP anyway. And furthermore, it doesn't resolve the problems associated with verifying a full archive either, as it only attests "patches".

jcat Unhappy with the current state of affairs, the author of fwupd (Richard Hughes) wrote his own protocol as well, called jcat, which provides signed "catalog files" similar to the ones provided in Microsoft Windows. It consists of "gzip-compressed JSON catalog files, which can be used to store GPG, PKCS-7 and SHA-256 checksums for each file". So yes, it is yet again another wrapper around GnuPG, probably with all the flaws detailed above, on top of being a niche implementation, disconnected from git.

The Update Framework One more thing dkg correctly identified is:
<dkg> anarcat: even if you could do exactly what you describe, there are still some interesting wrinkles that i think would be problems for you. the big one: "git repo's latest commits" is a loophole big enough to drive a truck through. if your adversary controls that repo, then they get to decide which commits to include in the repo. (since every git repo is a view into the same git repo, just some have more commits than others)
In other words, unless you have a repository that has frequent commits (either because of activity or by a bot generating fake commits), you have to rely on the central server to decide what "the latest version" is. This is the kind of problem that binary package distribution systems like APT and TUF solve correctly. Unfortunately, those don't apply to source code distribution, at least not in git form: TUF only deals with "repositories" and binary packages, and APT only deals with binary packages and source tarballs. That said, there's actually no reason why git could not support the TUF specification. Maybe TUF could be the solution to ensure end-to-end cryptographic integrity of the source code itself. OpenPGP-signed tarballs are nice, and signed git tags can be useful, but from my experience, a lot of OpenPGP-derived (or, more accurately, GnuPG-derived) tools are brittle and do not offer clear guarantees, and definitely not to the level that TUF tries to address. This would require changes on the git servers and clients, but I think it would be worth it.

Other Projects

OpenBSD There are other tools trying to do parts of what GnuPG is doing, for example minisign and OpenBSD's signify. But they do not integrate with git at all right now. Although I did find a hack to use signify with git, it's kind of gross...

Golang Unsurprisingly, this is a problem everyone is trying to solve. Golang is planning on hosting a notary that would leverage a "certificate-transparency-style tamper-proof log" run by Google (see the spec for details). But that doesn't resolve the "evil server" attack, if we treat Google as an adversary (and we should).

Python Python had OpenPGP going for a while on PyPI, but it's unclear if it ever did anything at all. Now the plan seems to be to use TUF but my hunch is that the complexity of the specification is keeping that from moving ahead.

Docker Docker and the container ecosystem have, in theory, moved to TUF in the form of Notary, "a project that allows anyone to have trust over arbitrary collections of data". In practice, however, in my somewhat limited experience, setting up TUF and image verification in Docker is far from trivial.

Android and iOS Even in what is possibly one of the strongest models (at least in terms of user friendliness), mobile phones are surprisingly unclear about those kinds of questions. I had to ask if Android had end-to-end authentication and I am still not clear on the answer. I have no idea what iOS does.

Conclusion One of the core problems with everything here is the usability of cryptography, and specifically the usability of verification procedures. We have become pretty good at encryption. The harder part (and a requirement for proper encryption) is verification. That problem seems to remain unsolved, in terms of usability. Even Signal, widely considered to be a success in terms of adoption and usability, doesn't properly solve that problem, as users regularly ignore "The security number has changed" warnings... So, even though they deserve a lot of credit in other areas, it seems unlikely that hardcore C hackers (e.g. git and kernel developers) will be able to resolve that problem without at least a little bit of help. And since TUF seems like the state-of-the-art specification in this area, it would seem wise to start adopting it in the git community as well. Update: git 2.26 introduced a new gpg.minTrustLevel setting to "tell various signature verification codepaths the required minimum trust level", presumably to control how Git will treat keys in your keyrings, assuming the "trust database" is valid and up to date. For an interesting narrative of how "normal" (without PGP) git verification can fail, see also A Git Horror Story: Repository Integrity With Signed Commits.
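For example (a sketch based on the git 2.26 release notes; check git-config(1) for the exact accepted values), requiring at least "marginal" trust for signature checks would look like this:
$ git config --global gpg.minTrustLevel marginal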

10 July 2017

Jonathan McDowell: Going to DebConf 17

Going to DebConf17 Completely forgot to mention this earlier in the year, but delighted to say that in just under 4 weeks I'll be attending DebConf 17 in Montréal. Looking forward to seeing a bunch of fine folk there! Outbound:
2017-08-04 11:40 DUB -> 13:40 KEF WW853
2017-08-04 15:25 KEF -> 17:00 YUL WW251
Inbound:
2017-08-12 19:50 YUL -> 05:00 KEF WW252
2017-08-13 06:20 KEF -> 09:50 DUB WW852
(Image created using GIMP, fonts-dkg-handwriting and the DebConf17 Artwork.)

30 April 2017

Chris Lamb: Free software activities in April 2017

Here is my monthly update covering what I have been doing in the free software world (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:
I also made the following changes to diffoscope, our recursive and content-aware diff utility used to locate and diagnose reproducibility issues:
  • New features:
    • Add support for comparing Ogg Vorbis files. (0436f9b)
  • Bug fixes:
    • Prevent a traceback when using --new-file with containers. (#861286)
    • Don't crash on invalid archives; print a useful error instead. (#833697).
    • Don't print error output from bzip2 call. (21180c4)
  • Cleanups:
    • Prevent abstraction-level violations by defining visual diff support on Presenter classes. (7b68309)
    • Show Debian packages installed in test output. (c86a9e1)


Debian
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 882-1 for the tryton-server general application platform to fix a path suffix injection attack.
  • Issued DLA 883-1 for curl preventing a buffer read overrun vulnerability.
  • Issued DLA 884-1 for collectd (a statistics collection daemon) to close a potential infinite loop vulnerability.
  • Issued DLA 885-1 for the python-django web development framework patching two open redirect & XSS attack issues.
  • Issued DLA 890-1 for ming, a library to create Flash files, closing multiple heap-based buffer overflows.
  • Issued DLA 892-1 and DLA 891-1 for the libnl3/libnl Netlink protocol libraries, fixing integer overflow issues which could have allowed arbitrary code execution.

Uploads
  • redis (4:4.0-rc3-1) New upstream RC release.
  • adminer:
    • 4.3.0-2 Fix debian/watch file.
    • 4.3.1-1 New upstream release.
  • bfs:
    • 1.0-1 Initial release.
    • 1.0-2 Drop fstype tests as they rely on /etc/mtab being available. (#861471)
  • python-django:
    • 1:1.10.7-1 New upstream security release.
    • 1:1.11-1 New upstream stable release to experimental.

I sponsored the following uploads: I also performed the following QA uploads:
  • gtkglext (1.2.0-7) Correct installation location of gdkglext-config.h after "Multi-Archification" in 1.2.0-5. (#860007)
Finally, I made the following non-maintainer uploads (NMUs):
  • python-formencode (1.3.0-2) Don't ship files in /usr/lib/python{2.7,3}/dist-packages/docs. (#860146)
  • django-assets (0.12-2) Patch pytest plugin to check whether we are running in a Django context, otherwise we can break unrelated testsuites. (#859916)


FTP Team

As a Debian FTP assistant I ACCEPTed 155 packages: aiohttp-cors, bear, colorize, erlang-p1-xmpp, fenrir, firejail, fizmo-console, flask-ldapconn, flask-socketio, fontmanager.app, fonts-blankenburg, fortune-zh, fw4spl, fzy, gajim-antispam, gdal, getdns, gfal2, gmime, golang-github-go-macaron-captcha, golang-github-go-macaron-i18n, golang-github-gogits-chardet, golang-github-gopherjs-gopherjs, golang-github-jroimartin-gocui, golang-github-lunny-nodb, golang-github-markbates-goth, golang-github-neowaylabs-wabbit, golang-github-pkg-xattr, golang-github-siddontang-goredis, golang-github-unknwon-cae, golang-github-unknwon-i18n, golang-github-unknwon-paginater, grpc, grr-client-templates, gst-omx, hddemux, highwayhash, icedove, indexed-gzip, jawn, khal, kytos-utils, libbloom, libdrilbo, libhtml-gumbo-perl, libmonospaceif, libpsortb, libundead, llvm-toolchain-4.0, minetest-mod-homedecor, mini-buildd, mrboom, mumps, nnn, node-anymatch, node-asn1.js, node-assert-plus, node-binary-extensions, node-bn.js, node-boom, node-brfs, node-browser-resolve, node-browserify-des, node-browserify-zlib, node-cipher-base, node-console-browserify, node-constants-browserify, node-delegates, node-diffie-hellman, node-errno, node-falafel, node-hash-base, node-hash-test-vectors, node-hash.js, node-hmac-drbg, node-https-browserify, node-jsbn, node-json-loader, node-json-schema, node-loader-runner, node-miller-rabin, node-minimalistic-crypto-utils, node-p-limit, node-prr, node-sha.js, node-sntp, node-static-module, node-tapable, node-tough-cookie, node-tunein, node-umd, open-infrastructure-storage-tools, opensvc, openvas, pgaudit, php-cassandra, protracker, pygame, pypng, python-ase, python-bip32utils, python-ltfatpy, python-pyqrcode, python-rpaths, python-statistics, python-xarray, qtcharts-opensource-src, r-cran-cellranger, r-cran-lexrankr, r-cran-pwt9, r-cran-rematch, r-cran-shinyjs, r-cran-snowballc, ruby-ddplugin, ruby-google-protobuf, ruby-rack-proxy, ruby-rails-assets-underscore, rustc, sbt, sbt-launcher-interface, sbt-serialization, sbt-template-resolver, scopt, seqsero, shim-signed, sniproxy, sortedcollections, starjava-array, starjava-connect, starjava-datanode, starjava-fits, starjava-registry, starjava-table, starjava-task, starjava-topcat, starjava-ttools, starjava-util, starjava-vo, starjava-votable, switcheroo-control, systemd, tilix, tslib, tt-rss-notifier-chrome, u-boot, unittest++, vc, vim-ledger, vis, wesnoth-1.13, wolfssl, wuzz, xandikos, xtensor-python & xwallpaper. I additionally filed 14 RC bugs against packages that had incomplete debian/copyright files against getdns, gfal2, grpc, mrboom, mumps, opensvc, python-ase, sniproxy, starjava-topcat, starjava-ttools, unittest++, wolfssl, xandikos & xtensor-python.

8 February 2017

Antoine Beaupré: Reliably generating good passwords

Passwords are used everywhere in our modern life. Between your email account and your bank card, a lot of critical security infrastructure relies on "something you know", a password. Yet there is little standard documentation on how to generate good passwords. There are some interesting possibilities for doing so; this article will look at what makes a good password and some tools that can be used to generate them. There is growing concern that our dependence on passwords poses a fundamental security flaw. For example, passwords rely on humans, who can be coerced to reveal secret information. Furthermore, passwords are "replayable": if your password is revealed or stolen, anyone can impersonate you to get access to your most critical assets. Therefore, major organizations are trying to move away from single password authentication. Google, for example, is enforcing two factor authentication for its employees and is considering abandoning passwords on phones as well, although we have yet to see that controversial change implemented. Yet passwords are still here and are likely to stick around for a long time until we figure out a better alternative. Note that in this article I use the word "password" instead of "PIN" or "passphrase", which all roughly mean the same thing: a small piece of text that users provide to prove their identity.

What makes a good password? A "good password" may mean different things to different people. I will assert that a good password has the following properties:
  • high entropy: hard to guess for machines
  • transferable: easy to communicate for humans or transfer across various protocols for computers
  • memorable: easy to remember for humans
High entropy means that the password should be unpredictable to an attacker, for all practical purposes. It is tempting (and not uncommon) to choose a password based on something else that you know, but unfortunately those choices are likely to be guessable, no matter how "secret" you believe it is. Yes, with enough effort, an attacker can figure out your birthday, the name of your first lover, your mother's maiden name, where you were last summer, or other secrets people think they have. The only solution here is to use a password randomly generated with enough randomness or "entropy" that brute-forcing the password will be practically infeasible. Considering that a modern off-the-shelf graphics card can guess millions of passwords per second using freely available software like hashcat, the typical requirement of "8 characters" is not considered enough anymore. With proper hardware, a powerful rig can crack such passwords offline within about a day. Even though a recent US National Institute of Standards and Technology (NIST) draft still recommends a minimum of eight characters, we now more often hear recommendations of twelve or fourteen characters. A password should also be easily "transferable". Some characters, like & or !, have special meaning on the web or the shell and can wreak havoc when transferred. Certain software also has policies of refusing (or requiring!) some special characters exactly for that reason. Weird characters also make it harder for humans to communicate passwords across voice channels or different cultural backgrounds. In a more extreme example, the popular Signal software even resorted to using only digits to transfer key fingerprints. They outlined that numbers are "easy to localize" (as opposed to words, which are language-specific) and "visually distinct". But the critical piece is the "memorable" part: it is trivial to generate a random string of characters, but those passwords are hard for humans to remember. As xkcd noted, "through 20 years of effort, we've successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess". It explains how a series of words is a better password than a single word with some characters replaced. Obviously, you should not need to remember all passwords. Indeed, you may store some in password managers (which we'll look at in another article) or write them down in your wallet. In those cases, what you need is not a password, but something I would rather call a "token", or, as Debian Developer Daniel Kahn Gillmor (dkg) said in a private email, a "high entropy, compact, and transferable string". Certain APIs are specifically crafted to use tokens. OAuth, for example, generates "access tokens" that are random strings that give access to services. But in our discussion, we'll use the term "token" in a broader sense. Notice how we removed the "memorable" property and added the "compact" one: we want to efficiently convert the most entropy into the shortest password possible, to work around possibly limiting password policies. For example, some bank cards only allow 5-digit security PINs and most web sites have an upper limit on password length. The "compact" property applies less to "passwords" than tokens, because I assume that you will only use a password in select places: your password manager, SSH and OpenPGP keys, your computer login, and encryption keys. Everything else should be in a password manager. Those tools are generally under your control and should allow large enough passwords that the compact property is not particularly important.

Generating secure passwords We'll look now at how to generate a strong, transferable, and memorable password. These are most likely the passwords you will deal with most of the time, as security tokens used in other settings should actually never show up on screen: they should be copy-pasted or automatically typed in forms. The password generators described here are all operated from the command line. Password managers often have embedded password generators, but usually don't provide an easy way to generate a password for the vault itself. The previously mentioned xkcd cartoon is probably a common cultural reference in the security crowd and I often use it to explain how to choose a good passphrase. It turns out that someone actually implemented xkcd author Randall Munroe's suggestion into a program called xkcdpass:
    $ xkcdpass
    estop mixing edelweiss conduct rejoin flexitime
In verbose mode, it will show the actual entropy of the generated passphrase:
    $ xkcdpass -V
    The supplied word list is located at /usr/lib/python3/dist-packages/xkcdpass/static/default.txt.
    Your word list contains 38271 words, or 2^15.22 words.
    A 6 word password from this list will have roughly 91 (15.22 * 6) bits of entropy,
    assuming truly random word selection.
    estop mixing edelweiss conduct rejoin flexitime
Note that the above password has 91 bits of entropy, which is about what a fifteen-character password would have, if chosen at random from uppercase, lowercase, digits, and ten symbols:
    log2((26 + 26 + 10 + 10)^15) = approx. 92.548875
It's also interesting to note that this is closer to the entropy of a fifteen-letter base64 encoded password: since each character is six bits, you end up with 90 bits of entropy. xkcdpass is scriptable and easy to use. You can also customize the word list, separators, and so on with different command-line options. By default, xkcdpass uses the 2 of 12 word list from 12 dicts, which is not specifically geared toward password generation but has been curated for "common words" and words of different sizes. Another option is the diceware system. Diceware works by having a word list in which you look up words based on dice rolls. For example, rolling the five dice "1 4 2 1 4" would give the word "bilge". By rolling those dice five times, you generate a five word password that is both memorable and random. Since paper and dice do not seem to be popular anymore, someone wrote that as an actual program, aptly called diceware. It works in a similar fashion, except that passwords are not space separated by default:
    $ diceware
    AbateStripDummy16thThanBrock
Diceware can obviously change the output to look similar to xkcdpass, but can also accept actual dice rolls for those who do not trust their computer's entropy source:
    $ diceware -d ' ' -r realdice -w en_orig
    Please roll 5 dice (or a single dice 5 times).
    What number shows dice number 1? 4
    What number shows dice number 2? 2
    What number shows dice number 3? 6
    [...]
    Aspire O's Ester Court Born Pk
The diceware software ships with a few word lists, and the default list has been deliberately created for generating passwords. It is derived from the standard diceware list with additions from the SecureDrop project. Diceware ships with the EFF word list that has words chosen for better recognition, but it is not enabled by default, even though diceware recommends using it when generating passwords with dice. That is because the EFF list was added later on. The project is currently considering making the EFF list the default. One disadvantage of diceware is that it doesn't actually show how much entropy the generated password has; those interested need to compute it for themselves. The actual number depends on the word list: the default word list has 13 bits of entropy per word (since it is exactly 8192 words long), which means the default six-word passwords have 78 bits of entropy:
    log2(8192) * 6 = 78
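For comparison, the same arithmetic applied to the other lists mentioned here (the EFF long list is designed for five dice, so it has 6^5 = 7776 words) gives:
    log2(38271) * 6 = approx. 91.3   (xkcdpass default list)
    log2(8192) * 6  = 78             (diceware default list)
    log2(7776) * 6  = approx. 77.5   (EFF long list)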
Both of these programs are rather new, having, for example, entered Debian only after the last stable release, so they may not be directly available for your distribution. The manual diceware method, of course, only needs a set of dice and a word list, so that is much more portable, and both the diceware and xkcdpass programs can be installed through pip. However, if this is all too complicated, you can take a look at Openwall's passwdqc, which is older and more widely available. It generates more memorable passphrases while at the same time allowing for better control over the level of entropy:
    $ pwqgen
    vest5Lyric8wake
    $ pwqgen random=78
    Theme9accord=milan8ninety9few
For some reason, passwdqc restricts the entropy of passwords between the bounds of 24 and 85 bits. That tool is also much less customizable than the other two: what you see here is pretty much what you get. The 4096-word list is also hardcoded in the C source code; it comes from a Usenet sci.crypt posting from 1997. A key feature of xkcdpass and diceware is that you can craft your own word list, which can make dictionary-based attacks harder. Indeed, with such word-based password generators, the only viable way to crack those passwords is to use dictionary attacks, because the password is so long that character-based exhaustive searches are not workable, since they would take centuries to complete. Changing from the default dictionary therefore brings some advantage against attackers. This may be yet another "security through obscurity" procedure, however: a naive approach may be to use a dictionary localized to your native language (for example, in my case, French), but that would deter only an attacker that doesn't do basic research about you, so that advantage is quickly lost to determined attackers. One should also note that the entropy of the password doesn't depend on which word list is chosen, only its length. Furthermore, a larger dictionary only expands the search space logarithmically; in other words, doubling the word-list length only adds a single bit of entropy. It is actually much better to add a word to your password than words to the word list that generates it.

Generating security tokens As mentioned before, most password managers feature a way to generate strong security tokens, with different policies (symbols or not, length, etc). In general, you should use your password manager's password-generation functionality to generate tokens for sites you visit. But how are those functionalities implemented and what can you do if your password manager (for example, Firefox's master password feature) does not actually generate passwords for you? pass, the standard UNIX password manager, delegates this task to the widely known pwgen program. It turns out that pwgen has a pretty bad track record for security issues, especially in the default "phoneme" mode, which generates non-uniformly distributed passwords. While pass uses the more "secure" -s mode, I figured it was worth removing that option to discourage the use of pwgen in the default mode. I made a trivial patch to pass so that it generates passwords correctly on its own. The gory details are in this email. It turns out that there are lots of ways to skin this particular cat. I was suggesting the following pipeline to generate the password:
    head -c $entropy /dev/random | base64 | tr -d '\n='
The above command reads a certain number of bytes from the kernel (head -c $entropy /dev/random) encodes that using the base64 algorithm and strips out the trailing equal sign and newlines (for large passwords). This is what Gillmor described as a "high-entropy compact printable/transferable string". The priority, in this case, is to have a token that is as compact as possible with the given entropy, while at the same time using a character set that should cause as little trouble as possible on sites that restrict the characters you can use. Gillmor is a co-maintainer of the Assword password manager, which chose base64 because it is widely available and understood and only takes up 33% more space than the original 8-bit binary encoding. After a lengthy discussion, the pass maintainer, Jason A. Donenfeld, chose the following pipeline:
    read -r -n $length pass < <(LC_ALL=C tr -dc "$characters" < /dev/urandom)
The above is similar, except it uses tr directly to read characters from the kernel, and selects a certain set of characters ($characters) that is defined earlier as consisting of [:alnum:] for letters and digits and [:graph:] for symbols, depending on the user's configuration. Then the read command extracts the chosen number of characters from the output and stores the result in the pass variable. A participant on the mailing list, Brian Candler, has argued that this wastes entropy as the use of tr discards bits from /dev/urandom with little gain in entropy when compared to base64. But in the end, the maintainer argued that "reading from /dev/urandom has no [effect] on /proc/sys/kernel/random/entropy_avail on Linux" and dismissed the objection. Another password manager, KeePass, uses its own routines to generate tokens, but the procedure is the same: read from the kernel's entropy source (and user-generated sources in case of KeePass) and transform that data into a transferable string.
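As a quick sanity check of the "compact" property (my own aside, not from the discussion above): 16 bytes read from the kernel is 128 bits of entropy, and once base64-encoded and stripped of its padding, the resulting token is only 22 characters long:
    $ head -c 16 /dev/random | base64 | tr -d '\n=' | wc -c
    22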

Conclusion While there are many aspects to password management, we have focused on different techniques for users and developers to generate secure but also usable passwords. Generating a strong yet memorable password is not a trivial problem, as the security vulnerabilities of the pwgen software showed. Furthermore, left to their own devices, users will generate passwords that can be easily guessed by a skilled attacker, especially if they can profile the user. It is therefore essential that we provide easy tools for users to generate strong passwords and encourage them to store secure tokens in password managers.
Note: this article first appeared in the Linux Weekly News.
