Getting the Belgian eID to work on Linux
systems should be fairly easy, although some people do struggle with it.
For that reason, there is a lot of third-party documentation out there
in the form of blog posts, wiki pages, and other kinds of things.
Unfortunately, some of this documentation is simply wrong. It was
written by people who played around with things until they kind of
worked, so sometimes something that used to work in the past (but was
never really necessary) has since stopped working, yet it is still
repeated in a number of places as though it were the gospel.
And then people follow these instructions and now things don't work
anymore.
One of these revolves around OpenSC.
OpenSC is an open source smartcard library that has support for a
pretty
large
number of smartcards, amongst which the Belgian eID. It provides a
PKCS#11 module as well as a
number of supporting tools.
For those not in the know, PKCS#11 is a standardized C API for
offloading cryptographic operations. It is an API that can be used when
talking to a hardware cryptographic module, in order to make that module
perform some actions, and it is especially popular in the open source
world, with support in
NSS,
amongst others. This library is written and maintained by Mozilla, and
is a low-level cryptographic library that is used by Firefox (on all
platforms it supports) as well as by Google Chrome and other browsers
based on that (but only on Linux, and as I understand it, only for
linking with smartcards; their BoringSSL library is used for other
things).
The official eID software that we ship through
eid.belgium.be,
also known as "BeID", provides a PKCS#11 module for the Belgian eID, as
well as a number of support tools to make interacting with the card
easier, such as the "eID viewer", which provides the ability to read
data from the card, and validate their signatures. While the very first
public version of this eID PKCS#11 module was originally based on
OpenSC, it has since been reimplemented as a PKCS#11 module in its own
right, with no lineage to OpenSC whatsoever anymore.
About five years ago, the Belgian eID card was renewed. At the time, a
new physical appearance was the most obvious difference with the old
card, but there were also some technical, on-chip, differences that are
not so apparent. The most important one here, although it is not the
only one, is the fact that newer eID cards now use NIST
P-384 elliptic-curve-based private
keys, rather than the RSA-based
ones that were used in the past. This change required some changes to
any PKCS#11 module that supports the eID; both the BeID one, as well as
the OpenSC card-belpic driver that is written in support of the Belgian
eID.
Obviously, the required changes were implemented for the BeID module;
however, the OpenSC card-belpic driver was not updated. While I did do
some preliminary work on the required changes, I was unable to get it to
work, and eventually other things took up my time so I never finished
the implementation. If someone would like to finish the work that I
started, the preliminary patch that I
wrote
could be a good start -- but like I said, it doesn't yet work. Also,
you'll probably be interested in the official
documentation
of the eID card.
Unfortunately, in the meantime someone added the Applet 1.8 ATR to the
card-belpic.c file, without also implementing the changes required for
the driver to actually support those newer cards.
The result is that if you have the OpenSC PKCS#11 module registered in
NSS for either Firefox or any Chromium-based browser, and it gets picked up
before the BeID PKCS#11 module, then NSS will stop looking and pass all
crypto operations to the OpenSC PKCS#11 module rather than to the
official eID PKCS#11 module, and things will not work at all, causing a
lot of confusion.
I have therefore taken the following two steps:
The official eID packages now
conflict
with the OpenSC PKCS#11 module. Specifically only the PKCS#11 module,
not the rest of OpenSC, so you can theoretically still use its tools.
This means that once we release this new version of the eID software,
when you do an upgrade and you have OpenSC installed, it will remove
the PKCS#11 module and anything that depends on it. This is normal
and expected.
I have filed a pull
request against OpenSC
that removes the Applet 1.8 ATR from the driver, so that OpenSC will
stop claiming that it supports the 1.8 applet.
When the pull request is accepted, we will update the official eID
software to make the conflict versioned, so that as soon as it works
again you will again be able to install the OpenSC and BeID packages at
the same time.
In the meantime, if you have the OpenSC PKCS#11 module installed on
your system, and your eID authentication does not work, try removing it.
I recently came across an X.509 P(rivate)KI Root Certificate which had
a pathLen constraint set on the (self-signed) Root Certificate.
Since that is not commonly seen, I looked around a bit to get a
better understanding of how the pathLen basic constraint
should be used.
Primary source is
RFC 5280 section 4.2.1.9:
The pathLenConstraint field is meaningful only if the cA boolean is
asserted and the key usage extension, if present, asserts the
keyCertSign bit (Section 4.2.1.3). In this case, it gives the
maximum number of non-self-issued intermediate certificates that may
follow this certificate in a valid certification path
Since the Root is always self-issued, it doesn't count towards the limit,
and since it's the last certificate (or the first, depending on how you count)
in a chain, it's pretty much pointless to configure a pathLen constraint
directly on a Root Certificate.
Another relevant resource is the
Baseline Requirements of the CA/Browser
Forum (currently v2.0.2).
Section 7.1.2.1.4 "Root CA Basic Constraints" describes it
as NOT RECOMMENDED for a Root CA.
Last but not least there is the awesome
x509 Limbo project
which has a
section for validating pathLen constraints.
Since the RFC 5280-based assumption is that self-signed certs do not
count, they do not check the case of such a constraint on the Root itself,
nor what implementations do about it. So the assumption right now is that
they properly ignore it.
Summary: It's pointless to set the pathLen constraint on the Root Certificate, so just
don't do it.
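For illustration, here is how such a (discouraged) root certificate can be produced and inspected with OpenSSL; the subject name and file names are just examples:

```shell
# Create a self-signed root CA carrying an (ill-advised) pathLen
# constraint of 0 (requires OpenSSL 1.1.1 or later for -addext).
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
  -days 1 -subj "/CN=Example Root CA" \
  -addext "basicConstraints=critical,CA:TRUE,pathlen:0"

# Display the Basic Constraints extension of the resulting certificate.
openssl x509 -in root.pem -noout -ext basicConstraints
```

On a recent OpenSSL, the second command prints the X509v3 Basic Constraints section showing `CA:TRUE, pathlen:0`, which is exactly the configuration the Baseline Requirements describe as NOT RECOMMENDED.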
This article has been originally posted on November 4, 2023, and has
been updated (at the bottom) since.
Thanks to All Saints' Day, I've just had a 5-day weekend. One of those
days I woke up and decided I absolutely needed a cartonnage box for the
cardboard and linocut piecepack I've been working on for quite some
time.
I started drawing a plan with measures before breakfast, then decided to
change some important details, restarted from scratch, did a quick dig
through the bookbinding materials and settled on 2 mm cardboard for the
structure, black fabric-like paper for the outside and a scrap of paper
with a manuscript print for the inside.
Then we had the only day with no rain among the five, so some time was
spent doing things outside, but on the next day I quickly finished two
boxes, at two different heights.
The weather situation also meant that while I managed to take passable
pictures of the first stages of the box making in natural light, the
last few stages required some creative artificial lighting, even if it
wasn't that late in the evening. I need to build myself a
light box.
And then I decided that since they are C6 sized, they also work well for
postcards or other A6 pieces of paper, so I will probably need
to make another one when the piecepack set is finally finished.
The original plan was to use a linocut of the piecepack suites as the
front cover; I don't currently have one ready, but will make it while
printing the rest of the piecepack set. One day :D
One of the boxes was temporarily used for the plastic piecepack I got
with the book, and that one works well, but since it's a set with
standard suites I think I will want to make another box, using some of
the paper with fleur-de-lis that I saw in the stash.
I've also started to write detailed instructions: I will publish them as
soon as they are ready, and then either update this post, or mention them
in an additional post if I have already made more boxes in the meanwhile.
Update 2024-03-25: the instructions have been published on my craft
patterns website.
My effort to improve the transparency and trustworthiness of public apt archives continues. I started to work on this in Apt Archive Transparency, in which I mention the debdistget project in passing. Debdistget is responsible for mirroring index files for some public apt archives. I've realized that having a publicly auditable and preserved mirror of the apt repositories is central to being able to do apt transparency work, so the debdistget project has become more central to my effort than I thought. Currently I track Trisquel, PureOS, Gnuinos and their upstreams Ubuntu, Debian and Devuan.
Debdistget downloads Release/Package/Sources files and stores them in a git repository published on GitLab. Due to size constraints, it uses two repositories: one for the Release/InRelease files (which are small) and one that also includes the Package/Sources files (which are large). See for example the repository for Trisquel release files and the Trisquel package/sources files. Repositories for all distributions can be found in the debdistutils archives GitLab sub-group.
The reason for splitting into two repositories was that the git repository for the combined files became large, and that some of my use-cases only needed the release files. Currently the repositories with packages (which contain a couple of months' worth of data now) are 9GB for Ubuntu, 2.5GB for Trisquel/Debian/PureOS, 970MB for Devuan and 450MB for Gnuinos. The repository size is correlated to the size of the archive (for the initial import) plus the frequency and size of updates. Ubuntu's use of Apt Phased Updates (which triggers a higher churn of Packages file modifications) appears to be the primary reason for its larger size.
Working with large Git repositories is inefficient, and the GitLab CI/CD jobs generate quite some network traffic downloading the git repository over and over again. The heaviest user is the debdistdiff project, which downloads all distribution package repositories to do diff operations on the package lists between distributions. The daily job takes around 80 minutes to run, with the majority of the time spent on downloading the archives. Yes, I know I could look into runner-side caching, but I dislike the complexity caused by caching.
Fortunately not all use-cases require the package files. The debdistcanary project only needs the Release/InRelease files, in order to commit signatures to the Sigstore and Sigsum transparency logs. These jobs still run fairly quickly, but watching the repository size grow worries me. Currently these repositories are at Debian 440MB, PureOS 130MB, Ubuntu/Devuan 90MB, Trisquel 12MB, Gnuinos 2MB. Here I believe the main size correlation is update frequency, and Debian is large because I track the volatile unstable suite.
So I hit a scalability wall with my first approach. A couple of months ago I solved this by discarding and resetting these archival repositories. The GitLab CI/CD jobs were fast again and all was well. However this meant discarding precious historic information. A couple of days ago I was reaching the limits of practicality again, and started to explore ways to fix this. I like having data stored in git (it allows easy integration with software integrity tools such as GnuPG and Sigstore, and the git log provides a kind of temporal ordering of the data), so moving to a traditional database with an on-disk approach felt like giving up on nice properties. So I started to learn about Git-LFS, and understanding that it is able to handle multi-GB worth of data looked promising.
Fairly quickly I scripted up a GitLab CI/CD job that incrementally updates the Release/Package/Sources files in a git repository that uses Git-LFS to store all the files. The repository size is now at Ubuntu 650kB, Debian 300kB, Trisquel 50kB, Devuan 250kB, PureOS 172kB and Gnuinos 17kB. As can be expected, jobs are quick to clone the git archives: debdistdiff pipelines went from a run-time of 80 minutes down to 10 minutes, which correlates more reasonably with the archive size and CPU run-time.
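The core of such an incremental-update job can be sketched as follows. This is an illustrative outline, not debdistget's actual code: the file contents are made up, and the Git-LFS tracking setup is omitted so the sketch runs with plain git.

```shell
set -e
# Work in a throwaway repository to demonstrate the idea.
repo=$(mktemp -d)
cd "$repo"
git init -q

# First run: import the current index file.
printf 'Suite: bookworm\nDate: day one\n' > InRelease
git add InRelease
git -c user.name=ci -c user.email=ci@example.org commit -qm 'Import InRelease.'

# Later run: fetch the (changed) index file again, and commit only if
# something actually changed, so the history stays free of no-op commits.
printf 'Suite: bookworm\nDate: day two\n' > InRelease
git add InRelease
git diff --cached --quiet || \
  git -c user.name=ci -c user.email=ci@example.org commit -qm 'Update InRelease.'

git log --oneline
```

In the real job the `printf` lines would be replaced by downloading the Release/Package/Sources files over HTTP, and the repository would have `git lfs track` patterns set up for the large files.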
The LFS storage size for those repositories is at Ubuntu 15GB, Debian 8GB, Trisquel 1.7GB, Devuan 1.1GB, PureOS/Gnuinos 420MB. This is for a couple of days' worth of data. It seems native Git is better at compressing/deduplicating data than Git-LFS is: the combined size for Ubuntu is already 15GB for a couple of days of data, compared to 8GB for a couple of months' worth of data with pure Git. This may be a sub-optimal implementation of Git-LFS in GitLab, but it does worry me that this new approach will be difficult to scale too. At some level the difference is understandable: Git-LFS probably stores two different Packages files of around 90MB each for Trisquel as two 90MB files, whereas native Git would store one compressed version of the 90MB file plus one relatively small delta to turn the old file into the next one. So the Git-LFS approach surprisingly scales less well for overall storage size. Still, the git repository itself is much smaller, and you usually don't have to pull all LFS files anyway. So it is a net win.
Throughout this work, I kept thinking about how my approach relates to Debian's snapshot service. Ultimately what I would want is a combination of these two services. To have a good foundation for transparency work I would want a collection of all Release/Packages/Sources files ever published, and ultimately also the source code and binaries. While it makes sense to start with the latest stable releases of distributions, this effort should scale backwards in time as well. For reproducing binaries from source code, I need to be able to securely find earlier versions of the binary packages used for rebuilds. So I need to import all the Release/Packages/Sources files from snapshot into my repositories. The latency to retrieve files from that server is high, so I haven't been able to find an efficient/parallelized way to download the files. If I'm able to finish this, I would have confidence that my new Git-LFS-based approach to storing these files will scale over many years to come. This remains to be seen. Perhaps the repository has to be split up per release, per architecture, or similar.
Another factor is storage costs. While the git repository size for a Git-LFS based repository with files from several years may be possible to sustain, the Git-LFS storage size surely won't be. It seems GitLab charges the same for files in repositories and in Git-LFS, around $500 per 100GB per year. It may be possible to set up a separate Git-LFS backend not hosted at GitLab to serve the LFS files. Does anyone know of a suitable server implementation for this? I had a quick look at the Git-LFS implementation list and it seems the closest reasonable approach would be to set up the Gitea-clone Forgejo as a self-hosted server. Perhaps a cloud storage approach à la S3 is the way to go? The cost to host this on GitLab will be manageable for up to ~1TB ($5,000/year), but scaling it to store, say, 500TB of data would mean a yearly fee of $2.5M, which seems like poor value for the money.
I realized that ultimately I would want a git repository locally with the entire content of all apt archives, including their binary and source packages, ever published. The storage requirements for a service like snapshot (~300TB of data?) are today not prohibitively expensive: 20TB disks are $500 apiece, so a storage enclosure with 36 disks would be around $18,000 for 720TB, and using RAID1 means 360TB, which is a good start. While I have heard about ~TB-sized Git-LFS repositories, would Git-LFS scale to 1PB? Perhaps the size of a git repository with many millions of Git-LFS pointer files will become unmanageable? To get started on this approach, I decided to import a mirror of Debian's bookworm for amd64 into a Git-LFS repository. That is around 175GB, so reasonably cheap to host even on GitLab ($1,000/year for 200GB). Having this repository publicly available will make it possible to write software that uses this approach (e.g., porting debdistreproduce), to find out if this is useful and if it can scale. Distributing the apt repository via Git-LFS would also enable other interesting ideas for protecting the data. Consider configuring apt to use a local file:// URL to this git repository, and verifying the git checkout using some method similar to Guix's approach to trusting git content or Sigstore's gitsign.
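Such an apt setup could look like the following sources.list entry. The local path is hypothetical, and the signed-by option keeps the usual archive-signature verification in place on top of whatever git-level verification is used:

```
# Hypothetical apt source pointing at a local checkout of the
# Git-LFS mirror repository.
deb [signed-by=/usr/share/keyrings/debian-archive-keyring.gpg] file:///srv/mirror/debian bookworm main
```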
A naive push of the 175GB archive in a single git commit ran into pack size limitations:
remote: fatal: pack exceeds maximum allowed size (4.88 GiB)
however breaking up the commit into smaller commits for parts of the archive made it possible to push the entire archive. Here are the commands to create this repository:
git init
git lfs install
git lfs track 'dists/**' 'pool/**'
git add .gitattributes
git commit -m "Add Git-LFS track attributes." .gitattributes
time debmirror --method=rsync --host ftp.se.debian.org --root :debian \
  --arch=amd64 --source --dist=bookworm,bookworm-updates --section=main \
  --verbose --diff=none \
  --keyring /usr/share/keyrings/debian-archive-keyring.gpg --ignore .git .
git add dists project
git commit -m "Add." -a
git remote add origin git@gitlab.com:debdistutils/archives/debian/mirror.git
git push --set-upstream origin --all
for d in pool/*/*; do
  echo $d
  time git add $d
  git commit -m "Add $d." -a
  git push
done
The resulting repository size is around 27MB, with Git-LFS object storage around 174GB. I think this approach would scale to handle all architectures for one release, but working with a single git repository for all releases for all architectures may lead to a too-large git repository (>1GB). So maybe one repository per release? These repositories could also be split up on a subset of pool/ files, or there could be one repository per release per architecture, or per sources.
Finally, I have concerns about using SHA1 for identifying objects. It seems both Git and Debian's snapshot service currently use SHA1. For Git there is the SHA-256 transition, and it seems GitLab is working on support for SHA256-based repositories. For serious long-term deployment of these concepts, it would be nice to go for SHA256 identifiers directly. Git-LFS already uses SHA256, but Git internally uses SHA1, as does the Debian snapshot service.
What do you think? Happy Hacking!
I ended 2022 with a musical retrospective and very much enjoyed writing
that blog post. As such, I have decided to do the same for 2023! From now on,
this will probably be an annual thing :)
Albums
In 2023, I added 73 new albums to my collection: nearly 3 albums every two
weeks! I listed them below in the order in which I acquired them.
I purchased most of these albums when I could and borrowed the rest at
libraries. If you want to browse through, I added links to the album covers
pointing either to websites where you can buy them or to Discogs when digital
copies weren't available.
Once again this year, it seems that Punk (mostly Oi!) and Metal dominate my
list, mostly fueled by Angry Metal Guy and the amazing Montréal
Skinhead/Punk concert scene.
Concerts
A trend I started in 2022 was to go to as many concerts of artists I like as
possible. I'm happy to report I went to around 80% more concerts in 2023 than
in 2022! Looking back at my list, April was quite a busy month...
Here are the concerts I went to in 2023:
March 8th: Godspeed You! Black Emperor
April 11th: Alexandra Stréliski
April 12th: Bikini Kill
April 21st: Brigada Flores Magon, Union Thugs
April 28th: Komintern Sect, The Outcasts, Violent Way, Ultra Razzia, Over the
Hill
May 3rd: First Fragment
May 12th: Rhapsody of Fire, Wind Rose
May 13th: Aeternam
June 2nd: Mortier, La Gachette
June 17th: Ultra Razzia, Total Nada, BLEMISH
June 30th: Avishai Cohen Trio
July 9th: Richard Galliano
August 18th: Gojira, Mastodon, Lorna Shore
September 14th: Jinjer
September 22nd: CUIR, Salvaje Punk, Hysteric Polemix, Perestroika, Ultra Razzia, Ilusion, Over the Hill, Asbestos
October 6th: Rancoeur, Street Code, Tenaz, Mortimer, Guernica, High Anxiety
Although metalfinder continues to work as intended, I'm very glad to have
discovered the Montréal underground scene has departed from Facebook/Instagram
and adopted en masse Gancio, a FOSS community agenda that supports
ActivityPub. Our local instance, askapunk.net,
is pretty much all I could ask for :)
That's it for 2023!
I know that people rave about GMail's spam filtering, but it didn't work for
me: I was seeing too many false positives. I personally prefer to see some
false negatives (i.e. letting some spam through), but to reduce false
positives as much as possible (and ideally have a way to tune this).
Here's the local SpamAssassin setup I
have put together over many years. In addition to the parts I describe here,
I also turn off
greylisting on my email
provider (KolabNow) because I don't want to have to
wait for up to 10 minutes for a "2FA" email to go through.
This setup assumes that you download all of your emails to your local
machine. I use fetchmail for this, though
similar tools should work too.
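For reference, a minimal fetchmail configuration that hands each message to procmail could look like this. The hostname and user names are placeholders, not my actual setup:

```
# ~/.fetchmailrc (illustrative)
poll imap.example.com protocol IMAP
  user "remoteuser" is "localuser" here
  ssl
  mda "/usr/bin/procmail -d %T"
```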
Three tiers of emails
The main reason my setup works for me, despite my receiving hundreds of spam
messages every day, is that I split incoming emails into three tiers via
procmail:
not spam: delivered to inbox
likely spam: quarantined in a soft_spam/ folder
definitely spam: silently deleted
I only ever have to review the likely spam tier for false positives, which
is on the order of 10-30 spam emails a day. I never even see the
hundreds that are silently deleted due to a very high score.
This is implemented based on a threshold in my .procmailrc:
# Use spamassassin to check for spam
:0fw: .spamassassin.lock
/usr/bin/spamassassin
# Throw away messages with a score of > 12.0
:0
* ^X-Spam-Level: \*\*\*\*\*\*\*\*\*\*\*\*
/dev/null
:0:
* ^X-Spam-Status: Yes
$HOME/Mail/soft_spam/
# Deliver all other messages
:0:
$DEFAULT
I also use the following ~/.muttrc configuration to easily report false
negatives/positives and examine my likely spam folder via a shortcut in
mutt:
unignore X-Spam-Level
unignore X-Spam-Status
macro index S "c=soft_spam/\n" "Switch to soft_spam"
# Tell mutt about SpamAssassin headers so that I can sort by spam score
spam "X-Spam-Status: (Yes|No), (hits|score)=(-?[0-9]+\.[0-9])" "%3"
folder-hook =soft_spam 'push ol'
folder-hook =spam 'push ou'
# <Esc>d = de-register as non-spam, register as spam, move to spam folder.
macro index \ed "<enter-command>unset wait_key\n<pipe-entry>spamassassin -r\n<enter-command>set wait_key\n<save-message>=spam\n" "report the message as spam"
# <Esc>u = unregister as spam, register as non-spam, move to inbox folder.
macro index \eu "<enter-command>unset wait_key\n<pipe-entry>spamassassin -k\n<enter-command>set wait_key\n<save-message>=inbox\n" "correct the false positive (this is not spam)"
Custom SpamAssassin rules
In addition to the default ruleset that comes with SpamAssassin, I've also
accrued a number of custom rules over the years.
The first set comes from the (now defunct) SpamAssassin Rules
Emporium.
The second set is the one that backs bugs.debian.org and
lists.debian.org.
Note that this second one includes archived copies of some of the SARE rules, and
so I only use some of the rules in the common/ directory.
Finally, I wrote a few custom rules of my
own based
on specific kinds of emails I have seen slip through the cracks. I haven't
written any of those in a long time and I suspect some of my rules are now
obsolete. You may want to do your own testing before you copy these outright.
In addition to rules to match more spam, I've also written a ruleset to
remove false positives in French
emails
coming from many of the above custom rules. I also wrote a rule to give a
bonus to any email that comes with a patch:
describe FM_PATCH Includes a patch
body FM_PATCH /\bdiff -pruN\b/
score FM_PATCH -1.0
since it's not very common in spam emails.
SpamAssassin settings
When it comes to my system-wide SpamAssassin configuration in
/etc/spamassassin/, I enable the following plugins:
Some of these require extra helper packages or Perl libraries to be
installed. See the comments in the relevant *.pre files.
My ~/.spamassassin/user_prefs file contains the following configuration:
as well as manual score
reductions
due to false positives, and manual score
increases
to help push certain types of spam emails over the 12.0 definitely spam
threshold.
Finally, I have the FuzzyOCR
package installed since it has
occasionally flagged some spam that other tools had missed. It is a little
resource-intensive though, so you may want to avoid it if you are
filtering spam for other people.
As always, feel free to leave a comment if you do something else that works
well and that's not included in my setup. This is a work-in-progress.
Freexian Meetup, by Stefano Rivera, Utkarsh Gupta, et al.
During DebConf, Freexian organized a
meetup for its
collaborators and those interested in learning more about Freexian and its
services. It was well received and many people interested in Freexian showed up.
Some developers who were interested in contributing to LTS came to get more
details about joining the project. And some prospective customers came to get to
know us and ask questions.
Sadly, the tragic loss of Abraham
shook DebConf, both individually and structurally. The meetup got rescheduled to
a small room without video coverage. Even so, we still had a wholesome
interaction, and here's a quick picture from the meetup taken by Utkarsh (which
is also why he's missing!).
Debusine, by Raphaël Hertzog, et al.
Freexian has been investing into
debusine for a while, but
development speed is about to increase dramatically thanks to funding from
SovereignTechFund.de. Raphaël laid out the
5 milestones of
the funding contract, and filed the
issues for the first milestone.
Together with Enrico and Stefano, they established a
workflow
for the expanded team.
Among the first steps of this milestone, Enrico started to work on a
developer-friendly description of debusine
that we can use when we reach out to the many Debian contributors that we will
have to interact with. And Raphaël started the design work of the autopkgtest
and lintian tasks,
i.e. what's the interface to schedule such tasks, and what behavior and
associated options do we support?
At this point you might wonder what debusine is supposed to be; let us try to
answer this: debusine manages the scheduling and distribution of Debian-related
build and QA tasks to a network of worker machines. It also manages the
resulting artifacts and provides the results in an easy-to-consume way.
We want to make it easy for Debian contributors to leverage all the great QA
tools that Debian provides. We want to build the next generation of Debian's
build infrastructure, one that will continue to reliably do what it already
does, but that will also enable distribution-wide experiments, custom package
repositories and custom workflows with advanced package reviews.
If this all sounds interesting to you, don't hesitate to
watch the project on salsa.debian.org
and to contribute.
lpr/lpd in Debian, by Thorsten Alteholz
During Debconf23, Till Kamppeter presented CPDB (Common Print Dialog Backend),
a new way to handle print queues. After this talk it was discussed whether the
old lpr/lpd based printing system could be abandoned in Debian or whether there
is still demand for it.
So Thorsten asked on the
debian-devel email list
whether anybody uses it. Oddly enough, these old packages are still useful:
Within a small network it is easier to distribute a printcap file, than to
properly configure cups clients.
One of the biggest manufacturers of WLAN routers and DSL boxes only supports
raw queues when attaching a USB printer to their hardware. Admittedly,
CPDB still has problems with such raw queues.
The Pharos printing system at MIT is still lpd-based.
As a result, the lpr/lpd stuff is not yet ready to be abandoned and Thorsten
will adopt the relevant packages (or rather move them under the umbrella of the
debian-printing team). Though it is not planned to develop new features, those
packages should at least have a maintainer. This month Thorsten adopted rlpr,
a utility for lpd printing without using /etc/printcap. The next one he is
working on is lprng, an lpr/lpd printer spooling system. If you know of any
other package that is also needed and still maintained by the QA team, please
tell Thorsten.
/usr-merge, by Helmut Grohne
Discussion about lifting the file move moratorium has been initiated with the
CTTE and the release team. A formal lift is
dependent on updating debootstrap in older suites though. A significant number
of packages can automatically move their systemd unit files if
dh_installsystemd and systemd.pc change their installation targets.
Unfortunately, doing so makes some packages FTBFS and therefore
patches have been filed.
The analysis tool, dumat, has been enhanced to better understand
which upgrade scenarios are considered supported
to reduce false positive bug filings and gained a mode for
local operation on a .changes file
meant for inclusion in salsa-ci. The filing of bugs from dumat is still
manual to improve the quality of reports.
Since September, the moratorium
has been lifted.
Miscellaneous contributions
Raphaël updated Django's backport in bullseye-backports to match the latest
security release that was published in bookworm. Tracker.debian.org is still
using that backport.
Helmut Grohne sent 13 patches for cross build failures.
Helmut Grohne performed a maintenance upload of debvm enabling its
use in autopkgtests.
Helmut Grohne wrote an API-compatible reimplementation of
autopkgtest-build-qemu. It is powered by mmdebstrap, therefore
unprivileged, EFI-only and will soon be
included in mmdebstrap.
Santiago continued the work regarding how to make it easier to
(automatically) test reverse dependencies.
An example
of the ongoing work was presented during the Salsa CI BoF at DebConf 23.
In fact, omniorb-dfsg test pipelines such as the above were used for the
omniorb-dfsg 4.3.0 transition,
verifying how the reverse dependencies (tango, pytango and omnievents) were
built and how their autopkgtest jobs run with the to-be-uploaded omniorb-dfsg
new release.
Utkarsh and Stefano attended and helped run DebConf 23. They also continued
winding up the DebConf 22 accounting.
Introduction
DebConf23, the 24th annual Debian Conference, was held in India in the city of Kochi, Kerala from the 3rd to the 17th of September, 2023. Ever since I got to know about it (which was more than a year ago), I was excited to attend DebConf in my home country. This was my second DebConf, as I attended one last year in Kosovo. I was very happy that I didn't need to apply for a visa to attend. I got a full bursary to attend the event (thanks a lot to Debian for that!), which is always helpful in covering the expenses, especially if the venue is a five-star hotel :)
For the conference, I submitted two talks. One, suggested by Sahil, was on Debian packaging for beginners. The other started out as a talk about the prav app project, but Praveen opined that a talk covering broader topics about freedom in self-hosting services would be better, so I submitted one on Debian packaging for beginners and the other on ideas for sustainable solutions for self-hosting.
My friend Suresh - who is enthusiastic about Debian and free software - wanted to attend the DebConf as well. When the registration started, I reminded him about applying. We landed in Kochi on the 28th of August 2023 during the festival of Onam. We celebrated Onam in Kochi, had a trip to Wayanad, and returned to Kochi. On the evening of the 3rd of September, we reached the venue - Four Points Hotel by Sheraton, at Infopark Kochi, Ernakulam, Kerala, India.
Hotel overview
The hotel had 14 floors, and featured a swimming pool and gym (these were included in our package). The hotel gave us elevator access for only our floor, along with public spaces like the reception, gym, swimming pool, and dining areas. The temperature inside the hotel was pretty cold and I had to buy a jacket to survive. Perhaps the hotel was in cahoots with winterwear companies? :)
Meals
On the first day, Suresh and I had dinner at the eatery on the third floor. At the entrance, a member of the hotel staff asked us how many people we wanted a table for. I told her that it's just the two of us at the moment, but (as we were attending a conference) we might be joined by others. Regardless, they gave us a table for just two. Within a few minutes, we were joined by Alper from Turkey and urbec from Germany. So we shifted to a larger table, but then we were joined by even more people, so we were busy adding more chairs to our table. urbec had already been in Kerala for the past 5-6 days and was, on one hand, very happy with the quality and taste of bananas in Kerala, and on the other, rather afraid of the spicy food :)
Two days later, the lunch and dinner were shifted to the All Spice Restaurant on the 14th floor, but breakfast was still served at the eatery. Since the eatery (on the 3rd floor) had a greater variety of food than the other venue, this move made breakfast the best meal for me and many others. Many attendees from outside India were not accustomed to the spicy food. It is difficult for locals to help them, because what we consider mild can be spicy for others. It is not easy to satisfy everyone at the dining table, but I think the organizing team did a very good job in the food department. (That said, it didn't matter for me after a point, and you will know why.) The pappadam were really good, and I liked the rice labelled "Kerala rice". I actually brought that exact rice and pappadam home during my last trip to Kochi, and everyone at my home liked it too (thanks to Abhijit PA). I also wished to eat all types of payasams from Kerala, and this really happened (thanks to Sruthi, who designed the menu). Every meal had a different variety of payasam, and it was awesome, although I didn't like some of them, mostly because they were very sweet. Meals were later shifted to the ground floor (taking away the best breakfast option, which was the eatery).
The excellent Swag Bag
The DebConf registration desk was on the second floor. We were given a very nice swag bag. The bags were available in multiple colors - grey, green, blue, red - and included an umbrella, a steel mug, a multiboot USB drive by Mostly Harmless, a thermal flask, a mug by Canonical, a paper coaster, and stickers. It rained almost every day in Kochi during our stay, so handing out an umbrella to every attendee was a good idea.
A gift for Nattie
During breakfast one day, Nattie (Belgium) expressed the desire to buy a coffee filter. The next time I went to the market, I bought a coffee filter for her as a gift. She seemed happy with the gift and was flattered to receive a gift from a young man :)
Being a mentor
There were many newbies who were eager to learn and contribute to Debian. So, I mentored whoever came to me and was interested in learning. I conducted a packaging workshop in the bootcamp, but could only cover how to set up the Debian Unstable environment, and had to leave out how to package (but I covered that in my talk). Carlos (Brazil) gave a keysigning session in the bootcamp. Praveen was also mentoring in the bootcamp. I helped people understand why we sign GPG keys and how to sign them. I planned to take a workshop on it but cancelled it later.
My talk
My Debian packaging talk was on the 10th of September, 2023. I had not prepared the slides in advance - I thought that I could do it during the trip, but I didn't get the time, so I prepared them the day before the talk. Since it was mostly a tutorial, the slides did not need much preparation. My thanks to Suresh, who helped me with the slides and made it possible to complete them in such a short time frame.
My talk was well-received by the audience, going by their comments. I am glad that I could give an interesting presentation.
Visiting a saree shop
After my talk, Suresh, Alper, and I went with Anisa and Kristi - who are both from Albania, and have a never-ending fascination for Indian culture :) - to buy them sarees. We took autos to Kakkanad market and found a shop with a great variety of sarees. I was slightly familiar with the area around the hotel, as I had been there for a week. Indian women usually don't try on sarees while buying - they just select the design. But Anisa wanted to put one on and take a few photos as well. The shop staff did not have a trial saree for this purpose, so they took a saree from a mannequin. It took about an hour for the lady at the shop to help Anisa put on that saree, but you could tell that she was in heaven wearing it, and she bought it immediately :) Alper also bought a saree to take back to Turkey for his mother. Suresh and I wanted to buy a kurta which would go well with the mundu we already had, but we could not find anything to our liking.
Cheese and Wine Party
On the 11th of September we had the Cheese and Wine Party, a tradition of every DebConf. I brought Kaju Samosa and Nankhatai from home. Many attendees expressed their appreciation for the samosas. During the party, I was with Abhas and had a lot of fun. Abhas brought packets of paan and served them at the Cheese and Wine Party. We discussed interesting things and ate burgers. But due to the restrictive alcohol laws in the state, it was less fun compared to the previous DebConfs - you could only drink alcohol served by the hotel in public places. If you bought your own alcohol, you could only drink in private places (such as in your room, or a friend s room), but not in public places.
Party at my room
Last year, Joenio (Brazilian) brought pastis from France, which I liked. He brought the same alcoholic drink this year too. So I invited him to my room after the Cheese and Wine party to have pastis. My idea was to have it with my roommate Suresh and Joenio. But then we permitted Joenio to bring as many people as he wanted, and he ended up bringing some ten people. Suddenly, the room was crowded. I was having a good time at the party, serving them the snacks given to me by Abhas. The news of an alcohol party in my room spread like wildfire. Soon there were so many people that the AC became ineffective and I found myself sweating.
I left the room and roamed around the hotel for some fresh air. I came back after about 1.5 hours - for the most part, I was sitting on the ground floor with TK Saurabh. And then I met Abraham near the gym (which was my last meeting with him). I came back to my room at around 2:30 AM. Nobody seemed to have realized that I was gone. They were thanking me for hosting such a good party. A lot of people left at that point, and the remaining people were playing songs and dancing (everyone had been dancing all along!). I had no energy left to dance or join them. They left around 03:00 AM. But I am glad that people enjoyed partying in my room.
Sadhya Thali
On the 12th of September, we had a sadhya thali for lunch. It is a vegetarian thali served on a banana leaf on the eve of Thiruvonam. It wasn't Thiruvonam on this day, but we got a special and filling lunch. The rasam and payasam were especially yummy.
Day trip
On the 13th of September, we had a daytrip. I chose the daytrip houseboat in Allepey. Suresh chose the same, and we registered for it as soon as it was open. This was the most sought-after daytrip by the DebConf attendees - around 80 people registered for it.
Our bus was set to leave at 9 AM on the 13th of September. Suresh and I woke up at 8:40 and hurried to get to the bus in time. It took two hours to reach the venue where we got the houseboat.
The houseboat experience was good. The trip featured some good scenery. I got to experience the renowned Kerala backwaters. We were served food on the boat. We also stopped at a place and had coconut water. By evening, we came back to the place where we had boarded the boat.
A good friend lost
When we came back from the daytrip, we received news that Abraham Raji had been involved in a fatal accident during a kayaking trip.
Abraham Raji was a very good friend of mine. On my Albania-Kosovo-Dubai trip last year, he was my roommate in our Tirana apartment. I roamed around Dubai with him, and we had many discussions during DebConf22 Kosovo. He was the one who took the photo of me that is on my homepage. I also met him at MiniDebConf22 Palakkad and MiniDebConf23 Tamil Nadu, and visited his flat in Kochi in June this year.
We had many projects in common. He was a Free Software activist and was the designer of the DebConf23 logo, in addition to those for other Debian events in India.
We were all fairly shocked by the news. I was devastated. Food lost its taste, and it became difficult to sleep. That night, Anisa and Kristi cheered me up and gave me company. Thanks a lot to them.
The next day, Joenio also tried to console me. I thank him for doing a great job. I thank everyone who helped me in coping with the difficult situation.
On the next day (the 14th of September), the Debian project leader Jonathan Carter addressed and announced the news officially. The Debian project also mentioned it on their website.
Abraham was supposed to give a talk, but following the incident, all talks were cancelled for the day. The conference dinner was also cancelled.
As I write, 9 days have passed since his death, but even now I cannot come to terms with it.
Visiting Abraham s house
On the 15th of September, the conference ran two buses from the hotel to Abraham s house in Kottayam (2 hours ride). I hopped in the first bus and my mood was not very good. Evangelos (Germany) was sitting opposite me, and he began conversing with me. The distraction helped and I was back to normal for a while. Thanks to Evangelos as he supported me a lot on that trip. He was also very impressed by my use of the StreetComplete app which I was using to edit OpenStreetMap.
In two hours, we reached Abraham's house. I couldn't control myself and burst into tears. I went to see the body. I met his family (mother, father and sister), but I had nothing to say and I felt helpless. Owing to the loss of sleep and appetite over the past few days, I had no energy, and didn't think it was a good idea for me to stay there. I went back by bus after one hour and had lunch at the hotel. I withdrew my talk scheduled for the 16th of September.
A Japanese gift
I got a nice Japanese gift from Niibe Yutaka (Japan) - a folder for keeping papers, featuring ancient Japanese manga characters. He said he felt guilty because he had swapped his talk slot with mine, which moved my talk from the 12th to the 16th of September - the talk I later withdrew.
Group photo
On the 16th of September, we had a group photo. I am glad that this year I was clearer in this picture than in the DebConf22 one.
Volunteer work and talks attended
I attended the training session for the video team and worked as a camera operator. The Bits from the DPL talk was nice. I enjoyed Abhas' presentation on home automation. He basically demonstrated how he liberated Internet-enabled home devices. I also liked Kristi's presentation on ways to engage with the GNOME community.
I also attended lightning talks on the last day. Badri, Wouter, and I gave a demo on how to register on the Prav app. Prav got a fair share of advertising during the last few days.
The night of the 17th of September
Suresh left the hotel and Badri joined me in my room. Thanks to the efforts of Abhijit PA, Kiran, and Ananthu, I wore a mundu.
I then joined Kalyani, Mangesh, Ruchika, Anisa, Ananthu and Kiran. We took pictures and this marked the last night of DebConf23.
Departure day
The 18th of September was the day of departure. Badri slept in my room and left early morning (06:30 AM). I dropped him off at the hotel gate. The breakfast was at the eatery (3rd floor) again, and it was good.
Sahil, Saswata, Nilesh, and I hung out on the ground floor.
I had an 8 PM flight from Kochi to Delhi, for which I took a cab with Rhonda (Austria), Michael (Nigeria) and Yash (India). We were joined by other DebConf23 attendees at the Kochi airport, where we took another selfie.
Joost and I were on the same flight, and we sat next to each other. He then took a connecting flight from Delhi to the Netherlands, while I went with Yash to the New Delhi Railway Station, where we took our respective trains. I reached home on the morning of the 19th of September, 2023.
Big thanks to the organizers
DebConf23 was hard to organize - strict alcohol laws, weird hotel rules, death of a close friend (almost a family member), and a scary notice by the immigration bureau. The people from the team are my close friends and I am proud of them for organizing such a good event.
None of this would have been possible without the organizers, who put in more than a year of voluntary effort to produce this. Meanwhile, many of them also organized local events in the time leading up to DebConf. Kudos to them.
The organizers also tried their best to get clearance for countries not approved by the ministry. I am also sad that people from China, Kosovo, and Iran could not join. In particular, I feel bad for people from Kosovo who wanted to attend but could not (as India does not consider their passport to be a valid travel document), considering how we Indians were so well-received in their country last year.
Note about myself
I am writing this on the 22nd of September, 2023. It took me three days to put up this post - it was one of the most tragic and hardest posts for me to write. I have literally forced myself to write this. I have still not recovered from the loss of my friend. Thanks a lot to all those who helped me.
PS: Credits to contrapunctus for making grammar, phrasing, and capitalization changes.
On Sunday 17 September 2023, the annual Debian Developers and Contributors
Conference came to a close.
Over 474 attendees representing 35 countries from around the world came
together for a combined 89 events made up of Talks, Discussions, Birds of a
Feather (BoF) gatherings, workshops, and activities in support of furthering
our distribution, learning from our mentors and peers, building our community,
and having a bit of fun.
The conference was preceded by the annual
DebCamp hacking session held September 3rd
through September 9th where Debian Developers and Contributors convened to
focus on their Individual Debian related projects or work in team sprints
geared toward in-person collaboration in developing Debian.
In particular this year Sprints took place to advance development in
Mobian/Debian, Reproducible Builds, and Python in Debian. This year also
featured a BootCamp that was held for newcomers staged by a team of
dedicated mentors who shared hands-on experience in Debian and offered a
deeper understanding of how to work in and contribute to the community.
The actual Debian Developers Conference started on Sunday 10 September 2023.
In addition to the traditional 'Bits from the DPL' talk, the continuous
key-signing party, lightning talks and the announcement of next year's
DebConf24, there were several update sessions shared by internal projects and
teams.
Many of the hosted discussion sessions were presented by our technical
teams who highlighted the work and focus of the Long Term Support (LTS),
Android tools, Debian Derivatives, Debian Installer, Debian Image, and the
Debian Science teams. The Python, Perl, and Ruby programming language teams
also shared updates on their work and efforts.
Two of the larger local Debian communities, Debian Brasil and Debian India,
shared how their respective collaborations in Debian moved the project
forward and how they attracted new members and opportunities in Debian,
F/OSS, and the sciences through their demonstrated community engagement.
The schedule
was updated each day with planned and ad-hoc activities introduced by
attendees over the course of the conference. Several activities that were
unable to be held in past years due to the Global COVID-19 Pandemic were
celebrated as they returned to the conference's schedule: a job fair, the
open-mic and poetry night, the traditional Cheese and Wine party, the group
photos and the Day Trips.
For those who were not able to attend, most of the talks and sessions were
streamed live, with the recorded videos to be made available later through
the Debian meetings archive website.
Almost all of the sessions facilitated remote participation via IRC
messaging apps or online collaborative text documents which allowed remote
attendees to 'be in the room' to ask questions or share comments with the
speaker or assembled audience.
DebConf23 saw over 4.3 TiB of data streamed, 55 hours of scheduled talks,
23 network access points, 11 network switches, 75 kg of equipment imported,
400 meters of gaffer tape used, 1,463 viewed streaming hours, 461 T-shirts,
viewers from 35 countries (by GeoIP), 5 day trips, and an average of 169
meals planned per day.
All of these events, activities, conversations, and streams, coupled with our
love, interest, and participation in Debian and F/OSS, certainly made this
conference an overall success, both here in Kochi, India and online around
the world.
The DebConf23 website
will remain active for archival purposes and will continue to offer
links to the presentations and videos of talks and events.
Next year, DebConf24 will be held
in Haifa, Israel. Following tradition, before the next DebConf the local
organizers in Israel will start the conference activities with a DebCamp,
with particular focus on individual and team work towards improving the
distribution.
DebConf is committed to a safe and welcoming environment for all
participants. See the
web page about the Code of Conduct in DebConf23 website
for more details on this.
Debian thanks the numerous sponsors for their commitment
to supporting DebConf23, particularly our Platinum Sponsors:
Infomaniak,
Proxmox,
and Siemens.
We also wish to thank our Video and Infrastructure teams, the DebConf23 and
DebConf committees, our host nation of India, and each and every person who
helped contribute to this event and to Debian overall.
Thank you all for your work in helping Debian continue to be "The Universal
Operating System".
See you next year!
About Debian
The Debian Project was founded in 1993 by Ian Murdock to be a truly free
community project. Since then the project has grown to be one of the
largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and maintain
Debian software. Available in 70 languages, and supporting a huge range
of computer types, Debian calls itself the universal operating system.
About DebConf
DebConf is the Debian Project's developer conference. In addition to a
full schedule of technical, social and policy talks, DebConf provides an
opportunity for developers, contributors and other interested people to
meet in person and work together more closely. It has taken place
annually since 2000 in locations as varied as Scotland, Argentina, and
Bosnia and Herzegovina. More information about DebConf is available from
https://debconf.org/.
About Infomaniak
Infomaniak is a key player in the
European cloud market and the leading developer of Web technologies in
Switzerland. It aims to be an independent European alternative to the web
giants and is committed to an ethical and sustainable Web that respects
privacy and creates local jobs. Infomaniak develops cloud solutions (IaaS,
PaaS, VPS), productivity tools for online collaboration and video and radio
streaming services.
About Proxmox
Proxmox develops powerful, yet easy-to-use
open-source server software. The product portfolio from Proxmox, including
server virtualization, backup, and email security, helps companies of any
size, sector, or industry to simplify their IT infrastructures. The Proxmox
solutions are based on the great Debian platform, and we are happy that we
can give back to the community by sponsoring DebConf23.
About Siemens
Siemens is a technology company focused on
industry, infrastructure and transport. From resource-efficient factories,
resilient supply chains, smarter buildings and grids, to cleaner and more
comfortable transportation, and advanced healthcare, the company creates
technology with purpose adding real value for customers. By combining the
real and the digital worlds, Siemens empowers its customers to transform
their industries and markets, helping them to enhance the everyday of
billions of people.
Contact Information
For further information, please visit the DebConf23 web page at
https://debconf23.debconf.org/ or send
mail to press@debian.org.
Welcome to the June 2023 report from the Reproducible Builds project
In our reports, we outline the most important things that we have been up to over the past month. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.
We are very happy to announce the upcoming Reproducible Builds Summit, which is set to take place from October 31st to November 2nd 2023 in the vibrant city of Hamburg, Germany.
Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens.
If you're interested in joining us this year, please make sure to read the event page, which has more details about the event and location. (You may also be interested in attending PackagingCon 2023, held a few days before in Berlin.)
This month, Vagrant Cascadian will present at FOSSY 2023 on the topic of Breaking the Chains of Trusting Trust:
Corrupted build environments can deliver compromised cryptographically signed binaries. Several exploits in critical supply chains have been demonstrated in recent years, proving that this is not just theoretical. The most well-secured build environments are still single points of failure when they fail. [...] This talk will focus on the state of the art from several angles in related Free and Open Source Software projects, what works, current challenges and future plans for building trustworthy toolchains you do not need to trust.
Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, FOSSY aims to be a community-focused event: "Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you". More information on the event is available on the FOSSY 2023 website, including the full programme schedule.
Marcel Fourné, Dominik Wermke, William Enck, Sascha Fahl and Yasemin Acar recently published an academic paper at the 44th IEEE Symposium on Security and Privacy titled "It's like flossing your teeth: On the Importance and Challenges of Reproducible Builds for Software Supply Chain Security". The abstract reads as follows:
The 2020 Solarwinds attack was a tipping point that caused a heightened awareness about the security of the software supply chain and in particular the large amount of trust placed in build systems. Reproducible Builds (R-Bs) provide a strong foundation to build defenses for arbitrary attacks against build systems by ensuring that given the same source code, build environment, and build instructions, bitwise-identical artifacts are created.
However, in contrast to other papers that touch on some theoretical aspect of reproducible builds, the authors' paper takes a different approach. Starting with the observation that much of the software industry believes R-Bs are too far out of reach for most projects, and conjoining that with the goal of helping "identify a path for R-Bs to become a commonplace property", the paper has a different methodology:
We conducted a series of 24 semi-structured expert interviews with participants from the Reproducible-Builds.org project, and iterated on our questions with the reproducible builds community. We identified a range of motivations that can encourage open source developers to strive for R-Bs, including indicators of quality, security benefits, and more efficient caching of artifacts. We identify experiences that help and hinder adoption, which heavily include communication with upstream projects. We conclude with recommendations on how to better integrate R-Bs with the efforts of the open source and free software community.
Vagrant Cascadian mentioned that PackagingCon 2023 is being held in Berlin, the weekend before the Reproducible Builds summit later this year. In particular, Vagrant noticed that the Call for Proposals (CFP) closes at the end of July.
Larry Doolittle was searching Usenet archives and discovered a thread from December 1999 titled "Time independent checksum (cksum)" on comp.unix.programming. Larry notes that it starts with Jayan asking about comparing binaries that might have differences in their embedded timestamps (that is, perhaps, "Foreshadowing diffoscope, amiright?") and goes on to observe that:
The antagonist is David Schwartz, who correctly says "There are dozens of complex reasons why what seems to be the same sequence of operations might produce different end results", but goes on to say "I totally disagree with your general viewpoint that compilers must provide for reproducability [sic]".
Dwight Tovey and I (Larry Doolittle) argue for reproducible builds. I assert that "Any program - especially a mission-critical program like a compiler - that cannot reproduce a result at will is broken." Also, it's commonplace to take a binary from the net and check to see if it was trojaned by attempting to recreate it from source.
Distribution work
27 reviews of Debian packages were added, 40 were updated and 8 were removed this month, adding to our knowledge about identified issues. A new randomness_in_documentation_generated_by_mkdocs toolchain issue was added by Chris Lamb [], and the deterministic flag was dropped from the paths_vary_due_to_usrmerge issue, as we are not currently testing usrmerge issues [].
Roland Clobus posted his 18th update on the status of reproducible Debian ISO images on our mailing list. Roland reported that "all major desktops build reproducibly with bullseye, bookworm, trixie and sid", but he also mentioned, amongst many changes, that not only are the non-free images being built (and are reproducible) but that the live images are generated officially by Debian itself. []
Jan-Benedict Glaw noticed a problem when building NetBSD for the VAX architecture. Noting that reproducible builds are "probably not as reproducible as we thought", Jan-Benedict goes on to describe how two builds from different source directories won't produce the same result, and adds various notes about sub-optimal handling of the CFLAGS environment variable. []
F-Droid added 21 new reproducible apps in June, resulting in a new record of 145 reproducible apps in total. [] (This page currently sports missing data for March-May 2023.) F-Droid contributors also reported an issue with broken resources in APKs making some builds unreproducible. []
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
Testing framework
The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:
Additions to a (relatively) new Documented Jenkins Maintenance (djm) script to automatically shrink a cache and save a backup of old data [], automatically split out the previous month's data from logfiles into specially-named files [], prevent concurrent remote logfile fetches by using a lock file [], and add/remove various debugging statements [].
Updates to the automated system health checks to, for example, correctly detect new kernel warnings due to a wording change [] and to explicitly observe which old/unused kernels should be removed []. This was related to an improvement so that various kernel issues on Ubuntu-based nodes are automatically fixed. []
Holger and Vagrant Cascadian updated all thirty-five hosts running Debian on the amd64, armhf, and i386 architectures to Debian bookworm, with the exception of the Jenkins host itself, which will be upgraded after the release of Debian 12.1. In addition, Mattia Rizzolo updated the email configuration for the @reproducible-builds.org domain to correctly accept incoming mails from jenkins.debian.net [] as well as to set up DomainKeys Identified Mail (DKIM) signing []. And working together with Holger, Mattia also updated the Jenkins configuration to start testing Debian trixie, which meant that testing of Debian buster was stopped. And, finally, Jan-Benedict Glaw contributed patches for improved NetBSD testing.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
EDIT: One of my 2 keys has died. There are what seem like golden bubbles
under the epoxy, over one of the chips, and those were not there before. I've
emailed SoloKeys and I'm waiting for a reply, but for now, I've stopped using
the Solo V2 altogether :(
I recently received the two Solo V2 hardware tokens I ordered as part of their
crowdfunding campaign, back in March 2022. It did take them longer than
advertised to ship me the tokens, but that's hardly unexpected from such a
small-scale, crowdfunded undertaking.
I'm mostly happy about my purchase and I'm glad to get rid of the aging Tomu
boards I was using as U2F tokens1. Still, beware: I am not sure
it's a product I would recommend if what you want is simply something that
works. If you do not care about open-source hardware, the Solo V2 is not for
you.
The Good
I first want to mention I find the Solo V2 gorgeous. I really like the black and
gold color scheme of the USB-A model (which is reversible!) and it seems like a
well built and solid device. I'm not afraid to have it on my keyring and I fully
expect it to last a long time.
I'm also very impressed by the modular design: the PCB sits inside a shell,
which decouples the logic from the USB interface and lets them manufacture a
single board for both the USB-C and USB-A models. The clear epoxy layer on top
of the PCB module also looks very nice in my opinion.
I'm also very happy the Solo V2 has capacitive touch buttons instead of
physical "clicky" buttons, as it means the device has no moving parts. The
token has three buttons (the gold metal strips): one on each side of the device
and a third one near the keyhole.
As far as I've seen, the FIDO2 functions seem to work well via the USB
interface and do not require any configuration on a Debian 12 machine. I've
already migrated to the Solo V2 for web-based 2FA and I am in the process of
migrating to an SSH ed25519-sk key. Here is a guide I recommend if
you plan on setting those up with a Solo V2.
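For the curious, here is roughly what generating such a key looks like; this is a sketch assuming OpenSSH 8.2 or newer, with the token plugged in (you will be asked to touch it), and the file name and comment are just my own choices:

```shell
# Generate a FIDO2-backed (non-resident) ed25519-sk key pair. The private
# key file is only a stub: it is useless without the physical token.
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk -C "solo-v2"

# The public key can then be deployed like any other SSH key:
ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@host
```

Afterwards, every authentication with this key requires the token to be present and touched, which is the whole point.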
The Bad and the Ugly
Sadly, the Solo V2 is far from being a perfect project. First of all, since the
crowdfunding campaign is still being fulfilled, it is not currently
commercially available. Chances are you won't be able to buy one directly
before at least Q4 2023.
I've also hit what seems to be a pretty big firmware bug, or at least, one that
affects my use case quite a bit. Invoking gpg crashes the Solo V2 completely
if you also have scdaemon installed. Since scdaemon is necessary to use
gpg with an OpenPGP smartcard, this means you cannot issue any gpg commands
(like signing a git commit...) while the Solo V2 is plugged in.
Any gpg command that queries scdaemon, such as gpg --edit-card or gpg
--sign foo.txt, times out after about 20 seconds and leaves the token
unresponsive to both touch and CLI commands.
The way to "fix" this issue is to make sure scdaemon does not interact with
the Solo V2 anymore, using the reader-port argument:
Plug both your Solo V2 and your OpenPGP smartcard
To get a list of the tokens scdaemon sees, run the following command:
$ echo "scd getinfo reader_list" | gpg-connect-agent --decode | awk '/^D/ {print $2}'
Identify your OpenPGP smartcard. For example, my Nitrokey Start is listed as
20A0:4211:FSIJ-1.2.15-43211613:0
Create the file ~/.gnupg/scdaemon.conf with the following line:
reader-port $YOUR_TOKEN_ID. For example, in my case I have: reader-port
20A0:4211:FSIJ-1.2.15-43211613:0
Reload scdaemon: $ gpgconf --reload scdaemon
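The reader_list invocation above can be hard to parse; here is the awk stage in isolation, fed a canned gpg-connect-agent response so it can be tried without any token plugged in (the D-line is just my Nitrokey Start's identifier reused as sample data):

```shell
# gpg-connect-agent replies with "D <reader-id>" lines followed by "OK";
# the awk filter keeps the D lines and prints their second field.
# With real hardware, replace the printf with:
#   echo "scd getinfo reader_list" | gpg-connect-agent --decode
printf 'D 20A0:4211:FSIJ-1.2.15-43211613:0\nOK\n' | awk '/^D/ {print $2}'
```

This prints only the reader identifier, which is exactly the value scdaemon.conf expects after reader-port.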
Although this is clearly a firmware bug2, I do believe GnuPG is also
partly to blame here. Let's just say I was not very surprised to have to battle
scdaemon again, as I've had previous issues with it.
Which leads me to my biggest gripe so far: it seems SoloKeys (the company)
isn't really fixing firmware issues anymore and doesn't seem to care. The last
firmware release is about a year old.
Although people are experiencing serious bugs, there is no official way to
report them, which leads to issues being seemingly ignored. For
example, the NFC feature is apparently killing keys (!!!), but no one
from the company seems to have acknowledged the issue. The same goes for my
GnuPG bug, which was flagged in September 2022.
For a project that mainly differentiates itself from its (superior) competition
by being "Open", it's not a very good look... Although SoloKeys is still an
unprofitable open source side business of its creators3, this kind of
attitude certainly doesn't help foster trust.
Conclusion
If you want to have a nice, durable FIDO2 token, I would suggest you get one of
the many models Yubico offers. They are similarly priced, are readily
commercially available, are part of a nice and maintained software ecosystem
and have more features than the Solo V2 (OpenPGP support being the one I miss
the most). Yubikeys are the practical option.
What they are not is open-source hardware, whereas the Solo V2 is. As
bunnie very well explained on his blog in 2019, that does not mean
the latter is inherently more trustworthy than the former, but it does make the
Solo V2 the ideological option. Knowledge is power and it should be free.
As such, tread carefully with SoloKeys, but don't dismiss them altogether: the
Solo V2 is certainly functioning well enough for me.
Although U2F is still part of the FIDO2 specification, the Tomus
predate this standard and were thus not fully compliant with FIDO2. So long
and thanks for all the fish, little boards: you've served me well!
Posted on May 26, 2023
I write letters. The kind that are written on paper with a dip pen 1 and ink, stamped and sent through the post, spend a few days or weeks maturing like good wine in a depot somewhere2, and then get delivered to the recipient.
Some of them (mostly cards) are to people who will receive them and thank me via xmpp (that sounds odd, but actually works out nicely), but others are proper letters with long texts that I exchange with penpals.
Most of those are fountain pen frea^Wenthusiasts, so I usually use a different ink each time, and try to vary the paper, and I need to keep track of what I've used.
Some time ago, I read a Victorian book3 which recommended keeping a correspondence book to register all mail received and sent, the topics, and whether it had been replied to or otherwise acted upon. I don't have the mail traffic of a Victorian lady (or even a middle-class woman), but this looked like something fun to do, and if I added fields for the inks and paper used it would also have a useful side effect.
So I headed over to the obvious program anybody would use for these things (XeLaTeX, of course) and quickly designed a page with fields for the basic things I want to record; it was a bit hurried, and I may improve on it the next time I make one, but I expect this one to last me two or three years, and it is good enough.
I've decided to make it A6 sized, so that it doesn't require a lot of space on my busy desktop, and it could be carried inside a portable desk, if I ever decide to finish the one for which I made a mockup years ago :)
I've also added a few pages for the addresses of my correspondents (and an index of the letters I've exchanged with them), and a few empty pages for other notes.
Then I used my a6_book.py script to rearrange the A6 pages into signatures and impress them on A4; to reduce later effort I added an option to order the pages in such a way that if I then cut four A4 sheets in half at a time (the limit of my rotary cutter) the signatures are ready to be folded. It's not the default because it requires that the page count be a multiple of 32 rather than just 16 (pages are padded with empty ones if needed).
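The core of such a page reordering can be sketched like this; note this is my own illustrative reconstruction, not the actual a6_book.py code, and it ignores signature grouping and page rotation:

```python
def booklet_order(n):
    """Page order for imposing an n-page saddle-stitched booklet.

    Pages are paired outermost-in: the front of each sheet carries
    (last, first), its back (second, second-to-last), and so on, so
    that the folded sheets read in order. n must be a multiple of 4
    (pad the document with blank pages otherwise).
    """
    assert n % 4 == 0, "pad with blank pages up to a multiple of 4"
    order = []
    lo, hi = 1, n
    while lo < hi:
        order += [hi, lo]          # front side of the sheet
        order += [lo + 1, hi - 1]  # back side of the sheet
        lo, hi = lo + 2, hi - 2
    return order

# e.g. an 8-page booklet prints in the order 8, 1, 2, 7, 6, 3, 4, 5
```

The "cut four sheets at once" option described above would additionally group pages into 32-page chunks before applying this pairing, so that each half of the cut stack is already a foldable signature.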
If you're also interested in making one, here are the files:
After printing (an older version where some of the pages were repeated, whoops, but it only happened 4 times and it's not a big deal), it was time to bind this into a book.
I've opted for Coptic stitch, so that the book will open completely flat and writing in it will be easier; the covers are 2 mm cardboard covered in linen-look bookbinding paper (sadly I no longer have a source for bookbinding cloth made from actual cloth).
I tried to screenprint a simple design on the cover: the first attempt was unusable (the paper was smaller than the screen, so I couldn't keep it in the right place and it moved as I was screenprinting); on the second attempt I used some masking tape to keep the paper in place, which worked a bit better, but I need more practice with the technique.
Finally, I decided that for such a Victorian thing I would use an iron-gall ink, but it's Rohrer & Klingner Scabiosa, with a purple undertone, because life's too short to use blue-black ink :D
And now, I'm off to write an actual letter, rather than writing online about things that are related to letter writing.
not a quill! I'm a modern person who uses steel nibs!
Milano Roserio, I'm looking at you: a month to deliver a postcard from Lombardy to Ticino? Not even a letter, which could have hidden contraband; a postcard.
I think. I've looked at some plausible candidates and couldn't find the source.
In a previous blog post I described the use of virtual
postings to track accidental personal/family expenses. I've always been
uncomfortable with that, and in hledger 1yr I outlined a potential scheme
for finally addressing the virtual posting problem.
separate journals
My outline built on top of continuing to maintain both personal and family
financial data in the same place, but I've decided that this can't work,
because the different "directions" (or signs) of accidental transactions
originating from either the family or personal side can't be addressed with any
kind of alternate view on the same data.
To illustrate with an example:
A negative balance in family:liabilities:jon means "family owes jon". A
coffee bought by mistake on the family credit card will have a negative
posting on the credit card, and thus a positive one on the liabilities
account. ("jon owes family"). That's fine.
But what about when I buy family stuff on a personal card? The other side of
the transaction is also going to have a positive sign, so it can't be
posted to family:liabilities:jon: it would have to go somewhere else,
like jon:liabilities:family. Now I have two accounts which track versions
of the same thing, and they cannot be combined with a simple transaction
since they're looking at the same value from opposite directions (and signs).
Back when I first described the problem I was using
a single journal file for all my transactions. After moving to lots of separate
journal files (in hledger 1yr), it's become clearer to me that I don't
need to maintain the Family and Personal data together, at all: they can be
entirely separate journals.
getting data between journals
When I moved to a new set of ledger files for 2023, I needed to carry forward
the balances from 2022 in the form of "opening balance" transactions. This was
achieved by a report on the 2022 data, exported as CSV, and imported into the
2023 data (all following the scheme outlined by fully-fledged hledger).
Separate Personal and Family journals need some information from each other, and I
can achieve that in the same way as for opening balances: with an export of
the relevant transactions as CSV, then imported on the other side. HLedger's
CSV import system is flexible enough that we can effectively invert the sign
of liabilities, addressing the problem above.
Worked example
We start with an accidental coffee purchased on the family card (and so this
belongs to the Family ledger)
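The journal entry itself is not reproduced in this copy of the post; given the account names discussed below, the Family-side transaction would be shaped roughly like this (date, payee and amount are invented, and the credit card account name is a guess):

```
2023-03-01 Coffee shop
    liabilities:credit-card             -3.00
    liabilities:jon:expenses:coffee      3.00
```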
I've encoded the expense category that the Personal ledger will be interested
in (the last bit, expenses:coffee) as a sub-account of the liabilities
category that the Family ledger is interested in1 (the first bit,
liabilities:jon). When viewed on the Family side, the expense category is not
interesting, and we can hide it with HLedger's alias feature2:
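The alias rule itself is not shown here; a directive of roughly this shape (my guess at its exact form) would fold every sub-account back into liabilities:jon for Family-side reports:

```
alias /^liabilities:jon:.+/ = liabilities:jon
```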
This is then converted into a journal file by hledger import. The rules file
for the import is very simple: the fields date, description, account1
and amount are taken as-is; account2 is hard-coded to liabilities:family.
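The rules file is not reproduced here; based on that description, and assuming the column order of hledger's register CSV output (txnidx, date, code, description, account, amount, total), it could look something like:

```
# map the register CSV columns; blank names skip fields we don't need
fields , date, , description, account1, amount,
# every imported posting is balanced against the family liability
account2 liabilities:family
```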
The resulting transaction looks like
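The transaction is missing from this copy of the post, but following those rules it would come out in this shape (values invented):

```
2023-03-01 Coffee shop
    liabilities:jon:expenses:coffee      3.00
    liabilities:family                  -3.00
```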
Before this journal is included by the main one, we have to adjust the expense
account, to remove the liabilities:jon: prefix. The import rules can't do
this3, so we use another journal file as a go-between with another alias
rule:
alias /^liabilities.jon:/ =
This results, finally, in the following transaction in the Personal ledger:
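That final entry is also not shown in this copy; with the prefix stripped it would read along these lines (values invented):

```
2023-03-01 Coffee shop
    expenses:coffee           3.00
    liabilities:family       -3.00
```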
avoiding double-counting
There's one set of transactions that we don't want to export across this divide,
and that's because they're already there: any time I transfer money from myself
to the family accounts (or vice versa) to address the accrued debt, the transaction
is visible from both my family and personal statements. To avoid exporting these
and double-counting them, I make sure those transactions don't post to an account
matching the pattern used in the hledger reg report. That's what the trailing
colon is for: it ensures I only export transactions which are to a sub-account
of liabilities:jon, and not to the root account liabilities:jon itself,
which is where I put the repayment transactions. I could instead use a more
explicit sub-account like liabilities:jon:repayments or similar, since the
trailing colon is quite subtle, but this works for me.
Wrap up
I've been really on the fence as to whether the complexity of this scheme is
worth it to avoid the virtual postings. The previous scheme was much simpler.
I have definitely made some mistakes with it, which didn't get caught by the
double-entry rules that virtual postings ignore, but they're for small sums
of money anyway.
On the other hand, a lot of the "machinery" of this already existed for getting
opening balances between calendar years, and the gory details are written down
and hidden inside the Makefile. I also expect that I will continue to see
advantages in having Family and Personal entirely separate, as they can each
develop and adapt to their own needs without having to consider the other side
of things every time.
It's a running experiment, and time will tell if it's a good idea.
This scheme was originally suggested to me by Pranesh on Twitter
(described in dues), but I discounted it at the time
because of the exact arrangement they suggested, not realising
the broader idea might work.
I've hand-waved one problem with using hledger aliases here. If
we use them as described, to hide the Personal expense details, we need
them to not be applied when performing the CSV-generating report.
Therefore, in practice I have them in a front-most family/2023.journal
file, which imports the data from another family/2023-back.journal,
and the CSV export is performed on the backing journal with the data
and not the alias.
HLedger import rules can't manipulate the fields from the CSV a great
deal, but one change I proposed and started hacking on would allow for
this: to expose Regexp match-groups as interpolatable tokens:
https://github.com/simonmichael/hledger/issues/2009.
As suggested in my initial announcement of apt-sigstore, my plan was to look into stronger uses of Sigstore than rekor, and I'm now happy to announce that the apt-cosign plugin has been added to apt-sigstore and the operational project debdistcanary is publishing cosign-statements about the InRelease file published by the following distributions: Trisquel GNU/Linux, PureOS, Gnuinos, Ubuntu, Debian and Devuan.
Summarizing the commands that you need to run as root to experience the great new world:
Then run your usual apt-get update and look in the syslog to debug things.
This is the kind of work that gets done while waiting for the build machines to attempt to reproducibly build PureOS. Unfortunately, the result is that a meager 16% of the 765 added/modified packages are reproducible by me. There is some infrastructure work to be done to improve things: we should use sbuild, for example. The build infrastructure should produce signed statements for each package it builds: one statement saying that it attempted to reproducibly build a particular binary package (thus generating some build logs and diffoscope output for auditing), and one statement saying that it actually was able to reproduce a package. Verifying such claims during apt-get install or possibly dpkg -i is a logical next step.
There is some code cleanup and release work to be done now. Which distribution will be the first apt-based distribution that includes native support for Sigstore? Let's see.
Sigstore is not the only relevant transparency log around, and I've been trying to learn a bit about Sigsum to be able to support it as well. The more confidence in system security, the merrier!
Do you want your apt-get update to only ever use files whose hash checksum have been recorded in the globally immutable tamper-resistance ledger rekor provided by the Sigstore project? Well I thought you'd never ask, but now you can, thanks to my new projects apt-verify and apt-sigstore. I have not done proper stable releases yet, so this is work in progress. To try it out, adapt to the modern era of running random stuff from the Internet as root, and run the following commands. Use a container or virtual machine if you have trust issues.
If the stars are aligned (and the puppet projects of debdistget and debdistcanary have run their GitLab CI/CD pipelines recently enough) you will see a successful output from apt-get update and your syslog will contain debug logs showing the entries from the rekor log for the release index files that you downloaded. See sample outputs in the README.
If you get tired of it, disabling is easy:
chmod -x /etc/apt/verify.d/apt-rekor
Our project currently supports Trisquel GNU/Linux 10 (nabia) & 11 (aramo), PureOS 10 (byzantium), Gnuinos chimaera, Ubuntu 20.04 (focal) & 22.04 (jammy), Debian 10 (buster) & 11 (bullseye), and Devuan GNU+Linux 4.0 (chimaera). Others can be supported too; please open an issue about it, although my focus is on FSDG-compliant distributions and their upstreams.
This is a continuation of my previous work on apt-canary. I have realized that it was better to separate out the generic part of apt-canary into my new project apt-verify that offers a plugin-based method, and then rewrote apt-canary to be one such plugin. Then apt-sigstore's apt-rekor was my second plugin for apt-verify.
Due to the design of things, and some current limitations, Ubuntu is the least stable since they push out new signed InRelease files frequently (mostly due to their use of Phased-Update-Percentage) and the debdistget and debdistcanary CI/CD runs have a hard time keeping up. If you have insight on how to improve this, please comment on the issue tracking the race condition.
There are limitations of what additional safety a rekor-based solution actually provides, but I expect that to improve as I get a cosign-based approach up and running. Currently apt-rekor mostly makes targeted attacks less deniable. With a cosign-based approach, we could design things such that your machine only downloads updates when they have been publicly archived in an immutable fashion, or submitted for validation by a third party such as my reproducible build setup for Trisquel GNU/Linux aramo.
What do you think? Happy Hacking!
I've used hardware-backed OpenPGP keys since 2006, when I imported newly generated rsa1024 subkeys to a FSFE Fellowship card. This worked well for several years, and I recall buying more ZeitControl cards for multi-machine usage and backup purposes. As a side note, I recall being unsatisfied with the weak 1024-bit RSA subkeys at the time (my primary key was a somewhat stronger 1280-bit RSA key created back in 2002), but OpenPGP cards at the time didn't support more than 1024-bit RSA, and were (and still often are) also limited to power-of-two RSA key sizes, which I dislike.
I had my master key on disk with a strong password for a while, mostly to refresh the expiration time of the subkeys and to sign others' OpenPGP keys. At some point I stopped carrying around encrypted copies of my master key. That was my main setup when I migrated to a new stronger RSA 3744 bit key with rsa2048 subkeys on a YubiKey NEO back in 2014. At that point, signing others' OpenPGP keys was a rare enough occurrence that I settled with bringing out my offline machine to perform this operation, transferring the public keys to sign on USB sticks. In 2019 I re-evaluated my OpenPGP setup and ended up creating an offline Ed25519 key with subkeys on a FST-01G running Gnuk. My approach for signing others' OpenPGP keys was still to bring out my offline machine and sign things using the master secret, using USB sticks for storage and transport. Which meant I almost never did that, because it took too much effort. So my 2019-era Ed25519 key still only has a handful of signatures on it, since I had essentially stopped signing others' keys, which is the traditional way of getting signatures in return.
None of this caused any critical problem for me, because I continued to use my old 2014-era RSA3744 key in parallel with my new 2019-era Ed25519 key, since too many systems didn't handle Ed25519. However, during 2022 this changed, and the only remaining environment in which I still used my RSA3744 key was Debian, and they require OpenPGP signatures on the new key to allow it to replace an older key. I was in denial about this sub-optimal situation during 2022 and endured its practical consequences, having to use the YubiKey NEO (which I had replaced with a permanently inserted YubiKey Nano at some point) for Debian-related purposes alone.
In December 2022 I bought a new laptop and set up a FST-01SZ with my Ed25519 key, and while I have taken a vacation from Debian, I continue to extend the expiration period on the old RSA3744 key in case I will ever have to use it again, so the overall OpenPGP setup was still sub-optimal. Having two valid OpenPGP keys at the same time causes people to use both for email encryption (leading me to have to use both devices), and the WKD Key Discovery protocol doesn't like two valid keys either. At FOSDEM '23 I ran into Andre Heinecke at GnuPG and I couldn't help complaining about how complex and unsatisfying all OpenPGP-related matters were, and he mildly ignored my rant and asked why I didn't put the master key on another smartcard. The comment sunk in when I came home, and recently I connected all the dots; this post is a summary of what I did to move my offline OpenPGP master key to a Nitrokey Start.
First a word about device choice: I still prefer to use hardware devices that are as compatible with free software as possible, but the FST-01G and FST-01SZ are no longer easily available for purchase. I got a comment about the Nitrokey Start in my last post, and had two of them available to experiment with. There are things to dislike with the Nitrokey Start compared to the YubiKey (e.g., the relatively insecure chip architecture, the bulkier form factor and the lack of FIDO/U2F/OATH support), but as far as I know there is no more widely available owner-controlled device that is manufactured for the intended purpose of implementing an OpenPGP card. Thus it hits the sweet spot for me.
The first step is to run the latest firmware on the Nitrokey Start, for bug fixes and important OpenSSH 9.0 compatibility; there is reproducibly-built firmware published that you can install using pynitrokey. I run Trisquel 11 aramo on my laptop, which does not include the Python Pip package (likely because it promotes installing non-free software), so that was a slight complication. Building the firmware locally may have worked, and I would like to do that eventually to confirm the published firmware; however, to save time I settled with installing the Ubuntu 22.04 packages on my machine:
$ sha256sum python3-pip*
ded6b3867a4a4cbaff0940cab366975d6aeecc76b9f2d2efa3deceb062668b1c python3-pip_22.0.2+dfsg-1ubuntu0.2_all.deb
e1561575130c41dc3309023a345de337e84b4b04c21c74db57f599e267114325 python3-pip-whl_22.0.2+dfsg-1ubuntu0.2_all.deb
$ doas dpkg -i python3-pip*
...
$ doas apt install -f
...
$
Installing pynitrokey downloaded a bunch of dependencies, and it would be nice to audit the license and security vulnerabilities for each of them. (Verbose output below slightly redacted.)
jas@kaka:~$ pip3 install --user pynitrokey
Collecting pynitrokey
Downloading pynitrokey-0.4.34-py3-none-any.whl (572 kB)
Collecting frozendict~=2.3.4
Downloading frozendict-2.3.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (113 kB)
Requirement already satisfied: click<9,>=8.0.0 in /usr/lib/python3/dist-packages (from pynitrokey) (8.0.3)
Collecting ecdsa
Downloading ecdsa-0.18.0-py2.py3-none-any.whl (142 kB)
Collecting python-dateutil~=2.7.0
Downloading python_dateutil-2.7.5-py2.py3-none-any.whl (225 kB)
Collecting fido2<2,>=1.1.0
Downloading fido2-1.1.0-py3-none-any.whl (201 kB)
Collecting tlv8
Downloading tlv8-0.10.0.tar.gz (16 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: certifi>=14.5.14 in /usr/lib/python3/dist-packages (from pynitrokey) (2020.6.20)
Requirement already satisfied: pyusb in /usr/lib/python3/dist-packages (from pynitrokey) (1.2.1.post1)
Collecting urllib3~=1.26.7
Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting spsdk<1.8.0,>=1.7.0
Downloading spsdk-1.7.1-py3-none-any.whl (684 kB)
Collecting typing_extensions~=4.3.0
Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)
Requirement already satisfied: cryptography<37,>=3.4.4 in /usr/lib/python3/dist-packages (from pynitrokey) (3.4.8)
Collecting intelhex
Downloading intelhex-2.3.0-py2.py3-none-any.whl (50 kB)
Collecting nkdfu
Downloading nkdfu-0.2-py3-none-any.whl (16 kB)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from pynitrokey) (2.25.1)
Collecting tqdm
Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting nrfutil<7,>=6.1.4
Downloading nrfutil-6.1.7.tar.gz (845 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: cffi in /usr/lib/python3/dist-packages (from pynitrokey) (1.15.0)
Collecting crcmod
Downloading crcmod-1.7.tar.gz (89 kB)
Preparing metadata (setup.py) ... done
Collecting libusb1==1.9.3
Downloading libusb1-1.9.3-py3-none-any.whl (60 kB)
Collecting pc_ble_driver_py>=0.16.4
Downloading pc_ble_driver_py-0.17.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.9 MB)
Collecting piccata
Downloading piccata-2.0.3-py3-none-any.whl (21 kB)
Collecting protobuf<4.0.0,>=3.17.3
Downloading protobuf-3.20.3-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
Collecting pyserial
Downloading pyserial-3.5-py2.py3-none-any.whl (90 kB)
Collecting pyspinel>=1.0.0a3
Downloading pyspinel-1.0.3.tar.gz (58 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from nrfutil<7,>=6.1.4->pynitrokey) (5.4.1)
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil~=2.7.0->pynitrokey) (1.16.0)
Collecting pylink-square<0.11.9,>=0.8.2
Downloading pylink_square-0.11.1-py2.py3-none-any.whl (78 kB)
Collecting jinja2<3.1,>=2.11
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting bincopy<17.11,>=17.10.2
Downloading bincopy-17.10.3-py3-none-any.whl (17 kB)
Collecting fastjsonschema>=2.15.1
Downloading fastjsonschema-2.16.3-py3-none-any.whl (23 kB)
Collecting astunparse<2,>=1.6
Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting oscrypto~=1.2
Downloading oscrypto-1.3.0-py2.py3-none-any.whl (194 kB)
Collecting deepmerge==0.3.0
Downloading deepmerge-0.3.0-py2.py3-none-any.whl (7.6 kB)
Collecting pyocd<=0.31.0,>=0.28.3
Downloading pyocd-0.31.0-py3-none-any.whl (12.5 MB)
Collecting click-option-group<0.6,>=0.3.0
Downloading click_option_group-0.5.5-py3-none-any.whl (12 kB)
Collecting pycryptodome<4,>=3.9.3
Downloading pycryptodome-3.17-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
Collecting pyocd-pemicro<1.2.0,>=1.1.1
Downloading pyocd_pemicro-1.1.5-py3-none-any.whl (9.0 kB)
Requirement already satisfied: colorama<1,>=0.4.4 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (0.4.4)
Collecting commentjson<1,>=0.9
Downloading commentjson-0.9.0.tar.gz (8.7 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: asn1crypto<2,>=1.2 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.0)
Collecting pypemicro<0.2.0,>=0.1.9
Downloading pypemicro-0.1.11-py3-none-any.whl (5.7 MB)
Collecting libusbsio>=2.1.11
Downloading libusbsio-2.1.11-py3-none-any.whl (247 kB)
Collecting sly==0.4
Downloading sly-0.4.tar.gz (60 kB)
Preparing metadata (setup.py) ... done
Collecting ruamel.yaml<0.18.0,>=0.17
Downloading ruamel.yaml-0.17.21-py3-none-any.whl (109 kB)
Collecting cmsis-pack-manager<0.3.0
Downloading cmsis_pack_manager-0.2.10-py2.py3-none-manylinux1_x86_64.whl (25.1 MB)
Collecting click-command-tree==1.1.0
Downloading click_command_tree-1.1.0-py3-none-any.whl (3.6 kB)
Requirement already satisfied: bitstring<3.2,>=3.1 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (3.1.7)
Collecting hexdump~=3.3
Downloading hexdump-3.3.zip (12 kB)
Preparing metadata (setup.py) ... done
Collecting fire
Downloading fire-0.5.0.tar.gz (88 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/lib/python3/dist-packages (from astunparse<2,>=1.6->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.37.1)
Collecting humanfriendly
Downloading humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
Collecting argparse-addons>=0.4.0
Downloading argparse_addons-0.12.0-py3-none-any.whl (3.3 kB)
Collecting pyelftools
Downloading pyelftools-0.29-py2.py3-none-any.whl (174 kB)
Collecting milksnake>=0.1.2
Downloading milksnake-0.1.5-py2.py3-none-any.whl (9.6 kB)
Requirement already satisfied: appdirs>=1.4 in /usr/lib/python3/dist-packages (from cmsis-pack-manager<0.3.0->spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.4)
Collecting lark-parser<0.8.0,>=0.7.1
Downloading lark-parser-0.7.8.tar.gz (276 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: MarkupSafe>=2.0 in /usr/lib/python3/dist-packages (from jinja2<3.1,>=2.11->spsdk<1.8.0,>=1.7.0->pynitrokey) (2.0.1)
Collecting asn1crypto<2,>=1.2
Downloading asn1crypto-1.5.1-py2.py3-none-any.whl (105 kB)
Collecting wrapt
Downloading wrapt-1.15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (78 kB)
Collecting future
Downloading future-0.18.3.tar.gz (840 kB)
Preparing metadata (setup.py) ... done
Collecting psutil>=5.2.2
Downloading psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (280 kB)
Collecting capstone<5.0,>=4.0
Downloading capstone-4.0.2-py2.py3-none-manylinux1_x86_64.whl (2.1 MB)
Collecting naturalsort<2.0,>=1.5
Downloading naturalsort-1.5.1.tar.gz (7.4 kB)
Preparing metadata (setup.py) ... done
Collecting prettytable<3.0,>=2.0
Downloading prettytable-2.5.0-py3-none-any.whl (24 kB)
Collecting intervaltree<4.0,>=3.0.2
Downloading intervaltree-3.1.0.tar.gz (32 kB)
Preparing metadata (setup.py) ... done
Collecting ruamel.yaml.clib>=0.2.6
Downloading ruamel.yaml.clib-0.2.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (485 kB)
Collecting termcolor
Downloading termcolor-2.2.0-py3-none-any.whl (6.6 kB)
Collecting sortedcontainers<3.0,>=2.0
Downloading sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Requirement already satisfied: wcwidth in /usr/lib/python3/dist-packages (from prettytable<3.0,>=2.0->pyocd<=0.31.0,>=0.28.3->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.2.5)
Building wheels for collected packages: nrfutil, crcmod, sly, tlv8, commentjson, hexdump, pyspinel, fire, intervaltree, lark-parser, naturalsort, future
Building wheel for nrfutil (setup.py) ... done
Created wheel for nrfutil: filename=nrfutil-6.1.7-py3-none-any.whl size=898520 sha256=de6f8803f51d6c26d24dc7df6292064a468ff3f389d73370433fde5582b84a10
Stored in directory: /home/jas/.cache/pip/wheels/39/2b/9b/98ab2dd716da746290e6728bdb557b14c1c9a54cb9ed86e13b
Building wheel for crcmod (setup.py) ... done
Created wheel for crcmod: filename=crcmod-1.7-cp310-cp310-linux_x86_64.whl size=31422 sha256=5149ac56fcbfa0606760eef5220fcedc66be560adf68cf38c604af3ad0e4a8b0
Stored in directory: /home/jas/.cache/pip/wheels/85/4c/07/72215c529bd59d67e3dac29711d7aba1b692f543c808ba9e86
Building wheel for sly (setup.py) ... done
Created wheel for sly: filename=sly-0.4-py3-none-any.whl size=27352 sha256=f614e413918de45c73d1e9a8dca61ca07dc760d9740553400efc234c891f7fde
Stored in directory: /home/jas/.cache/pip/wheels/a2/23/4a/6a84282a0d2c29f003012dc565b3126e427972e8b8157ea51f
Building wheel for tlv8 (setup.py) ... done
Created wheel for tlv8: filename=tlv8-0.10.0-py3-none-any.whl size=11266 sha256=3ec8b3c45977a3addbc66b7b99e1d81b146607c3a269502b9b5651900a0e2d08
Stored in directory: /home/jas/.cache/pip/wheels/e9/35/86/66a473cc2abb0c7f21ed39c30a3b2219b16bd2cdb4b33cfc2c
Building wheel for commentjson (setup.py) ... done
Created wheel for commentjson: filename=commentjson-0.9.0-py3-none-any.whl size=12092 sha256=28b6413132d6d7798a18cf8c76885dc69f676ea763ffcb08775a3c2c43444f4a
Stored in directory: /home/jas/.cache/pip/wheels/7d/90/23/6358a234ca5b4ec0866d447079b97fedf9883387d1d7d074e5
Building wheel for hexdump (setup.py) ... done
Created wheel for hexdump: filename=hexdump-3.3-py3-none-any.whl size=8913 sha256=79dfadd42edbc9acaeac1987464f2df4053784fff18b96408c1309b74fd09f50
Stored in directory: /home/jas/.cache/pip/wheels/26/28/f7/f47d7ecd9ae44c4457e72c8bb617ef18ab332ee2b2a1047e87
Building wheel for pyspinel (setup.py) ... done
Created wheel for pyspinel: filename=pyspinel-1.0.3-py3-none-any.whl size=65033 sha256=01dc27f81f28b4830a0cf2336dc737ef309a1287fcf33f57a8a4c5bed3b5f0a6
Stored in directory: /home/jas/.cache/pip/wheels/95/ec/4b/6e3e2ee18e7292d26a65659f75d07411a6e69158bb05507590
Building wheel for fire (setup.py) ... done
Created wheel for fire: filename=fire-0.5.0-py2.py3-none-any.whl size=116951 sha256=3d288585478c91a6914629eb739ea789828eb2d0267febc7c5390cb24ba153e8
Stored in directory: /home/jas/.cache/pip/wheels/90/d4/f7/9404e5db0116bd4d43e5666eaa3e70ab53723e1e3ea40c9a95
Building wheel for intervaltree (setup.py) ... done
Created wheel for intervaltree: filename=intervaltree-3.1.0-py2.py3-none-any.whl size=26119 sha256=5ff1def22ba883af25c90d90ef7c6518496fcd47dd2cbc53a57ec04cd60dc21d
Stored in directory: /home/jas/.cache/pip/wheels/fa/80/8c/43488a924a046b733b64de3fac99252674c892a4c3801c0a61
Building wheel for lark-parser (setup.py) ... done
Created wheel for lark-parser: filename=lark_parser-0.7.8-py2.py3-none-any.whl size=62527 sha256=3d2ec1d0f926fc2688d40777f7ef93c9986f874169132b1af590b6afc038f4be
Stored in directory: /home/jas/.cache/pip/wheels/29/30/94/33e8b58318aa05cb1842b365843036e0280af5983abb966b83
Building wheel for naturalsort (setup.py) ... done
Created wheel for naturalsort: filename=naturalsort-1.5.1-py3-none-any.whl size=7526 sha256=bdecac4a49f2416924548cae6c124c85d5333e9e61c563232678ed182969d453
Stored in directory: /home/jas/.cache/pip/wheels/a6/8e/c9/98cfa614fff2979b457fa2d9ad45ec85fa417e7e3e2e43be51
Building wheel for future (setup.py) ... done
Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492037 sha256=57a01e68feca2b5563f5f624141267f399082d2f05f55886f71b5d6e6cf2b02c
Stored in directory: /home/jas/.cache/pip/wheels/5e/a9/47/f118e66afd12240e4662752cc22cefae5d97275623aa8ef57d
Successfully built nrfutil crcmod sly tlv8 commentjson hexdump pyspinel fire intervaltree lark-parser naturalsort future
Installing collected packages: tlv8, sortedcontainers, sly, pyserial, pyelftools, piccata, naturalsort, libusb1, lark-parser, intelhex, hexdump, fastjsonschema, crcmod, asn1crypto, wrapt, urllib3, typing_extensions, tqdm, termcolor, ruamel.yaml.clib, python-dateutil, pyspinel, pypemicro, pycryptodome, psutil, protobuf, prettytable, oscrypto, milksnake, libusbsio, jinja2, intervaltree, humanfriendly, future, frozendict, fido2, ecdsa, deepmerge, commentjson, click-option-group, click-command-tree, capstone, astunparse, argparse-addons, ruamel.yaml, pyocd-pemicro, pylink-square, pc_ble_driver_py, fire, cmsis-pack-manager, bincopy, pyocd, nrfutil, nkdfu, spsdk, pynitrokey
WARNING: The script nitropy is installed in '/home/jas/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed argparse-addons-0.12.0 asn1crypto-1.5.1 astunparse-1.6.3 bincopy-17.10.3 capstone-4.0.2 click-command-tree-1.1.0 click-option-group-0.5.5 cmsis-pack-manager-0.2.10 commentjson-0.9.0 crcmod-1.7 deepmerge-0.3.0 ecdsa-0.18.0 fastjsonschema-2.16.3 fido2-1.1.0 fire-0.5.0 frozendict-2.3.5 future-0.18.3 hexdump-3.3 humanfriendly-10.0 intelhex-2.3.0 intervaltree-3.1.0 jinja2-3.0.3 lark-parser-0.7.8 libusb1-1.9.3 libusbsio-2.1.11 milksnake-0.1.5 naturalsort-1.5.1 nkdfu-0.2 nrfutil-6.1.7 oscrypto-1.3.0 pc_ble_driver_py-0.17.0 piccata-2.0.3 prettytable-2.5.0 protobuf-3.20.3 psutil-5.9.4 pycryptodome-3.17 pyelftools-0.29 pylink-square-0.11.1 pynitrokey-0.4.34 pyocd-0.31.0 pyocd-pemicro-1.1.5 pypemicro-0.1.11 pyserial-3.5 pyspinel-1.0.3 python-dateutil-2.7.5 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.7 sly-0.4 sortedcontainers-2.4.0 spsdk-1.7.1 termcolor-2.2.0 tlv8-0.10.0 tqdm-4.65.0 typing_extensions-4.3.0 urllib3-1.26.15 wrapt-1.15.0
jas@kaka:~$
Then upgrading the device worked remarkably well, although I wish the tool had printed URLs and checksums for the firmware files to allow easy confirmation.
jas@kaka:~$ PATH=$PATH:/home/jas/.local/bin
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.15-5D271572: Nitrokey Nitrokey Start (RTM.12.1-RC2-modified)
jas@kaka:~$ nitropy start update
Command line tool to interact with Nitrokey devices 0.4.34
Nitrokey Start firmware update tool
Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
System: Linux, is_linux: True
Python: 3.10.6
Saving run log to: /tmp/nitropy.log.gc5753a8
Admin PIN:
Firmware data to be used:
- FirmwareType.REGNUAL: 4408, hash: ...b'72a30389' valid (from ...built/RTM.13/regnual.bin)
- FirmwareType.GNUK: 129024, hash: ...b'25a4289b' valid (from ...prebuilt/RTM.13/gnuk.bin)
Currently connected device strings:
Device:
Vendor: Nitrokey
Product: Nitrokey Start
Serial: FSIJ-1.2.15-5D271572
Revision: RTM.12.1-RC2-modified
Config: *:*:8e82
Sys: 3.0
Board: NITROKEY-START-G
initial device strings: [ 'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.15-5D271572', 'Revision': 'RTM.12.1-RC2-modified', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G' ]
Please note:
- Latest firmware available is:
RTM.13 (published: 2022-12-08T10:59:11Z)
- provided firmware: None
- all data will be removed from the device!
- do not interrupt update process - the device may not run properly!
- the process should not take more than 1 minute
Do you want to continue? [yes/no]: yes
...
Starting bootloader upload procedure
Device: Nitrokey Start FSIJ-1.2.15-5D271572
Connected to the device
Running update!
Do NOT remove the device from the USB slot, until further notice
Downloading flash upgrade program...
Executing flash upgrade...
Waiting for device to appear:
Wait 20 seconds.....
Downloading the program
Protecting device
Finish flashing
Resetting device
Update procedure finished. Device could be removed from USB slot.
Currently connected device strings (after upgrade):
Device:
Vendor: Nitrokey
Product: Nitrokey Start
Serial: FSIJ-1.2.19-5D271572
Revision: RTM.13
Config: *:*:8e82
Sys: 3.0
Board: NITROKEY-START-G
device can now be safely removed from the USB slot
final device strings: [ 'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.19-5D271572', 'Revision': 'RTM.13', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G' ]
finishing session 2023-03-16 21:49:07.371291
Log saved to: /tmp/nitropy.log.gc5753a8
jas@kaka:~$
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.19-5D271572: Nitrokey Nitrokey Start (RTM.13)
jas@kaka:~$
Before importing the master key to this device, it should be configured. Note the commands at the beginning to make sure scdaemon/pcscd are not running, because they may have cached state from earlier cards. Change the PIN codes as you like after this; my experience with Gnuk was that the Admin PIN had to be changed first, then you import the key, and then you change the PIN.
jas@kaka:~$ gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
OK
ERR 67125247 Slut på fil <GPG Agent>
jas@kaka:~$ ps auxww | grep -e pcsc -e scd
jas 11651 0.0 0.0 3468 1672 pts/0 R+ 21:54 0:00 grep --color=auto -e pcsc -e scd
jas@kaka:~$ gpg --card-edit
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: [not set]
Language prefs ...: [not set]
Salutation .......:
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: off
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
gpg/card> admin
Admin commands are allowed
gpg/card> kdf-setup
gpg/card> passwd
gpg: OpenPGP card no. D276000124010200FFFE5D2715720000 detected
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
Your selection? 3
PIN changed.
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
Your selection? q
gpg/card> name
Cardholder's surname: Josefsson
Cardholder's given name: Simon
gpg/card> lang
Language preferences: sv
gpg/card> sex
Salutation (M = Mr., F = Ms., or space): m
gpg/card> login
Login data (account name): jas
gpg/card> url
URL to retrieve public key: https://josefsson.org/key-20190320.txt
gpg/card> forcesig
gpg/card> key-attr
Changing card key attribute for: Signature key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
Note: There is no guarantee that the card supports the requested size.
If the key generation does not succeed, please check the
documentation of your card to see what sizes are allowed.
Changing card key attribute for: Encryption key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: cv25519
Changing card key attribute for: Authentication key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
gpg/card>
jas@kaka:~$ gpg --card-edit
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: on
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
jas@kaka:~$
Once set up, bring out your offline machine, boot it, and mount your USB stick with the offline key. The paths below will be different; this is using a somewhat unorthodox approach of working with fresh GnuPG configuration paths that I chose for the USB stick.
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ cp -a gnupghome-backup-masterkey gnupghome-import-nitrokey-5D271572
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ gpg --homedir $PWD/gnupghome-import-nitrokey-5D271572 --edit-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Secret key is available.
sec ed25519/D73CF638C53C06BE
created: 2019-03-20 expired: 2019-10-22 usage: SC
trust: ultimate validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>
gpg> keytocard
Really move the primary key? (y/N) y
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1
sec ed25519/D73CF638C53C06BE
created: 2019-03-20 expired: 2019-10-22 usage: SC
trust: ultimate validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>
gpg>
Save changes? (y/N) y
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$
At this point it is useful to confirm that the Nitrokey has the master key available and that it is possible to sign statements with it, back on your regular machine:
jas@kaka:~$ gpg --card-status
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 1
KDF setting ......: on
Signature key ....: B1D2 BD13 75BE CB78 4CF4 F8C4 D73C F638 C53C 06BE
created ....: 2019-03-20 23:37:24
Encryption key....: [none]
Authentication key: [none]
General key info..: pub ed25519/D73CF638C53C06BE 2019-03-20 Simon Josefsson <simon@josefsson.org>
sec> ed25519/D73CF638C53C06BE created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 5D271572
ssb> ed25519/80260EE8A9B92B2B created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
ssb> ed25519/51722B08FE4745A2 created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
ssb> cv25519/02923D7EE76EBD60 created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
jas@kaka:~$ echo foo | gpg -a --sign | gpg --verify
gpg: Signature made Thu Mar 16 22:11:02 2023 CET
gpg: using EDDSA key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg: Good signature from "Simon Josefsson <simon@josefsson.org>" [ultimate]
jas@kaka:~$
Finally, to retrieve and sign a key, for example Andre Heinecke's, whose OpenPGP key identifier I could confirm from his business card:
jas@kaka:~$ gpg --locate-external-keys aheinecke@gnupg.com
gpg: key 1FDF723CF462B6B1: public key "Andre Heinecke <aheinecke@gnupg.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 2 signed: 7 trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: depth: 1 valid: 7 signed: 64 trust: 7-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2023-05-26
pub rsa3072 2015-12-08 [SC] [expires: 2025-12-05]
94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1
uid [ unknown] Andre Heinecke <aheinecke@gnupg.com>
sub ed25519 2017-02-13 [S]
sub ed25519 2017-02-13 [A]
sub rsa3072 2015-12-08 [E] [expires: 2025-12-05]
sub rsa3072 2015-12-08 [A] [expires: 2025-12-05]
jas@kaka:~$ gpg --edit-key "94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1"
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa3072/1FDF723CF462B6B1
created: 2015-12-08 expires: 2025-12-05 usage: SC
trust: unknown validity: unknown
sub ed25519/2978E9D40CBABA5C
created: 2017-02-13 expires: never usage: S
sub ed25519/DC74D901C8E2DD47
created: 2017-02-13 expires: never usage: A
The following key was revoked on 2017-02-23 by RSA key 1FDF723CF462B6B1 Andre Heinecke <aheinecke@gnupg.com>
sub cv25519/1FFE3151683260AB
created: 2017-02-13 revoked: 2017-02-23 usage: E
sub rsa3072/8CC999BDAA45C71F
created: 2015-12-08 expires: 2025-12-05 usage: E
sub rsa3072/6304A4B539CE444A
created: 2015-12-08 expires: 2025-12-05 usage: A
[ unknown] (1). Andre Heinecke <aheinecke@gnupg.com>
gpg> sign
pub rsa3072/1FDF723CF462B6B1
created: 2015-12-08 expires: 2025-12-05 usage: SC
trust: unknown validity: unknown
Primary key fingerprint: 94A5 C9A0 3C2F E5CA 3B09 5D8E 1FDF 723C F462 B6B1
Andre Heinecke <aheinecke@gnupg.com>
This key is due to expire on 2025-12-05.
Are you sure that you want to sign this key with your
key "Simon Josefsson <simon@josefsson.org>" (D73CF638C53C06BE)
Really sign? (y/N) y
gpg> quit
Save changes? (y/N) y
jas@kaka:~$
This is on my day-to-day machine, using the Nitrokey Start with the offline key. No need to boot the old offline machine just to sign keys or extend expiry anymore! At FOSDEM 23 I managed to get at least one DD signature on my new key, and the Debian keyring maintainers accepted my Ed25519 key. Hopefully I can now finally let my 2014-era RSA3744 key expire on 2023-09-19 and not extend it any further. This should finish my transition to a simpler OpenPGP key setup, yay!
Been looking into some existing sshd implementations in Go. Most of the
projects on GitHub seem to use the standard x/crypto/ssh lib.
During testing, I just wanted to see which banner these kinds of SSH servers
provide, using a simple netcat command,
and noticed that at least some of these sshds did not accept any further
connections. Simple DoS via netcat, nice.
To this day, the Go documentation is missing a crucial hint: the
function handling the connection should be called as a
goroutine, otherwise it simply
blocks any further incoming connections.
Created some pull requests on the most-starred projects I found; it seems even
experienced Go devs missed this part.
Posted on March 6, 2023
When I paint postcards I tend to start with a draft (usually on lightweight (250 g/m²) watercolour paper), then trace1 the drawing on blank postcards and paint it again.
I keep the drafts for a number of reasons; for the views / architectural ones I'm using a landscape photo album that I bought many years ago, but lately I've also sent a few cards with my historical outfits to people who like to be kept updated on that, and I wanted a different book for those, both for better organization and to be able to keep them in the portrait direction.
If you know me, you can easily guess that buying one wasn't considered an option.
Since I'm not going to be writing on the pages, I decided to use a relatively cheap 200 g/m² linoprint paper with a nice feel, and I've settled on a B6 size (before trimming) to hold A6 postcard drafts.
For the binding I've decided to use a technique I learned from a craft book ages ago that doesn't use tapes, and added a full hard cover in dark grey linen-feel2 paper. For the end-papers I've used some random sheets of light blue paper (probably around 100-something g/m²), and that's the thing where I could have done better, but they work.
Up to now there isn't anything I hadn't done before; what was new was the fact that this book was meant to hold things between the pages, and I needed to provide space for them.
After looking on the internet for solutions, I settled on adding spacers by making a signature composed of paper - spacer - paper - spacer, with the spacers being 2 cm wide, folded in half.
And then, between finishing binding the book and making the cover, I utterly forgot to add the head bands. Argh. It's not the first time I've made this error.
I'm happy enough with the result. There are things that are easy to improve on in the next iteration (endpapers and head bands), and something in me is not 100% happy with the fact that the spacers aren't placed between every sheet: there are places with no spacer and places with two of them. But I can't think of (and couldn't find) a way to make them otherwise with a sewn book, unless I sew each individual sheet, which sounds way too bulky (the album I'm using for the landscapes was glued, but I didn't really want to go that way).
The size is smaller than the other one I was using and doesn't leave a lot of room around the paintings, but that isn't necessarily a bad thing, because it also means less wasted space.
I believe that one of my next projects will be another similar book in a landscape format, for those postcard drafts that aren't landscapes nor clothing related.
And then maybe another? or two? or
Traceback (most recent call last):
TooManyProjectsError: project queue is full
yes, trace. I can't draw. I have too many hobbies to spend the required amount of time every day to practice it. I'm going to fake it. 85% of the time I'm tracing from a photo I took myself, so I'm not even going to consider it cheating.
the description of which, on the online shop, made it look like fabric, even if the price was suspiciously low, so I bought a sheet to see what it was. It wasn't fabric. It feels and looks nice, but I'm not sure how sturdy it's going to be.
Background
Silver-Platter makes it easier to
publish automated changes to repositories. However, in its default mode, the
only option for reviewing changes before publishing them is to run in dry-run mode.
This can be quite cumbersome if you have a lot of repositories.
A new batch mode now makes it possible to generate a large number of changes
against different repositories using a script, review and optionally alter the
diffs, and then all publish them (and potentially refresh them later if
conflicts appear).
Example running pyupgrade
I'm using the pyupgrade example recipe that comes with silver-platter.
---
name: pyupgrade
command: 'pyupgrade --exit-zero-even-if-changed $(find -name "test_*.py")'
mode: propose
merge-request:
  commit-message: Upgrade Python code to a modern version
And a list of candidate repositories to process in candidates.yaml.
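The post doesn't reproduce candidates.yaml itself; judging from the batch.yaml it produces, a minimal version presumably looks something like this (my assumption about the format, not verbatim from silver-platter's documentation):

```yaml
---
- url: https://github.com/dulwich/dulwich
- url: https://github.com/jelmer/xandikos
```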
There is also a file called batch.yaml that describes the pending changes:
name: pyupgrade
work:
- url: https://github.com/dulwich/dulwich
  name: dulwich
  description: Upgrade to modern Python statements
  commit-message: Run pyupgrade
  mode: propose
- url: https://github.com/jelmer/xandikos
  name: xandikos
  description: Upgrade to modern Python statements
  commit-message: Run pyupgrade
  mode: propose
recipe: ../pyupgrade.yaml
At this point the changes can be reviewed, and batch.yaml edited as the user sees fit; they
can remove entries that don't appear to be correct, edit the metadata for the merge
requests, etc. It's also possible to make changes to the clones.
Once you're happy, publish the results:
$ svp batch publish pyupgrade
This will publish all the changes, using the mode and parameters specified in
batch.yaml.
batch.yaml is automatically stripped of any entries in work that have fully
landed, i.e. where the pull request has been merged or where the changes were
pushed to the origin.
To check up on the status of your changes, run svp batch status:
$ svp batch status pyupgrade
To refresh any merge proposals that may have become out of date, simply run publish again.
It's been a year since I started exploring HLedger, and I'm still
going. The rollover to 2023 was an opportunity to revisit my approach.
Some time ago I stumbled across Dmitry Astapov's HLedger notes (fully-fledged
hledger, which I briefly
mentioned in eventual consistency) and decided to adopt some of its ideas.
new year, new journal
First up, Astapov encourages starting a new journal file for a new calendar
year. I do this for other, accounting-adjacent files as a matter of course,
and I did it for my GNUCash files prior to adopting HLedger. But the reason
for those is a general suspicion that a simple mistake with that software
could irrevocably corrupt my data. I'm much more confident with HLedger, so
rolling over at year's end isn't necessary for that. But there are other
advantages. A quick obvious one is you can get rid of old accounts (such as
expense accounts tied to a particular project, now completed).
one journal per import
In the first year, I periodically imported account data via CSV exports
of transactions and HLedger's (excellent) CSV import system. I imported
all the transactions, once each, into a single, large journal file.
Astapov instead advocates for creating a separate journal
for each CSV that you wish to import, and keeping the CSV around, leaving you
with a 1:1 mapping of CSV:journal. Then use HLedger's "include" mechanism to
pull them all into the main journal.
With the former approach, where the CSV data was imported precisely once, it
was only exposed to your import rules once. The workflow ended up being:
import transactions; notice some that you could have matched with import rules
and auto-coded; write the rule for the next time. With Astapov's approach, you
can re-generate the journal from the CSV at any point in the future with an
updated set of import rules.
tracking dependencies
Now we get onto the job of driving the generation of all these derivative
journal files. Astapov has built a sophisticated system using Haskell's "Shake",
with which I'm not yet familiar, but for my sins I'm quite adept at (GNU-flavoured)
UNIX Make, so I started building with that. An example rule
captures the dependency between the journal and the underlying CSV,
but also on the relevant rules file; if I modify that, and this target
is run in the future, all dependent journals should be re-generated.1
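Such a rule might look like the following sketch, under the assumption that each account has a foo.csv export next to a foo.csv.rules file (hledger automatically picks up FILE.rules when reading FILE.csv); the paths are hypothetical, not the actual ones from my setup:

```make
# Regenerate one derivative journal per CSV export.  The journal
# depends on both the CSV data and the import rules, so editing
# either one causes a re-generation on the next make run.
%.journal: %.csv %.csv.rules
	hledger -f $*.csv print > $@
```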
opening balances
It's all fine and well starting over in a new year, and I might be generous
and forgive debts, but I can't count on others to do the same. We need
to carry over some balance information from one year to the next. Astapov has
a more complex (or perhaps more featureful) scheme for this involving a custom
Haskell program, but I bodged something with a pair of make targets.
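One of the pair might be sketched like this, using hledger's close subcommand, which can emit an opening-balances transaction; the file names are hypothetical and the exact flags may differ between hledger versions:

```make
# Derive 2023's opening balances from 2022's closing balances.
2023-opening.journal: 2022.journal
	hledger -f 2022.journal close --open -e 2023-01-01 \
		assets liabilities > $@
```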
I think this could be golfed into a year-generic rule with a little more work.
The nice thing about this approach is that the opening balances for a given year
might change if adjustments are made in prior years. They shouldn't, for
real accounts, but very well could for more "virtual" liabilities
(including deciding to write off debts).
run lots of reports
Astapov advocates for running lots of reports, and automatically. There's a
really obvious advantage of that to me: there's no chance anyone except me
will actually interact with HLedger itself. For family finances, I need
reports to be able to discuss anything with my wife.
Extending my make rules to run reports is trivial. I've gone for HTML
reports for the most part, as they're the easiest on the eye. Unfortunately
the most useful report to discuss (at least at the moment) would be a list
of transactions in a given expense category, and the register/aregister
commands did not support HTML as an output format. I submitted my first
HLedger patch to add HTML output support to aregister:
https://github.com/simonmichael/hledger/pull/2000
addressing the virtual posting problem
I wrote in my original hledger blog post that I had to resort to
unbalanced virtual postings in order to record both a liability between
my personal cash and family, as well as categorise the spend. I still
haven't found a nice way around that.
But I suspect having broken out the journal into lots of other journals
paves the way to a better solution to the above.
The form of a solution I am thinking of is: some scheme whereby the two
destination accounts are combined together; perhaps, choose one as a primary
and encode the other information in sub-accounts under that. For example,
repeating the example from my hledger blog post:
(I note this is very similar to a solution proposed to me by someone
responding on twitter).
The next step is to recognise that sometimes when looking at the data I
care about one aspect, and at other times the other, but rarely both. So
for the case where I'm thinking about family finances, I could use
account aliases
to effectively flatten out the expense category portion and ignore it.
On the other hand, when I'm concerned about how I've spent my personal
cash and not about how much I owe the family account, I could use
aliases to do the opposite: rewrite away the family:liabilities:jon
prefix and combine the transactions with the regular jon:expenses
account hierarchy.
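As a sketch of what one such alias might look like (hledger aliases support regular expressions; the account names follow the family:liabilities:jon scheme above):

```
# When reviewing family finances, collapse the expense-category
# sub-accounts so only the liability itself is visible:
alias /^family:liabilities:jon:.*/ = family:liabilities:jon
```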
(this is all speculative: I need to actually try this.)
catching errors after an import
When I import the transactions for a given real bank account, I check the
final balance against another source: usually a bank statement, to make
sure they agree. I wasn't using any of the myriad methods to make sure
that this remains true later on, and so there was the risk that I make an
edit to something and accidentally remove a transaction that contributed
to that number, and not notice (until the next import).
The CSV data my bank gives me for accounts (not for credit cards) also includes
a 'resulting balance' field. It was therefore trivial to extend the CSV import
rules to add balance
assertions to
the transactions that are generated. This catches the problem.
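In hledger's CSV rules this amounts to a single field assignment; a sketch assuming a hypothetical column layout where the bank's fourth column is the resulting balance:

```
# foo.csv.rules (hypothetical layout): assigning the balance field
# makes hledger add a balance assertion to each generated
# transaction's first posting.
fields date, description, amount, balance
```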
There are a couple of warts with balance assertions on every such
transaction: for example, dealing with the duplicate transaction for paying
a credit card: one from the bank statement, one from the credit card.
Removing one of the two is sufficient to correct the account balances but
sometimes they don't agree on the transaction date, or the transactions
within a given day are sorted slightly differently by HLedger than by the
bank. The simple solution is to just manually delete one or two assertions:
there remain plenty more for assurance.
going forward
I've only scratched the surface of the suggestions in Astapov's "full fledged
HLedger" notes. I'm up to step 2 of 14. I'm expecting to return to it once
the changes I've made have bedded in a little bit.
I suppose I could anonymize and share the framework (Makefile etc) that I am
using if anyone was interested. It would take some work, though, so I don't know
when I'd get around to it.
the rm latest bit is to clear up some state-tracking files that HLedger writes to avoid importing duplicate transactions. In this case, I know better than HLedger.