Search Results: "Bernhard R. Link"

8 December 2014

Bernhard R. Link: The Colon in the Shell.

I was recently asked about some construct in a shell script starting with a colon (:), leading me into a long monologue about it. Afterwards I realized I had forgotten to mention half of the nice things. So here, for your amusement, are some uses of the colon in the shell: To find the meaning of ":" in the bash manpage[1], you have to look at the start of the SHELL BUILTIN COMMANDS section. There you find:
: [arguments]
	No effect; the command does nothing beyond expanding arguments and performing any specified redirections.  A zero exit code is returned.
If you wonder how that differs from true: I don't know of any difference (except that there is no /bin/:). So what is the colon useful for? You can use it if you need a command that does nothing, but still is a command. Then there are more things you can do with the colon, most of which I'd file under "abuse". This is of course not a complete list. But unless I missed something else, those are the most common cases I run into. [1] <rant>If you never looked at it, better don't start: the bash manpage is legendary for being quite useless, hiding every piece of information among other information in a quite absurd order. Unless you look for documentation about how to write a shell script parser; then the bash manpage is really what you want to read.</rant>
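To illustrate, here is a sketch of the usual suspects (the default-assignment and truncation tricks are the ones I'd count as "abuse"):

```shell
#!/bin/sh
# 1. a command that does nothing, where the syntax demands a command:
if true; then :; fi

# 2. an endless loop, without needing an external true binary:
i=0
while :; do
    i=$((i + 1))
    if [ "$i" -ge 3 ]; then break; fi
done

# 3. evaluating parameter expansions only for their side effects,
#    e.g. assigning a default value:
VAR=
: "${VAR:=default}"
echo "$VAR"     # prints "default"

# 4. truncating (or creating) a file without running any program:
: > /tmp/empty.$$
rm -f /tmp/empty.$$
```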

16 November 2014

Bernhard R. Link: Enabling Change

Big changes are always complicated to get done, and the bigger or more diverse the organization they take place in, the harder they can be.

Transparency
Ideally every change is communicated early and openly. Leaving people in the dark about what will change and when gives them much less time to get comfortable with it or to come to terms with it mentally. Especially bad is extending the change later or shortening transition periods. Letting people think they have some time to transition, only to force them to rush later, will remove any credibility you have and severely reduce their ability to believe you are not crossing them. Making a new way optional is a great way to create security (see below), but making it obligatory before the change has even arrived as an option for them will not make them very willing to embrace change.

Take responsibility
Every transformation has costs. Even if some change only improved things and made nothing worse once implemented (the ideal change, which you will never meet in reality), deploying the change still costs: processes have adapted to the old way, people have to relearn how to do things, how to detect when something goes wrong, how to fix it, documentation has to be adapted, and so on. Even if the change brings more good than costs to the organization as a whole (let's hope it does; I hope you wouldn't try to do something whose total benefit is negative), the benefits, and thus the benefit-to-cost ratio, will differ between the parts of your organization and the people within it. It is hardly avoidable that for some people there will not be much benefit, and much less perceived benefit, compared to the costs they have to bear. Those are the people whose good will you want to fight for, not the people you want to fight against. They have to pay with their labor and resources, and thus their good will, for the overall benefit. This is much easier if you acknowledge that fact.
If you blame them for bearing the costs, claim their situation does not even exist, or ridicule them for not embracing change, you only set yourself up for frustration. You might be able to persuade yourself that everyone unwilling to invest in the change is just acting out of malevolent self-interest. But you will hardly be able to persuade people that it is evil not to help your cause if you treat them as enemies. And once you have ignored or played down costs that later actually materialize, your credibility in seeing the big picture will simply cease to exist for the next change.

Allow different metrics
People have different opinions about priorities, about what is important, about how much something costs, and even about what constitutes a problem. If you want to persuade them, take that into account. If you do not understand why something is given as a reason, it might be because the point is stupid. But it might also be that you are missing something. And often there is simply a different valuation of what is important, what the costs are and what the problems are. If you want to persuade people, it is worth trying to understand those valuations. If all you want is to persuade some leader or some majority, then ridiculing their concerns might get you somewhere. But how do you expect to win people over if you do not even appear to understand their problems? Why should people trust you that their costs will be outweighed by the overall benefits if you tell them the costs they clearly see do not exist? How credible is referring to the bigger picture if the part of the picture they can see does not match what you say the bigger picture looks like?

Don't get trolled and don't troll
There will always be people who are unreasonable or even try to provoke you. Do not allow yourself to be provoked. Remember that for successful changes you need to win broad support.
Feeling personally attacked, or feeling confronted with a large number of pointless arguments, easily results in not giving proper responses or not actually looking at the arguments. If someone is only trolling and purely malevolent, they will tease you best by bringing up actual concerns of real people, in a way that makes you likely to degrade yourself and your point in answering. Becoming impertinent with the troll is like attacking the annoying little goblin hiding next to the city guards with area damage. When you are unable to persuade people, it is also far too easy to assume they act in bad faith and/or to attack them personally. This can only escalate things further. In the worst case you frustrate someone acting in good faith. In most cases you poison the discussion so much that people who actually are in good faith will no longer contribute to it. It might be rewarding in the short term, because after some escalation only obviously unreasonable people will speak against you, but it makes it much harder to find solutions together that could benefit anyone, and almost impossible to persuade those who simply left the discussion.

Give security
Last but not least, remember that humans are quite risk-averse. In general they might invest in (even small) chances to win, but will go a long way to avoid risks. Thus an important part of enabling change is to reduce risks, real and perceived, and to give people a feeling of security. In the end, almost every measure boils down to that: You give people security by giving them the feeling that the whole picture is considered in decisions (by bringing them into the process early, by making sure their concerns are understood and become part of the global benefit/cost calculation, and by making sure their experiences with the change are part of the evaluation).
You give people security by allowing them to predict and control things (through transparency about plans and about how far the change will go, through guaranteed transition periods, and by giving them enough time to actually plan and carry out the transition). You give people security by avoiding early points of no return (by having wide enough tests, rollback scenarios, ...). You give people security by not leaving them alone (with good documentation, availability of training, ...). Side-by-side availability of old and new is an especially powerful tool, as it fits all of the above: It allows people to actually test the new way (and not some little prototype mostly but not quite totally unrelated to reality), so their feedback can be heard. It makes things more predictable, as all the new ways can be tried before the old ones stop working. It is the ultimate rollback scenario (just switch off the new). And it allows learning the new before losing the old. Of course, giving people a feeling of security needs resources. But it is a very powerful way to get people to embrace the change. Also, in my experience, people fearing only for themselves will usually stay mostly passive, not pushing forward and trying to avoid or escape the changes. (After all, working against something costs energy, so purely egoistic behavior is quite limiting in that regard.) Most people actively pushing back do so because they fear for something larger than just themselves. And any measure that makes them fear less that you will ruin the overall organization not only avoids unnecessary hurdles in rolling out the change, but also gives you some small chance of not running into disaster with closed eyes.

28 August 2014

Bernhard R. Link: Where key expiry dates are useful and where they are not.

Some recent blog posts (here and here) suggest short key expiry times. They also highlight something many people forget: the expiry time of a key can be changed at any time with just a new self-signature. In particular, that can be done retroactively (you cannot avoid this if you allow changing it at all: nothing stops an attacker from simply changing the clock of one of his computers). (By the way: did you know you can also reduce the validity time of a key? If you look at the resulting packets in your key, this is simply a revocation packet for the previous self-signature followed by a new self-signature with a shorter expiration date.) In my eyes this fact has a very simple consequence: an expiry date on your gpg main key is almost totally worthless. If you for example lose your private key and have no revocation certificate for it, an expiry time will not help at all: once someone else gets hold of the private key (for example by brute-forcing it, as computers get faster over the years, or because they could brute-force the pass-phrase of a backup they obtained somehow), they can just extend the expiry date and make it look like the key is still valid. (And if they do not have the private key, there is nothing they can do anyway.) There is one place where expiration dates make much more sense, though: subkeys. As the expiration date of a subkey is part of the signature over that subkey made with the main key, someone having access to only the subkey cannot change the date. This also makes it feasible to switch to new subkeys over time, as you can let a previous subkey expire and use a new one. And only someone having the private main key (hopefully you) can extend its validity (or sign a new one). (I generally suggest always having a signing subkey and never ever using the main key, except off-line to sign subkeys or other keys.
The fact that it can sign other keys makes the main key just too precious to operate on-line (even if it is on some smartcard, the reader cannot show you what you are actually signing).)
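With a reasonably recent GnuPG (2.2 or later) that setup can be sketched using the --quick-* commands; the fingerprints below are placeholders:

```shell
# add a signing subkey valid for one year to an existing key
gpg --quick-add-key 'MAINKEY-FINGERPRINT' rsa4096 sign 1y

# later, with access to the (offline) main key, extend the
# subkey's validity by another year
gpg --quick-set-expire 'MAINKEY-FINGERPRINT' 1y 'SUBKEY-FINGERPRINT'

# inspect the key and its subkeys, including expiry dates
gpg --list-keys --with-subkey-fingerprint 'MAINKEY-FINGERPRINT'
```

Extending the subkey's expiry requires the main private key, which is exactly the property the post argues for.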

2 June 2014

Bernhard R. Link: beware of changed python Popen defaults

From the python subprocess documentation:
Changed in version 3.3.1: bufsize now defaults to -1 to enable buffering by default to match the behavior that most code expects. In versions prior to Python 3.2.4 and 3.3.1 it incorrectly defaulted to 0 which was unbuffered and allowed short reads. This was unintentional and did not match the behavior of Python 2 as most code expected.
So, it seems, it was unintentional that the previous documentation clearly documented the default to be 0 and that the implementation matched the documentation. And it was unintentional that 0 was the only sane value for any non-trivial handling of pipes (without running into deadlocks). Yay for breaking programs that follow the documentation! Yay for changing such an important setting between 3.2.3 and 3.2.4 and introducing deadlocks into programs.

16 February 2014

Bernhard R. Link: unstable busybox and builtins

In case you are using busybox-static like me to create custom initramfses, here is a little warning: the current busybox-static in unstable lost its ability to find its builtins when /proc/self/exe is not available. So if you use it, make sure you either have all builtins you need before mounting /proc (including mount itself) and after unmounting all file systems available as explicit symlinks, or simply create a /proc/self/exe -> /bin/busybox symlink...
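In an initramfs /init script the symlink workaround could look roughly like this (a sketch: before procfs is mounted, /proc is just an empty directory on the initramfs, so a fake entry can be created there):

```shell
#!/bin/sh
# let busybox-static find its own binary before the real /proc exists
mkdir -p /proc/self
ln -sf /bin/busybox /proc/self/exe

# mounting the real procfs shadows the fake entry, after which the
# kernel provides the genuine /proc/self/exe
mount -t proc proc /proc
```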

15 August 2013

Bernhard R. Link: slides for git-dpm talk at debconf13

Since I got the speed a bit wrong at my git-dpm talk at debconf13, and as the slides I uploaded to penta do not seem to work from the html export, I've also uploaded the slides to

12 June 2013

Bernhard R. Link: listing your git repositories on

With the new gitweb version available on alioth after the upgrade to wheezy (thanks to the alioth admins for their work on alioth), there is a new feature available I want to advertise a bit here: listing only a subtree of all repositories. Until now one could only either look at a specific repository or get the list of all repositories, and the list of all repositories is quite large and slow. With the new feature you can link to all the repositories in your alioth project, for example in reprepro's case that is. Even more useful to me is what is now possible with the link: getting a list of all debian-science repositories (still slow enough, but much better than the full list).

9 May 2013

Bernhard R. Link: gnutls and valgrind

Memo to myself (as I tend to forget it): If you develop applications that use gnutls, recompile gnutls with --disable-hardware-acceleration to be able to test them under valgrind without getting flooded with false positives.
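In other words, something along these lines (the prefix, library path and application name are assumptions; only the --disable-hardware-acceleration flag comes from the memo):

```shell
# build a debugging copy of gnutls without the assembler/hardware
# crypto paths that trigger valgrind false positives
./configure --disable-hardware-acceleration --prefix="$HOME/gnutls-debug"
make && make install

# then run the application against that copy under valgrind
LD_LIBRARY_PATH="$HOME/gnutls-debug/lib" valgrind --leak-check=full ./your-gnutls-app
```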

4 May 2013

Ian Campbell: Taking over qcontrol upstream, releasing qcontrol 0.5.0

Taking over Qcontrol upstream When I took over the qcontrol package in Debian, back in October 2012, I was aware that upstream hadn't been active for quite a while (since late 2009). I figured that would be OK, since the software was pretty simple and pretty much feature complete. Nevertheless, since I've been maintaining the package I've had a small number of wishlist bugs (many with patches, thanks guys!) which are really upstream issues. While I could carry them as patches in the Debian package, they would really be better taken care of by an upstream. With that in mind I mailed the previous upstream (and original author) Byron Bradley back in February to ask if he would mind if I took over as upstream. Unfortunately I haven't yet heard back; given how long upstream has been inactive this isn't terribly surprising, so having waited several months I have decided to just go ahead and take over upstream development. Thanks Byron for all your work on the project, I hope you don't mind me taking over. Since it is unlikely that I will be able to access the old website or Subversion repository, I have converted the repository to git and uploaded it to gitorious: I don't expect I'll be doing an awful lot of proactive upstream development, but at least now I have a hat I can put on when someone reports an "upstream" issue against qcontrol, and somewhere which can accept upstream patches from myself and others. I'm still deciding what to do about a website etc. I may just enable the gitorious wiki, or I may set up an ikiwiki software site, which has the advantage of being a little more flexible and providing a neat way of dealing with bug reports without being too much overhead to set up. In the meantime it's been almost 5 years since the last release, so... New Release: qcontrol 0.5.0 This is a rollup of some changes which were made in the old upstream SVN repository but never released, and some patches which had been made in the Debian packaging.
What's here corresponds to the Debian 0.4.2+svn-r40-3 package. The 0.4.2 release was untagged in SVN, but corresponds to r14, new stuff since then includes: As well as the change of maintainer I think the addition of the daemon mode warrants the bump to 0.5.0. Get the new release from git or

4 April 2013

Bernhard R. Link: Git package workflows

Given the recent discussions on I use the opportunity to describe how you can handle upstream history in a git-dpm workflow. One of the primary points of git-dpm is that you should be able to just check out the Debian branch, get the .orig.tar file(s) (for example using pristine-tar, by git-dpm prepare, or by just downloading them) and then call dpkg-buildpackage. Thus the contents of the Debian branch need to be clean from dpkg-source's point of view, that is, they must not contain any files the .orig.tar file(s) do not contain, nor any modified files. The easy way The easiest way to get there is by importing the upstream tarball(s) as a git commit, which one will usually do with git-dpm import-new-upstream, as that also does some of the bookkeeping. This new git commit will have (by default) the previous upstream commit and any parent you give with -p as parents (i.e. with -p it will be a merge commit), and its content will be the contents of the tarball (with multiple orig files it gets more complicated). The idea is of course that you give the upstream tag/commit belonging to this release tarball with -p, so that it becomes part of your history and git blame can find those commits. Thus you get a commit with the exact orig contents (so pristine-tar can more easily create small deltas) and the history combined. Sometimes there are files in the upstream tarball that you do not want to have in your Debian branch (as you remove them in debian/rules clean); when using this method you will have those files in the upstream branch, but you delete them in the Debian branch. (This is why git-dpm merge-patched (the operation that merges a new branch with upstream + patches with your previous debian/ directory) will look at which files are deleted relative to the previous upstream branch and by default delete them in the newly merged branch, too.)
The complicated way There is also a way without importing the .orig.tar file(s), though it is a bit more complicated: The idea is that if your upstream's git repository contains all the files needed for building your Debian package (for example if you call autoreconf in your Debian package and clean all the generated files in the clean target, or if upstream has a less sophisticated release process and their .tar contains only stuff from the git repository), you can just use the upstream git commit as the base for your Debian branch. Thus you can make upstream's commit/tag your upstream branch by recording it with git-dpm new-upstream together with the .orig.tar it belongs to. (Be careful: git-dpm does not check whether that branch contains any files different from your .orig.tar, and it could not decide whether it misses any files you need to build even if it tried.) Once that is merged with the debian/ directory to create the Debian branch, you run dpkg-buildpackage, which will call dpkg-source, which compares your working directory with the contents of the .orig.tar with the patches applied. As it will only see missing files but no modified or added files (if everything was done correctly), one can work directly in the git checkout without needing to import the .orig.tar files at all (although the pristine-tar deltas might get a bit bigger).
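The two workflows might be sketched roughly like this (package name, version and tag name are made up; the exact option syntax should be checked against git-dpm's own documentation):

```shell
# the easy way: import the upstream tarball as a commit, recording
# upstream's release tag as an additional parent so git blame can
# reach upstream history
git-dpm import-new-upstream -p upstream/1.2.3 ../foo_1.2.3.orig.tar.gz

# the complicated way: use upstream's own commit as the upstream
# branch, only recording which .orig.tar it belongs to
git-dpm new-upstream ../foo_1.2.3.orig.tar.gz upstream/1.2.3
```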

10 February 2013

Bernhard R. Link: Debian version strings

As I did not find a nice explanation of Debian version numbers to point people at, here is a random collection of information about them: All our packages have a version. For the package managers to know which version replaces which, those versions need an ordering. As version orderings are like opinions (everyone has one), no single ordering chosen for our tools to implement would match all of them. So maintainers of Debian packages sometimes have to translate upstream versions into something the Debian tools understand. But first let's start with some basics: A Debian version string is of the form: [Epoch:]Upstream-Version[-Debian-Revision] To make this form unique, the Upstream-Version may not contain a colon if there is no epoch, and may not contain a minus if there is no Debian-Revision. The Epoch must be an integer (so no colons allowed). And the Debian-Revision may not contain a minus sign (so the Debian-Revision is everything right of the right-most minus sign, or empty if there is no such sign). Two versions are compared by comparing all three parts: if the epochs differ, the biggest epoch wins. With the same epochs, the biggest upstream version wins. With the same epochs and the same upstream versions, the biggest revision wins. Comparing first the upstream version and then the revision is the only sensible thing to do, but it can have counter-intuitive effects if you try to compare versions containing minus signs as Debian versions:
$ dpkg --compare-versions '1-2' '<<' '1-1-1' && echo true || echo false
$ dpkg --compare-versions '1-2-1' '<<' '1-1-1-1' && echo true || echo false
To compare two version parts (Upstream-Version or Debian-Revision), the string is split into pairs of digits and non-digits. Consecutive digits are treated as a number and compared numerically. Non-digit parts are compared just like ASCII strings, with the exceptions that letters are sorted before non-letters and that the tilde is treated specially (see below). So 3pl12 and 3pl3s are split into (3, 'pl', 12, '') and (3, 'pl', 3, 's'), and the first is the larger version. Comparing digits as characters makes no sense, at least at the beginning of the string (otherwise version 10.1 would be smaller than 9.3). For digits later in the string there are two different version schemes competing: there is the GNU style, where 0.9.0 is followed by 0.10.0, and there are decimal fractions, where 0.11 < 0.9. A version comparison algorithm has to choose one, and the one chosen by dpkg is both the one supporting the GNU numbering and the one that more easily supports the other scheme: imagine one software going 0.8 0.9 0.10 and another going 1.1 1.15 1.2. With our versioning scheme the first just works, while the second has to be translated into 1.1 1.15 1.20 to remain monotonic. The other way around, we would have to translate the first form to 0.08 0.09 0.10, or better 0.008 0.009 0.010 as we do not know how big those numbers will become, i.e. one would have to know beforehand where the numbers will end up, while adding zeros as needed for our scheme can be done knowing only the previous numbers. Another decision to be taken is how to treat non-numbers. The way dpkg did this was to assume that appending something to a version increases it. This has the advantage of not needing to special-case dots, as anything appended to 0.9.6 will naturally be bigger than 0.9.6. I think back then this decision was also easier, as usually anything attached made a version bigger, and one often saw versions denoting some patches done atop of 3.3 in the third revision.
But this scheme has the disadvantage that version schemes like 1.0rc1 1.0rc2 1.0 do not map naturally. The classic way to work around this is to translate that into 1.0rc1 1.0rc2 1.0.0, which works because the dot is a non-letter (it also works with 1.0-rc1 and 1.0+rc1, as the dot has a bigger ASCII value than minus or plus). The new way is the specially treated tilde character. This character was added some years ago and sorts before anything else, including the empty string. This means that 1.0~rc1 is less than 1.0:
$ dpkg --compare-versions '1.0~rc1-1' '<<' '1.0-1' && echo true || echo false
This scheme is especially useful if you want to create a package sorting before a package that is already there, as you for example want with backports (a user having a backport installed who upgrades to the next distribution should get the backport replaced with the actual package). That's why backports usually have versions like 1.0-2~bpo60+1. Here 1.0-2 is the version of the non-backported package; bpo60 is a note that this is backported to Debian 6 (AKA squeeze); and the +1 is the number of the backport, in case multiple tries are necessary. (Note the use of the plus sign, as the minus sign is not allowed in revisions and would make everything before it part of the upstream version.) Now, when to use which technique? Some common examples:
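The splitting rules described above can be checked directly with dpkg --compare-versions, assuming a Debian-like system where dpkg is installed:

```shell
# consecutive digits compare as numbers, so 15 > 2 and 10 > 9:
dpkg --compare-versions '1.2'   '<<' '1.15'  && echo true || echo false
dpkg --compare-versions '0.9'   '<<' '0.10'  && echo true || echo false
# the split of 3pl3s and 3pl12 from above: 3 < 12 decides
dpkg --compare-versions '3pl3s' '<<' '3pl12' && echo true || echo false
# all three print "true"
```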

30 January 2013

Paul Tagliamonte: dput-ng/1.4 in unstable

dput-ng (1.4) unstable; urgency=low
   [ Arno Töll ]
   * Really fix #696659 by making sure the command line tool uses the most recent
     version of the library.
   * Mark several fields to be required in profiles (incoming, method)
   * Fix broken tests.
   * Do not run the check-debs hook in our mentors.d.n profile
   * Fix "[dcut] dm bombed out" by using the profile key only when defined
     (Closes: #698232)
   * Parse the gecos field to obtain the user name / email address from the local
     system when DEBFULLNAME and DEBEMAIL are not set.
   * Fix "dcut reschedule sends "None-day" to ftp-master if the delay is
     not specified" by forcing the corresponding parameter (Closes: #698719)
   [ Luca Falavigna ]
   * Implement default_keyid option. This is particularly useful with multiple
     GPG keys, so dcut is aware of which one to use.
   * Make scp uploader aware of "port" configuration option.
   [ Paul Tagliamonte ]
   * Hack around Launchpad's SFTP implementation. We musn't stat *anything*.
     "Be vewy vewy quiet, I'm hunting wabbits" (Closes: #696558).
   * Rewrote the test suite to actually test the majority of the codepaths we
     take during an upload. Back up to 60%.
   * Added a README for the twitter hook, Thanks to Sandro Tosi for the bug,
     and Gergely Nagy for poking me about it. (Closes: #697768).
   * Added a doc for helping folks install hooks into dput-ng (Closes: #697862).
   * Properly remove DEFAULT from loadable config blocks. (Closes: #698157).
   * Allow upload of more then one file. Thanks to Iain Lane for the
     suggestion. (Closes: #698855).
   [ Bernhard R. Link ]
   * allow empty incoming dir to upload directly to the home directory
   [ Sandro Tosi ]
   * Install example hooks (Closes: #697767).
Thanks to all the contributors! For anyone who doesn't know, you should check out the docs.

13 January 2013

Bernhard R. Link: some signature basics

While almost everyone has already worked with cryptographic signatures, they are usually only used as black boxes, without anyone taking a closer look. This article intends to shed some light on what goes on behind the scenes. Let's take a look at a signature. In ascii-armoured form, or behind a clearsigned message, one often only sees something like this:
Version: GnuPG v1.4.12 (GNU/Linux)
This is actually only a base64 encoded data stream. It can be translated to the actual byte stream using gpg's --enarmor and --dearmor commands (quite useful if some tool expects only one BEGIN SIGNATURE/END SIGNATURE block but you want to include multiple signatures that you cannot generate with a single gpg invocation, because the keys are stored too securely in different places). Reading byte streams manually is not much fun, so I wrote gpg2txt some years ago, which can give you some more information. The above signature looks like the following:
89 02 1C -- packet type 2 (signature) length 540
        04 00 -- version 4 sigclass 0
        01 -- pubkey 1 (RSA)
        02 -- digest 2 (SHA1)
        00 06 -- hashed data of 6 bytes
                05 02 -- subpacket type 2 (signature creation time) length 4
                        50 F2 AC 50 -- created 1358081104 (2013-01-13 12:45:04)
        00 0A -- unhashed data of 10 bytes
                09 10 -- subpacket type 16 (issuer key ID) length 8
                        7F 11 72 03 23 FB 02 DC -- issuer 7F11720323FB02DC
        D5 0C -- digeststart 213,12
        0F FA -- integer with 4090 bits
                02 D0 [....]
Now, what does this mean? First, all gpg data (signatures, keyrings, ...) is stored as a series of blocks (which makes it trivial to concatenate public keys, keyrings or signatures). Each block has a type and a length. A single signature is a single block. If you create multiple signatures at once (by giving multiple -u options to gpg), there are simply multiple blocks one after the other. Then there is a version and a signature class. Version 4 is the current format; some really old stuff (or things wanting to be compatible with very old stuff) sometimes still has version 3. The signature class indicates what kind of signature it is. There are roughly two signature classes: a verbatim signature (like this one), or a signature of a clearsigned message. With a clearsigned signature, not the file itself is hashed, but a normalized form of it that is supposed to be invariant under the usual modifications by mailers. (This is done so people can still read the text of a mail, but the recipient can still verify it even if there were some slight distortions on the way.) Then follow the type of the key used and the digest algorithm used for creating this signature. The digest algorithm (together with the signature class, see above) describes which hashing algorithm is used. (You never sign a message, you only sign a hashsum of it. (Otherwise your signature would be as big as your message, and it would take ages to create, as asymmetric keys are necessarily very slow.)) This example uses SHA1, which is no longer recommended: as SHA1 has shown some weaknesses, it may get broken in the not too distant future. And then it might be possible to take this signature and claim it is the signature of something else. (If your signatures are still using SHA1, you might want to edit your key preferences and/or set a digest algorithm to use in your ~/.gnupg/gpg.conf.) Then there is some more information about this signature: the time it was generated and the key it was generated with.
Then, after the first two bytes of the message digest (I suppose those were added in cleartext to allow checking whether the message is OK before starting with the expensive cryptographic stuff, but they might not be checked anywhere at all), there is the actual signature. Format-wise the signature itself is the most boring part: it is simply one big number for RSA, or two smaller numbers for DSA. One little detail is still missing: what is this "hashed data" and "unhashed data" about? If the signed digest were only a digest of the message text, then having a timestamp in the signature would not make much sense, as anyone could edit it without making the signature invalid. That's why the digest is computed not only over the signed message, but also over parts of the information about the signature (those are the hashed parts) but not all of it (not the unhashed parts).
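If you do not want to decode the packets by hand, gpg itself can dump this structure; a quick sketch (the file names are examples):

```shell
# strip the ascii armour, then dump the packet structure
gpg --dearmor < message.asc > message.sig
gpg --list-packets message.sig
# the output names the version, public-key and digest algorithms,
# creation time and issuer key ID seen in the dump above
```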

Russ Allbery: afs-monitor 2.4

I love free software. Two more patches from users of my AFS monitoring scripts (primarily for Nagios) on the heels of the last release, so have another release. This one adds another ignore line to check_afs_bos for the output when there are long-running scheduled jobs, and adds an optional regex filter to check_afs_quotas to select the volumes whose quota one wants to check. Thanks to Georg Sluyterman and Christian Ospelkaus for the patches. You can get the latest version from the afs-monitor distribution page. As before, Debian packages have been uploaded to my personal repository. Also, a quick note about a previous journal entry: Philip Martin and Bernhard R. Link pointed me at the relevant part of the gpg source, which will only check signatures if all signatures use the same class and digest. I updated my previous entry. Thank you!

11 January 2013

Russ Allbery: release 1.45 and multiple PGP signatures

release is the tool that I use to manage software releases. This version adds support for signing a tarball with multiple PGP keys at once. For WebAuth, we have an official signing key, but I also wanted to add my personal signature. The minimum supported version of Python was also bumped to 2.5 (see below for why). You can get the latest version from my scripts page. Normally, I wouldn't announce this to everyone, since it's a fairly minor change, except I had a question and a quick Python note. The question: how does one get GnuPG to verify multiple signatures? When I run GnuPG on the resulting detached *.asc file created by this new code, I get the following:
gpg: WARNING: multiple signatures detected.  Only the first will be checked.
gpg: Signature made Thu 10 Jan 2013 03:58:43 PM PST using RSA key ID 5736DE75
gpg: Good signature from "Russ Allbery <>"
gpg:                 aka "Russ Allbery <>"
gpg:                 aka "Russ Allbery <>"
gpg:                 aka "Russ Allbery <>"
That's nice, I suppose, but it's not as helpful as it could be. Why not verify both signatures and report all of the results? I searched both the gpg man page and the Internet for some option I'm missing, without any luck. (Searching Google for that exact message amusingly turns up lots of verbatim output from Debian repository verification and almost nothing else.) If anyone knows, send me email ( or any other convenient address), and I'll update this post with the answer. UPDATE: gpg can currently only verify multiple signatures if the signatures are from keys with the same class and digest. I was attempting this with a 2048-bit RSA key and a 1024-bit DSA key, which gpg does not support. (Yet another reason to replace the WebAuth signing key with a newly-generated key.) Thanks to Philip Martin and Bernhard R. Link for the information! The note is that, way back when I was first learning Python, one of the things that bothered me the most was the lack of safe program execution. The best available was os.spawnv(), which left something to be desired. If you too were bothered by this but don't program in Python enough to keep track of the current state of the art, there's now a subprocess module that has much saner interfaces that work properly, do reasonable things, and avoid the shell. It was added in Python 2.4, with some nicer interfaces added in Python 2.5, so it's fairly safe to use at this point.
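As a small illustration of that module (a minimal sketch; call() and Popen are the interfaces already present in Python 2.4, with conveniences like check_call arriving in 2.5): the argument list is handed to the program directly, so no shell is involved and no quoting is needed.

```python
import sys
from subprocess import PIPE, Popen, call

# call() runs a program and returns its exit status, like a saner spawnv.
status = call([sys.executable, "-c", "pass"])

# Popen with communicate() captures output without ever touching a shell;
# arguments containing spaces or shell metacharacters need no quoting.
proc = Popen([sys.executable, "-c", "print('ok; no $shell * here')"],
             stdout=PIPE)
out, _ = proc.communicate()
```

Because the arguments never pass through a shell, the `$shell` and `*` in the string above are printed literally rather than expanded.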

5 January 2013

Paul Tagliamonte: Updates to dput-ng since version 1.0

Big release notes since 1.0: We've got a new list, feel free to subscribe!
  * Avoid failing on upload if a pre/post upload hook is missing from the
  * Fix "dcut raises FtpUploadException" by correctly initializing the uploader
    classes from dcut (Closes: #696467)
  * Add bash completions for dput-ng (Closes: #695412).
  * Add in a script to set the default profile depending on the building
    distro (Ubuntu support)
  * Fix a bug where meta-class info won't be loaded if the config file has the
    same name.
  * Add an Ubuntu upload target.
  * Added .udeb detection to the check debs hook.
  * Catch the correct exception falling out of bin/dcut
  * Fix the dput manpages to use --uid rather than the old --dm flag.
  * Fix the CLI flag registration by setting required=True
    in cancel and upload.
  * Move make_delayed_upload above the logging call for sanity's sake.
  * Fix "connects to the host even with -s" (Closes: #695347)
Thanks to everyone who's contributed!
     7  Bernhard R. Link
     4  Ansgar Burchardt
     3  Luca Falavigna
     2  Michael Gilbert
     2  Salvatore Bonaccorso
     1  Benjamin Drung
     1  Gergely Nagy
     1  Jakub Wilk
     1  Jimmy Kaplowitz
     1  Luke Faraone
     1  Sandro Tosi
This has been your every-once-in-a-while dput-ng update. We're looking for more code contributions (to make sure everyone's happy), doc updates (etc.) or ideas.

29 November 2012

Bernhard R. Link: Gulliver's Travels

After seeing some book descriptions recently on planet debian, let me add a short recommendation, too. Almost everyone has heard about Gulliver's Travels already, though usually only in a very cursory way. For example: did you know the book describes four journeys, and not only the travel to Lilliput? Given how influential the book has been, that is even more surprising. Words like "endian" or "yahoo" originate from it. My favorite is the third travel, though, especially the academy of Lagado, from which I want to share two gems: " His lordship added, 'That he would not, by any further particulars, prevent the pleasure I should certainly take in viewing the grand academy, whither he was resolved I should go.' He only desired me to observe a ruined building, upon the side of a mountain about three miles distant, of which he gave me this account: 'That he had a very convenient mill within half a mile of his house, turned by a current from a large river, and sufficient for his own family, as well as a great number of his tenants; that about seven years ago, a club of those projectors came to him with proposals to destroy this mill, and build another on the side of that mountain, on the long ridge whereof a long canal must be cut, for a repository of water, to be conveyed up by pipes and engines to supply the mill, because the wind and air upon a height agitated the water, and thereby made it fitter for motion, and because the water, descending down a declivity, would turn the mill with half the current of a river whose course is more upon a level.' He said, 'that being then not very well with the court, and pressed by many of his friends, he complied with the proposal; and after employing a hundred men for two years, the work miscarried, the projectors went off, laying the blame entirely upon him, railing at him ever since, and putting others upon the same experiment, with equal assurance of success, as well as equal disappointment.' 
" "I went into another room, where the walls and ceiling were all hung round with cobwebs, except a narrow passage for the artist to go in and out. At my entrance, he called aloud to me, 'not to disturb his webs.' He lamented 'the fatal mistake the world had been so long in, of using silkworms, while we had such plenty of domestic insects who infinitely excelled the former, because they understood how to weave, as well as spin.' And he proposed further, 'that by employing spiders, the charge of dyeing silks should be wholly saved;' whereof I was fully convinced, when he showed me a vast number of flies most beautifully coloured, wherewith he fed his spiders, assuring us 'that the webs would take a tincture from them; and as he had them of all hues, he hoped to fit everybody s fancy, as soon as he could find proper food for the flies, of certain gums, oils, and other glutinous matter, to give a strength and consistence to the threads.'"

20 October 2012

Bernhard R. Link: Fun with physics: Quantum Leaps

A quantum leap is a leap between two states where there is no state in between. That usually makes it quite small, but also quite sudden (think of lasers). So a quantum leap is a jump not allowing any intermediate states, i.e. an "abrupt change, sudden increase", as Merriam-Webster defines it. This then became a "dramatic advance", and suddenly the meaning shifted from something so small it could not be divided to something quite big. But before you complain that people use the new common meaning instead of the classic physical one, ask yourself: would you prefer it if people kept talking about "disruptive" changes to announce they did something big? Update: I'm using quantum jump in the sense as for example used in If quantum jump is something different to you, my post might not make much sense.

19 October 2012

Bernhard R. Link: Time flies like an arrow

It's now 10 years that I have been a Debian Developer. In retrospect it feels like a very short time, I guess because not so much in Debian's big picture has changed. Except I sometimes have the feeling that fewer people care about users and more people instead prefer solutions incapacitating users. But perhaps I'm only getting old and grumpy, and striving for systems enabling the user to do what they want was only a stop-gap until there were also open source solutions for second-guessing what the user should have wanted. Anyway, thanks to all of you in and around Debian who made the last ten years such a nice and rewarding experience, and I'm looking forward to the next ten years.

30 June 2012

Bernhard R. Link: ACPI power button for the rest of us

The acpi-support maintainer unfortunately decided on 2012-06-21 that having some script installed by a package to cleanly shut down the computer should not be possible without having consolekit and thus dbus installed. So (assuming this package migrates to wheezy, which it most likely will tomorrow) with wheezy you will either have to write your own event script or install consolekit and dbus everywhere. You need two files. One goes in /etc/acpi/events/, for example /etc/acpi/events/powerbtn:
event=button[ /]power
action=/etc/acpi/
This causes a power-button event to call a script /etc/acpi/, which you of course also need:
/sbin/shutdown -h -P now "Power button pressed"
You can also name it differently, but /etc/acpi/ has the advantage that the script from acpi-support-base (in case that package was only removed and not purged) does not call shutdown itself if it is there. (And do not forget to restart acpid, as otherwise it does not know about your event script yet.) For those too lazy, I've also prepared a package acpi-support-minimal, which only contains those scripts (and a postinst restarting acpid so they take effect on installation), which can be fetched via apt-get using
deb wheezy-acpi-minimal main
deb-src wheezy-acpi-minimal main
or directly from there. Sadly the acpi-support maintainer sees no issue at all, and ftp-master doesn't like such tiny packages (which is understandable, but means the solution is more than an apt-get away).
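For reference, the manual setup described above might be scripted roughly like this (a sketch only, to be run as root: the handler script's real name is elided in the post, so `` here is a hypothetical stand-in, and the acpid restart command may differ on your system):

```shell
# Hypothetical file names; adjust to match your own setup.
mkdir -p /etc/acpi/events

# Event definition: match the power button and name the handler to run.
cat > /etc/acpi/events/powerbtn <<'EOF'
event=button[ /]power
action=/etc/acpi/
EOF

# The handler itself: cleanly power the machine off.
cat > /etc/acpi/ <<'EOF'
#!/bin/sh
/sbin/shutdown -h -P now "Power button pressed"
EOF
chmod +x /etc/acpi/

# acpid only reads its event files at startup, so restart it.
invoke-rc.d acpid restart
```

The point of the exercise is that this is all acpid itself needs: a text file mapping the event to a command, and the command itself, with no consolekit or dbus involved.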