
27 May 2020

Kees Cook: security things in Linux v5.5

Previously: v5.4. I got a bit behind on this blog post series! Let's get caught up. Here are a bunch of security things I found interesting in the Linux kernel v5.5 release:
restrict perf_event_open() from LSM
Given the recurring flaws in the perf subsystem, there has been a strong desire to be able to entirely disable the interface. While the kernel.perf_event_paranoid sysctl knob has existed for a while, attempts to extend its control to block all perf_event_open() calls have failed in the past. Distribution kernels have carried the rejected sysctl patch for many years, but now Joel Fernandes has implemented a solution that was deemed acceptable: instead of extending the sysctl, add LSM hooks so that LSMs (e.g. SELinux, AppArmor, etc.) can make these choices as part of their overall system policy.
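For context, the knob itself is an ordinary sysctl. A minimal sketch of how it is used from user space follows; the value summaries are paraphrased from the kernel documentation, and 3 is the out-of-tree extension that distribution kernels have carried:

    # Mainline values: -1 allows (almost) all events for all users;
    # 0 disallows raw tracepoint access, 1 also disallows CPU event
    # access, and 2 also disallows kernel profiling, each for
    # unprivileged users.
    $ sysctl kernel.perf_event_paranoid
    # Distribution kernels carrying the rejected patch additionally
    # accept 3, which blocks perf_event_open() entirely for
    # unprivileged users:
    $ sudo sysctl -w kernel.perf_event_paranoid=3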
generic fast full refcount_t
Will Deacon took the recent refcount_t hardening work for both x86 and arm64 and distilled the implementations into a single architecture-agnostic C version. The result was almost as fast as the x86 assembly version, but it covered more cases (e.g. increment-from-zero) and is now available by default for all architectures. (There is no longer any Kconfig associated with refcount_t; use of the primitive provides full coverage.)
linker script cleanup for exception tables
When Rick Edgecombe presented his work on building Execute-Only memory under a hypervisor, he noted a region of memory that the kernel was attempting to read directly (instead of execute). He rearranged things for his x86-only patch series to work around the issue. Since I'd just been working in this area, I realized the root cause of this problem was the location of the exception table (which is strictly a lookup table and is never executed), so I built a fix and applied it to all architectures, since it turns out the exception tables for almost all architectures are just data tables. Hopefully this will help clear the path for more Execute-Only memory work on all architectures. In the process, I also updated the section fill bytes on x86 to be a trap (0xCC, int3) instead of a NOP instruction, so functions would need to be targeted more precisely by attacks.
KASLR for 32-bit PowerPC
Joining many other architectures, Jason Yan added kernel text base-address offset randomization (KASLR) to 32-bit PowerPC.
seccomp for RISC-V
After a bit of a long road, David Abdurachmanov has added seccomp support to the RISC-V architecture. The series uncovered some more corner cases in the seccomp selftests code, which is always nice since then we get to make it more robust for the future!
seccomp USER_NOTIF continuation
When the seccomp SECCOMP_RET_USER_NOTIF interface was added, it seemed like it would only be used in very limited conditions, so the idea of needing to handle "normal" requests didn't seem very onerous. However, since then, it has become clear that having a monitor process perform lots of "normal" open() calls on behalf of the monitored process started to look more and more slow and fragile. To deal with this, it became clear that there needed to be a way for the USER_NOTIF interface to indicate that seccomp should just continue as normal and allow the syscall without any special handling. Christian Brauner implemented SECCOMP_USER_NOTIF_FLAG_CONTINUE to get this done. It comes with a bit of a disclaimer due to the chance that monitors may use it in places where ToCToU is a risk, and for possible conflicts with SECCOMP_RET_TRACE. But overall, this is a net win for container monitoring tools.
EFI_RNG_PROTOCOL for x86
Some EFI systems provide a Random Number Generator interface, which is useful for gaining some entropy in the kernel during very early boot. The arm64 boot stub has been using this for a while now, but Dominik Brodowski has now added support for x86 to do the same. This entropy is useful for kernel subsystems performing very early initialization where random numbers are needed (like randomizing aspects of the SLUB memory allocator).
FORTIFY_SOURCE for MIPS
As has been enabled on many other architectures, Dmitry Korotin got MIPS building with CONFIG_FORTIFY_SOURCE, so compile-time (and some run-time) buffer overflows during calls to the memcpy() and strcpy() families of functions will be detected.
limit copy_{to,from}_user() size to INT_MAX
As done for VFS, vsnprintf(), and strscpy(), I went ahead and limited the size of copy_to_user() and copy_from_user() calls to INT_MAX in order to catch any weird overflows in size calculations.
Other things
Alexander Popov pointed out some more v5.5 features that I missed in this blog post. I'm repeating them here, with some minor edits/clarifications. Thank you Alexander! (Edit: added Alexander Popov's notes.)

That's it for v5.5! Let me know if there's anything else that I should call out here. Next up: Linux v5.6.

2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

18 March 2017

Vincent Sanders: A rose by any other name would smell as sweet

Often I end up dealing with code that works but might not be of the highest quality. While quality is subjective, I like to use the idea of "code smell" to convey what I mean: a list of indicators that, taken together, help to identify code that might benefit from some improvement.

Such smells may include:
I am most certainly not alone in using this approach, and Fowler et al. have covered this subject in the literature much better than I can here. One point I will raise, though, is that some programmers dismiss code that exhibits these traits as "legacy" and immediately suggest a fresh implementation. There are varying opinions on when a rewrite is the appropriate solution, ranging from never to always, but in my experience making the old working code smell nice is almost always less effort and risk than a rewrite.
Tests
When I come across smelly code, and I decide it is worthwhile improving it, I often discover the biggest smell is lack of test coverage. Do remember this is just one code smell and on its own might not be indicative; my experience is that smelly code seldom has effective test coverage while fresh code often does.

Test coverage is generally understood to be the percentage of source code lines and decision paths used when instrumented code is exercised by a set of tests. Like many metrics developer tools produce, "coverage percentage" is often misused by managers as a proxy for code quality. Both Fowler and Marick have written about this but sufficient to say that for a developer test coverage is a useful tool but should not be misapplied.

Although refactoring without tests is possible, the chances of unintended consequences are proportionally higher. I often approach such a refactor by enumerating all the callers and constructing a description of the used interface beforehand, then checking that the interface is not broken by the refactor. At that point it is probably worth writing a unit test to automate the checks.

Because of this I have changed my approach to such refactoring to start by ensuring there is at least basic API code coverage. This may not yield the fashionable 85% coverage target but is useful and may be extended later if desired.

It is widely known and equally widely ignored that for maximum effectiveness unit tests must be run frequently and developers take action to rectify failures promptly. A test that is not being run or acted upon is a waste of resources both to implement and maintain which might be better spent elsewhere.

For projects I contribute to frequently, I try to ensure that the CI system is running the coverage target, and hence the unit tests, which ensures any test-breaking changes will be highlighted promptly. I believe the slight extra overhead of executing the instrumented tests is repaid by having the coverage metrics available to developers to aid in spotting areas with inadequate tests.
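As a concrete illustration, basic line coverage for a C module can be gathered with GCC's instrumentation and the lcov front end. This is a minimal sketch; the file names are illustrative rather than taken from the project:

    # build the tests with coverage instrumentation
    gcc --coverage -o test_mimesniff test_mimesniff.c mimesniff.c
    # running the tests writes .gcda counter files alongside the objects
    ./test_mimesniff
    # collect the counters and render an HTML report
    lcov --capture --directory . --output-file coverage.info
    genhtml coverage.info --output-directory coverage-html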
Example
A short example will help illustrate my point. When a web browser receives an object over HTTP, the server can supply a MIME type in a content-type header that helps the browser interpret the resource. However, this metadata is often problematic (sorry, that should read "a misleading lie"), so the actual content must be examined to get a better answer for the user. This is known as MIME sniffing and, of course, there is a living specification.

The source code that provides this API (linked to rather than included for brevity) has a few smells:
While some of these are obvious, spotting the non-use of the global string table and the API complexity required detailed knowledge of the codebase, which just highlights how subjective the sniff test can be. There is also one huge air freshener in all of this, which definitely comes from experience, and that is the module's author: their name at the top would ordinarily be cause for me to move on, but I needed an example!

The first thing to check is the API usage:

$ git grep -i -e mimesniff_compute_effective_type --or -e mimesniff_init --or -e mimesniff_fini
content/hlcache.c: error = mimesniff_compute_effective_type(handle, NULL, 0,
content/hlcache.c: error = mimesniff_compute_effective_type(handle,
content/hlcache.c: error = mimesniff_compute_effective_type(handle,
content/mimesniff.c:nserror mimesniff_init(void)
content/mimesniff.c:void mimesniff_fini(void)
content/mimesniff.c:nserror mimesniff_compute_effective_type(llcache_handle *handle,
content/mimesniff.h:nserror mimesniff_compute_effective_type(struct llcache_handle *handle,
content/mimesniff.h:nserror mimesniff_init(void);
content/mimesniff.h:void mimesniff_fini(void);
desktop/netsurf.c: ret = mimesniff_init();
desktop/netsurf.c: mimesniff_fini();

This immediately shows me that this API is used in only a very small area. This is often not the case, but the general approach still applies.

After a little investigation, the usage is effectively that the mimesniff_init API must be called before the mimesniff_compute_effective_type API, and mimesniff_fini releases the initialised resources.

A simple test case was added to cover the API; it exercised the behaviour both when init was called before the computation and when it was not, along with some simple tests for a limited number of well-behaved inputs.

By changing to the global string table, the initialisation and finalisation APIs can be removed altogether, along with a large amount of global context and pre-processor macros. This single change removes a lot of smell from the module and raises test coverage, both because the global string table already has good coverage and because there are now many fewer lines and conditionals to check in the mimesniff module.

I stopped the refactor at this point, but were this more than an example I probably would have:
Conclusion
The approach examined here reduces the smell of code in an incremental, testable way to improve the codebase going forward. This is mainly necessary on larger, complex codebases where technical debt and bit-rot are real issues that can quickly overwhelm a codebase if not kept in check.

This technique is subjective, but it helps a programmer to quantify and examine a piece of code in a structured fashion. However, it is only a tool and should not be over-applied, nor used as a metric to proxy for code quality.

15 February 2017

Antoine Beaupré: A look at password managers

As we noted in an earlier article, passwords are a liability and we'd prefer to get rid of them, but the current reality is that we do use a plethora of passwords in our daily lives. This problem is especially acute for technology professionals, particularly system administrators, who have to manage a lot of different machines. But it also affects regular users who still use a large number of passwords, from their online bank to their favorite social-networking site. Despite the remarkable memory capacity of the human brain, humans are actually terrible at recalling even short sets of arbitrary characters with the precision needed for passwords. Therefore humans reuse passwords, make them trivial or guessable, write them down on little paper notes and stick them on their screens, or just reset them by email every time. Our memory is undeniably failing us and we need help, which is where password managers come in. Password managers allow users to store an arbitrary number of passwords and just remember a single password to unlock them all. But there is a large variety of password managers out there, so which one should we be using? At my previous job, an inventory was done of about 40 different free-software password managers in different stages of development and of varying quality. So, obviously, this article will not be exhaustive, but instead focus on a smaller set of some well-known options that may be interesting to readers.

KeePass: the popular alternative
The most commonly used password-manager design pattern is to store passwords in a file that is encrypted and password-protected. The most popular free-software password manager of this kind is probably KeePass. An important feature of KeePass is the ability to auto-type passwords in forms, most notably in web browsers. This feature makes KeePass really easy to use, especially considering it also supports global key bindings to access passwords. KeePass databases are designed for simultaneous access by multiple users, for example, using a shared network drive. KeePass has a graphical interface written in C#, so it uses the Mono framework on Linux. A separate project, called KeePassX, is a clean-room implementation written in C++ using the Qt framework. Both support the AES and Twofish encryption algorithms, although KeePass recently added support for the ChaCha20 cipher. AES key derivation is used to generate the actual encryption key for the database, but the latest release of KeePass also added support for Argon2, which was the winner of the July 2015 password-hashing competition. Both programs are more or less equivalent, although the original KeePass seems to have more features in general. The KeePassX project has recently been forked into another project, now called KeePassXC, that implements a set of new features that are present in KeePass but missing from KeePassX, like:
  • auto-type on Linux, Mac OS, and Windows
  • database merging which allows multi-user support
  • using the web site's favicon in the interface
So far, the maintainers of KeePassXC seem to be open to re-merging the project "if the original maintainer of KeePassX in the future will be more active and will accept our merge and changes". I can confirm that, at the time of writing, the original KeePassX project now has 79 pending pull requests and only one pull request was merged since the last release, which was 2.0.3 in September 2016. While KeePass and derivatives allow multiple users to access the same database through the merging process, they do not support multi-party access to a single database. This may be a limiting factor for larger organizations, where you may need, for example, a different password set for different technical support team levels. The solution in this case is to use separate databases for each team, with each team using a different shared secret.

Pass: the standard password manager?
I am currently using password-store, or pass, as a password manager. It aims to be "the standard Unix password manager". Pass is a GnuPG-based password manager that offers a surprising number of features given its small size:
  • copy-paste support
  • Git integration
  • multi-user/group support
  • pluggable extensions (in the upcoming 1.7 release)
The command-line interface is simple to use and intuitive. The following will, for example, create a pass repository and a 20-character password for your LWN account, copying it to the clipboard:
    $ pass init
    $ pass generate -c lwn 20
The main issue with pass is that it doesn't encrypt the name of those entries: if someone were to compromise my machine, they could easily see which sites I have access to simply by listing the passwords stored in ~/.password-store. This is a deliberate design decision by the upstream project, as stated by a mailing list participant, Allan Odgaard:
Using a single file per item has the advantage of shell completion, using version control, browse, move and rename the items in a file browser, edit them in a regular editor (that does GPG, or manually run GPG first), etc.
Odgaard goes on to point out that there are alternatives that do encrypt the entire database (including the site names) if users really need that feature. Furthermore, there is a tomb plugin for pass that encrypts the password store in a LUKS container (called a "tomb"), although it requires explicitly opening and closing the container, which makes it only marginally better than using full disk encryption system-wide. One could also argue that password file names do not hold secret information, only the site name and username, perhaps, and that doesn't require secrecy. I do believe those should be kept secret, however, as they could be used to discover (or prove) which sites you have access to and then used to perform other attacks. One could draw a parallel with the SSH known_hosts file, which used to be plain text but is now hashed so that hosts are more difficult to discover. Also, sharing a database for multi-user support will require some sort of file-sharing mechanism. Given the integrated Git support, this will likely involve setting up a private Git repository for your team, something which may not be accessible to the average Linux user. Nothing keeps you, however, from sharing the ~/.password-store directory through another file sharing mechanism like (say) Syncthing or Dropbox. You can use multiple distinct databases easily using the PASSWORD_STORE_DIR environment variable. For example, you could have a shell alias to use a different repository for your work passwords with:
    alias work-pass="PASSWORD_STORE_DIR=~/work-passwords pass"
Group support comes from a clever use of the GnuPG multiple-recipient encryption support. You simply have to specify multiple OpenPGP identities when initializing the repository, which also works in subdirectories:
    $ pass init -p Ateam me@example.com joelle@example.com
    mkdir: created directory '/home/me/.password-store/Ateam'
    Password store initialized for me@example.com, joelle@example.com
    [master 0e3dbe7] Set GPG id to me@example.com, joelle@example.com.
     1 file changed, 2 insertions(+)
     create mode 100644 Ateam/.gpg-id
The above will configure pass to encrypt the passwords in the Ateam directory for me@example.com and joelle@example.com. Pass depends on GnuPG to do the right thing when encrypting files and how those identities are treated is entirely delegated to GnuPG's default configuration. This could lead to problems if arbitrary keys can be injected into your key ring, which could confuse GnuPG. I would therefore recommend using full key fingerprints instead of user identifiers. Regarding the actual encryption algorithms used, in my tests, GnuPG 1.4.18 and 2.1.18 seemed to default to 256-bit AES for encryption, but that has not always been the case. The chosen encryption algorithm actually depends on the recipient's key preferences, which may vary wildly: older keys and versions may use anything from 128-bit AES to CAST5 or Triple DES. To figure out which algorithm GnuPG chose, you may want to try this pipeline:
    $ echo test | gpg -e -r you@example.com | gpg -d -v
    [...]
    gpg: encrypted with 2048-bit RSA key, ID XXXXXXX, created XXXXX
      "You Person You <you@example.com>"
    gpg: AES256 encrypted data
    gpg: original file name=''
    test
As you can see, pass is primarily a command-line application, which may make it less accessible to regular users. The community has produced different graphical interfaces that are either using pass directly or operate on the storage with their own GnuPG integration. I personally use pass in combination with Rofi to get quick access to my passwords, but less savvy users may want to try the QtPass interface, which should be more user-friendly. QtPass doesn't actually depend on pass and can use GnuPG directly to interact with the pass database; it is available for Linux, BSD, OS X, and Windows.

Browser password managers
Most users are probably already using a password manager through their web browser's "remember password" functionality. For example, Chromium will ask if you want it to remember passwords and encrypt them with your operating system's facilities. For Windows, this encrypts the passwords with your login password and, for GNOME, it will store the passwords in the gnome-keyring storage. If you synchronize your Chromium settings with your Google account, Chromium will store those passwords on Google's servers, encrypted with a key that is stored in the Google Account itself. So your passwords are then only as safe as your Google account. Note that this was covered here in 2010, although back then Chromium didn't synchronize with the Google cloud or encrypt with the system-level key rings. That facility was only added in 2013. In Firefox, there's an optional, profile-specific master password that unlocks all passwords. In this case, the issue is that browsers are generally always open, so the vault is always unlocked. And this is for users that actually do pick a master password; users are often completely unaware that they should set one. The unlocking mechanism is a typical convenience-security trade-off: either users need to constantly input their master passwords to log in or they don't, and the passwords are available in the clear. In this case, Chromium's approach of actually asking users to unlock their vault seems preferable, even though the developers actually refused to implement the feature for years. Overall, I would recommend against using a browser-based password manager. Even if it is not used for critical sites, you will end up with hundreds of such passwords that are vulnerable while the browser is running (in the case of Firefox) or at the whim of Google (in the case of Chromium). Furthermore, the "auto-fill" feature that is often coupled with browser-based password managers is often vulnerable to serious attacks, as mentioned below. Finally, because browser-based managers generally lack a proper password generator, users may fail to use properly generated passwords, so they can then be easily broken. A password generator has been requested for Firefox, according to this feature request opened in 2007, and there is a password generator in Chrome, but it is disabled by default and hidden in the mysterious chrome://flags URL.

Other notable password managers
Another alternative password manager, briefly mentioned in the previous article, is the minimalistic Assword password manager that, despite its questionable name, is also interesting. Its main advantage over pass is that it uses a single encrypted JSON file for storage, and therefore doesn't leak the name of the entries by default. In addition to copy/paste, Assword also supports automatically entering passphrases in fields using the xdo library. Like pass, it uses GnuPG to encrypt passphrases. According to Assword maintainer Daniel Kahn Gillmor in email, the main issue with Assword is "interaction between generated passwords and insane password policies". He gave the example of the Time-Warner Cable registration form that requires, among other things, "letters and numbers, between 8 and 16 characters and not repeat the same characters 3 times in a row". Another well-known password manager is the commercial LastPass service which released a free-software command-line client called lastpass-cli about three years ago. Unfortunately, the server software of the lastpass.com service is still proprietary. And given that LastPass has had at least two serious security breaches since that release, one could legitimately question whether this is a viable solution for storing important secrets. In general, web-based password managers expose a whole new attack surface that is not present in regular password managers. A 2014 study by University of California researchers showed that, out of five password managers studied, every one of them was vulnerable to at least one of the vulnerabilities studied. LastPass was, in particular, vulnerable to a cross-site request forgery (CSRF) attack that allowed an attacker to bypass account authentication and access the encrypted database.

Problems with password managers
When you share a password database within a team, how do you remove access to a member of the team? While you can, for example, re-encrypt a pass database with new keys (thereby removing or adding certain accesses) or change the password on a KeePass database, a hostile party could have made a backup of the database before the revocation. Indeed, in the case of pass, older entries are still in the Git history. So access revocation is a problematic issue found with all shared password managers, as it may actually mean going through every password and changing them online. This fundamental problem with shared secrets can be better addressed with a tool like Vault or SFLvault. Those tools aim to provide teams with easy ways to store dynamic tokens like API keys or service passwords and share them not only with other humans, but also make them accessible to machines. The general idea of those projects is to store secrets in a central server and send them directly to relevant services without human intervention. This way, passwords are not actually shared anymore, which is similar in spirit to the approach taken by centralized authentication systems like Kerberos. If you are looking at password management for teams, those projects may be worth a look. Furthermore, some password managers that support auto-typing were found to be vulnerable to HTML injection attacks: if some third-party ad or content is able to successfully hijack the parent DOM content, it can masquerade as a form that could fool auto-typing software, as demonstrated by this paper that was submitted at USENIX 2014. Fortunately, KeePass was not vulnerable according to the security researchers, but LastPass was, again, vulnerable.
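For pass specifically, revoking someone amounts to re-initializing the store (or a subdirectory) with the remaining recipients, which re-encrypts the affected entries. A minimal sketch, reusing the illustrative Ateam identities from above:

    # drop joelle@example.com by listing only the remaining recipients;
    # pass re-encrypts the Ateam entries for that new key set
    $ pass init -p Ateam me@example.com
    # the revoked key can still read copies kept in backups or in the
    # Git history, so the passwords themselves still must be changed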

Future of password managers?
All of the solutions discussed here assume you have a trusted computer you regularly have access to, which is a usage pattern that seems to be disappearing with a majority of the population. You could consider your phone to be that trusted device, yet a phone can be lost or stolen more easily than a traditional workstation or even a laptop. And while KeePass has Android and iOS ports, those do not resolve the question of how to share the password storage among those devices or how to back them up. Password managers are fundamentally file-based, and the "file" concept seems to be quickly disappearing, faster than we technologists sometimes like to admit. Looking at some relatives' use of computers, I notice it is less about "files" than images, videos, recipes, and various abstract objects that are stored in the "cloud". They do not use local storage so much anymore. In that environment, password managers lose their primary advantage, which is a local, somewhat offline file storage that is not directly accessible to attackers. Therefore certain password managers are specifically designed for the cloud, like LastPass or web browser profile synchronization features, without necessarily addressing the inherent issues with cloud storage and opening up huge privacy and security issues that we absolutely need to address. This is where the "password hasher" design comes in. Also known as "stateless" or "deterministic" password managers, password hashers are emerging as a convenient solution that could possibly replace traditional password managers as users switch from generic computing platforms to cloud-based infrastructure. We will cover password hashers and the major security challenges they pose in a future article.
Note: this article first appeared in the Linux Weekly News.


18 August 2016

Zlatan Todorić: DebConf16 - new age in Debian community gathering

DebConf16
Finally got some time to write this blog post. DebConf for me is always something special, a family gathering of a weird combination of geeks (or is weird a default geek state?). To be honest, I finally can compare Debian as a hacker conference to other so-called hacker conferences. With that hat on, I can say that Debian is by far the most organized and highest quality conference. Maybe I am biased, but I don't care too much about that. I simply love Debian and that is no secret. So let's dive into my view on DebConf16, which was held in Cape Town, South Africa.

Cape Town
This was the first time we had the conference on the African continent (and I now see for the first time a DebConf bid for Asia, which leaves only Australia and the beautiful Pacific islands to start a bid). Cape Town by itself is a pretty much Europe-like city. That was kind of a bummer for me on the first day, especially as we were hosted at the University of Cape Town (which is quite a beautiful uni) and the surrounding neighborhood was very European. Almost right after the first day I was fine, because I started exploring the huge city. Cape Town is really huge; by official stats it has ~4 million people, and unofficially ~6 million. Certainly a lot to explore, and I hope one day to be back there (as soon as possible, actually).

The good, bad and ugly
I will start with the bad and the ugly, as I want to finish on good notes. Racism down there is still HUGE. You don't have signs on the road saying that, but there is clearly separation between white and black people. The houses near the uni all had fences on their walls (most of them even electric ones with sharp blades on them) and bars on the windows. That just brings tension and certainly doesn't improve anything. To be honest, if someone wants to break in they still can do so easily; the fences are perhaps meant to intimidate, but they actually only bring tension (my personal view). Many houses also have a sign for Armed Force Response (something along those lines), where in case someone starts breaking in, armed forces will come to protect the home. Across the workforce, white people appear to hold most of the profitable/big-business positions and fields, while black people are street workers, bar workers, etc. On the street you can feel, from time to time, the tension between people. Going out to bars also showed the separation: they were either almost exclusively white or exclusively black. A very sad state to see. Sharing love and mixing is something that pushes us forward, and here I saw clear blockades to such things. The bad part of Cape Town (and this is not special to Cape Town but common to almost all major cities) is that small crime is widespread. Pickpocketing is something you must pay attention to. To me, personally, nothing happened, but I heard a lot of stories from friends on whom such activities were attempted (although I am not sure whether the criminals succeeded). Enough of the bad, as my blog post will not change it, and it is a topic for debate and active involvement which I unfortunately can't take on at this moment. THE GOOD! There are so many great local people I met! As I mentioned, I want to visit that city again and again and again. If you don't fear those bad things, this city has great local cuisine, a lot of great people, an awesome art soul, and they dance with heart (I guess when you live in rough times, you try to use your free time at its best).
There were differences between white and black bars/clubs: white ones were almost like standard European ones, a lot of drinking and not much dancing, while black ones had a lot of dancing and not much drinking (maybe economic power has something to do with it, but I certainly felt more love in the black bars). Cape Town has an awesome mountain, Table Mountain. I went hiking with my friends, and I must say (again, to myself): do the damn hiking as much as possible. After every hike I feel so inspired that I start thinking I hate myself for not doing it more often! The view from Table Mountain is just majestic (you can even see the Cape of Good Hope). The WOW moments just fire up in you. Now let's move to DebConf itself. As always, organization was at quite a high level. I loved the badge design; it had a map and a nice amount of information on it. The place we stayed was kind of not that good, but if you take into account that those are old student dorms (and we all were in a female student dorm :D) it is pretty fancy by its own account. Talks were near, which is always good. The general layout of talks and the front desk position was perfect, in my opinion. All in one place, basically. Wine and Cheese this year was a kind of funny story because of the cheese restrictions, but the Cheese cabal managed to pull things off. It was actually very well organized. I met some new people during the party/ceremony, which always makes me grow as a person. The cultural mix at DebConf is just fantastic. Not only do you learn a lot about Debian and hacking on it, but the sheer cultural diversity makes this small con such a vibrant place and home to a lot. The Debian Dinner happened in the Aquarium, where I had a nice dinner and chat with my old friends. The Aquarium by itself is a place you can visit to see a lot of strange creatures that live on this third rock from the Sun. Speaking of old friends: I love that Apollo rejoined us again (after missing DebConf15), seeing Joel again (and he finally visited Banja Luka as an aftermath!), mbiebl, ah, moray, Milan, santiago and tons of others. Of course we always miss a few, such as zack and vorlon this year (but they had pretty okay-ish reasons, I would say). Speaking of new friends, I made a few local friends, which makes me happy, and at least one Indian/Hindu friend. Why did I mention this separately? Well, we had an accident during the Group Photo (btw, where is our Lithuanian, nowadays Germany-based, photographer?!) where 3 laptops of our GSoC students were stolen :( . I was lucky enough to, on behalf of Purism, donate a Librem 11 prototype to one of them, which ended up being the Indian friend. She is working on real-time communications, which is of interest also to Purism for our future projects. Regarding the Debian Day Trip, Joel and I opted out and went on our own adventure through Cape Town, in pursuit of meeting and talking to local people and finding out interesting things, which proved to be a great decision. We found out about their First Thursday of the month festival and about the Mama Africa restaurant. That restaurant is going into special memories (me playing drums with a local band must always be a special memory, right?!). Huh, to be honest, writing about DebConf would probably need a book by itself, and I always try to keep my posts as short as possible, so I will try to stop here (maybe I'll write a few more bits about it in the future, but hardly). Now the notes. Although I saw the racial segregation, I also saw the hope. These things need time.
I come from a country that is torn apart by nationalism and religious hate, so I understand these issues are hard and deep on so many levels. While the tensions are high, I see people trying to talk about it, trying to find solutions, and I feel it is slowly transforming into an open society, where we will realize that there is only one race on this planet and it is called the HUMAN RACE. We are all earthlings, and the sooner we realize that, the sooner we will be on the path to really build society up, and not the fake things that actually enslave our minds. In the end I just want to say: thank you DebConf, thank you Debian; everyone could learn from this community as a model (which can be improved!) for future societies.

22 June 2016

Andrew Cater: Why I must use Free Software - and why I tell others to do so

My work colleagues know me well as a Free/Libre software zealot, constantly pointing out to them how people should behave, how FLOSS software trumps commercial software and how this is the only way forward. This has been so for the last 20-odd years. It's a strain to argue this repeatedly: at various times, I have been asked to set out more clearly why I use FLOSS, what the advantages are, and why and how to contribute to FLOSS software.

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here
...
In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish."
[John Perry Barlow - Declaration of the independence of cyberspace 1996 https://www.eff.org/cyberspace-independence]

That's some of it right there: I was seduced by a modem and the opportunities it gave. I've lived in this world since 1994, come to appreciate it and never really had the occasion to regret it.

I'm involved in the Debian community - which is very much a "do-ocracy" - and I've lived with Debian GNU/Linux since 1995 and not had much cause to regret that either, though I do regret that force of circumstance has meant that I can't contribute as much as I'd like. Pretty much every machine I touch ends up running Debian, one way or the other, or should do if I had my way.
Digging through my emails since then on the various mailing lists: some of them are deeply technical, though fewer these days; some are Debian-political; most are trying to help people with problems / report successes or, occasionally, thanks and social chit-chat. Most people in the project have never met me - though that's not unusual in an organisation with a thousand developers spread worldwide - and so the occasional chance to talk to people in real life is invaluable.

The crucial thing is that there is common purpose and common intelligence - however crazy mailing list flame wars can get sometimes - and committed, caring people. Some of us may be crazy zealots, some picky and argumentative - Debian is what we have in common, pretty much.

It doesn't depend on physical ability. Espy (Joel Klecker) was one of our best and brightest until his death at age 21: almost nobody knew he was dying until after his death. My own physical limitations are pretty much irrelevant provided I can type.

It does depend on collaboration and the strange, dysfunctional family that is our community and the wider FLOSS community in which we share and in which some of us have multiple identities in working with different projects.
This is going to end up too long for Planet Debian - I'll end this post here and then continue with some points on how to contribute and why employers should let their employees work on FLOSS.




7 July 2015

Lunar: Reproducible builds: week 10 in Stretch cycle

What happened in the reproducible builds effort this week:

Media coverage

Daniel Stender published in Admin Magazine an English translation of the article which originally appeared in Linux Magazin.

Toolchain fixes

Fixes landed in the Debian archive: Lunar submitted to Debian the patch already sent upstream adding a --clamp-mtime option to tar. Patches have been submitted to add support for SOURCE_DATE_EPOCH to txt2man (Reiner Herrmann), epydoc (Reiner Herrmann), GCC (Dhole), and Doxygen (akira). Dhole uploaded a new experimental debhelper to the reproducible repository which exports SOURCE_DATE_EPOCH (a sketch of the general pattern appears at the end of this post). As part of the experiment, the patch also sets TZ to UTC, which should help with most timezone issues; it might still be problematic for packages which change their settings based on this. Mattia Rizzolo sent upstream a patch originally written by Lunar to make the generate-id() function deterministic in libxslt. While that patch was quickly rejected by upstream, Andrew Ayer came up with a much better one, which sadly could have some performance impact. Daniel Veillard replied with another patch that should be deterministic in most cases without needing extra data structures; its impact is currently being investigated by retesting packages on reproducible.debian.net. akira added a new option to sbuild for configuring the path in which packages are built; this will be needed for the srebuild script. Niko Tyni asked Perl upstream about its use of the __DATE__ and __TIME__ C preprocessor macros.

Packages fixed

The following 143 packages became reproducible due to changes in their build dependencies: alot, argvalidate, astroquery, blender, bpython, brian, calibre, cfourcc, chaussette, checkbox-ng, cloc, configshell, daisy-player, dipy, dnsruby, dput-ng, dsc-statistics, eliom, emacspeak, freeipmi, geant321, gpick, grapefruit, heat-cfntools, imagetooth, jansson, jmapviewer, lava-tool, libhtml-lint-perl, libtime-y2038-perl, lift, lua-ldoc, luarocks, mailman-api, matroxset, maven-hpi-plugin, mknbi, mpi4py, mpmath, msnlib, munkres, musicbrainzngs, nova, pecomato, pgrouting, pngcheck, powerline, profitbricks-client, pyepr, pylibssh2, pylogsparser, pystemmer, pytest, python-amqp, python-apt, python-carrot, python-crypto, python-darts.lib.utils.lru, python-demgengeo, python-graph, python-mock, python-musicbrainz2, python-pathtools, python-pskc, python-psutil, python-pypump, python-repoze.sphinx.autointerface, python-repoze.tm2, python-repoze.what-plugins, python-repoze.what, python-repoze.who-plugins, python-xstatic-term.js, reclass, resource-agents, rgain, rttool, ruby-aggregate, ruby-archive-tar-minitar, ruby-bcat, ruby-blankslate, ruby-coffee-script, ruby-colored, ruby-dbd-mysql, ruby-dbd-odbc, ruby-dbd-pg, ruby-dbd-sqlite3, ruby-dbi, ruby-dirty-memoize, ruby-encryptor, ruby-erubis, ruby-fast-xs, ruby-fusefs, ruby-gd, ruby-git, ruby-globalhotkeys, ruby-god, ruby-hike, ruby-hmac, ruby-integration, ruby-ipaddress, ruby-jnunemaker-matchy, ruby-memoize, ruby-merb-core, ruby-merb-haml, ruby-merb-helpers, ruby-metaid, ruby-mina, ruby-net-irc, ruby-net-netrc, ruby-odbc, ruby-packet, ruby-parseconfig, ruby-platform, ruby-plist, ruby-popen4, ruby-rchardet, ruby-romkan, ruby-rubyforge, ruby-rubytorrent, ruby-samuel, ruby-shoulda-matchers, ruby-sourcify, ruby-test-spec, ruby-validatable, ruby-wirble, ruby-xml-simple, ruby-zoom, ryu, simplejson, spamassassin-heatu, speaklater, stompserver, syncevolution, syncmaildir, thin, ticgit, tox, transmissionrpc, vdr-plugin-xine, waitress, whereami, xlsx2csv, zathura.

The following packages became reproducible after getting fixed:
Some uploads fixed some reproducibility issues but not all of them:
Patches submitted which have not made their way to the archive yet:

reproducible.debian.net

A new package set for the X Strike Force has been added. (h01ger) Bugs tagged with locale are now visible in the statistics. (h01ger) Some work has been done to add tests for NetBSD. (h01ger) Many changes by Mattia Rizzolo have been merged on the whole infrastructure.

debbindiff development

Version 26 was released on June 28th, fixing the comparison of files of unknown format. (Lunar) A missing dependency identified in python-rpm, affecting debbindiff installation without recommended packages, was promptly fixed by Michal Čihař. Lunar also started a massive code rearchitecture to enhance code reuse and enable new features; nothing visible yet, though.

Documentation update

josch and Mattia Rizzolo documented how to reschedule packages from Alioth.

Package reviews

142 obsolete reviews have been removed, 344 added and 107 updated this week. Chris West (Faux) filed 13 new bugs for packages failing to build from source. The following new issues have been added: snapshot_placeholder_replaced_with_timestamp_in_pom_properties, different_encoding, timestamps_in_documentation_generated_by_org_mode and timestamps_in_pdf_generated_by_matplotlib.
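
Since several of the toolchain fixes above revolve around SOURCE_DATE_EPOCH, here is a minimal C sketch of the pattern those patches implement: use the variable as the embedded timestamp when it is set and parses cleanly, and fall back to the current time otherwise. This illustrates the convention only; it is not the actual debhelper or GCC patch.

  /* Honour SOURCE_DATE_EPOCH for any timestamp embedded in build output. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  static time_t build_timestamp(void)
  {
      const char *epoch = getenv("SOURCE_DATE_EPOCH");
      if (epoch != NULL) {
          char *end;
          unsigned long long value = strtoull(epoch, &end, 10);
          if (end != epoch && *end == '\0')
              return (time_t)value;   /* deterministic build date */
      }
      return time(NULL);              /* normal, non-reproducible path */
  }

  int main(void)
  {
      char buf[64];
      time_t t = build_timestamp();
      strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", gmtime(&t));
      printf("Build date: %s UTC\n", buf);
      return 0;
  }

Two builds run with the same SOURCE_DATE_EPOCH value then embed the same date, which is the whole point of the variable.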

29 June 2015

Lunar: Reproducible builds: week 9 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

Norbert Preining uploaded texinfo/6.0.0.dfsg.1-2 which makes texinfo indices reproducible. Original patch by Chris Lamb. Lunar submitted recently rebased patches to make the file order of files inside .deb stable. akira filed #789843 to make tex4ht stop printing timestamps in its HTML output by default. Dhole wrote a patch for xutils-dev to prevent timestamps when creating gzip compressed files. Reiner Herrmann sent a follow-up patch for wheel to use UTC as the timezone when outputting timestamps. Mattia Rizzolo started a discussion regarding the failure of subversion to build from source when -Wdate-time is added to CPPFLAGS, which happens when asking dpkg-buildflags to use the reproducible profile: SWIG errors out because it doesn't recognize the aforementioned flag. Trying to get the .buildinfo specification to a more definitive state, Lunar started a discussion on storing the checksums of the binary packages used in the dpkg status database. akira discovered, while proposing a fix for simgrid, that CMake's internal command to create tarballs records a timestamp in the gzip header. One way to prevent it is to use the GZIP environment variable to ask gzip not to store timestamps, but this will soon become unsupported; it is up for discussion whether the best place to fix the problem would be CMake itself, fixing it for all CMake users at once. (A sketch of producing a gzip stream with a zeroed header timestamp appears at the end of this post.)

Infrastructure-related work

Andreas Henriksson did a delayed NMU upload of pbuilder which adds minimal support for build profiles and includes several fixes from Mattia Rizzolo affecting reproducibility tests. Niels Thykier uploaded lintian which both raises the severity of package-contains-timestamped-gzip and avoids false positives for this tag (thanks to Tomasz Buchert). Petter Reinholdtsen filed #789761 suggesting that how-can-i-help should prompt its users about fixing reproducibility issues.

Packages fixed

The following packages became reproducible due to changes in their build dependencies: autorun4linuxcd, libwildmagic, lifelines, plexus-i18n, texlive-base, texlive-extra, texlive-lang.

The following packages became reproducible after getting fixed:
Some uploads fixed some reproducibility issues but not all of them:
Untested uploads (as they are not in main):
Patches submitted which have not made their way to the archive yet:

debbindiff development

debbindiff/23 includes a few bugfixes by Helmut Grohne that result in a significant speedup (especially on larger files); it used to exhibit the quadratic-time string concatenation antipattern. Version 24 was released on June 23rd, in a hurry, to fix an undefined variable introduced in the previous version. (Reiner Herrmann) debbindiff now has a test suite! It is written using the PyTest framework (thanks to Isis Lovecruft for the suggestion). The current focus has been on the comparators, and we are now at 93% code coverage for these modules. Several problems were identified and fixed in the process: paths appearing in the output of javap, readelf, objdump, zipinfo, unsquashfs; a useless MD5 checksum and last-modified date in javap output; bad handling of charsets in PO files; the destination path for gzip-compressed files not ending in .gz; only the metadata of cpio archives being compared. stat output was further trimmed to make directory comparison more useful. Having the test suite enabled a refactoring of how comparators are written, switching from a forest of differences to a single tree; this helped remove dust from the oldest parts of the code. Together with some other small changes, version 25 was released on June 27th. A follow-up release was made the next day to fix a hole in the test suite and the resulting unidentified leftover from the comparator refactoring. (Lunar)

Documentation update

Ximin Luo improved the code examples for some proposed environment variables for reference timestamps. Dhole added an example of how to fix timestamp C preprocessor macros by adding a way to set the build date externally. akira documented her fix for tex4ht timestamps.

Package reviews

94 obsolete reviews have been removed, 330 added and 153 updated this week. Hats off to Chris West (Faux), who investigated many fail-to-build-from-source issues and reported the relevant bugs. Slight improvements were made to the scripts for editing the review database, edit-notes and clean-notes. (Mattia Rizzolo)

Meetings

A meeting was held on June 23rd. Minutes are available. The next meeting will happen on Tuesday 2015-07-07 at 17:00 UTC.

Misc.

The Linux Foundation announced that it is funding the work of Lunar and h01ger on reproducible builds in Debian and other distributions. This was further relayed in a Bits from Debian blog post.
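
As an aside on the gzip timestamp issue above: the gzip file format carries an MTIME field in its header, and any producer that zeroes it emits identical output for identical input. Here is a minimal C sketch using zlib's deflateSetHeader(); it illustrates the approach in general, not the proposed CMake fix.

  /* Produce a gzip stream with a zeroed MTIME header field (link with -lz). */
  #include <stdio.h>
  #include <string.h>
  #include <zlib.h>

  int main(void)
  {
      const char input[] = "hello, reproducible world\n";
      unsigned char output[256];
      z_stream strm;
      gz_header head;

      memset(&strm, 0, sizeof(strm));
      /* windowBits 15 + 16 selects gzip framing instead of raw zlib. */
      if (deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                       15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
          return 1;

      memset(&head, 0, sizeof(head));
      head.time = 0;                  /* deterministic: no modification time */
      head.os = 3;                    /* fixed OS byte (Unix) */
      deflateSetHeader(&strm, &head);

      strm.next_in = (unsigned char *)input;
      strm.avail_in = sizeof(input) - 1;
      strm.next_out = output;
      strm.avail_out = sizeof(output);
      if (deflate(&strm, Z_FINISH) != Z_STREAM_END)
          return 1;

      fwrite(output, 1, sizeof(output) - strm.avail_out, stdout);
      deflateEnd(&strm);
      return 0;
  }

Running this twice on the same input yields byte-identical archives, which is exactly what reproducible builds need from compressors.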

30 June 2014

Russell Coker: Links June 2014

Russ Albery wrote an insightful blog post about trust, computer security, and training programmers [1]. He makes a good case that social problems in our community decrease the availability of skilled people to write and audit security code. The Lawfare blog has an insightful article by Dan Geer about Heartbleed as a Metaphor [2]. He makes some good points about security and design, ways of potentially solving some flaws, and problems with the various solutions. Eben Moglen wrote an insightful article for The Guardian about the way that NSA spying is a direct threat to democracy [3]. The TED blog has an interesting interview with Kitra Cahana about her work living with and photographing nomads in the US [4]. I was surprised to learn that there's an active nomad community in the US based on the culture that started in the Great Depression; apparently people are using YouTube to learn about nomad culture before joining. Dave Johnson wrote an interesting Salon article about why CEOs make 300 times as much money as workers [5]. Note that actually contributing to the financial success of the company is not one of the reasons. Maia Szalavitz wrote an interesting Slate article about autism and anorexia [6]. Apparently some people on the autism spectrum are misdiagnosed with anorexia due to food intolerance. A group of four professors has applied for the job of president and vice-chancellor of the University of Alberta [7]. While it was a joke to apply in that way, a quarter of the university president's salary is greater than the salary of a professor, and the university would get a team of four people to do the job, so it would really make sense to hire them. Of course the university could just pay a more reasonable salary for the president and hire an extra three professors. But the same argument applies for lots of highly paid jobs: is a CEO who gets paid $10M per annum really going to do a better job than a team of 100 people who are paid $100K? Joel on Software wrote an insightful article explaining why hiring 1 in 200 applicants doesn't mean you hire the top 0.5% of workers [8]. He suggests that the best employees almost never apply through regular channels, so an intern program is the only way to get a chance of hiring the best people. Chaotic Idealism has an interesting article on some of the bogus claims about autism and violence [9]. Salon has an interesting article by Lindsay Abrams about the way the food industry in the US lobbies for laws to prevent employees from reporting animal cruelty or contamination of the food supply, and how drones will now be used for investigative journalism [10]. Jacobin Mag has an interesting article by Geoff Shullenberger about the "Voluntariat", the people who volunteer their time to help commercial organisations [11]. I don't object to people voluntarily helping companies, but when they are exploited, or when the company also requires voluntary help from the government, it becomes a problem. We need some legislation about this. Laura Hudson wrote an insightful article about how Riot Games solved their online abuse problem [12]. There are ideas in this that can apply to all online communities. Matt LeMay wrote an interesting article for Medium titled What (Else) Can Men Do? Grow The Fuck Up [13]. It's a general commentary on the treatment of women in geek communities (and most other places). Foz Meadows wrote an insightful analysis of the attempts of bigots to influence science fiction [14].
If I had more spare time I'd read some of the books by bigoted authors on the Sad Puppy slate (from a library, of course) and see if they lack talent in the same way that Orson Scott Card does. Racialicious has an interesting article by Phenderson Djeli Clark about the horrible racism and bigotry of H.P. Lovecraft [15]. I have only read two H.P. Lovecraft stories; one was mediocre and the other (The Horror at Red Hook) was quite poor, largely due to his inability to write about non-white people. Grace Wyler wrote an insightful article for Vice magazine about right-wing terrorists in the US killing cops [16]. Paul Rosenberg wrote an interesting and amusing (for people outside the US) article about the gun crazies in the US [17]. Maybe the gun crazies should have a "loaded assault rifles and tequila" party to follow up on their "gun appreciation day". A US TV show made a 4-minute clip of some of the stupid things that Tony Abbott has done [18]. Tony is almost as stupid as Dubya.

17 November 2013

Dmitrijs Ledkovs: libnih (upstart dependency) ported to kFreeBSD/eglibc


Back in September at the Linux Plumbers Conference in New Orleans, James Hunt and I presented the Upstart roadmap, which included porting Upstart to kFreeBSD/glibc. The first milestone is now complete.

Libnih is now ported to kFreeBSD/eglibc. In the course of getting there, the following happened:

  • The libnih test suite then hit a test-case failure, which was reported to FreeBSD.org. It turns out some syscalls on FreeBSD returned bogus siginfo_t.si_status information in some cases (a minimal illustration of the field in question follows this list). Luckily there is now a full test case and a kernel patch posted by Jilles Tjoelker. Once that's committed upstream, I'll push for uploads to the affected kernels in Debian.


  • A few small patches applied to libnih, mostly adding POSIX compliant header includes and the like.
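
For illustration, here is a minimal C sketch of the kind of check involved (a simplified stand-in, not the actual libnih test case or Jilles Tjoelker's patch): a child exits with a known code, and siginfo_t.si_status should carry that code back unmodified, with si_code set to CLD_EXITED.

  /* si_status sanity check: the child's exit code must come back
   * unmodified, with si_code indicating a normal exit. */
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      pid_t pid = fork();
      if (pid < 0) {
          perror("fork");
          return EXIT_FAILURE;
      }
      if (pid == 0)
          _exit(42);                  /* child exits with a known status */

      siginfo_t info;
      if (waitid(P_PID, pid, &info, WEXITED) < 0) {
          perror("waitid");
          return EXIT_FAILURE;
      }
      printf("si_code=%d (CLD_EXITED=%d) si_status=%d (expected 42)\n",
             info.si_code, CLD_EXITED, info.si_status);
      return info.si_status == 42 ? EXIT_SUCCESS : EXIT_FAILURE;
  }

On a correct kernel this reports an si_status of 42; the FreeBSD bug showed up as a bogus value in some code paths.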
So with a patched libc and patched kernel one can compile libnih and run its test suite. There are, however, some caveats where feature parity is missing:

  • nih_watch / inotify support is disabled. It is a TODO on my list to port nih_watch to use kevent/kqueue EVFILT_VNODE (a sketch of the mechanism follows this list).
  • The FreeBSD kernel doesn't have abstract-namespace Unix domain sockets, so at the moment I have disabled some abstract-namespace test cases and changed the dbus-connection tests to use pathname sockets instead. This is still a TODO, because unlink-before-bind is not done and the socket is not cleaned up after servers close it.
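
For the curious, here is a minimal C sketch of the kevent/kqueue EVFILT_VNODE mechanism that an nih_watch port would build on. The watched path is a placeholder, and this only illustrates the kernel interface, not the planned libnih code:

  /* Watch a file for writes/renames/deletes via kqueue EVFILT_VNODE. */
  #include <sys/types.h>
  #include <sys/event.h>
  #include <sys/time.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      int kq = kqueue();
      int fd = open("/tmp/watched-file", O_RDONLY);  /* placeholder path */
      struct kevent change, event;

      if (kq < 0 || fd < 0) {
          perror("setup");
          return EXIT_FAILURE;
      }

      /* Register interest in write, rename and delete events on the vnode. */
      EV_SET(&change, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
             NOTE_WRITE | NOTE_RENAME | NOTE_DELETE, 0, NULL);

      for (;;) {
          if (kevent(kq, &change, 1, &event, 1, NULL) < 0) {
              perror("kevent");
              return EXIT_FAILURE;
          }
          if (event.fflags & NOTE_WRITE)
              printf("file written\n");
          if (event.fflags & (NOTE_RENAME | NOTE_DELETE))
              break;                  /* watch target is gone */
      }
      return EXIT_SUCCESS;
  }

Unlike inotify, kqueue watches file descriptors rather than paths, so a full nih_watch port would also have to manage one descriptor per watched file and rescan directories itself.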




With all caveats above, here are the test-suite results:

================================================
libnih 1.0.4: nih-dbus-tool/test-suite.log
================================================

# TOTAL: 2492
# PASS: 2492
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0


===========================================
libnih 1.0.4: nih-dbus/test-suite.log
===========================================

# TOTAL: 110
# PASS: 110
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0


======================================
libnih 1.0.4: nih/test-suite.log
======================================

# TOTAL: 788
# PASS: 787
# SKIP: 0
# XFAIL: 1
# FAIL: 0
# XPASS: 0
# ERROR: 0


PASS: test_watch
================

not ok 1 - nih_watch not supprted yet # TODO
XFAIL: test_watch 1 - nih_watch not supprted yet # TODO

at tests/test_watch.c:1594 (main).
1..1

5 September 2013

Vincent Sanders: Strive for continuous improvement, instead of perfection.


Kim Collins was perhaps thinking more about physical improvement, but his advice holds well for software.

A lot has been written about the problems around software engineers wanting to rewrite a codebase because of "legacy" issues. Experience has taught me that refactoring is a generally better solution than rewriting because you always have something that works and can be released if necessary.

Although that observation applies to the whole of a project, sometimes the technical debt in component modules means they need reimplementing. Within NetSurf we have historically had problems when such a change was made, because of the large number of supported platforms and configurations.
History

A year ago I implemented a Continuous Integration (CI) solution for NetSurf which, combined with our switch to Git for revision control, has transformed our process, making several important refactors and rewrites possible while keeping confidence in the overall stability of the project.

I know it has been a year because the VPS hosting bill from Mythic turned up, and we are still a very happy customer. We have taken the opportunity to extend the infrastructure to add additional build systems, which is still within the NetSurf project's means.

Over the last twelve months the CI system has attempted over 100,000 builds, including the project's libraries and browser. Each commit causes an attempt to build for eight platforms, in multiple configurations, with multiple compilers. Because of this, the response time to a commit is dependent on the slowest build slave (the Mac mini OS X Leopard system).

Currently this means a browser build, not including the support libraries, completes in around 450 seconds. The eleven support libraries range from 30 to 330 seconds each. This gives a reasonable response time for most operations. The worst case involves changing the core buildsystem, which causes everything to be rebuilt from scratch, taking some 40 minutes.

The CI system has gained capability since it was first set up; several new kinds of job have been added.

Downsides

It has not all been positive though: the administration required to keep the builds running has been more than expected, and it has highlighted just how costly supporting all our platforms is. When I say costly I do not just refer to the physical requirements of providing build slaves but, more importantly, to the time required.

Some examples include:
  • Procuring the hardware, installing the operating system and configuring the build environment for the OS X slaves
  • Getting the toolchain built and installed for cross compilation
  • Dealing with software upgrades and updates on the systems
  • Solving version issues between interacting parts; especially limiting is the lack of Java 1.6 on PPC OS X, which prevents Jenkins updates
This administration is not interesting to me and consumes time which could otherwise be spent improving the browser, though the development team considers the benefits of having the system to outweigh the disadvantages.

The highlighting of the costs of supporting so many platforms has led us to re-evaluate their future viability. Certainly the PPC Mac OS X port is in the gravest danger of being imminently dropped; when the build slave's drive failed, the port was only saved because there were actual users.

There is also the question of the BeOS platform, which we are currently unable to build with the CI system at all, as it cannot be targeted for cross-compilation and cannot run a sufficiently complete Java implementation to host a Jenkins slave.

An unexpected side effect of publishing every CI build has been that many non-developer users are directly downloading and using these builds. In some cases we get messages to the mailing list about a specific build while the rest of the job is still ongoing.

Despite the prominent warning on the download area and clear explanations on the mailing lists, we still get complaints and opinions about what we should be "allowing" in terms of stability and features in these builds. For anyone else considering allowing general access to CI builds, I would recommend a very clear statement of intent, and having a policy prepared for when users ignore the statement.
Tools
Using Jenkins has also been a learning experience. It is generally great, but there are some issues I have which, while not insurmountable, are troubling:
Configuration and history cannot easily be stored in a revision control system.
This means our system has to be restored from a backup in case of failure and I cannot simply redeploy it from scratch.

Job filtering, especially for matrix jobs with many combinations, is unnecessarily complicated.
This requires the use of a single-line textual "combination filter", which is a Java expression limiting which combinations are built. An interface allowing the user to graphically select from a grid, similar to the output tables showing success, would be preferable. Such a tool could even generate the textual combination filter if that's easier.

This is especially problematic for the main browser job, which has options for label (platform that can compile the build), JavaScript enablement, compiler and frontend (the windowing toolkit, if you prefer; e.g. the linux label can build both gtk and framebuffer). The filter for this job is several kilobytes of text which, due to the first issue, has to be cut and pasted by hand.

Handling of simple makefile-based projects is rudimentary.
This has been worked around mainly by creating shell scripts to perform the builds. These scripts are checked into the repositories so they are readily modified. Initially we had the text in each job but that quickly became unmanageable.

Output parsing is limited.
Fortunately several plugins are available which mitigate this issue, but I cannot help feeling that they ought to be integrated by default.

Website output is not readily modifiable.
Instead of providing, say, a default CSS file used by all generated content, someone with knowledge of Java must write a plugin to change any aspect of the look and feel of the Jenkins tool. I understand this helps all Jenkins instances look like the same program, but it means integrating Jenkins into the rest of our project's web site is not straightforward.
Conclusion

In conclusion, I think a CI system is an invaluable tool for almost any non-trivial software project, but the implementation costs with current tools should not be underestimated.

17 August 2013

Andrew Cater: Debian - 20th birthday

Happy Birthday! The second-oldest Linux distribution still going - Slackware is a little older. It's also been the bane of my life / my obsession / fixation and joy for about 15 of those years. I'm pleased and proud to be associated with the Debian Project as a Debian developer: many of my colleagues are at DebConf 13 in Vaumarcus in Switzerland right now, but there will be celebrations around the world.

Great distribution: great hardware support on lots of architectures: huge numbers of derivatives - but what makes it is the people, including some of those who have contributed hugely but are sadly no longer with us. Debian is a phenomenal, supportive family but, as ever, sometimes dysfunctional.

Joel Klecker, Chris Rutter, Frans Pop, Adrian von Bidder and all the others - I so wish you'd all been here to see this.

24 July 2013

Wouter Verhelst: Crowdsourcing the search for prior art in software patents

On IRC, someone pointed out this blog post by Joel Spolsky, wherein he talks of 'Ask Patents', a Stack Exchange site set up after a change of law to allow the public to submit prior art for patent applications. It's not a perfect solution. If the patent office can't find enough people willing to do this kind of boring search work while being paid for it, I'm not sure they'll find enough people willing to do this kind of boring search work in their free time. But it's certainly a step in the right direction.

12 June 2013

Daniel Pocock: When conspiracy theories come true

In a blog post attempting to join the dots between Australia's intelligence services and the US PRISM program, I speculated that Australia has had extensive access to data harvested by PRISM. My hunch was rather dramatically confirmed today: "Top secret data center under construction to house all our secrets." "There has been no discussion of the project in Senate estimates committee hearings, and the public reports of Parliament's joint committee on intelligence and security make no reference to it." The news story also suggests that the relationship is bi-directional, with data harvested from Australian ISPs cross-referenced against US databases from Facebook and Google. From a human rights perspective, the implications of this data sharing are much more dramatic than the NSA just snooping on US citizens within the US. While Australia often tries to portray the image that it is a safe and happy country, it has a dark side. Police are kept busy investigating politicians and Government officials: the Prime Minister and her associates connected to a union slush fund, the central bank's banknote printing agency's chief executive and 8 colleagues on conspiracy charges, and even a senator on good old-fashioned shoplifting and assault charges.

Exporting PRISM data to a human rights backwater

As an Australian, my own rights and those of my family are more heavily protected and respected living in Europe than if I lived in my country of birth. Australia does not have a bill of rights like the US (where PRISM data is harvested) or the European Convention on Human Rights. Once US data is exported to Australia, it can be used to support sinister programs like the indefinite detention of coloured people in sinister desert camps. Just this week, the Government left over 50 bodies of poor refugees for the sharks to eat. For someone with such a lack of a conscience, can you imagine them thinking twice about using PRISM data for unscrupulous purposes?

Is US PRISM data at greater risk?

Despite the stench of corruption and lip service to human rights, is there any hard evidence that PRISM data will be at greater risk if allowed into the hands of Australians?

Death by SMS

One former Australian official (holding dual citizenship with Sri Lanka) is accused of facilitating war crimes in the Sri Lankan civil war. At present he is Sri Lanka's ambassador at the UN, giving him the benefit of diplomatic immunity. When he was working for Australia, how much data did he have access to? Isn't it ludicrous that, in the name of stopping terrorism, data may have been collected and handed to war criminals on a silver platter?

Behind enemy lines

Like James Bond in reverse, an Australian official was exposed unofficially engaging in a relationship with a Vietnamese spy colonel. In addition to the $20 million in bribery to win a bank note printing contract, is data being used as a tool to leverage trade negotiations, officially or otherwise? If an Australian official with top-secret clearance ends up under the control of a foreign agent, what is the risk that they would be exploited to obtain PRISM data?

Defence minister resignation in Chinese spy scandal

In 2009 Australia's defence minister was identified as having close links to a Chinese-Australian businesswoman, including all the usual little luxuries: first-class flights, apartment rental and joint business deals with family members. He subsequently resigned. Few details were made public.
How much PRISM data was directly available to political officials like the defence minister?

Disappearing Prime Ministers

The disappearance of a serving Prime Minister, Harold Holt, during the Cold War remains an unsolved mystery to this day. People continue to ask whether he was swept out to sea or voluntarily whisked away in a Chinese or Soviet nuclear submarine. While Australia has traditionally had strong ties to Britain and the US, there have been exceptions to the rule. Fortunately this all happened before PRISM, and the Harold Holt red-submarine rumours are no better founded than the UFO theories of his abduction.

Where did all the data go?

As the music industry has found out the hard way, digital data can be copied at will, with the originator having no control. The revelations that Britain and Australia are in on the PRISM shenanigans only confirm that this data is seen as a new form of currency for exchange in geo-political posturing. While used weapons take decades to work their way down the chain, data can spread at the speed of light.

28 May 2013

Lars Wirzenius: On the usefulness of wishlist bugs

Almost every bug tracker for free software projects has a section for wishlist bugs. Often this results in ever growing lists of wishes, most of which will never be fulfilled. A long list of bugs, even if it is wishlist bugs, is rarely useful. It's hard to keep track of what is there, and so the information is not kept up to date when things change. Even if a wishlist bug gets implemented, the bug is often overlooked, and remains on the list. Joel Spolsky calls this a software inventory. Carrying a large inventory has costs, and usually results in slowed-down development. It increases the friction of doing things in a project. I have some quite old wishlists for my own projects. I have even more wishlists hidden in my own GTD system. I am not happy about either, and I'm going to have to do something about that. Just closing all wishlist bugs immediately would be bad manners. Keeping them open indefinitely, just in case someone will some day decide to look through the list in order to hack on something, is wishful thinking: that rarely happens. So I'm thinking of a compromise: keep wishlist bugs open for a while, perhaps a few months, and if nothing is happening to them, close them with an apology.

2 May 2012

Uwe Hermann: sigrok - cross-platform, open-source logic analyzer software with protocol decoder support

I'm happy to finally announce an open-source (GNU GPL), cross-platform (Linux, Mac OS X, FreeBSD, Windows, ...) logic analyzer software package that Bert Vermeulen and I have been working on for quite a long time now: sigrok (it groks your signals).

History

I originally started working on an open-source logic analyzer software named "flosslogic" in 2010, because I grew tired of almost all devices having proprietary, Windows-only software, often with limited features, limited input/output file formats, limited usability, limited protocol decoder support, and so on. Thus, the goal was to write portable, GPL'd software that can talk to many different logic analyzers via modules/plugins, supports many input/output formats, and many different protocol decoders. The advantage being that every time we add a new driver for another logic analyzer it automatically supports all the input/output formats we already have, you can use all the protocol decoders we already wrote, etc. It also works the other way around: if someone writes a new protocol decoder or file-format driver, it can automatically be used with any of the supported logic analyzers out of the box. It turns out Bert Vermeulen had been working on similar software for a while too (for exactly the same reasons: crappy Windows software, etc.), so it was only logical that we joined forces and worked on this together. We kept Bert's name for the software package ("sigrok"), set up a SourceForge project, mailing lists, IRC channel, wiki, etc., and started working.

Overview, Features

You can get the latest sigrok source code from our main git repository:
  $ git clone git://sigrok.git.sourceforge.net/gitroot/sigrok/sigrok
Here's a short overview of sigrok and its features as of today. The software consists of several components: the libsigrok and libsigrokdecode helper libraries, a command-line application, and GUI frontends. We're happy to hear about other (maybe special-purpose) frontends you may want to write using libsigrok/libsigrokdecode as helper libs!

Firmware

Some logic analyzer devices require firmware to be uploaded before they can be used. As always, firmware is a bit of a pain, but here's what we currently do: for non-free firmware we provide instructions on how to extract it from the vendor software or from USB dumps, if possible. For distributable firmware we have a git repo where you can get it (thanks to ASIX for allowing us to distribute the ASIX Sigma/Sigma2 firmware files!).
  $ git clone git://sigrok.git.sourceforge.net/gitroot/sigrok/sigrok-firmwares
Finally, for all Cypress FX2-based logic analyzers we have an open-source (GNU GPL) firmware named fx2lafw, started by myself, but most of the work (and finishing the firmware) was then done by Joel Holdsworth, thanks! The support list includes the Saleae Logic, CWAV USBee SX, CWAV USBee AX, Robomotic Minilogic/BugLogic3, Braintechnology USB-LPS, and many others. Get the code from the fx2lafw git repository:
  $ git clone git://sigrok.git.sourceforge.net/gitroot/sigrok/fx2lafw
Example dumps

We collect various captured logic analyzer signals / protocol dumps in the sigrok-dumps git repository:
  $ git clone git://sigrok.git.sourceforge.net/gitroot/sigrok/sigrok-dumps
They can be useful for testing the sigrok command-line application, the sigrok GUIs, or the protocol decoders. We're happy to include further contributed example data in our repository; please send us .sr files of any interesting data/protocol you may come across (even if sigrok doesn't yet have a protocol decoder for that protocol). See the Example dumps wiki page for details.

Packages, distros, installers

I'm currently working on updated Debian packages for sigrok (it will be apt-get install sigrok to get everything), and we're happy about further packaging efforts for other distros. We have preliminary Windows installer files (using NSIS), but the Windows code needs some more fixes and portability improvements before it's really usable. On Mac OS X you can use Fink/MacPorts to install as usual; fancier .app installer files are being worked on.

Future

Apart from support for more logic analyzers, input/output formats, and protocol decoders, we have a number of other plans for the next few releases. This includes support for analog data, i.e. support for (USB) oscilloscopes, multimeters, spectrum analyzers, and such, which will also require additional GUI support (and could take a while). Also, we want to improve/fix the Windows support, and test/port sigrok to other architectures we come across. Performance improvements for the protocol decoding, as well as more features there, are also planned.

Contact

Feel free to contact us on the sigrok-devel mailing list, or in the IRC channel #sigrok on Freenode. There's also an identi.ca group for sigrok. We're always happy about feedback, bug reports, suggestions for improving sigrok, and patches of course!

13 May 2011

Gustavo Noronha Silva: My experience with GNOME 3 so far

You know, GNOME Shell and I have not really been strangers to each other for a long time now. I have been using it almost daily as my main desktop since late 2009, when I started shipping it to Debian experimental. That means I have had ample opportunity both to get used to it and to witness the huge improvements it has made with every new release. My general feeling towards GNOME 3 is this: yes, I love it! I love the new themes, I love the window borders, I love the top panel, the overview, the dynamic workspaces, I love the Me menu, I love the clock and the calendar, the system indicators with the beautiful symbolic icons, being able to search for applications in such a nice way, the window animations, the multi-screen support, the new nautilus, the dash, looking glass. It's hard for me to even express how thankful I am and how much admiration I have for the awesome folks who helped bring this to life. Thanks so much!

My GNOME 3 desktop

After all this time, there are only two things I can say I dislike about GNOME 3, apart from some minor wishlist items: the alt-tab behaviour and the message tray. Let me expand on those.

Message Tray

Of the very few things I dislike, there is only one I hate and cannot see myself living with: the accordion animation in the message tray. No, really, it's such a terrible, terrible idea. Every single time I try to use that thing I overshoot while moving the mouse to the left, then overshoot again moving the mouse to the right, because the frigging icon has moved. It's no good knowing that I can click on the text or on the empty area to its right; it feels wrong. It's terrible that my actual target is moving at all. Every single time I use it is a small frustration for me; it's as if the message tray were playing games with me, laughing at me for not having good enough mouse-pointer driving skills, even more than the infamous sub-menus used to. And I had to go through that penance whenever I wanted to find the person I was chatting with to resume the conversation. I wrote an extension that disables the accordion animation by simply not showing the title at all when you hover over the icon, and I patched GNOME Shell's CSS to make the icons a bit bigger, so that it's easier for me to hit them with the mouse. It's clearly not ideal, and you still have to click the various people icons to figure out which of your friends who were lazy enough not to add a picture to their IM profiles is that one, but it's still much better than chasing the (smaller) ones around to figure that out. Perhaps we should have the icons be bigger and always have the title below them? I don't know; I trust the awesome designers who designed the awesomeness that's everywhere else will come up with a great idea.

My message tray with bigger icons and no accordion animation.

Alt Tab

The number one feature of workspaces for me has always been locality: being able to not see all of the other applications and windows that are open elsewhere. This lowers the amount of noise when I'm trying to find something. The overview is very nice in this regard: only windows in the current workspace are shown, and even when you have an extra screen, the windows there appear on that screen. The alt-tab behaviour, on the other hand, of showing all windows and apps, even with the separator, bothers me. It's really useful when you want to go to a specific window no matter where it is (I usually use the dash for that, though), but it adds noise when you want to go to a specific window in this workspace, which is the most common case in my usage. So I copied the alt-tab code over to an extension and modified it so that only windows in the current workspace would be considered. In addition, with the windows on the extra screen always being there no matter what workspace you're in (which I think is an awesome idea), they are effectively in all workspaces, so they add constant noise even with my extension. It's also weird to have to look at the main screen to switch to a window on the extra screen; there's a huge discontinuity. What I would prefer is for alt-tab to follow the mouse with regard to screens: only show windows on the extra screen if you hit alt-tab there, making sure the selector appears there as well.

13 March 2011

Lars Wirzenius: DPL elections: candidate counts

Out of curiosity, and because it is Sunday morning and I have a cold and can't get my brain to do anything tricky, I counted the number of candidates in each year's DPL elections.
Year Count Names
1999 4 Joseph Carter, Ben Collins, Wichert Akkerman, Richard Braakman
2000 4 Ben Collins, Wichert Akkerman, Joel Klecker, Matthew Vernon
2001 4 Branden Robinson, Anand Kumria, Ben Collins, Bdale Garbee
2002 3 Branden Robinson, Raphaël Hertzog, Bdale Garbee
2003 4 Moshe Zadka, Bdale Garbee, Branden Robinson, Martin Michlmayr
2004 3 Martin Michlmayr, Gergely Nagy, Branden Robinson
2005 6 Matthew Garrett, Andreas Schuldei, Angus Lees, Anthony Towns, Jonathan Walther, Branden Robinson
2006 7 Jeroen van Wolffelaar, Ari Pollak, Steve McIntyre, Anthony Towns, Andreas Schuldei, Jonathan (Ted) Walther, Bill Allombert
2007 8 Wouter Verhelst, Aigars Mahinovs, Gustavo Franco, Sam Hocevar, Steve McIntyre, Raphaël Hertzog, Anthony Towns, Simon Richter
2008 3 Marc Brockschmidt, Raphaël Hertzog, Steve McIntyre
2009 2 Stefano Zacchiroli, Steve McIntyre
2010 4 Stefano Zacchiroli, Wouter Verhelst, Charles Plessy, Margarita Manterola
2011 1 Stefano Zacchiroli (no vote yet)
Winners are indicated by boldface. I expect Zack to win over "None Of The Above", so I went ahead and boldfaced him already, even though there has not yet been a vote this year. The median number of candidates is 4.

15 November 2010

Matt Zimmerman: Ubuntu: Project, Platform, Products

When most people talk about Ubuntu, they usually mean our flagship product, Ubuntu Desktop Edition. Sometimes, they might mean the Ubuntu project, or the community of people who work on it, or various other things. Similarly, Debian might mean the Debian operating system, or the package repositories, or the project, and so on. This gets a little confusing sometimes. When I'm talking about Ubuntu, I've started to use more specific terminology to explain what I mean, and this seems to help people better understand the nature of Ubuntu as a whole. In particular, I use the three Ps:
  • a portfolio of products, including Desktop Edition, Server Edition, Netbook Edition, Kubuntu and more. These are software bundles which can be downloaded, pre-installed on retail computers, and so on. Each one is designed to meet a certain set of user needs, and to work on a specific form factor of computer.
  • a technology platform, which can be used to build a wide range of products. It is primarily of interest to developers, who build derivative distributions, OS products, applications and infrastructure using Ubuntu packages. This platform is the common foundation of the Ubuntu products above, and includes things like the global package repository. Joel Spolsky does a good job of explaining why platforms are distinctly different from products, and should be treated as such.
  • an open community project, which collectively produces, distributes, promotes and supports the products and the platform. The Ubuntu project has a philosophy, a government, and various tools and processes to help contributors work together. Canonical supports the Ubuntu project by providing resources and infrastructure, and also directly participates in Ubuntu at various levels.
This breakdown may seem a bit obvious to those of us "on the inside", but it's confusing to people who are encountering it for the first time. I'm sharing this in the hope that if more people start using the same words, it will get easier for people to understand how these pieces fit together. I'll also be linking to it a lot, to help put things into context using this framework.

9 February 2010

Martin F. Krafft: Sign me up to social networking!

I do not like it when people tell Web 2.0 sites to send me invitation e-mail. I won't enumerate the reasons here. But there is one reason why I don't like you passing on my address to those sites which is the subject of this article: contrary to popular belief, Web 2.0 is not a money-printing machine. It's a long road until you can actually generate real money from user content. Therefore, some shady sites are probably selling contact details to advertisers to make ends meet while hoping for the big cashflow. I don't have any data to back this up, and I want to change that:
Please tell all your Web 2.0 sites to send me an invitation! Please use an address in the signmeup.madduck.net domain for that, and make sure to include the domain name of the service to which you sign me up before the @ symbol. Also append a hyphen and a random, short string; more on that in just a sec. For instance, if you are one of those people who believe that letting everyone know where you are (and have been) at any point in time is a good idea, tell Foursquare to send an invitation to:
foursquare.com-ponies@signmeup.madduck.net
The reason for the random, short string ("ponies") is simply so that I can later cross-check that an address receiving spam actually went through a social networking site; I intend to catalog the invitation messages.
Thank you for your time. Keep in mind: the more, the merrier. I'll make sure to report back on the outcome of this little experiment right here, so watch this space. NP: Billy Joel: Cold Spring Harbor
