Search Results: "wolff"

1 September 2017

Bits from Debian: New Debian Developers and Maintainers (July and August 2017)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

31 August 2017

Chris Lamb: Free software activities in August 2017

Here is my monthly update covering what I have been doing in the free software world in August 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:
  • Presented a status update at DebConf17 in Montréal, Canada alongside Holger Levsen, Maria Glukhova, Steven Chamberlain, Vagrant Cascadian, Valerie Young and Ximin Luo.
  • Worked on the following issues upstream:
    • glib2.0: Please make the output of gio-querymodules reproducible. (...)
    • gcab: Please make the output reproducible. (...)
    • gtk+2.0: Please make the immodules.cache files reproducible. (...)
    • desktop-file-utils: Please make the output reproducible. (...)
  • Within Debian:
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#118, #119, #120, #121 & #122)

I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Use name attribute over path to avoid leaking comparison full path in output. (commit)
  • Add missing skip_unless_module_exists import. (commit)
  • Tidy diffoscope.progress and the XML comparator (commit, commit)

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Add a simple autopkgtest smoke test. (commit)


Debian
Patches contributed
  • openssh: Quote the IP address in ssh-keygen -f suggestions. (#872643)
  • libgfshare:
    • SIGSEGV if /dev/urandom is not accessible. (#873047)
    • Add bindnow hardening. (#872740)
    • Support nodoc build profile. (#872739)
  • devscripts:
  • memcached: Add hardening to systemd .service file. (#871610)
  • googler: Tidy long and short package descriptions. (#872461)
  • gnome-split: Homepage points to domain-parked website. (#873037)

Uploads
  • python-django 1:1.11.4-1 New upstream release.
  • redis:
    • 4:4.0.1-3 Drop yet more non-deterministic tests.
    • 4:4.0.1-4 Tighten systemd/seccomp hardening.
    • 4:4.0.1-5 Drop even more tests with timing issues.
    • 4:4.0.1-6 Don't install completions to /usr/share/bash-completion/completions/debian/bash_completion/.
    • 4:4.0.1-7 Don't let sentinel integration tests fail the build as they use too many timers to be meaningful. (#872075)
  • python-gflags 1.5.1-3 If SOURCE_DATE_EPOCH is set, use either that as a source of current dates or the UTC version of the file's modification time (#836004); don't call update-alternatives --remove in postrm; update debian/watch/Homepage & refresh/tidy the packaging.
  • bfs 1.1.1-1 New upstream release, tidy autopkgtest & patches, organising the latter with Pq-Topic.
  • python-daiquiri 1.2.2-1 New upstream release, tidy autopkgtests & update travis.yml from travis.debian.net.
  • aptfs 2:0.10-2 Add upstream signing key, refer to /usr/share/common-licenses/GPL-3 in debian/copyright & tidy autopkgtests.
  • adminer 4.3.1-2 Add a simple autopkgtest & don't install the Selenium-based tests in the binary package.
  • zoneminder (1.30.4+dfsg-2) Prevent build failures with GCC 7 (#853717) & correct example /etc/fstab entries in README.Debian (#858673).
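The SOURCE_DATE_EPOCH handling mentioned in the python-gflags upload above follows a common reproducibility pattern; here is a minimal sketch in Python (a hypothetical helper illustrating the convention, not the actual patch):

```python
import os
import time


def build_date(path: str) -> time.struct_time:
    # Prefer SOURCE_DATE_EPOCH so repeated builds embed the same date;
    # otherwise fall back to the file's mtime, interpreted in UTC.
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        return time.gmtime(int(epoch))
    return time.gmtime(os.path.getmtime(path))
```

With SOURCE_DATE_EPOCH exported by the build environment, every rebuild of the same source produces the same embedded timestamp.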

Finally, I reviewed and sponsored uploads of astral, inflection, more-itertools, trollius-redis & wolfssl.

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1049-1 for libsndfile preventing a remote denial of service attack.
  • Issued DLA 1052-1 against subversion to correct an arbitrary code execution vulnerability.
  • Issued DLA 1054-1 for the libgxps XML Paper Specification library to prevent a remote denial of service attack.
  • Issued DLA 1056-1 for cvs to prevent a command injection vulnerability.
  • Issued DLA 1059-1 for the strongswan VPN software to close a denial of service attack.

Debian bugs filed
  • wget: Please hash the hostname in ~/.wget-hsts files. (#870813)
  • debian-policy: Clarify whether mailing lists in Maintainers/Uploaders may be moderated. (#871534)
  • git-buildpackage: "pq export" discards text within square brackets. (#872354)
  • qa.debian.org: Escape HTML in debcheck before outputting. (#872646)
  • pristine-tar: Enable multithreaded compression in pristine-xz. (#873229)
  • tryton-meta: Please combine tryton-modules-* into a single source package with multiple binaries. (#873042)
  • azure-cli:
  • fwupd-tests: Don't ship test files to generic /usr/share/installed-tests dir. (#872458)
  • libvorbis: Maintainer field points to a moderated mailing list. (#871258)
  • rmlint-gui: Ship a rmlint-gui binary. (#872162)
  • template-glib: debian/copyright references online source without quotation. (#873619)

FTP Team

As a Debian FTP assistant I ACCEPTed 147 packages: abiword, adacgi, adasockets, ahven, animal-sniffer, astral, astroidmail, at-at-clojure, audacious, backdoor-factory, bdfproxy, binutils, blag-fortune, bluez-qt, cheshire-clojure, core-match-clojure, core-memoize-clojure, cypari2, data-priority-map-clojure, debian-edu, debian-multimedia, deepin-gettext-tools, dehydrated-hook-ddns-tsig, diceware, dtksettings, emacs-ivy, farbfeld, gcc-7-cross-ports, git-lfs, glewlwyd, gnome-recipes, gnome-shell-extension-tilix-dropdown, gnupg2, golang-github-aliyun-aliyun-oss-go-sdk, golang-github-approvals-go-approval-tests, golang-github-cheekybits-is, golang-github-chzyer-readline, golang-github-denverdino-aliyungo, golang-github-glendc-gopher-json, golang-github-gophercloud-gophercloud, golang-github-hashicorp-go-rootcerts, golang-github-matryer-try, golang-github-opentracing-contrib-go-stdlib, golang-github-opentracing-opentracing-go, golang-github-tdewolff-buffer, golang-github-tdewolff-minify, golang-github-tdewolff-parse, golang-github-tdewolff-strconv, golang-github-tdewolff-test, golang-gopkg-go-playground-validator.v8, gprbuild, gsl, gtts, hunspell-dz, hyperlink, importmagic, inflection, insighttoolkit4, isa-support, jaraco.itertools, java-classpath-clojure, java-jmx-clojure, jellyfish1, lazymap-clojure, libblockdev, libbytesize, libconfig-zomg-perl, libdazzle, libglvnd, libjs-emojify, libjwt, libmysofa, libundead, linux, lua-mode, math-combinatorics-clojure, math-numeric-tower-clojure, mediagoblin, medley-clojure, more-itertools, mozjs52, openssh-ssh1, org-mode, oysttyer, pcscada, pgsphere, poppler, puppetdb, py3status, pycryptodome, pysha3, python-cliapp, python-coloredlogs, python-consul, python-deprecation, python-django-celery-results, python-dropbox, python-fswrap, python-hbmqtt, python-intbitset, python-meshio, python-parameterized, python-pgpy, python-py-zipkin, python-pymeasure, python-thriftpy, python-tinyrpc, python-udatetime, python-wither, python-xapp, 
pythonqt, r-cran-bit, r-cran-bit64, r-cran-blob, r-cran-lmertest, r-cran-quantmod, r-cran-ttr, racket-mode, restorecond, rss-bridge, ruby-declarative, ruby-declarative-option, ruby-errbase, ruby-google-api-client, ruby-rash-alt, ruby-representable, ruby-test-xml, ruby-uber, sambamba, semodule-utils, shimdandy, sjacket-clojure, soapysdr, stencil-clojure, swath, template-glib, tools-analyzer-jvm-clojure, tools-namespace-clojure, uim, util-linux, vim-airline, vim-airline-themes, volume-key, wget2, xchat, xfce4-eyes-plugin & xorg-gtest. I additionally filed 6 RC bugs against packages that had incomplete debian/copyright files: gnome-recipes, golang-1.9, libdazzle, poppler, python-py-zipkin & template-glib.

2 March 2017

Antoine Beaupré: A short history of password hashers

These are notes from my research that led to the publication of the password hashers article. This article is more technical than the previous ones and compares the various cryptographic primitives and algorithms used in the various software I have reviewed. The criteria for inclusion on this list are fairly vague: I mostly included a password hasher if it was significantly different from previous implementations in some way, and I have included all the major ones I could find as well.

The first password hashers Nic Wolff claims to be the first to have written such a program, all the way back in 2003. Back then the hashing algorithm was MD5, although Wolff has since updated the algorithm to use SHA-1 and still maintains his webpage for public use. Another ancient but unrelated implementation is Stanford University Applied Cryptography's pwdhash software. That implementation was published in 2004 and, unfortunately, was never updated: it still uses MD5 as a hashing algorithm, but at least it uses HMAC to generate tokens, which makes the use of rainbow tables impractical. Those implementations are the simplest password hashers: the inputs are simply the site URL and a password. So the algorithms are, basically, for Wolff's:
token = base64(SHA1(password + domain))
And for Stanford's PwdHash:
token = base64(HMAC(MD5, password, domain))
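For illustration, both derivations can be sketched with Python's standard library (function names are mine; the real implementations differ in encoding and truncation details):

```python
import base64
import hashlib
import hmac


def wolff_token(password: str, domain: str) -> str:
    # Wolff: base64(SHA1(password + domain)) -- a plain, unkeyed hash.
    digest = hashlib.sha1((password + domain).encode()).digest()
    return base64.b64encode(digest).decode()


def pwdhash_token(password: str, domain: str) -> str:
    # Stanford PwdHash: base64(HMAC-MD5(password, domain)) -- keyed,
    # which is what defeats precomputed rainbow tables.
    digest = hmac.new(password.encode(), domain.encode(), hashlib.md5).digest()
    return base64.b64encode(digest).decode()
```

The HMAC construction is the important difference: the password acts as a key, so an attacker cannot precompute a table of hashes without it.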

SuperGenPass Another unrelated implementation that is still around is SuperGenPass, a bookmarklet created around 2007. It originally used MD5 as well; it now supports SHA-512, although the output is still limited to 24 characters as with MD5 (which needlessly limits the entropy of the resulting password), and it still defaults to MD5 with too few rounds (10, when key derivation recommendations are generally around 10,000, so that it is slower to bruteforce). Note that Chris Zarate, the SuperGenPass author, actually credits Nic Wolff as the inspiration for his implementation. SuperGenPass is still in active development and is available for the browser (as a bookmarklet) or mobile (as a webpage). SuperGenPass allows you to modify the password length, but also to add an extra profile secret, which is mixed into the password and generates a personalized identicon, presumably to prevent phishing; it also introduces an interesting protection, the profile-specific secret, only found later in Password Hasher Plus. So the SuperGenPass algorithm looks something like this:
token = base64(SHA512(password + profileSecret + ":" + domain, rounds))
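A rough sketch of that shape in Python (a hypothetical simplification: the real SuperGenPass uses a URL-safe base64 alphabet and keeps re-hashing until the output passes extra validation rules, which this version omits):

```python
import base64
import hashlib


def sgp_like_token(password: str, domain: str, profile_secret: str = "",
                   rounds: int = 10, length: int = 24) -> str:
    # Iterated hashing over password + profileSecret + ":" + domain,
    # then base64-encode and truncate to the requested length.
    data = (password + profile_secret + ":" + domain).encode()
    for _ in range(rounds):
        data = hashlib.sha512(data).digest()
    return base64.b64encode(data).decode()[:length]
```

Even with SHA-512, truncating to 24 characters and using only 10 rounds gives away much of the strength the larger hash could provide.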

The Wijjo Password Hasher Another popular implementation is the Wijjo Password Hasher, created around 2006. It was probably the first shipped as a browser extension, which greatly improved the security of the product as users didn't have to continually download the software on the fly. Wijjo's algorithm also improved on the above algorithms, as it uses HMAC-SHA1 instead of plain SHA-1 or HMAC-MD5, which makes it harder to recover the plaintext. Password Hasher allows you to set different password policies (digits, punctuation, mixed case, special characters and password length) and saves the site names it uses for future reference. It also happens that the Wijjo Password Hasher, in turn, took its inspiration from a different project, hashapass.com, created in 2006 and also based on HMAC-SHA-1. Indeed, a hashapass password "can easily be generated on almost any modern Unix-like system using the following command line pattern":
echo -n parameter \
  | openssl dgst -sha1 -binary -hmac password \
  | openssl enc -base64 \
  | cut -c 1-8
So the algorithm here is obviously:
token = base64(HMAC(SHA1, password, domain + ":" + counter))[:8]
... although in the case of Password Hasher, there is a special routine that takes the token and inserts random characters in locations determined by the sum of the values of the characters in the token.
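The shell pipeline above translates almost directly into Python (a hypothetical helper; here `parameter` stands for the site name, optionally with a counter appended, as in the formula above):

```python
import base64
import hashlib
import hmac


def hashapass_token(password: str, parameter: str) -> str:
    # Mirrors: echo -n parameter | openssl dgst -sha1 -binary -hmac password
    #          | openssl enc -base64 | cut -c 1-8
    digest = hmac.new(password.encode(), parameter.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()[:8]
```

Note that truncating to 8 base64 characters keeps only 48 bits of the 160-bit HMAC output, which is the main weakness of this scheme.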

Password Hasher Plus Years later, in 2010, Eric Woodruff ported the Wijjo Password Hasher to Chrome and called it Password Hasher Plus. Like the original Password Hasher, the "plus" version also keeps those settings in the extension and uses HMAC-SHA-1 to generate the password, as it is designed to be backwards-compatible with the Wijjo Password Hasher. Woodruff did add one interesting feature: a profile-specific secret key that gets mixed in to create the security token, like what SuperGenPass does now. Stealing the master password is therefore not enough to generate tokens anymore. This solves one security concern with Password Hasher: a hostile page could watch your keystrokes, steal your master password and use it to derive passwords on other sites. Having a profile-specific secret key that is not accessible to the site's Javascript works around that issue, but typing the master password directly in the password field, while convenient, is just a bad idea, period. The final algorithm looks something like:
token = base64(HMAC(SHA1, password, base64(HMAC(SHA1, profileSecret, domain + ":" + counter))))
Honestly, that seems rather strange, but it's what I read from the source code, which is available only after decompressing the extension nowadays. I would have expected the simplest version:
token = base64(HMAC(SHA1, HMAC(SHA1, profileSecret, password), domain + ":" + counter))
The idea here would be to "hide" the master password from bruteforce attacks as soon as possible... but maybe this is all equivalent. Regardless, Password Hasher Plus then takes the token and applies the same special character insertion routine as the Password Hasher.
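The two candidate constructions can be written side by side in Python to make the difference concrete (hypothetical names; these are sketches of the two formulas above, not the extension's actual code):

```python
import base64
import hashlib
import hmac


def b64_hmac_sha1(key: bytes, msg: bytes) -> str:
    return base64.b64encode(hmac.new(key, msg, hashlib.sha1).digest()).decode()


def token_as_implemented(password: str, profile_secret: str,
                         domain: str, counter: int) -> str:
    # base64(HMAC-SHA1(password, base64(HMAC-SHA1(profileSecret, domain:counter))))
    inner = b64_hmac_sha1(profile_secret.encode(), f"{domain}:{counter}".encode())
    return b64_hmac_sha1(password.encode(), inner.encode())


def token_alternative(password: str, profile_secret: str,
                      domain: str, counter: int) -> str:
    # base64(HMAC-SHA1(HMAC-SHA1(profileSecret, password), domain:counter))
    # -- mixes the master password into the key as early as possible.
    inner_key = hmac.new(profile_secret.encode(), password.encode(),
                         hashlib.sha1).digest()
    return b64_hmac_sha1(inner_key, f"{domain}:{counter}".encode())
```

Both are deterministic, but they bind the master password at different stages, which is exactly the question raised above.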

LessPass Last year, Guillaume Vincent, a French self-described "humanist and scuba diving fan", released the LessPass extension for Chrome, Firefox and Android. LessPass introduces several interesting features. It is probably the first to include a command-line version. It also uses a more robust key derivation algorithm (PBKDF2) and takes into account the username on the site, allowing multi-account support. The original release (version 1) used only 8192 rounds, which is now considered too low. In the bug report it was interesting to note that LessPass couldn't follow the usual practice of running the key derivation for one second to determine the number of rounds needed, as the results need to be deterministic. At first glance, the LessPass source code seems clear and easy to read, which is always a good sign, but of course, the devil is in the details. One key feature of Password Hasher Plus that is missing here is the profile-specific seed, although it should be impossible, as far as I know, for a hostile web page to steal keystrokes from a browser extension. The algorithm then gets a little more interesting:
entropy = PBKDF2(SHA256, masterPassword, domain + username + counter, rounds, length)
where
    rounds=10000
    length=32
entropy is then used to pick characters to match the chosen profile. Regarding code readability, I quickly got confused by the PBKDF2 implementation: SubtleCrypto.importKey() doesn't seem to support PBKDF2 in the API, yet that is how it is used there... Is it just something to extract key material? We see later what looks like a more standard AES-based PBKDF2 implementation, but this code looks just strange to me. It could be my unfamiliarity with newer Javascript coding patterns, however. There is also a LessPass-specific character-picking routine that is not base64, and different from the original Password Hasher algorithm.
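The entropy derivation maps directly onto Python's hashlib (a sketch of the formula above; the real LessPass encodes the counter in hexadecimal and then runs its own character-picking routine on the result, both of which are simplified here):

```python
import hashlib


def lesspass_entropy(master_password: str, domain: str, username: str,
                     counter: int = 1, rounds: int = 10000,
                     length: int = 32) -> bytes:
    # PBKDF2-HMAC-SHA256 over the master password, salted with
    # domain + username + counter, exactly as in the formula above.
    salt = f"{domain}{username}{counter}".encode()
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt, rounds, dklen=length)
```

Because the salt includes the username, two accounts on the same site derive different entropy, which is what enables the multi-account support.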

Master Password A review of password hashers would hardly be complete without mentioning the Master Password and its elaborate algorithm. While the applications surrounding the project are not as refined (there is no web browser plugin and the web interface can't be easily turned into a bookmarklet), the algorithm has been well developed. Of all the password managers reviewed here, Master Password uses one of the strongest key derivation algorithms out there, scrypt:
key = scrypt( password, salt, cost, size, parallelization, length )
where
salt = "com.lyndir.masterpassword" + len(username) + name
cost = 32768
size = 8
parallelization = 2
length = 64
entropy = hmac-sha256(key, "com.lyndir.masterpassword" + len(domain) + domain + counter )
Master Password then uses one of six sets of "templates" specially crafted to be "easy for a user to read from a screen and type using a keyboard or smartphone" and "compatible with most sites' password policies", our "transferable" criterion defined in the first passwords article. For example, the default template mixes vowels, consonants, numbers and symbols, while carefully avoiding visually similar characters like O and 0 or i and 1 (although it does mix 1 and l, oddly enough). The main strength of Master Password seems to be the clear definition of its algorithm (although hashapass.com does give out OpenSSL command-line examples...), which led to its reuse in another application called freepass. The Master Password app also doubles as a stateful password manager...
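The two-phase derivation can be sketched with Python's hashlib and hmac (assumptions on my part: 4-byte big-endian length and counter encodings; the real Master Password also maps the entropy onto its templates, which is omitted here):

```python
import hashlib
import hmac

SCOPE = b"com.lyndir.masterpassword"


def master_key(password: str, name: str) -> bytes:
    # Slow phase: scrypt with the parameters listed above
    # (cost=32768, size=8, parallelization=2, length=64).
    salt = SCOPE + len(name).to_bytes(4, "big") + name.encode()
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=32768, r=8, p=2, dklen=64,
                          maxmem=128 * 1024 * 1024)


def site_entropy(key: bytes, domain: str, counter: int = 1) -> bytes:
    # Fast phase: HMAC-SHA256 of the scoped site name and counter,
    # keyed with the 64-byte master key.
    msg = (SCOPE + len(domain).to_bytes(4, "big") + domain.encode()
           + counter.to_bytes(4, "big"))
    return hmac.new(key, msg, hashlib.sha256).digest()
```

The split matches the author's argument quoted below: the expensive scrypt step runs once per session, while per-site passwords are cheap HMAC computations over the resulting large key.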

Other implementations I have also considered including easypasswords, which uses PBKDF2-HMAC-SHA1, in my list of recommendations. I discovered only recently that the author wrote a detailed review of many more password hashers and scores them according to their relative strength. In the end, I ended up covering LessPass more, since the design is very similar and LessPass does seem a bit more usable. Covering LessPass also allowed me to show the contrast and issues regarding the algorithm changes, for example. It is also interesting to note that the EasyPasswords author has criticized the Master Password algorithm quite severely:
[...] scrypt isn't being applied correctly. The initial scrypt hash calculation only depends on the username and master password. The resulting key is combined with the site name via SHA-256 hashing then. This means that a website only needs to break the SHA-256 hashing and deduce the intermediate key; as long as the username doesn't change, this key can be used to generate passwords for other websites. This makes breaking scrypt unnecessary [...]
During a discussion with the Master Password author, he outlined that "there is nothing 'easy' about brute-force deriving a 64-byte key through a SHA-256 algorithm." SHA-256 is used in the last stage because it is "extremely fast". scrypt is used as a key derivation algorithm to generate a large secret and is "intentionally slow": "we don't want it to be easy to reverse the master password from a site password". "But it's unnecessary for the second phase because the input to the second phase is so large. A master password is tiny, there are only a few thousand or million possibilities to try. A master key is 8^64, the search space is huge. Reversing that doesn't need to be made slower. And it's nice for the password generation to be fast after the key has been prepared in-memory so we can display site passwords easily on a mobile app instead of having to lock the UI a few seconds for every password." Finally, I considered covering Blum's Mental Hash (also covered here and elsewhere). This consists of an algorithm that can basically be run by the human brain directly. It's not for the faint of heart, however: if I understand it correctly, it requires remembering a password that is basically a string of 26 digits, plus computing modular arithmetic on the outputs. Needless to say, most people don't do modular arithmetic every day...

3 July 2016

Reproducible builds folks: Reproducible builds: week 61 in Stretch cycle

What happened in the Reproducible Builds effort between June 19th and June 25th 2016.

Media coverage

GSoC and Outreachy updates

Toolchain fixes

Other upstream fixes

Emil Velikov searched on IRC for hints on how to guarantee unique values during build to invalidate shader caches in Mesa when no VCS information is available either. A possible solution is a timestamp, which is unique enough for local builds, but can still be made reproducible by allowing it to be overridden with SOURCE_DATE_EPOCH.

Packages fixed

The following 9 packages have become reproducible due to changes in their build dependencies: cclib librun-parts-perl llvm-toolchain-snapshot python-crypto python-openid r-bioc-shortread r-bioc-variantannotation ruby-hdfeos5 sqlparse

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Package reviews

139 reviews have been added, 20 have been updated and 21 have been removed in this week. New issues found: 53 FTBFS bugs have been reported by Chris Lamb, Santiago Vila and Mateusz Łukasik.

diffoscope development

Quote of the week

"My builds are so reproducible, they fail exactly every second time." (Johannes Ziemke, @discordianfish)

Misc.

This week's edition was written by Chris Lamb (lamby), Reiner Herrmann and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

15 June 2016

Reproducible builds folks: Reproducible builds: week 59 in Stretch cycle

What happened in the Reproducible Builds effort between June 5th and June 11th 2016:

Media coverage

Ed Maste gave a talk at BSDCan 2016 on reproducible builds (slides, video).

GSoC and Outreachy updates

Weekly reports by our participants:

Documentation update

Ximin Luo proposed a modification to our SOURCE_DATE_EPOCH spec explaining FORCE_SOURCE_DATE. Some upstream build tools (e.g. TeX, see below) have expressed a desire to control which cases of embedded timestamps should obey SOURCE_DATE_EPOCH. They were not convinced by our arguments on why this is a bad idea, so we agreed on an environment variable FORCE_SOURCE_DATE for them to implement their desired behaviour, named generically so that at least we can set it centrally. For more details, see the text just linked. However, we strongly urge most build tools not to use this, and instead obey SOURCE_DATE_EPOCH unconditionally in all cases.

Toolchain fixes

Packages fixed

The following 16 packages have become reproducible due to changes in their build-dependencies: apertium-dan-nor apertium-swe-nor asterisk-prompt-fr-armelle blktrace canl-c code-saturne coinor-symphony dsc-statistics frobby libphp-jpgraph paje.app proxycheck pybit spip tircd xbs

The following 5 packages are new in Debian and appear to be reproducible so far: golang-github-bowery-prompt golang-github-pkg-errors golang-gopkg-dancannon-gorethink.v2 libtask-kensho-perl sspace

The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed:

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Package reviews

68 reviews have been added, 19 have been updated and 28 have been removed in this week. New and updated issues: 26 FTBFS bugs have been reported by Chris Lamb, 1 by Santiago Vila and 1 by Sascha Steinbiss.

diffoscope development

strip-nondeterminism development

disorderfs development

tests.reproducible-builds.org

Misc.

Steven Chamberlain submitted a patch to FreeBSD's makefs to allow reproducible builds of the kFreeBSD installer. Ed Maste committed a patch to FreeBSD's binutils to enable deterministic archives by default in GNU ar. Helmut Grohne experimented with cross+native reproductions of dash with some success, using rebootstrap. This week's edition was written by Ximin Luo, Chris Lamb, Holger Levsen, Mattia Rizzolo and reviewed by a bunch of Reproducible builds folks on IRC.

30 May 2016

Reproducible builds folks: Reproducible builds: week 57 in Stretch cycle

What happened in the Reproducible Builds effort between May 22nd and May 28th 2016:

Media coverage

Documentation update

Toolchain fixes

Packages fixed

The following 18 packages have become reproducible due to changes in their build dependencies: canl-c configshell dbus-java dune-common frobby frown installation-guide jexcelapi libjsyntaxpane-java malaga octave-ocs paje.app pd-boids pfstools r-cran-rniftilib scscp-imcce snort vim-addon-manager

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Package reviews

123 reviews have been added, 57 have been updated and 135 have been removed in this week. 21 FTBFS bugs have been reported by Chris Lamb and Santiago Vila.

strip-nondeterminism development

tests.reproducible-builds.org

Misc.

This week's edition was written by Reiner Herrmann and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

17 May 2016

Reproducible builds folks: Reproducible builds: week 55 in Stretch cycle

What happened in the Reproducible Builds effort between May 8th and May 14th 2016:

Documentation updates

Toolchain fixes

Packages fixed

The following 28 packages have become newly reproducible due to changes in their build dependencies: actor-framework ask asterisk-prompt-fr-armelle asterisk-prompt-fr-proformatique coccinelle cwebx d-itg device-tree-compiler flann fortunes-es idlastro jabref konclude latexdiff libint minlog modplugtools mummer mwrap mxallowd mysql-mmm ocaml-atd ocamlviz postbooks pycorrfit pyscanfcs python-pcs weka

The following 9 packages had older versions which were reproducible, and their latest versions are now reproducible again due to changes in their build dependencies: csync2 dune-common dune-localfunctions libcommons-jxpath-java libcommons-logging-java libstax-java libyanfs-java python-daemon yacas

The following packages have become newly reproducible after being fixed:

The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Package reviews

344 reviews have been added, 125 have been updated and 20 have been removed in this week. 14 FTBFS bugs have been reported by Chris Lamb.

tests.reproducible-builds.org

Misc.

Dan Kegel sent a mail to report about his experiments with a reproducible dpkg PPA for Ubuntu. According to him, sudo add-apt-repository ppa:dank/dpkg && sudo apt-get update && sudo apt-get install dpkg should be enough to get reproducible builds on Ubuntu 16.04. This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

30 April 2016

Daniel Stender: What I've worked on for Debian this month

This month I've worked on the following things for Debian: To begin with, I've set up a Debhelper sequencer script for dh-buildinfo1; this add-on can now be used with dh $@ --with buildinfo in deb/rules instead of having to explicitly call it somewhere in an override.

Debops

I've set up initial Debian packages of Debops2, a collection of finely crafted Ansible roles and playbooks especially for Debian servers, shipped with a couple of convenience and wrapper scripts in Python3. There are two binary packages, one for the toolset (debops), and the other for the playbooks and roles of the project (debops-playbooks). The application is easy to use: just initialize a new project with debops-init foo and add your server(s) to foo/ansible/inventory/hosts, belonging to groups representing services and things you want to employ on them. For example, the group [debops_gitlab] automatically installs a complete running Gitlab setup on one or a multitude of servers in the same run with the debops command4. Use other groups like [debops_mariadb_server] accordingly in the same host inventory. Ansible runs agentless, so you don't have to prepare freshly set-up servers with anything special to use the tool ad hoc (like on localhost). The list of things you could deploy with Debops is quite amazing, and you've got dozens of services at your hand. The new packages are currently in experimental because they need some more fine-tuning; for example, there are a couple of minor error messages which recently occur when using it, but it works well. The (early-stage) documentation unfortunately couldn't be packaged because of the scattered, or rather distributed, nature of the project (all parts have their own Github repositories)5, and how to generate the upstream tarball also remains a bit of a challenge (currently, it's the outcome of debops-init)6. I'll have this package in unstable soon. More info on Debops is coming up, then.
Hashicorp's Packer

I'm very glad to announce that Packer7 is now available in unstable, and the two-year-old RFP bug could finally be closed8. It's another great and very convenient devops tool which does a lot of different things in an automated fashion using only a single "one-argument" CLI tool in combination with a couple of lines in a configuration script (thanks to Yaroslav Halchenko for the tip). Packer helps create machine images for different platforms. This is like when you use e.g. Debian installations in a Qemu box for testing or development purposes. Instead of setting up a new virtual machine manually, like installing Debian on another computer, this process can be automated with Packer, as I've written about in this blog entry here9. You just need a template containing instructions for the included Qemu builder and a preseeding script for the Debian installer, and there you go drinking your coffee while Packer does all the work for you: downloading the installation ISO image, creating the new virtual harddrive, booting the emulator, running the whole installation process automatically (answering questions, selecting things, rebooting without the ISO image to complete the installation etc.). A couple of minutes and you have a new pre-baked virtual machine image as if from a vending machine, a fresh one every time you need it. Packer10 supports a number of builders for different target platforms (desktop virtualization solutions as much as public cloud providers and private cloud software), can build in parallel, and the full range of common provisioners can also be employed in the process to equip the newly installed OSs. Vagrant boxes can be generated by one of the included postprocessors. I'll write more on Packer here on this blog, soon. There were more than two dozen packages missing to complete Packer11, which is the achievement of combined forces within the pkg-go group. Much thanks especially to Alexandre Viau, who has worked on most of the needed new packages. Thanks also to the FTP masters, who were always very quick in reviewing the Go packages, so that the dependent new ones could always be built and packaged consecutively.

Squirrel3

I didn't have much work with this one and just sponsored it for Fabian Wolff, but I want to highlight here that there's a new package of Squirrel12 now available in Debian13. Squirrel is a lightweight scripting language, somewhat comparable to Lua. It's fully object-oriented and highly embeddable; it's used under the hood in a lot of commercial computer games for implementing intelligence for bots, next to other things14, but also for the Internet of Things (it's embedded in hardware from Electric Imp). Squirrel functions can be called from C++15. I filed an ITP bug for Squirrel already in 2011 (#651195), but something else always got in the way, and it ended up being an RFP. I'm really glad that it got picked up and completed.

misc

There were a couple of uploads of updated upstream tarballs and bug fixes, namely afl/2.10b-1 and 2.11b-1, python-afl/0.5.3-1, pyutilib/5.3.2-1, pyomo/4.3.11327-1, libvigraimpex/1.10.0+git20160211.167be93dfsg-2 (fix of #820429, thanks to Tobias Frost), and gamera/3.4.2+svn1454-1. For the pkg-go group, I've set up a new package of github-mitchellh-ioprogress (which is needed by the official DigitalOcean CLI tool doctl, now RFP #807956 instead of ITP due to the lack of time, again facing a lot of missing packages), and provided a little patch for dh-make-golang updating some standards16.
For Packer I've also updated azure-go-autorest and azure-sdk as team upload (#821938, #821832), but it came out that the project which is currently under heavy development towards a new official release broke a lot in the past weeks (and no Git branching have been used), so that Packer as a matter of fact needed a vendored snapshot, although there have been only a couple of commits in between. Docker-registry hat the same problem with the new package of azure-sdk/2.1.1~beta1, so that it needed to be fixed, too (#822146). By the way, the tool ratt17 comes very handy for automatically test building down all reverse dependencies, not only for Go packages (thanks to Tianon Gravi for the tip). Finally, I've posted the needed reverse depencies as RFP bugs for Terraform18 (again quite a lot), Vuls19, and cve-dictionary20, which is needed for Vuls. I'll let them rest a while waiting to get picked up before working anything down.

Daniel Stender: My work for Debian in April

This month I've worked on these things for Debian: To begin with, I've set up a Debhelper sequencer script for dh-buildinfo1, so this add-on can now be used with dh $@ --with buildinfo in deb/rules instead of having to be called explicitly somewhere in an override. Debops I've set up initial Debian packages of Debops2, a collection of finely crafted Ansible roles and playbooks especially for Debian servers (servers which run on Debian), which are shipped with a couple of helper and wrapper scripts written in Python3. There are two binary packages, one for the toolset (debops), and the other for the playbooks and roles of the project (debops-playbooks). The application is easy to use: just initialize a new project with debops-init foo and add your server(s) to foo/ansible/inventory/hosts, assigning them to groups that represent the services and things you want to deploy on them. For example, the group [debops_gitlab] installs a complete running Gitlab setup on one or a multitude of servers in a single run of the debops command4. Other groups like [debops_mariadb_server] can be used accordingly in the same host inventory. Ansible works agentless, so freshly set-up servers need no special preparation before you can use the tool on them (the same goes for localhost). The list of things you can deploy with Debops is quite amazing, and dozens of services are at hand. The new Debian packages are currently in experimental because they need some more fine-tuning, e.g. there are a couple of minor error messages which currently occur when using it, but it works well. The (early-stage) documentation unfortunately couldn't be packaged because of the scattered, or rather collective, nature of the project (all parts have their own Github repositories)5, and how to generate the upstream tarball also remains a bit of a challenge (currently, it's the outcome of debops-init)6. I'll have this package in unstable soon. More info on Debops is coming up, then. 
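To give an idea of the workflow described above, a Debops host inventory might look like this (a minimal sketch; the hostnames are hypothetical, the group names are the ones mentioned above):

```ini
; foo/ansible/inventory/hosts
; Install a complete GitLab setup on this host:
[debops_gitlab]
gitlab.example.org

; ...and MariaDB servers on these:
[debops_mariadb_server]
db1.example.org
db2.example.org
```

Running debops inside the project directory would then apply the matching roles to each group.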
HashiCorp's Packer I'm very glad to announce that Packer7 is now available in unstable, and the RFP bug could finally be closed after I had taken it over8. It's another great and very convenient devops tool which does a lot of different things in an automated fashion, using only a single "one-argument" CLI tool in combination with a couple of lines in a configuration file (thanks to Yaroslav Halchenko for the tip). Packer helps create machine images for different platforms. Think of using e.g. Debian installations in a Qemu box for testing or development purposes: instead of setting up a new virtual machine manually, the same way as installing Debian on another computer, this process can be completely automated with Packer, as I've written about in this blog entry here9. You just need a template which contains instructions for the included Qemu builder and a preseeding script for the Debian installer, and there you go drinking your coffee while Packer does all the work: download the ISO image for installation, create the new virtual harddrive, boot the emulator, run the whole installation process automatically (answering questions, selecting things, rebooting without the ISO image to complete the installation etc.). A couple of minutes and you have a new pre-baked virtual machine image, as if from a vending machine; another fresh one can be created anytime. Packer10 supports a number of builders for different target platforms (desktop virtualization solutions as well as public cloud providers and private cloud software), can build in parallel, and the full range of common provisioners can be employed in the process to equip the newly installed OSs with services and programs. Vagrant boxes can be generated by one of the included postprocessors. I'll write more on Packer here on this blog, soon. There were more than two dozen packages missing to complete Packer11; getting them in is the achievement of combined forces within the pkg-go group. 
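As a rough sketch, a trimmed-down template for the Qemu builder could look like this (all URLs, checksums and credentials here are hypothetical placeholders; the key names follow Packer's Qemu builder documentation, so treat this as an illustration rather than a working template):

```json
{
  "builders": [{
    "type": "qemu",
    "iso_url": "http://example.org/debian-netinst.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "0123456789abcdef...",
    "http_directory": "preseed",
    "boot_command": [
      "<esc><wait>",
      "auto url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg<enter>"
    ],
    "ssh_username": "packer",
    "ssh_password": "packer",
    "shutdown_command": "sudo poweroff"
  }]
}
```

A preseed.cfg served from the http_directory answers the Debian installer's questions, and a single packer build template.json run performs the download, installation and image-export steps described above.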
Much thanks especially to Alexandre Viau, who worked on most of the needed new packages. Thanks also to the FTP masters, who were always very quick in reviewing the Go packages, so that the dependent new ones could be built and packaged consecutively. Squirrel I didn't do the major work on this one and just sponsored it for Fabian Wolff, but I want to highlight here that there's a new package of Squirrel12 now available in Debian13. Squirrel is a lightweight scripting language, somewhat comparable to Lua. It's fully object-oriented and highly embeddable; it's used under the hood in a lot of commercial computer games for implementing intelligence for bots, next to other things14, but also for the Internet of Things (it's embedded in hardware from Electric Imp). Squirrel functions can be called from C++15. I had filed an ITP bug for Squirrel already in 2011 (#651195), but always something else had a higher priority, and it ended up being an RFP. I'm really glad that it got picked up and completed quickly afterwards. misc There were a couple of uploads of updated upstream tarballs and for fixing bugs, namely afl/2.10b-1 and 2.11b-1, python-afl/0.5.3-1, pyutilib/5.3.2-1, pyomo/4.3.11327-1, libvigraimpex/1.10.0+git20160211.167be93dfsg-2 (fix of #820429, thanks to Tobias Frost), and gamera/3.4.2+svn1454-1. For the pkg-go group, I've set up a new package of github-mitchellh-ioprogress (which is needed by the official DigitalOcean CLI tool doctl, now RFP #807956 instead of ITP due to lack of time; again, a lot of packages are missing for that), and provided a little patch for dh-make-golang updating some standards16. 
For Packer I've also updated azure-go-autorest and azure-sdk as team uploads (#821938, #821832), but it turned out that the project, which is currently under heavy development towards a new official release, broke a lot in the past weeks (no Git branching has been used), so that Packer as a matter of fact needed a vendored snapshot, although there have been only a couple of commits in between. Docker-registry has the same problem with the new package of azure-sdk/2.1.1~beta1, so that needed to be fixed, too (#822146). By the way, the tool ratt17 comes in very handy for automatically test-building all reverse dependencies, not only for Go packages (thanks to Tianon Gravi for the tip). Finally, I've posted the needed reverse dependencies as RFP bugs for Terraform18 (again quite a lot), Vuls19, and cve-dictionary20, which is needed for Vuls. I'll let them rest a while, waiting for them to get picked up, before working through them myself.

27 April 2016

Niels Thykier: auto-decrufter in top 5 after 10 months

About 10 months ago, we enabled an auto-decrufter in dak. Then after 3 months it had become the top 11th "remover". Today, there are only 3 humans left that have removed more packages than the auto-decrufter. Impressively enough, one of them is not even an active FTP-master (anymore). The current scoreboard:
 5371 Luca Falavigna
 5121 Alexander Reichle-Schmehl
 4401 Ansgar Burchardt
 3928 DAK's auto-decrufter
 3257 Scott Kitterman
 2225 Joerg Jaspert
 1983 James Troup
 1793 Torsten Werner
 1025 Jeroen van Wolffelaar
  763 Ryan Murray
For comparison, here is the number of removals by year for the past 6 years:
 5103 2011
 2765 2012
 3342 2013
 3394 2014
 3766 2015  (1842 removed by auto-decrufter)
 2845 2016  (2086 removed by auto-decrufter)
Which tells us that in 2015, the FTP masters and the decrufter performed on average over 10 removals a day. And by the looks of it, 2016 will surpass that. Of course, the auto-decrufter has a tendency to increase the number of removed items, since it is an advocate of "remove early, remove often!" :) Data is from https://ftp-master.debian.org/removals-full.txt. Scoreboard computed as:
  grep ftpmaster: removals-full.txt | \
   perl -pe 's/.*ftpmaster:\s+//; s/\]$//;' | \
   sort | uniq -c | sort --numeric --reverse | head -n10
Removals by year computed as:
 grep ftpmaster: removals-full.txt | \
   perl -pe 's/.* (\d{4}) \d{2}:\d{2}:\d{2}.*/$1/' | uniq -c | tail -n6
(yes, both could be done with fewer commands)
Filed under: Debian

4 May 2013

Russ Allbery: Review: Democracy at Work

Review: Democracy at Work, by Richard Wolff
Publisher: Haymarket
Copyright: 2012
ISBN: 1-60846-247-1
Format: Kindle
Pages: 220
I've been reading (mostly on-line) and thinking quite a bit lately about workplace governance models, economic structure, and why the current organization of the US workplace bothers me so intensely, partly triggered by reading John Kenneth Galbraith's The Affluent Society. The economic monoculture has made that process particularly frustrating. It's rare to find a discussion, even in the context of organizational strategies that are considered radical, that avoids the standard frame of productivity and business value. Most discussion is long on tactics and short on strategy and examination of goals. Wolff's appearance on Moyers & Company was a rare breath of fresh air, enough so that I grabbed his book shortly afterwards. Wolff is a Marxian economist (meaning that he makes use of Marxist analysis of economics and capitalism while separating them from Marxist politics and Marx's advocacy of revolutionary socialism), which for me was part of the interest. Marxist thought of any branch is not common in the United States; we're regularly deprived of several sides in the international conversation on economic models. I was taught Marxist theory in elementary and high school (in retrospect, surprisingly even-handedly and well, despite the biases of my schooling), but none of the later developments of Marxist thought. I think that's a typical experience here; in the United States, Marxism culminated in Mao and Stalin, and no further development of the underlying theories is ever mentioned. Democracy at Work is subtitled A Cure for Capitalism and does indeed advocate a concrete alternative to capitalist business structures. But this is only the last third of the book (and in some ways the least useful, as I'll discuss in a moment). The first two-thirds of the book is basically a remedial education in modern, as opposed to historic, Marxian economics for US readers like myself who have never heard it before, cast in the context of the current financial crisis. 
This may well be old hat for Europeans, but if you've been wondering what (at least some) modern-day Marxists actually believe, or are saying to yourself "there are modern-day Marxists after the collapse of the Soviet Union?", I recommend this book to your attention. It's an excellent summary, which I read with the delightful feeling of an expanding viewpoint and the discovery of new directions from which to look at a problem. There's quite a bit in this section that's worth thinking about, including another take on the nature of the recent economic collapse and how that fits into a Marxian analysis of capitalist crises. But there was one point in Wolff's explanation that I found particularly helpful. He completely restructured my understanding of the Marxian analysis of worker exploitation and profit allocation. There are two angles of Marxist economic thought and socialist economics that get a great deal of attention, at least in the United States, in history and economics classes: the role (or lack thereof) of markets in price setting, and the ownership of the means of production. Defenders of capitalism like to focus on the former, since it's quite easy to identify the advantages of theoretical free markets in finding ideal prices and balancing supply and demand, whereas central planning of prices and production has resulted in some catastrophic and deadly failures. (Although I will note, with passing interest, that those failures predate large-scale computing, and there are now large corporations that manage budgets larger than some countries via centralized command-and-control economic practices.) Defenders of socialism are more likely to focus on the ownership of the means of production, since it's easy to show prima facie unfairness in owners of capital extracting vast profits without having to do any work themselves, only be lucky enough to start with large quantities of money. 
Wolff, however, argues that both of these focuses miss a core critique by Marx of the workplace structure in capitalism, and that, by ignoring that critique, supposedly Marxist countries did not create anything that was actually Marxist in implementation. The Soviet Union was just as much a capitalist country as the United States is. It was state capitalism rather than private capitalism, but the core capitalist structure was intact. Wolff arrives at this conclusion, which may be well-trodden ground in parts of the world that include active Marxist thought but which was quite startling to this American, by treating the ownership of capital as a partial distraction. He focuses on a more direct and practical question: who determines what a worker does on a day-to-day basis and how that product is used? Who determines what profits are collected and how they are spent? In private capitalism, this is done by the owners of the capital: large shareholders, major investors, and the managerial class that they hire. In Soviet state capitalism, this is done by national politicians, bureaucrats, and the managerial class that they hire. In neither model is it done by the workers themselves. The Soviet model gives theoretical ownership of the capital to the workers, but that ownership is diffused, centralized, and politicized, redirected through the mechanisms of the state, and therefore is effectively ignored. Ownership and control is entirely captured by the political class. Both of these systems are capitalist in Wolff's view of Marx: there is a class of owners and managers, who control the terms and nature of work and who allocate the profits, and a class of workers, who have to do what they're told, are not paid full value for their labor, and don't have a say in how the profits their work generates are spent. At the most important level of day-to-day autonomy and empowerment, they are functionally identical. 
They are both equally hierarchical and exploitative; the only difference is in whether the system is controlled by rich individuals or by well-connected politicians. (And, as any study of modern politics quickly reveals, the distinction between those two groups in most countries is murky at best.) Wolff convincingly recasts modern economic history as a constant pendulum swing between private capitalism and state capitalism. Crises in one system push countries towards the other system; subsequent crises push the country back towards the first system. Regulation grows and shrinks, companies are nationalized and then privatized, but both systems are united in excluding the worker from any meaningful control over their work life. The first two-thirds of the book was full of insights like this for me. I didn't agree with all of it, but all of it was worthwhile and thought-provoking. But I was a bit leery of Wolff's proposed solution. My past experience with critics of capitalism is that the critique is often quite compelling, but the proposed solution is much less believable. And, sadly, that concern was warranted here as well. The core of Wolff's proposal is predictable but possibly sound: a restructuring of the workplace to be radically democratic. The business would be owned entirely and exclusively by the people who work for it, equally regardless of the job of the worker, and the workers would decide democratically or via elected representatives from among the workers how to allocate the profits of the business, what standards and business practices should be followed, and how the work is to be done. I was particularly interested to hear that this model (workers' self-directed enterprises) has apparently been successful in Spain in the form of the Mondragon Cooperative. Given all the tricky, small details that have to be resolved in an actual workplace, an existence proof is worth more than pounds of theory. 
Unfortunately, like a lot of proposed alternatives, Wolff's description of WSDEs is quite fuzzy and involves a lot of hand-waving. I was never able to build, from this book, a coherent and complete mental model of how such a workplace would function. Wolff tends to surround every specific in a halo of contingencies, possibilities, and alternative models, and is maddeningly nonspecific on such practical matters as how line management would work, how such a business would do financial planning or project approval, how competing interests in different parts of the organization would be balanced, and other practical governance matters that fill my work life. Maybe the answer to all of that is just "democracy," but I'm dubious. Democracy has a number of well-known flaws that I thought weren't adequately addressed. For example, democracies are often quite happy to further and reinforce existing prejudice (such as sexism or racism), and are prone to yielding control to the most charismatic. Democracies also have an informed voter problem, which seems like it would be particularly acute if democracy is going to make detailed business decisions. And, for larger organizations, control by pre-existing money could re-enter the equation via propaganda and campaigning around votes. Some of this gap in the book could be addressed via a more in-depth look at how Mondragon and any other real-life examples work, but that is sadly missing here. (I am interested enough now, though, that I'd read a good popular treatment of the history and methodology of Mondragon, although I don't think I'm up to working through an academic study.) The hierarchical, dictatorial management structure imposed by capitalism is so awful that WSDEs don't have a particularly high bar to meet to be fairer and more empowering than what we have today. 
The question, rather, is whether they would function well enough for the business to make effective decisions, and that's unclear to me from this book. This is, as Wolff spends some time discussing, particularly difficult when in direct competition with capitalist enterprises. This sort of endeavor will probably trade some degree of economic efficiency and raw marketplace power for improvements to fairness and empowerment, but that means it's going to require support from the surrounding society, which is a huge obstacle. I doubt there's a free lunch here: true fairness and equity also leading to improved economic efficiency in a capitalist context is a nice dream, but not horribly practical. Wolff also seems to suffer from something else I associate with Marxist thought: a preferential focus on a very narrowly-defined type of productive worker, apparently left over from Marx's original critique in the context of industrialization. Wolff inserts a very odd and quite awkward distinction in his WSDE model between workers who directly produce the product that's sold, and who in his model therefore produce the profits of the business, and all other workers in the business. He then gives special privileges to the former group to decide how much profit to return in worker compensation and how much to use for other purposes, thus making the supporting workers second-class citizens within this supposedly equal workplace. Speaking as someone who works in IT, and hence would be classified by Wolff into the supporting rather than directly productive category, I do not find this division at all convincing, and Wolff never provides a coherent explanation for why he introduced it. He only says that it's necessary for the governance of the business to not be exploitative, which seems to assume that there is a special economic role played by the workers who work directly on a saleable product. 
Maybe there is some analysis that could convince me of this, but, if so, it's not present in this book. It struck me as a recipe for continuing the exploitation of the most invisible and powerless workers in capitalism: janitors, groundskeepers, and other low-paid service jobs. I wish the whole book were as insightful and pointed as the first two-thirds, but alas I found the WSDE discussion to be somewhat muddled and utopian. "How do we get there from here" is always the hardest part of this type of discussion, and Wolff has no special skill in that department. But, despite that, I got a lot of fascinating ideas and new conceptual frameworks out of this book, and I'm tempted to read it again. I suspect some of this, similar to my discovery of promises and infinite streams in programming, is filling in of odd gaps in my personal education rather than a discovery of unusual, new ideas. But if you too have gotten your political education within the US capitalism über alles bubble, this book may fill in similar gaps in your knowledge. If so, it's a very rewarding experience. If you're curious about a preview of Wolff's perspective without paying for the book, I recommend watching the first episode of Moyers & Company in which he appeared. Wolff is a clear and engaging speaker, and his interview provides a good feel for his discussion style and his general perspective. Rating: 8 out of 10

13 March 2011

Lars Wirzenius: DPL elections: candidate counts

Out of curiosity, and because it is Sunday morning and I have a cold and can't get my brain to do anything tricky, I counted the number of candidates in each year's DPL elections.
Year Count Names
1999 4 Joseph Carter, Ben Collins, Wichert Akkerman, Richard Braakman
2000 4 Ben Collins, Wichert Akkerman, Joel Klecker, Matthew Vernon
2001 4 Branden Robinson, Anand Kumria, Ben Collins, Bdale Garbee
2002 3 Branden Robinson, Raphaël Hertzog, Bdale Garbee
2003 4 Moshe Zadka, Bdale Garbee, Branden Robinson, Martin Michlmayr
2004 3 Martin Michlmayr, Gergely Nagy, Branden Robinson
2005 6 Matthew Garrett, Andreas Schuldei, Angus Lees, Anthony Towns, Jonathan Walther, Branden Robinson
2006 7 Jeroen van Wolffelaar, Ari Pollak, Steve McIntyre, Anthony Towns, Andreas Schuldei, Jonathan (Ted) Walther, Bill Allombert
2007 8 Wouter Verhelst, Aigars Mahinovs, Gustavo Franco, Sam Hocevar, Steve McIntyre, Raphaël Hertzog, Anthony Towns, Simon Richter
2008 3 Marc Brockschmidt, Raphaël Hertzog, Steve McIntyre
2009 2 Stefano Zacchiroli, Steve McIntyre
2010 4 Stefano Zacchiroli, Wouter Verhelst, Charles Plessy, Margarita Manterola
2011 1 Stefano Zacchiroli (no vote yet)
Winners indicated by boldface. I expect Zack to win over "None Of The Above", so I went ahead and boldfaced him already, even though there has not been a vote for this year yet. The median number of candidates is 4.

16 September 2009

Jeroen van Wolffelaar: How I'm going to recall the Dunc-Tank saga

Percentage of Debian Developers preferring new elections to having AJ as DPL in March/April: 17.3%. Percentage of Debian Developers preferring new elections to having AJ as DPL in October: 14.8%. That's right: contrary to what one would expect from merely reading the mailinglists, an even larger majority now finds AJ acceptable as DPL. This does not render the objections of the couple dozen DDs irrelevant though; diversity is a great good. However, diversity can easily escalate into divisiveness when different opinions, or more accurately the ways they are voiced, start to severely hinder activity by others. I myself have been quite demotivated by the whole discussion, as has happened to me multiple times before. We Developers do not seem to be able to have a constructive discussion without getting extremely heated up. I believe that this inability to hold discussions about tricky subjects hurts the project far more than any of the "tricky subjects" themselves. Actually, the tidbit in my DPL-elections platform about communication among Debian contributors is still equally relevant. Several potential improvements have been proposed in the past, including debating by wiki, debating inside a smaller taskforce, or even having a representative democracy. Let's explore the representative-democracy principle a bit further: in general, I consider making decisions by GR (or, to draw a parallel to country government, referendum) bad: a majority of voters doesn't have adequate and balanced information to make a reasonable decision; that takes quite some research. Instead, any discussion (vote winning?) is going to depend significantly on emotions. Take the voting-down of the European constitution by France and the Netherlands: hardly anyone had adequate information to really make a tradeoff, still quite a number of people voted, and supposedly one of the most prevalent considerations was fear of this "Europe" thingy. 
Therefore, I'd really prefer that issues like firmware are dealt with by a smallish team of people who can really look into the issue well, consider all arguments either way, and make a decision. As with every decision in Debian, it can be overturned by GR, but still. Note that we don't need any constitutional change for this: the DPL already has the power to delegate a person or group of people to make some decision that nobody else is specifically responsible for already. A complicating factor, though, is that it's hard to compose such a taskforce in a way that will have the project's support once a discussion is 'heated up'. In other news, rumour has it that -private has blown up today. I'm so looking forward to opening that mailfolder.

Jeroen van Wolffelaar: Priorities of distributions

Compare:

Jeroen van Wolffelaar: Phase change

It's done: today I heard about the last (passing) mark for the last course I need to follow for my Master in Applied Computing Science, at Utrecht University: a paper about RFID security and privacy as part of the Cryptography course. Besides some small paper still due for (ahum) my Bachelor, the only thing left is my thesis with associated research. Today I got an invitation for my first working day at ORTEC in Gouda, next month, where I'll be doing research (and implementation) on some exciting new route planning algorithms. It'll require some getting used to: five days a week, 8 hours a day, for the rest of the year. One of the biggest challenges will be getting up on time every single day, but I'm sure I'll be fine with that eventually. I have confidence that the interesting research will make up for it completely! Already this Friday I'll be going to FOSDEM with Thijs, partly by train, and partly hitchhiking a ride from Joost. Too bad my laptop practically died... But reading mail etc. is not the most important thing at FOSDEM; you can do that at home too. I hope to meet yet again a whole lot of old and new Debian people over there. For the first time in 3 years there will be no talk from me; other duties took too much time lately. I'll need to discover how much time I'll be able to spend on Debian as a full-time employee, but I think it'll make it easier for me to divide my time: I really missed doing Debian stuff from time to time, and I need to fix up a number of neglected areas real soon now. Good night!

Jeroen van Wolffelaar: Summer of Code

June is the last month before summer holidays, and at this phase in my study, courses take a lot of time: practical assignments, seminars to give, multiple exams. And mid-June, I'll be attending DebConf in Edinburgh, including delivering a talk about Mole, the subject of my Google Summer of Code project. In the period of April/May students participating in the Summer of Code project had the time to get more familiar with the project they were going to work for. Because I'm already somewhat familiar with Debian, I opted to organize an Etch release party together with some friends, because, well, what better way is there to get to know the community? During the past two weeks I've been really starting with the project. I updated the wiki page, added a new development page where I keep track of my work listing the subtasks I'm working on and their status. The first month will be mostly work on the internals, but starting with the second month I'll also be working on the web interface, meaning that there should be some stuff to show off. At the moment I'm refactoring some bits of Mole that are currently implemented in a hacky way, in order to make it all as extensible as it should be. This includes defining a config file format for mole tables, so that it's no longer necessary to modify the python code for new tables. It also involves some better way of stacking tables to each other, so that the package file extraction code can be moved away from the core of mole into a separate worker to keep mole itself pure and simple and generally applicable. I'll post irregular updates to my blog. I'm looking forward to seeing a lot of you again this Thursday, when I'll arrive in EDI!

Jeroen van Wolffelaar: 40 minutes

Yesterday I had one of the more stressful and eventful 40 minutes of my life... events basically initiated by the fact that I arrived at Schiphol airport 40 minutes before my plane was scheduled to take off. The first thing I noticed was that the queue for security extended beyond the check-in desks next to it, so just that bit would probably take more than forty minutes. Oops. What are all those people doing flying out on Friday noons? The second thing I couldn't help noticing was that the check-in desk was actually already closed, double-oops. Well, the lady that was sitting there anyway concluded after a phonecall that my bag was fine as hand luggage and so that I could still make it. She also queue-jumped me straight to the security X-ray machine and ordered me to run from there on. Well, not so fast, running through security isn't appreciated. Actually, since I planned to check in this bag, I kind of violated this whole anti-liquor^Wliquid silliness with no less than 7 items. The guy that unpacked my luggage let me keep my lens fluid, and forgot to notice 3 of the items, so damage was limited, and I proceeded with my explosives, I mean, toothpaste and such, to the gate. In my hurry to run to the gate, I put my boarding pass in my pocket... and upon arrival at the gate, I noticed my boarding pass was gone, so I went to ask some lady at the desk there about it... and overheard the guy next to her, who was on the phone, mention my last name. "Someone found your passport". Turns out they also had my boarding pass, but I appreciated knowing where my passport was anyway, since obviously I didn't have it on me anymore either. So within a minute I was running off again to a different gate to pick up those two sort-of-important items. You'd say that having once seen my plane taxi away from an empty gate upon returning there from picking up my boarding pass from lost&found for the second time that very morning would've taught me a lesson... But apparently not. 
Back at the gate I was getting past the boarding pass checkpoint, and the person over there saw my pass, and said "Hey, so you're that guy that a bit ago lost this and that at the security checkpoint?" Since that happened at the other end of the airport, gossip must be going pretty fast at Schiphol... At least I've never shown up at the airport on the wrong date (unlike two of my friends I'm meeting over here, who each managed to do so independently)... Yet.

20 July 2008

Christian Perrier: D-I (and related stuff) l10n completion

The last news I gave for D-I and related stuff l10n completion was on June 15th. We're now one month later, so let's see what happened since then. D-I "level 1" (i.e. the core D-I) had several template changes, which brought the translation ratio from 100% down to 99% and below several times, for all languages. Several teams have still been able to cope with that. Other levels had no changes, and I therefore spent some time nagging them to complete the missing bits here and there. The current status is: Changes are still planned for level 1: D-I developers are really active these days, particularly with the great "comeback" of Jérémy Bobbio and the constant QA work by Frans Pop. I also tried to merge in some work done in Ubuntu's Rosetta and then interest the people who translate there in collaborating with D-I "upstream". This led me to merge updates for Icelandic, Kazakh, Afrikaans, Malay, Welsh and Serbian. Contacts were made with the relevant translators: My conclusion about all this is mixed: I'm somewhat sorry to see some work happening in Rosetta, still without any connection to the "upstream" work, with people apparently working without even some internal coordination, and quite anonymously. I wonder if these people know that their work has little chance of ending up anywhere except in Ubuntu (for D-I translation, I even doubt it ends up anywhere: this is just unused work... until I grab it for D-I "upstream").

23 November 2007

Joerg Jaspert: Todays work

Work. No, not at my workplace: for Debian. I decided to modify the ftp-master webpage a little bit, just adding some CSS magic (and the ability to conform with that XHTML 1.0 Strict thing out there). (And to be honest, about 95% of the “work” needed for this was done by Mark Hymers…) But that only happened after something else, which took away most of my time today (and yesterday, and the day before). Namely: a nice overview of pending removals. The main purpose of that overview is of course that ftpteam members doing removals can use it for their work, but I think it may also be nice for users to look there, instead of wading through all the bug reports against ftp.debian.org, which include bugs about totally different topics than removals. (And then I am lazy and want a command line to paste… :) ) The design for that page was done by Martin Ferrari AKA Tincho (you don’t want to see how my design looked… :) ); the idea for it is from Jeroen van Wolffelaar, who wrote the first version of it in Perl (it is now implemented in Ruby). The removals HTML page is regenerated every hour, using the SOAP interface to the BTS, so at least the bug information should be recent. The other information might contain errors; don’t trust it too far. It should be right, but then, it’s only informational… :) And now: let’s do some of those removals!
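The post doesn’t show how the hourly job talks to the BTS, so here is a rough, hypothetical sketch of the kind of SOAP request involved. The `get_bugs` method and the `Debbugs/SOAP` namespace are the names the Debian BTS service actually exposes; the helper function itself and its argument handling are made up for illustration, and this only builds the request XML rather than sending it anywhere:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
DEBBUGS_NS = "Debbugs/SOAP"  # namespace used by the Debian BTS SOAP service


def build_get_bugs_request(**criteria):
    """Build a SOAP envelope for a Debbugs get_bugs call (illustrative only).

    The live service sits at https://bugs.debian.org/cgi-bin/soap.cgi;
    this sketch constructs the request body without sending it.
    """
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{DEBBUGS_NS}}}get_bugs")
    # Debbugs takes search criteria as alternating key/value arguments,
    # e.g. ("package", "ftp.debian.org") to list removal requests.
    for key, value in criteria.items():
        k = ET.SubElement(call, "arg")
        k.text = key
        v = ET.SubElement(call, "arg")
        v.text = value
    return ET.tostring(envelope, encoding="unicode")


# A query for open bugs against the ftp.debian.org pseudo-package,
# which is where removal requests are filed.
request_xml = build_get_bugs_request(package="ftp.debian.org")
print(request_xml)
```

An hourly cron job could send such a request, pull the matching bug numbers and titles out of the response, and render them into the removals HTML page.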

2 June 2007

Frank Lichtenheld: packages.debian.org status and development

Copy of a mail I sent to debian-www earlier today. I don't think it warrants posting to -devel-announce, but I post it here to make it visible to people not usually following debian-www. Since I seem to sense an increased stream of offers to help out with packages.d.o coming my way (but maybe that is just wishful thinking ;), I wanted to give a short update on the development of the current code and the status of the infrastructure, so that nobody can claim they wanted to help but failed due to lack of information.
