Search Results: "sez"

17 March 2015

Raphaël Hertzog: Freexian's report about Debian Long Term Support, February 2015

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, 58 work hours have been equally split among 4 paid contributors. Their reports are available:

Evolution of the situation

During the last month, we gained 3 paid work hours: we're now at 61 hours per month sponsored by 28 organizations, and we have one supplementary sponsor in the pipeline that should bring 4 more hours. The increase is not very quick but seems to be steady. Hopefully at some point we will have enough resources to do a more exhaustive job. For now, the paid contributors prioritize the most popular packages used by the sponsors, and there are some packages at the end of the queue that have had open security issues for months already (example: CVE-2012-6685 on libnokogiri-ruby). So, as usual, we are looking for more sponsors.

In terms of security updates waiting to be handled, the situation looks a little bit worse than last month: the dla-needed.txt file lists 40 packages awaiting an update (3 more than last month), while the list of open vulnerabilities in Squeeze shows about 58 affected packages in total (5 fewer than last month). We are getting a bit more effective with CVE triage.

A logo for the LTS project?

Every time I write an LTS report, I remember that it would be nice if my LTS-related articles could feature a nice picture/logo that reminds people of the LTS team/initiative. Is there anyone up for the challenge of creating that logo? :-)

Thanks to our sponsors

The new sponsors of the month are in bold.


12 February 2015

Raphaël Hertzog: Freexian's report about Debian Long Term Support, January 2015

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, 48 work hours have been equally split among 4 paid contributors. Their reports are available:

Evolution of the situation

During the last month, the number of paid work hours has made a noticeable jump: we're now at 58 hours per month. At this rate, we would need 3 more months to reach our minimal goal of funding the equivalent of a half-time position. Unfortunately, the number of new sponsors currently in the process is not likely to be enough to produce a similar rise next month. So, as usual, we are looking for more sponsors.

In terms of security updates waiting to be handled, the situation looks a bit worse than last month: the dla-needed.txt file lists 37 packages awaiting an update (7 more than last month), while the list of open vulnerabilities in Squeeze shows about 63 affected packages in total (7 more than last month). The increase is not too worrying, but the waiting time before an issue is dealt with is sometimes more problematic. To be able to deal with all incoming issues in a timely manner, the LTS team needs more resources: some months will have more issues than usual, some issues will take longer to handle than others, etc.

Thanks to our sponsors

The new sponsors of the month are in bold.


16 January 2015

Raphaël Hertzog: Freexian's fifth report about Debian Long Term Support

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, 46 work hours have been equally split among 4 paid contributors (note that Thorsten and Raphaël have actually spent more hours because they took over some hours that Holger did not do over the previous months). Their reports are available:

Evolution of the situation

Compared to last month, the number of paid work hours has barely increased (we are at 48 hours per month). We still have a couple of new sponsors in the pipeline, but with the new year they have not completed the process yet. Hopefully next month will see a noticeable increase. As usual, we are looking for more sponsors to reach our minimal goal of funding the equivalent of a half-time position. Those of you who are struggling to spend money in the last quarter due to budget overrun: now is a good time to see if you want to include Debian LTS support in your 2015 budget!

In terms of security updates waiting to be handled, the situation looks similar to last month: the dla-needed.txt file lists 30 packages awaiting an update (3 more than last month), while the list of open vulnerabilities in Squeeze shows about 56 affected packages in total. We do not manage to clear the backlog, but it's not getting significantly worse either.

Thanks to our sponsors


16 December 2014

Raphael Geissert: Editing Debian online with sources.debian.net

How cool would it be to fix that one bug you just found without having to download a source package, and without leaving your browser?

Inspired by GitHub's online code editing, during DebConf 14 I worked on integrating an online editor into debsources (the software behind sources.debian.net). Long story short: it is available today, for users of Chromium (or anything supporting Chrome extensions).

After installing the editor for sources.debian.net extension, go straight to sources.debian.net and enjoy!

Go from simple debsources: [screenshot]

To debsources on steroids: [screenshot]
All in all, it brings online editing, patch generation, and the ability to email your patch to the BTS.

Clone it or fork it:
git clone https://github.com/rgeissert/ace-sourced.n.git

For example, head to apt's source code, find a typo and correct it online: open apt.cc, click on edit, make the changes, click on email patch. Yes! It can generate a mail template for sending the patch to the BTS: just add a nice message and your patch is ready to be sent.

Didn't find any typo to fix? How sad; head to codesearch and search Debian for a spelling mistake, click on any result, edit, correct, email! You will have contributed to Debian in less than 5 minutes without leaving your browser.

The editor was meant to be integrated into debsources itself, without the need of a browser extension. This is expected to be done when the requirements imposed by debsources maintainers are sorted out.

Kudos to Harlan Lieberman, who helped debug some performance issues in the early implementations of the integration and worked on the packaging of the Ace editor.

11 December 2014

Raphaël Hertzog: Freexian's fourth report about Debian Long Term Support

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, 42.5 work hours have been equally split among 3 paid contributors. Their reports are available:

New paid contributors

Last month we mentioned the possibility of recruiting more paid contributors to better share the workload, and this has already happened: Ben Hutchings and Mike Gabriel join the list of paid contributors. Ben, as a kernel maintainer, will obviously take care of releasing Linux security updates. We are glad to have him on board, because backporting kernel fixes really requires skills that nobody else had within the team of paid contributors.

Evolution of the situation

Compared to last month, the number of paid work hours has barely increased (we are at 45.7 hours per month), but we are in the process of adding a few more sponsors: Roche Diagnostics International AG, Misal-System, Bitfolk LTD. And we are still in contact with a couple of other companies which have announced their willingness to contribute but which are waiting for the new fiscal year. Even with those new sponsors, we still have some way to go to reach our minimal goal of funding the equivalent of a half-time position. So consider asking your company representative to join this project!

In terms of security updates waiting to be handled, the situation looks better than last month: the dla-needed.txt file lists 27 packages awaiting an update (6 fewer than last month), while the list of open vulnerabilities in Squeeze shows about 58 affected packages in total. Like last month, we're a bit behind in terms of CVE triaging, and there are still many packages using SSLv3 where we have no clear plan (in response to the POODLE issues). The good side is that even though the kernel update consumed a large chunk of Holger's and Raphaël's time, we still managed to further reduce the backlog of security issues.

Thanks to our sponsors


12 November 2014

Raphaël Hertzog: Freexian's third report about Debian Long Term Support

Like last month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October 2014, we allocated 13.75 work hours to 3 contributors: Obviously, only the hours actually worked have been paid. Should the backlog grow further, we will look for more paid contributors (to share the workload) and make it easier to redistribute work hours when a contributor knows that they won't be able to handle the hours that were allocated to them.

Evolution of the situation

Compared to last month, we gained two new sponsors (Daevel and FOSSter, thanks to them!) and we now have 45.5 hours of paid LTS work to spend each month. That's great, but we are still far from our minimal goal of funding the equivalent of a half-time position.

In terms of security updates waiting to be handled, the situation is a bit worse than last month: while the dla-needed.txt file only lists 33 packages awaiting an update (6 fewer than last month), the list of open vulnerabilities in Squeeze shows about 60 affected packages in total. This difference has two explanations: CVE triaging for Squeeze has not been done in the last few days, and the POODLE issue(s) with SSLv3 affect a very large number of packages where it's not always clear what the proper action is. In any case, it's never too late to join the growing list of sponsors and help us do a better job; please check with your company managers. If it's not possible for this year, consider including it in the budget for next year.

Thanks to our sponsors

Let me thank our main sponsors:


15 October 2014

Raphaël Hertzog: Freexian's second report about Debian Long Term Support

Like last month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September 2014, 3 contributors have been paid for 11 hours each. Here are their individual reports:

Evolution of the situation

Compared to last month, we have gained 5 new sponsors, which is great. We're now at almost 25% of a full-time position. But we're not done yet. We believe that we would need at least twice as many sponsored hours to do a reasonable job with at least the most used packages, and possibly four times as many to be able to cover the full archive. We're now at 39 packages that need an update in Squeeze (+9 compared to last month), and the contributors paid by Freexian handled 11 during the last month (this gives an approximate rate of 3 hours per update, CVE triage included).

Open questions

Dear readers, what can we do to convince more companies to join the effort? The list of sponsors contains almost exclusively companies from Europe. It's true that Freexian's offer is in euros, but the economy is world-wide and international invoices are common. When Ivan Kohler asked if having an offer in dollars would help convince other companies, we got zero feedback. What are the main obstacles that you face when you try to convince your managers to get the company to contribute? By the way, we prefer that companies make small sponsorship commitments that they can afford over multiple years, rather than granting lots of money now and then not being able to afford it for another year.

Thanks to our sponsors

Let me thank our main sponsors:

10 September 2014

Raphaël Hertzog: Freexian's first report about Debian Long Term Support

When we set up Freexian's offer to bring together funding from multiple companies in order to sponsor the work of multiple developers on Debian LTS, one of the rules that I imposed is that all paid contributors must provide a public monthly report of their paid work. While the LTS project officially started in June, the first month where contributors were actually paid was July. Freexian sponsored Thorsten Alteholz and Holger Levsen for 10.5 hours each in July and for 16.5 hours each in August. Here are their reports:

It's worth noting that Freexian sponsored Holger's work to fix the security tracker to support squeeze-lts. It's my belief that using the money of our sponsors to make it easier for everybody to contribute to Debian LTS is money well spent.

As evidenced by the progress bar on Freexian's offer page, we have not yet reached our minimal goal of funding the equivalent of a half-time position. And it shows in the results: the dla-needed.txt file still shows around 30 open issues. This is slightly better than the state two months ago, but we can improve a lot on the average time to push out a security update.

To have an idea of the relative importance of the contributions of the paid developers, I counted the number of uploads made by Thorsten and Holger since July: of 40 updates, they took care of 19 of them, so about half. I also looked at the other contributors: Raphaël Geissert stands out with 9 updates (I believe that he is contracted by Électricité de France for doing this), and most of the other contributors look like regular Debian maintainers taking care of their own packages (Paul Gevers with cacti, Christoph Berg with postgresql, Peter Palfrader with tor, Didier Raboud with cups, Kurt Roeckx with openssl, Balint Reczey with wireshark), except Matt Palmer and Luciano Bello who (likely) are volunteer members of the LTS team. There are multiple things to learn here:
  1. Paid contributors already handle almost 70% of the updates. Counting only on volunteers would not have worked.
  2. Quite a few companies that promised help (and got mentioned in the press release) have not delivered the promised help yet (neither through Freexian nor directly).
Last but not least, this project wouldn't exist without the support of multiple companies and organizations. Many thanks to them: Hopefully this list will expand over time! Any help reaching out to new companies and organizations is more than welcome.


27 April 2014

Vincent Bernat: Local corporate APT repositories

Distributing software efficiently across your platform can be difficult. Every distribution comes with a package manager which is usually suited for this task. APT can be relied upon when using Debian or a derivative. Unfortunately, the official repositories may not contain everything you need. When you require unpackaged software or more recent versions, it is possible to set up your own local repository. Most of what is presented here was set up for Dailymotion and was greatly inspired by the work done by Raphaël Pinson at Orange.

Setting up your repositories

There are three kinds of repositories you may want to set up:
  1. A distribution mirror. Such a mirror will save bandwidth, provide faster downloads and permanent access, even when someone searches Google on Google.
  2. A local repository for your own packages with the ability to have a staging zone to test packages on some servers before putting them in production.
  3. Mirrors for unofficial repositories, like Ubuntu PPA. To avoid unexpected changes, such a repository will also get a staging and a production zone.
Before going further, it is quite important to understand what a repository is. Let's illustrate with the following line from my /etc/apt/sources.list:
deb http://ftp.debian.org/debian/ unstable main contrib non-free
In this example, http://ftp.debian.org/debian/ is the repository and unstable is the distribution. A distribution is subdivided into components. We have three components: main, contrib and non-free. To set up repositories, we will use reprepro. This is not the only solution, but it has a good balance between versatility and simplicity. reprepro can only handle one repository. So, the first choice is about how you will split your packages into repositories, distributions and components. Here is what matters:
  • A repository cannot contain two identical packages (same name, same version, same architecture).
  • Inside a component, you can only have one version of a package.
  • Usually, a distribution is a subset of the versions while a component is a subset of the packages. For example, in Debian, with the distribution unstable, you choose to get the most recent versions while with the component main, you choose to get DFSG-free software only.
If you go for several repositories, you will have to handle several reprepro instances and won't be able to easily copy packages from one place to another. At Dailymotion, we put everything in the same repository, but it would also be perfectly valid to have three repositories:
  • one to mirror the distribution,
  • one for your local packages, and
  • one to mirror unofficial repositories.
Here is our target setup: [diagram: Local APT repository]

Initial setup

First, create a system user to work with the repositories:
$ adduser --system --disabled-password --disabled-login \
>         --home /srv/packages \
>         --group reprepro
All operations should be done with this user only. If you want to set up several repositories, create a directory for each of them. Each repository has these subdirectories (created as sketched after the list):
  • conf/ contains the configuration files,
  • gpg/ contains the GPG stuff to sign the repository[1],
  • logs/ contains the logs,
  • www/ contains the repository that should be exported by the web server.
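For example, creating that layout by hand might look like this (a sketch; the repository directory name dailymotion is illustrative):
$ cd /srv/packages
$ mkdir dailymotion && cd dailymotion
$ mkdir conf gpg logs www
$ chmod 700 gpg    # keep the signing key private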
Here is the content of conf/options:
outdir +b/www
logdir +b/logs
gnupghome +b/gpg
Then, you need to create the GPG key to sign the repository:
$ GNUPGHOME=gpg gpg --gen-key
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 10y
Key expires at mer. 08 nov. 2023 22:30:58 CET
Is this correct? (y/N) y
Real name: Dailymotion Archive Automatic Signing Key
Email address: the-it-operations@dailymotion.com
Comment: 
[...]
By setting an empty passphrase, you allow reprepro to run unattended. You will have to distribute the public key of your new repository to let APT check the archive signature. An easy way is to ship it in some package.
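For example, the public key could be exported to a file for such a package to ship (a sketch; the output file name is illustrative):
$ GNUPGHOME=gpg gpg --armor \
>     --export the-it-operations@dailymotion.com \
>     > dailymotion-archive.asc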

Local mirror of an official distribution

Let's start by mirroring a distribution. We want a local mirror of Ubuntu Precise. For this, we need to do two things:
  1. Set up a new distribution in conf/distributions.
  2. Configure the update sources in conf/updates.
Let s add this block to conf/distributions:
# Ubuntu Precise
Origin: Ubuntu
Label: Ubuntu
Suite: precise
Version: 12.04
Codename: precise
Architectures: i386 amd64
Components: main restricted universe multiverse
UDebComponents: main restricted universe multiverse
Description: Ubuntu Precise 12.04 (with updates and security)
Contents: .gz .bz2
UDebIndices: Packages Release . .gz
Tracking: minimal
Update: - ubuntu-precise ubuntu-precise-updates ubuntu-precise-security
SignWith: yes
This defines the precise distribution in our repository. It contains four components: main, restricted, universe and multiverse (like the regular distribution in the official repositories). The Update line starts with a dash: this means reprepro will mark everything as deleted before updating with the provided sources, so old packages will not be kept when they are removed from Ubuntu. In conf/updates, we define the sources:
# Ubuntu Precise
Name: ubuntu-precise
Method: http://fr.archive.ubuntu.com/ubuntu
Fallback: http://de.archive.ubuntu.com/ubuntu
Suite: precise
Components: main restricted universe multiverse
UDebComponents: main restricted universe multiverse
Architectures: amd64 i386
VerifyRelease: 437D05B5
GetInRelease: no
# Ubuntu Precise Updates
Name: ubuntu-precise-updates
Method: http://fr.archive.ubuntu.com/ubuntu
Fallback: http://de.archive.ubuntu.com/ubuntu
Suite: precise-updates
Components: main restricted universe multiverse
UDebComponents: main restricted universe multiverse
Architectures: amd64 i386
VerifyRelease: 437D05B5
GetInRelease: no
# Ubuntu Precise Security
Name: ubuntu-precise-security
Method: http://fr.archive.ubuntu.com/ubuntu
Fallback: http://de.archive.ubuntu.com/ubuntu
Suite: precise-security
Components: main restricted universe multiverse
UDebComponents: main restricted universe multiverse
Architectures: amd64 i386
VerifyRelease: 437D05B5
GetInRelease: no
The VerifyRelease lines give the fingerprint of the GPG key used to check the remote repository. The key needs to be imported into the local keyring:
$ gpg --keyring /usr/share/keyrings/ubuntu-archive-keyring.gpg \
>     --export 437D05B5 | GNUPGHOME=gpg gpg --import
Another important point is that we merge three distributions (precise, precise-updates and precise-security) into a single distribution (precise) in our local repository. This may cause some difficulties with tools expecting the three distributions to be available (like the Debian Installer[2]). Next, you can run reprepro and ask it to update your local mirror:
$ reprepro update
This will take some time on the first run. You can execute this command every night. reprepro is not the fastest mirror solution, but it is easy to set up, flexible and reliable.
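For example, a nightly crontab entry for the reprepro user might look like this (a sketch; the base directory is the repository created above):
# update the mirror at 02:00 every night
0 2 * * * reprepro -b /srv/packages/dailymotion update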

Repository for local packages

Let's configure the repository to accept local packages. For each official distribution (like precise), we will configure two distributions:
  • precise-staging contains packages that have not been fully tested and not ready to go to production.
  • precise-prod contains production packages copied from precise-staging.
In our workflow, packages are introduced in precise-staging, where they can be tested, and are copied to precise-prod when we want them to be available for production. You can adopt a more complex workflow if you need to. The reprepro part is quite easy. We add the following blocks into conf/distributions:
# Dailymotion Precise packages (staging)
Origin: Dailymotion # (1)
Label: dm-staging   # (1)
Suite: precise-staging
Codename: precise-staging
Architectures: i386 amd64 source
Components: main role/dns role/database role/web # (2)
Description: Dailymotion Precise staging repository
Contents: .gz .bz2
Tracking: keep
SignWith: yes
NotAutomatic: yes # (3)
Log: packages.dm-precise-staging.log
 --type=dsc email-changes
# Dailymotion Precise packages (prod)
Origin: Dailymotion # (1)
Label: dm-prod      # (1)
Suite: precise-prod
Codename: precise-prod
Architectures: i386 amd64 source
Components: main role/dns role/database role/web # (2)
Description: Dailymotion Precise prod repository
Contents: .gz .bz2
Tracking: keep
SignWith: yes
Log: packages.dm-precise-prod.log
First notice that we use several components (marked (2) above):
  • main will contain packages that are not specific to a subset of the platform. If you put a package in main, it should work correctly on any host.
  • role/* are components dedicated to a subset of the platform. For example, in role/dns, we ship a custom version of BIND.
The staging distribution has the NotAutomatic flag (marked (3)), which prevents the package manager from installing these packages unless the user explicitly requests it. Just below, when a new dsc file is uploaded, the hook email-changes will be executed; it should be in the conf/ directory. The Origin and Label lines (marked (1)) are quite important for defining an explicit policy of which packages should be installed. Let's say we use the following /etc/apt/sources.list file:
# Ubuntu packages
deb http://packages.dm.gg/dailymotion precise main restricted universe multiverse
# Dailymotion packages
deb http://packages.dm.gg/dailymotion precise-prod    main role/dns
deb http://packages.dm.gg/dailymotion precise-staging main role/dns
All servers have the precise-staging distribution. We must ensure we won't install those packages by mistake. The NotAutomatic flag is one possible safeguard. We also use a tailored /etc/apt/preferences:
Explanation: Dailymotion packages of a specific component should be more preferred
Package: *
Pin: release o=Dailymotion, l=dm-prod, c=role/*
Pin-Priority: 950
Explanation: Dailymotion packages should be preferred
Package: *
Pin: release o=Dailymotion, l=dm-prod
Pin-Priority: 900
Explanation: staging should never be preferred
Package: *
Pin: release o=Dailymotion, l=dm-staging
Pin-Priority: -100
By default, packages will have a priority of 500. By setting a priority of -100 to the staging distribution, we ensure the packages cannot be installed at all. This is stronger than NotAutomatic which sets the priority to 1. When a package exists in Ubuntu and in our local repository, we ensure that, if this is a production package, we will use ours by using a priority of 900 (or 950 if we match a specific role component). Have a look at the How APT Interprets Priorities section of apt_preferences(5) manual page for additional information. Keep in mind that version matters only when the priority is the same. To check if everything works as you expect, use apt-cache policy:
$ apt-cache policy php5-memcache
  Installed: 3.0.8-1~precise2~dm1
  Candidate: 3.0.8-1~precise2~dm1
  Version table:
 *** 3.0.8-1~precise2~dm1 0
        950 http://packages.dm.gg/dailymotion/ precise-prod/role/web amd64 Packages
        100 /var/lib/dpkg/status
     3.0.8-1~precise1~dm4 0
        900 http://packages.dm.gg/dailymotion/ precise-prod/main amd64 Packages
       -100 http://packages.dm.gg/dailymotion/ precise-staging/main amd64 Packages
     3.0.6-1 0
        500 http://packages.dm.gg/dailymotion/ precise/universe amd64 Packages
If we want to install a package from the staging distribution, we can use apt-get with the -t precise-staging option to raise the priority of this distribution to 990. Once you have tested your package, you can copy it from the staging distribution to the production distribution:
$ reprepro -C main copysrc precise-prod precise-staging wackadoodle

Local mirror of third-party repositories

Sometimes, you want software published in some third-party repository without repackaging it yourself. A common example is the repositories published by hardware vendors. As with the Ubuntu mirror, there are two steps: defining the distribution and defining the source. We chose to put such mirrors into the same distributions as our local packages, but with a dedicated component for each mirror. This way, those third-party packages share the same workflow as our local packages: they appear in the staging distribution, we validate them and we copy them to the production distribution. The first step is to add the components and an appropriate Update line to conf/distributions:
Origin: Dailymotion
Label: dm-staging
Suite: precise-staging
Components: main role/dns role/database role/web vendor/hp
Update: hp
# [...]
Origin: Dailymotion
Label: dm-prod
Suite: precise-prod
Components: main role/dns role/database role/web vendor/hp
# [...]
We added the vendor/hp component to both the staging and the production distributions. However, only the staging distribution gets an Update line (remember, packages will be copied manually into the production distribution). We declare the source in conf/updates:
# HP repository
Name: hp
Method: http://downloads.linux.hp.com/SDR/downloads/ManagementComponentPack/
Suite: precise/current
Components: non-free>vendor/hp
Architectures: i386 amd64
VerifyRelease: 2689B887
GetInRelease: no
Don't forget to add the GPG key to your local keyring. Notice an interesting feature of reprepro: we copy the remote non-free component into our local vendor/hp component. Then, you can synchronize the mirror with reprepro update. Once the packages have been tested, you will have to copy them to the production distribution.
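For example, mirroring the copysrc invocation shown earlier, but targeting the vendor component (a sketch; the package name is illustrative):
$ reprepro -C vendor/hp copysrc precise-prod precise-staging hp-health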

Building Debian packages

Our reprepro setup seems complete, but how do we put packages into the staging distribution? You have several options to build Debian packages for your local repository. It really depends on how much time you want to invest in this activity:
  1. Build packages from source by adding a debian/ directory. This is the classic way of building Debian packages. You can start from scratch or use an existing package as a base. In the latter case, the package can be from the official archive (but for a more recent distribution, or a backport) or from an unofficial repository.
  2. Use a tool that will create a binary package from a directory, like fpm. Such a tool will try to guess a lot of things to minimize your work. It can even download everything for you.
There is no universal solution. If you don't have the time budget for building packages from source, have a look at fpm. I would advise you to use the first approach when possible, because you will get these perks for free:
  • You keep the sources in your repository. Whenever you need to rebuild something to fix an emergency bug, you won't have to hunt for the sources, which may be unavailable when you need them the most. Of course, this only works if you build packages that don't download stuff directly from the Internet.
  • You also keep the recipe[3] to build the package in your repository. If someone enables some option and rebuilds the package, you won't accidentally drop this option on the next build. Those changes can be documented in debian/changelog. Moreover, you can use version control software for the whole debian/ directory.
  • You can propose your package for inclusion into Debian. This will help many people once the package hits the archive.

Builders

We chose pbuilder as a builder[4]. Its setup is quite straightforward. Here is our /etc/pbuilderrc:
DISTRIBUTION=$DIST
NAME="$DIST-$ARCH"
MIRRORSITE=http://packages.dm.gg/dailymotion
COMPONENTS=("main" "restricted" "universe" "multiverse")
OTHERMIRROR="deb http://packages.dm.gg/dailymotion ${DIST}-staging main"
HOOKDIR=/etc/pbuilder/hooks.d
BASE=/var/cache/pbuilder/dailymotion
BASETGZ=$BASE/$NAME/base.tgz
BUILDRESULT=$BASE/$NAME/results/
APTCACHE=$BASE/$NAME/aptcache/
DEBBUILDOPTS="-sa"
KEYRING="/usr/share/keyrings/dailymotion-archive.keyring.gpg"
DEBOOTSTRAPOPTS=("--arch" "$ARCH" "--variant=buildd" "${DEBOOTSTRAPOPTS[@]}" "--keyring=$KEYRING")
APTKEYRINGS=("$KEYRING")
EXTRAPACKAGES=("dailymotion-archive-keyring")
pbuilder is expected to be invoked with DIST, ARCH and optionally ROLE environment variables. Building the initial bases can be done like this:
for ARCH in i386 amd64; do
  for DIST in precise; do
    export ARCH
    export DIST
    pbuilder --create
  done
done
We don't create a base for each role. Instead, we use a D hook to add the appropriate source:
#!/bin/bash
[ -z "$ROLE" ] || {
  cat >> /etc/apt/sources.list <<EOF
deb http://packages.dm.gg/dailymotion ${DIST}-staging role/${ROLE}
EOF
}
apt-get update
We ensure packages from our staging distribution are preferred over other packages by adding an /etc/apt/preferences file in an E hook:
#!/bin/bash
cat > /etc/apt/preferences <<EOF
Explanation: Dailymotion packages are of higher priority
Package: *
Pin: release o=Dailymotion
Pin-Priority: 900
EOF
We also use a C hook to get a shell in case there is an error. This is convenient for debugging a problem:
#!/bin/bash
apt-get install -y --force-yes vim less
cd /tmp/buildd/*/debian/..
/bin/bash < /dev/tty > /dev/tty 2> /dev/tty
A manual build can be run with:
$ ARCH=amd64 DIST=precise ROLE=web pbuilder \
>         --build somepackage.dsc

Version numbering

To avoid applying complex rules to choose a version number for a package, we chose to treat everything as a backport, even in-house software. We use the following scheme: X-Y~preciseZ+dmW.
  • X is the upstream version[5].
  • Y is the Debian version. If there is no Debian version, use 0.
  • Z is the Ubuntu backport version. Again, if such a version doesn t exist, use 0.
  • W is our version of the package. We increment it when we make a change to the packaging. This is the only number we are allowed to control. All the others are set by an upstream entity, unless such an entity doesn't exist, in which case you use 0.
Let's suppose you need to backport wackadoodle. It is available in a more recent version of Ubuntu as 1.4-3. Your first backport will be 1.4-3~precise0+dm1. After a change to the packaging, the version will be 1.4-3~precise0+dm2. A new upstream version 1.5 is available and you need it. You will use 1.5-0~precise0+dm1. Later, this new upstream version will be available in some version of Ubuntu as 1.5-3ubuntu1. You will rebase your changes on this version and get 1.5-3ubuntu1~precise0+dm1. When using Debian instead of Ubuntu, a compatible convention could be: X-Y~bpo70+Z~dm+W.
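You can sanity-check that such versions sort as intended with dpkg --compare-versions, which exits zero when the relation holds:
$ dpkg --compare-versions 1.4-3~precise0+dm1 lt 1.4-3 && echo OK
OK
$ dpkg --compare-versions 1.5-3ubuntu1~precise0+dm1 gt 1.5-0~precise0+dm1 && echo OK
OK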

Uploading

To upload a package, a common setup is the following workflow:
  1. Upload the source package to an incoming directory.
  2. reprepro will notice the source package, check its correctness (signature, distribution) and put it in the archive.
  3. The builder will notice a new package needs to be built and build it.
  4. Once the package is built, the builder will upload the result to the incoming directory.
  5. reprepro will notice again the new binary package and integrate it in the archive.
This workflow has the disadvantage of having many moving pieces and of leaving the user in the dark while the compilation is in progress. As an alternative, a simple script can be used to execute each step synchronously; the user can then follow on their terminal that everything works as expected. Once we have the .changes file, the build script just issues the appropriate command to include the result in the archive:
$ reprepro -C main include precise-staging \
>      wackadoodle_1.4-3~precise0+dm4_amd64.changes
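A minimal sketch of such a synchronous build script, assuming the pbuilder configuration and repository paths shown above (the script itself and its result path are illustrative):
#!/bin/bash
# Build a source package and include the result in the staging distribution.
# Run as a user allowed to invoke pbuilder (it needs root privileges).
set -e
DSC="$1"
export ARCH=amd64 DIST=precise
pbuilder --build "$DSC"
# pbuilder drops its results in $BUILDRESULT (see /etc/pbuilderrc above)
RESULTS=/var/cache/pbuilder/dailymotion/$DIST-$ARCH/results
CHANGES=$(ls -t "$RESULTS"/*.changes | head -n 1)
reprepro -b /srv/packages/dailymotion -C main include precise-staging "$CHANGES"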
Happy hacking!

  1. The gpg/ directory could be shared by several repositories.
  2. We taught the Debian Installer to work with our setup with an appropriate preseed file.
  3. fpm-cookery is a convenient tool to write recipes for fpm, similar to Homebrew or a BSD port tree. It could be used to achieve the same goal.
  4. sbuild is an alternative to pbuilder and is the official builder for both Debian and Ubuntu. Historically, pbuilder was more focused on developers' needs.
  5. For a Git snapshot, we use something like 1.4-git20130905+1-ae42dc1, which is a snapshot made after version 1.4 (use 0.0 if no version has ever been released) at the given date. The following 1 makes it possible to package different snapshots on the same date, while the hash is there in case you need to retrieve the exact snapshot.

22 November 2013

Vincent Sanders: Error analysis is the sweet spot for improvement

Although Don Norman was discussing designers' attitudes to user errors, I assert the same is true for programmers when we use static program analysis tools.

The errors, or rather defects in the jargon, that static analysis tools produce can be considered low-cost, well-formed bug reports available very early in the development process.

They are low cost because they can be found by a machine, without a user or fellow developer wasting their time finding them. They are well formed because the machine can describe exactly how it came to the logical deduction leading to the defect.
Introduction

Static analysis is, in general terms, using the computer to examine a program for logic errors beyond those of pure syntax before it is executed. Examining a running program for defects is known as dynamic program analysis and, while a powerful tool in its own right, is not the topic of discussion here.

This analysis has historically been confined to compiled languages as their compilers already had the Abstract Syntax Tree (AST) of the code available for analysis. As an example the C language (released in 1972) had the lint tool (released in 1979) based on the PCC compiler.

Practical early compilers (I am generalising here, as the 1970s were a time of white-hot innovation in computing and examples of just about any innovation in the field could probably be found) were pretty primitive and produced executables which were worse than hand-written assembler output. Due to practical constraints the progress of optimising compilers was not as rapid as might be desired, so static analysis was largely used as an external process.

Before progressing I ought to explain why I just mixed the concepts of an optimising compiler and static analysis. The act of optimisation within those compilers requires program analysis, from which they can generate defect reports, which we all know and love as compiler warnings. This also explains why many warnings only appear at higher optimisation levels, where deeper analysis is required.

The attentive reader may now enquire as to why we would need external analysis tools when our compilers already perform the task. The answer stems from the issue that a compiler is trying to reconcile many desirable traits, including compilation speed, output quality and implementation cost.
The slow progress in creating optimising compilers initially centred around the problem of getting the compiled output in a reasonable time, to allow for a practical edit-compile-run-debug cycle, although the issues more recently have moved towards the compiler implementation costs.

Because the output generation time is still a significant factor compilers limit the level of static analysis performed to that strictly required to produce good output. In standard operation optimising compilers do not do the extended analysis necessary to find all the defects that might be detectable.

An example: compiling one 200,000-line C program with the clang (v3.3) compiler, producing x86 instruction binaries at optimisation level 2, takes 70 seconds, but using the clang-based scan-build static analysis tool takes 517 seconds, more than seven times as long.
Using static analysis
As already described, warnings are a by-product of an optimising compiler's analysis, and most good programmers will endeavour to remove all warnings from a project. Thus almost all programmers are already using static analysis to some degree.

The external analysis tools available can produce many more defect reports than the compiler alone, as long as the developer is prepared to wait for the output. Because of this delay, static analysis is often done outside the usual development cycle and often integrated into a project's Continuous Integration (CI) system.

The resulting defects are usually presented as annotated source code with a numbered list of logical steps which shows how the defect can present. For example the steps might highlight where a line of code allocates memory from the heap and then an exit path where no reference to the allocated memory is kept resulting in a resource leak.

Once the analysis has been performed and a list of defects generated the main problem with this technology rears its ugly head, that of so called "false positives". The analysis is fundamentally an undecidable problem (it is a variation of the halting problem) and relies on algorithms to generate approximate solutions. Because of this some of the identified defects are erroneous.

The level of erroneous defect reports varies depending on the codebase being analysed and how good the analysis tool being used is. It is not uncommon to see false positive rates, even with the best tools, in excess of 10%.

Good tools allow for this and provide ways to supply additional context through model files or hints in the source code to suppress the incorrect defect reports. This is analogous to using asserts to explicitly constrain variable values or a type cast to suppress a type warning.

Even once the false positives have been dealt with there comes the problem of defects which while they may be theoretically possible take so many steps to achieve that their probability is remote at best. These defects are often better categorized as a missing constraint and the better analysis tools generate fewer than the more naive implementations.

An issue with some defect reports is that often defects will appear in a small number of modules within programs, generally where the developers already know the code is of poor quality, thus not adding useful knowledge about a project.

As with all code quality tools, static analysis can be helpful but is not a panacea: code may be completely defect-free but still fail to function correctly.
Defect Density

A term that is often used as a metric for code quality is the defect density. This is nothing more than the ratio of defects to thousands of lines of code; e.g. a defect density of 0.9 means that there is approximately one defect found in every 1100 lines of code.
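As a worked example using the NetSurf figures later in this article: taking scan-build's raw output of 107 defects in roughly 200,000 lines of code at face value would give a defect density of about 107/200, i.e. roughly 0.54.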

The often-quoted industry average defect density value is 1; as with all software metrics, this can be a useful indicator but should not be used without understanding.

The value will be affected by improvements in the tool, as well as by how lines of code are counted, so it is exceptionally susceptible to gaming, and long-term trends must be treated with scepticism.
Practical examples

I have integrated two distinct static analysis tools into the development workflow for the NetSurf project, which I shall present as case studies. These examples show a good open source solution and a commercial offering, highlighting the issues with each.

Several other solutions, both open source and commercial, exist, many of which have been examined and discarded as either impractical or less useful than those selected. However, the investigation was not comprehensive and only considered what was practical for the project at the time.
clang

The clang project is a frontend to the LLVM project providing an optimising compiler for the C, C++ and Objective-C languages. As part of this project the compiler has been enhanced to run a collection of "checkers" which implement various methods of analysis on the code being compiled.

The "scan-build" tool is provided to make using these features straightforward. This tool generates defect reports as a series of HTML files which show the analysis results.
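A typical invocation simply wraps the project's normal build command, for example (the report directory name is illustrative):
$ scan-build -o scan-report make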


[screenshot: NetSurf CI system scan-build overview]
Because the scan-build run takes in excess of eight minutes on powerful hardware, the NetSurf developers are not going to run this tool themselves as a matter of course. To get the useful output without the downsides, it was decided to integrate the scan into the CI system's code quality checks.

[screenshot: NetSurf CI system scan-build result list]
Whenever a git commit happens to the mainline branch and the standard check build completes successfully on all target architectures the scan is performed and the results are published as a list of defects.

The list is accessible directly through the CI interface and also incorporates a trend graph showing how many defects were detected in each build.

[screenshot: a scan-build report showing an extremely unlikely path to a defect]
Each defect listed has a detail link which reveals the full analysis and logic necessary to cause the defect to occur.

Unfortunately even NetSurf which is a relatively small piece of software (around 200,000 lines of code at time of writing) causes 107 defects to be emitted by scan-build.

All but 25 of the defects are, however, "Dead Store", where the code assigns a value that is never used. These errors are simply not interesting to the developers and occur in code generated by a tool.

Of the remaining defects identified, the majority are false positives, and several (like the example in the image above) are simply improbable, requiring a large number of steps to reach.

This exposes the main problem with the scan-build tool: there is no way to suppress certain checks, mark defects as erroneous or avoid false positives using a model file. This reduces the usefulness of these builds, because the developers all need to remember that this list of defects is not relevant.

Most of the NetSurf developers know that the project currently has 107 outstanding issues, and if a code change or tool improvement were to change that value, we would have to manually work through the defect list one by one to check what had changed.
Coverity

The Coverity SAVE tool is a commercial offering from a company founded in the Computer Systems Laboratory at Stanford University in Palo Alto, California. The original novel research produced a good solution which improved on the analysis tools previously available.

[screenshot: Coverity interface showing a summary of the NetSurf analysis; the layout issues are a NetSurf bug]
The company hosts a gratis service for open source projects; they even provide scans for the Linux kernel, so project size does not appear to be an issue.

The challenges faced integrating the Coverity tool into the build process differed from those with clang; however, the issue of execution time remained, and the CI service was used.

The Coverity scanning tool is a binary executable which collects data on the build, which is then submitted to the Coverity service to be analysed. This obviously requires the developer running the executable to trust Coverity to some degree.

A basic examination of the binary was performed and determined that the executable was not establishing network connections or performing any observably undesirable behaviour. From this investigation the decision was made that running the tool inside a sandbox environment on a CI build slave was safe. The CI system also submits the collected results in a compressed form directly to the Coverity scan service.

Care must be taken to only submit builds according to the service's Acceptable Use Policy, which limits the submission frequency of NetSurf scans to every other day. To ensure the project stays within the rules, the build performed by the CI system is manually controlled and confined to a subset of NetSurf developers.

[screenshot: Coverity Connect defect management console for NetSurf]

The results are presented using the Coverity Connect web-based defect management tool. Access to the Coverity Connect interface is controlled by a user management system, which precludes publicly publishing the results within the CI system.

Unfortunately NetSurf itself does not currently have good enough JavaScript DOM bindings to support this interface so another browser must be used to view it.

Despite the drawbacks, the quality of the analysis results is greatly superior to the clang solution. The false positive rate is very low, while many real issues are found which had not been previously detected.

The analysis can be enhanced by the use of collection configuration and modelling files which remove intended constructions from consideration, reducing the false positive rate to very low levels. The ability to easily and persistently suppress false positives through the web interface is also available.

The false positive management capabilities, coupled with a user interface that makes understanding the defect path simple, make this solution very practical, and indeed the NetSurf developers have removed over 50 actual issues within a relatively short period since the introduction of the tool.

Not all of those defects could be considered serious but they had the effect of encouraging deeper inspection of some very dubious smelling source.
Conclusions

The principal conclusions of implementing and using static analysis have been:

When I started looking at this technology I was somewhat dubious about its usefulness, but I have definitely changed my mind. It is a useful addition to any non-trivial project, and the investment of time and effort should be repaid handsomely in all but already perfect code (if you believe you have such code, I have a bridge to sell you).

22 February 2013

Hideki Yamane: Postgresql + GPU / SELinux approach

I participated in an event (18 February) where Kohei Kaigai from NEC talked about 3 topics.


  • Places to Visit in Europe
  • PG-Strom: GPU Accelerated Asynchronous Super-Parallel Query
  • Row-Level Security: DB security, preventing information leaks

I'm interested in PG-Strom: it can accelerate queries 8x with a $100 GPU card. For more detail, see the link above.

This event was hosted at the Red Hat office in Yebisu, Tokyo. Thanks to Ryo Fujita from Red Hat for his coordination.


25 December 2011

Matthew Palmer: The Other Way...

Chris Siebenmann sez:
The profusion of network cables strung through doorways here demonstrates that two drops per sysadmin isn't anywhere near enough.
What I actually suspect it demonstrates is that Chris' company hasn't learnt about the magic that is VLANs. All of the reasons he cites in the longer, explanatory blog post could be solved with VLANs. The only time you can't get away with one gigabit drop per office and an 8-port VLAN-capable switch is when you need high capacity, and given how many companies struggle by with wifi, I'm going to guess that sustained gigabit-per-machine is not a common requirement. So, for Christmas, buy your colleagues a bunch of gigabit VLAN-capable switches, and you can avoid both the nightmare of not having enough network ports, and the more hideous tragedy of having to crawl around the roofspace and recable an entire office.
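To illustrate how little host-side configuration a tagged VLAN needs, here is a sketch using Linux and iproute2 (the interface name and VLAN id are illustrative):
$ ip link add link eth0 name eth0.42 type vlan id 42   # tagged VLAN 42 on eth0
$ ip link set dev eth0.42 up
One physical drop into a VLAN-capable switch can then carry as many of these tagged networks as the switch will trunk.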

1 June 2010

Debian News: New Debian Developers (May 2010)

The following developers got their Debian accounts in the last month: Congratulations!

23 December 2008

Emilio Pozuelo Monfort: Collaborative maintenance

The Debian Python Modules Team is discussing which DVCS to switch to from SVN. Ondrej Certik asked how to generate a list of committers to the team's repository, so I looked at it and got this:
emilio@saturno:~/deb/python-modules$ svn log | egrep "^r[0-9]+" | cut -f2 -d'|' | sed 's/-guest//' | sort | uniq -c | sort -n -r
865 piotr
609 morph
598 kov
532 bzed
388 pox
302 arnau
253 certik
216 shlomme
212 malex
175 hertzog
140 nslater
130 kobold
123 nijel
121 kitterma
106 bernat
99 kibi
87 varun
83 stratus
81 nobse
81 netzwurm
78 azatoth
76 mca
73 dottedmag
70 jluebbe
68 zack
68 cgalisteo
61 speijnik
61 odd_bloke
60 rganesan
55 kumanna
52 werner
50 haas
48 mejo
45 ucko
43 pabs
42 stew
42 luciano
41 mithrandi
40 wardi
36 gudjon
35 jandd
34 smcv
34 brettp
32 jenner
31 davidvilla
31 aurel32
30 rousseau
30 mtaylor
28 thomasbl
26 lool
25 gaspa
25 ffm
24 adn
22 jmalonzo
21 santiago
21 appaji
18 goedson
17 toadstool
17 sto
17 awen
16 mlizaur
16 akumar
15 nacho
14 smr
14 hanska
13 tviehmann
13 norsetto
13 mbaldessari
12 stone
12 sharky
11 rainct
11 fabrizio
10 lash
9 rodrigogc
9 pcc
9 miriam
9 madduck
9 ftlerror
8 pere
8 crschmidt
7 ncommander
7 myon
7 abuss
6 jwilk
6 bdrung
6 atehwa
5 kcoyner
5 catlee
5 andyp
4 vt
4 ross
4 osrevolution
4 lamby
4 baby
3 sez
3 joss
3 geole
2 rustybear
2 edmonds
2 astraw
2 ana
1 twerner
1 tincho
1 pochu
1 danderson
As it's likely that the Python Applications Packaging Team will also switch to the same DVCS at the same time, here are the numbers for its repo:

emilio@saturno:~/deb/python-apps$ svn log | egrep "^r[0-9]+" | cut -f2 -d'|' | sed 's/-guest//' | sort | uniq -c | sort -n -r
401 nijel
288 piotr
235 gothicx
159 pochu
76 nslater
69 kumanna
68 rainct
66 gilir
63 certik
52 vdanjean
52 bzed
46 dottedmag
41 stani
39 varun
37 kitterma
36 morph
35 odd_bloke
29 pcc
29 gudjon
28 appaji
25 thomasbl
24 arnau
20 sc
20 andyp
18 jalet
15 gerardo
14 eike
14 ana
13 dfiloni
11 tklauser
10 ryanakca
10 nxvl
10 akumar
8 sez
8 baby
6 catlee
4 osrevolution
4 cody-somerville
2 mithrandi
2 cjsmo
1 nenolod
1 ffm
Here I'm the 4th biggest committer :D And while I was at it, I thought I could do the same for the GNOME and GStreamer teams:
emilio@saturno:~/deb/pkg-gnome$ svn log | egrep "^r[0-9]+" | cut -f2 -d'|' | sed 's/-guest//' | sort | uniq -c | sort -n -r
5357 lool
2701 joss
1633 slomo
1164 kov
825 seb128
622 jordi
621 jdassen
574 manphiz
335 sjoerd
298 mlang
296 netsnipe
291 grm
255 ross
236 ari
203 pochu
198 ondrej
190 he
180 kilian
176 alanbach
170 ftlerror
148 nobse
112 marco
87 jak
84 samm
78 rfrancoise
75 oysteigi
73 jsogo
65 svena
65 otavio
55 duck
54 jcurbo
53 zorglub
53 rtp
49 wasabi
49 giskard
42 tagoh
42 kartikm
40 gpastore
34 brad
32 robtaylor
31 xaiki
30 stratus
30 daf
26 johannes
24 sander-m
21 kk
19 bubulle
16 arnau
15 dodji
12 mbanck
11 ruoso
11 fpeters
11 dedu
11 christine
10 cpm
7 ember
7 drew
7 debotux
6 tico
6 emil
6 bradsmith
5 robster
5 carlosliu
4 rotty
4 diegoe
3 biebl
2 thibaut
2 ejad
1 naoliv
1 huats
1 gilir

emilio@saturno:~/deb/pkg-gstreamer$ svn log | egrep "^r[0-9]+" | cut -f2 -d'|' | sed 's/-guest//' | sort | uniq -c | sort -n -r
891 lool
840 slomo
99 pnormand
69 sjoerd
27 seb128
21 manphiz
8 he
7 aquette
4 elmarco
1 fabian
Conclusions:
- Why do I have the full python-modules and pkg-gstreamer trees, if I have just one commit to DPMT, and don't even have commit access to the GStreamer team?
- If you don't want to seem like you have done fewer commits than you have actually done, don't change your alioth name when you become a DD ;) (hint: pox-guest and piotr in python-modules are the same person)
- If the switch to a new VCS were based on a vote where you have one vote per commit, the top 3 committers in pkg-gnome could win the vote if they chose the same option! For python-apps it's the top 4 committers, and the top 7 for python-modules. pkg-gstreamer is a bit special :)

29 October 2008

Benjamin Mako Hill: An Invisible Handful of Stretched Metaphors

The following list is merely a small selection of scholarly articles listed in the ISI Web of Knowledge with "invisible hand" in their title: And, finally:

9 June 2008

Igor Genibel: First session

Up early to be fresh and ready for the first session, I got ready quickly. The night was quite short: I went to bed around midnight to finalize the course material. At last, it is ready. I just have a few adjustments to make and it will go off to be printed. Better late than never ;) So I was ready at 7:30 for a rather pleasant breakfast by the pool, in the company of a weaver bird and a superb crowned crane a few dozen centimetres away from me ;) A quick visit to the wifi corner to send 2 or 3 emails, and then I waited. All things considered, I had to wait until 10:30 for someone to come and pick me up. The room was not available: last-minute administrative hassles. Once on site at the ENA (École Nationale de l'Administration), I set about presenting the system to 25 people. But it was impossible to get the overhead projector I was given to work. An Xorg configuration problem, to look into tonight. So I projected my slides, made with MagicPoint and generated as HTML, on a Windows machine. After a brief introduction, we got into the heart of the subject. The trainees were all ears, so I did not hold back on the information. Lunch at 2pm, frugal for my part as I was not very hungry: chicken in sauce, raw vegetables, rice, potatoes. Off again at 3pm for the second part of the day. I asked the trainees to introduce themselves... Well, I am going to have to cover the basics of Unix systems for everyone, without exception. The session ended around 6pm, and I went back to the hotel alone. So on tonight's programme: write my daily post, sort out the Xorg configuration that will let me use the projector, eat, and then off to bed ;)

11 October 2007

Matthew Palmer: EBay Sez: Linux is for scammers

In what I can only assume is a case of "graphic design spec gone horribly wrong", EBay has decided to portray our lovable penguin friend as the mascot of the online scam artist. [image: "Linux is for scammers", says EBay (click on the image for a full-size version)] Personally, I think the scam artists' best friend is a Windows machine, as it provides a rich source of untraceable connectivity, but my guess is that Microsoft has scarier lawyers. I don't know how long the image has been up there, but I'd imagine it won't last long. Linux users may not be that numerous, but we're a noisy lot. <grin>

21 May 2007

Stefano Zacchiroli: i had a dream

I'm Going to Play Some Numbers at the Lotto

The Lotto game is a really popular gambling game in Italy, run by the state itself. As in all good gambling games: you, the player, can win a lot of money, while whoever runs the game (the Italian state in our case) will win a lot of money for sure, no matter what. Anyhow, to the facts: it's well-known folklore that people win at Lotto FOR SURE by playing numbers told to them in dreams by parents, friends, kitties, whatever, ... As a scientist, I've no reason not to trust this folklore, why should I? As a strange coincidence (another fact that strongly hints I HAVE TO PLAY), this night I had a dream (well, it was more this morning while reading the planet, but it doesn't really matter..., folklore is folklore!). There were a lot, really a lot of people whispering this to my ears:
09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0

well, ok, they aren't in decimal notation, and they don't even fall between 1 and 90, ... but what would you expect from a bunch of geeks??? They are probably trying to exploit some overflow in the hands of the lotto extractor or something like that... But I won't desist, I'll play those numbers! Open questions: are there any proficient Lotto players around to answer them?


11 January 2007

Martin F. Krafft: Destinationen

Just now on the Swiss hotline, a lady announced, in the typical Swiss way of speaking "high German", that the airline now offers bla bla bla to over 200 "Destinationen". I had to snicker. English readers may wonder what the deal is, and even Germans might just yawn: "Destinationen" is not a German word, it's "destinations" germanified, and it's no news that the German language is seriously deteriorating as English words creep in. At the risk of being repetitive: it's not the English words per se, it's the fact that they are being conjugated or declined according to German rules which irritates me (and many others). The reason for this blog post was simply the humour: a Swiss lady speaking "high German" with her subtle Swiss accent, using words that don't exist as if there were nothing to it. I actually had to laugh out loud. Note that I have great respect for Swiss people speaking German, it being somewhat of a foreign language to them. I am perfectly aware that many are uncomfortable doing so, and this post is not trying to make fun of them, but rather to expose the irony of the use of a non-German word. NP: Dream Theater / When Dream and Day Reunite

Update: Christof Roduner informs me that "Destinationen" is actually a German word, or at least recognised by the Duden:
Des·ti·na·ti·on, die; -, -en <lat.> (Reiseziel; veraltet für Bestimmung, Endzweck)
[i.e. travel destination; archaic for purpose, final aim]
From: Duden - Die deutsche Rechtschreibung, 24. Aufl. Mannheim 2006.
That must be the new orthography, which I won't comment on at this point.
