Search Results: "Martin Pitt"

28 December 2023

Antonio Terceiro: Debian CI: 10 years later

It was 2013, and I was on a break from work between Christmas and New Year. I had been working at Linaro for well over a year, on the LAVA project. I was living and breathing automated testing infrastructure, mostly for testing low-level components such as kernels and bootloaders, on real hardware. At this point I had also been a Debian contributor for quite some years, and had become an official project member two years prior. Most of my involvement was in the Ruby team, where we were already consistently running upstream test suites during package builds. During that break, I put these two contexts together, and came to the conclusion that Debian needed a dedicated service that would test the contents of the Debian archive. I was aware of the existence of autopkgtest, and started working on a very simple service that would later become Debian CI. In January 2014, debci was announced in that month's Misc Developer News, and later uploaded to Debian. It has been continuously developed for the last 10 years: it evolved from a single shell script running tests in a loop into a distributed system with 47 geographically-distributed machines as of this writing, became part of the official Debian release process gating migrations to testing, had 5 Summer of Code and Outreachy interns working on it, and processed more than 40 million test runs. Over these years, Debian CI has received contributions from a lot of people, but I would like to give special credit to the following:

18 July 2017

Norbert Preining: Calibre and rar support

Thanks to the cooperation with upstream authors and the maintainer Martin Pitt, the Calibre package in Debian is now up-to-date at version 3.4.0, and has adopted a more standard packaging following upstream. In particular, all the desktop files and man pages have been replaced by what is shipped by Calibre. What remains to be done is work on RAR support.
RAR support is necessary when an eBook uses rar compression, which happens quite often with comic books (cbr extension). Calibre 3 has split out rar support into a dynamically loaded module, so what needs to be done is packaging it. I have prepared a package for the Python library unrardll, which allows Calibre to read rar-compressed ebooks, but it depends on the unrar shared library, which unfortunately is not built in Debian. I have sent a patch fixing this to the maintainer (see bug 720051), but have received no reaction. Thus, I am publishing updated unrar packages that also ship libunrar5, as well as the unrardll Python package, in my calibre repository. After installing python-unrardll, Calibre will happily import metadata from rar-compressed eBooks, as well as display them.
deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main
The releases are signed with my Debian key 0x6CACA448860CDC13. Enjoy!
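A minimal sketch of how you might enable this repository and pull in the packages (the key import step is omitted and depends on your setup):

  # put the two deb lines above into /etc/apt/sources.list.d/calibre.list, then:
  sudo apt update
  sudo apt install calibre python-unrardll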

9 May 2017

Martin Pitt: Cockpit is now just an apt install away

Cockpit has now landed in Debian unstable and in Ubuntu 17.04 and the devel series, which means it's now a simple
$ sudo apt install cockpit
away for you to try and use. This metapackage pulls in the most common plugins, which are currently NetworkManager and udisks/storaged. If you want/need, you can also install cockpit-docker (if you grab docker.io from jessie-backports or use Ubuntu) or cockpit-machines to administer VMs through libvirt. Cockpit upstream also has a rather comprehensive Kubernetes/OpenStack plugin, but this isn't currently packaged for Debian/Ubuntu as Kubernetes itself is not yet in Debian testing or Ubuntu. After that, point your browser to https://localhost:9090 (or the host name/IP where you installed it) and off you go.

What is Cockpit? Think of it as an equivalent of a desktop (like GNOME or KDE) for configuring, maintaining, and interacting with servers. It is a web service that lets you log into your local or a remote (through ssh) machine using normal credentials (PAM user/password or SSH keys) and then starts a normal login session just as gdm, ssh, or the classic VT logins would. [Screenshots: login screen and System page] The left side bar is the equivalent of a task switcher, and the "applications" (i.e. modules for administering various aspects of your server) are run in parallel. The main idea of Cockpit is that it should not behave specially in any way - it does not have any specific configuration files or state keeping and uses the same operating system APIs and privileges that you would on the command line (such as lvmconfig, the org.freedesktop.UDisks2 D-Bus interface, reading/writing the native config files, and using sudo when necessary). You can simultaneously change stuff in Cockpit and in a shell, and Cockpit will instantly react to changes in the OS, e.g. if you create a new LVM PV or a network device gets added. This makes it fundamentally different from projects like webmin or ebox, which basically own your computer once you use them the first time. It is an interface for your operating system, which is even reflected in the branding: as you see above, this is "Debian" (or Ubuntu, or Fedora, or whatever you run it on), not "Cockpit".

Remote machines: In your home or small office you often have more than one machine to maintain. You can install cockpit-bridge and cockpit-system on those for the most basic functionality, configure SSH on them, and then add them on the Dashboard (I add a Fedora 26 machine here). From then on you can switch between them on the top left, and everything works and feels exactly the same, including using the terminal widget. [Screenshots: adding a remote machine, remote terminal] The Fedora 26 machine has some more Cockpit modules installed, including a lot of playground ones, thus you see a lot more menu entries there.
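As a rough sketch, the per-machine setup described above boils down to this on each remote host (assuming a Debian or Ubuntu remote; adding the host itself happens in the Dashboard web UI):

  # on each machine you want to manage remotely
  sudo apt install cockpit-bridge cockpit-system
  # make sure sshd is running, then add the host on the Dashboard
  sudo systemctl enable --now ssh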

Under the hood: Beneath the fancy Patternfly/React/JavaScript user interface is the Cockpit API and protocol, which particularly fascinates me as a developer, as that is what makes Cockpit so generic, reactive, and extensible. This API connects the world of the web, which speaks IPs and host names, ports, and JSON, to the "local host only" world of operating systems which speak D-Bus, command line programs, configuration files, and even use fancy techniques like passing file descriptors through Unix sockets. In an ideal world, all operating system APIs would be remotable by themselves, but they aren't. This is where the cockpit bridge comes into play. It is a JSON (i.e. ASCII text) stream protocol that can control arbitrarily many channels to the target machine for reading, writing, and getting notifications. There are channel types for running programs, making D-Bus calls, reading/writing files, getting notified about file changes, and so on. Of course every channel can also act on a remote machine. One can play with this protocol directly. E.g. this opens a (local) D-Bus channel named "d1" and gets a property from systemd's hostnamed:
$ cockpit-bridge --interact=---
{ "command": "open", "channel": "d1", "payload": "dbus-json3", "name": "org.freedesktop.hostname1" }
---
d1
{ "call": [ "/org/freedesktop/hostname1", "org.freedesktop.DBus.Properties", "Get",
          [ "org.freedesktop.hostname1", "StaticHostname" ] ],
  "id": "hostname-prop" }
---
and it will reply with something like
d1
{ "reply": [ [ { "t": "s", "v": "donald" } ] ], "id": "hostname-prop" }
---
("donald" is my laptop's name). By adding additional parameters like host and passing credentials, these can also be run remotely, by logging in via ssh and running cockpit-bridge on the remote host. Stef Walter explains this in detail in a blog post about Web Access to System APIs. Of course Cockpit plugins (both internal and third-party) don't speak this directly, but use a nice JavaScript API. As a simple example of how to create your own Cockpit plugin that uses this API, you can look at my schroot plugin proof of concept, which I hacked together at DevConf.cz in about an hour during the Cockpit workshop. Note that I had never before written any JavaScript and I didn't put any effort into design whatsoever, but it does work.

Next steps: Cockpit aims at servers and getting third-party plugins for talking to your favourite part of the system, which means we really want it to be available in Debian testing and stable, and Ubuntu LTS. Our CI runs integration tests on all of these, so each and every change that goes in is certified to work on Debian 8 (jessie) and Ubuntu 16.04 LTS, for example. But I'd like to replace the external PPA/repository on the Install instructions with just "it's readily available in -backports"! Unfortunately there are some procedural blockers there: the Ubuntu backport request suffers from understaffing, and the Debian stable backport is blocked on getting it into testing first, which in turn is blocked by the freeze. I will soon ask for a freeze exception into testing; after all it's just about zero risk - it's a new leaf package in testing. Have fun playing around with it, and please report bugs! Feel free to discuss and ask questions on the Google+ post.

25 February 2017

Martin Pitt: systemd 233 about to be released, please help testing

systemd 233 is scheduled to be released next week, and there is only a handful of small issues left. As usual there are tons of improvements and fixes, but the most intrusive one probably is another attempt to move from legacy cgroup v1 to a hybrid setup where the new unified (cgroup v2) hierarchy is mounted at /sys/fs/cgroup/unified/ and the legacy one stays at /sys/fs/cgroup/ as usual. This should provide an easier path for software like Docker or LXC to migrate to the unified hierarchy, but even that hybrid mode broke some bits. While systemd 233 will not make it into Debian stretch or Ubuntu zesty, as both are in feature freeze, it will soon be available in Debian experimental, and in the next Ubuntu release after 17.04 gets released. Thus now is another good time to give this some thorough testing! To help with this, please give the PPA with builds from upstream master a spin. In addition to the usual packages for Ubuntu 16.10 I also uploaded a build for Ubuntu zesty, and a build for Debian stretch (aka testing) which also works on Debian sid. You can use that URL as an apt source:
deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /
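A minimal sketch of wiring this up as an apt source (the sources.list.d file name is just an example):

  # add the temporary upstream-master repository, then pull in the newer packages
  echo 'deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /' | \
      sudo tee /etc/apt/sources.list.d/systemd-master.list
  sudo apt update
  sudo apt full-upgrade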
These packages pass our autopkgtests and I tested them manually too. LXC and LXD work fine; docker.io/runc needs a fix which I uploaded to Ubuntu zesty. (It's not yet available in Debian, sorry.) Please file reports about regressions on GitHub, but please also let me know about successes on my Google+ page so that we can get some idea about how many people tested this. Thank you, and happy booting!

6 February 2017

Martin Pitt: Migrated blog from WordPress to Hugo

My WordPress blog got hacked two days ago, and now twice today. This morning I purged MySQL and restored a good backup from three days ago, changed all DB and WordPress passwords (both the old and new ones were long and autogenerated), but not even an hour after the redeploy the hack was back. (It can still be seen on Planet Debian and Planet Ubuntu.) Neither the Apache logs nor the Journal had anything obvious, nor were there any new files in global or user www directories, so I'm a bit stumped how this happened. It was certainly not due to bruteforcing a password, as that would both have shown in the logs and also have triggered ban2fail, so this looks like an actual vulnerability. I upgraded to WordPress 4.7.1 a few days ago, and apparently 4.7.2 fixes a few vulnerabilities, although none of them sound like they would match my situation. jessie-backports is still at 4.7.1, so I missed that update. But either way, all WordPress blogs hosted on my server are down for the time being. I took this as motivation to finally migrate to something more robust. WordPress has tons of features that I never need, and also a lot of overhead (dynamic generation, MySQL, its own users/passwords, etc.). I had a look around, and it seems Hugo and Blogofile are nice contenders: no privileges, no database, output is static files, input is Markdown (so much nicer to type than HTML!), and maintaining your blog in git and previewing the changes on my local laptop are straightforward. I happened to try Hugo first, and like it enough to give it an extended try: you have plenty of themes to choose from and they are straightforward to customize, so I don't need to spend a lot of time learning and crafting CSS. I ran the WordPress to Hugo Exporter, and it produced remarkable results: fairly usable HTML to Markdown and metadata conversion, it keeps all the original URLs, and it's painless to use. Nicely done! So here it is, on to a much more secure server now! \o/
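For what it's worth, the local preview workflow mentioned above is a one-liner with Hugo (a sketch, assuming a standard Hugo site checkout):

  # serve a live-reloading preview, including drafts, at http://localhost:1313/
  hugo server -D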

Jaldhar Vyas: Don't Believe Everything You Read on Debian Planet

Martin Pitt won the popular vote.

Martin Pitt: Hacked By SA3D HaCk3D


HaCkeD by SA3D HaCk3D
HaCkeD By SA3D HaCk3D

Long Live to peshmarga
KurDish HaCk3rS WaS Here fucked
FUCK ISIS !

28 January 2017

Bits from Debian: Debian at FOSDEM 2017

On February 4th and 5th, Debian will be attending FOSDEM 2017 in Brussels, Belgium; a yearly gratis event (no registration needed) run by volunteers from the Open Source and Free Software community. It's free, and it's big: more than 600 speakers, over 600 events, in 29 rooms. This year more than 45 current or past Debian contributors will speak at FOSDEM: Alexandre Viau, Bradley M. Kuhn, Daniel Pocock, Guus Sliepen, Johan Van de Wauw, John Sullivan, Josh Triplett, Julien Danjou, Keith Packard, Martin Pitt, Peter Van Eynde, Richard Hartmann, Sebastian Dröge, Stefano Zacchiroli and Wouter Verhelst, among others. Similar to previous years, the event will be hosted at Université libre de Bruxelles. Debian contributors and enthusiasts will be taking shifts at the Debian stand with gadgets, T-Shirts and swag. You can find us at stand number 4 in building K, 1 B; CoreOS Linux and PostgreSQL will be our neighbours. See https://wiki.debian.org/DebianEvents/be/2017/FOSDEM for more details. We are looking forward to meeting you all!

9 August 2016

Reproducible builds folks: Finishing the final variations for reprotest

Author: ceridwen I've been working on getting the last of the variations working. With no responses on the mailing list from anyone outside Debian and with limited time remaining, Lunar and I have decided to deemphasize it.
  1. Build path is done.
  2. Host and domain name use domainname and hostname. This site is old, but it indicates that domainname was available on most OSes and hostname was available everywhere as of 2004. Prebuilder uses a Linux-specific utility (unshare --uts) to run this variation in a chroot, but I'm not doing this for reprotest: if you want this variation, use qemu. (A sketch of the unshare approach appears after this list.)
  3. User/group will not be portable, because they'll rely on useradd/groupadd and su. useradd and groupadd work on many but not all OSes, notably not including FreeBSD or MacOS X. su was universal in 2004.
  4. Time is not done but will probably be portable to some systems, because it will rely on date -s. Unfortunately, I haven't been able to find any information on how common date -s is across Unix-like OSes, as the -s option is not part of the POSIX standard.
  5. At the moment, I have no idea how to implement changes for /bin/sh and the login shell that will even work across different distributions of Linux, much less different OSes. There are a couple of key problems, starting with the need to find two different shells to use, because there's no way to find out what shells are installed. This blog post explains why /etc/shells doesn't work well for finding what shells are available: not everything in /etc/shells is necessarily a shell (Ubuntu has /usr/bin/screen) and not all available shells are in /etc/shells. Also, there's no good way to find out what shell is the system default because /bin/sh can be an arbitrary binary, not a symlink, and no good way to identify what it is if it is a binary. I can hard-code shell choices, but which shells? bash is obvious for Linux, but what's the best second choice?
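For item 2 above, a minimal sketch of the prebuilder-style approach on Linux (which reprotest deliberately does not use; the host/domain names, the build command, and the user are placeholders):

  # run one build under a varied host and domain name inside a fresh UTS namespace,
  # so the real system's names are left untouched
  sudo unshare --uts sh -c '
      hostname i-capture-the-hostname
      domainname i-capture-the-domainname
      exec su -c "dpkg-buildpackage -us -uc" builduser
  '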
On other topics:
  1. reprotest fails to build properly on jessie: Lunar said, and I agree, that fixing this is not a priority. I need someone with more knowledge of Debian Python packaging to help me. If I'm going to support old versions, I also need some kind of CI server, because I don't have the time or ability to maintain old versions of Debian and Python myself.
  2. libc-bin: ldconfig segfaults when run using "setarch uname26": I don't have a good solution for this, but I don't want to hard-code and maintain an architecture-specific disable. Would changing the argument to setarch work around the segfault? Is there a way to test for the presence of the bug that won't cause a segfault or similar crash?
  3. Please put adt-virt-* binaries back onto PATH: reprotest is not affected by this change because I forked the autopkgtest code rather than depending on it, so that reprotest can be installed through PyPI. At the moment, reprotest doesn't make its versions of the programs in virt/ available on $PATH. This is primarily because of the problems with distributing command-line scripts with setuptools. The approach I'm currently using, including the virt/ programs as non-code data with include reprotest/virt/* in MANIFEST.in, doesn't install them to make them available for other programs. Using one of the other approaches potentially could, but it's difficult to make the imports work with those approaches. (I haven't found a way to do it.) I think the best solution to this problem is to split autopkgtest into the Debian-specific components and the general-purpose virtualization components, but I don't have the time to do this myself or to negotiate with Martin Pitt, if he'd even be receptive. I'm also unsure at this point if it wouldn't be better for reprotest to switch from autopkgtest to using something like Ansible to run the virtualization, because Ansible has solved some of the portability problems already and is not tied to Debian.
My goal is to finish the variations (finally), though as this has always proved more difficult than I expected in the past, I don't make any guarantees. Beyond that, I want to start working on finishing docstrings in the reprotest-specific (i.e., not inherited from autopkgtest) code, improving the documentation in general, and improving the tests.

28 July 2016

Michael Prokop: systemd backport of v230 available for Debian/jessie

At DebConf 16 I was working on a systemd backport for Debian/jessie. Results are officially available via the Debian archive now. In Debian jessie we have systemd v215 (which originally dates back to 2014-07-03 upstream-wise, plus changes + fixes from the pkg-systemd folks of course). Now via Debian backports you have the option to update systemd to a very recent version: v230. If you have jessie-backports enabled it's just an apt install systemd -t jessie-backports away. For the upstream changes between v215 and v230 see upstream's NEWS file for the list of changes. (Actually the systemd backport has been available since 2016-07-19 for amd64, arm64 + armhf, though for mips, mipsel, powerpc, ppc64el + s390x we had to fight against GCC ICEs when compiling on/for Debian/jessie, and for the i386 architecture the systemd test-suite identified broken O_TMPFILE permission handling.) Thanks to Alexander Wirt from the backports team for accepting my backport, thanks to intrigeri for the related apparmor backport, Guus Sliepen for the related ifupdown backport and Didier Raboud for the related usb-modeswitch/usb-modeswitch-data backports. Thanks to everyone testing my systemd backport and reporting feedback. Thanks a lot to Felipe Sateler and Martin Pitt for reviews, feedback and cooperation. And special thanks to Michael Biebl for all his feedback, reviews and help with the systemd backport from its very beginnings until the latest upload. PS: I cannot stress enough how fantastic Debian's pkg-systemd team is. Responsive, friendly, helpful, dedicated and skilled folks, thanks folks!
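A quick sketch of the upgrade path described above, in case you have not used backports before (the mirror URL is just an example):

  # enable jessie-backports and pull in the backported systemd
  echo 'deb http://httpredir.debian.org/debian jessie-backports main' | \
      sudo tee /etc/apt/sources.list.d/jessie-backports.list
  sudo apt update
  sudo apt install -t jessie-backports systemd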

8 July 2016

Reproducible builds folks: Managing container and environment state

Author: ceridwen With some more help from Martin Pitt, it became clear to me that my previous mental model of how autopkgtest worked is very different from how it does work. I'll illustrate by borrowing my previous example. I know that schroot has the following behavior:
The default behaviour is as follows (all directory paths are inside the chroot). A login shell is run in the current working directory. If this is not available, it will try $HOME (when --preserve-environment is used), then the user's home directory, and / inside the chroot in turn. A command is always run in the current working directory inside the chroot. If none of the directories are available, schroot will exit with an error status.
I was naively thinking that the way autopkgtest would work is that it would set the current working directory of the schroot call and the ensuing subprocess call would thus take place in that directory inside the schroot. That is not how it works. If you want to change directories inside the virtual server, you have to use cd. The same is true of, at least, environment variables, which have their own specific handling in the adt_testbed.Testbed methods but have to be passed as strings, and umask. I'm assuming this is because the direct methods with qemu images or LXC containers don't work. What this means is that I was thinking about the problem the wrong way: what reprotest needs to do is generate shell scripts. This is how autopkgtest works. If this goes beyond laying out commands linearly one after another, for instance if it demands conditionals or other nested constructs, the right way to do it is to build an abstract syntax tree representation of the shell script and then convert it to code. Whether I need more complicated shell scripts depends on my approach to handling state in the containers. I need to know what state persists across separate command executions: if I call adt_testbed.Testbed.execute() twice, what, if any, changes I make to the container will carry forward from the first to the second? There are three categories here. First, some properties of a system aren't preserved even from one command execution to the next, like working directory and environment variables. (I thought working directory would be preserved, but it's not). The second is state that persists while the testbed is open and is then automatically reverted when it's closed, like files copied into temporary directories on the testbed. The third is state that persists across different sessions on the same container and must be cleaned up by reprotest. It's worth noting that which state falls into which category may vary by the container in question, though for the most part I can either do unnecessary cleanup or issue unnecessary commands to handle the differences. autopkgtest itself has a very different approach to cleanup, as it relies almost entirely on the builtin reversion capabilities of some of its containers. I would prefer to avoid doing the same, partly because I know that some of the modifications I need to make, for instance creating new users or mounting disorderfs, can't be reverted by the faster, simpler containers like schroot. From discussions with Lunar, I think that the variations that correspond to environment variables (captures_environment, home, locales, path, and timezone) fall into the first category, but because of the special handling for them they don't require sending a separate command. The shell (bash foo) accepts a script or command as an argument so it also doesn't need a separate command. Setting the working directory and umask require separate commands that have to be joined. On Linux, setarch also accepts a command/script as an argument so can be handled like the shell, but there's no unified POSIX protocol for mocking uname so other OSes will require different approaches. Users, groups, file ordering, host, and domain will require cleanup in all containers except for (maybe) qemu. If I want to handle the cleanup in the shell scripts themselves, I need conditionals so that, for instance, the shell script only tries to unmount disorderfs if disorderfs was successfully mounted.
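As a sketch of the kind of generated script this implies (entirely hypothetical; the paths, the disorderfs options, and the build command are placeholders), joining the working directory, umask, and conditional cleanup might look like:

  disorderfs_mounted=false
  if disorderfs --shuffle-dirents=yes /tmp/reprotest/src /tmp/reprotest/build; then
      disorderfs_mounted=true
  fi
  # working directory and umask do not persist between commands, so join them here
  cd /tmp/reprotest/build && umask 0002 && dpkg-buildpackage -us -uc
  # only undo the mount if it actually succeeded
  if [ "$disorderfs_mounted" = true ]; then
      fusermount -u /tmp/reprotest/build
  fi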
This approach would simplify the error handling problems I've had before, where when a build crashes the cleanup code run from Python doesn't get run until after the testbed stops accepting commands. Lunar suggested the Plumbum library to me, and I think I can use it to avoid writing my own shell AST library. It has a method that converts the Python representation of a shell script into a string that can be passed to Testbed.command(). Integrating Plumbum to generate the necessary scripts is where I'm going in the next week. Any feedback on any of this is welcome. I'm also curious what other projects are using autopkgtest code. Holger brought to my attention piuparts, which brings the list up to four that I'm aware of: autopkgtest itself, sbuild, piuparts, and now reprotest.

9 June 2016

Martin Pitt: autopkgtest 4.0: Simplified CLI, deprecating adt

Historically, the adt-run command line has allowed multiple tests; as a consequence, arguments like --binary or --override-control were position dependent, which confused users a lot (#795274, #785068, #795274, LP #1453509). On the other hand I don't know anyone or any CI system which actually makes use of the "multiple tests on a single command line" feature. The command line also was a bit confusing in other ways, like the explicit --built-tree vs. --unbuilt-tree and the magic / vs. // suffixes, or option vs. positional arguments to specify tests. The other long-standing confusion is the pervasive adt acronym, which still stems from the very early times when autopkgtest was called autodebtest (this was changed one month after autodebtest's inception, in 2006!). Thus in some recent night/weekend hack sessions I've worked on a new command line interface and consistent naming. This is now available in autopkgtest 4.0 in Debian unstable and Ubuntu Yakkety. You can download and use the deb package on Debian jessie and Ubuntu 14.04 LTS as well. (I will provide official backports after the first bug fix release after this got some field testing.) New autopkgtest command: The adt-run program is now superseded by autopkgtest: README.running-tests got updated to the new CLI, and as usual you can also read the HTML online. The old adt-run CLI is still available with unchanged behaviour, so it is safe to upgrade existing CI systems to that version. Image build tools: All adt-build* tools got renamed to autopkgtest-build*, and got changed to build images prefixed with "autopkgtest" instead of "adt". For example, adt-build-lxc ubuntu xenial now produces an autopkgtest-xenial container instead of adt-xenial. In order to not break existing CI systems, the new autopkgtest package contains symlinks to the old adt-build* commands, and when being called through them, also produces images with the old adt- prefix. Environment variables in tests: Finally there is a set of environment variables that are exported by autopkgtest for use in tests and image customization tools, which now got renamed from ADT_* to AUTOPKGTEST_*: As these are being used in existing tests and tools, autopkgtest also exports/checks those under their old ADT_* name. So tests can be converted gradually over time (this might take several years). Feedback: As usual, if you find a bug or have a suggestion how to improve the CLI, please file a bug in Debian or in Launchpad. The new CLI is recent enough that we still have some liberty to change it. Happy testing!
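As a rough illustration of the CLI change (a sketch only; the package and container names are examples, assuming a container built with the LXC image tool mentioned above):

  # old-style invocation, still supported
  adt-run libpng --- lxc adt-sid
  # equivalent new-style invocation with autopkgtest 4.0
  autopkgtest libpng -- lxc autopkgtest-sid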

20 April 2016

Reproducible builds folks: Reproducible builds: week 51 in Stretch cycle

What happened in the reproducible builds effort between April 10th and April 16th 2016: Toolchain fixes Antoine Beaupré suggested that gitpkg stops recording timestamps when creating upstream archives. Antoine Beaupré also pointed out that git-buildpackage diverges from the default gzip settings which is a problem for reproducibly recreating released tarballs which were made using the defaults. Alexis Bienvenüe submitted a patch extending sphinx SOURCE_DATE_EPOCH support to copyright year. Packages fixed The following packages have become reproducible due to changes in their build dependencies: atinject-jsr330, avis, brailleutils, charactermanaj, classycle, commons-io, commons-javaflow, commons-jci, gap-radiroot, jebl2, jetty, libcommons-el-java, libcommons-jxpath-java, libjackson-json-java, libjogl2-java, libmicroba-java, libproxool-java, libregexp-java, mobile-atlas-creator, octave-econometrics, octave-linear-algebra, octave-odepkg, octave-optiminterp, rapidsvn, remotetea, ruby-rinku, tachyon, xhtmlrenderer. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them: Patches submitted which have not made their way to the archive yet: diffoscope development Zbigniew Jędrzejewski-Szmek noted in #820631 that diffoscope doesn't work properly when a file contains several cpio archives. Package reviews 21 reviews have been added, 14 updated and 22 removed in this week. New issue found: timestamps_in_htm_by_gap. Chris Lamb reported 10 new FTBFS issues. Misc. The video and the slides from the talk "Reproducible builds ecosystem" at LibrePlanet 2016 have been published now. This week's edition was written by Lunar and Holger Levsen. h01ger automated the maintenance and publishing of this weekly newsletter via git.

4 April 2016

Raphaël Hertzog: My Free Software Activities in February and March 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me. I skipped my monthly report last time so this one will cover two months. I will try to list only the most important things to not make it too long. The Debian Handbook: I worked with Ryuunosuke Ayanokouzi to prepare a paperback version of the Japanese translation of my book. Thanks to the efforts of everybody, it's now available. Unfortunately, Lulu declined to take it in their distribution program so it won't be available in traditional bookstores (like Amazon, etc.). The reason is that they do not support non-latin character sets in the meta-data. I tried to cheat a little bit by inputting the description in English (still explaining that the book was in Japanese) but they rejected it nevertheless because the English title could mislead people. So the paperback is only available on lulu.com. Fortunately, the shipping costs are reasonable if you pick the most economic offer. Following this I invited the Italian, Spanish and Brazilian Portuguese translators to complete the work (they were close, with all the strings already translated, mainly missing translated screenshots and some back-cover content) so that we can also release paperback versions in those languages. It's getting close to completion for them. Hopefully we will have those available by next month. Distro Tracker: In early February, I tweaked the configuration to send (by email) exceptions generated by incoming mails and by routine tasks. Before this they were logged but I did not take the time to look into them. This quickly brought a few issues to light and I fixed them as they appeared: for instance the bounce handling code was getting confused when the character case was not respected, and it appears that some emails come back to us after having been lowercased. Also the code was broken when the References field used more than one line on incoming control emails. This brought to light a whole class of problems, with the database storing the same email twice with only differing case. So I did further work to merge all those duplicate entries behind a single email entry. Later, the experimental Sources files changed and I had to tweak the code to work with the removal of the Files field (relying instead on Checksums-* to find out the various files that are part of the entry). At some point, I also fixed the login form to not generate an exception when the user submits an empty form. I also decided that I no longer wanted to support Django 1.7 in distro tracker as Django 1.8 is the current LTS version. I asked the Debian system administrators to update the package on tracker.debian.org with the version in jessie-backports. This allowed me to fix a few deprecation warnings that I kept triggering because I wanted the code to work with Django 1.7. One of those warnings was generated by django-jsonfield though and I could not fix it immediately. Instead I prepared a pull request that I submitted to the upstream author. Oh, and a last thing, I tweaked the CSS to densify the layout on the package page. This was one of the most requested changes from the people who were still preferring packages.qa.debian.org over tracker.debian.org.
Kali and new pkg-security team: As part of my Kali work, I have been fixing RC bugs in Debian packages that we use in Kali. But in many cases, I stumbled upon packages whose maintainers were really missing in action (MIA). Up to now, we were only doing non-maintainer uploads (NMU) but I want to be able to maintain those packages more effectively, so we created a new pkg-security team (we're only two right now and we have no documentation yet, but if you want to join, you're welcome, in particular if you maintain a package which is useful in the security field). arm64 work. The first 3 packages that we took over (ssldump, sucrack, xprobe) are actually packages that were missing arm64 builds. We just started our arm64 port on Kali and we fixed them for that architecture. Since they were no longer properly maintained, in most cases it was just a matter of using dh_autoreconf to get up-to-date config.sub/config.guess files. We still miss a few packages on arm64: vboot-utils (which we will likely take over soon since it's offered for adoption), ruby-libv8 and ruby-therubyracer, ntopng (we have to wait for a new luajit which is only in experimental right now). We also noticed that dh-make-golang was not available on arm64; after some discussion on #debian-buildd, I filed two bugs for this: #819472 on dh-make-golang and #819473 on dh-golang. RC bug fixing. hdparm was affected by multiple RC bugs and the release managers were trying to get rid of it from testing. This removed multiple packages that were used by Kali and its users. So I investigated the situation of that package, convinced the current maintainers to orphan it, asked for new maintainers on debian-devel, reviewed multiple updates prepared by the new volunteers and sponsored their work. Now hdparm is again RC-bug free and has the latest upstream version. We also updated jsonpickle to 0.9.3-1 to fix RC bug #812114 (that I forwarded upstream first). Systemd presets support in init-system-helpers. I tried to find someone (to hire) to implement the systemd preset feature I requested in #772555 but I failed. Still, Andreas Henriksson was kind enough to give it a try and sent a first patch. I tried it and found some issues, so I continued to improve and simplify it. I submitted an updated patch and pinged Martin Pitt. He pointed me to the DEP-8 test failures that my patch was creating. I quickly fixed those afterwards. This patch is in use in Kali and lets us disable network services by default. I would like to see it merged in Debian so that everybody can set up systemd preset files and have their desires respected at installation time. Misc bug reports. I filed #813801 to request a new upstream release of kismet. Same for masscan in #816644 and for wkhtmltopdf in #816714. We packaged (before Debian) a new upstream release of ruby-msgpack and found out that it was not building on armel/armhf, so we filed two upstream tickets (with a suggested fix). In #814805, we asked the pyscard maintainer to reinstate python-pyscard that was dropped (keeping only the Python 3 version) as we use the Python 2 version in Kali. And there's more: I filed #816553 (segfault) and #816554 against cdebootstrap. I asked for dh-python to have a better behaviour after having been bitten by the fact that dh with python3 was not doing what I expected it to do (see #818175). And I reported #818907 against live-build since it is failing to handle a package whose name contains an upper case character (it's not policy compliant but dpkg supports them).
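To illustrate the preset mechanism mentioned above: a preset file is just an ordered list of enable/disable rules that systemctl preset consults when a service is first installed, with the first matching line winning. A hypothetical example (the file name and service are made up for illustration):

  # /usr/lib/systemd/system-preset/90-kali.preset
  # keep ssh enabled, disable everything else by default
  enable ssh.service
  disable *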
Misc packaging: I uploaded Django 1.9.2 to unstable and 1.8.9 to jessie-backports. I provided the supplementary information that Julien Cristau asked me for in #807654 but despite this, this jessie update has been ignored for the second point release in a row. It is now outdated until I update it to include the security fixes that have been released in the meantime, but I'm not yet sure that I will do it: the lack of cooperation from the release team for that kind of request is discouraging. I sponsored multiple uploads of dolibarr (one security update notably) and tcpdf (to fix one RC bug). Thanks: See you next month for a new summary of my activities.


18 December 2015

Martin Pitt: What's new in autopkgtest: LXD, MaaS, apt pinning, and more

The last two major autopkgtest releases (3.18 from November, and 3.19 fresh from yesterday) bring some new features that are worth spreading. New LXD virtualization backend: 3.19 debuts the new adt-virt-lxd virtualization backend. In case you missed it, LXD is an API/CLI layer on top of LXC which introduces proper image management, lets you seamlessly use images and containers on remote locations while intelligently caching them locally, automatically configures performant storage backends like zfs or btrfs, and just generally feels really clean and much simpler to use than classic LXC. Setting it up is not complicated at all. Install the lxd package (possibly from the backports PPA if you are on 14.04 LTS), and add your user to the lxd group. Then you can add the standard LXD image server with
  lxc remote add lco https://images.linuxcontainers.org:8443
and use the image to run e.g. the libpng test from the archive:
  adt-run libpng --- lxd lco:ubuntu/trusty/i386
  adt-run libpng --- lxd lco:debian/sid/amd64
The adt-virt-lxd.1 manpage explains this in more detail, also how to use this to run tests in a container on a remote host (how cool is that!), and how to build local images with the usual autopkgtest customizations/optimizations using adt-build-lxd. I have btrfs running on my laptop, and LXD/autopkgtest automatically use that, so the performance really rocks. Kudos to Stéphane, Serge, Tycho, and the other LXD authors! The motivation for writing this was to make it possible to move our armhf testing into the cloud (which for $REASONS requires remote containers), but I now have a feeling that soon this will completely replace the existing adt-virt-lxc virt backend, as it's much nicer to use. It is covered by the same regression tests as the LXC runner, and from the perspective of package tests that you run in it, it should behave very similarly to LXC. The one problem I'm aware of is that autopkgtest-reboot-prepare is broken, but hardly anything is using that yet. This is a bit complicated to fix, but I expect it will be in the next few weeks. MaaS setup script: While most tests are not particularly sensitive about which kind of hardware/platform they run on, low-level software like the Linux kernel, GL libraries, X.org drivers, or Mir very much is. There is a plan for extending our automatic tests to real hardware for these packages, and being able to run autopkgtests on real iron is one important piece of that puzzle. MaaS (Metal as a Service) provides just that: it manages a set of machines and provides an API for installing, talking to, and releasing them. The new maas autopkgtest ssh setup script (for the adt-virt-ssh backend) brings together autopkgtest and real hardware. Once you have a MaaS setup, get your API key from the web UI, then you can run a test like this:
  adt-run libpng --- ssh -s maas -- \
     --acquire "arch=amd64 tags=touchscreen" -r wily \
     http://my.maas.server/MAAS 123DEADBEEF:APIkey
The required arguments are the MaaS URL and the API key. Without any further options you will get any available machine installed with the default release. But usually you want to select a particular one by architecture and/or tags, and install a particular distro release, which you can do with the -r/--release and --acquire options. Note that this is not wired into Ubuntu's production CI environment, but it will be. Selectively using packages from -proposed: Up until a few weeks ago, autopkgtest runs in the CI environment were always seeing/using the entirety of -proposed. This often led to lockups where an application foo and one of its dependencies libbar got a new version in -proposed at the same time, and on test regressions it was not clear at all whose fault it was. This often led to perfectly good packages being stuck in -proposed for a long time, and a lot of manual investigation about root causes. These days we are using a more fine-grained approach: A test run is now specific to a "trigger", that is, the new package in -proposed (e.g. a new version of libbar) that caused the test (e.g. for foo) to run. autopkgtest sets up apt pinning so that only the binary packages for the trigger come from -proposed, the rest from -release. This provides much better isolation between the mush of often hundreds of packages that get synced or uploaded every day. This new behaviour is controlled by an extension of the --apt-pocket option. So you can say
  adt-run --apt-pocket=proposed=src:foo,libbar1,libbar-data ...
and then only the binaries from the foo source, libbar1, and libbar-data will come from -proposed, everything else from -release. Caveat: Unfortunately apt's pinning is rather limited. As soon as any of the explicitly listed packages depends on a package or version that is only available in -proposed, apt falls over and refuses the installation instead of taking the required dependencies from -proposed as well. In that case, adt-run falls back to the previous behaviour of using no pinning at all. (This unfortunately got worse with apt 1.1, bug report to be done.) But it's still helpful in many cases that don't involve library transitions or other package sets that need to land in lockstep. Unified testbed setup script: There are a number of changes that need to be made to testbeds so that tests can run with maximum performance (like running dpkg through eatmydata, disabling apt translations, or automatically using the host's apt-cacher-ng), reliable apt sources, and in a minimal environment (to detect missing dependencies and avoid interference from unrelated services; these days the standard cloud images have a lot of unnecessary fat). There is also a choice whether to apply these only once (every day) to an autopkgtest specific base image, or on the fly to the current ephemeral testbed for every test run (via --setup-commands). Over time this led to quite a lot of code duplication between adt-setup-vm, adt-build-lxc, the new adt-build-lxd, cloud-vm-setup, and create-nova-image-new-release. I now cleaned this up, and there is now just a single setup-commands/setup-testbed script which works for all kinds of testbeds (LXC, LXD, QEMU images, cloud instances), both for preparing an image with adt-buildvm-ubuntu-cloud, adt-build-lx[cd] or nova, and for preparing just the current ephemeral testbed via --setup-commands. While this is mostly an internal refactorization, it does impact users who previously used the adt-setup-vm script for e.g. building Debian images with vmdebootstrap. This script is now gone, and the generic setup-testbed entirely replaces it. Misc: Aside from the above, every new version has a handful of bug fixes and minor improvements, see the git log for details. As always, if you are interested in helping out or contributing a new feature, don't hesitate to contact me or file a bug report.

11 December 2015

Antonio Terceiro: Bits from the Debian Continuous Integration project

It's been almost 2 years since the Debian Continuous Integration project was launched, and it has proven to be a useful resource for the development of Debian. I have previously made an introductory post, and this is an update on the latest developments. Infrastructure upgrade: Back in early 2014 when Debian CI was launched, there were less than 200 source packages with declared test suite metadata, and using a single worker machine polling the archive for updates and running tests sequentially in an infinite loop ("the simplest thing that could possibly work") was OK-ish. Then our community started an incredible, slow and persistent effort to prepare source packages for automated testing, and we now have almost 5,000 of them. The original, over-simplistic design had to be replaced. The effort of transforming debci into a distributed system was started by Martin Pitt, who did a huge amount of work. In the latest months I was able to complete that work, to a point where I am confident in letting it run (mostly) unattended. We also had lots of contributions to the web UI from Brandon Fairchild, who was a GSOC intern in 2014, and continues to contribute to this date. All this work culminated in the migration from a single-worker model to a master/workers setup, currently with 10 worker nodes. On busy periods all of those worker nodes will go on for days with full utilization, but even then the turnaround between package upload and a test run is now a lot faster than it used to be. Debian members can inspect the resource usage on those systems, as well as the length of the processing queue, by browsing to the corresponding munin instance (requires authentication via an SSL client certificate issued by sso.debian.org). The system is currently being hosted on an Amazon EC2 account sponsored by Amazon. The setup is fully automated and reproducible. It is not fully (or at all) documented yet, but those interested should feel free to get in touch on IRC (OFTC, #debci). Testing backend changed from schroot to lxc: Together with the infrastructure updates, we also switched to using lxc instead of schroot as backend. Most test suites should not be affected by this, but the default lxc settings might cause some very specific issues in a few packages. See for example #806542 ("liblinux-prctl-perl: autopkgtest failures: seccomp, capbset"). Adding support for KVM is also in the plans, and we will get to that at some point. Learn more: If you want to learn more about how you can add tests for your package, a good first start is the debci online documentation (which is also available locally if you install debci). You might also be interested in watching the live tutorial (WebM, 469 MB!) that was presented at DebConf 15 earlier this year, full of tips and real examples from the archive. It would be awesome if someone wanted to transcribe that into a text tutorial ;-) How to get involved: There are a few ways you can contribute: autodep8: if you are knowledgeable about a subset of packages that are very similar and can have their tests executed in a similar way, such as "$Language libraries", you might consider writing a test metadata generator so that each package does not need to declare a debian/tests/control file explicitly, requiring only the Testsuite: header in debian/control. Ruby and Perl are already covered, and there is initial support for NodeJS. Adding support for new types of packages is very easy. See the source repository.
If you manage to add support for your favorite language, please get in touch so we can discuss whitelisting the relevant packages in ci.debian.net so that they will get their tests executed even before being uploaded with the proper Testsuite: control field. autopkgtest: autopkgtest is responsible for actually running your tests, and you can use it to reproduce test runs locally. debci: debci is the system running on ci.debian.net (version 1.0, currently in testing, is exactly what is running up there, minus a version number and a changelog entry). It can also be used to have private clones of ci.debian.net, e.g. for derivatives or internal Debian-related development. See for example the Ubuntu autopkgtest site. Getting in touch: For maintainer queries and general discussion: For the development of debci/autopkgtest/autodep8
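To make the autodep8 point above concrete, here is a sketch of what declaring tests looks like for a package covered by a generator versus one with explicit tests (the field values are examples only):

  # covered by autodep8 (e.g. a Ruby library): only the source stanza of
  # debian/control needs a Testsuite field
  Testsuite: autopkgtest-pkg-ruby

  # explicit tests: ship a debian/tests/control file such as
  Tests: smoke
  Depends: @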

18 October 2015

Lunar: Reproducible builds: week 25 in Stretch cycle

What happened in the reproducible builds effort this week: Toolchain fixes Niko Tyni wrote a new patch adding support for SOURCE_DATE_EPOCH in Pod::Man. This would complement or replace the previously implemented POD_MAN_DATE environment variable in a more generic way. Niko Tyni proposed a fix to prevent mtime variation in directories due to debhelper usage of cp --parents -p. Packages fixed The following 119 packages became reproducible due to changes in their build dependencies: aac-tactics, aafigure, apgdiff, bin-prot, boxbackup, calendar, camlmix, cconv, cdist, cl-asdf, cli-common, cluster-glue, cppo, cvs, esdl, ess, faucc, fauhdlc, fbcat, flex-old, freetennis, ftgl, gap, ghc, git-cola, globus-authz-callout-error, globus-authz, globus-callout, globus-common, globus-ftp-client, globus-ftp-control, globus-gass-cache, globus-gass-copy, globus-gass-transfer, globus-gram-client, globus-gram-job-manager-callout-error, globus-gram-protocol, globus-gridmap-callout-error, globus-gsi-callback, globus-gsi-cert-utils, globus-gsi-credential, globus-gsi-openssl-error, globus-gsi-proxy-core, globus-gsi-proxy-ssl, globus-gsi-sysconfig, globus-gss-assist, globus-gssapi-error, globus-gssapi-gsi, globus-net-manager, globus-openssl-module, globus-rsl, globus-scheduler-event-generator, globus-xio-gridftp-driver, globus-xio-gsi-driver, globus-xio, gnome-control-center, grml2usb, grub, guilt, hgview, htmlcxx, hwloc, imms, kde-l10n, keystone, kimwitu++, kimwitu-doc, kmod, krb5, laby, ledger, libcrypto++, libopendbx, libsyncml, libwps, lprng-doc, madwimax, maria, mediawiki-math, menhir, misery, monotone-viz, morse, mpfr4, obus, ocaml-csv, ocaml-reins, ocamldsort, ocp-indent, openscenegraph, opensp, optcomp, opus, otags, pa-bench, pa-ounit, pa-test, parmap, pcaputils, perl-cross-debian, prooftree, pyfits, pywavelets, pywbem, rpy, signify, siscone, swtchart, tipa, typerep, tyxml, unison2.32.52, unison2.40.102, unison, uuidm, variantslib, zipios++, zlibc, zope-maildrophost. The following packages became reproducible after getting fixed: Packages which could not be tested: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which have not made their way to the archive yet: Lunar reported that test strings depend on default character encoding of the build system in ongl. reproducible.debian.net The 189 packages composing the Arch Linux core repository are now being tested. No packages are currently reproducible, but most of the time the difference is limited to metadata. This has already gained some interest in the Arch Linux community. An explicit log message is now visible when a build has been killed due to the 12 hours timeout. (h01ger) Remote build setup has been made more robust and self maintenance has been further improved. (h01ger) The minimum age for rescheduling of already tested amd64 packages has been lowered from 14 to 7 days, thanks to the increase of hardware resources sponsored by ProfitBricks last week. (h01ger) diffoscope development diffoscope version 37 has been released on October 15th. It adds support for two new file formats (CBFS images and Debian .dsc files). After proposing the required changes to TLSH, fuzzy hashes are now computed incrementally. This will avoid reading entire files in memory which caused problems for large packages. New tests have been added for the command-line interface. More character encoding issues have been fixed. 
Malformed md5sums will now be compared as binary files instead of making diffoscope crash, amongst several other minor fixes. Version 38 was released two days later to fix the versioned dependency on python3-tlsh. strip-nondeterminism development: strip-nondeterminism version 0.013-1 has been uploaded to the archive. It fixes an issue with nonconformant PNG files with trailing garbage reported by Roland Rosenfeld. disorderfs development: disorderfs version 0.4.1-1 is a stop-gap release that will disable lock propagation, unless --share-locks=yes is specified, as it is still affected by unidentified issues. Documentation update: Lunar has been busy creating a proper website for reproducible-builds.org that would be a common location for news, documentation, and tools for all free software projects working on reproducible builds. It's not yet ready to be published, but it's surely getting there. [Screenshots: homepage and "Who's involved?" page of the future reproducible-builds.org website] Package reviews: 103 reviews have been removed, 394 added and 29 updated this week. 72 FTBFS issues were reported by Chris West and Niko Tyni. New issues: random_order_in_static_libraries, random_order_in_md5sums.

16 June 2015

Martin Pitt: autopkgtest 3.14 now twice as rebooty

Almost every new autopkgtest release brings some small improvements, but 3.14 got some reboot related changes worth pointing out. First of all, I simplified and unified the implementation of rebooting across all runners that support it (ssh, lxc, and qemu). If you use a custom setup script for adt-virt-ssh you might have to update it: Previously, the setup script needed to respond to a reboot function to trigger a reboot, wait for the testbed to go down, and come back up. This got split into issuing the actual reboot system command directly by adt-run itself on the testbed, and the "wait for go down and come back up" part. The latter now has a sensible default implementation: it simply waits for the ssh port to become unavailable, and then waits for ssh to respond again; most testbeds should be fine with that. You only need to provide the new wait-reboot function in your ssh setup script if you need to do anything else (such as re-enabling ssh after reboot). Please consult the manpage and the updated SKELETON for details. The ssh runner gained a new --reboot option to indicate that the remote testbed can be rebooted. This will automatically declare the reboot testbed capability and thus you can now run rebooting tests without having to use a setup script. This is very useful for running tests on real iron. Finally, in testbeds which support rebooting your tests will now find a new /tmp/autopkgtest-reboot-prepare command. Like /tmp/autopkgtest-reboot it takes an arbitrary "marker", saves the current state, restores it after reboot and re-starts your test with the marker; however, it will not trigger the actual reboot but expects the test to do that. This is useful if you want to test a piece of software which does a reboot as part of its operation, such as a system-image upgrade. Another use case is testing kernel crashes, kexec or another nonstandard way of rebooting the testbed. README.package-tests shows an example of what this looks like. 3.14 is now available in Debian unstable and Ubuntu wily. As usual, for older releases you can just grab the deb and install it, it works on all supported Debian and Ubuntu releases. Enjoy, and let me know if you run into troubles or have questions!
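As a sketch of what a rebooting test could look like with this machinery (the marker name, environment variable, and helper commands are illustrative; check the autopkgtest documentation of your version for the exact interface):

  # hypothetical debian/tests/upgrade-and-reboot
  if [ "$ADT_REBOOT_MARK" != "rebooted" ]; then
      # first run: do the work that requires a reboot, then request one
      do-system-upgrade          # placeholder for the actual upgrade step
      /tmp/autopkgtest-reboot rebooted
  fi
  # after the reboot, the test is re-started with the marker set
  verify-post-reboot-state       # placeholder for the actual checks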

4 May 2015

Lunar: Reproducible builds: first week in Stretch cycle

Debian Jessie has been released on April 25th, 2015. This has opened the Stretch development cycle. Reactions to the idea of making Debian build reproducibly have been pretty enthusiastic. As the pace is now likely to be even faster, let's see if we can keep everyone up-to-date on the developments. Before the release of Jessie The story goes back a long way but a formal announcement to the project has only been sent in February 2015. Since then, too much work has happened to make a complete report, but to give some highlights: Lunar did a pretty improvised lightning talk during the Mini-DebConf in Lyon. This past week It seems changes were pilling behind the curtains given the amount of activity that happened in just one week. Toolchain fixes We also rebased the experimental version of debhelper twice to merge the latest set of changes. Lunar submitted a patch to add a -creation-date to genisoimage. Reiner Herrmann opened #783938 to request making -notimestamp the default behavior for javadoc. Juan Picca submitted a patch to add a --use-date flag to texi2html. Packages fixed The following packages became reproducible due to changes of their build dependencies: apport, batctl, cil, commons-math3, devscripts, disruptor, ehcache, ftphs, gtk2hs-buildtools, haskell-abstract-deque, haskell-abstract-par, haskell-acid-state, haskell-adjunctions, haskell-aeson, haskell-aeson-pretty, haskell-alut, haskell-ansi-terminal, haskell-async, haskell-attoparsec, haskell-augeas, haskell-auto-update, haskell-binary-conduit, haskell-hscurses, jsch, ledgersmb, libapache2-mod-auth-mellon, libarchive-tar-wrapper-perl, libbusiness-onlinepayment-payflowpro-perl, libcapture-tiny-perl, libchi-perl, libcommons-codec-java, libconfig-model-itself-perl, libconfig-model-tester-perl, libcpan-perl-releases-perl, libcrypt-unixcrypt-perl, libdatetime-timezone-perl, libdbd-firebird-perl, libdbix-class-resultset-recursiveupdate-perl, libdbix-profile-perl, libdevel-cover-perl, libdevel-ptkdb-perl, libfile-tail-perl, libfinance-quote-perl, libformat-human-bytes-perl, libgtk2-perl, libhibernate-validator-java, libimage-exiftool-perl, libjson-perl, liblinux-prctl-perl, liblog-any-perl, libmail-imapclient-perl, libmocked-perl, libmodule-build-xsutil-perl, libmodule-extractuse-perl, libmodule-signature-perl, libmoosex-simpleconfig-perl, libmoox-handlesvia-perl, libnet-frame-layer-ipv6-perl, libnet-openssh-perl, libnumber-format-perl, libobject-id-perl, libpackage-pkg-perl, libpdf-fdf-simple-perl, libpod-webserver-perl, libpoe-component-pubsub-perl, libregexp-grammars-perl, libreply-perl, libscalar-defer-perl, libsereal-encoder-perl, libspreadsheet-read-perl, libspring-java, libsql-abstract-more-perl, libsvn-class-perl, libtemplate-plugin-gravatar-perl, libterm-progressbar-perl, libterm-shellui-perl, libtest-dir-perl, libtest-log4perl-perl, libtext-context-eitherside-perl, libtime-warp-perl, libtree-simple-perl, libwww-shorten-simple-perl, libwx-perl-processstream-perl, libxml-filter-xslt-perl, libxml-writer-string-perl, libyaml-tiny-perl, mupen64plus-core, nmap, openssl, pkg-perl-tools, quodlibet, r-cran-rjags, r-cran-rjson, r-cran-sn, r-cran-statmod, ruby-nokogiri, sezpoz, skksearch, slurm-llnl, stellarium. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which did not make their way to the archive yet: Improvements to reproducible.debian.net Mattia Rizzolo has been working on compressing logs using gzip to save disk space. 
The web server would uncompress them on-the-fly for clients which do not accept gzip content. Mattia Rizzolo worked on a new page listing various breakage: missing or bad debbindiff output, missing build logs, unavailable build dependencies. Holger Levsen added a new execution environment to run debbindiff using dependencies from testing. This is required for packages built with GHC as the compiler only understands interfaces built by the same version. debbindiff development: Version 17 has been uploaded to unstable. It now supports comparing ISO9660 images, dictzip files and should compare identical files much faster. Documentation update: Various small updates and fixes to the pages about PDF produced by LaTeX, DVI produced by LaTeX, static libraries, Javadoc, PE binaries, and Epydoc. Package reviews: Known issues have been tagged when known to be deterministic as some might unfortunately not show up on every single build. For example, two new issues have been identified by building with one timezone in April and one in May. RD and help2man add the current month and year to the documentation they are producing. 1162 packages have been removed and 774 have been added in the past week. Most of them are the work of proper automated investigation done by Chris West. Summer of code: Finally, we learned that both akira and Dhole were accepted for this Google Summer of Code. Let's welcome them! They have until May 25th before coding officially begins. Now is a good time to help them feel more comfortable by sharing all these little bits of knowledge on how Debian works.

18 November 2014

Christian Perrier: Bug #770000

Martin Pitt reported Debian bug #770000 on Tuesday November 18th, against the release.debian.org pseudo-package. Bug #760000 was reported as of August 30th: so there have been 10,000 bugs reported in 3 months minus 12 days. The bug rate increased quite significantly during the last weeks. We can suspect this is related to the release and the freeze (which triggers many unblock requests). I find it interesting that this bug is directly related to the release, directly related to systemd and originated from one of the systemd package maintainers, if I'm right. So, I'll take this opportunity to publicly thank all people who have brought the systemd packages to what they are now, whether or not they're still maintaining the package. We've all witnessed that Debian is facing a strong social issue nowadays and I'm very deeply sad about this. I hope we'll be able to go through this without losing too many brilliant contributors, as it happened recently. Please prove me right and do The Right Thing for me to be able to continue this silly "round bug number" contest and still believe that, some day, bug #1000000 will really happen and I'll still be there to witness it. Ah, and by the way, systemd bloody works on my system. I can't even remember when I switched to it. It Just Worked.
