Search Results: "Josh Triplett"

17 January 2021

Wouter Verhelst: Software available through Extrepo

Just over 7 months ago, I blogged about extrepo, my answer to the question "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root?" I also held a talk during the recent "MiniDebConf Online", which was held, well, online. The most important part of extrepo is what you can install through it: if the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look at what we have after 7 months... To cut to the chase: there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these repositories can be enabled by installing extrepo and then running extrepo enable <reponame>, where <reponame> is the name of the repository. The list below is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:
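As a concrete sketch of that workflow (the repository name here is just one from the lists below; commands run as root):

```shell
# Install extrepo itself from the Debian archive
apt install extrepo
# Enable a repository by name
extrepo enable vscodium
# extrepo drops the sources and signing key in place; update as usual
apt update
```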

Free software
  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq, and bareos, contain upstream versions of their respective software. The software in these repositories is also available in Debian, but the upstreams package their most recent releases independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more), contain more recent versions of software already packaged in Debian. For the sury repository, that is PHP; for the others, the name should give it away. The difference with the repositories in the previous item is that here it is the official Debian maintainer of the same software who maintains the repository, which is not the case for the others.
  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the codium version of Visual Studio Code is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by OpenStack, there is a kubernetes repository containing the Kubernetes stack, as well as a google_cloud repository containing the Google Cloud SDK.

Non-free software

While these repositories can be installed through extrepo, please note that non-free and contrib repositories are disabled by default. Before any of them can be enabled, you must first allow their policies in /etc/extrepo/config.yaml.
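For illustration, the relevant knob in /etc/extrepo/config.yaml looks roughly like this (the exact key name is my assumption based on extrepo's documentation; adjust to the shipped file):

```
# Sketch: allow extrepo to enable contrib and non-free repositories
enabled_policies:
- main
- contrib
- non-free
```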
  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.
Again, the above lists are not meant to be exhaustive. Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories. Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!

2 March 2020

Antoine Beaupré: Moving dconf entries to git

I've been managing my UNIX $HOME with version control for almost two decades now (first under CVS, then with git). Once in a while, I find a little hack to make this work better. Today, it's dconf/gsettings, or more specifically Workrave's configuration, that I want to put in git. I noticed my laptop was extremely annoying compared with my office workstation and realized I had never figured out how to write Workrave's configuration to git. The reason is that its configuration is stored in dconf, a binary database format; but, blissfully, I had forgotten about this and tried to figure out where the heck its config was. I was about to give up when I re-remembered this, and figured I would just do a quick search ("dconf commit to git"). That brought me to Josh Triplett's DebConf 14 talk about this exact topic. The slides are... a little terse, but I could figure out the gist of it. The key is to change the DCONF_PROFILE environment variable to point to a new config file (say, in your .bashrc):
export DCONF_PROFILE=$HOME/.config/dconf/profile
That file (~/.config/dconf/profile) should be created with the following content:
  1. The first line is the default: store everything in this huge binary file.
  2. The second is the magic: it stores configuration in a precious text file, in .config/dconf/user.txt specifically.
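A plausible reconstruction of such a profile, matching the two points above (the exact db-line syntax is my assumption, see dconf(7); the path is illustrative):

```
user-db:user
file-db:/home/user/.config/dconf/user.txt
```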
Then the last step was to migrate config between the two. For that I need a third config file, a DCONF_PROFILE that has only the text database so settings are forcibly written there, say ~/.config/dconf/profile-edit:
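Such a profile-edit file would then plausibly contain only the text database line (again, the exact syntax is my assumption and the path is illustrative):

```
file-db:/home/user/.config/dconf/user.txt
```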
And then I can migrate my workrave configuration with:
gsettings list-recursively org.workrave | while read schema key val ; do DCONF_PROFILE=~/.config/dconf/profile-edit gsettings set "$schema" "$key" "$val" ; done
Of course, a bunch of those settings are garbage and do not differ from the default. Unfortunately, there doesn't seem to be a way to tell gsettings to only list non-default settings, so I had to do things by hand from there, by comparing my generated config with:
DCONF_PROFILE=/dev/null gsettings list-recursively org.workrave
I finally ended up with the following user.txt, which is now my workrave config:
That's a nice setup: "YOLO" settings end up in the binary database that I don't care about tracking in git, and a precious few snowflakes get tracked in a text file. Triplett also made a command to change settings in the text file, but I don't think I need to bother with that. In my experience it's more critical to copy settings between the two, as I rarely have this moment of "oh, I know exactly the key setting I want to change and I'll write it down". What actually happens is that I make a change in a GUI and later realize it should be synchronized over. (It looks like Triplett does have a tool to do those diffs and transitions, but unfortunately git:// doesn't respond at this time so I can't find that source code.) (Update: that git repository can be downloaded with:
git clone
It looks like dconfc is designed to copy settings between the two databases. So my above loop would be more easily written as:
dconf dump /org/workrave/ | DCONF_PROFILE=~/.config/dconf/profile-edit dconf load /org/workrave/
Certainly simplifies things. dconfd is also nice although it relies on the above dconfe script which makes it (IMHO) needlessly hard to deploy.) Now if only Firefox bookmarks were reasonable again...

1 November 2017

James McCoy: Monthly FLOSS activity - 2017/10 edition

Debian subversion vim

28 January 2017

Bits from Debian: Debian at FOSDEM 2017

On February 4th and 5th, Debian will be attending FOSDEM 2017 in Brussels, Belgium, a yearly gratis event (no registration needed) run by volunteers from the Open Source and Free Software community. It's free, and it's big: more than 600 speakers and over 600 events, in 29 rooms. This year more than 45 current or past Debian contributors will speak at FOSDEM: Alexandre Viau, Bradley M. Kuhn, Daniel Pocock, Guus Sliepen, Johan Van de Wauw, John Sullivan, Josh Triplett, Julien Danjou, Keith Packard, Martin Pitt, Peter Van Eynde, Richard Hartmann, Sebastian Dröge, Stefano Zacchiroli and Wouter Verhelst, among others. As in previous years, the event will be hosted at the Université libre de Bruxelles. Debian contributors and enthusiasts will be taking shifts at the Debian stand with gadgets, T-shirts and swag. You can find us at stand number 4 in building K, level 1; CoreOS Linux and PostgreSQL will be our neighbours. See for more details. We are looking forward to meeting you all!

28 May 2015

Sven Hoexter: RMS, free software and where I fail the goal

You might have already read this comment by RMS in the Guardian. That comment, and a recent discussion about the relevance of GPL changes post GPLv2, made me think again about the battle RMS started to fight. While some think RMS should "retire", I at least still fail at my personal goal of not depending on non-free software and services. So for me this battle is far from over, and here is my personal list of "non-free debt" I have to pay off. general purpose systems aka your computer Looking at the increasing list of firmware blobs required to use a GPU, wireless chipsets, and more and more wired NICs, the situation seems to be worse than in the late 90s. Back then the primary issue was finding supported hardware, but the driver was free. Nowadays even open-sourced firmware often requires obscure patched compilers to build. If I look at this stuff, I think the OpenBSD project got it right with their more radical position. Oh, and then there is CPU microcode. I'm not yet sure what to think about it, but in the end it's software and it's not open source, so it's non-free software running on my system. Maybe my memory is blurred by the fact that the separation of firmware from the Linux kernel, and proper firmware loading, only got implemented years later. I remember the discussion about the pwc driver and its removal from Linux. Maybe the situation wasn't better at that time, but the firmware was just hidden inside the Linux driver code? On my system at work I have to add the Flash plugin to the list due to my latest test with Prezi, which I'll touch on later. I also own a few Humble Indie bundles. I played parts of Osmos after a recommendation by Joey Hess, I later finished playing Limbo, and I got pretty far with Machinarium on a Windows system I still had at that time. I also tried a few others but never got far or soon lost interest.
Another thing I cannot really get rid of is unrar, because of stuff I need to pull from xda-developer links just to keep a cell phone running. Update: Josh Triplett pointed out that there is unar available in the Debian archive. And indeed, that one works on the rar file I just extracted. Android ecosystem I will soon get rid of a stock S3 mini and try to replace it with a moto g loaded with CyanogenMod. That leaves me with a working phone with an OS that only works because of a shitload of non-free blobs. The time and work required to get there is another story. Among other things you need a new bootloader, which requires a newer fastboot than what we have in Jessie, and later you also need the newer adb to be able to sideload the CM image. There I gave in and just downloaded the prebuilt SDK from Google. And there you have another binary I did not even try to build from source. The same goes for the CM image itself, though that's not much different from using a GNU/Linux distribution if you ignore the trust issues. It's hard to trust the phone I've built that way, but it's the best I can get at the moment with at least some bigger chunks of free software inside. So let's move to the applications on the phone. I do not use GooglePlay, so I rely on f-droid and freeware I can download directly from the vendor. "Cloud" services This category mixes a lot with the stuff listed above; most of them are not only an application, in fact Threema and Wunderlist are useless without the backend service. And Opera is just degraded to a browser (and to be replaced with Firefox) if you discount the compression proxy. The other big addition in this category is Prezi. We tried it out at work after it got into my focus due to a post by Dave Aitel. It's kind of the poster child of non-freeness.
It requires a non-free, unstable, insecure and halfway deprecated browser plugin to work; you cannot download your result in a useful format; you have to buy storage for your presentation at this one vendor; and you have to pay if you want to keep your presentation private. It's the perfect lock-in situation. But it's still very convenient, prevents a lot of common mistakes you can make when you create a presentation, and they invented a new concept of presenting. I know about impress.js (hosted on a non-free platform, by the way, but at least you can export it from there) and I also know about hovercraft. I'm impressed by them, but they're still not close to the ease of use of Prezi. So here you can also very prominently see the cost of free versus non-free software: invest the time and write something cool with CSS3 and impress.js, or pay Prezi and just click yourself through. To add something about the instability: I had to use a Windows laptop for presenting with Prezi because the Flash plugin on Jessie crashed in presentation mode. I did not yet check the latest Flash update; I guess it did not make the situation worse, it already is horrible. Update: Daniel Kahn Gillmor pointed out that you can combine inkscape with sozi, though the Debian package is in desperate need of an active maintainer, see also #692989. I also use some database-like services. When I was younger you bought such things printed on dead trees, but those did not update very well. Thinking a bit further, a Certification Authority is not only questionable due to the whole trust issue; they also provide OCSP responders as a kind of web service. And I've already experienced what the internet looks like when the OCSP systems of GlobalSign fail. So there is still a lot to fight for, and a lot of "personal non-free debt" to pay off.

22 April 2014

Axel Beckert: GNU Screen 4.2.0 in Debian Experimental

About a month ago, on the 20th of March, GNU Screen had its 27th anniversary. A few days ago, Amadeusz Sławiński, GNU Screen's new primary upstream maintainer, released the status quo of Screen development as version 4.2.0 (probably to distinguish it from all those 4.1.0-labeled development snapshots floating around in most Linux distributions nowadays). I did something similar and uploaded the status quo of Debian's screen package in git as 4.1.0~20120320gitdb59704-10 to Debian Sid shortly afterwards. That upload should hit Jessie soon, too, resolving the following two issues also in Testing: That way I could decouple these packaging fixes/features from the new upstream release, which I uploaded to Debian Experimental for now. Testers for the 4.2.0-1 package are very welcome! Oh, and by the way, that upstream comment (or ArchLinux's corresponding announcement) about broken backwards compatibility with attaching to running sessions started with older Screen releases doesn't affect Debian, since that has already been fixed in Debian with the package which is in Wheezy. (Thanks again, Julien Cristau, for the patch back then!) While there are bigger long-term plans upstream, Amadeusz is already working on the next 4.x release (probably named 4.2.1), which will likely incorporate some of the patches floating around in the Linux distributions' packages. At least SuSE and Debian have offered their patches explicitly for upstream inclusion. So far, two patches found in the Debian packages have already been obsoleted by upstream git commits after the 4.2.0 release. Yay!

20 January 2014

Enrico Zini: terminal-emulators

Quest for a terminal emulator The requirements I need a terminal emulator. This is a checklist of the features that I need: My experience is that getting all of this to work is not as easy as it seems, so I'm creating this page to track progress. gnome-terminal I've been happily using this for years, and it did everything I needed, until some months ago it started to open new tabs in the terminal's working directory instead of the last tab's working directory. This is a big point of frustration for me. It also started opening https urls with Firefox, although the preferred browser was Chromium. There seemed to be no way to control it: I looked for firefox or iceweasel in all gconf and dconf settings and found nothing. The browser issue was fixed by accident when I used Xfce4's settings application to change the browser from Chromium to Firefox and then back to Chromium. Update, thanks to Mathieu Parent, Josh Triplett, Peter De Wachter, Julien Cristau, and Charles Plessy: It is also possible to restore the "new tab opened inside the same directory of the last tab I was in" behaviour, by enabling "run command as a login shell" so that /etc/profile.d/ is run (thanks Mathieu Parent for the link). That in turn spawned extra cleanup work in my .bashrc/.bash_profile/.profile setup, which has been randomly evolving since even before my first Debian "buzz" system. I found that it was setting PROMPT_COMMAND to something else to set the terminal title, conflicting with what wants to do. With regard to loading /etc/profile.d/ by default, Peter De Wachter sent pointers to relevant bugs: here, here, and here. An alternative strategy is to work using the prompt rather than PROMPT_COMMAND; an example is in Josh Triplett's .bashrc from git:// Josh Triplett also said:
To fix the browser launched for URLs, you either need to use a desktop environment following GNOME's mechanism for setting the default browser, or edit ~/.local/share/applications/mimeapps.list and make sure x-scheme-handler/http, x-scheme-handler/https, and x-scheme-handler/ftp are set to your preferred browser's desktop file basename under [Added Associations].
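As a concrete sketch of what Josh describes, the stanza in ~/.local/share/applications/mimeapps.list might look like this (chromium.desktop stands in for whatever your preferred browser's desktop file is actually called):

```
[Added Associations]
x-scheme-handler/http=chromium.desktop;
x-scheme-handler/https=chromium.desktop;
x-scheme-handler/ftp=chromium.desktop;
```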
All my issues with gnome-terminal are now gone and I'm only too happy to go back to it. rxvt-unicode-256color urxvt took some work. This is where I got with configuration:
URxvt.font: xft:Monospace-10:antialias=true
URxvt.foreground: #aaaaaa
URxvt.background: black
URxvt.scrollBar_right: true
URxvt.cursorBlink: true
URxvt.perl-ext-common: default,matcher,tabbedex
URxvt.url-launcher: /usr/bin/x-www-browser
URxvt.matcher.button: 1
URxvt.perl-lib: /home/enrico/.urxvt/perl
URxvt.color0: black
URxvt.color1: #aa0000
URxvt.color2: #00aa00
URxvt.color3: #aa5500
URxvt.color4: #0000aa
URxvt.color5: #aa00aa
URxvt.color6: #00aaaa
URxvt.color7: #aaaaaa
URxvt.color8: #555555
URxvt.color9: #ff5555
URxvt.color10: #55ff55
URxvt.color11: #ffff55
URxvt.color12: #5555ff
URxvt.color13: #ff55ff
URxvt.color14: #55ffff
URxvt.color15: #ffffff
I got all of the tab behaviour that I need by "customizing" the tab script (yuck github :( ). Missing sakura Configuration is in .config/sakura/sakura.conf and these bits help:
font=Monospace 10
Missing lxterminal Configuration is in .config/lxterminal/lxterminal.conf and this is relevant to me:
fontname=DejaVu Sans Mono 10
Also, to open a url directly you control+click it. Missing terminator Configuration is in .config/terminator/config and this is relevant to me:
  use_custom_url_handler = True
  custom_url_handler = x-www-browser
  inactive_color_offset = 1.0
  close_term = None
  close_window = None
  copy = None
  cycle_next = None
  cycle_prev = None
  go_down = None
  go_next = None
  go_prev = None
  go_up = None
  group_all = None
  group_tab = None
  hide_window = None
  move_tab_left = None
  move_tab_right = None
  new_tab = None
  new_terminator = None
  new_window = None
  next_tab = None
  paste = None
  prev_tab = None
  reset_clear = None
  reset = None
  resize_down = None
  resize_left = None
  resize_right = None
  resize_up = None
  rotate_ccw = None
  rotate_cw = None
  scaled_zoom = None
  search = None
  split_horiz = None
  split_vert = None
  switch_to_tab_1 = <Alt>F1
  switch_to_tab_2 = <Alt>F2
  switch_to_tab_3 = <Alt>F3
  switch_to_tab_4 = <Alt>F4
  switch_to_tab_5 = <Alt>F5
  switch_to_tab_6 = <Alt>F6
  switch_to_tab_7 = <Alt>F7
  switch_to_tab_8 = <Alt>F8
  switch_to_tab_9 = <Alt>F9
  switch_to_tab_10 = <Alt>F10
  toggle_scrollbar = None
  toggle_zoom = None
  ungroup_all = None
  ungroup_tab = None
  <span class="createlink">default</span>
    palette = "#000000:#aa0000:#00aa00:#aa5500:#0000aa:#aa00aa:#00aaaa:#aaaaaa:#555555:#ff5555:#55ff55:#ffff55:#5555ff:#ff55ff:#55ffff:#ffffff"
    copy_on_selection = True
    icon_bell = False
    background_image = None
    show_titlebar = False
Missing update: Richard Hartmann pointed out that terminator's upstream maintainer now changed after the old one didn't have time any more, and it should have a release with a ton of improvements anytime soon. xfce4-terminal Configuration is in .config/xfce4/terminal, and this is relevant to me: terminalrc:
FontName=Monospace 10
(gtk_accel_path "<Actions>/terminal-window/goto-tab-1" "<Alt>F1")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-2" "<Alt>F2")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-3" "<Alt>F3")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-4" "<Alt>F4")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-5" "<Alt>F5")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-6" "<Alt>F6")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-7" "<Alt>F7")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-8" "<Alt>F8")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-9" "<Alt>F9")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-10" "<Alt>F10")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-11" "<Alt>F11")
(gtk_accel_path "<Actions>/terminal-window/goto-tab-12" "<Alt>F12")
update: Yves-Alexis Perez points out that to disable the F1 for help in the terminal, you need to remove the accelerator. I tried this and this and didn't have success, but I confess I did not dig too much into it. Although xfce4-terminal -e does not work as I expect, xfce4-terminal registers a wrapper for x-terminal-emulator that does the right thing with respect to -e (also thanks Yves-Alexis Perez). Missing roxterm Configuration is in .config/ split in several files corresponding to profiles. This is a reasonable starting point for me: Profiles/Default:
[roxterm profile]
[roxterm colour scheme]
[roxterm shortcuts scheme]
File/New Window=
File/New Tab=
File/Close Window=
File/Close Tab=
Tabs/Previous Tab=
Tabs/Next Tab=
View/Zoom In=<Control>plus
View/Zoom Out=<Control>minus
View/Normal Size=<Control>0
View/Full Screen=F11
View/Scroll Up One Line=
View/Scroll Down One Line=
Edit/Copy & Paste=
Search/Find Next=
Search/Find Previous=
File/New Window With Profile/Default=
File/New Tab With Profile/Default=
[roxterm options]
Missing Nothing of my initial requirements seems to be missing, really, so I'm sticking with it for a while to see what happens. The first itch to scratch is that when the menubar is hidden, the popup menu becomes the entire menubar contents, which does not fit the general use case of having a contextual menu with the most common shortcuts. I'll just declare it useless and get myself used to some new hotkey for starting a new terminal. update: after fixing my issues with gnome-terminal I've switched back to gnome-terminal: its interface feels less clunky, as I'm already used to it. Other references Guillem Jover made a similar analysis in 2009; it can be found here. Thomas Koch mentioned that termit should be able to do all I need, and is scriptable in Lua. I like the sound of that, and it's definitely one I should look at next time I find myself shopping for terminal emulators.

5 January 2014

Russ Allbery: More on displaying files with head

That was fun! Since my previous entry on using head to display the contents of several files in a form that's easy to cut and paste, multiple people have sent elaborations or related tricks. It seemed like it would be a good idea to post a roundup, since I learned a bunch. Multiple people (I think Josh Triplett was the first) pointed out that one can avoid having to pick a sufficiently large value of -n by instead using:
    head -n -0 *.install
With GNU head at least, a negative number says to print out all lines of the file except that many at the end, so -0 displays the whole file, regardless of size. Unfortunately, while this works anywhere that I am likely to run it, it's not specified by POSIX, while the original is. Another variation, pointed out by Buck Huppmann, is:
    tail -n +0 *.install
The +0 syntax is required by POSIX, unlike the -0 syntax for head... but unfortunately POSIX doesn't require that tail supports multiple files and the headers, although it does for head. Buck also pointed out that including the -v flag will always force the header even if there's only one file, which is useful. (Although be warned that -v isn't a POSIX-recognized flag.) Markus Raab also pointed out the xsel utility, which I'd heard of but hadn't ever used. If the goal is to cut and paste the output, using:
    head -v -n -0 *.install | xsel -i
avoids the cut part by dumping the result directly into the X selection. Buck pointed out xclip, which does the same thing. Both can be used with the -o flag inside an editor to paste as well, if you don't want to reach for a mouse: in vi, :r !xclip -o, and in Emacs, C-u M-! xclip -o. Finally, Guillem Jover mentioned that:
    grep . *.install
does sort of the same thing with a different output format that may be more useful depending on what you're doing. (I find it less human-readable but more machine-parsable.)
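If POSIX portability is the goal, a small awk one-liner (my own sketch, not from any of the replies above) reproduces head-style headers plus full file contents using only behaviour POSIX specifies:

```shell
# Print each file preceded by a head-style "==> name <==" header
awk 'FNR==1 {print "==> " FILENAME " <=="} {print}' *.install
```

(Unlike head, it does not insert a blank line between files.)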

20 October 2012

Vincent Bernat: Network lab with KVM

To experiment with network stuff, I was using UML-based network labs. Many alternatives exist, like GNS3, Netkit, Marionnet or Cloonix. All of them are viable solutions, but I still prefer to stick to my minimal home-made solution with UML virtual machines. Here is why: The use of UML had some drawbacks: However, UML features HostFS, a filesystem providing access to any part of the host filesystem. This is the killer feature which allows me not to use any virtual disk image and to get access to my home directory right from the guest. I recently discovered that KVM provides 9P, a similar filesystem on top of VirtIO, the paravirtualized IO framework.

Setting up the lab The setup of the lab is done with a single self-contained shell file. The layout is similar to what I have done with UML. I will only highlight here the most interesting steps.

Booting KVM with a minimal kernel My initial goal was to experiment with Nicolas Dichtel's IPv6 ECMP patch. Therefore, I needed to configure a custom kernel. I started from make defconfig, removed everything that was not necessary, added what I needed for my lab (mostly network stuff) and added the appropriate options for the VirtIO drivers:
No modules. Grab the complete configuration if you want to have a look. From here, you can start your kernel with the following command ($LINUX is the appropriate bzImage):
kvm \
  -m 256m \
  -display none \
  -nodefconfig -no-user-config -nodefaults \
  -chardev stdio,id=charserial0,signal=off \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -chardev socket,id=con0,path=$TMP/vm-$name-console.pipe,server,nowait \
  -mon chardev=con0,mode=readline,default \
  -kernel $LINUX \
  -append "init=/bin/sh console=ttyS0"
Of course, since there is no disk to boot from, the kernel will panic when trying to mount the root filesystem. KVM is configured not to display video output (-display none). A serial port is defined and uses stdio as a backend1. The kernel is configured to use this serial port as a console (console=ttyS0). A VirtIO console could have been used instead, but it seems it is not possible to make it work early in the boot process. The KVM monitor is set up to listen on a Unix socket. It is possible to connect to it with socat UNIX:$TMP/vm-$name-console.pipe -.

Initial ramdisk UPDATED: I was initially unable to have the kernel mount the host filesystem directly as the root filesystem for the guest. In a comment, Josh Triplett told me to use /dev/root as the mount tag to solve this problem. I keep using an initrd in this post, but the lab on Github has been updated to not use one. Here is how to build a small initial ramdisk:
# Setup initrd
    info "Build initrd"
    mkdir -p $DESTDIR
    # Setup busybox
    copy_exec $($WHICH busybox) /bin/busybox
    for applet in $(${DESTDIR}/bin/busybox --list); do
        ln -s busybox ${DESTDIR}/bin/${applet}
    done
    # Setup init
    cp $PROGNAME ${DESTDIR}/init
    cd "${DESTDIR}" && find . | \
       cpio --quiet -R 0:0 -o -H newc | \
       gzip > $TMP/initrd.gz
The copy_exec function is stolen from the initramfs-tools package in Debian. It will ensure that the appropriate libraries are also copied. Another solution would have been to use a static busybox. The setup script is copied as /init in the initial ramdisk. It will detect it has been invoked as such. If it was omitted, a shell would be spawned instead. Remove the cp call if you want to experiment manually. The flag -initrd allows KVM to use this initial ramdisk.

Root filesystem Let's mount our root filesystem using 9P. This is quite easy. First, KVM needs to be configured to export the host filesystem to the guest:
kvm \
  -fsdev local,security_model=passthrough,id=fsdev-root,path=${ROOT},readonly \
  -device virtio-9p-pci,id=fs-root,fsdev=fsdev-root,mount_tag=rootshare
${ROOT} can either be / or any directory containing a complete filesystem. Mounting it from the guest is quite easy:
mkdir -p /target/ro
mount -t 9p rootshare /target/ro -o trans=virtio,version=9p2000.u
You should find a complete root filesystem inside /target/ro. I have used version=9p2000.u instead of version=9p2000.L because the latter does not allow a program to mount() a host mount point2. Now you have a read-only root filesystem (because you don't want to mess with your existing root filesystem; moreover, you did not run this lab as root, did you?). Let's use a union filesystem. Debian comes with AUFS, while Ubuntu and OpenWRT have migrated to overlayfs. I was previously using AUFS but got errors in some specific cases. It is still not clear which one will end up in the kernel, so let's try overlayfs. I didn't find any patchset ready to be applied on top of my kernel tree; I was working with David Miller's net-next tree. Here is how I have applied the overlayfs patch on top of it:
$ git remote add torvalds git://
$ git fetch torvalds
$ git remote add overlayfs git://
$ git fetch overlayfs
$ git merge-base overlayfs.v15 v3.6
$ git checkout -b net-next+overlayfs
$ git cherry-pick 4cbe5a555fa58a79b6ecbb6c531b8bab0650778d..overlayfs.v15
Don t forget to enable CONFIG_OVERLAYFS_FS in .config. Here is how I configured the whole root filesystem:
info "Setup overlayfs"
mkdir /target
mkdir /target/ro
mkdir /target/rw
mkdir /target/overlay
# Version 9p2000.u allows access to /dev and /sys and lets us mount
# new partitions over them. This is not the case for 9p2000.L.
mount -t 9p        rootshare /target/ro      -o trans=virtio,version=9p2000.u
mount -t tmpfs     tmpfs     /target/rw      -o rw
mount -t overlayfs overlayfs /target/overlay -o lowerdir=/target/ro,upperdir=/target/rw
mount -n -t proc  proc /target/overlay/proc
mount -n -t sysfs sys  /target/overlay/sys
info "Mount home directory on /root"
mount -t 9p homeshare /target/overlay/root -o trans=virtio,version=9p2000.L,access=0,rw
info "Mount lab directory on /lab"
mkdir /target/overlay/lab
mount -t 9p labshare /target/overlay/lab -o trans=virtio,version=9p2000.L,access=0,rw
info "Chroot"
export STATE=1
cp "$PROGNAME" /target/overlay
exec chroot /target/overlay "$PROGNAME"
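For readers trying this on a current kernel: the overlayfs variant that was eventually merged (Linux 3.18 and later) uses the filesystem type overlay and requires an extra workdir option, an empty directory on the same filesystem as upperdir. The equivalent of the mount in the script above would look roughly like this; the paths are reused from the script, but the syntax is that of the merged driver, not the v15 patchset used in this lab:

```shell
# Mount syntax for the in-tree "overlay" filesystem (Linux >= 3.18).
# workdir must be an empty directory on the same filesystem as upperdir;
# the v15 patchset used in this lab predates this requirement.
mkdir -p /target/rw/upper /target/rw/work
mount -t overlay overlay /target/overlay \
      -o lowerdir=/target/ro,upperdir=/target/rw/upper,workdir=/target/rw/work
```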
You have to export your $HOME and the lab directory from the host:
kvm \
  -fsdev local,security_model=passthrough,id=fsdev-root,path=$ROOT,readonly \
  -device virtio-9p-pci,id=fs-root,fsdev=fsdev-root,mount_tag=rootshare \
  -fsdev local,security_model=none,id=fsdev-home,path=$HOME \
  -device virtio-9p-pci,id=fs-home,fsdev=fsdev-home,mount_tag=homeshare \
  -fsdev local,security_model=none,id=fsdev-lab,path=$(dirname "$PROGNAME") \
  -device virtio-9p-pci,id=fs-lab,fsdev=fsdev-lab,mount_tag=labshare

Network You know what is missing from our network lab? Network setup. For each LAN that I will need, I spawn a VDE switch:
# Setup a VDE switch
    info "Setup switch $1"
    screen -t "sw-$1" \
        start-stop-daemon --make-pidfile --pidfile "$TMP/switch-$1.pid" \
        --start --startas $($WHICH vde_switch) -- \
        --sock "$TMP/switch-$1.sock"
    screen -X select 0
To attach an interface to the newly created LAN, I use:
mac=$(echo $name-$net | sha1sum | \
            awk '{ print "52:54:" substr($1,0,2) ":" substr($1, 2, 2) ":" substr($1, 4, 2) ":" substr($1, 6, 2) }')
kvm \
  -net nic,model=virtio,macaddr=$mac,vlan=$net \
  -net vde,sock=$TMP/switch-$net.sock,vlan=$net
The use of a VDE switch allows me to run the lab as a non-root user. It is possible to give Internet access to each VM, either by using the -net user flag or by running slirpvde on a special switch. I prefer the latter solution since it allows the VMs to talk to each other.
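The MAC derivation above can be checked outside the lab. This standalone sketch follows the same idea (hash the name, prefix with 52:54); "r1-1" is a made-up $name-$net pair, and the pair-splitting here uses clean two-character offsets rather than the original awk substr calls:

```shell
# Derive a stable MAC address from a VM/network name, as the lab script
# does. "r1-1" is a hypothetical "$name-$net" value, for illustration.
name_net="r1-1"
hash=$(printf '%s' "$name_net" | sha1sum | awk '{print $1}')
# Take the first four byte pairs of the SHA-1 digest as the MAC suffix.
mac="52:54:$(printf '%s\n' "$hash" | sed 's/^\(..\)\(..\)\(..\)\(..\).*/\1:\2:\3:\4/')"
echo "$mac"   # same input always yields the same address
```

Because the address is a pure function of the name, every run of the lab gives each interface the same MAC, which keeps things like DHCP leases and neighbor caches stable across restarts.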

Debugging This lab was mostly done to debug both the kernel and Quagga. Each of them can be debugged remotely.

Kernel debugging While the kernel features KGDB, its own GDB-compatible debugger, it is easier to use the remote GDB server built into KVM.
kvm \
  -gdb unix:$TMP/vm-$name-gdb.pipe,server,nowait
To connect to the remote GDB server from the host, first locate the vmlinux file at the root of the source tree and run GDB on it. The kernel has to be compiled with CONFIG_DEBUG_INFO=y to get the appropriate debugging symbols. Then, use socat with the Unix socket to attach to the remote debugger:
$ gdb vmlinux
GNU gdb (GDB) 7.4.1-debian
Reading symbols from /home/bernat/src/linux/vmlinux...done.
(gdb) target remote | socat UNIX:$TMP/vm-$name-gdb.pipe -
Remote debugging using | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-gdb.pipe -
native_safe_halt () at /home/bernat/src/linux/arch/x86/include/asm/irqflags.h:50
You can now set breakpoints and resume execution of the kernel. Debugging the kernel is easier when optimizations are not enabled; they cannot be disabled globally, but you can disable them for individual files. For example, to debug net/ipv6/route.c, just add CFLAGS_route.o = -O0 to net/ipv6/Makefile, remove net/ipv6/route.o and type make.

Userland debugging To debug a program inside KVM, you can just use gdb as usual. Your $HOME directory is available, so it should be straightforward. However, if you want to perform some remote debugging, that's quite easy too. Add a new serial port to KVM:
kvm \
  -chardev socket,id=charserial1,path=$TMP/vm-$name-serial.pipe,server,nowait \
  -device isa-serial,chardev=charserial1,id=serial1
Start gdbserver in the guest:
$ libtool execute gdbserver /dev/ttyS1 zebra/zebra
Process /root/code/orange/quagga/build/zebra/.libs/lt-zebra created; pid = 800
Remote debugging using /dev/ttyS1
And from the host, you can attach to the remote process:
$ libtool execute gdb zebra/zebra
GNU gdb (GDB) 7.4.1-debian
Reading symbols from /home/bernat/code/orange/quagga/build/zebra/.libs/lt-zebra...done.
(gdb) target remote | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-serial.pipe
Remote debugging using | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-serial.pipe
Reading symbols from /lib64/ debugging symbols found)...done.
Loaded symbols for /lib64/
0x00007ffff7dddaf0 in ?? () from /lib64/

Demo For a demo, have a look at the following video (it is also available as an Ogg Theora video).

  1. stdio is configured such that signals are not enabled. KVM won't stop when receiving SIGINT. This is important for the usage we want to have.
  2. Therefore, it is not possible to mount a fresh /proc on top of the existing one. I have searched a bit but didn't find out why. Any comments on this are welcome.

18 June 2011

Vincent Bernat: The sorry state of Flash with 64-bit Debian

Even with the availability of the <video> tag in HTML5, most websites still require the Adobe Flash plugin to access some content. If you run a 64-bit (Intel architecture) version of Debian, you can still get this plugin through the flashplugin-nonfree package. Unfortunately, Adobe never committed to decent support for the 64-bit version of its plugin. The first 64-bit version was provided as late as the fall of 2008, and support was then discontinued. We recently got a new preview release, but it is still lagging behind the 32-bit version. The 64-bit plugin has two major drawbacks: Even if you fix the second point, you have no way to fix the first one. What are the alternatives?

Lightspark Lightspark is a modern, free, open-source Flash player implementation featuring JIT compilation (with the help of LLVM) and hardware acceleration using OpenGL. It only supports ActionScript 3 (introduced in Flash 9), but it now falls back to Gnash if an older version is required. The main problem with Lightspark is that it is optimized for modern hardware and uses OpenGL textures to display video frames. This achieves video compositing with the help of the graphics card. Unfortunately, if you use a video card with poor OpenGL performance, like an NVidia card with the Nouveau driver, Lightspark is slow as hell. It works well with an Intel card. Currently, Lightspark is only available in experimental for Debian. Install browser-plugin-lightspark.

Gnash Gnash is a more mature project but does not support ActionScript 3. YouTube works well with Gnash, as do most other applications unless they use too-recent features. If your use of Flash is limited, it is a nice alternative. Unfortunately, more and more content is delivered through applications requiring ActionScript 3 support. On Debian, you can get started by installing browser-plugin-gnash.

32-bit Flash plugin on Debian Ubuntu provides Flash as a 32-bit plugin with the help of nspluginwrapper, a special plugin that enables the use of a 32-bit plugin inside a 64-bit browser. This seems a sensible choice since Adobe only really supports the 32-bit version. You can also use nspluginwrapper with Debian. You need to download the 32-bit version of Flash, uncompress it and put it in ~/.mozilla/plugins.
# aptitude purge flashplugin-nonfree
# aptitude install nspluginwrapper lib32asound2-plugins
$ cd ~/.mozilla/plugins
$ nspluginwrapper -i $PWD/
Beware that you need to watch for updates yourself. Mozilla provides a website that checks whether plugins are up-to-date; it works with browsers other than Firefox. It is a pity to still need this kind of hack in 2011, but it seems that this is currently the best solution. UPDATED: In a comment, Paul Rufous points out a repository providing the flashplayer-mozilla package, which is the 32-bit plugin wrapped with nspluginwrapper.

HTML5 Thanks to the <video> tag available in HTML5, most videos should be available without the need for a special plugin. Unfortunately, HTML5 does not define a list of video codecs and container formats that every browser should support. Therefore, some browsers support Ogg Theora, others H264, and some also support WebM, the new royalty-free format from Google. You can find a good summary of the problem from Mark Pilgrim. With Chromium as found in Debian, you get Ogg Theora, WebM and H264. YouTube has an opt-in program to enable HTML5 video, but most other video content is simply unavailable without Flash. The web without Flash is not a reality yet.

Grab video with helper tools UPDATED: Josh Triplett pointed out in a comment several command-line tools that can be used to download Flash videos from popular websites. These tools include:
  • youtube-dl
  • get-flash-videos
  • clive
These tools can be useful even if you have a Flash plugin installed, since they let you use the video player of your choice (better performance, working fullscreen mode). UPDATED: Another solution provided in a comment: the Video DownloadHelper extension can extract videos from the current web page. Another comment features the FlashVideoReplacer extension, which replaces the Flash container with a standalone player.

8 November 2010

Joey Hess: chop chop

I've always loathed cheap particle board and peg bookcases, and when I broke one while reorganising the living room at the Hollow today, I finally found a good use for them. They're great fun to chop up with an axe, and make for a very hot fire in the stove later. I released git-annex 0.03 today. The main improvements are being able to configure the backend to use on a per-extension basis in .gitattributes, a fsck subcommand that will find dangling annexed content, and bugfixes, including massive memory use savings. The big feature under development next is an idea Josh Triplett and I batted around; a way to checkout annexed files to edit their contents. Today I also got the radio working well here. Set up a car radio with a 40 foot wire for an antenna, and I finally got it almost static free. Pity that running a nslu2 on the same power circuit causes interference, so I am not able to use that to feed in music via the radio's AUX jack yet, which is the eventual plan. And the radio's powered USB outlet is wasted, I had hoped to run the nslu2 from that. Seems I can run the radio for 9 hours and still have plenty of solar power, even this time of year. Wish it were colder.

24 September 2008

Lucas Nussbaum: Cool stats about Debian bugs

Now that bug #500000 has been reported, let’s have a look at all our other bugs, using UDD. Number of archived bugs:
select count(*) from archived_bugs;
Number of unarchived bugs marked done:
select count(*) from bugs where status = 'done';
Status of unarchived bugs (“pending” doesn’t mean “tagged pending” here):
select status, count(*) from bugs group by status;
    status       count
 pending         53587
 pending-fixed    1195
 forwarded        6778
 done             8267
 fixed             167
The sum isn’t even close to 500000. That’s because quite a lot of bugs disappeared:
select id from bugs union select id from archived_bugs order by id limit 10;
Now, let’s look at our open bugs.
Oldest open bugs:
select id, package, title, arrival from bugs where status != 'done' order by id limit 10;
  id       package                                         title                                            arrival
  825   trn              trn warning messages corrupt thread selector display                         1995-04-22 18:33:01
 1555   dselect          dselect per-screen-half focus request                                        1995-10-06 15:48:04
 2297   xterm            xterm: xterm sometimes gets mouse-paste and RETURN keypress in wrong order   1996-02-07 21:33:01
 2298   trn              trn bug with shell escaping                                                  1996-02-07 21:48:01
 3175   xonix            xonix colors bad for colorblind                                              1996-05-31 23:18:04
 3180   linuxdoc-tools   linuxdoc-sgml semantics and formatting problems                              1996-06-02 05:18:03
 3251   acct             accounting file corruption                                                   1996-06-12 17:44:10
 3773   xless            xless default window too thin and won't go away when asked nicely            1996-07-14 00:03:09
 4073   make             make pattern rules delete intermediate files                                 1996-08-08 20:18:01
 4448   dselect          [PERF] dselect performance gripe (disk method doing dpkg -iGROEB)            1996-09-09 03:33:05
Breakdown by severity:
select severity, count(*) from bugs where status != 'done' group by severity;
 severity    count
 normal      27680
 important    7606
 minor        6921
 wishlist    18898
 critical       29
 grave         209
 serious       384
Top 10 submitters for open bugs:
select submitter, count(*) from bugs where status != 'done' group by submitter order by count desc limit 10;
 submitter           count
 Dan Jacobson         1455
 martin f krafft       667
 Raphael Geissert      422
 Joey Hess             392
 Marc Haber            368
 Julien Danjou         342
 Josh Triplett         331
 Vincent Lefevre       296
                       260
 Justin Pryzby         245
Top bugs reporters ever:
select submitter, count(*) from (select * from bugs union select * from archived_bugs) as all_bugs
group by submitter order by count desc limit 10;
 submitter           count
 Martin Michlmayr     4279
 Dan Jacobson         3652
 Daniel Schepler      3045
 Joey Hess            2836
 Lucas Nussbaum       2701
 Andreas Jochens      2605
 Matthias Klose       2442
 Christian Perrier    2302
 James Troup          2198
 Matt Zimmerman       2027
You want more data? Connect to UDD (from master.d.o or alioth.d.o, more info here), run your own queries, and post them with the results in the comments!

12 August 2007

Joey Hess: ikiwiki and spam

One of the reasons I was reluctant to write a wiki before starting work on ikiwiki was the wiki/blog spam problem. Now that ikiwiki is reasonably mature, it's interesting to look back at how that turned out. The basic spam prevention mechanism in ikiwiki is the same as the one many wikis use: by default, it requires registration before you can edit a page. This is a pretty small stumbling block for spammers, especially since ikiwiki doesn't bother with email verification. It even prefills the login form with your username and password after you register. It would take about 5 lines of perl to defeat its registration process. Surprisingly, only one spammer has spammed any of my wikis. To defeat that spammer, I added the most common second layer of defense against spam: a blacklist. Again I implemented the bare minimum, a way to blacklist a given user and ask them to "go away". Nothing to blacklist IPs, nothing more complex. That was put in place last October and I've had no problems with spam since. Around the same time, I made a change that I feared might open up a whole new class of spam, when I added openid support to ikiwiki. As far as I know, no one with an openid has tried to spam an ikiwiki wiki yet. Ikiwiki did grow one other common spam prevention tool: Josh Triplett added support for a user account creation password, so you have to know the password to get an account at all. This isn't used by default though. The only other possibly anti-spam measure is that ikiwiki does allow locking down pages so only an admin can edit them, or so they can only be edited via commits to a revision control system. I use these methods for personal stuff like my blog, which might have cut down on the incentive to spam. Spamming a discussion page is just not as interesting as adding spam directly to my blog's rss feed. It seems likely that any new wiki system will have less spam at the beginning, especially if the spammers have to write code to bypass its registration system.
That one spammer apparently did so fairly early is surprising. That no others have tried since is also surprising. My ease at blocking the spammer by simply banning them suggests that they were going after the easiest possible targets, and that it's not yet worth their while to work around even the simplest countermeasures. But if so, why did they add support for an entirely new registration system? My guess is that their existing code happened to work with ikiwiki's registration system. (This would take probably 20 lines of perl. ;-) Of course, there's all kinds of counterspam measures that I could implement if needed. By the way, if you use ikiwiki and have had problems with spam, I'm of course interested in fixing that. It'll be interesting to see when additional measures turn out to be needed, as the wiki and sites using it become better known.

16 June 2007

Michael Janssen: Ask the Interwebs #1

Lately I have been thinking about switching source code management systems. In years gone by, I have used CVS, Subversion, arch, and most recently bazaar-ng for managing all the source code that I have to change. Lately I have been unhappy with how long bazaar takes to get some things done, and after watching a Google talk by Linus about Git, I thought I would give it a look.

I was surprised to find out that it actually models the way that I work normally slightly better than the bazaar model does. The main thing is that a repository holds a bunch of branches, but the working directory only has one of those branches checked out at any point in time. I use bazaar with about 4-5 branches per project, and only work on one of them at once (normally; sometimes I want to transport a bug across a couple of branches) - so the "working space replaced by branch you want to work on" model actually works out OK for me. I also need to collaborate with a bunch of projects at school and work that use other systems, including CVS and Subversion, which Git seems to make easy.

I did run into some questions that I don't really have a good answer to, however, therefore the new segment on the blog: Ask the interwebs. Normally I would just ask on some IRC channel, but the Minneapolis public library doesn't seem to allow anything but HTTP out. On to the question!

When using Git, how do you manage having multiple computers (at multiple locations) that you work on? I have approximately 5 different computers that I use regularly. Luckily I have enough space to have git installed on all of them, and I also have a web server that I can use WebDAV on if necessary. What I would like to have is a repository that I can pull from before I start work (if I am connected) and then push to when I finish, so that at the next computer I sit down at I can just pull again and get everything. I would like this to include any branches I may have made on the first computer. Is this possible, hard, or easy for Git users? How much do I need to worry about slowdown? My WebDAV server is in the cloud, and I would prefer to be able to get up and go on, say, a minute's notice.

I'm also interested in what your workflow is like even if it doesn't match the model I just described. I'm open to changing - the bazaar-ng to git change is big enough that I can probably incorporate a new workflow as well.

Comments: (8) Trackbacks: (1)

  1. adrian: hi, i'm a just a newbie so can't help much.
    You can host your push repository in
    If you prefer to do it on your own, you have git-daemon (have a look at the tutorials). Good luck!
  2. Michael Janssen: I looked at the public, but decided against it, because while I want my branches to be available at some point, I'd rather not have all of them available at the same time. People randomly finding them is okay, but I'm sure I'll check in unworkable/uncompilable code in this state, because I would basically be using git as an easy sync between computers at times.

    I'm just really interested in how other people deal with this issue of needing semi-location-independent access to the branches.
  3. mike: Sounds already good for you to use git.

  4. Rémi Vanicat: I didn't answer the synchronization problem in my blog post because it is one I don't have. Still, you can make a git repository that can be pulled from and pushed to, and git can use https/dav or ssh or its own git protocol for this.

    Just read man git-remote, man git-push, man git-pull and man git
  5. JM Ibanez: Just upload a bare git repository (e.g. copy /.git in particular) to a server you have ssh access to. Then, in your working copy in git 1.5:

      $ git remote add upstream git+ssh:///path/to/bare/git

    You can then

      $ git pull upstream .
      $ git pull upstream/some-upstream-branch my-local-branch
      $ git push upstream/some-upstream-branch
  6. Josh Triplett: A followup to that: you can do the same thing without ssh access, by using WebDAV. See Documentation/howto/setup-git-server-over-http.txt in the Git source (also available various other places in other formats).
  7. Steven Walter: Whenever I want to create a non-local git repository for push, whether for sharing the code or just backing it up on another system, I follow this procedure:

    ssh remoteserver
    mkdir myproject.git
    cd myproject.git
    git --bare init --shared=all # world readable, group writeable
    chgrp -R git .
    vi description
    # back on local system
    git remote add origin git+ssh://remoteserver/path/to/git
    git push origin master:master
    git push origin branch2:branch2 # repeat for other branches

    You may then want to re-create your local branches so that they "track" the remote branches. "man git-branch" for more details on this.

    Hope that helps!
  8. brian m. carlson: For your workflow, you can try this:
    # first time on each machine only (you can use DAV, too)
    git remote add server git+ssh://server-hostname/path/project.git

    # every time
    git push server
    ssh anothermachine
    git fetch server
    git pull server branch-on-server-to-merge-from
    # merge happens; continue working
    # start over again from the push
  1. How I use git
    Michael Janssen asks about workflow and git, so I will explain mine. I have three types of git repositories: git-svn repositories where I don’t have write access on the upstream subversion repositories. git-svn repositories where I do have write ...
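The comment thread above boils down to one pattern: a bare repository reachable from every machine, push all branches before leaving, fetch on arrival. Here is a runnable sketch of that pattern under stated assumptions: all paths are throwaway directories invented for the example, and a local bare repository stands in for the ssh or WebDAV host the commenters describe:

```shell
# End-to-end sketch of the workflow from the comments above: a bare
# "server" repository, one machine pushing every branch, and a second
# machine picking them all up. All paths are examples, not real hosts.
set -e
d=$(mktemp -d)
git init -q --bare "$d/server.git"            # stands in for the ssh/WebDAV host

git init -q "$d/machine-a"                    # first working machine
cd "$d/machine-a"
git config user.email you@example.com         # identity is needed to commit
git config user.name "You"
echo hello > file
git add file
git commit -q -m "first commit"
git branch topic                              # a second branch to carry along
git remote add server "$d/server.git"
git push -q --all server                      # push *every* local branch

git clone -q "$d/server.git" "$d/machine-b"   # "next computer": get everything
git -C "$d/machine-b" branch -r               # lists origin/topic as well
```

The key piece for the "include any branches I may have made" requirement is `git push --all`, which pushes every local head rather than just the current branch; a fresh clone (or `git fetch`) on the next machine then sees all of them as remote-tracking branches.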

11 June 2007

Remi Vanicat: A solution for a better spell checker in firefox : a better editor

I’ve recently complained here about the Iceweasel spell checker and the fact that it is less effective than Aspell at proposing corrections for misspelled French words. I also lack the nice key bindings for correction available in Emacs with flyspell. There seems to be a solution to all my problems, and a comment made by Josh Triplett on MJ Ray's blog gave it to me: the It’s All Text Firefox (and Iceweasel) extension gives me the possibility to use my preferred editor for editing text in edit boxes (and yes, it is editor agnostic: you can use vi, vim, or even ed). So, thank you Debian planet, MJ Ray and Josh Triplett. I now have to try it to see if it’s really effective.

MJ Ray: Views on textarea Sizes

Matt Palmer wrote:
"Aaah, OK. I have no idea whether [pingbacks] work or not; they're deeply in the black magic category -- never used them."
They're just a remote procedure call. Shouldn't be black magic at all, but a surprising number of blogs seem to do them brokenly, while supporting the more complicated (but just as spammable) comment and trackback systems. On size:
"40x20 would be a good size for me. I'm not sure why people would complain that the comment box is too big, unless perhaps it interfered with the good layout of the page."
I don't think it interfered with the layout. I'll increase the size again Real Soon Now. Josh Triplett commented:
"For me, the size of the edit box doesn't matter much, because I don't use it. I just click the little "edit" button next to it that the Firefox^WIceweasel extension It's All Text provides, and then type my comment in Emacs. :)"
James commented:
"I use the resizeable form fields extension so I don't have a problem with small textareas, but has a script that'll make it resizeable for everyone."
Well, everyone who lets web site scripts use their CPU and electricity... Thanks for all the helpful comments.

13 March 2007

Julien Danjou: DeFuBu contest #8

Welcome to this 8th issue of the DeFuBu contest, the almost monthly championship of the funniest bug reported to the Debian BTS. Four Debian related people voted: Raphael Hertzog, Jeroen van Wolffelaar, Ana Guerrero and Margarita Manterola. To participate, simply drop me an email with a bug number or a request to vote, or anything that may help.

6 February 2007

Julien Danjou: DeFuBu contest #7

Welcome to this 7th issue of the DeFuBu contest, the monthly championship of the funniest bug reported to the Debian BTS. Four Debian related people voted for these bugs: Emmanuel Bouthenot, Mohammed Adnène Trojette, Julien Louis and Jade Alglave. To participate, simply drop me an email with a bug number.

1 February 2007

David Nusinow: Bits From The XSF

This has been a little overdue, but here it is. Lots of less than glamorous stuff has been going on in the XSF, and a few more exciting things are in the pipeline.

Among the less glamorous stuff, our resident release junkie/alpha porter, Steve Langasek, fixed #392500 which was our major RC bug on the alpha arch. Aside from that bug, there's been a whole host of bug triage by our newest team member, Brice Goglin. Brice has taken on the unenviable task of going through the massive list of bugs owned by the XSF. I've been traditionally less than responsible about handling bug reports like I should, and Brice is dealing with the mess I've left behind. Before he started, we were at over 2000 outstanding bugs, and as of this writing we're down to 1646 bugs, which is a huge amount of work. Hopefully this will go a long way towards making our little corner of the BTS usable for mere mortals. Aside from that, there's been an enormous amount of small cleanups and bug fixes by the whole team, most notably Julien Cristau (who's been doing more actual release management than yours truly), Thierry Redding, and Drew Parsons. Perhaps most important among these are miscellaneous last minute driver fixes that will enable a fair number of people to actually run Etch without backports. Michel Dänzer has continued to be his usual awesome self, responding to the tough DRI bugs that the rest of us are terrified to approach.

Some of the more exciting stuff from a user perspective has been the various updates and new packaging going on. Thierry has taken on mesa, which is an incredibly daunting task, and he's done an incredible job of it so far. He and Julien cooperated on getting the newest mesa release, 6.5.2, in to experimental. That's a huge step on the way to getting the 7.2 release in to the archive. Thierry has also packaged up the newest compiz and some additional plugins ported from beryl, both of which are waiting in NEW right now for our overworked ftp masters to find time to have a look at them. Speaking of beryl, we also have that more or less complete and packaged. Our second-newest XSF member, Shawn Starr, has taken on beryl, and done a great job of getting the packages ready for experimental. They came back from review with a few minor comments, so he's busy getting those last issues resolved, so beryl should be winging its way over to NEW again for re-review soon. Finally, Josh Triplett and Jamey Sharp, who are both Debian and XCB developers, have put XCB in to experimental for you to test and play with. Expect to hear more about this when the Lenny development cycle starts.

The other huge thing that's been going on, and what I've been devoting most of my time to (aside from reading all the mail from Brice's bug triage), is transitioning the XSF over to git. Thierry has written a major chunk of infrastructure to help move us over. I've done most of the conversion locally, and have been putting things up on as I go. As of now, we've got a very large chunk of X in git now, and using svn for those bits is officially closed. I'm hoping to complete the move in the next two weeks or so. If you want the above software that's not yet in the archive (beryl, compiz, compiz-extra, etc) you can clone it from the git repositories and build away. The other major thing that I've been doing is writing up a XSF git policy. This required a lot of input from the team, and we had to come up with a reasonable way of working the archive so that we could easily work with an upstream that was also using git, which as far as I know is a unique situation right now. It's in pretty good shape right now and we're just starting to put it in to real use. I've proposed a talk for Debconf on the XSF roadmap for Lenny, and I'll be sure to talk about our experiences using git if it gets accepted. In addition, we've all been learning the ins and outs of git, which is a whole other post in itself. I think that the XSF will end up being a really good resource for people in Debian who are looking to use git, as we'll have a lot of real world experience with a tool that relatively few people actually seem to know. Since I think git usage will only grow in the future, hopefully this will end up being valuable for Debian as a whole.

18 November 2006

Enrico Zini: pbuilder-tips

pbuilder tips Many thanks to Arnaud Fontaine for providing me with the solutions to all the pbuilder problems I have had during the last years. This is what happens to me: someone reports a bug in debtags, and it turns out that the bug needs fixing in libwibble. I start by fixing libwibble, then I need to recompile libtagcoll2-dev, then use it to recompile libept-dev, which I can finally use to rebuild debtags. A normal pbuilder setup would only take dependencies from apt sources, so I need to create an APT source with the packages I just built and feed it to pbuilder. Now, thanks to Arnaud's suggestion, I have an easy way to create the APT repository:
$ cat update-aptsource
cd /var/cache/pbuilder/result
dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz
echo "Running pbuilder update..."
su -c "pbuilder update"
Since pbuilder works in a chroot environment, the easiest way of making it access the repository is via http. First I put the new repository online:
ln -s /var/cache/pbuilder/result /var/www/debian
Then I add it to /etc/pbuilderrc:
OTHERMIRROR="deb http://localhost/debian/ ./"
Then I tell pbuilder to update the configuration inside the chroots:
pbuilder update --override-config
And finally I can build my stack of packages. This is probably not rocket science for many DDs. What I'm really appreciating here is using the minimum amount of pbuilder customization and of lines of code of extra infrastructure to maintain. Now, at every compile round, I just need to run that update-aptsource script to inject into the system the dependencies that I have just built. Update: Josh Triplett sent the configuration options to avoid using an HTTP server and use pbuilder's bind-mount facility:
OTHERMIRROR="deb file:/var/cache/pbuilder/result/ ./"
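For the file: mirror to resolve inside the chroot, the result directory also has to be visible there; pbuilder's BINDMOUNTS option is the bind-mount facility referred to above. A sketch of the combined /etc/pbuilderrc fragment, assuming the default result path used throughout the post:

```shell
# /etc/pbuilderrc fragment: use locally built packages as an APT source
# without a web server. BINDMOUNTS exposes the host directory inside the
# chroot so the file: URL resolves there.
OTHERMIRROR="deb file:/var/cache/pbuilder/result/ ./"
BINDMOUNTS="/var/cache/pbuilder/result"
# Re-run `pbuilder update --override-config` after changing OTHERMIRROR.
```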