Search Results: "lolando"

29 May 2015

Roland Mas: FusionForge 6.0 final release available

Hi all. After 4 release candidates, the FusionForge community is proud to announce the new major FusionForge 6.0 final release. The major changes in this version are: Here is a more detailed list of visible changes: Standard features: Tracker roadmap of the 6.0 release: roadmap. Some metrics about 6.0: FusionForge 6.0 can be downloaded in source form from our file release system. Packages will be available in some distributions soon. Enjoy! Your feedback to fusionforge-general@lists.fusionforge.org is welcome! For more information on FusionForge, refer to FusionForge.org. -- The FusionForge community

2 December 2013

Roland Mas: Rsyncing a BackupPC storage pool, efficiently

BackupPC is a pretty good backup system. Its configuration is rather flexible, it has nice expiry policies, and it can store duplicated file contents only once (for files that are shared across hosts or don't change over time) within a compressed pool of data. However, it doesn't do much to help push the data to off-site storage, or at least not very efficiently. So if you have a BackupPC instance running on a Raspberry Pi or a plug computer at home, it's a bit tricky to protect your data against loss due to burglary or fire.

The obvious solution would be to rsync the storage pool to a remote site. However, the current pooling system relies heavily on hardlinks, and rsync is notoriously inefficient with those. In the home backup server scenario, this means that even if the computer is more powerful than a Pi and can handle the memory requirements of rsync, you'll often end up transferring way too much data.

So, since the obvious solution doesn't work straight away, what do we do? Why, we fix it, of course. A little look into the storage pool shows that the bulk of the data is stored in files with abstract names (derived from their contents) within a $prefix/pool directory; the files with concrete names, looking much like their originals, are stored within $prefix/pc, and they're actually the same files because they're hardlinks. Knowing this (which rsync doesn't), we can build a smarter replication tool, by:
  1. pushing only the pool with standard rsync;
  2. storing locally, and recreating remotely, the structure of hardlinks;
  3. pushing everything again with standard rsync.
Steps 1 and 3 are simple invocations of rsync -aH; step 2 can be implemented using the following two scripts. Run store-hardlinks.pl locally, push the links file, then run restore-hardlinks.pl on the remote server. This will ensure that files already present in the pool are also hardlinked in their natural location. store-hardlinks.pl:
#! /usr/bin/perl -w
use strict;
use Storable qw(nstore);
use File::Find;
use vars qw/$prefix $poolpath $pcpath %i2cpool %todo $store/;
$prefix = '/var/lib/backuppc';
$poolpath = "$prefix/cpool";
$pcpath = "$prefix/pc";
$store = "$prefix/links";
# for the convenience of &wanted calls, including -eval statements:
use vars qw/*name *dir *prune/;
*name   = *File::Find::name;
*dir    = *File::Find::dir;
*prune  = *File::Find::prune;
# Scan pool
File::Find::find({wanted => \&wanted_pool}, $poolpath);
# Scan PC dirs
File::Find::find({wanted => \&wanted_pc}, $pcpath);
nstore \%todo, $store;
exit;
sub wanted_pc {
    my ($dev,$ino,$mode,$nlink,$uid,$gid);
    (($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) &&
      -f _ &&
      ($nlink > 1) &&
      do {
          $name =~ s,$pcpath/,,;
          if (defined $i2cpool{$ino}) {
              $todo{$name} = $i2cpool{$ino};
          }
      };
}
sub wanted_pool {
    my ($dev,$ino,$mode,$nlink,$uid,$gid);
    (($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) &&
      -f _ &&
      ($nlink > 1) &&
      do {
          $name =~ s,$poolpath/,,;
          $i2cpool{$ino} = $name;
      };
}
restore-hardlinks.pl:
#! /usr/bin/perl -w
use strict;
use Storable;
use File::Path qw/make_path/;
use vars qw/$prefix $poolpath $pcpath %todo $store/;
$prefix = '/srv/backuppc-mirror';
$poolpath = "$prefix/cpool";
$pcpath = "$prefix/pc";
$store = "$prefix/links";
%todo = %{retrieve($store)};
my ($dev,$ino,$mode,$nlink,$uid,$gid);
foreach my $src (keys %todo) {
    my $inode;
    my $dest = $todo{$src};
    my $dpath = "$poolpath/$dest";
    my $spath = "$pcpath/$src";
    my $sdir = $spath;
    $sdir =~ s,/[^/]*?$,,;
    make_path ($sdir);
    next unless -e $dpath;
    if (! -e $spath) {
        link $dpath, $spath;
        next;
    }
    (($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($spath));
    $inode = $ino;
    (($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($dpath));
    if ($ino != $inode) {
        unlink $spath;
        link $dpath, $spath;
    }
}
The initial transfer can still take forever if the pool is large (and if you're pushing it through the small end of an ADSL link), but at least the files are only transferred once. Note: this is only useful for current versions of BackupPC. Apparently BackupPC 4 will have a different pooling system without hardlinks, and this hack will no longer be required. For now, though, here it is.
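Put together, the whole replication sequence could be driven by a small wrapper along these lines (a sketch only: the remote hostname and the location of the two scripts are illustrative assumptions; only the rsync -aH steps and the two Perl scripts above come from the post itself):
#! /bin/sh
# Sketch of the replication sequence; REMOTE and script locations are assumptions.
set -e
REMOTE=offsite.example.org
SRC=/var/lib/backuppc
DEST=/srv/backuppc-mirror
# Step 1: push only the pool with standard rsync
rsync -aH "$SRC/cpool/" "$REMOTE:$DEST/cpool/"
# Step 2: record the hardlink structure locally, ship the links file,
# then recreate the hardlinks remotely
./store-hardlinks.pl
rsync -a "$SRC/links" "$REMOTE:$DEST/links"
ssh "$REMOTE" ./restore-hardlinks.pl
# Step 3: push everything again; files already hardlinked into place
# on the remote side are not re-transferred
rsync -aH "$SRC/" "$REMOTE:$DEST/"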

28 June 2013

Roland Mas: FusionForge news, June 2013

Once again, it's been some time since the last FusionForge update in here. The main explanation is that news is slow on that front. Debian Wheezy was released without FusionForge packages, as previously announced, and even Sid hadn't seen any update on the package for way too long. The latter has just been fixed though: the freshly-released 5.2.2 upstream version is on its way to Sid (via NEW, since it adds a new plugin to allow authenticating against a CAS server). If and when it reaches Debian Jessie (the current testing), I'll work on backporting it for Wheezy. At some point I'll also start uploading snapshots of the upstream master branch to experimental, to give adventurous users a glimpse of what is to come in future releases, although I'll stick to only uploading when the automated tests all pass.

19 February 2013

Roland Mas: A challenge for whoever feels they have too much free time

Open question to enthusiasts, theoretical computer scientists and mathematicians of all sorts: is it possible to construct a valid QR-code that leads to interesting results when used as an initial configuration for the Game of Life? The rules: For Science! Update: The answer seems to be yes. Jurij Smakov assembled a QR-code generator and a Life engine and plugged them together for easy experimenting. And Stefano Zacchiroli noticed that using "free software" (no quotes) as the input leads to a couple of gliders endlessly traveling a field with a few still lifes. This is way beyond awesome.

30 January 2013

Roland Mas: Various small bits

Dear reader, I know you're wondering what I'm getting up to these days. Or at least, I guess there's a possibility that you're wondering. So since there's no single bit of news that would be worthy of a post here by itself, here's one summarizing things, in the hope that the accumulation makes a tiny bit of a difference.

On the rock'n'roll front: Eleven did its first true gig on our own in a pub last Saturday. And when I say "on our own", it could almost be taken literally, for we must have had a grand total of about 10 people. A bit of a disappointment, to say the least, especially since the pub was about 120 kilometers from home, 60 of which I rode on my motorbike at 2 AM (and close to 0°C). However, the landlord seemed to like us and hinted at further gigs during seasons where people are more likely to go out for drinks and music than to stay warm at home.

On a related note: we got ourselves a small set of stage lights (LED-powered, of course), and they have, in addition to the power cord and various switches, two sockets at the back for plugging in XLR-3 cables. On investigation, it seems that this means they can be controlled by a protocol known as DMX512, which opens up a lot of possibilities for someone who likes to control various things from a computer. I read a few web pages to get an idea of how this is supposed to work; it seems rather simple and straightforward, but the required software isn't in Debian yet. So I guess that if/when I get the necessary hardware, I'll have a new hobby and new toys to play with. Maybe our next gig will have bursts of lights on the big accented beats, triggered by strong enough hits on my drum cymbals.

This allows me to link to a new minor release of Wiimidi. The only addition is a new configuration section mapping to the default drumkit provided by Hydrogen.

And finally, to stay with the small bits of software, I was prompted to give my simple GPG-encrypted password store, a.k.a. SGEPS, its own page, with a proper release and so on. So there, 0.1 is out, with minimal Debian packaging included.

14 December 2012

Roland Mas: Back to space

A long time ago, I spent many many hours playing Frontier (Elite 2) on the family Atari ST. It was nice-looking (oh, those sunrises on distant planets!), playable, varied, had ample room for progression and very different possible roles, a huge universe of which nobody could ever hope to explore more than a tiny fraction, and it was basically the first game I encountered with no set goals. You are a spaceship pilot in a galaxy colonised left and right, and it's up to you to decide what you want the game to be. Pirate, headhunter, mercenary, trader, taxi, explorer: there's no end goal to achieve except what you fix for yourself. That was, for me as a youngster, a mind-opening experience, which I have never felt as strongly since (with the possible exception of the Creatures game, but I barely tried it).

Then came modernity, the PC, and the First Encounters sequel (Elite 3). It added moderate complexity to the gameplay, some sort of political evolution among the factions and a kind of plot so the galaxy was no longer quite so static, but the end results were still a bit disappointing. It felt like Frontier with a few not-exactly-overwhelming textures stuck onto the ships, a loose plot arc, and a third faction, the Alliance of Independent Systems, beyond the Federation and the Empire. Not that exciting, although the name of the Alliance's capital star system stayed in my mind.

So, like many others, I came to wait for Elite 4. That wait started in 1995, more than seventeen years ago. During that time, Duke Nukem Forever was announced, promised, developed, postponed, abandoned, found again, developed again, and it was even actually released in 2011, while Elite 4 went into the same kind of hell, only without the actual release (yet).

At some point I started trying various substitutes to help pass the time. There was Parsec, which was a nice-looking space combat simulator, multiplayer and all, but it never felt finished. Development slowed down to a thin trickle, and it seems like it only recently started again as Open Parsec. Hopefully it won't wither off again, but since it's only focused on combat, it's not exactly the same spirit as Elite. There's Oolite, but it looks and feels very much like a lightly-modernised Frontier. Better textures, sure, but the UI still looks like it got taken from the times when games mentioned they required an EGA/VGA video card. There's Vega Strike. It's very promising too, and I did try it at some point, but it was crashing too often for my tastes. Development seems to have slowed down, but not stopped; maybe it's time I gave it another go.

I just recently found out about Pioneer, and I admit I'm excited. Really excited. It feels much like Frontier felt at the time. It's not finished yet (the on-planet cities in particular look a bit weird), but from the alpha I tried and the videos on the website it's really impressive. I'm going to keep an eye on that, knowing I could spend quite some time in there even before it's quite finished, if only to fly a spaceship in the canyons of Europa with Jupiter rising in the distance. I'll let you know if I find a black monolith.

But the main event triggering this blog post is that apparently Elite 4 is not dead (yet). Much better: it's under heavy development, under the Elite Dangerous name, and the current state seems to be rather good already. And Frontier Developments, the company behind it, opened a Kickstarter campaign to fund the project.
The screenshots and the videos look impressive, and the interviews with the lead developer imply that the gameplay will be awesome. The galaxy will apparently be really dynamic, and the player's actions could really influence its geopolitical (galactopolitical?) evolution, the profitability of trade routes, the prevalence and distribution of space pirates in various sectors, and so on. Combined with a multi-player mode, this could become the greatest game ever made (for my personal taste at least). So I'm very much hoping the funding campaign reaches its target. The release date is set to March 2014. It's going to be a long wait, fraught with uncertainties. But then again, Duke Nukem Forever was eventually released, wasn't it? [No, I never played the original Elite. I'm not that old.]

29 November 2012

Roland Mas: Wii(gh2)midi yet again

Not that I'm bored or anything, but I spent some more time on this since last month, and apparently some people are interested enough, so here's the recent news about wiigh2midi. Also, since the code moved (and it doesn't harm mentioning it again): Wiimidi is developed with Bazaar, and the public branch is at https://alioth.debian.org/~lolando/bzr/wiimidi/trunk/. So, to grab a copy:
 bzr checkout https://alioth.debian.org/~lolando/bzr/wiimidi/trunk/
Patches welcome, of course! Also, I'll be interested to hear about your applications. I've been told about a Wiimote-powered foot controller, which is exactly the kind of unpredictable results I was hoping to achieve by publishing my code. Keep it up, I want to hear about Wiimote-controlled robot dinosaurs next! Update: Wiimidi now has its dedicated page at Wiimidi.

28 October 2012

Roland Mas: Guitar Hero drumkits and MIDI, again

Almost a year after my previous post, I felt inclined to spend another Sunday (this one was chilly rather than rainy) working on my script to integrate the drumkit of Guitar Hero World Tour for Wii within a MIDI environment. And wiigh2midi got, if not a rewrite, then at least a few enhancements since I last mentioned it here. It's still not something that I'd put in everyone's hands, but it's getting to be seriously usable. I wonder if there'd be any interest in me packaging that and uploading it to Debian? It would need some cleanup first (and a more generic name, since it's far from restricted to Guitar Hero controllers or drumkits), but I guess it could be useful. Ping me if you're interested.

28 September 2012

Roland Mas: FusionForge news, September 2012

Hey, long time no see! Okay, so what's new in FusionForge land these days? Well, I guess the most prominent news is that we've just declared 5.2 stable. It's been uploaded to fusionforge.org already, and Debian packages are on their way to Debian unstable. Yay! Not-yay: the final weeks convinced us that the state of the 5.2 release candidates, and especially the packages in Debian Wheezy, wasn't nearly good enough for inclusion in a stable Debian release. So the packages won't be officially part of Wheezy when it comes out, to our great shame. The packages from unstable should work fine however, and we'll provide backports via the official Debian channel, backports.debian.org, as soon as possible. And finally: the developers (and some users) of FusionForge will gather for a work session in Paris on the 10th of October, as described on the Meeting/Oct2012 page on the FusionForge wiki. Join us if you're interested!

31 August 2012

Roland Mas: Integrating FWbuilder with fail2ban and port-knocking

This article documents how I'm currently building my firewalls. It builds on netfilter-based port-knocking, and tries to integrate several components of a firewall as gracefully as possible. For some context: I'm getting involved with servers where the firewall policy goes beyond a handful of "SSH from my home and my other servers" rules. There are many different network streams that need to be policed, some of them are common across several (potentially many) servers, and so on. So I'm gradually giving up on my hand-made scripts, and trying out higher-level tools. I settled on FWbuilder, which seems nice. However, it only allows static policies, and I still want to keep dynamic policies such as what fail2ban provides, as well as my own port-knocking system. The problem I had was that fail2ban isn't really made to play nice as part of a complex firewalling setup, my port-knocking system was too tightly integrated within my firewall script, and FWbuilder wasn't too flexible when it came to delegating part of the firewall policy to something external. Fortunately, this was only a perceived problem (or a real problem in my understanding), because it is actually possible to have all three blocks playing nicely together.

More context: as usual, I'm focusing on Debian-like systems. More precisely, on those with a Linux kernel; it may be that FreeBSD's firewalling subsystem has a feature comparable to Linux's "recent" module, but I don't know.

Let's start with FWbuilder. This is not the place for a manual; the official documentation is rather complete. I'll assume you have defined most relevant blocks in there: firewall, hosts, IP addresses, services, and so on. You define your static policy with the standard rules. From then on, we want to integrate the external tools for dynamic rules.

Step 1: Integrating fail2ban. We want fail2ban to have its own playground, so that it doesn't overwrite anything in the standard policy. The trick is to define a new policy rule set named fail2ban. Leave it empty in FWbuilder. So far so good, but fail2ban (the daemon) still operates on the INPUT chain in the firewall, and could therefore still mangle the static rules. Fortunately, starting with fail2ban 0.8.5 (available from Debian Wheezy, or in the backports for Squeeze), you can define what chain to operate on: with a configuration item such as chain = fail2ban, fail2ban (the daemon) will now only add its rules to fail2ban (the firewall chain), and won't be able to damage the other chains. The missing part is to send some of the traffic to it using the standard policy: I defined a rule sending the incoming SSH connections to the fail2ban policy ("branching" in FWbuilder jargon). Voilà: the static policy delegates part of the decision-making to a sub-policy controlled by the fail2ban daemon.

Step 2: Integrating port-knocking. This is a bit trickier, but we'll use a similar method. First, the traffic used for port-knocking needs to be directed to the chain that does the listening. Define a policy rule set named portknocking, and leave it empty in FWbuilder. It'll be used by the dynamic rules to track the progression of source IP addresses through the port-knocking sequence, so you'll need to send ("branch") incoming traffic there, probably after the rules allowing incoming connections from known hosts. The dynamic part of this will only concern the refreshing of this "listening" chain, which we assume will do its work and mark IP addresses with PK_ESTABLISHED once the sequence is completed.
What we do with these marked IP addresses will still be defined within the FWbuilder policy. We're going to need some complex rules, since we want to filter according to this PK_ESTABLISHED bit and according to destination port, for instance; unfortunately FWbuilder doesn't allow combining filter criteria with "and", so we define a new policy rule set called accept_if_pk_ok. This ruleset has two rules: the second is an ACCEPT and should be easy to understand. The first rule needs to ensure the ACCEPT is only reached for connections coming from PK_ESTABLISHED addresses, so it's going to be a bit tricky. (Explanation: the first rule matches packets coming from IP addresses not marked as PK_ESTABLISHED, and returns them to the calling policy. Packets remaining after this rule are those coming from the appropriate addresses, and they go on to the ACCEPT. We could have had the first rule match on IP addresses that are marked, and branch to yet another ruleset with the ACCEPT part, but that would make it harder to read, I feel.)

Now let's get back to the main policy and add rules concerning what kind of traffic we want to allow once the port-knocking sequence has completed. For instance, we define a rule matching on the "SSH" service, where the action is to branch to accept_if_pk_ok. When an incoming packet tries to establish a connection to the SSH port, it's passed to the accept_if_pk_ok ruleset. If it comes from the same IP as a recent port-knocking sequence, it goes on to be ACCEPTed. If not, it returns to the main policy. Maybe static rules further on will allow it to go through.

Step 3: tying it all together. Now that we have all the pieces, the rest is plumbing. With this setup, at boot time, the $hostname.fw script creates the static policy and the extra playgrounds; then the port-knocking script implements the listening for the magic sequence; then fail2ban inserts its own rules. And there we are: three different parts for the firewall policy, all integrating nicely. Mission accomplished!

Note: (mostly copy-and-pasted from the previous article) This article is deliberately short on details and ready-to-run scripts. Firstly because firewall scripts vary wildly so any script would have to be adapted anyway, but mostly because security is best handled with one's brain switched on. Fiddling with a firewall can easily open gaping holes or lock everyone out. So please make sure you understand what goes on before blindly pasting stuff into your own setup. Some bits are left as an exercise to the reader.
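To make the three-step structure above a little more concrete, here is roughly the kind of netfilter layout the pieces end up creating, expressed as raw iptables commands (a sketch under my assumptions only: FWbuilder and the port-knocking script generate their own equivalents, and the chain names, the SSH port and the exact recent-module options will differ in a real setup):
# Playground chains, normally created by the FWbuilder-generated $hostname.fw script
iptables -N fail2ban          # left empty; filled in by the fail2ban daemon
iptables -N portknocking      # filled in by the port-knocking script
iptables -N accept_if_pk_ok
# Branch incoming SSH connections through the fail2ban playground first
iptables -A INPUT -p tcp --dport 22 -j fail2ban
# Branch incoming traffic to the port-knocking tracker
# (in the real policy, after the rules allowing known hosts)
iptables -A INPUT -j portknocking
# accept_if_pk_ok: return packets whose source has not completed
# the knock sequence, accept the rest
iptables -A accept_if_pk_ok -m recent ! --rcheck --name PK_ESTABLISHED -j RETURN
iptables -A accept_if_pk_ok -j ACCEPT
# Main policy: new SSH connections are only accepted after a successful knock
iptables -A INPUT -p tcp --dport 22 --syn -j accept_if_pk_ok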

28 March 2012

Roland Mas: FusionForge news, March 2012

The winter has been busy everywhere, and not much happened in FusionForge land. However, this is over, and we've started pushing again, with a focus on making our next 5.2 release ready. We've started a stabilisation branch and set up Jenkins jobs to run our various testsuites on it. Packages built from that branch have also been uploaded to the "experimental" suite in Debian (they need a manual approval before they actually appear there, but that shouldn't take too long). They should be installable and working even on Debian Squeeze (stable), although they focus mostly on Wheezy (testing). My personal goal is to have a solid 5.2 release and include it in Wheezy so it can be part of an official Debian system. What this all means is that we welcome tests and reports of breakage; users of 5.1 or earlier versions in particular are encouraged to test upgrades (in virtual machines or development environments, obviously, if the production server is sensitive). We also welcome translators; there are many new plugins, which means many new translatable strings. We'll probably do release candidates before the final release; I'll announce them here (in addition to the fusionforge.org website). Stay tuned!

1 March 2012

Roland Mas: Looking for the ultimate distributed filesystem, take 2

This follows up on a previous post, and tries to summarize the corrections and suggestions I received. First, a reminder: yes, I'm really looking for a filesystem, not a storage system. Even if I only consider the music files use case, there are just too many things that want to access the files, and there's no way I'm going to port these things to use the storage system's API instead of plain old file access. Examples? The Rhythmbox player. The XMMS2 player. The mt-daapd/forked-daapd DAAP server. My script to back up or restore metadata (tags). Ex-Falso, to do mass edits on these tags. abcde, to add new files when I buy a new CD. Any kind of shell one-liners I currently do without thinking, because find, xargs, cp, ln and so on all operate on files. I understand that providing filesystem semantics on top of a storage system is hard, but that doesn't change my requirements, and saying otherwise is patronising.

So, on to the meat of the matter. Apparently my understanding of Tahoe-LAFS was mostly correct. It seems that multiple introducers are on their way to becoming a reality, which would mean the one remaining central point would go away. Most of my other complaints seem to be on their way to being resolved, too, except that it's still targeted as a storage system, and the FUSE/sshfs layer on top still has its drawbacks (quoting from the wiki, "before uploading a file to a Tahoe filesystem, the whole file has to be available", "mutable parts of a filesystem should only be accessed via a single sshfs mount [...] data loss may result for concurrently accessed files if this restriction is not followed", and so on). From what I read, the position is that nobody publicly uses Tahoe's filesystem integration, therefore there's no need to fix its shortcomings; my understanding is that nobody uses it precisely because of its shortcomings.

Ceph: apparently it does repair/rebalance automatically, contrary to what I thought. However, two people told me that it really expects all nodes to be on the same LAN, and geographically distributed setups aren't really targeted.

XtreemFS: I don't think I got any comments on my evaluation, so I'll assume it still stands.

git-annex: a storage system, not a filesystem. Yes, the files can be made available in the filesystem, but if I need to manually retrieve them before accessing them, this doesn't work.

GlusterFS: I'd like to read more docs on how it works, but I haven't been able to find them. Apart from the installation manuals, the only documentation I found was a very short Concepts page, which described some of the concepts but didn't give me the big picture of how the whole thing works. (I'm turning a blind eye to the Red Hat advertisements and the requirements that everything be installed on RH machines; the hardware requirements are also quite out of line with what I want to do.)

HekaFS (aka CloudFS): I found even fewer docs about this one than about GlusterFS, and apparently HekaFS is on its way to being merged into GlusterFS anyway.

PlasmaFS: I didn't know about this one previously, but the one email I got about it was mainly full of "this won't work" and "this won't work either". I didn't feel inclined to read further docs after that.

In summary, I guess the replies I got didn't cause me to change my mind too much. Tahoe-LAFS still seems to be, if not the best solution, then at least the "least bad". Hopefully the drawbacks will be fixed soonish; the main sticking point (at least from my point of view) still seems to be the lack of focus on proper filesystem integration.
I'll try setting it all up at some point; I may report further if I end up finding interesting things. Thanks to all who responded!

15 January 2012

Roland Mas: Looking for the ultimate distributed filesystem

(This is not quite a "Dear lazyweb" post. If anything, it's "Dear lazyweb, I've done my homework, now what?") I'm looking for the ultimate distributed filesystem. Something that's simple to use, redundant, fault-tolerant, and still smart enough to avoid the more obvious performance chokepoints. Ideally, it should work for all of the following sets of files: The requirements on the ultimate distributed filesystem (which I'll call UDFS for short, otherwise you'll get bored and go look at pictures of kittens) are as follows: Quite a few constraints, eh? I have a feeling they are not mutually incompatible though, so I had a look at several candidates.

I started with Wikipedia, and I followed the links to Tahoe-LAFS, XtreemFS and Ceph. The following is my evaluation of these candidates, based on much reading of docs, websites and wikis, some questioning on IRC, and very little testing. The overall picture seems full of good things scattered across different solutions, but unfortunately none of the existing ones seems to address the whole problem; at least, not my whole problem. It would be good if each focused on one layer and did that layer well, but that seems not to be the case either, so they can't be combined to get the best of all worlds. It may be that I'm missing something, or that I failed to read some docs properly, or that I misunderstood the docs, or that the docs themselves are simply lacking; but my ideal UDFS currently doesn't seem to exist as a turn-key solution.

However, the main pieces are available, and implementing the remaining parts may be doable. My humble idea of a way forward would be based on Tahoe-LAFS, with the following three changes: Also nice to have would be a way to work with multiple introducer nodes, but that seems to be in the works already. This would be pretty damn close to my UDFS; read/write performance would certainly be far from what can be obtained on native filesystems stored on local disks, but my use cases involve reasonably small files for which instant access is not compulsory, and the filesystem cache would probably absorb most of the access times.

In case anyone is looking for ideas of things to do in their spare time, here are rough sketches of other possible UDFS implementations I thought of. These are wild ideas, and I'm not even sure they would be doable in practice: Such is the state of my research so far. I would welcome feedback, pointers to things I neglected to read, corrections for things I misread or misunderstood, comments on the ideas and so on. I'll probably post an update if my search goes significantly forward.

Update: I've already received two pieces of feedback, including a lengthy one with corrections about Tahoe-LAFS. For the sake of fairness, I'll solicit (and wait for) the same from the other candidates I looked at. I was also pointed at HekaFS, GlusterFS and git-annex, which I'll have to look at in more detail. Other suggestions are still welcome, but the more I get, the more the full update will be delayed. Thanks already!

7 November 2011

Roland Mas: Guitar Hero Controller (Wii) to MIDI

This is a follow-up on a previous post: I used a bit of a rainy Sunday to read some docs and write some Python, and I now have a script that converts events from a Guitar Hero World Tour drums controller to MIDI.

A brief explanation of how it works: in its standard configuration, the drums controller has five pads and a pedal, and it uses a Wiimote to communicate the events (what pad was hit with what force, or what button was pressed/released, or the position of the mini-stick) to the Wii itself. This communication happens over Bluetooth and not over a dedicated cable or protocol, so it is possible to remove the Wii and use a standard computer with Bluetooth (built-in or via a dongle) instead, and the computer will receive the messages. There's some decoding to do, but fortunately people have reverse-engineered the frames, and come up with the libcwiid library and its Python binding, which hides the complexity of the decoding and just delivers structured data to the programs that use it. Then we analyze this data and extract the relevant info from it (with some juggling of bits): "the X pad was hit with force Y". From there, it's a simple matter of mapping the pad to a MIDI note and the force to a MIDI velocity, and of sending the appropriate MIDI message to the target, for instance with the libportmidi library.

What you do with the MIDI output is your call: I use it to add more pads to my electronic drum kit, but it could equally be used to drive a software synthesizer, or an external sound module. Or a light-show machine. Or fireworks. Or a model railway. Or whatever. Here's the current version of the wiigh2midi.py script (for updated versions, see the Bazaar repository).
#! /usr/bin/python
import cwiid, time, asyncore, pypm
class Wiimidi:
    wiimote = None
    midiout = None
    lastaction = time.time()
    wii2color = { 27: 'kick',
                  25: 'red',
                  17: 'yellow',
                  15: 'blue',
                  14: 'orange',
                  18: 'green' }
    color2midi = { 'kick': 36, # Bass drum
                   'red': 38, # Snare drum
                   'blue': 48, # High tom
                   'green': 43, # Low tom
                   'yellow': 42, # Closed hi-hat
                   'orange': 52, # Crash cymbal
                   }
    def connect_wiimote(self):
        print "Press 1+2 on the Wiimote..."
        try:
            self.wiimote = cwiid.Wiimote()
        except:
            self.wiimote = None
            return None
        print "Connected."
        self.lastaction = time.time()
        self.wiimote.rumble=1
        time.sleep(.2)
        self.wiimote.rumble=0
        return self.wiimote
    def disconnect_wiimote(self):
        if self.wiimote is not None:
            print "Closing connection to the Wiimote."
            self.wiimote.close()
            self.wiimote = None
        return
    def connect_midi(self):
        for i in range(pypm.CountDevices()):
            interf,name,inp,outp,opened = pypm.GetDeviceInfo(i)
            if outp == 1:
                if name == 'UM-1G MIDI 1':
                    self.midiout = pypm.Output(i, 0)
                    return
        for i in range(pypm.CountDevices()):
            interf,name,inp,outp,opened = pypm.GetDeviceInfo(i)
            if outp == 1:
                print i, name," ",
                print
        dev = int(raw_input("Type output number: "))
        self.midiout = pypm.Output(dev, 0)
    def wiimote_callback(self, messages, timestamp):
        self.lastaction = time.time()
        m = messages[0][1]
        # rx = m['r_stick'][0]
        ry = m['r_stick'][1]
        l = m['l']
        # r = m['r']
        which = ( (l << 1) & 16) + (ry >> 1)
        softness = l & 7
        if softness == 7 or which == 0:
            return
        # print "rx=%d\t ry=%d\t l=%d\t r=%d\t rest=%d" % (rx,ry,l,r,rest)
        # print "which = %d, softness = %d" % (which, softness)
        strength = 7-softness
        color = self.wii2color[which]
        midi_vel = 16*strength
        midi_note = self.color2midi[color]
        self.midiout.WriteShort(0x9a,midi_note,midi_vel)
        self.midiout.WriteShort(0x9a,midi_note,0)
        # print "%s %d -> %d %d" % (color, strength, midi_note, midi_vel)
    def main(self):
        self.connect_midi()
        while True:
            if self.wiimote is None:
                self.connect_wiimote()
                if self.wiimote:
                    # Tell Wiimote to display rock sign
                    self.wiimote.led = cwiid.LED1_ON | cwiid.LED4_ON
                    # We only care for the extension data
                    self.wiimote.rpt_mode = cwiid.RPT_EXT
                    self.wiimote.enable(cwiid.FLAG_MESG_IFC | cwiid.FLAG_REPEAT_BTN)
                    self.wiimote.mesg_callback = self.wiimote_callback
                    self.lastaction = time.time()
                else:
                    print "Retrying... "
                    print
            asyncore.loop(timeout=0, count=1)
            if self.lastaction < time.time() - 3600:
                print "An hour passed since last action..."
                self.disconnect_wiimote()
            time.sleep(0.05)
        print "Exited safely."
# Instantiate and run
inst = Wiimidi()
inst.main()
There are a few obvious improvements to be made: first, handling guitar controllers in addition to drums. Second, handling the buttons could provide extra "pads", to trigger sound effects for instance (thunder, lasers, voices...), or to initiate program changes, or to change the mappings of the pads. Maybe store the mutable parts (default MIDI adapter, pad-to-note mapping) in a configuration file. Maybe add a user interface to ease the configuration. Patches welcome, otherwise maybe I'll do that myself on another rainy Sunday. Notes and references (in other words, the giants on whose shoulders this program stands): More cowbell!

28 October 2011

Roland Mas: FusionForge, October 2011

As usual, a brief roundup of what happened in FusionForge land this month.

29 September 2011

Roland Mas: Non-FusionForge, September 2011

Besides FusionForge stuff, I also kept busy with other things.

Roland Mas: Fusionforge, September 2011

A summary of my FusionForge-ish activities this month:

25 August 2011

Roland Mas: Hack of the day: wondershaper vs. SSH shared connections vs. SCP

We all hate the A in ADSL, but most of us are stuck with it. So there are any number of workarounds that keep our link to the Internet working with a certain amount of perceived fluidity, by way of "traffic shaping". Many of us use wondershaper for that purpose: it's a magic tool that keeps the latency low for interactive traffic by ensuring that the bulk traffic doesn't interfere too much with it.

We all hate the A in ADSL, but most of us are stuck with it. So there are any number of workarounds that make the establishment of new connections to external servers feel faster. Many of us use the SSH "ControlMaster" series of options for that purpose: it's a set of options that allows new connections to a server to reuse an existing one if there is already a connection open (or if there was one until not too long ago), so some time is saved because the handshake to establish the connection is only needed once.

Now combine both. You SSH to an external server and, for instance, work on it. Since it's meant to be an interactive connection, slogin will set some appropriate flags on the IP packets, and wondershaper will give these packets priority over whatever bulk traffic is happening at the time. At some point, you realise you need to transfer a large set of files there, so you start an SCP transfer. scp, being part of the SSH suite, will reuse the existing connection. Suddenly, your bulk transfer will not only get priority over the other kinds of traffic you might have, but also get intermingled with your keystrokes in the slogin session to the same server. Which means your keystrokes will be delayed significantly, and working interactively becomes even more of a pain. In my case, the priority boost given to the SCP transfer is sometimes enough to cause other unrelated sockets to time out.

After reading some docs, I found my solution. It isn't much, but it's going to change my way of working significantly enough that I'm going to share it anyway.
alias scp='scp -oControlPath=none'
Also, since I use dput extensively, and it invokes scp in many cases without reusing my shell aliases, part of my ~/.dput.cf file reads:
[DEFAULT]
ssh_config_options = ControlPath=none
[ ]
This means I keep the advantages of traffic shaping for interactive sessions, but SCP stuff won't interfere anymore, and I can continue working on the server while the new versions of the packages that I need to get there are in transit. One small improvement at a time, progress!
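For context, the connection sharing discussed above is typically enabled with something like the following in ~/.ssh/config (illustrative values; the socket path and the hosts it applies to are whatever suits your setup):
Host *
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
With that in place, interactive logins keep multiplexing over the master connection, while the alias above forces scp onto a separate connection that wondershaper no longer treats as interactive traffic.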

15 January 2011

Thorsten Glaser: FOSDEM 2011 - Let the beards grow!

FOSDEM, the Free and Open Source Software Developers' European Meeting

Who's not? Same procedure as every year.

(okay, lolando prefers skiing but...) Anyway. A cow orker told me that Belgium again/still has no government, and they have been asked to grow out their beards until they do. I found evidence on the net but won't link it here; also it's in German anyway. Let's all join in. (Besides, I now have an excuse to not shave; maybe even my grandmother will accept this one...) RT said on IRC that mksh will probably work on MSYS. My Debian/m68k stuff is coming around nicely, but I still haven't gotten around to doing everything planned, plus I need to grow a new kernel and eglibc after the latest uploads, and the 2.6.37-based one panics. Also I've got to take care to not overwork myself. (And make a MirBSD ISO for FOSDEM.) But hey, it's been not working for some time, and it's better now. And slow anyway, yet we're progressing. Does anyone know how to debug why a C programme that only calls res_init(3) segfaults? Benny is apparently not just working on making NetBSD pkgsrc available on MirOS BSD (picking up my work from 4+ years ago) but also on replacing The MirPorts Framework with it. Sad, as I got a request for a gajim MirPort over a cocktail just this evening...

25 November 2010

Roland Mas: FusionForge news, November 2010

Okay, so it's been five months since the last update. Sorry about that. I guess I could force myself to more regular updates. What have you missed? Only the branching of what will eventually (soon?) become FusionForge 5.1. Quoting from the default home page, and in no particular order: We've also added many tests to our testsuite, and we'd welcome more. If you're interested in the new features coming to FusionForge, now would be a good time to try them and report the bugs you find. And if you don't want to upgrade your production server just yet, you can use a VM image that builds and installs everything automatically inside a VirtualBox (or another virtualization system); see details in my "VM image for easy testing" mailing-list post. That VM covers trunk rather than the 5.1 branch, but since the branching is still rather recent, there isn't much difference yet.

Another tidbit of information: since I recently realized the IPv4 horizon is less than a hundred days away, I've been engaging in a flurry of migrations from IPv4-only to dual-stack, one result of which is that fusionforge.org is now accessible via IPv6. Yay for less NAT and reverse-proxying.

And I think that's it for now. There's one obvious piece of news I'm looking forward to announcing, but it has to actually happen first...
