Search Results: "waja"

13 February 2013

Jan Wagner: Searching painting: Strawberry and Man on Pumps with Sparkling wine

We have been looking for a motif for a painting, or a painting itself, for quite a while now. It should find its place in our living room. Unfortunately we didn't find one that matched both our expectations and was compatible with the rest of the living room. Yesterday we stumbled upon a motif which was quite nice, but it was too small, and it was neither possible to get it in a bigger size nor to find out who the original painter was. Now we are looking for the name of the picture and/or the painter. Any hints are appreciated at 'blog - at - waja - dot - info'. A photo with higher resolution can be found here. Update: Okay ... an unknown person (many thanks) hinted that Google image search is the tool to use. Google revealed that the painter is Inna Panasenko. P.S. Is it noticeable that I'm in vacation mode? ;)

Jan Wagner: Tracking coffee consumption

Are you consuming way too much coffee and have reached junkie level like me? Maybe this can help out and visualize your coffee usage.

2 February 2013

Jan Wagner: Search picture(s)

Today we were packing our holiday decorations back into boxes. We also had a music box to put away. Do you see the defect in the picture? ;) No? Okay ... some years ago my oldest daughter broke it into pieces, and a friend of ours (Hi muempf! ;) glued it all back together. When he showed us the nice music box, it was recovered very well, but one detail differed from the original: one horseman wasn't reassembled like the other ones. If you didn't find the defect, please have a look here. You had fun with this? Maybe this is another one for you; I took this photo on Christmas Eve, when my youngest daughter had placed her new 'Lisa Plastic': Keep smiling!

1 February 2013

Jan Wagner: Creative destruction

Today, shortly before the end of business hours, I was notified that there was a problem with a server system (domU). Logging in as an unprivileged user was possible, but "su" didn't work, and logging in as root via privkey failed as well. Fortunately I was able to connect via the Xen console and log in via tty. Looking into the bash history, the reason was revealed quickly:
4979 2013-02-01 15:03:39 cd /var/www/
4980 2013-02-01 15:03:43 chown www-data:www-data -R /var/www/
4981 2013-02-01 15:04:36 ls -la
4982 2013-02-01 15:04:46 ls -la
4983 2013-02-01 15:04:54 chown www-data:www-data -R /*
4984 2013-02-01 15:07:42 chown www-data:www-data -R /var/www/
4985 2013-02-01 15:36:55 chown www-data:www-data -R /var/www/
This made my day (and maybe parts of the rest of the weekend). For recovery, our 1st level support mounted the domU filesystem on the dom0 under '/tmp/recover' and did:
2131  2013-02-01 21:29:28 cd /tmp/recover
2142  2013-02-01 21:31:17 rm -r lib64/
The experienced reader may see the problem:
# ls -lad lib64
lrwxrwxrwx 1 root root 4 Jun 28  2011 /lib64 -> /lib
So the dom0 was knocked out as well ... what a funny evening (and maybe night). Maybe our staff looked similar to what you see here.
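The trap, in a nutshell: lib64 in the mounted guest filesystem is an absolute symlink, so once the image is mounted on the dom0 it resolves against the dom0's root, not against the mount point. A minimal sketch with harmless hypothetical paths under /tmp:

```shell
# An absolute symlink stored inside a "guest" tree resolves against the
# host's root once that tree is accessible on the host.
mkdir -p /tmp/demo/host-lib /tmp/demo/recover
ln -sfn /tmp/demo/host-lib /tmp/demo/recover/lib64   # absolute target
target=$(readlink -f /tmp/demo/recover/lib64)
echo "$target"
# -> /tmp/demo/host-lib, i.e. outside the 'recover' tree.
# An 'rm -r lib64/' run inside /tmp/demo/recover would therefore descend
# into the host-side directory, just as it did with /lib above.
```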

28 January 2013

Ben Hutchings: Testing network link state

Jan Wagner writes about using ethtool to poll the link state on network devices. While I'm pleased to see another happy ethtool user, it's a bit heavyweight for reading one bit of information and it's not ideal for scripting. Much information about each network device is available under the /sys/class/net/name directory, including the carrier and speed attributes. (When the interface is down, some of these attributes cannot be read.) It's also possible to listen for link changes (rather than polling periodically) using the rtnetlink API, but this is probably not suitable for munin.
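A minimal sketch of the sysfs approach (the interface name is just an example; as noted above, carrier cannot be read while the interface is down, hence the fallback):

```shell
# Read link state from sysfs instead of ethtool. "1" means link up,
# "0" means no link; the read fails when the interface is down.
iface=lo
state=$(cat "/sys/class/net/$iface/carrier" 2>/dev/null || echo unknown)
echo "carrier=$state"
```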

27 January 2013

Jan Wagner: Fixing muninlite for up interfaces without link

Usually I'm monitoring stuff with Icinga (Nagios in the past). But for my small network, I primarily needed bandwidth monitoring. In our commercial environment we are using closed source software for traffic accounting. There is also a license for testing purposes with a reduced number of sensors available, but I'm neither running Windows in this network nor feeling happy with that. Cacti is a bit bloated for this small network, and zabbix was removed in wheezy (caused by what?); besides, I don't get the concept behind it. So I thought I could give munin a try, and at first view it doesn't look so bad. Monitoring my half dozen OpenWrt devices works like a charm by just installing the muninlite package. One central part of the network is a QNAP TS-459 Pro+, hosting a BackupPC and a TimeMachine service, providing SMB/AFS data storage and running SqueezeBox Server for another half dozen streaming devices. Unfortunately there is no optware package providing a munin node, so I just copied the muninlite shell script and the xinetd config over from an OpenWrt device. At first it didn't look bad, but then munin wasn't able to collect the data. After a while I realized that munin was failing when collecting the network information. A look into the muninlite script revealed that it failed when trying to discover the interface speed of eth1 via ethtool. In my setup the QNAP is connected with only one network interface; the second one is unconnected. Unfortunately all network interfaces on QNAP devices are up and therefore listed in /proc/net/dev, where muninlite discovers the network interfaces:
[~] # grep '^ *\(ppp\|eth\|wlan\|ath\|ra\|ipsec\|tap\|br-\)\([^:]\)\{1,\}:' /proc/net/dev | cut -f1 -d: | sed 's/ //g
> s/\-/_/g'
Let's look into it:
[~] # ethtool eth0 | grep Speed:
    Speed: 1000Mb/s
[~] # ethtool eth0 | grep "Link detected:"
    Link detected: yes
[~] # ethtool eth1 | grep Speed:
    Speed: Unknown! (65535)
[~] # ethtool eth1 | grep "Link detected:"
    Link detected: no
Maybe you see it ... the interface eth1 is up but has no link, so no speed is negotiated and muninlite fails. Thus I hacked the script and now it's working like a charm.
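The arithmetic part of the failure can be reproduced in isolation. This is a sketch that mirrors the sed/cut pipeline from the patch, with the ethtool output replaced by a canned sample string:

```shell
# What ethtool prints for a link-less interface (canned sample):
speed_line="	Speed: Unknown! (65535)"
# muninlite's parsing: squeeze whitespace, strip the leading space, drop
# everything from "M" on, and take the second field. Without mapping
# "Unknown!" to 0 first, the $((...)) arithmetic below would abort.
val=$(echo "$speed_line" | sed -e 's/[[:space:]]\{1,\}/ /g' -e 's/^ //' -e 's/M.*//' -e 's/Unknown!/0/' | cut -d' ' -f2)
MAX=$((val * 1000000))
echo "$MAX"   # 0
```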
<notextile><figure class="code"><figcaption> (muninlite_fix-unused-up_interface.diff) download</figcaption>
--- /opt/sbin/munin-node.orig	2013-01-27 15:13:51.869007214 +0100
+++ /opt/sbin/munin-node	2013-01-27 16:11:20.536006950 +0100
@@ -133,7 +133,7 @@
   if [ -n "$(which ethtool)" ]; then
 	if [ -x "$(which ethtool)" ]; then
   		if ethtool $1 | grep -q Speed; then
-    			MAX=$(($(ethtool $1 | grep Speed | sed -e 's/[[:space:]]\{1,\}/ /g' -e 's/^ //' -e 's/M.*//' | cut -d\  -f2) * 1000000))
+    			MAX=$(($(ethtool $1 | grep Speed | sed -e 's/[[:space:]]\{1,\}/ /g' -e 's/^ //' -e 's/M.*//' | sed -e 's/Unknown\!/0/' | cut -d\  -f2) * 1000000))
     			echo "up.max $MAX"
     			echo "down.max $MAX"
@@ -535,19 +535,31 @@
     for INTER in $(grep '^ *\(ppp\|eth\|wlan\|ath\|ra\|ipsec\|tap\|br-\)\([^:]\)\{1,\}:' /proc/net/dev | cut -f1 -d: | sed 's/ //g
-      INTERRES=$(echo $INTER | sed 's/\./VLAN/')
-      RES="$RES if_$INTERRES"
-      eval "fetch_if_${INTERRES}() { fetch_if $INTER $@; };"
-      eval "config_if_${INTERRES}() { config_if $INTER $@; };"
+      if [ -n "$(which ethtool)" ]; then
+        if [ -x "$(which ethtool)" ]; then
+          if [ -n "$(ethtool $INTER | grep 'Link detected: yes')" ]; then
+            INTERRES=$(echo $INTER | sed 's/\./VLAN/')
+            RES="$RES if_$INTERRES"
+            eval "fetch_if_${INTERRES}() { fetch_if $INTER $@; };"
+            eval "config_if_${INTERRES}() { config_if $INTER $@; };"
+          fi
+        fi
+      fi
   elif [ "$PLUG" = "if_err_" ]; then
     for INTER in $(grep '^ *\(ppp\|eth\|wlan\|ath\|ra\|ipsec\|tap\|br-\)\([^:]\)\{1,\}:' /proc/net/dev | cut -f1 -d: | sed 's/ //g
-      INTERRES=$(echo $INTER | sed 's/\./VLAN/')
-      RES="$RES if_err_$INTERRES"
-      eval "fetch_if_err_${INTERRES}() { fetch_if_err $INTER $@; };"
-      eval "config_if_err_${INTERRES}() { config_if_err $INTER $@; };"
+      if [ -n "$(which ethtool)" ]; then
+        if [ -x "$(which ethtool)" ]; then
+          if [ -n "$(ethtool $INTER | grep 'Link detected: yes')" ]; then
+            INTERRES=$(echo $INTER | sed 's/\./VLAN/')
+            RES="$RES if_err_$INTERRES"
+            eval "fetch_if_err_${INTERRES}() { fetch_if_err $INTER $@; };"
+            eval "config_if_err_${INTERRES}() { config_if_err $INTER $@; };"
+          fi
+        fi
+      fi
   elif [ "$PLUG" = "netstat" ]; then
     if netstat -s >/dev/null 2>&1; then
24 January 2013

Jan Wagner: vsftpd running on IPv4 and IPv6 simultaneously

Today I planned to move my FTP server to a new system running Debian wheezy. Usually I run vsftpd, so this was also the plan for the new host. As this system is dual-stacked, I wanted to offer the service over IPv6 too.
On the old system I started vsftpd via xinetd; as far as I remember, vsftpd in squeeze was not able to bind to both IPv4 and IPv6. Anyway ... I didn't want to use any inetd system ... so I looked for another way to solve that. A quick search turned up vsftpd-2.3.4-listen_ipv6.patch, which indicated that just binding to [::] will also accept IPv4 connections. Setting the following in /etc/vsftpd.conf worked out like a charm:
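The actual config snippet did not survive the migration of this post; presumably it was along these lines (a sketch: listen_ipv6 is vsftpd's documented switch for the standalone IPv6 listener, and the two standalone listeners are mutually exclusive):

```
# /etc/vsftpd.conf (sketch): a single [::] listener accepts v4 and v6
listen=NO
listen_ipv6=YES
```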
I just updated #574837 so hopefully more people can benefit in the future.

5 January 2013

Jan Wagner: Brainfucked intellectual property rights

This issue is about a rather specific situation in Germany and its (mainly tax-financed) public service broadcasting. If you think you are not interested, please skip this entry. :) Just beginning from scratch ... In Germany we have (like most European countries) public service broadcasting. Starting with 2013, the system collecting the funding of public service broadcasting changed from the GEZ to the Beitragsservice.
To make it short (and a bit simplified) ... it changed from "most 'families' have to pay" to "every household has to pay". This means that even if you don't own any device to use the services of the German public service broadcasting, as a household you have to pay for it. This situation is even worse than pay-TV. Don't get me wrong ... I'm neither against public service broadcasting nor against paying for it in principle. In the rare cases that I'm watching TV, it is most likely a channel of the public service broadcasting. The same goes for the rest of my small family. Mostly we are watching news and documentaries ... also some movies. The kids are watching KiKA, which we like because it's ad-free and it's not just cartoons with hidden violence. So we most likely benefit from public service broadcasting. Anyway ... in general we are not using "TV" as most others may do. We still like some of the broadcasts, but we miss them for various reasons. Luckily for us there is a service called 'Mediathek', provided by most German public service broadcasting channels. If you missed a broadcast or were prevented from watching for any reason, you may watch the broadcast there. Unfortunately the timeframe in which you can access a broadcast is very short, in most cases 7 days. This is manifested in the so-called Rundfunkstaatsvertrag, more precisely in the 12. Rundfunkänderungsstaatsvertrag. The 'Tagesschau' has a good explanation of the issue. This means, in simple words, that the broadcasts we pay for need to be deleted after a (in my eyes) short time. WTF?!? Why do we wipe work we all paid for? Guess what ... it was implemented through lobbying by commercial print media and private (i.e. ad-financed) TV channels. A comment about a related problem, with which I agree, can be found here. Another challenge ... How do you know there is a broadcast you are interested in? How do you know when it is broadcast?
I remember my parents were subscribed to a daily newspaper which included an additional TV guide; my parents-in-law also bought one.
As we are neither subscribed to a daily newspaper nor buying a TV guide, we have to solve that differently (besides, a printed guide wouldn't help us, I think). When we take the opportunity to watch TV in the evening, I just use an electronic TV guide (app). If we find something that fits, we are fine ... if not, we do other things. This sounds a bit old school, and it is not how I would like to use media today. I like using a feed reader. You can subscribe to different 'channels' of information feeds, and that is the way I would love to use TV. In the past I used a tool called 'Mediathek', a tool for Mac OS X. With this tool it was easy to find broadcasts; you could subscribe to so-called 'channels', meaning for example broadcasts of the same series. You could also mark them as "unread" and "read" ... something like a feed reader.
The other way around, it was also possible to list recent broadcasts per genre and popular broadcasts of the current and the last week. This was very comfortable, as it worked for several Mediathek sites of the German public service broadcasting channels in one central place. For every broadcast you could just view it or download it to watch later. You could even download subscribed channels automatically. After using the Mediathek app and being satisfied with the Mediathek sites of the German public service broadcasting channels for several years, this week the server(s) behind the app were shut down. What the f....?!? On Twitter the author stated that the shutdown was caused by problems with copyright law. Das Erste, ZDF and WDR denied that they had anything to do with this recent issue. The question is ... what is the direct cause for shutting down such a good service? This smells a bit like the PirateBay issue, but in this case the content the Mediathek app provided access to isn't pirated, but legally hosted. So where is the real issue? In my mind, the German citizens have paid to produce this content and should have the right to access it in a usable way. The Mediathek app provided such a way, and it really pisses me off that this access isn't available anymore, at least for this reason. Actually I don't see any solution to access the content we paid for in a decent way. Footnote: I think it's obscene that publishing houses and commercial broadcasters are trying to increase their turnover at the expense of the German citizens by pushing the German legislature to implement depublication.

4 January 2013

Jan Wagner: New blogging engine

You may have noticed that I recently started posting more updates again. The reason is that I switched from Wordpress to Octopress as my blogging engine. The move was driven by the fact that my theme, K2, got stuck, and with the upcoming release of Debian wheezy I'm forced to switch to a more recent Wordpress release, which is likely incompatible.
Another reason is that I got bored by Wordpress itself (and its software dependencies). With Octopress these dependencies are reduced to a webserver that can serve static files, plus rsync on the server side. Maybe I will later post some parts of the story: what I did when migrating the content and which components (plugins, theme ...) I'm using.

27 December 2012

Jan Wagner: Roast Goose

This year it turned out that we had to take care of Christmas dinner ourselves for the first time. So we decided to try a roast goose. Obtaining the goose is a different story and may be told later. The second challenge was to select a recipe. We found a great one at Die Rezeptesammlung der Unix-AG.

26 December 2012

Jan Wagner: A Merry Christmas

[embedded YouTube video]

25 June 2012

Jan Wagner: nagios-plugins 1.4.16 is going to be released

Shortly before the Debian freeze, the release of a new version of nagios-plugins is scheduled for Wednesday. The good news is that a recent version is available in unstable and testing. Upstream only fixed some check_ping issues, which are not included in nagios-plugins 1.4.16~pre1-1. There is also a package available through squeeze-backports. If you are able to, please test the packages as soon as possible. If there are quirks, they can be fixed upstream in the next two days.

19 March 2012

Jan Wagner: Chemnitzer Linuxtage 2012

As announced 3 weeks ago, the Debian project was present at the Chemnitzer Linuxtage. Several talks and workshops were held by people related to the Debian project. At the booth we had talks and discussions with exhibitors and visitors; unfortunately I didn't have much time to visit more than small parts of two lectures. Unfortunately (for the visitors) we didn't have any merchandise on board, although we received several requests. On Sunday, Axel surprised us with some merchandise leftovers from FOSDEM. At the booth we had a demo machine running Babelbox and xpenguins, which attracted visitors very well. We also received more than one "Just thank you" from satisfied users. :) Four different talks and one workshop were held by Debian people, but they were not specific to Debian. The workshop was about OpenStreetMap; the lectures were about commandline helpers, grep everything, quality analysis and team management in open source projects, and Conkeror and other keyboard-driven webbrowsers. Many thanks to Jan Dittberner, Andreas Tille, Christian Hoffmann, Florian Baumann, Christoph Egger, Axel Beckert, Adam Schmalhofer, Markus Schnalke, Sebastian Harl and Patrick Matthäi for running the booth, answering a wide range of questions or just chatting with visitors. A special thanks to TMT GmbH & Co. KG for providing the complete equipment and sponsoring its transportation. Finally, we have to send a big thank you to the organizing team of the Chemnitzer Linuxtage. It was fun and a pleasure to find new friends and meet old ones from the Free Software community. A small sidenote: was anybody aware that the OpenSuSE package search is using

18 March 2012

Jan Wagner: Is using Octopress a good idea?

Octopress seems to be getting more popular these days. While it looks great at first view, I see two problems. Ruby 1.9.2: all documentation and howtos require installing ruby 1.9.2 via rvm. Maybe somebody can tell me why I could not just install the ruby1.9.1 package, which in fact contains a newer Ruby. Comments: Octopress doesn't support comments itself. I found two solutions for this problem: the most used one is to just use Disqus, the other is to live without comments. I dislike the idea of embedding external resources, for privacy reasons and because I don't want to rely on stuff I don't have under my own control. Is there any alternative solution besides Disqus which isn't hosted off-site?

9 March 2012

Jan Wagner: Monitoring dualstacked service with Icinga

Having monitoring for dualstack connectivity in place helps a lot. Unfortunately, in most cases we also run services we want to offer dualstacked. In the past, in those cases, we just monitored IPv4 only or created a separate check for the same service on IPv6. This is a bit messy, and I was looking for something to check services via IPv4 and IPv6 at once. Digging through my most boring search engine didn't help much here. So I talked to some people more involved in the core Icinga/Nagios stuff. It seems the best solution at the moment is to use check_multi. As we had the command definition for check_multi_icinga already in place, I created a check_smtp_dualstack.cmd for monitoring a dualstacked SMTP service:

command [ IPv4 ] = check_smtp -4 -H $HOSTADDRESS$
command [ IPv6 ] = check_smtp -6 -H $HOSTADDRESS6$
state [ UNKNOWN ] = COUNT(UNKNOWN) > 1

A simple SMTP service definition does the trick (don't forget address6 in the host definition):

define service {
    use                 generic-service ; Name of service template to use
    host_name           localhost
    service_description SMTP
    check_command       check_multi_icinga!check_smtp_dualstack.cmd!'-r 1+2+4+8'
}
Okay ... that looks nice, but only at first view. Considering what we actually run as service checks, it seems we need a new cmd file for check_multi for every unique service check, as I didn't find a way to generalize the whole thing yet. Does anybody know a way to pass commands to check_multi via service definitions? Something like:

define service {
    check_command check_multi_icinga!check_general_dualstack.cmd!'check_smtp -p 666'!'-r 1+2+4+8'
}

Jan Wagner: Monitoring dualstacked systems with Icinga

We have been deploying IPv6 in our network for ages now, also for some selected services. Some days ago we discovered that somebody had accidentally enabled a router advertisement daemon in a network where this shouldn't happen. As a result, IPv6-enabled systems got (additional) IPv6 addresses, and some services were using these newly learned addresses as source address when sending out replies. Maybe this doesn't sound so harmful at first view. But some services rely on a correct source address when getting a reply to a request (for example the DNS resolver library). To be aware of the issue that a service may be available via IPv4 but not via IPv6, or vice versa, dualstacked monitoring needs to be in place for dualstacked services. Looking into the issue, I stumbled upon Michael Friedrich's howto about dualstacked monitoring with Icinga. Luckily we have the full software stack in place on squeeze (with squeeze-backports). As icinga and nagios-plugins were installed already, I just needed to fetch check_multi.
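As a side note, such accidentally learned addresses are easy to spot: iproute2 flags SLAAC-assigned addresses as "dynamic". A minimal sketch (the exact output naturally depends on your interfaces):

```shell
# Count globally scoped IPv6 addresses learned via router advertisements
# (SLAAC); iproute2 marks them "dynamic". A rogue radvd shows up here.
dynamic=$(ip -6 addr show scope global 2>/dev/null | grep -c dynamic)
echo "SLAAC-learned global addresses: ${dynamic:-0}"
```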

aptitude install -t squeeze-backports nagios-plugin-check-multi

Now a new check command is needed; I created the following:

define command {
    command_name check_multi_icinga
    command_line /usr/lib/nagios/plugins/check_multi \
        -f /etc/check_multi/$ARG1$ $ARG2$ $ARG3$ $ARG4$ \
        -s objects_cache=/var/cache/icinga/objects.cache \
        -s status_dat=/var/cache/icinga/status.dat
}
For monitoring connectivity I created check_host_alive_dualstack.cmd

command [ IPv4 ] = check_ping -4 -H $HOSTADDRESS$ -w 5000.0,100% -c 5000.0,100% -p 5
command [ IPv6 ] = check_ping -6 -H $HOSTADDRESS6$ -w 5000.0,100% -c 5000.0,100% -p 5
state [ UNKNOWN ] = COUNT(UNKNOWN) > 1

Now we just need to replace the check-host-alive command and add a value for address6 on the host, as follows:

define host {
    use           generic-host ; Name of host template to use
    host_name     localhost
    alias         localhost
    address6      ::1
    check_command check_multi_icinga!check_host_alive_dualstack.cmd!'-r 1+2+4+8'
}
Reloading Icinga should show you something like this:
Now we have general connectivity of our dualstacked systems monitored.

26 February 2012

Jan Wagner: Booth at Chemnitzer Linux-Tage 2012 (CLT14)

This year, too, the Debian Project is running a booth at the Chemnitzer Linux-Tage. Unfortunately, this year we are lacking a bit of manpower compared to previous years. Currently we have six people on our wiki page, without knowing how much time each of them will be present at the booth. It would be really cool if we could prevent a Déjà-vu.
So if you want to visit one of the best community-focused open source events in Germany and can invest some time in helping to run our booth, have a look at the report from last year and the organization wiki. If you feel you want to be part of this enjoyable event, please get in touch with me. As the registration for the booth closes on 28th February, don't wait too long! ;)

4 January 2012

Kartik Mistry: Wikipedia takes Ahmedabad

* This post is mostly a translation of my post on my Gujarati blog (with some corrections and additions). If you're not interested in Wikipedia, move on. So, as I mentioned here and here, Wikipedians in Ahmedabad planned an event: Wikipedia Takes Ahmedabad. We gathered at Gandhi Ashram. Met Noopur again, and some volunteers were already there at 8 AM. I had not been to the Ashram in a long time. This place has something, you know it, folks! People started coming around 8.30, and we quickly divided them, based on their interests, over the 4 routes we had already decided. Most of the people went to the old city's route 1, and our route 2 had an odd 6 people only (plus 2 coordinators). Good. Less is always good. Noopur and Anirudh gave an introduction to Wikipedia, the Creative Commons license, and some basic rules and todos. We then left for the Swaminarayan Temple at Sahibaug. We asked for permission to take photographs, and we got it after meeting the main swamiji there, along with nice sweets; he promised to send professional pictures for Wikipedia to use. We then left for Delhi Darwaja. Ahmedabad has a number of old fort gates. We covered some of them and moved on to Kalupur Railway Station. Palak and Chirag were caught by the traffic police, and we should take care to follow the rules next time. Point noted for the next photowalk. We then left for Zulta Minar. It is well known for its carving and some myths associated with it. We discussed some points regarding uploading pictures and categories, and left for Raipur Darwaja. This place is famous for bhajiyas. We didn't taste them, but again had fun taking pictures, talking to the police, and had a smile before leaving for home. The only bad thing I got out of this walk is that, for some reason or other, my phone's touch screen seems to have stopped working :( That's another topic, but I couldn't call Noopur/Anirudh for lunch etc. due to that :) You can view all uploaded pictures at: Most of Ahmedabad is covered thanks to the event. Time to write articles?

7 October 2011

Jan Wagner: cyrus/lmtpunix: db4: Logging region out of memory / kmail2 sucks

Today I was wondering why I had almost no new mail in my inbox in the morning. After a while I decided to have a look into the server logfiles ... and I learned that postfix wasn't able to deliver mail via lmtp because of:

Oct 7 07:45:56 post cyrus/lmtpunix[307]: DBERROR db4: Logging region out of memory; you may need to increase its size
Oct 7 07:45:56 post cyrus/lmtpunix[307]: DBERROR: opening /var/lib/cyrus/deliver.db: Cannot allocate memory
Oct 7 07:45:56 post cyrus/lmtpunix[307]: DBERROR: opening /var/lib/cyrus/deliver.db: cyrusdb error
Oct 7 07:45:56 post cyrus/lmtpunix[307]: FATAL: lmtpd: unable to init duplicate delivery database
Oct 7 07:45:56 post cyrus/master[754]: service lmtpunix pid 307 in READY state: terminated abnormally

Seems like this can be fixed with:

/etc/init.d/cyrus2.2 stop
cat > /var/lib/cyrus/db/DB_CONFIG <<EOF
set_cachesize 0 2097152 1
set_lg_regionmax 1048576
EOF
/etc/init.d/cyrus2.2 start

Looking more closely into the logs, it turned out that this trouble started last night when I connected with a client running the soon to be released Ubuntu Oneiric Ocelot with the new kmail2. So it looks like the KDE/Ubuntu folks broke kmail (or some KDE subsystem) again, as it also has trouble when migrating from kmail(1), and it looks like it's not able to access most of the IMAP subfolders. Well done!

7 August 2011

Jan Wagner: (new and old) culture

After visiting the great Deadmau5 concert of his Europe Tour in Berlin 7 weeks ago, I'm involved in a traditional cultural event on the technical side next weekend.

We are running the platform for the online streaming of Lohengrin live from the Bayreuth Festival Theatre on Sunday, 14th August 2011. Usually we come together around noon and have a BBQ while keeping all the stuff up and running.

To get back into this millennium, we (yes, my girl and me) will be at the Highfield festival the weekend after. I guess you can find me at the white stage or at the camp site. Keep on rocking!