Search Results: "apenwarr"

22 December 2013

Francois Marier: Creating a Linode-based VPN setup using OpenVPN on Debian or Ubuntu

Using a Virtual Private Network is a good way to work around geoIP restrictions, and also to protect your network traffic when travelling with your laptop and connecting to untrusted networks. While you might want to use Tor for the part of your network activity where you prefer to be anonymous, a VPN is a faster way to connect to sites that already know you. Here are my instructions for setting up OpenVPN on Debian / Ubuntu machines, where the VPN server is located on a cheap Linode virtual private server. They are largely based on the instructions found on the Debian wiki. An easier way to set up an ad-hoc VPN is to use sshuttle, but for some reason it doesn't seem to work on Linode or Rackspace virtual servers.

Generating the keys

Make sure you run the following on a machine with good entropy and not a VM! I personally use a machine fitted with an Entropy Key. The first step is to install the required package:
sudo apt-get install openvpn
Then, copy the easy-rsa example files into your home directory (no need to run any of this as root):
mkdir easy-rsa
cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ easy-rsa/
cd easy-rsa/2.0
and put something like this in your ~/easy-rsa/2.0/vars:
export KEY_SIZE=2048
export KEY_COUNTRY="NZ"
export KEY_PROVINCE="AKL"
export KEY_CITY="Auckland"
export KEY_ORG="fmarier.org"
export KEY_EMAIL="francois@fmarier.org"
export KEY_CN=hafnarfjordur.fmarier.org
export KEY_NAME=hafnarfjordur.fmarier.org
export KEY_OU=VPN
Create this symbolic link:
ln -s openssl-1.0.0.cnf openssl.cnf
and generate the keys:
. ./vars
./clean-all
./build-ca
./build-key-server server  # press ENTER at every prompt, no password
./build-key akranes  # "akranes" as Name, no password
./build-dh
/usr/sbin/openvpn --genkey --secret keys/ta.key

Configuring the server

On my server, a Linode VPS called hafnarfjordur.fmarier.org, I installed the openvpn package:
apt-get install openvpn
and then copied the following files from my high-entropy machine:
cp ca.crt dh2048.pem server.key server.crt ta.key /etc/openvpn/
chown root:root /etc/openvpn/*
chmod 600 /etc/openvpn/ta.key /etc/openvpn/server.key
Then I took the official configuration template:
cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
gunzip /etc/openvpn/server.conf.gz
and set the following in /etc/openvpn/server.conf:
dh dh2048.pem
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 74.207.241.5"
push "dhcp-option DNS 74.207.242.5"
tls-auth ta.key 0
cipher AES-128-CBC
user nobody
group nogroup
(These DNS servers are the ones I found in /etc/resolv.conf on my Linode VPS.) Finally, I added the following to these configuration files:
  • /etc/sysctl.conf:
    net.ipv4.ip_forward=1
    
  • /etc/rc.local (just before exit 0):
    iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
    
  • /etc/default/openvpn:
    AUTOSTART="all"
    
and ran sysctl -p before starting OpenVPN:
/etc/init.d/openvpn start
If the server has a firewall, you'll need to open up this port:
iptables -A INPUT -p udp --dport 1194 -j ACCEPT

Configuring the client

The final piece of this solution is to set up my laptop, akranes, to connect to hafnarfjordur, by installing the relevant Network Manager plugin:
apt-get install network-manager-openvpn-gnome
The laptop needs these files from the high-entropy machine:
cp ca.crt akranes.crt akranes.key ta.key /etc/openvpn/
chown root:francois /etc/openvpn/akranes.key /etc/openvpn/ta.key
chmod 640 /etc/openvpn/ta.key /etc/openvpn/akranes.key
and my own user needs to have read access to the secret keys. To create a new VPN, right-click on Network-Manager and add a new VPN connection of type "OpenVPN":
  • Gateway: hafnarfjordur.fmarier.org
  • Type: Certificates (TLS)
  • User Certificate: /etc/openvpn/akranes.crt
  • CA Certificate: /etc/openvpn/ca.crt
  • Private Key: /etc/openvpn/akranes.key
  • Available to all users: NO
then click the "Avanced" button and set the following:
  • General
    • Use LZO data compression: YES
  • Security
    • Cipher: AES-128-CBC
    • HMAC Authentication: Default
  • TLS Authentication
    • Subject Match: server
    • Verify peer (server) certificate usage signature: YES
    • Remote peer certificate TLS type: Server
    • Use additional TLS authentication: YES
    • Key File: /etc/openvpn/ta.key
    • Key Direction: 1
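For reference, here is a sketch of an equivalent standalone client configuration (say, /etc/openvpn/client.conf; the file name is my own choice, while the paths and hostname are the ones used above):
client
dev tun
proto udp
remote hafnarfjordur.fmarier.org 1194
ca /etc/openvpn/ca.crt
cert /etc/openvpn/akranes.crt
key /etc/openvpn/akranes.key
# "Use additional TLS authentication" with key direction 1
tls-auth /etc/openvpn/ta.key 1
# "Remote peer certificate TLS type: Server"
remote-cert-tls server
cipher AES-128-CBC
# "Use LZO data compression"
comp-lzo
nobind
persist-key
persist-tun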

Debugging

If you run into problems, simply take a look at the logs while attempting to connect to the server:
tail -f /var/log/syslog
on both the server and the client. In my experience, searching for the error messages you find in there is usually enough to solve the problem.

Next steps

The next thing I'm going to add to this VPN setup is a local unbound DNS resolver that will be offered to all clients. Is there anything else in your setup that I should consider adding to mine?
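For what it's worth, a minimal sketch of that unbound idea (the 10.8.0.1 address assumes OpenVPN's default 10.8.0.0/24 server subnet, and the file path is hypothetical):
# /etc/unbound/unbound.conf.d/vpn.conf -- answer queries from VPN clients only
server:
    interface: 10.8.0.1
    access-control: 10.8.0.0/24 allow
Clients would then be pointed at it from /etc/openvpn/server.conf instead of the Linode resolvers:
push "dhcp-option DNS 10.8.0.1"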

16 April 2013

Aigars Mahinovs: China 6 - the Internet

China is famous not only for the stone wall in the north of the country, but also for the Great Firewall of China around the country's entire Internet, which blocks everything in sight and slows down everything else. My first personal encounter with this service came at the Shanghai airport, where it quickly turned out that not only Facebook but also Twitter is blocked in the country, which considerably complicated my ability to quickly and easily let everyone know that I was still alive and well. After a few experiments it turned out that, although the Google+ home page is not reachable from the phone and the Google+ (and WhatsApp) apps cannot be downloaded on Android, if they are already on the phone, both services keep working. So I started writing my travel notes on Google+, and a few days into the trip I even managed to configure the If This Then That service to take my Google+ posts and turn them into Twitter posts (which then spread on through other channels to Facebook and Draugiem, and also show up as a weekly summary on this blog). Google+ has its pluses, but also its minuses. The main minus I noticed on this trip is that in the Google+ Android application it is impossible to prepare several draft posts (preferably each with its own geolocation): without Internet access you can write only one post, and that post's GPS coordinates will be those of wherever you are when the Internet reappears. I have already written to Google about this problem. The main plus of Google plus (no pun intended) is Instant Upload: if you take photos with an Android phone, they are automatically uploaded and appear in the new-post interface, where they can be added to a post with one click and without any waiting. Unfortunately this does not work with normal cameras. For now ;)

But I wouldn't be a real computer geek if I didn't try to crack or work around this little China problem, would I? ;) The simplest way around China's Great Firewall is to use any VPN that allows you not only to reach the VPN network's resources but also to route all of your traffic through the VPN connection. Such VPN connections can be bought, or (if you have a Linux server or router outside China) set up yourself. In my case it was OpenVPN, enabled with one click on a Fonera router that sits in my home. Unfortunately, China is peculiar. The list of blocked pages, ports and protocols varies between districts, depends on whether the Internet is mobile or wifi or a wired connection, and also simply changes from day to day. In a large share of cases VPN connections end up on the blocked list too, quite often even private ones. I somehow doubt that my home IP address is on the Chinese firewall's lists, yet on some days I could not connect to that VPN either. And in such situations, if you want to watch some YouTube video, only one brilliant solution remains: sshuttle! This brilliant tool creates something like a VPN connection over the ordinary SSH port and protocol. On the local machine you need Python and root rights; on the server you only need permission to run Python programs. sshuttle sends itself to the server and starts itself there, then encrypts and forwards all connections, and DNS requests too if you ask it to. You can forward specific networks or all of your traffic. And in my experience the speed was even better than with a regular VPN.

Overall, the Internet blockade and the general slowness of the Internet are one very serious minus for China. Running a bit ahead in the story, I will say that Hong Kong has no such problems: the Internet there is great! So there's your teaser :)
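As a rough illustration of the kind of sshuttle invocation described above (the server name is a placeholder of mine, not from the post):
# forward all TCP traffic plus DNS queries through an SSH connection;
# needs root (sudo) locally, but only a Python interpreter on the server
sudo sshuttle --dns -r user@server-outside-china 0.0.0.0/0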

3 October 2012

Kumar Appaiah: A backup workflow with bup

It's important to back things up. You should also make your life easier by using some good software. I use obnam for some backups, but I wanted to try out bup for a new backup workflow I needed. I will describe the workflow here. The situation: I have a laptop with things to be backed up. I also have a Raspberry Pi which I use as network-attached storage (it has a 1 TB external drive attached) and which runs Raspbian Wheezy. Naturally, you can do this on any NAS running some GNU/Linux variant. In addition, I also have an external portable USB hard disk where another backup copy will be stored. How do I automate this? Here is how I went about the process. First, for the Raspberry Pi:
  1. Identify the directory or directories to be backed up. To keep this simple, the directory I am backing up is /home/kumar/Work.
  2. On the Raspberry Pi, install bup. If you want to build it from the git clone, you would need python-dev, and pandoc for the documentation.
    sudo aptitude install pandoc python-dev
    git clone git://github.com/apenwarr/bup.git
    cd bup; make; sudo make install
    
  3. Initialize the backup directory on the Raspberry Pi:
    BUP_DIR=/home/kumar/BACKUPS bup init
    BACKUPS is the place where bup is asked to store the backups (much like its analogue, .git). You can store all your backups within this directory, and you can still classify backup sets within it with different names (much like git branches).
  4. On the local machine, initialize the backup directory.
    bup index -uv /home/kumar/Work
  5. Back it up. This will take a while the first time if the Work directory has a lot of content in it.
    bup save -r kumar@<raspberry_pi_ip_address>:BACKUPS -n workbackup /home/kumar/Work
    
    workbackup is the name of the backup set. This can be paperbackup or databackup or notesbackup so that several classes can be backed up within the BACKUPS directory.
  6. Now, you have backed things up. Make changes in the Work directory. For instance, add/edit/remove/move some files.
  7. Rerun the index and save commands from steps 4 and 5 above. Enjoy the speedup and ability to rewind to the previous state.
  8. Test restoration of ONE directory of the backup (just to save time) directly on the Pi:
    bup -d /home/kumar/BACKUPS/ restore -C /tmp/ /workbackup/latest/home/kumar/Work/offset-coupling-params
    
    Then scp it back if you need it.
  9. Alternative: Use fuse. On the Pi:
    sudo aptitude install python-fuse
    sudo adduser kumar fuse
    mkdir Mount
    bup -d /home/kumar/BACKUPS fuse Mount
    Then cd into Mount and cp/scp the correct files wherever.
    fusermount -u Mount
    
  10. Once happy, write the bup index/bup save commands into a script and run it periodically to back up (check this at the end of the post; a quick verification command is also sketched right after this list). Have fun.
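As a quick sanity check before scripting this, you can list what the repository now contains, using the same paths as in the steps above:
bup -d /home/kumar/BACKUPS ls workbackup/latest/home/kumar/Work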
Next, to prepare the external hard disk, I went with the old-style autofs automounting solution:
  1. To automount the disk, in /etc/auto.master, add
    /auto			/etc/auto.misc	--timeout=60
    
  2. In /etc/auto.misc add:
    backupdrive	-fstype=btrfs	:UUID="7927a7ab-8c3f-4198-bd73-d3cc519a9ac3"
    
    fstype may need changing for your file system, and UUID can be found using blkid on the exact partition of the hard disk you wish to back up to.
  3. Initialize the backup directory:
    sudo mkdir /auto/backupdrive/BACKUPS
    sudo chown kumar.kumar /auto/backupdrive/BACKUPS
    bup -d /auto/backupdrive/BACKUPS/ init
    
  4. Index and back up:
    bup -d /auto/backupdrive/BACKUPS/ index -uv /home/kumar/Work
    bup -d /auto/backupdrive/BACKUPS/ save -n workbackup /home/kumar/Work
    
  5. Make changes in the Work directory. Add/edit/remove/move files.
  6. Rerun the index and save commands from step 4. Enjoy the speed.
  7. Test restoration of ONE directory of the backup from the removable disk:
    bup -d /auto/backupdrive/BACKUPS/ restore -C /tmp/ /workbackup/latest/home/kumar/Work/offset-coupling-params
    
  8. Once happy, write the bup index/bup save commands into a script and run it periodically to back up. Have fun.
Finally, a quick script to automate all of the above.
#!/bin/sh
bup index -u /home/kumar/Work
bup save -r kumar@<raspberry_pi_ip_address>:BACKUPS -n workbackup /home/kumar/Work
# listing the directory triggers the autofs automount if the disk is attached
if ls /auto/backupdrive/ > /dev/null 2>&1; then
    echo "Backing up to the removable disk..."
    bup -d /auto/backupdrive/BACKUPS/ index -u /home/kumar/Work
    bup -d /auto/backupdrive/BACKUPS/ save -n workbackup /home/kumar/Work
    echo "Backed up to the removable disk! Yay!"
else
    echo "WARNING: Not backing up to the removable disk"
fi
What this script does is back up to the NAS and then check whether the external disk is connected; if the disk is mounted, it backs up to that disk as well. It might be a good idea to run this script from cron or anacron, or have someone remind you to run it periodically. The only limitation I have is that you can't use bup restore to restore from a remote repository yet, but that isn't a huge issue for me right now. In addition, metadata is not stored by the bup save/restore commands; you would have to use bup split and bup join with tar to get that. However, bup is fast-moving, so I'd expect these features soon. Till then, this workflow works all right for data-only backups. Although this is just a mental dump of the procedure I made for myself, suggestions for improvement are welcome. Thanks.
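For instance, a hypothetical crontab entry (the script path is a placeholder of mine):
# run the backup script nightly at 02:00
0 2 * * * /home/kumar/bin/workbackup.sh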

22 March 2012

Axel Beckert: Tools for CLI Road Warriors: Tunnels

Sometimes the network you're connected to is either untrusted (e.g. wireless) or castrated in some way. In both cases you want a tunnel to your trusted home base. In the following I'll show you three completely different tunneling tools which may be helpful while travelling.

sshuttle

sshuttle is a tool somewhere in between automatic port forwarding and a VPN. It tunnels arbitrary TCP connections and DNS through an SSH tunnel without requiring root access on the remote end of the SSH connection. So it's perfect for redirecting most of your traffic through an SSH tunnel to your favourite SSH server, e.g. to ensure your local privacy when you are online via a public, unencrypted WLAN (i.e. easy to sniff for everyone). It runs on Linux and Mac OS X and only needs a Python interpreter on the remote side. It does require root access (usually via sudo) on the client side, though. It's currently available at least in Debian Unstable and Testing (Wheezy) as well as in Ubuntu since 11.04 Natty.

Miredo

Miredo is a free and open-source implementation of Microsoft's NAT-traversing Teredo IPv6 tunneling protocol for at least Linux, FreeBSD, NetBSD and Mac OS X. Miredo includes not only a Teredo client but also a Teredo server implementation. The developer of Miredo also runs a public Miredo server, so you don't even need to install a server somewhere. If you run Debian or Ubuntu you just need to run apt-get install miredo as root and you have IPv6 connectivity. It's that easy. So it's perfect for getting a dynamic IPv6 tunnel for your laptop or mobile phone no matter where you are, without the need to register an IPv6 tunnel or configure the Miredo client. I usually use Miredo on my netbooks to be able to access my boxes at home (which are behind an IPv4 NAT router that is also a SixXS IPv6 tunnel endpoint) from wherever I am.

iodine

iodine is probably the most subversive tool in this set. It tunnels IPv4 over DNS, allowing you to make arbitrary network connections if you are on a network where nothing but DNS requests is allowed (i.e. only DNS packets reach the internet). This is often the case on wireless LANs with a landing page: they redirect all web traffic to the landing page, but the network's routers try to avoid poisoning the clients' DNS caches with replies that would differ from the ones users get after logging in. So DNS packets usually pass through even the local network's DNS servers unchanged; only TCP and other UDP packets are redirected until you log in. With an iodine tunnel, it is possible to get a network connection to the outside on such a network anyway. On startup iodine tries to automatically find the best parameters (MTU, request type, etc.) for the current environment, although that may fail if any DNS server in between imposes DNS request rate limits. To be able to start such a tunnel you need to set up an iodine daemon somewhere on the internet, on a host which is not already a DNS server. iodine is available in many distributions, e.g. in Debian and in Ubuntu.
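For illustration, a minimal iodine setup could look like the following sketch (the t.example.com domain and the 10.0.0.1 tunnel address are placeholders of mine, not from the post); it assumes you have delegated t.example.com to your server with an NS record:
# on the server that t.example.com's NS record points at;
# 10.0.0.1 is the address used inside the tunnel
sudo iodined -f 10.0.0.1 t.example.com
# on the client behind the captive portal
sudo iodine -f t.example.com
Once the tunnel is up, the client can route traffic over the dns0 interface that iodine creates.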

29 December 2011

Jon Dowland: backup

Backups are something I don't do well. I hope to write a more thorough article about them at some point. For now, here are some tips. Like most home users, I don't have access to a tape drive or jukebox for backups. I'm therefore looking for a solution that backs up to a local block device. I haven't ruled out a solution that backs up to DVD-Rs, but I haven't found a suitable one. I first tried rsnapshot, which uses hard-link trees to represent increments. I found that this resulted in enormous filesystem metadata, sufficient to cause the machine I was running it on (an embedded ARM system) to slow to a crawl. I'd recommend avoiding anything using this technique. More recently I have been using rdiff-backup. This has appeared to work quite well for a number of years. However, it does not gracefully handle the backup volume being full, spewing Python backtraces. This is compounded by some confusing use of temporary directories and a failure to honour $TMPDIR. This has been known about for years (1, 2). I've just independently discovered these problems for the second time. Aside from that, because it does not de-duplicate files, it is sensitive to large files being moved around in the source being backed up. It also cannot do a dry-run estimate of the disk space required for the next increment. So: time to move on. Back in 2010 I discovered bup, an intriguing backup tool that uses a git-style backing store. It's interesting enough that I packaged it and use it in a few situations, but I wouldn't rely on it for my main backups, for two reasons: one, it's not tried-and-tested enough yet (but that is rapidly being resolved); two, you can't get rid of old increments, so your backup volume will always increase in size over time and never throw away old data. So, for now: move on. Next on my list to try is Lars Wirzenius' obnam.

17 April 2011

Joey Hess: new git-annex use cases

After two weeks of work, I've just released git-annex version 0.20110417, with some big new features that open up some interesting use cases:
joey@gnu:~/tmp/repo> git annex initremote cloud type=S3 encryption=joey@kitenet.net
initremote cloud (checking bucket) (creating bucket in US) ok
joey@gnu:~/tmp/repo> git annex add bigfile
add bigfile ok
(Recording state in git...)
joey@gnu:~/tmp/repo> git annex move bigfile --to cloud
move bigfile (gpg) (checking cloud...) (to cloud...) ok
(Recording state in git...)
joey@gnu:~/tmp/repo> file bigfile
bigfile: broken symbolic link
joey@gnu:~/tmp/repo> git annex get bigfile
get bigfile (copying from cloud...) (gpg) ok
(Recording state in git...)
joey@gnu:~/tmp/repo> file bigfile
bigfile: symbolic link

* This feature isn't available in Debian yet, blocked by the lack of a Debian package for the Haskell hS3 library. Someone should fix that, ideally not me. Getting missingh updated for the ghc7 transition, so that git-annex is buildable in unstable, would also be nice...

21 November 2010

Axel Beckert: Useful but Unknown Unix Tools: netselect

Ever wondered which mirror of your favourite Linux distribution is the fastest at your location? Check it with netselect (code at GitHub). It checks for the number of hops and ping times to given hosts and tells you which one is the fastest of them:
# netselect -vv ftp.de.debian.org ftp2.de.debian.org \
                ftp.ch.debian.org ftp.nl.debian.org ftp.debian.org
Running netselect to choose 1 out of 5 addresses.
.......................................................
ftp.de.debian.org                       25 ms  16 hops   90% ok ( 9/10) [   72]
ftp2.de.debian.org                      17 ms  17 hops   90% ok ( 9/10) [   51]
ftp.ch.debian.org                        0 ms   3 hops   90% ok ( 9/10) [    0]
ftp.nl.debian.org                       22 ms  15 hops   90% ok ( 9/10) [   62]
ftp.debian.org                          22 ms  15 hops   90% ok ( 9/10) [   60]
    0 ftp.ch.debian.org
And if you're too lazy to optimize your sources.list with netselect manually, just use the netselect-apt package. It will do it for you.
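For example (run as root; the distribution name is whichever suite you track; netselect-apt writes the resulting sources.list into the current directory):
netselect-apt stable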

2 June 2010

Jon Dowland: bup

Packages of bup, a git-based de-duplicating backup tool, are now available in unstable.

7 March 2008

Joey Hess: the new portability nightmare

Often if you see a block diagram like this, what comes to mind is a compatibility layer in between a program and several operating systems. Generally something that's general-purpose like java, or a web browser, or a widget toolkit.
-------------
             
   program   
             
-------------
 compat layer
-------------
 OS   OS   OS
-------------
(Generally it's drawn up all purty, but I'm lame.) But lately I've seen and written a lot of code where the diagram is more complex:
-------------
             
   program   
             
             
-------------
  V  C  S    
-------------
     OS      
-------------
Sometimes the program code is littered with multiple switch statements, as in debcheckout, debcommit, and etckeeper.
case "$vcs" in
git)
svn)
hg)
esac
Sometimes it pushes the VCS-specific code into modules.
use IkiWiki::Rcs::$rcs;
rcs_commit();
But if it does, these modules are specific to that one program. This isn't a general-purpose library. dpkg source v3 doesn't need to use the VCS in the same way as ikiwiki, and even ikiwiki's rcs_commit is very specific to ikiwiki, in its error handling, conflict resolution, locking, etc. pristine-tar injects and extracts data directly from git, using low-level git plumbing, in a way that probably can't be done at all with other VCSes. But even as I was adding that low-level, very git-specific code into pristine-tar, I found myself writing it with future portability in mind.
if ($vcs eq 'git') {
    # git write-tree, git update-index, and other fun
}
else {
    die "unsupported vcs $vcs";
}
When Avery Pennarun talks about git being the next unix, he's talking about programs like these, that build on top of a VCS. But if git is the next unix, then so is mercurial, so is darcs, so is bzr, so too even svn (unless it's Windows?). In other words, we're back to the days when every program had to be ported to a bunch of incompatible and not-quite-compatible operating systems. Back to the unix wars. In Elijah's discussion of the "limbo" VCS state he gives several great examples of how multiple VCSes that each seem on the surface to offer similar commands like "$vcs add" and "$vcs commit" can behave very differently.
echo 1 > foo
$vcs add foo
echo 2 > foo
$vcs commit
What was committed, "1" or "2"? Depends on which $vcs you use: git commits the staged "1", while svn commits the working copy's "2". Compare with unix, where open(2) always opens a file, perhaps with different options, or different handling of large files, but portably enough that you generally don't need to worry about it. Even if you're porting to Windows, you can probably get away with a compatibility layer to call whatever baroque monstrosity Windows uses to open a file, and emulate open(2) close enough to not have to worry about it most of the time. A thin compatibility layer that calls "$vcs add" isn't very useful for a program that builds on top of multiple VCSes. mr is essentially such a thin compatibility layer; it manages to be useful only by being an interface for humans, who can deal with different limbo implementations and other quirks. The VCSes are to some degree converging, but so far it's mostly a surface convergence, with commands that only look the same. Where will things go from here?

13 August 2007

MJ Ray: Read-Write Web Links

Here's what I wrote on other people's sites last week:
Drake.org.uk: Welcome back..
MartynD reboots his blog yet again. This time with added eyetests.
Two Questions for All Serious Free Software Contributors
I answered, as did many others.
ProBlogger Redesign - Bedding Down for the Night
Much easier-to-read, but I still had an enhancement suggestion.
Blowing bubbles niq's soapbox
Comment on house-building and house-owning in England today.
gravityboy: This Is My Good Free Software Experience
I think the GPL is useful for things other than programs and CC is confusing about DRM for everything.
Antti-Juhani Kaijanaho I am going to rank MJ Ray low from now on
Commented more on my blog, but also directly. The criticism might be fairer if the SPI ballot were secret, but it isn't.
WEBlog -- Wouter's Eclectic Blog - Voting tactics
Is this about SPI? How is IRV broken?
Constitutional amendment: reduce the length of DPL election process
Amending the amendment proposal - got enough seconds to reach the ballot, for a change.
Amendment to: reduce the length of DPL election process
This one doesn't look like it will get onto the ballot. Informative subthread on combination of amendments.
Here's some of what I read last week:
talks with Linus Torvalds
He's much ruder than I am, but can get away with it because he's done more.
The Most Excellent and Lamentable Tragedy of Richard Stallman - Edward O'Connor
He's much ruder than I am, but doesn't get away with it because he's done more.
Mayor wants London to copy Paris bike rentals News This is London
I saw those bikes in Paris. Seemed like a good idea.
Parisian-style hire bicycles to beat London traffic jams - Independent Online Edition > Transport
Slightly inaccurate report.
Hacking gadflies: Open Document Format
It's almost September again.
dgh: Bike power
Interesting idea.
apenwarr's log
quite a funny letter to google
Bristol Climate Protesters En-Route to Camp
My legs hurt after cycling only 6 miles yesterday. Not sure what I did wrong.
The Weston Mercury - Call to 'have a say' on the future of Britain
Pensioners to save country. Or not.
BBC NEWS Technology UN's website breached by hackers
Can't be bothered to check netcraft.
Little's Log: North Norfolk Councillor defects to UKIP
Little's still raging against the media.

29 April 2007

Evan Prodromou: 9 Floréal CCXV

Ah, beautiful Sunday! This is the first day this week we've had a little peace around the house. It'll all go crazy on Monday, of course, but it's nice to have something like a day off.

BarCampMontreal2

I had a great time yesterday at BarCampMontreal2. It was pretty fortunate that my parents are visiting this week, because it meant that both Maj and I could go to the event together. We also had our friend and fellow Wikitraveller Jani Patokallio visiting from Singapore, so we brought him along with us. Jani's a community and technology hacker and he's a lot of fun, too. This was our first event in the new BarCamp venue at the SAT, and it turned out to be a good one. The space is really big, with about 3 huge spaces. We used the back room, which is wide enough for a lot of video displays, to good advantage. The front area became an informal café and it was really great for personal talks -- isolated from the main presentation area. Damian DiFede gave a great talk about his Ajax-based MUD, Mujax. I think it looks really cool -- I hope he can get the kind of participation and content to make it a more immersive space. I gave the second talk. I'd really wanted to do a talk about The Keiki Project, our upcoming venture, but it was too soon for a real hands-on demo. So I'd planned to do a presentation about RDF and our use of Wikitravel:Turtle RDF on Wikitravel, but I kind of got too busy by the end of the week. So I ended up recycling my talk about Commercialization of Wikis (see Talks/SXSW07), which got a lot of positive feedback. I liked seeing Sylvain Carle's presentation on what he'd learned in Silicon Valley. Sylvain is one of the best presenters at BarCampMontreal -- he's a big goofball who loves his work and communicates that passion with intelligence and sincerity. For example, he themed his talk with songs by Quebec rocker Robert Charlebois. Candid and ingenious. Also great was Hugh McGuire's talk about lessons learned in development of the cool podcasting platform Collectik. His talk came down to one great point: that you have to be able to communicate the purpose of your Web site (or other project) in one sentence. He didn't do that for Collectik, and now he's trying to figure out how to. It's great hearing people talk about their mistakes. Maj gave a great talk about Wikitravel Extra, the new personal-opinion and -experience platform that we've built to complement the objective, consensus information in Wikitravel. I keep forgetting what a good public speaker she is. She did a good job talking about the needs we were trying to fill, and she demo'd the platform (flawlessly). I think she's got a good presentation to take to other conferences now (lucky dog). Another talk I found fascinating was Chris Car's discussion of his past project, MeshCube. The MeshCube is a tiny mesh networking tool that Chris and a partner developed in Hamburg. Chris talked about what worked and what didn't work with that project, showed off some cool hacks (mesh network robots!) and was (like Hugh) open about mistakes. Sitting in the middle of about 8 WiFi networks (in my home office) as I type this, I'm really interested in mesh networking and I was glad to hear from Chris why it doesn't currently work and what can be done to make it work. Probably the most important talk, for me, was Martine Pagé's discussion of what we need to do to bring more women to technology conferences. She did a great, fair overview of the topic, and then opened the floor to discussion. We had a really great, frank talk about it -- both men and women -- and a lot of diverse points of view were expressed.
I'm really appreciative of Martine opening up this discussion, and I'm glad we're going to be more proactive in the Montreal technical community about being inclusive of the really great women working in technology in this city. There were some other good talks, of course: apenwarr gave a good talk about the new, new way to have a startup; the mysterious and elusive Madame Woo talked about traveling alone; and Australian visitor Moomlyn gave a great description of lucid dreaming. There were good demos of Cake Mail and My Carpool Station. All in all a great set of presentations. I also liked meeting up with some of the people there. Patrick Tanguay gave a great lunchtime standup symposium about the state of Montreal's coworking space and where we're going from here. I also liked talking with Tamu Townsend, who I hadn't met before. Probably my best talk was with Sylvain and Martine about what the boundaries of BarCamp are. Why do we have talks about solo travel, digital photography, and lucid dreaming? Sylvain had a great answer: BarCamp comes out of the hacker ethic: that spirit of curiosity, humor, passion and playfulness that makes working on computers so enjoyable for so many. If someone can communicate that same passion and intellectual captivation with another subject, it's going to go over well at BarCamp, no problem. Well said!

Pros and cons

Here are some things I liked and didn't like about BarCampMontreal2. First, things I liked:
  • The people. Yet again, meeting up with my favourite technologists in Montreal. Many people I first met at BarCampMontreal1, and it was nice to see how our friendships have developed in the 6 months since then.
  • Diversity. I was really glad that there were many more women at this event than at previous ones. My guess is somewhere around 20-25%, which isn't anywhere near where it should be but a big improvement over the 5-10% we had at BarCampMontreal1. Special props to Hugh, whose invite to women was, I think, pretty helpful. And of course to all the women who took the extra step to show up at a majority-male event.
  • The presentations. Almost uniformly really, really good. Some were just excellent.
  • Lunch. Simple and easy, didn't get in the way of socializing.
  • The Bar. I liked having the bar open at lunch. Having a beer with friends really changes the dynamic of the event; and SAT has a decent selection of brews and friendly barstaff.
  • The screens. A-V was great this time around -- good job by SAT on this.
  • The space. It was cool having the back room at SAT. It's a really nice place for this kind of event.
  • Powerpoint Karaoke. A great idea (people have to give 5-minute presentations based on off-topic slidesets they've never seen before), but I think the scheduling was off. It would have been great to spread these out between other sessions during the day for comic relief, rather than clumped all together at once.
Second, things I wasn't happy about:
  • Registration. It was really great being welcomed at BarCampMontreal1, and this was distinctly missing at our second event. We had a pile of nametags on the table by the door, and no other form of greeting. For a space like SAT, this just didn't work. I'm going to volunteer to help out with this step at future events, since it's so important.
  • Coffee. I realize I'm a coffeeholic, but it would have been nice to have a coffee table out. It's nice to have on a rainy Saturday, and it gives shy people something to do.
  • T-shirts. I think every BarCamp should be an event worthy of a t-shirt. Fred gave out some shirts left over from BarCampMontreal1, but it would have been nice to have new ones for this event, too.
  • Tables. I realize they're expensive to rent, but dang, they're really great to have if you've got your laptop and other techno doodads. It changes the event from being a show to being a symposium.
  • Schedule timing. The (very important) talk by Martine about women in tech conferences ran far over schedule and wiped a few talks off the board. I think that was a mistake. After it was done, the joke segment of Powerpoint Karaoke started. By the time it was finished, 60% of the audience was gone, even though there were 4 serious talks still to go. We need to do a better job with timing this kind of thing so we keep the energy going through the entire day.
  • Too much noise from the café area. As people drifted off from the main presentation area, the café filled up, making for lots of loud chatter that drowned out the presentations. We should probably figure out how to fix this problem; it would be nice to stimulate this casual conversation while still being respectful to the speakers. Maybe more breaks (one in the morning and one in the afternoon)?
  • <100% participation. It seemed like there were a lot of spectators this time around, which drags down the participative nature of the event. I think that asking registrants how they're going to participate when they come to the door is a great way to gently remind people that they're expected to be involved (we do this for Burning Man events sometimes). I think Powerpoint Karaoke and other exercises would be a great way to get better hands-on participation by people who didn't prepare talks. (I hope that MadameWoo does a talk in the future about how to do an ad-hoc talk at BarCamp.)
My net feeling about the event was extremely positive, and I'm making these notes not to criticize anyone but to remind myself to do something about them for the next BarCampMontreal. I think that we can continuously improve this event to be inclusive, stimulating, fun and exciting, and I hope we keep this great barcamp spirit going in this city for a long time.