Search Results: "Edd Dumbill"

10 January 2012

John Goerzen: Social Overload

I'm finding social media is becoming a bit annoying. I enjoy using it to keep in touch with all sorts of people, but my problem is the proliferation of services that don't integrate well with each other. Right now, I have: So my problems are:
  1. Posting things in multiple places. I can currently post on identi.ca, which automatically posts to Twitter, which automatically posts to Facebook. But then I'd still have to post to Google+, assuming it's something that I'd like to share with both my Facebook friends and my Google+ circles (it usually is).
  2. The situation is even worse for re-tweeting/re-sharing other people's posts. That is barely possible between platforms and usually involves cutting and pasting, though this is somewhat rarer.
  3. It's probably possible to make my blog posts automatically generate a tweet, but not to automatically generate a G+ post.
All the hassle of posting things in multiple places leads me to just not bother at all some of the time, which is annoying too. There are some tools that would take G+ content and put it on Twitter, but without a character counter on G+, I don't think this would be useful. Anyone else having similar issues? How are you coping?

22 July 2011

Cosimo Alfarano: Facebook, Google+ and Diaspora (part 2)

This is probably more a tweet [Edit: typo. Thanks to Robot101 :D] than a blog article, but since my last post was about the same subject, I felt it'd make sense to point this article out.

Its author analysed what the future looks (or is?) like in more depth than I did and, I have to say, after a bit of time spent on G+ I envisioned pretty much the same future, which is why I started following Edd Dumbill on G+ and on Twitter ;)
Also thanks to Zack for mentioning the article.

End of post. I said it was short, didn't I? ... and I still made it longer than originally conceived :)

1 July 2008

Edd Dumbill: OSCON: what are your must-see talks?

We've switched on personal schedule sharing on the OSCON web site. When you've put together your desired schedule by starring sessions of interest, just hand out the "public view" link to let others know what you want to see. Here's my personal schedule. In it you'll find all the plenary sessions (as co-chair I simply cannot miss these, and neither should you, however late the party!) Also there's a fair smattering of my pet topics such as open web technologies, virtualization and dynamic languages, and a bunch of things I want to hear more about: Prophet, female participation in open source, Clutter, and of course Erlang. I'm fascinated to find out what other people have got planned, so please publish your schedules too and let's compare notes.

26 June 2008

Edd Dumbill: My OSCON 2008 picks

In just under a month, the tenth O'Reilly Open Source Convention will get underway in Portland. Over ten years OSCON has developed—along with the world of open source—into an intense, exciting, informative, diverse and exhausting event. This year I've the privilege of being co-chair, along with Allison Randal. We've packed so much into the show, it's a difficult job even being able to comprehend it as a whole! Fortunately, there's a way to start making sense of things before you arrive, thanks to the personal scheduler. Just mark the sessions you want to go to with a star, and you'll be able to plan out your time in advance. I wanted to list a few sessions from my own personal schedule that particularly piqued my interest. Then at the bottom of this post I'll share a discount code which can give readers of this blog 15% off OSCON registration. There's bribery for you.

Practical Erlang Programming
Largely thanks to XMPP enthusiasts and ejabberd, I've been hearing increasing amounts about Erlang, and I'd like to know enough about it to be dangerous. This three-hour tutorial looks just the ticket.

Open Source Virtualization Hacks
This is one of several sessions we have on virtualization, something I'm particularly pleased about. Virtualization may be "done" at the kernel level, but I think we're only just starting out on its application. This session is by my friend and sometime co-author, Niel Bornstein, who works for Novell on just this sort of thing.

Using Puppet: Real World Configuration Management
Puppet is the piece of open source software that is most exciting to me at the moment. As a developer, it enables me to manage my machines like I'd manage my code libraries. A must-see if you've not used Puppet yet.

These are just 3 out of the 300 or so confirmed sessions. Don't forget there's a large number of events and parties happening around OSCON too. And finally, the discount code. Use the code os08pgm when you're registering, and you'll get 15% off the ticket price. See you in Portland!

20 June 2008

Edd Dumbill: Secure LDAP replication

Ever the sucker for punishment, I decided to pick three difficult things and stick them all together: LDAP, SSL and replication. Here's how to make it go on Debian and Ubuntu.

The problem
You want LDAP replication to happen over the internet, and you want it to happen securely.

The caveat
I'm not going to tell you how to set up your LDAP from scratch here: I'm assuming you've reached a solution you're happy with and want to replicate it.

The solution
We're going to set up a replicating slave LDAP server, which communicates with the master over the internet via an SSL-protected connection.

Enabling replication
First up, the master LDAP server needs to be configured to permit replication. The key lines to add to your slapd.conf include:
moduleload syncprov
index entryCSN,entryUUID eq
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 200
These load up the synchronization module, add indices which make sync go faster, and enable sync. For more detail see the OpenLDAP site. Next you need to add a replicator user to your LDAP database and give it access to passwords as well as general read access. To create the replicator user, I made this simple LDIF file and fed it to ldapadd.
dn: cn=replicator,dc=mydomain,dc=com
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: replicator
description: LDAP replicator
userPassword: TOPSEKRIT
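Something along these lines does the job; the admin bind DN and the replicator.ldif file name are examples, so adjust to your setup:
# confirm slapd.conf still parses, then restart to pick up the syncprov overlay
slaptest -f /etc/ldap/slapd.conf && /etc/init.d/slapd restart
# add the replicator entry (you'll be prompted for the admin password)
ldapadd -x -D cn=admin,dc=mydomain,dc=com -W -f replicator.ldif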
Once this user is in your LDAP database, you should give it read access to passwords (I assume you've already given read access to authenticated users.) I have this in my slapd.conf:
access to attrs=userPassword,sambaNTPassword,sambaLMPassword
...
   by dn="cn=replicator,dc=mydomain,dc=com" read
To check that this works, try using ldapsearch to check that the passwords are returned:
ldapsearch -x -D cn=replicator,dc=mydomain,dc=com \
  -W | grep -i password
Enter the replicator password when prompted, and you should see the encrypted passwords from your LDAP database.

Securing access
Now you've got replication enabled on the master, you will want to ensure it is available on the internet only via TLS or SSL. Here's what I added to slapd.conf to enable this:
TLSCertificateFile      /etc/ssl/certs/ldapserver_crt.pem
TLSCertificateKeyFile   /etc/ssl/private/ldapserver_key.pem
TLSCACertificateFile    /etc/ssl/certs/myCA.pem
TLSVerifyClient         demand
As you will guess from the configuration, the first two lines set the SSL key and certificate the master uses (see "A little twist" below for an important note on key permissions.) The third line tells slapd where to find my site-local certificate authority (CA), and the fourth line says slapd must require any connecting client to have a valid SSL certificate signed by the site-local CA. This is important, as it provides a second layer of access control: a replicating client must connect using a certificate you signed, plus the replicator password. Before this enables TLS access, we must tell slapd which network interfaces to listen on. To do this, edit the SLAPD_SERVICES variable in /etc/default/slapd. Here's my configuration:
SLAPD_SERVICES="ldap://127.0.0.1/ ldap://192.168.0.1/ ldaps:///"
This enables regular LDAP on the loopback and intranet network interfaces, and LDAP/SSL on all interfaces, including the public internet. So, with slapd restarted we are at this situation: connections are now possible from the internet, as long as they are made over SSL with a certificate signed by our site-local CA. (In fact, you can make much finer-grained access restrictions in your configuration than I have done. Using LDAPS rather than TLS over regular LDAP is a rather broad precaution. As explained on the OpenLDAP site, the ssf= parameter can be used to require a certain level of secure connectivity on a per-user or client basis.)

Setting up the replicating server
Your slave server should have the same configuration as the master, except you can leave out the bits enabling replication. Firstly, you'll need to add the replication configuration to slapd.conf:
syncrepl rid=123
        provider=ldaps://ldapmaster.mydomain.com/
        type=refreshAndPersist
        searchbase="dc=mydomain,dc=com"
        filter="(objectClass=*)"
        scope=sub
        attrs="*"
        schemachecking=off
        bindmethod=simple
        binddn="cn=replicator,dc=mydomain,dc=com"
        credentials=TOPSEKRIT
Most of this I took as boilerplate from the OpenLDAP documentation. Items to note include: And here's the /etc/default/slapd configuration:
SLAPD_SERVICES="ldap://127.0.0.1/"
The slave slapd exists only in this case to serve the local machine. Finally, there's the tricky bit! You need to configure slapd to connect to the master server using a certificate. I'll assume you've created and signed a key and certificate pair for your slave server (see my post Low-tech SSL certificate maintenance for more on this.) Awkwardly, the TLS configuration in slapd.conf is for the server only. Replication works as a client, and thus needs separate configuration. Furthermore, you cannot configure this globally on your machine, as the SSL certificate is a per-user only parameter (see man ldap.conf for more information on this.) Instead, we must set it in slapd's environment. Add these two lines to the end of /etc/default/slapd:
export LDAPTLS_CERT=/etc/ssl/certs/slapd.crt
export LDAPTLS_KEY=/etc/ssl/private/slapd.key
This file is sourced as a shell script by slapd's init script. Amend the path to your certificate and keys as required. Use /etc/init.d/slapd restart and you should be good to go. Finally, we want the slave server to be certain it's talking to the real master. So we also configure client connections to verify the SSL certificate of the peer, in ldap.conf again:
TLS_CACERT      /etc/ssl/certs/myCA.crt
TLS_REQCERT     demand
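Before letting syncrepl loose, it's worth a quick end-to-end test from the slave; a rough sketch, reusing the example names above (run it as root, or another member of ssl-cert, so the private key is readable):
LDAPTLS_CERT=/etc/ssl/certs/slapd.crt \
LDAPTLS_KEY=/etc/ssl/private/slapd.key \
ldapsearch -x -H ldaps://ldapmaster.mydomain.com/ \
  -D cn=replicator,dc=mydomain,dc=com -W \
  -b dc=mydomain,dc=com -s base contextCSN
If the bind succeeds and the suffix entry comes back, the certificate handshake and the credentials are both in order; once replication is running, the contextCSN values on master and slave should converge.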
A little twist
One gotcha to notice with both client and server is that slapd runs as the openldap user by default on Debian. Also by default, SSL keys are readable only by the ssl-cert group. You'll need to add the openldap user to this group, otherwise it won't be able to access /etc/ssl/private.
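On Debian that amounts to something like the following (a sketch; the group and init script names are the stock Debian ones):
# let slapd's user read keys under /etc/ssl/private, then restart
adduser openldap ssl-cert
/etc/init.d/slapd restart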

18 June 2008

Edd Dumbill: Low-tech SSL certificate maintenance

I maintain a bunch of SSL certificates, mostly signed by my own site authority. Too many not to automate, but not enough to warrant heavy machinery. Here's how I do it.

The configuration files
Each certificate needs a config to describe what's in it. I create each of these and name it with a .cnf suffix. Here's an example:
[ req ]
prompt                  = no
distinguished_name      = server_distinguished_name
[ server_distinguished_name ]
commonName              = server.usefulinc.com
stateOrProvinceName     = England
countryName             = GB
emailAddress            = edd@usefulinc.com
organizationName        = Useful Information Company
organizationalUnitName  = Hosting
[ req_extensions ]
subjectAltName=edd@usefulinc.com
issuerAltName=issuer:copy
nsCertType            = server
[ x509_extensions ]
subjectAltName=edd@usefulinc.com
issuerAltName=issuer:copy
nsCertType            = server
Let's say this config is server.cnf. I then just type make server.pem to generate the corresponding certificate and key, signed by my local certificate authority. As I don't want to attend the startup of every service, I ensure the key is password-less.

The Makefile rules
Here are the makefile steps I use to generate and sign keys.
.SUFFIXES: .pem .cnf
.cnf.pem:
        OPENSSL_CONF=$< openssl req -newkey rsa:1024 -keyout tempkey.pem -keyform PEM -out tempreq.pem -outform PEM
        openssl rsa <tempkey.pem > `basename $< .cnf`_key.pem
        chmod 400 `basename $< .cnf`_key.pem
        OPENSSL_CONF=./usefulCA/openssl.cnf openssl ca -in tempreq.pem -out `basename $< .cnf`_crt.pem
        rm -f tempkey.pem tempreq.pem
        cat `basename $< .cnf`_key.pem `basename $< .cnf`_crt.pem > $@
        chmod 400 $@
        ln -sf $@ `openssl x509 -noout -hash < $@`.0
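Once make has run, it can be worth eyeballing what came out; a quick sketch, using the server.cnf example above (the CA file name inside usefulCA is an assumption based on the default openssl ca layout):
# who the certificate was issued to, and when it expires
openssl x509 -in server_crt.pem -noout -subject -dates
# confirm it chains back to the site-local CA
openssl verify -CAfile usefulCA/cacert.pem server_crt.pem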
The resultant files are the password-less key (server_key.pem), the CA-signed certificate (server_crt.pem), and the combined server.pem, plus a hash-named symlink to it for OpenSSL's hashed certificate directory lookup. Some notes on these steps: my site-local certificate authority is in the directory usefulCA, along with an OpenSSL config which describes my preferences. This config was created by copying and making appropriate adjustments to the default /etc/ssl/openssl.cnf which ships with Debian. For generating certificate signing requests to ship to a commercial certificate authority, it's a bit simpler. I save the config files with a .reqcnf suffix instead, and use this rule:
.SUFFIXES: .pem .cnf .reqcnf .csr
.reqcnf.csr:
        OPENSSL_CONF=$< openssl req -newkey rsa:1024 -keyout `basename $< .reqcnf`.key -keyform PEM -out `basename $< .reqcnf`.csr -outform PEM
And finally, a rule I use to sign incoming certificate requests from other systems:
.csr.pem:
        OPENSSL_CONF=./usefulCA/openssl.cnf openssl ca -in $< -out `basename $< .csr`_crt.pem
I offer these without warranty in the hope they might be useful to somebody. They're not much more than a transcription of a how-to into a makefile, but it's just enough technology to ensure creating certificates isn't a big nuisance.

Notes
Why do I bother with a site-local CA, rather than just self-sign? It lets me bypass the annoyance of SSL warnings on clients once I've installed my own CA certificate, and gives me a coarse-grained level of access control: for instance, only clients with certificates signed by my CA are allowed to access the site's LDAP server. My personal next step with this is to integrate the certificate production process with my emerging Puppet recipes for managing local infrastructure.
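As an aside on bypassing those client-side SSL warnings, installing the CA certificate on a Debian-ish client goes something like this (a sketch; browsers such as Firefox keep their own certificate store, so they need the CA imported separately):
# make OpenSSL-based clients trust the site-local CA
cp myCA.crt /usr/local/share/ca-certificates/
update-ca-certificates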

16 June 2008

Edd Dumbill: We're all ops people now

Ten years ago, most of us wouldn't have dreamt we'd be managing terabytes of storage, tens of megabits of bandwidth, arrays of network-distributed services. The height of a programmer's worry would likely be the choice of UI toolkit or finding the right way to indent code, and the height of consumer concern deciding which room to put the new computer in. Now the problems associated with managing large networks are becoming real for everyone, right down to the consumer level. Stupendously large amounts of computing resource are available in an instant. Your household probably has more than a terabyte of storage already. Issues such as single sign-on are going to hit home over the next year, as networked computing and entertainment devices proliferate. Features such as Apple's Time Machine will be increasingly vital — software that makes traditionally gnarly sysadmin tasks consumer-friendly. The rebranding of .Mac into "Mobile Me" is also a step in this direction.

The impact on developers
As software developers, we also have to cope with the effects of this resource-richness. For small sums of money we can get access to large computing clusters and geographically redundant hosting services. Our programs have left the desktop and found their new home on the web. System administration issues loom large upon us; security concerns lurk in the corners of our minds. Although the cost of infrastructure has dropped radically, other costs remain high and are going to stay that way. System administrators are not only grumpy, they demand high wages. Commercial software license fees spiral out of control: traditional per-CPU licensing models make little sense when you can quickly bring up tens of machines. The cost in power is already troubling large companies, and there's no reason to suspect the problems won't ripple down. Help is at hand from a variety of technologies. If they don't yet make massive resource management trivial, they at least make it possible. Some of these also inhabit the weird territory of being both the source of a problem and a solution at the same time: virtualization, for example.

Distributed revision control systems
Distributed revision control is a technology whose time has finally come in popular circles, thanks in part to Linus Torvalds's Git system. DRCS has several important impacts on today's developer: All these trends lower the barrier to entry and increase the collaboration and agility of development. You can see the value of this as more software tools become free. Selling such tools is rapidly becoming a thing of the past; the advantages of sharing enable the developers at the sharp end to get their jobs done quicker. However, such increased agility and, well, messiness leave other problems to solve, which the next two technologies address.

Virtualization
Hardware-as-a-service, infrastructure-as-a-service, call it what you will. The ability to create what we used to call entire machines, pick them up and move them around the network is revolutionary, and it's something that will have a real impact on regular developers. The benefits are at several levels.

Configuration management
Computing is a zero-sum game, and despite our increased ability to create and distribute software, problems still exist. We just pushed them to the next level. In good part, this next level is the problem of configuration management.
We now have networks and clusters of (virtual) machines, software so agile we need six decimal places to describe its revision levels, and network and authentication paths that are starting to tangle. How do we manage that? One thing developers crave is repeatability. That's why we love our makefiles, autoconf, Ant, rake and so on. It's the one time even the most imperative-minded programmer writes declarative code. We like to say "let the world be like this." Our new sprawling world lacks this feature, and the best of our old toolkits — .debs, RPMs — address things only at the level of packages in a single environment. So developers must look to the world of operations, a territory we probably thought we needn't enter. In this world the new "make" is called Puppet. You write recipes to describe how things ought to be, and Puppet will make it so. I've been spending some time digging into Puppet, and feel excited by the confidence it's giving me. Now that my applications exceed single source trees and single machines, it gives me the means to tie the whole together. This article was going to be solely about Puppet, but that will have to wait for another time. It's likely you'll have played with virtual machines and distributed revision control, but have you tried Puppet yet? Give it a spin, and let your mind wander over the benefits for your organization and development approaches.

Conclusions
For developers and users alike, our world is changing. Hardware, connectivity and, increasingly, software are becoming cheap or free. The solidity of the old things we put value on — real things you can touch, like disks — is eroding. What really matters is our data, our creations, and their communication. If they don't quite exist in a universal "cloud" yet, they're certainly getting frisky. As vendors provide solutions for consumers to manage their new domestic infrastructure, developers must look to network-aware toolkits and operations techniques to manage and get the best from their emergent infrastructures.

12 June 2008

Edd Dumbill: Gandi's VM hosting beta now closed to new users

I've been experimenting with Gandi's virtual hosting service recently. In fact, this blog is now hosted on it. Gandi have created by far the easiest hosting service I've used. The web interface allows you to buy credit, create pre-installed virtual machines and log in, all in under 15 minutes. Add the ease of Ubuntu to the mix (just one of the preinstalled images you can choose from), and commissioning times for new services are low indeed. The hosting service is based on Xen, and allows you to dynamically change the resources your VMs can access (CPU/memory/disk), on a scheduled basis if required. It has an API which provides enough functionality for you to white-label hosting as part of your own web app.

Gandi's hosting options: excerpt from Gandi's explanation of the amount of resource you can allocate to a virtual machine

Gandi's service isn't yet as flexible as Amazon's EC2, but it comes at the problem from the other end — its initial offering "just works" as an alternative hosting solution, with the added flexibility their Xen infrastructure brings to the mix. Even with all its tool support, Amazon EC2 feels like stepping into an alternate universe. I'm pretty excited about the directions in which Gandi's service will develop. And yes, now I've told you all this, unfortunately you can't yet play with the beta service if you've not got an account already. The initial success means Gandi are closing new signups for a little while to concentrate on improving their infrastructure. The good news is that Gandi say this is the final step before the full release of the system. I can't wait! It's so good to see innovative, high-quality internet solutions coming from Europe.

23 May 2008

Edd Dumbill: Badges, blogging and bragging

Back from my travels, it's time for a few updates. I've mostly blogged about these elsewhere, so I'll just give some pointers here. The launch of magnetic-stripe cards at Where 2.0 went well. We had some initial teething issues with Linux talking to the card printers, which were resolved by backing down to Linux kernel 2.6.22 from 2.6.24. I'm not entirely sure what's up with 2.6.24, but it exhibited strange behavior talking to the card printers over ethernet — as if there were MTU misconfigurations. It's a big nuisance, as 2.6.24 is the default kernel shipped with Ubuntu Hardy, an otherwise excellent release. I've been paying some attention to OpenID 2.0 recently, as it's time for me to upgrade my OpenID-accepting websites to use the new release of the specification — if for no other reason than Yahoo! OpenIDs are 2.0-only. This investigation led me to notice XRIs again, which are the confusing underbelly of the OpenID specs. The W3C Technical Architecture Group recently advised against using XRIs. I wrote about this over on my XML.com blog. I've not used that blog for a long time, but will try to do so more. I've realized that I've still got a lot to say about the web, XML and open standards, and the XML.com blog seems like a good place to say it. Finally, to brag for a short moment. Another XTech has been and gone, and this year's was a great experience for everybody involved. This quote from attendee Paul Smith summed things up nicely, as it tells me I succeeded in my main goal for the conference:
What I really liked about this conference was the mix of attendees and presenters, both from academia and the commercial world, both large and small. It made it feel much more valid, and it really felt like everyone was there for the right reasons - not trying to sell anything, but out of a genuinely altruistic wish to make the web better.
My sincere thanks to everybody involved in XTech this year.

3 December 2007

Edd Dumbill: Smartcard authentication on Linux and Mac

For various reasons, I need to secure access to some resources using two-factor authentication, and thus have been looking at smartcards. These little devices store digital keys and certificates, protected by a passphrase, and calculate digital signatures on-device. Hence the two factors: possession of the card, and knowledge of the password. This sort of scheme is widely used by government agencies and large corporations (and largely reliant on Windows, too), but I wanted to find the low cost way in for the small operator using open source.
OpenSC on Linux
The best starting point for Linux is the OpenSC project. It supports a reasonably broad array of devices, and is well supported by Linux distributions. Using command line tools you can create keys and certificates, as you would in OpenSSL for web servers and so on, and then upload them to the smartcard. Although OpenSSH source code has support for OpenSC, it is not compiled in by default in Debian and derived distributions. Unfortunately this means a bit of recompilation to get SSH supporting OpenSC. When that's done, you have an SSH implementation that can use an RSA key from your smartcard, and best of all, you can add this key to the ssh-agent like you would with regular keys.

The hardware
I ordered two devices which seemed to have the best support from OpenSC: the Axalto Cryptoflex E-Gate and an Aladdin eToken. I got these from UsaSmartCard, who have a special section in their catalogue for Open Source compatible products. Both these cards have a USB interface built in, as I didn't want to be toting around an extra card reader in addition to the tokens themselves.
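To give a flavour of the command line side, the key-and-certificate upload looks roughly like this; treat it as a sketch, since the PINs, file names and pkcs15 profile are made up and the exact options vary between card models:
# lay down a PKCS#15 structure on the token, protected by a user PIN
pkcs15-init --erase-card
pkcs15-init --create-pkcs15 --profile pkcs15+onepin --pin 1234 --puk 123456
# store an RSA key and matching certificate generated with OpenSSL
pkcs15-init --store-private-key user_key.pem --auth-id 01 --pin 1234
pkcs15-init --store-certificate user_crt.pem --auth-id 01
# see what ended up on the card
pkcs15-tool --list-keys
pkcs15-tool --list-certificates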
On the Mac
While both the devices worked as advertised under Linux, the experience has been a lot less fruitful under Mac OS 10.5 Leopard. There is a port of the OpenSC project for Mac OS, called SCA. The promise of the integration is great: you can use the on-device keys with apps like Safari and Mail, but there is a change in the way that the daemon responsible for talking to the smartcards (pcscd) works on Leopard, which means OpenSC won't recognise the cards.
With some fiddling, I have managed to force the Cryptoflex device to work with the Mac. Unfortunately the Leopard/Darwin source for OpenSSH has diverged significantly enough from the upstream OpenSSH that I couldn't apply the OpenSC patches. Not having ssh-agent work with the smartcard is a significant nuisance for me, as it's the easiest way to patch the extra security into deployment processes. The Aladdin eToken just plain didn't work on Leopard. There is a report that Aladdin are working on Leopard drivers, however. I think a small number of mainly US federal smart cards will work out of the box on Leopard, though I've seen a few complaints about these on the Apple forums. It looks unfortunately like smartcard support slipped through the net a bit.

Conclusion
Smartcard support is relatively straightforward on Linux. On Mac OS 10.5, it looks like some waiting is in order before things will work properly.

4 August 2007

Edd Dumbill: Zonbu: an intersection of open source, Web 2.0 and energy efficiency

Salon.com recently reviewed Zonbu, a highly compact general-purpose computer with no moving parts. Zonbu's key features are its incredibly low power consumption, network-connected storage and that it works right out of the box without any installation. Under the covers, it's a Gentoo Linux installation with mainstream open source apps such as OpenOffice and Firefox.

Zonbu and its skins — high-powered dressing for a low-power device

Price-wise, you can get the Zonbu for as little as $99 if you commit to two years' subscription to the network storage. Understanding the open source ethos, Zonbu also offer the box without any tether for $250. As with the Mac mini, supplying the keyboard, mouse and monitor is up to you. Preloaded with an office suite, email, IM, web browser, multimedia player, games and Skype, Zonbu is aimed at being a general computing appliance. You can't install anything else on it, but then again, that way you can't break it easily either. It sounds the sort of thing I'd be happy leaving with non-technical family and friends.

Storage with Amazon S3
Low-power solid state devices are nothing new of course; there are a variety available, and things such as the NSLU2 have been in production for some years. The novel thing about the Zonbu, however, is in how it manages its storage. The Zonbu has 4GB of compact flash storage on board, which it uses as a cache for Amazon's S3 storage network. All your files get encrypted and sent to S3, and are retrieved when you need them. One really neat consequence of this is that you can get at your data via the web any time you want. Secondly, it gives me some sense of security for my data. I don't know if Zonbu will give me the 'keys' to my data on S3, but it wouldn't be hard for them to provide an easy way to migrate out. Either way, S3 is a place I'd trust with my data.

Why Zonbu is important
Windows, and to some extent, Mac OS X, are becoming the needy children of computing, always tugging on your arm and asking you something. In contrast, Zonbu looks like a great step towards "appliance computing". Its features are more like those you'd expect from your phone or cable TV provider:

Zonbu has the potential to change domestic computing. The low price point lowers the barrier to computer ownership. Low maintenance needs lower the technical barrier to entry and use. And Zonbu's a green and economical technology, yet as useful as the full-power version. As a company, I feel Zonbu to be a well-intentioned player because of their strong support for open source, and the ease with which you can get at your data despite its appliance nature. I hope they continue to develop in this open data direction. Zonbu itself probably won't attain ubiquity, but it will change the marketplace and open up a new category of network-connected appliances for the home.

2 July 2007

Edd Dumbill: Nokia N800, the second time around

Nokia's N800 internet tablet is an intriguing device. When I originally got one a few months back I tried to treat it purely as a consumer object, just using the installed apps and things available through the obvious point-and-click channels. As a consequence it served mainly as a portable (and expensive) internet radio, streaming me the cricket commentary from the BBC. And when I upgraded my wireless network I somehow managed to make it WPA2 only, knocking the N800 offline. An offline N800 is an almost thoroughly useless device, so it went into the drawer and I forgot about it. Ultimately you can't leave something that expensive unused, so I dragged it out again, fiddled the wi-fi router into compliance, and decided not to deny my hacker nature this time. The N800 is Linux underneath, so who could resist?
The result is two-edged, really: I'm a lot happier with the device, but on the other hand must conclude that the N800 is still a bit far from being consumer-ready.

Must-have software
So, what things did I install this time around that made the device happier? Claws Mail is a nice email client that works well with my IMAP accounts, which all use SSL and TLS, have lots of messages and a deep folder hierarchy. I don't really want to write much mail on the N800, but an easy reading interface is a bonus. The FM radio is something I can't believe I missed before. I had no idea this was in there, but plug some headphones into the N800 and they act as the antenna for an FM receiver. Desperately cute and old-world, a bit like when laptops still used to have parallel printer ports. I had previously ignored Maemo Mapper, thinking it was useless without a GPS, but it turns out to work very nicely as a dedicated client for Google Maps, as well as several other mapping sources.

Must-do geeky bits
Try as I might to like the touch screen, the first thing I had to do with the device was find a way of not using the stylus to do sysadmin type tasks on it. So, first stop is to get a terminal up, figure out how to use the root account, install an SSH client and server, and get my pubkey onto the N800. Now I could shell into it and use a decent keyboard. On my local network I hate maintaining DNS if I don't have to, so the next thing I wanted was Zeroconf support in the shape of avahi. One of the quickest ways to get this going is to install the Canola media application, which uses Zeroconf to find shared music. With these basics in place, the N800 supports APT package repositories familiar to Debian and Ubuntu users, so the device becomes a lot less weird and much more manageable. I felt the same pleasant familiarity as I did with the NSLU2.

Things to look forward to
The N800's video camera is neat, but nobody I know uses Google Talk for conferencing. Fortunately it seems that Skype for the N800 is just around the corner. Initially, video support is unlikely, but I imagine that if Skype on the N800 proves popular, it won't be far off. The N800 is something I don't mind having kicking around the kitchen or nursery, so staying in touch with my family while I'm travelling will become a lot more fun.
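Going back to the geeky bits for a second, the pubkey step is the usual dance; a rough sketch, where the .local hostname comes courtesy of avahi and is an assumption:
# push a public key to the tablet, then do the fiddly admin over SSH
ssh-copy-id root@n800.local
ssh root@n800.local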
Also, I'd like to get NFS running on the N800, but that requires the installation of a new kernel, which I've not quite yet had the time to do. Once that's done, all my media, photos and storage will be handily available. 

22 June 2007

Edd Dumbill: Collecting minimal user information

Web applications are in constant competition for a user's attention. Unlike shrink-wrap software, there's often no captive audience, and web apps must gently guide and woo users to help them to the best experience of the software. This is particularly true when considering the problem of collecting personal details from a user. Collect too little information, and your app might not be able to function effectively. Collect too much information, and you raise the barrier to participation with your application. Users may either not be bothered, be alarmed at the apparent intrusion into their privacy, or fill out bogus information.

One size doesn't fit all
I originally stumbled on the issue in the development of Expectnation. The amount of information a conference speaker needs to give the organizers is significantly more than that of, say, a reviewer. An organizer will often want postal and phone details for a speaker, which are largely irrelevant to the review process. All conferences and events rely on the willingness of participants, so the path to participation needs to be as frictionless as possible.
Thus, a one-size-fits-all personal details form didn't make sense for us. At best, we could have simply made more fields optional, but that would result in missing information that we really did need. Another solution would be to provide a dedicated personal details form per user type, e.g. different forms for speakers and reviewers. This is a non-starter, however, as users can take on many roles within a conference. So, we had a problem on our hands: different levels of interaction with our application require different levels of personal information. We also had a second problem, which is that not all events and organizers want the same level of information. For example, a user group event is unlikely to need the same amount of information from speakers as an academic conference. Thus we have two axes on which the amount of information required varies: by role, and by event.

Our solution: progressive disclosure
We concluded that we wanted to adhere to two principles in the collection of user information for Expectnation: The first objective was achieved by constructing a matrix of information items against user role, with the possible values of required, optional, unused for each item. We then achieved the second objective by allowing this matrix to be varied over each event. We've named this concept "progressive disclosure." What this means in practice is that a user is only ever asked for the information required to fulfil the roles they have within Expectnation. If they take on another role, they are asked for the additional information at that point, and not before.

Implementation concerns
Extract from the disclosure level editing UI

It would be untrue to say that progressive disclosure is simple to implement. A user in our system will have disclosure requirements that are the union of all of their roles in all of the events they attend. Within Ruby on Rails, this means the conventional validation logic is useless to us, as it only deals with static requirements. Secondly, disclosure requirements are expensive to compute, so we need to cache them, with the inevitable addition of issues such as cache invalidation. Beyond the data model issues, the user interface issues are, if anything, more troublesome. From an end-user's point of view, the difficulty arises when we need extra information from them, and presenting this in a way that makes sense. From the event organizer's point of view, we have the challenge of presenting a controlling user interface for rather a complex concept. Organizers must be able to change the level of required information either globally or per-event. We opted for a grid presentation, an extract from which is shown in the screenshot in this article. Over time we'll be able to see how effective this interface is, and what we can do to improve it. Finally, we have chosen sensible defaults for all disclosure levels, as the best result would be that organizers have little need to reconfigure the settings in the first place.

Making it look easy
Because Expectnation is a general application for organizing events, rather than targeting one specific one, we've had to construct a framework for minimizing user information collection, as opposed to a simple implementation of known scenarios. At the end of the day we have added a fairly complicated subsystem and an extra degree of complexity to our codebase. However, the result from the user's point of view is increased simplicity and a streamlined experience. Every time you embark on something that wide-reaching and complex, you have to ask yourself whether the feature is really worth it. In this case, though, the answer is a resounding "yes". The user is the most important factor in our software, and making their life easier in an unobtrusive manner is a priority.

21 June 2007

Edd Dumbill: Jeni Tennison on women in computing

For a long time I've been vexed by the gender bias of computing towards men. It's clear that computing has multiple issues, ranging from its perception to the nature of its society. As organizer of computing conferences, I care about ensuring that participation is meritocratic and inclusive.
As various debates have taken place over blogs in the past on women in computing, I have stayed silent. Not because I don't care, but because I really didn't know what to say. I try to understand, but at the end of the day I'm a man, I tend to misunderstand important points, and I suspect my contributions would come across as at best naïve, and more likely, patronising. So many conversations between men and women on this topic end up in mutual incomprehension. Better to stay silent than be a fool.
This is why I'm very grateful for Jeni Tennison's analysis of women in computing. It is the most helpful blog post I have read recently on the topic. Perhaps it's because Jeni's British, and we share some of the same cultural assumptions and constraints. But more importantly, it's because her article builds bridges to my experience and understanding.
An argument of femininity
Rather than simply appealing to tribes of men and women, Jeni places the discussion in terms of feminine and masculine traits. Like many men, I identify with certain attitudes and preferences more closely allied to my feminine side. By not excluding me, her wording helped me view the points she unfolded in terms that my own experience, however poor, could identify with. I've often found contemporary computing communities to be suffocatingly macho. There's a strong pressure to succeed in terms of attention from the chattering bloggers, privileged inner circles and the capacity for vicious excoriation of those who fall from grace. As I grew up through school and university, I held these sorts of social forms in contempt, as well as occasionally suffering at their hands. Yet now, bafflingly, I find myself under the impression I need to belong and compete in these spheres.

Self-efficacy
One of Jeni's central points is concern about one's own self-efficacy. This was the real revelation to me. I tend to assume most capable people I meet are aware of their own capability. Jeni turned this on its head for me when she wrote
... look at how Ruby on Rails is marketed. A big play is made of how easy it is. But if a language or framework is easy then people with low self-efficacy can’t win: if they manage to do something with it then they haven’t really achieved very much because anyone can do it; if they don’t manage to do something with it then they’re complete idiots. I’m not saying that we should advertise languages or frameworks as being hard, because obviously that can put people off as well, but a recognition of the barriers that people might face may, in a strange way, make them more approachable.
As Jeni brings her article to a close, it's with some shock and shame that I get the punchline loud and clear: "this isn't about you." It's about empathy, inclusivity and selflessness. Human qualities that are unrestricted to either gender.

20 June 2007

Edd Dumbill: Why Bazaar rocks, and the highs and lows of its use with Rails

While Subversion is for many the source code revision system of choice, I've never stopped long with it myself. Why? Because it's mostly a fixed-up CVS, and doesn't fix the really gnarly problem with revision control: merging. Mark Shuttleworth has written an insightful little article on the key aspects of merging source code. Some of the points he makes underpin my choice of revision control system. For several years now I've been a very contented user of Bazaar, in its first arch-based incarnation, and now in its latter form. The key reason has been ease of merging. While most articles praising Bazaar are reports of open source development, I've also found it very handy for small in-house teams working on web applications. There are very often multiple arcs of development going on at once, and the ease of merging makes it easy to ensure that long-running development arcs don't get ridiculously far from the main trunk of development. Bazaar's merging also makes it easier for lead developers to police the merging of the code base.

Using Bazaar with Rails and other tools
Working mostly with Rails, one repeatedly runs up against the assumption that Subversion is the golden path for revision control. This is perhaps a little telling about the level of participation most Rails-based projects have reached. The tools story for a Bazaar user is both good and bad.

The good

The bad
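To make the branch-and-merge workflow described above concrete, here is a rough sketch of the Bazaar commands involved; the URLs and branch names are invented:
# take a feature branch from trunk and work on it locally
bzr branch bzr+ssh://code.example.com/srv/bzr/webapp/trunk billing-feature
cd billing-feature
# ...hack away, committing as you go...
bzr commit -m "First cut of the billing screens"
# pull in the latest trunk regularly so the branch never drifts too far
bzr merge bzr+ssh://code.example.com/srv/bzr/webapp/trunk
bzr commit -m "Merge trunk"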

14 June 2007

Edd Dumbill: BarCamps and unconferences: what software support do you need?

My latest venture, Expectnation, is conference management software aimed primarily at the traditional form of running conferences. The schedule, speakers and topics are all lined up in advance.

Campers at O'Reilly's Foo Camp unconference

However, many conferences now are taking the form of "unconferences" in whole or part, exemplified by the BarCamp movement. At these events, the schedule is assembled just in time, and almost everybody is a presenter. The same problems exist for BarCamp organizers as for conventional event organizers. They still need to use a bunch of diverse tools for their web sites, registration, communication and reportage. Expectnation solves this problem by being a "one-stop shop" tool for conventional conference structures, and we want to do it for BarCamps and unconferences too.

Tell me what you need
I'm asking for feedback from attendees and organizers of BarCamp-like events as to what kinds of features they're seeking in their support software. We already have the platform to build on, and we want to find out the best ways in which to do it. The sort of subject areas I'm keen to hear about include: If you have views on this, please join the discussion thread over on the forums. It's worth restating too that I'm very keen to talk about this with attendees as well as organizers.

1 June 2007

Antti-Juhani Kaijanaho: Everybody has their own conference management software

… or at least it feels that way. I just ran into Edd Dumbill’s post on Expectnation, and I just last week helped grade a master’s thesis on the application of Petri Nets to investigating the correctness of another piece of conference management software written at my University (a thesis that I had co-advised). I seem to recall reading about a couple of others.

Edd Dumbill: Launching Expectnation

I'm very pleased to announce the launch of Expectnation, a web application for managing conferences.

What does it do?
Expectnation is intended to replace the ad-hoc collection of emails, spreadsheets, documents and hacks that hold together the organization of most conferences. The aim is to improve the quality and reliability of communication between the organizers, speakers and attendees. But it doesn't stop there. What I'm really excited about is the chance to enhance the conference experience through the web. Expectnation is capable of managing the complete web site for a conference, which provides many opportunities for bringing the best of the social software world to augment events. Our tagline is "build your conference into a community", which underpins a key aim: to help organizers make events more relevant and useful to attendees, and to live on beyond the event itself. We do this in two ways: saving time, so you can spend it on improving your conference, and providing online tools to support community building.
Right now Expectnation is able to manage the proposal, review, scheduling and publication of presentations, along with the conference web site and smart emailing to speakers, reviewers and chairs. Over the next two months we'll be introducing our registration module, with optional online payments, and working on extending the social software features available for attendees.

Take a look
I'm really excited about letting users get started with Expectnation, and seeing the uses it gets put to. We believe Expectnation will be just as handy for organizing events inside an organization as for traditional conferences. So, if you organize, speak at, or attend conferences of any sort, please check out Expectnation and recommend it to a conference organizer near you!

31 May 2007

Edd Dumbill: Small bundle of sluggy joy

I'm a bit late to this party, but the NSLU2—affectionately known as the 'slug'—is a piece of kit you can't afford to be without on your home or small office network. Not much bigger than the palm of my hand, and cheaper than a ticket for a Test at Lord's, the NSLU2 is a small fileserver that serves files from attached USB disks. What makes it particularly special is the large number of alternative firmwares built by Linux open source developers, which allow you to extend the functionality of the NSLU2 beyond merely serving out files via SMB (the Windows file serving protocol).

Cheap NFS serving
So what's the big deal for me? Well, like most laptop-slinging folks whose home network is predominantly wireless, I want my backup disks to sit on the network, available when needed. It turns out it's very difficult to find a standalone network disk that properly supports Unix file system semantics such as symbolic links. Most just support SMB file sharing, with attendant limits such as no links and no files over 2GB. (I found this out the hard way with Apple's Airport Extreme.) With the NSLU2 I was able to install the "Unslung" alternative firmware, and install good old NFS, making my backup disks available in a normal way to Linux and OS X machines alike. (How we used to complain about NFS back in the early 90s, but Windows file sharing still makes it look good!) In the great tradition of open source there are multiple choices of Linux distributions you can install. As it was my first time round, I went for Unslung, which preserves as much as possible of the official Linksys interface, but lets you extend it. Next time, with a better idea of what I'd use the box for, I'd probably plump for Debian.

Constraints breed creativity
Inside the case, the NSLU2 is in fact a tiny Linux machine with 32MB of RAM and an Intel XScale CPU. This turns out to be plenty of resources to serve files on a small network. Aside from my prosaic needs, the NSLU2 has been put to several more innovative uses, such as a music server for Apple iTunes and a 4-line home telephone exchange. I've been astounded at the applications people have devised for this little box. Being fairly cheap makes it a great candidate for home automation projects. It's a great example of how limiting resources fosters innovation. Remember how games on 8-bit microcomputers were so much more ingenious than those on their more well-resourced successors?
So, I may be a little slow in finding this little hardware gem, but I wholeheartedly recommend it.

16 May 2007

Edd Dumbill: XTech heatmaps

One of the fun and useful things we've been able to do in Expectnation with the personal scheduler is to use the data to help us as organizers. We recently added dynamic overlays to the organizer's view of the schedule, enabling us to create heatmaps of the most popular talks.

Screenshot of personal scheduler popularity overlaid on the event timetable

As well as satisfying natural curiosity, the heatmaps let us identify issues such as possible room overcrowding in advance. We're also able to record actual attendance figures as the conference goes on, and I will try to make an analysis of how attendees' intentions stack up against their actions.
