Search Results: "Scott James Remnant"

26 October 2014

Colin Watson: Moving on, but not too far

The Ubuntu Code of Conduct says:
Step down considerately: When somebody leaves or disengages from the project, we ask that they do so in a way that minimises disruption to the project. They should tell people they are leaving and take the proper steps to ensure that others can pick up where they left off.
I've been working on Ubuntu for over ten years now, almost right from the very start; I'm Canonical's employee #17 due to working out a notice period in my previous job, but I was one of the founding group of developers. I occasionally tell the story that Mark originally hired me mainly to work on what later became Launchpad Bugs due to my experience maintaining the Debian bug tracking system, but then not long afterwards Jeff Waugh got in touch and said "hey Colin, would you mind just sorting out some installable CD images for us?". This is where you imagine one of those movie time-lapse clocks ... At some point it became fairly clear that I was working on Ubuntu, and the bug system work fell to other people. Then, when Matt Zimmerman could no longer manage the entire Ubuntu team in Canonical by himself, Scott James Remnant and I stepped up to help him out. I did that for a couple of years, starting the Foundations team in the process. As the team grew I found that my interests really lay in hands-on development rather than in management, so I switched over to being the technical lead for Foundations, and have made my home there ever since. Over the years this has given me the opportunity to do all sorts of things, particularly working on our installers and on the GRUB boot loader, leading the development work on many of our archive maintenance tools, instituting the +1 maintenance effort and proposed-migration, and developing the Click package manager, and I've had the great pleasure of working with many exceptionally talented people.

However. In recent months I've been feeling a general sense of malaise and what I've come to recognise with hindsight as the symptoms of approaching burnout. I've been working long hours for a long time, and while I can draw on a lot of experience by now, it's been getting harder to summon the enthusiasm and creativity to go with that. I have a wonderful wife, amazing children, and lovely friends, and I want to be able to spend a bit more time with them.

After ten years doing the same kinds of things, I've accreted history with and responsibility for a lot of projects. One of the things I always loved about Foundations was that it's a broad church, covering a wide range of software and with a correspondingly wide range of opportunities; but, over time, this has made it difficult for me to focus on things that are important, because there are so many areas where I might be called upon to help. I thought about simply stepping down from the technical lead position and remaining in the same team, but I decided that that wouldn't make enough of a difference to what matters to me. I need a clean break and an opportunity to reset my habits before I burn out for real.

One of the things that has consistently held my interest through all of this has been making sure that the infrastructure for Ubuntu keeps running reliably and that other developers can work efficiently. As part of this, I've been able to do a lot of work over the years on Launchpad where it was a good fit with my remit: this has included significant performance improvements to archive publishing, moving most archive administration operations from excessively-privileged command-line operations to the webservice, making build cancellation reliable across the board, and moving live filesystem building from an unscalable ad-hoc collection of machines into the Launchpad build farm. The Launchpad development team has generally welcomed help with open arms, and in fact I joined the ~launchpad team last year.
So, the logical next step for me is to make this informal involvement permanent. As such, at the end of this year I will be moving from Ubuntu Foundations to the Launchpad engineering team. This doesn't mean me leaving Ubuntu. Within Canonical, Launchpad development is currently organised under the Continuous Integration team, which is part of Ubuntu Engineering. I'll still be around in more or less the usual places and available for people to ask me questions. But I will in general be trying to reduce my involvement in Ubuntu proper to things that are closely related to the operation of Launchpad, and a small number of low-effort things that I'm interested enough in to find free time for them. I still need to sort out a lot of details, but it'll very likely involve me handing over project leadership of Click, drastically reducing my involvement in the installer, and looking for at least some help with boot loader work, among others.

I don't expect my Debian involvement to change, and I may well find myself more motivated there now that it won't be so closely linked with my day job, although it's possible that I will pare some things back that I was mostly doing on Ubuntu's behalf. If you ask me for help with something over the next few months, expect me to be more likely to direct you to other people or suggest ways you can help yourself out, so that I can start disentangling myself from my current web of projects. Please contact me sooner rather than later if you're interested in helping out with any of the things I'm visible in right now, and we can see what makes sense. I'm looking forward to this!

13 August 2012

Raphaël Hertzog: Looking back at 16 years of dpkg history with some figures

With Debian's 19th anniversary approaching, I thought it would be nice to look back at dpkg's history. After all, it's one of the key components of any Debian system. The figures in this article are all based on dpkg's git repository (as of today, commit 9a06920). While the git repository doesn't have all the history, we tried to integrate as much as possible when we created it in 2007. We have data going back to April 1996. In this period between April 1996 and August 2012: Currently the dpkg source tree contains 28303 lines of C, 14956 lines of Perl and 6984 lines of shell (figures generated by David A. Wheeler's SLOCCount) and is translated into 40 languages (but very few languages managed to translate everything; with all the manual pages there are 3997 strings to translate). The top 5 contributors of all time (by number of commits) are the following (result of git log --pretty='%aN' | sort | uniq -c | sort -k1 -n -r | head -n 5):
  1. Guillem Jover with 2663 commits
  2. Raphaël Hertzog with 993 commits
  3. Wichert Akkerman with 682 commits
  4. Christian Perrier with 368 commits
  5. Adam Heath with 342 commits
I would like to point out that those statistics are not entirely representative, as people like Ian Jackson (the original author of dpkg's C reimplementation) or Scott James Remnant were important contributors in parts of the history that were recreated by importing tarballs. Each tarball counts for a single commit but usually bundles much more than one change. Also, each contributor has their own habits in terms of crafting a work in multiple commits. Last but not least, I have generated this 3-minute gource visualization of dpkg's git history (I used Planet's head pictures for dpkg maintainers where I could find them): http://www.youtube.com/watch?v=1x9-Etj1Ew4 Watching this video made me realize that I have been contributing to dpkg for 5 years already. I'm looking forward to the next 5 years :-) And what about you? You could be the 147th contributor: see this wiki page to learn more about the team and to start contributing.
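For the curious, a gource invocation along these lines produces that sort of video. This is only a sketch built from gource's standard options; the exact flags and avatar directory used for the dpkg video are assumptions:

$ gource --seconds-per-day 0.1 --user-image-dir ../avatars -1280x720 dpkg/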


14 January 2011

Jordi Mallach: New project to discuss

Reading Scott's recent announcement of his move to Google was both surprising and a pleasure. Surprising, because it'll take time to stop associating his name with Ubuntu, Canonical, and the nice experiences I had while I worked with them. A pleasure, because his blog post was full of reminiscences of the very early days of a project that ended up being way more successful in just a few years than probably anyone in the Oxford conference could imagine. Scott, best of luck for this new adventure! Scott's write-up includes a sentence that made me remember I had been wanting to write a blog post related to all of this, but it was pending Mark Shuttleworth's permission for posting:
Ok, Mark wasn't really a Nigerian 419 scammer, but some people did discard his e-mail as spam!
Scott James Remnant
Many know the story of how I ended up not being part of the Super-Secret-Debian-Startup Scott mentions. I even wrote about it in a blog post, 3 years ago:
[...] nothing beats the next email which sat for some dramatic 6 months in my messy inbox until I found out in the worst of the possible scenarios. Let's go back to late February, 2004, when I had no job, and I didn't have a clue on what to do with my life.
From: Mark Shuttleworth <mark@hbd.com>
Subject: New project to discuss
To: Jordi Mallach <jordi@debian.org>
Date: Sun, 29 Feb 2004 18:33:51 +0000
[...]
I'm hiring a team of debian developers to work full time on a new
distribution based on Debian. We're making internationalisation a prime
focus, together with Python and regular release management. I've discussed
it with a number of Debian leaders and they're all very positive about it.
[...]
I'm not sure if I totally missed it as it came in, or if I skimmed through it and thought "WTF?! Dude on crack?", or just forgot I needed to reply to this email, but I'd swear it was the former. Not long after, no-name-yet.com popped up, and the rumours started spreading around Debian channels. Luckily, I got a job at LliureX two months later, where I worked during the following 2 years, but that's another story. I guess it was July or so when Ubuntu was made public, and Mark and his secret team organised a conference (blog entries [1] [2] [3] [4] [5]), just before the Warty release, and I was invited to it, for the same reasons I got that email. During that conference, probably because Mark sent me some email and I applied a filter to get to it, I found the lost email, and felt like digging a hole to hide in for a LONG while. I couldn't believe the incredible opportunity I had missed. I went to Mark and said "hey, you're not going to believe this", and he did look quite surprised about someone being such an idiot. I wonder if I should reply to his email today...
When the usual suspects in the secret Spanish Debian Cabal channel read this blog post, they decided Mark deserved a reply, even if it would hit his inbox more than three and a half years late. :) With great care, we crafted an email that would look genuinely stupid in late 2007, but merely arrogant and idiotic in 2004, when Ubuntu was just an African word and the GNU/Linux distribution landscape was quickly evolving; at the time, Gentoo Linux had the posh distribution crown that Debian had held for quite a few years. I even took enough care to forge the X-Operating-System and User-Agent headers so they matched whatever was current in Debian in February 2004, and of course, top-posting seemed most appropriate. So Mark woke up that Monday, fired up his email client, and got... this:
Date: Mon, 1 Mar 2004 09:47:55 +0100
From: Jordi Mallach <jordi@sindominio.net>
To: Mark Shuttleworth <mark@hbd.com>
Subject: Re: New project to discuss
Organization: SinDominio
X-Operating-System: Debian GNU/Linux sid (Linux 2.6.3 i686)
User-Agent: Mutt/1.5.5.1+cvs20040105i
Hi Mark,
Thanks for your email. I nearly deleted this e-mail because for some
reason I thought it was targetted spam.
Your project looks very interesting, almost like a dream come true.
However, I feel a bit uneasy about your proposal. Something just doesn't
fit.
Why would someone start a company to work on /yet another/ Debian
derivative? Have you heard about Progeny's sad story? I think it's a
great example to show that Debian users don't want Debian-based distros,
they want people to work on the "real thing". Besides, I don't think
there's much more place for successful commercial distros, with Red Hat
and SuSE having well-established niches in the US and Europe.
Also, why focus on Debian specifically, Why not, for example, Gentoo,
which has a lot of buzz these days, and looks poised to be the next big
distribution?
To be honest, I think only a few people have the stamina or financial
stability to undertake a project like this, so I'd like to know
a bit more about you, and details on how you plan to sustain the
expenses.
Those are the main issues that worry me about your project. Other than
that, I would be interested in taking part in it, as I'm currently
unemployed and working on something Debian-based would be just too good
to miss.
You can reach me at +34 123 45 67 89, or if you feel like flying people
around Europe, I probably can be in the UK whenever it fits you.
Thanks, and hoping to hear from you again,
Jordi
On Sun, Feb 29, 2004 at 06:33:51PM +0000, Mark Shuttleworth wrote:
> Hi Jordi
>
> We haven't met, but both Jeff Waugh and Martin Michelmayr recommended that
> I get in touch with you in connection with a new project that I'm starting.
>
> I'm hiring a team of debian developers to work full time on a new
> distribution based on Debian. We're making internationalisation a prime
> focus, together with Python and regular release management. I've discussed
> it with a number of Debian leaders and they're all very positive about it.
>
> Would you be available to discuss it by telephone? I'm in the UK, so we
> could probably find a good timezoine easily enough ;-) Let me knof if
> you're keen to discuss it, when and what number to call.
>
> Cheers,
> Mark
>
> --
> Try Debian GNU/Linux. Software freedom for the bold, at www.debian.org
> http://www.markshuttleworth.com/
As you can imagine, his reaction was immediate:
Date: Mon, 22 Oct 2007 11:13:54 +0100
From: Mark Shuttleworth <mark@hbd.com>
To: Jordi Mallach <jordi@sindominio.net>
Subject: Re: New project to discuss
Jordi! I just got this now! Did you recently flush an old mail queue?
With thanks to all the Spanish Cabal members who were involved!

31 December 2010

Debian News: New Debian Developers (December 2010)

The following developers got their Debian accounts in the last month: Congratulations!

The following developers have returned as Debian Developers after having retired at some time in the past:

Welcome back!

29 October 2010

Colin Watson: libpipeline 1.0.0 released

In my previous post, I described the pipeline library from man-db and asked whether people were interested in a standalone release of it. Several people expressed interest, and so I've now released libpipeline version 1.0.0. It's in the Debian NEW queue, and my PPA contains packages of it for Ubuntu lucid and maverick. I gave a lightning talk on this at UDS in Orlando, and my slides are available. I hope there'll be a video at some point which I can link to. Thanks to Scott James Remnant for code review (some time back), Ian Jackson for an extensive design review, and Kees Cook and Matthias Klose for helpful conversations.

23 February 2009

Theodore Ts'o: Reflections on a complaint from a frustrated git user

Last week, Scott James Remnant posted a series of "Git Sucks" entries on his blog, starting with this one here, with follow-up entries here and here. His problem? To quote Scott, "I want to put a branch I have somewhere so somebody else can get it. That's the whole point of distributed revision-control, collaboration." He thought this was a mind-numbingly trivial operation, and was frustrated when it wasn't a one-line command in git.

Part of the problem here is that for most git workflows, most people don't actually use git push. That's why it's not covered in the git tutorial (this was a point of frustration for Scott). In fact, in most large projects, the number of people who need to use the scm push command is a very small percentage of the developer population, just as very few developers have commit privileges and are allowed to use the svn commit command in a project using Subversion. When you have a centralized repository, only the privileged few will be given commit privileges, for obvious security and quality control reasons. Ah, but in a distributed SCM world, things are more democratic: anyone can have their own repository, and so everyone can type the commands git commit or bzr commit. While this is true, the number of people who need to be able to publish their own branch is small. After all, the overhead in setting up your own server just so people can pull changes from you is quite large; and if you are just getting started, and only need to submit one or two patches, or even a large series of patches, e-mail is a far more convenient route. This was especially true in the early days of git's development, before web sites such as git.or.cz, github, and gitorious made it much easier for people to publish their own git repository. Even for a large series of changes, tools such as git format-patch and git send-email are very convenient for sending a patch series, and on the receiving side, the maintainer can use git am to apply a patch series sent via e-mail.

It turns out that from a maintainer's point of view, reviewing patches via e-mail is often much more convenient. Especially for developers who are just starting out with submitting patches to a project, it's rare that a patch is of sufficiently high quality that it can be applied directly into the repository without needing fixups of one kind or another. The patch might not have the right coding style compared to the surrounding code, or it might be fundamentally buggy because the patch submitter didn't understand the code completely. Indeed, more often than not, when someone submits a patch to me, it is useful for indicating the location of the bug more than anything else, and I often have to completely rewrite the patch before it enters the e2fsprogs mainline repository. Given that, publishing a patch that will require modification in a public repository where it is ready to be pulled just doesn't make sense for many entry-level patch submitters. E-mail is in fact less work, and more appropriate for review purposes. It is only when a mid-level to senior developer is trusted to create high-quality patches that do not need review that publishing their branch in a pull-ready form really makes sense. And that is fairly rare, and why it is not covered in most entry-level git documentation and tutorials.
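To make that workflow concrete, here is a minimal sketch of the e-mail round trip; the address, output directory and mailbox name are placeholders:

$ git format-patch -3 --cover-letter -o outgoing/    # contributor: last three commits as patch mails
$ git send-email --to=maintainer@example.org outgoing/*.patch
$ git am --signoff incoming-series.mbox              # maintainer: apply the mailed series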
Unfortunately, many people expect to see the command scm push in a distributed SCM, and since git pull is a commonly used command for beginning git users, they expect that they should use git push as well, not realizing that in a distributed SCM, push and pull are not symmetric operations. Therefore, while most git users won't need to use git push, git tutorials and other web pages which are attempting to introduce git to new users probably do need to do a better job explaining why most beginning participants in a project probably don't need their own publicly accessible repository that other people can pull from, and to which they can push changes for publication.

There is one exception to this, of course, and this is a developer who wants to get started using git for a new project which he or she is starting and is the author/maintainer, or someone who is interested in converting their project to git. And this is where bzr has an advantage over git, in that bzr is primarily funded by Canonical, which has a strong interest in pushing an on-line web service, Launchpad. This makes it easier for bzr to have relatively simple recipes for sharing a bzr repository, since the user doesn't need to have access to a server with a public IP address, or need to set up a web or bzr server; they can simply take advantage of Launchpad. Of course, there are web sites which make it easy for people to publish their git repositories; earlier, I had mentioned git.or.cz, github, and gitorious. Currently, the git documentation and tutorials don't mention them, since they aren't formally affiliated with the git project (although they are used by many git users and developers, and the maintainers of these sites have contributed a large amount of code and documentation to git). This should change, I think. Scott's frustrations which kicked off his "git sucks" complaints would have been solved if the Git tutorial recommended that the easiest way for someone to publicly publish their repository is via one of these public web sites (although people who want to set up their own server are certainly free to do so).

Most of these public repositories probably won't have much reason to exist, but they don't do much harm, and who knows? While most of the repositories published at github and gitorious will be like the hundreds of thousands of abandoned projects on Sourceforge, one or two of the new projects which someone starts experimenting on at github or gitorious could turn out to be the next Ruby on Rails or Python or Linux. And hopefully, they will allow more developers to be able to experiment with publishing commits on their own repositories, and lessen the frustrations of people like Scott who thought they needed their own repositories; whether or not a public repository is the best way for them to do what they need to do, at least this way they won't get as frustrated about git. :-)


25 August 2008

Jeff Licquia: Standards and Conversations, Part 1

So it looks like the project I’ve been laboring on has been getting some attention:
Ever thought it was difficult to write software for Linux? For multiple distros? InternetNews reports that the LSB is making a push for their next release (due out later this year) that should help make all that much easier.
They even link to our project status page. Cool! Of course, good publicity invites criticism. This time, there seem to be two themes. William Pitcock seems to have the most succinct summary:
To put things simply, the LSB sucks. Here's why:
  • The LSB spec depends on RPM. I mean, come on. Seriously. Why do they need to require a specific package manager? If package handling is really required, then why not create a simple package format that can be converted on demand into the system package format? Or why care about packages at all?
  • The LSB spec invents things without consulting distros. Like the whole init scripts thing. But that's not as bad as depending on RPM or requiring a specific layout.
(See also Scott James Remnant.) Let’s take this one part at a time. Today’s topic: packaging. Part of William’s problem may be that he doesn’t understand the spec. The LSB doesn’t require a specific package manager, or a specific package format. It doesn’t even require that the distribution be set up using package management at all! The spec only requires that LSB-compliant software be distributed so that any LSB-compliant distribution can install it. That could be tarballs with POSIX-compliant install scripts, an LSB-compliant install binary, a shar archive, a Python script with embedded base64 binaries, whatever. One of the options allowed is an RPM package, with a number of restrictions. The restrictions are key, because they effectively define a subset of RPM that acts as, to quote William again:
…a simple package format that can be converted on demand into the system package format…
The difference being, of course, that we didn’t reinvent the wheel and create our own; we used a popular format as the basis for ours. Scott raises another concern:
While much of the LSB can be hacked into a different distribution through compatibility layers and tools, such as alien, what ISV or other vendor wants to provide a support contract against a distribution that has such kludges?
I’m not sure if he’s referring specifically to packaging or to the standard in general. As regards packaging: the reason we specify a strict subset is because we can test that subset, and we’ve tailored it to the needs of tools such as alien. The theory goes that alien isn’t a kludge when it comes to LSB packages. But, as already mentioned, if vendors aren’t comfortable with supporting RPM, they have a number of other options. As it turns out, most of them are doing just that; the feedback we’re getting from most ISVs is that packaging (whether LSB-subset RPM, full RPM, or Debian) is just not worth the effort. Coming up: part 2

13 November 2007

Andrew Pollock: [debian] My First Ubuntu Developer Summit

I've been procrastinating writing about my Boston trip, for no particular reason. A bunch of us went to the Boston Ubuntu Developer Summit (in Cambridge, actually) for work purposes. We currently derive from the Long Term Support releases, and Hardy Heron is going to be an LTS, so I wanted us to get more involved directly, particularly at this stage of the game, rather than after it's already a done thing. Anyway, the reason I'm writing this in the "Debian" category is that I found the whole thing to be totally awesome. Debian should do something similar. Rather than being a conference, this was essentially a week of 1-2 hour small-group meetings to discuss various features for Hardy. I thought it worked pretty well. I'd never seen gobby used before; it's basically a poor man's Google Docs (although one could argue it handles real-time collaborative editing better). They had VoIP up the wazoo. You could dial into a conference bridge to just listen in, or you could dial a different bridge to participate. Every room had a Polycom conference phone in it. That seemed to work pretty well. The main reason I think Debian could take a leaf out of Ubuntu's book on this is that it helps resolve potentially controversial technical decisions very quickly. Rather than having a two-week protracted flame war on a mailing list, you can have a 30-minute rational debate in person and move on. So I've no idea how this UDS stacked up to previous ones, but I was very impressed by the whole thing. I'm in total awe of Scott James Remnant and Colin Watson for not having burned out by now. Doing a release every 6 months, with all the associated stuff that goes around it (e.g. running a UDS), has got to take it out of you.

29 September 2007

Maximilian Attems: happy git usage

Apparently Scott James Remnant, in his article on version control systems, confuses arch and git. One can only speculate that his short git usage stems from the pre-1.0 days, when you had to use higher-level tools (called porcelain) to work happily with git. A funny anecdote is that Scott, back in his dpkg hacking years, promoted arch heavily. Ubuntu^Wbzr propaganda touts speed gains as the big bonus of its last major releases. In order to be able to do that, you have to start from a terrible baseline. Testing bzr on medium-sized repos is no fun at all. The bzr pain inside Launchpad must be beyond imagination. Nowadays it is much easier to hack on mdadm than on lvm2. The reason is that the latter project uses rusty CVS. With git it is really easy to contribute back: either you mail the patches or publish your repository, and git will help you along either way. The other very big bonus of git is the big community around it, a community excited about building and delivering the best version control system. The git developers run regular surveys on git usage and incorporate the wishlists back into development. The most "funny" way to use git is to run it as a cvsserver. Believe it or not, I have seen git's cvsserver emulation used in the wild.

28 September 2007

Clint Adams: A thrusting hit will never find

Once upon a time I proposed a DebConf talk about how to write zsh completion functions, but it was rejected. Accordingly, I didn't waste any time preparing materials for it, so I never have anything to throw at people when they ask for some kind of introduction. Here we have a fictitious program called arismom. Usage information from the fictitious manpage and the fictitious --help output is as follows:
Usage: arismom [OPTION]... [FILE]...
Do it Jersey style.
  -a, --all                       do all those things
  -b                              bubble
      --bounce                    bonuce
      --CoC=STYLE                 adhere to STYLE code of conduct
  -d, --debian=PACKAGE            dedicate actions to Debian PACKAGE
  -e, --ensqualm=USER             ensqualm USER first
  -t, --tempdir=DIRECTORY         spew temporary files into DIRECTORY
So let's cut to the quick. Create a file called _arismom somewhere in your function search path. This is described by the array $fpath. You can view its contents by typing print -l $fpath, and you can add a directory to the beginning with fpath=(~/.zsh/scratch $fpath) or to the end with fpath+=(~/.zsh/scritch). For the purposes of this blog entry, we'll pretend you have a ~/.michaelbolton/squatch directory in your $fpath and that you are now editing ~/.michaelbolton/squatch/_arismom. The first line of the file, at the very tippy top, should read
#compdef arismom
This ensures that when the completion system boots up and finds your file, it will associate your function with the command arismom and complete options and arguments for it accordingly. Speaking of arguments, skip a line for aesthetic equilibrium, and invoke the _arguments utility function.
_arguments \
_arguments is sort of the cdbs of the zsh completion fleet. By the end of this blog entry, you'll have no idea how it works, and if you want to do anything particularly complex with it, you might encounter some resistance. For those of you unfamiliar with Z-Shell syntax or shell syntax in general, the trailing backslash means I'm a-gonna feed you a ton of information about the command-line interface to arismom. So tell it about the first option already.
  '(-a --all)'{-a,--all}'[do all those things]' \
To oversimplify, this declares that -a and --all are options which produce the identical behavior of "do all those things". Specifically, the part in parentheses says to not complete either -a or --all when either -a or --all is already on the command-line. The part in braces is merely brace expansion; for that reason it is outside of the single quotes. If you're unfamiliar with brace expansion, try print '(alice)'{bob,carnie}'[wilson]' to see how it expands. Finally, the phrase in brackets is an explanation of the option, which may or may not be displayed depending on your configuration. Next, do a short option that has no long option equivalent.
  '-b[bubble]' \
A long option with no short option equivalent looks similar. Don't feel limited by upstream's inadequate descriptions, misspellings, or poor grammar.
  '--bounce[amplify bounce level according to X-la algorithm]' \
Some options take arguments. Now we use colons.
  '--CoC=[adhere to CoC]:CoC style:(mjg59 buxy ubuntu)' \
The first column is the same optspec by which you've been so excited thus far, and the part between the colons is the message or description of that which will be matched. The part after the last colon is the action; in this case you are specifying a list of possibilities within single parentheses. In most cases, you'll want to be more dynamic than a pre-defined list, and there are many helper functions all ready to serve you.
  '(-d --debian)'{-d,--debian=}'[dedicate actions to Debian package]:package:_deb_packages avail' \
_deb_packages is a function that completes Debian packages; it can take avail, installed, or uninstalled to restrict which set of packages it offers. In this case we want it to complete any package available from your sources.
  '(-e --ensqualm)'{-e,--ensqualm=}'[ensqualm user first]:user to ensqualm:_users' \
Here the _users function will complete usernames.
  '(-t --tempdir)'{-t,--tempdir=}'[spew]:temp dir:_files -/' \
Normally the _files function will complete files, but you can tell it that you only want directories with the -/ option. Finally, we want to cover all the remaining arguments (which according to the fictitious usage information is a list of files). In this case, you happen to believe that files with the .nj extension are to be completed.
  '*:NJ files:_files -g "*.nj"'
The -g option specifies a glob pattern to match files. Now the entire file should look like this.
#compdef arismom
_arguments \
  '(-a --all)'{-a,--all}'[do all those things]' \
  '-b[bubble]' \
  '--bounce[amplify bounce level according to X-la algorithm]' \
  '--CoC=[adhere to CoC]:CoC style:(mjg59 buxy ubuntu)' \
  '(-d --debian)'{-d,--debian=}'[dedicate actions to Debian package]:package:_deb_packages avail' \
  '(-e --ensqualm)'{-e,--ensqualm=}'[ensqualm user first]:user to ensqualm:_users' \
  '(-t --tempdir)'{-t,--tempdir=}'[spew]:temp dir:_files -/' \
  '*:NJ files:_files -g "*.nj"'
There you have it. Restart zsh and try tab-completing various things after arismom. P.S. I expect bug reports containing functions for dpatch-edit-patch, pkill, and pgrep by tomorrow morning. P.P.S. Why is Scott James Remnant still on Planet?
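One last convenience for when you edit the function later: rather than a full restart, zsh's standard builtins can reload it in place:

unfunction _arismom     # forget the stale definition, if it was already loaded
autoload -U _arismom    # mark it for autoloading again from $fpath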

26 September 2007

Scott James Remnant: Why I choose Bazaar (a history of revision control)

Like any sensible software developer, I have a close relationship with revision control systems. In my previous job, I was an SCM Engineer (see Software configuration management), which meant I had an even closer relationship than most, since we were running the CVS servers and actively using them to track changes and deployments. We all know, deep down, that revision control systems shouldn’t exist. This kind of thing should be inherent in the design of the operating system, through standard file and filesystem formats. The OLPC interface is making some headway towards that, but for the rest of us, it means using a revision control tool throughout the development process. Unfortunately, even though the tool is expected to be the most-used command on your system, very few of them are particularly easy to use. Thus there’s a large learning curve, and people become religious about their choice since they have invested significant time in using it. Just to spice the mix up, not only will people religiously defend their choice of revision control system, but they’ll do so while actively hating it.

In the beginning there was CVS, and we all thought that it was pretty good. It was based on the simpler RCS and shared a file-format with it, but introduced control of directory trees and remote operation. Actually, in reality, CVS wasn’t that good. Its command set could be a little strange and inconsistent (e.g. it’s not possible to diff between two dates on a branch); the support for branching assumed that all branches would be merged into the mainline, and only once; and nobody ever really knew how to create a new project in a repository (tip: cvs import is wrong). But we all used it anyway, and we muddled through. It did have some good features; it was simple, fast and pretty reliable: when it did break, you could usually fix the repository yourself. And most importantly of all, we understood how to drive it.

And so it was for many years, until Subversion (SVN) came along. Subversion intended to be “a better CVS”; perhaps this goal should have made us suspicious at the time, since CVS was already being a pretty good CVS by itself. Unfortunately we hated CVS so much we flocked to the new system in hope. In hindsight, Subversion didn’t really improve on CVS much at all. In fact, arguably, the only real improvement was the addition of atomic commits (in CVS, each commit is per-file, so it’s manual labour to work out which change was made to two files at the same time). (Its support for branching, tagging, copying, renaming, etc. was no better than CVS’s when done in the repository by hand.) The cost of this single new feature was a much more complicated interface (with two separate commands), a backend that tended to break down weekly and a lethargic slowness to its operation. Most people I know now justify their use of Subversion instead of CVS by “Subversion is maintained, CVS isn’t”, which is a somewhat self-fulfilling justification.

While the mass conversion to Subversion, and ensuing disappointment and frustration, was going on, something new appeared on the horizon: Arch. Arch was different; it broke one of the core assumptions of revision control, that of the repository as a cathedral. In CVS, and Subversion like it, if somebody wants to modify your code (even if on a branch) you need to give them access to your own repository. In some cases (especially with CVS), vast access control and permission structures would be in place to ensure proper behaviour.
With Arch, you don’t; all you need to give to anyone is read access. Anybody can make their own branch by copying yours and committing to their own copy. This model also necessitated fixing a long-standing problem that CVS had; Arch has repeatable (smart) merging. If you merge from a branch, you can merge again later, and again, and again. Arch made this possible through each commit (changeset) having a globally unique identifier, made from the branch’s own globally unique identifier and the changeset number in the branch. Unfortunately, while this was a massive step in a new direction, Arch had an absolutely terrible user interface. Its command list was terrifying, with over 100 commands, many of which had multiple-word names (tla set-tree-version). It exposed too many of its own innards, and expected you to learn them. It also forced baroque file naming semantics on its users and strange policy (thou shalt not commit without first running “make clean”). Efforts were made to improve Arch’s user interface through projects such as baz, but they were always doomed from the start.

We’ve since seen an explosion of new revision control systems: Monotone, Darcs, Git and Bazaar. What’s especially interesting is the commonality between these systems. They are all “distributed” like Arch, though they also all discard the strange “unique branch identifier” convention and instead simply assign a unique identifier to each file or commit. This means that they all support personal branches, and by necessity all support repeatable (smart) merging. So how do they differ? What are their killer features and killer problems? Monotone is all about repository integrity, ensuring that every commit is both authorised and intact. It pays for this with a severe lack of speed. Darcs is based around a “theory of patches”; a branch is not made up of its history but by the collection of patches in it. Unfortunately this often breaks down, and darcs frequently gets stuck calculating even trivial and commonplace branch models. Git is very strange to me; its killer feature appears to be the speed at which it can handle very large trees, but the interface is as insane as Arch’s was. It is heavily optimised for the “I only apply patches” development model, at the expense of ordinary development models (it shares an issue with Arch where calculating annotations on an individual file is an expensive operation).

What about Bazaar? Its killer feature is that it is designed to work the way you do. The command set is relatively small, and each of them works in the most obvious manner. It also supports plugins so that you can always implement your own workflow. Of all the revision control systems, it’s the only one (that I’m aware of) that supports both distributed and centralised workflows (and lets you go distributed when you need to, e.g. when you’re on a plane). Here’s a few examples of how Bazaar’s command set works the way you do. To start managing some code in bzr:
$ cd myproject
$ bzr init
To add the files, copy in your usual .bzrignore file and just add everything:
$ cp ~/bzrignore .bzrignore
$ bzr add
added foo.c
added bar.c
Check the output for mistakenly added files, adjust .bzrignore and remove the file with bzr rm. A common operation is realising that the commit you’re about to make should really go on a new branch for now:
$ cd ..
$ cp -a myproject myproject-foo
$ cd myproject-foo
$ bzr commit
A copy of a Bazaar branch is a different branch, you can commit to it separately. There’s a bzr branch command for it too (which deals with issues such as bound branches, checkouts, etc.) but it’s nice to demonstrate that Bazaar does what you’d expect even when you don’t use its own commands. Pulling changes from another branch (where you haven’t made any modifications yet) is easy:
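For instance, the equivalent of the cp -a dance above, using bzr’s own command (same directory names):

$ bzr branch myproject myproject-foo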
$ bzr pull ../myproject
As is merging (when your branches have diverged):
$ bzr merge ../myproject
One particularly nice feature is that after a merge, you see the merge as a single commit and it can be treated as such; but it also has the set of merged commits indented under it–you can examine these as individual commits as well! What’s the downside of Bazaar? Well, it’s not the fastest system (but by no means the slowest), for small to medium sized projects this is never an issue but may be for extremely large projects–fortunately the developers are improving its performance all the time! But that doesn’t matter; it is, honestly, the first revision control system that I don’t hate.

12 September 2007

Scott James Remnant: Waybacking

I think I’ve just invented a new sport… Everyone knows the game of Googling your own name, and finding out all sorts of fascinating things that you have forgotten or didn’t realise had made it onto the Interwobble. Here’s a new twist: use the Internet Archive Wayback Machine instead and read through things like old versions of your homepage. I’m honestly shocked at just how much incriminating material I’ve found, both from my own website and segfault.org. This comment on my site from Leonard Richardson, and my reply, especially amused:
What percentage of the time are you drunk? This is to settle a bet.
Okay, well on an average week I guess I go out for 3 nights. Friday night starts at 5:30 and ends 12 hours later, the other two days would be probably from 9:30 to 2:30am, so 5 hours. So we can say I spend 22 hours inebriated. Just under a day, or 13% of the week. Some weeks that might be as much as 25% to 30% I guess tho.
Who was the bet with, and more importantly, who won?
This explains a lot …

10 September 2007

Scott James Remnant: Ubuntu Desktop Developer

Continuing my mission to put together a kick-ass team to develop the Ubuntu Desktop, the following position is now up on the website:
Posting Date & ID: September 2007 UDD
Job Location: Your home with broadband. Some international travel will be required.
Job Summary: To adapt and develop the GNOME desktop to improve the Ubuntu user experience.
Key responsibilities and accountabilities:
Requirements, skills and experience:
How to apply
Please send a cover letter and CV with references to hr@canonical.com. Please indicate in your submission the role for which you are applying. We prefer to receive applications and CVs/Resumes in either PDF or plain text format.

25 July 2007

Scott James Remnant: Online Desktop

Havoc’s keynote at GUADEC was extremely interesting, especially for how it polarised the people present. Several people seemed very upset with the notion that f-spot should be replaced by flickr, but I think that was a problem with the way that Havoc presented the message, and not the underlying idea. Instead consider f-spot and flickr as sharing the same collection of data, and being two different ways to view and manage it; with changes from one appearing in the other. The mechanism isn’t important. Consider the following: Now, isn’t that cool?

15 July 2007

Scott James Remnant: Virtual accounts with exim and dovecot

A few people commented on my last post asking for details about how I configured exim and dovecot to have the fake scott+canonical account and separate Maildir tree.
exim4 router configuration
The first key part of the configuration is to configure exim4 to split the local part into a user name and a suffix. This allows a local part such as “scott+canonical” to be split into the user name “scott” and the suffix “+canonical”. This is configured by adding the following two options to the appropriate routers in your exim4.conf:
local_part_suffix = +*
local_part_suffix_optional
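In context, the router might end up looking something like this; a sketch based on the stock exim4 “localuser” router, where the router name and transport will vary with your configuration:

localuser:
  driver = accept
  check_local_user
  local_part_suffix = +*
  local_part_suffix_optional
  transport = local_delivery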
The first option defines the suffix; normal practice appears to be to use both “+” and “-”, but since I’d previously used “-” with qmail, I opted to use “+” only so that I didn’t confuse myself during the transition. The second option allows the suffix to be optional, so that mail to “scott” is still delivered normally. Now mail to “scott+canonical” will be delivered to the “scott” user.
Forwarding configuration
The next task is to ensure that mail is actually forwarded to this address; for me this was a configuration performed by the Canonical sysadmins to ensure that my work e-mail is actually delivered to scott+canonical on my own mail server.
Filtering configuration
Since personal and work e-mail are now both being delivered to the same user account on my home mail server, I need to filter the mail into separate folders. This can be done by checking the $local_part_suffix variable in Exim filter .forward files, e.g.:
# Exim filter
if $local_part_suffix is "+canonical"
then
    save Maildir/Canonical/
endif
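Exim can dry-run a filter file against a saved message, which is a handy way to check rules like this before trusting them with real mail; the file names here are just examples:

$ exim -bf .forward < saved-message.eml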
Now incoming work e-mail is filtered into a different mail folder, while personal mail is delivered into the primary one. I’ve used similar filter instructions for mailing lists, mailman messages, Launchpad mails, etc. to filter into appropriate folders. Where the mail is personal, it is filtered into (e.g.) Maildir/.Lists/upstart-devel/; where the mail is for work, I add /Canonical/ to the path, (e.g.) Maildir/Canonical/.Lists/linux-hotplug-devel/. This means that the Maildir/ directory is my personal INBOX, with sub-folders immediately under that and beginning with a period; and the Maildir/Canonical/ directory is my work INBOX, with sub-folders immediately under that and beginning with a period. This defines two trees in a manner compatible with dovecot. There’s no particular reason that Maildir/Canonical/ has to be under Maildir/; it could have been Maildir-Canonical/ and this would still work. I simply wanted them in one place to ease backups.
Dovecot configuration
Now we need to configure dovecot to permit login by a fake (“virtual”) user, with a different Maildir tree, so that I can configure them as two separate accounts. The first set of changes is to dovecot.conf, to add an additional authentication source. Modify the “auth default” block to add a new “passwd-file” passdb in addition to the “pam” passdb (or whatever your system is using).
passdb passwd-file {
  args = /etc/dovecot/passwd
}
This lets us authenticate virtual users, but we also want to set their attributes, so we can use the same file as a userdb in the same “auth” block.
userdb passwd-file {
  args = /etc/dovecot/passwd
}
Dovecot will now check both PAM and this file for user information. We now simply need to add a line to this file to specify the virtual user and set up the alternate Maildir tree.
scott+canonical:PASSWORD:1000:1000::/home/scott::userdb_mail=maildir:/home/scott/Maildir/Canonical
The format is that of an ordinary passwd file; the first two parts give the passdb authentication credentials and the rest give the userdb information. I’ve set this user to have the same uid, gid and home directory as my real “scott” user. The final part changes the mail environment for this virtual user, instead rooting it at Maildir/Canonical/.
Client configuration
The mail client will need two accounts added: one for “scott” and the other for “scott+canonical”. It will see two separate folder trees for each account. An unexpected bonus is that the reply account is now automatically set for me, since I’m replying from the specific account rather than from a single general one.
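As for the PASSWORD field above: if your dovecot ships the dovecotpw utility (it did in the 1.x series, if I recall correctly), it can generate a hash in any scheme dovecot understands; the scheme here is only an example:

$ dovecotpw -s SHA1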

13 July 2007

Scott James Remnant: Mail Strike

Like most geeks, I run my own mail server. The burden of administering it is much less than the increased flexibility in filtering incoming mail, let alone dealing with SPAM. My mail server configuration has remained pretty static the entire time, and controversially, I’ve always used qmail. The reason for this dates back to my first sysadmin job in the mid-to-late nineties, and the decision in those days did tend to be sendmail or qmail, with the security conscious choosing the latter.

qmail’s delivery system is a little odd: everything on the left hand side up to the first “-” is considered a user, and everything after can be used for filtering. The default local delivery component takes this into account, so e-mail to scott-foo can be filtered by the /home/scott/.qmail-foo file. This gives a pretty natural way to deal with mailing lists; you subscribe with a unique address for that list, and all the mail goes into the right folder automatically. This has served me reasonably well over the years, with heavy patching across the daemon to add features such as LDAP integration, and SPAM filtering that I wanted.

Unfortunately it’s been getting too burdensome to maintain. Since it’s not true open source software, and is effectively abandoned upstream, it’s not as up to date as I’d like. SPAM filtering tends to take place in the local delivery loop, rather than at SMTP time; and due to the strange delivery system, it’s unreasonably hard to perform any kind of sender verification or greylisting. Since every special address is filtered differently, it’s quite hard to add common filtering unless it’s pre-planned and you use addresses with a common prefix. The clincher has been dealing with super-sites like Launchpad, which send huge amounts of different e-mails to a single address, with no facility to separate that from your published contact address. I needed a better mail server.

So I’ve now moved to exim. I was surprised by how quickly I was able to pick it up; I did the migration in two day-long outages: the first to simply migrate delivery and stash the mail in one big folder, and the second to customise the delivery and filtering to my liking. I’ve also re-subscribed to mailing lists with single addresses again, so now I have a single filter rule which happily can filter Launchpad mails around as well. Happily I’ve been able to make a change I’ve wanted to for a while: home and work e-mail is separated into different Maildir/ trees, and mailing list subscriptions are made with the most appropriate address. The magic of dovecot lets me create a fake scott+canonical user that uses the alternate Maildir tree, while still retaining my user permissions, etc. Overall I’m pleased with the new setup, and how the migration went. SpamAssassin needs some tuning as a little SPAM is still getting through, but otherwise it seems to be working well.

19 June 2007

Scott James Remnant: Automatix and Upgrading

It also seems that several of the dapper to edgy upgrade problems were caused by the use of Automatix, a tool that performs common customisations to Ubuntu, such as replacing the pre-installed software with alternatives and installing packages that Ubuntu is unable to pre-install due to patent or other legal issues.

Henrik has a few good points about this; however, I feel it’s also important to remember that the Ubuntu community does not consist only of the core developers. Automatix, and tools like it, are by their very definition tools that reduce the amount of your system that the core developers will support. The default set of installed packages is not arbitrary; a package may be selected over your preferred solution simply because we do not have the expertise in the team to deal with the other, or even because the other is not supported upstream! We therefore rely on the wider community to take ownership of these packages, and support them within the community structure. Support, in the development sense, doesn’t just consist of security updates either; it also consists of keeping the software up to date, fixing bugs and, most importantly of all, testing it before we release.

The right approach to making sure that Automatix users are not bitten again during the edgy to feisty upgrade in six months’ time is for members of the community to come together and form a team to support it. The existing Automatix team in Launchpad is probably a good start. One of the goals of this team should be to make sure that, throughout feisty’s development cycle, upgrading from an edgy box with Automatix installed works flawlessly. Where it doesn’t, they should take care to ensure that useful bugs are filed (e.g. “foo 1.1-2 contains same file as bar 1.0-1 but neither Replaces nor Conflicts it”) so that the problems can be fixed.

Likewise, where community members suggest that a user install software from outside the main component, or even from outside the Ubuntu repository entirely, they should keep in mind that they’re likely to cause that user problems when it’s time to upgrade. If you’re running a repository of your own right now, have you considered that you need to start testing upgrades from edgy with your packages installed to feisty? Testing when feisty releases is too late!
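To make the fix for that example concrete: the usual remedy for a file-overlap bug of that kind is a pair of fields in the control stanza of the package taking over the file, here using the made-up foo and bar from the example above:

    Package: foo
    Replaces: bar (<= 1.0-1)
    Conflicts: bar (<= 1.0-1)

The Replaces field lets foo’s files overwrite those of the older bar during unpack, while the matching Conflicts ensures the older bar is removed rather than the two packages being left half-overlapping.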

Scott James Remnant: The Edgy Dance

Last week, a few of us gathered at Canonical’s London offices to oversee the final release preparations. This basically consisted of taking the various candidate CD images and performing both install and upgrade tests on them. As you can imagine, three people running repeated tests of edgy meant that the fabled startup sound got played many, many times. A tradition was started. Now, whenever you hear that sound, remember to get up and dance, wave your arms in the air, or just tap your fingers on the table.

Scott James Remnant: What we'll get in feisty

This post is a sequel to my “What I want in edgy+1” post, which was written when the developer summit was first announced. Now that the summit (and the following company All Hands meeting) is over, and we’re all back home, this seems as good a time as any to review what was discussed and get a good idea of what feisty might look like. I’ve touched on the problem of predicting time-based releases before; it’s both the gift and the curse of a time-based release schedule that work not completed in time can be deferred to a later release. So take the following with a pinch of salt: some of this may still not make it.

General Themes

The general theme of dapper was to be a release that could be supported for a long term; conservatism was the goal. We did do some quite exciting work under the hood, such as the switch away from hotplug to a fully udev-based system, but in general it wasn’t innovative or ground-breaking. Edgy was intended to be more ground-breaking, but the practical matter of having only a few months to develop it, and our own pride in shipping something that still worked, meant that it turned out as a shinier, improved dapper. So what’s feisty going to be like? Judging from the discussions at UDS, and the specifications that have been written, the general theme of feisty is to lead the way again with new technologies.

The Desktop

For users, perhaps the most obvious change will be the active use of 3D acceleration to draw the desktop where the hardware can support it (the issue of binary drivers has not yet been resolved). Windows are more visually distinct from each other through shadows behind them, and transparency for the non-active windows. The relationship between different workspaces/viewports is much clearer, as the transition is animated on a cube or sliding pane. And for the bling crowd, windows can wobble, burn, explode or dissolve. There are two different compositors being considered at this point, compiz and beryl; we’re likely to decide which to use at Feature Freeze, based on how well they’ve been fixed, developed and supported by that point. Underneath the hood, the configuration of the X server will be simpler and more robust, so even the worst case will not leave you confined to a console without any help.

Networking

Networking in feisty should be a much more pleasurable experience. The Network Manager project, which has been waiting on the sidelines for a couple of releases, may finally get a shot at being in the default installation. For the average user, this makes switching between wired and wireless networks, including setting up WEP and WPA, much, much easier. And what if there’s no network infrastructure around? Out-of-the-box support for RFC 3927 link-local networks, and multicast DNS resolution (aka Zeroconf), means that you just need to agree on a network name with others around you to be able to communicate. Of course, once you’re on a network, you still need to be able to share files and access local services. The integration of the Avahi project gives you one-click access to other people’s shared music or files, and lets you share your own, should you choose to do so.

Customisations

One of the most frequently encountered problems with edgy was the difficulty of installing various common packages that aren’t part of the default installation, especially codecs. Projects such as Automatix attempt to tackle this, but can cause problems with upgrading to later releases. Some effort will be going into feisty to make performing these common customisations much simpler, including being able to install codecs or viewers by just trying to open a file.

Boot Sequence

A long-running project within Ubuntu has been to get the boot and shutdown sequences as fast and efficient as possible. At the time we started, it was common for a Linux distribution to boot in a mere two or three minutes. If you thought edgy booted fast, wait until you see feisty. Feisty is the release where we take full advantage of Upstart, not only bringing the system up as fast as possible but also more robustly than we can do today. And if that weren’t enough, it should look slicker too, without some of the nasty flickering and mode changes that happen today.
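As a taste of how Upstart expresses this, here’s a minimal sketch of a job definition in the feisty style; the daemon name and path are invented for the example rather than taken from a real feisty job:

    # /etc/event.d/example-daemon (hypothetical job, for illustration)
    # bring the daemon up when the system starts, down again on shutdown
    start on startup
    stop on shutdown
    # restart it automatically if it ever dies
    respawn
    exec /usr/sbin/example-daemon

Because jobs react to events rather than running in a fixed numbered order, services can start as soon as whatever they depend on is ready, which is where both the speed and the robustness come from.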
