Search Results: "Colin Walters"

7 November 2014

Andrew Cater

At mini-Debconf Cambridge:

Much unintentional chaos and hilarity and world class problem solving yesterday.

A routine upgrade from Wheezy to Jessie died horribly on my laptop when the UEFI variable space filled up, leaving "No Operating System" on screen.

Cue much running around: Chris Boot, Colin Walters and Steve dug around, booted the system using a rescue CD and so on. Lots more digging, including helpful posts by mjg59, suggested that a BIOS update might solve the problem.

Flashing the BIOS did clear the variables and free the variable space, and it all worked perfectly thereafter. This had the potential to turn the laptop into a brick under UEFI (though it would still have worked under legacy boot).
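
For anyone hitting the same symptom, a rough way to gauge how full the firmware's variable store looks from Linux is to total up the entries exposed through efivarfs (normally mounted at /sys/firmware/efi/efivars). This is only a sketch and only an approximation: each efivarfs file carries a small attribute header in addition to the variable data, and the firmware's real quota accounting is not visible from here.

```python
#!/usr/bin/env python3
"""Rough estimate of UEFI variable usage via efivarfs (approximation only)."""
import os

EFIVARS = "/sys/firmware/efi/efivars"   # usual efivarfs mount point on Linux

def main():
    total = 0
    count = 0
    for name in sorted(os.listdir(EFIVARS)):
        try:
            size = os.path.getsize(os.path.join(EFIVARS, name))
        except OSError:
            continue  # some entries may not be stat-able
        print(f"{size:6d}  {name}")
        total += size
        count += 1
    print(f"\n{count} variables, roughly {total} bytes visible through efivarfs")

if __name__ == "__main__":
    main()
```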

As it is, it all worked perfectly - but where else would you get _the_ Grub maintainer, 2 x UEFI experts and a broken laptop all in the same room?

If it hadn't happened yesterday, it would have happened at home and I'd have been left with nothing. As it is, we all learnt/remembered stuff and had a useful time fixing it.

31 July 2012

Martin Pitt: My impressions from GUADEC

I have had the pleasure of attending GUADEC in full this year. TL;DR: A lot of great presentations + lots of hall conversations about QA stuff + the obligatory beer and beach. Last year I just went to the hackfest, and I never made it to any previous one, so GUADEC 2012 was a kind of first-time experience for me. It was great to put some faces and personal impressions to a lot of the people I have worked with for many years, as well as catching up with others that I did meet before. I discussed various hardware/software testing stuff with Colin Walters, Matthias Clasen, Lennart Poettering, Bertrand Lorentz, and others, so I have a better idea how to proceed with automated testing in plumbing and GNOME now. It was also really nice to meet my fellow PyGObject co-maintainer Paolo Borelli, as well as seeing Simon Schampier and Ignacio Casal Quinteiro again. No need to replicate the whole schedule here (see for yourself on the interwebs), so I just want to point out some personal highlights in the presentations. There were a lot of other good ones, some of them technical and some amusing and enlightening, such as Frederico's review of the history of GNOME. On Monday I prepared and participated in a PyGObject hackfest, but I'll blog about this separately. I want to thank all presenters for the excellent program, as well as the tireless GUADEC organizer team for making everything so smooth and working flawlessly! Great job, and see you again in Strasbourg or Brno!

28 October 2011

Lars Wirzenius: On the future of Linux distributions

Executive summary

Debian should consider:

Introduction

I've taken a job with Codethink, as part of a team to develop a new embedded Linux system, called Baserock. Baserock is not a distribution as such, but deals with many of the same issues. I thought it might be interesting to write out my new thoughts related to this, since they might be useful for proper distributions to think about too. I'll hasten to add that many of these thoughts are not originally mine. I shamelessly adopt any idea that I think is good. The core ideas for Baserock come from Rob Taylor, Daniel Silverstone, and Paul Sherwood, my colleagues and bosses at Codethink. At the recent GNOME Summit in Montreal, I was greatly influenced by Colin Walters. This is also not an advertisement for Baserock, but since much of my current thinking comes from that project, I'll discuss things in the context of Baserock. Finally, I'm writing this to express my personal opinion and thoughts. I'm not speaking for anyone else, least of all Codethink.

On the package abstraction

Baserock abandons the idea of packages for all individual programs. In the early 1990s, when Linux was new, and the number of packages in a distribution could be counted on two hands in binary, packages were a great idea. It was feasible to know at least something about every piece of software, and pick the exact set of software to install on a machine. It is still feasible to do so, but only in quite restricted circumstances. For example, picking the packages to install a DNS server, or an NFS server, or a mail server, by hand, without using meta packages or tasks (in Debian terminology), is still quite possible. On embedded devices, there's also usually only a handful of programs installed, and the people doing the install can be expected to understand all of them and decide which ones to install on each specific device. However, those are the exceptions, and they're getting rarer. For most people, manually picking software to install is much too tedious. In Debian, we realised this a great many years ago, and developed meta packages (whose only purpose is to depend on other packages) and tasks (which solve the same problem, but differently). These make it possible for a user to say "I want to have the GNOME desktop environment", and not have to worry about finding every piece that belongs in GNOME and installing each one separately. For much of the past decade, computers have had sufficient hard disk space that it is no longer necessary to be quite so picky about what to install. A new cheap laptop will now typically come with at least 250 gigabytes of disk space. An expensive one, with an SSD drive, will have at least 128 gigabytes. A fairly complete desktop install uses less than ten gigabytes, so there's rarely a need to pick and choose between the various components. From a usability point of view, choosing from a list of a dozen or two options is much easier than from a list of thirty five thousand (the number of Debian packages as I write this). This is one reason why Baserock won't be using the traditional package abstraction. Instead, we'll collect programs into larger collections, called strata, which form some kind of logical or functional whole. So there'll be one stratum for the core userland software: init, shell, libc, perhaps a bit more. There'll be another for a development environment, one for a desktop environment, etc. Another, equally important reason to move beyond packages is the problems caused by the combinatorial explosion of packages and versions.
Colin Walters talks about this very well. When every system has a fairly unique set of packages, and versions of them, it becomes much harder to ensure that software works well for everyone, that upgrades work well, and that any problems get solved. When the number of possible package combinations is small, getting the interactions between various packages right is easier, QA has a much easier time testing all upgrade paths, and manual test coverage improves a lot when everyone is testing the same versions. Even debugging gets easier, when everyone can easily run the same versions. Grouping software into bigger collections does reduce flexibility of what gets installed. In some cases this is important: very constrained embedded devices, for example, still need to be very picky about what software gets installed. However, even for them, the price of flash storage is low enough that it might not matter too much anymore. The benefit of a simpler overall system may well outweigh the benefit of fine-grained software choice.

Everything in git

In Baserock, we'll be building everything from source in git. It will not be possible to build anything, unless the source is committed. This will allow us to track, for each binary blob we produce, the precise sources that were used to build it. We will also try to achieve something a bit more ambitious: anything that affects any bit in the final system image can be traced to files committed to git. This means also tracking all configuration settings for the build, and the whole build environment, in git. This is important for us so that we can reproduce an image used in the field. When a customer is deploying a specific image, and needs it to be changed, we want to be able to make the change with the minimal changes compared to the previous version of the image. This requires that we can re-create the original image, from source, bit for bit, so that when we make the actual change, only the changes we need to make affect the image. We will make it easy to branch and merge not just individual projects, but the whole system. This will make it easy to do large changes to the system, such as transitioning to a new version of GNOME, or the toolchain. Currently, in Debian, such large changes need to be serialised, so that they do not affect each other. It is easy, for example, for a GNOME transition to be broken by a toolchain transition. Branching and merging has long been considered the best available solution for concurrent development within a project. With Baserock, we want to have that for the whole system. Our build servers will be able to build images for each branch, without requiring massive hardware investment: any software that is shared between branches only gets built once. Launchpad PPAs and similar solutions provide many of the benefits of branching and merging on the system level. However, they're much more work than "git checkout -b gnome-3.4-transition". I believe that the git-based approach will make concurrent development much more efficient. Ask me in a year if I was right.

Git, git, and only git

There are a lot of version control systems in active use. For the sake of simplicity, we'll use only git. When an upstream project uses something else, we'll import their code into git. Luckily, there are tools for that. The import and updates to it will be fully automatic, of course. Git is not my favorite version control system, but it's clearly the winner. Everything else will eventually fade away into obscurity. Or that's what we think.
If it turns out that we're wrong about that, we'll switch to something else. However, we do not intend to have to deal with more than one at a time. Life's too short to use all possible tools at the same time.

Tracking upstreams closely

We will track upstream version control repositories, and we will have an automatic mechanism for building our binaries directly from git. This will, we hope, make it easy to follow upstream development closely: when, say, GNOME developers make commits, we want to be able to generate a new system image which includes those changes the same day, if not within minutes, rather than waiting days or weeks or months. This kind of closeness is greatly enhanced by having everything in version control. When upstream commits changes to their version control system, we'll mirror them automatically, and this then triggers a new system image build. When upstream makes changes that do not work, we can easily create a branch from any earlier commit, and build images off that branch. This will, we hope, also make it simpler to make changes, and give them back to upstream. Whenever we change anything, it'll be done in a branch, and we'll have system images available to test the change. So not only will upstream be able to easily get the change from our git repository, they'll also be able to easily verify, on a running system, that the change fixes the problem.

Automatic testing, maybe even test driven development

We will be automatically building system images from git commits for Baserock. This will potentially result in a very large number of images. We can't possibly test all of them manually, so we will implement some kind of automatic testing. The details of that are still under discussion. I hope to be able to start adding some test driven development to Baserock systems. In other words, when we are requested to make changes to the system, I want the specification to be provided as executable tests. This will probably be impossible in real life, but I can hope. I've talked about doing the same thing for Debian, but it's much harder to push through such changes in an established, large project.

Solving the install, upgrade, and downgrade problems

All mainstream Linux distributions are based on packages, and they all, pretty much, do installations and upgrades by unpacking packages onto a running system, and then maybe running some scripts from the packages. This works well for a completely idle system, but not so well on systems that are in active use. Colin Walters again talks about this. For installation of new software, the problem is that someone or something may invoke it before it is fully configured by the package's maintainer script. For example, a web application might unpack in such a way that the web server notices it, and a web browser may request the web app to run before it is configured to connect to the right database. Or a GUI program may unpack a .desktop file before the executable or its data files are unpacked, and a user may notice the program in their menu and start it, resulting in an error. Upgrades suffer from additional problems. Software that gets upgraded may be running during the upgrade. Should the package manager replace the software's data files with new versions, which may be in a format that the old program does not understand? Or install new plugins that will cause the old version of the program to segfault? If the package manager does that, users may experience turbulence without having put on their seat belts.
If it doesn't do that, it can't install the package, or it needs to wait, perhaps for a very long time, for a safe time to do the upgrade. These problems have usually been either ignored, or solved by using package-specific hacks. For example, plugins might be stored in a directory that embeds the program's version number, ensuring that the old version won't see the new plugins. Some people would like to apply installs and upgrades only at shutdown or bootup, but that has other problems. None of the hacks solve the downgrade problem. The package managers can replace a package with an older version, and often this works well. However, in many cases, any package maintainer scripts won't be able to deal with downgrades. For example, they might convert data files to a new format or name or location upon upgrades, but won't try to undo that if the package gets downgraded. Given the combinatorial explosion of package versions, it's perhaps just as well that they don't try. For Baserock, we absolutely need to have downgrades. We need to be able to go back to a previous version of the system if an upgrade fails. Traditionally, this has been done by providing a "factory reset", where the current version of the system gets replaced with whatever version was installed in the factory. We want that, but we also want to be able to choose other versions, not just the factory one. If a device is running version X, and upgrades to X+1, but that version turns out to be a dud, we want to be able to go back to X, rather than all the way back to the factory version. The approach we'll be taking with Baserock relies on btrfs and subvolumes and snapshots. Each version of the system will be installed in a separate subvolume, which gets cloned from the previous version, using copy-on-write to conserve space. We'll make the bootloader able to choose a version of the system to boot, and (waving hands here) add some logic to be able to automatically revert to the previous version when necessary. We expect this to work better and more reliably than the current package-based approach (a rough illustrative sketch of the idea appears at the end of this post).

Making choices

Debian is pretty bad at making choices. Almost always, when faced with a need to choose between alternative solutions for the same problem, we choose all of them. For example, we support pretty much every init implementation, various implementations of /bin/sh, and we even have at least three entirely different kernels. Sometimes this non-choice is a good thing. Our users may need features that only one of the kernels supports, for example. And we certainly need to be able to provide both mysql and postgresql, since various software we want to provide to our users needs one and won't work with the other. At other times, the inability to choose causes trouble. Do we really need to support more than one implementation of /bin/sh? By supporting both dash and bash for that, we double the load on testing and QA, and introduce yet another variable to deal with into any debugging situation involving shell scripts. Especially for core components of the system, it makes sense to limit the flexibility of users to pick and choose. Combinatorial explosion déjà vu. Every binary choice doubles the number of possible combinations that need to be tested and supported and checked during debugging. Flexibility begets complexity, complexity begets problems. This is less of a problem at upper levels of the software stack. At the very top level, it doesn't really matter if there are many choices.
A user can freely choose between vi and Emacs, and this does not add complexity at the system level, since nothing else is affected by that choice. However, if we were to add a choice between glibc, eglibc, and uClibc for the system C library, then everything else in the system needs to be tested three times rather than once.

Reducing the friction coefficient for system development

Currently, a Debian developer takes upstream code, adds packaging, perhaps adds some patches (using one of several methods), builds a binary package, tests it, uploads it, and waits for the build daemons and the package archive and user-testers to report any problems. That's quite a number of steps to go through for the simple act of adding a new program to Debian, or updating it to a new version. Some of it can be automated, but there are still hoops to jump through. Friction does not prevent you from getting stuff done, but the more friction there is, the more energy you have to spend to get it done. Friction slows down the crucial hack-build-test cycle of software development, and that hurts productivity a lot. Every time a developer has to jump through any hoops, or wait for anything, he slows down. It is, of course, not just a matter of the number of steps. Debian requires a source package to be uploaded with the binary package. Many, if not most, packages in Debian are maintained using version control systems. Having to generate a source package and then wait for it to be uploaded is unnecessary work. The build daemon could get the source from version control directly. With signed commits, this is as safe as uploading a tarball. The above examples are specific to maintaining a single package. The friction that really hurts Debian is the friction of making large-scale changes, or changes that affect many packages. I've already mentioned the difficulty of making large transitions above. Another case is making policy changes, and then implementing them. An excellent example of that in Debian is the policy change to use /usr/share/doc for documentation, instead of /usr/doc. This took us many years to do. We are, I think, perhaps a little better at such things now, but even so, it is something that should not take more than a few days to implement, rather than half a decade.

On the future of distributions

Occasionally, people say things like "distributions are not needed", or that "distributions are an unnecessary buffer between upstream developers and users". Some even claim that there should only be one distribution. I disagree. A common view of a Linux distribution is that it takes some source provided by upstream, compiles that, adds an installer, and gives all of that to the users. This view is too simplistic. The important part of developing a distribution is choosing the upstream projects and their versions wisely, and then integrating them into a whole system that works well. The integration part is particularly important. Many upstreams are not even aware of each other, nor should they need to be, even if their software may need to interact with each other. For example, not every developer of HTTP servers should need to be aware of every web application, or vice versa. (If they had to be, it'd be a combinatorial explosion that'd ruin everything, again.) Instead, someone needs to set a policy of how web apps and web servers interface, what their common interface is, and what files should be put where, for web apps to work out of the box, with minimal fuss for the users.
That's part of the integration work that goes into a Linux distribution. For Debian, such decisions are recorded in the Policy Manual and its various sub-policies. Further, distributions provide quality assurance, particularly at the system level. It's not realistic to expect most upstream projects to do that. It's a whole different skillset and approach that is needed to develop a system, rather than just a single component. Distributions also provide user support, security support, longer term support than many upstreams, and port software to a much wider range of architectures and platforms than most upstreams actively care about, have access to, or even know about. In some cases, these are things that can and should be done in collaboration with upstreams; if nothing else, portability fixes should be given back to upstreams. So I do think distributions have a bright future, but the way they're working will need to change.
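
The snapshot-per-version scheme described above is easy to picture in code. Here is a minimal, hypothetical sketch (not Baserock's actual tooling): it assumes each system version lives in its own btrfs subvolume under a /systems directory, uses "btrfs subvolume snapshot" to clone the current version before upgrading it, and leaves the bootloader step as a stub, much as the post itself waves its hands there.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of snapshot-per-version upgrades and rollback on btrfs."""
import subprocess

SYSTEMS = "/systems"   # assumed btrfs filesystem with one subvolume per system version

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def create_new_version(current: int) -> int:
    """Clone the current system subvolume into a new copy-on-write version."""
    new = current + 1
    run("btrfs", "subvolume", "snapshot",
        f"{SYSTEMS}/version-{current}", f"{SYSTEMS}/version-{new}")
    # ... the upgrade would then be applied inside /systems/version-<new> ...
    return new

def set_boot_version(version: int) -> None:
    """Stub: point the bootloader at the chosen subvolume (details hand-waved)."""
    print(f"(would configure the bootloader to boot {SYSTEMS}/version-{version})")

if __name__ == "__main__":
    new = create_new_version(current=3)   # upgrade: version-3 -> version-4
    set_boot_version(new)
    # If version-4 turns out to be a dud, rolling back is just booting version-3 again:
    set_boot_version(3)
```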

21 February 2010

Michael Banck: 21 Feb 2010

Application Indicators: A Case of Canonical Upstream Involvement and its Problems

I always thought Canonical could do a bit more to contribute to the GNOME project, so I was happy to see the work on application indicators proposed to GNOME. Application indicators are based on the (originally KDE-driven, I believe) proposed cross-desktop status notifier spec. The idea (as I understand it) is to have a consistent way of interacting with status notifiers and stop the confusing mix of panel applets and systray indicators. This is a very laudable goal, as mentioned by Colin Walters:
"First, +5000 that this change is being driven by designers, and +1000 that new useful code is being written. There are definite problems being solved here."
The discussion that followed was very useful, including the comments by Canonical's usability expert Matthew Paul Thomas. Most of the discussion was about the question of how this proposal and spec could best be integrated into GTK, the place where most people seemed to agree this belongs (rather than changing all apps to provide this, it should be a service provided by the platform). However, on the same day, Canonical employee Ted Gould proposed libappindicator as an external dependency. The following thread showed a couple of problems, both technical and otherwise. What I personally disliked is the way Cody Russell and Ted Gould are papering over these issues in the thread that followed. For example, about point one, Ted Gould writes in the proposal:
Q: Shouldn't this be in GTK+?
A: Apparently not.
while he himself said on the same day, on the same mailing list: "Yes, I think GTK/glib is a good place" and nobody was against it (and in fact most people seemed to favor including this in GTK). To the question about why libappindicator is not licensed as usual under the LGPL, version 2.1 or later, Canonical employee Cody Russell even replied:
"Because seriously, everything should be this way. None of us should be saying "LGPL 2.1 or later". Ask a lawyer, even one from the FSF, how much sense it makes to license your software that way."
Not everybody has to love the FSF, but proposing code under mandated copyright assignments which a lot of people have opposed and at the same time insinuating that the FSF was not to be trusted on their next revision of the LGPL license seems rather bold to me. Finally, on the topic of copyright assignments, Ted said:
"Like Clutter for example ;) Seriously though, GNOME already is dependent on projects that require contributor agreements."
It is true that there are (or at least were) GNOME applications which require copyright assignments for contributions (evolution used to be an example, but the requirement was lifted); however, none of the platform modules require this to my knowledge (clutter is an external dependency as well). It seems most people in the GNOME community have the opinion that application indicators should be in GTK at least eventually, so having libappindicator as an external dependency with copyright assignments might work for now but will not be future-proof. In summary, most of the issues could be dealt with by reimplementing it for GTK when the time comes for this spec to be included, but this would mean (i) duplication of effort, (ii) possibly porting all applications twice and (iii) probably no upstream contribution by Canonical. Furthermore, I am amazed at how the Canonical people approach the community for something this delicate (their first major code drop, as far as I am aware). To be fair, neither Ted nor Cody posted the above using their company email addresses, but nevertheless the work is sponsored by Canonical, so their posts to desktop-devel-list could be seen as writing with their Canonical hat on. Canonical does not have an outstanding track record of contributing code to GNOME, and at least to me it seems this case is not doing much to improve things, either.

14 October 2009

Robert McQueen: Telepathy Q&A from the Boston GNOME Summit

The first Telepathy session on Saturday evening at the Boston GNOME Summit was very much a Q&A where Will and I answered various technical and roadmap questions from a handful of developers and downstream distributors. It showed me that there's a fair amount of roadmap information we should do better at communicating outside of the Telepathy project, so in the hope it's useful to others, read on.

MC5

Ted Gould was wondering why Mission Control 4 had an API for getting/setting presence for all of your accounts whereas MC5 does not. MC5 has per-account management of desired/current presence, which is more powerful, but loses the convenience of setting presence in one place. Strictly speaking, doing things like deciding which presence to fall back on (a common example being if you have asked for invisible but the connection doesn't support it) is a UI-level policy which MC should not take care of, but in practice there aren't many different policies which make sense, and the key thing is that MC should tell the presence UI when the desired presence isn't available so it could make a per-account choice if that was preferable. As a related side point, Telepathy should implement more of the invisibility mechanisms in XMPP so it's more reliably available, and then we could more meaningfully tell users which presence values were available before connecting, allowing signing on as invisible. Since MC5 gained support for gnome-keyring, it's not possible to initialise Empathy's account manager object without MC5 prompting the user for their keyring password unnecessarily (especially if the client is Ubuntu's session presence applet and the user isn't using Empathy in the current session but has some accounts configured). Currently the accounts D-Bus API requires that all properties including the password are presented to the client to edit. A short-term fix might be to tweak the spec so that accounts don't have to provide their password property unless it's explicitly queried, but this might break the ABI of tp-glib. Ultimately, passwords being stored and passed around in this way should go away when we write an authentication interface which will pass authentication challenges up to the Telepathy client to deal with, enabling a unified interface for OAuth/Google/etc web token, Kerberos or SIP intermediate proxy authentication, and answering password requests from the keyring lazily or from the user on demand.

Stability & Security

Jonathan Blandford was concerned about the churn level of Telepathy, from the perspective of distributions with long-term support commitments, and how well compatibility will be maintained. Generally the D-Bus API and the tp-glib library APIs are maintained to the GNOME standards of making only additive changes and leaving all existing methods/signals working even if they are deprecated and superseded by newer interfaces. A lot of new interfaces have been added over the past year or so, many of which replace existing functionality with a more flexible or more efficient interface. However, over the next 4-6 months we hope to finalise the new-world interfaces (such as multi-person media calls, roster, authentication, certificate verification, more accurate offline protocol information, chat room property/role management), and make a D-Bus API break to remove the duplicated cruft.
Telepathy-glib would undergo an ABI revision in this case to also remove those symbols, possibly synchronised with a move from dbus-glib to GVariant/etc, but in many cases clients which only use modern interfaces and client helper classes should not need much more than a rebuild. Relatedly, there was a query about the security of Telepathy, and how much it had been through the mill on security issues compared to Pidgin. In the case of closed IM protocols (except MSN, where we have our own implementation) we re-use the code from libpurple, so the same risks apply, although the architecture of Telepathy means it's possible to subject the backend processes to more stringent lockdowns using SELinux or other security isolation such as UIDs, limiting the impact of compromises. Other network code in Telepathy is based on more widely-used libraries with a less chequered security history thus far.

OTR

The next topic was about support for OTR in Telepathy. Architecturally, it's hard for us to support the same kind of message-mangling plugins as Pidgin allows because there is no one point in Telepathy that messages go through. There are multiple backends depending on the protocol, multiple UIs can be involved in saving (eg a headless logger) or displaying messages (consider GNOME Shell integration which previews messages before passing conversations on to Empathy), and the only other centralised component (Mission Control 5) does not act as an intermediary for messages. Historically, we've always claimed OTR to be less appealing than native protocol-level end-to-end encryption support, such as the proposals for Jingle + peer-to-peer XMPP + TLS which are favoured by the XMPP community, mostly because if people can switch to an unofficial 3rd party client to get encryption, they could switch to a decent protocol too, and because protocol-level support can encrypt other traffic like SRTP call set-up, presence, etc. However, there is an existing deployed OTR user base, including the likes of Adium users on the Mac, who might often end up using end-to-end encryption without being aware of it, and whom we would be doing a disservice if Telepathy did not support OTR conversations with them. This is a compelling argument which was also made to me by representatives from the EFF, and the only one to date which actually held some merit with me compared to just implementing XMPP E2E encryption. Later in the summit we went on to discuss how we might achieve it in Telepathy, and how our planned work towards XMPP encryption could also help.

Tubes

We also had a bit of discussion about Tubes, such as how the handlers are invoked. Since the introduction of MC5, clients can register interest in certain channel types (tubes or any other) by implementing the client interface and making filters for the channels they are interested in. MC5 will first invoke all matching observers for all channels (incoming and outgoing) until all of them have responded or timed out (eg to let a logger daemon hook up signal callbacks before channel handling proceeds), then all matching approvers for incoming channels until one of them replies (eg to notify the user of the new channel before launching the full UI), and then send the channel to the handler with the most specific filter (eg Tomboy could register for file transfers with the right MIME type and receive those in preference to Empathy, whose filter has no type on it).
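
To make the dispatch order described above a little more concrete, here is a toy model of it in Python. This is purely illustrative and not the real Telepathy/MC5 API: filters are plain dictionaries, observers all get told about every matching channel, approvers gate incoming channels, and the handler whose matching filter is most specific wins (so a Tomboy-style "file transfer with this MIME type" filter beats an Empathy-style bare "file transfer" filter).

```python
"""Toy model of MC5-style channel dispatch (illustrative only, not the real API)."""

def matches(filt, channel):
    """A filter matches if every key/value it names is present in the channel."""
    return all(channel.get(k) == v for k, v in filt.items())

def dispatch(channel, observers, approvers, handlers):
    # 1. All matching observers see the channel first (e.g. a headless logger).
    for name, filt, observe in observers:
        if matches(filt, channel):
            observe(channel)
    # 2. For incoming channels, approvers are consulted; if none accepts, stop
    #    (e.g. a notification the user dismisses instead of answering).
    if channel.get("incoming"):
        if not any(approve(channel) for filt, approve in approvers if matches(filt, channel)):
            return None
    # 3. The matching handler with the most specific filter gets the channel.
    candidates = [(len(filt), name) for name, filt in handlers if matches(filt, channel)]
    return max(candidates)[1] if candidates else None

if __name__ == "__main__":
    channel = {"type": "file-transfer", "mime": "text/x-note", "incoming": True}
    observers = [("logger", {}, lambda ch: print("logged:", ch["type"]))]
    approvers = [({}, lambda ch: True)]
    handlers = [("Empathy", {"type": "file-transfer"}),
                ("Tomboy", {"type": "file-transfer", "mime": "text/x-note"})]
    print("handled by:", dispatch(channel, observers, approvers, handlers))
```
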
Tubes can be shared with chat rooms, either as a stream tube where one member shares a socket for others to connect to (allowing re-sharing an existing service implementation), or a D-Bus tube where every member's application is one endpoint on a simulated D-Bus bus, and Telepathy provides a mapping between the D-Bus names and the members of the room. In terms of Tube applications, now that we've got working A/V calling in Empathy, as well as desktop sharing and an R&D project on multi-user calls, our next priority is on performance and Tube-enabling some more apps such as collaborative editing (Gobby, AbiWord, GEdit, Tomboy?). There was a question about whether Tube handlers can be installed on demand when one of your contacts initiates that application with you. It'd be possible to simulate this by finding out (eg from the archive) which handlers are available, and dynamically registering a handler for all of those channel types, so that MC5 will advertise those capabilities, but also register as an approver. When an incoming channel actually arrives at the approval stage, prompt the user to install the required application and then tell MC5 to invoke it as the handler. Colin Walters asked about how Telepathy did NAT traversal. Currently, Telepathy makes use of libnice to do ICE (like STUN between every possible pair of addresses both parties have, works in over 90% of cases) for the UDP packets involved in calls signalled over XMPP, either the Google Talk variant which can benefit from Google's relay servers if one or other party has a Google account, so is more reliable, or the latest IETF draft which can theoretically use TURN relays but it's not really hooked up in Telepathy and few people have access to them. XMPP file transfers and one-to-one tube connections use TCP, which is great if you have IPv6, but otherwise impossible to NAT traverse reliably, so often ends up using strictly rate-limited SOCKS5-ish XMPP proxies, or worse, in-band base64 in the XML stream. We hope to incorporate (and standardise in XMPP) a reliability layer which will allow us to use Jingle and ICE-UDP for file transfers and tubes too, allowing peer-to-peer connections and higher-bandwidth relays to enhance throughput significantly.

Future

Ted Gould had some good questions about the future of Telepathy:
Should Empathy just disappear on the desktop as things like presence applets or GNOME Shell take over parts of its function? Maybe, yes. In some ways its goal is just to bring Telepathy to end users and the desktop so that it's worth other things integrating into Telepathy, but Telepathy allows us to do a lot better than a conventional IM client. Maemo and Sugar on the OLPC use Telepathy but totally integrate it into the device experience rather than having any single distinct IM client, and although Moblin uses Empathy currently, it has its own presence chooser and people panel, and may go on to replace other parts of the user experience too. GNOME Shell looks set to move in this direction too and use Telepathy to integrate communications with the desktop workflow.

Should Telepathy take care of talking to social networking sites such as Facebook, Twitter, etc? There's no hard and fast rule. Telepathy only makes sense for real-time communications, so it's good for exposing and integrating Facebook chat, but pretty lame for dealing with wall posts, event invitations and the like. Similarly on the N900, Telepathy is used for the parts of the cellular stack that overlap with real-time communications like calling and SMS, but there is no sense pushing unrelated stuff like configuration messages through it. For Twitter, the main question is whether you actually want tweets to appear in the same UI, logging and notification framework as other messages. Probably not anything but the 1-to-1 tweets, meaning something like Moblin's Mojito probably makes more sense for that. Later in the summit I took a look at the Google Latitude APIs, which seem like something which Telepathy can expose via its contact location interface, but probably not usefully until we have support for metacontacts in the desktop.

Can/will Telepathy support IAX2? It can, although we'd have to do a local demultiplexer for the RTP streams involved in separate calls. It's not been a priority of ours so far, but we can help people get started (or Collabora can chat to people who have a commercial need for it). Similarly, nobody has looked at implementing iChat-compatible calling because our primary interest lies with open protocols, but if people were interested we could give pointers; it's probably just SIP and RTP after you dig through a layer of obfuscation or two.
If you want to know more about Telepathy feel free to comment with some follow-up questions, talk to us in #telepathy on Freenode, or post to the mailing list.

15 December 2005

Colin Walters: Did a human actually write this?

Or was it generated by a computer picking random buzzwords? Take a guess.
The semantic mark-up languages would be used to specify the convergence of business and IT models, and more importantly, their metamodels. The ontological representation of the metamodel of a constituent model enables reasoning about the instance model, which enables a dependency analysis to deduce unknown or implied relationships among entities within the instance model. The analysis would be extended across multiple layers of models. The semantic model-based dependency analysis would reveal which entity has an impact on which entities (for example, business components and processes, performance indicators, IT systems, software classes and objects, among others) of the model. This semantic model-based analysis would be applied to a model that provides an introspective view of the business within an enterprise. Also, it would be applied to a value network that yields an extrospective view of businesses in an ecosystem.

23 November 2005

Antti-Juhani Kaijanaho: On build-essential

The build-essential package is not a definition of the build-essential policy. The actual definition is given in the policy manual:
It is not necessary to explicitly specify build-time relationships on a minimal set of packages that are always needed to compile, link and put in a Debian package a standard "Hello World!" program written in C or C++. The required packages are called build-essential, and an informational list can be found in /usr/share/doc/build-essential/list (which is contained in the build-essential package).
The build-essential package is an informational list; it is documentation. The idea of the build-essential package is to document which packages are always needed to compile, link and put in a Debian package a standard "Hello World!" program written in C or C++. The build-essential package maintainer's main job is to make sure that the informational list is correct. When I maintained build-essential, I got multiple bug reports asking (and sometimes demanding) that debhelper be added to the list of build-essential packages. I closed all such reports (#64827, #70014; I thought there were more but can't find them) with a suggestion that the proper place to make this request is the policy list, not the build-essential package bug list. In fact, when I asked for a new maintainer (Colin Walters ended up taking the package), I specifically listed the following requirement:
I consider the capacity to deal with stupid requests (like "plz add debhelper"; lately, there have been not so many of these) and an understanding of the relationship between this package and Policy as essential qualities of the maintainer of this package.
I'm a little concerned that the current build-essential maintainer apparently does not understand the relationship between the package and policy. For the build-essential package maintainer, it is irrelevant whether debhelper is used by 92% of the archive. That argument might be a good one, but the proper place to make that argument is the policy list, seeking to amend policy in such a way that debhelper is included in the set of build-essential packages. But it is not the job of the build-essential package maintainer to make that decision!