Search Results: "walters"

1 May 2020

Utkarsh Gupta: FOSS Activities in April 2020

Here's my (seventh) monthly update about the activities I've done in the F/L/OSS world.

Debian
It's been 14 months since I started contributing to Debian, and 4 months since I became a Debian Developer. In this beautiful time, I've had the opportunity to do and learn lots of new and interesting things and, most importantly, to meet and interact with lots of lovely people!
Debian is $home.

Uploads:

Other $things:
  • Attended Ruby team meeting. Logs here.
  • Attended Perl team LHF. Report here.
  • Sponsored a lot of uploads for William Desportes and Adam Cecile.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Applied for DUCI project for Google Summer of Code 2020.

Ruby2.7 Migration:
Ruby2.7 was released on 25th December 2019 (Santa's gift, believe it or not!). We, the Debian Ruby team, have been trying hard to make it migrate to testing, and it finally happened: the default version in testing is now ruby2.7. Here's the news! \o/
Here's what I worked on this month for this transition.

Upstream: Opened several issues and proposed patches (in the form of PRs):
  • Issue #35 against encryptor for Ruby2.7 test failures.
  • Issue #28 against image_science for removing relative paths.
  • Issue #106 against ffi-yajl for Ruby2.7 test failures.
  • PR #5 against aggregate for simply using require.
  • PR #6 against aggregate for modernizing CI and adding Ruby 2.5 and 2.7 support.
  • Issue #13 against espeak-ruby for Ruby2.7 test failures.
  • Issue #4 against tty-which for test failures in general.
  • Issue #11 against packable for Ruby2.7 test failures. PR #12 has been proposed.
  • Issue #10 against growl for test failures and proposed an initial patch.

Downstream: I fixed and uploaded the following packages in Debian:

Debian LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
This was my seventh month as a Debian LTS paid contributor. I was assigned 24.00 hours and worked on the following things:

CVE Fixes and Announcements:

Other LTS Work:

Other(s)
Sometimes it gets hard to fit work into a particular category.
That's why I am writing about all of those things under this one.
It includes the following two sub-categories.

Personal: This month I could get the following things done:
  • Most importantly, I finally migrated to a new website. Huge UI improvement! \o/
    Moving from Jekyll to Hugo was not easy, but it was worth it! Many thanks to Luiz for writing hugo-coder, and to Clement and Samyak!
    If you find any flaws, issues and pull requests are welcome at utkarsh2102/utkarsh2102.com
  • Wrote battery-alert, a mini-project of my own to show battery alerts at <10% and >90%.
    Written in shell, it brings me a lot of satisfaction, as it has saved my life on many occasions; a rough sketch of the idea follows right after this list.
    And guess what? It has more users than just myself!
    Reviews and patches are welcome \o/
  • Mentored in HackOn Hackathon. Thanks to Manvi for reaching out!
    It was fun to see people developing some really nice projects.
  • Thanks to Ray and John, I became a GitLab Hero!
    (I am yet to figure out my role and responsibility, though)
  • Attended Intro Sec Con and had the most fun!
    Heard Ian's keynote, attended other talks, and learned how to use Wireshark!
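
The gist of battery-alert is simple; here is a rough sketch of the idea (not the actual battery-alert code), assuming a battery exposed at /sys/class/power_supply/BAT0 and notify-send available for desktop notifications:
#!/bin/sh
# Poll the battery and alert near the extremes (sketch only).
BAT=/sys/class/power_supply/BAT0
while true; do
    cap=$(cat "$BAT/capacity")
    status=$(cat "$BAT/status")
    if [ "$status" = "Discharging" ] && [ "$cap" -lt 10 ]; then
        notify-send "Battery low" "${cap}% left, plug in the charger"
    elif [ "$status" = "Charging" ] && [ "$cap" -gt 90 ]; then
        notify-send "Battery charged" "${cap}%, unplug the charger"
    fi
    sleep 120
done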

Open Source: Again, this contains all the things that I couldn't categorize earlier.
Opened several issues and pull requests:
  • Issue #297 against hugo-coder, asking to enable RSS feed for blogs.
  • PR #316 for hugo-coder for fixing the above issue myself.
  • Issue #173 against arbre for requesting a release.
  • Issue #104 against combustion, asking to relax dependency on rubocop. Fixed in this commit.
  • Issue #16 against ffi-compiler for requesting to fix homepage and license.
  • Issue #57 against gographviz for requesting a release.
  • Issue #14 against crb-blast, suggesting compatibility with bio 2.0.x.
  • Issue #58 against uniform_notifier for asking to drop the use of ruby-growl.
  • PR #2072 for polybar, adding installation instructions on Debian systems.

Until next time.
:wq for today.

18 March 2020

Antoine Beaupré: How can I trust this git repository?

Join me in the rabbit hole of git repository verification, and how we could improve it.

Problem statement As part of my work on automating install procedures at Tor, I ended up doing things like:
git clone REPO
./REPO/bootstrap.sh
... something eerily similar to the infamous curl pipe bash method which I often decry. As a short-term workaround, I relied on the SHA-1 checksum of the repository to make sure I have the right code, by running this both on a "trusted" (ie. "local") repository and the remote, then visually comparing the output:
$ git show-ref master
9f9a9d70dd1f1e84dec69a12ebc536c1f05aed1c refs/heads/master
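As an aside, one way to get the remote side's view of that ref without a shell on the server is git ls-remote (the URL below is just a placeholder):
$ git ls-remote https://git.example.org/REPO.git refs/heads/master
9f9a9d70dd1f1e84dec69a12ebc536c1f05aed1c        refs/heads/master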
One problem with this approach is that SHA-1 is now considered as flawed as MD5, so it can't be used as an authentication mechanism anymore. It's also fundamentally difficult for humans to compare hashes. The other flaw with comparing local and remote checksums is that we assume we trust the local repository. But how can I trust that repository? I can either:
  1. audit all the code present and all the changes done to it afterwards
  2. or trust someone else to do so
The first option here is not practical in most cases. In this specific use case, I have audited the source code -- I'm the author, even -- what I need is to transfer that code over to another server. (Note that I am replacing those procedures with Fabric, which makes this use case moot for now as the trust path narrows to "trust the SSH server" which I already had anyways. But it's still important for my fellow Tor developers who worry about trusting the git server, especially now that we're moving to GitLab.) But anyways, in most cases, I do need to trust some other fellow developer I collaborate with. To do this, I would need to trust the entire chain between me and them:
  1. the git client
  2. the operating system
  3. the hardware
  4. the network (HTTPS and the CA cartel, specifically)
  5. then the hosting provider (and that hardware/software stack)
  6. and then backwards all the way back to that other person's computer
I want to shorten that chain as much as possible, to make it "peer to peer", so to speak. Concretely, that would eliminate the hosting provider and the network as attackers.

OpenPGP verification My first reaction is (perhaps perversely) to "use OpenPGP" for this. I figured that if I sign every commit, then I can just check the latest commit and see if the signature is good. The first problem here is that this is surprisingly hard. Let's pick some arbitrary commit I did recently:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching "test" by default. We inline
    tests inside the source code directly, so hijack that.
diff --git a/fabric_tpa/pytest.ini b/fabric_tpa/pytest.ini
new file mode 100644
index 0000000..71004ea
--- /dev/null
+++ b/fabric_tpa/pytest.ini
@@ -0,0 +1,3 @@
+[pytest]
+# we inline tests directly in the source code
+python_files = *.py
That's the output of git log -p in my local repository. I signed that commit, yet git log is not telling me anything special. To check the signature, I need something special: --show-signature, which looks like this:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature faite le lun 16 mar 2020 14:37:53 EDT
gpg:                avec la clef RSA 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Bonne signature de « Antoine Beaupré <anarcat@orangeseeds.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@torproject.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@anarc.at> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@koumbit.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@debian.org> » [ultime]
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching "test" by default. We inline
    tests inside the source code directly, so hijack that.
Can you tell if this is a valid signature? If you speak a little French, maybe you can! But even if you do, you are unlikely to see that output on your own computer. What you would see instead is:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature made Mon Mar 16 14:37:53 2020 EDT
gpg:                using RSA key 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Can't check signature: No public key
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching "test" by default. We inline
    tests inside the source code directly, so hijack that.
The important part: Can't check signature: No public key. No public key. Because of course you would see that: why would you have my key lying around, unless you're me? Or, to put it another way, why would that server I'm installing from scratch have a copy of my OpenPGP certificate? Because I'm a Debian developer, my key is actually part of the 800 keys in the debian-keyring package, signed by the APT repositories. So I have a trust path. But that won't work for someone who is not a Debian developer. It will also stop working when my key expires in that repository, as it already has on Debian buster (current stable). So I can't assume I have a trust path there either. That said, one could work with a trusted keyring like we do in the Tor and Debian projects, and only work inside that project. But I still feel uncomfortable with those commands. Both git log and git show will happily succeed (return code 0 in the shell) even though the signature verification failed on the commits. Same with git pull and git merge, which will happily move your branch ahead even if the remote has unsigned or badly signed commits. To actually verify commits (or tags), you need the git verify-commit (or git verify-tag) command, which seems to do the right thing:
$ LANG=C.UTF-8 git verify-commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature made Mon Mar 16 14:37:53 2020 EDT
gpg:                using RSA key 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Can't check signature: No public key
[1]$
At least it fails with some error code (1, above). But it's not flexible: I can't use it to verify that a "trusted" developer (say one that is in a trusted keyring) signed a given commit. Also, it is not clear what a failure means. Is a signature by an expired certificate okay? What if the key is signed by some random key in my personal keyring? Why should that be trusted?
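One workaround I can think of is to point GnuPG at a dedicated keyring containing only the certificates of trusted developers, for example through a throwaway GNUPGHOME; this is only a sketch, and the keyring file name is hypothetical:
$ export GNUPGHOME=$(mktemp -d)
$ gpg --import tor-trusted-developers.asc
$ git verify-commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000 && echo "signed by a trusted key"
It is still GnuPG underneath, with all the caveats discussed in the next section, but at least the set of accepted certificates is explicit.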

Worrying about git and GnuPG In general, I'm worried about git's implementation of OpenPGP signatures. There have been numerous cases of interoperability problems with GnuPG specifically that led to security issues, like EFAIL or SigSpoof. It would be surprising if such a vulnerability did not exist in git. Even if git did everything "just right" (which I have myself found impossible to do when writing code that talks with GnuPG), what does it actually verify? The commit's SHA-1 checksum? The tree's checksum? The entire archive as a zip file? I would bet it signs the commit's SHA-1 sum, but I just don't know off the top of my head, and neither git-commit nor git-verify-commit says exactly what is happening. I had an interesting conversation with a fellow Debian developer (dkg) about this and we had to admit those limitations:
<anarcat> i'd like to integrate pgp signing into tor's coding practices more, but so far, my approach has been "sign commits" and the verify step was "TBD" <dkg> that's the main reason i've been reluctant to sign git commits. i haven't heard anyone offer a better subsequent step. if torproject could outline something useful, then i'd be less averse to the practice. i'm also pretty sad that git remains stuck on sha1, esp. given the recent demonstrations. all the fancy strong signatures you can make in git won't matter if the underlying git repo gets changed out from under the signature due to sha1's weakness
In other words, even if git implements the arcane GnuPG dialect just so, and would allow us to set up the trust chain just right, and would give us meaningful and workable error messages, it still would fail because it's still stuck in SHA-1. There is work underway to fix that, but in February 2020, Jonathan Corbet described that work as being in a "relatively unstable state", which is hardly something I would like to trust to verify code. Also, when you clone a fresh new repository, you might get an entirely different repository, with a different root and set of commits. The concept of "validity" of a commit, in itself, is hard to establish in this case, because a hostile server could put you backwards in time, on a different branch, or even on an entirely different repository. Git will warn you about a different repository root with warning: no common commits but that's easy to miss. And complete branch switches, rebases and resets from upstream are hardly more noticeable: only a tiny plus sign (+) instead of a star (*) will tell you that a reset happened, along with a warning (forced update) on the same line. Miss those and your git history can be compromised.
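For what it's worth, one can at least see what the signature covers by dumping the raw commit object: the signature is stored in a gpgsig header and covers the rest of the commit object, which references the tree and parents only by their SHA-1 hashes (output abbreviated here):
$ git cat-file commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
tree ...
parent ...
author Antoine Beaupré <anarcat@debian.org> ...
committer Antoine Beaupré <anarcat@debian.org> ...
gpgsig -----BEGIN PGP SIGNATURE-----
 [...]
 -----END PGP SIGNATURE-----

fix test autoloading
So the strength of the signature ultimately rests on those SHA-1 references, which is exactly dkg's concern above.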

Possible ways forward I don't consider the current implementation of OpenPGP signatures in git to be sufficient. Maybe, eventually, it will mature away from SHA-1 and the interface will be more reasonable, but I don't see that happening in the short term. So what do we do?

git evtag The git-evtag extension is a replacement for git tag -s. It's not designed to sign commits (it only verifies tags) but at least it uses a stronger algorithm (SHA-512) to checksum the tree, and will include everything in that tree, including blobs. If that sounds expensive to you, don't worry too much: it takes about 5 seconds to tag the Linux kernel, according to the author. Unfortunately, that checksum is then signed with GnuPG, in a manner similar to git itself, in that it exposes GnuPG output (which can be confusing) and is likely similarly vulnerable to mis-implementation of the GnuPG dialect as git itself. It also does not allow you to specify a keyring to verify against, so you need to trust GnuPG to make sense of the garbage that lives in your personal keyring (and, trust me, it doesn't). And besides, git-evtag is fundamentally the same as signed git tags: checksum everything and sign with GnuPG. The difference is it uses SHA-512 instead of SHA-1, but that's something git will eventually fix itself anyways.
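Usage-wise it is meant as a drop-in for the signed-tag workflow; something like this, assuming the sign and verify subcommands work as documented upstream (the tag name is just an example):
$ git evtag sign v2020.1
$ git evtag verify v2020.1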

kernel patch attestations The kernel also faces this problem. Linus Torvalds signs the releases with GnuPG, but patches fly all over mailing lists without any form of verification apart from clear-text email. So Konstantin Ryabitsev has proposed a new protocol to sign git patches which uses SHA256 to checksum the patch metadata, commit message and the patch itself, and then sign that with GnuPG. It's unclear to me what this solves, if anything at all. As dkg argues, it would seem better to add OpenPGP support to git-send-email and teach git tools to recognize that (e.g. git-am), at least if you're going to keep using OpenPGP anyways. And furthermore, it doesn't resolve the problems associated with verifying a full archive either, as it only attests "patches".

jcat Unhappy with the current state of affairs, the author of fwupd (Richard Hughes) wrote his own protocol as well, called jcat, which provides signed "catalog files" similar to the ones provided in Microsoft Windows. It consists of "gzip-compressed JSON catalog files, which can be used to store GPG, PKCS-7 and SHA-256 checksums for each file". So yes, it is yet again another wrapper around GnuPG, probably with all the flaws detailed above, on top of being a niche implementation, disconnected from git.

The Update Framework One more thing dkg correctly identified is:
<dkg> anarcat: even if you could do exactly what you describe, there are still some interesting wrinkles that i think would be problems for you. the big one: "git repo's latest commits" is a loophole big enough to drive a truck through. if your adversary controls that repo, then they get to decide which commits to include in the repo. (since every git repo is a view into the same git repo, just some have more commits than others)
In other words, unless you have a repository that has frequent commits (either because of activity or by a bot generating fake commits), you have to rely on the central server to decide what "the latest version" is. This is the kind of problem that binary package distribution systems like APT and TUF solve correctly. Unfortunately, those don't apply to source code distribution, at least not in git form: TUF only deals with "repositories" and binary packages, and APT only deals with binary packages and source tarballs. That said, there's actually no reason why git could not support the TUF specification. Maybe TUF could be the solution to ensure end-to-end cryptographic integrity of the source code itself. OpenPGP-signed tarballs are nice, and signed git tags can be useful, but from my experience, a lot of OpenPGP (or, more accurately, GnuPG) derived tools are brittle and do not offer clear guarantees, and definitely not to the level that TUF tries to address. This would require changes on the git servers and clients, but I think it would be worth it.

Other Projects

OpenBSD There are other tools trying to do parts of what GnuPG is doing, for example minisign and OpenBSD's signify. But they do not integrate with git at all right now. Although I did find a hack to use signify with git, it's kind of gross...

Golang Unsurprisingly, this is a problem everyone is trying to solve. Golang is planning on hosting a notary which would leverage a "certificate-transparency-style tamper-proof log" which would be run by Google (see the spec for details). But that doesn't resolve the "evil server" attack, if we treat Google as an adversary (and we should).

Python Python had OpenPGP going for a while on PyPI, but it's unclear if it ever did anything at all. Now the plan seems to be to use TUF but my hunch is that the complexity of the specification is keeping that from moving ahead.

Docker Docker and the container ecosystem have, in theory, moved to TUF in the form of Notary, "a project that allows anyone to have trust over arbitrary collections of data". In practice however, in my somewhat limited experience, setting up TUF and image verification in Docker is far from trivial.

Android and iOS Even in what is possibly one of the strongest models (at least in terms of user friendliness), mobile phones are surprisingly unclear about those kinds of questions. I had to ask if Android had end-to-end authentication and I am still not clear on the answer. I have no idea what iOS does.

Conclusion One of the core problems with everything here is the common usability aspect of cryptography, and specifically the usability of verification procedures. We have become pretty good at encryption. The harder part (and a requirement for proper encryption) is verification. It seems that problem still remains unsolved, in terms of usability. Even Signal, widely considered to be a success in terms of adoption and usability, doesn't properly solve that problem, as users regularly ignore "The security number has changed" warnings... So, even though they deserve a lot of credit in other areas, it seems unlikely that hardcore C hackers (e.g. git and kernel developers) will be able to resolve that problem without at least a little bit of help. And since TUF seems like the state-of-the-art specification around here, it would seem wise to start adopting it in the git community as well. Update: git 2.26 introduced a new gpg.minTrustLevel setting to "tell various signature verification codepaths the required minimum trust level", presumably to control how Git will treat keys in your keyrings, assuming the "trust database" is valid and up to date. For an interesting narrative of how "normal" (without PGP) git verification can fail, see also A Git Horror Story: Repository Integrity With Signed Commits.
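For example, to make verification reject otherwise-valid signatures made by keys below a given trust level in your keyring (a minimal illustration of that setting, not a fix for the broader problems above):
$ git config --global gpg.minTrustLevel fully
$ git verify-commit HEAD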

7 November 2014

Andrew Cater

At mini-Debconf Cambridge:

Much unintentional chaos and hilarity and world class problem solving yesterday.

A routine upgrade from Wheezy to Jessie died horribly on my laptop when the UEFI variable space filled up, leaving "No Operating System" on screen.

Cue much running around: Chris Boot, Colin Walters and Steve dug around, booted the system using a rescue CD, and so on. Lots more digging, including helpful posts by mjg59 - a BIOS update may solve the problem.

Flashing the BIOS did clear the variables and variable space, and it all worked perfectly thereafter. This had the potential to turn the laptop into a brick under UEFI (while still working under legacy boot).

As it is, it all worked perfectly - but where else would you get _the_ Grub maintainer, 2 x UEFI experts and a broken laptop all in the same room?

If it hadn't happened yesterday, it would have happened at home and I'd have been left with nothing. As it is, we all learnt/remembered stuff and had a useful time fixing it.

31 July 2012

Martin Pitt: My impressions from GUADEC

I have had the pleasure of attending GUADEC in full this year. TL;DR: A lot of great presentations + lots of hall conversations about QA stuff + the obligatory beer and beach. Last year I just went to the hackfest, and I never made it to any previous one, so GUADEC 2012 was a kind of first-time experience for me. It was great to put some faces and personal impressions to a lot of the people I have worked with for many years, as well as catching up with others that I did meet before. I discussed various hardware/software testing stuff with Colin Walters, Matthias Clasen, Lennart Poettering, Bertrand Lorentz, and others, so I have a better idea how to proceed with automated testing in plumbing and GNOME now. It was also really nice to meet my fellow PyGObject co-maintainer Paolo Borelli, as well as seeing Simon Schampier and Ignacio Casal Quinteiro again. No need to replicate the whole schedule here (see for yourself on the interwebs), so I just want to point out some personal highlights in the presentations: There were a lot of other good ones, some of them technical and some amusing and enlightening, such as Frederico's review of the history of GNOME. On Monday I prepared and participated in a PyGObject hackfest, but I'll blog about this separately. I want to thank all presenters for the excellent program, as well as the tireless GUADEC organizer team for making everything so smooth and working flawlessly! Great job, and see you again in Strasbourg or Brno!

8 January 2012

Russell Coker: My First Cruise

A few weeks ago I went on my first cruise, from Sydney to Melbourne on the Dawn Princess. VacationsToGo.com (a discount cruise/resort web site) has a review of the Dawn Princess [1], they give it 4 stars out of a possible 6. The 6 star ships seem to have discount rates in excess of $500 per day per person, much more than I would pay. The per-person rate is based on two people sharing a cabin, it seems that most cabins can be configured as a double bed or twin singles. If there is only one person in a cabin then they pay almost double the normal rate. It seems that most cruise ships have some support for cabins with more than two people (at a discount rate), but the cabins which support that apparently sell out early and don't seem to be available when booking a cheap last-minute deal over the Internet. So if you want a cheap cruise then you need to have an even number of people in your party. The cruise I took was two nights and cost $238 per person, it was advertised at something like $220 but then there are extra fees when you book (which seems to be the standard practice).

The Value of Cruises To book a hotel room that is reasonably comfortable (4 star) in Melbourne or Sydney you need to spend more than $100 per night for a two person room if using Wotif.com. The list price of a 4 star hotel room for two people in a central city area can be well over $300 per night. So the cost for a cruise is in the range of city hotel prices. The Main Dining Room (MDR) has a quality of food and service that compares well with city restaurants. The food and service in the Dawn Princess MDR wasn't quite as good as Walter's Wine Bar (one of my favorite restaurants). But Walter's costs about $90 for a four course meal. The Dawn Princess MDR has a standard 5 course meal (with a small number of options for each course) and for no extra charge you can order extra serves. When you make it a 7 course meal the value increases. I really doubt that I could find any restaurant in Melbourne or Sydney that would serve a comparable meal for $119. You could consider a cruise to be either paying for accommodation and getting everything else for free or to be paying for fine dining in the evening and getting everything else for free. Getting both for the price of one (along with entertainment etc) is a great deal! I can recommend a cruise as a good holiday which is rather cheap if you do it right. That is if you want to spend lots of time swimming and eating quality food.

How Cruise Companies Make Money There are economies of scale in running a restaurant, so having the MDR packed every night makes it a much more economic operation than a typical restaurant which has quiet nights. But the expenses in providing the services (which involves a crew that is usually almost half the number of passengers) are considerable. Paying $119 per night might cover half the wages of an average crew member but not much more. The casino is one way that the cruise companies make money. I can understand that someone taking a luxury vacation might feel inclined to play blackjack or something else that seems sophisticated. But playing poker machines on a cruise ship is rather sad - not that I'm complaining, I'm happy for other people to subsidise my holidays! Alcohol is rather expensive on board. Some cruise companies allow each passenger to take one bottle of wine and some passengers try to smuggle liquor on board. On the forums some passengers report that they budget to spend $1000 per week on alcohol!
If I wanted a holiday that involved drinking that much I'd book a hotel at the beach, mix up a thermos full of a good cocktail in my hotel room, and then take my own deck-chair to the beach. It seems that the cruise companies specialise in extracting extra money from passengers (I don't think that my experience with the Dawn Princess is unusual in any way). Possibly the people who pay $1000 per night or more for a cruise don't get the nickel-and-dime treatment, but for affordable cruises I think it's standard. You have to be in the habit of asking the price whenever something is offered and be aware of social pressure to spend money. When I boarded the Dawn Princess there was a queue, which I joined as everyone did. It turned out that the queue was to get a lanyard for holding the key-card (which opens the cabin door and is used for payment). After giving me the lanyard they then told me that it cost $7.95, so I gave it back. Next time I'll take a lanyard from some computer conference and use it to hold the key-card, it's handy to have a lanyard but I don't want to pay $7.95. Finally, some things are free at some times but not at others: fruit juice is free at the breakfast buffet but expensive at the lunch buffet. Coffee at the MDR is expensive but it was being served for free at a cafe on deck.

How to have a Cheap Cruise VacationsToGo.com is the best discount cruise site I've found so far [2]. Unfortunately they don't support searching on price, average daily price, or on a customised number of days (I can search for 7 days but not 7 or less). For one of the cheaper vessels it seems that anything less than $120 per night is a good deal and there are occasional deals as low as $70 per night. Princess cruises allows each passenger to bring one bottle of wine on board. If you drink that in your cabin (to avoid corkage fees) then that can save some money on drinks. RumRunnerFlasks.com sells plastic vessels for smuggling liquor on board cruise ships [3]. I wouldn't use one myself but many travelers recommend them highly. Chocolate and other snack foods are quite expensive on board and there are no restrictions on bringing your own, so the cheap options are to bring your own snack food or to snack from the buffet (which is usually open 24*7). Non-alcoholic drinks can be expensive but you can bring your own and use the fridge in your cabin to store it, but you have to bring cans or pressurised bottles so it doesn't look like you are smuggling liquor on board. Generally try not to pay for anything on board, there's enough free stuff if you make good choices. Princess offers free on-board credit (money for buying various stuff on-board) for any cruise that you book while on a cruise. The OBC starts at $25 per person and goes as high as $150 per person depending on how expensive the cruise is. Generally booking cruises while on-board is a bad idea as you can't do Internet searches. But as Princess apparently doesn't allow people outside the US to book through a travel agent, and as they only require a refundable deposit that is not specific to any particular cruise, there seems no down-side. In retrospect I should have given them a $200 deposit on the off chance that I'll book another cruise with them some time in the next four years. Princess provide a book of discount vouchers in every cabin, mostly this is a guide to what is most profitable for them and thus what you should avoid if you want a cheap holiday.
But there are some things that could be useful, such as a free thermos cup with any cup of coffee - if you buy coffee then you might as well get the free cup. Also they have some free contests that might be worth entering.

Entertainment It's standard practice to have theatrical shows on board, some sort of musical is standard and common options include a magic show and comedy (it really depends on which cruise you take). On the Dawn Princess the second seating for dinner started at 8PM (the time apparently varies depending on the cruise schedule) which was the same time as the first show of the evening. I get the impression that this sort of schedule is common, so if you want to see two shows in one night then you need to have the early seating for dinner. The cruise that I took lasted two nights and had two shows (a singing/dancing show and a magic show), so it was possible to have the late seating for dinner and still see all the main entertainment unless you wanted to see one show twice. From reading the CruiseCritic.com forum [4] I get the impression that the first seating for dinner is the most popular. On some cruises it's easy to switch from first to second seating but not always possible to switch from second to first. Therefore the best strategy seems to be to book the first seating.

Things to do Before Booking a Cruise Read the CruiseCritic.com forum for information about almost everything. Compare prices for a wide variety of cruises to get a feel for what the best deals are. While $100 per night is a great deal for the type of cruise that interests me and is in my region, it may not be a good match for the cruises that interest you. Read overview summaries of cruise lines that operate in your area. Some cruise lines cater for particular age groups and interests and are thus unappealing to some people, e.g. anyone who doesn't have children probably won't be interested in Disney cruises. Read reviews of the ships, there is usually a great variation between different ships run by one line. One factor is when the ships have been upgraded with recently developed luxury features. Determine what things need to be booked in advance. Some entertainment options on board support a limited number of people and get booked out early. For example if you want to use the VR golf simulator on the Dawn Princess you should probably check in early and make a reservation as soon as you are on board. The forums are good for determining what needs to be booked early. Also see my post about booking a cruise and some general discussion of cruise related things [5].

28 October 2011

Lars Wirzenius: On the future of Linux distributions

Executive summary Debian should consider:

Introduction I've taken a job with Codethink, as part of a team to develop a new embedded Linux system, called Baserock. Baserock is not a distribution as such, but deals with many of the same issues. I thought it might be interesting to write out my new thoughts related to this, since they might be useful for proper distributions to think about too. I'll hasten to add that many of these thoughts are not originally mine. I shamelessly adopt any idea that I think is good. The core ideas for Baserock come from Rob Taylor, Daniel Silverstone, and Paul Sherwood, my colleagues and bosses at Codethink. At the recent GNOME Summit in Montreal, I was greatly influenced by Colin Walters. This is also not an advertisement for Baserock, but since much of my current thinking comes from that project, I'll discuss things in the context of Baserock. Finally, I'm writing this to express my personal opinion and thoughts. I'm not speaking for anyone else, least of all Codethink.

On the package abstraction Baserock abandons the idea of packages for all individual programs. In the early 1990s, when Linux was new, and the number of packages in a distribution could be counted on two hands in binary, packages were a great idea. It was feasible to know at least something about every piece of software, and pick the exact set of software to install on a machine. It is still feasible to do so, but only in quite restricted circumstances. For example, picking the packages to install a DNS server, or an NFS server, or a mail server, by hand, without using meta packages or tasks (in Debian terminology), is still quite possible. On embedded devices, there's also usually only a handful of programs installed, and the people doing the install can be expected to understand all of them and decide which ones to install on each specific device. However, those are the exceptions, and they're getting rarer. For most people, manually picking software to install is much too tedious. In Debian, we realised this a great many years ago, and developed meta packages (whose only purpose is to depend on other packages) and tasks (which solve the same problem, but differently). These make it possible for a user to say "I want to have the GNOME desktop environment", and not have to worry about finding every piece that belongs in GNOME, and install that separately. For much of the past decade, computers have had sufficient hard disk space that it is no longer necessary to be quite so picky about what to install. A new cheap laptop will now typically come with at least 250 gigabytes of disk space. An expensive one, with an SSD drive, will have at least 128 gigabytes. A fairly complete desktop install uses less than ten gigabytes, so there's rarely a need to pick and choose between the various components. From a usability point of view, choosing from a list of a dozen or two options is much easier than from a list of thirty five thousand (the number of Debian packages as I write this). This is one reason why Baserock won't be using the traditional package abstraction. Instead, we'll collect programs into larger collections, called strata, which form some kind of logical or functional whole. So there'll be one stratum for the core userland software: init, shell, libc, perhaps a bit more. There'll be another for a development environment, one for a desktop environment, etc. Another, equally important reason to move beyond packages is the problems caused by the combinatorial explosion of packages and versions.
Colin Walters talks about this very well. When every system has a fairly unique set of packages, and versions of them, it becomes much harder to ensure that software works well for everyone, that upgrades work well, and that any problems get solved. When the number of possible package combinations is small, getting the interactions between various packages right is easier, QA has a much easier time testing all upgrade paths, and manual test coverage improves a lot when everyone is testing the same versions. Even debugging gets easier, when everyone can easily run the same versions. Grouping software into bigger collections does reduce flexibility of what gets installed. In some cases this is important: very constrained embedded devices, for example, still need to be very picky about what software gets installed. However, even for them, the price of flash storage is low enough that it might not matter too much, anymore. The benefit of a simpler overall system may well outweigh the benefit of fine-grained software choice.

Everything in git In Baserock, we'll be building everything from source in git. It will not be possible to build anything, unless the source is committed. This will allow us to track, for each binary blob we produce, the precise sources that were used to build it. We will also try to achieve something a bit more ambitious: anything that affects any bit in the final system image can be traced to files committed to git. This means tracking also all configuration settings for the build, and the whole build environment, in git. This is important for us so that we can reproduce an image used in the field. When a customer is deploying a specific image, and needs it to be changed, we want to be able to make the change with the minimal changes compared to the previous version of the image. This requires that we can re-create the original image, from source, bit for bit, so that when we make the actual change, only the changes we need to make affect the image. We will make it easy to branch and merge not just individual projects, but the whole system. This will make it easy to do large changes to the system, such as transitioning to a new version of GNOME, or the toolchain. Currently, in Debian, such large changes need to be serialised, so that they do not affect each other. It is easy, for example, for a GNOME transition to be broken by a toolchain transition. Branching and merging has long been considered the best available solution for concurrent development within a project. With Baserock, we want to have that for the whole system. Our build servers will be able to build images for each branch, without requiring massive hardware investment: any software that is shared between branches only gets built once. Launchpad PPAs and similar solutions provide many of the benefits of branching and merging on the system level. However, they're much more work than "git checkout -b gnome-3.4-transition". I believe that the git based approach will make concurrent development much more efficient. Ask me in a year if I was right.

Git, git, and only git There are a lot of version control systems in active use. For the sake of simplicity, we'll use only git. When an upstream project uses something else, we'll import their code into git. Luckily, there are tools for that. The import and updates to it will be fully automatic, of course. Git is not my favorite version control system, but it's clearly the winner. Everything else will eventually fade away into obscurity. Or that's what we think.
If it turns out that we're wrong about that, we'll switch to something else. However, we do not intend to have to deal with more than one at a time. Life's too short to use all possible tools at the same time.

Tracking upstreams closely We will track upstream version control repositories, and we will have an automatic mechanism for building our binaries directly from git. This will, we hope, make it easy to follow upstream development closely: when, say, GNOME developers make commits, we want to be able to generate a new system image which includes those changes the same day, if not within minutes, rather than waiting days or weeks or months. This kind of closeness is greatly enhanced by having everything in version control. When upstream commits changes to their version control system, we'll mirror them automatically, and this then triggers a new system image build. When upstream makes changes that do not work, we can easily create a branch from any earlier commit, and build images off that branch. This will, we hope, also make it simpler to make changes, and give them back to upstream. Whenever we change anything, it'll be done in a branch, and we'll have system images available to test the change. So not only will upstream be able to easily get the change from our git repository, they'll also be able to easily verify, on a running system, that the change fixes the problem.

Automatic testing, maybe even test driven development We will be automatically building system images from git commits for Baserock. This will potentially result in a very large number of images. We can't possibly test all of them manually, so we will implement some kind of automatic testing. The details of that are still under discussion. I hope to be able to start adding some test driven development to Baserock systems. In other words, when we are requested to make changes to the system, I want the specification to be provided as executable tests. This will probably be impossible in real life, but I can hope. I've talked about doing the same thing for Debian, but it's much harder to push through such changes in an established, large project.

Solving the install, upgrade, and downgrade problems All mainstream Linux distributions are based on packages, and they all, pretty much, do installations and upgrades by unpacking packages onto a running system, and then maybe running some scripts from the packages. This works well for a completely idle system, but not so well on systems that are in active use. Colin Walters again talks about this. For installation of new software, the problem is that someone or something may invoke it before it is fully configured by the package's maintainer script. For example, a web application might unpack in such a way that the web server notices it, and a web browser may request the web app to run before it is configured to connect to the right database. Or a GUI program may unpack a .desktop file before the executable or its data files are unpacked, and a user may notice the program in their menu and start it, resulting in an error. Upgrades suffer from additional problems. Software that gets upgraded may be running during the upgrade. Should the package manager replace the software's data files with new versions, which may be in a format that the old program does not understand? Or install new plugins that will cause the old version of the program to segfault? If the package manager does that, users may experience turbulence without having put on their seat belts.
If it doesn't do that, it can't install the package, or it needs to wait, perhaps for a very long time, for a safe time to do the upgrade. These problems have usually been either ignored, or solved by using package specific hacks. For example, plugins might be stored in a directory that embeds the program's version number, ensuring that the old version won't see the new plugins. Some people would like to apply installs and upgrades only at shutdown or bootup, but that has other problems. None of the hacks solve the downgrade problem. The package managers can replace a package with an older version, and often this works well. However, in many cases, any package maintainer scripts won't be able to deal with downgrades. For example, they might convert data files to a new format or name or location upon upgrades, but won't try to undo that if the package gets downgraded. Given the combinatorial explosion of package versions, it's perhaps just as well that they don't try. For Baserock, we absolutely need to have downgrades. We need to be able to go back to a previous version of the system if an upgrade fails. Traditionally, this has been done by providing a "factory reset", where the current version of the system gets replaced with whatever version was installed in the factory. We want that, but we also want to be able to choose other versions, not just the factory one. If a device is running version X, and upgrades to X+1, but that version turns out to be a dud, we want to be able to go back to X, rather than all the way back to the factory version. The approach we'll be taking with Baserock relies on btrfs and subvolumes and snapshots. Each version of the system will be installed in a separate subvolume, which gets cloned from the previous version, using copy-on-write to conserve space. We'll make the bootloader be able to choose a version of the system to boot, and (waving hands here) add some logic to be able to automatically revert to the previous version when necessary. We expect this to work better and more reliably than the current package based approach.

Making choices Debian is pretty bad at making choices. Almost always, when faced with a need to choose between alternative solutions for the same problem, we choose all of them. For example, we support pretty much every init implementation, various implementations of /bin/sh, and we even have at least three entirely different kernels. Sometimes this non-choice is a good thing. Our users may need features that only one of the kernels support, for example. And we certainly need to be able to provide both mysql and postgresql, since various software we want to provide to our users needs one and won't work with the other. At other times, the inability to choose causes trouble. Do we really need to support more than one implementation of /bin/sh? By supporting both dash and bash for that, we double the load on testing and QA, and introduce yet another variable to deal with into any debugging situation involving shell scripts. Especially for core components of the system, it makes sense to limit the flexibility of users to pick and choose. Combinatorial explosion déjà vu. Every binary choice doubles the number of possible combinations that need to be tested and supported and checked during debugging. Flexibility begets complexity, complexity begets problems. This is less of a problem at upper levels of the software stack. At the very top level, it doesn't really matter if there are many choices.
If a user can freely choose between vi and Emacs, this does not add complexity at the system level, since nothing else is affected by that choice. However, if we were to add a choice between glibc, eglibc, and uClibc for the system C library, then everything else in the system needs to be tested three times rather than once.

Reducing the friction coefficient for system development Currently, a Debian developer takes upstream code, adds packaging, perhaps adds some patches (using one of several methods), builds a binary package, tests it, uploads it, and waits for the build daemons and the package archive and user-testers to report any problems. That's quite a number of steps to go through for the simple act of adding a new program to Debian, or updating it to a new version. Some of it can be automated, but there are still hoops to jump through. Friction does not prevent you from getting stuff done, but the more friction there is, the more energy you have to spend to get it done. Friction slows down the crucial hack-build-test cycle of software development, and that hurts productivity a lot. Every time a developer has to jump through any hoops, or wait for anything, he slows down. It is, of course, not just a matter of the number of steps. Debian requires a source package to be uploaded with the binary package. Many, if not most, packages in Debian are maintained using version control systems. Having to generate a source package and then wait for it to be uploaded is unnecessary work. The build daemon could get the source from version control directly. With signed commits, this is as safe as uploading a tarball. The above examples are specific to maintaining a single package. The friction that really hurts Debian is the friction of making large-scale changes, or changes that affect many packages. I've already mentioned the difficulty of making large transitions above. Another case is making policy changes, and then implementing them. An excellent example of that in Debian is the policy change to use /usr/share/doc for documentation, instead of /usr/doc. This took us many years to do. We are, I think, perhaps a little better at such things now, but even so, it is something that should not take more than a few days to implement, rather than half a decade.

On the future of distributions Occasionally, people say things like "distributions are not needed", or that "distributions are an unnecessary buffer between upstream developers and users". Some even claim that there should only be one distribution. I disagree. A common view of a Linux distribution is that it takes some source provided by upstream, compiles that, adds an installer, and gives all of that to the users. This view is too simplistic. The important part of developing a distribution is choosing the upstream projects and their versions wisely, and then integrating them into a whole system that works well. The integration part is particularly important. Many upstreams are not even aware of each other, nor should they need to be, even if their software may need to interact with each other. For example, not every developer of HTTP servers should need to be aware of every web application, or vice versa. (If they had to be, it'd be a combinatorial explosion that'd ruin everything, again.) Instead, someone needs to set a policy of how web apps and web servers interface, what their common interface is, and what files should be put where, for web apps to work out of the box, with minimal fuss for the users.
That's part of the integration work that goes into a Linux distribution. For Debian, such decisions are recorded in the Policy Manual and its various sub-policies. Further, distributions provide quality assurance, particularly at the system level. It's not realistic to expect most upstream projects to do that. It's a whole different skillset and approach that is needed to develop a system, rather than just a single component. Distributions also provide user support, security support, longer term support than many upstreams, and port software to a much wider range of architectures and platforms than most upstreams actively care about, have access to, or even know about. In some cases, these are things that can and should be done in collaboration with upstreams; if nothing else, portability fixes should be given back to upstreams. So I do think distributions have a bright future, but the way they're working will need to change.

21 February 2010

Michael Banck: 21 Feb 2010

Application Indicators: A Case of Canonical Upstream Involvement and its Problems

I always thought Canonical could do a bit more to contribute to the GNOME project, so I was happy to see the work on application indicators proposed to GNOME. Application indicators are based on the (originally KDE-driven, I believe) proposed cross-desktop status notifier spec. The idea (as I understand it) is to have a consistent way of interacting with status notifiers and stop the confusing mix of panel applets and systray indicators. This is a very laudable goal, as mentioned by Colin Walters:
"First, +5000 that this change is being driven by designers, and +1000 that new useful code is being written. There are definite problems being solved here."
The discussion that followed was very useful, including the comments by Canonical's usability expert Matthew Paul Thomas. Most of the discussion was about the question of how this proposal and spec could best be integrated into GTK, the place where most people seemed to agree this belongs (rather than changing all apps to provide this, it should be a service provided by the platform). However, on the same day, Canonical employee Ted Gould proposed libappindicator as an external dependency. The following thread showed a couple of problems, both technical and otherwise: What I personally disliked is the way Cody Russell and Ted Gould are papering over the above issues in the thread that followed. For example, about point one, Ted Gould writes in the proposal:
Q: Shouldn't this be in GTK+?
A: Apparently not.
while he himself said on the same day, on the same mailing list: "Yes, I think GTK/glib is a good place" and nobody was against it (and in fact most people seemed to favor including this in GTK). To the question about why libappindicator is not licensed as usual under the LGPL, version 2.1 or later, Canonical employee Cody Russell even replied:
"Because seriously, everything should be this way. None of us should be saying "LGPL 2.1 or later". Ask a lawyer, even one from the FSF, how much sense it makes to license your software that way."
Not everybody has to love the FSF, but proposing code under mandated copyright assignments which a lot of people have opposed and at the same time insinuating that the FSF was not to be trusted on their next revision of the LGPL license seems rather bold to me. Finally, on the topic of copyright assignments, Ted said:
"Like Clutter for example ;) Seriously though, GNOME already is dependent on projects that require contributor agreements."
It is true that there are (or at least were) GNOME applications which require copyright assignments for contributions (evolution used to be an example, but the requirement was lifted); however, none of the platform modules require this to my knowledge (clutter is an external dependency as well). It seems most people in the GNOME community have the opinion that application indicators should be in GTK at least eventually, so having libappindicator as an external dependency with copyright assignments might work for now but will not be future proof. In summary, most of the issues could be dealt with by reimplementing it for GTK when the time comes for this spec to be included, but this would mean (i) duplication of effort, (ii) possibly porting all applications twice and (iii) probably no upstream contribution by Canonical. Furthermore, I am amazed at how the Canonical people approach the community for something this delicate (their first major code drop, as far as I am aware). To be fair, neither Ted nor Cody posted the above using their company email addresses, but nevertheless the work is sponsored by Canonical, so their posts to desktop-devel-list could be seen as written with their Canonical hats on. Canonical does not have an outstanding track record of contributing code to GNOME, and at least to me it seems this case is not doing much to improve things, either.

14 October 2009

Robert McQueen: Telepathy Q&A from the Boston GNOME Summit

The first Telepathy session on Saturday evening at the Boston GNOME Summit was very much a Q&A where Will and I answered various technical and roadmap questions from a handful of developers and downstream distributors. It showed me that there's a fair amount of roadmap information we should do better at communicating outside of the Telepathy project, so in the hope it's useful to others, read on.

MC5 Ted Gould was wondering why Mission Control 4 had an API for getting/setting presence for all of your accounts whereas MC5 does not. MC5 has per-account management of desired/current presence which is more powerful, but loses the convenience of setting presence in one place. Strictly speaking, doing things like deciding which presence to fall back on (a common example being if you have asked for invisible but the connection doesn't support it) is a UI-level policy which MC should not take care of, but in practice there aren't many different policies which make sense, and the key thing is that MC should tell the presence UI when the desired presence isn't available so it could make a per-account choice if that was preferable. As a related side point, Telepathy should implement more of the invisibility mechanisms in XMPP so it's more reliably available, and then we could more meaningfully tell users which presence values were available before connecting, allowing signing on as invisible. Since MC5 gained support for gnome-keyring, it's not possible to initialise Empathy's account manager object without MC5 prompting the user for their keyring password unnecessarily (especially if the client is Ubuntu's session presence applet and the user isn't using Empathy in the current session but has some accounts configured). Currently the accounts D-Bus API requires that all properties including the password are presented to the client to edit. A short-term fix might be to tweak the spec so that accounts don't have to provide their password property unless it's explicitly queried, but this might break the ABI of tp-glib. Ultimately, passwords being stored and passed around in this way should go away when we write an authentication interface which will pass authentication challenges up to the Telepathy client to deal with, enabling a unified interface for OAuth/Google/etc web token, Kerberos or SIP intermediate proxy authentication, and answering password requests from the keyring lazily or from the user on demand.

Stability & Security Jonathan Blandford was concerned about the churn level of Telepathy, from the perspective of distributions with long-term support commitments, and how well compatibility will be maintained. Generally the D-Bus API and the tp-glib library APIs are maintained to the GNOME standards of making only additive changes and leaving all existing methods/signals working even if they are deprecated and superseded by newer interfaces. A lot of new interfaces have been added over the past year or so, many of which replace existing functionality with a more flexible or more efficient interface. However, over the next 4-6 months we hope to finalise the new world interfaces (such as multi-person media calls, roster, authentication, certificate verification, more accurate offline protocol information, chat room property/role management), and make a D-Bus API break to remove the duplicated cruft.
Stability & Security

Jonathan Blandford was concerned about the level of churn in Telepathy, from the perspective of distributions with long-term support commitments, and how well compatibility will be maintained. Generally the D-Bus API and the tp-glib library API are maintained to GNOME standards: only additive changes are made, and all existing methods and signals keep working even if they are deprecated and superseded by newer interfaces. A lot of new interfaces have been added over the past year or so, many of which replace existing functionality with a more flexible or more efficient interface. However, over the next 4-6 months we hope to finalise the new-world interfaces (such as multi-person media calls, roster, authentication, certificate verification, more accurate offline protocol information, and chat room property/role management), and make a D-Bus API break to remove the duplicated cruft. Telepathy-glib would undergo an ABI revision in this case to also remove those symbols, possibly synchronised with a move from dbus-glib to GVariant/etc., but in many cases clients which only use the modern interfaces and client helper classes should not need much more than a rebuild.

Relatedly, there was a query about the security of Telepathy, and how much it has been through the mill on security issues compared to Pidgin. For the closed IM protocols (except MSN, where we have our own implementation) we re-use the code from libpurple, so the same risks apply, although the architecture of Telepathy means it's possible to subject the backend processes to more stringent lockdowns using SELinux or other security isolation such as separate UIDs, limiting the impact of compromises. Other network code in Telepathy is based on more widely-used libraries with a less chequered security history so far.

OTR

The next topic was support for OTR in Telepathy. Architecturally, it's hard for us to support the same kind of message-mangling plugins as Pidgin allows, because there is no single point in Telepathy that messages go through. There are multiple backends depending on the protocol, multiple UIs can be involved in saving (e.g. a headless logger) or displaying messages (consider GNOME Shell integration, which previews messages before passing conversations on to Empathy), and the only other centralised component (Mission Control 5) does not act as an intermediary for messages. Historically, we've always claimed OTR to be less appealing than native protocol-level end-to-end encryption support, such as the proposals for Jingle + peer-to-peer XMPP + TLS favoured by the XMPP community, mostly because if people can switch to an unofficial third-party client to get encryption, they could switch to a decent protocol too, and because protocol-level support can encrypt other traffic like SRTP call set-up, presence, etc. However, there is an existing deployed OTR user base, including the likes of Adium users on the Mac, who might often end up using end-to-end encryption without being aware of it, and whom we would be doing a disservice by not supporting OTR conversations with them. This is a compelling argument which was also made to me by representatives from the EFF, and the only one to date which actually held some merit with me compared to just implementing XMPP end-to-end encryption. Later in the summit we went on to discuss how we might achieve it in Telepathy, and how our planned work towards XMPP encryption could also help.

Tubes

We also had a bit of discussion about Tubes, such as how the handlers are invoked. Since the introduction of MC5, clients can register interest in certain channel types (tubes or any other) by implementing the client interface and providing filters for the channels they are interested in. MC5 will first invoke all matching observers for all channels (incoming and outgoing) until all of them have responded or timed out (e.g. to let a logger daemon hook up signal callbacks before channel handling proceeds), then all matching approvers for incoming channels until one of them replies (e.g. to notify the user of the new channel before launching the full UI), and then send the channel to the handler with the most specific filter (e.g. Tomboy could register for file transfers with the right MIME type and receive those in preference to Empathy, whose filter has no type on it).
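As a rough illustration of what such a filter looks like on the wire, here is the kind of value a handler might advertise; the channel property names are those from the Telepathy client spec of the time, while the note-taking MIME type and the idea of matching on the FileTransfer ContentType property are purely illustrative assumptions:

import dbus

# One dict per filter entry; a channel matches if all keyed properties match.
file_transfer_filter = dbus.Array([
    dbus.Dictionary({
        'org.freedesktop.Telepathy.Channel.ChannelType':
            'org.freedesktop.Telepathy.Channel.Type.FileTransfer',
        # 1 == Handle_Type_Contact: only 1-1 transfers, not ones in rooms.
        'org.freedesktop.Telepathy.Channel.TargetHandleType': dbus.UInt32(1),
        # Illustrative only: narrow the match to a note-like MIME type.
        'org.freedesktop.Telepathy.Channel.Type.FileTransfer.ContentType':
            'text/x-note',
    }, signature='sv'),
], signature='a{sv}')

A handler exports a list like this as its HandlerChannelFilter property (observers and approvers have the analogous ObserverChannelFilter and ApproverChannelFilter), and it is the specificity of these filters that MC5 compares when choosing which handler gets the channel.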
Tubes can be shared with chat rooms, either as a stream tube, where one member shares a socket for others to connect to (allowing re-sharing of an existing service implementation), or a D-Bus tube, where every member's application is one endpoint on a simulated D-Bus bus and Telepathy provides a mapping between the D-Bus names and the members of the room. In terms of Tube applications, now that we've got working A/V calling in Empathy, as well as desktop sharing and an R&D project on multi-user calls, our next priority is performance and Tube-enabling some more apps, such as collaborative editing (Gobby, AbiWord, GEdit, Tomboy?).

There was a question about whether Tube handlers can be installed on demand when one of your contacts initiates that application with you. It would be possible to simulate this by finding out (e.g. from the archive) which handlers are available, dynamically registering a handler for all of those channel types so that MC5 will advertise those capabilities, and also registering as an approver. When an incoming channel actually arrives at the approval stage, prompt the user to install the required application and then tell MC5 to invoke it as the handler.
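To give a feel for the D-Bus tube model described above, here is a small dbus-python sketch; the step that actually offers or accepts the tube channel used a draft API at the time and is deliberately not shown, so treat the address-returning call as an assumption rather than a concrete method name:

import dbus.connection

def join_dbus_tube(tube_address):
    # tube_address is the private bus address handed back when the tube
    # channel is offered or accepted (draft API at the time, not shown).
    # Each member's application becomes one endpoint on this simulated
    # bus; Telepathy maps the bus names to room members.
    return dbus.connection.Connection(tube_address)

A member could then export or call objects on the returned connection exactly as it would on the session bus, which is what makes collaborative apps like the ones listed above straightforward to wire up.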
Colin Walters asked how Telepathy does NAT traversal. Currently, Telepathy uses libnice to do ICE (roughly, STUN between every possible pair of addresses both parties have, which works in over 90% of cases) for the UDP packets involved in calls signalled over XMPP, using either the Google Talk variant, which can benefit from Google's relay servers if one or other party has a Google account and so is more reliable, or the latest IETF draft, which can in theory use TURN relays, but that is not really hooked up in Telepathy yet and few people have access to them. XMPP file transfers and one-to-one tube connections use TCP, which is great if you have IPv6, but otherwise impossible to NAT-traverse reliably, so they often end up using strictly rate-limited SOCKS5-ish XMPP proxies or, worse, in-band base64 in the XML stream. We hope to incorporate (and standardise in XMPP) a reliability layer which will allow us to use Jingle and ICE-UDP for file transfers and tubes too, allowing peer-to-peer connections and higher-bandwidth relays to improve throughput significantly.

Future

Ted Gould had some good questions about the future of Telepathy.

Should Empathy just disappear on the desktop as things like presence applets or GNOME Shell take over parts of its function? Maybe, yes. In some ways its goal is just to bring Telepathy to end users and the desktop so that it's worth other things integrating with Telepathy, but Telepathy allows us to do a lot better than a conventional IM client. Maemo and Sugar on the OLPC use Telepathy but integrate it totally into the device experience rather than having any single distinct IM client, and although Moblin uses Empathy currently, it has its own presence chooser and people panel and may go on to replace other parts of the user experience too. GNOME Shell looks set to move in this direction as well and use Telepathy to integrate communications with the desktop workflow.

Should Telepathy take care of talking to social networking sites such as Facebook, Twitter, etc.? There is no hard and fast rule: Telepathy only makes sense for real-time communications, so it's good for exposing and integrating Facebook chat, but pretty lame for dealing with wall posts, event invitations and the like. Similarly, on the N900, Telepathy is used for the parts of the cellular stack that overlap with real-time communications, like calling and SMS, but there is no sense pushing unrelated stuff like configuration messages through it. For Twitter, the main question is whether you actually want tweets to appear in the same UI, logging and notification framework as other messages. Probably not anything but the 1-to-1 tweets, so something like Moblin's Mojito probably makes more sense for that. Later in the summit I took a look at the Google Latitude APIs, which seem like something Telepathy could expose via its contact location interface, but probably not usefully until we have support for metacontacts on the desktop.

Can/will Telepathy support IAX2? It can, although we would have to do a local demultiplexer for the RTP streams involved in separate calls. It has not been a priority of ours so far, but we can help people get started (or Collabora can chat to people who have a commercial need for it). Similarly, nobody has looked at implementing iChat-compatible calling because our primary interest lies with open protocols, but if people were interested we could give pointers; it's probably just SIP and RTP after you dig through a layer or two of obfuscation.
If you want to know more about Telepathy feel free to comment with some follow-up questions, talk to us in #telepathy on Freenode, or post to the mailing list.

28 January 2008

Rob Taylor: Language runtime independence

Colin, you might want to check out PyPy, especially its translation framework, which (as I read it) allows the separation of language definition, optimisation techniques and machine backends (virtual or not). It looks like right now you can translate Python to C, CLR, JVM, LLVM or even JavaScript, and there seem to be people working on other language definitions too, with JavaScript, Prolog, Scheme and Smalltalk in the tree.
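For a flavour of how the translation framework is driven, the sketch below shows the shape of a minimal RPython target module of that era; the file name and the exact translate.py invocation are assumptions for illustration, not something from the post above:

# targethello.py -- a minimal target module for the PyPy translator
import os

def entry_point(argv):
    # RPython code: a statically analysable subset of Python.
    os.write(1, b'hello from a translated binary\n')
    return 0

def target(driver, args):
    # The translation driver calls this to discover the program's entry point.
    return entry_point, None

# Translating to C would then look roughly like:
#   python translate.py targethello.py
# with other backends (CLI/JVM/...) selected via translator options.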

16 June 2007

Enrico Zini: chat-with-bubulle

Chat with bubulle

These are a few notes from a chat with Bubulle about package descriptions.

Check a description against keywords

To see what packages would match certain keywords, and how:
ept-cache dumpavail keyword1 keyword2 | grep-dctrl -sPackage,Search-Score .
To check how much a package would match certain keywords:
#!/bin/sh
PACKAGE=$1
shift
ept-cache dumpavail "$@" | grep-dctrl -sSearch-Score -FPackage -X "$PACKAGE"
Technical details in the descriptions

The programming language used to implement the software and the X UI toolkit used by the software can also be removed from descriptions: Debtags offer a better place to store this information, and most package managers now show tags together with the package description.

Submitting descriptions for review

Submitting descriptions for review to the Smith Review Project means posting them to the publicly archived debian-l10n-english mailing list, and that may put some people off. It would also be interesting to change the description template generated by debhelper to mention the Smith Review Project and link to the description guidelines.

Description guidelines

There are various description guidelines that could use consolidation and eventual inclusion in the Developers Reference. For example, "Writing package descriptions" is a very good guide, but most of the unsolved issues listed at the end have actually now been solved by the Smith Review Project.
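As an aside on the Debtags point above, a package's tags can also be read programmatically. This is only a minimal sketch: it assumes the conventional debtags database at /var/lib/debtags/package-tags with lines of the form "package: tag1, tag2", which is an assumption about the on-disk layout rather than something from the chat:

# Look up the tags of one package from the debtags database (assumed path
# and line format; see the note above).
def tags_of(package, db='/var/lib/debtags/package-tags'):
    with open(db) as f:
        for line in f:
            name, sep, tags = line.partition(':')
            if sep and name.strip() == package:
                return {t.strip() for t in tags.split(',')}
    return set()

# e.g. tags_of('gedit') might include tags such as 'uitoolkit::gtk',
# which is exactly the kind of detail that no longer belongs in the
# long description.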

31 May 2007

Enrico Zini: package-descriptions

Writing good package descriptions

Thanks to Fathi Boudra, I found the link to "Writing Debian package descriptions". Towards the bottom of the page, it contains an excellent checklist to follow when writing one. This entry is to take a note of it, so that I'll find it when I write one myself, and when I report a bug for a description that could be improved.

15 December 2005

Colin Walters: Did a human actually write this?

Or was it generated by a computer picking random buzzwords? Take a guess.
The semantic mark-up languages would be used to specify the convergence of business and IT models, and more importantly, their metamodels. The ontological representation of the metamodel of a constituent model enables reasoning about the instance model, which enables a dependency analysis to deduce unknown or implied relationships among entities within the instance model. The analysis would be extended across multiple layers of models. The semantic model-based dependency analysis would reveal which entity has an impact on which entities (for example, business components and processes, performance indicators, IT systems, software classes and objects, among others) of the model. This semantic model-based analysis would be applied to a model that provides an introspective view of the business within an enterprise. Also, it would be applied to a value network that yields an extrospective view of businesses in an ecosystem.

23 November 2005

Antti-Juhani Kaijanaho: On build-essential

The build-essential package is not a definition of the build-essential policy. The actual definition is given in the policy manual:
It is not necessary to explicitly specify build-time relationships on a minimal set of packages that are always needed to compile, link and put in a Debian package a standard "Hello World!" program written in C or C++. The required packages are called build-essential, and an informational list can be found in /usr/share/doc/build-essential/list (which is contained in the build-essential package).
The build-essential package is an informational list; it is documentation. The idea of the build-essential package is to document which packages are always needed to compile, link and put in a Debian package a standard "Hello World!" program written in C or C++. The build-essential package maintainer's main job is to make sure that the informational list is correct. When I maintained build-essential, I got multiple bug reports asking (and sometimes demanding) that debhelper be added to the list of build-essential packages. I closed all such reports (#64827, #70014; I thought there were more but can't find them) with a suggestion that the proper place to make this request is the policy list, not the build-essential package bug list. In fact, when I asked for a new maintainer (Colin Walters ended up taking the package), I specifically listed the following requirement:
I consider the capacity to deal with stupid requests (like "plz add debhelper"; lately, there have been not so many of these) and an understanding of the relationship between this package and Policy as essential qualities of the maintainer of this package.
I'm a little concerned that the current build-essential maintainer apparently does not understand the relationship between the package and policy. For the build-essential package maintainer, it is irrelevant whether debhelper is used by 92% of the archive. That argument might be a good one, but the proper place to make it is the policy list, seeking to amend policy in such a way that debhelper is included in the set of build-essential packages. It is not the job of the build-essential package maintainer to decide that!