Search Results: "walter"

15 January 2021

Dirk Eddelbuettel: Rcpp 1.0.6: Some Updates

The Rcpp team is proud to announce release 1.0.6 of Rcpp which arrived at CRAN earlier today, and has been uploaded to Debian too. Windows and macOS builds should appear at CRAN in the next few days. This marks the first release on the new six-months cycle announced with release 1.0.5 in July. As a reminder, interim dev or rc releases will often be available in the Rcpp drat repo; this cycle there were four. Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2174 packages on CRAN depend on Rcpp for making analytical code go faster and further (which is an 8.5% increase just since the last release), along with 207 in BioConductor. This release features six different pull requests from five different contributors, mostly fixing fairly small corner cases, plus some minor polish on documentation and continuous integration. Before releasing we once again made numerous reverse dependency checks, none of which revealed any issues. So the passage at CRAN was pretty quick despite the large dependency footprint, and we are once again grateful for all the work the CRAN maintainers do.

Changes in Rcpp patch release version 1.0.6 (2021-01-14)
  • Changes in Rcpp API:
    • Replace remaining few uses of EXTPTR_PTR with R_ExternalPtrAddr (Kevin in #1098 fixing #1097).
    • Add push_back and push_front for DataFrame (Walter Somerville in #1099 fixing #1094).
    • Remove a misleading-to-wrong comment (Mattias Ellert in #1109 cleaning up after #1049).
    • Address a sanitizer report by initializing two private bool variables (Benjamin Christoffersen in #1113).
    • External pointer finalizer toggle default values were corrected to true (Dirk in #1115).
  • Changes in Rcpp Documentation:
    • Several URLs were updated to https and/or new addresses (Dirk).
  • Changes in Rcpp Deployment:
    • Added GitHub Actions CI using the same container-based setup used previously, and also carried code coverage over (Dirk in #1128).
  • Changes in Rcpp support functions:
    • Rcpp.package.skeleton() avoids warning from R. (Dirk)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2616 previous questions. If you like this or other open-source work I do, you can sponsor me at GitHub. My sincere thanks to my current sponsors for keeping me caffeinated.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

1 May 2020

Utkarsh Gupta: FOSS Activities in April 2020

Here's my (seventh) monthly update about the activities I've done in the F/L/OSS world.

Debian
It's been 14 months since I started contributing to Debian, and 4 months since I became a Debian Developer. In this beautiful time, I had the opportunity to do and learn lots of new and interesting things. And most importantly, meet and interact with lots of lovely people!
Debian is $home.

Uploads:

Other $things:
  • Attended Ruby team meeting. Logs here.
  • Attended Perl team LHF. Report here.
  • Sponsored a lot of uploads for William Desportes and Adam Cecile.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Applied for DUCI project for Google Summer of Code 2020.

Ruby2.7 Migration:
Ruby2.7 was recently released on 25th December, 2019. Santa's gift. Believe it or not. We, the Debian Ruby team, have been trying hard to make it migrate to testing. And it finally happened. The default version in testing is ruby2.7. Here's the news! \o/
Here's what I worked on this month for this transition.

Upstream: Opened several issues and proposed patches (in the form of PRs):
  • Issue #35 against encryptor for Ruby2.7 test failures.
  • Issue #28 against image_science for removing relative paths.
  • Issue #106 against ffi-yajl for Ruby2.7 test failures.
  • PR #5 against aggregate for simply using require.
  • PR #6 against aggregate for modernizing CI and adding Ruby 2.5 and 2.7 support.
  • Issue #13 against espeak-ruby for Ruby2.7 test failures.
  • Issue #4 against tty-which for test failures in general.
  • Issue #11 against packable for Ruby2.7 test failures. PR #12 has been proposed.
  • Issue #10 against growl for test failures and proposed an initial patch.

Downstream: I fixed and uploaded the following packages in Debian:

Debian LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
This was my seventh month as a Debian LTS paid contributor. I was assigned 24.00 hours and worked on the following things:

CVE Fixes and Announcements:

Other LTS Work:

Other(s)
Sometimes it gets hard to categorize work/things into a particular category.
That's why I am writing all of those things inside this category.
It includes the following two sub-categories.

Personal: This month I could get the following things done:
  • Most importantly, I finally migrated to a new website. Huge UI improvement! \o/
    From Jekyll to Hugo, it was not easy. But it was worth it! Many thanks to Luiz for writing hugo-coder, Clement, and Samyak!
    If you find any flaws, issues and pull requests are welcome at utkarsh2102/utkarsh2102.com
  • Wrote battery-alert, a mini-project of my own to show battery alerts at <10% and >90%.
    Written in shell, it brings me all the satisfaction as it has saved my life on many occasions.
    And guess what? It has more users than just myself!
    Reviews and patches are welcome \o/
  • Mentored in HackOn Hackathon. Thanks to Manvi for reaching out!
    It was fun to see people developing some really nice projects.
  • Thanks to Ray and John, I became a GitLab Hero!
    (I am yet to figure out my role and responsibility though)
  • Attended Intro Sec Con and had the most fun!
    Heard Ian's keynote, attended other talks, and learned how to use Wireshark!
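The battery-alert idea above is simple enough to sketch. The following is a hypothetical minimal version, not the actual battery-alert script; the sysfs path, function name, and message wording are assumptions for illustration:

```shell
# Hypothetical sketch of the battery-alert idea (not the real script):
# classify the charge percentage and warn below 10% or above 90%.
battery_check() {
    level=$1  # percentage, e.g. from /sys/class/power_supply/BAT0/capacity
    if [ "$level" -lt 10 ]; then
        echo "LOW: battery at ${level}%, plug in the charger"
    elif [ "$level" -gt 90 ]; then
        echo "HIGH: battery at ${level}%, unplug the charger"
    else
        echo "OK: battery at ${level}%"
    fi
}

# Read the real value if available, otherwise fall back to a dummy value
battery_check "$(cat /sys/class/power_supply/BAT0/capacity 2>/dev/null || echo 50)"
```

A real version would run periodically (e.g. from cron) and raise a desktop notification instead of printing to stdout.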

Open Source: Again, this contains all the things that I couldn t categorize earlier.
Opened several issues and pull requests:
  • Issue #297 against hugo-coder, asking to enable RSS feed for blogs.
  • PR #316 for hugo-coder for fixing the above issue myself.
  • Issue #173 against arbre for requesting a release.
  • Issue #104 against combustion, asking to relax dependency on rubocop. Fixed in this commit.
  • Issue #16 against ffi-compiler for requesting to fix homepage and license.
  • Issue #57 against gographviz for requesting a release.
  • Issue #14 against crb-blast, suggesting compatibility with bio 2.0.x.
  • Issue #58 against uniform_notifier for asking to drop the use of ruby-growl.
  • PR #2072 for polybar, adding installation instructions on Debian systems.

Until next time.
:wq for today.

18 March 2020

Antoine Beaupré: How can I trust this git repository?

Join me in the rabbit hole of git repository verification, and how we could improve it.

Problem statement As part of my work on automating install procedures at Tor, I ended up doing things like:
git clone REPO
./REPO/bootstrap.sh
... something eerily similar to the infamous curl pipe bash method which I often decry. As a short-term workaround, I relied on the SHA-1 checksum of the repository to make sure I have the right code, by running this both on a "trusted" (ie. "local") repository and the remote, then visually comparing the output:
$ git show-ref master
9f9a9d70dd1f1e84dec69a12ebc536c1f05aed1c refs/heads/master
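That visual comparison can at least be scripted. A sketch (REPO and BRANCH are placeholders, not from the original procedure) using git ls-remote to fetch the remote branch tip without cloning:

```shell
# Sketch: compare the local branch tip against the remote one without
# cloning. REPO and BRANCH are illustrative placeholders.
REPO=https://example.org/repo.git
BRANCH=master

local_ref=$(git rev-parse "refs/heads/$BRANCH" 2>/dev/null || true)
remote_ref=$(git ls-remote "$REPO" "refs/heads/$BRANCH" 2>/dev/null | cut -f1 || true)

if [ -n "$local_ref" ] && [ "$local_ref" = "$remote_ref" ]; then
    echo "match: $local_ref"
else
    echo "mismatch or missing: local=$local_ref remote=$remote_ref"
fi
```

This only removes the error-prone eyeballing; it still proves nothing more than that both sides agree on a SHA-1, so it inherits every weakness discussed below.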
One problem with this approach is that SHA-1 is now considered as flawed as MD5, so it can't be used as an authentication mechanism anymore. It's also fundamentally difficult for humans to compare hashes. The other flaw with comparing local and remote checksums is that we assume we trust the local repository. But how can I trust that repository? I can either:
  1. audit all the code present and all the changes done to it afterwards
  2. or trust someone else to do so
The first option here is not practical in most cases. In this specific use case, I have audited the source code -- I'm the author, even -- what I need is to transfer that code over to another server. (Note that I am replacing those procedures with Fabric, which makes this use case moot for now as the trust path narrows to "trust the SSH server" which I already had anyways. But it's still important for my fellow Tor developers who worry about trusting the git server, especially now that we're moving to GitLab.) But anyways, in most cases, I do need to trust some other fellow developer I collaborate with. To do this, I would need to trust the entire chain between me and them:
  1. the git client
  2. the operating system
  3. the hardware
  4. the network (HTTPS and the CA cartel, specifically)
  5. then the hosting provider (and that hardware/software stack)
  6. and then backwards all the way back to that other person's computer
I want to shorten that chain as much as possible, make it "peer to peer", so to speak. Concretely, it would eliminate the hosting provider and the network, as attackers.

OpenPGP verification My first reaction is (perhaps perversely) to "use OpenPGP" for this. I figured that if I sign every commit, then I can just check the latest commit and see if the signature is good. The first problem here is that this is surprisingly hard. Let's pick some arbitrary commit I did recently:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching "test" by default. We inline
    tests inside the source code directly, so hijack that.
diff --git a/fabric_tpa/pytest.ini b/fabric_tpa/pytest.ini
new file mode 100644
index 0000000..71004ea
--- /dev/null
+++ b/fabric_tpa/pytest.ini
@@ -0,0 +1,3 @@
+[pytest]
+# we inline tests directly in the source code
+python_files = *.py
That's the output of git log -p in my local repository. I signed that commit, yet git log is not telling me anything special. To check the signature, I need something special: --show-signature, which looks like this:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature faite le lun 16 mar 2020 14:37:53 EDT
gpg:                avec la clef RSA 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Bonne signature de « Antoine Beaupré <anarcat@orangeseeds.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@torproject.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@anarc.at> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@koumbit.org> » [ultime]
gpg:                 alias « Antoine Beaupré <anarcat@debian.org> » [ultime]
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching "test" by default. We inline
    tests inside the source code directly, so hijack that.
Can you tell if this is a valid signature? If you speak a little French, maybe you can! But even if you did, you would be unlikely to see that output on your own computer. What you would see instead is:
commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature made Mon Mar 16 14:37:53 2020 EDT
gpg:                using RSA key 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Can't check signature: No public key
Author: Antoine Beaupré <anarcat@debian.org>
Date:   Mon Mar 16 14:37:28 2020 -0400
    fix test autoloading
    pytest only looks for file names matching "test" by default. We inline
    tests inside the source code directly, so hijack that.
Important part: Can't check signature: No public key. No public key. Because of course you would see that. Why would you have my key lying around, unless you're me? Or, to put it another way, why would that server I'm installing from scratch have a copy of my OpenPGP certificate? Because I'm a Debian developer, my key is actually part of the 800 keys in the debian-keyring package, signed by the APT repositories. So I have a trust path. But that won't work for someone who is not a Debian developer. It will also stop working when my key expires in that repository, as it already has on Debian buster (current stable). So I can't assume I have a trust path there either. That said, one could work with a trusted keyring like we do in the Tor and Debian projects, and only work inside that project. But I still feel uncomfortable with those commands. Both git log and git show will happily succeed (return code 0 in the shell) even though the signature verification failed on the commits. Same with git pull and git merge, which will happily move your branch ahead even if the remote has unsigned or badly signed commits. To actually verify commits (or tags), you need the git verify-commit (or git verify-tag) command, which seems to do the right thing:
$ LANG=C.UTF-8 git verify-commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature made Mon Mar 16 14:37:53 2020 EDT
gpg:                using RSA key 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Can't check signature: No public key
[1]$
At least it fails with some error code (1, above). But it's not flexible: I can't use it to verify that a "trusted" developer (say one that is in a trusted keyring) signed a given commit. Also, it is not clear what a failure means. Is a signature by an expired certificate okay? What if the key is signed by some random key in my personal keyring? Why should that be trusted?
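One partial workaround for surveying a whole branch, a sketch of my own rather than anything git documents as a verification workflow, is git log's %G? format placeholder, which prints a one-letter signature status per commit (G good, B bad, U good but untrusted, N none, E cannot be checked, e.g. missing key):

```shell
# Sketch: print a signature status letter for each recent commit.
# %G? = status, %GK = signing key, %h = abbreviated hash, %s = subject.
git log --format='%h %G? %GK %s' -10 || true

# Or make a script complain about every commit that does not verify:
for c in $(git rev-list HEAD~10..HEAD 2>/dev/null || true); do
    git verify-commit "$c" 2>/dev/null || echo "unverified: $c"
done
```

This still delegates the trust decision to whatever lives in GnuPG's keyring, so it does not answer the "which keyring, and why should it be trusted" question raised above.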

Worrying about git and GnuPG In general, I'm worried about git's implementation of OpenPGP signatures. There have been numerous cases of interoperability problems with GnuPG specifically that led to security vulnerabilities, like EFAIL or SigSpoof. It would be surprising if such a vulnerability did not exist in git. Even if git did everything "just right" (which I have myself found impossible to do when writing code that talks with GnuPG), what does it actually verify? The commit's SHA-1 checksum? The tree's checksum? The entire archive as a zip file? I would bet it signs the commit's SHA-1 sum, but I just don't know, off the top of my head, and neither git-commit nor git-verify-commit says exactly what is happening. I had an interesting conversation with a fellow Debian developer (dkg) about this and we had to admit those limitations:
<anarcat> i'd like to integrate pgp signing into tor's coding practices more, but so far, my approach has been "sign commits" and the verify step was "TBD" <dkg> that's the main reason i've been reluctant to sign git commits. i haven't heard anyone offer a better subsequent step. if torproject could outline something useful, then i'd be less averse to the practice. i'm also pretty sad that git remains stuck on sha1, esp. given the recent demonstrations. all the fancy strong signatures you can make in git won't matter if the underlying git repo gets changed out from under the signature due to sha1's weakness
In other words, even if git implements the arcane GnuPG dialect just so, and would allow us to set up the trust chain just right, and would give us meaningful and workable error messages, it still would fail because it's still stuck in SHA-1. There is work underway to fix that, but in February 2020, Jonathan Corbet described that work as being in a "relatively unstable state", which is hardly something I would like to trust to verify code. Also, when you clone a fresh new repository, you might get an entirely different repository, with a different root and set of commits. The concept of "validity" of a commit, in itself, is hard to establish in this case, because a hostile server could put you backwards in time, on a different branch, or even on an entirely different repository. Git will warn you about a different repository root with warning: no common commits but that's easy to miss. And complete branch switches, rebases and resets from upstream are hardly more noticeable: only a tiny plus sign (+) instead of a star (*) will tell you that a reset happened, along with a warning (forced update) on the same line. Miss those and your git history can be compromised.
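Since those forced-update indicators are so easy to miss, one can also check explicitly after fetching whether the remote branch moved in a fast-forward fashion. A sketch (the remote and branch names are illustrative, not from the post):

```shell
# Sketch: detect a history rewrite on origin/master across a fetch.
# If the old remote-tracking tip is no longer an ancestor of the new one,
# the branch was force-updated (rebased, reset, or replaced).
old=$(git rev-parse refs/remotes/origin/master 2>/dev/null || true)
git fetch -q origin 2>/dev/null || true
new=$(git rev-parse refs/remotes/origin/master 2>/dev/null || true)

if [ -n "$old" ] && [ -n "$new" ] && ! git merge-base --is-ancestor "$old" "$new"; then
    echo "WARNING: origin/master was force-updated: $old -> $new"
fi
```

This catches rewrites of a branch you already track, but of course does nothing against a server that simply withholds newer commits.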

Possible ways forward I don't consider the current implementation of OpenPGP signatures in git to be sufficient. Maybe, eventually, it will mature away from SHA-1 and the interface will be more reasonable, but I don't see that happening in the short term. So what do we do?

git evtag The git-evtag extension is a replacement for git tag -s. It's not designed to sign commits (it only verifies tags) but at least it uses a stronger algorithm (SHA-512) to checksum the tree, and will include everything in that tree, including blobs. If that sounds expensive to you, don't worry too much: it takes about 5 seconds to tag the Linux kernel, according to the author. Unfortunately, that checksum is then signed with GnuPG, in a manner similar to git itself, in that it exposes GnuPG output (which can be confusing) and is likely similarly vulnerable to mis-implementation of the GnuPG dialect as git itself. It also does not allow you to specify a keyring to verify against, so you need to trust GnuPG to make sense of the garbage that lives in your personal keyring (and, trust me, it doesn't). And besides, git-evtag is fundamentally the same as signed git tags: checksum everything and sign with GnuPG. The difference is it uses SHA-512 instead of SHA-1, but that's something git will eventually fix itself anyways.

kernel patch attestations The kernel also faces this problem. Linus Torvalds signs the releases with GnuPG, but patches fly all over mailing lists without any form of verification apart from clear-text email. So Konstantin Ryabitsev has proposed a new protocol to sign git patches which uses SHA256 to checksum the patch metadata, commit message and the patch itself, and then sign that with GnuPG. It's unclear to me what, if anything, this solves. As dkg argues, it would seem better to add OpenPGP support to git-send-email and teach git tools to recognize that (e.g. git-am), at least if you're going to keep using OpenPGP anyways. And furthermore, it doesn't resolve the problems associated with verifying a full archive either, as it only attests "patches".

jcat Unhappy with the current state of affairs, the author of fwupd (Richard Hughes) wrote his own protocol as well, called jcat, which provides signed "catalog files" similar to the ones provided in Microsoft windows. It consists of "gzip-compressed JSON catalog files, which can be used to store GPG, PKCS-7 and SHA-256 checksums for each file". So yes, it is yet again another wrapper to GnuPG, probably with all the flaws detailed above, on top of being a niche implementation, disconnected from git.

The Update Framework One more thing dkg correctly identified is:
<dkg> anarcat: even if you could do exactly what you describe, there are still some interesting wrinkles that i think would be problems for you. the big one: "git repo's latest commits" is a loophole big enough to drive a truck through. if your adversary controls that repo, then they get to decide which commits to include in the repo. (since every git repo is a view into the same git repo, just some have more commits than others)
In other words, unless you have a repository that has frequent commits (either because of activity or by a bot generating fake commits), you have to rely on the central server to decide what "the latest version" is. This is the kind of problem that binary package distribution systems like APT and TUF solve correctly. Unfortunately, those don't apply to source code distribution, at least not in git form: TUF only deals with "repositories" and binary packages, and APT only deals with binary packages and source tarballs. That said, there's actually no reason why git could not support the TUF specification. Maybe TUF could be the solution to ensure end-to-end cryptographic integrity of the source code itself. OpenPGP-signed tarballs are nice, and signed git tags can be useful, but from my experience, a lot of OpenPGP (or, more accurately, GnuPG) derived tools are brittle and do not offer clear guarantees, and definitely not to the level that TUF tries to address. This would require changes on the git servers and clients, but I think it would be worth it.

Other Projects

OpenBSD There are other tools trying to do parts of what GnuPG is doing, for example minisign and OpenBSD's signify. But they do not integrate with git at all right now. Although I did find a hack to use signify with git, it's kind of gross...

Golang Unsurprisingly, this is a problem everyone is trying to solve. Golang is planning on hosting a notary which would leverage a "certificate-transparency-style tamper-proof log" which would be run by Google (see the spec for details). But that doesn't resolve the "evil server" attack, if we treat Google as an adversary (and we should).

Python Python had OpenPGP going for a while on PyPI, but it's unclear if it ever did anything at all. Now the plan seems to be to use TUF but my hunch is that the complexity of the specification is keeping that from moving ahead.

Docker Docker and the container ecosystem has, in theory, moved to TUF in the form of Notary, "a project that allows anyone to have trust over arbitrary collections of data". In practice however, in my somewhat limited experience, setting up TUF and image verification in Docker is far from trivial.

Android and iOS Even in what is possibly one of the strongest models (at least in terms of user friendliness), mobile phones are surprisingly unclear about those kinds of questions. I had to ask whether Android had end-to-end authentication and I am still not clear on the answer. I have no idea what iOS does.

Conclusion One of the core problems with everything here is the common usability aspect of cryptography, and specifically the usability of verification procedures. We have become pretty good at encryption. The harder part (and a requirement for proper encryption) is verification. It seems that problem remains unsolved, in terms of usability. Even Signal, widely considered a success in terms of adoption and usability, doesn't properly solve that problem, as users regularly ignore "The security number has changed" warnings... So, even though they deserve a lot of credit in other areas, it seems unlikely that hardcore C hackers (e.g. git and kernel developers) will be able to resolve that problem without at least a little bit of help. And since TUF seems like the state-of-the-art specification around here, it would seem wise to start adopting it in the git community as well. Update: git 2.26 introduced a new gpg.minTrustLevel to "tell various signature verification codepaths the required minimum trust level", presumably to control how Git will treat keys in your keyrings, assuming the "trust database" is valid and up to date. For an interesting narrative of how "normal" (without PGP) git verification can fail, see also A Git Horror Story: Repository Integrity With Signed Commits.
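The gpg.minTrustLevel setting mentioned in the update is an ordinary git config knob; a sketch of how one might set it (the accepted values, per the git documentation, are undefined, never, marginal, fully and ultimate):

```shell
# Sketch: require at least "marginal" GnuPG trust for signature
# verification in this repository (needs git >= 2.26).
git config gpg.minTrustLevel marginal 2>/dev/null || true

# Signatures from keys below that trust level are then treated as
# invalid by the verification codepaths, e.g.:
git verify-commit HEAD 2>/dev/null || echo "commit did not verify"
```

Note this only tightens how git interprets GnuPG's existing trust database; it does not by itself establish the trust path discussed above.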

9 May 2017

Martin Pitt: Cockpit is now just an apt install away

Cockpit has now been in Debian unstable and Ubuntu 17.04 and devel, which means it's now a simple
$ sudo apt install cockpit
away for you to try and use. This metapackage pulls in the most common plugins, which are currently NetworkManager and udisks/storaged. If you want/need, you can also install cockpit-docker (if you grab docker.io from jessie-backports or use Ubuntu) or cockpit-machines to administer VMs through libvirt. Cockpit upstream also has a rather comprehensive Kubernetes/Openstack plugin, but this isn't currently packaged for Debian/Ubuntu as kubernetes itself is not yet in Debian testing or Ubuntu. After that, point your browser to https://localhost:9090 (or the host name/IP where you installed it) and off you go.

What is Cockpit? Think of it as an equivalent of a desktop (like GNOME or KDE) for configuring, maintaining, and interacting with servers. It is a web service that lets you log into your local or a remote (through ssh) machine using normal credentials (PAM user/password or SSH keys) and then starts a normal login session just as gdm, ssh, or the classic VT logins would. [Screenshots: Login screen, System page] The left side bar is the equivalent of a "task switcher", and the applications (i. e. modules for administering various aspects of your server) are run in parallel. The main idea of Cockpit is that it should not behave special in any way - it does not have any specific configuration files or state keeping and uses the same Operating System APIs and privileges like you would on the command line (such as lvmconfig, the org.freedesktop.UDisks2 D-Bus interface, reading/writing the native config files, and using sudo when necessary). You can simultaneously change stuff in Cockpit and in a shell, and Cockpit will instantly react to changes in the OS, e. g. if you create a new LVM PV or a network device gets added. This makes it fundamentally different to projects like webmin or ebox, which basically own your computer once you use them the first time. It is an interface for your operating system, which even reflects in the branding: as you see above, this is Debian (or Ubuntu, or Fedora, or wherever you run it on), not "Cockpit".

Remote machines In your home or small office you often have more than one machine to maintain. You can install cockpit-bridge and cockpit-system on those for the most basic functionality, configure SSH on them, and then add them on the Dashboard (I add a Fedora 26 machine here); from then on you can switch between them on the top left, and everything works and feels exactly the same, including using the terminal widget. [Screenshots: Add remote, Remote terminal] The Fedora 26 machine has some more Cockpit modules installed, including a lot of playground ones, thus you see a lot more menu entries there.

Under the hood Beneath the fancy Patternfly/React/JavaScript user interface is the Cockpit API and protocol, which particularly fascinates me as a developer as that is what makes Cockpit so generic, reactive, and extensible. This API connects the world of the web, which speaks IPs and host names, ports, and JSON, to the local-host-only world of operating systems which speak D-Bus, command line programs, configuration files, and even use fancy techniques like passing file descriptors through Unix sockets. In an ideal world, all Operating System APIs would be remotable by themselves, but they aren't. This is where the cockpit bridge comes into play. It is a JSON (i. e. ASCII text) stream protocol that can control arbitrarily many channels to the target machine for reading, writing, and getting notifications. There are channel types for running programs, making D-Bus calls, reading/writing files, getting notified about file changes, and so on. Of course every channel can also act on a remote machine. One can play with this protocol directly. E. g. this opens a (local) D-Bus channel named d1 and gets a property from systemd's hostnamed:
$ cockpit-bridge --interact=---
{ "command": "open", "channel": "d1", "payload": "dbus-json3", "name": "org.freedesktop.hostname1" }
---
d1
{ "call": [ "/org/freedesktop/hostname1", "org.freedesktop.DBus.Properties", "Get",
          [ "org.freedesktop.hostname1", "StaticHostname" ] ],
  "id": "hostname-prop" }
---
and it will reply with something like
d1
{ "reply": [ [ { "t": "s", "v": "donald" } ] ], "id": "hostname-prop" }
---
("donald" is my laptop's name). By adding additional parameters like host and passing credentials these can also be run remotely through logging in via ssh and running cockpit-bridge on the remote host. Stef Walter explains this in detail in a blog post about Web Access to System APIs. Of course Cockpit plugins (both internal and third-party) don't directly speak this, but use a nice JavaScript API. As a simple example how to create your own Cockpit plugin that uses this API you can look at my schroot plugin proof of concept which I hacked together at DevConf.cz in about an hour during the Cockpit workshop. Note that I never before wrote any JavaScript and I didn't put any effort into design whatsoever, but it does work.

Next steps Cockpit aims at servers and getting third-party plugins for talking to your favourite part of the system, which means we really want it to be available in Debian testing and stable, and Ubuntu LTS. Our CI runs integration tests on all of these, so each and every change that goes in is certified to work on Debian 8 (jessie) and Ubuntu 16.04 LTS, for example. But I'd like to replace the external PPA/repository on the Install instructions with just "it's readily available in -backports"! Unfortunately there are some procedural blockers there: the Ubuntu backport request suffers from understaffing, and the Debian stable backport is blocked on getting it into testing first, which in turn is blocked by the freeze. I will soon ask for a freeze exception into testing; after all, it's just about zero risk - it's a new leaf package in testing. Have fun playing around with it, and please report bugs! Feel free to discuss and ask questions on the Google+ post.

2 January 2016

Daniel Pocock: The great life of Ian Murdock and police brutality in context

Over the last week, people have been saying a lot about the wonderful life of Ian Murdock and his contributions to Debian and the world of free software. According to one news site, a San Francisco police officer, Grace Gatpandan, has been doing the opposite, starting a PR spin operation, leaking snippets of information about what may have happened during Ian's final 24 hours. Sadly, these things are now starting to be regurgitated without proper scrutiny by the mainstream press (note the erroneous reference to SFGate with link to SFBay.ca, this is British tabloid media at its best). The report talks about somebody (no suggestion that it was even Ian) "trying to break into a residence". Let's translate that from the spin-doctor-speak back to English: it is the silly season, when many people have a couple of extra drinks and do silly things like losing their keys. "a residence", or just their own home perhaps? Maybe some AirBNB guest arriving late to the irritation of annoyed neighbours? Doesn't the choice of words make the motive sound so much more sinister? Nobody knows the full story and nobody knows if this was Ian, so snippets of information like this are inappropriate, especially when somebody is deceased. Did they really mean to leave people with the impression that one of the greatest visionaries of the Linux world was also a cat burglar? That somebody who spent his life giving selflessly and generously for the benefit of the whole world (his legacy is far greater than Steve Jobs, as Debian comes with no strings attached) spends the Christmas weekend taking things from other people's houses in the dark of the night? The report doesn't mention any evidence of a break-in or any charges for breaking-in. If having a few drinks and losing your keys in December is such a sorry state to be in, many of us could potentially be framed in the same terms at some point in our lives.
That is one of the reasons I feel so compelled to write this: somebody else could be going through exactly the same experience at the moment you are reading this. Any of us could end up facing an assault as unpleasant as the tweets imply at some point in the future. At least I can console myself that as a privileged white male, the risk to myself is much lower than for those with mental illness, the homeless, transgender, Muslim or black people but as the tweets suggest, it could be any of us. The story reports that officers didn't actually come across Ian breaking in to anything, they encountered him at a nearby street corner. If he had weapons or drugs or he was known to police that would have almost certainly been emphasized. Is it right to rush in and deprive somebody of their liberties without first giving them an opportunity to identify themselves and possibly confirm if they had a reason to be there? The report goes on, "he was belligerent", "he became violent", "banging his head" all by himself. How often do you see intelligent and successful people like Ian Murdock spontaneously harming themselves in that way? Can you find anything like that in any of the 4,390 Ian Murdock videos on YouTube? How much more frequently do you see reports that somebody "banged their head", all by themselves of course, during some encounter with law enforcement? Do police never make mistakes like other human beings? If any person was genuinely trying to spontaneously inflict a head injury on himself, as the police have suggested, why wouldn't the police leave them in the hospital or other suitable care? Do they really think that when people are displaying signs of self-harm, rounding them up and taking them to jail will be in their best interests? Now, I'm not suggesting this started out with some sort of conspiracy. 
Police may have been at the end of a long shift (and it is a disgrace that many US police are not paid for their overtime) or just had a rough experience with somebody far more sinister. On the other hand, there may have been a mistake, gaps in police training or an inappropriate use of a procedure that is not always justified, like a strip search, that causes profound suffering for many victims. A select number of US police forces have been shamed around the world for a series of incidents of extreme violence in recent times, including the death of Michael Brown in Ferguson, shooting Walter Scott in the back, death of Freddie Gray in Baltimore and the attempts of Chicago's police to run an on-shore version of Guantanamo Bay. Beyond those highly violent incidents, the world has also seen the abuse of Ahmed Mohamed, the Muslim schoolboy arrested for his interest in electronics and in 2013, the suicide of Aaron Swartz which appears to be a direct consequence of the "Justice" department's obsession with him. What have the police learned from all this bad publicity? Are they changing their methods, or just hiring more spin doctors? If that is their response, then doesn't it leave them with a cruel advantage over those people who were deceased? Isn't it standard practice for some police to simply round up anybody who is a bit lost and write up a charge sheet for resisting arrest or assaulting an officer as insurance against questions about their own excessive use of force? When British police executed Jean Charles de Menezes on a crowded tube train and realized they had just done something incredibly outrageous, their PR office went to great lengths to try and protect their image, even photoshopping images of Menezes to make him look more like some other suspect in a wanted poster. To this day, they continue to refer to Menezes as a victim of the terrorists, could they be any more arrogant? 
While nobody believes the police woke up that morning thinking "let's kill some random guy on the tube", it is clear they made a mistake and like many people (not just police), they immediately prioritized protecting their reputation over protecting the truth. Nobody else knows exactly what Ian was doing and exactly what the police did to him. We may never know. However, any disparaging or irrelevant comments from the police should be viewed with some caution. The horrors of incarceration It would be hard for any of us to understand everything that an innocent person goes through when detained by the police. The recently released movie about The Stanford Prison Experiment may be an interesting place to start, a German version produced in 2001, Das Experiment, is also very highly respected. The United States has the largest prison population in the world and the second-highest per-capita incarceration rate. Many, including some on death row, are actually innocent, in the wrong place at the wrong time, without the funds to hire an attorney. The system, and the police and prison officers who operate it, treat these people as packages on a conveyor belt, without even the most basic human dignity. Whether their encounter lasts for just a few hours or decades, is it any surprise that something dies inside them when they discover this cruel side of American society? Worldwide, there is an increasing trend to make incarceration as degrading as possible. People may be innocent until proven guilty, but this hasn't stopped police in the UK from locking up and strip-searching over 4,500 children in a five-year period, would these children go away feeling any different than if they had an encounter with Jimmy Savile or Rolf Harris? One can only wonder what they do to adults. What all this boils down to is that people shouldn't really be incarcerated unless it is clear the danger they pose to society is greater than the danger they may face in a prison. 
What can people do for Ian and for justice? Now that these unfortunate smears have appeared, it would be great to try and fill the Internet with stories of the great things Ian has done for the world. Write whatever you feel about Ian's work and your own experience of Debian. While the circumstances of the final tweets from his Twitter account are confusing, the tweets appear to be consistent with many other complaints about US law enforcement. Are there positive things that people can do in their community to help reduce the harm? Sending books to prisoners (the UK tried to ban this) can make a difference. Treat them like humans, even if the system doesn't. Recording incidents of police activities can also make a huge difference, such as the video of the shooting of Walter Scott or the UK police making a brutal unprovoked attack on a newspaper vendor. Don't just walk past a situation and assume everything is under control. People making recordings may find themselves in danger, it is recommended to use software that automatically duplicates each recording, preferably to the cloud, so that if the police ask you to delete such evidence, you can let them watch you delete it and still have a copy. Can anybody think of awards that Ian Murdock should be nominated for, either in free software, computing or engineering in general? Some, like the prestigious Queen Elizabeth Prize for Engineering can't be awarded posthumously but others may be within reach. Come and share your ideas on the debian-project mailing list, there are already some here. Best of all, Ian didn't just build software, he built an organization, Debian. Debian's principles have helped to unite many people from otherwise different backgrounds and carry on those principles even when Ian is no longer among us. Find out more, install it on your computer or even look for ways to participate in the project.

7 November 2014

Andrew Cater

At mini-Debconf Cambridge:

Much unintentional chaos and hilarity and world class problem solving yesterday.

A routine upgrade from Wheezy to Jessie died horribly on my laptop when UEFI variable space filled, leaving "No Operating System" on screen.

Cue much running around: Chris Boot, Colin Walters and Steve dug around, booted the system using a rescue CD and so on. Lots more digging, including helpful posts by mjg59: a BIOS update might solve the problem.

Flashing BIOS did clear the variables and variable space and it all worked perfectly thereafter. This had the potential for turning the laptop into a brick under UEFI (but still working under legacy boot).
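For anyone wanting to spot this condition before an upgrade dies, here is a rough sketch (not something from the original post) of measuring how much space the firmware's variables occupy, using the efivarfs mount point that modern kernels expose; the path and the idea of summing file sizes are assumptions, and efivarfs only reflects what the firmware chooses to report:

```python
import os

def efivars_usage(path="/sys/firmware/efi/efivars"):
    """Sum the sizes of all UEFI variables exposed via efivarfs.

    Returns total bytes used, or None if the directory does not
    exist (e.g. the machine booted in legacy/BIOS mode).
    """
    if not os.path.isdir(path):
        return None
    total = 0
    for name in os.listdir(path):
        try:
            total += os.path.getsize(os.path.join(path, name))
        except OSError:
            pass  # a variable can vanish between listdir() and stat()
    return total

if __name__ == "__main__":
    used = efivars_usage()
    if used is None:
        print("Not a UEFI boot (no efivarfs mounted)")
    else:
        print(f"UEFI variables occupy {used} bytes")
```

This only shows usage, not the firmware's actual free-space threshold; buggy firmware of the era could wedge well before the store was literally full, which is why the BIOS reflash was the real fix here.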

As it is, it all worked perfectly - but where else would you get _the_ Grub maintainer, 2 x UEFI experts and a broken laptop all in the same room ?

If it hadn't happened yesterday, it would have happened at home and I'd have been left with nothing. As it is, we all learnt/remembered stuff and had a useful time fixing it.

10 October 2013

Russ Allbery: One more haul

Just a few more recently-released books that I had to pick up. Sheila Bair Bull by the Horns (non-fiction)
Elizabeth Bear Book of Iron (sff)
Steven Brust & Skyler White The Incrementalists (sff)
J.M. Coetzee The Childhood of Jesus (mainstream)
Elizabeth Wein Rose Under Fire (mainstream)
Walter Jon Williams Knight Moves (sff) Time to get back into the reading habit. That's the plan for the next couple of weeks.

22 January 2013

Russ Allbery: Review: Fantasy & Science Fiction, March/April 2011

Review: Fantasy & Science Fiction, March/April 2011
Editor: Gordon van Gelder
Issue: Volume 120, No. 3 & 4
ISSN: 1095-8258
Pages: 258
Charles de Lint's book review column sticks with the sorts of things he normally reviews: urban and contemporary fantasy and young adult. Predictably, I didn't find that much of interest. But I was happy to see that not all the reviews were positive, and he talked some about how a few books didn't work. I do prefer seeing a mix of positive and negative (or at least critical) reviews. James Sallis's review column focuses entirely on Henry Kuttner and C.L. Moore (by way of reviewing a collection). I'm always happy to see this sort of review. But between that and de Lint's normal subject matter, this issue of F&SF was left without any current science fiction reviews, which was disappointing. Lucius Shepard's movie review column features stunning amounts of whining, even by Shepard's standards. The topic du jour is how indie films aren't indie enough, mixed with large amounts of cane-shaking and decrying of all popular art. I find it entertaining that the F&SF film review column regularly contains exactly the sort of analysis that one expects from literary gatekeepers who are reviewing science fiction and fantasy. Perhaps David Langford should consider adding an "As We See Others" feature to Ansible cataloging the things genre fiction fans say about popular movies. "Scatter My Ashes" by Albert E. Cowdrey: The protagonist of this story is an itinerant author who has been contracted to write a family history (for $100K, which I suspect is a bit of tongue-in-cheek wish fulfillment) and has promptly tumbled into bed with his employer. But he is somewhat serious about the writing as well, and is poking around in family archives and asking relatives about past details. There is a murder (maybe) in the family history, not to mention some supernatural connections. Those familiar with Cowdrey's writing will recognize the mix of historical drama, investigation, and the supernatural. Puzzles are, of course, untangled, not without a bit of physical danger. 
Experienced fantasy readers will probably guess at some of the explanation long before the protagonist does. Like most Cowdrey, it's reliably entertaining, but I found it a bit thin. (6) "A Pocketful of Faces" by Paul Di Filippo: Here's a bit of science fiction, and another mystery, this time following the basic model of a police procedural. The police in this case are enforcing laws around acceptable use of "faces" in a future world where one can clone someone's appearance from their DNA and then mount it on a programmable android. As you might expect from that setup, the possibilities are lurid, occasionally disgusting, and inclined to give the police nightmares. After some scene-setting, the story kicks in with the discovery of the face of a dead woman who, at least on the surface, no one should have any motive to clone. There were a few elements of the story that were a bit too disgusting for me, but the basic mystery plot was satisfying. I thought the ending was a let-down, however. Di Filippo tries to complicate the story and, I thought, went just a little too far, leaving motives and intent more confusing than illuminating. (6) "The Paper Menagerie" by Ken Liu: Back to fantasy, this time using a small bit of magic to illustrate the emotional conflicts and difficulties of allegiance for second-generation immigrants. Jack is the son of an American father and a Chinese mother who was a mail-order bride. He's young at the start of the story and struggling with the embarrassment and humiliation that he feels at his mother's history and the difficulties he has blending in with other kids, leading to the sort of breathtaking cruelty that comes so easily from teenagers who are too self-focused and haven't yet developed adult empathy. I found this very hard to read. The magic is beautiful, personal, and very badly damaged by the cruelty in ways that can never really be fixed. 
It's a sharp reminder of the importance of being open-hearted, but it's also a devastating reminder that the lesson is normally learned too late. Not the story to read if you're prone to worrying about how you might have hurt other people. (6) "The Evening and the Morning" by Sheila Finch: This long novella is about a third of the issue and is, for once, straight science fiction, a somewhat rare beast in F&SF these days. It's set in the far future, among humans who are members of the Guild of Xenolinguists and among aliens called the Venatixi, and it's about an expedition back to the long-abandoned planet of Earth. I had a lot of suspension of disbelief problems with the setup here. While Earth has mostly dropped out of memory, there's a startling lack of curiosity about its current condition among the humans. Finch plays some with transportation systems and leaves humanity largely dependent on other races to explain the failure to return to Earth, but I never quite bought it. It was necessary to set up the plot, which is an exploration story with touches of first contact set on an Earth that's become alien to the characters, but it seemed remarkably artificial to me. But, putting that aside, I did get pulled into the story. Its emotional focus is one of decline and senescence, a growing sense of futility, that's halted by exploration, mystery, and analysis. The question of what's happened on Earth is inherently interesting and engaging, and the slow movement of the story provides opportunities to build up to some eerie moments. The problem, continuing a theme for this issue, is the ending. Some of the reader's questions are answered, but most of the answers are old, well-worn paths in science fiction. The emotional arc of the story is decidedly unsatisfying, at least to me. 
I think I see what Finch was trying to do: there's an attempted undermining of the normal conclusion of this sort of investigation to make a broader point about how to stay engaged in the world. But it lacked triumph and catharsis for me, partly because the revelations that we get are too pedestrian for the build-up they received. It's still an interesting story, but I don't think it entirely worked. (6) "Night Gauntlet" by Walter C. DeBill, Jr., et al.: The full list of authors for this story (Walter C. DeBill, Jr., Richard Gavin, Robert M. Price, W.H. Pugmire, Jeffrey Thomas, and Don Webb) provides one with the first clue that it's gone off the rails. Collaborative storytelling, where each author tries to riff off the work of the previous author while spinning the story in a different direction, is something that I think works much better orally, particularly if you can watch facial expressions while the authors try to stump each other. In written form, it's a recipe for a poorly-told story. That's indeed what we get here. The setup is typical Cthulhu mythos stuff: a strange scientist obsessed with conspiracy theories goes insane, leaving behind an office with a map of linkages between apparently unrelated places. The characters in the story also start going insane for similar reasons, leading up to a typical confrontation with things man was not meant to know, or at least pay attention to. If you like that sort of thing, you may like this story better than I did, but I thought it was shallow and predictable. (3) "Happy Ending 2.0" by James Patrick Kelly: More fantasy, this time of the time travel variety. (I call it fantasy since there's no scientific explanation for the time travel and it plays a pure fantasy role in the story.) That's about as much as I can say without giving away the plot entirely (it's rather short). I can see what Kelly was going for, and I think he was largely successful, but I'm not sure how to react to it. 
The story felt like it reinforced some rather uncomfortable stereotypes about romantic relationships, and the so-called happy ending struck me as the sort of situation that was going to turn very nasty and very uncomfortable about five or ten pages past where Kelly ended the story. (5) "The Second Kalandar's Tale" by Francis Marion Soty: The main question I have about this story is one that I can't answer without doing more research than I feel like doing right now: how much of this is original to Soty and how much of it is straight from Burton's translation of One Thousand and One Nights. Burton is credited for the story, so I suspect quite a lot of this is from the original. Whether one would be better off just reading the original, or if Soty's retelling adds anything substantial, are good questions that I don't have the background to answer. Taken as a stand-alone story, it's not a bad one. It's a twisting magical adventure involving a djinn, a captive woman, some rather predictable fighting over the woman, and then a subsequent adventure involving physical transformation and a magical battle reminiscent of T.H. White. (Although I have quite likely reversed the order of inspiration if as much of this is straight from Burton as I suspect.) Gender roles, however, are kind of appalling, despite the presence of a stunningly powerful princess, due to the amount of self-sacrifice expected from every woman in the story. Personally, I don't think any of the men in the story are worth anywhere near the amount of loyalty and bravery that the women show. Still, it was reasonably entertaining throughout, in exactly the way that I would expect a One Thousand and One Nights tale to be. Whether there's any point in reading it instead of the original is a question I'll leave to others. (7) "Bodyguard" by Karl Bunker: This is probably the best science fiction of the issue. 
The first-person protagonist is an explorer living with an alien race, partly in order to flee the post-singularity world of uploaded minds and apparent stagnation that Earth has become. It's a personal story that uses his analysis of alien mindsets (and his interaction with his assigned alien bodyguard) to flesh out his own desires, emotional background, and reactions to the world. There are some neat linguistic bits here that I quite enjoyed, although I wish they'd been developed at even more length. (The alien language is more realistic than it might sound; there are some human languages that construct sentences in a vaguely similar way.) It's a sad, elegiac story, but it grew on me. (7) "Botanical Exercises for Curious Girls" by Kali Wallace: One has to count this story as science fiction as well, although for me it had a fantasy tone because the scientific world seems to play by fantasy rules from the perspective of the protagonist. Unpacking that perspective is part of the enjoyment of the story. At the start, she seems to be a disabled girl who is being cared for by a strange succession of nurses who vary by the time of day, but as the story progresses, it becomes clear that something much stranger is going on. There are moments that capture a sense of wonder, reinforced by the persistently curious and happy narrative voice, but both are undercut by a pervasive sense of danger and dread. This is a light story written about rather dark actions. My biggest complaint with the story is that it doesn't so much end as wander off into the sunset. It set up conflict within a claustrophobic frame, so I can understand the thematic intent of breaking free of that frame, but in the process I felt like the story walked away from all of the questions and structure that it created and ended in a place that felt less alive with potential than formless and oddly pointless. I think I wanted it to stay involved and engaged with the environment it had created. 
(6) "Ping" by Dixon Wragg: I probably should just skip this, since despite the table of contents billing and the full title introduction, it's not a story. It's a two-line joke. But it's such a bad two-line joke that I had to complain about it. I have no idea why F&SF bothered to reprint it. (1) "The Ifs of Time" by James Stoddard: This certainly fits with the Arabian Nights story in this issue. The timekeeper of a vast and rambling castle (think Gormenghast taken to the extreme) wanders into a story-telling session in a distant part of the castle. The reader gets to listen to four (fairly good) short stories about time, knowledge, and memory, told in four very different genres. All of this does relate to why the timekeeper is there, and the frame story is resolved by the end, but the embedded stories are the best part; each of them is interesting in a different way, and none of them outlast their welcome. This was probably the strongest story of this issue. (7) Rating: 6 out of 10

1 January 2013

Russ Allbery: Review: City of Diamond

Review: City of Diamond, by Jane Emerson
Publisher: DAW
Copyright: March 1996
ISBN: 0-88677-704-6
Format: Mass market
Pages: 624
The setting is deep into the star-faring future of humanity. Humans have settled many other worlds, thanks largely to technology from a race named the Curosa. There is a thriving network of settlement based around a sort of warp conduit network. But, standing apart from most human settlements, are three huge ships: the City of Opal, the City of Pearl, and the City of Diamond. Their inhabitants are devout members of the Redemptionist faith, a merging of Christian (primarily Catholic) belief and Curosa religion. Their ships are gifts from the now-departed Curosa, featuring drives unknown to the rest of humanity that allow them to ignore the normal transport network. They are meant to witness and spread their religion, but now, six centuries later, they primarily trade. Much of this is not apparent at the start. City of Diamond opens with Adrian Mercati, chosen heir of the current dying Protector, dodging people in what seems to be an oddly-organized city. Impudent, occasionally flippant, utterly charming, and in love with danger, he inherits the position of Protector shortly after we meet him. Emerson slowly reveals more of the nature of the ship while Adrian is shocking (and charming) the nobility and the church, and consorting with demons. Or, at least, one demon: Tal, an entirely human-appearing creature except for his eyes, who the Redemptionists claim is born without a soul. Slowly, it becomes clear that he's actually the offspring of an alien and a human, a mating of species whose children cannot socialize as human and behave in ways that seem like human sociopaths. Tal is no exception, but his interests align with Adrian's for the time being. 
To those two viewpoint characters, Emerson adds four more: a low-rank administrator named Spider who works for Tal after Tal saved him from death by recycling, Adrian's politically-arranged bride-to-be from the rival (and even more religious) ship City of Opal, a guard from a French-descended slum on City of Opal, and a dangerously honorable and literal mercenary from an off-shoot culture of humanity that's now only a legend. This is a book that tends to sprawl. There's quite a lot going on, starting with the negotiations between City of Diamond and City of Opal to put to rest the remnants of a civil war and leading to the search for a legendary Curosa artifact on the planet they're both visiting. All of the characters have their own agendas, their own history, and their own goals, and are pursuing them in parallel and occasionally in opposition. Emerson takes the time to let all of this build. Despite that, there is very little in this book that drags or bores, and that's due primarily to the strength of the characters. Every one of the viewpoint characters is unique, bringing a different perspective to the story, and every one of them changes and grows over the course of the story. They also play off of each other in delightful ways. This is a book that makes full use of the fact that three characters make seven character interactions. For example, Tal by himself is logical but just slightly askew, Tal seen by Spider is uncomfortable and worrisome, Tal and Adrian is a dance of mutual amusement and respect, and Tal and Keylinn finding points of commonality is just delightful. I did wish that there was a bit more Keylinn and Tal and a bit less of Spider, but once Emerson built her world, introduced her characters, and got the story moving, there aren't many weak pieces. Part of the fun is figuring out the world background. 
There are almost no infodumps here; Emerson just starts telling stories of characters in the surroundings in which they find themselves, and the reader is left to piece things together from background description, the occasional explanation from one character to another, and parallels with long-standing SF tropes. It's not a puzzle book, but it is a book where the reader is dropped as an alien into a fully-developed culture. In both that way and in the use of nobility, religion, and formal hierarchy against a starfaring backdrop, City of Diamond reminds me of the sort of classic space opera that led up to Star Wars but isn't seen as much these days. It's also funny. I was unsurprised to discover that Emerson is a pseudonym for Doris Egan, who (in addition to writing some other SF) is an occasional writer and producer for House. The humor isn't entirely obvious at the start, although Adrian frequently provokes a grin from the start of the book, but Tal's perceptive but obliviously blunt comments become more and more fun as the book progresses, and both Iolanthe (Adrian's bride) and Keylinn (the mercenary, sort of) are a delight. I want to see more of both of them, but sadly the book ends just as they (particularly Iolanthe) are hitting their strides. And that's the biggest problem with City of Diamond: it's a failed (or at least postponed) trilogy. As with Walter Jon Williams's Metropolitan and City on Fire, there were supposed to be more books, but apparently either the publisher lost interest or other projects took precedence. The story does come to something of a conclusion, but most of the major story arcs aren't resolved. Emerson is still bringing more guns onto the set late in the book. None of the villains truly get their just deserts, nor do most of the character arcs fully resolve, although friendships and more have formed satisfyingly by the end of the book. 
There is obviously far more to both these characters and this universe, including an entire ship that was never visited and an alien whose presence should prove startling. This is a great book: warm, funny, full of interesting characters, and with a nicely complex background universe. If it were complete in itself, or if the sequels had materialized, I would recommend it unreservedly. As is, you will have to weigh your dislike of unfinished stories against its stand-alone merits. But I enjoyed it a great deal. Rating: 7 out of 10

31 July 2012

Martin Pitt: My impressions from GUADEC

I have had the pleasure of attending GUADEC in full this year. TL;DR: A lot of great presentations + lots of hall conversations about QA stuff + the obligatory beer and beach. Last year I just went to the hackfest, and I never made it to any previous one, so GUADEC 2012 was a kind of first-time experience for me. It was great to put some faces and personal impressions to a lot of the people I have worked with for many years, as well as catching up with others that I did meet before. I discussed various hardware/software testing stuff with Colin Walters, Matthias Clasen, Lennart Poettering, Bertrand Lorentz, and others, so I have a better idea how to proceed with automated testing in plumbing and GNOME now. It was also really nice to meet my fellow PyGObject co-maintainer Paolo Borelli, as well as seeing Simon Schampijer and Ignacio Casal Quinteiro again. No need to replicate the whole schedule here (see for yourself on the interwebs), so I just want to point out some personal highlights in the presentations: There were a lot of other good ones, some of them technical and some amusing and enlightening, such as Federico's review of the history of GNOME. On Monday I prepared and participated in a PyGObject hackfest, but I'll blog about this separately. I want to thank all presenters for the excellent program, as well as the tireless GUADEC organizer team for making everything so smooth and working flawlessly! Great job, and see you again in Strasbourg or Brno!

8 January 2012

Russell Coker: My First Cruise

A few weeks ago I went on my first cruise, from Sydney to Melbourne on the Dawn Princess. VacationsToGo.com (a discount cruise/resort web site) has a review of the Dawn Princess [1], they give it 4 stars out of a possible 6. The 6 star ships seem to have discount rates in excess of $500 per day per person, much more than I would pay. The per-person rate is based on two people sharing a cabin, it seems that most cabins can be configured as a double bed or twin singles. If there is only one person in a cabin then they pay almost double the normal rate. It seems that most cruise ships have some support for cabins with more than two people (at a discount rate), but the cabins which support that apparently sell out early and don't seem to be available when booking a cheap last-minute deal over the Internet. So if you want a cheap cruise then you need to have an even number of people in your party. The cruise I took was two nights and cost $238 per person, it was advertised at something like $220 but then there are extra fees when you book (which seems to be the standard practice). The Value of Cruises To book a hotel room that is reasonably comfortable (4 star) in Melbourne or Sydney you need to spend more than $100 per night for a two person room if using Wotif.com. The list price of a 4 star hotel room for two people in a central city area can be well over $300 per night. So the cost for a cruise is in the range of city hotel prices. The Main Dining Room (MDR) has a quality of food and service that compares well with city restaurants. The food and service in the Dawn Princess MDR wasn't quite as good as Walter's Wine Bar (one of my favorite restaurants). But Walter's costs about $90 for a four course meal. The Dawn Princess MDR has a standard 5 course meal (with a small number of options for each course) and for no extra charge you can order extra serves. When you make it a 7 course meal the value increases. 
I really doubt that I could find any restaurant in Melbourne or Sydney that would serve a comparable meal for $119. You could consider a cruise to be either paying for accommodation and getting everything else for free or to be paying for fine dining in the evening and getting everything else for free. Getting both for the price of one (along with entertainment etc) is a great deal! I can recommend a cruise as a good holiday which is rather cheap if you do it right. That is if you want to spend lots of time swimming and eating quality food. How Cruise Companies Make Money There are economies of scale in running a restaurant, so having the MDR packed every night makes it a much more economic operation than a typical restaurant which has quiet nights. But the expenses in providing the services (which involves a crew that is usually almost half the number of passengers) are considerable. Paying $119 per night might cover half the wages of an average crew member but not much more. The casino is one way that the cruise companies make money. I can understand that someone taking a luxury vacation might feel inclined to play blackjack or something else that seems sophisticated. But playing poker machines on a cruise ship is rather sad; not that I'm complaining, I'm happy for other people to subsidise my holidays! Alcohol is rather expensive on board. Some cruise companies allow each passenger to take one bottle of wine and some passengers try to smuggle liquor on board. On the forums some passengers report that they budget to spend $1000 per week on alcohol! If I wanted a holiday that involved drinking that much I'd book a hotel at the beach, mix up a thermos full of a good cocktail in my hotel room, and then take my own deck-chair to the beach. It seems that the cruise companies specialise in extracting extra money from passengers (I don't think that my experience with the Dawn Princess is unusual in any way). 
Possibly the people who pay $1000 per night or more for a cruise don't get the nickel-and-dime treatment, but for affordable cruises I think it's standard. You have to be in the habit of asking the price whenever something is offered and be aware of social pressure to spend money. When I boarded the Dawn Princess there was a queue, which I joined as everyone did. It turned out that the queue was to get a lanyard for holding the key-card (which opens the cabin door and is used for payment). After giving me the lanyard they then told me that it cost $7.95 so I gave it back. Next time I'll take a lanyard from some computer conference and use it to hold the key-card, it's handy to have a lanyard but I don't want to pay $7.95. Finally some things are free at some times but not at others, fruit juice is free at the breakfast buffet but expensive at the lunch buffet. Coffee at the MDR is expensive but it was being served for free at a cafe on deck. How to have a Cheap Cruise VacationsToGo.com is the best discount cruise site I've found so far [2]. Unfortunately they don't support searching on price, average daily price, or on a customised number of days (I can search for 7 days but not 7 or less). For one of the cheaper vessels it seems that anything less than $120 per night is a good deal and there are occasional deals as low as $70 per night. Princess cruises allows each passenger to bring one bottle of wine on board. If you drink that in your cabin (to avoid corkage fees) then that can save some money on drinks. RumRunnerFlasks.com sells plastic vessels for smuggling liquor on board cruise ships [3]. I wouldn't use one myself but many travelers recommend them highly. Chocolate and other snack foods are quite expensive on board and there are no restrictions on bringing your own, so the cheap options are to bring your own snack food or to snack from the buffet (which is usually open 24*7). 
Non-alcoholic drinks can be expensive but you can bring your own and use the fridge in your cabin to store them, but you have to bring cans or pressurised bottles so it doesn't look like you are smuggling liquor on board. Generally try not to pay for anything on board; there's enough free stuff if you make good choices. Princess offers free on-board credit (money for buying various stuff on-board) for any cruise that you book while on a cruise. The OBC starts at $25 per person and goes as high as $150 per person depending on how expensive the cruise is. Generally booking cruises while on-board is a bad idea as you can't do Internet searches. But as Princess apparently doesn't allow people outside the US to book through a travel agent and as they only require a refundable deposit that is not specific to any particular cruise there seems no down-side. In retrospect I should have given them a $200 deposit on the off chance that I'll book another cruise with them some time in the next four years. Princess provide a book of discount vouchers in every cabin; mostly this is a guide to what is most profitable for them and thus what you should avoid if you want a cheap holiday. But there are some things that could be useful, such as a free thermos cup with any cup of coffee: if you buy coffee then you might as well get the free cup. Also they have some free contests that might be worth entering.

Entertainment

It's standard practice to have theatrical shows on board; some sort of musical is standard and common options include a magic show and comedy (it really depends on which cruise you take). On the Dawn Princess the second seating for dinner started at 8PM (the time apparently varies depending on the cruise schedule) which was the same time as the first show of the evening. I get the impression that this sort of schedule is common, so if you want to see two shows in one night then you need to have the early seating for dinner.
The cruise that I took lasted two nights and had two shows (a singing/dancing show and a magic show), so it was possible to have the late seating for dinner and still see all the main entertainment unless you wanted to see one show twice. From reading the CruiseCritic.com forum [4] I get the impression that the first seating for dinner is the most popular. On some cruises it's easy to switch from first to second seating but not always possible to switch from second to first. Therefore the best strategy seems to be to book the first seating.

Things to do Before Booking a Cruise

Read the CruiseCritic.com forum for information about almost everything. Compare prices for a wide variety of cruises to get a feel for what the best deals are. While $100 per night is a great deal for the type of cruise that interests me and is in my region it may not be a good match for the cruises that interest you. Read overview summaries of cruise lines that operate in your area. Some cruise lines cater for particular age groups and interests and are thus unappealing to some people, e.g. anyone who doesn't have children probably won't be interested in Disney cruises. Read reviews of the ships; there is usually great variation between different ships run by one line. One factor is when the ships have been upgraded with recently developed luxury features. Determine what things need to be booked in advance. Some entertainment options on board support a limited number of people and get booked out early. For example if you want to use the VR golf simulator on the Dawn Princess you should probably check in early and make a reservation as soon as you are on board. The forums are good for determining what needs to be booked early. Also see my post about booking a cruise and some general discussion of cruise related things [5]. Related posts:
  1. Cruises It seems that in theory cruises can make for quite...
  2. Combat Wasps One of the many interesting ideas in Peter F. Hamilton's...
  3. Victoria Hotel Melbourne I have just stayed at the Victoria Hotel Melbourne. I...

28 October 2011

Lars Wirzenius: On the future of Linux distributions

Executive summary Debian should consider:

Introduction

I've taken a job with Codethink, as part of a team to develop a new embedded Linux system, called Baserock. Baserock is not a distribution as such, but deals with many of the same issues. I thought it might be interesting to write out my new thoughts related to this, since they might be useful for proper distributions to think about too. I'll hasten to add that many of these thoughts are not originally mine. I shamelessly adopt any idea that I think is good. The core ideas for Baserock come from Rob Taylor, Daniel Silverstone, and Paul Sherwood, my colleagues and bosses at Codethink. At the recent GNOME Summit in Montreal, I was greatly influenced by Colin Walters. This is also not an advertisement for Baserock, but since many of my current thoughts come from that project, I'll discuss things in the context of Baserock. Finally, I'm writing this to express my personal opinion and thoughts. I'm not speaking for anyone else, least of all Codethink.

On the package abstraction

Baserock abandons the idea of packages for all individual programs. In the early 1990s, when Linux was new, and the number of packages in a distribution could be counted on two hands in binary, packages were a great idea. It was feasible to know at least something about every piece of software, and pick the exact set of software to install on a machine. It is still feasible to do so, but only in quite restricted circumstances. For example, picking the packages to install a DNS server, or an NFS server, or a mail server, by hand, without using meta packages or tasks (in Debian terminology), is still quite possible. On embedded devices, there's also usually only a handful of programs installed, and the people doing the install can be expected to understand all of them and decide which ones to install on each specific device. However, those are the exceptions, and they're getting rarer.
For most people, manually picking software to install is much too tedious. In Debian, we realised this a great many years ago, and developed meta packages (whose only purpose is to depend on other packages) and tasks (which solve the same problem, but differently). These make it possible for a user to say "I want to have the GNOME desktop environment", and not have to worry about finding every piece that belongs in GNOME and installing each separately. For much of the past decade, computers have had sufficient hard disk space that it is no longer necessary to be quite so picky about what to install. A new cheap laptop will now typically come with at least 250 gigabytes of disk space. An expensive one, with an SSD drive, will have at least 128 gigabytes. A fairly complete desktop install uses less than ten gigabytes, so there's rarely a need to pick and choose between the various components. From a usability point of view, choosing from a list of a dozen or two options is much easier than from a list of thirty-five thousand (the number of Debian packages as I write this). This is one reason why Baserock won't be using the traditional package abstraction. Instead, we'll collect programs into larger collections, called strata, which form some kind of logical or functional whole. So there'll be one stratum for the core userland software: init, shell, libc, perhaps a bit more. There'll be another for a development environment, one for a desktop environment, etc. Another, equally important, reason to move beyond packages is the problems caused by the combinatorial explosion of packages and versions. Colin Walters talks about this very well. When every system has a fairly unique set of packages, and versions of them, it becomes much harder to ensure that software works well for everyone, that upgrades work well, and that any problems get solved.
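To make the stratum idea concrete, a system definition might look something like the sketch below. This is purely illustrative: the post does not describe Baserock's actual file format, so the file layout, keys, and stratum names here are all invented.

```yaml
# Hypothetical system definition: the unit of installation is a stratum,
# not a package.  Names and syntax are invented for illustration.
system: embedded-gnome-device
strata:
  - name: core            # init, shell, libc
  - name: toolchain       # compiler, binutils, make
  - name: gnome-desktop   # the whole desktop environment as one unit
# A user (or build server) picks from a dozen strata rather than
# thirty-five thousand packages; every device built from this file
# gets exactly the same versions, taming the combinatorial explosion.
```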
When the number of possible package combinations is small, getting the interactions between various packages right is easier, QA has a much easier time testing all upgrade paths, and manual test coverage improves a lot when everyone is testing the same versions. Even debugging gets easier, when everyone can easily run the same versions. Grouping software into bigger collections does reduce the flexibility of what gets installed. In some cases this is important: very constrained embedded devices, for example, still need to be very picky about what software gets installed. However, even for them, the price of flash storage is low enough that it might not matter too much anymore. The benefit of a simpler overall system may well outweigh the benefit of fine-grained software choice.

Everything in git

In Baserock, we'll be building everything from source in git. It will not be possible to build anything unless the source is committed. This will allow us to track, for each binary blob we produce, the precise sources that were used to build it. We will also try to achieve something a bit more ambitious: anything that affects any bit in the final system image can be traced to files committed to git. This means tracking all configuration settings for the build, and the whole build environment, in git as well. This is important for us so that we can reproduce an image used in the field. When a customer is deploying a specific image, and needs it to be changed, we want to be able to make the change with minimal changes compared to the previous version of the image. This requires that we can re-create the original image, from source, bit for bit, so that when we make the actual change, only the changes we need to make affect the image. We will make it easy to branch and merge not just individual projects, but the whole system. This will make it easy to do large changes to the system, such as transitioning to a new version of GNOME, or the toolchain.
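The kind of whole-system branching described above can be sketched with plain git. The repository name and file contents below are invented for illustration; the point is only that a large transition becomes an ordinary branch of the system description.

```shell
# Invented example: the whole system is described by one versioned file,
# so a large transition is just a branch of the system description.
tmp=$(mktemp -d)
cd "$tmp"
git init -q system-definitions
cd system-definitions
git config user.email baserock@example.com
git config user.name "Build Server"

echo "gnome: 3.2" > system.conf
git add system.conf
git commit -qm "baseline system"
base=$(git symbolic-ref --short HEAD)  # master or main, depending on git version

# The GNOME transition happens on its own branch; the baseline is untouched.
git checkout -qb gnome-3.4-transition
echo "gnome: 3.4" > system.conf
git commit -qam "transition to GNOME 3.4"

# Build servers can build one image per branch; if the transition breaks,
# the baseline branch can still be built and shipped as before.
git checkout -q "$base"
cat system.conf   # prints the 3.2 baseline
```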
Currently, in Debian, such large changes need to be serialised, so that they do not affect each other. It is easy, for example, for a GNOME transition to be broken by a toolchain transition. Branching and merging has long been considered the best available solution for concurrent development within a project. With Baserock, we want to have that for the whole system. Our build servers will be able to build images for each branch, without requiring massive hardware investment: any software that is shared between branches only gets built once. Launchpad PPAs and similar solutions provide many of the benefits of branching and merging on the system level. However, they're much more work than "git checkout -b gnome-3.4-transition". I believe that the git based approach will make concurrent development much more efficient. Ask me in a year if I was right.

Git, git, and only git

There are a lot of version control systems in active use. For the sake of simplicity, we'll use only git. When an upstream project uses something else, we'll import their code into git. Luckily, there are tools for that. The import and updates to it will be fully automatic, of course. Git is not my favorite version control system, but it's clearly the winner. Everything else will eventually fade away into obscurity. Or that's what we think. If it turns out that we're wrong about that, we'll switch to something else. However, we do not intend to have to deal with more than one at a time. Life's too short to use all possible tools at the same time.

Tracking upstreams closely

We will track upstream version control repositories, and we will have an automatic mechanism for building our binaries directly from git. This will, we hope, make it easy to follow upstream development closely, so that when, say, GNOME developers make commits, we can generate a new system image which includes those changes the same day, if not within minutes, rather than waiting days or weeks or months.
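The mirror-and-trigger loop described here can be sketched with plain git commands. The repository names are invented, and a real system would poll many upstreams and kick off image builds, but the mechanics are just mirroring and comparing heads.

```shell
# Invented sketch: mirror an upstream repository and detect new commits;
# in Baserock's model a changed mirror would trigger a system image build.
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for an upstream project.
git init -q upstream
git -C upstream config user.email dev@example.com
git -C upstream config user.name Dev
echo v1 > upstream/file
git -C upstream add file
git -C upstream commit -qm "v1"

# The distribution keeps a bare mirror of the upstream repository.
git clone -q --mirror upstream mirror.git

# Upstream commits something new...
echo v2 > upstream/file
git -C upstream commit -qam "v2"

# ...the periodic mirror update picks it up, and a changed head is the
# trigger for building a new system image.
old=$(git -C mirror.git rev-parse HEAD)
git -C mirror.git remote update >/dev/null 2>&1
new=$(git -C mirror.git rev-parse HEAD)
[ "$old" != "$new" ] && echo "new commits: build a system image"
```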
This kind of closeness is greatly enhanced by having everything in version control. When upstream commits changes to their version control system, we'll mirror them automatically, and this then triggers a new system image build. When upstream makes changes that do not work, we can easily create a branch from any earlier commit, and build images off that branch. This will, we hope, also make it simpler to make changes and give them back to upstream. Whenever we change anything, it'll be done in a branch, and we'll have system images available to test the change. So not only will upstream be able to easily get the change from our git repository, they'll also be able to easily verify, on a running system, that the change fixes the problem.

Automatic testing, maybe even test driven development

We will be automatically building system images from git commits for Baserock. This will potentially result in a very large number of images. We can't possibly test all of them manually, so we will implement some kind of automatic testing. The details of that are still under discussion. I hope to be able to start adding some test driven development to Baserock systems. In other words, when we are requested to make changes to the system, I want the specification to be provided as executable tests. This will probably be impossible in real life, but I can hope. I've talked about doing the same thing for Debian, but it's much harder to push through such changes in an established, large project.

Solving the install, upgrade, and downgrade problems

All mainstream Linux distributions are based on packages, and they all, pretty much, do installations and upgrades by unpacking packages onto a running system, and then maybe running some scripts from the packages. This works well for a completely idle system, but not so well on systems that are in active use. Colin Walters again talks about this.
For installation of new software, the problem is that someone or something may invoke it before it is fully configured by the package's maintainer script. For example, a web application might unpack in such a way that the web server notices it, and a web browser may request the web app to run before it is configured to connect to the right database. Or a GUI program may unpack a .desktop file before the executable or its data files are unpacked, and a user may notice the program in their menu and start it, resulting in an error. Upgrades suffer from additional problems. Software that gets upgraded may be running during the upgrade. Should the package manager replace the software's data files with new versions, which may be in a format that the old program does not understand? Or install new plugins that will cause the old version of the program to segfault? If the package manager does that, users may experience turbulence without having put on their seat belts. If it doesn't do that, it can't install the package, or it needs to wait, perhaps for a very long time, for a safe time to do the upgrade. These problems have usually been either ignored, or solved by using package specific hacks. For example, plugins might be stored in a directory that embeds the program's version number, ensuring that the old version won't see the new plugins. Some people would like to apply installs and upgrades only at shutdown or bootup, but that has other problems. None of the hacks solve the downgrade problem. The package managers can replace a package with an older version, and often this works well. However, in many cases, any package maintainer scripts won't be able to deal with downgrades. For example, they might convert data files to a new format or name or location upon upgrades, but won't try to undo that if the package gets downgraded. Given the combinatorial explosion of package versions, it's perhaps just as well that they don't try.
For Baserock, we absolutely need to have downgrades. We need to be able to go back to a previous version of the system if an upgrade fails. Traditionally, this has been done by providing a "factory reset", where the current version of the system gets replaced with whatever version was installed in the factory. We want that, but we also want to be able to choose other versions, not just the factory one. If a device is running version X, and upgrades to X+1, but that version turns out to be a dud, we want to be able to go back to X, rather than all the way back to the factory version. The approach we'll be taking with Baserock relies on btrfs and its subvolumes and snapshots. Each version of the system will be installed in a separate subvolume, which gets cloned from the previous version, using copy-on-write to conserve space. We'll make the bootloader able to choose a version of the system to boot, and (waving hands here) add some logic to be able to automatically revert to the previous version when necessary. We expect this to work better and more reliably than the current package based approach.

Making choices

Debian is pretty bad at making choices. Almost always, when faced with a need to choose between alternative solutions for the same problem, we choose all of them. For example, we support pretty much every init implementation, various implementations of /bin/sh, and we even have at least three entirely different kernels. Sometimes this non-choice is a good thing. Our users may need features that only one of the kernels supports, for example. And we certainly need to be able to provide both mysql and postgresql, since various software we want to provide to our users needs one and won't work with the other. At other times, the inability to choose causes trouble. Do we really need to support more than one implementation of /bin/sh?
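A rough sketch of how such a snapshot-based upgrade might look with the btrfs command line tools. The subvolume paths, version numbers, and bootloader file are all hypothetical, and the script only prints the commands by default, since actually running them needs root and a btrfs filesystem.

```shell
# Hedged sketch of a snapshot-based system upgrade; all paths are
# hypothetical.  By default the commands are printed, not executed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

CUR=/systems/version-41   # subvolume holding the running system
NEW=/systems/version-42   # subvolume that will hold the upgraded system

# 1. Clone the running system; btrfs copy-on-write makes this cheap.
run btrfs subvolume snapshot "$CUR" "$NEW"
# 2. Apply the new system content into the clone (details elided).
run rsync -a --delete /mnt/new-image/ "$NEW/"
# 3. Point the bootloader at the new subvolume.  version-41 is left
#    intact, so reverting is simply booting the old subvolume again.
run sed -i "s,$CUR,$NEW," /boot/extlinux.conf
```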
By supporting both dash and bash for that, we double the load on testing and QA, and introduce yet another variable to deal with in any debugging situation involving shell scripts. Especially for core components of the system, it makes sense to limit the flexibility of users to pick and choose. Combinatorial explosion déjà vu. Every binary choice doubles the number of possible combinations that need to be tested and supported and checked during debugging. Flexibility begets complexity, complexity begets problems. This is less of a problem at upper levels of the software stack. At the very top level, it doesn't really matter if there are many choices. If a user can freely choose between vi and Emacs, this does not add complexity at the system level, since nothing else is affected by that choice. However, if we were to add a choice between glibc, eglibc, and uClibc for the system C library, then everything else in the system needs to be tested three times rather than once.

Reducing the friction coefficient for system development

Currently, a Debian developer takes upstream code, adds packaging, perhaps adds some patches (using one of several methods), builds a binary package, tests it, uploads it, and waits for the build daemons and the package archive and user-testers to report any problems. That's quite a number of steps to go through for the simple act of adding a new program to Debian, or updating it to a new version. Some of it can be automated, but there are still hoops to jump through. Friction does not prevent you from getting stuff done, but the more friction there is, the more energy you have to spend to get it done. Friction slows down the crucial hack-build-test cycle of software development, and that hurts productivity a lot. Every time a developer has to jump through any hoops, or wait for anything, he slows down. It is, of course, not just a matter of the number of steps. Debian requires a source package to be uploaded with the binary package.
Many, if not most, packages in Debian are maintained using version control systems. Having to generate a source package and then wait for it to be uploaded is unnecessary work. The build daemon could get the source from version control directly. With signed commits, this is as safe as uploading a tarball. The above examples are specific to maintaining a single package. The friction that really hurts Debian is the friction of making large-scale changes, or changes that affect many packages. I've already mentioned the difficulty of making large transitions above. Another case is making policy changes, and then implementing them. An excellent example of that in Debian is the policy change to use /usr/share/doc for documentation, instead of /usr/doc. This took us many years to do. We are, I think, perhaps a little better at such things now, but even so, it is something that should not take more than a few days to implement, rather than half a decade.

On the future of distributions

Occasionally, people say things like "distributions are not needed", or that "distributions are an unnecessary buffer between upstream developers and users". Some even claim that there should only be one distribution. I disagree. A common view of a Linux distribution is that it takes some source provided by upstream, compiles that, adds an installer, and gives all of that to the users. This view is too simplistic. The important part of developing a distribution is choosing the upstream projects and their versions wisely, and then integrating them into a whole system that works well. The integration part is particularly important. Many upstreams are not even aware of each other, nor should they need to be, even if their software may need to interact. For example, not every developer of HTTP servers should need to be aware of every web application, or vice versa. (If they had to be, it'd be a combinatorial explosion that'd ruin everything, again.)
Instead, someone needs to set a policy of how web apps and web servers interface, what their common interface is, and what files should be put where, for web apps to work out of the box, with minimal fuss for the users. That's part of the integration work that goes into a Linux distribution. For Debian, such decisions are recorded in the Policy Manual and its various sub-policies. Further, distributions provide quality assurance, particularly at the system level. It's not realistic to expect most upstream projects to do that. It's a whole different skillset and approach that is needed to develop a system, rather than just a single component. Distributions also provide user support, security support, longer term support than many upstreams, and port software to a much wider range of architectures and platforms than most upstreams actively care about, have access to, or even know about. In some cases, these are things that can and should be done in collaboration with upstreams; if nothing else, portability fixes should be given back to upstreams. So I do think distributions have a bright future, but the way they're working will need to change.

28 April 2011

Russell Coker: Australia Needs its own Monarch

Tomorrow Prince William will marry Kate Middleton. If we don't change anything he will probably become the King of Australia at some future time, so I think that now is the time to start discussing the options. Walter Block makes some interesting points in favor of longer terms for politicians and for having a monarch to get a long-term view of the national interest [1]; he's not the only person to make such points, but he makes them in a better way than most. Of course the problem with this is the long history of kings not doing what's best for their country: part of the ownership rights to property is the right to destroy it, so a monarch who owns a country can therefore be considered to have the right to cause the wholesale death of their subjects. There are some examples of President for Life political leaders demonstrating this at the moment. Even with a monarch who is generally a nice person and who has controls to prevent the worst abuses there is the possibility of Control Fraud. In the Constitutional Monarchy system that doesn't happen because a constitutional monarch has little power (no official position of power). But there is still the issue of whether the monarchy is any good. Charles Stross wrote an interesting post about the apparent human need to have a leadership figure [2]. So getting rid of a monarch tends to result in a president getting the trappings of a king, and if things go wrong (as they often do) then they get absolute power until the next revolution. It seems that having one person who is the head of government and the head of state (as done in the US for example) is a bad idea; they can start to think that it's all about them. I don't think that the US is at risk of getting a president for life in the near future and I don't think that Australia will do so if we become a republic, but that doesn't mean that the republican system works well.
I think that the Australian system is working better than the US system and I will generally vote against any changes that make Australia more like the US. As long as the House of Windsor provides monarchs who are as sensible as Queen Elizabeth II I will vote in favor of the continued rule of the House of Windsor in preference to an Australian republic (if Prince Charles ever becomes king I may support a republic).

A Way of Improving Things

I want to have an Australian monarch. Someone who will live in Australia for most of the time (as opposed to a distant monarch who visits once a decade if we are lucky). Protocol should dictate that the Prime Minister and cabinet ministers are forced to show ritual respect to the monarch, bowing etc, and no touching. A separation between the person who performs most of the ceremonial functions and the person who actually makes the political decisions should help constrain political egos. When a Prime Minister feels the need to suck up to someone more powerful it would be better to have that person be an Australian monarch than the US president. Tradition has it that monarchs have to be descended from other monarchs (although there are cases of elected monarchs as happened in Danish history). An election for a monarch probably wouldn't work well in a modern political environment, so we need someone with royal ancestry. One possibility is to have a spare descendant of our current Queen become the monarch of Australia. I think it's quite likely that, given a choice between remaining a UK prince or princess for the rest of their life and becoming the monarch of Australia, there would be someone who would take the latter option, and I expect that the Queen would consent to that arrangement if asked (she would have to prefer it to a Republic). Another possibility is the fact that Mary the Crown Princess of Denmark has more children than the Danish monarchy requires [3].
As she was born in Australia it seems likely that her children will have more interest in Australia than most royals, and a skim read of some tabloid magazines indicates that her family is quite popular. I expect that if a Danish prince or princess was invited to become the monarch of Australia then this would be acceptable to the Queen of Denmark. In an ideal world there would not be such a thing as a monarchy. But as we don't get to have ideal voters and therefore our politicians are far from ideal it seems to me that the constitutional monarchy is the least bad system of government. Don't think that I am in favor of a monarchy; I just dislike it less than the other options. Finally, the Queen is the Supreme Governor of the Church of England. I think that it would be good to have a separation between church and state and therefore anyone who is in a leadership position of any religious organisation should be considered unsuitable to be the monarch.

28 January 2011

Amaya Rodrigo: Indignez Vous!

If there is any sort of food for thought worth reading, this is it.
INDIGNEZ-VOUS! GET ANGRY! CRY OUT by Stéphane Hessel


Stéphane Hessel, author of Indignez-vous!
After 93 years, it is almost the final act. The end for me is not very far off any more. But it still leaves me a chance to be able to remind others of what acted as the basis of my political engagement. It was the years of resistance to the Nazi occupation -- and the program of social rights worked out 66 years ago by the National Council of the Resistance!

It is to Jean Moulin [murdered founder of the Council] that we owe, as part of this Council, the uniting of all elements of occupied France -- the movements, the parties, the labor unions -- to proclaim their membership in Fighting France, and we owe this to the only leader that it acknowledged, General de Gaulle. From London, where I had joined de Gaulle in March 1941, I learned that this Council had completed a program and adopted it on March 15th, 1944, that offered for liberated France a group of principles and values on which would rest the modern democracy of our country.

These principles and these values, we need today more than ever. It is up to us to see to it, all together, that our society becomes a society of which we are proud, not this society of immigrants without papers -- expulsions, suspicion regarding the immigrants. Not this society where they call into question social security and national retirement and health plans. Not this society where mass media are in the hands of the rich. These are things that we would have refused to give in to if we had been the true heirs of the National Council of the Resistance.

From 1945, after a dreadful drama [WWII], it was an ambitious
resurrection of society to which the remaining contingent of the Council
of the Resistance devoted itself. Let us remember them while creating
national health and pensions plans such as the Resistance wished, as its
program stipulated, "a full plan of French national health and social
security, aimed at assuring all citizens the means of existence whenever
they are unable to obtain them by a job; a retirement allowing the old
workers to finish their days with dignity."

The sources of energy, electricity, and gas, mines, the big banks, were
nationalized. Now this was as the program recommended: "... the return
to the nation of big monopolized means of production, fruits of common
labor, sources of energy, wealth from the mines, from insurance
companies and from big banks; the institution of a true economic and
social democracy involving the ousting of the big economic and financial
fiefdoms from the direction of the economy."

General interest must dominate over special interest. The just man
believes that wealth created in the realm of labor should dominate over
the power of money.

The Resistance proposed, "a rational organization of the economy
assuring the subordination of special interests to general interest, and
the emancipation of 'slaves' of the professional dictatorship that was
instituted just as in the fascist states," which had used the interim
[for two years after the war] government of the Republic as an agent.

A true democracy needs an independent press, and the Resistance
acknowledged it, demanded it, by defending "the freedom of the press,
its honor, and its independence from the State, the power of money and
foreign influence." This is what relieved restrictions on the press from
1944 on. And press freedom is definitely what is in danger today.

The Resistance called for a "real possibility for all French children to
benefit from the most advanced education," without discrimination.
Reforms offered in 2008 go contrary to this plan. Young teachers, whose
actions I support, went so far as refusing to apply them, and they saw
their salaries cut by way of punishment. They were indignant,
"disobeyed," judging these reforms too far from the ideal of the
democratic school, too much in the service of a society of commerce and
not developing the inventive and critical mind enough.

All the foundations of the social conquests of the Resistance are
threatened today.

The motive of the Resistance: indignation (Indignez-vous!)

Some dare to say to us that the State cannot afford the expenses of
these measures for citizens any more. But how can there be today a lack
of money to support and extend these conquests while the production of
wealth has been considerably augmented since the Liberation period when
Europe was in ruins? On the contrary, the problem is the power of money,
so much opposed by the Resistance, and of the big, boldfaced, selfish
man, with his own servants in the highest spheres of the State.

Banks, since privatized again, have proved to be concerned foremost for
their dividends and for the very high salaries of their leaders, not the
general interest. The disparity between the poorest and the richest has
never been so great, and amassing money, competition, so encouraged.

The basic motive of the Resistance was indignation!

We, the veterans of the resistance movements and combat forces of Free
France, we call on the young generation to live by, to transmit, the
legacy of the Resistance and its ideals. We say to them: Take our place,
"Indignez-vous!" [Get angry! or Cry out!].

The political, economic, intellectual leaders, and the whole society do
not have to give in, nor allow oppression by an actual international
dictatorship of the financial markets, which threatens peace and
democracy.

I wish for you all, each of you, to have your own motive for
indignation. It is precious. When something outrages you, as Nazism
outraged me, you become militant, strong, and engaged. You join the
current of history, and the great current of history must
continue thanks to each individual. And this current goes towards more
justice, more freedom, but not this unbridled freedom of the fox in the
henhouse. The rights contained in the UN Universal Declaration of Human
Rights of 1948 are just that, universal.

If you meet somebody who does not benefit from it, feel sorry for them
but help them to win their rights.

Two visions of history

When I try to understand what caused fascism, what made it so that we
were overcome by Hitler and Vichy [the French government that
collaborated with Hitler], I tell myself that the propertied, with their
selfishness, were terrifically afraid of Bolshevik revolution. They let
themselves be guided by their fear.

But if, today as then, an active minority stands up, it will be enough;
we shall be the leavening that makes the bread rise. Certainly, the
experience of a very old person like me, born in 1917, is different from
the experience of today's young people. I often ask professors for
the opportunity to interact with their students, and I say to them: You
don't have the same obvious reasons to get involved. For us, to resist
was not to accept German occupation, defeat. It was comparatively simple.
Simple as what followed, decolonization. Then the war in Algeria.

It was necessary that Algeria become independent, it was obvious. As for
Stalin, we all applauded the victory of the Red Army against the Nazis
in 1943. But already we had known about the big Stalinist trials of
1935, and even if it was necessary to keep an ear open towards communism
as a counterweight to American capitalism, the necessity of opposing
this unbearable form of totalitarianism had established itself as
obvious. My long life has presented a succession of reasons to outrage
me.

These reasons were born less from an emotion than from a deliberate
commitment. As a young student at normal school [teachers college] I was
very influenced by Sartre, a fellow student. His "Nausea" [a novel],
"The Wall" [short stories], and "Being and Nothingness" [essay] were
very important in the training of my thought. Sartre taught us, "You are
responsible as individuals." It was a libertarian message. The
responsibility of a person can not be assigned by a power or an
authority. On the contrary, it is necessary to get involved in the name
of one's responsibility as a human being.

When I entered the French Ecole Normale Superieure, Ulm Street, in Paris
in 1939, I entered it as a fervent adherent of the philosopher Hegel,
and I adhered to the thought of Maurice Merleau-Ponty. His teaching
explored concrete experience, that of the body and of its relations with
the senses, one big singular sense faced with a plurality of senses. But
my natural optimism, which wants all that is desirable to be possible,
carried me rather towards Hegel. Hegelianism interprets the long history of
humanity as having a meaning: It is the freedom of man progressing step
by step. History is made of successive shocks, and the taking into
account of challenges. The history of societies thus advances; and in
the end, man having attained his full freedom, we have the democratic
state in its ideal form.

There is certainly another understanding of history. It says progress is
made by "freedom" of competition, striving for "always more"; it can be
as if living in a devastating hurricane. That's what it represented to a
friend of my father, the man who shared with him an effort to translate
into German "In Search of Lost Time" [novel] by Marcel Proust.

That was the German philosopher Walter Benjamin. He had drawn a
pessimistic view from a painting by the Swiss painter Paul Klee,
"Angelus Novus," in which the angel opens its arms as if to contain
and push back a tempest, which Benjamin identifies with progress. For
Benjamin, who would commit suicide in September 1940 to escape Nazism,
the sense of history is the overpowering progression of disaster upon
disaster.

Indifference: the worst of attitudes

It is true that the reasons to be indignant can seem today less
clear-cut, or the world too complex. Who is giving the orders? Who decides?
It is not always easy to differentiate between all the currents that
govern us. We are not any more dealing with a small elite whose joint
activities can be clearly seen. It is a vast world, of which we have a
feeling of interdependence.

We live in an interconnectivity as never before. But in this world there
still are intolerable things. To see them, one must look well, must
search. I say to the young people: Search a little, and you will find.
The worst of attitudes is indifference, to
say "I can do nothing there, I'll just manage to get by." By including
yourself in that, you lose one of the essential elements that makes the
human being: the faculty of indignation and the commitment that is a
consequence of it.

They [young people] can already identify two big new challenges:

1. The huge gap which exists between the very poor and the very rich and
that does not cease increasing. It is an innovation of the 20th and 21st
centuries. The very poor in today's world earn barely two dollars a
day. The new generation cannot let this gap become even greater. The
official reports alone should provoke a commitment.

2. Human rights and state of the planet: I had the chance after the
Liberation to join in the writing of the Universal Declaration of Human
Rights, adopted by the United Nations organization, on December 10th,
1948, in Paris at the palace of Chaillot. It was as principal private
secretary of Henri Laugier, the adjunct general-secretary of the UN, and
as secretary of the Commission on Human Rights, that I with others
was led to participate in the writing of this statement. I wouldn't know
how to forget the role in its elaboration of René Cassin, who was
national commissioner of justice and education in the government of Free
France in London in 1941 and won the Nobel peace prize in 1968, nor that
of Pierre Mendès-France in the Economic and Social Council, to whom the
text drafts we worked out were submitted before being considered by the
Third Committee (Social, Humanitarian and Cultural) of the General
Assembly. It was ratified by the 54 member states in session of the
United Nations, and I certified it as secretary.

It is to René Cassin that we owe the term "universal rights" instead of
"international rights" as offered by our American and British friends.
This [universal versus international] was key because, at the end of the
Second World War, what was at stake was to escape the claims of national
"sovereignty," which a nation can emphasize while it devotes itself to
crimes against humanity on its own soil. Such was the case of Hitler,
who felt himself supreme
and authorized to carry out a genocide. This universal statement owed
much to universal revulsion towards Nazism, fascism, and totalitarianism
-- and owes a lot, in our minds, to the spirit of the Resistance.

I had a feeling that it was necessary to move quickly so as not to be
dupes of the hypocrisy in the UN membership, some of whom claimed these
values as already won but had no intention at all of promoting them
faithfully, and who claimed that we were trying to impose values on them.

I can not resist the desire to quote Article 15 of the Universal
Declaration of Human Rights (1948): "Everyone has the right to a
nationality." Article 22 says, "Everyone, as a member of society, has
the right to social security and is entitled to realization, through
national effort and international cooperation and in accordance with the
organization and resources of each State, of the economic, social and
cultural rights indispensable for his dignity and the free development
of his personality." And though this statement has declarative, not
statutory, scope, the Declaration has nevertheless played a powerful
role since 1948. Colonized peoples took it up in their fight for
independence; it sowed in minds the seeds of the battle for freedom.

I note with pleasure that in the course of the last decades there has
been an increase in nongovernmental organizations (NGOs) and social
movements such as ATTAC (Association for the Taxation of Financial
Transactions), FIDH (International Federation for Human Rights), and
Amnesty International, which are active and effective. It is obvious
that to be effective today it is necessary to act in a network, to use
all modern means of communication.

To the young people, I say: Look around you, and you will find topics
that justify your indignation: facts about the treatment of immigrants,
of "illegal" immigrants, of the Roma [aka Gypsies]. You will find
concrete situations that lead you to strong citizen action. Search and
you shall find!

My indignation regarding Palestine: outrages by Israel [Indignez-vous!]

Today, my main indignation concerns Palestine, the Gaza Strip, and the
West Bank of Jordan. This conflict is outrageous. It is absolutely
essential to read the report by Richard Goldstone, of September 2009, on
Gaza, in which this South African, Jewish judge, who claims even to be a
Zionist, accuses the Israeli army of having committed "acts comparable
to war crimes and perhaps, in certain circumstances, crimes against
humanity" during its "Operation Cast Lead," which lasted three weeks.

I went back to Gaza in 2009 myself, when I was able to enter with my
wife thanks to our diplomatic passports, to study first-hand what this
report said. People who accompanied us were not authorized to enter the
Gaza Strip. There and in the West Bank, we also visited the Palestinian
refugee camps set up from 1948 by the United Nations agency UNRWA, where
more than three million Palestinians expelled from their lands by Israel
wait even yet for a more and more problematical return.

As for Gaza, it is a roofless prison for one and a half million
Palestinians. A prison where people get organized just to survive.
Despite material destruction such as that of the Red Crescent hospital
by Operation Cast Lead, it is the behavior of the Gazans, their
patriotism, their love of the sea and beaches, their constant
preoccupation for the welfare of their children, who are innumerable and
cheerful, that haunt our memory. We were impressed by how ingeniously
they face up to all the scarcities that are imposed on them. We saw them
making bricks, for lack of cement, to rebuild the thousands of houses
destroyed by tanks. They confirmed to us that there had been 1,400
deaths, including women, children, and the elderly, in the Palestinian
camp during this Operation Cast Lead led by the Israeli army, compared
to only 50 injured men on the Israeli side. I share the conclusions of the
South African judge. That Jews can, themselves, perpetrate war crimes is
unbearable. Alas, history does not give enough examples of people who
draw lessons from their own history. [The author, Stéphane Hessel, had
a Jewish father.]

Terrorism, or exasperation?

I know that Hamas [party of Palestine freedom fighters], which had won
the last legislative elections, could not help it that rockets were
launched on Israeli cities in response to the situation of isolation and
blockade in which Gazans exist. I think, naturally, that terrorism is
unacceptable; but it is necessary to acknowledge (from experience in
France) that when people are occupied by forces immensely superior to
their own, popular reaction cannot be altogether bloodless.

Does it serve Hamas to send rockets onto the town of Sderot [Israeli
town across the border from Gaza]?

The answer is no. This does not serve their purpose, but they can
explain this gesture by the exasperation of Gazans. In the notion of
exasperation, it is necessary to understand violence as the regrettable
conclusion of situations not acceptable to those who are subjected to them.

Thus, one can say that terrorism is a form of exasperation, and
that this "terrorism" is a misnomer. One should not have to resort to
this exasperation, but it is necessary to have hope. Exasperation is a
denial of hope. It is comprehensible, I would say almost natural, but it
still is not acceptable. Because it does not allow one to acquire
results that hope can possibly, eventually produce.

Nonviolence: the way we must learn to follow

I am persuaded that the future belongs to nonviolence, to reconciliation
of different cultures. It is by this way that humanity will have to
enter its next stage. But on this I agree with Sartre: We cannot excuse
the terrorists who throw bombs, but we can understand them. Sartre wrote
in 1947: "I recognize that violence in whatever form it may manifest
itself is a setback. But it is an inevitable setback because we are in a
world of violence. And if it is true that recourse to violence risks
perpetuating it, it is also true it is the sure means to make it stop."

To that I would add that nonviolence is a surer means of making violence
stop. One cannot, invoking Sartre or this principle, condone the
terrorism of the Algerian war, nor the murder attempt made against
Israeli athletes at the Munich Games of 1972. Terrorism is not
productive, and Sartre himself at the end of his life came to wonder
about the sense of violence and to doubt its reason for being.

However, to proclaim "violence is not effective" is more important than
knowing whether or not to condemn those who devote themselves to it.
Terrorism is not effective. In the notion of effectiveness, a bloodless
hope is needed. If there is a violent hope, it is in the line of
Guillaume Apollinaire "that hope is violent," and not in policy.

Sartre, in March 1980, within three weeks of his death, declared: "It is
necessary to try to explain why the world of today, which is horrible,
is only an instant in a long historical development, that hope always
has been one of the dominant forces in revolutions and insurrections,
and how I still feel hope as my conception of the future." [Note 5]

It is necessary to understand that violence turns its back on hope. It
is necessary to prefer hope to violence. Nonviolence is the way that we
must learn to follow. So must the oppressors.

It is necessary to arrive at negotiations to remove oppression; it is
what will allow you to have no more terrorist violence. That's why you
should not let too much hate pile up.

The message of Mandela and Martin Luther King finds all its pertinence
in a world that has overcome the confrontation of ideologies [e.g.,
Nazism] and conquered totalitarianism [e.g., Hitler]. It is also a
message of hope in the capacity of modern societies to overcome
conflicts by mutual understanding and vigilant patience. To reach that
point, we must necessarily base ourselves on rights, against whose
violation, such as the military intervention in Iraq, we must protest.

We had this economic crisis, but we still did not initiate a new policy
of development. Also, the Copenhagen summit on climate change did not
bring about a true policy for the preservation of the planet.

We are on a threshold between the terror of the first decade and the
possibilities of following decades. But it is necessary to hope, it is
always necessary to hope. The previous decade, that of 1990s, had been a
time of great progress. The United Nations had enough wisdom to call
conferences such as those of Rio on environment, in 1992, and that of
Beijing on women, in 1995. In September 2000, on the initiative of the
general secretary of United Nations, Kofi Annan, the 191 member
countries adopted a statement on the "eight objectives of the millennium
for development," by which they notably promised to reduce poverty in
the world by half before 2015.

My big regret is that neither Obama nor the European Union has yet
committed to what should be their contribution: a useful initiative
bearing on fundamental values.

Conclusion

How to conclude this call to be indignant? By saying still what, on the
occasion of the sixtieth anniversary of the program of the National
Council of the Resistance, we said on March 8th, 2004 -- we veterans of
the resistance movements and combat forces of Free France (1940-1945) --
that certainly "Nazism was conquered, thanks to the sacrifice of our
brothers and sisters of the Resistance and United Nations against
fascist barbarism. But this threat did not completely disappear, and our
anger against injustice is ever intact." [Note 6] Also, let us always be
called to "a truly peaceful insurrection against means of mass
communication that offer as a vista for our youth only the consumption
of mass trivia, contempt for the weakest and for culture, a generalized
amnesia, and the hard competition of all against all."

To those who will make the 21st century, we say with our affection:

TO CREATE IS TO RESIST; TO RESIST IS TO CREATE.

30 April 2010

Matt Brubeck: Fennec on Android: user feedback and next steps

Last month I joined Mozilla as a UI engineer on the Fennec (Mobile Firefox) project. Firefox is already available for Nokia's Maemo platform, and now a group of Mozilla programmers are porting it to Google's Android OS. This Tuesday they made an early preview build available for public feedback. Until now, the only people working on Firefox for Android were platform developers getting the back-end code to build and run. This week was the first time most other people, including me, got to try it out. We front-end developers and designers are now starting to adapt the user interface to Android. (The preview build used the look and feel of Firefox for Maemo, designed for rather different hardware and software.) Because we are an open source project, we like to share our work and hear your feedback even at this early stage of development. While I wasn't directly involved in the Android development effort, I spent some of my spare time this week talking to users via Twitter and our Android feedback group. Here's what I heard, in rough order of importance to users, plus some information on our future plans.[1] We don't have a regular schedule yet for releasing new builds on Android. Once we get the code merged and automated build servers configured, we'll publish nightly builds of Firefox for Android alongside our Maemo and desktop nightlies. Later this year we will have alpha and beta versions, and hopefully a stable release. Until then, you can follow @MozMobile or Vlad (@vvuk) to hear about any new previews.
  1. Please remember I am still new to the project, and cannot speak for the whole team. This is a personal blog, not a Firefox roadmap!

21 February 2010

Michael Banck: 21 Feb 2010

Application Indicators: A Case of Canonical Upstream Involvement and its Problems

I always thought Canonical could do a bit more to contribute to the GNOME project, so I was happy to see the work on application indicators proposed to GNOME. Application indicators are based on the (originally KDE-driven, I believe) proposed cross-desktop status notifier spec. The idea (as I understand it) is to have a consistent way of interacting with status notifiers and to stop the confusing mix of panel applets and systray indicators. This is a very laudable goal, as mentioned by Colin Walters:
"First, +5000 that this change is being driven by designers, and +1000 that new useful code is being written. There are definite problems being solved here."
The discussion that followed was very useful, including the comments by Canonical's usability expert Matthew Paul Thomas. Most of the discussion was about the question of how this proposal and spec could best be integrated into GTK, the place where most people seemed to agree it belongs (rather than changing all apps to provide this, it should be a service provided by the platform). However, on the same day, Canonical employee Ted Gould proposed libappindicator as an external dependency. The thread that followed showed a couple of problems, both technical and otherwise. What I personally disliked is the way Cody Russell and Ted Gould are papering over the above issues in that thread. For example, about point one, Ted Gould writes in the proposal:
Q: Shouldn't this be in GTK+?
A: Apparently not.
while he himself said on the same day, on the same mailing list: "Yes, I think GTK/glib is a good place" and nobody was against it (and in fact most people seemed to favor including this in GTK). To the question about why libappindicator is not licensed as usual under the LGPL, version 2.1 or later, Canonical employee Cody Russell even replied:
"Because seriously, everything should be this way. None of us should be saying "LGPL 2.1 or later". Ask a lawyer, even one from the FSF, how much sense it makes to license your software that way."
Not everybody has to love the FSF, but proposing code under mandated copyright assignments which a lot of people have opposed and at the same time insinuating that the FSF was not to be trusted on their next revision of the LGPL license seems rather bold to me. Finally, on the topic of copyright assignments, Ted said:
"Like Clutter for example ;) Seriously though, GNOME already is dependent on projects that require contributor agreements."
It is true that there are (or at least were) GNOME applications which require copyright assignments for contributions (Evolution used to be an example, but the requirement was lifted); however, none of the platform modules require this to my knowledge (Clutter is an external dependency as well). It seems most people in the GNOME community are of the opinion that application indicators should be in GTK at least eventually, so having libappindicator as an external dependency with copyright assignments might work for now but will not be future-proof. In summary, most of the issues could be dealt with by reimplementing it for GTK when the time comes for this spec to be included, but this would mean (i) duplication of effort, (ii) possibly porting all applications twice, and (iii) probably no upstream contribution by Canonical. Furthermore, I am amazed at how the Canonical people approached the community for something this delicate (their first major code drop, as far as I am aware). To be fair, neither Ted nor Cody posted the above using their company email addresses, but the work is nevertheless sponsored by Canonical, so their posts to desktop-devel-list could be seen as written with their Canonical hat on. Canonical does not have an outstanding track record of contributing code to GNOME, and at least to me it seems this case is not doing much to improve things, either.

14 October 2009

Robert McQueen: Telepathy Q&A from the Boston GNOME Summit

The first Telepathy session on Saturday evening at the Boston GNOME Summit was very much a Q&A where Will and I answered various technical and roadmap questions from a handful of developers and downstream distributors. It showed me that there's a fair amount of roadmap information we should do better at communicating outside of the Telepathy project, so in the hope it's useful to others, read on.

MC5

Ted Gould was wondering why Mission Control 4 had an API for getting/setting presence for all of your accounts whereas MC5 does not. MC5 has per-account management of desired/current presence, which is more powerful but loses the convenience of setting presence in one place. Strictly speaking, deciding which presence to fall back on (a common example being if you have asked for invisible but the connection doesn't support it) is a UI-level policy which MC should not take care of, but in practice there aren't many different policies which make sense, and the key thing is that MC should tell the presence UI when the desired presence isn't available so it can make a per-account choice if that is preferable. As a related side point, Telepathy should implement more of the invisibility mechanisms in XMPP so it's more reliably available, and then we could more meaningfully tell users which presence values were available before connecting, allowing signing on as invisible. Since MC5 gained support for gnome-keyring, it's not possible to initialise Empathy's account manager object without MC5 prompting the user for their keyring password unnecessarily (especially if the client is Ubuntu's session presence applet and the user isn't using Empathy in the current session but has some accounts configured). Currently the accounts D-Bus API requires that all properties, including the password, are presented to the client to edit.
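The keyring situation just described can be pictured with a small sketch. This is a hypothetical model, not the real Telepathy accounts D-Bus API: the point is only that if bulk property reads skip secrets, merely constructing an account manager never has to unlock anything.

```python
# Toy model of the keyring problem above: an account whose bulk
# ("GetAll"-style) property reads omit secrets, so that merely listing
# accounts never triggers a keyring prompt. Class, method, and property
# names here are illustrative, not the Telepathy accounts API.

class Account:
    SECRET_KEYS = {"password"}

    def __init__(self, props, unlock):
        self._props = dict(props)
        self._unlock = unlock  # callback standing in for a keyring prompt

    def get_all(self):
        # Bulk read used when a client merely enumerates accounts:
        # secrets are skipped, so the keyring stays locked.
        return {k: v for k, v in self._props.items()
                if k not in self.SECRET_KEYS}

    def get(self, key):
        # Explicit single-property read: only here do we pay the unlock
        # cost, lazily, at the moment the secret is actually wanted.
        if key in self.SECRET_KEYS:
            self._unlock()
        return self._props[key]
```

A presence applet that never edits accounts would then only ever hit the bulk read, and the user would see a keyring dialog only when something genuinely needs the password.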
A short-term fix might be to tweak the spec so that accounts don't have to provide their password property unless it's explicitly queried, but this might break the ABI of tp-glib. Ultimately, passwords being stored and passed around in this way should go away when we write an authentication interface which will pass authentication challenges up to the Telepathy client to deal with, enabling a unified interface for OAuth/Google/etc web tokens, Kerberos or SIP intermediate proxy authentication, and answering password requests from the keyring lazily or from the user on demand.

Stability & Security

Jonathan Blandford was concerned about the churn level of Telepathy, from the perspective of distributions with long-term support commitments, and how well compatibility will be maintained. Generally the D-Bus API and the tp-glib library APIs are maintained to the GNOME standards of making only additive changes and leaving all existing methods/signals working even if they are deprecated and superseded by newer interfaces. A lot of new interfaces have been added over the past year or so, many of which replace existing functionality with a more flexible or more efficient interface. However, over the next 4-6 months we hope to finalise the new world interfaces (such as multi-person media calls, roster, authentication, certificate verification, more accurate offline protocol information, chat room property/role management), and make a D-Bus API break to remove the duplicated cruft. Telepathy-glib would undergo an ABI revision in this case to also remove those symbols, possibly synchronised with a move from dbus-glib to GVariant/etc, but in many cases clients which only use modern interfaces and client helper classes should not need much more than a rebuild. Relatedly, there was a query about the security of Telepathy, and how much it had been through the mill on security issues compared to Pidgin.
In the case of closed IM protocols (except MSN, where we have our own implementation) we re-use the code from libpurple, so the same risks apply, although the architecture of Telepathy means it's possible to subject the backend processes to more stringent lockdowns using SELinux or other security isolation such as UIDs, limiting the impact of compromises. Other network code in Telepathy is based on more widely-used libraries with a less chequered security history thus far.

OTR

The next topic was about support for OTR in Telepathy. Architecturally, it's hard for us to support the same kind of message-mangling plugins as Pidgin allows because there is no one point in Telepathy that messages go through. There are multiple backends depending on the protocol, multiple UIs can be involved in saving (eg a headless logger) or displaying messages (consider GNOME Shell integration which previews messages before passing conversations on to Empathy), and the only other centralised component (Mission Control 5) does not act as an intermediary for messages. Historically, we've always claimed OTR to be less appealing than native protocol-level end-to-end encryption support, such as the proposals for Jingle + peer-to-peer XMPP + TLS which are favoured by the XMPP community, mostly because if people can switch to an unofficial 3rd-party client to get encryption, they could switch to a decent protocol too, and because protocol-level support can encrypt other traffic like SRTP call set-up, presence, etc. However, there is an existing deployed OTR user base, including the likes of Adium users on the Mac, who might often end up using end-to-end encryption without being aware of it, and whom we would be doing a disservice if Telepathy did not support OTR conversations with them. This is a compelling argument which was also made to me by representatives from the EFF, and the only one to date which actually held some merit with me compared to just implementing XMPP E2E encryption.
Later in the summit we went on to discuss how we might achieve it in Telepathy, and how our planned work towards XMPP encryption could also help.

Tubes

We also had a bit of discussion about Tubes, such as how the handlers are invoked. Since the introduction of MC5, clients can register interest in certain channel types (tubes or any other) by implementing the client interface and making filters for the channels they are interested in. MC5 will first invoke all matching observers for all channels (incoming and outgoing) until all of them have responded or timed out (eg to let a logger daemon hook up signal callbacks before channel handling proceeds), then all matching approvers for incoming channels until one of them replies (eg to notify the user of the new channel before launching the full UI), and then send the channel to the handler with the most specific filter (eg Tomboy could register for file transfers with the right MIME type and receive those in preference to Empathy, whose filter has no type on it). Tubes can be shared with chat rooms, either as a stream tube, where one member shares a socket for others to connect to (allowing re-sharing an existing service implementation), or a D-Bus tube, where every member's application is one endpoint on a simulated D-Bus bus, and Telepathy provides a mapping between the D-Bus names and the members of the room. In terms of Tube applications, now that we've got working A/V calling in Empathy, as well as desktop sharing and an R&D project on multi-user calls, our next priority is on performance and Tube-enabling some more apps such as collaborative editing (Gobby, AbiWord, GEdit, Tomboy?). There was a question about whether Tube handlers can be installed on demand when one of your contacts initiates that application with you.
It'd be possible to simulate this by finding out (eg from the archive) which handlers are available, and dynamically registering a handler for all of those channel types, so that MC5 will advertise those capabilities, but also registering as an approver. When an incoming channel actually arrives at the approval stage, prompt the user to install the required application and then tell MC5 to invoke it as the handler. Colin Walters asked about how Telepathy does NAT traversal. Currently, Telepathy makes use of libnice to do ICE (like STUN between every possible pair of addresses both parties have, which works in over 90% of cases) for the UDP packets involved in calls signalled over XMPP: either the Google Talk variant, which can benefit from Google's relay servers if one or other party has a Google account, so is more reliable, or the latest IETF draft, which can theoretically use TURN relays, but that's not really hooked up in Telepathy and few people have access to them. XMPP file transfers and one-to-one tube connections use TCP, which is great if you have IPv6, but otherwise impossible to NAT-traverse reliably, so they often end up using strictly rate-limited SOCKS5-ish XMPP proxies or, worse, in-band base64 in the XML stream. We hope to incorporate (and standardise in XMPP) a reliability layer which will allow us to use Jingle and ICE-UDP for file transfers and tubes too, allowing peer-to-peer connections and higher-bandwidth relays to enhance throughput significantly.

Future

Ted Gould had some good questions about the future of Telepathy.
Should Empathy just disappear on the desktop as things like presence applets or GNOME Shell take over parts of its function? Maybe, yes. In some ways its goal is just to bring Telepathy to end users and the desktop so that it's worth other things integrating into Telepathy, but Telepathy allows us to do a lot better than a conventional IM client. Maemo and Sugar on the OLPC use Telepathy but totally integrate it into the device experience rather than having any single distinct IM client, and although Moblin uses Empathy currently, it has its own presence chooser and people panel, and may go on to replace other parts of the user experience too. GNOME Shell looks set to move in this direction as well and use Telepathy to integrate communications with the desktop workflow.

Should Telepathy take care of talking to social networking sites such as Facebook, Twitter, etc.? There's no hard and fast rule: Telepathy only makes sense for real-time communications, so it's good for exposing and integrating Facebook chat, but pretty lame for dealing with wall posts, event invitations and the like. Similarly, on the N900 Telepathy is used for the parts of the cellular stack that overlap with real-time communications, like calling and SMS, but there is no sense pushing unrelated stuff like configuration messages through it. For Twitter, the main question is whether you actually want tweets to appear in the same UI, logging and notification framework as other messages. Probably not anything but the 1-to-1 tweets, meaning something like Moblin's Mojito probably makes more sense for that. Later in the summit I took a look at the Google Latitude APIs, which seem like something Telepathy could expose via its contact location interface, but probably not usefully until we have support for metacontacts in the desktop.

Can/will Telepathy support IAX2? It can, although we'd have to do a local demultiplexer for the RTP streams involved in separate calls.
It's not been a priority of ours so far, but we can help people get started (or Collabora can chat to people who have a commercial need for it). Similarly, nobody has looked at implementing iChat-compatible calling because our primary interest lies with open protocols, but if people were interested we could give pointers: it's probably just SIP and RTP after you dig through a layer of obfuscation or two.
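The MC5 channel-dispatching order described above (every matching observer runs, then approvers are asked until one answers, then the single handler with the most specific filter wins) can be sketched as a simplified model. This is an illustration of the dispatch logic only; the function and filter names are made up, not the real Telepathy or MC5 API:

```python
# Toy model of MC5's dispatch order for a new channel.

def matches(flt, channel):
    """A filter matches if every key/value pair appears in the channel."""
    return all(channel.get(k) == v for k, v in flt.items())

def dispatch(channel, observers, approvers, handlers):
    # 1. Every matching observer sees the channel first
    #    (e.g. a logger daemon hooking up signal callbacks).
    for flt, observe in observers:
        if matches(flt, channel):
            observe(channel)

    # 2. Matching approvers are asked in turn until one says yes
    #    (e.g. a notification prompting the user); if approvers
    #    match but none approves, the channel goes nowhere.
    matching = [approve for flt, approve in approvers if matches(flt, channel)]
    if matching and not any(approve(channel) for approve in matching):
        return None

    # 3. The handler whose matching filter has the most keys wins,
    #    so a file-transfer filter with a MIME type beats a bare one.
    candidates = [(len(flt), name) for flt, name in handlers
                  if matches(flt, channel)]
    return max(candidates)[1] if candidates else None

# Empathy registers a generic file-transfer filter; Tomboy's filter
# also names a MIME type, so a matching transfer goes to Tomboy.
handlers = [({"type": "file-transfer"}, "Empathy"),
            ({"type": "file-transfer", "mime": "application/x-note"}, "Tomboy")]
channel = {"type": "file-transfer", "mime": "application/x-note"}
print(dispatch(channel, [], [], handlers))  # prints "Tomboy"
```

A transfer with a different MIME type still matches Empathy's bare filter, so it falls through to the less specific handler rather than being dropped.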
If you want to know more about Telepathy, feel free to comment with some follow-up questions, talk to us in #telepathy on Freenode, or post to the mailing list.

19 January 2009

Axel Beckert: How I use my virtual desktops

Many months ago I stumbled upon a German-language meme about how users use their virtual desktops. I have used virtual desktops since my very early Unix days (tvtwm on Sun Sparc SLC/ELC/IPX with greyscale screens running SunOS 4.x), so by now I use them nearly everywhere the same way.

Short Summary

3x5, no overlapping windows, either tiling or fullscreen, keyboard navigation, xterms, yeahconsole, FVWM, panel for systray.

Window Manager of Choice

My window manager of choice has been FVWM for more than a decade. I tried others like Sawfish, Metacity and Compiz, but I couldn't get them to behave like the FVWM I was used to, so I always came back. Since I hate overlapping windows, I use FVWM a lot like a tiling window manager. FVWM has a nice function to maximize windows so that they occupy as much space as available but do not overlap other windows. This function was also often missing when I tried other window managers. I don't want to use a real tiling window manager, though, since I have a few sticky windows around (e.g. the panner with the virtual desktops and xosview) and they shouldn't be overlapped either.

Virtual Desktops

Switching between virtual desktops is done with the keyboard only, with Ctrl-Shift as modifier and the cursor keys. The cursor keys are usually pressed with the thumb, ring and small finger of the right hand. Which hand presses Ctrl and Shift depends on the situation and keyboard layout, but it's usually either the ring and small finger of the left hand, or the index and middle finger of the right hand. So I'm able to switch the virtual desktop with only one hand. I always have three rows of virtual desktops and usually four or five columns. The top row is usually occupied by xterms. It's my work space. The top left workspace usually contains at least one xterm with a shell and one with mutt, my favourite e-mail client for nearly a decade.
At home the second virtual desktop from the left in the top row usually contains a full-screen Liferea (my preferred feed reader), while at work it contains the GNU Emacs main window besides two xterms. Emacs and the Emacs server are automatically started at login. This also means that I switch virtual desktops when I switch between mutt and Emacs for typing the content of an e-mail. I did this already during my studies. (At home mutt runs inside a screen session, so there I just switch the virtual terminal with Ctrl-A Ctrl-A instead of the virtual desktop. Not that big a difference. ;-) The other virtual desktops of the top row get filled with xterms as needed, usually one virtual desktop per task. The middle row is for web browsers: one full-screen browser (usually Conkeror or Opera) per virtual desktop, often opened with many tabs (tabs in Opera, buffers in Conkeror) related to the task I'm accomplishing in the xterms in the virtual desktop directly above. The third row usually contains root shells for maintenance tasks, either permanently open ones on machines I need to administrate often (e.g. daily updates of Debian testing or Debian unstable machines), or for temporary mass administration (Linux workstations on the job, all Xen DomUs of one Xen server, etc.) using pconsole.

yeahconsole

Additionally I have a sticky yeahconsole running, an xterm which slides down from the top like the console in Quake. (It's the only overlapping thing I use. :-) My yeahconsole can be activated on every virtual desktop by pressing Ctrl-Alt-Z (with QWERTY layout; Ctrl-Alt-Y with QWERTZ layout). It's the terminal for those one-line jobs now and then, e.g. calling ccal, translate, wget or clive.

Changes over time

Of course the desktop usage changes from time to time: at work I have more than one monitor, so in the meantime the second row with the web browsers has moved to the second screen with independent virtual desktops (multiple X servers, no Xinerama).
The second row on the main screen at work is now used the same way as the third row, with a slight preference for the permanently open shells, while the third row is used more for mass administration with pconsole. At home I used XMMS and later Audacious for a long time (my FVWM panner and xosview are exactly as wide as WinAmp2/XMMS/Audacious; guess why :-), which was usually sticky the same way as the panner and xosview are. But when I started using last.fm recently, I moved to Rhythmbox (after testing some other music players, e.g. Amarok), which I use in fullscreen as I do with web browsers and the feed reader. So it occupies a complete virtual desktop, usually the second one in the middle row, below the feed reader, because I don't need a corresponding web browser for the feed reader. (I just found out that there is a last.fm player for text mode, so maybe that will change again. :-) Another thing which changed my virtual desktop usage was the switch from a classical tabbed web browser (Galeon, Kazehakase, Opera) to the buffer-oriented Conkeror. With a tabbed web browser I either have no overview of all open tabs (one-row tab bar or truncated tab menu) or they occupy too much space in the browser window. That was another reason for more than one browser window and therefore more than one virtual desktop with fullscreen web browser windows. With Conkeror tabs are optional (and not even enabled by default); Conkeror uses buffers like Emacs, and if you want to switch to another buffer, you press C-x b and then start typing parts of the buffer's name (e.g. parts of the URL or the web page title) to narrow down the list of buffers until only one is left or until you have spotted the wanted buffer in the list and choose it with the cursor keys. So the need for more than one browser window is gone. For a long time I didn't need any task/menu/start/whatever bar on my desktop.
But since neither NetworkManager nor wicd have a command-line interface (yet), and Bluetooth also seems easier to handle from the system tray, my laptops now use either gnome-panel (big screen, long sessions with FVWM) or lxpanel (formerly trayer; used on small screens, short sessions with ratpoison or matchbox). It's sticky and always visible. (No overlapping, remember? ;-) The panel is usually at the bottom of the screen, as by default with Windows or KDE, not at the top as with GNOME and MacOS. Only on the OpenMoko do I have the panel at the top, to be close to what I'm used to from Nokia mobile phones. Things I tried but didn't survive in my setup:

Systems without Virtual Desktops

Anyway, there are systems where I don't use virtual desktops at all. On systems with a screen resolution so small that there's not enough space for two non-overlapping, fixed-font 80x25 xterms on the screen (e.g. on my MicroClient with its 8" touch screen, the 7" EeePC or the OpenMoko) I do not use virtual desktops at all. On such systems I use all applications in fullscreen, so switching between applications is like switching virtual desktops anyway. My window managers of choice for such systems are ratpoison for systems with a keyboard and matchbox for systems without a keyboard. With ratpoison you treat windows like terminals in GNU screen, so there are no new keybindings to learn if you're already used to screen (which I have used nearly daily for more than a decade).
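For the curious, the layout described above (five columns by three rows, Ctrl-Shift plus the cursor keys for one-handed switching, and non-overlapping maximize) can be sketched as an FVWM config fragment. These lines are assembled from standard FVWM syntax for illustration, not taken from my actual config:

```
# Five columns by three rows of virtual pages
DeskTopSize 5x3

# Ctrl-Shift (modifiers "CS") plus a cursor key switches pages
# from any context ("A"), so it works one-handed
Key Left  A CS Scroll -100   0
Key Right A CS Scroll +100   0
Key Up    A CS Scroll    0 -100
Key Down  A CS Scroll    0 +100

# Grow the focused window as far as possible without
# overlapping any other window
Key F10 A CS Maximize grow grow
```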

20 August 2008

Adam Rosi-Kessel: Walter Michalik and the Roslindale Community Center

I’ve had little time in the four years I’ve lived in Roslindale to get involved in community volunteer efforts. My wife Rachele, however, has devoted several years to efforts such as the Roslindale Community Center, Roslindale Village Main Streets, and Roslindale Clean and Green. She has served as chair of both RCC and one of the RVMS committees. Over the past few months, I’ve been disappointed to see the RCC board endure repeated senseless attacks by a perhaps overzealous community member, Walter Michalik, who has effectively paralyzed the organization. I have nothing personal against this guy, but I thought I could do my part here by publishing an example of an email he broadcast to the committee, which I think is embarrassing enough on its own merits to need no editorializing from me, except to say any email that starts with “just so you won’t be surprised when the IRS launches its investigation” can be neither productive nor taken seriously. The context is a response to an announcement sent out pursuant to the Community Centers By-Laws scheduling a special meeting to discuss the recent upheaval on the Council. There’s no reason this should not be in the public record:
Rachele Just so you won’t be surprised when the IRS launches its investigation, be advised that YOU DO NOT HAVE THE AUTHORITY TO CALL FOR A SPECIAL MEETING. The By Laws do not allow the President/Chair to call for a Special meeting - that right is reserved for the MEMBERS to have a power over a hostile or non-performing President/Chair. It was created by a Membership wary of annimosity within its ranks in order to protect itself from falling into a useless dysfunctional body. That’s why our By Laws require the majority of the membership request it through the Chair and not the other way around. So you screwed that up too. Your continuous violation of rules and laws have proven beyond a doubt that this current Roslindale Community Center Council has violated the trust behind its IRS-designated not-for-profit status. The failures are on so many significant levels over 6 months it is readily apparent that the Roslindale Community is better served without a Community Center Council. There is no longer any excuse to allow the residents of Roslindale to be denied the services they deserve that you and the officers have no idea how to deliver. Because that’s what you are supposed to do - serve the residents of Roslindale and not preseide over a self-serving board. You’ve taken us from dysfunctional to non-functional over the past 60 days. You and your officers have already driven away our Archdale partners and now the merger is off. Congratulations. You got what you wanted. You and the officers have withheld information from the Members of the Council imposing gag orders along the way and in more than one instance spreading lies within this community. There are so very many lapses on your part and the rest of the officers that it is very apparent that the Roslindale Community Center exists IN SPITE OF the Roslindale Community Center Council - not because of it. It would be far better indeed, if the Center were run without the Council. What have you done in 6 months? 
All you do is avoid the work of leadership. Please, just gracefully step aside and let a real leader emerge - or dissolve the Council and let the Center run itself under BCYF before the IRS makes that happen. Of course this is my stated opinion. You can choose not to belive it. You can huddle with a group of self-serving officers and invent excuses or change the agenda away from the message and continue with your ad hominem attacks against the messenger. I’m thick-skinned. Feelings aren’t important to me - services to residents are important to me. While I have, in fact, enjoyed the confidences of others within this community and their true displesure with this Council, I will produce that to the appropriate authorities as I see fit along with the Bill of Particulars I have produced with documentation for all charges. As a founder of the Roslindale Community Center Council I can no longer support you or the officers as any legitimate authority in this community.

28 January 2008

Rob Taylor: Language runtime independence

Colin, you might want to check out PyPy, especially its translation framework, which (as I read it) allows the separation of language definition, optimisation techniques and machine backends (virtual or not). It looks like right now you can translate Python to C, CLR, JVM, LLVM or even JavaScript, and there seem to be people working on different language definitions as well, with JavaScript, Prolog, Scheme and Smalltalk in the tree.
