
There is an article on TechCrunch about Windows on the OLPC. This post started out as a comment there, below lots of comments that were missing the point, but it eventually grew too large.
The entire discussion circles around the question of whether it would be beneficial to give users the same look and behaviour found on 90% of machines worldwide, so that they can start prospective jobs with a minimum of training. Learning your way around the UI is only a significant part of that training if the actual work you will do is trivial, so this argument basically boils down to "I don't expect the African kids to do anything but grunt work during their lifetimes anyway, so we'd better start training them early", which is the wrong approach not only to education.
To make a bad car analogy: roads are usually made of several layers, from the foundation providing stability up to the paint defining lanes. Operating systems are similarly layered, with a core that applications (cars) never touch directly, and several layers on top of that which are not strictly required for basic functionality but add safety (process separation) or comfort (standard functions). The minimum standard is a "platform definition" that all car (or application) makers can rely on: all roads have a minimum width, and there are no dangerous spikes (and if that is not true, you can get a steamroller or, respectively, format your hard disk).
Railways use the same kind of foundation (operating system), but the platform (heh) is quite different. You cannot drive a car on a railway, or a train on a road, just as you cannot run a Windows application on a Linux system or vice versa (there are special wagons you can place your car on, and special trucks with rails on them if you feel like it, but these are heavier and need more energy to pull).
Now in this discussion, people have been comparing Windows (the platform) to Linux (the operating system). That doesn't work.
On Linux, there are several platforms available, the most prominent being GNOME and KDE on the desktop and the POSIX utilities on the command line, but there are lots of others as well. Part of most platform definitions is a user interface, which abstracts what is really happening into something comprehensible to the user, using analogies (a speedometer usually displays our speed as the angle of a needle, but other representations are possible).
The "desktop" idiom happened to be the first graphical UI some thirty years back, and was perpetuated into today's computers (just like the width of roads hasn't changed from the days of the Roman empire, where it was "two horses and then some"), however this doesn't mean it is the best choice available — it's just what we are used to.
If you look at the screen contents on day traders' computers (there is lots of that on TV right now thanks to the market crisis), you will notice that the vast majority do not use overlapping windows or standardized raised buttons to click on; instead, they use a tightly packed grid layout of high-contrast information displays that also colour-code certain messages.
I think that is the most important point here: to achieve optimal results, the presentation idiom needs to be chosen in a task-specific way.
With children as the target audience, we lose one of the key requirements behind the adoption of the windowed view: the need for side-by-side presentation of data from multiple unrelated sources (which would be a problem anyway, given the limited screen space). And with the introduction of ad-hoc mesh networking and collaborative applications, the "desktop" analogy begins to break down.
The project's mission also imposes requirements on the platform. If we want to keep the requirement that "users should be able to build and share their own stuff", then we want a framework in which it is hard to make mistakes, especially the kind that can only be spotted after an interesting failure, and in which it is, more importantly, impossible to write code that makes unrelated components fail, because those components might be your way back out of the situation.
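To make that last point concrete, here is a minimal sketch of crash containment through process separation, the classic Unix mechanism for exactly this guarantee. This illustrates the principle only; it is not Sugar's actual code, and the names are mine:

```c
/* Sketch: a buggy "activity" runs in its own process, so even a hard
 * crash cannot take down the shell that launched it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void buggy_activity(void)
{
    int *p = NULL;
    *p = 42;                  /* deliberate crash: NULL dereference */
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {           /* child: run the untrusted code */
        buggy_activity();
        _exit(EXIT_SUCCESS);
    }

    int status;
    waitpid(pid, &status, 0); /* parent: outlive the crash */
    if (WIFSIGNALED(status))
        printf("activity died (signal %d); the shell is still here\n",
               WTERMSIG(status));
    return 0;
}
```

Whatever the child process does to itself, the shell keeps running and can offer the user a way back out.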
Windows has an excellent event model with fairly good isolation of components (to the point where a problem in an event handler can be handled by the event loop rather than terminating the program; this is how Internet Explorer can shut down broken plugins instead of crashing). But the detailed knowledge required to really work with the API, such as how to build a message loop that also runs queued I/O completion handlers correctly, leads to a fairly steep learning curve and would teach implementation details rather than concepts.
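For the curious, the pattern in question looks roughly like the sketch below. This is the standard alertable-wait idiom, not production code; the subtlety is that a naive GetMessage() loop never enters an alertable wait, so completion routines queued with ReadFileEx() would simply never run:

```c
/* Sketch: a Win32 message loop that also lets queued I/O completion
 * routines (APCs) run.  Error handling omitted. */
#include <windows.h>

void message_loop(void)
{
    for (;;) {
        /* Wait alertably, waking for APCs as well as window messages. */
        DWORD r = MsgWaitForMultipleObjectsEx(0, NULL, INFINITE,
                                              QS_ALLINPUT, MWMO_ALERTABLE);
        if (r == WAIT_IO_COMPLETION)
            continue;             /* an I/O completion routine just ran */

        /* Drain all pending window messages before waiting again. */
        MSG msg;
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT)
                return;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}
```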
The normal "linuxy" approach of going low level whenever higher-level approaches fail is not the answer either as we want to truly empower people rather than just training them to be a cheap replacement for the tech support Indians (no offence), so it is vital that the "real" applications use the same framework that people implementing new things would use, and thus all the complexity that we want in our "official" applications needs to be taken care of by the platform, with all the safety features in place too.
So no existing platform provides what we want; hence Sugar. And that is the problem for Windows advocates: Sugar replaces exactly those bits that make Windows a platform and not just a kernel, so porting Sugar to Windows doesn't make sense from a technical point of view, since those are the bits we have already replaced because no free software existed for them before.
Other than that, the "Linux vs. Windows" kernel choice is secondary; both kernels are in fact very similar in design and function, and the various advantages and disadvantages of either aren't really that relevant.
The only technical reason in favour of Linux is virtual memory management: the Windows VMM behaves erratically in the absence of a swap device, which matters on a machine that has only flash storage and no swap, but I believe that is something that could be fixed.
The reason why I believe Linux is the better choice here is long-term support.
Since these devices will be used in basic education (which hasn't changed much over the years; 1 plus 1 still equals 2), there is hardly any need for radical changes after the initial rollout: why add instability when you don't have to? With Microsoft being a for-profit company, there needs to be a business model sustaining that kind of long-term support, and I believe it will be very hard to find one. "Subscription" falls down because it is a long-term recurring expense, which governments tend to be pretty wary of.
The alternative is to upgrade several million computers' OS every few years. Lots of companies skip entire Windows releases because of the migration cost, and even with the "console bonus" (all hardware is the same) and bootloader support for software upgrades over a mesh network, this is still a massive endeavour. And each machine would have to reserve enough space for the entire "upgrade pack" so it can transition in one go, which also makes this model unworkable.
To summarize, using Windows on the OLPC does not make sense at all. If you use just the kernel, you gain nothing over Linux; if you use the entire platform (and by extension the UI), you add complexity that is not merely unnecessary for the actual task but actively distracting. And if you add restrictions and extensions to make it work, you invent a new platform, which is precisely what Sugar did.
The argument that it is important for pupils to use the same thing that the rest of the world is using to ease their entry into the workforce is bogus at best, and racist at worst.