Search Results: "jason"

20 March 2012

Russell Coker: An Introduction to Android

I gave a brief introductory talk about Android at this month's LUV meeting. Here are the slides with a brief description. All the screen-shots were made on a Samsung Galaxy S running Cyanogenmod version 7.1 [1] (Android version 2.3.7). With that build of Cyanogenmod you can press the power button for about 1.5 seconds to get a menu which gives an option to take a screen shot. The aim of the talk was to give an overview of what Android can do. I also gave some random commentary about Android, such as explaining why it doesn't make a good phone. Most of the pictures in this post have links to the Android applications in question.

Essential and Important Apps

picture of root shell access running df

I started by explaining why having root access to your system is really important, including the issue of backing up an Android phone [2]. Cyanogenmod includes a terminal program which allows you to run "su -". Running a shell as root isn't generally that useful; what you really want is to be able to run programs such as Titanium Backup which can only work properly if given root access. When you run an OS that allows root access you can run "su -" at a terminal prompt and you can also have an application use a GUI to request root access. I recommend rooting and modding an Android phone immediately after buying it. However that takes some time, which is somewhat equivalent to money and is a significant hidden cost of purchasing an Android phone.

picture of 3g watchdog bandwidth monitor

The business models of telephone companies seem to involve hitting users with unexpected fees, and extra fees for excess bandwidth can be really expensive. 3G Watchdog is one app that can monitor bandwidth and disable data transfers if too much is used. Onavo is an alternative that allows tracking data use on a per-application basis, but it only runs on Android 2.3.x while 3G Watchdog works on Android 2.1.
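The quota-watchdog behaviour described above boils down to a simple projection. Here is a minimal sketch in Python; the function names and the naive linear model are my own illustration, not 3G Watchdog's actual algorithm:

```python
from datetime import date

def projected_usage_mb(used_mb, today):
    """Project this month's total data use from usage so far (naive linear model)."""
    # number of days in the current month
    if today.month == 12:
        days = 31
    else:
        days = (date(today.year, today.month + 1, 1) - date(today.year, today.month, 1)).days
    return used_mb / today.day * days

def should_disable_data(used_mb, quota_mb, today):
    """Disable data when the linear projection exceeds the quota."""
    return projected_usage_mb(used_mb, today) > quota_mb

# 150 MB used by the 10th of a 30-day month projects to 450 MB
print(should_disable_data(150, 400, date(2012, 4, 10)))  # True: 450 > 400
```

A linear projection is easily skewed by a few heavy days early in the month, which is one reason real monitors let you set hard per-day or per-month cut-offs instead.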
Official EBay app searching for Samsung Android phones

EBay has an official app which is handy for searching for items. So far I've only used it to get price estimates and have used a PC for buying.

K9 viewing my SE Linux mailing list email

K9 seems to be the best MUA for Android. The MUA that ships with Android 2.1 isn't nearly as good, and K9 is good enough that I didn't even bother testing the MUA from Cyanogenmod. The above picture shows a list of mail in my SE Linux folder.

graph by opticron grapher

The Opticron Grapher is a good graphing calculator. I won't claim it's the best because I didn't seriously test such programs, but for the basic tests I've done it has worked well.

Google map of LUV location

The Google Maps client comes with every Android system; the above shows the location of the LUV meeting.

Open Street Map location of the LUV meeting

Osmand is an Android client for the Open Street Map project. Here is the web site for the Open Street Map project [3]. One significant advantage of OSM over Google Maps is that OSM is free; the data is all contributed by users, like Wikipedia. Another significant advantage is that you can download as much data as you need to your phone; for example the entire dataset for Australia is about 200M. Storing 200M of data on your phone is no big deal when you consider the availability of phones with more than 16G of storage, and the ability to use a map when offline is a real benefit. So far I've used Osmand while waiting for a train at an underground station and I plan to use it to track my progress the next time I'm on a cruise.

serval mesh networking and VOIP

Serval is a mesh networking application for Android that supports VOIP phone calls and distributing messages and files. It's designed to be used in disaster areas, but there are lots of other potential uses of the technology. The Serval Project blog has an article about the presentation they gave at LCA 2012 [4].

periodic table
details about Titanium

Periodic Droid is my favorite Periodic Table viewing program.

tomfusion Au Weather forecast app

The Tomfusion AU Weather forecast app seemed to be better than the one from the BoM last time I checked. It's probably the best weather app for Australia.

screen-shot of LUV web site
LUV web site zoomed in

The Opera Mini browser is often faster than other browsers because it uses a compressing proxy run by Opera. It's not so good for privacy though.

LUV page on Wikipedia

I have been using Wapedia for browsing Wikipedia. Since giving the talk I discovered the official Wikipedia browser from the Wikimedia Foundation [5], which is a better fit for my needs, and I've uninstalled Wapedia. As an aside, modern phones have 16G of storage or more and could easily have a copy of the entire English text of Wikipedia on internal storage. It would be good if someone like Jason King (who is known for work on stand-alone DVD images for Wikipedia) was to write an Android program to do this.

Handy Apps

Androsensor showing GPS, acceleration, and light intensity
Androsensor showing magnetic field, orientation, and battery

Androsensor is a program to display output from most (all?) of the sensors on your phone. The results aren't as accurate as one would hope; for example Earth's gravity is 9.81m/s^2, not the 10.26m/s^2 my phone registered. But they are a useful indication.

picture of Coke can, scanning the barcode
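A quick way to sanity-check readings like the one above is to combine the three accelerometer axes into a single magnitude and compare it with standard gravity. A plain-Python sketch (my own helper names, not Androsensor code):

```python
import math

STANDARD_GRAVITY = 9.80665  # m/s^2, the conventional standard value

def magnitude(ax, ay, az):
    """Combine the three accelerometer axes into one overall magnitude."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def gravity_error_percent(ax, ay, az):
    """How far a resting phone's reading deviates from standard gravity."""
    return (magnitude(ax, ay, az) - STANDARD_GRAVITY) / STANDARD_GRAVITY * 100

# a phone at rest reporting 10.26 m/s^2 overall reads about 4.6% high
print(round(gravity_error_percent(0.0, 0.0, 10.26), 1))  # prints 4.6
```

Cheap MEMS accelerometers are rarely factory-calibrated to better than a few percent, so an error of this size is plausible rather than a bug in the app.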
Google search on Coke can barcode
QR Code lookup of Facebook page

The Zxing Barcode scanner is one of many programs that will scan barcodes with the camera in an Android phone. It can launch a Google search on a product code or open a URL from a QR code. The above pictures show it scanning a Coke can (the can and other background was displayed on the full screen before the screen capture program activated), doing a Google search on the can barcode, and looking up a QR code that was on an advertisement outside the LUV venue.

picture from the bridge of the Dawn Princess in Tauranga NZ

Cruise Cams allows you to download pictures from cruise ships. Some cruise ships have several cameras on different parts of the ship uploading pictures regularly so that people around the world can see what's happening.

list of geo-caches near the park where I prepared most of my notes

The c:geo open source program allows you to get information on geocaches and see a compass or map showing the location. This program has been getting an increasing number of features to do everything you might want to do related to geocaching. The above picture shows some caches that are close to where I made the screen-shot, in a park a couple of km from the meeting location.

picture of Google Sky Map in the direction of Andromeda

Google Sky Map uses augmented reality techniques to display stars in the direction that your phone is pointing, along with their names and the names of the constellations.

Marine Traffic showing ships near me
Marine Traffic showing the Pacific Sun highlighted on a Google Map
Marine Traffic showing details of the Pacific Sun
Photos of the Pacific Sun taken by fans and shown by Marine Traffic

The Marine Traffic program shows the locations of ships as well as lots of information about them. The above pictures show me discovering that the Pacific Sun was nearby, viewing its location on Google Maps, seeing the details, and then viewing fan pictures. The developer's web site allows viewing all the same data without an Android phone [6]. Anyone can join the project by buying an Automatic Identification System (AIS) receiver and configuring a PC to take data from AIS and send it to the servers. As an aside, they seem to be missing coverage in western Victoria, so it would be good if someone near Apollo Bay or Warrnambool could install an AIS receiver and help out.

Satellite map from Satellite AR

Satellite AR uses augmented reality to show the location of satellites and other things in space. Unfortunately the screen capture process turned off the camera as I had a sign advertising fast food positioned in an amusing location in the background.

Shipmate overview of Dawn Princess
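For the curious, the AIS receivers mentioned above emit NMEA 0183 sentences, and the two hex digits at the end of each sentence are a simple XOR checksum over everything between the leading "!" (or "$") and the "*". A minimal Python sketch with my own helper names, not part of the Marine Traffic software:

```python
from functools import reduce

def nmea_checksum(sentence):
    """Checksum of an NMEA 0183 sentence: XOR of every character
    between the leading '!' (or '$') and the '*', as two hex digits."""
    body = sentence[1:sentence.index('*')]
    return format(reduce(lambda acc, ch: acc ^ ord(ch), body, 0), '02X')

def checksum_ok(sentence):
    """Compare the computed checksum against the two digits after '*'."""
    return nmea_checksum(sentence) == sentence[sentence.index('*') + 1:][:2]
```

A PC feeding an AIS data-sharing service would typically run something like checksum_ok() over each received line before forwarding it, dropping sentences corrupted on the serial link.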
Shipmate map of Dawn Princess

Shipmate publishes a set of programs giving information on cruise ships; they have one program for each cruise line. Above is the program for the Princess cruise line, and the above pictures give information on the Dawn Princess. Unfortunately the program wasn't usable without net access when I tried to use it on a cruise ship.

Games

Air Attack HD game, clone of 1942

Air Attack HD is an entertaining game that demonstrates the capability of Android phones to run action games. Like many Android games it has a free version and paid versions if you want more.

picture of Angry Birds

Angry Birds is one of the most well known games for touch-screen devices. It has also spawned a huge line of merchandise.

Labyrinth Lite
Labyrinth Lite from a different angle

Labyrinth Lite is one of the many Android games based on the old mechanical game where you tilt a toy to roll a ball-bearing through a maze. It's free and is better than most of the free games in that genre.

Minecraft Pocket Edition

Minecraft Pocket Edition allows you to play Minecraft on your phone. The demo version doesn't allow saving the game; you have to buy the game for about $5 if you want to do that. It also lacked the full features of the game last time I checked; it didn't have monsters.

Paradise Island overview
Paradise Island details of Bungalow

Paradise Island is one of many business simulation games for Android. It's more playable than most and has very detailed graphics, but the down-side is that it's a memory hog and will crash if you don't have enough RAM. It's one of the games that are free to download but encourage you to pay money to level up; for some players it's probably a very expensive game.

Tower Raiders 2, Pratt was eaten by a Grue

Tower Raiders 2 is one of the better tower defense games for Android. Related posts:
  1. Choosing an Android Phone My phone contract ends in a few months, so I'm...
  2. Galaxy S vs Xperia X10 and Android Network Access Galaxy S Review I've just been given an indefinite loan...
  3. Standardising Android Don Marti wrote an amusing post about the lack of...

10 January 2012

Andrew Pollock: [life] Breaking and entering, with permission

I had a bit of an adventure yesterday, which would have taken some explaining if the police had gotten involved. It went a little something like this... My friend and former co-worker Sara was in the US Virgin Islands for the holidays. Her boyfriend, Karl, flew there separately for the tail end of her time there. Yesterday, I received a phone call from Sara, saying that Karl had managed to fly out to the Virgin Islands without his passport. Apparently you can get there without one, but to get back into the mainland US, you need one. She wanted to know if I could get one of my lock-picking co-workers to break into their apartment and retrieve Karl's passport and mail it to them. Karl was supposed to fly out the next day. Attempts by Sara to contact her landlord had failed, so they didn't have many other options (apart from mailing me a key, which would have cost them another day). I asked one of my co-workers, Jason, who I knew was into lock picking, if he was up for it, and he offered to put me in touch with another guy who had dominated the recent lock picking night that he'd run. So now I'm talking to David, who's on board with the mission, but doesn't have his lock picking gear on him. No problem, Jason says he'll lend me his, which was at work with him. So we have a plan. Our friends Ian and Melinda are currently in Australia. They've lent us their car because it's leased, and they have some minimum mileage they're supposed to do and they're under it, so I've been driving to work in their car some days. As it happens, I drove to work in it yesterday. So now David and I set out in a car that neither of us own, with a lock picking set that belongs to another person, to break into an apartment of someone who's in the Virgin Islands. What could possibly go wrong? I'm told that it's not illegal to own a lock picking set, but if you're caught with one on your person and you're not a locksmith, you can get into all sorts of trouble.
On top of that, I'd have a hard time explaining the car I'm driving. We get to Sara and Karl's condo complex. It has a common gate that visitors would normally get buzzed through. Turns out it's not that hard to climb over. It's got some benign-looking spiky things on top, but I could get a leg over from the left hand side of the gate and jump over without impaling myself. Then I let David in and we proceeded upstairs to Sara and Karl's apartment door, where David set to work. Sara said that just the dead bolt was locked. David started at it with Jason's tools, trying to be as discreet as possible. It was about 3:30pm and there was no one around, but we could hear some noises from the neighbouring apartment (the two front doors were right next to each other). After what felt like about half an hour without success (the last pin of the lock was particularly tricky, apparently) David was having to resort to noisier techniques with the lock, so I decided to take the up-front approach and just inform the next door neighbour what we were doing, in case he/she (I think it was a she) decided to call the cops on us. I told her through the door why we were there and what we were doing. She didn't seem to care too much. David then proceeded to start "raking" the lock, essentially brute forcing the pins with a lot of jiggling, and finally managed to pick it and we were in. I quickly found Karl's passport where it was suspected to be, and then we pondered how we were going to lock the door again. We could have just locked the door knob instead of the deadbolt and closed the door behind us, but we weren't sure if Sara and Karl had a key to the doorknob (Sara said they always just locked the deadbolt). Sara was fine with leaving the door unlocked until they got home, but we weren't so keen on leaving our fingerprints all over the place and then leaving the door unlocked.
David tried to re-pick the deadbolt so that he could lock it by the same means as he had opened it, and I scouted around for a key. I managed to find a key that locked both the deadbolt and the doorknob, so I took that with us and locked up their apartment. In David's defense, the deadbolt was a bit stiff to lock even with the key. I dropped David back at work, collected my stuff (it was now about 4:30pm) and headed to the UPS Store to ship Karl's passport to him as fast as humanly possible. I just made the 5pm pick-up. Today I received an SMS from Sara informing me that they had received the passport. I was very impressed with how fast it got to them. So that was all a bit of an adventure. I'm not sure how much longer Karl is going to have to stay in the Virgin Islands as a result. I'm going to suggest that Sara and Karl leave a spare key with someone in future.

9 December 2011

Russell Coker: Cocolo Chocolate

Cocolo Overview

I recently wrote about buying a fridge for storing chocolate [1]. Jason Lewis (the co-founder of Organic Trader [2]) read that post and sent me some free samples of Cocolo chocolate [3] (Cocolo is an Organic Trader product that is made in Switzerland). It's interesting to note that Cocolo seem very focussed on a net presence [3]; their URL is printed on the back of the packet in an equal size font to the main label on the front (although the front label is in upper case). The main web page has a prominent link to their Twitter page, which appears to be updated a couple of times a month.

Picture of Cocolo chocolate packaging

Cocolo makes only organic fair-trade chocolate. Every pack lists the percentage of ingredients that are Fairtrade (presumably milk and some other ingredients are sourced locally in Switzerland and Fairtrade doesn't apply to them). Their chocolate packages have the URL printed on them and their web site links to an international Fairtrade organisation. The packages also list the organic and Fairtrade certification details and state that they are GMO free. The final geek data on the package is the advice to store the chocolate at a temperature between 16C and 18C (I have now set my fridge thermostat to 17C). The above picture shows the front of a pack of Dark Orange chocolate and the back of a pack of Milk chocolate.

Reviews

One thing that is different about Cocolo is that they use only unrefined evaporated organic cane sugar juice to sweeten their chocolate. This gives it a hint of molasses in the flavor. Children who like white sugar with brown coloring might not appreciate this, but I think that the use of natural cane sugar juice will be appreciated by most people who appreciate products with complex and subtle flavors. The Milk chocolate contains a minimum of 32% cocoa solids; this compares to the EU standard of a minimum of 25% for milk chocolate and the UK standard of a minimum of 20% for "Family Milk Chocolate".
The EU standard for dark chocolate specifies a minimum of 35% cocoa solids, so it seems that Cocolo milk chocolate is almost as strong as dark chocolate. If you are used to eating dark and bittersweet chocolate then the Cocolo milk chocolate is obviously not that strong, but it is also significantly more concentrated than most milk chocolate that is on the market. The high chocolate content combined with the evaporated cane sugar extract gives a much stronger flavor than any of the milk chocolates that I have eaten in recent times. The Dark Mint Crisp chocolate has a minimum of 61% cocoa mass. The mint crisp is in very small pieces that give a good texture to the chocolate with a faint crunch when you bite it. It has a good balance of mint and chocolate flavors. The Dark Orange chocolate contains 58% cocoa solids and has a subtle orange flavor. The white chocolate tastes quite different from most white chocolate. While most white chocolate is marketed to children, the Cocolo white chocolate will probably appeal more to adults than children. This is one of the few white chocolates that I've wanted to eat since the age of about 14. They also have many other flavors; most common types of chocolate (such as with almonds or hazelnuts) are available. I highly recommend Cocolo products!

6 August 2011

Bdale Garbee: FreedomBox in Banja Luka

FreedomBox activities at Debconf11

I spent the last two weeks of July 2011 in Banja Luka. The occasion was the annual Debian developer's conference, Debconf11, and the preceding work week known as Debcamp. This was my tenth successive year attending Debconf, and I had a very productive and pleasant time! The facilities were good, the local team was friendly, enthusiastic, and very helpful, and in addition to giving three talks and hosting a couple of panel discussions, I managed to put a burst of energy into work on FreedomBox. Several other developers working on FreedomBox were also present, and a good number of Sheeva and Dream plugs were evident in the hacklabs sporting new FreedomBox stickers. Working together in the same place for several days, we made good progress on several projects, and also had some great discussions about what we want to do going forward.

image building tools

For some time, I've been working towards a light-weight tool set to build FreedomBox software images. Shortly before Debconf started, I chose the name 'freedom-maker' for this tool and shared a link to a readable copy of my git repository with other developers I expected to work with in Banja Luka. With input from Bert Agaz and Jonas Smedegaard during Debcamp, freedom-maker went from almost useful to actually useful. It still deserves work to be more useful to others, but I have now pushed a copy of the git repository to so that we can take advantage of the tools supported there to enable others to more easily contribute to the code. Very soon, Bert plans to add support to freedom-maker for using Lars Wirzenius' vmdebootstrap to build x86 images suitable for testing in a virtualized environment. At the same time, we plan to refactor the existing code slightly to enable lists of desired packages for the various image flavors we expect to produce independently of the configuration for each specific image building tool.
Jonas continued in parallel to work on his alternate packaging toolset, boxer. It offers some potentially interesting features for the future, and we may eventually merge some or all of it into freedom-maker, but for now it remains a separate utility.

uAP user space tools

Several weeks ago, we received from Marvell the source code to two user space programs that are necessary for configuring and monitoring the binary firmware provided for the uAP wireless chip used in the DreamPlug. Early during my stay in Banja Luka, I packaged these for Debian as uaputl and uapevent, and I am pleased to note that they were quickly accepted into the archive and are now present in Debian mirrors.

u-boot

Another bit of code received very shortly before Debconf started was the source for the version of u-boot shipped by Globalscale in the DreamPlug units we're working with. During Debcamp, Clint Adams passed a copy of this source to Jason Cooper, who was already trying to add support for the DreamPlug to upstream u-boot, but had stalled due to a lack of information. Jason has now merged his own work with the sources we got from the manufacturer, and is making good progress towards merging DreamPlug support into upstream u-boot. Once that happens, we should be able to flash our Sheeva and Dream plug devices with a u-boot image built from the source in the Debian u-boot package, in the process enabling things that matter to us like the ability to boot from an ext2 partition, and hopefully the ability to execute command scripts from that partition instead of having to hard-code kernel filenames in flash. This will allow us to support the ongoing effort in Debian to move away from the need for kernel symlinks.

DreamPlug kernels

With respect to kernels, another work stream at Debconf primarily involving Héctor Orón and Nick Bane was to analyze the current state of the patches from Marvell and Globalscale used to support the DreamPlug against both upstream and current Debian kernel sources.
To my surprise and our collective pleasure, the remaining patch set required against current upstream kernels is much smaller than we previously believed! There are still several patches critical to us that are not merged upstream, but the work remaining to be able to build images for our devices from mainline and Debian kernel source trees now seems like something we might be able to complete before Debian's next stable release. One of our discoveries during the u-boot and kernel work during Debconf was that Globalscale did not obtain a new machine id for the DreamPlug, but instead re-used the one for the GuruPlug series, despite there being some differences in the hardware that require at least one additional driver. After much discussion, we plan to continue using the existing machine id instead of requesting another, particularly because the ARM kernel community has apparently stopped issuing new ids for the moment. We will add a new kernel config option for the DreamPlug, however, and are likely to build distinct Sheeva and Dream kernel packages that do not require an initrd for use in FreedomBox images, even if doing so is not strictly necessary. This will allow us to optimize both the in-memory footprint and boot times for our devices.

software configuration

Another area of investigation in Banja Luka was technology for package configuration. Mirsal Ennaime performed various tests using debconf and Config::Model, with some results reflected in this commit relating to configuring the bitcoind daemon in the bitcoin package for Debian.

identity and trust management

While we did not actually do any FreedomBox specific work on the trust management layer we know is necessary, after several rounds of conversation, I am now more convinced than ever that the right path forward is to base our trust relationships on OpenPGP keys, using GnuPG and Monkeysphere as starting software elements. Our thinking to date is captured on an Identity Management page in the wiki.
communication services

Another thing that became fairly clear to me during discussions at Debconf is that in the near term, planning to build communication services around XMPP is the approach most likely to give good results. Investigating the software choices available to build an interesting XMPP infrastructure is now a high priority for me. Jonas has done some work towards configuring and integrating ejabberd or Prosody, I've started studying yate as a possible call manager and VoIP server choice with XMPP/jingle support, and we await with great interest a release from the Buddycloud developers to evaluate as a possible basis for deploying social network services. Some of these software choices will lead us to use Apache as our web services base technology because of the need for features that only it supports well among daemons that are Free Software. Jonas completed packaging GNU SIP Witch for Debian, and it is now available in the mirror network. Tzafrir Cohen and Jonas did some initial testing on its use.

documentation

A number of new wiki pages were written (or at least started) in order to sum up ideas, design various aspects of FreedomBox, and reflect discussions that happened during DebConf11. A lot of work is needed to complete these pages though, as well as others to capture more of the current state of the project.

press coverage

Finally, while in Banja Luka I got some great press coverage for FreedomBox! On Sunday the 24th, I was interviewed by the main television network serving the Republika Srpska. This led to a couple of minutes of coverage near the top of the national news program that night, immediately following the lead story about the President and several ministers appearing at Debian Day that morning to help open the conference. This interview was later re-used in another TV program that summarized Debconf11.
On the morning of Thursday the 28th, I was part of a small group that spent more than an hour meeting with the Minister of Science and Technology in his office, and the relationship between Debian and our work on FreedomBox was one of the items of discussion in that meeting and the associated press conference. I'm told this resulted in more press coverage, but if true I have not seen it yet.

summary

On Friday afternoon the 29th, I gave a talk in the main Debconf program containing a "FreedomBox Progress Report". In it, I talked about the structure of the FreedomBox Foundation, progress the foundation has made, and the work that was still then underway in Banja Luka. It was streamed live over the internet, and replays are available online. The reaction from Debian developers present was very positive, which was good to learn since by that time my energy level was quite low after the nearly two weeks of intense technical and social interaction that is Debconf! All in all, we got lots of work done on FreedomBox in Banja Luka, enough that I think at least the next few steps along the road towards an eventual "1.0" release of a reference implementation are now much clearer than they were two weeks ago!

5 June 2011

Dirk Eddelbuettel: Charles Lloyd and Zakir Hussain

Wonderful concert on Friday evening at Symphony Center: Charles Lloyd (Wikipedia; ts, flute, vocals) was in town with both his quartet and trio. The first part of the set was performed with Zakir Hussain (Wikipedia; tabla, vocals) and up-and-coming drummer Eric Harland (Wikipedia; drums). For the second half, Lloyd and Harland were joined by Jason Moran (Wikipedia; piano) -- who happens to also be a 2010 MacArthur Fellow -- and Larry Grenadier (Wikipedia; bass) before Hussain returned for the final piece and encores. It was evident how much joy Charles Lloyd still gets from performing live at his somewhat advanced age of 73, surrounded by some extraordinary musicians and clearly enjoying himself on stage. A great evening with wonderful jazz music from somewhere in between modern post-whatever, free, bop and hard bop, world music and everything else in between. I regret not having seen him earlier in life. Very much recommended.

18 April 2011

Asheesh Laroia: Why I spend my time on outreach

Current mood: moved

Today is an amazingly great day. I wrote a blog post about an event that I put together, and an attendee then followed up by writing a long-form comment. The thing is, the attendee did a far better job of writing about the event than I did. The amount of emotional positive reinforcement that this comment gave me is hard to overstate. You can find it toward the end of the blog post, but I hope you'll read the copy below. I am stunned and awed. I guess I have my work cut out for me.
8----< CUT HERE >-----8
I have never written software for an open source project. I am not subscribed to any development mailing lists. I have not been in a chat room on IRC for months. Yet I was delighted to see this tweet from @torproject on 04/15/2011: "Join us in #vidalia on today at 13:00 UTC." The blog post described the Build It initiative for the Vidalia project, where people would be available to help you set up your build environment and compile the project. I have wanted to participate in open source projects for quite a while, but never really knew where to begin. I have experience (and enjoy) writing software. I am glad to learn the languages they are using. I know how to compile software. I'm glad to learn their versioning system and build system. I looked into participating in several projects, but felt like I would be more of a burden than a help considering the relatively small amount that I was intending to contribute. I can generally figure anything out on my own, but it's nice to have somewhere to turn when you are struggling with something "simple". I thought that this would be a perfect opportunity for me, since I already preach the use of Tor and Vidalia. I've even demonstrated the bundle at a local LUG meeting. :) I joined the OFTC #vidalia room and waited; nothing happened around 13:00 UTC, so I figured I'd missed the event. They began around 13:30 UTC and walked us through the source code download process and compilation process. They directed our attention to the Volunteer page and the HACKING page. chiiph even suggested several simple OSX-specific tickets for me personally, since he knew that I was building on OSX. I've already managed to contribute a patch for one ticket and am ready to begin a second ticket. I wouldn't have done it without help and feedback from chiiph and others. I am confident that there are many others who would be glad to help out with one or more of their favorite open source projects if they only had some place to begin.
I hope that other project members or leaders offer similar Build It events for their users.

Jason Klein

21 January 2011

Raphaël Hertzog: People behind Debian: Michael Vogt, synaptic and APT developer

Michael and his daughter Marie

Michael has been around for more than 10 years and has always contributed to the APT software family. He's the author of the first real graphical interface to APT: synaptic. Since then he created software-center as part of his work for Ubuntu. Being the most experienced APT developer, he's naturally the coordinator of the APT team. Check out what he has to say about APT's possible evolutions. My questions are in bold, the rest is by Michael.

Who are you?

My name is Michael Vogt, I'm married and have two little daughters. We live in Germany (near Trier) and I work for Canonical as a software developer. I joined Debian as a developer in early 2000 and started to contribute to Ubuntu in 2004.

What's your biggest achievement within Debian or Ubuntu?

I can not decide on a single one so I will just be a bit verbose. From the very beginning I was interested in improving the package manager experience and the UI on top for our users. I'm proud of the work I did with synaptic. It was one of the earliest UIs on top of apt. Because of my work on synaptic I got into apt development as well and fixed bugs there and added new features. I still do most of the uploads here, but nowadays David Kalnischkies is the most active developer. I also wrote a bunch of tools like gdebi, update-notifier, update-manager, unattended-upgrade and software-properties to make the update/install situation for the user easier to deal with. Most of the tools are written in Python, so I added a lot of improvements to python-apt along the way, including the initial high level apt interface and a bunch of missing low-level apt_pkg features. Julian Andres Klode made a big push in this area recently and thanks to his effort the bindings are fully complete now and have good documentation. My most recent project is software-center. Its aim is to provide a UI strongly targeted at end-users. The goal of this project is to make finding and installing software easy and beautiful.
We have a fantastic collection of software to offer and software-center tries to present it well (including screenshots, instant search results and soon ratings & reviews). This builds on great foundations like aptdaemon by Sebastian Heinlein, the work of Christoph Haas and Michael Bramer, apt-xapian-index by Enrico Zini and many others (this is what I love about free software: it usually adds, rarely takes away).

What are your plans for Debian Wheezy?

For apt I would love to see a more pluggable architecture for the acquire system. It would be nice if apt-get update (and the frontends that use this from libapt) could download additional data (like debtags, or an additional index file that contains more end-user targeted information). I also want to add some scripts so that apt (optionally) creates btrfs snapshots on upgrade, and provide some easy way to roll back in case of problems. There is also some interesting work going on around making the apt problem resolver a more pluggable part. This way we should be able to do much faster development. software-center will get ratings & reviews in the upstream branch; I really hope we can get that into Wheezy.

If you could spend all your time on Debian, what would you work on?

In that case I would start with a refactoring of apt to make it more robust against ABI breaks. It would be possible to move much faster once this problem is solved (it's not even hard, it just needs to be done). Then I would add a more complete testsuite. Another important problem to tackle is to make maintainer scripts more declarative. I triaged a lot of upgrade bug reports (mostly in Ubuntu though) and a lot of them are caused by maintainer script failures. Worse, depending on the error it's really hard for the user to solve the problem. There is also a lot of code duplication. Having a central place that contains well-tested code to do these jobs would be more robust.
Triggers help us a lot here already, but I think there is still more room for improvement.

What's the biggest problem of Debian?

That's a hard question :) I mostly like Debian the way it is. What frustrated me in the past were flamewars that could have been avoided. To me, being respectful to each other is important; I don't like flames and insults because I like solving problems, and fighting like this rarely helps that. The other attitude I don't like is to blame people and complain instead of trying to help and be positive (the difference between "it sucks because it does not support $foo" and "it would be so helpful if we had $foo because it enables me to do $bar").

For a long time, I had the feeling you were mostly alone working on APT and were just ensuring that it keeps working. Did you also have this feeling, and are things better nowadays?

I felt a bit alone sometimes :) That being said, there were great people like Eugene V. Lyubimkin and Otavio Salvador during my time who did a lot of good work (especially at release crunch times) and helped me with the maintenance (but got interested in other areas than apt later). And now we have the unstoppable David Kalnischkies and Julian Andres Klode. Apt is too big for a single person, so I'm very happy that David especially is doing superb work on the day-to-day tasks and fixes (plus big projects like multiarch and the important but rather thankless testsuite work). We talk about apt stuff almost daily, doing code reviews and discussing bugs. This makes the development process much more fun and healthy. Julian Andres Klode is doing interesting work around making the resolver more pluggable, and Christian Perrier is as tireless as always when it comes to merging translations. I did a quick grep over the bzr log output (including all branch merges) and count around ~4300 total commits (including all revisions of branches merged). Of those, there are ~950 commits from me, plus an additional ~500 merges.
It was more than just ensuring that it keeps working, but I can see where this feeling comes from, as I was never very verbose. Apt was also never my only project; I am involved in other upstream work like synaptic, update-manager, python-apt etc. This naturally reduced the time available to hack on apt and to spend on the important day-to-day bug triage, responding to mailing list messages etc. On the python-apt side Julian Andres Klode did great work to improve the code and the documentation. It's a really nice interface, and if you need to do anything related to packages and love Python, I encourage you to try it. It's as simple as:
import apt
cache = apt.Cache()
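Building on those two lines, here is a slightly fuller sketch of what typical python-apt usage looks like. This is my own hedged illustration, not code from the interview: it assumes the current python-apt attribute names (`Cache` indexing, `is_installed`, `installed.version`) and degrades gracefully on systems where python-apt is not installed.

```python
try:
    import apt  # python-apt bindings; only available on Debian-style systems

    cache = apt.Cache()
    pkg = cache["apt"]  # look up a package record by name
    state = pkg.installed.version if pkg.is_installed else "not installed"
    msg = "%s: %s" % (pkg.name, state)
except ImportError:
    msg = "python-apt not available on this system"
print(msg)
```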
Of course you can do much more with it (update-manager, software-center and lots more tools use it). With pydoc apt you can get a good overview. The apt team always welcomes contributors. We have a mailing list and an IRC channel, and it's a great opportunity to solve real-world problems. It does not matter if you want to help triage bugs, write documentation or write code; we welcome all contributors.

You're also an Ubuntu developer employed by Canonical. Are you satisfied with the level of cooperation between both projects? What can we do to get Ubuntu to package new applications developed by Canonical directly in Debian?

Again a tricky question :) When it comes to cooperation there is always room for improvement. I think (with my Canonical hat on) we do a lot better than we did in the past. And it's great to see the current DPL coming to Ubuntu events and talking about ways to improve the collaboration. One area where I feel that Debian would benefit is being more positive about NMUs and shared source repositories (collab-maint and LowThresholdNmu are good steps here). The lower the cost of pushing a patch/fix (e.g. via direct commit or upload), the more there will be. When it comes to getting packages into Debian, I think the best solution is to have a person in Debian as a point of contact to help with that. Usually the amount of work is pretty small, as the software will already have a debian/* dir with useful stuff in it. But it helps me a lot to have someone doing the Debian uploads, responding to the bugmail etc (even if the bugmail is just forwarded as upstream bug reports :) IMO it is a great opportunity, especially for new packagers, as they will not have to do a lot of packaging work to get those apps into Debian. This model works very well for me, e.g. for gdebi (where Luca Falavigna is really helpful on the Debian side).

Is there someone in Debian that you admire for his contributions?

There are many people I admire. Probably too many to mention them all.
I always find it hard to single out individual people because the project as a whole can be so proud of its achievements. The first name that comes to my mind is Jason Gunthorpe (the original apt author), who I've never met. The next is Daniel Burrows, who I met and was inspired by. David Kalnischkies is doing great work on apt: from contributing his first (small) patch to being able to fix virtually any problem and add big features like multiarch support, in about a year. Sebastian Heinlein for aptdaemon. Christian Perrier has always been one of my heroes because he cares so much about i18n. Christoph Haas, and Michael Bramer for his work on translated Debian package descriptions.
Thank you to Michael for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Twitter and Facebook.

4 comments Liked this article? Click here. My blog is Flattr-enabled.

10 December 2010

Raphaël Hertzog: People behind Debian: David Kalnischkies, an APT developer

The first two interviews were dedicated to long-time Debian developers. This time I took the opposite approach: I interviewed David Kalnischkies, who is not (yet) a Debian developer but has been contributing to one of the most important pieces of software in Debian, the APT package manager, since 2009. You can already see him in many places in Debian sharing his APT knowledge when needed. English is not his native language and he's a bit shy, but he accepted the interview nevertheless. I would like to thank him for the effort involved, and I hope his story can inspire some people to take the leap and just start helping. My questions are in bold, the rest is by David.

Who are you?

I am David Kalnischkies, 22 years old, living in the small town Erbach near Wiesbaden in Germany, and I'm studying computer science at the TU Darmstadt. Furthermore, for more than half a decade now I have been a youth group leader in my hometown. I never intended to get into this position, but it has similarities with my career in this internet-thingy here. I don't remember why, but in April 2009 I was at a stage where some simple bugs in APT annoyed me so much that I grabbed the source, and, most importantly, I don't know why I did it, but I published my changes in May with #433007, a few more bugs and even a branch on Launchpad. And this public branch got me into all this trouble in June: I got a mail from Mr. package management Michael Vogt regarding this branch. A few days later I joined an IRC session with him, and shortly after that my name appeared for the first time in a changelog entry. It's a strange but also addictive feeling to read your own name in an unfamiliar place. And even now, after many IRC discussions, bugfixes and features, three Ubuntu Developer Summits and a Google Summer of Code in Debian, my name still appears in places I had never even thought about, e.g. in an interview.

What's your biggest achievement within Debian?
I would like to answer "MultiArch in APT" as it was my Google Summer of Code project, but as it has not much use for the normal user at this point (which will hopefully change for wheezy), I chose three smaller things in squeeze's APT that many people don't even know about yet. If your impression is now that I only do APT stuff: that's completely right, but that's already more than enough for me for now, as the toolchain behind the short name APT contains so many tools and use cases that you always have something different.

You're an active member of the APT development team. Are there plans for APT in Debian Wheezy? What features can we expect?

That's very hard to answer, as the team is too small to be able to really plan something. I mean, you can have fancy plans and everything, and half a second later someone arrives on the mailing list with a small question which eats days of development time just for debugging. But right now the TODO list contains (in no particular order): We will see what will become real for wheezy and what is postponed, but one thing is sure: more will be done for wheezy if you help!

If you could spend all your time on Debian, what would you work on?

I would spend it on APT: a debbugs count of zero would be cool to look at! We make progress in this regard, but at the current velocity we will reach it in ten years or so. Reading more mailing lists would be interesting, as I am kind of an information junkie. Maintaining a package could be interesting, to share the annoyance of a maintainer with handcrafted dependencies, just to notice that APT doesn't get it the way I intended it to. Though, to make it feel real I would need to train a few new APT contributors first so they can point my mistakes out, but this unfortunately doesn't depend so much on time as on victims. Maybe I could even work on getting an official status. Beside that, I would love to be able to apt-get dist-upgrade the increasing mass of systems I and many others carry around in our pockets.
In regards to my phone, this is already fixed, but there is much room for improvement.

What's the biggest problem of Debian?

You need to be lucky. You need to talk at the right time to the right person. That's not really a Debian-only problem as such, but in a global project full of volunteers you can see it clearly, as there are plenty of opportunities to be unlucky. For example, it's unlikely that an interview would be made with me now if Michael had not contacted me in June 2009. In a big project like Debian, you are basically completely lost without a mentor guiding you, so things like the debian-mentors list are good projects, but I am pretty certain they could benefit from some more helping hands. The other thing which I consider a problem is that, as I read from time to time, some people don't care for translations. That's bad. Yes, a developer is able to read English, otherwise s/he couldn't write code or participate on the mailing lists. Still, I personally prefer to use a translated application if I have the chance, as it's simply easier for me to read in my mother tongue, not only because I am dyslexic, but because my mind still thinks in German and not in English. Yes, I could personally fix that by thinking in English only from now on, but it's quite a big problem to convince my family, who are not really familiar with tech stuff, to use something if they can't understand what is written on screen. It was hard enough to tell my mother how to write an SMS in a German interface. My phone with English words all over the place would be completely unusable for her, despite the fact that my phone is powered by Debian and better for the task from a technical point of view.

You are not yet an official Debian developer/maintainer, but you're already perceived in the community as one of the most knowledgeable people about APT. It's a great start! What's your advice to other people who want to start contributing to Debian in general, and to APT in particular?
It was never a goal in my life to "start contributing". My goal was, and still is, to make my life easier by letting the computer work for me. At some point APT hindered the success of this goal, so it needed to be fixed. I didn't expect to open Pandora's box. So, my advice is simple: just start. Ignore the warning signs telling you that this is not easy. They are just telling you that you are doing something useful. Only artificial problems are easy. Furthermore, contributing to APT, dpkg or any other existing package is in no way harder than opening an ITP and working on your own, and it's cooler, as you have a similar-minded team around you to talk to. :) "APT didn't accept release codenames as target release" was one of the first things I fixed. If I had asked someone whether that would be a good starting point, the answer would have been a clear "no", but I didn't search for a good starting point. As a kid I can start playing football by just walking onto the field and playing, or I can sit near the field, watching the others play, while analyzing which position would be the best for me to start with, ruling them out one by one as the technical requirements seem too high: "Oh, bicycle kick, that sounds complicated, I can't do that."

Julian Andres Klode is working on an APT replacement; there's also Cupt by Eugene V. Lyubimkin. Both projects started because their authors are not satisfied with APT; they find APT's code difficult to hack, partly due to the usage of C++. Do you share their concerns, and what's your opinion on those projects?

I don't think C++ is a concern in this regard; after all, cupt is currently being rewritten in C++0x, and APT2 started in Vala and is now C + glib, last time I checked at least.
I personally think that something is wrong if we need to advertise an application by saying in which language it is written. The major problem for APT is probably that the code is "old": APT has done its job for more than 12 years now, under different maintainers, with an always changing environment around it; so there are lines in APT which date from a time when nobody knew what a Breaks dependency is, that packages can have long descriptions which can be translated, or even that package archives can be signed with a gpg key! And yet we take all of those for granted today. APT has proven to adapt to these changes in the environment and became very popular in the process. So I don't think the point is near (if it will come at all) that APT can go into retirement, completely replaced by something else. The competitors, on the other hand, have their first 12 years still to go. And it will be interesting to see how they will evolve and what will be the state of the art in 2022. But you asked what I think about the competitors: I prefer the revolution from inside, simply because I can see effects faster, as more users will profit from it now. Cupt and co. obviously prefer the normal revolution. The goal is the same, creating the best package manager tool, but the chosen way to the goal is different. aptitude and cupt have an interactive resolver, for example: that's something I dislike personally; for others it is the ultimate killer feature. cupt reading the same preferences file as APT will produce a different pinning result, which we should consider each time someone mentions the words "drop-in replacement". APT2 isn't much more than the name (which I completely dislike) currently, from a user point of view, so I can't really comment on that.
All of them make me sad, as each line invested in boilerplate code like configuration file parsing would, in my eyes, be better spent on a bugfix or new feature instead, but I am not here to tell anyone what they should do in their free time. But frankly, I don't see them really as competitors: I use the tools I use; if others do that too, that's good; if not, that's their problem. :) The thing that really annoys me are claims like "the plan is to remove APT by 2014", as this generates a vi vs. emacs like atmosphere we don't need. If some people really think emacs is a good editor, who cares? I really hope we all can drink a beer in 2022 in Milliways, the restaurant at the end of the package universe, remembering the good old 2010. ;)

Is there someone in Debian that you admire for his contributions?

No, not one, many! Michael Vogt, who has nearly a monopoly as package manager maintainer by being upstream of APT, synaptic and software-center, to name only the biggest, and still has the time to answer even the dumbest of my questions. :) Jason Gunthorpe, for being one of the initial developers behind deity, whom I will probably never meet in person besides in old comments and commit logs. Christian Perrier, for caring so much about translations. Obey Arthur Liu, as a great admin for Debian's participation in Google's Summer of Code. Paul Wise, for doing countless reviews on debian-mentors, which are a good source of information not only for the maintainer of the package under review. I guess I need to stop here because you asked for just one. So let's end with some big words instead: I am just a little cog in the big Debian wheel.
Thank you to David Kalnischkies for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Twitter and Facebook.

3 comments Liked this article? Click here. My blog is Flattr-enabled.

16 March 2010

Russell Coker: Starting with KVM

I've just bought a new Thinkpad that has hardware virtualisation support, and I've got KVM running.

HugePages

The Linux-KVM site has some information on using hugetlbfs to allow the use of 2MB pages for KVM [1]. I put vm.nr_hugepages = 1024 in /etc/sysctl.conf to reserve 2G of RAM for KVM use. The web page notes that it may be impossible to allocate enough pages if you set it some time after boot (the kernel can allocate memory that can't be paged, and it's possible for RAM to become too fragmented to allow allocation). As a test I reduced my allocation to 296 pages and then increased it again to 1024; I was surprised to note that my system ran extremely slowly while reserving the pages. It seems that allocating such pages is efficient when done at boot time but not so efficient when done later.

hugetlbfs /hugepages hugetlbfs mode=1770,gid=121 0 0

I put the above line in /etc/fstab to mount the hugetlbfs filesystem. The mode of 1770 allows anyone in the group to create files but not unlink or rename each other's files. The gid of 121 is for the kvm group. I'm not sure how hugepages are used; they aren't used in the most obvious way. I expected that allocating 1024 huge pages would allow allocating 2G of RAM to the virtual machine; that's not the case, as -m 2048 caused kvm to fail. I also expected that the number of HugePages free according to /proc/meminfo would reliably drop by an amount that approximately matches the size of the virtual machine, which doesn't seem to be the case. I have no idea why KVM with hugepages would be significantly slower in user and system CPU time but still slightly faster in overall build time (see the performance section below). I've been unable to find any documents explaining in which situations huge pages provide advantages and disadvantages or how they work with KVM virtualisation: the virtual machine allocates memory in 4K pages, so how does that work with 2M pages provided to it by the OS?
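One way to keep an eye on the HugePages counters mentioned above is to pull them out of /proc/meminfo. A minimal sketch (the sample text here is illustrative, not measured output from my system):

```python
def parse_hugepages(meminfo):
    """Extract the HugePages_* counters from /proc/meminfo-style text."""
    counters = {}
    for line in meminfo.splitlines():
        if line.startswith("HugePages_"):
            key, value = line.split(":")
            counters[key.strip()] = int(value.split()[0])
    return counters

# On a real system: sample = open("/proc/meminfo").read()
sample = "HugePages_Total:    1024\nHugePages_Free:      921\nHugepagesize:       2048 kB\n"
print(parse_hugepages(sample))  # {'HugePages_Total': 1024, 'HugePages_Free': 921}
```

Note that Hugepagesize deliberately does not match the HugePages_ prefix; only the counters are collected.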
But hugepages do provide a slight benefit in performance, and if you have plenty of RAM (I have 5G and can afford to buy more if I need it) you should just set this up as soon as you start. I have filed Debian bug report #574073 about KVM displaying an error you normally can't see when it can't access the hugepages filesystem [6].

Permissions

open /dev/kvm: Permission denied
Could not initialize KVM, will disable KVM support

One thing that annoyed me about KVM is that the Debian/Lenny version will run QEMU instead if it can't run KVM. I discovered this when a routine rebuild of the SE Linux Policy packages in a Debian/Unstable virtual machine took an unreasonable amount of time. When I halted the virtual machine I noticed that it had displayed the above message on stderr before changing into curses mode (I'm not sure of the correct term for this), such that the message was obscured until the xterm was returned to non-curses mode at program exit. I had to add the user in question to the kvm group. I've filed Debian bug report #574063 about this [2].

Performance

Below is a table showing the time taken for building the SE Linux reference policy on Debian/Unstable. It compares running QEMU emulation (using the kvm command but without permission to access /dev/kvm), KVM with and without hugepages, Xen, and a chroot. Xen is run on an Opteron 1212 Dell server system with 2*1TB SATA disks in a RAID-1, while the KVM/QEMU tests are on an Intel T7500 CPU in a Thinkpad T61 with a 100G SATA disk [4]. All virtual machines had 512M of RAM and 2 CPU cores. The Opteron 1212 system is running Debian/Lenny and the Thinkpad is running Debian/Lenny with a 2.6.32 kernel from Testing.
Configuration Elapsed User System
QEMU on Opteron 1212 with Xen installed 126m54 39m36 8m1
QEMU on T7500 95m42 42m57 8m29
KVM on Opteron 1212 7m54 4m47 2m26
Xen on Opteron 1212 6m54 3m5 1m5
KVM on T7500 6m3 2m3 1m9
KVM Hugepages on T7500 with NCurses console 5m58 3m32 2m16
KVM Hugepages on T7500 5m50 3m31 1m54
KVM Hugepages on T7500 with 1800M of RAM 5m39 3m30 1m48
KVM Hugepages on T7500 with 1800M and file output 5m7 3m28 1m38
Chroot on T7500 3m43 3m11 29
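The "XmY" times in the table are easier to compare once converted to seconds; a quick sketch, using the KVM-with-hugepages and chroot rows from the T7500:

```python
def to_seconds(t):
    """Convert an 'XmY' time from the table (e.g. '5m50') to seconds."""
    minutes, seconds = t.split("m")
    return int(minutes) * 60 + int(seconds)

kvm_hugepages = to_seconds("5m50")  # KVM Hugepages on T7500, elapsed
chroot = to_seconds("3m43")         # Chroot on T7500, elapsed
slowdown = (kvm_hugepages - chroot) / chroot * 100
print(kvm_hugepages, chroot, round(slowdown))  # 350 223 57
```

So the virtualised build takes roughly half as long again as the chroot on the same hardware.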
I was surprised to see how inefficient it is when compared with a chroot on the same hardware. It seems that the system time is the issue. Most of the tests were done with 512M of RAM for the virtual machine; I tried 1800M, which improved performance slightly (less IO means fewer context switches to access the real block device), and redirecting the output of dpkg-buildpackage to /tmp/out and /tmp/err reduced the build time by 32 seconds. It seems that the context switches for networking or console output really hurt performance. But for the default build it seems that it will take about 50% longer in a virtual machine than in a chroot. This is bearable for the things I do (of which building the SE Linux policy is the most time consuming), but if I were to start compiling KDE then I would be compelled to use a chroot. I was also surprised to see how slow it was when compared to Xen. For the tests on the Opteron 1212 system I used a later version of KVM (qemu-kvm 0.11.0+dfsg-1~bpo50+1 from Debian/Unstable) but could only use 2.6.26 as the virtualised kernel (the Debian 2.6.32 kernels gave a kernel Oops on boot). I doubt that the lower kernel version is responsible for any significant portion of the extra minute of build time.

Storage

One way of managing storage for a virtual machine is to use files on a large filesystem for its block devices; this can work OK if you use a filesystem that is well designed for large files (such as XFS). I prefer to use LVM. One thing I have not yet discovered is how to make udev assign the kvm group to all devices that match /dev/V0/kvm-*.

Startup

KVM seems to be basically designed to run from a session, unlike Xen, which can be started with xm create and then run in the background until you feel like running xm console to gain access to the console. One way of dealing with this is to use screen.
The command screen -S kvm-foo -d -m kvm WHATEVER will start a screen session named kvm-foo that will be detached and will start by running kvm with WHATEVER as the command-line options. When screen is used for managing virtual machines you can use the command screen -ls to list the running sessions and then commands such as screen -r kvm-unstable to reattach to screen sessions. To detach from a running screen session you type ^A^D. The problem with this is that screen will exit when the process ends, and that loses the shutdown messages from the virtual machine. To solve this you can put exec bash or sleep 200 at the end of the script that runs kvm.

start-stop-daemon -S -c USERNAME --exec /usr/bin/screen -- -S kvm-unstable -d -m /usr/local/sbin/kvm-unstable

On a Debian system the above command in a system boot script (maybe /etc/rc.local) could be used to start a KVM virtual machine on boot. In this example USERNAME would be replaced by the name of the account used to run kvm, and /usr/local/sbin/kvm-unstable is a shell script to run kvm with the correct parameters. Then as user USERNAME you can attach to the session later with the command screen -x kvm-unstable. Thanks to Jason White for the tip on using screen. I've filed Debian bug report #574069 [3] requesting that kvm change its argv[0] so that top(1) and similar programs can be used to distinguish different virtual machines. Currently when you have a few entries named kvm in top's output it is annoying to match the CPU-hogging process to the virtual machine it's running. It is possible to use KVM with X or VNC for a graphical display by the virtual machine. I don't like these options; I believe that Xephyr provides better isolation, and I've previously documented how to use Xephyr [5].
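On the open question above about making udev assign the kvm group to the /dev/V0/kvm-* logical volumes: a rule along the following lines might work. This is an untested sketch (the file name is hypothetical), and it assumes that lvm2's udev integration exports the DM_VG_NAME and DM_LV_NAME variables for device-mapper nodes:

```
# /etc/udev/rules.d/92-kvm-lvm.rules (hypothetical file name, untested)
# Give the kvm group access to every LV named kvm-* in volume group V0.
ENV{DM_VG_NAME}=="V0", ENV{DM_LV_NAME}=="kvm-*", GROUP="kvm", MODE="0660"
```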
kvm -kernel /boot/vmlinuz-2.6.32-2-amd64 -initrd /boot/initrd.img-2.6.32-2-amd64 -hda /dev/V0/unstable -hdb /dev/V0/unstable-swap -m 512 -mem-path /hugepages -append "selinux=1 audit=1 root=/dev/hda ro rootfstype=ext4" -smp 2 -curses -redir tcp:2022::22

The above is the current kvm command-line that I'm using for my Debian/Unstable test environment.

Networking

I'm using KVM options such as -redir tcp:2022::22 to redirect unprivileged ports (in this case 2022) to the ssh port. This works for a basic test virtual machine but is not suitable for production use. I want to run virtual machines with minimal access to the environment, and this means not starting them as root. One thing I haven't yet investigated is the vde2 networking system, which allows a private virtual network over multiple physical hosts and which should allow kvm to be run without root privileges. It seems that all the other networking options for kvm which have appealing feature sets require that the kvm process be started with root privileges.

Is KVM worth using?

It seems that KVM is significantly slower than a chroot, so for a basic build environment a secure chroot environment would probably be a better option. I had hoped that KVM would be more reliable than Xen, which would offset the performance loss; however as KVM and Debian kernel 2.6.32 don't work together on my Opteron system it seems that I will have some reliability issues with KVM comparable to the Xen issues. There are currently no Xen kernels in Debian/Testing, so KVM is usable now with the latest bleeding edge stuff (on my Thinkpad at least) while Xen isn't. QEMU is really slow, so Xen is the only option for 32-bit hardware. Therefore all my 32-bit Xen servers need to keep running Xen. I don't plan to switch my 64-bit production servers to KVM any time soon. When Debian/Squeeze is released I will consider whether to use KVM or Xen after upgrading my 64-bit Debian server.
I probably won't upgrade my 64-bit RHEL-5 server any time soon, maybe when RHEL-7 is released. My 64-bit Debian test and development server will probably end up running KVM very soon; I need to upgrade the kernel for ext4 support, and that makes KVM more desirable. So it seems that for me KVM is only going to be seriously used on my laptop for a while. Generally I am disappointed with KVM. I had hoped that it would give almost the performance of Xen (admittedly it was only 14.5% slower). I had also hoped that it would be really reliable and work with the latest kernels (unlike Xen), but it is giving me problems with 2.6.32 on the Opteron. Also it has some new issues, such as deciding to quietly do something I don't want when it's unable to do what I want it to do.

27 October 2009

Asheesh Laroia: Will the last to leave kindly turn out the light?

Today is Monday, October 26. Someone at Yahoo will go home tonight and, on the way out, turn off GeoCities.

Update: Tue, 27 Oct 2009 15:17:50 -0400: Geocities is finally offline. Pages say:
Sorry, the GeoCities web site you were trying to reach is no longer available.
To commemorate it, I intend to do what I can to keep the Geocities pages on the web. I am part of the Archive Team, an independent group of amateur archivists racing to rescue the web from destruction at its own hand. In late 1994, Geocities began offering free web hosting as Beverly Hills Internet. A decade ago, Yahoo bought Geocities. In December 1998, one-third of all web users visited the website. As recently as March 2009, 11.5 million unique visitors arrived there. Today, according to Alexa Site Info, Geocities ranks somewhere between the New York Times and the Washington Post in pageviews. And today, the site shouts:
Tomorrow, Geocities' website will be closed for good, if Yahoo sticks to that promise. The amount the Archive Team has downloaded is around one terabyte. That's all we seem to be able to reach; many pages were deleted months ago when the archiving effort began. The archiving is continuing as I write this. Think of it: fifteen years of history, memories for millions of people, the birth of a generation on the web. More personal embarrassment than all the POG games put together. It fits on an $80 piece of storage equipment -- at least, that's what we managed to find before Yahoo erases it all. Initially, when I met Jason Scott of the Archive Team, he told me he wanted to download Geocities and share it by mailing hard drives around. I told him I wanted to hoist it back onto the Web. He came around, and we and the rest of the Archive Team have put Geocities back online. It is not the greatest website in the world, no. This is just a tribute. P.S. Major thanks to John Joseph Bachir for the paperwork assist.

14 October 2009

Robert McQueen: Boston GNOME Summit 2009

I spent this weekend in Boston for the annual GNOME summit. I really enjoyed it this year; although there were fewer attendees than previously, it felt very focussed and productive. There's some cool stuff going on, and it's always great to catch up with all of the usual free software suspects in Boston. Some highlights from the weekend: I was really impressed by Jason Clinton's and others' summaries of the sessions, which I think are really valuable for the people who couldn't make it to the summit. He asked me to take some notes about the first Telepathy session on Saturday evening while he was taking notes about the Outreach session. Rather than lumber him with my deranged scratchings from Tomboy, I'll blog them separately.

26 May 2009

Peter Van Eynde: Common Lisp has no libraries: ha!

In the last few weeks I needed to write a short utility at $WORK. I decided to use my trusted Common Lisp. It turned out that my old utility would still be OK, but 'upstream' had changed from CSV files to JSON files.

A short Google query, a download of the two libraries that exist to parse these files, and within a few minutes I could read and parse the new file format.

Don't tell me CL doesn't have libraries...

ObDebian: yes I still need to update cl-irc and package said jason library... it's somewhere in my long todo list.

10 March 2009

Steve McIntyre: Post-release

It took us a couple of weeks to organise, but we had a small Lenny release party in Cambridge last weekend. We had the usual crowd of Cambridge folks, plus Noodles and codehelp. Jason Clifford from UKFSN even threw some cash our way to help cover the costs - Thanks Jason! :-) We started at the Regal pub in town, then headed back to my place and drank until late. Lenny T! Quite a number of the revellers also bought some of our shiny new Lenny release T-shirts! If you'd like one, look at the details here and mail me! Update: Fixed the URL to the T-shirt photo. Doh!

16 January 2009

Petr Rockai: darcs 2.2.0

I am happy to announce general availability of darcs 2.2.0.

Getting the release

For this release, we have decided to provide two flavours, depending on the build system used:
  1. The source tarball, which can be built using the traditional autoconf-based system. This is the fully supported version. After downloading and unpacking, you can issue:
    $ ./configure
    $ make
    and possibly
    # make install
    More detailed instructions are inside the tarball (file README). Please note that we have had at least one report of a build failure, with a QuickCheck-related message. The best current workaround, if this happens to you, is to use the cabal version of the package instead; see below.
  2. Cabalised source. You can either download a tarball from and build manually (see the build instructions in README inside the tarball), or, alternatively, you can use cabal-install to obtain a copy:
    $ cabal update
    $ cabal install darcs
    This will give you a darcs binary in ~/.cabal/bin; you should probably add that to your PATH.
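As a minimal sketch of that last step (assuming cabal-install's default ~/.cabal/bin location mentioned above), one line in your shell profile is enough:

```shell
# Add cabal-install's default binary directory to the search path
# (assumes the ~/.cabal/bin location used above).
export PATH="$HOME/.cabal/bin:$PATH"

# Confirm the directory is now on the search path:
echo "$PATH" | grep -c "/.cabal/bin"
```

Putting the export in ~/.profile or ~/.bashrc makes it persistent across sessions.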
In addition to source tarballs, we expect binary packages for various UNIX platforms will be available in due time. For Windows users, Salvatore Insalaco has prepared a binary build, available from You just need to unpack the directory somewhere and add it to your path (if you like). Moreover, an experimental TortoiseDarcs release for darcs 2 has been made available by Kari Hoijarvi and is looking for a home. It can be found at (unfortunately, at the time of this writing, the site seemed unreachable. If you can help with hosting, please mail Kari.)

What's New

The summary of changes since version 2.1.2 (released last November) follows. And a summary of issues that have been fixed in darcs since version 2.1.2 (compiled by Thorkil Naur):

525 amend-record => darcs patches show duplicate additions
971 darcs check fails (case sensitivity on filenames)
1006 darcs check and repair do not look for adds
1043 pull => mergeAfterConflicting failed in geteff (2.0.2+)
1101 darcs send cc recipient not included in success message
1117 Whatsnew should warn on non-recorded files
1144 Add darcs send in-reply-to or header In-Reply-To: x@y.z
1165 get should print last gotten tag
1196 Asking for changes in /. of directory that doesn t exist gives changes in entire repo
1198 Reproducible mergeConflictingNons failed in geteff with ix
1199 Backup files darcs added after external merge
1223 sporadic test failure (2.1.1rc2+472)
1238 wish: darcs help setpref should list all prefs
1247 make TAGS is broken
1249 2.1.2 (+ 342 patches) local drive detection on Windows error
1272 amend-record not the same as unrecord + record
1273 renameFile: does not exist (No such file or directory)
I would like to thank all contributors for making this release possible.

Future

The next release will be 2.2.1, fixing low-risk issues found in 2.2.0, or those that were excluded from 2.2.0 due to the freeze. This release will appear in two or three weeks' time, depending on circumstances. The next major release will be 2.3, due in June or July this year. The focus of this release will be new features and further work on performance. Moreover, we expect that it will use Cabal as its default build system and will take the first steps towards a sustainable libdarcs API.

13 January 2009

Petr Rockai: darcs 2.2.0rc1

(This post is somewhat late; the final release is in two days. However, we still need testing and reports of possible issues.) I am pleased to announce that darcs 2.2 is coming along nicely. I would like to ask everyone to take darcs 2.2, release candidate 1, for a ride. This release again comes in two flavours:
  1. The source tarball, which can be built using the traditional autoconf-based build system. This is the fully supported version. After downloading and unpacking, you can issue:
    $ ./configure
    $ make
    $ ./darcs --version
    # make install
    More detailed instructions are inside the tarball (file README).
  2. Cabalised source. You can either download a tarball from and build manually (see the build instructions in README inside the tarball), or, alternatively, you can use cabal-install to obtain a copy (the release candidate is now available on hackage):
    $ cabal update  
    $ cabal install darcs
    This should give you a darcs binary in ~/.cabal/bin; you should probably add that to your PATH.
This is a preliminary changelog since version 2.1.2 (released last November). A preliminary list of issues that have been fixed in darcs since version 2.1.2:

1223 sporadic test failure (2.1.1rc2+472)
525 amend-record => darcs patches show duplicate additions
1247 make TAGS is broken
1273 renameFile: does not exist (No such file or directory)
1165 get should print last gotten tag
1249 2.1.2 (+ 342 patches) local drive detection on Windows error
1238 wish: darcs help setpref should list all prefs
1199 Backup files darcs added after external merge
1043 pull => mergeAfterConflicting failed in geteff (2.0.2+)
1117 Whatsnew should warn on non-recorded files
1101 darcs send cc recipient not included in success message

Thanks to Thorkil Naur for compiling this list. I would like to thank all contributors (developers, testers, bystanders) for helping darcs get along further. It's been a hard time for darcs recently, as many of you probably know. Nevertheless, we are regaining confidence in future darcs development. There is no way we are going to let darcs fall by the roadside. I am sure that this one time, I speak for everyone in our developer and user community.

9 December 2008

David Moreno: Higher-Order Perl available for free

My friend Marco first told me on IM, then I read it on PerlBuzz. The nice Higher-Order Perl book by
Mark Jason Dominus is now available for free (as in free beer) at its website. This book caught my attention a long time ago at a Barnes & Noble, but since I had just too many books in the queue, I decided not to buy it. I then read that it’s actually a good book on “advanced” techniques in Perl, so my interest grew, but for random reasons I just didn’t get it. Now I have no excuses not to, and neither do you :)

30 October 2008

Clint Adams: Eight days of Mraz makes a harmed man fumble

  • Day One
The South American maid confesses that she doesn't clean much, because the Spaniards don't notice.
  • Day Two
People keep playing the same Jason Mraz song over and over again. It is awful. Does Jason Mraz have more than one song? If so, does he have any good songs? This makes Coldplay seem like good music.
  • Day Three
While I was dumpster-diving in Fulda, a large man wearing an archer's cap and peculiar shoes appeared, carrying one of the largest baskets I have ever seen. He showed me that it was full of bread, and attempted to sell me some. When I showed no interest in his wares, he recited the following gibberish:
 Ich glaab ich bin aus Staa
 unn hab' mehr Bauch wie Baa-
 unn doch bin ich en arme Tropp-
 denn ach, ich hab'e Loch im Kopp!
 (Roughly: "I think I'm made of stone / and have more belly than legs / and yet I'm a poor wretch / for alas, I have a hole in my head!")
  • Day Four
More Jason Mraz. Ugh. Jos complains about people who will walk together without talking constantly. He repeats himself about fifty times without accidentally saying anything interesting. I am afraid that he will injure his voice and then his brain will have to start working. Luckily this does not occur.
  • Day Five
In Neu-Isenburg for a standoff with the Sky Chefs. Every time I enter Neu-Isenburg I get paranoid and start looking for UutiSaruman. It's creepy.
  • Day Six
A Hessian woman tells me I lead a sad life. I can tell that she's Hessian because she looks like a slender Austrian with teeth. I don't tell her this.
  • Day Seven
Razula asks if I remember what le k means in Slavic. I don't know why he thinks I knew in the first place. His friend starts spouting off a lecture on why Hungarians are superior to Germans because Hungarians lie, cheat, and steal, and Germans obey laws. I wonder if le k can help this situation. Probably not.
  • Day Eight
Now the South American maid is complaining that Spanish men are gay and don't realize it. Then she goes on at great length about the Great Flying Circus of North Korea. I think about asking her where it's from. I decide against it.

15 September 2008

Adam Rosi-Kessel: The man hears what he wants to hear

(and disregards the rest) Jonah Lehrer reports the result of a depressing but unsurprising experiment: The Facts Don’t Matter.
Political scientists Brendan Nyhan and Jason Reifler provided two groups of volunteers with the Bush administration’s prewar claims that Iraq had weapons of mass destruction. One group was given a refutation — the comprehensive 2004 Duelfer report that concluded that Iraq did not have weapons of mass destruction before the United States invaded in 2003. Thirty-four percent of conservatives told only about the Bush administration’s claims thought Iraq had hidden or destroyed its weapons before the U.S. invasion, but 64 percent of conservatives who heard both claim and refutation thought that Iraq really did have the weapons. The refutation, in other words, made the misinformation worse. A similar “backfire effect” also influenced conservatives told about Bush administration assertions that tax cuts increase federal revenue. One group was offered a refutation by prominent economists that included current and former Bush administration officials. About 35 percent of conservatives told about the Bush claim believed it; 67 percent of those provided with both assertion and refutation believed that tax cuts increase revenue. In a paper approaching publication, Nyhan, a PhD student at Duke University, and Reifler, at Georgia State University, suggest that Republicans might be especially prone to the backfire effect because conservatives may have more rigid views than liberals: Upon hearing a refutation, conservatives might “argue back” against the refutation in their minds, thereby strengthening their belief in the misinformation. Nyhan and Reifler did not see the same “backfire effect” when liberals were given misinformation and a refutation about the Bush administration’s stance on stem cell research.
It’s particularly interesting that the backfire effect is more pronounced with Republicans; this certainly resonates with my admittedly biased view. Better information doesn’t seem to fix the problem, either:
During the first term of Bill Clinton’s presidency, the budget deficit declined by more than 90 percent. However, when Republican voters were asked in 1996 what happened to the deficit under Clinton, more than 55 percent said that it had increased. What’s interesting about this data is that so-called “high-information” voters - these are the Republicans who read the newspaper, watch cable news and can identify their representatives in Congress - weren’t better informed than “low-information” voters.
Anyone have a better solution? Or should we just throw in the towel on democracy?

30 April 2008

Adrian von Bidder: Filesystems in Linux

With Hans Reiser convicted of murder, some seem to feel that reiserfs is more or less dead. Jason Perlow writes a very strange article on ZDNet, to which I'm replying mainly because he implies that Debian has so far failed to react. First, default installations of Debian create ext3, not reiserfs, filesystems (please correct me if I'm wrong; I've just recently installed a fresh etch, but I didn't specifically look at the filesystem). And even if it were reiserfs (v3), I don't see why a reaction would be called for now. The stability of reiserfs has come up every so often, since before the whole murderer story began, and the fact that interaction between the reiserfs developers (including, of course, Hans Reiser) and the kernel team was always difficult has also been known for a long time. That is the kind of reason where I think it would be appropriate for Debian to take steps (i.e. switching to a different filesystem), not a single event, where it is not even clear yet how the reiserfs (v3 and v4) efforts will move on. On to the technical stuff: Perlow tries, but doesn't really arrive at understanding the issues he's writing about. Reiser 4 is discounted without a single remark on its technical merits (I can't comment either, as I have not looked at it so far). Why he discounts ext4 is not clear to me (because it is not ripe for production use yet — but that's even more true of ZFS and this Linux-NTFS thingy he rambles about further down...) He discounts JFS2 because it hasn't had a new release for several years (is that bad in a filesystem?) but then touts ZFS as a great idea with minor licensing problems, without speaking of patents, which is where the real problems lie (not to mention the fact that the Linux ZFS port is probably much less tested than ext4 or JFS).
And in a final jump into fantasy-land he mentions that NTFS might just be ideal for Linux, and Microsoft is said to have started cooperating nicely with the Free Software world, so all licensing and patent issues are certainly going away Real Soon Now™. At least for Novell, these issues shouldn't be a problem, I guess. Not mentioned by Jason are btrfs (which has a quite tightly coupled network filesystem brother, crfs, and is in a very early state of development), and hammer, which comes from the BSD world and currently lacks a Linux port. Both efforts are probably more likely to replace ext3 or reiser on Linux than either ZFS or NTFS: no patent issues, no license issues, and the development is actually done by a community, not a single company. Update: Julien Blanche has a much more succinct response to Jason Perlow. More interesting to read than mine, too.

13 April 2008

Rob Taylor: PackageKit Stop Energy

This should be a reply on Hughsie's blog, but I don’t want to create a LiveJournal account. Richard, please move to! In response to Jason’s comments: Hey. I’m a Debian/Ubuntu kind of guy and I think PackageKit is a hell of a lot better than Ubuntu making Ubuntu-only solutions like gnome-app-install. It’s DEFINITELY not NIH; there was nothing that gave us this kind of capability at all before PackageKit started. Everyone knew what needed to be written, and Richard was the guy who jumped in and got it started. Also, Richard wasn’t working at Red Hat when he started PackageKit, and he has worked with every distro he can on this. Jason, stop the Stop Energy. Update:
I thought I might jump in and say a bit more on this problem: it really comes down to the question of whether a backend should ask questions of the user. Richard thinks that they shouldn’t, mainly because every time he’s seen backends ask questions, they aren’t ones he knows the answer to and so, by inference, ones most people don’t know the answer to. Such questions shouldn’t be asked in a user-friendly system.
However, ‘these questions’ are actually multiple classes of problems, and simplistically dismissing them is just asking for failure to solve the problems correctly. Before I go any further, let me note that debconf has multiple verbosity levels at which it can ask questions; most Debian-based distros only ask the highest-priority questions. It should be noted that a PackageKit backend should allow a central administrator to set default values for any of these settings, however. Let’s break down some of the kinds of questions asked through debconf: Some packages also use stdin/stdout to really make sure you know what the hell you’re doing, like kernel packages, but I think that’s somewhat outside the scope of this problem. Those kinds of things should just fail unless you’re doing leet-super-admin work ;) So the question really comes down to asking the user sensible questions that they a) know how to answer and b) care about. The point I made to Richard on IRC is that this is actually a difficult and multi-faceted problem, and just hiding these problems won’t help in solving them well. So my take is that PackageKit should allow asking of questions, probably via a D-Bus interface, have a debconf frontend that channels questions over it, and then let the Ubuntu and Debian guys get on the case of forming projects and policy to help fix the problem at the source. Update 2: Richard pointed me to this FAQ entry, which explains why even handling debconf questions is hard.
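As a sketch of the "central administrator sets default values" idea above, debconf answers can be preseeded so that installs never have to prompt. The package and question names below are only illustrative examples; real question names vary per package:

```shell
# Hypothetical example: pre-answer a debconf question up front.
# Each preseed line has the form: <package> <question> <type> <value>
cat > preseed.txt <<'EOF'
tzdata tzdata/Areas select Europe
EOF

# Load the canned answers into the debconf database (requires debconf,
# so shown commented out here):
# debconf-set-selections preseed.txt

# With answers preseeded, installs can run without asking anything:
# DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata

# Show what was preseeded:
cat preseed.txt
```

This is exactly the kind of policy hook a PackageKit debconf frontend could build on: questions a site admin has already answered need never reach the user.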