Search Results: "ewt"

2 February 2014

Gunnar Wolf: CuBox-i4Pro

CuBox-i4Pro
Somewhere back in August or September, I pre-ordered a CuBox-i: a nicely finished, completely hackable, and reasonably powerful ARM system, nicely packaged and meant to be hacked on. A sweet deal! There are four models (you can see the different models' specs here); I went for the top one, and bought a CuBox-i4Pro. That means I have a nice little US$130 box with four ARMv7 cores, 2GB RAM, WiFi, and... well, all of its basic goodies and features. For some more details, look at the CuBox-i block diagram. I got it delivered by early January, and (with no real ARM experience on my side) I finally got to a point where I can, I believe, contribute something to its adoption/usage: how to get a basic Debian system installed and running on it.
The ARM world is quite different from the x86 one: compatibility is much harder, the computing platform does not describe itself properly, and a kernel must first know the specifics of a subarchitecture before being able to boot on it. Somewhere in the CuBox forums (or was it the IRC channel?) I learnt that the upstream Linux kernel does not yet boot on the i.MX6 chip (although support is rumored to be merged for the 3.14 release), so I am using both a kernel and a U-Boot bootloader not built for (or by) Debian people. Besides that, the result I will describe is a kosher Debian install. Yes, I know that my orthodox friends and family will say that 99% kosher is taref... But remember I'm never ever that dogmatic. (yeah, right!)
Note that there is a prebuilt image you can run if you are so inclined: in the CuBox-i forums and wiki, you will find links to a pre-installed Debian image you can use... But I cannot advise using it. First, it is IMO quite bloated (you need a 4GB card for a very basic Debian install? Seriously?). Second, it has a whole desktop environment (LXDE, if I recall correctly) and a whole set of packages I will probably not use in this little box.
Third, there is a preinstalled user, and that's a no-no (user: debian, password: debian). But, most importantly, fourth: it is a nightly build of the Testing (Jessie) suite... built back in December. So no, as a Debian Developer, it's not something we should recommend our users run! So, in the end, and after quite a bit of frustration due to my lack of knowledge, here is the list of steps I followed:
Using the CuBox
On the i2 and i4 models, you can use it either with a USB keyboard and an HDMI monitor, or via a serial console (the smaller models do not have a serial console). I don't have an HDMI monitor handy (only a projector), so I prefer to use the serial terminal. Important details to avoid frustration: the USB keyboard has to be connected to the lower USB port, or it will be ignored during the boot process. And make sure your serial terminal is configured not to use hardware flow control. Minicom is configured by default to use hardware flow control, so it was not sending any characters to the CuBox. ^A-O brings up the Minicom configuration; select "Serial port setup" and disable it.
Set up the SD card
I created a 2GB partition, but much less can suffice; I'd leave at least 1GB for the base install, although it can be shrunk once the system is set up (more on this later). Partition and format it using your usual tools (fdisk+mke2fs, gparted, or whatever suits your style).
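If you want to rehearse the formatting step without a card at hand, the same tools happily work on a plain image file. This is just a practice sketch; the file name and size are arbitrary, and on the real card you would point mke2fs at /dev/mmcblk0p2 instead:

```shell
# Create a small image file and put an ext3 filesystem on it
# (-F forces mke2fs to accept a regular file, no root needed).
truncate -s 64M part.img
mke2fs -F -q -t ext3 part.img
# ext3 is ext2 plus a journal, which shows up in the feature list:
tune2fs -l part.img | grep -i features
```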
Install the bootloader
I followed the instructions on this CuBox-i forums thread to get the SPL and U-Boot bootloader running. In short: from this Google Drive folder, download the SPL-U-Boot.img.xz file, uncompress it (xz --decompress SPL-U-Boot.img.xz), and write it to the SD card just after the partition map. As root:
# dd if=SPL-U-Boot.img of=/dev/mmcblk0 bs=1024 seek=1
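If you want to convince yourself what the bs=1024 seek=1 pair does before touching the real card, a scratch file works fine (sd.img and the marker strings here are just for this demo):

```shell
# Simulate the card with a scratch file; the first bytes stand in
# for the partition table.
truncate -s 4M sd.img
printf 'TABL' | dd of=sd.img conv=notrunc status=none
# bs=1024 seek=1 starts writing at byte offset 1024, safely past
# the 512-byte MBR/partition table at the start of the card.
printf 'SPL!' | dd of=sd.img bs=1024 seek=1 conv=notrunc status=none
dd if=sd.img bs=1 count=4 status=none; echo          # partition table intact
dd if=sd.img bs=1 skip=1024 count=4 status=none; echo  # bootloader at 1 KiB
```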
Actually, to be honest: as I wanted something basic to debug from, I downloaded (from the same Google Drive folder) the busybox.img.xz file. That's a bit easier to install: xz --decompress busybox.img.xz, and just dump it onto the SD card from the beginning (it already includes a partition table):
# dd if=busybox.img of=/dev/mmcblk0
This card is now bootable and minimal, and allows debugging some bits from the CuBox-i itself (as we will see shortly).
After this step, I created a second partition, as I said earlier. So my mmcblk0p1 partition holds Busybox, and the second will hold Debian. We are still working from the x86 system, so we mount the SD card's second partition at /media/mmcblk0p2.
Installing the base system
Without debian-installer to do the heavy lifting, I went for debootstrap. As I ran it from my PC, debootstrap's role in this first stage is only to download and do a very initial pre-unpacking of the files: bootstrapping a foreign architecture means, right, using the --foreign switch:
debootstrap --foreign --arch=armhf wheezy /media/mmcblk0p2 http://http.debian.net/debian
You can add some packages you often use by specifying --include=foo,bar,baz
So, take note: this board is capable of running the armhf architecture (HF for Hardware Float). It can also run armel, but I understand it is way slower.
First boot (with busybox)
So, once debootstrap finishes, you are good to go to the real hardware! Unmount the SD card, put it in the little guy, plug your favorite console in (I'm using the serial port), and plug the power in! You should immediately see something like:
  U-Boot SPL 2013.10-rc4-gd05c5c7-dirty (Jan 12 2014 - 02:18:28)
  Boot Device: SD1
  reading u-boot.img
  Load image from RAW...
  U-Boot 2013.10-rc4-gd05c5c7-dirty (Jan 12 2014 - 02:18:28)
  CPU: Freescale i.MX6Q rev1.2 at 792 MHz
  Reset cause: POR
  Board: MX6-CuBox-i
  DRAM: 2 GiB
  MMC: FSL_SDHC: 0
  In: serial
  Out: vga
  Err: vga
  Net: phydev = 0x0
  Phy not found
  PHY reset timed out
  FEC
  (Re)start USB...
  USB0: USB EHCI 1.00
  scanning bus 0 for devices... 1 USB Device(s) found
  scanning usb for storage devices... 0 Storage Device(s) found
  scanning usb for ethernet devices... 0 Ethernet Device(s) found
  Hit any key to stop autoboot: 3

Let it boot (that means, don't stop autoboot), and you will soon see a familiar #, showing you are root in the busybox environment. Great! Now, mount the Debian partition:
# mount /dev/mmcblk0p2 /mnt
Finishing debootstrap's task
With everything in place, it's time for debootstrap to work. Chroot into the Debian partition:
# chroot /mnt
And ask Debootstrap to finish what it started:
# debootstrap --second-stage
Be patient, as this step takes quite a while to finish.
Some extra touches...
After this is done, your Debian system is almost ready to be booted into. Why almost? Because it still does not have any users, does not know its own name, and does not know I want to use it via a serial terminal. Three very simple tasks to fix. The first two:
  # passwd
  Enter new UNIX password:
  Retype new UNIX password:
  passwd: password updated successfully
  # echo cubox-i.gwolf.org > /etc/hostname

For the third one, add a line to /etc/inittab specifying the details of the serial console. You can just do this:
# echo 'T0:23:respawn:/sbin/getty -L ttymxc0 115200 vt100' >> /etc/inittab
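For reference, an inittab entry has four colon-separated fields: an id, the runlevels it applies to, an action, and the process to run. A quick way to see the split (pure illustration, nothing CuBox-specific):

```shell
# The line added above, taken apart field by field.
entry='T0:23:respawn:/sbin/getty -L ttymxc0 115200 vt100'
# Split on colons: id, runlevels, action; the rest is the process.
IFS=: read -r id runlevels action process <<EOF
$entry
EOF
echo "id=$id runlevels=$runlevels action=$action"
echo "process=$process"
```

So getty is respawned on ttymxc0 (the i.MX6 serial port) at 115200 baud whenever the system is in runlevel 2 or 3.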
Boot into Debian!
So, ready to boot Debian? OK: first exit the chroot shell to go back to the Busybox shell, then unmount the Debian partition and remount the root partition read-only:
  # exit
  # umount /mnt
  # mount / -o remount,ro

Disconnect and reconnect the power, and this time, do interrupt the boot process when you see the "Hit any key to stop autoboot" prompt. To see U-Boot's configuration, you can type printenv. We will only modify the parameters given to the kernel:
  CuBox-i U-Boot > setenv root /dev/mmcblk0p2 rootfstype=ext3 ro rootwait
  CuBox-i U-Boot > boot

So, the kernel will load, and a minimal Debian system will be initialized. In my case, I get the following output:
  ** File not found /boot/busyEnv.txt **
  4703740 bytes read in 390 ms (11.5 MiB/s)
  ## Booting kernel from Legacy Image at 10000000 ...
  Image Name: Linux-3.0.35-8
  Image Type: ARM Linux Kernel Image (uncompressed)
  Data Size: 4703676 Bytes = 4.5 MiB
  Load Address: 10008000
  Entry Point: 10008000
  Verifying Checksum ... OK
  Loading Kernel Image ... OK
  Starting kernel ...
  Unable to get enet.0 clock
  pwm-backlight pwm-backlight.0: unable to request PWM for backlight
  pwm-backlight pwm-backlight.1: unable to request PWM for backlight
  _regulator_get: get() with no identifier
  mxc_sdc_fb mxc_sdc_fb.2: NO mxc display driver found!
  INIT: version 2.88 booting
  [info] Using makefile-style concurrent boot in runlevel S.
  [....] Starting the hotplug events dispatcher: udevd. ok
  [....] Synthesizing the initial hotplug events...done.
  [....] Waiting for /dev to be fully populated...done.
  [....] Activating swap...done.
  [....] Cleaning up temporary files... /tmp. ok
  [....] Activating lvm and md swap...done.
  [....] Checking file systems...fsck from util-linux 2.20.1
  done.
  [....] Mounting local filesystems...done.
  [....] Activating swapfile swap...done.
  [....] Cleaning up temporary files.... ok
  [....] Setting kernel variables ...done.
  [....] Configuring network interfaces...done.
  [....] Cleaning up temporary files.... ok
  [....] Setting up X socket directories... /tmp/.X11-unix /tmp/.ICE-unix. ok
  INIT: Entering runlevel: 2
  [info] Using makefile-style concurrent boot in runlevel 2.
  [....] Starting enhanced syslogd: rsyslogd. ok
  [....] Starting periodic command scheduler: cron. ok
  Debian GNU/Linux 7 cubox-i.gwolf.org ttymxc0
  cubox-i login:

And that's it, the system is live and ready for my commands!
So, how big is this minimal Debian installed system? I cheated a bit here, as I had already added emacs and screen to the system, so yours will be a bit smaller. But anyway, let's clear the cache of downloaded packages and look at the disk usage information:
  root@cubox-i:~# apt-get clean
  root@cubox-i:~# df -h
  Filesystem      Size  Used Avail Use% Mounted on
  rootfs         1008M  347M  611M  37% /
  /dev/root      1008M  347M  611M  37% /
  devtmpfs        881M     0  881M   0% /dev
  tmpfs           177M  132K  177M   1% /run
  tmpfs           5.0M     0  5.0M   0% /run/lock
  tmpfs           353M     0  353M   0% /run/shm

So, instead of a 4GB install, we have a 350MB one. A great improvement! Now, let's get it to do something useful, in a most Debianic way!

18 January 2014

Miriam Ruiz: LÖVE is a changing project


I have just uploaded a newer version of LÖVE to Debian, 0.9.0. As usual, this version breaks compatibility with the API of previous versions. Literally: "LÖVE 0.9.0 breaks compatibility with nearly every 0.8.0 game." It's a hard-to-fix situation from a package maintainer's point of view, at least until they agree on a stable API, hopefully in a 1.0 version sometime. LÖVE has been in Debian's official repositories since 2008. Among the major changes, it now uses SDL2 and LuaJIT. Depending on where the bottlenecks were in some of the demos and games, performance might have improved a lot. The improvements are many, and the structure of the API is more consistent and clean. Congratulations to everyone who has made it possible. On the bitter side, most of the previous games and demos will most likely not work any more without some changes to the code. As we don't have any reverse dependencies in the archive (yet), this won't cause any severe problems. But, of course, Debian is not an isolated island, and people might need to execute some old code without being able to migrate it. I have prepared some packages for older versions of LÖVE that might make the situation more bearable for some, until code is migrated to the new API. These versions can be co-installed with the latest version in the archive (0.9.0). I'm not sure whether it will be needed, but if it were, I might consider putting the previous 0.8 version in the official repositories. I would prefer not to, though, as that would make me the de facto maintainer of the upstream code, since the LÖVE community is moving forward with newer versions.

14 November 2013

Jan Wagner: Upgrade RAM of your QNAP TS-459 Pro+ to 2GB

After running a QNAP TS-459 Pro+ for the last 3 or 4 years at home, my monitoring started alerting me about memory warnings because I had upgraded my SqueezeBox Server to the latest nightly version. So I looked into how to upgrade the shipped Adata SU3S1333B1G9-B module.

It seemed that the KVR1333D3S8S9/2G should work, so I ordered one. After plugging it in, it turned out that the module wasn't working, even though it had worked for others. After digging around, the essential point seems to be the chip layout: the working modules have an 8-chip design, while mine has only 8 memory chips mounted on a 16-chip layout. So I thought it would be worth ordering the exact module from Amazon that had worked for others. As expected, it's not the same module, but it carries the same Kingston part number.

As you can see, the 16-chip design module has '9905428-189.A00lf' printed on it, and the 8-chip design '9931712-009.A00G'. It's a bit like the wireless LAN USB dongle business, where they sell totally different hardware under the same part number but different revisions.
I also ordered a Samsung M471B5773DH0-CH9, because that's also an 8-chip design.

To make it short: both 8-chip design modules worked like a charm, the M471B5773DH0-CH9 and the KVR1333D3S8S9/2G (9931712-009.A00G). Please avoid the 'A00lf' one from Kingston! ;)

15 October 2013

Francois Marier: The Perils of RAID and Full Disk Encryption on Ubuntu 12.04

I've been using disk encryption (via LUKS and cryptsetup) on Debian and Ubuntu for quite some time and it has worked well for me. However, while setting up full disk encryption for a new computer on a RAID1 partition, I discovered that there are a few major problems with RAID on Ubuntu.

My Setup: RAID and LUKS
Since I was setting up a new machine on Ubuntu 12.04 LTS (Precise Pangolin), I used the alternate CD (I burned ubuntu-12.04.3-alternate-amd64+mac.iso to a blank DVD) to get access to the full disk encryption options. First, I created a RAID1 array to mirror the data on the two hard disks. Then, I used the partition manager built into the installer to set up an unencrypted boot partition (/dev/md0 mounted as /boot) and an encrypted root partition (/dev/md1 mounted as /) on the RAID1 array. While I had done full disk encryption and mirrored drives before, I had never done them at the same time on Ubuntu or Debian.

The problem: cannot boot an encrypted degraded RAID
After setting up the RAID, I decided to test it by booting from each drive with the other one unplugged. The first step was to ensure that the system is configured (via dpkg-reconfigure mdadm) to boot in "degraded mode". When I rebooted with a single disk, though, I received an "evms_activate is not available" error message instead of the usual cryptsetup password prompt. The exact problem I ran into is best described in this comment (see this bug for context). It turns out that booting degraded RAID arrays has been plagued with several problems.

My solution: an extra initramfs boot script to start the RAID array
The underlying problem is that the RAID1 array is not started automatically when it's missing a disk, and so cryptsetup cannot find the UUID of the drive to decrypt (as configured in /etc/crypttab). My fix, based on a script I was lucky enough to stumble on, lives in /etc/initramfs-tools/scripts/local-top/cryptraid:
#!/bin/sh
PREREQ="mdadm"
prereqs()
{
    echo "$PREREQ"
}

case $1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

cat /proc/mdstat
mdadm --run /dev/md1
cat /proc/mdstat
After creating that file, remember to:
  1. make the script executable (using chmod a+x) and
  2. regenerate the initramfs (using dpkg-reconfigure linux-image-KERNELVERSION).
To make sure that the script is doing the right thing:
  1. press "Shift" while booting to bring up the Grub menu
  2. then press "e" to edit the default boot line
  3. remove the "quiet" and "splash" options from the kernel arguments
  4. press F10 to boot with maximum console output
You should see the RAID array stopped (look for the output of the first cat /proc/mdstat call) and then you should see output from a running degraded RAID array.

Backing up the old initramfs
If you want to be extra safe while testing this new initramfs, make sure you only reconfigure one kernel at a time (no update-initramfs -u -k all) and make a copy of the initramfs before you reconfigure the kernel:
cp /boot/initrd.img-KERNELVERSION-generic /boot/initrd.img-KERNELVERSION-generic.original
Then if you run into problems, you can go into the Grub menu, edit the default boot option and make it load the .original initramfs.

5 October 2013

Vincent Sanders: If I have a style, I am not aware of it.

I wish I had known about that quote from Michael Graves before now. I would have perhaps had an answer to some recent visitors to makespace.

There are regular scheduled visits to makespace where people can come and view our facilities, and perhaps start the process of becoming a member if they decide they like what they see.

Varnishing a folding chair using a stool as a stand
However, we sometimes get people who just turn up at the door. If a member feels charitable they may choose to give a short tour rather than just turn the person away. I happened to be in one Friday afternoon recently varnishing a folding chair when two such people rang the doorbell. There were few other members about and because watching varnish dry was dull I decided to be helpful and do a quick tour.

I explained that they really ought to return for a scheduled event for a proper tour, gave the obligatory minimal safety briefing, and showed them the workshops and tools. During the tour it was mentioned they were attending a certain local higher education establishment and were interested in makespace as an inexpensive studio.

Before they left I was asked what I was working on. I explained that I had been creating stools and chairs from plywood. At this point the conversation took a somewhat surreal turn: one of them asked, well, more demanded, who my principal influence had been in designing with plywood.

When I said that I had mainly worked from a couple of Google image searches they were aghast and became quite belligerent. They both insisted I must have done proper research and my work was obviously influenced by Charles and Ray Eames and Arne Jacobsen and surely I intended to cite my influences in my design documents.

My admission that I had never even heard the names before and had no design documents seemed to lead to a distinctly condescending tone as they explained that all modern plywood design stemmed from a small number of early 20th century designers, and any competent designer's research would have revealed that.

At this point in proceedings I was becoming a bit put out that my good deed of showing off the workshop had not gone unpunished. I politely explained that I designed simply by generating a requirement in my head, maybe an internet search to see what others had done, measuring real things to get dimensions and then a great deal of trial and error.

I was then abruptly informed that my "design process was completely invalid and there were well established ways to design furniture correctly and therefore my entire design was invalid" and that I was wasting time and material. I thanked them for their opinion and showed them out, safe and well before anyone gets any ideas.

I put the whole incident out of my mind until I finished writing up the final folding chair post the other day. It struck me that perhaps I had been unknowingly influenced by these designers. It was certainly true I had generated ideas from the hundreds of images my searches had revealed.

I did some research and it turns out that from the 1930s to the 1950s there was a string of designers using plywood in novel ways, from the butterfly stool by Sori Yanagi through formed curvy chairs by Alvar Aalto and Eero Saarinen.

While these designers produced some wonderfully iconic and contemporary furniture I think that after reviewing my initial notes that two more modern designers Christian Desile and Leo Salom probably influenced me more directly. Though I did not reference their designs beyond seeing the images along with hundreds of others, certainly nothing was directly copied.

And therein lies an often repeated observation: no one creates anything without being influenced by their environment. The entire creative process of several billion ape descendants (or Golgafrincham telephone sanitisers if you prefer) is based on the simple process of copying, combining and transforming what is around us.

Isaac Newton by Sir Godfrey Kneller [Public domain], via Wikimedia Commons
I must accept that certain individuals at points in history have introduced radical improvements in their field, people like Socrates, Galileo, Leonardo, Newton and Einstein. However, even these outstanding examples were enlightened enough to acknowledge those who came before. Newton's quotation "If I have seen further it is by standing on the shoulders of giants" pretty much sums it up.

In my case I am privileged enough to live in a time where my environment has grown to the size of the world thanks to the internet. My influences, and therefore what I create, are that much richer, but at the same time it means that my influence on others is similarly diminished.

I have joined the maker community because I want to create. The act of creation teaches me new skills in a physical, practical way and additionally I get to exercise my mind using new techniques or sometimes things I had forgotten I already knew. I view this as an extension of my previous Open Source software work, adding a physical component to a previously purely mental pursuit.

But importantly I like that my creations might provide inspiration for someone else. To improve those chances in the wider world I force myself to follow a few basic rules:
Release
Possibly one of the hardest things for any project. I carefully avoided the word finish here because my experience leads me to the conclusion that I always want to improve my designs.

But it is important to get to a point in a project where you can say "that is good enough to share". This is more common in software, but it really applies to any project.

Share
If your aim is to improve your society with your contribution, sharing your designs and information is important. I think there is nothing better than someone else taking one of your designs, using it, and perhaps improving on it; remember, that is probably what you did in one way or another, so make it less hard for that to happen.

Ensure your design files are appropriately licensed and they are readily accessible. I personally lean towards the more generally accessible open source licences like MIT but the decision is ultimately yours.

Licencing is important, especially in the current copyright-happy society. I know it sounds dull, and no one takes it seriously, right? Sorry, but yes they do, and it is better for you to be clear from the start, especially if there is a software component to your project. Oh, one personal plea: use an existing well-known licence; the world simply does not need another one!

Write about it
The blog posts about the things I have made sometimes take almost as much time as the creation. The clear recording of my thoughts in written and photographic form often gives me more inspiration for improvements or other projects.

If someone else gets pleasure from the telling then that can only be good. If you do not do this then your voice cannot be heard and you wasted an opportunity to motivate others.

Feedback
If you do manage to get feedback on your creation, read it. You may disagree or not be interested for the current project but the feedback process is important. In software this often manifests as bug reports, in more physical projects this often becomes forum or blog comments.

Just remember that you need a thick skin for this: the most vocal members of any society are the minority with inflammatory opinions; the silent majority are by definition absent, but there is still useful feedback out there.

Create again
By this I simply mean that once you are satisfied with a project, move on to the next. This may sound a little obvious, but once you have some creative momentum it is much easier to keep going from project to project than if you leave time in between.

Also, do not have too many projects ongoing; by all means have a couple so you do not get stuck waiting for materials or workshop time, but with more than three or four you will never be able to release any of them.
Those students were perhaps somewhat misguided in how they stated their opinions, but they are correct that, in the world in which we find ourselves, we are all influenced. Though, contrary to received wisdom, those influences are more likely to come from the internet and our peers in the global maker society than from historical artists.

14 September 2013

Joachim Breitner: Adding safe coercions to Haskell

Yesterday, I pushed my first sizable contribution to GHC, the Haskell compiler. The feature solves the problem that newtypes are not always free: if we have newtype Age = MkAge Int, then we have all learned that the MkAge function has zero run-time cost. But if the Int that we want to convert is inside another type, the conversion is no longer free: converting a Maybe Int to a Maybe Age using, for example, fmap MkAge will cause time and space overhead at runtime, and there was no way around it. Well, there is unsafeCoerce, but really, that ought to be avoided. So after some discussion with and encouragement from Simon Peyton Jones at RDP in Eindhoven this year, I worked on a design (which was developed, as far as I know, by Simon, Roman Cheplyaka, Stephanie Weirich, Richard Eisenberg and me). In GHC 7.8, there will be a function coerce :: Coercible a b => a -> b that works, from the user's point of view, like unsafeCoerce (i.e. no run-time cost), but with the big difference that it will only typecheck if the compiler can infer that it indeed is safe to coerce between a and b. So it will derive Coercible Age Int, and Coercible Int Age, and Coercible (Maybe Age) (Maybe Int), and even stuff like Coercible (Int -> Age) (Age -> Int), but not Coercible Int Bool. It will also not coerce between Age and Int if the constructor MkAge is not exported, to respect module boundaries. Under the hood this relies on the also-new feature of roles, which were introduced to make the previously unsafe GeneralizedNewtypeDeriving feature safe again, and which also guarantee that coerce is indeed as safe as the name suggests. The feature will come with 7.8, but not fully advertised, so things might change again for 7.10, and bugs with the feature may not necessarily qualify to be fixed in further 7.8.x releases, so beware. It also does not automatically convert fmap MkAge into coerce, but it is a step in that direction.
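A small sketch of how this looks in practice (my own example, not taken from the post; it needs GHC 7.8 or later, where Data.Coerce is available):

```haskell
import Data.Coerce (coerce)

newtype Age = MkAge Int deriving Show

-- Wrapping a whole list at once: fmap MkAge would rebuild the list at
-- runtime, while coerce is free because Coercible Int Age is derived
-- automatically (MkAge is in scope here).
ages :: [Age]
ages = coerce [30, 40 :: Int]

backToInts :: [Int]
backToInts = coerce ages

main :: IO ()
main = print backToInts
```

Had Age been defined in another module without exporting MkAge, the Coercible Int Age instance would not be available, which is the module-boundary rule described above.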

26 August 2013

Russell Coker: Scratching a Galaxy S

Some years ago, when I first got an LG U990 Viewty (which in some ways is the best phone I've ever owned), I went swimming and left my phone in my bag. My phone happened to rest on my car keys and had vibration mode enabled; after a couple of missed calls I had a nasty scratched area on the phone screen. Since then I've been very wary about allowing metal objects to come into contact with a phone screen. Now I have a Samsung Galaxy S with some sort of motherboard damage (it won't even boot, and I know it's not a software issue because it was initially intermittent). A phone that old isn't worth repairing (they sell on eBay for as little as $50), so it seemed worth testing how hard the screen is. The screen cover is Gorilla Glass, which was the hardest glass available at the time the phone was new (apparently there are better versions of Gorilla Glass available now, and my more recent phones should be tougher). My first test was with one of my favorite Japanese kitchen knives; it didn't scratch the screen at all. Then I chose a knife sharpening stone as an obvious item that's harder than a knife; it scratched the screen easily. A quartz pebble also scratched the screen when I used some force, so presumably concrete and brick would also scratch it. Tests with all current Australian coins and my car keys showed that the screen is too hard to be scratched by them. I also tested hitting the phone screen with my keys; I hit it much harder than would happen if I were to run with my phone and my keys in the same pocket, and there was no damage. My conclusion is that any metal object you are likely to carry in your pocket is unlikely to cause any problem if knocked against the screen of a modern phone.

21 June 2013

Daniel Pocock: Practical challenges for interrupt-free computing

My previous blog on interrupt-free computing has been very well read. I've had a look at some practical implementation possibilities and can share some more details about how to go about it and potential problems.
No perfect solution
They say that a certain billionaire is so rich that he would not spend 5 seconds bending over to pick up a $100 banknote on the ground, because he makes $500 in 5 seconds anyway. A man from Ethiopia may well run a marathon for the same $100. This little anecdote underlines the point that no two people have exactly the same priorities, and it is hard to provide a one-size-fits-all solution. A good technical solution needs to let the user have some control over their own priorities, as that will go a lot further in respecting their preferences, their objectives and their environment. A practical example: some users may not want to be disturbed by a VoIP call while watching a movie on their PC. Other users may be quite happy to receive these interruptions. Some users may want to allow this interruption only from a subset of possible callers.
Building blocks
In the original blog on this subject I suggested that Thunderbird and its popular plugin, the Mozilla Lightning calendar, would provide one useful framework for integrating events, alerts and to-do items of various grades of importance. I already use these applications myself, via the re-branded icedove and iceowl-extension packages on Debian. They are also available in Fedora and many other distributions.
Priority with Lightning
An initial look at Lightning reveals a Priority attribute for tasks. It can be set to the values High, Medium or Low. These don't map to the concepts of Urgency and Importance from the Eisenhower method discussed in the original blog. Potential hacks: there are two ideas that come to mind, with no coding changes:
  • One possibility is to set Due dates for any item that has the Urgent characteristic and no date for items without the characteristic. In this case, the Priority attribute can then be used to distinguish Important and non-Important tasks.
  • The other possibility is to use the Categories feature to define four categories: Urgent-Important, non-Urgent-but-Important, Urgent-non-Important and neither-Urgent-Important.
A more elegant solution would be to develop an additional plugin that provides new attributes for the Urgency and Importance characteristics. For server-side storage of these attributes, it would be feasible to use extension properties as defined in RFC 2445, for example X-urgency and X-importance. This approach would make it easier to view a large range of tasks across multiple domains with the most urgent and important tasks at the top. Ideally, the Categories attribute of tasks can be used to distinguish tasks that relate to work (or a project/client) from personal tasks (such as planning a holiday).
Challenges with Lightning
Some of the challenges that I've identified after reviewing this solution:
  • the Categories in Lightning don't appear to be synchronized with the Tags used in Thunderbird for email.
  • Furthermore, the Tags are not synchronized automatically with the list of server-side IMAP keywords and somebody having multiple Thunderbird profiles (e.g. on both a desktop and a laptop) needs to manually synchronize the lists of tags and categories between all their devices.
  • Many people use mobile devices such as smartphones and tablets to access their email, calendar and tasks. The lightweight mail/calendar clients on these devices don't support the full range of features for classifying content, and their small, low-resolution user interfaces may make it difficult to replicate the same experience on a mobile device. There is some hope, though: rooting a tablet device and installing Linux would allow it to run a proper installation of Thunderbird with the various plugins described above.
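Despite those challenges, the storage side of the extension-property idea is straightforward, because RFC 2445 permits arbitrary X- properties on a task. A minimal sketch of what such a stored task could look like, assuming the X-urgency/X-importance property names suggested above (they are a proposal, not a published standard):

```shell
# Hypothetical sketch: a VTODO carrying the proposed X-URGENCY and
# X-IMPORTANCE extension properties alongside the standard Priority
# and Categories attributes. All values are illustrative.
cat > urgent-task.ics <<'EOF'
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//eisenhower-sketch//EN
BEGIN:VTODO
UID:20130601-0001@example.org
DTSTAMP:20130601T080000Z
DUE:20130602T170000Z
SUMMARY:Prepare quarterly security review
CATEGORIES:WORK
PRIORITY:1
X-URGENCY:HIGH
X-IMPORTANCE:HIGH
END:VTODO
END:VCALENDAR
EOF
grep -c '^X-' urgent-task.ics
```

A server that stores iCalendar data verbatim should round-trip unknown X- properties, so a client-side plugin could sort on them without any server-side changes.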
Feeding system events and other data into the pipeline

Given the text-based nature of the iCalendar format, it should be easy enough for any application that is capable of generating mail to generate these iCal events with X-urgency and X-importance properties. These events - such as alerts about security updates - could then be dispatched through the system's mailer. Some additional effort may be needed to avoid duplication. The notification system in the desktop UI could then prioritize events from the mailbox rather than showing all alerts in real time. Only applications that inherently need to do so (such as VoIP/chat clients) would display notifications to the user directly.

Conclusion: interrupting the user

Finally, after gathering all the events for the user in a mailbox like this, it would be possible for the plugin to be configured to interrupt the user only when some event reaches the user's own urgency/importance threshold. For example:
  • The user may want to be interrupted for Urgent items with a deadline in less than 24 hours, but if the task has a 'work' category, only be interrupted during business hours.
  • For all other items, the user only sees them when they open Lightning or some other calendar/task management application to explicitly review things that they need to do.
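As a sketch of the dispatch step described above - an application pushing an alert into the user's mailbox as an iCalendar payload - the mail-generation part might look like this (the recipient, the property names and the sendmail invocation are all placeholder assumptions, not part of any existing tool):

```shell
# Hypothetical sketch: wrap an alert in a minimal mail message with a
# text/calendar body, ready to hand to the local mailer. Recipient and
# identifiers are placeholders.
build_alert_mail() {
  cat <<'EOF'
To: localuser
Subject: [alert] security updates available
MIME-Version: 1.0
Content-Type: text/calendar; method=PUBLISH; charset=UTF-8

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//alert-sketch//EN
BEGIN:VEVENT
UID:alert-20130601-01@localhost
DTSTAMP:20130601T080000Z
SUMMARY:Security updates available
X-URGENCY:HIGH
X-IMPORTANCE:HIGH
END:VEVENT
END:VCALENDAR
EOF
}
# A real dispatcher would pipe this into the MTA, e.g.:
#   build_alert_mail | /usr/sbin/sendmail -t
build_alert_mail > alert.eml
```

The duplicate-suppression mentioned above could key on the UID property, so re-running the alert generator does not flood the mailbox.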

12 June 2013

Marc 'Zugschlus' Haber: How to amd64 an i386 Debian installation with multiarch

Migrating a Debian installation between architectures has always been difficult. The recommended way to crossgrade an i386 Debian to amd64 was to reinstall the system and move over data and configuration. For the more brave, in-place crossgrades usually involved chroots, rescue CDs, a lot of ar p and tar xf - invocations on data.tar.gz, and luck. I have never been brave when it comes to system administration, have done a lot of architecture migrations with reinstallation, and have always taken the opportunity to clear out the contamination that accumulates when a system has been running for a long time. I would even recommend doing this to most people even now. However, I have a few very ugly systems in place that are still on i386 because I didn't dare to go the reinstallation path.

Doing in-place crossgrades has become a lot easier since wheezy's release, since one can now have both i386 and amd64 libraries installed in parallel, which allows replacing foo:i386 with foo:amd64 without influencing other parts of the system. The process is still full of pitfalls: I have only tried this with a freshly installed minimal wheezy server system. I do not, however, expect surprises when it comes to using this process with real-life systems. I will document other pitfalls I fall into here at a later time.

My minimal wheezy system was running in a KVM VM with its virtual disk as an LVM LV in the host system. I took a snapshot before beginning and used lvconvert --merge numerous times to return my LV to its original state. Be aware that lvconvert --merge removes the snapshot after merging it, so you'll need to re-create the snapshot before trying again. During the process, I discussed things with Paul Tagliamonte, who has done this before, but on a live system and with a slightly more invasive approach. He has blogged about this. Thank you very much, your hints were very helpful. Here is a commented typescript of what I have done.
Be warned: This was a lab setting with a rather minimal system. If you're going to try this with a production system, have a backup, or, if you're on snapshottable infrastructure, take snapshots and be prepared to roll back if anything goes wrong. First, let's see what we have and save a list of installed packages.
mh@swivel:~$ ssh wheezyarch.zugschlus.de
$ uname -a
Linux wheezyarch 3.9.4-zgsrv2008064 #2 SMP PREEMPT Wed Jun 5 12:57:51 UTC 2013 x86_64 GNU/Linux
$ dpkg --print-architecture
i386
$ dpkg --print-foreign-architectures
$ dpkg --list | grep i386 | wc -l
175
$ dpkg --list | grep amd64 | wc -l
0
$ sudo dpkg --get-selections | grep install | awk '{ print $1 }' | sed 's/:.*//;s/$/:amd64 install/' > get-selections.pre
So we have an i386 system running an x86_64 kernel. Next, we add amd64 as an additional architecture, which allows us to install amd64 packages, and download the amd64 Packages files. Note that dpkg still considers this an i386 system with amd64 as a foreign architecture.
$ sudo dpkg --add-architecture amd64
$ sudo apt-get update
Get:1 http://security.debian.org wheezy/updates Release.gpg [836 B]
(...)
Fetched 6.155 kB in 4s (1.400 kB/s)
Reading package lists... Done
$ dpkg --print-architecture
i386
$ dpkg --print-foreign-architectures
amd64
$ dpkg --list | grep i386 | wc -l
175
$ dpkg --list | grep amd64 | wc -l
0
Next, download everything that is necessary to migrate dpkg and apt from i386 to amd64. We cannot directly apt-get install things, as the remove-install cycle used by apt would leave us without a working dpkg. One could work around this by un-aring the dpkg .deb and directly forcing the amd64 files upon the system, but I think this approach is much cleaner. Note that apt-get download does not resolve dependencies, while apt-get --download-only install does. After the download, dpkg --install the packages. dpkg is not particularly smart about installation order, so Pre-Depends may or may not fail. If they fail, dpkg gives a list of the affected .deb files, which makes it easy to finish their installation. One can also take the easy way (which I did here) and simply repeat the dpkg --install *.deb command: since the Pre-Depends will already be installed, the second run will succeed.
$ sudo apt-get --download-only install dpkg:amd64 apt:amd64
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  gcc-4.7-base:amd64 libapt-pkg4.12:amd64 libbz2-1.0:amd64 libc6:amd64
  libgcc1:amd64 liblzma5:amd64 libselinux1:amd64 libstdc++6:amd64
  zlib1g:amd64
Suggested packages:
  aptitude:amd64 synaptic:amd64 wajig:amd64 dpkg-dev:amd64 apt-doc:amd64
  python-apt:amd64 glibc-doc:amd64 locales:amd64
The following packages will be REMOVED:
  apt cron-apt dpkg
The following NEW packages will be installed:
  apt:amd64 dpkg:amd64 gcc-4.7-base:amd64 libapt-pkg4.12:amd64
  libbz2-1.0:amd64 libc6:amd64 libgcc1:amd64 liblzma5:amd64 libselinux1:amd64
  libstdc++6:amd64 zlib1g:amd64
0 upgraded, 11 newly installed, 3 to remove and 0 not upgraded.
Need to get 10,0 MB of archives.
After this operation, 14,9 MB of additional disk space will be used.
Do you want to continue [Y/n]? 
Get:1 http://debian.debian.zugschlus.de/debian/ wheezy/main gcc-4.7-base amd64 4.7.2-5 [144 kB]
(...)
Fetched 10,0 MB in 2s (3.535 kB/s)
Download complete and in download only mode
$ sudo dpkg --install /var/cache/apt/archives/*.deb
(Reading database ... 16586 files and directories currently installed.)
Preparing to replace apt 0.9.7.8 (using .../archives/apt_0.9.7.8_amd64.deb) ...
Unpacking replacement apt ...
dpkg: regarding .../dpkg_1.16.10_amd64.deb containing dpkg, pre-dependency problem:
 dpkg pre-depends on libbz2-1.0
dpkg: error processing /var/cache/apt/archives/dpkg_1.16.10_amd64.deb (--install):
 pre-dependency problem - not installing dpkg
Selecting previously unselected package gcc-4.7-base:amd64.
(...)
Errors were encountered while processing:
 /var/cache/apt/archives/dpkg_1.16.10_amd64.deb
$ sudo dpkg --install /var/cache/apt/archives/*.deb
(Reading database ... 16905 files and directories currently installed.)
Preparing to replace apt 0.9.7.8 (using .../archives/apt_0.9.7.8_amd64.deb) ...
Unpacking replacement apt ...
(...)
$
From dpkg's point of view, the system is now already amd64, with i386 being a foreign architecture. However, the majority of packages are still i386.
$ dpkg --print-architecture
amd64
$ dpkg --print-foreign-architectures
i386
$ dpkg --list | grep i386 | wc -l
173
$ dpkg --list | grep amd64 | wc -l
11
$ 
The apt resolver is currently in a badly broken state, since the system is missing most essential packages in its native architecture. This is not easily solved by apt-get -f install, as that would zap most of the existing system. Instead, we let apt-get --download-only -f install download the packages that it wants to replace with their amd64 counterparts, along with their dependencies, and, again, use dpkg --install *.deb to install them.
$ sudo apt-get --download-only -f install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  cpio gettext-base:i386 klibc-utils libapt-inst1.5 libasprintf0c2:i386
  libcurl3-gnutls:i386 libfreetype6:i386 libfuse2:i386 libklibc
  libreadline5:i386
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  acpid aide bsd-mailx cpio cron curl daemon dctrl-tools exim4-base
  exim4-daemon-light initscripts insserv kbd klibc-utils less libbsd0
  libcomerr2 libcurl3 libdb5.1 libexpat1 libgcrypt11 libgdbm3 libgnutls26
  libgpg-error0 libgssapi-krb5-2 libidn11 libk5crypto3 libkeyutils1 libklibc
  libkmod2 libkrb5-3 libkrb5support0 libldap-2.4-2 liblockfile1 libncursesw5
  libnewt0.52 libp11-kit0 libpam-modules libpam0g libpcre3 libpopt0
  libreadline6 librtmp0 libsasl2-2 libslang2 libsqlite3-0 libssh2-1
  libssl1.0.0 libtasn1-3 libterm-readkey-perl libtinfo5 openssl perl
  perl-base python2.7 python2.7-minimal sysvinit-utils whiptail
Suggested packages:
  libarchive1 anacron logrotate checksecurity debtags mail-reader eximon4
  exim4-doc-html exim4-doc-info spf-tools-perl swaks bootchart2 rng-tools
  krb5-doc krb5-user libpam-doc perl-doc make python2.7-doc binutils
  binfmt-support bootlogd sash
Recommended packages:
  exim4 postfix mail-transport-agent psmisc mailx ca-certificates
  krb5-locales libgpm2 libfribidi0 libsasl2-modules libpng12-0
The following packages will be REMOVED:
  acpid:i386 aide:i386 aptitude:i386 bsd-mailx:i386 cpio:i386 cron:i386
  curl:i386 daemon:i386 dctrl-tools:i386 debsecan dmsetup:i386 e2fsprogs:i386
  exim4-base:i386 exim4-daemon-light:i386 git:i386 grub-common:i386
  grub-pc:i386 grub-pc-bin:i386 grub2-common:i386 ifupdown:i386
  initramfs-tools initscripts:i386 insserv:i386 ippl:i386 jed:i386 kbd:i386
  klibc-utils:i386 less:i386 libdevmapper-event1.02.1:i386
  libdevmapper1.02.1:i386 libklibc:i386 libnewt0.52:i386
  libparted0debian1:i386 libterm-readkey-perl:i386 lsof:i386 lvm2:i386
  molly-guard ntp:i386 openssh-server:i386 openssl:i386 parted:i386 perl:i386
  perl-base:i386 procps:i386 python-apt:i386 python2.7:i386
  python2.7-minimal:i386 rsyslog:i386 sysvinit:i386 sysvinit-utils:i386
  udev:i386 util-linux:i386 whiptail:i386
The following NEW packages will be installed:
  acpid aide bsd-mailx cpio cron curl daemon dctrl-tools exim4-base
  exim4-daemon-light initscripts insserv kbd klibc-utils less libbsd0
  libcomerr2 libcurl3 libdb5.1 libexpat1 libgcrypt11 libgdbm3 libgnutls26
  libgpg-error0 libgssapi-krb5-2 libidn11 libk5crypto3 libkeyutils1 libklibc
  libkmod2 libkrb5-3 libkrb5support0 libldap-2.4-2 liblockfile1 libncursesw5
  libnewt0.52 libp11-kit0 libpam-modules libpam0g libpcre3 libpopt0
  libreadline6 librtmp0 libsasl2-2 libslang2 libsqlite3-0 libssh2-1
  libssl1.0.0 libtasn1-3 libterm-readkey-perl libtinfo5 openssl perl
  perl-base python2.7 python2.7-minimal sysvinit-utils whiptail
0 upgraded, 58 newly installed, 53 to remove and 0 not upgraded.
Need to get 23,4 MB of archives.
After this operation, 17,5 MB disk space will be freed.
Do you want to continue [Y/n]? Y
Get:1 http://security.debian.org/debian-security/ wheezy/updates/main libgnutls26 amd64 2.12.20-7 [619 kB]
(...)
Fetched 23,4 MB in 11s (2.038 kB/s)                                           
Download complete and in download only mode
$ sudo dpkg --install /var/cache/apt/archives/*.deb
(Reading database ... 16905 files and directories currently installed.)
Preparing to replace acpid 1:2.0.16-1+deb7u1 (using .../acpid_1%3a2.0.16-1+deb7u1_amd64.deb) ...
(...)
Selecting previously unselected package libpam-modules:amd64.
dpkg: regarding .../libpam-modules_1.1.3-7.1_amd64.deb containing libpam-modules:amd64, pre-dependency problem:
 libpam-modules pre-depends on libdb5.1
  libdb5.1:amd64 is unpacked, but has never been configured.
dpkg: error processing /var/cache/apt/archives/libpam-modules_1.1.3-7.1_amd64.deb (--install):
 pre-dependency problem - not installing libpam-modules:amd64
Selecting previously unselected package libpcre3:amd64.
Unpacking libpcre3:amd64 (from .../libpcre3_1%3a8.30-5_amd64.deb) ...
(...)
Errors were encountered while processing:
 /var/cache/apt/archives/libpam-modules_1.1.3-7.1_amd64.deb
$ sudo dpkg --install /var/cache/apt/archives/libpam-modules_1.1.3-7.1_amd64.deb
(Reading database ... 17007 files and directories currently installed.)
Unpacking libpam-modules:amd64 (from .../libpam-modules_1.1.3-7.1_amd64.deb) ...
Setting up libpam-modules:amd64 (1.1.3-7.1) ...
$ sudo apt-get clean
$ dpkg --print-architecture
amd64
$ dpkg --print-foreign-architectures
i386
$ dpkg --list | grep i386 | wc -l
151
$ dpkg --list | grep amd64 | wc -l
66
$ 
The concluding apt-get clean prevents packages from being reinstalled in the next dpkg --install *.deb run. Now the apt resolver is somewhat fixed, but apt will still want to remove most of the system. I haven't found a way out of this mess short of allowing apt to de-install those packages and manually installing their amd64 counterparts later. First, we see what apt wants to do, and paste the "The following packages will be REMOVED:" part of apt-get's output into a sed expression which fixes the architecture strings for us, so that we can use the resulting file as input for the following installation procedure. After that, we let apt-get do what it itself considers a bad thing, and which requires us to type a consent sentence. The key point is that this deinstallation of essential packages breaks the system, but not apt and dpkg, which allows us to un-break the system afterwards.
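The sed step is trivial but worth illustrating: it simply strips the :i386 qualifiers so that apt-get interprets the names as requests for the new native architecture (sample package names below are fabricated):

```shell
# Demonstration of the architecture-stripping step on a fabricated
# package list; any ":i386" qualifier is removed from each name.
echo "cron curl:i386 perl:i386 sysvinit-utils udev:i386" | sed 's/:i386//g'
```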
$ sudo apt-get -f install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  gettext-base:i386 libapt-inst1.5 libasprintf0c2:i386 libcurl3-gnutls:i386
  libfreetype6:i386 libfuse2:i386 libreadline5:i386
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  sysvinit-utils:i386
Suggested packages:
  bootlogd:i386 sash:i386
The following packages will be REMOVED:
  adduser aide-common aptitude:i386 bsd-mailx console-common console-data
  console-log cron curl debian-goodies debsecan dmsetup:i386 e2fsprogs:i386
  exim4-base exim4-config exim4-daemon-light git:i386 grub-common:i386
  grub-pc:i386 grub-pc-bin:i386 grub2-common:i386 ifupdown:i386
  initramfs-tools initscripts ippl:i386 jed:i386 libcurl3
  libdevmapper-event1.02.1:i386 libdevmapper1.02.1:i386 liberror-perl
  libfile-find-rule-perl libnumber-compare-perl libparted0debian1:i386
  libswitch-perl libterm-readkey-perl libterm-readline-perl-perl
  libtext-glob-perl libtimedate-perl locales lsof:i386 lvm2:i386 molly-guard
  ntp:i386 openssh-client:i386 openssh-server:i386 openssl openssl-blacklist
  openssl-blacklist-extra parted:i386 perl perl-modules procps:i386
  python-apt:i386 rsyslog:i386 stow sysv-rc sysvinit:i386 sysvinit-utils
  tzdata ucf udev:i386 util-linux:i386
The following NEW packages will be installed:
  sysvinit-utils:i386
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
  e2fsprogs:i386 util-linux:i386 (due to e2fsprogs:i386) sysvinit:i386
  sysvinit-utils tzdata (due to util-linux:i386)
0 upgraded, 1 newly installed, 62 to remove and 0 not upgraded.
Need to get 97,1 kB of archives.
After this operation, 131 MB disk space will be freed.
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
 ?] (abort with Ctrl-C)
$ sed 's/:i386//g' > removedpkg (paste the "The following packages will be REMOVED:" stanza here and
type Ctrl-D afterwards)
  adduser aide-common aptitude:i386 bsd-mailx console-common console-data
  console-log cron curl debian-goodies debsecan dmsetup:i386 e2fsprogs:i386
  exim4-base exim4-config exim4-daemon-light git:i386 grub-common:i386
  grub-pc:i386 grub-pc-bin:i386 grub2-common:i386 ifupdown:i386
  initramfs-tools initscripts ippl:i386 jed:i386 libcurl3
  libdevmapper-event1.02.1:i386 libdevmapper1.02.1:i386 liberror-perl
  libfile-find-rule-perl libnumber-compare-perl libparted0debian1:i386
  libswitch-perl libterm-readkey-perl libterm-readline-perl-perl
  libtext-glob-perl libtimedate-perl locales lsof:i386 lvm2:i386 molly-guard
  ntp:i386 openssh-client:i386 openssh-server:i386 openssl openssl-blacklist
  openssl-blacklist-extra parted:i386 perl perl-modules procps:i386
  python-apt:i386 rsyslog:i386 stow sysv-rc sysvinit:i386 sysvinit-utils
  tzdata ucf udev:i386 util-linux:i386
$ cat removedpkg 
  adduser aide-common aptitude bsd-mailx console-common console-data
  console-log cron curl debian-goodies debsecan dmsetup e2fsprogs
  exim4-base exim4-config exim4-daemon-light git grub-common
  grub-pc grub-pc-bin grub2-common ifupdown
  initramfs-tools initscripts ippl jed libcurl3
  libdevmapper-event1.02.1 libdevmapper1.02.1 liberror-perl
  libfile-find-rule-perl libnumber-compare-perl libparted0debian1
  libswitch-perl libterm-readkey-perl libterm-readline-perl-perl
  libtext-glob-perl libtimedate-perl locales lsof lvm2 molly-guard
  ntp openssh-client openssh-server openssl openssl-blacklist
  openssl-blacklist-extra parted perl perl-modules procps
  python-apt rsyslog stow sysv-rc sysvinit sysvinit-utils
  tzdata ucf udev util-linux
$ sudo apt-get -f install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  gettext-base:i386 libapt-inst1.5 libasprintf0c2:i386 libcurl3-gnutls:i386
  libfreetype6:i386 libfuse2:i386 libreadline5:i386
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  sysvinit-utils:i386
Suggested packages:
  bootlogd:i386 sash:i386
The following packages will be REMOVED:
  adduser aide-common aptitude:i386 bsd-mailx console-common console-data
  console-log cron curl debian-goodies debsecan dmsetup:i386 e2fsprogs:i386
  exim4-base exim4-config exim4-daemon-light git:i386 grub-common:i386
  grub-pc:i386 grub-pc-bin:i386 grub2-common:i386 ifupdown:i386
  initramfs-tools initscripts ippl:i386 jed:i386 libcurl3
  libdevmapper-event1.02.1:i386 libdevmapper1.02.1:i386 liberror-perl
  libfile-find-rule-perl libnumber-compare-perl libparted0debian1:i386
  libswitch-perl libterm-readkey-perl libterm-readline-perl-perl
  libtext-glob-perl libtimedate-perl locales lsof:i386 lvm2:i386 molly-guard
  ntp:i386 openssh-client:i386 openssh-server:i386 openssl openssl-blacklist
  openssl-blacklist-extra parted:i386 perl perl-modules procps:i386
  python-apt:i386 rsyslog:i386 stow sysv-rc sysvinit:i386 sysvinit-utils
  tzdata ucf udev:i386 util-linux:i386
The following NEW packages will be installed:
  sysvinit-utils:i386
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
  e2fsprogs:i386 util-linux:i386 (due to e2fsprogs:i386) sysvinit:i386
  sysvinit-utils tzdata (due to util-linux:i386)
0 upgraded, 1 newly installed, 62 to remove and 0 not upgraded.
Need to get 97,1 kB of archives.
After this operation, 131 MB disk space will be freed.
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
 ?] Yes, do as I say! (actually type this!)
Get:1 http://debian.debian.zugschlus.de/debian/ wheezy/main sysvinit-utils i386 2.88dsf-41 [97,1 kB]
Fetched 97,1 kB in 0s (687 kB/s)    
(Reading database ... 17050 files and directories currently installed.)
Removing aide-common ...
(...)
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
Removing e2fsprogs ...
Removing sysv-rc ...
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
Removing sysvinit ...
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
Removing sysvinit-utils ...
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
Removing ucf ...
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
Removing libdevmapper1.02.1:i386 ...
Removing libswitch-perl ...
Removing perl ...
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
Removing dmsetup ...
Removing perl-modules ...
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
Removing util-linux ...
Removing tzdata ...
Processing triggers for mime-support ...
Selecting previously unselected package sysvinit-utils.
(Reading database ... 9802 files and directories currently installed.)
Unpacking sysvinit-utils (from .../sysvinit-utils_2.88dsf-41_i386.deb) ...
Setting up sysvinit-utils (2.88dsf-41) ...
$ sudo apt-get install $(cat removedpkg)
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libasprintf0c2:i386 libcurl3-gnutls:i386 libfreetype6:i386 libfuse2:i386
  libreadline5:i386
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  e2fslibs gettext-base libapt-inst1.5 libasprintf0c2 libattr1 libblkid1
  libboost-iostreams1.49.0 libcap2 libcurl3-gnutls libcwidget3 libedit2
  libept1.4.12 libfreetype6 libfuse2 libgpm2 libncurses5 libopts25 libprocps0
  libreadline5 libsepol1 libsigc++-2.0-0c2a libss2 libudev0 libuuid1 libwrap0
  libxapian22 logrotate
Suggested packages:
  liblocale-gettext-perl tasksel debtags unicode-data anacron checksecurity
  popularity-contest xdg-utils zenity gpart e2fsck-static mail-reader eximon4
  exim4-doc-html exim4-doc-info spf-tools-perl swaks git-daemon-run
  git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email
  git-gui gitk gitweb multiboot-doc grub-emu xorriso desktop-base
  isc-dhcp-client dhcp-client ppp rdnssd net-tools gpm libcwidget-dev fuse
  libparted0-dev libparted0-i18n xapian-tools ntp-doc ssh-askpass libpam-ssh
  keychain monkeysphere rssh ufw parted-doc perl-doc make libpod-plainer-perl
  python-apt-dbg python-gtk2 python-vte python-apt-doc rsyslog-mysql
  rsyslog-pgsql rsyslog-doc rsyslog-gnutls rsyslog-gssapi rsyslog-relp
  doc-base sysv-rc-conf bum bootlogd sash util-linux-locales dosfstools
Recommended packages:
  aptitude-doc-en aptitude-doc apt-xapian-index libparse-debianchangelog-perl
  exim4 postfix mail-transport-agent psmisc mailx patch rsync ssh-client
  os-prober busybox busybox-initramfs busybox-static ca-certificates
  uuid-runtime xauth ncurses-term iso-codes usbutils
The following packages will be REMOVED:
  gettext-base:i386 libboost-iostreams1.49.0:i386 libcwidget3:i386
  libept1.4.12:i386 libopts25:i386 libxapian22:i386 logrotate:i386
  sysvinit-utils:i386
The following NEW packages will be installed:
  adduser aide-common aptitude bsd-mailx console-common console-data
  console-log cron curl debian-goodies debsecan dmsetup e2fslibs e2fsprogs
  exim4-base exim4-config exim4-daemon-light gettext-base git grub-common
  grub-pc grub-pc-bin grub2-common ifupdown initramfs-tools initscripts ippl
  jed libapt-inst1.5 libasprintf0c2 libattr1 libblkid1
  libboost-iostreams1.49.0 libcap2 libcurl3 libcurl3-gnutls libcwidget3
  libdevmapper-event1.02.1 libdevmapper1.02.1 libedit2 libept1.4.12
  liberror-perl libfile-find-rule-perl libfreetype6 libfuse2 libgpm2
  libncurses5 libnumber-compare-perl libopts25 libparted0debian1 libprocps0
  libreadline5 libsepol1 libsigc++-2.0-0c2a libss2 libswitch-perl
  libterm-readkey-perl libterm-readline-perl-perl libtext-glob-perl
  libtimedate-perl libudev0 libuuid1 libwrap0 libxapian22 locales logrotate
  lsof lvm2 molly-guard ntp openssh-client openssh-server openssl
  openssl-blacklist openssl-blacklist-extra parted perl perl-modules procps
  python-apt rsyslog stow sysv-rc sysvinit sysvinit-utils tzdata ucf udev
  util-linux
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
  sysvinit-utils:i386
0 upgraded, 89 newly installed, 8 to remove and 0 not upgraded.
Need to get 55.3 MB of archives.
After this operation, 134 MB of additional disk space will be used.
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
 ?] Yes, do as I say! (actually type this!)
Get:1 http://debian.debian.zugschlus.de/debian/ wheezy/main e2fslibs amd64 1.42.5-1.1 [197 kB]
(...)
Fetched 55.3 MB in 20s (2648 kB/s)                                            
perl: warning: Setting locale failed.
(...)
This process may re-ask some debconf questions that you have already answered during initial system setup. I find it comforting to actually see that my answers were preserved, but you can always export DEBIAN_FRONTEND=noninteractive if you want the installation to be silent. If you know how this step can be done more elegantly, please comment. We now, finally, have the resolver in a consistent state.
$ sudo apt-get -f install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libasprintf0c2:i386 libcurl3-gnutls:i386 libfreetype6:i386 libfuse2:i386
  libreadline5:i386
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
$ dpkg --print-architecture
amd64
$ dpkg --print-foreign-architectures
i386
$ dpkg --list | grep i386 | wc -l
121
$ dpkg --list | grep amd64 | wc -l
119
$ 
Now we can force-crossgrade all i386 packages that are still left on the system. Again, we use the apt-get --download-only install / dpkg --install stunt, as coreutils gets crossgraded in this step and dpkg is less than happy if it finds itself without /bin/rm. Remember: with --download-only, nothing is actually removed during this step, even if apt-get claims to remove things. Apt will also complain that many of these packages are already installed; those are the libraries that were pulled in previously. The first dpkg --install run in this step will complain about gazillions of dependency problems, which will all be resolved by the concluding apt-get -f install after the second, manual dpkg --install run.
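The filter in the following command is easier to read against a fabricated two-line sample of dpkg --list output: keep installed (ii) rows whose architecture column says i386, print the package name, and drop any :i386 qualifier:

```shell
# Demonstration of the package-list filter on fabricated dpkg --list
# output. Column 1 is the status, column 2 the name, column 4 the
# architecture; only the i386 row survives, with its qualifier removed.
printf 'ii  bash:i386  4.2+dfsg-0.1  i386  GNU Bourne Again SHell\nii  kmod  9-3  amd64  kernel module tools\n' \
  | awk '{ if ($1 == "ii" && $4 == "i386") print $2 }' \
  | sed 's/:i386//'
```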
$ sudo apt-get --download-only install $(dpkg --list | awk '{ if ($1 == "ii" && $4 == "i386") print $2 }' | sed 's/:i386//')
Reading package lists... Done
Building dependency tree       
Reading state information... Done
(...)
The following packages were automatically installed and are no longer required:
  libasprintf0c2:i386 libcurl3-gnutls:i386 libfreetype6:i386 libfuse2:i386
  libreadline5:i386
Use 'apt-get autoremove' to remove them.
Suggested packages:
  powermgmt-base bash-doc bzip2-doc diffutils-doc wdiff mlocate locate
  gnupg-doc xloadimage imagemagick eog libpcsclite1 iproute-doc resolvconf
  avahi-autoipd nfs-common ncompress indent
Recommended packages:
  bsdmainutils gnupg-curl libatm1
The following packages will be REMOVED:
  anacron:i386 apt-utils:i386 base-files:i386 base-passwd:i386 bash:i386
(...)
The following NEW packages will be installed:
  anacron apt-utils base-files base-passwd bash bsdutils busybox bzip2
(...)
0 upgraded, 56 newly installed, 49 to remove and 0 not upgraded.
Need to get 24,1 MB of archives.
After this operation, 2.030 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://debian.debian.zugschlus.de/debian/ wheezy/main kmod amd64 9-3 [60,5 kB]
(...)
Fetched 24,1 MB in 11s (2.120 kB/s)                                           
Download complete and in download only mode
$ sudo dpkg --install /var/cache/apt/archives/*.deb
(Reading database ... 17098 files and directories currently installed.)
Preparing to replace adduser 3.113+nmu3 (using .../adduser_3.113+nmu3_all.deb) ...
Unpacking replacement adduser ...
(...)
Errors were encountered while processing:
 /var/cache/apt/archives/bash_4.2+dfsg-0.1_amd64.deb
 /var/cache/apt/archives/coreutils_8.13-3.5_amd64.deb
 initscripts
 lvm2
 procps
 rsyslog
 sysvinit
 sysv-rc
 udev
 util-linux
 aide-common
 dmsetup
 e2fsprogs
 ifupdown
 initramfs-tools
 libdevmapper1.02.1:amd64
 libdevmapper-event1.02.1:amd64
 libparted0debian1:amd64
 molly-guard
 openssh-server
 parted
 grub-common
 grub-pc
 grub-pc-bin
 grub2-common
$ sudo dpkg --install /var/cache/apt/archives/bash_4.2+dfsg-0.1_amd64.deb
/var/cache/apt/archives/coreutils_8.13-3.5_amd64.deb
(Reading database ... 17129 files and directories currently installed.)
Preparing to replace bash 4.2+dfsg-0.1 (using .../bash_4.2+dfsg-0.1_amd64.deb) ...
Unpacking replacement bash ...
Preparing to replace coreutils 8.13-3.5 (using .../coreutils_8.13-3.5_amd64.deb) ...
Unpacking replacement coreutils ...
Setting up bash (4.2+dfsg-0.1) ...
update-alternatives: using /usr/share/man/man7/bash-builtins.7.gz to provide /usr/share/man/man7/builtins.7.gz
(builtins.7.gz) in auto mode
Setting up coreutils (8.13-3.5) ...
$ sudo apt-get -f install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  libasprintf0c2:i386 libcurl3-gnutls:i386 libfreetype6:i386 libfuse2:i386
  libreadline5:i386
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  sysvinit-utils
Suggested packages:
  bootlogd sash
The following packages will be REMOVED:
  libslang2-modules:i386 sysvinit-utils:i386
The following NEW packages will be installed:
  sysvinit-utils
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
  sysvinit-utils:i386
0 upgraded, 1 newly installed, 2 to remove and 0 not upgraded.
23 not fully installed or removed.
Need to get 0 B/99,5 kB of archives.
After this operation, 328 kB disk space will be freed.
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
 ?] Yes, do as I say! (actually type this!)
dpkg: warning: overriding problem because --force enabled:
 This is an essential package - it should not be removed.
(Reading database ... 17129 files and directories currently installed.)
Removing sysvinit-utils ...
Selecting previously unselected package sysvinit-utils.
(Reading database ... 17105 files and directories currently installed.)
Unpacking sysvinit-utils (from .../sysvinit-utils_2.88dsf-41_amd64.deb) ...
Setting up sysvinit-utils (2.88dsf-41) ...
Setting up sysv-rc (2.88dsf-41) ...
Setting up initscripts (2.88dsf-41) ...
Setting up util-linux (2.20.1-5.3) ...
Setting up e2fsprogs (1.42.5-1.1) ...
Setting up sysvinit (2.88dsf-41) ...
sysvinit: restarting... done.
(Reading database ... 17129 files and directories currently installed.)
Removing libslang2-modules:i386 ...
Setting up ifupdown (0.7.8) ...
Setting up procps (1:3.3.3-3) ...
[ ok ] Setting kernel variables ... /etc/sysctl.conf...done.
Setting up udev (175-7.2) ...
[ ok ] Stopping the hotplug events dispatcher: udevd.
[ ok ] Starting the hotplug events dispatcher: udevd.
update-initramfs: deferring update (trigger activated)
Setting up rsyslog (5.8.11-3) ...
[ ok ] Stopping enhanced syslogd: rsyslogd.
[ ok ] Starting enhanced syslogd: rsyslogd.
Setting up aide-common (0.15.1-8) ...
Setting up initramfs-tools (0.109.1) ...
update-initramfs: deferring update (trigger activated)
Setting up openssh-server (1:6.0p1-4) ...
[ ok ] Restarting OpenBSD Secure Shell server: sshd.
Setting up molly-guard (0.4.5-1) ...
Setting up libdevmapper1.02.1:amd64 (2:1.02.74-7) ...
Setting up libdevmapper-event1.02.1:amd64 (2:1.02.74-7) ...
Setting up libparted0debian1:amd64 (2.3-12) ...
Setting up grub-common (1.99-27+deb7u1) ...
Setting up grub2-common (1.99-27+deb7u1) ...
Setting up grub-pc-bin (1.99-27+deb7u1) ...
Setting up grub-pc (1.99-27+deb7u1) ...
Installation finished. No error reported.
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.9.4-zgsrv2008064
Found initrd image: /boot/initrd.img-3.9.4-zgsrv2008064
done
Setting up parted (2.3-12) ...
Setting up dmsetup (2:1.02.74-7) ...
update-initramfs: deferring update (trigger activated)
Setting up lvm2 (2.02.95-7) ...
[ ok ] Setting up LVM Volume Groups...done.
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.9.4-zgsrv2008064
Found 12 processes using old versions of upgraded files
(...)
$ sudo apt-get -f install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libasprintf0c2:i386 libcurl3-gnutls:i386 libfreetype6:i386 libfuse2:i386
  libreadline5:i386
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
$ dpkg --print-architecture
amd64
$ dpkg --print-foreign-architectures
i386
$ dpkg --list | grep i386 | wc -l
72
$ dpkg --list | grep amd64 | wc -l
175
$ 
We have completed the actual crossgrade. All non-multiarch packages are now amd64, and the resolver is fine as well. At this point, all i386 packages that are still installed should be unneeded libraries, so we can safely nuke them. Careful minds may inspect the list of packages to be removed for important or non-lib items. After the command completes, a second invocation of the same command verifies that we were successful.
$ sudo apt-get remove $(dpkg --list | awk '{ if( $1 == "ii" && $4 == "i386" ) { print $2 } }')
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  e2fslibs:i386 gcc-4.7-base:i386 libacl1:i386 libapt-inst1.5:i386
  libapt-pkg4.12:i386 libasprintf0c2:i386 libattr1:i386 libblkid1:i386
  libbsd0:i386 libbz2-1.0:i386 libc6:i386 libcap2:i386 libcomerr2:i386
  libcurl3:i386 libcurl3-gnutls:i386 libdb5.1:i386 libedit2:i386
  libexpat1:i386 libfreetype6:i386 libfuse2:i386 libgcc1:i386
  libgcrypt11:i386 libgdbm3:i386 libgnutls26:i386 libgpg-error0:i386
  libgpm2:i386 libgssapi-krb5-2:i386 libidn11:i386 libk5crypto3:i386
  libkeyutils1:i386 libkmod2:i386 libkrb5-3:i386 libkrb5support0:i386
  libldap-2.4-2:i386 liblockfile1:i386 liblzma5:i386 libmagic1:i386
  libncurses5:i386 libncursesw5:i386 libp11-kit0:i386 libpam-modules:i386
  libpam0g:i386 libpci3:i386 libpcre3:i386 libpng12-0:i386 libpopt0:i386
  libprocps0:i386 libreadline5:i386 libreadline6:i386 librtmp0:i386
  libsasl2-2:i386 libselinux1:i386 libsemanage1:i386 libsepol1:i386
  libsigc++-2.0-0c2a:i386 libslang2:i386 libsqlite3-0:i386 libss2:i386
  libssh2-1:i386 libssl1.0.0:i386 libstdc++6:i386 libtasn1-3:i386
  libtinfo5:i386 libudev0:i386 libusb-0.1-4:i386 libustr-1.0-1:i386
  libuuid1:i386 libwrap0:i386 linux-image-3.9.4-zgsrv2008064:i386 zlib1g:i386
0 upgraded, 0 newly installed, 70 to remove and 0 not upgraded.
After this operation, 67,5 MB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 17111 files and directories currently installed.)
Removing e2fslibs:i386 ...
(...)
$ sudo apt-get remove $(dpkg --list | awk '{ if( $1 == "ii" && $4 == "i386" ) { print $2 } }')
Reading package lists... Done
Building dependency tree       
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
$ dpkg --print-architecture
amd64
$ dpkg --print-foreign-architectures
i386
$ dpkg --list | grep i386 | wc -l
70
$ dpkg --list | grep amd64 | wc -l
175
$ 
To finally make the packages really vanish from the system, we purge all packages that dpkg --list still reports in state "rc", which means "removed, configuration still present". Thankfully, dpkg seems to handle the case where a dpkg-conffile belongs to a transitioned package gracefully.
$ sudo dpkg --purge $(dpkg --list | awk '{ if ($1 == "rc") { print $2 } }')
(Reading database ... 15868 files and directories currently installed.)
Removing e2fslibs:i386 ...
Purging configuration files for e2fslibs:i386 ...
(...)
$ dpkg --print-architecture
amd64
$ dpkg --print-foreign-architectures
i386
$ dpkg --list | grep i386 | wc -l
0
$ dpkg --list | grep amd64 | wc -l
175
$ 
We now verify that all packages we have are either arch all or arch amd64.
$ dpkg --list | grep -v '\(amd64\|all\)'
Desired=Unknown/Install/Remove/Purge/Hold
  Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
 / Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
 / Name                            Version                   Architecture Description
+++-===============================-=========================-============-========================================================================
As I found myself suddenly without e2fsprogs during one of my experiments, I check for e2fsprogs being present. If this package is missing, you'll only find out after rebooting.
$ dpkg --list e2fsprogs
Desired=Unknown/Install/Remove/Purge/Hold
  Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
 / Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
 / Name           Version      Architecture Description
+++-==============-============-============-=================================
ii  e2fsprogs      1.42.5-1.1   amd64        ext2/ext3/ext4 file system utilit
My locally built amd64 kernel with 32-bit support was an arch i386 package as well, so it got zapped in the migration process. Replace it with a real amd64 kernel. This step will vary if you use Debian kernels. Do not forget the firmware packages.
$ sudo apt-get -t wheezy-zg-experimental install kernel-image-zgserver linux-firmware-image
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  cpio initramfs-tools kernel-image-zgsrv20080 klibc-utils libklibc
  linux-image-3.9.4-zgsrv20080
Suggested packages:
  libarchive1
The following NEW packages will be installed:
  cpio initramfs-tools kernel-image-zgserver kernel-image-zgsrv20080
  klibc-utils libklibc linux-firmware-image linux-image-3.9.4-zgsrv20080
0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 11,2 MB of archives.
After this operation, 30,5 MB of additional disk space will be used.
Do you want to continue [Y/n]? 
Get:1 http://zg20110.debian.zugschlus.de/ wheezy-zg-experimental/main linux-image-3.9.4-zgsrv20080 amd64
3.9.4.20130605.0 [10,4 MB]
(...)
I now make sure that the initrd I have built contains all file system modules and is of the right architecture:
$ cd /tmp
$ < /boot/initrd.img-3.9.4-zgsrv20080 gunzip | cpio -i
32053 blocks
$ find . -name '*ext*.ko'
./lib/modules/3.9.4-zgsrv20080/kernel/fs/ext3/ext3.ko
./lib/modules/3.9.4-zgsrv20080/kernel/fs/ext2/ext2.ko
./lib/modules/3.9.4-zgsrv20080/kernel/fs/ext4/ext4.ko
$ file ./lib/modules/3.9.4-zgsrv20080/kernel/fs/ext4/ext4.ko
./lib/modules/3.9.4-zgsrv20080/kernel/fs/ext4/ext4.ko: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV),
BuildID[sha1]=0x259a642448805c9597932ee25c62c8e6a00a32ad, not stripped
$ cd
As the last step, remove i386 as an architecture and reboot into the newly migrated system. We verify that the system considers itself amd64 with no foreign architectures.
$ sudo dpkg --remove-architecture i386
$ dpkg --print-architecture
amd64
$ 
$ dpkg --print-foreign-architectures
$ dpkg --list | grep i386 | wc -l
0
$ dpkg --list | grep amd64 | wc -l
181
$ sudo shutdown -r now
Broadcast message from root@wheezyarch (pts/0) (Wed Jun 12 16:09:24 2013):
The system is going down for reboot NOW!
$ Connection to wheezyarch.zugschlus.de closed by remote host.
Connection to wheezyarch.zugschlus.de closed.
$ ssh wheezyarch.zugschlus.de
Linux wheezyarch 3.9.4-zgsrv20080 #2 SMP PREEMPT Wed Jun 5 13:12:04 UTC 2013 x86_64
$ dpkg --print-architecture
amd64
$
We now check whether we have lost any packages in this process:
$ sudo dpkg --get-selections | grep install | awk '{ print $1 }' | sed 's/:.*//;s/$/:amd64 install/;' > get-selections.post
$ wc -l get-selections.*
  234 get-selections.post
  226 get-selections.pre
$ diff --unified=0 get-selections.pre get-selections.post 
--- get-selections.pre  2013-06-12 19:47:40.278176000 +0200
+++ get-selections.post 2013-06-12 20:28:51.389874000 +0200
@@ -22,0 +23 @@
+cpio:amd64 install
@@ -58,0 +60 @@
+initramfs-tools:amd64 install
@@ -68,0 +71,3 @@
+kernel-image-zgserver:amd64 install
+kernel-image-zgsrv20080:amd64 install
+klibc-utils:amd64 install
@@ -108,0 +114 @@
+libklibc:amd64 install
@@ -163,0 +170,2 @@
+linux-firmware-image:amd64 install
+linux-image-3.9.4-zgsrv20080:amd64 install
So we have not actually lost anything; rather, we have gained the pure amd64 kernel and the packages that are needed to build a proper initrd. These were missing from the initial install of the i386 test VM. The downside of the process is that we have lost most of the automatically installed markers in aptitude, which make sure a library is removed once nothing depends on it any more. If you make a point of having those markers set correctly, as I do, restoring them is tedious work: set the marker on every package in the aptitude curses interface, making aptitude suggest removing the entire system, and then clear it again on the packages you actually intend to keep. This should not result in anything being removed; otherwise you have goofed previously. All Done! I am now off perfecting my upgrade process so that the i386 lenny and squeeze machines can first be moved to wheezy and then to amd64.

29 May 2013

Russell Coker: Nexus 4

My wife has had an LG Nexus 4 for about 4 months now, so it's time for me to review it and compare it to my Samsung Galaxy S3. A Sealed Case The first thing to note about the Nexus 4 is that it doesn't support changing a battery or using micro-SD storage. The advantage of these design choices is that they allow reduced weight and greater strength compared to what the phone might otherwise be. Such choices would also allow the phone to be slightly cheaper, which is a massive advantage; it's worth noting that the Nexus 4 is significantly cheaper than any other device I can buy with comparable specs. My wife's phone has 8G of storage (not RAM thanks Robin) and cost $369 at the start of the year, while the current price is $349 for the 8G version and $399 for the 16G version. Of course one down-side of this is that if you need 16G of storage then you need to spend an extra $50 on the 16G phone instead of buying a phone with 8G of storage and inserting a 16GB micro-SD card which costs $19 from OfficeWorks. Also there's no option of using a 32G SD card (which costs less than $50) or a 64G SD card. Battery etc The battery on the Nexus 4 isn't nearly big enough; when playing Ingress it lasts about half as long as my Galaxy S3, about 90 minutes to fully discharge. If it was possible to buy a bigger battery from a company like Mugan Power then the lack of battery capacity wouldn't be such a problem. But as it's impossible to buy a bigger battery (unless you are willing to do some soldering) the only option is an external battery. I was unable to find a Nexus 4 case which includes a battery (which is probably because the Nexus 4 is a lot less common than the Galaxy S3), so my wife had to buy an external battery. If you are serious about playing Ingress with a Nexus 4 then you will end up with a battery in your pocket and a cable going to your phone from the battery, which is a real annoyance.
While being a cheap fast phone with a clear screen makes it well suited to Ingress, the issue of having a cable permanently attached is a real down-side. One significant feature of the Nexus 4 is that it supports wireless charging. I have no immediate plans to use that feature and the wireless charger isn't even on sale in Australia. But if the USB connector were to break then I could buy a wireless charger from the US and keep using the phone, while for every other phone I own a broken connector would render the phone entirely useless. Screen Brightness I have problems with my Galaxy S3 not being bright enough at midday when on auto brightness. I have problems with my wife's Nexus 4 being too bright in most situations other than use at midday. Sometimes at night it's painfully bright. The brightness of the display probably contributes to the excessive battery use. I don't know whether all Nexus 4 devices are like this or whether there is some variance. In any case it would be nice if the automatic screen brightness could be tuned, so I could make it brighter on my phone and less bright on my wife's. According to AndroSensor my Galaxy S3 thinks that the ambient light in my computer room is 28 lux while my wife's Nexus 4 claims it's 4 lux. So I guess that part of the problem is the accuracy of the light sensors in the phones. On-Screen Buttons I am a big fan of hardware buttons. Hardware buttons work reliably when your fingers are damp and can be used by feel at night. My first Android phone, the Sony-Ericsson Xperia X10, had three hardware buttons for settings, home, and back, as well as buttons for power, changing volume, and taking a photo, which I found very convenient. My Galaxy S3 has hardware buttons for power, home, and volume control. I think that Android phones should have more hardware buttons, not fewer. Unfortunately it seems that Google and the phone manufacturers disagree with me and the trend is towards fewer buttons.
Now the Nexus 4 only has hardware buttons for power and volume control. One significant advantage of the Galaxy S3 over the Nexus 4 is that the S3's settings and back buttons, while not implemented in hardware, are outside the usable screen area. So the 4.8" 1280*720 display is all for application data, while the buttons for home, settings, and back on the Nexus 4 take up space on the screen, so only a subset of the 4.7" 1280*768 is usable by applications. While according to specs the Nexus 4 has a screen almost as big as the Galaxy S3 and a slightly higher resolution, in practice it has an obviously smaller screen with fewer usable pixels. Also one of the changes related to having the buttons on-screen means that the settings button is often in the top right corner, which I find annoying. I didn't like that aspect of the GUI the first time I used a tablet running Android 3.0 and I still don't like it now. GPS My wife's Nexus 4 seems to be much less accurate than my Galaxy S3 for GPS. I don't know how much of this is due to phone design and how much is due to random factors in manufacturing. I presume that a large portion of it is due to random manufacturing issues because other people aren't complaining about it. Maybe she just got unlucky with an inaccurate phone. Shape and Appearance One feature that I really like in the Samsung Galaxy S is that it has a significant ridge surrounding the screen. If you place a Galaxy S face-down on a desk, that makes it a lot less likely to get a scratch on the screen. The LG U990 Viewty also had a similar ridge. Of course the gel case I have bought for every Android phone has solved this problem, but it would really be nice to have a phone that I consider usable without needing to buy such a case. The Nexus 4 has a screen that curves at the edges, which if anything makes the problem worse than merely lacking a ridge around the edge. On the up-side, the Nexus 4 looks and feels nice before you use it.
The back of the Nexus 4 sparkles; that's nice, but when you buy a gel case (which doesn't seem to be optional with modern design trends) you don't get to see it. The Nexus 4 is a very attractive package; it's really a pity that they didn't design it to be usable without a gel case. Conclusion Kogan is currently selling the Galaxy S3 with 16G of storage for $429. When comparing that to the 16G version of the Nexus 4 at $399, that means there's a price of $30 to get an SD socket, the option of replacing a battery, one more hardware button, and more screen space. So when comparing the Google offers for the Nexus 4 with the Kogan offer on the Galaxy S3, or the Galaxy Note which also has 16G of storage and sells for $429, the Google offer doesn't seem appealing to me. The Nexus 4 is still a good phone and is working well for my wife, but she doesn't need as much storage as I do. Also, when she got her phone the Galaxy S3 was much more expensive than it is now. Also, Kogan offer the 16G version of the Nexus 4 for $389, which makes it more appealing when compared to the Galaxy S3. It's surprising that they can beat Google on price. Generally I recommend the Nexus 4 without hesitation to anyone who wants a very capable phone for less than $400 and doesn't need a lot of storage. If you need more storage then the Galaxy S3 is more appealing. Also if you need to use a phone a lot then a Galaxy S3 with a power case works well in situations where the Nexus 4 performs poorly.

2 May 2013

Russ Allbery: Review: The System of the World

Review: The System of the World, by Neal Stephenson
Series: The Baroque Cycle #3
Publisher: William Morrow
Copyright: 2004
ISBN: 0-06-052387-5
Format: Hardcover
Pages: 892
This is the third book of the three-volume Baroque Cycle. I think you could, if you really wanted, read it without reading the previous volumes; Stephenson is certainly long-winded enough that you can pick up most of what's going on while you read. It's been a year since I read the second volume, and I only resorted to Wikipedia a couple of times to remember plot elements (and mostly from the first book). However, I wouldn't recommend starting here. Many of the character relationships, and most of the underpinning of the plot, are established in the previous volumes and given more significance by them. You would also miss The Confusion, which is the best book of the series, although none of this series rises to the level at which I'd recommend it except under specific circumstances. Quicksilver establishes the characters of Daniel Waterhouse, a fictional Puritan whose family was close to Cromwell and who became a friend to Isaac Newton in the days following the Restoration; Jack Shaftoe, a vagabond who wanders Europe in a sequence of improbable adventures; and Eliza, who becomes a friend to Leibniz and a spy for William of Orange. The Waterhouse sections are prominent in Quicksilver: full of the early history of the Royal Society, alchemy, and a small amount of politics. Of those three characters, Eliza is by far the most interesting, which meant that I was delighted when The Confusion dropped Waterhouse almost entirely and mixed Eliza's further story with more improbable but entertaining sea adventures of Jack Shaftoe. You will immediately sense my root problem with The System of the World when you hear that it is almost entirely about Daniel Waterhouse. While Eliza and Jack both appear, they play supporting roles at best, and Eliza's wonderful sharp intelligence and pragmatic survival skills are left out almost entirely.
Instead, this is a novel about Waterhouse's return to England after spending quite a bit of time in the American colonies working on calculating machines. He is almost immediately entangled in dangerous politics from multiple directions: the precarious national politics in England near the end of the reign of Queen Anne, Isaac Newton's attempts to maintain the currency of England as Master of the Mint, and a bombing attempt that may have been aimed at him, may have been aimed at Newton, and may have been aimed at someone else entirely. Much of the book consists of an extended investigation of this bombing plot, skullduggery involving counterfeiters, and attempts to use the currency and the Mint as part of the political conflict between Whigs and Tories, mixed in with attempts to construct a very early computer (this is Stephenson, after all). Leibniz and Eliza come into this only as confidants of the Hanoverians. All this may sound exciting, and there are parts of it that hold the attention. But this book sprawls as badly as Quicksilver did. There's just too much detail without either enough plot or enough clarity. Stephenson tries to make you feel, smell, and hear the streets of London and the concerns of an idiosyncratic group of semi-nobles during one of the more interesting junctures of British history, but he does that by nearly drowning you in it, and without providing enough high-level guidance. For most of the book, I felt like I was being given a tour of a house on my hands and knees with a magnifying glass. It's a bad sign when the reader of a historical novel is regularly resorting to Wikipedia, not to follow interesting tangents of supporting material, but to try to get a basic sense of the players and the politics involved because the author never explains them clearly. 
If you're more familiar with the details of British history than I am, and can more easily follow the casual intermixing of two or three forms of address for the same historical figure, you may not have that problem. But I think other structural issues remain, and one of the largest is Waterhouse himself. Jack Shaftoe, and particularly Eliza, are more interesting characters because they're characters. They're not always particularly believable, but they attack the world with panache and are constantly squirming into the center of things. Stephenson's portrayals of Newton, Leibniz, the Duke of Marlborough, Sophia of Hanover, Peter the Great, and the other historical figures who show up here are interesting for different reasons: Stephenson has history to draw on and elaborate, and it's fascinating to meet those people from a different angle than dry lists of accomplishments. History has a way of providing random details that are too bizarre to make up; Isaac Newton, for example, actually did disguise himself to infiltrate London criminal society in pursuit of counterfeiters while he was Master of the Mint! Waterhouse, for me, has none of these advantages. He is an invented character in whom I have no pre-existing interest. He drifts through events largely through personal connections, all of which seem to be almost accidental. He's welcome in the councils of the Royal Society because he's apparently a scientist, but the amount of actual science we see him doing is quite limited. His nonconformist background allies him squarely with the Whigs, but his actual position on religious matters seems much less set than the others around him. What he seems to want, more than anything else, is to help Leibniz in the development of a computer and to reconcile Newton and Leibniz. And he's not particularly effective at either. 
In short, he has little in the way of memorable character or dynamism, despite being the primary viewpoint character, and seems to exist mostly to know everyone and be everywhere that's important to the story. He feels like an authorial insertion more than a character. It's quite easy to believe that Stephenson himself would have loved to be in exactly the role and situation that Waterhouse finds himself in throughout the book, in the middle of the councils of the wise and powerful, in just the right position to watch the events of history. I can sympathize, but it doesn't make for engrossing reading. Novels live and die by the strength of their characters, particularly their protagonists; I want more than just a neutral viewpoint. The third major structural problem that I had with this book is that I think Stephenson buries his lede. After finishing it, I think this is a book with a point, a central premise around which all the events of the story turn, and which is the philosophical culmination of The Baroque Cycle as a whole. But Stephenson seems oddly unwilling to state that premise outright until the very end of the book. For the first half, one could be forgiven for thinking this is a story about alchemy and the oddly heavy gold that's been a part of the story since The Confusion, or perhaps about foundational but forgotten work on computation that preceded Babbage by a century. But those all turn out to be side stories, sometimes even without a proper conclusion. I appreciate honoring the intelligence of the reader, and I presume that Stephenson would like to guide the reader through the same process of realization that the characters go through, but I think he takes this much too far and fails to make the realization clear.
I'll therefore state what I believe is the premise outright, since I think it's a stronger book with this idea in mind: The System of the World is a continuation of the transformational economics shown in The Confusion into the realm of politics. Specifically, it's about the replacement of people with systems, about the journey towards Parliamentary supremacy, central banking, and the persistent state, and about the application of scientific principles of consistency and reproducibility to politics and economics (however fitfully and arbitrarily). Quicksilver was about the rise of science; The Confusion was, in retrospect, about the rise of economics; and The System of the World tries to be about the rise of technocratic modern politics, barely perceptible among the squabbles between Tories and Whigs. I think that's a fascinating premise, and I would have loved to read a book that tackles it head-on. That's a concept that is much more familiar from the late 19th and early 20th centuries in the context of Marxism, early socialism, technological utopianism, and similar attempted applications of scientific analysis to political and human behavior for the betterment of human civilization. Shifting that 200 years earlier and looking at a similar question from the perspective of the giants of the Enlightenment feels full of potential. There are moments when I think Stephenson captures the sense of a seismic shift in how economies are run, knowledge is established, and civilizations are knit together. But, most of the time, it just isn't clear. There's so much other stuff in this book, and in the whole series: so many false starts, digressions, abandoned plots, discarded characters, and awkward attempts at romance (as much as I like the characters, Stephenson's portrayal of the relationship between Eliza and Jack is simply ridiculous and not particularly funny) that the whole weight of the edifice crushes what I think is the core concept.
Stephenson is never going to be sparse. When you start a Stephenson novel, you know it's going to be full of chunks of partly digested encyclopedia and random research findings that may have nothing to do with the plot. But his best books (Snow Crash, The Diamond Age, even Cryptonomicon) have an underlying structure off of which all of those digressions are hung. You can see the bones beneath the flesh, and the creature they create is one you want to get to know. I'm not sure there are any bones here, and that may be the peril, for Stephenson, of writing historical fiction. I wonder if he felt that the structure of history would provide enough structure by itself that he could wrap a few plots around the outside of it and call it good. If so, it didn't work, at least for me. A lot of things happen. Some of them are even exciting and tense. A lot of people meet, interact, and show off their views of the world. A great deal of history, research, and sense of place is described in painstaking detail. But at the end of the book, I felt like I had to reach for some sort of point and try to retrofit it to the story. Lots happened, but there wasn't a novel. And that makes it quite hard to get enthused by the book. If you adored Quicksilver, I suspect you will also like this. I think they're the most similar. If, like I did, you thought The Confusion was a significant step up in enjoyment in the series and were hoping the trend would continue, I'm sad to report that it didn't. If you were considering whether to read the whole series and were waiting to see what I thought of the end, my advice is to give The Baroque Cycle a pass unless you absolutely love Stephenson's digressions, don't care if they're about history instead of current technology, and cannot live without 3,000 pages of them.
It's not that they're bad books, but they're very long books, they take a significant investment of time and attention, and I think that, for most readers, there are other books that would repay a similar investment with more enjoyment. Rating: 5 out of 10

30 March 2013

Steve Kemp: Time passes, Thorin sits down and starts singing about gold.

This weekend I have mostly been reading "Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time". In modern times we divide the earth up into rings of lines, latitude and longitude, as Wikipedia will explain. Finding your latitude is easy; finding your longitude is a difficult process, and it was vitally important when people started to sail large distances. The book contains lots of stories of sailors being suddenly surprised by the appearance of land - because they'd misjudged their position. Having four ships, containing garlic, pepper, and other goods of value exceeding the total wealth of the UK, sink all at once was a major blow. Not to mention the large number of sailors who lost their lives. There were several solutions proposed, involving steady hands and telescopes, etc, but the book mostly discusses John Harrison and his use of watches/clocks.
John Harrison was featured in Only Fools & Horses, as the designer of the watch that made Delboy & Rodney millionaires ("Time on our hands").
The idea of using a clock is that you take one with you, set to the time of your departure location. Using that clock you can compare the time to the local time, by viewing the sun, etc. Calculating the difference between the two times allows you to see how far away, in degrees, you are from your port, and thus how far you've traveled. Until Harrison came along, clocks weren't accurate enough to keep time at sea. His clocks would lose a second a month; until then, clocks might lose 15 minutes a day. (With more variation depending on temperature, location, and pressure. Clearly things like pendulum clocks weren't suitable for rocking ships either.) All in all this book was a great read; there were mentions of Galileo, Newton, and similar folk who we've all heard of. There was angst, drama, deceit, and some stunning craftsmanship. Harrison was a woodworker, and he made his clocks out of wood (+brass where necessary), choosing fast/slow-grown wood depending on purpose, and using wood that secreted oils naturally, which allowed him to avoid lubrication - which improved accuracy, as lubricants tend to thin/thicken when temperature/pressure change. A lovely read, thank you very much. In other news I received several patches for my templer static-site generator, and this has resulted in much improvement. I've also started using Test::Exception now, and am slowly updating all my perl code to use this.
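Coming back to the navigation method itself, the arithmetic is simple: the Earth rotates 360 degrees in 24 hours, so each hour of difference between the home-port clock and local solar time is 15 degrees of longitude. A minimal sketch (the function name and sign convention are mine, not from the book):

```python
def longitude_offset(home_time_h, local_solar_time_h):
    """Degrees of longitude between the ship and its home port.

    home_time_h: reading of the on-board chronometer (set at departure), in hours.
    local_solar_time_h: local time deduced from the sun, in hours.
    The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour;
    here a positive result means the ship is west of its home port.
    """
    return (home_time_h - local_solar_time_h) * 15.0

# Chronometer reads 14:00 at local noon: the ship is 30 degrees west of port.
print(longitude_offset(14.0, 12.0))
```

This also shows why accuracy mattered so much: a clock that drifts 15 minutes a day is off by a quarter hour, i.e. nearly 4 degrees of longitude, after a single day at sea.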

14 March 2013

Iustin Pop: Types as control flow constructs

A bug I've recently seen in production code gave me the idea for this blog post. Probably smarter people already wrote better things on this topic, so this is mostly for myself, to better summarise my own thoughts. Corrections are welcome; please leave a comment! Let's say we have a somewhat standard API in Python or C++. Signalling failures to initialise the object can be done in two ways: either by raising an exception, or by returning a null/None result. There are advantages and disadvantages to both. The None model can create latent bugs, for example in the following code:
ok = True
for arg in input_list:
  t = init_t(arg)
  if t is None:
    ok = False
    continue
  t.process()
return ok
The presence of the continue statement there is critical. Removing it, or moving it after some other statements which work with t will result in a bug. So using value-based returns forces us to introduce (manually) control points, without having the possibility to validate the model by the compiler (e.g. in C++). So it would seem that this kind of latent bugs pushes us to use the exception model, with its drawbacks. Let's look at how this interface would be implemented in (IMHO) idiomatic Haskell (where the a and b types represent the input and output types):
initT :: a -> Maybe T
processT :: T -> b
my_fn arg =
  case initT arg of
    Nothing -> ... -- handle failure
    Just v  -> processT v
Yes, this can be written better, but that's beside the main point. The main point is that by introducing a wrapper type around our main type (T), we are forced via the type system to handle the failure case. We can't simply pass the result of initT to a function which accepts T, because it won't type check. And, no matter what we do with the result value, there are no exceptions involved here, so we only have to think about types/values, and not control flow changes. In effect, types become automatically-validated control-flow instructions. Or so it looks to me. So using types properly, we can avoid the exception-vs-return-value debate, and have all advantages without the disadvantages of either. If that is the case, why isn't this technique used more in other languages? At least in statically typed languages, it would be possible to implement it (I believe), via a C++ template, for example. In Python, you can't actually apply it, as there's no static way of enforcing the correct decapsulation. I was very saddened to see that Google's Go language, which is quite recent, has many examples where initialisation functions return a tuple (err, value), separating the actual value from the error, making it no safer than Python. It might be that polymorphic types are not as easy to work with, or it might be the lack of pattern matching. In any case, I don't think this was the last time I've seen a null pointer dereference (or the equivalent AttributeError: 'NoneType' object has no attribute ...), sadly. You can even go further in Haskell and introduce more control flow structure via wrapper types. Please bear with another contrived example: an HTML form that gets some input data from the user, validates it, saves it to the database, and echoes it back to the user. Without types, you would have to perform these steps manually, and ensure they are kept in the correct order when doing future modifications.
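For what it's worth, the convention can be approximated in Python with a hand-rolled wrapper, though nothing statically enforces it. This sketch (the class and all names are mine, purely illustrative) forces callers to supply both branches before they can reach the value, mirroring the Haskell case expression:

```python
# A minimal Maybe-style wrapper in Python (illustrative only; not from the post).
# Unlike Haskell, nothing stops a caller from ignoring the wrapper, so the
# "forced handling" is by convention, not enforced by a type checker.
class Maybe:
    def __init__(self, value=None, is_just=False):
        self._value = value
        self._is_just = is_just

    @classmethod
    def just(cls, value):
        return cls(value, True)

    @classmethod
    def nothing(cls):
        return cls()

    def case(self, on_nothing, on_just):
        # The only way to reach the wrapped value: both branches must be
        # supplied, like `case ... of Nothing -> ...; Just v -> ...`.
        return on_just(self._value) if self._is_just else on_nothing()

def init_t(arg):
    # Hypothetical initialiser: fails for negative input.
    return Maybe.just(arg * 2) if arg >= 0 else Maybe.nothing()

results = [init_t(a).case(lambda: "failed", lambda v: v) for a in [3, -1, 5]]
print(results)  # [6, 'failed', 10]
```

The difference from Haskell remains exactly the one the post makes: a Python caller can still poke at `_value` directly, so the discipline is social rather than compiler-checked.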
With types, you only have to design the types correctly and export only smart contructors (but not the plain ones):
module Foo ( ValidatedValue
           , validateValue
           , RecordId
           , CommittedValue
           , commitValue
           , buildFeedbackForm
           ) where
data ValidatedValue a = ValidatedValue a
validateValue :: a -> Maybe (ValidatedValue a)
data RecordId =  
data CommittedValue a = CommittedValue a RecordId
commitValue :: ValidatedValue a -> CommittedValue a
buildFeedbackForm :: CommittedValue a -> HTMLDoc
From these types, it follows more or less that the only correct workflow is:
  1. get a value from the user
  2. validate it
  3. commit it, getting a transaction/record ID
  4. send the response to the user
In other words:
handleFeedbackForm = fmap (buildFeedbackForm . commitValue) . validateValue
There are still issues here, e.g. the type a is completely hidden behind the wrapper types, and we can't recover some basic properties (even if we use newtype, unless we use GeneralizedNewtypeDeriving). But it does offer a way to improve control flow safety. And that is my 0.02 currency unit for today.
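For comparison, here is a rough Python analogue of the feedback-form pipeline (my sketch, not from the post). The wrapper classes only fail at runtime rather than at compile time, which is exactly the weakness discussed above:

```python
# Illustrative Python analogue of the Haskell workflow (all names are mine).
# Each step accepts only the wrapper produced by the previous step, so calling
# them out of order fails loudly at runtime; Haskell would reject it at
# compile time instead.
class ValidatedValue:
    def __init__(self, value):
        self.value = value

class CommittedValue:
    def __init__(self, value, record_id):
        self.value = value
        self.record_id = record_id

def validate_value(raw):
    # Returns None on failure, like the Maybe in validateValue.
    stripped = raw.strip()
    return ValidatedValue(stripped) if stripped else None

def commit_value(validated):
    assert isinstance(validated, ValidatedValue), "must validate first"
    # Pretend database commit; the record id is a stand-in.
    return CommittedValue(validated.value, record_id=42)

def build_feedback_form(committed):
    assert isinstance(committed, CommittedValue), "must commit first"
    return f"<p>Saved {committed.value!r} as record {committed.record_id}</p>"

html = build_feedback_form(commit_value(validate_value("  hello ")))
print(html)
```

Passing a raw string straight to `build_feedback_form` trips the assertion, which is the runtime shadow of the Haskell type error.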

Russ Allbery: Review: Higher Order Perl

Review: Higher Order Perl, by Mark Jason Dominus
Publisher: Morgan Kaufmann
Copyright: 2005
ISBN: 1-55860-701-3
Format: Trade paperback
Pages: 563
Disclaimer: The author and I both used to be (technically still are, but the newsgroup is dead) moderators of comp.lang.perl.moderated, and I knew him electronically from answering questions on the Perl Usenet newsgroups many years ago. Perl is, in general construction, a typical imperative language in the style of C, with object-oriented capabilities bolted (somewhat awkwardly) on. Most Perl code looks remarkably like C code with more powerful built-in data types and no memory management. A minority of Perl programs embrace object-oriented techniques and remind one of C++ or Java (or Python). But Perl also has powerful capabilities borrowed from functional programming languages and the Lisp tradition, even though many Perl programmers rarely use them and aren't familiar with them. Higher Order Perl focuses on those capabilities and how to use them effectively, starting with callbacks and uses of function pointers and moving into recursion and analysis of recursive functions, iterators, currying, and parsers. It concludes with a fully-worked example of constructing a declarative programming system using the techniques developed earlier in the book. Higher Order Perl is a programming book for an intermediate to advanced Perl programmer, already a rare topic. This is not a look at how to apply Perl to another area of programming, or a cookbook of techniques. It's an attempt to help a Perl programmer think about and use the language differently, and it follows through on that. It's refreshing and rare to read a programming technique book that's targeted at the practicing expert. (I think the size of the audience is often too small for publishers to target it.) Even the most experienced Perl programmer is probably going to learn something fundamental from this book, not just interesting trivia around the edges. 
On the negative side, though, I found the book a bit too focused on computer science and mathematics problems, particularly in the choice of examples and sample scripts. (And I say this as someone who has a master's degree in computer science with a software theory focus.) This is a bit hard to avoid for topics like recursion, where problems like computing Fibonacci numbers are classic, but throughout the book I struggled to focus past feelings of "but why would I ever do that?" The extended discussion of the Newton-Raphson method is the most memorable; I'm not sure that's a problem many Perl programmers would have and need a higher-order technique for. There's also a lot of discussion of recursion analysis and transformations between recursive and iterative expressions of problems, which is ground I remember well from my degree but which I've rarely had any practical use for in day-to-day programming. This is not a uniform problem, though, just a tendency. There are some great examples that I think are more in the mainstream of Perl problems, including a reinvention of File::Find that shows how to add more flexibility, a web spider, and a great discussion on how to construct a conventional queryable database on top of a flat file (a topic that's near and dear to my heart). And then there's the chapter on infinite streams. Dominus presents a method of using closures to create a version of an iterator: a sort of infinite linked list that can keep generating additional elements. He presents this in several contexts, but one of them is log parsing, and that turned out to be exactly the solution that I needed for a problem I was working on while reading this book. I've written about this elsewhere, but this was a wonderful idea that helped me think about both Perl and a major application area in a completely new way, and I wrote an application using that knowledge that would have taken me much longer using different techniques and would have been much less fun.
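The closure-based stream idea translates readily to other languages. As a rough illustration (mine, not the book's Perl code), a stream can be represented as a pair of a head value and a zero-argument closure that produces the tail on demand:

```python
# Sketch of a closure-based infinite stream, in the spirit of the technique
# described above (this is my illustration, not Dominus's code).
# A stream is a pair: (head, zero-argument function producing the tail).
def integers_from(n):
    return (n, lambda: integers_from(n + 1))

def stream_map(f, stream):
    # Lazily apply f to every element; the tail is only built when forced.
    head, tail = stream
    return (f(head), lambda: stream_map(f, tail()))

def take(n, stream):
    # Force the first n elements into an ordinary list.
    out = []
    while n > 0:
        head, tail = stream
        out.append(head)
        stream = tail()
        n -= 1
    return out

squares = stream_map(lambda x: x * x, integers_from(1))
print(take(5, squares))  # [1, 4, 9, 16, 25]
```

The log-parsing application follows the same shape: the head is the most recent parsed line, and the tail closure reads the next one only when asked.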
So, for me, this chapter was more than worth the entire book, and blankets the rest of it in a delighted feeling. Other people may or may not have that experience. I think it will depend on whether one of these techniques hits home for you the way that one did for me. This is a book with some idiosyncrasies, and some sections that may drag. Having lots of fully-worked examples is a major plus, but some of those are so comprehensive that one can get a bit lost in the details. That particularly hit me with the last couple of chapters on parsing and on the example declarative programming application. Quite a bit of that text involved reinventing a recursive descent parser (another very computer science example), when I'm not quite sure why one wouldn't use one of the existing parser generators on CPAN for practical purposes, and inventing a lot of new syntax to try to make parser Perl code more readable. But one certainly can't complain that Dominus omits necessary details, and there is some appeal in watching an experienced programmer work through a problem from analysis to implementation. But, despite the idiosyncrasies, I recommend this book to any experienced Perl programmer who wants to expand their view of the language. The techniques here (closures, higher-order functions, iterators and streams, and formal parsers) are powerful and underused in the Perl community. You may want to pick and choose which sections you pay close attention to, but I think everyone will find something of interest here. In the weeks since I read this book, my opinion of it has only grown. And I can't tell you just how much I loved the infinite stream concept. Rating: 8 out of 10

27 February 2013

Neil McGovern: Let's get physical

Two recent announcements have been made about how it's viewed that people should interact with each other. Both, in my opinion, are misguided. Firstly, Yahoo's chief executive, Marissa Mayer has announced that she's banning staff from remote working. The idea behind this announcement is simple - that "some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings". This is absolutely spot on, but the next sentence in the leaked internal memo is more problematic: "Speed and quality are often sacrificed when we work from home".
Don't get me wrong, working remotely has some special challenges, but they are by no means insurmountable. There's lots of tips for people working remotely which will turn into a future blog post at some point (I'm running an internal training about how to do it in a couple of weeks), but three simple rules are stay connected, set a routine and take care of yourself. The second decision is one by Ubuntu to move Ubuntu Developer Summits to a purely online meeting, ditching the physical meeting. This misses the point of conferences. If we simply wanted to listen to talks and presentations, why meet up at all? Webcasts have been around for the last 20 years, and yet conferences still exist. The most important part of a conference isn't the talks, it's the "hallway track" - it's the ability for people to meet up, chat and socialise. Be this an impromptu meeting in the corridor, or over a few nice beers. Without this component, why schedule a time at all? Simply publish a list of talks over the coming 3 months, and anyone can pick the best time for them to attend. At Collabora, many of our engineers work remotely. One of the perks we offer is the ability to attend conferences, and to "touch base" and visit and work from one of the offices. It is important to recognise the importance of collaboration physically and it shouldn't be discounted the way Ubuntu has done. But it should not be seen as a silver bullet to an organisation, like Yahoo seem to be implying. Both extremes are wrong, and a balance must be struck to ensure the best outcome for productivity and innovation. (Title from an iconic 80s song)

14 February 2013

Russell Coker: Conversion of Video Files

To convert video files between formats I use Makefiles; this means I can run make -j2 on my dual-core server to get both cores going at once. avconv uses 8 threads for its computation and I've seen it take up to 190% CPU time for brief periods of time, but overall it seems to average a lot less; if nothing else then running two copies at once allows one to calculate while the other is waiting for disk IO. Here is a basic Makefile to generate a subdirectory full of mp4 files from a directory full of flv files. I used to use this to convert my Youtube music archive for my Android devices until I installed MX Player which can play every type of video file you can imagine [1]. I'll probably encounter some situation where this script becomes necessary again so I keep it around. It's also a very simple example of how to run a batch conversion of video files.

MP4S:=$(shell for n in *.flv ; do echo $$n | sed -e s/^/mp4\\// -e s/flv$$/mp4/ ; done)

all: $(MP4S)

mp4/%.mp4: %.flv
avconv -i $< -strict experimental -b $$(~/bin/video-encoding-rate $<) $@ > /dev/null

Here is a more complex Makefile. I use it on my directory of big videos (more than 1280*720 resolution) and it scales them down for my favorite Android devices (Samsung Galaxy S3, Samsung Galaxy S, and Sony Ericsson Xperia X10). My Galaxy S3 can't play a FullHD version of Gangnam Style without going slow so I need to do this even for the fastest phones. This makefile generates three subdirectories of mp4 files for the three devices.

S3MP4S:=$(shell for n in *.mp4 ; do echo $$n | sed -e s/^/s3\\// -e s/.mp4$$/-s3.mp4/ -e s/.flv$$/-s3.mp4/; done)
XPERIAMP4S:=$(shell for n in *.mp4 ; do echo $$n | sed -e s/^/xperiax10\\// -e s/.mp4$$/-xperiax10.mp4/ -e s/.flv$$/-xperiax10.mp4/; done)
SMP4S:=$(shell for n in *.mp4 ; do echo $$n | sed -e s/^/galaxys\\// -e s/.mp4$$/-galaxys.mp4/ -e s/.flv$$/-galaxys.mp4/; done)

all: $(S3MP4S) $(XPERIAMP4S) $(SMP4S)

s3/%-s3.mp4: %.mp4
avconv -i $< -strict experimental -s $(shell ~/bin/video-scale-resolution 1280 720 $<) $@ > /dev/null

galaxys/%-galaxys.mp4: %.mp4
avconv -i $< -strict experimental -s $(shell ~/bin/video-scale-resolution 800 480 $<) $@ > /dev/null

xperiax10/%-xperiax10.mp4: %.mp4
avconv -i $< -strict experimental -s $(shell ~/bin/video-scale-resolution 854 480 $<) $@ > /dev/null

The following script is used by the above Makefile to determine the resolution to use. Some Youtube videos have unusual combinations of width and height (Linkin Park seems to like doing this) so I scale them down so that the video fits the phone in one dimension and the other dimension is scaled appropriately. This requires a script from the Mplayer package and expects it to be in the location that it's used in the Debian package; for distributions other than Debian a minor change will be required.

#!/bin/bash
set -e
OUT_VIDEO_WIDTH=$1
OUT_VIDEO_HEIGHT=$2

eval $(/usr/share/mplayer/midentify.sh $3)
XMULT=$(echo $ID_VIDEO_WIDTH*100/$OUT_VIDEO_WIDTH | bc)
YMULT=$(echo $ID_VIDEO_HEIGHT*100/$OUT_VIDEO_HEIGHT | bc)
if [ $XMULT -gt $YMULT ]; then
NEWX=$OUT_VIDEO_WIDTH
NEWY=$(echo $OUT_VIDEO_WIDTH*$ID_VIDEO_HEIGHT/$ID_VIDEO_WIDTH/2*2 | bc)
else
NEWX=$(echo $OUT_VIDEO_HEIGHT*$ID_VIDEO_WIDTH/$ID_VIDEO_HEIGHT/2*2 | bc)
NEWY=$OUT_VIDEO_HEIGHT
fi
echo $ NEWX x$ NEWY  becomes echo ${NEWX}x${NEWY} with the braces restored:

echo ${NEWX}x${NEWY}

Note that I can't preserve TAB characters in a blog post, so those Makefiles won't work until you replace strings of 8 spaces with a TAB character.
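If you want to sanity-check the scaling arithmetic without a video file handy, the same computation can be expressed in a few lines of Python (my translation of the shell logic above, assuming the same fit-one-dimension, round-down-to-even behaviour):

```python
def scale_resolution(out_w, out_h, in_w, in_h):
    """Fit (in_w, in_h) into (out_w, out_h) preserving aspect ratio.

    Mirrors the shell script: whichever dimension overflows more is pinned
    to the output size, the other is scaled and truncated to an even number
    (video encoders generally require even dimensions).
    """
    if in_w * out_h > in_h * out_w:  # width is the limiting dimension
        new_w = out_w
        new_h = out_w * in_h // in_w // 2 * 2
    else:                            # height is the limiting dimension
        new_w = out_h * in_w // in_h // 2 * 2
        new_h = out_h
    return f"{new_w}x{new_h}"

print(scale_resolution(1280, 720, 1920, 1080))  # 1280x720
print(scale_resolution(1280, 720, 1920, 800))   # 1280x532
```

The second example shows why the even-rounding matters: a 1920*800 source scales to 533.33 lines, which gets truncated to 532 so avconv will accept it.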

29 December 2012

Russell Coker: Samsung Galaxy Camera a Quick Review

I recently had a chance to briefly play with the new Samsung Galaxy Camera [1]. The Galaxy Camera is an Android device with a 4.8" display (the same size as the Samsung Galaxy S3) that has a fairly capable camera (i.e. nothing like a typical phone camera). It runs Android 4.1 (Jelly Bean) and the camera has 21x zoom with a 16 megapixel sensor.

Camera Features

It seems that professional photographers are often annoyed when they see someone with a DSLR set in auto mode. It's widely regarded that auto mode is a waste of a good camera, although the better lenses used with DSLRs will usually give a better result than any compact camera even when it's in auto mode. The problem is that photography is quite complex; in an earlier post about digital cameras I summarised some of the technical issues related to cameras and even without any great detail it became a complex issue [2]. The Galaxy Camera has a reasonably friendly GUI for changing camera settings which even includes help on some of the terms. I expect that most people who use it will end up using most of the features, which could make it a good training camera for someone who is going to move to a DSLR. A DSLR version of the Galaxy Camera could also be an interesting product. The camera also has modes such as "Waterfall" and "Panorama"; hopefully the settings for those would be exposed to the user so they could devise their own similar groups of settings. I've seen the phone criticised for the lack of physical controls, as the expert mode in software is inherently slower than manually turning dials on a DSLR. But it seems obvious to me that anyone who knows how to use the controls manually should be using a DSLR or bridge camera, and anyone who doesn't already know how to do such things will be better suited by the software controls. It supports 120fps video at 720*480 resolution (with a file format stating that it's 30fps to give 1/4 speed) which could be useful.
I used to have an LG Viewty smart-phone that did 120fps video but the resolution was too low to be useful. 720*480 is enough resolution to see some detail and has the potential for some interesting video; one use that I've heard of is filming athletes to allow them to analyse their performance in slow motion. It also does 60fps video at 720p (1280*720) resolution. One down-side to the device is that the lens cover doesn't seem particularly sturdy. It's quite OK for a device that will be stored in a camera case but not so good for a device that will be used as a tablet. I didn't get to poke at the lens cover (people don't like it if you mess with their Christmas presents) but its design is a couple of thin flaps that automatically retract when the camera is enabled, which looks quite weak. I'd like to see something solid which doesn't look like it will slide back if the device is treated as roughly as a phone. I think that the lack of a solid lens cover could be the one factor that prevents it from being used as a replacement for a smart phone. Apart from that, a Galaxy Camera and a cheap GSM phone could perform all the functions of a high end phone such as the Galaxy S3 while also producing great pictures. It would probably make sense for retailers to bundle a cheap phone with a Galaxy Camera for this purpose.

Tablet Features

The device boasts WiFi Direct to allow multiple cameras and phones to talk to each other without a regular WiFi access point [3]. I didn't test this and I don't think it would be particularly useful to me, but it seems like a handy feature for less technical users. It can connect to the Internet via Wifi or 3G, supports automatic upload of pictures (it comes with Dropbox support by default like the Galaxy S3), and has a suite of photo and video editing software.
I don't expect that any photo editing software that runs on an Android device would be much good (I think that you really need fine cursor control with a mouse and a high resolution screen), but it would probably be handy for sending out a first draft of photos. Most Android apps should just work, the exceptions being apps that rely on a camera that faces the user or full phone functionality. So the Galaxy Camera can do almost anything that an Android phone or tablet can do.

Value

The RRP for the Galaxy Camera is $599; that puts it in the same price range as a DSLR with a single lens. While that's not a bad price when compared to smart-phones (it's cheaper than the LTE version of the Galaxy S3 phone) it's still quite expensive for a camera that's not a DSLR. Fortunately Kogan is selling it for $469 and has free shipping at the moment [4]. This still makes it more expensive than some bridge cameras which probably have significantly better optical features, but in terms of what the typical user can do with a camera the Galaxy Camera will probably give a much better result. The sensor in the Galaxy Camera is smaller than that in the Nokia 808 PureView [5] (1/2.3" vs 1/1.2") so the Nokia PureView should be able to take better pictures in some situations. Unfortunately the Nokia 808 doesn't run Android; I'd probably own one if it did. Some of the reviews are rather harsh; the Verge has a harsh but fair review by Aaron Souppouris which makes a number of negative comparisons to cheaper cameras [6]. I really recommend reading Aaron's review as there's a lot of good information there. But I think that Aaron is missing some things; for example he criticises the inclusion of ebook software by saying that he wouldn't read a book on a camera. But the device is a small tablet computer which also has a compact camera included. I can easily imagine someone reading a book or playing Angry Birds on their camera/tablet while in transit to where they will photograph something.
I can also imagine a Galaxy Camera being a valuable tool for a journalist who wants to be able to write short articles and upload pictures and video when out of the office. Aaron concludes by suggesting that the Galaxy Camera is a $200 camera with $300 of editing features. I think of it as $200 in camera hardware with software that allows less skilled users to take full advantage of the hardware and the ability to do all the software/Internet things that you would do on a $450+ smart-phone.

Would I Buy One?

No. The Galaxy Camera is among other things a great toy; I'd love to have one to play with but I can't spare $469 on one. Part of the reason for this is that my wife just bought a DSLR and is getting lessons from a professional photographer, so I really won't get better pictures from a Galaxy Camera. The DSLR on auto mode will allow me to take pictures that will usually be better than a Galaxy Camera can achieve (sometimes you just can't beat a good lens). For more demanding pictures my wife can tweak the DSLR. The 120fps video is a really nice feature; I don't know if my wife's DSLR can do that, but it's a toy feature, not something I really need. I've just bought a Galaxy S3 which is a great little tablet computer (most of the time it won't be used for phone calls). I don't need another 4.8" tablet so a significant part of the use of the Galaxy Camera doesn't apply to me. I recommend the Galaxy Camera to anyone who wants to take good photos but can't get a DSLR and lessons on how to use it properly. But if you would rather get a 35mm camera with interchangeable lenses that runs Android then it might be worth waiting. I expect that the Galaxy Camera will be a great success in the market (it's something you will love when you see it). That will drive the development of similar products; if Samsung doesn't release a 35mm Android camera soon then someone else will (for example Sony develops both high end cameras and Android phones).
If my wife didn't have a DSLR then I'd probably have bought a Galaxy Camera already. I will recommend it to my parents and many other people I know who want an OK camera and can benefit from a tablet, but don't know how to use a DSLR properly (or don't want to carry a bulky camera).

16 December 2012

John Goerzen: The world is still a good place

At times like these, it is easy to think of the world as a cold, evil place. Perhaps in some ways, it is. I saw this quote from Fred Rogers floating around today:
When I was a boy and I would see scary things in the news, my mother would say to me, "Look for the helpers. You will always find people who are helping." To this day, especially in times of disaster, I remember my mother's words, and I am always comforted by realizing that there are still so many helpers, so many caring people in this world.
Sometimes I think that Fred Rogers' wisdom is so often under-appreciated. What he says is true, very true. I know what it's like to fear for my child's life. And sometimes the shoe has been on the other foot, when I have been one of the helpers. Many of you know these last few months have been the most difficult in my life. And despite having gone through the deaths of three relatives, nothing has quite compared to this. I cannot even begin to express my gratitude for all the care, compassion, and love that has come my way and towards the boys. People I barely knew before are now close friends. Random strangers have offered kindness and support. I have never before needed to be cared for like that, and in some ways perhaps it was hard to let myself be cared for. But I did, and all that caring and generosity has made an incredible difference in my life. Most of us don't see our pain on CNN or BBC, but that doesn't mean it's less real. And it doesn't mean there's nobody that cares. Open up to others, let them care for you. Things can and do get better. The people in Newtown did nothing to deserve this. No matter what evidence is found, they will never get an adequate answer to "why?" Children have been frightened, families torn apart, lives ended, for no reason at all. But they will survive the terrible pain. In time, they will find happiness again. And they will feel love and compassion from people around the world, something to sustain them in their grief. I am certain of this. I recently read this quote, part of a story about a dying cancer patient: "Don't forget that it doesn't take much to make someone's day." Yes, the world is still a good place.

12 August 2012

John Goerzen: A Verbose, Hands-On Nexus 7 Review

Some of you may have noticed that I am not a concise author. Perhaps that has something to do with the fact that I am not a concise reader. I like facts, details, and lots of them. So as a recent Nexus 7 purchaser, here you go.

Genesis

I've long used Android devices, and last year had a company-issued Motorola Xoom, which was the first Google Experience tablet with Honeycomb. That tablet has specs roughly similar to iPads; its 10.1" screen was the same, the 1280x800 screen was better than the iPad available at the time, and its 730g weight identical to the early iPads (though 10% higher than the current iPad). I lost access to it when I changed jobs, and had been without a tablet until recently. Other devices I own are the Galaxy Nexus, sporting a 4.65" screen, and what's now called the Kindle Keyboard, with an eInk screen. I had been somewhat interested in the Kindle Fire, but the closed nature and limited capability of the system kept me away. The Nexus 7 reviews, however, were stunning, as was the price: $200 for a great tablet. I wound up buying the $250 16GB model. But not until after I spent a great deal of time thinking about size.

Physical Size

My main concern was that the Nexus 7 would be too small to be useful. I had never been particularly pleased with my input speed on the Xoom. I tried to touch type on it, but was just never fast enough to surpass frustratingly slow. I have long been a fast and accurate typist on a keyboard, well over 100 words per minute, and it is frustrating when my fingers can't maintain that speed. I figured the situation would be even worse on the Nexus 7, given its smaller size. I also found the Xoom to sometimes feel a little small with the 10" screen, and was concerned about that as well. And finally, 7" doesn't sound all that much larger than 4.65". However, having actually had the Nexus 7 for a little while now, I'm very pleased with the size, and may even prefer it.
The 10" tablets are just too big and heavy to comfortably hold in one hand, and I've realized that part of my Xoom frustration was the fact that I had to set it down and prop it up for anything beyond very brief use. At 340g, the Nexus 7 is less than half the weight of the Xoom or iPad, and it makes a huge difference. While still nowhere near where I'd be with a keyboard, two-thumb typing in portrait mode, or even something approaching touch typing in landscape mode, is possible on the Nexus 7. The screen size hasn't been a bother, at all. This may be due to the fact that it's higher resolution (it's 1280x800 like the Xoom, but those pixels are crammed into only 7"). I think it's also partly due to the fact that the browser in Jelly Bean is significantly better than the one in Honeycomb, and perhaps that websites are better at tablet-friendliness, too. Overall, the Nexus 7 feels a lot farther from the size of a laptop than did the Xoom, and as such is more prone to come with me in lots of situations, I think. It works reasonably well with foldable Bluetooth keyboards, so when thinking about a laptop replacement or alternative, that might be the way I go. A Bluetooth mouse also works with it, though I found it didn't provide near the utility that a BT keyboard does.

Display

The display is both amazing and disappointing. Browse some photos and some of them will show up in eye-popping clarity. Websites display fine. But the screen can also take on a washed-out appearance at times. I am notoriously picky about my displays, and this bothered me enough that I researched it. Analysis has shown that poor firmware calibration has led to the compression of highlights, which mirrors what I was seeing. I am mostly used to it by now, but it's a disappointment. Most of the time, though, the screen is excellent. In comparison to my eInk Kindle, however, I don't think any tablet will ever be as good for book reading.
The eInk screen truly is easier on the eyes, and the reflection of overhead lights on the Nexus 7 display can be distracting at first. I have had occasional issues with it not registering touches properly. This is always cleared up by touching the power button to put the unit to sleep, then waking it back up.

Other Hardware

There are three hardware buttons: power and volume up/down. Physically, the device fits my hand well, though I might wish it was a little lighter like my Kindle. Charging is accomplished via high-power 2A micro-USB, and there is, of course, a headphone port. There is no alert LED like my Galaxy Nexus has, and no vibration feature. The speaker is on the back, and the microphones along the left side, a position which, it appears, many Nexus 7 cases are blocking.

Battery Life

I am astonished at how good this device is battery-wise, especially compared to the battery disaster that is the Galaxy Nexus. Google claims the Nexus 7 can survive 8 hours of solid screen-on use, and I don't doubt it. Mine's never gotten low enough to get a solid measurement.

Wifi

The wifi works well, as far as it goes. The wifi doesn't support 802.11n in 5GHz, which although somewhat common for devices like this, is a bit of a disappointment.

Software

The big story about the Nexus 7 is Jelly Bean. I had used Honeycomb on the Xoom, and Ice Cream Sandwich on my Galaxy Nexus, so I'm familiar with its predecessors. Let's take a look.

Project Butter

Much has been made of Project Butter, Google's attempt to optimize Android to improve its responsiveness and the smoothness of things like scrolling. I can say they have done quite well. This device is so smooth you don't notice how smooth it is. It wasn't until I had been using it for a bit that I really noticed. That's a job well-done.

Chrome

The browser in Jelly Bean is now called Chrome. I am not sure if this is just marketing or not.
It doesn't really feel all that different from previous versions of the Android browser, and the changes have been along the lines of incremental changes Google has introduced before. One of the very best new features happens when you touch a link that is close to other links on a page. Rather than getting a pretty much random page, Chrome pops up a partial-screen zoom box showing the part of the page near where your finger touched. With everything showing up huge, it is now easy to touch the precise link you want. Do so, and the box goes away and your page loads. I am amazed at how much improvement this one change brings. Compared to the ICS browser, bookmarks can be brought up more quickly, and the tab interface is nicer. All is not perfect in the land of Chrome, however. It contains several regressions from the Ice Cream Sandwich browser. I have two complaints about bookmarks. One is that previous versions of the browser would show thumbnails of sites in the bookmark viewer. This was a nice navigation aid. Chrome shows only favorites icons, if one is available, or a generic icon if not. Also, the bookmarks synced with other Android devices are called, confusingly enough, "Desktop Bookmarks" now, and require an extra tap to access. I have had occasional trouble with Chrome not wanting to prompt for credentials for servers on my LAN that use HTTP auth. Chrome has also removed the ICS browser's ability to save a page, including all its elements, for offline viewing. That was good for things like an airline checkin screen and such. I have no idea why Chrome removed this. I installed the Firefox Beta for Android, which also doesn't have the offline save feature, but it does have a save-to-PDF feature.

Soft Keyboard

The on-screen soft keyboard in Jelly Bean is a significant regression from previous versions of Android. My biggest complaint is the lack of visual feedback for keypresses.
On earlier versions of Android, when you push a key, you'll see an image of it pop up on the screen, offset a little from the location of the key itself. In JB, all that happens is that the key itself changes colors. Not very helpful, because it is under your finger at the time. This small thing frustrates me to no end. The keyboard in ICS introduced some nice features as well, mainly long-presses as shortcuts to other features. For instance, you can long-press a key on the top row of letters to get numerals without having to switch to the number mode. Similarly, long-press the period and you get other common punctuation. The JB keyboard removed both of those features. Thankfully, in the Market, there is an app called Ice Cream Sandwich Keyboard. It appears geared towards people running earlier versions of Android. Sadly, it is also a step up over what we have in JB.

Google Now and Voice Recognition

The other main headline feature in Jelly Bean is Google Now. A somewhat-competitor to Apple's Siri, Google Now takes a bit of a different approach than Apple. It is said that Siri is better than Google Now at responding to queries, but Google Now is better at predicting what you want to know before you ever ask. I haven't ever used Siri, but I would buy that explanation. Google Now is available with a swipe up from the bottom of the screen, or with a single touch from any Home screen. Bring it up and it shows you current information about what it thinks you need to know. Examples include weather and forecast information, time to get home or to work from your current location, alerts that you need to leave soon to get to a certain place on time, flight schedules, sports scores, etc. Google Now has been mostly a gimmick to me, but that may be because I fall outside its target demographic in significant ways.
I live nowhere near a public transportation system, work at home for the most part, haven't flown since I've had the Nexus 7, don't follow sports, and already know how long it takes to get places (and when it varies, it's because of muddy roads or harvest, neither of which are things that traffic services know about). The weather widget always seems to show the temperature from a couple of hours ago. It does show the weather in your current location. Well, mostly. I was in Newton, KS one day. I tapped on the icon for more detail. That simply took me to a Google search for "weather Newton". Which showed me the weather for Newton, Massachusetts, 1600 miles away. Fail. Speech recognition in JB is definitely improved. It is somewhat useful with Google Now. I like being able to simply say "set alarm for 30 minutes", and it does it a lot quicker than I could in the interface. It's supposed to be able to let me bring up my contacts in the same way, but it is much more likely to try turning such an attempt into a Google search than an actual display of a contact. It's picky about the precise language used for setting an alarm, too; say it slightly differently, and it's another Google search. JB also supports limited offline speech recognition. I say limited because it's a bit strange. I have, for instance, a Remember the Milk widget on my home screen. It has a microphone icon to use to speak a new reminder. Tap it, and you can't use it offline. It also has a button that brings up the on-screen keyboard. Do that, then touch the microphone on the keyboard, and you can use offline recognition. I have no idea how to explain this difference, since both are clearly using Google's engine. The speech recognition is indeed better, and might make it suitable for use instead of a keyboard for composing short texts and such. But it rarely produces even a sentence that I don't have to correct in some way, even now.

Conclusion

Despite some of its shortcomings, I am very fond of the Nexus 7.
It is an excellent device. And at $200-$250, it is an AMAZING device. I am truly impressed with it, and don't regret my purchase at all.

28 July 2012

Vincent Bernat: Switching to the awesome window manager

I have happily used FVWM as my window manager for more than 10 years. However, I recently got tired of manually arranging windows and using the mouse so much. A window manager is one of the handful of pieces of software getting in your way at every moment, which explains why there are so many of them and why we might put so much time into one. I decided to try a tiling window manager. While i3 seemed pretty hot and powerful (watch the screencast!), I really wanted something configurable and extensible with some language. Among the common choices, I chose awesome, despite the fact that StumpWM's vote for Lisp seemed a better fit (but it is more minimalist). I hope there is some parallel universe where I enjoy StumpWM. Visually, here is what I got so far: awesome dual screen setup

Awesome configuration

Without a configuration file, awesome does nothing. It does not come with any hard-coded behavior: everything needs to be configured through its Lua configuration file. Of course, a default one is provided, but you can also start from scratch. If you like to control your window manager, this is somewhat wonderful. awesome is well documented: the wiki provides a FAQ and a good introduction, and the API reference is concise enough to be read from top to bottom. Knowing Lua is not mandatory, since it is quite easy to dive into such a language. I have posted my configuration on GitHub. It should not be used as is, but some snippets may be worth stealing and adapting into your own configuration. The following sections highlight some notable points.
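To give an idea of the shape of such a file, here is a minimal sketch of an rc.lua against the awesome 3.4-era API. The theme path, the terminal and the tag names are illustrative assumptions, not taken from my configuration; the default rc.lua shipped with awesome remains the best starting point.

```lua
-- Minimal rc.lua sketch (awesome 3.4 API); paths and names are examples.
require("awful")
require("awful.autofocus")
require("awful.rules")
require("beautiful")

-- Theme, preferred terminal and modifier key
beautiful.init("/usr/share/awesome/themes/default/theme.lua")
terminal = "urxvt"
modkey   = "Mod4"

-- One set of tags (workspaces) per screen, all tiled
tags = {}
for s = 1, screen.count() do
    tags[s] = awful.tag({ 1, 2, 3, 4 }, s, awful.layout.suit.tile)
end
```

Everything else, keybindings, rules, widgets, builds on top of this skeleton.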

Keybindings

Ten years ago was the epoch of scavenger hunts to recover IBM Model M keyboards from waste containers. They were great to type on, and they did not feature the infamous Windows keys. Nowadays, it is harder to get such a keyboard. All my keyboards now have Windows keys. This is a major change when configuring a window manager: the left Windows key is mapped to Mod4, is usually unused by most applications, and can therefore be dedicated to the window manager. The main problem with the ability to define many keybindings is remembering the less frequently used ones. I have monkey-patched the awful.key module to be able to attach a documentation string to a keybinding. I have documented the whole process on the awesome wiki. awesome online help
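For illustration, a couple of typical Mod4 bindings look like this (a sketch against the awesome 3.4 API; the terminal command is an assumption):

```lua
-- Example global keybindings dedicated to Mod4 (the left Windows key).
local modkey = "Mod4"

globalkeys = awful.util.table.join(
    -- Mod4+Return spawns a terminal
    awful.key({ modkey }, "Return",
        function () awful.util.spawn("urxvt") end),
    -- Mod4+j / Mod4+k cycle the focus between clients
    awful.key({ modkey }, "j",
        function ()
            awful.client.focus.byidx(1)
            if client.focus then client.focus:raise() end
        end),
    awful.key({ modkey }, "k",
        function ()
            awful.client.focus.byidx(-1)
            if client.focus then client.focus:raise() end
        end))

root.keys(globalkeys)
```

Since Mod4 is free for applications, every binding can live under it without conflicts.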

Quake console

A Quake console is a drop-down terminal which can be toggled with some key. I was relying heavily on it in FVWM, and I think it is still a useful addition to any awesome configuration. There are several possible solutions documented in the awesome wiki. I have added my own1, which works great for me. Quake console
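The core of the trick (see the first footnote) is to spawn the terminal with a unique instance name so awesome can reliably recognize it. A rough sketch of that part only; the name "QuakeConsole" and the client properties are illustrative, not my actual implementation, which also handles the show/hide toggling:

```lua
-- Spawn the console with a unique instance name ("QuakeConsole" is
-- an arbitrary example) so it can be matched unambiguously.
awful.util.spawn("urxvt -name QuakeConsole")

-- Catch it with a client rule: floating, always on top, hidden from
-- the taskbar.
awful.rules.rules = awful.util.table.join(awful.rules.rules or {}, {
    { rule = { instance = "QuakeConsole" },
      properties = { floating = true, ontop = true, skip_taskbar = true } },
})
```

Because the name survives restarts, awesome can reattach to an already-running console instead of spawning a second one.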

XRandR

XRandR is an extension which allows outputs to be reconfigured dynamically: you plug an external screen into your laptop and issue some command to enable it:
$ xrandr --output VGA-1 --auto --left-of LVDS-1
awesome detects the change and will restart automatically. Laptops usually come with a special key to enable/disable an external screen. Nowadays, this key does nothing unless configured appropriately. Out of the box, it is mapped to the XF86Display symbol. I have associated this key with a function that cycles through possible configurations depending on the plugged screens. For example, if I plug an external screen into my laptop, I can cycle through the following configurations:
  • only the internal screen,
  • only the external screen,
  • internal screen on the left, external screen on the right,
  • external screen on the left, internal screen on the right,
  • no change.
The proposed configuration is displayed using naughty, the notification system integrated into awesome. Notification of screen reconfiguration
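Schematically, the cycling function can be sketched like this. The output names are assumptions, and the real implementation builds the list from the detected outputs rather than hard-coding it, and applies the choice after a delay instead of immediately:

```lua
-- Hypothetical sketch: cycle through xrandr configurations with
-- XF86Display, announcing each choice via naughty.
-- LVDS-1 = internal panel, VGA-1 = external screen (assumed names).
local xrandr_choices = {
    "xrandr --output LVDS-1 --auto --output VGA-1 --off",
    "xrandr --output LVDS-1 --off --output VGA-1 --auto",
    "xrandr --output VGA-1 --auto --right-of LVDS-1 --output LVDS-1 --auto",
    "xrandr --output VGA-1 --auto --output LVDS-1 --auto --right-of VGA-1",
}
local xrandr_index = 0

local function xrandr_cycle()
    xrandr_index = xrandr_index % #xrandr_choices + 1
    naughty.notify({ text = xrandr_choices[xrandr_index] })
    awful.util.spawn(xrandr_choices[xrandr_index])
end

globalkeys = awful.util.table.join(globalkeys,
    awful.key({}, "XF86Display", xrandr_cycle))
root.keys(globalkeys)
```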

Widgets

I was previously using Conky to display various system-related information, like free space, CPU usage and network usage. awesome comes with widgets that can fill the same role. I am relying on vicious, a contributed widget manager, to manage most of them. It allows one to attach a function whose task is to fetch the values to be displayed. This is quite powerful. Here is an example with a volume widget:
local volwidget = widget({ type = "textbox" })
vicious.register(volwidget, vicious.widgets.volume,
         '<span font="Terminus 8">$2 $1%</span>',
        2, "Master")
volwidget:buttons(awful.util.table.join(
             awful.button({ }, 1, volume.mixer),
             awful.button({ }, 3, volume.toggle),
             awful.button({ }, 4, volume.increase),
             awful.button({ }, 5, volume.decrease)))
You can also use a function to format the text as you wish. For example, you can display a value in red if it is too low. Have a look at my battery widget for an example. Various widgets
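A format function like the one used by that battery widget can be sketched as follows. The "BAT0" argument, the 15% threshold and the colour are illustrative assumptions, and vicious.widgets.bat is assumed to return the charge state and percentage as its first two values:

```lua
-- Sketch: a battery widget whose format function turns the value red
-- when the charge drops below an (assumed) 15% threshold.
local batwidget = widget({ type = "textbox" })
vicious.register(batwidget, vicious.widgets.bat,
    function (widget, args)
        local state, percent = args[1], args[2]
        if percent < 15 then
            return string.format(
                '<span color="#ff0000">%s%d%%</span>', state, percent)
        end
        return string.format("%s%d%%", state, percent)
    end, 60, "BAT0")
```

The same pattern works for any vicious widget: pass a function instead of a format string, and compute the markup from the fetched values.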

Miscellaneous

While I was working on my awesome configuration, I also changed some other desktop-related bits.

Keyboard configuration

I happen to set up all my keyboards to use the QWERTY layout. I use a compose key to input special characters. I have also recently started using Caps Lock as a Control key. All this has been perfectly supported by X11 for ages. I am also mapping the Pause key to the XF86ScreenSaver key symbol, which will in turn be bound to a function that triggers xautolock to lock the screen. Thanks to a great article about extending the X keyboard map with xkb, I discovered that X is able to switch from one layout to another using groups2. I finally opted for this simple configuration:
$ setxkbmap us,fr '' compose:rwin ctrl:nocaps grp:rctrl_rshift_toggle
$ xmodmap -e 'keysym Pause = XF86ScreenSaver'
I switch from us to fr by pressing both the right Control and right Shift keys.

Getting rid of most GNOME stuff

Less than one year ago, as a step toward the future, I started to rely heavily on some GNOME components like the GNOME Display Manager, GNOME Power Manager, the screen saver, gnome-session, gnome-settings-daemon and others. I had numerous problems when I tried to set everything up without pulling in the whole GNOME stack. At each GNOME update, something was broken: the screensaver didn't start automatically anymore until a full session restart, or some keybindings were randomly hijacked by gnome-settings-daemon. Therefore, I have decided to get rid of most of those components. I have replaced GNOME Power Manager with system-level tools like sleepd and the PM utilities. I replaced the GNOME screensaver with i3lock and xautolock. GDM has been replaced by SLiM, which now features ConsoleKit support3. I use ~/.gtkrc-2.0 and ~/.config/gtk-3.0/settings.ini to configure GTK+. The future will wait.

Terminal color scheme

I am using rxvt-unicode as my terminal, with a black background (and some light transparency). The default color scheme is suboptimal on the readability front. Sharing terminal color schemes seems to be a popular activity. I finally opted for the derp color scheme, which brings a major improvement over the default configuration. Comparison of terminal color schemes

I have also switched to Xft for font rendering, using DejaVu Sans Mono as my default font (instead of fixed) with the following configuration in ~/.Xresources:
Xft.antialias: true
Xft.hinting: true
Xft.hintstyle: hintslight
Xft.rgba: rgb
URxvt.font: xft:DejaVu Sans Mono-8
URxvt.letterSpace: -1
The result is less crisp but seems a bit more readable. I may switch back in the future. Comparison of terminal fonts

Next steps

My reliance on the mouse has been greatly reduced. However, I still need it for casual browsing. I am looking at luakit, a WebKit-based browser extensible with Lua, for this purpose.

  1. The console gets its own unique name. This allows awesome to reliably detect when it is spawned, even on restart. It is how the Quake console works in the mod of FVWM I was using.
  2. However, the layout is global, not per-window. If you are interested in a per-window layout, take a look at kbdd.
  3. Nowadays, you cannot really survive without ConsoleKit. Many PolicyKit policies do not rely on groups any more to grant access to your devices.
