Search Results: "pasc"

1 November 2010

Michal Čihař: Cleaning up the web

After the recent switch of this blog to Django, I've also planned to switch the rest of my website to the same engine. Most of the code is already written; however, I've found some forgotten parts which nobody has used for a really long time. As most of it is free software, I don't like the idea of removing it completely, but it is quite unlikely that anybody will want to use these ancient things, especially since they mostly lack documentation and I had really forgotten about them (most of them being Turbo Pascal things for DOS, Delphi components for Windows and so on). Anyway, it will probably live on only at dl.cihar.com in case anybody is interested.

Filed under: Django Website

24 July 2010

Andrew Pollock: [geek] Cleaning up from 20 years ago

I'm a terrible hoarder. I hang onto old stuff because I think it might be fun to have a look at again later, when I've got nothing to do. The problem is, I never have nothing to do, or when I do, I never think to go through the stuff I've hoarded. As time goes by, the technology becomes more and more obsolete to the point where it becomes impractical to look at it. Today's example: the 3.5" floppy disk. I've got a disk holder thingy with floppies in it dating back to the mid-nineties and earlier. Stuff from high school, which I thought might be a good for a giggle to look at again some time. In the spirit of recording stuff before I throw it out, I present the floppy disks I'm immediately tossing out.
MS-DOS 6.2 and 6.22
Ah the DOS days. I remember excitedly looking forward to new versions of MS-DOS to see what new features they brought. I remember DOS 5.0 being the revolutionary one. The dir command grew a ton of options.
XTreeGold
More from the DOS days, when file management was such a pain in the arse that there was a business model to do it better. ytree seems like a fairly good looking clone of it for Linux.
WinZip for Windows 95, Windows NT and Windows 3.1
Ha. I actually paid money for an official WinZip floppy disk.
Nissan Maxima Electronic Brochure
I'm amazed this fit on a floppy disk.
Turbo Pascal 6.0
Excluding GW-BASIC, this was the first "real" language I dabbled in. I learned it in Information Processing & Technology in grades 11 and 12. I never got into the OO stuff that version 6.0 was particularly geared towards.
Where in the World is Carmen Sandiego?
Awesome educational game. I was first introduced to this on the Apple ][, and loved it. This deserves being resurrected for a console.
Captain Comic II
Good sequel to the original, but I never found a version that worked properly (I could never convince it to let me finish it)
HDM IV
Ah, Hard Disk Menu. A necessity from the DOS days when booting up to a C:\> prompt just really didn't cut it. I used to love customising this thing.
ARJ, LHA, PK-ZIP
Of course, you needed a bazillion different decompression programs back in the days of file trading. I guess things haven't changed much with Linux. There's gzip, bzip2, 7zip, etc.
Zeliard
I wasted so many hours playing this. The ending was so hard.
MicroSQL
This was some locally produced software from Brisbane, written in Turbo Pascal (I think). It was a good introduction to SQL; I used it in high school and in my first stab at university.
DOOM and DOOM II
Classics. I don't seem to have media for it any more, but I also enjoyed playing Heretic and Hexen. Oooh, Hexen has been ported to Linux? Must check that out...
SimCity 2000
I wasn't a big fan of this game, but I liked the isometric view that 2000 had, compared to the previous version.

19 May 2010

John Goerzen: Time to learn a new language

I have something of an informal goal of learning a new programming language every few years. It's not so much a goal as it is something of a discomfort. There are so many programming languages out there, with so many niches and approaches to problems, that I get uncomfortable with my lack of knowledge of some of them after a while. This tends to happen every few years. The last major language I learned was Haskell, which I started working with in 2004. I still enjoy Haskell and don't see anything displacing it as my primary day-to-day workhorse. Yet there are some languages that I'd like to learn. I have an interest in cross-platform languages; one of my few annoyances with Haskell is that it can't (at least with production quality) be compiled into something like Java bytecode or something else that isn't architecture-dependent. I have long had a soft spot for functional languages. I haven't had such a soft spot for static type checking, but Haskell's type inference changed that for me. Also I have an interest in writing Android apps, which means some sort of Java tie-in would be needed. Here are my current candidates: Of some particular interest to me is that Haskell has interpreters for Scheme, Lua, and JavaScript as well as code generators for some of these languages (though not generic Haskell-to-foo compilers). Languages not in the running because I already know them include: OCaml, POSIX shell, Python, Perl, Java, C, C++, Pascal, BASIC, Common Lisp, Prolog, SQL. Languages I have no interest in learning right now include Ruby (not different enough from what I already know, plus bad experiences with it), any assembly, anything steeped in the Microsoft monoculture (C#, VB, etc.), or anything that is hard to work with outside of an Emacs or vim environment. (If your language requires or strongly encourages me to use your IDE or proprietary compiler, I'm not interested; that means you, flash.)
Brief Reviews of Languages I Have Used. To give you a bit of an idea of where I'm coming from:

13 January 2010

Matt Brubeck: Finding SI unit domain names with Node.js

I'm working on some ideas for finance or news software that deliberately updates infrequently, so it doesn't reward me for checking or reloading it constantly. I came up with the name "microhertz" to describe the idea. (1 microhertz = once every eleven and a half days.) As usual when I think of a project name, I did some DNS searches. Unfortunately "microhertz.com" is not available (but "microhertz.org" is). Then I went off on a tangent and got curious about which other SI units are available as domain names. This was the perfect opportunity to try node.js so I could use its asynchronous DNS library to run dozens of lookups in parallel. I grabbed a list of units and prefixes from NIST and wrote the following script:
var dns = require("dns"), sys = require('sys');
var prefixes = ["yotta", "zetta", "exa", "peta", "tera", "giga", "mega",
  "kilo", "hecto", "deka", "deci", "centi", "milli", "micro", "nano",
  "pico", "femto", "atto", "zepto", "yocto"];
var units = ["meter", "gram", "second", "ampere", "kelvin", "mole",
  "candela", "radian", "steradian", "hertz", "newton", "pascal", "joule",
  "watt", "coulomb", "volt", "farad", "ohm", "siemens", "weber", "henry",
  "lumen", "lux", "becquerel", "gray", "sievert", "katal"];
for (var i=0; i<prefixes.length; i++) {
  for (var j=0; j<units.length; j++) {
    checkAvailable(prefixes[i] + units[j] + ".com", sys.puts);
  }
}

function checkAvailable(name, callback) {
  var resolution = dns.resolve4(name);
  resolution.addErrback(function (e) {
    if (e.errno == dns.NXDOMAIN) callback(name);
  });
}
Out of 540 possible .com names, I found 376 that are available (and 10 more that produced temporary DNS errors, which I haven't investigated). Here are a few interesting ones, with some commentary: To get the complete list, just copy the script above to a file, and run it like this: node listnames.js Along the way I discovered that the API documentation for Node's dns module was out-of-date. This is fixed in my GitHub fork, and I've sent a pull request to the author Ryan Dahl.
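The enumeration behind that 540-name figure (every prefix paired with every unit) is easy to sanity-check without doing any DNS at all. Here is a minimal Python sketch of just the name generation, using the standard SI spellings; the lookup side of the original script is deliberately omitted:

```python
# Candidate-name enumeration only; no network access.
prefixes = ["yotta", "zetta", "exa", "peta", "tera", "giga", "mega",
            "kilo", "hecto", "deka", "deci", "centi", "milli", "micro",
            "nano", "pico", "femto", "atto", "zepto", "yocto"]
units = ["meter", "gram", "second", "ampere", "kelvin", "mole",
         "candela", "radian", "steradian", "hertz", "newton", "pascal",
         "joule", "watt", "coulomb", "volt", "farad", "ohm", "siemens",
         "weber", "henry", "lumen", "lux", "becquerel", "gray",
         "sievert", "katal"]

# 20 prefixes x 27 units = 540 candidate .com names.
names = [p + u + ".com" for p in prefixes for u in units]
print(len(names))
print(names[0])
```

Feeding each generated name into whatever resolver library is at hand reproduces the availability scan.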

29 December 2009

John Goerzen: Review: The Happiest Days of Our Lives (by Wil Wheaton)

I started to write this review last night, and went looking for Wil Wheaton's blog, where many of the stories came from, so I could link to it from my review. It was getting late, I was tired, and so I was a bit disoriented for a few seconds when I saw my own words flash up on the screen. At the time, his most recent story had excerpted my review of paper books. Wow, I thought. This never happens when I'm about to review Dickens. And actually, it's never happened before, ever. I'll admit to owning a big grin when I saw that one of my favorite authors liked one of my blog posts. And Wil Wheaton is one of my favorite authors for sure. I enjoy reading others too, of course, but Wil's writing is something I can really identify with like no other. My parents were never in a London debtors' prison like Dickens's were; I was never a promising medical student like A. C. Doyle. But I was, and am, a geek, and Wil Wheaton captures that more perfectly than anyone. After I read Just a Geek a few years ago, I gave it to my wife to read, claiming it would help her understand me better. I think it did. In The Happiest Days of Our Lives, Wil recounts memories of his childhood, and of more recent days. He talks of flashbacks to his elementary school days, when he and his classmates tried to have the coolest Star Wars action figures (for me: calculator watches). Or how his aunt introduced him to D&D, which reminded me of how my uncle got me interested in computers. Teaching himself D&D was an escape for the geeky kid that wasn't good at sports, as teaching myself Pascal and C was for me. Between us, the names and activities are different, but the story is the same. I particularly appreciated Wil's reflections on his teenage years. Like him, at that age, I often found myself as the youngest person in a room full of adults. Yet I was still a teenager, and like any teenager, did some things that I look back on with some embarrassment now.
Wil was completely honest with himself: he admitted crashing a golf cart on the Paramount studio lot, for instance, but also reminds me that he was a teenager then. He recognizes that he didn't always make the best choices and wasn't always successful with what he did, but isn't ashamed of himself either. That's helpful for me to remember; I shouldn't be unreasonably harsh on my 16-year-old self, and need to remember that I had to be a teenager too. I also identify with him as a dad. He wrote of counting the days until he could teach his boys about D&D, about passing on being a geek to his sons. I've had a similar excitement about being able to help Jacob build his first computer. Already Jacob, who is 3, loves using the manual typewriter I cleaned up for him, and spent an hour using the adding machine I dug out on Sunday while I was watching the boys. (I regret that I didn't have time to take it apart and show him how it worked right then when he asked.) And perhaps his 2nd-favorite present of Christmas was the $3.50 large-button calculator with solar cell power I got him as an impulse buy at the pharmacy the other day. He is particularly enamored with the square root button, because a single press replaces all the numbers on the screen with completely different numbers! I can't find the exact passage now, but Wil wrote at one point about his transition from a career in acting to a career in writing. He said that he likes the feeling he gets when his writing can touch people. He's been able to redefine himself not as a guy that used to be an actor on Star Trek but as a person that is a good author, now. I agree, and think his best work has been done with a keyboard instead of a camera. And that leaves me wondering where my career will take me. Yes, I'm an author, but of technical books. Authors of technical books rarely touch people's hearts.
There's a reason we read Shakespeare and Dickens in literature classes, but no high school English teacher has ever assigned Newton's Opticks, despite its incredible importance to the world. Newton revolutionized science, mathematics, and philosophy, but Opticks doesn't speak to the modern heart like Romeo and Juliet still does. Generations of people have learned more about the world from Shakespeare than from Newton. I don't have Wil's gift for writing such touching stories. I've only been able to even approach that sort of thing once or twice, and it certainly won't make a career for me. Like Wil, I'm rarely the youngest person in the room anymore. His days of being a famous teenage actor on a scifi series are long gone, as are mine of single-handedly defeating entire teams at jr. high programming contests. (OK, that's a stretch, but at the time it sure felt exciting.) But unlike him, I'm not completely content with my niche yet. I blog about being a geek in rural Kansas, where there still aren't many. I'm a dad, with an incredible family. And I write about programming, volunteer for Debian and a few other causes, and have a surprisingly satisfying job working for a company that builds lawn mowers. And yet, I have this unshakable feeling of unsettledness. That I need to stop and think more about what I really want to do with my life, perhaps cultivate some talents I don't yet have, or perhaps find a way to make my current path more meaningful. So I will take Wil's book as a challenge, to all those that were once sure of what their lives would look like, and are less sure with each passing year: take a chance, and make it yours. And on that score, perhaps I've done more than I had realized at first. Terah and I took a big chance moving to Kansas, and another one when we bought my grandparents' run-down house to fix up and live in. Perhaps it's not a bad idea to pause every few years and ask the question: Do I still like the direction I'm heading? Can I change it?
Wil Wheaton gives me lots to think about, in the form of easy-to-read reflections on his own life. I heartily recommend both Just a Geek and The Happiest Days of Our Lives. (And that has nothing to do with the fact that the Ubuntu machine he used to write the book probably had installed on it a few pieces of code that I wrote, I promise you.)

16 September 2009

Sergio Talens-Oliag: Encrypting a Debian GNU/Linux installation on a MacBook

A couple of weeks ago I updated my Debian Sid setup on the MacBook to use disk encryption; this post is to document what I did for later reference. The system was configured for dual booting Debian or Mac OS X using refit and grub2 as documented on the Debian Wiki; I don't use the Mac OS X system much, but I left it there to be able to test things and to answer questions of Mac OS X users when I have to. The Debian installation was done using two primary partitions, one for swap (I used a partition to be able to suspend to disk without trouble) and an ext3 file system used as the root file system. The plan was to use the Debian Installer to do the disk setup and recover the Sid installation from a backup once the encrypted setup was working OK. Backup for later recovery My first step was to install all the needed packages on the original system; basically I verified that I had the lvm2 and cryptsetup packages installed. The second step was to back up the root file system; to do it I changed to run level 1 and copied the files to an external USB disk using rsync. My third step was to boot into Mac OS X to reduce the space assigned to it; I had a lot of free space that I didn't plan to use with Mac OS X and I thought that this was the best occasion to reassign it to the Debian file system. Encrypted Lenny installation Now the machine was ready for the installer. As I formatted the system a couple of weeks ago I used a daily build of the Lenny Debian Installer; now that Lenny is out, I would use the official version. I booted the installer and at the disk partitioning step I selected the manual method; I left sda1 and sda2 as they were (the Mac OS X installation uses them) and set up sda3 and sda4 as follows: Note that I decided to put /boot on a plain ext3 partition to be able to use grub2 as the boot loader (if we put the kernel on an LVM logical volume we need to use lilo as the boot loader).
Once sda4 was set up for LVM I entered the LVM setup and created an LVM Volume Group (VG) named debian, using sda4 as the physical volume. Once the VG was defined I created a couple of Logical Volumes (LV): I left some space unallocated to be able to create LVM snapshots (I use them to do backups; I'll post about it in the coming days). Once the LVs were ready I finished with the LVM setup and went back to the partitioner to configure the Logical Volumes: Once both encrypted volumes were ready I entered the Configure encrypted volumes menu and the installer formatted the volumes for encryption and asked for the debian-root pass phrase. Back on the main partitioning menu I set up the debian-root_crypt encrypted volume: I didn't need to touch debian-swap_crypt; it was configured automatically as swap because I chose a random encryption key. At this point I was finished with the partitioning; to finish I installed a minimal system and rebooted to try the system. As I had changed the disk layout I had to re-sync the partition tables from refit; once that was done I was able to boot from the newly installed system. Setting up suspend to disk I was using s2disk to suspend the system; to test if it still worked with the new setup I installed the uswsusp package and adjusted the resume device in /etc/uswsusp.conf to /dev/mapper/debian-swap_crypt. After my first try I noticed that the resume step failed with the encrypted swap partition because it was using a random key, which means that the swap contents are unrecoverable after a reboot. Looking at the cryptsetup documentation I found that the solution was to use a derived key for the swap partition instead of a random one. The command sequence was as follows:
# disable swap
swapoff -a
# close encrypted volume
cryptsetup luksClose debian-swap_crypt
# change the swap partition setup on the /etc/crypttab file
sed -i -e 's%^debian-swap.*%debian-swap_crypt /dev/mapper/debian-swap debian-root_crypt cipher=aes-cbc-essiv:sha256,size=256,hash=sha256,keyscript=/lib/cryptsetup/scripts/decrypt_derived,swap%' /etc/crypttab
# open the encrypted volumes with the new setup
/etc/init.d/cryptdisks start
# enable swap
swapon -a
# update the initrd image
update-initramfs -u
After executing all those commands the suspend to disk system worked as expected. Recovering the original system If I were going to reinstall the system completely I would have finished here, but in my case I wanted to recover my original system setup (except the minimal changes required to use the encrypted partitions, of course). To recover my old installation I backed up some files (/etc/fstab, /etc/crypttab, /etc/uswsusp.conf and the current /boot contents, to be able to boot with my old kernel in case of failure) from the current installation; after that I recovered all the files from the initial backup (except the ones just saved) using rsync again and regenerated the initrd images of my old kernels:
update-initramfs -u -k all
After that I rebooted and everything worked as on my original installation (except for the disk encryption, of course).

19 April 2009

Martin F. Krafft: Extending the X keyboard map with xkb

xmodmap has long been the only way to modify the keyboard map of the X server, short of the complex configuration daemon approaches used by the large desktop managers, like KDE and GNOME. But it has always been a hack: it modifies the X keyboard map and thus requires a baseline to work from, kind of like a patch needs the correct context to be applicable. Worse yet, xmodmap weirdness required me to invoke it twice to get the effect I wanted. When the recent upgrade to X.org 7.4 broke larger parts of my elaborate xmodmap configuration, I took the time to finally ditch xmodmap and implement my modifications as proper xkb configuration.

Background information I had tried before to use per-user xkb configuration, but could not find the answers I wanted. It was somewhat by chance that I found Doug Palmer's Unreliable Guide to XKB configuration at the same time that Julien Cristau and Matthew W. S. Bell provided me the necessary hints on the #xorg/irc.freenode.org IRC channel to get me started. The other resource worth mentioning is Ivan Pascal's collection of XKB documents, which were instrumental in my gaining an understanding of xkb. And just as I am writing this document, Debian's X Strike Force have published their Input Hotplug Guide, which is a nice complement to this very document you are reading right now, since it focuses on auto-configuration of xkb with HAL. The default xkb configuration comes with a lot of flexibility, and often you don't need anything else. But when you do, then this is how to do it:

Installing a new keyboard map The most basic way to install a new keyboard map is using xkbcomp, which can also be used to dump the currently installed map into a file. So, to get a bit of an idea of what we'll be dealing with, please run the following commands:
xkbcomp $DISPLAY xkb.dump
editor xkb.dump
xkbcomp xkb.dump $DISPLAY

The file is complex and large, and it completely went against my aesthetics to simply edit it to have xkb work according to my needs. I sought a way in which I could use as much as possible of the default configuration, and only place self-contained additional snippets in place to do the things I wanted done differently. setxkbmap and rule files Thus began my voyage into the domain of rule files. But before we dive into those, let's take a look at setxkbmap. Despite the trivial invocation of e.g. setxkbmap us to install a standard US-American keyboard map, the command also takes arguments. More specifically, it allows you to specify the following high-level parameters, which determine the sequence of events between key press and an application receiving a KeyPress event:
  • Model: the keyboard model, which defines which keys are where
  • Layout: the keyboard layout, which defines what the keys actually are
  • Variant: slight variations in the layout
  • Options: configurable aspects of keyboard features and possibilities
Thus, with the following command line, I would select a US layout with international (dead) keys for my Thinkpad keyboard, and switch to an alternate symbol group with the windows keys (more on that later):
setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch

In many cases, between all combinations of the aforementioned parameters, this is all you ever need. But I wanted more. If you append -print to the above command, it will print the keymap it would install, rather than installing it:
% setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch -print
xkb_keymap {
  xkb_keycodes  { include "evdev+aliases(qwerty)" };
  xkb_types     { include "complete" };
  xkb_compat    { include "complete" };
  xkb_symbols   { include "pc+us(intl)+inet(evdev)+group(win_switch)" };
  xkb_geometry  { include "thinkpad(us)" };
};

There are two things to note:
  1. The -option grp:win_switch argument has been turned into an additional include group(win_switch) on the xkb_symbols line, just like the model, layout, and variant are responsible for other aspects in the output.
  2. The output seems related to what xkbcomp dumped into the xkb.dump file we created earlier. Upon closer inspection, it turns out that the dump file is simply a pre-processed version of the keyboard map, with include instructions exploded.
At this point, it became clear to me that this was the correct way forward, and I started to investigate those points in order. The translation from parameters to an xkb_keymap stanza by setxkbmap is actually governed by a rule file. A rule is nothing more than a set of criteria, and what setxkbmap should do in case they all match. On a Debian system, you can find this file in /usr/share/X11/xkb/rules/evdev, and /usr/share/X11/xkb/rules/evdev.lst is a listing of all available parameter values. The xkb_symbols include line in the above xkb_keymap output is the result of the following rules in the first file, which setxkbmap had matched (from top to bottom) and processed:
! model         layout              =       symbols
  [...]
  *             *                   =       pc+%l(%v)
! model                             =       symbols
  *                                 =       +inet(evdev)
! option                            =       symbols
  [...]
  grp:win_switch                    =       +group(win_switch)

It should now not be hard to deduce the xkb_symbols include line quoted above, starting from the setxkbmap command line. I'll reproduce both for you for convenience:
setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch
xkb_symbols     include "pc+us(intl)+inet(evdev)+group(win_switch)"    ;

A short note about the syntax here: group(win_switch) in the symbols column simply references the xkb_symbols stanza named win_switch in the symbols file group (/usr/share/X11/xkb/symbols/group). Thus, the rules file maps parameters to sets of snippets to include, and the output of setxkbmap applies those rules to create the xkb_keymap output, to be processed by xkbcomp (which setxkbmap invokes implicitly, unless the -print argument was given on invocation). It seems that for a criterion (option, model, layout, …) to be honoured, it has to appear in the corresponding listing file, evdev.lst in this case. There is also evdev.xml, but I couldn't figure out its role.

Attaching symbols to keys I ended up creating a symbols file of reasonable size, which I won't discuss here. Instead, let's solve the following two tasks for the purpose of this document:
  1. Make the Win-Hyphen key combination generate an en dash (–), and Win-Shift-Hyphen an em dash (—).
  2. Let the Caps Lock key generate Mod4, which can be used e.g. to control the window manager.
To approach these two tasks, let's create a symbols file in ~/.xkb/symbols/xkbtest and add two stanzas to it:
partial alphanumeric_keys
xkb_symbols "dashes" {
  key <AE11> {
    symbols[Group2] = [ endash, emdash ]
  };
};
partial modifier_keys
xkb_symbols "caps_mod4" {
  replace key <CAPS> {
    [ VoidSymbol, VoidSymbol ]
  };
  modifier_map Mod4 { <CAPS> };
};

Now let me explain these in turn:
  1. We used the option grp:win_switch earlier, which told xkb that we would like to use the windows keys to switch to group 2. In the custom symbols file, we now simply define the symbols to be generated for each key, when the second group has been selected. Key <AE11> is the hyphen key. To find out the names of all the other keys on your keyboard, you can use the following command:
    xkbprint -label name $DISPLAY - | gv -orientation=seascape -
    I had to declare the stanza partial because it is not a complete keyboard map, but can only be used to augment/modify other maps. I also declared it alphanumeric_keys to tell xkb that I would be modifying alphanumeric keys inside it. If I also wanted to change modifier keys, I would also specify modifier_keys. The rest should be straight-forward. You can get the names of available symbols from keysymdef.h (/usr/include/X11/keysymdef.h on a Debian system, package x11proto-core-dev), stripping the XK_ prefix.
  2. The second stanza replaces the Caps Lock key definition and prevents it from generating symbols (VoidSymbol). The important aspect of the second stanza is the modifier_map instruction, which causes the key to generate the Mod4 modifier event, which I can later use to bind key combinations for my window manager (awesome).
The easiest way to verify those changes is to put the setxkbmap -print output of the keyboard map you would like to use as a baseline into ~/.xkb/keymap/xkbtest, and append snippets to be included to the xkb_symbols line, e.g.:
"pc+us(intl)+inet(evdev)+group(win_switch)+xkbtest(dashes)+xkbtest(caps_mod4)"

When you try to load this keyboard map with xkbcomp, it will fail because it cannot find the xkbtest symbol definition file. You have to let the tool know where to look, by appending a path to its search list (note the use of $HOME instead of ~, which the shell would not expand):
xkbcomp -I$HOME/.xkb ~/.xkb/keymap/xkbtest $DISPLAY

You can use xev to verify the results, or just type Win-Hyphen into a terminal; does it produce –? By the way, I found xev much more useful for such purposes when invoked as follows (thanks to Penny for the idea):
xev | sed -ne '/^KeyPress/,/^$/p'

Unfortunately, xev does not give any indication of which modifier symbols are generated. I have found no other way to verify the outcome than to tell my window manager to do something in response to e.g. Mod4-Enter, reload it, and then try it out.

Rules again, and why I did not use them in the end Once I got this far, I proceeded to add option-to-symbol-snippet mappings to the rules file, and added each option to the listing file too. A few bugs later (Debian bug #524512), I finally had setxkbmap spit out the right xkb_keymap and could install the new keyboard map with xkbcomp, like so:
setxkbmap -I$HOME/.xkb [...] -print | xkbcomp -I$HOME/.xkb - :0

I wrote a small script to automatically do that at the start of the X session and could have gone to play outside, if it hadn't been for the itch I felt due to the entire rule file stored in my configuration. I certainly did not like that, but I could also not find a way to extend a rule file with additional rules. When I looked at the aforementioned script again, it suddenly became obvious that I was going a far longer path than I had to. Even though the rule system is powerful and allows me to e.g. automatically include symbol maps to remap keys on my Thinkpad, based on the keyboard model I configured, the benefit (if any) did not justify the additional complexity. In the end, I simplified the script that loads the keyboard map, and defined a default xkb_keymap, as well as one for the Thinkpad, which I identify by its fully-qualified hostname. If a specific file is available for a given host, it is used. Otherwise, the script uses the default.

15 April 2009

Axel Beckert: Useless Statistics, the 2nd

Myon recently posted a nice statistic about popular single letter package name prefixes. Just out of curiosity I started wondering about popular single letter package name suffixes: On a machine with Debian oldstable, stable, testing, unstable and experimental in its sources.list, I ran the following command:
$ apt-cache search -n . | \
    awk '{ print $1 }' | \
    sed -e 's/.$//' | \
    sort | \
    uniq -c | \
    sort -n
And to my surprise there is a non-obvious winner:
$ apt-cache search -n '^gp.$'
gpa - GNU Privacy Assistant
gpc - The GNU Pascal compiler
gpe - The G Palmtop Environment (GPE) metapackage
gpm - General Purpose Mouse interface
gpp - a general-purpose preprocessor with customizable syntax
gpr - GUI for lpr: print files and configure printer-specific options
gps - Graphical Process Statistics using GTK+
gpt - G-Portugol is a portuguese structured programming language
gpw - Trigraph Password Generator
But since I searched through the binary packages many other hits are more obvious, like the seven packages hbf-cns40-1 to hbf-cns40-7:
      [...]
      4 ar
      4 aspell-f
      4 automake1.
      4 cpp-4.
      4 e
      4 g++-4.
      4 gappletviewer-4.
      4 gcc-4.
      4 gcj-4.
      4 gcompris-sound-e
      4 gfortran-4.
      4 gij-4.
      4 go
      4 gobjc-4.
      4 gobjc++-4.
      4 h
      4 iceweasel-l10n-e
      4 iceweasel-l10n-k
      4 kde-i18n-f
      4 kde-i18n-h
      4 kde-l10n-e
      4 kde-l10n-s
      4 kile-i18n-e
      4 koffice-i18n-e
      4 koffice-i18n-s
      4 koffice-l10n-e
      4 koffice-l10n-f
      4 libqbanking
      4 myspell-f
      4 myspell-h
      4 openoffice.org-help-e
      4 openoffice.org-l10n-b
      4 openoffice.org-l10n-h
      4 openoffice.org-l10n-k
      4 sd
      4 tcl8.
      4 tk8.
      5 aspell-e
      5 aspell-h
      5 iceweasel-l10n-s
      5 kde-i18n-b
      5 kde-i18n-e
      5 kde-i18n-t
      5 kde-l10n-k
      5 openoffice.org-l10n-e
      5 openoffice.org-l10n-t
      5 pa
      5 tc
      6 gc
      6 kde-i18n-s
      6 libdb4.
      6 m
      6 openoffice.org-l10n-n
      6 openoffice.org-l10n-s
      6 s
      7 hbf-cns40-
      9 gp
There are also some other interesting observations to make. I leave it as an exercise to the reader to find the full names of the other package names starting with s, m, gc, pa or tc and having just one additional character. ;-)
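As a side note, the counting pipeline above can be tried without a live package index. The following is a minimal, reproducible sketch that runs the same sed/sort/uniq chain on a fixed sample list (the awk step is omitted because the sample is already bare package names; the sample names are taken from the output above):

```shell
# Suffix-counting pipeline from the post, run on a fixed sample list
# instead of apt-cache output, so it works on any machine.
printf '%s\n' gpa gpc gpe gpm tcl8.4 tcl8.5 tk8.4 tk8.5 \
  | sed -e 's/.$//' \
  | sort \
  | uniq -c \
  | sort -n
# The last output line is the most common trimmed prefix, here "4 gp".
```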

18 February 2009

MJ Ray: Banking with Free Software/Firefox: MPS Italy

I've just updated the online banking compatibility list after a report from Italy that Monte dei Paschi di Siena is not currently working for GNU/Linux users. Can anyone confirm they broke it, or tell us how to get it working, please?

1 February 2009

Patrick Winnertz: Midnight Commander revived (new version available)

After everybody thought that Midnight Commander was dead and that its next release would arrive together with the Hurd, it is very cool to announce that a new team of developers is active again. We officially took over development in December, and now, after two months of work, a new version of mc is available: 4.6.2. This is mostly a bugfix release addressing several very nasty bugs which are also present in the Debian package. Here are the notes on what has changed in this release: As you can see, quite a long list of fixes :) Have fun and check out the new release right here! :)

30 June 2008

Russell Coker: The History of MS

Jeff Bailey writes about the last 26 years of Microsoft [1]. He gives Microsoft credit for “saving us from the TRS 80”; however, CP/M-86 was also an option for the OS on the IBM PC [2]. If MS hadn’t produced MS-DOS for a lower price then CP/M would have been used (in those days CP/M and MS-DOS had the same features and essentially the same design). He notes the use of Terminate and Stay Resident (TSR) [3] programs. As far as I recall the TSR operation was undocumented and was discovered by disassembling DOS (something that the modern MS EULAs forbid). Intel designed the 8086 and 80286 CPUs to permit code written for an 8086 to run unchanged in “protected mode” on an 80286 (as noted in the Wikipedia page about the 80286 [4]). Basically all that you needed to do to write a DOS program with the potential of being run directly in protected mode (or easily ported) was to allocate memory by requesting it from the OS (not just assuming that every address above your heap was available to write on) and to address memory only by the segment register returned from the OS when allocating memory (i.e. not assuming that incrementing a segment register is equivalent to adding 16 to the offset). There were some programs written in such a manner which could run on both DOS and text-mode OS/2 (both 1.x and 2.x); I believe that such programs were linked differently. The term Fat Binary [5] is often used to refer to an executable which has binary code for multiple CPUs (e.g. PPC and M68K CPUs on the Macintosh); I believe that a similar concept was used for DOS / OS/2 programs, but the main code of the application was shared. Also, compilers which produce object code that doesn’t do nasty things could have their object code linked to run in protected mode. Some people produced a set of libraries that allowed linking Borland Turbo Pascal code to run as OS/2 16-bit text-mode applications.
The fact that OS/2 (the protected-mode preemptively multi-tasking DOS) didn’t succeed in the market was largely due to MS. I never used Windows/386 (a version of Windows 2.x) but used Windows 3.0 a lot. Windows 3.0 ran in three modes: “Real Mode” (8086), “Standard Mode” (80286), and “Enhanced Mode” (80386). Real Mode was used for 8086 and 8088 CPUs, for 80286 systems if you needed to run one DOS program (there was no memory for running more than one), and for creating or adjusting the swap-file size on an 80386 system (if your 80386 system didn’t have enough swap you had to shut everything down, start Real Mode, adjust the swap file, and then start it again in Enhanced Mode). Standard Mode was the best mode for running Windows programs (apart from the badly written ones which only ran in Real Mode), but due to the bad practices implemented by almost everyone who wrote DOS programs, MS didn’t even try to run DOS programs in 286 protected mode, and thus Standard Mode didn’t support DOS programs. Enhanced Mode allowed multitasking DOS programs, but as hardly anyone had an 80386-class system at that time it didn’t get much use. It was just before the release of Windows 3.1 that I decided to never again use Windows unless I was paid to do so. I was at a MS presentation about Windows 3.1, and after the marketing stuff they had a technical Q/A session. The questions were generally about how to work around bugs in MS software (mainly Windows 3.0) and the MS people had a very detailed list of work-arounds. Someone asked “why don’t you just fix those bugs” and we were told “it’s easier to teach you how to work around them than to fix them”. I left the presentation before it finished, went straight home and deleted Windows from my computer. I am not going to use software written by people with such a poor attitude if given a choice. After that I ran the DOS multi-tasker DesqView [6] until OS/2 2.0 was released.
Desqview allowed multitasking well-written DOS programs in real mode (Quarterdeck was the first company to discover that almost 64K of address space could be used above the 1MB boundary from real mode on an 80286, a significant benefit when you were limited to 640K of RAM), as well as multitasking less well-behaved DOS programs with more memory use on an 80386 or better CPU. OS/2 [7] 2.x was described as “A Better DOS than DOS, a Better Windows than Windows”. That claim seemed accurate to me. I could run DOS VM86 sessions under OS/2 which could do things that even Desqview couldn’t manage (such as having a non-graphical DOS session with 716K of base memory in one window and a graphical DOS session in another). I could also run combinations of Windows programs that could not run under MS Windows (such as badly written Windows programs that needed Real Mode as well as programs that needed the amount of memory that only Standard or Enhanced mode could provide). Back to Bill Gates: I recently read a blog post, Eight Years of Wrongness [5], which described how Steve Ballmer has failed MS stockholders by his poor management. It seems that he paid more attention to fighting Linux, implementing Digital Restrictions Management (DRM), and generally trying to avoid compatibility with other software than to actually making money. While this could be seen as a tribute to Bill Gates (Steve Ballmer couldn’t do the job as well), I think that Bill would have made the same mistakes for the same reasons. MS has always had a history of treating its customers as the enemy. Jeff suggests that we should learn from MS that the freedom to tinker is important, as is access to our data. These are good points, but another important point is that we need to develop software that does what users want and acts primarily in the best interests of the users. Overall I think that free software is quite well written in regard to acting on behalf of the users.
The issue we have is in determining who the “user” is, whether it’s a developer, sys-admin, or someone who wants to just play games and do some word-processing.

6 March 2008

Anthony Towns: The second half...

Continuing from where we left off… The lower bound for me becoming a DD was 8th Feb ‘98 when I applied; for comparison, the upper bound as best I can make out was 23rd Feb, when I would have received this mail through the debian-private list:
Resent-Date: 23 Feb 1998 18:18:57 -0000
From: Martin Schulze 
To: Debian Private 
Subject: New accepted maintainers
Hi folks,
I wish you a pleasant beginning of the week.  Here are the first good
news of the week (probably).
This is the weekly progress report about new-maintainers.  These people
have been accepted as new maintainer for Debian GNU/Linux within the
last week.
[...]
Anthony Towns <ajt@debian.org>
    Anthony is going to package the personal proxy from
    distributed.net - we don't have the source... He may adopt the
    transproxy package, too.
Regards,
        Joey
I never did adopt transproxy – apparently Adam Heath started fixing bugs in it a few days later anyway, and it was later taken over by Bernd Eckenfels (ifconfig upstream!) who’s maintained it ever since. Obviously I did do other things instead, which brings us back to where we left off…
Geez, this was meant to be briefer...

5 February 2008

Christoph Berg: Mouseover titles

Best(*) Firefox extension ever: Long Titles (Spotted on http://xkcd.org/about/) (*) PS: Of course Open in browser and Generic URL creator are also way cool.

28 January 2008

Martin F. Krafft: Consolidating packaging workflows across distros

I speculate that most of what we do for Debian squares with what others do for their respective distros. Thus, it should be possible to identify a conceptual workflow applicable to all distros, consolidate individual workflows on a per-package basis, and profit from each other. Jonathan let me have the after-afternoon-coffee slot of the Distro Summit for an impromptu discussion on the various workflows used by distros for packaging. The discussion round was very short-notice, and despite the announcement sent to the conference mailing list, only ten people showed up: two people familiar with Fedora versus eight Debianites. Regardless, I think the discussion was successful and fruitful. We were able to identify a one-to-one mapping between the Fedora and Debian workflows, even though we use different techniques: many Debian package maintainers use version control systems to maintain the ./debian directory, and if patch files are stored in ./debian/patches/, then Debian and Fedora both store patch files in a version control repository, which seems awful. Just as I am only one of many who are experimenting with VCS-based workflows for Debian packaging, the Fedora people are also considering the use of version control for packaging. Unlike Fedora, who seem to try to standardise on bzr, I try to cater for the plethora of version control systems in use in Debian, anticipating the impossibility of standardising/converging on a single tool across the entire project. It seems that our two projects are both at the start of a new phase in packaging, a paradigm shift. What better time could there be for us to listen to each other and come up with a workflow that works for both projects? My suggestion currently centres around a common repository for each package across all (participating) distros, and feature branches.
Specifically, given an upstream source tree, modifications made during packaging for a given distro fall into four categories. Given a version control system with sufficient branching support, I imagine having different namespaces for branches: upstream-patches/*, distro/*, rpm/* or debian/*. Now, when building the Debian package, I'd apply upstream-patches/*, distro/*, deb/* and debian/* in order, while my colleague from the Fedora project would apply upstream-patches/*, distro/*, rpm/* and fedora/*, before calling the build tools and uploading the package. There are surely problems to be overcome. Pascal Hakim mentioned patch dependencies, and I can't necessarily say with a clear conscience that my workflow isn't too complicated to be unleashed into the public yet. But if we find a conceptual workflow applicable to more than one distro, it should be possible to implement a higher-level tool to support it. Also, the above is basically patch maintenance, not the entire workflow. Bug tracking system integration is going to play a role, as well as other aspects of daily distro packaging. I'll leave those for a future time. For me, this is the start of a potentially fruitful cooperation and I hope that interested parties from other distros jump on. For now, I suggest my mailing list for discussion. You can also find some links on the Debian wiki.
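The apply-in-order idea above can be sketched in a few lines of shell. This is a hypothetical illustration only, assuming Git as the VCS and assuming branch names (an `upstream` branch, a `build/debian` working branch) that the post does not specify:

```shell
# Hypothetical sketch: assemble a Debian build branch by merging the
# shared branch namespaces in the order described in the post.
# Assumes Git, and assumes a branch named "upstream" exists.
git checkout -b build/debian upstream          # start from upstream sources
for ns in upstream-patches distro deb debian; do
    # merge every branch under this namespace, if any exist
    for b in $(git for-each-ref --format='%(refname:short)' "refs/heads/$ns/"); do
        git merge --no-edit "$b"
    done
done
```

A Fedora colleague would run the same loop with `rpm` and `fedora` in place of `deb` and `debian`; the real difficulty (as the post notes) is ordering and dependencies between patches, which a plain merge loop does not address.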

4 November 2007

Lior Kaplan: Lazarus and fpc in Debian

A friend involved in the Lazarus and Free Pascal Compiler projects told me that they maintain a private repository for their packages, and .deb files for newer versions of Lazarus and fpc are available on SF.net. It’s funny to read about the Lazarus Ubuntu repository while Ubuntu is using the Debian packages through the Universe section, and as far as I noticed these are the same packages. Anyway, I don’t think ignoring Debian gives us motivation regarding these packages (at least not to myself, as I’m not involved with these packages). It seems there’s good will from Carlos Laviola, the fpc package maintainer, and Mazen Neifer from freepascal.org to build the new version for Debian. I think that working together may result in better packages for the project. Looking at Mazen’s changelog reveals that the new version closes 3 bug reports in Debian. But without releasing the source package (or at least the diff.gz file), we can’t really see all the changes that were made. From the changelog, I can also see the private packages don’t use changes done in Debian, meaning they probably have some bugs already fixed in Debian. I see both people are members of the http://bollin.googlecode.com/svn/fpc/trunk/ repository, so what is the problem? It seems to me that a win-win situation is within our grasp with a little effort, which will result in better packages for the fpc community.

4 September 2007

Daniel Baumann: Swiss Voting on OOXML

This is the result of Swiss voting on ISO/IEC DIS 29500, the fast-tracking of the Microsoft Office Open XML file format.
4 screen AG: approval
Accenture AG: approval
ADVIS AG: approval
ALTRAN AG: approval
Baggenstos Wallisellen: approval
Bechtle IT-Systemhaus Thalwil: approval
CIS-Consulting: approval
Comsoft Direct AG: approval
Coris SA: approval
Dr. Pascal Sieber & Partners AG: approval
dynawell ag: approval
Ecma International: approval
ELCA Informatik AG: approval
EPFL Lausanne: disapproval
FSFE Free Software Foundation Europe: disapproval
GARAIO AG: approval
Gysel Ulrich Emanuel: disapproval
H.R. Thomann Consulting: approval
Hewlett-Packard (Schweiz) GmbH: approval
HSW Luzern, Institut IWI: approval
IAMCP Switzerland: approval
IBM (Schweiz): disapproval
Informatikstrategieorgan Bund ISB: approval
isolutions gmbh: approval
itsystems AG: approval
Kull AG: approval
leanux.ch AG: approval
Leuchter Informatik AG: approval
MESO Products: approval
Microsoft Schweiz GmbH: approval
MondayCoffee AG: approval
Namics AG: approval
NEXPLORE AG: approval
Novell (Schweiz) AG: approval
Online Consulting AG: approval
Open Text: approval
PageUp Bern: approval
PC-WARE Systems (Schweiz) AG: approval
Puzzle ITC GmbH: disapproval
SBS Solutions AG: approval
Secunet SwissIT AG: disapproval
SIUG Swiss Internet User Group: disapproval
SKSF: approval
Skybow AG: approval
SoftwareONE: approval
SyGroup GmbH: disapproval
Sylog Consulting SA: approval
Syndrega: disapproval
TheAlternative: disapproval
Trivadis AG: approval
Unic Internet Solutions: approval
usedSoft AG: approval
Verein /ch/open: disapproval
WAGNER AG Kirchberg: approval
Wilhelm Tux (Verein): disapproval
Würgler Consulting: disapproval
Zürcher Hochschule der Künste: disapproval
Total of voting (75% majority required): 43 approval (75.4%); 14 disapproval (24.6%)
The 75% majority was reached by a margin of a single vote. Why did Hewlett-Packard and Novell vote IN FAVOUR of OOXML!?

13 May 2007

Ross Burton: Sound Juicer "Nikki's Growing A Patch Out In The Backyard" 2.19.0

Sound Juicer "Nikki's Growing A Patch Out In The Backyard" 2.19.0 is out. Tarballs are available on burtonini.com, or from the GNOME FTP servers. This is the first release in the 2.19.x development series, after I failed to do anything useful in 2.17.x...

3 April 2007

MJ Ray: Online Banking: What works with GNU/Linux? (10)

Added: Handelsbanken, Vermont State Employees Credit Union, ING Direct US, Monte dei Paschi di Siena, Banca Sella Updated: HSBC, 1822direkt Thanks again everyone. Anyone want to keep a list for their own country? Please?

27 February 2007

Arnaud Vandyck: Fosdem 2007

Organisation: Like last year, I did not help Pascal to manage the devrooms. I'd like to be more helpful, so like a lot of people I made a little donation and got a FOSDEM 2007 t-shirt. The event was even better than last year (even though I think it was also cool last year, except that the wifi was not working). This is the first time since 2004 that I didn't meet Wouter! I hope we'll meet at the next Debian or FOSDEM meeting. Women: Is it me, or are there more and more women involved in open source? This is one of the great pieces of news from this year. read more

25 February 2007

Adam Rosi-Kessel: Grimmelmann and Kozinski on Law

I recently came across two old and unrelated writings about law, both of which are worth reading, especially for people with strong opinions but no formal training. The first is this piece, entitled Seven Ways in Which Code Equals Law (And One in Which It Does Not), by recently-appointed New York Law School professor James Grimmelmann and EFF Legal Director Cindy Cohn. Several observations are particularly appropriate for the slashdot crowd (and, to a lesser extent, certain members of the Debian community and others who grew up on a diet of BASIC, Pascal, and then C and later perl). I especially like this bit about “hacking the law”:
Some people, seeing this connection, and remembering the values of good code, try to improve the legal system by treating it as a computer. People come to me with ideas for hacking the law. The government says that cryptography is a weapon, they say, but the Bill of Rights says we have the right to bear arms. So that means we have a Constitutional right to use cryptography. But the legal system isn't a computer. If you can't convince a judge that what you're proposing is consistent with the values underlying a law, your argument will go nowhere. People go to jail every year because they think they've found a way to hack the Sixteenth Amendment. The income tax is illegal, they say, or, The income tax is voluntary, see, it says so right here, and then they get convicted of tax evasion and sent to jail. We did convince several judges about the Constitutional dimension of cryptography, but the claim started from the values of the First Amendment, not a mechanical reading of its words. It's a category mistake to treat the legal system as just another architecture with its own specialized language. Code and law are different ways of regulating; they have different textures. All of those people who are required to make the legal system work leave their mark on its outcomes: they make a certain amount of drift and discretion almost inevitable. Code doesn't have such a limit: it can make perfectly hard-nosed bright-line rules and hold everyone in the world to them. Code is capable of a kind of regulatory clarity and intensity that law can only state, never really achieve.
I don’t entirely agree with the other article, entitled What I Ate For Breakfast and Other Mysteries of Judicial Decision Making by outspoken Ninth Circuit Judge Alex Kozinski (unofficial site maintained by Aaron Swartz, wikipedia entry). For example, I think critical legal studies has resulted in some interesting insights, some of which actually have practical application. Still, Judge Kozinski makes an important point about the numerous factors that act as a check on discretion in judicial decision making:
It is popular in some circles to suppose that judicial decision making can be explained largely by frivolous factors, perhaps for example the relationship between what judges eat and what they decide. Answering questions about such relationships is quite simple - it is like being asked to write a scholarly essay on the snakes of Ireland: There are none. But as far back as I can remember in law school, the notion was advanced with some vigor that judicial decision making is a farce. Under this theory, what judges do is glance at a case and decide who should win - and they do this on the basis of their digestion (or how they slept the night before or some other variety of personal factors). If the judge has a good breakfast and a good night’s sleep, he might feel lenient and jolly, and sympathize with the downtrodden. If he had indigestion or a bad night’s sleep, he might be a grouch and take it out on the litigants. Of course, even judges can’t make both sides lose; I know, I’ve tried. So a grouchy mood, the theory went, is likely to cause the judge to take it out on the litigant he least identifies with, usually the guy who got run over by the railroad or is being foreclosed on by the bank. This theory immodestly called itself Legal Realism. Just to prove that even the silliest idea can be pursued to its illogical conclusion, Legal Realism spawned Critical Legal Studies. As I understand this so-called theory, the notion is that because legal rules don’t mean much anyway, and judges can reach any result they wish by invoking the right incantation, they should engraft their own political philosophy onto the decision-making process and use their power to change the way our society works. So, if you accept that what a judge has for breakfast affects his decisions that day, judges should be encouraged to have a consistent diet so their decisions will consistently favor one set of litigants over the other. I am here to tell you that this is all horse manure. 
And, like all horse manure, it contains little seeds of truth from which tiny birds can take intellectual nourishment. The little truths are these: Under our law judges do in fact have considerable discretion in certain of their decisions: making findings of fact, interpreting language in the Constitution, statutes and regulations; determining whether officials of the executive branch have abused their discretion; and, fashioning remedies for violations of the law, including fairly sweeping powers to grant injunctive relief. The larger reality, however, is that judges exercise their powers subject to very significant constraints. They simply can’t do anything they well please.
Finally, I will link, but not embed, this video of the Anna Nicole Smith court order, for an entirely different perspective on the legal process. You almost don’t really want to have to see this stuff.
