Search Results: "Sami Haahtinen"

14 June 2010

Sami Haahtinen: Bacula & OpenERP

I've been working on setting up OpenERP for my needs and today I decided it was time to work on backing up the beast. Since I've been running Bacula at home to back up my environment, it was time to tweak it to make reasonable backups of OpenERP too. In the end I was able to build a really elegant solution for backing it all up. I decided to go for the bpipe plugin, which allows one to pipe program output directly to the Bacula file daemon. This allowed me to do a live dump of the database with pg_dump and store it directly in the backup set without writing it to disk. Since the other examples in the Bacula wiki describe methods that use either files or a FIFO to do the backup, I documented my setup there too. The only thing left was to add the OpenERP-specific directories to the backup and I was all set.
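For reference, the FileSet ended up along these lines (a sketch, not my exact configuration; the bpipe plugin string format is bpipe:&lt;pseudo-path&gt;:&lt;backup command&gt;:&lt;restore command&gt;, and the database name is an assumption):

```
FileSet {
  Name = "OpenERP"
  Include {
    Options { signature = MD5 }
    # Live dump piped straight to the file daemon; restore feeds it back to psql
    Plugin = "bpipe:/_openerp/database.sql:pg_dump -U openerp openerp:psql -U openerp openerp"
    # OpenERP's own files and addons
    File = /var/lib/openerp
  }
}
```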

2 September 2009

Sami Haahtinen: SELinux and me...

Once in a while I get the urge to use SELinux on some of the servers I manage, but I almost always run into something that sets me back enough to never finish the project. This time I managed to figure out the last few glitches. In the end, SELinux still has a really steep learning curve, so it's not for the impatient. Even though enabling SELinux in Debian has become a lot easier since the first time I tried to get things running, that's still just the tip of the iceberg. In Debian it is just a matter of installing the right packages and running a few commands, but that's just where the trouble starts. Most of the howtos focus on single-user or shared installations where all users are created locally. Most howtos also fail to mention that you need to relabel files in certain cases. One of the most annoying problems I ran into was moving all non-system users away from the unconfined_u class. This is of course done like this (found here):
semanage login -m -s user_u __default__

The problem here is that it changes the existing users as well, and you start to get errors like these:
avc:  denied  { read } for  pid=32258 comm="bash" name=".profile" dev=dm-5 ino=185474 scontext=user_u:user_r:user_t:s0 tcontext=unconfined_u:object_r:unconfined_home_t:s0 tclass=file

The problem here is that the home directory for the user is still labeled for the wrong class. The fix is to relabel the home. Sadly this is something you just need to know; I haven't found it explained anywhere. Another good thing to do before you continue is to change your already existing user to the staff class. The staff class has somewhat more relaxed security controls, and it lets you change security roles (details here).
semanage login -m -s staff_u myuser
fixfiles relabel /home/myuser

This gets you a semi-working setup. The next problem usually is that some daemons are denied access to parts of your system. For me, it was postfix trying to access my home directory, which is mounted over NFS. For such cases you should persuade the maintainer of the policy package to update the global policy if it's a common use case, or you need to create a policy package of your own that allows access to the given files. The process itself is documented in the audit2allow manual page. In general you should study the audit2why and audit2allow tools: the former tells you if there is an easier way to fix something (like enabling a boolean) and the latter will create the required policy lines. The only remaining problem is to compile the policy and load it, which mostly comes down to finding the right tool and the right lines in the manual. In general the SELinux learning curve is way too steep. It's a system that works pretty well once you learn all the tricks and start to completely understand the toolset. The community should continue working on lowering the bar for new users. There have been some major improvements since I first tried SELinux, so it's moving in the right direction.
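The usual cycle, sketched from the audit2allow manual (the module name is arbitrary; this assumes auditd is logging to /var/log/audit/audit.log):

```
# Check whether an existing boolean would already fix the denial
audit2why < /var/log/audit/audit.log

# Generate a local policy module from the logged denials, then load it
audit2allow -a -M mypostfix
semodule -i mypostfix.pp
```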

23 March 2009

Sami Haahtinen: Configuring split DNS

Since I enabled comments on this blog, I finally needed to configure split DNS for my network. There are various reasons why one needs split DNS and, as is usually pointed out, the reasons are usually non-technical. In my case the reasons are technical: I have a NAT in my local network that allows me to host this website locally. What causes problems is that the domain name points to the external IP address, and that doesn't work from the inside. So split DNS it is. There are various ways of building a split DNS: one can use the views feature in bind9, or one can set up two separate DNS servers that provide different information (and point the local resolver at the internal server). The latter is more secure if the internal zone is sensitive. I decided to use a hybrid solution. I already knew that PowerDNS Recursor is capable of serving authoritative zones (think pre-cached), so I decided to leverage that. Setting this up turned out to be simpler than I expected. First I made a copy of the existing zone and edited it to fit my needs: I changed the address records for the website to point to the IP address on the local network, and adjusted some other entries that pointed to the local network. Next I modified bind to listen on the external IP address only, which can be accomplished by adding a listen-on statement to the options section of the configuration. I also disabled the resolver by adding recursion no;, which forces bind to work as authoritative only. Then I installed PowerDNS Recursor (the pdns-recursor package in Debian) and configured it to listen on the internal address only (with the local-address setting), and added the pre-cached zone to the configuration (with the auth-zones setting). Now, after restarting both daemons, I had working split DNS with minimal configuration. I was also able to change the external DNS to authoritative-only mode, which is a good idea in any case.
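Roughly, the two sides looked like this (the addresses and zone name are placeholders from the documentation ranges, not my actual setup):

```
# bind9: named.conf.options -- external side, authoritative only
options {
    listen-on { 192.0.2.10; };   # external address
    recursion no;
};

# pdns-recursor: recursor.conf -- internal resolver with the pre-cached zone
local-address=10.0.0.1
auth-zones=example.com=/etc/powerdns/zones/example.com
```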

22 March 2009

Sami Haahtinen: XenServer and non-Citrix kernels

For some time I've suffered from the infamous clocksource problem with all Linux hosts that aren't running the Citrix-provided kernels. I'm a bit old-fashioned and I want to run Debian-provided kernels instead of the Citrix ones, mostly because the Debian kernel receives security updates. During the fight with my own server last night, it finally dawned on me. The clocksource problem appears after you suspend a Linux host, and the kernel in the virtual machine starts spewing this:
Mar  5 09:24:17 co kernel: [461562.007153] clocksource/0: Time went backwards: ret=f03d318c7db9 delta=-200458290723043 shadow=f03d1d566f4a offset=143675d9

I've been trying to figure out what is different between the Citrix and Debian kernels, because the problem doesn't occur with the Citrix-provided kernel. The final hint for solving this problem came from the Debian wiki. The same issue is mentioned there, but the workaround is not something I like: I prefer making sure that the host server has the correct time and letting the virtual machine simply follow that time. The real clue was the clocksource line. It turns out that the Citrix kernel uses jiffies as the clocksource by default, while Debian uses the xen clocksource. You would assume the xen clocksource to be more accurate since it's native to the hypervisor, but switching away from it is what helps. Just running this on the domU fixes the problem:
echo jiffies > /sys/devices/system/clocksource/clocksource0/current_clocksource

There is no need to decouple the clock from the host, which is exactly what I needed. To make this change permanent, you need to add clocksource=jiffies to the boot parameters of your domU kernel. You can do this by modifying the grub configuration: add clocksource=jiffies to the kopt line and run update-grub. Or you can use XenCenter and add clocksource=jiffies to the virtual machine's boot parameters. It's also worth noting that this problem applies to plain vanilla Debian installations as well, so reading that whole wiki page is a good idea.
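With the legacy grub in use back then, the menu.lst change is a one-liner (the root device shown is just an example; the kopt line stays commented out, update-grub reads it anyway):

```
# /boot/grub/menu.lst
# kopt=root=/dev/xvda1 ro clocksource=jiffies
```

Then run update-grub and the regenerated kernel entries pick up the parameter.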

5 March 2009

Sami Haahtinen: Debian Xen dom0 Upgrade woes

I finally decided that it's time for me to upgrade my Xen installation. It used to run etch with backported Xen, because the etch version was increasingly difficult to work with. I also acknowledge that some of the issues I've been having were simply caused by yours truly, but even so, the Debian Xen installation is way too fragile for my taste. I've already considered installing XenServer Express locally and running the hosts on it; the big drawback has been that XenCenter (the tool used to manage XenServer) is Windows-only and doesn't work with wine. So you can imagine my desperation... Anyway, the latest upgrade from etch to lenny was painful as usual. The first part went smoothly: a bit of sed magic on sources.list and a few upgrade commands (carefully picking the Xen packages out of the upgrade set). So in the end I had a working lenny installation with backported Xen. Next I made sure that there was nothing major going on in my network (one of the virtual machines acts as my local firewall) and took a deep breath before upgrading the rest of the packages. I knew to be careful with the xendomains script, which had reliably restored my virtual machines into a broken state after reboot, so I had always ended up restarting them anyway. I carefully cleared XENDOMAINS_AUTO and set XENDOMAINS_RESTORE to false in /etc/default/xendomains, so that the virtual machines would be saved but not restored or restarted on reboot. After the normal pre-boot checks I went for it. Oddly enough, everything worked normally and the system came up after a bit of waiting. I checked the bridges and everything appeared normal, so it was time to try to restore a single domain to see that everything actually did work as planned.
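For reference, the relevant bits of /etc/default/xendomains looked like this afterwards (variable names come from the Debian xendomains script):

```
# Save running domains on shutdown, but never restore or
# auto-start them on boot
XENDOMAINS_AUTO=""
XENDOMAINS_RESTORE=false
```

Anyway, on to the restore attempt: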
Hydrogen:~# xm restore /var/lib/xen/save/Aluminium
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.

Oof. Googling for the issue revealed that others had suffered from the same problem on various platforms, and the problems were caused by different things. One would assume that the problem is in the vif-bridge script, which is mentioned in the xend-config.sxp file as the script that brings up the vif, but after many hours of trial and error and pointless googling (over a GPRS connection), I couldn't find any solution to the problem. It was time to call it a day (it was almost 3 am already...). During the night I had a new idea about the possible cause: what if the problem isn't in xend, but somewhere else? I fired up udevadm monitor to see what udev saw, and it wasn't much. I'm not an expert with udev, but from previous encounters I had a vague feeling that there were supposed to be more events flying around. I wasn't able to pinpoint what was wrong, so I decided to purge xen-utils, of which I had two versions installed: 3.2-1 and 3.0.2. I also removed everything related to xenstore. After reinstalling the current versions and restoring my configuration files, the first host came up just fine. I still had problems resuming the virtual machines and I ended up rebooting them again, which was nothing new, but at least they were running again. In the end I don't know what the actual cause was for udev not handling the devices properly, but I'm happy to have them all running again. And I learned a valuable lesson from all this: udev is an important part of Xen, make sure it works properly.

17 February 2009

Sami Haahtinen: Know your upgrades, apt-listchanges

As an obligatory note, Debian Lenny was released earlier today, which means that sysadmins all over the world are starting to upgrade their servers. There is an oddly little-known tool that every sysadmin should install on at least one server they maintain, called apt-listchanges. It lists the changes made to packages since the currently installed version. Sure, that information will be overwhelming on major upgrades, but what is useful even then is its capability to parse NEWS files in the same way. NEWS files contain important information about the package in question: for example, a maintainer can list known upgrade problems there, as is done in the lighttpd package, or changes in package-specific default behaviour, as is done in the vim package. Sure, you will notice these in time, but it's nice to get a heads-up before a problem bites you.
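A minimal configuration sketch; the which setting controls whether changelogs, NEWS entries or both are shown:

```
# /etc/apt/listchanges.conf
[apt]
frontend=pager
which=both
email_address=root
```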

Sami Haahtinen: Xen domU and grub

I've been bitten by grub upgrades and installations on Debian family domU servers. Apparently there are others out there who have been bitten too. The bug itself is caused by a missing device entry, probably because of udev. Anyway, grub-probe tries to discover the root device so that update-grub can properly generate a menu.lst. In certain scenarios the root device itself doesn't exist. Here is an example from a configuration generated with xen-tools:
Hydrogen:/etc/xen# grep phy Neon.cfg 
disk    = [ 'phy:Local1/Neon-disk,sda1,w', 'phy:Local1/Neon-swap,sda2,w' ]

While this is a valid configuration, the device sda doesn't exist within the virtual machine. As a workaround, the above blog entry suggests manually adding the sda device node and the corresponding device entry. This solution does work, but it will fail with the next upgrade. The proper solution is to adjust the Xen configuration so that the root device is created. And since Xen uses a different naming scheme for its devices, we can upgrade to that too. So the above example becomes:
Hydrogen:/etc/xen# grep phy Neon.cfg 
disk    = [ 'phy:Local1/Neon-disk,xvda,w', 'phy:Local1/Neon-swap,xvdb,w' ]

You also need to adjust the existing grub configuration and fstab within the domU. It's a bit more work and requires an additional reboot, but it gives you peace of mind that the next upgrade will work without a hitch.
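For the example above, the domU's /etc/fstab ends up along these lines (filesystem type and options are assumptions; note that with xen-tools the filesystems sit directly on the exported devices, so there are no partition numbers):

```
/dev/xvda  /     ext3  defaults  0  1
/dev/xvdb  none  swap  sw        0  0
```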

25 January 2009

Sami Haahtinen: Purging postfix queue

Since I keep ending up in situations where I need to clean the postfix queue of mails sent by a single host, and always forget the command, I'm posting it here. Maybe someone else will find it useful as well. To begin with, you need to determine the IP address of the culprit you want to eliminate. How you do this is up to you; grepping logs or examining the files in the queue both work. For some reason there doesn't appear to be a good tool to get statistics on the sending IP addresses, only the origin and destination domains. Once you have determined the IP address you want to purge, you can use the following spell (with 192.0.2.1 standing in for the culprit's address). You might have to repeat the same line for the active and incoming queues as well, but deferred is usually the queue where I have the most mail.

grep -lrE '[^0-9]192\.0\.2\.1[^0-9]' /var/spool/postfix/deferred | xargs -r -n1 basename | postsuper -d -

It's important that the dots in the IP address are escaped, because an unescaped dot matches any character; in the worst case the pattern would match a lot of wrong IP addresses. The other important bits are the [^0-9] groups at both ends of the pattern, which make the expression match only that particular IP address. Without that extra limitation the pattern would also match any address that merely contains it, such as 192.0.2.15. The other important, yet oddly unknown, bit is the postsuper command. Postsuper modifies the queue, and the -d flag makes it delete mails from the queue by queue ID. For some reason I keep on seeing all sorts of find -exec rm spells all over, which isn't really that nice for the daemon itself. So here it is, one more tidbit I've been meaning to write up for quite some time now. Enjoy!
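To see the boundary matching in action, here is a disposable demo with fake queue files (192.0.2.1 is again a placeholder address; real queue files live under /var/spool/postfix):

```shell
# Create two fake queue files; only one mentions the exact IP
rm -rf /tmp/qdemo && mkdir /tmp/qdemo
printf 'client=unknown[192.0.2.1]\n'  > /tmp/qdemo/4F2A31
printf 'client=unknown[192.0.2.15]\n' > /tmp/qdemo/4F2A32

# The [^0-9] boundary groups keep 192.0.2.15 from matching
grep -lrE '[^0-9]192\.0\.2\.1[^0-9]' /tmp/qdemo | xargs -r -n1 basename
# prints 4F2A31
```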

24 January 2009

Sami Haahtinen: New tricks

"You never call, you never write. I hardly know you anymore." Yes, I've been meaning to write up on several things. For some time now I've been a happy VIM user, and a while back I ran into a blog post where someone mentioned a new feature they had found in VIM, which got me to explore the vim-scripts package. There are a lot of scripts out there that extend VIM far beyond what it can do by default, and it's quite powerful even without the scripts. One of the neat little scripts I decided to install by default was surround; it allows one to easily replace surrounding parentheses, tags or quotation marks. There are a lot of scripts in the vim-scripts package, but it's not always clear how to enable them. That's where vim-addon-manager comes into play: it provides a vim-addons command that allows you to easily enable or disable the scripts. I'm still trying to grasp the full potential of all the new commands available, but it certainly appears that I'll be having even more fun writing stuff. It's kind of odd: at first, when you start to use vi-like editors, you struggle, but in the end it's just such a convenient way of editing files that it really does grow on you.
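For reference, enabling a script with vim-addon-manager goes roughly like this (surround as the example; command names from the Debian package):

```
vim-addons list              # show available scripts and their status
vim-addons install surround  # enable the script for the current user
```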

8 January 2009

Sven Mueller: About Usability

Sami Haahtinen wrote a nice post about usability. I mostly agree with him, except for one little thing: a few years back I noticed that I often started as a kind of Joe User with new applications, and often stayed that way, wishing that the UI of the application was simpler or at least made it more obvious which options are important and which I could most likely keep at their defaults.
Yet I am a system administrator and programmer/developer, so most people would probably consider me an advanced user by default (whether that is correct in any given context or not; for example, I still don't really grok GIMP).
I have also had quite a lot of contact with users who have very little experience, knowledge or interest in computers and software, and only use them to achieve certain goals. Anyway, I'm quite sure most readers of this blog are aware that Linus Torvalds once complained about the Gnome printing dialog being simplified way too much. And at the time I had to agree (note that I haven't checked out Gnome again since those days, at least 2 years ago). What I really wonder (especially since I noticed this in some of my own applications) is why it is so hard for developers to add some additional checkbox or button that enables/disables advanced options. For example, let's take an application's printing dialog. By default it would only allow the selection of a printer, orientation (if sensible) and (if the selected printer supports more than one) the paper size. Advanced options might include duplex printing (automatic or manual), printing multiple pages on a single sheet of paper, printing in grey on color printers, and so on. These options might be of use to anyone, but would most likely confuse beginners. So what I did in most of my little applications was to show just the basic options and add an "advanced options" button which provided access to additional functionality that was not normally needed. As for the issue with Firefox updates mentioned by Sami: I don't think it is a problem that Firefox asks what to do with extensions. For one, it only lists extensions which are installed but claim not to support the new version of Firefox. Second, practically everyone I know who uses any extension can be considered an advanced user. And third: the dialog is pretty straightforward about what it is asking, and when I once hit it while a "dumb user" friend was sitting with me, he understood what it was about quite easily (I didn't explain the dialog, just what extensions were). So my idea of a good UI is:
As simple as possible for beginning users, but allow advanced users to get to the details ("Details" or "Advanced options" buttons/checkboxes/dialogs).

4 January 2009

Sami Haahtinen: About Usability

Some people consider the Gnome usability guidelines a nuisance and some consider certain applications way too simplistic. While it is really hard to get usability right, it's well worth it. We need to keep in mind that as computer-oriented people we tend to see things differently: things that are simple to us aren't really that simple to "normal people". One of the simple things we can do to ensure that the software we write serves the people it's designed for is to remove all unneeded pop-ups and questions. A good way to detect these is to ask yourself why a user would choose anything other than the most logical option. It's kind of hard to explain, so let's pick an example. When Firefox is updated, the first question it usually asks after the upgrade is about incompatible extensions. The user is presented with choices: check for updates, or cancel. Now, we are dealing with an internet browser, so the user should be connected to the internet; there is no problem with checking for updates, so we can rule out that scenario. The only other scenario I can come up with is that a developer doesn't want to update some particular extension. So I can't come up with any reasonable scenario where someone would want to select anything but the main option of upgrading the extensions. Why not just leave out the question and do the upgrade automatically? If you wish to be transparent, you can show the user that you are doing the upgrade, or you can do like some applications and just do it without bothering the user with options. I know I sound like a Google fanboy, but Google generally gets this right: their applications skip all upgrade-related notices and just do the upgrade. The regular user doesn't want to upgrade because the user has been scared with incompatibility notices and upgrade checklists for so long; just going ahead with the upgrade in complete silence keeps their software up to date as well.
Another example is from a few years back: the Ubuntu installation. Back in the day Debian was working on the Debian Installer, which is also used as the main installer on the Ubuntu alternative installation media. The Debian Installer is capable of doing most things silently, but in Debian it asks a lot of questions by default. That doesn't matter much, since most people who install Debian can be categorized as developers. But in my opinion, the thing that made Ubuntu a success was that it doesn't ask the questions that can be answered without asking the user. So, back to usability. There are basically two camps, the "normal" users and the developers. Developers want and need to see a lot of the backend behaviour, just to debug problems. Currently a lot of open source software is focused towards developers, even as it gains ground among the "normal" population as well. We should start focusing on the users for a change.

29 October 2008

Sami Haahtinen: OpenSSL is way too complicated

It's no wonder people hate dealing with certificates; I'm one of those people who really hate handling them. For some reason Linux lacks simple tools to manage certificates. Debian has one set of tools and I bet quite a few other distributions have their own, so nothing generic. In my opinion the problem lies in OpenSSL, which is an overly complicated piece of software. Sure, it's able to encrypt your fries at the local fast food place, but most people never use anything more than the x509 module, and even that is complicated. I'm not saying it should be a point-and-click operation, but what I want is a tool that allows me to create, renew and verify SSL certificates, which is pretty much the most common thing you do with OpenSSL. Now, let's assume you have a certificate and a key and you want to check whether they match. Can you say off the top of your head how to do that? If so, you are either dealing with this stuff daily or you looked it up in a manual. The correct "magical mumbo jumbo" is:
server:/tmp# openssl rsa -noout -modulus -in /etc/ssl/private/some.key | openssl md5
server:/tmp# openssl x509 -noout -modulus -in signed.crt | openssl md5

Now, how easy was that? It only took me 10 minutes to construct that line, and most of the time went to searching the Zimbra scripts for the correct magical incantation. The reason I went through the scripts instead of the manual is that I had already seen the scripts do an appropriate check, and there is no way I could have constructed that in that kind of time frame just by using the manual. Usually I think people complain too much when they can't figure out how to make ls or something work the way they want; a certain degree of manual reading is good for you, but in this case it's too much. In any case, I'm posting this rant as a reminder to myself so that next time I can just look it up from here.
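And since this is the kind of thing best verified, here is a self-contained version of the check with a throwaway key and self-signed certificate (the paths and subject are placeholders):

```shell
# Make a throwaway key and a matching self-signed certificate
openssl genrsa -out /tmp/demo.key 2048 2>/dev/null
openssl req -new -x509 -key /tmp/demo.key -out /tmp/demo.crt -days 1 -subj /CN=demo

# The moduli hashes agree exactly when the key and cert belong together
k=$(openssl rsa -noout -modulus -in /tmp/demo.key | openssl md5)
c=$(openssl x509 -noout -modulus -in /tmp/demo.crt | openssl md5)
[ "$k" = "$c" ] && echo match
# prints match
```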

20 August 2008

Sami Haahtinen: Setting up an OpenVPN tunnel

Recently I fixed my OpenVPN tunnel and figured out what was wrong with my bridge setup. Here is the complete setup that I have.

Server

I wanted to have as much of the configuration on the server as possible, so that I could easily add more clients and wouldn't need to update the client configuration whenever the server preferences change. Here is my OpenVPN server configuration:
mode server
dev tap0
ifconfig-pool-persist /var/run/openvpn-ip.txt
keepalive 10 60
push "route"
push "dhcp-option DNS"
ca /root/easy-rsa-2.0/keys/ca.crt
key /root/easy-rsa-2.0/keys/
cert /root/easy-rsa-2.0/keys/
dh /root/easy-rsa-2.0/keys/dh1024.pem
up /etc/openvpn/

and the up script:
# Bind the tunnel interface to the bridge
BRIDGE=br0
ifup $1
brctl addif $BRIDGE $1

There is nothing really special about that configuration. The server is in TLS mode and configures a bridge. The keys are generated with easy-rsa by following an OpenVPN howto entry. The up script just binds the tap0 device to the network bridge after bringing up the device. Next I created the interface configuration by adding the following to /etc/network/interfaces:
iface br0 inet static  
 bridge-ports eth0

The trick here is to create a single bridge with just the eth0 device; we use the up script in openvpn to add the tunnel device to the bridge. Otherwise the bridge would never contain the proper devices.

Client

As for the client, you simply set it to use the CA certificate and host key created with easy-rsa, and set the hostname and tunnel type. The tunnel type is assumed to be a tun device instead of tap, so in my case I needed to change that too. There is no need to tell the client anything else; everything else will be negotiated through the tunnel. And since I use NetworkManager to set up my tunnels, I didn't need to drop into a shell even once on the client.
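A matching client configuration would look roughly like this (the server name and key file names are placeholders, not from my setup):

```
client
dev tap
remote vpn.example.com 1194
ca ca.crt
cert client.crt
key client.key
```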

Sami Haahtinen: Dynamic network configuration

I've been meaning to write up on this for quite some time now, but there is always something seemingly more important to do. Most modern Linux distributions offer ways to manage network interfaces via some kind of abstraction. Usually this abstraction allows one to dynamically rename and add network interfaces. In the Debian family this management is done with ifupdown, while in the RedHat family it's sysconfig. In addition there is NetworkManager, which is a cross-platform solution for dynamic network configuration (and eliminates the need to rename interfaces).

Case XenServer

Usually this comes in rather handy, but on other occasions it can be a pain. A while back I was helping out with the installation of a XenServer instance. This server had, for some reason, its ethernet interfaces reversed in comparison to the other XenServers on the site. Luckily the service console keeps the network configuration in ifcfg scripts, so we were easily able to reverse the interfaces by binding them to certain hardware addresses. The only problem is that an interface is renamed only when it is brought up. What's worse, the interfaces were enumerated before the server would bring up all of them; only eth0 was brought up for management purposes before enumerating. This means that the original eth0 was renamed to _tmp_xxxxxx. The (oh so elegant) solution was to create a script that does ifup eth1; ifdown eth1. Problem solved.

Case tap0

I also experienced similar problems when I was setting up my OpenVPN tunnel. I wanted to use a bridged connection to my network, but for that I needed to create a br0 interface with tap0 as a member. It's easy enough to create an ifupdown configuration to set up tap0; hook that up to br0 and you are all set. But the same problem will bite you here: the interface tap0 is actually created only when tap0 is brought up, and br0 members are added only if they are present. The solution?
In my case, to create a custom script that adds the device manually after it's been created properly. I was unable to find anything that works without using a custom script =(

In retrospect

Looking back at these cases, I should have known better. The problem was obvious, and it took me way too long to figure out the cause of my problems. Then again, the utilities should be able to maintain a reasonable abstraction of the actual status of the whole system for themselves. That way the trickery needed for setting up a simple interface would be obsolete. You can't have it all, but you can always hope.
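The XenServer fix amounts to pinning each interface to its MAC address in the ifcfg scripts; something along these lines (placeholder MAC address):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:16:3e:00:00:01
ONBOOT=yes
```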

5 August 2008

Sami Haahtinen: Locales, recap

After my post yesterday about locales and my problems, I got some comments. Bryan, Simon and Wouter all pointed out what I eventually figured out too, but didn't mention in the post (I actually had to re-read the post to see what I wrote): that LC_ALL overrides all of the settings. Even though I figured it out eventually, with these comments it finally dawned on me that its purpose is actually to temporarily override the locale. Another thing that didn't occur to me while figuring out the correct locale for my system was that I was thinking about it all wrong. Since I've already gotten used to broken locales, I kept thinking that I want the en_US locale with some Finnish settings, but in reality I wanted the fi_FI locale with the English language. I have to admit that I was pretty sceptical about the solution but decided to try it out. And to my surprise, it actually worked! So the final solution is this:

LANG="fi_FI.UTF-8"
It is a pretty clean solution and I'm happy to live with it. And as Simon pointed out, administrator changes should be preserved through upgrades; if not, one should file a bug about it. There we go, another problem solved. Thanks for the comments and suggestions.

4 August 2008

Sami Haahtinen: The pain of locales

Linux has this great thing called locales. You can basically control everything through a few system variables. The system is so flexible that you could in fact have the English language with Swedish dates and Chinese error messages. The problem with the current system is that it's mostly controlled with one variable. Just about everyone I know (including me) sets the system locale to en_US because they don't want to use their local language as the system language. This in turn causes new problems, mostly with the metric system used in the sane countries. I finally got around to figuring out how to get the system to speak my language. The key here is the file /etc/default/locale, which on Debian-based systems contains the locale settings. Usually there is just one line in that file:

LANG=en_US.UTF-8

Since I live in Finland and want my computer to speak English, I can add the following lines:

LC_NUMERIC="fi_FI.UTF-8"
LC_PAPER="fi_FI.UTF-8"
LC_NAME="fi_FI.UTF-8"
LC_ADDRESS="fi_FI.UTF-8"
LC_TELEPHONE="fi_FI.UTF-8"
LC_MEASUREMENT="fi_FI.UTF-8"
This replaces the settings for numerals (decimal separator and such), paper (yes, the rest of the world uses A4), name, address, telephone and measurement (YAY, metric system!). This way I have a nice English-speaking system with Finnish settings. I tried setting LC_ALL to fi_FI.UTF-8, but that causes Gnome to speak Finnish to me, even though the LANGUAGE and LANG settings are set to English. LC_TIME is something I'd like to use, but I find the Finnish abbreviations for weekdays and months confusing. LC_MESSAGES causes Gnome to talk partially Finnish: the general locale is English as it's supposed to be, but for example gnome-panel changes the menu entries to Finnish. I wish it was easier to set these settings. I also doubt that various tools know how to respect them, and they will overwrite the file with the default setting; that is why I'm writing this entry, so that I remember how to fix it when that happens. In the end the system is flexible, but it's built in a funny way. The actual locale value is built by combining two or three parts: language, country and possibly encoding. So to make things easy for myself I would have to declare an en_FI.UTF-8 locale and start translating applications to it. I don't want to, so I'm sticking to this "temporary" solution.

3 August 2008

Sami Haahtinen: Lazyweb, I asked about dreambox

A while back I wanted to know about a replacement for my Dreambox, and I was also asked to write up a summary of the replies. Initially I assumed that my Dreambox 7025C was failing: it showed a classic symptom, a progressively increasing error rate, which wasn't showing up at my neighbors'. Since we share the cable they should have seen the problems too, so I ordered new tuners for the box, and the problem persisted. It turns out that the problems were initially showing on channels that my neighbors weren't watching, or in such small amounts that they didn't pay attention to them. In the end it was a problem in the feed and my box is just fine. I now have 2 brand new DVB-C tuners for the 7025 (anyone want to buy them?) and a perfectly working setup (YAY!). In a way I feel lucky that the box wasn't failing on me.

Nobody suggested any complete solutions, but I was able to spot ReelBox Avantgarde as a possible alternative. It runs Linux like I want it to, but it's way too expensive. Another alternative could have been Maximum 8000, which appears to be a re-branded Marusys C-8000. It has Linux as the OS, but the community side appears to be rather lacking. Then again, it's a pretty new device...

VDR and MythTV were suggested as the DIY solutions. I dismissed MythTV without even looking into it. I must admit that it's been quite a few years since I tried MythTV, but back then it was way too unstable for my taste, and I wasn't too happy about its architecture either. VDR is something I could have tried. The downside appears to be that it lacks in hardware support. The last time I was building my own PVR I ran into the common problem of getting decent output to the TV. VDR folks have traditionally solved this by using a HW decoder card to output the stream directly to the TV. Using VDR with a budget DVB card is possible, but not recommended because of the CPU power needed to decode the stream.
I've always considered VDR to be the more sophisticated solution of the two, but it is starting to sound like a lot more hassle than it's worth. Also, the DIY solutions are hard to get into a decent form: usually the cases are bulky and custom modifications are required. I'm not one of those guys who install neon lights in their computers to make them look cool, but if I'm having it in my living room I expect it to look decent. Take a look at the VDR boxes section on the website for examples. While I admit that it's up to me to build a decent box, I'm not willing to spend too much time hand picking every single component so that it fits a casing and works with the given application.

Finally, I'd like to thank all the people who sent me comments. Even though I ended up sticking with the current setup, it was a valuable look into what is out there. The Dreambox I have doesn't have an HDMI output and lacks in a few areas, so keeping that in mind, I'm eventually going to take this same look into the alternatives when picking a replacement for the current box. Let's hope VDR, MythTV and Elisa (to name a few) continue to evolve and the video output properties of Linux keep getting better, so that eventually I'll have the possibility of replacing my current setup with something even more flexible.

28 July 2008

Sami Haahtinen: Dear Lazyweb, about dreambox

For almost 2 years I've been a happy Dreambox user. While Dreambox has a lot of good features, the best being that it's based on Linux and people are actually encouraged to create their own firmwares and modify it, it lacks in hardware. All products suffer from some kind of disaster; for the 7025 series it was the power supply, which will fail (it's only a matter of when). And now it looks like the rest of the hardware is failing too. So, "Dear Lazyweb", I'm asking if there is something similar to Dreambox in terms of flexibility and features. I'm mostly looking for an active community and a solid device. HD capability is a plus, and it must have DVB-C receivers (2). I'm not looking to build my own box, but if there is a reasonable framework for that I will look into it too. In general I'm looking for a box that is flexible and has solid hardware in the "(sane + a bit) < insane" price range. If you happen to know of such a device or framework, drop me a note at (works with e-mail, jabber and msn, first 2 preferred =)

3 July 2008

Sami Haahtinen: Overcommit, your personal friendly salesperson

I just have to say it: memory management is utterly broken in Linux. Luckily it's not as broken as it used to be. Without going into too much detail, there is a feature called overcommit in Linux. What this means is that it will over-allocate memory to applications. At first thought it sounds insane. Why would one want to over-allocate memory? It's like eating your cake and having it too. Well, there is a reason why this is done. If you think about resource allocation in general, there are a lot of places where this is being done successfully. For example, ISPs always oversell their capacity, because most of the users use a fraction of the resources allocated to them. That is why guaranteed capacity is so much more expensive: they literally allocate it for you (most of the time). So in a way memory management is like your friendly salesperson from Linux corp. who is happy to sell you whatever you want because he needs the deal for his own bonus.

So it is in fact reasonable to overcommit memory? Bzzt! Wrong! There are always problems with over-allocating. ISPs, for example, monitor the usage of their resources, and when a certain limit is reached they get more resources. With memory it's not so easy: you have a finite amount of memory in your computer. Swap comes to the rescue to a certain degree, but it is critical for applications to know their limits. If you have 1 MB of RAM available for the application, let the application know. In most cases the application will crash, because it doesn't know how to deal with the out of memory situation, but this is a side effect of overcommit: programmers don't run into these kinds of problems, so they never have to deal with them. So what happens when you hit the limit? In Linux there is this thing called the oom-killer. The oom-killer has just one purpose: to kill processes. While the oom-killer has some clever heuristics built into it, it will most likely end up killing the wrong program.
So it's like a big game of core wars on your computer.
Generating locales...
  en_AU.UTF-8... done
  en_BW.UTF-8... done
  en_CA.UTF-8... done
  en_DK.UTF-8... /usr/sbin/locale-gen: line 238:  1971 Killed                  localedef $no_archive --magic=$GLIBC_MAGIC -i $input -c -f $charset $locale_alias $locale
  en_GB.UTF-8... done
  en_HK.UTF-8... done
This is what happens on one of my hosts. There are not that many services running on this host, and for some reason the right process gets killed. After turning overcommit off:
Generating locales...
  en_AU.UTF-8... up-to-date
  en_BW.UTF-8... up-to-date
  en_CA.UTF-8... up-to-date
  en_DK.UTF-8... done
  en_GB.UTF-8... done
  en_HK.UTF-8... done
Apparently locale generation is one of the things that is capable of handling memory the way it's supposed to be handled. So how do you turn off overcommit? Sysctl is the way! The easiest way is to add the following to /etc/sysctl.conf and run "/etc/init.d/ force-reload":
vm.overcommit_ratio = 100
vm.overcommit_memory = 2
The second line is the magic one which turns off overcommit. Insanely enough, the other options are 0 (default) and 1 (always overcommit), which just doesn't make any sense: if the default is to do overcommit, why would I want to set it to always overcommit? The first line defines the maximum amount of memory a program can allocate. This is explained better in a post on fedora-devel. Even with these changes overcommit isn't completely disabled, but things are a lot better. While overcommit was built because it makes sense, it doesn't work. Let's hope things get even better in the future.

Update: Arthur de Jong pointed out that in certain situations overcommit is needed, like when there is a large application running on a server that would otherwise prevent further memory allocations (thus disabling remote access and so on). That is true; I'm not trying to say that overcommit isn't useful in some cases. I'm just saying that it's a disaster waiting to happen, unless you have fine-tuned the environment just right.
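As a quick sanity check, the current policy can be read back straight from procfs. This is a small sketch (Linux only); the sysctl -w lines are left as comments since applying them needs root:

```shell
#!/bin/sh
# Read the current overcommit policy from procfs.
# 0 = heuristic overcommit (the default), 1 = always overcommit,
# 2 = don't overcommit beyond the configured ratio.
mode=$(cat /proc/sys/vm/overcommit_memory)
echo "vm.overcommit_memory is currently: $mode"
# With root, the settings above can be applied without a reboot:
#   sysctl -w vm.overcommit_memory=2
#   sysctl -w vm.overcommit_ratio=100
```

The same values also live under /proc/sys/vm/, so echoing into those files works too, but sysctl.conf is what survives a reboot.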

5 March 2008

Sami Haahtinen: Handling upstream packaging changelogs

For a while now I've been pondering how to handle a package that has Debian packaging in the upstream tarball. The upstream packaging is pretty much written by me, so there is no duplication of work. The big problem is the packaging changelog: whenever a new upstream package is released the changelog changes, and since I keep my packaging in a VCS this causes conflicts, and I end up manually merging the two changelogs. The problem is, how should I handle this?

I'm starting to think that I should move to the Ubuntu style: just include my changes to the upstream Debian packaging in the last changelog entry and carry that along with the Debian package. The other options are to fork the changelog into changelog.upstream and my own, which would cause me to lose the upstream changes (if any) unless I mirror them in my own changelog; or to split the Debian packaging from upstream completely and just keep my own packaging instead, which would make it harder for me to provide upstream the changes I've made to the packaging; or to just try and cope with merging the changelogs manually. It might be easiest to just keep the changes to the upstream packaging in the latest changelog entry, since it should be quite obvious to me what I've changed. The downside is that the upstream changelog isn't really that informative, and people using apt-listchanges get to see the same changes over and over, since some of the changes are kept without being included in the upstream packaging.
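To illustrate the Ubuntu-style option, a single changelog entry carrying the whole local delta might look something like this sketch (the package name, version and listed changes are made up for the example):

```text
foo (1.2-1local1) unstable; urgency=low

  * Merge with upstream packaging; remaining local changes:
    - debian/control: adjust Build-Depends for my environment.
    - debian/rules: enable an extra build option.

 -- Your Name <you@example.org>  Wed, 05 Mar 2008 12:00:00 +0200
```

The same entry would then be rewritten on top of each new upstream release, instead of accumulating a history that conflicts with the upstream changelog.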