Search Results: "madduck"

30 January 2015

Laura Arjona: Going selfhosting: Installing Debian Wheezy in my home server

It was in my mind to open a new series of articles on the topic of selfhosting, because I really believe in free-software-based network services, and for a long time I have wanted to plug in a machine 24/7 at home to host my blog, microblog, MediaGoblin, XMPP server, mail and, in short, all the services that I now trust to very kind third parties who run them with free software, but which I know I could run myself (and offer to my family and friends). Last September I bought the domain larjona.net (curious, they say "buy" but it's a rent, for 1, 2, 3 years, never yours. Another post is pending about my adventures with the domain name, dynamic DNS, and SSL certs!) and I bought an HP Microserver G7 N54L, with 2 GB RAM. It had a 250 GB SATA hard disk and I bought 2 more SATA hard disks, 1 TB each, to set up a RAID 1 (mirror). Total cost (with keyboard and mouse): 300 €. A friend gave me a TFT monitor that was too old for him (1024×768) but it serves me well (it's a server, no graphical interface, and I will connect remotely most of the time).

Installing Debian stable (wheezy)

I decided to install Debian stable. Jessie was not frozen yet, and since it was my first non-LAMP server install, I wanted to make sure that errors and problems would be my errors, not issues of the not-yet-released distro. I thought about installing YunoHost or some other distro prepared for selfhosting, but I have never tried them, and I don't have much free time, so I decided to stick with Debian, my beloved distro, because it's the one I know best and I'm part of its awesome community. And maybe I could contribute back some bug reports or documentation. I wanted to try a crypto setup (just for fun, just to learn, for its benefits, and to be one more freecrypto-tester in the world), so after reading a bit:

https://wiki.debian.org/DebianInstaller/SataRaid
https://wiki.archlinux.org/index.php/disk_encryption
http://madduck.net/docs/cryptdisk/
http://linuxgazette.net/140/kapil.html
http://smcv.pseudorandom.co.uk/2008/09/cryptroot/
http://www.linuxquestions.org/questions/linux-security-4/lvm-before-and-after-encryption-871379/ and some other pages, and trying some different things, this is the setup that I managed to configure. Everything went well. Yay!

Some doubts and one problem

Everything went quite well except for some doubts. After talking about these issues with friends (and in the debian-women IRC channel), I decided to install the non-free driver, just in case, with the same reasoning as with the RAID: let the card do the job, so the CPU can take care of other things. Again, I notice that learning a bit about benchmarking (and having some time to do some tests) would be nice. And now, the problem: I set this problem aside and went on installing the software. I would think later about what to do.

Installing MediaGoblin

The most urgent selfhosting service, for me, was GNU MediaGoblin, because I wanted to show my server to my family at Christmas, and upload the pictures of the babies and kids of the family. And it's a project where I contribute translations and of which I am a big fan, so I would be very proud to host my own instance. I followed the documentation to set up 2 instances of GNU MediaGoblin 0.7 (the stable release at the time), with their corresponding PostgreSQL databases. Why two instances? Well, I want one instance to host and show my videos and images, and to replicate videos that I like, and a private one for sharing photos and videos with my family. MediaGoblin has no privacy settings yet, so I installed separate instances, and I put the private one on a different port, with a self-signed SSL cert, and enabled HTTP authentication in Nginx, so only authorized Linux users of my machine can access the website. Installing MediaGoblin was easier than I thought. I only had some small doubts about the documentation, and they were solved in the IRC channel. You can access, for example, my user profile on my public instance, and see some of the files that I have already uploaded. I'm very happy!!
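For readers curious what such a private instance can look like, here is a minimal Nginx sketch combining a self-signed cert on a non-standard port with HTTP basic authentication. The port, hostname, paths and backend address are illustrative assumptions, not Laura's actual configuration:

```nginx
# Sketch of a private MediaGoblin vhost: self-signed cert plus HTTP basic
# auth, on a non-standard port. All names and paths here are illustrative.
server {
    listen 8443 ssl;
    server_name example.net;

    ssl_certificate     /etc/ssl/private/mediagoblin-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/mediagoblin-selfsigned.key;

    auth_basic           "Family photos";
    auth_basic_user_file /etc/nginx/mediagoblin.htpasswd;

    location / {
        # the MediaGoblin (paster) process listening locally
        proxy_pass http://127.0.0.1:6543;
    }
}
```

With a setup like this, only users listed in the htpasswd file get past Nginx, regardless of what the application itself supports.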
Face to face with the bug, again

I had to solve the problem of the password not being accepted on reboots. I began to think that it could be a bug in cryptsetup. Should I upgrade the package to the version in wheezy-backports? Jessie was almost frozen; maybe it was time to upgrade the whole system, to see if the problem was solved (and to see whether my MediaGoblin would work in Jessie. It should work, it's almost packaged! But who knows). And if it didn't work, maybe it was time to file a bug. So I upgraded my system to Debian Jessie. And after the upgrade, the system didn't boot. But that's the story of another blog post (which I still need to finish writing; don't worry, it has a happy ending, as you can see by accessing my MediaGoblin site!). Comments? You can comment in this pump.io thread.
Filed under: My experiences and opinion Tagged: Debian, encryption, English, libre software, MediaGoblin, Moving into free software, N54L, selfhosting, sysadmin

6 June 2014

Gunnar Wolf: What defines an identity?

I must echo John Sullivan's post: GPG keysigning and government identification. John states some very important reasons for people everywhere to verify the identities of those parties they sign GPG keys with in a meaningful way, and that means not just trusting government-issued IDs. As he says, it's not the Web of Amateur ID Checking. And I'll take the opportunity to expand, based on what some of us saw in Debian, on what this means. I know most people (even most people involved in Free Software development; not everybody needs to join a globally-distributed, thousand-people-strong project such as Debian) are not that much into GPG, trust keyrings, or understand the value of a strong set of cross-signatures. I know many people have never been part of a key-signing party. I have been to several. And it was a very interesting experience. Fun, at the beginning at least, but quite tiring by the end. I was part of what could very well constitute the largest KSP ever, at DebConf5 (Finland, 2005). Quite awe-inspiring: we were over 200 people, all lined up with a printed list in one hand and our passport (or ID card for EU citizens) in the other. Actually, we stood face to face, in a ribbon-like ring. And, after the basic explanation was given, it was time to check ID documents. And so it began. The rationale of this ring is that every person who signed up for the KSP would verify each of the others' identities. Were anything fishy to happen, somebody would surely raise a voice of alert. Of course, the interaction between every two people had to be quick, more like a game than a real check. "Hi, I'm #142 on the list. I checked, my ID is OK and my fingerprint is OK." "OK, I'm #35, I also printed the document and checked both my ID and my fingerprint are OK." The passport changes hands, the person in front of me takes the unique opportunity to look at a Mexican passport while I look at a Somewhere-y one. And all is fine and dandy.
The first interactions do include some chatter while we gather speed, so maybe a minute is spent. Later on, we all get a bit tired, and things speed up a bit. But anyway, we were close to 200 people. That means we surely spent over 120 minutes (2 full hours) checking ID documents. Of course, not all of that time under ideal lighting conditions. After two hours, nobody was checking anything anymore. But yes, as a group where we trust each other more than most social groups I have ever met, we did trust others to raise the alarm were anything fishy to happen. And we all finished happy and went home with a bucketload of signatures. Yay! One year later, DebConf happened in Mexico. My friend Martin Krafft tested the system, perhaps cheerful and playful in his intent, but the flaw he unveiled in key signing parties such as the one I described was huge: people join the KSP just because it's a social ritual, without putting any thought or judgement into it. And, by doing so, we ended up diluting instead of strengthening our web of trust. Martin identified himself using an official-looking ID. According to his recount of the facts, he did start by presenting a German ID and later switched to this other document. We could say it was a real ID from a fake country, or that it was a fake ID. It is up to each person to judge. But anyway, Martin brought his Transnational Republic ID document, and many tens of people agreed to sign his key based on it. Or rather, based on it plus his outgoing, friendly personality. I, at least, knew perfectly well who he was, after knowing him for three years already. Many among us did. Until he reached a very diligent person, Manoj, who got disgusted by this experiment and loudly denounced it. Right, Manoj is known to have strong views, and using fake IDs is (or, at least, was) outside his definition of fair play.
Some time after DebConf, a huge thread erupted questioning Martin's actions, as well as questioning what we trust when we sign an identity document (a GPG key). So... We continued having traditional key signing parties for a couple of years, although more carefully and with more buzz regarding these issues. Until we finally decided to switch the protocol to a better one: one that ensures we do get some more talk and inter-personal recognition. We don't need everybody to cross-sign with everyone else. Better trust comes from people chatting with each other and being able to actually pinpoint who a person is and what they do. And yes, at KSPs most people still require ID documents in order to cross-sign. Now... What do I think about this? First of all, if we have not ever talked for at least enough time for me to recognize you, don't be surprised: I won't sign your key or request you to sign mine (and note, I have quite a bad memory when it comes to faces and names). If it's the first conference (or social occasion) at which we come together, I will most likely not look for key exchanges either. My personal way of verifying identities is by knowing the other person. So, no, I won't trust a government-issued ID. I know I will be signing some people based on something other than their name, but hey, I know many people already who live pseudonymously, and if they choose for whatever reason to forgo their original name, their original name should not mean anything to me either. I know them by their pseudonym, and based on that pseudonym I will sign their identities. But... *sigh*, this post turned out quite long, and I'm not yet getting anywhere ;-) What this means in the end is: we must stop and think about what we mean when we exchange signatures. We are not validating a person's worth. We are not validating that a government believes who they claim to be. We are validating that we trust them to be identified with the (name, mail, affiliation) they are presenting us.
And yes, our signature is much more than just a social rite: it is a binding document. I don't know if a GPG signature is legally binding anywhere (I'm tempted to believe it is, as most jurisdictions do accept digital signatures, and the procedure is mathematically sound and cryptographically strong), but it does have a high value for our project, and for many other projects in the Free Software world. So, wrapping up, I will also invite you (just like John did) to read the E-mail self-defense guide, published by the FSF in honor of today's Reset The Net effort.

21 February 2014

Jakub Wilk: For those who care about snowclones

Instances of the "for those who care about X" snowclone on Debian mailing lists:

4 July 2013

Daniel Pocock: My Linux server IPv6 deployment approach

I previously discussed the ease of deploying IPv6 for Linux servers. Whether it is Debian, Fedora or another distribution, the IPv6 stack should "just work" these days. However, for maintaining a production network with minimum risk of interruption, there are a few extra things to be aware of during IPv6 deployment. Here I present a rough set of steps that can be followed, usually with minimal or no downtime, to deploy IPv6. The steps are in the order in which they should be used in practice. Plenty of other web sites already explain the theory of IPv6 networks or practical configuration detail (like how to set a static IPv6 address in Debian). This page ignores all of that detail and looks at the overall project strategy.

Borrow the last decimal octet (or host bits) of IPv4 addresses

As mentioned in my earlier blog:
  • Of the 128 bits in IPv6 addresses, most sites now use a fixed 64 bit netmask for all their subnets. This is the practice encouraged by RFC 4291.
  • Therefore, there are always more bits in the 64-bit "host" portion of the address than in the host portion of any 32-bit IPv4 address.
  • Consequently, it is possible to just borrow the host portion (typically the last octet for a class-C network) and use it as the host portion of an IPv6 address.
IPv4 address      Host portion of address   IPv6 address
192.168.1.5/24    5                         2001:1234:567::5
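The mapping in the table above is simple enough to script; here is a small shell sketch (the prefix is the illustrative one from the table, and the `%x` matters because the IPv6 side is written in hex):

```shell
# Derive an IPv6 address from the last decimal octet of an IPv4 address.
ipv4=192.168.1.5
prefix=2001:1234:567                        # illustrative /64 prefix
host=${ipv4##*.}                            # strip up to the last dot -> "5"
ipv6=$(printf '%s::%x' "$prefix" "$host")   # %x: host bits rendered in hex
echo "$ipv6"                                # -> 2001:1234:567::5
```

For octets above 9 the decimal and hex spellings diverge (192.168.1.10 becomes 2001:1234:567::a with this formula); sites that prefer visual correspondence simply keep the decimal digits instead, as the gotcha below explains.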
Gotcha: remember that IPv4 addresses are usually written in decimal while IPv6 addresses, including the example above, are hex. This doesn't really matter in practice unless the addresses are being manipulated by code that is sensitive to the host bits. For all other intents and purposes, it just makes it easy to read the addresses and associate them with IPv4. A completely valid alternative is to use the full IPv4 address as the host portion of the IPv6 address. For example, if the IPv4 address is 192.168.1.5, the IPv6 address may be 2001:1234:567::192.168.1.5. This approach is slightly more verbose, but completely valid. Which approach you use is at your discretion.

Create reverse mappings (PTR records) in DNS

Just as IPv4 has the in-addr.arpa zone and telephone numbers have the e164.arpa zone (for ENUM), IPv6 addresses have a reverse mapping zone too: ip6.arpa. The recommended way to proceed is to create reverse mappings for all hosts that have IPv4 reverse mappings. Some services may be using reverse lookups for authentication (e.g. MySQL will check this for any user who has a "Host" restriction defined). This is why it is recommended to create all the reverse mappings very early in the IPv6 program, before any hosts start making connections over IPv6.

Create extra A records in DNS

It's possible that some applications will be more stubborn about IPv6 adoption than others. For example, if you have a host called wolf.example.org and it hosts a mail server, web server, DNS server and XMPP server all using the name wolf, then all those server processes will have to support IPv6 connectivity from the moment you add an AAAA record. A good practice is to create independent DNS A records (such as ns1.example.org and mail.example.org) for each application (not just CNAME records). Existing CNAME records can be converted to A records, and converted back to CNAME records after the IPv6 deployment is 100% complete.
Duplicate firewall entries

Duplicate all IPv4 firewall entries to create an IPv6 firewall. For many types of network, this is a trivial (although possibly tedious) task. There are some small gotchas:
  • There is no NAT. Any firewall entries for NAT need to be reviewed: for source NAT, it is necessary to use a mechanism like connection tracking to protect workstations that previously had asymmetrical access to the public Internet. For destination NAT, see the TPROXY feature in ip6tables.
  • IPsec works slightly differently. Rules for AH and ESP packets can't simply be duplicated for IPv6, they need further tweaks or warnings will be generated.
  • IPv6 makes some ICMP packets mandatory. If IPv4 ICMP is completely blocked, you still need to enable IPv6 ICMP (or at least the subset of ICMPv6 packet types that are mandatory).
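As a sketch of that last point, an ip6tables-save style fragment accepting the ICMPv6 types that RFC 4890 recommends never be dropped (rule placement and default policy are assumptions about your ruleset):

```
# Accept essential ICMPv6 even if IPv4 ICMP is blocked elsewhere.
-A INPUT -p ipv6-icmp --icmpv6-type destination-unreachable -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type packet-too-big -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type time-exceeded -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type parameter-problem -j ACCEPT
# Neighbour discovery (the IPv6 analogue of ARP) also needs these:
-A INPUT -p ipv6-icmp --icmpv6-type neighbour-solicitation -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type neighbour-advertisement -j ACCEPT
```

In particular, dropping packet-too-big breaks path MTU discovery, which IPv6 relies on because routers do not fragment packets.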
For more comments about firewalling IPv6, please see my earlier blog on the subject.

Put the IPv6 addresses on the hosts

Add the necessary entries to the appropriate place, for example the /etc/network/interfaces files on Debian/Ubuntu-style systems. For hosts that have multiple addresses on a single interface, it is often desirable to ensure that outbound connections always appear to come from just one of the addresses. This can be achieved using the preferred_lft option when configuring the non-default addresses. For example, if configuring three addresses on eth0:
ip addr add dev eth0 preferred_lft 0 2001:1234:567::10/64
ip addr add dev eth0  2001:1234:567::11/64
ip addr add dev eth0 preferred_lft 0 2001:1234:567::12/64
In the example above, all outgoing IPv6 connections will appear to come from 2001:1234:567::11, because the other two addresses were added with preferred_lft 0. Important: this is only a very shallow discussion of the issues around source address selection. If it doesn't provide a valid solution for you, there are many dedicated articles about this specific topic; madduck's blog looks at it for firewalls and tunnels, but the concepts are valid for any type of multi-homed host.

Review listening processes with netstat

Now that hosts have IPv6 addresses, it is worthwhile looking through the output of
netstat -nlp
to see which processes have bound to both IPv4 and IPv6 addresses and which remain stuck on IPv4 only. For processes that are only listening on IPv4, or only bound to a specific address (rather than 0.0.0.0), it is a good idea to investigate the way they are configured and find out whether they will support IPv6.

Review the monitoring infrastructure (Nagios, Ganglia, etc.)

Now is a good time to review the monitoring infrastructure. Make sure that tools like Nagios are testing the services using their dedicated DNS names and not the host names. For example, if the host wolf.example.org runs an LDAP server accessible as ldap1.example.org, make sure Nagios is configured to poll ldap1.example.org. Look at the check_v46 wrapper for Nagios and similar solutions to make sure that Nagios is testing both the IPv4 and IPv6 version of each service. Just as developers use unit testing as part of test-driven development workflows, administrators deploying IPv6 can use the monitoring framework to validate the progress of their IPv6 program.

Check for ACLs

Do any server processes use IP-based ACLs to control access? For example, an Apache web server may be using an IP ACL to restrict access to some pages. MySQL or Postgres databases may have IP ACLs defined for some users or databases. When hosts start using IPv6 to connect to services, their IPv6 source addresses won't pass the ACLs and services may be inaccessible. Therefore, it is worthwhile checking for such ACLs early on.

Prepare DNS servers first

Preparing the DNS servers first is a good step. The DNS servers should be configured to accept queries over IPv6. Once the DNS servers are listening on IPv6 addresses, go ahead and do some of the following:
  • Insert IPv6 AAAA records for the name servers in their own zone files. As mentioned in the previous tip, the name server records should have their own independent A records already with names such as ns1.example.org. Consequently, no other application should notice when the AAAA record is created.
  • Create IPv6 glue records for any name servers that have IPv4 glue records
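The first of these points amounts to a one-line zone change; a sketch in zone-file form, reusing the illustrative name and addresses from earlier (the server must of course also be listening on IPv6, e.g. via BIND's listen-on-v6 option):

```
; AAAA record added beside the existing independent A record for a
; name server -- illustrative names and addresses only.
ns1   IN A     192.168.1.5
ns1   IN AAAA  2001:1234:567::5
```

Because ns1 already has its own A record (per the earlier tip), no other application notices when the AAAA record appears.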
Configure secondary name servers to poll over IPv6

Configure the secondary nameservers to poll the primary nameserver using its new IPv6 address.

Update resolv.conf on all hosts

Now that DNS servers are listening on IPv6, it is possible to put IPv6 nameserver addresses in /etc/resolv.conf on all the other hosts.

Configure and test other low-level services

Although services such as LDAP and MySQL may not be exposed to public Internet users, they are an essential foundation for many other applications. Make sure all these low-level services are accessible over IPv6. Test them. After testing each of these services, it is OK to go ahead and create the AAAA records for the service.

Configure and test other services

Continue testing that each service is accessible over its IPv6 address, and then add the necessary AAAA record. This includes all the high-level services now, such as mail and web servers.

Configure and test hostnames

Finally, start adding AAAA records associated with hostnames to the DNS zone. Now it should be possible to ssh or ping6 the hostnames using IPv6. You may also want to add the IPv6 address of each host into its own /etc/hosts file at this stage.

19 May 2013

Martin F. Krafft: Packaging workflows

All recent articles on packaging using a version control system should really appear over at Planet vcs-pkg. Feel free to just ping me with a feed URL that is vcs-pkg-specific.

Martin F. Krafft: Streaming a camera to the local network

I have a Raspberry Pi running Raspbian (wheezy) with a UVC camera available as /dev/video0. I've been trying for three weeks to live-stream the picture from the camera onto the local network. I have tried crtmpserver and vlc, read several dozens of how-tos, but so far I have not been able to get a streaming setup working, no matter what I tried. Hence my plea to the lazy web: does anyone have such a setup running on top of Debian? Would you please let me know how you did it? Thanks a lot! NP: Eels: End Times

24 February 2013

Sylvain Le Gall: Configuration management: Puppet is worth it.

Replying to an old blog post of Martin F. Krafft, Configuration management, I want to give my point of view. The problems listed by madduck are quite common with Puppet, but I think Puppet is still worth it, mostly because you can solve all these problems. Let me give you my opinion on the list: True. I think the approach of Puppet is not really UNIXish. It is probably on purpose. The biggest issue is probably the PKI. It breaks frequently for unknown reasons. The "non-intuitive configuration language" is probably a matter of taste. I think the language is not very well designed and is strange, but I can cope with that. The "faint attempt at versioning", if I understand correctly what it means, refers to the fact that when Puppet replaces a file it moves the old file to a bucket. This is not a good thing, but you can say "backup => '.puppet-bak'" and you get almost the same behaviour as ".dpkg-old". False debate. We can discuss for hours about Ruby, PHP, Java or whatever pet language people have invented. I am not a fan of Ruby, but it is still nice as a general-purpose language. To my mind, Ruby is still better for writing daemons than bash. False debate.
   info: Caching catalog for centi.....
   info: Applying configuration version '1350597216'
   notice: Finished catalog run in 3.08 seconds
The config of this node is not complex, but 3 s is not that bad for something that runs every 30 min. If you need sub-second speed for this kind of thing, maybe you are not looking for this kind of tool. Is 144 s of server time per day a big deal? With a much more complex setup I can reach 30 s for a run, although at that point I am managing a lot of things with it. False and True. Augeas allows you to replace a single value (even more precise than a line). Just have a look at the augeas type. It is pretty nice and allows you to do things like replacing "Defaults env_reset" with "Defaults env_reset, !tty_tickets" in 4 lines of code. So this is not precisely "a single line of text", but there are other ways to do it. False. Well, if you organize your code with manifests/site.pp and manifests/classes/*.pp, it seems like there is a separation between the two. Next you can try inheritance and define to create specific high-level features. False-ish. Hey, at least there are error messages ;-) Now, most of the errors related to the programming language are useless (at least as cryptic as a C++ error message). But that is usual with error messages in programming languages. True. Multi-version installation is horrible and you have to fix a lot of stuff to maintain a sane overall configuration. Not sure I understand this point; I use Puppet over IPv6... To whoever is considering using Puppet: it is worth a try. It is a nice system that really helps to maintain a decent configuration across nodes.
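For illustration, the augeas type mentioned above looks roughly like this. This is a sketch using the sshd_config lens, one of the commonly documented cases; lens paths vary by file and Augeas version, so treat the details as assumptions:

```puppet
# Replace a single value in place with the augeas type, without
# templating the whole file.
augeas { 'sshd-no-root-login':
  context => '/files/etc/ssh/sshd_config',
  changes => 'set PermitRootLogin no',
}
```

The same pattern, pointed at the sudoers lens, is what makes the "Defaults env_reset" change above possible in a handful of lines.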

2 February 2013

Steve Kemp: More competition for server management and automation is good

It was interesting to read recently from Martin F. Krafft a botnet-like configuration management proposal. Professionally I've used CFEngine, which in version 2.x supported a bare minimum of primitives, along with a distribution system to control access to a central server. Using these minimal primitives you could do almost anything. Now that I have my mini cluster (and even before that, when I had 3-5 machines) it was time to look around for something for myself. I didn't like the overhead of Puppet, or of many of the other systems. Similarly, I didn't want to mess around with weird configuration systems. From CFEngine I'd learned that using only a few simple primitives would be sufficient to manage many machines, provided you could wrap them in a real language - for control flow, loops, conditionals, etc. What more natural choice was there than Perl, the sysadmin army-knife? To that end slaughter was born. Over time it evolved so that HTTP wasn't the only transport. Now you can fetch your policies, and the files you might serve, via git, hg, rsync, http, and more. Today I've added one final addition, and now it is possible to distribute "modules" alongside policies and files. Modules are nothing more than Perl modules, so they can be as portable as you are careful. I envisage writing a couple of sample modules; for example one allowing you to list available sites in Apache, disable the live ones, enable/disable mod_rewrite, etc. These modules will be decoupled from the policies, and will thus be shareable. Anyway, I'm always curious to learn about configuration management systems, but I think that even though I've reinvented the wheel I've done so usefully. The DSL that other systems use can be fiddly and annoying - using a real language at the core of the system seems like a good win.
There are systems layered upon SSH, such as fabric, ansible, etc., and that was almost a route I went down - but ultimately I prefer the notion of client-pull to server-push, although it is possible that in the future we'll launch a mini-daemon to allow a central host (or hosts) to initiate a run.

1 February 2013

Martin F. Krafft: A botnet for configuration management

Following my last rant about configuration management, I've had a closer look at Salt, both for my personal use and for a client who would like to deploy something that is not Puppet. Salt has some very good ideas, for instance: But there are also a couple of downsides to Salt: Those are the big issues. There are many small issues too, but those won't be around for too long, as the project is moving along quickly and the community is vibrant. This is surely an important point that speaks for Salt. However, the above issues seem to hint at design choices that might well turn out to stand in the way later. Following a day of frustration, I now feel the overpowering urge to write my own configuration management system, because of course I feel that I could do it better than everyone else. Does this sound familiar to you? Let's just say hypothetically that I would; then I'd want to reuse as much existing functionality as possible. For instance, I'd want the entire remote execution framework to be independent from any configuration management implemented on top. So what does this mean? What would such a remote execution framework need? Here are some thoughts: Doesn't this sound like a Unix botnet to you? ;) I could imagine whacking this up with a bit of Python, some shell glue, socat and SSH: the server would have an authorized_keys file with forced commands connecting the client to the server process via sockets. Or I could imagine using twisted for that. But I would prefer if something like this already existed. Anyone? Comments are broken on my blog, and I cannot be bothered to work on them. If you have any input, please write to me. I will (eventually) condense all feedback into a new article. NP: Mouse on Mars: Parastrophics
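To make the socat/SSH idea concrete, here is a hedged sketch of such an authorized_keys forced command; the socket path and key material are purely illustrative:

```
# Force every connection authenticated by this key into a socat bridge
# to a local Unix socket, with the usual lockdown options.
command="socat STDIO UNIX-CONNECT:/run/remote-exec.sock",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAAB3Nza... worker@node1
```

The connecting client then speaks to whatever server process listens on the socket, while SSH provides the authentication and transport.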

14 November 2012

Martin F. Krafft: A small office, multi-function printer

Dear lazyweb, we need a new printer-scanner combination for our new office, ideally a laser printer and a high-resolution scanner (1200 dpi). It must integrate nicely with CUPS and SANE, without requiring any proprietary, closed-licence drivers or plugins. It also has to come with an automatic document feeder in addition to the plain flatbed scanner. There are additional, would-be-nice features, in decreasing order of preference. I have investigated the Samsung CLX-6220FX, but the driver situation was such a nightmare (and the printer so loud) that we returned it right away. I should have read this article by Peter beforehand. I have recently seen the HP Laserjet Pro 200 M276n, the successor of the HP Laserjet CM1415nf, which looks promising, but according to LinuxPrinting.org a proprietary plugin is required (WTF, HP?). Does anyone have experience with these HP models and can recommend them for use with Debian stable? Does anyone have another recommendation? Please send me a message. Thanks!

19 October 2012

Martin F. Krafft: Digital picture frames and random image order

I am moving into an office. And as any proud husband and father, I want to have photos of my wife and daughter on my desk. The year being 2012, I did not hike to the photo shop to get a print framed, but instead thought to myself that I should put a digital photo frame onto my table. This idea exposed me to the ridiculous world of consumer electronics. It led me to conclude that digital picture frame designers need to be whacked with cluebats.

Step by step

Once accepted, the thought of a digital photo frame developed into a product definition along the following lines: my ideal photo frame would connect to my Wifi network, obtain the photos on the go from a folder exposed via HTTP or CIFS, and then go on to display them in random order, incorporating new photos as it encounters them. With this in mind, I went to the shops, and since I believe in specialised retailers and want to support them, my first stop was Foto Sauter at Sendlinger Tor. Unfortunately, none of the frames they had came with Wifi, so I decided to look further. I vehemently oppose the business practices of the Metro group, thus skipped Saturn and MediaMarkt, and eventually ended up at Conrad. They had a frame with Wifi! I jumped for joy, until I read the manual: pictures can be obtained from Flickr and Picasa. Period. All other models on the Internet seem to be similarly limited, including the new Sony S-frame. The night before, Penny had researched the field a bit and come to the conclusion that the S-frame would be the best product available. This led me to scratch Wifi off my requirements list and get a model that would read photos off a USB stick. I went back to the photo store and bought a Sony S-frame, only to discover that it cannot show photos in random order. It has three viewing modes (single photo, collage, single photo with clock), and a random mode, but guess what: the random mode randomly switches the viewing modes, which then display the photos in lexicographical order.
How stupid is that??? I returned the product and left the store after discovering that none of their products could do random playback. I went back to Conrad and found an "Intenso MediaCreator" (what media does it create???), which displayed the photos seemingly randomly. But at home I found out that the "random" order is always the same, probably because the bright engineer who programmed this thought it was better to sort filenames by last letter and call it random than to figure out a way to roll a dice on the device. I wrote to the support team and asked them. The response was that the desired functionality (random selection) is not possible and won't be made available. So I am returning the product. Gah! Would someone please tell me about a digital picture frame (8 inch or so) that can display images in random order, ideally loading them off a CIFS share via Wifi? Or is it really the case that consumer electronics are completely useless these days, by which I mean that "consumers" have dumbed down so far as to buy this crap? Update: a lot of people wrote in suggesting to invest in a cheap Android tablet. Some suggested Raspberry Pis in USB host mode (emulating the USB stick and hence the source of the images, provided that the frame doesn't cache). Other suggestions included the Samsung SPF-85V, which can display images according to an RSS feed but needs Microsoft for that (or maybe not), and the community-developed, Linux-based Joggler. Regarding the non-random order on the Intenso frame, Paul Hedderly postulated that the order comes from the filesystem (FAT order) and can be changed by writing the files differently.
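Following Paul's FAT-order observation, a shell sketch of the workaround: write the photos to the stick in shuffled order, so that a player showing them in "filesystem order" at least looks random. The directories here are temporary stand-ins for the photo folder and the mounted stick:

```shell
# Copy photos in shuffled order; a FAT-ordered player then shows them
# in that shuffled order.
src=$(mktemp -d)                    # stand-in for the photo folder
dst=$(mktemp -d)                    # stand-in for the mounted USB stick
touch "$src"/01.jpg "$src"/02.jpg "$src"/03.jpg
shuf -e "$src"/*.jpg | while IFS= read -r f; do
    cp "$f" "$dst"/
done
ls "$dst" | wc -l                   # all 3 photos arrive, order shuffled
```

On a freshly formatted stick, new directory entries are created in write order; a stick that already held files may reuse deleted entries and perturb the order, so reformatting first is safest.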

Martin F. Krafft: Configuration management

Puppet I've really had it with Puppet. I used to be able to put up with all its downsides
  • Non-Unix approach to everything (own transport, self-made PKI, non-intuitive configuration language, a faint attempt at versioning (filebucket), and much, much more)
  • Ruby
  • Abysmal slowness
  • Lack of basic functionality (e.g. replace a line of text)
  • Host management and configuration programming intertwined, lack of a high-level approach to defining functionality
  • Horrific error messages
  • Catastrophic upgrade paths
  • Did I mention Ruby and its speed?
  • Lack of IPv6 support
  • [I could keep going...]
but now that my fourth attempt to upgrade my complex configuration from version 0.25.5 to version 2.7 has failed due to a myriad of completely incomprehensible errors ("err: Could not run Puppet configuration client: interning empty string"), and many hours were lost trying to hunt these down using binary searches, I am giving up. Bye bye Puppet.

An alternative But I need an alternative. I want a system that is capable of handling a large number of hosts, but not so complex that one wouldn't put it to use for half a dozen machines. The configuration management system I want looks about as follows: It
  • makes use of existing infrastructure (e.g. SSH transport and public keys, Unix toolchain, Debian package management and debconf)
  • interacts with the package management system (Debian only in my case)
  • can provision files whose contents might depend on context, particular machine data and conditionals. There should be a unified templating approach for static and dynamic files, with the ability to override the source of data (e.g. a default template used unless a template exists for a class of machine, or a specific hostname)
  • can edit files on the target machine in a flexible and robust manner
  • can remove files
  • can run commands when files change
  • can reference data from other machines (e.g. obtain the certificate fingerprint of each host that defines me as its SMTP smarthost)
  • can control running services (i.e. enable init.d scripts, check that a process is running)
  • is written in a sensible language
  • is modular and easily extensible, ideally using a well-known language (e.g. Python!)
  • allows specifying infrastructure with tags ("all webservers", "all machines in Zurich", "machines that are in Munich and receive mail"), but with the ability to override every parameter for a specific host
  • should just do configuration management, and not try to take away jobs from monitoring software
  • logs changes per-machine and collects data about applied configurations in a central location
  • is configured using flat files that are human-readable so that the configuration may be stored in Git (e.g. YAML, not XML)
  • can be configured using scripts in a flexible way
Since for me, Ruby is a downside of Puppet, I won't look at Chef, but from this page, I gleaned a couple of links: Ansible, Quattor, Salt, and bcfg2 (which uses XML though). And of course, there remains the ephemeral cfengine.
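The templating fallback described in the wish list above (use a host-specific template if one exists, otherwise one for the machine's class, otherwise a default) is easy to sketch in shell. The `templates/` layout and the file names below are purely hypothetical, for illustration only:

```shell
#!/bin/sh
# Resolve the most specific template available for a host:
# try host-specific, then machine-class, then the default.
# The templates/ directory layout is a made-up example.
find_template() {
    host=$1 class=$2 name=$3
    for candidate in \
        "templates/$name.$host" \
        "templates/$name.$class" \
        "templates/$name.default"
    do
        if [ -f "$candidate" ]; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1    # no template at all: an error for the caller
}
```

A real configuration management system would layer variable interpolation on top, but the override order is the essential part.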

cfengine I haven't used cfengine since 2002, but I am not convinced it's worth a new look because it seems to be an academic project with gigantic complexity and a whole vernacular of its own. There is no doubt that it is a powerful solution, and the most mature of them all, but it's far away from the Unix-like simplicity that I've come to love in almost 20 years of Debian. Do correct me if I am wrong.

Ansible Ansible looks interesting. It seems rather bottom-up, first introducing a way to remotely execute commands on hosts, which you can then extend and automate to manage the host configurations. It uses SSH for transport, and its raison d'être made me want to look at it. My ventures into the Ansible domain are not over yet, but I've put them on hold. First of all, it's not yet packaged for Debian (Ubuntu-PPA packages work on Debian squeeze and wheezy). Second, I was put off a bit by its gratuitous use of the shell to run commands, as well as other design decisions. Check this out: there are modules for the remote execution of commands, namely "shell", "command", and "raw". The shell module should be self-explanatory; the command module provides some idempotency, such as not running the command if a file exists (or does not). To do this, it creates a Python script in /tmp on the target and then executes that like so:
$SHELL -c /tmp/ansible/ansible-1350291485.22-74945524909437/command

Correct me if I am wrong, but there is zero need for this shell indirection. My attempts at finding an answer on IRC were met by user "daniel_hozac" with a reason along the lines of "it's needed, believe me", and on the mailing list, I am told that only the shell can execute a script by parsing the interpreter line at the top of the module. Finally, the raw execution module also executes using the shell. And there are a few other design decisions that I can't quite explain, around the command-line switch --sudo (see the aforementioned message). In short: running a command like
ansible -v arnold.madduck.net -a "/usr/bin/apt-get update" --sudo

does not invoke apt-get with sudo, as one might like; it invokes the shell that runs the Python script that runs the command. Effectively, therefore, you need to allow sudo shell execution, and for proper automation, this has to be possible without a password. And then you might just as well allow root logins again. The author seems to think that "core behaviour" is that sudo allows all execution and that limiting the commands to run is not a use-case that Ansible will support. Apparently, I was the first to ever suggest this. There are always ways around this (e.g. skip --sudo and just use sudo as the command, or simply ignore the useless shell invocation and trust that your machine can handle it), but when such design decisions remain incomprehensible and get defended by the project people, then I am hesitant to invest more time, on principle.
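Incidentally, the mailing-list claim that only a shell can act on the interpreter line does not hold: the kernel's execve() parses the "#!" shebang itself. A tiny demonstration (assuming python3 is in $PATH; the temporary file name is arbitrary):

```shell
#!/bin/sh
# Create an executable script with a shebang and run it directly;
# no "$SHELL -c" wrapper is involved in executing it.
tmp=$(mktemp)
printf '#!/usr/bin/env python3\nprint("ok")\n' > "$tmp"
chmod +x "$tmp"
out=$("$tmp")       # the kernel hands the file to python3 via execve()
echo "$out"         # prints "ok"
rm -f "$tmp"
```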

Salt Finally, I've looked at Salt, which is what I've spent most time on so far. From the discussions I started on host targeting and data collection, it soon became apparent that Salt is very thin and flexible, and that the user community is accommodating. Unfortunately, Salt does not use SSH, but at least it reuses existing functionality (ZeroMQ). As opposed to the push/pull model, Salt "minions" interestingly maintain a persistent connection to the server (which is not yet very stable), and while non-root usage is still not unproblematic, at least there has already been work done in this direction. I think I will investigate Salt more, as it does look like it can do what I want. The YAML-based syntax does seem a bit brittle, but it's the best I've found so far. NP: The Pineapple Thief: Someone Here is Missing

14 September 2012

Martin F. Krafft: Italy removes restrictions on short sales

In the light of the recent announcement by the European Central Bank to bail out states without limits (which breaks the very law that the EU was built upon), Italy's stock market supervisors have removed the restriction on short sales. In Italy, you may now again sell stuff on the financial market that you don't have. The only condition is that you have to be able to prove that you could currently buy it. But that, of course, is not a guarantee that you will be able to buy the good/stock/whatever when the person you sold it to actually wants it, given the volatility of the markets. I expect other countries to follow suit. Currencies, and especially the Euro, were made by bankers for bankers to earn money. Whoever actually believes that the ESM, to which Germany enslaved itself this week, would fix anything is simply naïve. Temporarily, the markets were on hold; how convenient that this coincided with the summer break. Now everything is back to normal and the next financial crisis is being built. NP: Porcupine Tree: Stupid Dream

12 September 2012

Martin F. Krafft: A black day for democracy

Today was a black day for democracy in Germany. The German constitutional court ruled in favour of the European Stability Mechanism. In combination with last week's announcement by the European Central Bank to purchase government bonds without limits (breaking the No-Bail-Out clause at the core of their mandate more obviously and irreversibly than ever before), the German people have lost a good deal of democracy today. Why, you may ask? Because from now on, fiscal and financial policy will be made in Brussels, by people enjoying full immunity, but who are not elected democratically by the European people, let alone the Germans, and they will freely decide over who has to pay and be liable for whom. I am talking about people like Klaus Regling, who was already involved the very first time the Maastricht Criteria were violated. He is now at the front of the largest and most powerful financial weapon ever conceived. With immunity. And people like Mario Draghi, whom I would possibly call the most corrupt person I know. His announcement to save the Euro at whatever cost accidentally came only a day before his motherland Italy had to go to the market for more money, and it was able to place a bond at such ridiculously low interest rates that anyone who's kept up to speed with Italy's development had to rightfully ask how that was possible. While in the past, for whatever reason, the European people have let the ECB get by saying that they are not bailing out countries when they buy bonds on the secondary market (wtf!), they have finally dropped that restriction (the law). And as of today, the ESM is ready to go, along with the fiscal pact. Germany is now liable for more than a quarter of all of the Eurozone's past and future debts. And no citizen will be able to have any more influence in this, or reverse it. Budget, fiscal policy and currency control are forever gone. Not that parliamentarian democracies were ever direct.
Yet, in the past, one could at least vote for those people whose promises one was inclined to believe the most. You can still do that in the future, but those people won't be able to influence fiscal or financial policy anymore. There is no way back. The ESM and its employees enjoy full immunity, and the ESM is forever binding. There is no exit clause. Thanks to the ECB's law breaking and the ESM, which I consider highly unconstitutional, at least in Germany, Eurozone countries may refinance their debts at interest rates that are in no way related to their ability to pay back loans. All other countries, foremost Germany, are henceforth liable for others' debts. The fundamental rule of the EU, that no country would have to stand up for another country, is gone with the wind. Within an hour, the markets reacted. Germany, which previously had to pay negative interest (a sign of stability), saw interest on its bonds shoot up. And Spain, Portugal, Greece and others, who couldn't previously refinance their old debts, are now getting fresh money cheaper than ever. Spain's president Rajoy today didn't even bother beating around the bush anymore; he's now going to apply for fresh money but won't bother with any saving schemes or other restructurings. Monti in Italy has suggested the same. Wouldn't you take money if you were offered it for free, without the need to pay it back? This is more than inflation, in my opinion. What is currently happening in Europe is active depreciation of individual wealth. Our heads of state are actively working against the people. The Euro has lost all credibility and everyone knows it. It is only a question of time until it will tremble and fall. Meanwhile, the market celebrates and continues its gambles while it still can, on the backs of our currency and our wealth. Most affected are the people who have savings in Euros, whose life insurances are decreasing in worth and who cannot afford to diversify into other asset classes or currencies.
On the other hand, those who let their money do the work are being saved. Whoever previously invested in bonds of struggling states, hoping to reap massive interest gains, is now proven right. Brussels has eliminated the risk factor. What kind of message does this send??? Hands up if you thought that our politicians are even interested in closing the rapidly widening gap between rich and poor. Really? That's naïve. The Eurozone is corrupt, and our currency has never been as virtual as today. Nobody can say whether saving the Euro at all cost is the right thing, and no one knows whether what's currently happening is just bad. I would have wished that our politicians had taken the crisis as an incentive to fix the system in the interest of the people and with a long-term focus. But on the contrary! Europe's politicians are making it crystal clear that the foundation upon which it was built, the laws and rules, the promises and guarantees, no longer apply. The people were not asked. The promises once made were broken. Our politicians have ruled over our heads. More debts are being made, and more debts to pay off debts, and so on. It's long gotten out of control; now the process is institutionalised. I feel sorry for our kids. I find it irresponsible what is being done to them (in addition to the way we rape the environment). I also feel deeply for the people in the struggling countries who are being screwed by the crisis and are not at fault. What our politicians are doing is unfortunately not going to help long term. The problems are just postponed, and with every day, the inevitable crash will be more painful. I am sorry. Today is a black day for democracy. We have lost sovereignty. We have lost control over our currency. We have lost our budget rights. And I have lost my faith in the last instance of the German government that I trusted.
As of today, I know that the German constitutional court is nothing more than a puppet in the hands of the politicians (who are themselves puppets of Brussels and the banks). The limit they imposed (Germany's liability must not increase beyond 190 billion Euros without the federal parliament's consent) is worthless. Soon the politicians will explain to us why it's inevitable that we must raise this limit. Not that the people could prevent it, but still I had hoped for a fundamental ruling. They should not have touched numbers. The EU had a no-bailout clause from day one. It was conditional from the start. If one of the fundamental principles of a contract is broken, the contract becomes invalid. Not only did I expect the court to rule against socialised debt, I would have wished them to go a step further. The German national bank gave up control over the currency to the ECB only because the ECB incorporated the principles of the German national bank. Once the ECB overturned those principles, Germany should have reclaimed its sovereignty. But no one else in Europe would have wanted that. Merkel became a puppet herself. I am grateful that our daughter has dual citizenship. NP: Porcupine Tree: Live at Atlanta 2010

28 July 2012

Vincent Bernat: Switching to the awesome window manager

I have happily used FVWM as my window manager for more than 10 years. However, I recently got tired of manually arranging windows and using the mouse so much. A window manager is one of the handful of pieces of software getting in your way at every moment, which explains why there are so many of them and why we might put so much time into one. I decided to try a tiling window manager. While i3 seemed pretty hot and powerful (watch the screencast!), I really wanted something configurable and extensible with some language. Among the common choices, I chose awesome, despite the fact that StumpWM's vote for Lisp seemed a better fit (but it is more minimalist). I hope there is some parallel universe where I enjoy StumpWM. Visually, here is what I got so far: awesome dual screen setup

Awesome configuration Without a configuration file, awesome does nothing. It does not come with any hard-coded behavior: everything needs to be configured through its Lua configuration file. Of course, a default one is provided, but you can also start from scratch. If you like to control your window manager, this is somewhat wonderful. awesome is well documented. The wiki provides a FAQ, a good introduction, and an API reference concise enough to be read from top to bottom. Knowing Lua is not mandatory since it is quite easy to dive into such a language. I have posted my configuration on GitHub. It should not be used as is, but some snippets may be worth stealing and adapting into your own configuration. The following sections highlight some notable points.

Keybindings Ten years ago was the epoch of scavenger hunts to recover IBM Model M keyboards from waste containers. They were great to type on and they did not feature the infamous Windows keys. Nowadays, it is harder to get such a keyboard. All my keyboards now have Windows keys. This is a major change with respect to configuring a window manager: the left Windows key is mapped to Mod4, is usually unused by most applications, and can therefore be dedicated to the window manager. The main problem with the ability to define many keybindings is remembering the less frequently used ones. I have monkey-patched the awful.key module to be able to attach a documentation string to a keybinding. I have documented the whole process on the awesome wiki. awesome online help

Quake console A Quake console is a drop-down terminal which can be toggled with some key. I was heavily relying on it in FVWM. I think this is still a useful addition to any awesome configuration. There are several possible solutions documented in the awesome wiki. I have added my own1 which works great for me. Quake console

XRandR XRandR is an extension which allows you to dynamically reconfigure outputs: you plug an external screen into your laptop and issue some command to enable it:
$ xrandr --output VGA-1 --auto --left-of LVDS-1
awesome detects the change and will restart automatically. Laptops usually come with a special key to enable/disable an external screen. Nowadays, this key does nothing unless configured appropriately. Out of the box, it is mapped to the XF86Display symbol. I have associated this key with a function that will cycle through possible configurations depending on the plugged screens. For example, if I plug an external screen into my laptop, I can cycle through the following configurations:
  • only the internal screen,
  • only the external screen,
  • internal screen on the left, external screen on the right,
  • external screen on the left, internal screen on the right,
  • no change.
The proposed configuration is displayed using naughty, the notification system integrated in awesome. Notification of screen reconfiguration
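My configuration does the cycling in Lua, but the underlying logic is independent of the window manager and can be sketched in a few lines of shell. The layout names below are invented; the real code would apply the chosen layout with xrandr:

```shell
#!/bin/sh
# Advance to the next output layout in a fixed list, wrapping
# around at the end; each keypress calls this once.
layouts="internal external right-of left-of clone"
next_layout() {
    prev=""
    for l in $layouts; do
        if [ "$prev" = "$1" ]; then
            echo "$l"
            return
        fi
        prev=$l
    done
    echo "${layouts%% *}"   # unknown or last layout: wrap to the first
}
```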

Widgets I was previously using Conky to display various system-related information, like free space, CPU usage and network usage. awesome comes with widgets that can serve the same purpose. I am relying on vicious, a contributed widget manager, to handle most of them. It allows one to attach a function whose task is to fetch the values to be displayed. This is quite powerful. Here is an example with a volume widget:
local volwidget = widget({ type = "textbox" })
vicious.register(volwidget, vicious.widgets.volume,
         '<span font="Terminus 8">$2 $1%</span>',
        2, "Master")
volwidget:buttons(awful.util.table.join(
             awful.button({ }, 1, volume.mixer),
             awful.button({ }, 3, volume.toggle),
             awful.button({ }, 4, volume.increase),
             awful.button({ }, 5, volume.decrease)))
You can also use a function to format the text as you wish. For example, you can display a value in red if it is too low. Have a look at my battery widget for an example. Various widgets

Miscellaneous While I was working on my awesome configuration, I also changed some other desktop-related bits.

Keyboard configuration I happen to set up all my keyboards to use the QWERTY layout. I use a compose key to input special characters. I have also recently started to use Caps Lock as a Control key. All this has been perfectly supported by X11 for ages. I am also mapping the Pause key to the XF86ScreenSaver key symbol, which will in turn be bound to a function that triggers xautolock to lock the screen. Thanks to a great article about extending the X keyboard map with xkb, I discovered that X was able to switch from one layout to another using groups2. I finally opted for this simple configuration:
$ setxkbmap us,fr '' compose:rwin ctrl:nocaps grp:rctrl_rshift_toggle
$ xmodmap -e 'keysym Pause = XF86ScreenSaver'
I switch from us to fr by pressing both left Control and left Shift keys.

Getting rid of most GNOME stuff Less than one year ago, to take a step toward the future, I started to rely heavily on some GNOME components like the GNOME Display Manager, GNOME Power Manager, the screen saver, gnome-session, gnome-settings-daemon and others. I had numerous problems when I tried to set everything up without pulling in the whole GNOME stack. At each GNOME update, something was broken: the screensaver didn't start automatically anymore until a full session restart, or some keybindings were randomly hijacked by gnome-settings-daemon. Therefore, I have decided to get rid of most of those components. I have replaced GNOME Power Manager with system-level tools like sleepd and the PM utilities. I replaced the GNOME screensaver with i3lock and xautolock. GDM has been replaced by SLiM, which now features ConsoleKit support3. I use ~/.gtkrc-2.0 and ~/.config/gtk-3.0/settings.ini to configure GTK+. The future will wait.

Terminal color scheme I am using rxvt-unicode as my terminal, with a black background (and some light transparency). The default color scheme is suboptimal on the readability front. Sharing terminal color schemes seems to be a popular activity. I finally opted for the derp color scheme, which brings a major improvement over the default configuration. Comparison of terminal color schemes. I have also switched to Xft for font rendering, using DejaVu Sans Mono as my default font (instead of fixed), with the following configuration in ~/.Xresources:
Xft.antialias: true
Xft.hinting: true
Xft.hintstyle: hintslight
Xft.rgba: rgb
URxvt.font: xft:DejaVu Sans Mono-8
URxvt.letterSpace: -1
The result is less crisp but seems a bit more readable. I may switch back in the future. Comparison of terminal fonts

Next steps My reliance on the mouse has been greatly reduced. However, I still need it for casual browsing. I am looking at luakit, a WebKit-based browser extensible with Lua, for this purpose.

  1. The console gets its own unique name. This allows awesome to reliably detect when it is spawned, even on restart. It is how the Quake console works in the mod of FVWM I was using.
  2. However, the layout is global, not per-window. If you are interested in a per-window layout, take a look at kbdd.
  3. Nowadays, you cannot really survive without ConsoleKit. Many PolicyKit policies do not rely on groups any more to grant access to your devices.

13 April 2012

Martin F. Krafft: Mouse on Mars

The atmosphere in Munich's Backstage Werk just before the opening act for Mouse on Mars was very chilled. People sat on the stairs or scattered themselves over the dance floor while lo-fi ambient tunes came from the speakers. It wasn't loud; you had to try hard to hear the people mumble. I have no idea who the opening act was, and their first tune was very nice and groovy. Then ensued a noise explosion; one could only pity the electronic equipment that was being asked to perform in ways that may be described as everything other than what you expect, and of course, the bass beat shook the building. I am quite sure they didn't use treble at all, but I may also simply have been unable to hear it. Plus, it seemed to us that the musicians catered to what may be a widespread decrease of attention span: it was noticeable how they jumped from one thing to the next, not leaving themselves (or their listeners) any time to get into the groove. My brother and I went outside for a bit and talked about today's music and its simplicity. We postulated repetitiveness as the basis of a mass movement, considered scene clubs that played heavy techno to an audience so entirely different from those who historically frequented such musical performances, and in general tried to avoid taking a position between simplifying society and accepting that individual freedom is as eclectic as can be. When MoM opened, they continued pretty much in line with their openers, and halfway through the first tune, I started to wonder how long I would last, or when it would be reasonable to step outside again. I had been a little afraid this would happen, having bought and listened to their latest album Parastrophics in preparation for the concert and not being able to get into it. However, what then followed blew us away.
Still heavy, still all over the place, but now they were developing sound scenes, ripping them apart, having fun playing with and teasing the audience, while putting on a groove that inevitably made your muscles twitch with the beat. David Bowie called MoM the next big thing, and I have to give it to them: MoM have always had a certain aura of "that's what your music is like? we can improve on that!" to them, and yesterday, they continued along those lines with astounding consistency, and it felt fresh. It also felt real. They weren't just pushing buttons while computers made music; they were making music and the computers were their instruments. Between the two founding members of MoM sat Dodo Nkishi, drummer and microphone artist, and if you don't believe fast, big breakbeat can be performed live, well, you're wrong. Most everyone in the room was dancing. And while I was mostly swaying in awe, watching and wondering how the heck they were doing what they were doing, I couldn't contain the bouncing any longer. They came back for an encore and there was no more stopping the crowd, Thomas or me. Three tracks later, they waved goodbye and left, but a bunch of us simply continued to dance. Thomas questioned who would last longer, and I started yelling loudly for another encore. The lights turned on; I considered it a slap in the face, but I did not stop yelling. Others joined in. And then the lights went off and the band came back. Following their 2.5-hour show, gosh was I exhausted. It was a magnificent show. If you aren't afraid of big-beat electronica and you take pleasure in nonstandard art, I heartily recommend you ensure that MoM aren't soon playing near you without you there. PS: MoM will play at the [Düsseldorf Open-Source Festival](http://www.open-source-festival.de/en/) on 30 June 2012! PPS: Now I listen to Parastrophics and I am really enjoying it. NP: Mouse on Mars: Parastrophics

24 February 2012

Richard Hartmann: apt-get install vcsh

apt-get install vcsh I finally got around to packaging vcsh, and David Bremner was kind enough to sponsor it. vcsh is available in testing, unstable, and squeeze-backports. Also, Ubuntu seems to have copied it into their repositories automagically. As you probably don't know, vcsh is a tool to manage config files in git. Say you want to maintain one repository for zsh, one for vim, one for ssh, one for mplayer, and one for mr, but obviously there can only be one .git in $HOME. vcsh helps by moving $GIT_DIR into $XDG_CONFIG_HOME/vcsh/repo.d/ but keeping $GIT_WORK_TREE in $HOME. Splitting configuration sets into separate repositories allows you to avoid checking out, say, mplayer's config on your servers, or checking out your ssh config at work. If this sounds complicated, it's not; vcsh hides all of the dirty details from you. vcsh integrates nicely with mr by means of a plugin, making handling your configurations even more trivial. If you do get stuck along the way, simply drop into #vcs-home on irc.oftc.net or send email to the vcs-home mailing list. popcon tells me that there are more users than I know about directly, so pipe up if you're one of them. And if you are one of the people who have it installed but don't use it, I am even more interested. I would love to know why, so I can improve it. Long story short, if you care about the integrity and/or history of your configuration, or if you use one or more computers, you should definitely give vcsh a try. Any kind of feedback appreciated :)
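The split between $GIT_DIR and $GIT_WORK_TREE is easy to replicate with plain git, which may clarify what vcsh automates. The sketch below uses a temporary directory as a stand-in for $HOME; note that vcsh itself exports GIT_DIR/GIT_WORK_TREE per invocation rather than leaving a .git file in the work tree as --separate-git-dir does:

```shell
#!/bin/sh
# Keep the repository data under .config/vcsh/repo.d/ while the
# work tree is the (fake) home directory itself.
home=$(mktemp -d)                     # stand-in for $HOME
gitdir="$home/.config/vcsh/repo.d/zsh.git"
mkdir -p "${gitdir%/*}"
git init -q --separate-git-dir "$gitdir" "$home"
echo 'setopt autocd' > "$home/.zshrc"
git -C "$home" add .zshrc
git -C "$home" -c user.email=you@example.org -c user.name=you \
    commit -q -m 'track zsh configuration'
git -C "$home" ls-files               # only .zshrc is tracked
```

A second repository for, say, vim would get its own git dir under repo.d/ with the same work tree, which is the whole trick.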

7 February 2012

Martin F. Krafft: Stop ACTA

I hope by now you have heard of ACTA. In any case, here is a nice 6:30-minute video giving a good overview. Please help stop ACTA. Our freedom is at risk. Whether you tell people about it, write about it, use services like Twitter to tell the world about #StopACTA, or take the time to march against what corporate entities are lobbying politicians to do against their people: please help protect the Internet as we know it. NP: God is an Astronaut: Moment of Stillness

15 December 2011

Martin F. Krafft: The rating agencies' circus

This is not about any real or alleged might of (private) rating agencies, you know, the ones roughening up the financial markets these days. Given the recent influx of news about downgrades of banks and nations, I simply start to wonder what will happen when the triple-A category empties out (which it will: the last few nations will be ejected as a consequence of currency explosions (CHF) and forced bailouts of others (EU)). Will the whole circus start anew? And if so, why do we even pay any attention?? Gosh do I wish that people started to form their own opinions again. NP: Fila Brazillia: Power Clown

30 October 2011

Martin F. Krafft: How They Save the Euro in Brussels

Unbeknownst to the participants of last week's Euro Summit in Brussels, the clueless leaders of Europe, in whose hands it lies to save our asses, have been recorded on film. I do not understand why they are dressed up, but I understand now why things are as they are.
