Search Results: "sjr"

7 October 2023

Andrew Cater: Point release weekend for Debian: two releases this weekend: 202311071653

Over in Cambridge with RattusRattus, Sledge, egw and Isy. Andy is very kindly putting us up.

We're almost all of the way through testing 12.2 and some of the way through testing 11.8.

It's a LONG day - heads down into laptops and relatively quiet - I think we're all tired and we've a way to go yet.


7 January 2017

Simon Richter: Crossgrading Debian in 2017

So, once again I had a box that had been installed with the kind-of-wrong Debian architecture, in this case powerpc (32-bit, big-endian), while I wanted ppc64 (64-bit, big-endian). So, crossgrade time. If you want to follow this, be aware that I use sysvinit. I doubt this can be done this way with systemd installed, because systemd has a lot more dependencies for PID 1, and there is also a dbus daemon involved that cannot be upgraded without a reboot. To make this a bit more complicated, ppc64 is an unofficial port, so it is even less synchronized across architectures than sid normally is (I would have used jessie, but there is no jessie for ppc64).
Step 1: Be Prepared
To work around the archive synchronisation issues, I installed pbuilder and created 32 and 64 bit base.tgz archives:
pbuilder --create --basetgz /var/cache/pbuilder/powerpc.tgz
pbuilder --create --basetgz /var/cache/pbuilder/ppc64.tgz \
    --architecture ppc64 \
    --mirror http://ftp.ports.debian.org/debian-ports \
    --debootstrapopts --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
    --debootstrapopts --include=debian-ports-archive-keyring
Step 2: Gradually Heat the Water so the Frog Doesn't Notice
Then, I added the sources to sources.list, and added the architecture to dpkg:
deb [arch=powerpc] http://ftp.debian.org/debian sid main
deb [arch=ppc64] http://ftp.ports.debian.org/debian-ports sid main
deb-src http://ftp.debian.org/debian sid main
dpkg --add-architecture ppc64
apt update
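To check that dpkg picked up the new architecture (a quick sanity check, not from the original post):
dpkg --print-foreign-architectures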
Step 3: Time to Go Wild
apt install dpkg:ppc64
Obviously, that didn't work; in my case because libattr1 and libacl1 weren't in sync between the archives, so there was no valid way to install the powerpc and ppc64 versions in parallel. So I used pbuilder to compile the current version from sid for the architecture that wasn't up to date (IIRC, one for powerpc and one for ppc64).
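For libattr1, that dance might have looked roughly like this (a sketch: the source package is attr, the .dsc name is a wildcard placeholder, and build results land in /var/cache/pbuilder/result by default):
# fetch the current source from sid and build it against the ppc64 base.tgz
apt-get source libattr1
pbuilder --build --basetgz /var/cache/pbuilder/ppc64.tgz attr_*.dsc
I manually installed the resulting libraries with dpkg -i, then tried again: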
apt install dpkg:ppc64
Woo, it actually wants to do that. Now, that only half works, because apt calls dpkg twice: once to remove the old version, and once to install the new one. Your options at this point are:
apt-get download dpkg:ppc64
dpkg -i dpkg_*_ppc64.deb
or if you didn't think far enough ahead, cursing followed by
cd /tmp
# a .deb is an ar archive; this extracts its members, including data.tar.xz
ar x /var/cache/apt/archives/dpkg_*_ppc64.deb
cd /
# unpack the new dpkg's files directly onto the filesystem
tar -xJf /tmp/data.tar.xz
# then register the package properly with the now-working dpkg
dpkg -i /var/cache/apt/archives/dpkg_*_ppc64.deb
Step 4: Automate That
Now I'd like to make this a bit more convenient, so I had to repeat the same dance with apt and aptitude and their dependencies. Thanks to pbuilder, this wasn't too bad. With the aptitude resolver, it was then simple to upgrade a test package:
aptitude install coreutils:ppc64 coreutils:powerpc-
The resolver did its thing and asked whether I really wanted to remove an Essential package. I did, and it replaced the package just fine. So I asked dpkg for a list of all powerpc packages installed (since it's a ppc64 dpkg, it reports powerpc as foreign), massaged that into shape with grep and sed, and gave the result to aptitude as a command line. Some time later, aptitude finished, and I had a shiny 64 bit system. The entire crossgrade happened through an ssh session that remained open the whole time, and without a reboot; after closing the ssh session, the last 32 bit binary was deleted, as it was no longer in use.
There were a few minor hiccups during the process where dpkg refused to overwrite "shared" files with different versions, but these could be solved easily by manually installing the offending package with
dpkg --force-overwrite -i ...
and then resuming what aptitude was doing, using
aptitude install
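For the record, the "massage into shape" step above might have looked something like this (a hypothetical reconstruction following the coreutils pattern; double-check the dpkg output format before feeding it to aptitude):
# replace every installed powerpc package with its ppc64 counterpart
aptitude install $(dpkg --get-selections \
  | awk -F'[:\t]' '/:powerpc/ { printf "%s:ppc64 %s:powerpc- ", $1, $1 }')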
So, in summary, this still works fairly well.

9 December 2016

Simon Richter: Busy

I'm fairly busy at the moment, so I don't really have time to work on free software, and when I do, I'd rather do something other than sit in front of a computer. I have declared email bankruptcy at 45,000 unread mails. I still have them, and plan to deal with them in small batches of a few hundred at a time, but in case you sent me something important, it is probably stuck in there. I now practice Inbox Zero, so resending it is a good way to reach me.
For my Debian packages, not much changes. Any package with more than ten users is team maintained anyway. Sponsoring for the packages where I agreed to do so goes on. For KiCad, I won't get around to much of what I'd planned this year. Fortunately, at this point no one expects me to do anything soon. I still look into the CI system and unclog anything that doesn't clear on its own within a week.
Plans for December:
Plans for January:
Plans for February:
Other than that, reading lots of books and meeting other people.

1 November 2016

Simon Richter: Using the Arduino IDE with a tiling window manager

The Arduino IDE does not work properly with tiling window managers, because they do some interesting reparenting. To solve this, add
_JAVA_AWT_WM_NONREPARENTING=1
export _JAVA_AWT_WM_NONREPARENTING
to the start script or your environment. Credit: "Joost"
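For a one-off test, the variable can also be set for a single launch (assuming the IDE is started with the arduino command):
_JAVA_AWT_WM_NONREPARENTING=1 arduino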

1 May 2016

Simon Richter: With great power comes great responsibility

On the other hand,
export EDITOR='sed -ie "/^\+/s/ override//"'
yes e | git add -p
is a good way to commit all your changes except the addition of C++11 override specifiers: git add -p answers "e" (edit) for every hunk, and the sed expression then strips the override additions from each hunk before it is staged.

14 April 2016

Simon Richter: It Begins.

Just starting a small side project... 3D printing GLaDOS's head

14 March 2016

Simon Richter: IPsec settings for FortiNet VPN

So, a customer uses a FortiNet VPN gateway. Because I have perfectly fine IPsec software already installed, the only thing missing is the appropriate settings. As they use IKEv1 in aggressive mode, there is not much of an error reply if you get any of the settings wrong. So, here's a StrongSwan setup that works for me:
conn fortinet
        left=%any
        leftauth=psk
        leftid=""
        leftauth2=xauth
        xauth_identity="your username"
        leftsourceip=%config
        right=gateway IP address
        rightsubnet=VPN subnet
        rightauth=psk
        keyexchange=ikev1
        aggressive=yes
        ike=aes128-sha1-modp1536!
        esp=aes128-sha1-modp1536!
        auto=add
Not sure if that can be optimized further by getting the VPN subnet through mode_config as well, but I'm basically happy with the current state. In addition to that, you need the PSK and XAUTH secrets, obviously.
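For completeness, the corresponding /etc/ipsec.secrets entries would look roughly like this (a sketch with placeholder values):
# PSK for the gateway, plus the XAUTH login used above
<gateway IP address> : PSK "the pre-shared key"
<your username> : XAUTH "your password"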

12 March 2016

Simon Richter: 8192!

So, finally I've also joined the club.
2048 board with 8192 tile

3 March 2016

Simon Richter: Why sysvinit?

As most distributions have switched to systemd as the default "init", some people have asked why we actually keep sysvinit around, as it's old and crusty, and systemd can do so much more. My answer is: systemd and sysvinit solve entirely different problems, and neither can ever replace the other fully.
The big argument in favour of systemd is integration: when you keep everything in one process, it is easy to make sure that components talk to each other. That is nice, but kind of misses the point: integration could be done via IPC as well, and systemd uses a lot of IPC to separate privileges anyway. The tighter integration comes from something else: the configuration format. Where sysvinit uses a lot of shell scripts (an "imperative" format), systemd uses explicit keywords (a "descriptive" format), and this is the important design decision here: descriptive formats cannot do anything unanticipated, neither good nor bad.
This matters for UI programmers: if there is a button that says "if I close the lid switch, suspend the laptop", then it needs to be connected to functionality that behaves this way, and the connection needs to work in both directions, so that the user finds the active setting when opening the dialog. We therefore need to limit ourselves to a closed set of choices, so the UI people can prepare translations. This is the design tradeoff systemd makes: better integration in exchange for less flexibility. Of course, it is simple to allow "custom" actions everywhere, but that starts to break the "well-integrated" behaviour.
There is an old blog post from Joey Hess about how to implement a simple alarm clock using systemd, and it shows both how powerful and how limited the system is. On one hand, the single line WakeSystem=true takes care of the entire dependency chain attached to making sure the system is actually on -- when the system is going to suspend mode, some program needs to determine that there are scheduled jobs that should wake up the system, determine which of these is the next one, program the wakeup time into the hardware, and only then allow the suspend to continue. On the other hand, the descriptive framework already breaks down in this simple example, because the default behaviour of the system is to go to sleep when the lid switch is closed, and the easiest way around that is to tell systemd to ignore the lid switch while the program is running. The IPC permissions disallow scheduled user jobs from setting this, so a "custom" action is taken that sets up the "ignore the lid switch" policy, changes to an unprivileged user, and runs the actual job. So, it is possible to get back flexibility, but it is traded for integration: while the job is running, any functionality that the "suspend laptop when the lid is closed" button had is broken.
For something as simple as a switch on a personal laptop that breaks if the user configures something non-standard, that is acceptable. However, the keywords in the service unit files, the timer files, etc. are also part of a descriptive framework, and the services using them expect that the things it promises are kept by the framework, so there is a limit to how many exceptions you can grant.
SystemV init really comes from the other direction here. Nothing is promised to init scripts but a vague notion that they will be run with sufficient privileges. There is no dependency tracking that ensures that certain services will always start before others, just priorities.
You can add a dependency system like insserv if you like, and make sure that all the requirements for it to work (i.e. services declaring their dependencies) are met, and there is a safe fallback of not parallelizing at all. Because so little is promised to scripts, writing them becomes a bit tedious -- all of them have a large case statement where they parse the command line (see the skeleton at the end of this post), and all of them need to reimplement dropping privileges and finding the PID of a running process. There are lots of helpers for the more common things, like the LSB standard functions for colorful and matching console output, and Debian's start-stop-daemon for privilege and PID file handling, but in the end it remains each script's responsibility.
Systemd will never be able to provide the same flexibility to admins while at the same time keeping the promises made in the descriptive language, and sysvinit will never reach the same amount of easy integration. I think it is fairly safe to predict both of these things, and I'd even go a step further: if either of them tried, I'd ask the project managers what they were thinking. The only way to keep systems as complex as these running is to limit the scope of the project.
With systemd, the complexity is kept in the internals, and it is important for manageability that all possible interactions are well-defined. That is a huge issue when writing adventure games, where you need to define interaction rules for each player action in combination with each object, and for each pair of objects that can be combined with the "Use" action. Where adventure games would default to "This doesn't seem to work." or "I cannot use this.", such an error message would not be helpful when we're trying to boot, so we really need to cut down on actions and object types here.
Sysvinit, on the other hand, is both more extreme in limiting the scope -- very few actions (respawn, wait, once) and only one object type (a process that can be started and may occasionally terminate) in the main program -- and a lot more open in what scripts outside the main program are allowed to do. The prime example is that the majority of configuration takes place outside the inittab even: a script implements the interface by which services are registered (dropping init scripts into specific directories), and scripts can also create additional interfaces, but none of this is intrinsic to the init system itself.
Which system is the right choice for your system is just that: your choice. You would choose a starting point and customize what is necessary for your computer to do what you want it to do. Here's the thing: most users will be entirely happy with fully uncustomized systemd. It will suspend your laptop if you close the lid, and even give your download manager veto power. I fully support Debian's decision to use systemd as the default init for new installs, because it makes sense to use the default that is good for the vast majority of users.
However, my opening statement still stands: systemd can never support all use cases, because it is constrained by its design. Trying to change that would turn it into an unmaintainable mess, and the authors are right in rejecting this. There are use cases that define the project, and use cases that are out of scope. This is what we need sysvinit for: the bits that the systemd project managers rightfully reject, but that are someone's use case. Telling these users to get lost would be extremely bad style for the "universal operating system".
For example, consider a service that asks a database server for a list of virtual machines to start, runs each in its own private network namespace that is named by replacing part of the virtual machine name with the last two octets of the host machine's IP address, then binds a virtual Ethernet device into the network namespace, and sets up a NAT rule that allows the VMs to talk to the public Internet as well as a firewall rule to stop them from talking to each other. Such a beast lives outside of systemd's world view. You can easily start it, but systemd would not know how to monitor it (is it a good sign that some process is still running?), how to shut down one instance, how to map processes to instances, and so on. SystemV init has the advantage here that none of these things are defined by the system, and as long as my init script has some code to handle "status", I can easily check up on my processes with the same interface I'd use for everything else.
This requires me to have more knowledge of how the init system works, that is true; however, I'd also venture that the learning curve for sysvinit is shallower. If you know shell scripting, you can understand what init scripts do, without memorizing special rules that govern how certain keywords interact, and without keeping track of changes to these rules as the feature set is expanded.
That said, I'm looking at becoming the new maintainer of the sysvinit package so we can continue to ship it and give our users a choice, but I'd definitely need help, as I only have a few systems left that won't run properly with systemd, because of its nasty habit of keeping lots of code in memory where sysvinit would unload it after use (by letting the shell process end), and these don't make good test systems. The overall plan is to set up a CI system. Ideally, I'd like to reach a state where we can actually ensure that the default configuration is somewhat similar regardless of which init system we use, and that it is possible to install with either (this is a nasty regression in jessie: bootstrapping a foreign-architecture system broke). This will take time, and step one is probably looking at bugs first -- I'm grateful for any help I can get here; primarily, I'd like to hear about regressions via bug reports.
tl;dr: as a distribution, we need both systemd and sysvinit. If you suggest that systemd can replace sysvinit, I will probably understand it to mean that you think the systemd maintainers don't know project management. I'm reluctantly looking at becoming the official maintainer because I think sysvinit is important to keep around.
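As for the large case statement mentioned above, here is a minimal sketch of the boilerplate a typical init script carries (illustrative only -- exampled is a made-up daemon, and this is nowhere near a complete LSB init script):
#!/bin/sh
# illustrative skeleton; exampled and its pidfile are made up
DAEMON=/usr/sbin/exampled
PIDFILE=/var/run/exampled.pid

case "$1" in
  start)
    start-stop-daemon --start --quiet --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    start-stop-daemon --stop --quiet --pidfile "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  status)
    start-stop-daemon --status --pidfile "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}" >&2
    exit 1
    ;;
esac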

Simon Richter: UI Design is Hard, Let's Go Shopping

As I have a few hours of downtime on the train, I'm working on one of my favourite KiCad features again: the pin table. This silently appeared in the last KiCad release -- when you have a schematic component open in the editor, there is a new icon, currently on the right of the toolbar, that brings up a table of all pins. Currently, I have the pin number, name and type in the table (as well as the coordinates on the grid, but those are probably not that useful). For the release, this table is read-only, but it can at least be useful for proofreading: a left mouse click on a column header sorts (as would be expected), and a right mouse click on a column header creates a grouping (i.e. all pins that have the same value in this column are collapsed into a single row).
The latter is entirely unintuitive, but I have no idea how to show the user that this option exists at all. The good old DirectoryOpus on the Amiga had a convention that buttons with right-click functionality had an earmark to indicate that a "backside" exists, but that is a fairly obscure UI pattern, and it would be hard to reproduce across all platforms. I could make the right click drop down a menu instead of immediately selecting the column for grouping -- at least this way it would be explained what is happening, but advanced users may get annoyed at the extra step. I could add a status bar with a hint, but that will always cost me screen space, and it only papers over the issue. I could add a menu with sorting/grouping options, which would allow novice and intermediate users to find the function, but that still does not hint at the shortcut, and the functions to set the sorting would have to be disabled on Linux, because wxGTK cannot set the sort column from a program.
For editing, another problem turns up in the collapsed rows: what should editing do when we are basically editing multiple pins at once? The most useful mapping would be:
However, I'd like to later on introduce multiple-function pins, which would add multiple (partially) exclusive pin names per pin number, and that would change the semantics:
At the moment, I'm mostly inclined to have separate modes for "grouped by pin number" and "grouped by pin name" -- these modes could be switched by the current grouping column, but that is neither intuitive, nor does it provide a full solution (because grouping by pin type is also supported). Time to look at the use cases:
At 32C3, I implemented a quick completeness check, consisting of a summary line that collapses the pin number column into a comma-separated list of ranges (where a range is defined as consecutive numbers in the last group of digits). I wonder if there is a group of developers with a UI focus who are happy about solving problems like these...

14 January 2016

Vincent Sanders: Ampere was the Newton of Electricity.

I think Maxwell was probably right; certainly the unit of current to which Ampere gives his name has been a concern of mine recently.

Regular readers may possibly have noticed my unhealthy obsession with single board computers. I have recently rehomed all the systems into my rack, which threw up a small issue: powering them all. I had been using an ad-hoc selection of USB wall warts and adapters, but this ended up needing nine mains sockets and, short of purchasing a very expensive PDU for the rack, would have needed a lot of space.

Additionally, having nine separate converters from mains AC to low-voltage DC was consuming over 60 W for 20 W of load! The majority of these supplies were simply delivering 5 V, either via micro USB or a DC barrel jack.

Initially I considered using a ten-port powered USB hub, but this seemed expensive given that I was not going to use the data connections; it also had a limit of 5 W per port, and some of my systems could potentially draw more power than that, so I decided to build my own supply.

PSU module from ebay
A quick look on ebay revealed that a 150 W (30 A at 5 V) switching supply could be had from a UK vendor for £9.99, which seemed about right. An enclosure, a fused and switched IEC inlet, an ammeter/voltmeter with shunt, and suitable cables were acquired for another £15.

Top view of the supply all wired up
A little careful drilling and cutting of the enclosure made openings for the inlets, cables and display. These were then wired together with crimped and insulated spade and ring connectors. I wanted this build to be safe and reliable so care was taken to get the neatest layout I could manage with good separation between the low and high voltage cabling.

Completed supply with all twelve outputs wired up
The result is a neat supply with twelve outputs, which I can easily extend to eighteen if needed. I was pleasantly surprised to discover that even with twelve SBCs connected, generating a 20 W load, the power drawn by the supply was 25 W, or about 80% efficiency, instead of the 33% previously achieved.

The inbuilt meter allows me to easily see the load on the supply, which so far has not risen above 5 A even at peak draw, despite the Cubietruck and BananaPi having spinning-rust hard drives attached, so there is plenty of room for my SBC addiction to grow (I already pledged for a Pine64).

Supply installed in the rack with some of the SBC connected
Overall I am pleased with how this turned out, and while there are no detailed design files for this project, it should be easy to follow if you want to repeat it. One note of caution though: this project involves mains wiring, and while I am confident in my own ability to deal with potentially lethal voltages, I cannot be responsible for anyone else, so caveat emptor!

1 December 2015

Simon Richter: Debian at the 32C3

In case you are going to 32C3, you may be interested in joining us at the Debian Assembly there. As in previous years, this is going to be fairly informal, so if you are loosely affiliated with Debian, or want to become a member by the time 34C3 rolls around, you are more than welcome to show up and sit there.

28 October 2015

Simon Richter: CEC remote control with Onkyo Receivers

For future reference: if you have an Onkyo AV receiver, these are picky about the CEC logical address of the player. With a default installation of Kodi, the device type is "recording device", and the logical address is 14 -- this is rejected. If you change the CEC device configuration, which can be found in .kodi/userdata/peripheral_data/, to use a device_type of 4 (which maps to a logical address of 4), then the receiver will start sending key events to Kodi. This setting is not available from the GUI, so you really have to change the XML file. You may also need to set your remote control mode to 32910, by pressing the Display button in combination with the input source selector your Pi is connected to for three seconds, and then entering the code. Next step: finding out why Kodi ignores these key presses. :(
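A sketch of what that edit might look like (the exact file name under peripheral_data/ depends on your adapter, and the surrounding markup here is an assumption from memory, so double-check against the file Kodi generated):
<!-- assumed layout: set the CEC adapter's device type to 4, i.e. playback device -->
<settings>
    <setting id="device_type" value="4" />
</settings>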

20 October 2015

Simon Richter: Bluetooth and Dual Boot

I still have a dual-boot installation because sometimes I need Windows for something (usually consulting work), and I use a Bluetooth mouse. Obviously, when I boot into Windows, it does not have the pairing information for the mouse, so I have to redo the entire pairing process, and repeat that when I get back to Linux. So, dear lazyweb: is there a way to share Bluetooth pairing information between multiple operating system installations?

17 October 2015

Simon Richter: Key Transition

So, for several years now I've had a second gpg key, 4096R/6AABE354. Several of you have already signed it, and I've been using it in Debian for some time already, but I've not announced it more widely yet, and I occasionally still get mail encrypted to the old key (which remains valid and usable, but it's 1024D). Of course, I've also made a formal transition statement:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
OpenPGP Key Transition Statement for Simon Richter
I have created a new OpenPGP key and will be transitioning away from
my old key.  The old key has not been compromised and will continue to
be valid for some time, but I prefer all future correspondence to be
encrypted to the new key, and will be making signatures with the new
key going forward.
I would like this new key to be re-integrated into the web of trust.
This message is signed by both keys to certify the transition.  My new
and old keys are signed by each other.  If you have signed my old key,
I would appreciate signatures on my new key as well, provided that
your signing policy permits that without re-authenticating me.
The old key, which I am transitioning away from, is:
pub   1024D/5706A4B4 2002-02-26
      Key fingerprint = 040E B5F7 84F1 4FBC CEAD  ADC6 18A0 CC8D 5706 A4B4
The new key, to which I am transitioning, is:
pub   4096R/6AABE354 2009-11-19
      Key fingerprint = 9C43 2534 95E4 DCA8 3794  5F5B EBF6 7A84 6AAB E354
The entire key may be downloaded from: http://www.simonrichter.eu/simon.asc
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver keys.gnupg.net --recv-key 6AABE354
If you already know my old key, you can now verify that the new key is
signed by the old one:
  gpg --check-sigs 6AABE354
If you are satisfied that you've got the right key, and the User IDs
match what you expect, I would appreciate it if you would sign my key:
  gpg --sign-key 6AABE354
You can upload your signatures to a public keyserver directly:
  gpg --keyserver keys.gnupg.net --send-key 6AABE354
Or email sr@simonrichter.eu (possibly encrypted) the output from:
  gpg --armor --export 6AABE354
If you'd like any further verification or have any questions about the
transition please contact me directly.
To verify the integrity of this statement:
  wget -q -O- http://www.simonrichter.eu/key-transition-2015-03-09.txt | gpg --verify
   Simon
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBAgAGBQJU/bbEAAoJEH69OHuwmQgRWOIH/AogHxhhVO5Tp5FFGpBFwljf
NzKTPBExMhZ/8trAzYybOWFv3Bx4AGdWkYfDnxP6oxQJOXVq4KL6ZcPPuIZuZ6fZ
bu0XHdPMU89u0TymR/WJENRCOcydRBe/lZs+zdJbKQtEZ+on4uNXxHpUiZPi1xxM
ggSdVBKn2PlCBcYih40S9Oo/rM9uBmYcFavX7JMouBSzgX78cVoIcY6zPRmHoq4k
TkGKfvHeSu+wbzWRmDwu/PFHRA4TKNvR6oeO+Et1yk454zjrHMXineBILRvvMCIA
t54pV6n+XzOUmtXcKnkIGltK+ZaJSV6am0swtx84RaevVXrknIQE8NvlA4MNgguI
nAQBAQIABgUCVP23tAAKCRDSx966V9+/u3j4BACVHifAcO86jAc5dn+4OkFZFhV1
l3MKIolL+E7Q7Ox+vJunGJJuGnOnazUK+64yDGZ2JxNJ4QNWD1FOs/Ng2gm82Vin
ArBtyp1ZGWUa+349X+1qarUQF9qAaUXDZjFp5Hzh/o6KC4t3eECxcb41og3LUTQD
VuG2KWNXYBe5P5ak9Q==
=o61r
-----END PGP SIGNATURE-----

15 October 2015

Simon Richter: Restarting blogging

I haven't had a blog for a few years now (and I might at some point be tempted to import all the old content), but occasionally I just want to point at something cool, rant about technology, or talk about my projects in the hope that someone else gets inspired to point at, or rant about them. My current interests are

19 October 2012

Martin F. Krafft: Configuration management

Puppet
I've really had it with Puppet. I used to be able to put up with all its downsides:
  • Non-Unix approach to everything (own transport, self-made PKI, non-intuitive configuration language, a faint attempt at versioning (bitbucket), and much, much more ...)
  • Ruby
  • Abysmal slowness
  • Lack of basic functionality (e.g. replace a line of text)
  • Host management and configuration programming intertwined, lack of a high-level approach to defining functionality
  • Horrific error messages
  • Catastrophic upgrade paths
  • Did I mention Ruby and its speed?
  • Lack of IPv6 support
  • [I could keep going ...]
but now that my fourth attempt to upgrade my complex configuration from version 0.25.5 to version 2.7 failed due to a myriad of completely incomprehensible errors ("err: Could not run Puppet configuration client: interning empty string") and many hours were lost in trying to hunt these down using binary searches, I am giving up. Bye bye Puppet.

An alternative
But I need an alternative. I want a system that is capable of handling a large number of hosts, but not so complex that one wouldn't put it to use for half a dozen machines. The configuration management system I want looks about as follows: It
  • makes use of existing infrastructure (e.g. SSH transport and public keys, Unix toolchain, Debian package management and debconf)
  • interacts with the package management system (Debian only in my case)
  • can provision files whose contents might depend on context, particular machine data and conditionals. There should be a unified templating approach for static and dynamic files, with the ability to override the source of data (e.g. a default template used unless a template exists for a class of machine, or a specific hostname)
  • can edit files on the target machine in a flexible and robust manner
  • can remove files
  • can run commands when files change
  • can reference data from other machines (e.g. obtain the certificate fingerprint of each host that defines me as its SMTP smarthost)
  • can control running services (i.e. enable init.d scripts, check that a process is running)
  • is written in a sensible language
  • is modular and easily extensible, ideally using a well-known language (e.g. Python!)
  • allows me to specify infrastructure with tags ("all webservers", "all machines in Zurich", "machines that are in Munich and receive mail"), but with the ability to override every parameter for a specific host
  • should just do configuration management, and not try to take away jobs from monitoring software
  • logs changes per-machine and collects data about applied configurations in a central location
  • is configured using flat files that are human-readable so that the configuration may be stored in Git (e.g. YAML, not XML)
  • can be configured using scripts in a flexible way
Since for me, Ruby is a downside of Puppet, I won't look at Chef, but from this page, I gleaned a couple of links: Ansible, Quattor, Salt, and bcfg2 (which uses XML though). And of course, there remains the ephemeral cfengine.

cfengine
I haven't used cfengine since 2002, but I am not convinced it's worth a new look, because it seems to be an academic project with gigantic complexity and a whole vernacular of its own. There is no doubt that it is a powerful solution, and the most mature of them all, but it's far away from the Unix-like simplicity that I've come to love in almost 20 years of Debian. Do correct me if I am wrong.

Ansible
Ansible looks interesting. It seems rather bottom-up, first introducing a way to remotely execute commands on hosts, which you can then later extend/automate to manage the host configurations. It uses SSH for transport, and its reason-to-be made me want to look at it. My ventures into the Ansible domain are not over yet, but I've put them on hold. First of all, it's not yet packaged for Debian (Ubuntu-PPA packages work on Debian squeeze and wheezy). Second, I was put off a bit by its gratuitous use of the shell to run commands, as well as by other design decisions. Check this out: there are modules for the remote execution of commands, namely "shell", "command", and "raw". The shell module should be self-explanatory; the command module provides some idempotency, such as not running the command if a file exists (or not). To do this, it creates a Python script in /tmp on the target and then executes that like so:
$SHELL -c /tmp/ansible/ansible-1350291485.22-74945524909437/command

Correct me if I am wrong, but there is zero need for this shell indirection. My attempts at finding an answer on IRC were met by user "daniel_hozac" with a reason along the lines of "it's needed, believe me", and on the mailing list I am told that only the shell can execute a script by parsing the interpreter line at the top of the module. Finally, the raw execution module also executes using the shell. There are a few other design decisions that I can't quite explain, around the command-line switch --sudo (see the aforementioned message). In short: running a command like
ansible -v arnold.madduck.net -a "/usr/bin/apt-get update" --sudo

does not invoke apt-get with sudo, as one might like; it invokes the shell that runs the Python script that runs the command. Effectively, therefore, you need to allow sudo shell execution, and for proper automation this has to be possible without a password. And then you might just as well allow root logins again. The author seems to think that "core behaviour" is that sudo allows all execution, and that limiting the commands it may run is not a use case that Ansible will support. Apparently, I was the first to ever suggest this. There are always ways around this (e.g. skip --sudo and just use sudo as the command, or simply ignore the useless shell invocation and trust that your machine can handle it), but when such design decisions remain incomprehensible and get defended by the project people, then I am hesitant to invest more time, on principle.
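The first of those workarounds might look roughly like this (a sketch, assuming a passwordless sudo rule on the target; -n just makes sudo fail instead of prompting):
ansible -v arnold.madduck.net -a "sudo -n /usr/bin/apt-get update"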

Salt
Finally, I've looked at Salt, which is what I've spent the most time on so far. From the discussions I started on host targeting and data collection, it soon became apparent that Salt is very thin and flexible, and that the user community is accommodating. Unfortunately, Salt does not use SSH, but at least it reuses existing functionality (ZeroMQ). As opposed to the push/pull model, Salt "minions" interestingly maintain a persistent connection to the server (which is not yet very stable), and while non-root usage is still not unproblematic, at least there has already been work done in this direction. I think I will investigate Salt more, as it does look like it can do what I want. The YAML-based syntax does seem a bit brittle, but it's the best I've found so far. NP: The Pineapple Thief: Someone Here is Missing

27 December 2011

David Welton: 2011 in Books

Since I got my Kindle a bit more than a year ago, I have finally been able to slake my thirst for reading materials, something that was prohibitively expensive when ordering English language books via Amazon.co.uk, and took lots of time to boot. Here are some of the interesting books I've happened on in the past year. The big one was "Start Small, Stay Small", which has tons of ideas on how to do small, niche startups, "for the rest of us": those of us who aren't in Silicon Valley, who aren't seeking millions in VC funding, who don't want to aim for "astronomically rich", but just a comfortable lifestyle with more control over our own destiny. This book gets special mention for being a big inspiration for LiberWriter. Here is a list of the others. And for fun, a variety of sci-fi and Western books, but nothing particularly noteworthy. Neal Stephenson's REAMDE was fun, but I'm not sure I'd read it more than once, like some of his other books. Here are my Amazon wishlists of things I'm considering reading at some point in the future; comments are welcome on the value of the books listed. "Regular" books: http://amzn.com/w/20I0Y1YGD1FUB (and random fun books and movies). Business books: http://amzn.com/w/5B2JQOP8VZEW (although some of them are not strictly business books). Yes, if you're curious, the book links do have referral codes in them, to help sustain my reading habit.

3 November 2011

John Goerzen: Greece part 4: Food and Shopping

See also parts 1, 2, and 3. I am a person that enjoys food that's different from what's at home, and Rhodes didn't disappoint. Terah and I used to live close to an excellent Greek restaurant in Indianapolis, so we were already familiar in some way with the food. But there isn't any Greek restaurant at all in the Wichita area, so we missed it. Our favorite restaurant on Rhodes was Kalypso of Lindos. Everything there was just excellent, starting with the saganaki, one of my favorite Greek appetizers. I had yogurt with honey there both times we visited, a surprisingly tasty dessert. Like many restaurants in Lindos, Kalypso had the option of eating on the rooftop, or at ground level. We ate on the roof, which had a nice view of the Lindos Acropolis. Being outdoors, there were sometimes cats around. This kitten enjoyed playing games with my shoestrings for awhile. Kalypso is in a 17th century captain's house. Here's a view of it from the rooftop:
We, of course, had the chance to eat at quite a few different places during our visit, and I'd go on way too long if I mentioned them all. Terah particularly enjoyed the gelaterie.gr ice cream shop in the square in Rhodes. We liked our lunch at Maria's Taverna in Lindos and enjoyed chatting with the staff there.
I recently talked about shopping in Mexico, and perhaps learned a thing or two from that. I won't say we never buy them, but in general we don't buy souvenirs like t-shirts, plastic things made in China, etc. We prefer to buy local. Those items tend to be higher quality and more interesting, and we like to support the local economy. We also don't have lots of room for things, so we try to choose carefully. So it was something of a surprise to Terah, and perhaps even to me, when I suggested we go shopping one day. Terah typically enjoys shopping a lot more than I do. Anyhow, off we went to Lindos.
One of the first things that had caught our eye in Lindos was the shop selling glass. But it wasn't just any glass; it appeared to be made with some sort of layered process, and has a distinctly three-dimensional feel to it. As you move around, it looks like the background shifts. We wound up with this item, which was made in Athens:
By the time we visited Lindos specifically for shopping, we had a good feel for when the busy times of day were, so we could avoid them. It gave us the opportunity to visit with people, and when they weren't busy, many shopkeepers liked to chat. I enjoy hearing people's stories, and we heard several. One ceramics shop, the Musa Shop, caught Terah's eye. They had such incredible and beautiful pieces outside that we just had to go in. We wound up with two pieces from there, both in shades of blue:
Both remind me of the Aegean Sea and the deep blue sky of Rhodes. And then, as we were walking along, I pointed inside a shop and said to Terah, "Hey, those look different." We went in, and eventually wound up buying these:
The appearance, and even feel, of them is unlike anything I'd seen. Quite interesting. And seeing those particular items in the Lakis Place shop led to making some new friends -- I'll write about it in the next post.

14 November 2008

Simon Richter: --as-needed

Raphael cheers for the addition of an --as-needed option to a linker command line. While I think the general idea of not linking libraries you don't use directly is a good one, --as-needed is not a proper solution since it tells the linker to second-guess the compiler. A proper solution would be to fix the build systems so they don't pass the names of unneeded libraries in the first place. There are valid reasons for linking against objects that provide no symbols to the object being built, for example if the plugin ABI of a program has these symbols exported to plugins (in order to have all plugins agree on a common set of allocation functions).
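For illustration (my own sketch, not from the original post): with --as-needed, a library that contributes no symbols to the link is dropped from the DT_NEEDED list even though it appears on the command line, which you can verify with readelf:
# libm is requested but unused here; --as-needed omits it from DT_NEEDED
gcc -Wl,--as-needed -o demo demo.c -lm
readelf -d demo | grep NEEDED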
