Search Results: "Patrick Schoenfeld"

23 October 2017

Patrick Schoenfeld: Testing javascript in a dockerized rails application with rspec-rails

The other day I wanted to add support for tests of javascript functionality in a (dockerized) rails application using rspec-rails. Since rails 5.1 includes system tests with niceties like automatically taking a screenshot on failed tests, I hoped for a way to benefit from these
features without changing to another test framework. Luckily the authors of rspec-rails only recently added support for so-called system specs. There is not much documentation so far, but there is a lot of useful information in the corresponding bug report #1838, and a friendly guy named Thomas Walpole (@twalpole) is helpfully answering questions in that issue. To make things a little bit more complicated: the application in question usually runs in a docker container, and thus the tests of the application are also run in a docker container. I didn't want to change this, so here is what it took me to get this running.

Overview

Let's see what we want to achieve exactly: from a technical point of view, we will have the application under test (AUT) and the tests in one container (let's call it: web) and we will need another container running a javascript-capable browser (let's call it: chrome). Thus we need the tests to drive a remotely running browser (at least when running in a docker environment), which needs to access the application under a different address than usual, namely an address reachable by the chrome-container, since the application will not be reachable via 127.0.0.1 as is (rightfully) assumed by default. If we want Warden authentication stubbing to work (as we do, since our application uses Devise) and transactional fixtures as well (e.g. rails handling database cleanup between tests without the database_cleaner gem), we also need to ensure that the application server is started by the tests and that the tests are actually run against that server. Otherwise we might run into problems.

Getting the containers ready

Assuming you already have a container setup (and are using docker-compose like we do), there is not that much to change on the docker front. Basically you need to add a new service called chrome, point it to an appropriate image and add a link to it in your existing web-container. I've decided to use standalone-chrome for the browser part, for which there are docker images provided by the selenium project (they also have images for other browsers). Kudos for that.
...
services:
  chrome:
    image: selenium/standalone-chrome
   
  web:
   
    links:
      - chrome
The link ensures that the chrome instance is available before we run the tests and that the web-container is able to resolve the name of this container. Unfortunately this is not true the other way round, so we need some magic in our test code to find out the IP address of the web-container. More on this later. Other than that, you probably want to configure a volume so that you can access the screenshots, which get saved to tmp/screenshots in the application directory.

Preparing the application for running system tests

There is a bit more to do on the application side. The steps are roughly:
  1. Add necessary depends / version constraints
  2. Register a driver for the remote chrome
  3. Configure capybara to use the appropriate host for your tests (and your configured driver)
  4. Add actual tests with type: :system and js: true
Let's walk them through.

Add necessary depends

What we need is the following: the required features are already part of rspec-rails 3.7.0, but this is the version I used and it contains a bugfix which may or may not be relevant. One comment about the rails version: for the tests to work properly it's vital that puma uses certain settings. Rails 5.1.4 (the version released at the time of writing this) uses the settings from config/puma.rb, which most likely collide with the necessary settings. You can ensure these settings yourself or use rails from the branch 5-1-stable, which includes this change. I decided for the latter and pinned my Gemfile to the then current commit.

Register a driver for the remote chrome

To register the required driver, you'll have to add some lines to your rails_helper.rb:
if ENV['DOCKER']
  selenium_url =
      "http://chrome:4444/wd/hub"
  Capybara.register_driver :selenium_remote do |app|
    Capybara::Selenium::Driver.new(app,
      :url => selenium_url, :browser => :remote, desired_capabilities: :chrome)
  end
end
Note that I added those lines conditionally (since I still want to be able to use a local chrome via chromedriver), namely only if an environment variable DOCKER is set. We defined that environment variable in our Dockerfile, so you might need to adapt this to your case. Also note that the selenium_url is hard-coded. You could very well take a different approach, e.g. using an externally specified SELENIUM_URL, but ultimately the requirement is that the driver needs to know that the chrome instance is running on host chrome, port 4444 (the container's default).

Configure capybara to use the appropriate host and driver

The next step is to ensure that javascript-requiring system tests are actually run with the given driver and use the right host. To achieve that we need to add a before-hook to the corresponding tests, or we can configure rspec to always include such a hook by modifying the rspec configuration in rails_helper.rb like this:
RSpec.configure do |config|
  ...
  config.before(:each, type: :system, js: true) do
    if ENV['DOCKER']
      driven_by :selenium_remote
      ip = Socket.ip_address_list.detect { |addr| addr.ipv4_private? }.ip_address
      host! "http://#{ip}:#{Capybara.server_port}"
    else
      driven_by :headless_chrome
    end
  end
end
Note the part with the IP address: it tries to find an IPv4 private address of the web-container (the container running the tests), to ensure the chrome-container uses this address to access the application. The Capybara.server_port is important here, since it will correspond to the puma instance launched by the tests. That heuristic (first private IPv4 address) works for us at the moment, but it might not work for you. It is basically a workaround for the fact that I couldn't get web resolvable from the chrome container, which may be fixable on the docker side, but I was too lazy to investigate that further. If you change it: just make sure the host! method gets a URI pointing to an address of the web-container that is reachable from the chrome-container.

Define tests with type: :system and js: true

Last but certainly not least, you need actual tests of the required type (with or without js: true). This can be achieved by creating test files starting like this:
RSpec.feature "Foobar", type: :system, js: true do
Since the new rspec-style system tests are based on the feature specs that existed previously, the rest of the test is written exactly as described for feature specs.

Run the tests

To run the tests, a command line like the following should do: docker-compose run web rspec. It won't make a big noise about running the tests against chrome, unless something fails. In that case you'll see a message telling you where the screenshot has been placed.

Troubleshooting

Below are some hints about problems I've seen while configuring this:

Test failing, screenshot shows login screen

In that case puma might be configured wrongly or you are not using transactional fixtures. See the hints above about the rails version to use, which also include some pointers to helpful explanations. Note that rspec-rails by default does not show the puma startup output, as it clutters the test output. For debugging purposes it might be helpful to change that by adding the following line to your tests:
ActionDispatch::SystemTesting::Server.silence_puma = false
Error message: Unable to find chromedriver

This indicates that your driver is not configured properly, because the default for system tests is to be driven_by selenium, which tries to spawn its own chrome instance and is suitable for non-dockerized tests. Check that your tests are marked as js: true (if you followed the instructions above) and that you properly added the before-hook to your rspec configuration.

Collisions with VCR

If you happen to have tests that make use of the vcr gem, you might see it complaining about not knowing what to do with the requests between the driver and the chrome instance. You can fix this by telling VCR to ignore those requests, adding a line where you configure VCR:
VCR.configure do |config|
  # required so we don't collide with capybara tests
  config.ignore_hosts 'chrome'
...

9 June 2016

Patrick Schoenfeld: Ansible: Indenting in Templates

When using ansible to configure systems and services, templates can reach a significant complexity. Proper indenting can help to improve the readability of the templates, which is very important for further maintenance. Unfortunately the default settings for the jinja2 template engine in ansible enable trim_blocks only, while a combination with lstrip_blocks would be better. But here comes the good news: it's possible to enable that setting on a per-template basis. The secret is to add a special comment to the very first line of a template:
#jinja2: lstrip_blocks: True
This setting does the following: if enabled, leading spaces and tabs are stripped from the start of a line to a block tag. So a resulting template could look like this:
global
  {% for setting in global_settings %}
    {% if setting ... %}
    option {{ setting }}
    {% endif %}
  {% endfor %}
Unfortunately (or fortunately, if you want to see it this way) this does not strip leading spaces and tabs where the indentation is followed by plain text, e.g. the whitespace in line 4 is preserved. So as a matter of fact, if you care about the indentation in the resulting target file, you need to indent those lines according to the indentation wanted in the target file, like it is done in the example. In less simple cases, with deeper nesting, this may seem odd, but hey: it's the best compromise between a good, readable template and a consistently indented output file.

19 August 2015

Patrick Schoenfeld: aptituz/ssh 2.3.2 published

I've just uploaded an updated version of my puppet ssh module to the forge. The module aims at being a generic module to manage ssh servers and clients, including key generation and known_hosts management. It provides a mechanism to generate and deploy ssh keys without the need for storeconfigs or PuppetDB, using a server-side cache instead. This is neat if you want to retain ssh keys during a reprovisioning of a host.

Updates

The update is mostly to push out some patches I've received from contributors via pull requests in the last few months. It adds:

5 June 2015

Patrick Schoenfeld: Testing puppet modules: an overview

When it comes to testing puppet modules, there are a lot of options, but for someone entering the world of puppet module testing, the pure variety may seem overwhelming. This is an attempt to provide some overview. So you've written a puppet module and would like to add some tests. Now what? As of today, puppet tests can basically be done in two ways, complementing each other:

Catalog tests
In most cases you should at least write some catalog tests.
As of writing this (June 2015) the tool of choice is rspec-puppet. There used to be at least one other tool and you might have heard about it, but it's deprecated. For an introduction to this tool, you are best served by reading its brief but sufficient docs.
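To give an idea of what such a test looks like, here is a minimal, hypothetical example; the module, package and service names are made up:

# spec/classes/mymodule_spec.rb -- minimal, hypothetical rspec-puppet example
require 'spec_helper'

describe 'mymodule' do
  # the catalog should compile with all dependencies resolved
  it { is_expected.to compile.with_all_deps }

  # and it should declare the resources we care about
  it { is_expected.to contain_package('mypackage').with_ensure('installed') }
  it { is_expected.to contain_service('myservice').with_ensure('running') }
end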
Function acceptance tests

If catalog testing is not enough for you (e.g. you want to test that your website profile is actually installing and serving your site on port 80 and port 443), the next logical step is to write beaker tests, which test in a real system (as real as a virtual machine can be). This is also what you need if you are writing custom types. Today's tool of choice for this job is beaker with beaker-rspec. After you've written some rspec tests, this might feel similar. Since the documentation might not seem very newbie-friendly at first glance, the page Howto Beaker lists the relevant pages in the documentation to get started in a sensible order. Basically it's: update your module's build dependencies (Gemfile), decide on a hypervisor, create (or describe) your test environment, write spec tests and execute them :)
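For illustration, a hypothetical acceptance test using beaker-rspec together with serverspec-style matchers might look roughly like this; the profile class, the spec_helper_acceptance and the port checked are assumptions for the sake of the example:

# spec/acceptance/website_spec.rb -- hypothetical beaker-rspec sketch
require 'spec_helper_acceptance'

describe 'profile::website' do
  it 'applies cleanly and is idempotent' do
    pp = "include profile::website"
    apply_manifest(pp, catch_failures: true)  # first run must not fail
    apply_manifest(pp, catch_changes: true)   # second run must not change anything
  end

  # serverspec matcher: the web server should listen on port 80
  describe port(80) do
    it { is_expected.to be_listening }
  end
end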
Skeleton of a module

If testing puppet modules falls into your lap and you've already written your puppet code, it's too late to start with a module anatomy as generated by puppet module generate. But: it's certainly a good bet to know which technologies are today's common best practice.

Further reading

A very good guide to setting things up and writing tests of both types is the three-part blog post by Mickaël Canévet written for camptocamp. It is a basic guide to test-driven development (writing tests before writing actual code) using a practical example.

2 May 2015

Patrick Schoenfeld: Inbox: Zeroed.

E-Mail is a pest, a big time killer wasting your and my time each and every day. Of course it is also a valuable tool, one that no one can renounce. So how can it be of more use than trouble? So far I've followed a no-delete policy when it comes to my mails, since space was not a problem at all. But it developed into a big nasty pile of mails that brought regular distraction each time I looked at my inbox. So I decided to adopt the Inbox Zero concept.

Step 1: Get the pile down

My e-mails have piled up for years, so I had around 10000 mails in my inbox, with some hundred being unread. I needed to get this pile down and started with the most recent mails, trying to identify clusters of mails, filtering for them and then following these steps: Since it wasn't possible to decide on a course for every mail (that would be a bit like hoovering in the desert), I did this only for the first 1000 mails or so. All mails older than a month were marked read and moved to the archive immediately after. Another approach would be to move all mails to a folder called DMZ and go to step 2.

Step 2: Prepare for implanting some habits

Most mails are the opposite of good old hackish perl code: read only. They are easy to act on when they come around: just archive or delete them. But the rest will be what steals your time. Some mails require action, either immediately or in a while, some wait for a schedule, e.g. flight information or reservation mails and the like. Whatever the reason is, you want to keep them around, because they still have a purpose. There are various filing systems for those mails, most of them GTD variants. As a gmail user I found this variant, with multiple inboxes in a special gmail view, interesting and am now giving it a try. One word about the archive folders: I can highly recommend reducing the number of folders you archive to as much as possible.

Step 3: Get into habit

Now to the hard part: get into the habit of acting on your inbox. Do it regularly, maybe every hour or so, and be prepared to make quick decisions. Act on any mail immediately, which means either file/delete it, reply to it (if this is what takes less time) or mark it according to your filing system as prepared in step 2. And if no mails arrived, then it's a good moment to review your marked mails to see if any of them can be further processed. Now let's see whether my inbox will still be zeroed a month from now.

26 April 2015

Patrick Schoenfeld: Sharing code between puppet providers

So you've written that custom puppet type for something and start working on another puppet type in the same module. What if you needed to share some code between these types? Is there a way of code-reuse that works with the plugin sync mechanism? Yes, there is. Puppet even has two possible ways of sharing code between types.

Option #1: a shared base provider

A provider in puppet is basically a class associated with a certain (puppet) type, and there can be a lot of providers for a single type (just look at the package provider!). It seems quite natural that it's possible to define a parent class for those providers. So natural, that even the official puppet documentation writes about it.

Option #2: Shared libraries

The second option is a shared library, shipped in a certain namespace in the lib directory of the module; the idea is mostly sketched in the feature ticket #14149. Basically one defines a class in the special Puppetx namespace, using the author and module name in the class name, in order to avoid conflicts with other modules.
require 'puppetx'
module Puppetx::<Author>
  module Puppetx::<Author>::<Modulename>
   ... your helper functions go here ...
  end
end
This example would be saved to
lib/<author>/<modulename>
in your module's folder and be included in your provider with something along the lines of the following:
require File.expand_path(File.join(File.dirname(__FILE__), "..", "..", "..",
  "puppet_x", "<author>", "<modulename>.rb"))
Compatibility with Puppet 4: In puppet 4 the name of the namespace has changed slightly. It's now called PuppetX instead of Puppetx and is stored in a file puppet_x.rb, which means that the require and the module name itself need to be changed:
require 'puppet_x'
module PuppetX::<authorname>
  module PuppetX::<authorname>::<modulename>
For backward compatibility with puppet 3 you could instead add something like this, according to my co-worker mxey, who knows way more about ruby than I do:
module PuppetX
  module <Author>
    <Modulename> = Puppetx::<Authorname>::<Modulename>
  end
end
Apart from this you'd need to change the require to be conditional on the puppet version and refer to the module by the aliased name (which is left as an exercise for the reader ;)).
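For the impatient, one purely illustrative sketch of such a conditional require, keeping the <author>/<modulename> placeholders used above; treat it as a starting point, not a drop-in solution:

# sketch only: pick the namespace file depending on the running Puppet version
if Puppet.version.to_f >= 4.0
  require 'puppet_x'
  namespace_dir = 'puppet_x'
else
  require 'puppetx'
  namespace_dir = 'puppetx'
end

require File.expand_path(File.join(File.dirname(__FILE__), "..", "..", "..",
  namespace_dir, "<author>", "<modulename>.rb"))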

25 April 2015

Patrick Schoenfeld: WordPress(ed) again.

I just migrated my blog(s) to WordPress. Just recently I decided to put more time into blogging again. I wasn't entirely happy with Movable Type anymore, especially since the last update broke my customized theme and I struggled with the installation of another theme, which basically was never found, no matter which path I put it into. What I wanted is just more blogging, not all that technical stuff. And since the Movable Type makers also seem to have gone crazy (their Getting Started site tells users to head over to movabletype.com to get a 999$ license), I decided to get back to WordPress. There were reasons why I hadn't chosen WordPress back when I migrated from Blogger to MT, but one has to say that things have moved a lot since then. WordPress is as easy as it can be and has a thriving community, something I cannot say about Movable Type. The migration went okay, although there were some oddities in the blog entries exported by MT (e.g. datetime strings with EM and FM at the EOL) and I needed to figure out how the multisite feature in WordPress works. But now I have exactly what I want.

19 April 2015

Patrick Schoenfeld: Resources about writing puppet types and providers

When doing a lot of devops stuff with Puppet, you might get to a point where the existing types are not enough. That point is usually reached when a task at hand becomes extraordinarily complex when trying to achieve it with the Puppet DSL. One example of such a case could be if you need to interact with a system binary a lot. In this case, writing your own puppet type might be handy. Now where to start, if you want to write your own type?

Overview: modeling and providing types

First thing that you should know about puppet types (if you do not already): a puppet resource type consists of a type and one or more providers. The type is a model of the resource and describes which properties (e.g. the uid of a user resource) and parameters (like the managehome parameter) a resource has. It's a good idea to start with a rough idea of which properties your resource will manage and what values they will accept, since the type also does the job of validation. What actually needs to be done on the target system is what the provider is up to. There can be different providers for different implementations (e.g. a native ruby implementation or an implementation using a certain utility), different operating systems and other conditions. A combination of a type and a matching provider is what forms a (custom) resource type.

Resources

Next I'll show you some resources about puppet provider development that I found useful.

Official documentation: Types and providers are actually quite well documented in the official documentation, although it might not go too much into the details.

Blog posts:
A hands-on tutorial in multiple parts, with good explanations, are the blog posts by Gary Larizza.

Books:
Probably the most complete information, including explanations of the puppet resource model and its resource abstraction layer (RAL), can be found in the book Puppet Types and Providers by Dan Bode and Nan Liu.

The puppet source:
Last but not least, it's always worth a peek at how others did it. The puppet source contains all providers of the official puppet release, as well as the base libraries for puppet types and providers with their API documentation: https://github.com/puppetlabs/puppet/
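To make the split between type and provider a bit more concrete, here is a minimal, purely illustrative sketch; the resource name mything and its single property are invented for the example and not part of any real module:

# lib/puppet/type/mything.rb -- a minimal, hypothetical type
Puppet::Type.newtype(:mything) do
  @doc = "Manages a hypothetical 'mything' resource."

  ensurable   # adds the usual ensure => present/absent handling

  newparam(:name, namevar: true) do
    desc "The name of the thing."
  end

  newproperty(:owner) do
    desc "The owner of the thing."
    validate do |value|
      raise ArgumentError, "owner must not be empty" if value.to_s.empty?
    end
  end
end

# lib/puppet/provider/mything/ruby.rb -- a matching, hypothetical provider
Puppet::Type.type(:mything).provide(:ruby) do
  desc "Pure-Ruby provider; a real one would talk to the target system here."

  def exists?
    false               # query the target system instead
  end

  def create
    # create the resource on the target system
  end

  def destroy
    # remove the resource from the target system
  end

  def owner
    # read the current owner from the target system
  end

  def owner=(value)
    # change the owner on the target system
  end
end

The type only models and validates the resource; everything that touches the system lives in the provider, which is exactly the split described above.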

1 February 2012

Raphaël Hertzog: My Debian Activities in January 2012

This is my monthly summary of my Debian related activities. If you're among the people who made a donation to support my work (213.68 €, thanks everybody!), then you can learn how I spent your money. Otherwise it's just an interesting status update on my various projects.

Dpkg

The biggest change I made is a small patch that brings to an end years and years of recurring discussions about the build-arch and build-indep targets of debian/rules (see #229357). Last year the technical committee took this issue in its hands (see #629385) but it failed to take any resolution. Fortunately, thanks to this, we got some concrete numbers on the collateral damage inflicted on the archive for each possible approach. In the end, Guillem and I managed to agree on the way forward. The remainder of what I did as dpkg maintainer has not much to do with coding. I reviewed the work of Gianluca Ciccarelli on dpkg-maintscript-helper, who is trying to provide helper functions to handle migration between directories and symlinks. I also reviewed a 2000-line patch from Patrick Schoenfeld, who is trying to provide a perl API to parse dpkg log files and extract meaningful data out of them. I updated the dpkg-architecture manual page to document the Makefile snippet /usr/share/dpkg/architecture.mk and to drop information that's no longer relevant nowadays. I reviewed a huge patch prepared by Russ Allbery to update the Debian policy and document the usage of symbols files for libraries. As the author of dpkg-gensymbols, I was keen to see it properly documented at the policy level. I brought up for discussion a detail that had been annoying me for quite some time: some copyright notices were embedded in translatable strings and updating them resulted in useless work for translators. In the end we decided to drop those notices and to keep them only at the source level. I updated my multiarch branch on top of Guillem's branch several times; all the fixes that were in my branch have been integrated (often in a modified form). Unfortunately, even if the code works quite well, Guillem doesn't want to release anything to Debian until he has finished reviewing everything, and many people are annoyed by the unreasonable delay that it imposes. Cyril Brulebois tried to release a snapshot of the current multiarch branch to experimental but Guillem was prompt to revert this upload. I'm somewhat at a loss in this situation. I offered my help to Guillem multiple times but he keeps doing his work in private; he doesn't share many details of his review except some comments in commit logs or when it affects the public interface. I complained once more about this sad situation.

Debian Package Maintenance Hub

That's the codename I use for a new infrastructure that I would like to develop to replace the Package Tracking System, the DDPO and several other services. I started to draft a Debian Enhancement Proposal (DEP), see DEP-2, and requested some comments within the QA team. For now, it looks like nobody had major objections to the driving idea behind this project. Those who commented were rather enthusiastic. I will continue to improve this DEP within the QA team and at some point I will bring the discussion to a larger audience like debian-devel@lists.debian.org.

Package Tracking System

Even if I started to design its replacement, the PTS will still be used for quite some time, so I implemented two new features that I deemed important: displaying a TODO notice when there is (at least) one open bug related to a release goal, and displaying a notice when the package is involved in an ongoing or upcoming transition.

Misc packaging tasks

I created and uploaded the dh-linktree package, which is a debhelper addon to create symlink trees (useful to replace embedded copies of PHP/JavaScript libraries by symlinks to packaged copies of those files). I packaged quilt 0.50. I helped the upstream authors to merge a Debian patch that had been forwarded by Martin Quinson (a quilt co-maintainer). I packaged a security release of WordPress (3.3.1) and new upstream releases of feed2omb and gnome-shell-timer. I prepared a new Debian release of python-django with a patch cherry-picked from the upstream SVN repository to fix the RC bug #655666.

Book update

We're again making decent progress in the translation of the Debian Administrator's Handbook; about 12 chapters are already translated. The liberation campaign is also (slowly) going forward. We're at 72% now (thanks to 63 new supporters!) while we were only at 67% at the start of January.

Thanks

See you next month for a new summary of my activities.


5 January 2012

Patrick Schoenfeld: Bringing GVFS to a good use

One of the GNOME features I have really liked since the beginning of my GNOME usage is the ability to mount various network file systems with a few clicks and keystrokes. It enables me to quickly access NFS shares or files via SFTP. But so far these mounts weren't actually mounts in the classical sense, so they were only of rudimentary use.

As a user who often works with terminals I was always halfway happy with that feature and halfway not:

- Applications have to be aware of and enabled to make use of that feature, so it's often necessary to work around problems (e.g. movie players not able to open a file on a share)
- No shell access to files

Previously this GNOME feature was realised with an abstraction layer called GNOME VFS, which all applications needed to use if they wanted to provide access to the "virtual mounts". It made no effort to actually re-use common mechanisms of Un*x-like systems, like mount points. So it was doomed to fail to a certain degree.

Today GNOME uses a new mechanism, called GVFS. It's realized by a shared library and daemon components communicating over DBUS. At first glance it does not seem to change anything, so I was rather disappointed. But then I heard rumors that Ubuntu was actually making these mounts available under a special mount point in ~/.gvfs.
My Debian GNOME installation was not.

So I investigated a bit and found evidence of a daemon called gvfs-fuse-daemon, which eventually handles that. After that I figured out that this daemon lives in a package called gvfs-fuse and learned that installing it and restarting my GNOME session is actually all that needs to be done.
Now getting shell access to my GNOME "Connect to server" mounts is actually possible, which makes these mounts really useful after all. The only thing left to find out is whether e.g. the video player example now works from Nautilus. But if it doesn't, I'm still able to use it via a shell.

The solution is quite obvious on the one hand, but totally non-obvious on the other.

A common user will probably not find that solution without aid. After all, the package name does not really suggest what the package is used for, since it refers to technologies instead of the problem it solves. Which is understandable. What I don't understand is why this package is not a dependency of the gnome meta package. But I haven't asked the maintainer yet, so I cannot really blame anybody.

However: Now GVFS is actually useful.

16 December 2011

Patrick Schoenfeld: Why Gnome3 sucks (for me)

When I started using Linux, I started with a desktop environment (KDE) and then tried a lot of (standalone) window managers, including but not limited to Enlightenment, Blackbox, Fluxbox and Sawfish. But I was never really satisfied as it felt as if something was missing.
So it came that I became a user of a desktop environment again. By now I have been a GNOME user for at least five years.

Among the users of desktop environments, I'm probably not a typical user. In 2009 my setup drifted from a more or less standard GNOME 2.3 to a combination of GNOME and a tiling window manager, which I called Gnomad, as a logical continuation of something I've done for a long time since using computers: simplifying tasks which are not my main business.
I just didn't want to care about the hundred techniques to auto-mount a USB stick or similar tasks, which are handled just fine by a common desktop environment. And I didn't want to care about arranging windows, because after all the arrangement of my windows was always more or less the same.

But there were rumors that GNOME3 significantly changed the user experience, and I wanted to give it a try at some point in the future. This try was forced by the latest updates in Debian unstable, so I tested it for some days.

Day 1: Getting to know each other
My first day with GNOME3 was a non-working day. When I'm at home I'm mostly using my computer for some chatting and surfing the web, so I don't have great demands on the window manager/desktop environment.
Accordingly the very first experience with GNOME3 was mostly a good one, except for some minor issues.
The first thing to notice in a positive way is the activities screen. I guess this one is inspired by Mac's Exposé, but it's nevertheless a nice thing, as it provides an overview of opened applications.
Apart from that, it's possible to launch applications from there. The classical application menu is gone, but this one is better. One can either choose with the mouse or start typing the application's name and it will incrementally search for it and show it immediately. Hitting Enter is enough to launch the application.
Additionally, on the left, there is a launcher for your favorite applications.

This one led to the first question mark above my head.

I had opened a terminal via this launcher and now wanted to open another terminal, after I switched to a different workspace.
So I just clicked it again and had to notice that the GNOME developers and I have a different perception of what's intuitive, because that click led me back to the terminal on the first workspace. It took me some minutes to realize how I'm able to start a second terminal: by right-clicking on the icon and clicking on Open new window or similar.

Day 2: Doing productive work
The next day was a work day and I was at a customer appointment to do support/maintenance tasks. On these appointments my notebook is not my primary work machine, so I could softly transition to using GNOME3 for productive work.
I can say that it worked, although I soon started to miss some keystrokes I'm used to, like switching workspaces with Meta4+Number or at least switching workspaces by cycling through them with Ctrl+Alt+Left/Right arrow keys. While the first is a shortcut specific to my GNOmad setup, the latter is something I knew from the good old Gnome2 days.
It just vanished from the default keybindings and did nothing. Apparently, as I learned afterwards, it has been decided to use the Up/Down arrow keys instead.

While for new users this will not be a problem at all, this is really hard for someone using GNOME for about 5 years as these are keystrokes one is really used to.

Day 3: Going multihead

The appointment ended on the third day in the afternoon, so when I came back to the office, I had the chance to test the whole thing in my usual work environment. At the office I have my notebook attached to a docking station which has a monitor attached to it. So usually I work in dual-head mode, with my primary work screen being the bigger external screen.

That was the point, where GNOME3 became painful.

At first everything was fine. GNOME3 detected the second monitor and made it available for use with the correct resolution. But things started to become ugly when I actually wanted to work with it. GNOME3 decided that the internal screen is the primary screen, so the panel (or what is left of it) was on that screen. I can live with that, as that's basically the same with GNOME2, but the question was: how to start an application in a way that it's started on the big screen?
I knew that I couldn't just use the keystrokes I'm used to, like Meta4+p, which was bound to launching dmenu in my GNOmad setup, as I knew that I was not running GNOmad at present. So I thought hard and remembered that GNOME has a run dialog itself, bound to Alt+F2. Relieved, I noticed that this shortcut had not gone away. I typed 'chromium' and waited. A message appeared telling me that the file was not found. Okay. No, wait. What? I did not uninstall it, so I guess it should be there.
I tried several other applications and all were said to be unavailable. Most likely this is a bug, and bugs happen, but this was really serious for me.

Another approach was to use the activity screen. At first I used it manually, by moving the mouse there, launching chromium (surprise, surprise, it was still there) and moving it to the right screen, because I hadn't found a shorter way to do that. There must be a better way to do that, I thought, and so I googled. Actually there is more than one better way to do it.

  1. There is a hidden hot spot in the corner of the second screen, too. If one finds it and moves the mouse over it, the activity screen will open on the primary monitor and on the secondary monitor, but the application list is only on the first. One can now type what one wants to start, hit Enter and tada, it's on the screen where my mouse is. Not very intuitive, in my opinion, and I really would prefer to have the same level of choice on the second screen.
  2. I can hit Meta4 and it opens the activity screen. From there everything is the same as described above.

There were many other small quirks that disturbed me, like the desktop having vanished (I used it seldom, but it was irritating that it wasn't there anymore), shortcuts I was missing and so on. A lot of this is really specific to me being used to my previous setup, but I can't help myself: I really need those little helpers.

So, at some point I decided to go back to GNOmad again, knowing that I would run into the next problem, because I would have to permanently disable the new gnome3 mode and instead launch GNOME in the fallback mode. Luckily that is as easy as typing the following in a terminal:

gsettings set org.gnome.desktop.session session-name 'gnome-fallback'

I quickly got this working again, but had to notice another cruel thing in GNOME3 that even disturbed my GNOmad experience: GNOME3 now binds Meta4+p to a function which switches the internal/external monitor setting, and that is a real PITA.

From this point on another journey began, one that eventually ended in a switch to a Gnome/Awesome setup, but this is a different story for a different time.

14 December 2011

Patrick Schoenfeld: Facebook aggressively advertising dubious features

Yesterday some confusion arose on my side when I saw a new facebook advertising campaign in my facebook account (yes, I am a member of facebook, although I'm aware of the privacy concerns). Basically it was saying that I should try the friends finder and that some of my friends (showing and naming three of them) had already used it.

Some background:
The friends finder feature of facebook is a feature that asks for the password of your e-mail account. It will then crawl through your emails to find contacts that might already be on facebook but not connected to you (in a facebook sense).

My first feeling was: oh my god. How can it be that friends (and family) of mine are so naive? Especially since there were people included whom I consider to be quite clever. But honestly: who would be so naive as to give an unknown company direct, unsupervised (you can't tell what they really do) access to their mail account? Would you give it to your friend? Your husband? Your father? I guess the answer will be "No" in most cases, and these people are most likely people you trust. Well, I know, you could state similar things about Googlemail, who crawl your mails to show you personalized advertising. And in fact you are right. But if one decides to use gmail for email hosting, one has to trust them anyway. Like you have to trust anybody else you retain to host your mails (who could do the same but just not tell you). But in this case it's a third party, Facebook.

But if you now think that this becomes a rant against my friends: you are wrong. Over the day I found out that it basically shows all my friends in rotation. Every time I open the start page it randomly picks three people from my friend list and shows them to me, telling the same lie to me all over again.

So what do we have here? Facebook tries to advertise the most dubious feature they have in the most aggressive way one could imagine, by pretending wrong facts. Wouldn't that even be an element of crime in Germany ("Irreführung", misleading advertising, § 5 UWG, or maybe also § 4 UWG, "unsachliche Beeinflussung", undue influence)?

Facebook, you can do better.

Update: Stefano raised a good point. I didn't actually make clear that it has been verified that those people actually did not use the "feature":

1. I asked some of them. They said they didn't use it and were in fact surprised that I asked.
2. Others told me they saw that ad stating that I used the feature. I definitely did not use this "feature".
3. Probably weak: IMHO it's highly unlikely that all of my friends used that feature, and by the time I wrote this it had already shown my whole friend list.

Patrick Schoenfeld: Ubuntu considering critical bugs an "invalid" bug?

I just discovered that bug report over at Ubuntu.
Short summary:
They have a script in upstart which is not meant to be run manually and if you do it will erase your whole file system. Additionally it seems that the fact that you shall not run that script is not communicated anywhere.

That alone isn't the most spectacular thing about it. Bugs happen. What's spectacular is how a Canonical employee and member of the TechBoard (for people who don't know it: the people who decide about the technical direction Ubuntu takes) handled that bug. One quote from him says it all:
Sorry, the only response here is "Don't Do That Then"

So what we have here is a classical case of bad programming. The problem in question is that the script expects a certain environment variable to be set. Fair enough. However, it does not check whether it's set at all, and instead of failing or using a sensible default it simply sticks to undefined behaviour. What we have here is a classical programming mistake every beginner tends to make. People who start programming often forget (or don't know) that every external value we rely on must be considered untrustworthy. Therefore a good practice is to check those values. In this case someone decided that this is useless, because they suffer from the wrong assumption that nobody ever calls the script manually and the other wrong assumption that the caller of the script will always set the environment variable correctly. This is a double fail.
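Just to illustrate the point: the defensive check argued for here is tiny. A hypothetical sketch (the variable name is made up, and the same shape works in any language, shell scripts included):

# sketch: never trust an externally supplied value
target = ENV['TARGET_DIR']
if target.nil? || target.strip.empty?
  abort "TARGET_DIR is not set -- refusing to guess a path to operate on"
end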
Now the developer in question does not accept that (someone else indicated why the behaviour of the script is dangerous); he simply says that the bug is invalid. That's really a pity.

Patrick Schoenfeld: Struggling with Advanced Format during a LVM to RAID migration

Recently I decided to invest in another harddisk for my atom system. That system, which I built almost two years ago, has become the central system in my home network, serving as a fileserver to host my personal data, some git repositories etc., as a streaming server, and, since I switched to a cable internet connection, also as a router/firewall. Originally I bought the disk to back up some data of the systems in the network, but I realized that all data on this system was hosted on a single 320GB 2.5" disk, and it became clear to me that, in the absence of a proper backup strategy, I should at least provide some redundancy. So I decided, once the disk was in place, that the whole system should move to a RAID1 over the two disks. Basically this is not as hard as it may seem at first glance, but I had some problems due to a new sector size in some recent harddisks, which is called Advanced Format. But let's begin from the start. The basic idea of such a migration is:
  1. Install mdadm with apt-get. Make sure to answer 'all' to the question which devices need to be activated in order to boot the system.

  2. Partition the new disk (almost) identically. Because the new drive is somewhat bigger, a fully identical layout wouldn't make sense, but at least the two partitions which should be mirrored onto the second disk need to be identical. Usually this is easily achieved by using
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    In this case, it wasn't that easy. But I will come to that in a minute.

  3. Change the type of the partitions to 'FD' (Linux RAID autodetect) with fdisk

  4. Erase evidence of a possible old RAID from the partitions, which is probably pointless on a brand-new disk, but we want to be sure:
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdb2
  5. Create two DEGRADED raid1 arrays from the partitions:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing
  6. Create filesystem on the first raid device, which will become /boot.

  7. Mount that filesystem somewhere temporary and move the contents of /boot to it:
    mount /dev/md0 /mnt/somewhere
  8. Unmount /boot, edit fstab to mount /boot from /dev/md0 and re-mount /boot (from md0)

  9. Create mdadm configuration with mdadm and append it to /etc/mdadm/mdadm.conf:
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
  10. Update the initramfs and grub (no manual modification needed with grub2 on my system) and install grub into the MBR of the second disk.
    update-initramfs -u
    update-grub
    grub-install /dev/sdb
  11. The first time to pray: reboot the system to verify it can boot from the new /boot.

  12. Create a physical volume on /dev/md1:
    pvcreate /dev/md1
  13. Extend the volume group to contain that device:
    vgextend <volgroup_name> /dev/md1
  14. Move the whole volume group physically from the first disk to the degraded RAID:
    pvmove /dev/sda2 /dev/md1
    (Wait for it to complete... takes some time ;)

  15. Reduce first disk from the VG:
    vgreduce <volgroup_name> /dev/sda2
  16. Prepare it for addition to the RAID (see step 3 and 4) and add it:
    mdadm --add /dev/md0 /dev/sda1
    mdadm --add /dev/md1 /dev/sda2
  17. Hooray! Watch /proc/mdstat. You should see that the RAID is recovering.

  18. When recovery is finished, pray another time and hope that the system still boots, now running entirely from the RAID. If it does: finished :-)
Now to the problem with the Advanced Format: there is some action taking place among the hardware vendors to move to a new sector size. Physically my new device has a size of 4096 bytes per sector, somewhat different from the 512 bytes disks used to have for the last decade. Logically it still has 512 bytes per sector. As far as I understand, this is achieved by placing 8 logical sectors into one physical sector, so when partitioning a new disk the alignment has to be such that partitions start at a (logical) sector number which is a multiple of 8. That, obviously, wasn't the case with the old partitioning on my first disk. So I had to create the partitions by specifying start points manually and making sure they are divisible by 8. Otherwise fdisk would complain about the layout on the disk. This does not work with cfdisk, because it does not accept manual alignment parameters and unfortunately the partitions it creates have a wrong alignment. So: good old fdisk, and some calculations of how many sectors are needed and where to start, to the rescue. The layout is now:

Device Boot Start End Blocks Id System
/dev/sdb1 2048 291154 144553+ fd Linux raid autodetect
/dev/sdb2 291160 625139334 312424087+ fd Linux raid autodetect
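As a quick sanity check of the alignment rule mentioned above (a throwaway sketch, nothing more): a partition is aligned to the 4096-byte physical sectors if its start, counted in 512-byte logical sectors, is a multiple of 8.

# start sectors taken from the partition table above
[2048, 291160].each do |start_sector|
  aligned = (start_sector % 8).zero?
  puts "start sector #{start_sector}: #{aligned ? 'aligned' : 'NOT aligned'}"
end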

10 December 2011

Patrick Schoenfeld: Migrating from blogspot to Movable Type

A while ago I decided to migrate my existing blogspot blog to my own domain and webspace again. My reasoning was mostly that blogspot lacked some features which I'd like to have in my blog.
Additionally, my requirements have changed a bit since I originally moved to blogspot and, last but not least, Blogspot was a compromise anyway.

So I started re-evaluating a possible software platform for my blog. In my numerous previous attempts to start blogging (there have been several blogs of mine on the internet since at least 2006), before I moved to Blogspot, I used Wordpress. But there were quite some reasons against it, one of the biggest concerns being its security history. Also, while I worked a lot with PHP in the past years, I have developed a serious antipathy against software written in PHP, which I couldn't just ignore.

In the end, the decision fell on Movable Type, because it's written in Perl, which is the language I prefer for most of my projects, because its features were (mostly) matching my wishes, and because I heard some good opinions about it. Also it is used by my employer for our company blog.

So the next question was: How to migrate?

I decided to use Movable Type 5, although, at present, it does not seem to be the community's choice. At least the list of plugins supporting MT5 is really short. Foremost, there was no plugin to import blogger posts, which, after all, was the most important thing about the migration.
Luckily there is such a plugin for Movable Type and so I basically did the following:

  1. Install Movable Type 4
  2. Install the Blogger Import Plugin
  3. Import posts (it supports either the export file of blogger or directly importing posts via the Google API)
  4. Upgrade to Movable Type 5
  5. Check the result
Check the results, or: The missing parts

Obviously such an import is not perfect. Some posts contain images or in-site links. The importer is not able to detect that, and honestly it would have a hard time tracking that anyway.
So as soon as the content is migrated, it's time to look for the missing parts.

The process to find missing parts is basically very easy and common among the various missing parts:
Just search for your blogspot URL via the Search & Replace option in the Movable Type administration.

Now how to fix that? For links it's quite easy (although I forgot about them in the first run), as long as the permalinks have kept the same scheme. In my case they have, since I decided to use the Preferred Archive option "Entry" in the blog settings for the new blog and the default (if there is an option, because I don't know) in Blogspot. The importer does import the Basename of the document, so fixing links is just a matter of replacing the domain part of the URL.

For images it's some more work. One has to get the images somehow and upload them in Movable Type. Eventually it then boils down to search and replace, but I decided to do that manually, since I only have a very low number of images in my posts so far.

After that I did everything else which is not specific to the migration, like picking a template, modifying it to my wishes, considering the addition of plugins etc.
And here we are. There were some issues during the migration, which I haven't handled here. I will blog about them another time.

Patrick Schoenfeld: On Debian discussions

In my article "Last time I've used network-manager" I made a claim for which I've been criticized by some people, including Stefano, our current (and just re-elected) DPL. I said that a certain pattern, which showed up in a certain thread, were a prototype for discussions in the Debian surroundings.

Actually I have to admit that this was a very generalizing statement, making my own point against the discussion point back directly at myself.
Because, as Stefano correctly said, there has been some progress in the Debian discussion culture.
Indeed, there are examples of threads where discussions followed another scheme.
But in my own defence I have to say that such changes are like little plants (in the botanical sense). They take their time to grow and, as long as they are so very new, they are very vulnerable to all small interruptions, regardless of how tiny those interruptions may seem.

I've been following Debian discussions for 6 or 7 years. The scheme I was describing was the one with the most visibility of all Debian discussions. Almost every discussion which was important for a broader audience followed that scheme. It has a reason that Debian is famous for flamewars.
In a way it's quite similar to the perception some people have of network-manager. Negative impressions manifest themselves, especially if they have years of time.
Positive impressions do not have a chance to manifest themselves as long as the progress is not visible enough to survive small interruptions.

I hope that I didn't cause too much damage with my comment, which got cited (without context) on other sites. Hopefully the Debian discussion culture will improve further, to a point where there is no difference between the examples of very good, constructive discussions we already have in some parts of the project and the project-wide decision-making discussions which affect a broad audience and often lead to flamewars.

Patrick Schoenfeld: PHP and big numbers

One would expect that one of the most used script languages of the world would be able to do proper comparisons of numbers, even big numbers, right? Well, PHP is not such a language, at least not on 32bit systems. Given a script like this:
<?
$t1 = "1244431010010381771";
$t2 = "1244431010010381772";
if ($t1 == $t2)
print "equal\n";
?>

A current PHP version will output:

schoenfeld@homer ~ % php5 test.php
equal

It will do the right thing on 64bit systems (not claiming that the numbers are equal). Interestingly enough, a type-strict equality check (see my article from a few years ago) will not claim that the two numbers are equal.

Patrick Schoenfeld: The caravan is moving on..

I once moved my blog to blogspot because I was too lazy to handle all the hard stuff about blogging (finding, setting up and maintaining blog software) myself. It was a compromise back then, because blogspot was missing features I'd like to have from blog software.
Recently the compromise slowly stopped satisfying me, and so I planned and executed a migration to my own domain, powered by Movable Type.

Because I was always blogging about more technical and Open Source-community related things as well as private stuff, politics and anything else I was interested in, and because these subject areas are not really related to each other, I decided to split the blog in two.

So in the future you will find this blog split into two places:

I've tried to migrate everything that has been on blogspot to these sites, including comments. So if you happen to find your comments there: do not wonder.

The blogspot blog will stay around, maybe forever, maybe only for a certain time, to let search engine visitors still find stuff. But this is the last entry I'll publish here.

So update your bookmarks and stay tuned :)

3 December 2011

Patrick Schoenfeld: Let me introduce DPKG::Log and dpkg-report

We have customers who require a report about what we've done during maintenance windows. Usually this includes a report about upgrades, newly installed packages etc. and obviously everything we've done apart from that.
Till now we've prepared them manually. For a larger bunch of systems this is a big PITA, because to be somehow useful you have to collect the data for all systems and, after that, prepare a report where you have:

It's also error-prone because humans make mistakes.

Perl to the rescue!
At least the part about generating a report about installed/upgraded/removed packages could be automated, because dpkg writes a well-formed logfile to /var/log/dpkg.log. But I noticed that there apparently is no library specialised in parsing that file. It's not a big deal, because the format of that file is really simple, but a proper library would be nice anyway.
And so I wrote such a library.
It basically takes a logfile, reads it line by line and stores each line parameterized into a generic object.

Its features include:

Based on that, I wrote another library DPKG::Log::Analyse, which takes a log, parses it with DPKG::Log and then extracts the more relevant information such as installed packages, upgraded packages, removed packages etc.

This, in turn, features:
- Info about newly installed packages
- Info about removed packages
- Info about upgraded packages
- Info about packages which got installed and removed again
- Info about packages which stayed half-installed or half-configured at the end of the logfile (or the defined reporting period)


These libraries are already uploaded to CPAN and packaged for Debian.
They passed the NEW queue very quickly and are therefore available for Sid:

http://packages.debian.org/sid/libdpkg-log-perl

As an example use (and for my own use case, as stated above), I wrote dpkg-report, which uses the module and a Template::Toolkit based template to generate a report about what happened in the given logfile.
It currently lacks some documentation, but it works roughly like this:

Report for a single host over the full log:
dpkg-report

Report for a single host for the last two days:
dpkg-report --last-two-days

Report for multiple hosts (and logfiles):
The script expects each log file to be named <systemname>.dpkg.log, so that it can guess the hostname from the file name, and it can grab all such log files from a directory if a directory is specified as the log-file argument:

dpkg-report --log-file /path/to/logs

This will generate a report about all systems without any merging.

Report for multiple hosts with merging:
dpkg-report --log-file /path/to/logs --merge

This will do the following:
A (fictive) report could look somewhat like this:

dpkg-Report for all:
------------------------
Newly Installed:
abc
Removed:
foo
Upgraded:
bar (0.99-12 -> 1.00-1)
dpkg-Report for test*:
------------------------
Newly Installed:
blub
dpkg-Report for test1:
------------------------
Upgraded:
baz (1.0.0-1 -> 1.0.0-2)
dpkg-Report for test2:
------------------------
Upgraded:
zab (0.9.7 -> 0.9.8)

Currently this report generator is only included in the source package or in the git(hub) repository of the library. I wonder if it makes sense to let the source package build another binary package for it.
But it's only a 238-line perl script with a dependency on the perl library, so I'm unsure whether it warrants a new binary package. What do others think?

