Search Results: "Francois Marier"

18 October 2010

Francois Marier: Manipulating debconf settings on the command line

It's not very easy to find information on how to adjust debconf settings after a package has been installed and configured. Most of the information out there is for Debian developers wanting to add support for debconf in their maintainer scripts.

I ran into the problem of being unable to change a package's configuration options through dpkg-reconfigure and I found the following commands to do it manually:

debconf-show packagename

to show the list of debconf values that a package has stored,

echo "get packagename/pgsql/app-pass" debconf-communicate

to query the current value of an option in the debconf database, and

echo "set packagename/pgsql/app-pass password1" debconf-communicate

to change that value.

I'm not convinced that this is the easiest way for system administrators to manually look up and modify debconf options, but it's the best I could find at the time.
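One refinement: debconf-communicate reads commands on its standard input, one per line, so the get and set can be combined into a single call. A minimal sketch, using the same placeholder package and key names as above:

printf 'get packagename/pgsql/app-pass\nset packagename/pgsql/app-pass password1\n' | debconf-communicate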

12 September 2010

Francois Marier: Setting up your own DNSSEC-aware resolver using Unbound

Now that the root DNS servers are signed, I thought it was time I started using DNSSEC on my own PC. However, not wanting to wait for my ISP to enable it, I decided to set up a private recursive DNS resolver for myself using Unbound.

Installing Unbound

Being already packaged in Debian and Ubuntu, unbound is only an apt-get away:
apt-get install unbound
though if you are running lenny, I suggest you grab the latest backport.

Once unbound is installed, follow these instructions to enable DNSSEC.

Optional settings

In my /etc/unbound/unbound.conf, I enabled the following security options:
harden-referral-path: yes
use-caps-for-id: yes
and turned on prefetching to hopefully keep in cache the sites I visit regularly:
prefetch: yes
prefetch-key: yes
Finally, I also enabled statistics:
extended-statistics: yes
control-enable: yes
control-interface: 127.0.0.1
and ran sudo unbound-control-setup to generate the necessary keys.
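Taken together, the relevant parts of unbound.conf would look roughly like this (the settings above belong in the server: clause, while the control settings live in their own remote-control: clause):

server:
    harden-referral-path: yes
    use-caps-for-id: yes
    prefetch: yes
    prefetch-key: yes
    extended-statistics: yes

remote-control:
    control-enable: yes
    control-interface: 127.0.0.1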

Once unbound is restarted (sudo /etc/init.d/unbound restart), stats can be queried to make sure that the DNS resolver is working:
unbound-control stats

Overriding DHCP settings

In order to use my own unbound server for DNS lookups and not the one received via DHCP, I added this line to /etc/dhcp/dhclient.conf:
supersede domain-name-servers 127.0.0.1;
and restarted dhclient:
sudo killall dhclient
sudo /etc/init.d/network-manager restart
If you're not using DHCP, then you simply need to put this in your /etc/resolv.conf:
nameserver 127.0.0.1

Testing DNSSEC resolution

Once everything is configured properly, the best way I found to test that this setup was actually working is to visit a DNSSEC test site in a web browser, or to use dig:
$ dig +dnssec A www.dnssec.cz | grep ad
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 3, ADDITIONAL: 1
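Another quick check is unbound-host (packaged separately from unbound on Debian), which in verbose mode reports the validation state of each answer; a correctly signed domain should show up as (secure):

unbound-host -v www.dnssec.cz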

Are there any other ways of making sure that DNSSEC is fully functional?

24 August 2010

Francois Marier: Combining multiple commits into one using git rebase

git rebase provides a simple way of combining multiple commits into a single one. However, using rebase to squash an entire branch down to a single commit is not completely straightforward.

Squashing normal commits

Using the following repository:
$ git log --oneline
c172641 Fix second file
24f5ad2 Another file
97c9d7d Add first file
we can combine the last two commits (c172641 and 24f5ad2) by rebasing up to the first commit:
$ git rebase -i 97c9d7d
and specify the following commands in the interactive rebase screen:
pick 24f5ad2 Another file
squash c172641 Fix second file
which will rewrite the history into this:
$ git log --oneline
1a9d5e4 Another file
97c9d7d Add first file

Rebasing the initial commit

Trying to include the initial commit in the interactive rebase screen will return this error:
$ git rebase -i 97c9d7d^
fatal: Needed a single revision
Invalid base
and trying to squash the first commit in the interactive rebase screen:
$ git rebase -i 97c9d7d

squash 24f5ad2 Another file
squash c172641 Fix second file
will return this error:
Cannot 'squash' without a previous commit
So we need to use a different approach to deal with the initial commit.

Amending the initial commit

Here is an alternative to rebase which will work on commits that don't have a parent.

Taking the previously rebased branch:
$ git log --oneline
1a9d5e4 Another file
97c9d7d Add first file
we can rewind the branch to the initial commit:
$ git reset 97c9d7d
$ git log --oneline
97c9d7d Add first file
without losing any of the changes introduced in 1a9d5e4 (shown here as uncommitted changes):
$ git status
# On branch master
# Changed but not updated:
# (use "git add ..." to update what will be committed)
# (use "git checkout -- ..." to discard changes in working directory)
#
# modified: file1
#
# Untracked files:
# (use "git add ..." to include in what will be committed)
#
# file2
no changes added to commit (use "git add" and/or "git commit -a")
Then we can reopen commit 97c9d7d and add the changes present in the working directory:
$ git add .
$ git commit -a --amend -m "Initial version"
which will finally give us a fully squashed branch:
$ git log --oneline
fcb85fb Initial version

$ git status
# On branch master
nothing to commit (working directory clean)
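For reference, the whole squash-to-one-commit recipe from this section boils down to three commands (the commit ID is the one from the example above):

$ git reset 97c9d7d
$ git add .
$ git commit -a --amend -m "Initial version"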

20 July 2010

Francois Marier: Cherry-picking a range of git commits

The cherry-pick command in git allows you to copy commits from one branch to another, one commit at a time. In order to copy more than one commit at once, you need a different approach.

Cherry-picking a single commit

Say we have the following repository composed of three branches (master, feature1 and stable):

$ git tree --all
* d9484311 (HEAD, master) Delete test file
* 4d4a0da8 Add a test file
* 5753515c (stable) Add a license
* 4b95278e Add readme file
/
* a37658bd (feature1) Add fourth file
* a7785c10 Add lines to 3rd file
* 7f545188 Add third file
* 2bca593b Add line to second file
* 0c13e436 Add second file
/
* d3199755 Add a line
* b58d925c Initial commit

The "git tree" command is an alias I defined in my ~/.gitconfig:

[alias]
tree = log --oneline --decorate --graph

To copy the license file (commit 5753515c) to the master branch, we simply need to run:

$ git checkout master
$ git cherry-pick 5753515c
Finished one cherry-pick.
[master 08ff7d4] Add a license
1 files changed, 676 insertions(+), 0 deletions(-)
create mode 100644 COPYING

and the repository now looks like this:

$ git tree --all
* 08ff7d4a (HEAD, master) Add a license
* d9484311 Delete test file
* 4d4a0da8 Add a test file
* 5753515c (stable) Add a license
* 4b95278e Add readme file
/
* a37658bd (feature1) Add fourth file
* a7785c10 Add lines to 3rd file
* 7f545188 Add third file
* 2bca593b Add line to second file
* 0c13e436 Add second file
/
* d3199755 Add a line
* b58d925c Initial commit

Cherry-picking a range of commits
In order to only take the third file (commits a7785c10 and 7f545188) from the feature1 branch and add it to the stable branch, I could cherry-pick each commit separately, but there is a faster way if you need to cherry-pick a large range of commits.

First of all, let's create a new branch which ends on the last commit we want to cherry-pick:

$ git branch tempbranch a7785c10
$ git tree --all
* 08ff7d4a (HEAD, master) Add a license
* d9484311 Delete test file
* 4d4a0da8 Add a test file
* 5753515c (stable) Add a license
* 4b95278e Add readme file
/
* a37658bd (feature1) Add fourth file
* a7785c10 (tempbranch) Add lines to 3rd file
* 7f545188 Add third file
* 2bca593b Add line to second file
* 0c13e436 Add second file
/
* d3199755 Add a line
* b58d925c Initial commit

Now we'll rebase that temporary branch on top of the stable branch:

$ git rebase --onto stable 7f545188^ tempbranch
First, rewinding head to replay your work on top of it...
Applying: Add third file
Applying: Add lines to 3rd file
$ git tree --all
* ec488677 (HEAD, tempbranch) Add lines to 3rd file
* a85e5281 Add third file
* 5753515c (stable) Add a license
* 4b95278e Add readme file
* 08ff7d4a (master) Add a license
* d9484311 Delete test file
* 4d4a0da8 Add a test file
/
* a37658bd (feature1) Add fourth file
* a7785c10 Add lines to 3rd file
* 7f545188 Add third file
* 2bca593b Add line to second file
* 0c13e436 Add second file
/
* d3199755 Add a line
* b58d925c Initial commit

All that's left to do is to make stable point to the top commit of tempbranch and delete the old branch:

$ git checkout stable
Switched to branch 'stable'
$ git reset --hard tempbranch
HEAD is now at ec48867 Add lines to 3rd file
$ git tree --all
* ec488677 (HEAD, tempbranch, stable) Add lines to 3rd file
* a85e5281 Add third file
* 5753515c Add a license
* 4b95278e Add readme file
* 08ff7d4a (master) Add a license
* d9484311 Delete test file
* 4d4a0da8 Add a test file
/
* a37658bd (feature1) Add fourth file
* a7785c10 Add lines to 3rd file
* 7f545188 Add third file
* 2bca593b Add line to second file
* 0c13e436 Add second file
/
* d3199755 Add a line
* b58d925c Initial commit
$ git branch -d tempbranch
Deleted branch tempbranch (was ec48867).

It would be nice to be able to do it without having to use a temporary branch, but it still beats cherry-picking everything manually.

Another approach

Another way to achieve this is to use the format-patch command to output patches for the commits you are interested in copying, and then to use the am command to apply them all to the target branch (after checking it out):

$ git format-patch 7f545188^..a7785c10
0001-Add-third-file.patch
0002-Add-lines-to-3rd-file.patch
$ git am *.patch
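To avoid littering the working tree with patch files, they can also be written to a temporary directory first (the directory name here is arbitrary):

$ git checkout stable
$ git format-patch -o /tmp/patches 7f545188^..a7785c10
$ git am /tmp/patches/*.patch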

Update: looking forward to git 1.7.2

According to a few people who were nice enough to point this out in a comment, version 1.7.2 of git, which is going to be released soon, will support this directly in cherry-pick:

git cherry-pick 7f545188^..a7785c10

9 July 2010

Francois Marier: Restoring lost RT tickets

Request Tracker (also known as RT), the de facto Open Source ticket tracking software, has a special status value to get rid of spam tickets: deleted. It does exactly what it's supposed to do, that is, get rid of these tickets quickly.

Unfortunately, it's very easy to accidentally delete a bunch of non-spam tickets via the bulk update interface. Restoring these deleted tickets is not straightforward.

Good news: none of the lost tickets were removed from the database.
Bad news: you can't search for deleted tickets.

Finding the lost tickets

If you have a copy of the page before you did the bulk update (perhaps in your browser cache?) then you've got all you need. Otherwise, you'll need to access the database directly to get a list of ticket IDs.

First of all, find the user ID of the person who marked all of these tickets as deleted (42 in this example), then issue the following query:
SELECT id, lastupdated, subject
  FROM tickets
 WHERE status = 'deleted'
   AND lastupdatedby = 42
 ORDER BY lastupdated;
You may want to filter some more using the lastupdated timestamp if you know roughly when the deletion occurred.
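For example, assuming the accidental bulk update happened within a known one-hour window (the timestamps below are purely illustrative):

SELECT id, lastupdated, subject
  FROM tickets
 WHERE status = 'deleted'
   AND lastupdatedby = 42
   AND lastupdated BETWEEN '2010-07-09 14:00' AND '2010-07-09 15:00'
 ORDER BY lastupdated;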

Restoring deleted tickets

Since all of the tickets are still in the database, all you need to do is to go to each of them directly (e.g. http://www.example.com/Ticket/Display.html?id=12345 for Ticket #12345) and change the status back to "open".

Francois Marier: Improving the performance of Request Tracker by reducing latency

Request Tracker is a really neat support tool, but one of the common complaints I heard from people using it during a previous project was that it was pretty slow.

There wasn't much we could do about the (overloaded) server it was running on, but I found that enabling mod_deflate really helped.

After watching this great video though, I was inspired to look into it a bit more, focussing this time on latency.

Description of tests
Note that I was looking for the "best case" for each of the different configurations, and so each screenshot was taken after reloading the homepage 10-20 times to maximize cache hits (thanks in large part to mod_expires).

(Is there a nice automated way of measuring average latency?)

Stock RT 3.6

Using the default apache2-modperl2 config file (as supplied by RT), here's what the homepage (logged in as root) looked like before I changed anything:



The purple section here indicates the time spent waiting for the server. This shows that the server (running Mason inside mod_perl) is doing quite a bit of processing, including a lot more work than you'd expect while serving static files. It's quite impressive to see how fast the images are being served (directly by Apache) in comparison with the Javascript and the CSS files (which go through Mason).

The reason why Javascript and CSS files have to be served by mod_perl is that they are in fact templates. They contain a few Mason variables which must be substituted before being served.

Looking into it further though, all of these replacements have to do with variables defined in RT_SiteConfig.pm (mostly the install path). For example, here is one such line as rendered with an empty install path:
var path = "" ? "" : "/";
and here it is with RT installed under /rt:
var path = "/rt" ? "/rt" : "/";
So as long as these paths don't change, there is no need to re-generate these files.

Static Javascript and CSS

This next diagram was produced after configuring Apache to serve all Javascript and CSS files directly:



The way I did that (without modifying any of the original files) was by saving the Javascript/CSS sent to the browser and using mod_rewrite rules to serve these files instead of the original templated ones:

# Serve static files directly
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/js/ahah.js$ /var/www/rt/ahah.js
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/js/cascaded.js$ /var/www/rt/cascaded.js
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/js/class.js$ /var/www/rt/class.js
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/js/combobox.js$ /var/www/rt/combobox.js
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/js/list.js$ /var/www/rt/list.js
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/js/titlebox-state.js$ /var/www/rt/titlebox-state.js
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/js/util.js$ /var/www/rt/util.js
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/css/3.5-default/main-squished.css$ /var/www/rt/main-squished.css
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/css/print.css$ /var/www/rt/print.css
RewriteRule ^/usr/share/request-tracker3.6/html/NoAuth/webrtfm.css$ /var/www/rt/webrtfm.css
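These rules depend on mod_rewrite; assuming a stock Debian Apache install where the module isn't enabled yet, something like this is needed first:

sudo a2enmod rewrite
sudo /etc/init.d/apache2 restart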

Removing unnecessary images

Finally, one thing I noticed from this last graph is that the rounded corners in the theme require a number of small images. While these don't take a whole lot of bandwidth, they do require quite a bit of back and forth between the browser and the server.

So I replaced all of the "rounded corner" images in the main-squished.css file with the following CSS attributes:

-moz-border-radius-topleft: 8px;
-moz-border-radius-topright: 8px;
-moz-border-radius-bottomleft: 8px;
-moz-border-radius-bottomright: 8px;
-webkit-border-top-left-radius: 8px;
-webkit-border-top-right-radius: 8px;
-webkit-border-bottom-left-radius: 8px;
-webkit-border-bottom-right-radius: 8px;

(yes, Internet Explorer users probably don't get the rounded corners... oh well)

This eliminated a number of roundtrips and shaved off a few more milliseconds:



Merging CSS and Javascript files

By this stage, the pages are pretty snappy so there is not much left to be gained, but I figured I'd try to reduce the latency a bit more by combining all Javascript files into one (and doing the same for CSS files, with the exception of print.css). This is what I got:



(Note that I also took the opportunity to minify both squished files to reduce the filesize.)

Not a huge improvement, and I unfortunately had to copy quite a few Mason templates from /usr/share/request-tracker3.6/html/ to /usr/local/share/request-tracker3.6/html/ and then replace all of the script tags with a single one in html/Elements/Header.

Other things to look into

I've stopped here, but there might be ways to further reduce the processing time on the server (and hence the latency) by tuning mod_perl/Mason or Postgres. The RT wiki also has a few pointers.

Replacing Apache with Nginx (which means moving to FastCGI) was something I considered, but after trying it out, it turned out that it would add about 100ms of extra latency.

Feel free to leave a comment if you've found something else that makes a big difference on your site.

4 July 2010

Francois Marier: Querying deleted content in git

If you have removed a file (or part of a file) from git, it's not immediately obvious how to query its history. Here are two ways to deal with deleted content in git.

Commit history of a deleted file

If we take the following two files:
$ ls
file1 file2

and then decide to delete one of them:
$ git rm file2
rm 'file2'
$ git commit -m "Delete a file"
[deletefile 87fadb9] Delete a file
1 files changed, 0 insertions(+), 1 deletions(-)
delete mode 100644 file2

To see the commit history of that file, you can't do it the usual way:
$ git log file2
fatal: ambiguous argument 'file2': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions

Instead, you need to do this:
$ git log -- file2
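Incidentally, to bring the deleted file back, you can check it out from the parent of the commit that deleted it (87fadb9 in the example above):

$ git checkout 87fadb9^ -- file2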

Finding the commit that deleted a line

Finding the commit that deleted a line is slightly more complicated. Unfortunately, we can't really use git blame for that. All we can do with git blame is to find the last commit which contained the deleted line.

So if we add the following file:
$ cat file3
one
two
three
$ git add file3
$ git commit -a -m "Add a third file"
[master e62ace6] Add a third file
1 files changed, 3 insertions(+), 0 deletions(-)
create mode 100644 file3

and remove the second line:
$ cat file3
one
three
$ git commit -a -m "Remove a line"
[removeline f3eb691] Remove a line
1 files changed, 0 insertions(+), 1 deletions(-)

then we can use git blame to see what was the last revision to contain each line:
$ git blame --reverse HEAD^..HEAD file3
f3eb691d (Francois 2010-07-04 1) one
^e62ace6 (Francois 2010-07-04 2) two
f3eb691d (Francois 2010-07-04 3) three

Finding the commit that deleted that line requires using git log to search for the text contained on the deleted line:
$ git log --oneline -S'two' file3
f3eb691 Remove a line
e62ace6 Add a third file
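To see the actual diffs and confirm which commit removed the line, the same pickaxe search can be run with -p:

$ git log -p -S'two' file3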

20 June 2010

Francois Marier: Reducing website bandwidth usage

There are lots of excellent tools to help web developers optimise their websites.

Here are two simple things you have no excuse for overlooking on your next project.
HTML, XML, Javascript and CSS files

One of the easiest ways to speed up a website (often to a surprising degree) is to turn on compression of plaintext content through facilities like mod_deflate or the Gzip Module.

Here's the Apache configuration file I normally use:
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript text/xml application/x-javascript application/javascript
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch "MSIE 6" no-gzip dont-vary
BrowserMatch ^Mozilla/4\.0[678] no-gzip
</IfModule>
Images

As far as images go, tools such as optipng (for PNG files) and jpegoptim (for JPEG files) will reduce file sizes through lossless compression (i.e. with no visual changes at all). (An alternative to optipng is pngcrush.)

Note that the --strip-all argument to jpegoptim will remove any EXIF/comment tags that may be present.
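A minimal usage sketch, run from a directory of images (both tools rewrite the files in place):

optipng *.png
jpegoptim --strip-all *.jpg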

9 June 2010

Francois Marier: Restoring a single table from a Postgres database or backup

After dropping and restoring a very large database a few times just to refresh the data in a single table, I thought that there must be an easier way to do this. I was right: you can restore a single table on its own.

If you are starting with a live database, you can simply use pg_dump to backup only one table:
pg_dump --data-only --table=tablename sourcedb > onetable.pg

which you can then restore in the other database:
psql destdb < onetable.pg

But even if all you've got is a full dump of the source database, you can still restore that single table by simply extracting it out of the large dump first:
pg_restore --data-only --table=tablename fulldump.pg > onetable.pg

before restoring it as shown above, using psql.
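One caveat worth noting: a data-only restore does not remove the rows already in the destination table. Assuming you want to replace the data rather than append to it, empty the table first:

psql destdb -c 'TRUNCATE tablename'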

17 May 2010

Francois Marier: List of Open Source Conference Management Systems

Conference Management systems are web applications designed to make the lives of conference organisers easier. They usually include features such as registration, payment, paper submission and review, scheduling and publishing of announcements.

Since Wikipedia is apparently not a place for such a "spammy list," I figured I should at least post this here:
Name                      Used by                                         License            Programming Language
A Conference Toolkit      YAPC::Eu                                        Artistic License   Perl
ConMan                    Utah Open Source Conference, Texas Linux Fest   GPL                Python (Django)
Open Conference Systems   various academic conferences                    GPL                PHP
OpenConferenceWare        Open Source Bridge                              MIT                Ruby on Rails
Pentabarf                 DebConf, FOSDEM, CCC                            GPL                Ruby on Rails
SCALEreg                  SCALE                                           GPL                Python (Django)
Zookeepr                  linux.conf.au                                   GPL                Python (Pylons)


If you know of any other Open Source systems, please leave a comment!

24 April 2010

Francois Marier: Monitoring a Belkin 600VA UPS with NUT on Debian/Ubuntu

I recently bought a Belkin 600VA UPS (model F6S600auUSB) and here's what I had to do to set up the monitoring and automatic shutdown on Ubuntu 9.10 (karmic). (This procedure should work on recent versions of Debian as well.)

This UPS comes with a proprietary monitoring tool (written in Java) and you can find instructions to get this working on the Ubuntu forums, but I was looking for a free solution that would integrate well with the rest of the system. So after reading this blog post I decided to go with the Network UPS Tools project:

$ apt-get install nut

Once the nut package is installed, I edited /etc/nut/nut.conf to set:

MODE=standalone

and created the following files:

$ vim /etc/nut/ups.conf

[belkinusb]
driver = megatec_usb
port = auto
desc = "Belkin UPS, USB interface"

$ vim /etc/nut/upsd.conf

# MAXAGE 15
# LISTEN 127.0.0.1 3493
# MAXCONN 1024

$ vim /etc/nut/upsd.users

[local_mon]
password = MYPASSWORD
upsmon master

$ vim /etc/nut/upsmon.conf

MONITOR belkinusb@localhost 1 local_mon MYPASSWORD master
POWERDOWNFLAG /etc/killpower
SHUTDOWNCMD "/sbin/shutdown -h +0"

Then all that was left to do was to restart nut:

$ /etc/init.d/nut restart

and check syslog for any errors:

$ tail /var/log/syslog

While nut is running, it will monitor the UPS and report any power problems to syslog. Once the UPS is running on battery, it will make sure the computer is safely shut down before power runs out.
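You can also query the UPS status at any time with upsc, using the name defined in ups.conf:

$ upsc belkinusb@localhost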

Hopefully future versions of GNOME Power Manager will be able to integrate with nut directly and display battery information through its notification icon.

31 March 2010

Francois Marier: "Abusing" git storage

A git repository is primarily intended to store multiple branches of a single program or component. The underlying system, however, is much more flexible. Here are two ways to add files which are related to the project but outside the normal history.
Storing a single file all by itself

This intermediate trick allows you to store a single file into a git repository without the file being in any of the branches.

First of all, let's create a new file:
echo "Hi" > message.txt
and store it in git:
git hash-object -w message.txt
b14df6442ea5a1b382985a6549b85d435376c351
At this stage, the file is stored within the git repository but there is no way to get to it other than using the hash:
git cat-file blob b14df6442ea5a1b382985a6549b85d435376c351
Hi
A good way to point to the stored contents is to use a tag (which also protects the object from being removed by git's garbage collection, since it is now referenced):
git tag message.txt b14df6442ea5a1b382985a6549b85d435376c351
Now you can access your file this way:
git cat-file blob message.txt
Hi
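And should you want the contents back on disk, simply redirect the output:

git cat-file blob message.txt > message.txt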
Creating an empty branch

As seen in this git screencast or in the Git Community Book, you can create a branch without a parent or any initial contents:
git symbolic-ref HEAD refs/heads/foo
rm .git/index
git clean -fdx
To finish it off, add what you were planning on storing there and commit it:
git add myfile
git commit -m 'Initial commit'

9 February 2010

Francois Marier: Excluding files from git archive exports using gitattributes

git archive provides an easy way of producing a tarball directly from a project's git branch.

For example, this is what we use to build the Mahara tarballs:
git archive --format=tar --prefix=mahara-$VERSION/ $RELEASETAG | bzip2 -9 > $CURRENTDIR/mahara-$RELEASE.tar.bz2
If you do this, however, you end up with the entire contents of the git branch, including potentially undesirable files like .gitignore.

There is an easy, though not very well-documented, way of specifying files to exclude from such exports: gitattributes.

This is what the Mahara .gitattributes file looks like:
/test export-ignore
.gitattributes export-ignore
.gitignore export-ignore
With this file in the root directory of our repository, tarballs we generate using git archive no longer contain the selenium tests or the git config files.

If you start playing with this feature however, make sure you commit the .gitattributes file to your repository before running git archive. Otherwise the settings will not be picked up by git archive.
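Once the .gitattributes file is committed, you can double-check which files will end up in the tarball by listing the archive contents without writing anything to disk:

git archive HEAD | tar -tf -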

27 December 2009

Francois Marier: Ignoring files in git repositories

According to the man page, there are three ways to exclude files from being tracked by git.

Shared list of files to ignore

The most well-known way of preventing files from being part of a git branch is to add such files in .gitignore. (This is analogous to CVS' .cvsignore files.)

Here's an example:
*.generated.html
/config.php
The first line of the above ignore list will prevent automatically generated HTML files from being committed by mistake to the repository. Because this is useful to all developers on the project, .gitignore is a good place for this.

The next line prevents the local configuration file from being tracked by git, something else that all developers will want to have.

One thing to note here is the use of a leading slash character with config.php. This is to specifically match the config file in the same directory as the .gitignore file (in this case, the root directory of the repository) but no other. Without this slash, the following files would also be ignored by git:
/app/config.php
/plugins/address/config.php
/module/config.php

Local list (specific to one project)

For those custom files that you don't want version controlled but that others probably don't have or don't want to automatically ignore, git provides a second facility: .git/info/exclude

It works the same way as .gitignore but be aware that this list is only stored locally and only applies to the repository in which it lives.

(I can't think of a good example for when you'd want to use this one because I don't really use it. Feel free to leave a comment if you do use it though, I'm curious to know what others do with it.)

Local list (common to all projects)

Should you wish to automatically ignore file patterns in all of your projects, you will need to use the third gitignore method: core.excludesfile

Put this line in your ~/.gitconfig:
[core]
excludesfile = /home/username/.gitexcludes
(you need to put the absolute path to your home directory, ~/ will not work here unless you use git 1.6.6 or later)

and then put the patterns to ignore in ~/.gitexcludes. For example, this will ignore the automatic backups made by emacs when you save a file:
*~
This is the ideal place to put anything that is generated by your development tools and that doesn't need to appear in your project repositories.
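For example, a ~/.gitexcludes covering a few common editor and build leftovers might look like this (the extra patterns are illustrative suggestions, not from the original post):

*~
*.swp
*.pyc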

20 December 2009

Francois Marier: Debugging logcheck rule files

logcheck is a neat little log file monitoring tool I use on all of my machines.

I recently noticed however that I hadn't received any logcheck messages in a while from one of my servers. Either that was a sign that things were going really well or, more likely, that logcheck wasn't producing any output anymore.

Manually logging an error to syslog

Here's what I did to force a message to be printed to the logs:
logger -p kern.error This is a test
which I would expect to produce this logcheck notice:
Dec 20 15:34:08 hostname username: This is a test
Unfortunately, that didn't happen on the next scheduled run.

Forcing a logcheck run

To rule out a problem with the scheduled cron runs, I ran logcheck manually:
sudo -u logcheck /usr/sbin/logcheck -o -d >& logcheck.out
Looking at the output file however, my test message still wasn't there. Either logcheck was broken or one of my rule files was swallowing everything.

Finding the broken rule file

To find the broken rule, I started by ignoring the rules defined in /etc/logcheck/ignore.d.server/ and /etc/logcheck/ignore.d.workstation/ by running logcheck in paranoid mode:
logger -p kern.error This is a test
sudo -u logcheck /usr/sbin/logcheck -o -d -p >& logcheck.out
This worked, so I then ran logcheck in server mode:
logger -p kern.error This is a test
sudo -u logcheck /usr/sbin/logcheck -o -d -s >& logcheck.out
Given that this also worked, it meant that the offending rule file was in /etc/logcheck/ignore.d.workstation/. So I moved all of my custom local-* rule files out of the way and ran logcheck in workstation mode:
logger -p kern.error This is a test
sudo -u logcheck /usr/sbin/logcheck -o -d -w >& logcheck.out
Once I verified that this worked, I started to put my local files back one by one until it broke again. Then slowly removed lines from the offending file until it worked.

Solution to my problem

It turns out that one of my rule files had a line like this:
path != NULL || column != NULL
Escaping the pipe symbols with backslashes solved the problem:
path != NULL \|\| column != NULL

Maybe I should periodically print a message to syslog to make sure that logcheck is still working...

3 October 2009

Francois Marier: Keeping track of what others are saying about your project

There are a few ways to easily keep track of what others are saying about your Free Software project. In a very popular project, you may already have enough feedback from its busy forum or mailing list, but in new and small projects, you often have to actively seek feedback/comments from end-users.

Here are a few ways to find out what people are saying online about your project.

RSS Feeds

First of all, I subscribe to a few RSS feeds for one of my projects (safe-rm).
Google Alerts

I have also signed up for the following Google Alert:
Search terms: link:safe-rm.org.nz
Type: Comprehensive
Deliver to: Email
How often: as-it-happens
which emails me anytime someone links to my project's homepage in one of the resources indexed by Google.

Backtype Alerts

Finally, Backtype Alerts notify me anytime safe-rm features in a blog comment.

Anything else I should subscribe to or follow?

27 September 2009

Francois Marier: OpenStreetMap/OpenLayers and Privoxy

I have been meaning to switch to OpenStreetMap for a while now, but the fact that it didn't work with Privoxy was holding me back. I don't enjoy surfing the web without protection and Privoxy is one of the most convenient ways of enhancing one's browsing experience on Debian and Ubuntu.

So I decided to spend some time trying to figure out what was wrong. If you want to skip the details and just get the fix, scroll down to the bottom of this post for the solution.

Finding all affected URLs

The first step was to get a list of all of the URLs that Privoxy filters when one visits the front page of OpenStreetMap.

Comparing the files

Armed with the list of all affected URLs, I downloaded them through Privoxy using wget into a privoxy/ directory:
$ http_proxy=localhost:8118 wget URL
Then I downloaded the same URLs without Privoxy into a noproxy/ directory:
$ http_proxy= wget URL
To find identical files, I ran md5sum in each directory:
md5sum *
After deleting all identical files, I was left with:
index.html
OpenLayers.js?1251388304
which I diff'ed together:
colordiff -u noproxy/FILE privoxy/FILE
The first file, index.html, did not have any relevant changes, but I noticed in the second one that a few instances of
this.moveTo(this.position)
were replaced with
''.concat(this.position)
Searching for "concat" in /etc/privoxy/, I found that the "jumping-windows" filter was the culprit. After disabling it, all of the problems went away.

I have filed a bug upstream.

The Solution

Here's what you need to put in your /etc/privoxy/user.action:
{ -filter{jumping-windows} }
.openstreetmap.org

22 September 2009

Francois Marier: grub on a bootable USB rescue stick

If you've got a cheap USB key lying around which is too small to be useful, follow these instructions to create a rescue "stick" in case your boot loader stops working.

(This is based on these instructions but simplified and updated for grub 2.)

Before you start, make sure you've got both the parted and the grub-pc packages installed.
Make the USB stick bootable

Plug in your USB stick and replace the device name and capacity with the appropriate values. (You can find the total capacity of the stick by typing print inside parted.)
$ sudo parted /dev/sda
(parted) rm 1
(parted) mkpart primary 1MB 4009MB
(parted) mkfs 1 fat32
(parted) toggle 1 boot
(parted) quit
Install grub

Again, substitute sda with the correct device name.
mount /dev/sda1 /mnt

mkdir -p /mnt/boot/grub
cp /usr/lib/grub/i386-pc/* /mnt/boot/grub/
echo '(hd0) /dev/sda' > /mnt/boot/grub/device.map
sudo grub-install --root-directory=/mnt /dev/sda

umount /mnt
That's it, now all you have to do is reboot with the stick plugged in and test booting your box manually with it.

9 September 2009

Francois Marier: Twitter tracking clicks to external sites

Lately, while using the Twitter web interface with Javascript turned on, I've been redirected to a Privoxy error page whenever I click on an external link:
Request for blocked URL

Your request for http://twitter.com/link_click_count?url=http%3A%2F%2Fexample.com&linkType=web&tweetId=1234567890&userId=12345678&authenticity_token=f5920ebf56bfa111c54bb9f86814cafa76de00fa was blocked.

Block reason: Path matches generic block pattern.
Looking a little closer at the blocked URL, I noticed that for every external link you click, Twitter logs the destination URL, the originating tweet and your user ID.

So what can you do to opt out of this tracking?
Disable Javascript

This is by far the easiest solution. You can simply install an extension like NoScript and use Twitter with Javascript turned off. That's what I do most of the time since the website is mostly usable without it.

A very fortunate side-effect of using Twitter without Javascript is that you can avoid a number of the worms which propagate through that service from time to time.
Use Privoxy to disable the link counter

For those times when I have to enable Javascript on the site, I use Privoxy, a great privacy-enhancing web proxy, to alter the Javascript code executed when clicking on external links. This effectively disables the extra tracking and takes me directly to the intended URL.

First of all, make sure you have this line in your /etc/privoxy/config:
filterfile user.filter
then, add this filter to /etc/privoxy/user.filter:
FILTER: twitter-link-count Disable the Twitter link tracker
s@twttr\.countClick=function@twttr.dsbldCountClick=function@Uig
and finally turn this filter on for the affected URLs (/etc/privoxy/user.action):
{ +force-text-mode +filter{twitter-link-count} }
.twimg.com/.*/javascripts/twitter.js
Switch to Identica

If you want to avoid similar privacy problems in the future, you can also switch to an open platform which respects your basic freedoms.

The main contender in this space is Identica. It's a great alternative to Twitter and has a lot of interesting features (like groups and XMPP integration) which are not yet available on Twitter.

The StatusNet developers have kindly released their source code under the AGPL and their goal is to build the most open micro-blogging system out there.

So go over there now and claim your username!

7 June 2009

Francois Marier: Reinstalling grub on an unbootable Debian system

Fixing an unbootable computer after a failed grub installation can be a bit tricky. Here's what I ended up doing.

First of all, boot the machine up and get access to the root partition:
  1. Get a Debian installation CD for the same architecture (i.e. don't use an i386 CD if your root partition is amd64). The distro version doesn't matter too much: a lenny CD will boot squeeze/sid just fine.
  2. Boot the install CD and select Rescue mode under Advanced options.
  3. Answer the language, keyboard and network questions any way you want and provide the decryption passphrases for any of the encrypted partitions you need to mount.
  4. When prompted, request a shell on the root partition.
If you need to upgrade the version of the grub package (for example if this problem was caused by a bug which is now fixed):
  1. Make sure that the network interface is up (ifup eth0).
  2. Make sure that /etc/resolv.conf has at least one nameserver line, otherwise add one.
  3. Install the latest version using apt-get or dpkg.
Now that you have the right grub version, run the following (with the right device name for your machine):
grub-mkdevicemap
grub-install /dev/hda
update-grub
Finally, reboot and cross your fingers :)
