Search Results: "thk"

21 March 2022

Gunnar Wolf: Long, long, long live Emacs after 39 years

Reading Planet Debian (see, Sam, we are still having a conversation over there?), I read Anarcat's 20+ years of Emacs. And... well, should I brag and contribute to the discussion? Of course, why not? Emacs is the first computer program I can name that I ever learnt to use to do something minimally useful. 39 years ago.
From the Space Cadet keyboard that (obviously) influenced Emacs' early design
The Emacs editor was born, according to Wikipedia, in 1976, the same year as myself. I am clearly not among its first users. It was already a well-established citizen when I first learnt it; I am fortunate to be the son of a Physics researcher at UNAM. My father used to take me to his institute after he noticed how I was attracted to computers; we would usually spend some hours there between 7 and 11PM on Friday nights. His institute had a computer room where they had very sweet gear: some 10 Heathkit terminals quite similar to this one. The terminals were connected (via individual switches) to both a PDP-11 and a Foonly F2 computer. The room also had a beautiful thermal printer, a beautiful Tektronix vectorial graphics output terminal, and some other stuff.

The main use for my father was to typeset some books; he had recently (1979) published Integral Transforms in Science and Engineering (that must be my first mention in scientific literature), and I remember he was working on the proceedings of a conference he held in Oaxtepec (the account he used in the system was oax, not his usual kbw, which he lent me). He was also working on Manual de Lenguaje y Tipografía Científica en Castellano, where you can see some examples of TeX; due to a hardware crash, the book has the rare privilege of being a direct copy of the output of the thermal printer: it was not possible to produce a higher resolution copy for several years. But it is fun and interesting to see what we were able to produce with in-house tools back in 1985!

So, what could he teach me so I could use the computers while he worked? TeX, of course. No, not LaTeX (that was published in 1984). LaTeX is a set of macros developed initially by Leslie Lamport, used to make TeX easier; TeX was developed by Donald Knuth, and if I have this information correct, it was Knuth himself who installed and demonstrated TeX on the Foonly computer during a visit to UNAM.

Now, after 39 years hammering at Emacs buffers... have I grown extra fingers? Nope. I cannot even write decent elisp code, and can barely read it. I do use org-mode (a lot!) and love it; I have written basically five books, many articles and lots of presentations and minor documents with it. But I don't read my mail or handle my git from Emacs. I could say I'm a relative newbie after almost four decades. Four decades... When we got a PC in 1986, my father got the people at the Institute to get him memacs (micro-emacs). There was probably a ten-year period when I barely used any Emacs, but I always recognized it. My fingers have memorized a dozen or so movement commands, and a similar number of file management commands. And yes, Emacs and TeX are still the main tools I use day to day.

23 July 2021

Evgeni Golov: It's not *always* DNS

Two weeks ago, I had the pleasure of playing with Foreman's Kerberos integration and ironing out a few long-standing kinks. It all started with a user reminding us that Kerberos authentication is broken when Foreman is deployed on CentOS 8, as there is no mod_auth_kerb available anymore. Given mod_auth_kerb hasn't seen a release since 2013, this is quite understandable. Thankfully, there is a replacement available, mod_auth_gssapi. Even better, it's available in CentOS 7 and 8, and in Debian and Ubuntu too! So I quickly whipped up a PR to completely replace mod_auth_kerb with mod_auth_gssapi in our installer and successfully tested that it still works on CentOS 7 (even when upgrading from a mod_auth_kerb installation) and CentOS 8. Yay, the issue at hand seemed fixed. But just writing a post about that would've been boring, huh? Well, and then I dared to test the same on Debian... Turns out, our installer was using the wrong path to the Apache configuration and the wrong username Apache runs under while trying to set up Kerberos, so it could never have worked. Luckily Ewoud and I were able to fix that too. And yet the installer was still unable to fetch the keytab from my FreeIPA server. Let's dig deeper! To fetch the keytab, the installer does roughly this:
# kinit -k
# ipa-getkeytab -k http.keytab -p HTTP/foreman.example.com
And if one executes that by hand to see the actual error, you get:
# kinit -k
kinit: Cannot determine realm for host (principal host/foreman@)
Well, yeah, the principal looks kinda weird (no realm) and the interwebs say for "kinit: Cannot determine realm for host":
  • Kerberos cannot determine the realm name for the host. (Well, duh, that's what it said?!)
  • Make sure that there is a default realm name, or that the domain name mappings are set up in the Kerberos configuration file (krb5.conf)
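For reference, a minimal krb5.conf along those lines might look like this (the realm and domain names here just follow the foreman.example.com example and are assumptions, not values from an actual FreeIPA setup):
[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_realm = true

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM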
And guess what, all of these are perfectly set by ipa-client-install when joining the realm. But there must be something, right? Looking at the principal in the error, it's missing both the domain of the host and the realm. I was pretty sure that my DNS and config were right, but what about gethostname(2)?
# hostname
foreman
Bingo! Let's see what happens if we force that to be an FQDN?
# hostname foreman.example.com
# kinit -k
NO ERRORS! NICE! We're doing science here, right? And I still have the CentOS 8 box I had for the previous round of tests. What happens if we set that to have a shortname? Nothing. It keeps working fine. And what about CentOS 7? VMs are cheap. Well, that breaks like on Debian, if we force the hostname to be short. Interesting. Is it a version difference between the systems?
  • Debian 10 has krb5 1.17-3+deb10u1
  • CentOS 7 has krb5 1.15.1-50.el7
  • CentOS 8 has krb5 1.18.2-8.el8
So, something changed in 1.18? Looking at the krb5 1.18 changelog, the following entry jumps out: "Expand single-component hostnames in host-based principal names when DNS canonicalization is not used, adding the system's first DNS search path as a suffix." Given Debian 11 has krb5 1.18.3-5 (well, testing has, so let's pretend bullseye will too), we can retry the experiment there, and it shows that it works with both short and full hostnames. So yeah, it seems krb5 "does the right thing" since 1.18, and before that gethostname(2) must return an FQDN. I've documented that for our users and can now sleep a bit better. At least it wasn't DNS, right?! Btw, freeipa won't be in bullseye, which makes me a bit sad, as that means that Foreman won't be able to automatically join FreeIPA realms if deployed on Debian 11.
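A quick way to check whether a given machine is affected (a minimal sketch; libkrb5-3 is the Debian package name, adjust for RPM-based systems):
# what does gethostname(2) return? A short name here breaks kinit -k on krb5 < 1.18
hostname
hostname -f   # should print the FQDN, e.g. foreman.example.com
# installed krb5 version; >= 1.18 expands single-component hostnames itself
dpkg -s libkrb5-3 | grep '^Version:'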

10 April 2020

Utkarsh Gupta: Debian Activities for March 2020

Here's my (sixth) monthly update about the activities I've done in Debian this March.

Debian LTS

This was my sixth month as a Debian LTS paid contributor.
I was assigned 24.00 hours and worked on the following things:

CVE Fixes and Announcements:
  • Issued DLA 2131-1, fixing CVE-2014-6262, for rrdtool.
    For Debian 8 "Jessie", this problem has been fixed in version 1.4.8-1.2+deb8u1.
  • Issued DLA 2131-2, fixing a regression caused by DLA 2131-1, for rrdtool.
    For Debian 8 "Jessie", this problem has been fixed in version 1.4.8-1.2+deb8u2.
  • Issued DLA 2135-1, fixing CVE-2020-9546, CVE-2020-9547, and CVE-2020-9548, for jackson-databind.
    For Debian 8 "Jessie", these problems have been fixed in version 2.4.2-2+deb8u12.
  • Issued DLA 2137-1, fixing CVE-2020-10232, for sleuthkit.
    For Debian 8 "Jessie", this problem has been fixed in version 4.1.3-4+deb8u2.
  • Issued DLA 2139-1, fixing CVE-2020-5258 and CVE-2020-5259, for dojo.
    For Debian 8 "Jessie", these problems have been fixed in version 1.10.2+dfsg-1+deb8u3.
  • Issued DLA 2141-1, fixing CVE-2020-10184 and CVE-2020-10185, for yubikey-val.
    For Debian 8 "Jessie", these problems have been fixed in version 2.27-1+deb8u1.
  • Issued DLA 2146-1, fixing CVE-2019-15690, for libvncserver.
    For Debian 8 "Jessie", this problem has been fixed in version 0.9.9+dfsg2-6.1+deb8u7.
  • Issued DLA 2147-1, fixing CVE-2019-17546, for gdal.
    For Debian 8 "Jessie", this problem has been fixed in version 1.10.1+dfsg-8+deb8u2.
  • Issued DLA 2149-1, fixing CVE-2020-5267, for rails.
    For Debian 8 "Jessie", this problem has been fixed in version 2:4.1.8-1+deb8u6.
  • Issued DLA 2153-1, fixing CVE-2020-10672 and CVE-2020-10673, for jackson-databind.
    For Debian 8 "Jessie", these problems have been fixed in version 2.4.2-2+deb8u13.
  • Issued DLA 2154-1, fixing CVE-2020-10802 and CVE-2020-10803, for phpmyadmin.
    For Debian 8 "Jessie", these problems have been fixed in version 4:4.2.12-2+deb8u9.

Other LTS Work:

Debian Work

Uploads to the Archive:
  • micro (2.0.2-1~bpo10+1) to buster-backports.
  • rails (2:5.2.4.1+dfsg-1) to unstable.
  • ruby-rack (2.0.8-1) to unstable.
  • ruby-grape (1.3.0-1) to experimental.
  • libgit2 (0.28.4+dfsg.1-3) to unstable.
  • micro (2.0.2-2) to unstable.
  • ruby-octokit (4.17.0-1) to unstable.
  • ruby-power-assert (1.1.6-1) to unstable.
  • rails (2:5.2.4.1+dfsg-2) to unstable.
  • ruby-octokit (4.17.0-2) to unstable.
  • ruby-method-source (1.0.0-1) to unstable.
  • libwebservice-ils-perl (0.18-1) to unstable.
  • libdata-hal-perl (1.001-1) to unstable.
  • rails (2:4.2.7.1-1+deb9u2) to stretch.
  • rails (2:5.2.2.1+dfsg-1+deb10u1) to buster.
  • libgit2 (0.28.4+dfsg.1-4) to unstable.
  • ruby-grape (1.3.1+git20200320.c8fd21b-1) to experimental.
  • ruby-grape-logging (1.8.3-1) to unstable.
  • ruby-grape (1.3.1+git20200320.c8fd21b-2) to unstable.
  • ruby-dry-equalizer (0.3.0-2) to unstable.
  • ruby-dry-core (0.4.9-2) to unstable.
  • ruby-dry-logic (1.0.5-2) to unstable.
  • ruby-dry-inflector (0.2.0-2) to unstable.
  • ruby-dry-container (0.7.2-2) to unstable.
  • ruby-dry-configurable (0.9.0-2) to unstable.
  • ruby-dry-types (1.2.2-2) to unstable.
  • micro (2.0.2-2~bpo10+1) to buster-backports.
  • golang-vbom-util (0.0~git20180919.efcd4e0-2) to unstable.
  • golang-github-tonistiigi-units (0.0~git20180711.6950e57-2) to unstable.
  • golang-github-jaguilar-vt100 (0.0~git20150826.2703a27-2) to unstable.
  • golang-github-grpc-ecosystem-grpc-opentracing (0.0~git20180507.8e809c8-2) to unstable.
  • rails (2:6.0.2.1+dfsg-3) to experimental.
  • libgit2 (0.99.0+dfsg.1-1) to experimental.
  • golang-github-goji-param (0.0~git20160927.d7f49fd-5) to unstable.
  • phpmyadmin-sql-parser (4.6.1-2) to unstable.
  • mariadb-mysql-kbs (1.2.10-2) to unstable.
  • golang-github-aleksi-pointer (1.1.0-1) to unstable.
  • golang-github-andreyvit-diff (0.0~git20170406.c7f18ee-2) to unstable.
  • golang-github-audriusbutkevicius-go-nat-pmp (0.0~git20160522.452c976-2) to unstable.
  • ruby-power-assert (1.1.7-1) to unstable.
  • ruby-test-unit (3.3.5-1) to unstable.
  • ruby-omniauth (1.9.1-1) to unstable.
  • ruby-warden (1.2.8-1) to unstable.
  • python-libais (0.17+git.20190917.master.e464cf8-2) to unstable.
  • lolcat (100.0.1-3) to unstable.
  • ruby-vips (2.0.17-1) to unstable.

Bug Fixes:
  • #836206 for lolcat.
  • #940338 for golang-github-audriusbutkevicius-go-nat-pmp.
  • #940335 for golang-github-andreyvit-diff.
  • #940334 for golang-github-aleksi-pointer.
  • #940362 for golang-github-goji-param.
  • #952025 for ruby-grape.
  • #867027 for ruby-grape.
  • #954529 for libgit2.
  • #954304 for rails (CVE-2020-5267) buster-pu.
  • #954304 for rails (CVE-2020-5267) stretch-pu.
  • #954304 for rails (CVE-2020-5267) unstable.
  • #953400 for micro.
  • #927889 for libgit2.
  • #952111 for micro.

Miscellaneous:
  • Sponsored a lot of uploads :)
  • Outreachy mentoring for GitLab project for Sakshi Sangwan.
  • Opened PRs & MRs upstream.

Until next time.
:wq for today.

17 September 2016

Jonas Meurer: data recovery

Data recovery with ddrescue, testdisk and sleuthkit

From time to time I need to recover data from disks. Reasons can be broken flash/hard disks as well as accidentally deleted files. Fortunately, this doesn't happen too often, which on the downside means that I usually don't remember the details about best practice. Now that a good friend asked me to recover very important data from a broken flash disk, I take the opportunity to write down what I did, and hopefully won't need to read the same docs again next time :) Disclaimer: I didn't take the time to read through the full documentation. This is rather a brief summary of best practice to my knowledge, not a sophisticated and detailed explanation of data recovery techniques.

Create image with ddrescue

First and most secure rule for recovery tasks: don't work on the original, use a copied image instead. This way you can do whatever you want without risking further data loss. The perfect tool for this is GNU ddrescue. Contrary to dd, it doesn't reiterate over a broken sector with I/O errors again and again while copying. Instead, it remembers the broken sector for later and goes on to the next sector first. That way, all sectors that can be read without errors are copied first. This is particularly important as every extra attempt to read a broken sector can further damage the source device, causing even more data loss. In Debian, ddrescue is available in the gddrescue package:
apt-get install gddrescue
Copying the raw disk content to an image with ddrescue is as easy as:
ddrescue /dev/disk disk-backup.img disk.log
Giving a logfile as the third argument has the great advantage that you can interrupt ddrescue at any time and continue the copy process later, possibly with different options. In case of very large disks where only the first part was in use, it might be useful to start with copying the beginning only:
ddrescue -i0 -s20MiB /dev/disk disk-backup.img disk.log
In case of errors after the first run, you should start ddrescue again with direct read access (-d) and tell it to retry bad sectors three times (-r3):
ddrescue -d -r3 /dev/disk disk-backup.img disk.log
If some sectors are still missing afterwards, it might help to run ddrescue with infinite retries for some time (e.g. one night):
ddrescue -d -r-1  /dev/disk disk-backup.img disk.log
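Between runs you can check how much has already been recovered: the gddrescue package also ships a ddrescuelog tool that summarizes the logfile (a minimal sketch, assuming the disk.log file from the commands above and a ddrescuelog version that supports --show-status):
ddrescuelog --show-status disk.log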

Inspect the image

Now that you have an image of the raw disk, you can take a first look at what it contains. If ddrescue was able to recover all sectors, chances are high that no further magic is required and all data is there. If the raw disk (used to) contain a partition table, take a first look with mmls from sleuthkit:
mmls disk-backup.img
In case of an intact partition table, you can try to create device maps with kpartx after setting up a loop device for the image file:
losetup /dev/loop0 disk-backup.img
kpartx -a /dev/loop0
If kpartx finds partitions, they will be made available at /dev/mapper/loop0p1, /dev/mapper/loop0p2 and so on. Search for filesystems on the partitions with fsstat from sleuthkit on the partition device map:
fsstat /dev/mapper/loop0p1
Or directly on the image file with the offset discovered by mmls earlier. This also might work in case of a broken partition table:
fsstat -o 8064 disk-backup.img
The offset obviously is not needed if the image contains a partition dump (without partition table):
fsstat disk-backup.img
In case a filesystem is found, simply try to mount it:
mount -t <fstype> -o ro /dev/mapper/loop0p1 /mnt
or
losetup -o 8064 /dev/loop1 disk-backup.img
mount -t <fstype> -o ro /dev/loop1 /mnt
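One caveat worth noting: sleuthkit's -o option takes the offset in sectors, while losetup -o expects bytes, so if mmls reported the offset in 512-byte sectors it needs converting first (a sketch under that assumption, reusing the 8064 offset from above):
losetup -o $((8064 * 512)) /dev/loop1 disk-backup.img   # 8064 sectors * 512 bytes
mount -t <fstype> -o ro /dev/loop1 /mnt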

Recover partition table

If the partition table is broken, try to recover it with testdisk. But first, create a second copy of the image, as you will alter it now:
ddrescue disk-backup.img disk-backup2.img
testdisk disk-backup2.img
In testdisk, select a media (e.g. Disk disk-backup2.img) and proceed, then select the partition table type (usually Intel or EFI GPT) and analyze -> quick search. If partitions are found, select one or more and write the partition structure to disk.

Recover files

Finally, let's try to recover the actual files from the image.

testdisk

If the partition table recovery was successful, try to undelete files from within testdisk. Go back to the main menu and select advanced -> undelete.

photorec

Another option is to use the photorec tool that comes with testdisk. It searches the image for known file structures directly, ignoring possible filesystems:
photorec sdb2.img
You have to select either a particular partition or the whole disk, a filesystem (ext2/ext3 vs. other) and a destination for recovered files. Last time, photorec was my last resort, as the fat32 filesystem was so damaged that testdisk detected only an empty filesystem.

sleuthkit

sleuthkit also ships with tools to undelete files. I tried fls and icat. fls searches for and lists files and directories in the image, searching for parts of the former filesystem. icat copies files by their inode number. Last time I tried, fls and icat didn't recover any new files compared to photorec. Still, for the sake of completeness, I document what I did. First, I invoked fls in order to search for files:
fls -f fat32 -o 8064 -pr disk-backup.img
Then, I tried to backup one particular file from the list:
icat -f fat32 -o 8064 disk-backup.img <INODE> > recovered-file
Finally, I used the recoup.pl script from Dave Henk in order to batch-recover all discovered files:
wget http://davehenk.googlepages.com/recoup.pl
chmod +x recoup.pl
vim recoup.pl
[...]
my $fullpath="~/recovery/sleuthkit/";
my $FLS="/usr/bin/fls";
my @FLS_OPT=("-f","fat32","-o","8064","-pr","-m $fullpath","-s 0");
my $FLS_IMG="~/recovery/disk-image.img";
my $ICAT_LOG="~/recovery/icat.log";
my $ICAT="/usr/bin/icat";
my @ICAT_OPT=("-f","fat32","-o","8064");
[...]
Further down, the double quotes around $fullfile needed to be replaced by single quotes (at least in my case, as $fullfile contained a subdir called '$OrphanFiles'):
system("$ICAT @ICAT_OPT $ICAT_IMG $inode > \'$fullfile\' 2>> $ICAT_LOG") if ($inode != 0);
That's it for now. Feel free to comment with suggestions on how to further improve the process of recovering data from broken disks.

17 January 2014

Steve Kemp: So I found a job.

Just to recap my life since December:
I had worked with Bytemark for seven years and left for reasons which made sense. I started working for "big corp" with a job that sounded good on paper, but ultimately turned out to be a poor fit for my tastes. I spent a month trying to decide "Is this bad, or is this just not what I'm used to?", because I was aware that there would obviously be big differences as well as little ones. At the point I realized some of the niggles could be fixed but most couldn't, I resigned rather than prolong the initial probationary training period - because I knew I wouldn't stay, it seemed unfair and misleading to stay for the full duration of the probationary period knowing full well I'd leave the moment it concluded - and because at the end of it the notice period switched from seven days to one month.
A couple of people were kind enough to get in touch and discuss potential offers, both locally, remotely in the UK, and from abroad (the latter surprised me, but pleased me too). I spent a couple of days "contracting", by which I really mean doing a few favours for friends, some of whom paid me in Amazon vouchers, and some of whom paid me in beer.
e.g. I tweaked the upcoming Death Knight site to handle 3000 simultaneous HTTP connections, then I upgraded some servers from Squeeze to Wheezy for some other folk.
That aside I've largely been idle for about 10 days and have now picked the company to work for - so I'm going to be a contractor with a day-rate for an American firm for the next couple of months. If that goes well then I'll become a full-time employee, hopefully.

29 January 2013

Jonathan Carter: Ubuntu Developer Summit for 13.04 (Raring)

The War on Time

Whoosh! I've been incredibly quiet on my blog for the last 2-3 months. It's been a crazy time but I'll catch up and explain everything over the next few entries. Firstly, I'd like to get out a few details about the last Ubuntu Developer Summit that took place in Copenhagen, Denmark in October. I'm usually really good at getting my blog post out by the end of UDS or a day or two after, but this time it just flew by so incredibly fast for me that I couldn't keep up. It was a bit shorter than usual at 4 days, as opposed to the usual 5. The reason I heard for that was that people commented in previous post-UDS surveys that 5 days were too long, which is especially understandable for Canonical staff who are often in sprints (away from home) for the week before the UDS as well. I think the shorter period works well, though it might need a bit more fine-tuning; I think the summary session at the end wasn't that useful because, like me, people didn't have enough time to process the vast amount of data generated during UDS and give nice summaries of it. Overall, it was a great get-together of people who care about Ubuntu and also many areas of interest outside of Ubuntu.

Copenhagen, Denmark

I didn't take many photos this UDS, my camera is broken and only takes blurry pics (not my fault I swear!). So I just ended up taking a few pictures with my phone. Go tag yourself on Google+ if you were there. One of the first interesting things I saw when arriving in Copenhagen was the hotel we stayed in. The origami-like design reminded me of the design of the Quantal Quetzal logo that is used for the current stable Ubuntu release.

The Road ahead for Edubuntu to 14.04 and beyond

Stéphane previously posted about the vision we share for Edubuntu 14.04 and beyond; this was what was mostly discussed during UDS, along with how we'll approach those goals for the 13.04 release. This release will mostly focus on the Edubuntu Server aspect. If everything works out, you will be able to use the standard Edubuntu DVD to also install an Edubuntu Server system that will act as a Linux container host as well as an Active Directory compatible directory server using Samba 4. The catch with Samba 4 is that it doesn't have many administration tools for Linux yet. Stéphane has started work on a web interface for Edubuntu Server that looks quite nice already. I'm supposed to do some CSS work on it, but I have to say it looks really nice already; it's based on the MAAS service theme and Stéphane did some colour changes and fixes on it already. From the Edubuntu installer, you'll be able to choose whether this machine should act as a domain server, or whether you would like to join an existing domain. Since Edubuntu Server is highly compatible with Microsoft Active Directory, the installer will connect to it regardless of whether it's a Windows Domain or Edubuntu Domain. This should make it really easy for administrators in schools with mixed environments and where complete infrastructure migrations are planned. You will be able to connect to the same domain whether you're using Edubuntu on thin clients, desktops or tablets, and everything is controllable using the Epoptes administration tool. Many people are asking whether this is planned for Ubuntu / Ubuntu Server as well, since this could be incredibly useful in other organisations who have a domain infrastructure. It's currently meant to be easily rebrandable, and the aim is to have it available as a general solution for Ubuntu once all the pieces work together.

Empowering Ubuntu Flavours

This cycle, Ubuntu is making some changes to the release schedule. One of the biggest changes made this cycle is that the alpha and beta releases are being dropped for the main Ubuntu product. This session was about establishing how much divergence and change the Ubuntu Flavours (Ubuntu Studio, Mythbuntu, Kubuntu, Lubuntu and Edubuntu) could have from the main release cycle. Edubuntu and Kubuntu decided to be a bit more conservative and maintain the snapshot releases. For Edubuntu it has certainly helped so far in identifying and finding some early bugs, and I'm already glad that we did that. Mythbuntu is also a notable exception, since it will now only do LTS releases. We're tempted to change Edubuntu's official policy so that the LTS releases are the main releases, and treat the releases in between more like technology previews for the next LTS. It's already not such a far stretch from the truth, but we'll need to properly review and communicate that at some point.

Valve at UDS and Steam for Linux

One of the first plenaries was from Valve, where Drew Bliss talked about Steam on Linux. Steam is one of the most popular publishing and distribution systems for games, and up until recently it has only been available on Windows and Mac. Valve (the company behind Steam and many popular games such as Half-Life and Portal) are actively working on porting games to run natively on Linux as well. Some people have asked me what I think about it, since the system is essentially using a free software platform to promote a lot of non-free software. My views on this are pretty simple: I think it's an overwhelmingly good thing for Linux desktop adoption, and it's been proven to be a good thing for people who don't even play games. Since the announcement from Valve, Nvidia has already doubled performance in many cases for its Linux drivers. AMD, who have been slacking on Linux support the last few years, have beefed up their support drastically with the announcement of new drivers that were released earlier this month. This new collection of AMD drivers also adds support for a range of cards where the drivers were completely discontinued, giving new life to many older laptops and machines which would be destined for the dumpster otherwise. This benefits not only gamers, but everyone from an average office worker who wants snappy office suite performance and fast web browsing, to designers who work with graphics, videos and computer aided design. Also, it means that many home users who prefer Linux-based systems would no longer need to dual-boot to Windows or OS X for their games. While Steam will actively be promoting non-free software, it more than makes up for that by the enablement it does for the free software ecosystem. I think anyone who disagrees with that is somewhat of a purist and should be more willing to make compromises in order to make progress.

Ubuntu Release Changes

Last week, there was a lot of media noise stating that Ubuntu will no longer do releases and will become a rolling release except for the LTS releases. This is certainly not the case, at least not any time soon. One meme that I've noticed increasingly over the last UDSs was that there's an increasing desire to improve the LTS releases, using the usual Ubuntu releases more and more for experimentation purposes. I think there's more and more consensus that the current 6 month cycle isn't really optimal and that there must be a better way to get Ubuntu to the masses; it's just the details of what that better way is that leave a lot to be figured out. There's a desire among developers to provide better support (better SRUs and backports) for the LTS releases, to make it easier for people to stick with them and still have access to new features and hardware support. Having fewer versions between LTS releases will certainly make that easier. In my opinion it will probably take at least another 2 cycles' worth of looking at all the factors from different angles and getting feedback from all the stakeholders before a good plan will have formed for the future of Ubuntu releases. I'm glad to see that there is so much enthusiastic discussion around this, and I'm eager to see how Ubuntu's releases will continue to evolve.

Lightning Talks

Lightning talks are a lot like punk-rock songs. When it's good, it's really, really amazingly good and fun. When it's bad, at least it will be over soon :) Unfortunately, since it's been a few months since the UDS, I can't remember all the details of the lightning talks, but one thing that I find worth mentioning is that they're not just awesome for the topic they aim to cover (for example, the one lightning talks session I attended was on the topic of "Tests in your software"), but since they are more demo-like than presentation-like, you get to learn a lot of neat tricks and cool things that you didn't know before. Every few minutes someone would do something and I'd hear someone say something like "Awesome! I didn't know you could do that with apt-daemon!". It's fun and educational, and I hope lightning talks will continue to be a tradition at future UDSs.

Social

Stefano Rivera (fellow MOTU, Debianista, Capetonian, Clugger) wins the prize for the person I've seen in the most countries in one year. In 2012, I saw him in Cape Town for Scaleconf, Managua during Debconf, Oakland for a previous UDS and Copenhagen for this UDS. Sometimes when I look at silly little statistics like that, I realise what a great adventure the year was! Between the meet 'n greet, an evening of lightning talks and the closing party (which was viking themed and pretty awesome) there was just one free evening left. I used it to gather with the Debian folk who were at UDS. It was great to see how many Debian people were attending; I think we had around a dozen or so people at the dinner, and there were even more who couldn't make it since they work for Canonical or Linaro and had to attend team dinners the same evening. It was, as usual, great to put some more faces to names and get to know some people better. It was also great to have a UDS with many strong technical community folk present who were willing to engage in discussion. There were still a few people who felt missing, but fewer than at some previous UDSs. I also discovered my face on a few puzzles! They were a *great* idea; I saw a few people come and go to work on them during the week, and they seem to have acted as good menial activities for people to fix their brains when they got fried during sessions :) Overall, this was a good and punchy UDS. I'll probably not make the next one in Oakland due to many changes in my life currently taking place (although I will participate remotely), but will probably make the one later this year, especially if it's in Europe. I'll also make a point of live-blogging a bit more; it's just so hard remembering all the details a few months after the fact. Thanks to everyone who contributed their piece in making it a great week!

13 January 2013

Bernhard R. Link: some signature basics

While almost everyone has already worked with cryptographic signatures, they are usually only used as black boxes, without taking a closer look. This article intends to shed some light on what happens behind the scenes. Let's take a look at a signature. In ascii-armoured form, or behind a clearsigned message, one often only sees something like this:
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAABAgAGBQJQ8qxQAAoJEH8RcgMj+wLc1QwP+gLQFEvNSwVonSwSCq/Dn2Zy
fHofviINC1z2d/voYea3YFENNqFE+Vw/KMEBw+l4kIdJ7rii1DqRegsWQ2ftpno4
BFhXo74vzkFkTVjo1s05Hmj+kGy+v9aofnX7CA9D/x4RRImzkYzqWKQPLrAEUxpa
xWIije/XlD/INuhmx71xdj954MHjDSCI+9yqfl64xK00+8NFUqEh5oYmOC24NjO1
qqyMXvUO1Thkt6pLKYUtDrnA2GurttK2maodWpNBUHfx9MIMGwOa66U7CbMHReY8
nkLa/1SMp0fHCjpzjvOs95LJv2nlS3xhgw+40LtxJBW6xI3JvMbrNYlVrMhC/p6U
AL+ZcJprcUlVi/LCVWuSYLvUdNQOhv/Z+ZYLDGNROmuciKnvqHb7n/Jai9D89HM7
NUXu4CLdpEEwpzclMG1qwHuywLpDLAgfAGp6+0OJS5hUYCAZiE0Gst0sEvg2OyL5
dq/ggUS6GDxI0qUJisBpR2Wct64r7fyvEoT2Asb8zQ+0gQvOvikBxPej2WhwWxqC
FBYLuz+ToVxdVBgCvIfMi/2JEE3x8MaGzqnBicxNPycTZqIXjiPAGkODkiQ6lMbK
bXnR+mPGInAAbelQKmfsNQQN5DZ5fLu+kQRd1HJ7zNyUmzutpjqJ7nynHr7OAeqa
ybdIb5QeGDP+CTyNbsPa
=kHtn
-----END PGP SIGNATURE-----
This is actually only a base64-encoded data stream. It can be translated to the actual byte stream using gpg's --enarmor and --dearmor commands (which can be quite useful if some tool only expects one BEGIN SIGNATURE/END SIGNATURE block but you want to include multiple signatures and cannot generate them with a single gpg invocation because the keys are stored too securely in different places). Reading byte streams manually is not much fun, so I wrote gpg2txt some years ago, which can give you some more information. The above signature looks like the following:
89 02 1C -- packet type 2 (signature) length 540
        04 00 -- version 4 sigclass 0
        01 -- pubkey 1 (RSA)
        02 -- digest 2 (SHA1)
        00 06 -- hashed data of 6 bytes
                05 02 -- subpacket type 2 (signature creation time) length 4
                        50 F2 AC 50 -- created 1358081104 (2013-01-13 12:45:04)
        00 0A -- unhashed data of 10 bytes
                09 10 -- subpacket type 16 (issuer key ID) length 8
                        7F 11 72 03 23 FB 02 DC -- issuer 7F11720323FB02DC
        D5 0C -- digeststart 213,12
        0F FA -- integer with 4090 bits
                02 D0 [....]
Now, what does this mean? First, all gpg data (signatures, keyrings, ...) is stored as a series of blocks (which makes it trivial to concatenate public keys, keyrings or signatures). Each block has a type and a length. A single signature is a single block. If you create multiple signatures at once (by giving multiple -u to gpg) there are simply multiple blocks one after the other.

Then there is a version and a signature class. Version 4 is the current format; some really old stuff (or things wanting to be compatible with very old stuff) sometimes still has version 3. The signature class says what kind of signature it is. There are roughly two signature classes: a verbatim signature (like this one), or a signature of a clearsigned message. With a clearsigned signature, not the file itself is hashed, but instead a normalized form that is supposed to be invariant under usual modifications by mailers. (This is done so people can still read the text of a mail but the recipient can still verify it even if there were some slight distortions on the way.)

Then come the type of the key used and the digest algorithm used for creating this signature. The digest algorithm (together with the sigclass, see above) describes which hashing algorithm is used. (You never sign a message, you only sign a hashsum. Otherwise your signature would be as big as your message and it would take ages to create a signature, as asymmetric keys are necessarily very slow.) This example uses SHA1, which is no longer recommended: as SHA1 has shown some weaknesses, it may get broken in the not too distant future. And then it might be possible to take this signature and claim it is the signature of something else. (If your signatures are still using SHA1, you might want to edit your key preferences and/or set a digest algorithm to use in your ~/.gnupg/gpg.conf.)

Then there is some more information about this signature: the time it was generated and the key it was generated with. Then, after the first 2 bytes of the message digest (I suppose they were added in cleartext to allow checking whether the message is OK before starting with the expensive cryptographic stuff, but they might not be checked anywhere at all), there is the actual signature. Format-wise the signature itself is the most boring part: it's simply one big number for RSA, or two smaller numbers for DSA.

One little detail is still missing: what is this "hashed data" and "unhashed data" about? If the signed digest were only a digest of the message text, then having a timestamp in the signature would not make much sense, as anyone could edit it without making the signature invalid. That's why the digest covers not only the signed message, but also parts of the information about the signature (those are the hashed parts), but not everything (not the unhashed parts).
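As an aside, gpg itself can decode the packet structure in a similar way to gpg2txt, and the digest preferences mentioned above can be set in ~/.gnupg/gpg.conf (a minimal sketch; sig.asc is a hypothetical file containing the ascii-armoured signature from above):
# turn the ascii armour back into the raw byte stream
gpg --dearmor < sig.asc > sig.gpg
# let gpg decode the packets (version, sigclass, digest algorithm, issuer, ...)
gpg --list-packets sig.asc
# example ~/.gnupg/gpg.conf lines to prefer stronger digests than SHA1:
#   personal-digest-preferences SHA256 SHA384 SHA512
#   cert-digest-algo SHA256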

4 September 2012

Thomas Koch: leaving facebook

I've never really used facebook so it's not too hard for me to delete my account there. The only thing that's a pity is that I found (or others found me) many people that I once met and whom I'd love to meet again some day. So I'll go through my facebook contacts list and will try to copy all contact data, especially the email addresses, and plan to write emails to everybody every quarter or half a year about happenings in our lives. I'd be glad to receive such emails from others too! They'll surely be read more carefully than my facebook timeline!

This gets me to another point: I still don't have a satisfying free (as in freedom) solution to manage my contacts across devices and have them available on my web server. The same goes for image galleries. So I can understand how convenient facebook is for many people. Still, facebook is evil, and it's an important and ongoing project to provide a better alternative.

It'll be harder for me to leave Twitter. This one comes next. If people would just switch to the free and privacy aware alternative identi.ca...

And then I have to finally provide a better alternative for my family to share baby pictures than Google Plus... Leaving Xing and LinkedIn also doesn't worry me; I just have to collect the addresses of my important contacts there. Not so pressing is leaving Couchsurfing for bewelcome.org.

So if you want to hear news from me, just pass by www.koch.ro from time to time or put my blog's feed into your feed reader. I'll also keep my identi.ca account.

update: One tiny example of facebook's evilness is that they don't show me your real email address (anymore). They only show me an @facebook address. So if I don't know your real email address from any other source, I would not be able to send you messages without sending them over facebook's servers. I read that you can change the email address shown back to your real email address somewhere in the options.

16 June 2012

Vincent Bernat: GPG Key Transition Statement 2012

I am transitioning my GPG key from an old 1024-bit DSA key to a new 4096-bit RSA key. The old key will continue to be valid for some time, but I prefer all new correspondence to be encrypted with the new key. I will be making all signatures going forward with the new key. I have followed the excellent tutorial from Daniel Kahn Gillmor, which also explains why this migration is needed. The only step that I did not execute is issuing a new certification for keys I have signed in the past: I did not find any search engine to tell me which keys I have signed. Here is the signed transition statement (I have stolen it from Zack):
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256,SHA1
I am transitioning GPG keys from an old 1024-bit DSA key to a new
4096-bit RSA key.  The old key will continue to be valid for some
time, but I prefer all new correspondance to be encrypted in the new
key, and will be making all signatures going forward with the new key.
This transition document is signed with both keys to validate the
transition.
If you have signed my old key, I would appreciate signatures on my new
key as well, provided that your signing policy permits that without
reauthenticating me.
The old key, which I am transitional away from, is:
  pub   1024D/F22A794E 2001-03-23
      Key fingerprint = 5854 AF2B 65B2 0E96 2161  E32B 285B D7A1 F22A 794E
The new key, to which I am transitioning, is:
  pub   4096R/353525F9 2012-06-16 [expires: 2014-06-16]
      Key fingerprint = AEF2 3487 66F3 71C6 89A7  3600 95A4 2FE8 3535 25F9
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver keys.gnupg.net --recv-key 95A42FE8353525F9
If you have already validated my old key, you can then validate that
the new key is signed by my old key:
  gpg --check-sigs 95A42FE8353525F9
If you then want to sign my new key, a simple and safe way to do that
is by using caff (shipped in Debian as part of the "signing-party"
package) as follows:
  caff 95A42FE8353525F9
Please contact me via e-mail at <vincent@bernat.im> if you have any
questions about this document or this transition.
  Vincent Bernat
  vincent@bernat.im
  16-06-2012
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAEBCAAGBQJP3LchAAoJEJWkL+g1NSX5fV0P/iEjcLp7EOky/AVkbsHxiV30
KId7aYmcZRLJpvLZPz0xxThZq2MTVhX+SdiPcrSTa8avY8Kay6gWjEK0FtB+72du
3RxhVYDqEQtrhUmIY2jOVyw9c0vMJh4189J+8iJ5HGQo9SjFEuRrP9xxNTv3OQD5
fRTMUBMC3q1/KcuhPA8ULp4L1OS0xTksRfvs6852XDfSJIZhsYxYODWpWqLsGEcu
DhQ7KHtbOUwjwsoiURGnjwdiFpbb6/9cwXeD3/GAY9uNHxac6Ufi4J64bealuPXi
O4GgG9cEreBTkPrUsyrHtCYzg43X0q4B7TSDg27j0xm+xd+jW/d/0AlBHPXcXemc
b+pw09qLOwQWbsd6d4bx22VXI75btSFs8HwR9hKHBeOAagMHz+AVl5pLXo2rYoiH
34fR1HWqyRdT3bCt19Ys1N+d0fznsZNFOMC+l23QyptOoMz7t7vZ6GbB20ExafrW
+gi7r1sV/6tb9sYMcVV2S3XT003Uwg8PXajyOnFHxPsMoX9zsk1ejo3lxkkTZs0H
yLZtUj3iZ3yX9e2yfv3eOxitR4+bIntEbMecnTI9xJn+33QTz/pWBqg9uDosqzUo
UoQtc6WVn9x3Zsi7aneDYcp06ZdphgsyWhgiLIhQG9MAK9wKthKiZv8DqGYDOsKt
WwpQFvns33e5x4SM4KxXiEYEARECAAYFAk/ctyEACgkQKFvXofIqeU5YLwCdFhEL
P7vpUJA2zv9+dpPN5GLfBlcAn0mDGJcjJpYZl/+aXEnP/8cE0day
=0QnC
-----END PGP SIGNATURE-----
For easier access, I have also published it in text format. You can check it with:
$ gpg --keyserver keys.gnupg.net --recv-key 95A42FE8353525F9
gpg: requesting key 353525F9 from hkp server keys.gnupg.net
gpg: key 353525F9: "Vincent Bernat <bernat@luffy.cx>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
$ curl http://vincent.bernat.im/media/files/key-transition-2012.txt | \
>       gpg --verify
To avoid signing/encrypting with the old key, which shares the same email addresses as the new one, I have saved it, removed it from the keyring and added it again. The new key is now first in both the secret and the public keyrings and will be used whenever the appropriate email address is requested.
$ gpg --export-secret-keys F22A794E > ~/tmp/secret
$ gpg --export F22A794E > ~/tmp/public
$ gpg --delete-secret-key F22A794E
sec  1024D/F22A794E 2001-03-23 Vincent Bernat <bernat@luffy.cx>
Delete this key from the keyring? (y/N) y
This is a secret key! - really delete? (y/N) y
$ gpg --delete-key F22A794E
pub  1024D/F22A794E 2001-03-23 Vincent Bernat <bernat@luffy.cx>
Delete this key from the keyring? (y/N) y
$ gpg --import ~/tmp/public
gpg: key F22A794E: public key "Vincent Bernat <bernat@luffy.cx>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: 3 marginal(s) needed, 1 complete(s) needed, classic trust model
gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: next trustdb check due at 2014-06-16
$ gpg --import ~/tmp/secret
gpg: key F22A794E: secret key imported
gpg: key F22A794E: "Vincent Bernat <bernat@luffy.cx>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg:       secret keys read: 1
gpg:   secret keys imported: 1
$ rm ~/tmp/public ~/tmp/secret
$ gpg --edit-key F22A794E
[...]
gpg> trust
[...]
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
I now need to gather some signatures for the new key. If this is appropriate for you, please sign the new key if you signed the old one.

2 January 2012

Thomas Koch: Perils of not switching to Git

Somebody has probably already recommended that you switch to Git, because it's the best VCS. I'd like to go a step further now and talk about the risk you're taking if you don't switch soon.
By still using SVN (if you're using CVS you're doomed anyway), you communicate the following:
Be aware that good developers today will not consider working with or for you if you're still using SVN. - And that's the risk.
Until recently I thought that Mercurial would be an acceptable alternative to Git. Until I used Mercurial for some time. It is not.

Update: The comments are not too approving, to say the least. Let's see what time will tell. In the meanwhile I'll attach links to this blogpost:

13 September 2010

Robert Collins: What do I do @ work?

I recently moved within Canonical from being a paid developer of Bazaar to take on a larger challenge: Technical Architect for Launchpad. It's been two months now, and it's time to put my head up out of the coal face, have a look around and regroup.

When I worked on Bazaar, every day when I got up I was working on a tool anyone can use, designed for collaboration upon sourcecode, for people writing software. This is a toolchain component right at the heart of the free software world. Bazaar and tools like it get used every day to manage, distribute and collaborate on the sourcecode that makes up the components of Ubuntu, Debian, Fedora and so forth. Every time someone new started using Bazaar for a new free or open source project, I felt happy that in my small part I was helping with this revolution we're carrying out.

Launchpad is pretty similar to Bazaar in some ways. Obviously they are both free software, both are written in Python, and both are sponsored by Canonical, my employer. And they both are designed to assist in collaboration and communication between free software developers, albeit in rather different ways. Bazaar is a tool anyone can install locally, run as a command line, GUI, or local webserver, and share code either centrally (e.g. by pushing to Launchpad), or in a peer-to-peer fashion, acting as their own server. Launchpad, by contrast, is a website which (usually) folk will use as a service in their browser, from the command line, FTP (for package building), ssh (for Bazaar branch pushing or pulling), or even local GUI programs using the Launchpad API service. This makes it more approachable for first-time collaborators, but it's less able to be used offline, and it has all the usual caveats of web sites: it needs a username and password, and its availability depends on the operators - the team I'm part of. So there's a lot less room for error: if we do something wrong, the system is unavailable, and users can't just apt-get install an older release.

With Launchpad our goal is to get all the infrastructure that open source needs out of the way, so that people can focus on their code, collaboration within their team and - almost uniquely - collaboration with other teams. As well as being open source, Launchpad is free for all open source projects to use. Ubuntu is our single biggest user; they use it for all bugtracking, translation and package building, and account for a huge fraction of the total storage overhead in the database. Launchpad is a pretty nice system, so people use it, and as a result (on a technical basis) it is suffering from its own success: small corner cases in the code turn up every day or two, and code written years ago to deal with a relatively small data set now has to deal with data sets a thousand or more times larger (one table, for instance, has over 600,000,000 rows in it).

For the last two months then, I've been working on Launchpad. As Technical Architect, I need to ensure that the things that we (users, stakeholders and developers of Launchpad) want to do are supported by the structure of the system: the platform(s) we're building on, the way we approach problems, coding standards and diagnostic tools. That sounds pretty dry and hands-off, but I'm finding it's actually very balanced. I wrote a presentation when I started the job which encapsulated the challenges I saw in front of the team on this purely technical front, and what I thought I needed to do.

I think I was about right in my expectations: on a typical day, I'll be hands-on in a problem helping get it diagnosed, talking long-term structural changes with someone around how to make things more efficient / flexible / maintainable, and writing a small patch here or there to help move things along. In the two months since I took on this challenge, we've made significant headway on the problem of performance for Launchpad: many inefficient code paths have been identified and removed, some new infrastructure has been created and is being rolled out to make individual pages faster, and we've massively increased the diagnostic data we get when things go wrong. We've introduced facilities for responding more rapidly to issues in the software (but they have to be rolled out across the system), and I hope that over the next 4 months we'll reach the first of my performance goals: any webpage in Launchpad completing its rendering within the target time 99% of the time. (Note that we already meet this goal if you measure the whole system, but this is biased by some pages being very frequently hit and also being very small.)

13 June 2010

Gunnar Wolf: World Naked Bike Ride 2010 Mexico

For the second time (the first time was in 2008; I didn't join in 2009 as I travelled to Nicaragua on that date), I took part in the World Naked Bike Ride. The WNBR is a global effort, where people in ~150 cities all over the world go cycling nude on the streets of our towns, with varied demands, including: I love my bike! One of the things I most like about WNBR is its diversity. Not everybody goes for the same reasons. As people who read me often will know, I took part because I believe (and act accordingly!) that the bicycle is the best, most efficient vehicle in by far most of the situations we face day to day, but we need to raise awareness in everybody that the bicycle is just one more vehicle: on one side, we have the right to safely ride on the streets, like any other vehicle. On the other side, we must be responsible, safe drivers, just as we want car drivers to be. Ok, and I will recognize it before anybody complains that I sound too idealistic: I took part in the WNBR because it is _tons_ of fun. This year, we were between 300 and 500 people (depending on whom you ask). Compared to 2008, I felt less tension, more integration, more respect within the group. Of course, it is only natural in the society I live in that most of the participants were men, but the proportion of women really tends to even out. Also, many more people joined fully or partially in the nude (as nudity is not required, it is just an invitation). There was a great display of creativity, people painted with all kinds of interesting phrases and designs, some really beautiful. Oh, one more point, important to me: this is one of the best ways to show that we bikers are not athletes or anything like that. We were people ranging from very thin to quite fat, from very young to quite old. And that is even more striking when we show our whole equipment. If we can all bike around... so can you! Some links, with obvious nudity warnings in case you are offended by looking at innocent butts and similar stuff: As for the sad, stupid note: 19 cyclists were placed under arrest in Morelia, Michoacán because of faltas a la moral (transgressions against morality), an ill-defined and often abused concept. Also, by far, most of the comments I have read from people on the media, as well as most questions we had by reporters before or after the ride, were either "why are you going nude?" (because that's the only way I'll get your attention!) or "But many people were not nude!" (nudity is not a requirement but only an option).

16 May 2010

Thomas Koch: tnt is not topgit

As I've already written, I'm working on an alternative to topgit. I made a first attempt in perl some weeks ago, but gave up after some frustrating hours. Yesterday I started again in python and had a very nice time putting together the groundwork and the first two commands.
It may be noted that I have no previous programming experience in either perl or python!
By now, I can create a patchset branch and add a patch branch to it. There's still a lot to do. For my talk at the Debian Mini Conference in Berlin next month I'd like to be able to update patch branches, export patchsets and give a status summary.
Maybe I can already find somebody who's interested in joining me with this project? The code is in my github account, however the name will most probably change.
One reason that I've been much faster in python is the fantastic python-git library. I can only recommend it!
In other news: I'm searching for a couch to surf in Berlin from June 7-12. I prefer couchsurfing over hotels mostly to get to know nice people around the world. Please contact me if you'd like to host me for a night or two. (thomas at koch punkt ro)

2 April 2010

Thomas Koch: Design document for a patch management system on a DVCS

Dear friends of Debian,

this is my first post to Planet Debian. - The planet with the most geeky registration procedure in the known universe!

I proposed an alternative to topgit some days ago on the vcs-pkg.org list. Martin asked (and encouraged) me to give a better explanation of the idea, which I'll hereby try. Sorry for not giving any drawings, but I'm totally incapable of anything graphical.

Hopefully, I'll manage to come to the Debian Miniconf in Berlin. Then we could discuss the idea further and maybe even start implementing it. (Somebody would need to help me with my first steps in Perl then...)

The following text is available on github. Please help me expand it!

Design document for a patch management system on a DVCS
Requirements

The system to implement manages patchsets. A patchset is a set of patches with a tree-ish dependency graph between the patches. There's one distinct root of this dependency graph. Patches are managed as branches, with each branch representing a patch. Modification of a patch is done by a commit to the respective branch. A branch representing a patch as part of a patchset is called a patch branch. The patch of a patch branch is created as the diff between the root of the patch branch and its head. The most important management methods are:
  • Export a patchset in different formats
    • quilt
    • a merged commit of all patches
    • a line of commits with each commit representing one patch
  • Update a patchset against an updated root.
  • Copy a patchset
  • Delete a patchset from direct visibility while preserving all history about it
  • Hide and unhide a patchset from direct visibility
Additional requirements:
  • The system should be implementable on top of GIT, Mercurial and eventually Bazaar.
  • The system must easily cope with multiple different and independent patchsets.
  • All information about a patchset must be encoded in one distinct branch. Publishing this one branch must be sufficient to allow somebody else to recreate the patchset with all of its patch branches.
  • The system should not rely on the presence of hooks.
  • The system should not require the addition of management files in patch branches (like .topmsg and .topdeps in topgit)
  • The system must be easy to understand for a regular user of the underlying DVCS.
  • The implementation may allow a patchset to depend on another patchset(s).

Implementation

patchset meta branch

A patchset meta branch holds all information about one patchset. First, it holds references to the top commits of all patch branches, in the form of parent references of commits. Thus pushing the patchset meta branch automatically also pushes all commits of all patch branches. Secondly, the patchset meta branch contains meta information about the patchset. This meta information is:
  • The names of all patch branches together with the most recent commit identifier of a particular patch branch. Let's save this information in a file called branches.
  • A message for each patch branch that explains the patch. These messages can be saved in the file tree as msg/$PATCH-BRANCH-NAME
  • References to the dependencies of the patch (other patches of the same patchset or the root of the patchset). This is also encoded in the file branches.
Since the patchset meta branch holds all this information, it is possible to delete all patch branches and recreate them from it. Although the commits of the patchset meta branch hold references to the patch branches, its file tree does not need to contain any files from the referenced patches. This may confuse the underlying DVCS, but the patchset meta branch is not meant to be directly inspected.
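In git terms, the meta branch could be built with a merge-style commit that lists every patch branch head as an additional parent. A minimal sketch under these assumptions (the meta branch name meta/debian is hypothetical; the patch branch names are taken from the example below):
# stage the branches file and the msg/ tree, then record a commit whose
# parents are the previous meta commit plus every patch branch head;
# pushing meta/debian then carries all patch branch commits with it
git add branches msg/
TREE=$(git write-tree)
COMMIT=$(git commit-tree "$TREE" \
    -p refs/heads/meta/debian \
    -p refs/heads/debian/use-debian-jars-in-build-xml \
    -p refs/heads/upstream-jira/HDFS-1234 \
    -m "update patchset meta data")
git update-ref refs/heads/meta/debian "$COMMIT"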

The branches file

A branches file for a fictive patchset could look like:
# patch branches without an explicit dependency depend on the root of the
# patchset tree
# A Root can be given as either a fixed commit (seen here), a branch or a tag.
# A fixed commit or tag is useful to maintain a patchset against an older
# upstream version
ROOT: 6a8589de32d490806ab86432a3181370e65953ca
# A tag as a dependency
#ROOT: upstream/0.1.2
# A branch as a dependency
#ROOT: upstream
# A regular patch with its name and last commit
BRANCH: debian/use-debian-jars-in-build-xml 4bab542c261ff1a1ae87151c3536f19ef02d7937
# two other regular patches
BRANCH: upstream-jira/HDFS-1234 a8e4af13106582ca1bfbbcaeb0537f73faf46d87
BRANCH: upstream-jira/MAP-REDUCE-007 e3426bcbcb2537478f851edcf6eb04b34f6c7106
# This patch depends on the above two patches
# The sha1 below the dependency patches references a merge commit of the two
# dependencies
BRANCH: upstream-jira/HDFS-008 517851aa829d77e09bc5e59985fed1b0aa339cc6
DEPENDENCIES:
  upstream-jira/HDFS-1234
  upstream-jira/MAP-REDUCE-007
    cc294f2e4773c4ff71efb83648a0e16835fca841
# A patch branch that belongs to the patchset, but won't get exported (yet)
BRANCH: upstream-jira/HDFS-9999 74257905adf5a4689bc5e59985fed1b0aa339cc6
BRANCH-FLAGS: noexport
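A minimal sketch of a parser for this format, assuming exactly the keywords shown above; the function name and the returned data structure are illustrative:

def parse_branches(text):
    # Returns (root, patches); patches maps each branch name to its last
    # commit, its dependencies and its flags.
    root, patches, current = None, {}, None
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blank lines and comments
        if stripped.startswith("ROOT:"):
            root = stripped.split(":", 1)[1].strip()
        elif stripped.startswith("BRANCH:"):
            name, commit = stripped.split(":", 1)[1].split()
            current = patches[name] = {"commit": commit, "deps": [], "flags": []}
        elif stripped.startswith("BRANCH-FLAGS:"):
            current["flags"] += stripped.split(":", 1)[1].split()
        elif line.startswith(" ") and current is not None:
            # Indented lines under DEPENDENCIES: name the dependencies; a
            # trailing commit id references the merge of the dependencies.
            current["deps"].append(stripped)
    return root, patches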



16 September 2009

Pete Nuttall: How Random is xkcd?

I was reading xkcd by clicking on the random link when I noticed that the same cartoons were coming up again and again. I was wondering if this was confirmation bias on my part or a duff random number generator on the server's part. Randall Munroe is a science geek, and I figured what he would do is test this idea... One Python script and roughly 12,000 requests later, I had a file full of numbers and started trying to remember some statistics. I threw together a quick bit of Python to work out the mean and standard deviation (it's here). The mean value is 171.040683155 and the standard deviation is 97.4155602788. If the numbers were uniformly distributed from 1 to 361, the expected mean would be 181.0 and the expected standard deviation would be 104.211323761. So I was lost in thought for a while. However, the following quick check threw some light on the difference.
>>> data = [int(x) for x in open('numbers.data')]
>>> for x in xrange(0, 361):
...   if x not in data:
...     print x
... 
0
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
So comics numbered above 337 don't appear. And recomputing the expected values for a uniform distribution from 1 to 337 gives a mean of 169.0 and a standard deviation of 97.2830920561, which is about the mean and standard deviation the data gives. I'm now waiting for someone who actually knows stats to correct me where I went wrong. The conclusion? I suffer from confirmation bias :-(. For those who like pretty pictures, here is a chart of frequency against comic number, courtesy of the Google Charts API and pygooglechart (code here).
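The linked script doesn't survive in this copy, but a minimal reconstruction of the calculation (written for present-day Python 3 rather than the Python 2 of the original, and assuming one comic number per line in numbers.data) would be:

import math

data = [int(x) for x in open('numbers.data')]

mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)
print("mean:", mean)
print("standard deviation:", math.sqrt(variance))

# For comparison, a discrete uniform distribution over 1..n has
# mean (n + 1) / 2 and standard deviation sqrt((n**2 - 1) / 12);
# for n = 361 that gives 181.0 and about 104.21 respectively.
n = 361
print("expected mean:", (n + 1) / 2)
print("expected stddev:", math.sqrt((n ** 2 - 1) / 12))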

28 July 2008

Neil Williams: Emdebian status update

over 96% successful
243 packages built successfully.
8 packages failed to cross build.

Failing packages
  1. cups
    http-addr.c: In function 'httpGetHostByName':
    http-addr.c:464: error: invalid 'asm': invalid operand for code 'w'
    http-addr.c:464: error: invalid 'asm': invalid operand for code 'w'

    history

  2. gtk+2.0 - cairo-directfb problems.
    history
    The cross-build keeps finding /usr/arm-linux-gnu/lib/libcairo.so first so it misses the extra symbols provided in /usr/arm-linux-gnu/lib/libcairo-directfb/lib/libcairo.so. Hopefully, this can be resolved by judicious hackery of LDFLAGS and LD_LIBRARY_PATH.

  3. ncurses

    /trunk/n/ncurses/trunk/ncurses-5.6+20080713/ncurses/curses.priv.h:1417: warning: parameter names (without types) in function declaration
    /trunk/n/ncurses/trunk/ncurses-5.6+20080713/ncurses/curses.priv.h:1418: error: expected '=', ',', ';', 'asm' or '__attribute__' before '_nc_to_widechar'

    history

  4. ntp - same assembly error as cups
    authkeys.c: In function 'authencrypt':
    authkeys.c:445: error: invalid 'asm': invalid operand for code 'w'
    authkeys.c:445: error: invalid 'asm': invalid operand for code 'w'

    history

  5. slang2 - confusing build behaviour - outside a chroot, files are put into a directory called elfobjs. Inside a chroot, files are put into a directory called elfamd64objs (and presumably that would change according to the BUILD architecture, which is plain insane).
    history


The assembly errors look spooky, and the cairo and slang2 errors could be made much simpler if the upstream builds were sane. (Honestly, why can't we just have /usr/lib/libcairo-directfb.so?)

Some other packages fail to build, e.g. gcc-4.2, but we don't need those anymore so I'm ignoring those.

14 July 2008

Neil Williams: State of play in Emdebian ARM

Autobuilder Summary:
138 packages built successfully. 19 packages failed to cross build. 2 packages are tagged for a manual upload. (Dependency issues - rather than break the archive, these packages will not be uploaded until relevant dependencies have been fixed and uploaded.)
Stats
87% built OK
12% failed
Notes:
  1. All these packages built previously and fail at the latest versions in Debian Sid. Many could be upstream errors. All exist in Emdebian at previous versions (only one failure is known to contain bugs in the current versions in Emdebian, and this only affects GUI installations; two other GUI packages contain bugs but build OK).
  2. Some of the packages in the repository (including some of the failures) are not used in *any* of the default emsandbox configurations. Two need to be removed from the repository completely.
  3. Some fixes are waiting for updates in Debian, either new uploads migrating to mirrors or pending fixes awaiting upload by me. (Probably tomorrow.)
  4. The history of each package is now available via the Emdebian Autobuilder Report. The package name links to the history of previous build attempts for this package. The version string links to the repository data for the current version in Emdebian. The result of the build links to the build log.
  5. A comparison with Debian unstable is constantly updated.
  6. 254 source packages exist in Emdebian but only 137 are currently built by the autobuilder. The rest will be built when new versions are uploaded to Debian (in some cases, that won't be until after Lenny). If I get time, I'll add build logs for these packages.
  7. The repository contains a fair number of packages that we don't currently use. This is a reflection of the problems of determining the "path" through the dependencies when starting at the bottom (glibc, uClibc, etc.). Bear this in mind when considering new architectures and new package sets. The process involves a lot of trial and error, particularly as cross-building potentially alters the dependency path at each iteration.

Problems
  1. libselinux and libsepol have just been uploaded with a bug fix that is needed for e2fsprogs and file cross-builds, amongst others. Waiting for the ARM mirror to receive the new builds, hopefully tomorrow.
  2. gtk+2.0 fails to cross-build because the patches now try to build the udeb, which comes up against a bug in dpkg-cross. I'll be uploading the new version (including a couple of other bug fixes) tonight; final testing is currently in progress. The Gtk package also needs some work to run /usr/lib/libgtk2.0-0/update-gdkpixbuf-loaders so that the icons can be read in the GUI.
  3. pango1.0 also needs a fix to update the pango modules. This is usually done at build time but it means running a cross-built binary. It is a minor task and not CPU intensive so I plan to do this in postinst. Without this fix, no text is rendered in the GUI, only empty glyphs.
  4. xfonts-base is way too large still. More work is needed here to optimise just what is meant by the 'fixed' font for X: precisely which font files can be removed, which might be needed outside C or English locales, etc.
  5. Other packages not currently cross-building include: (NOTE: Some of these build logs are VERY large.)
    • adduser - needs to be removed from the repository, it is perl and we implement it via busybox.
    • cups cupsys - /usr/include/pthread.h:653: warning: '__regparm__' attribute directive ignored.
    • gcc-4.2 - needs to be removed from the repository, it no longer builds libgcc1 which was the main reason for cross-building it.
    • glib2.0 - more generated content: /bin/sh: line 2: /usr/bin/glib-genmarshal: No such file or directory
    • gnomevfs - hasn't built for ages and isn't usable anyway due to dependency issues. Currently preventing the use of gpe-filemanager. Due to be replaced by GIO in Debian.
    • libusb - build failure, so far resistant to usual fixes.
    • ncurses - build failure - uses gcc in wide char support build.
    • ntp - build failure - assembly error. authkeys.c:445: error: invalid 'asm': invalid operand for code 'w'
    • pcre3 - build failure - apparently simply unresolved symbols resistant to usual fixes.

    • slang2 - possibly needs a patch update, tries to build -pic archives and then fails.
    • xorg-server - build failure: make[4]: *** No rule to make target `-L/usr/arm-linux-gnu/lib', needed by `Xprt'. Stop.


Of the problems above, only problems 2, 3 and 4 (gtk, pango and xfonts-base) are "critical" as these are the only ones where the current packages are sufficiently broken that the new version is essential before Emdebian can make a "release" of a series of three root filesystems for ARM. Sizes are still larger than desired - a fix for that requires changes in glibc which I hope to investigate during DebConf.

Any help on the above issues is appreciated.

Problem-solving:
$ emsource -c $package
$ cd /path/to/$package
$ emdebuild

If you have SVN access, use 'emdebuild --svn-only' to commit your changes (even if your changes don't solve all the problems).

Without SVN access, post the result of 'svn diff ../emdebian*' to this list (or 'svn diff ../debian*' if you have added a patch for the upstream code to debian/patches/).

Platforms:
All the above is based solely on Emdebian ARM for balloon3:
Linux balloon 2.6.25.2-pxa270 #1 Sun May 18 22:38:11 BST 2008 armv5tel unknown
balloon3 in CUED case
http://balloonboard.org/gallery/300/balloon3-0v1-fpga.jpg

AFAICT no other ARM platforms have been tested. :-(

I have a few fixes for the balloon3-config package, which also needs an update in Emdebian. I've had some feedback from the touchscreen driver upstream that we might be trying to use the wrong device (/dev/input/mice is the wrong target for the symlink; we can set whatever device we want in /etc/X11/xorg.conf and the driver will use that in preference to /dev/event0). As this is device-specific, /etc/X11/xorg.conf will be put into balloon3-config, which is available via Emdebian SVN.

26 April 2008

Lior Kaplan: My GPG hall of shame


During FOSDEM’s key signing party I had a few people telling me they didn’t get my signatures on their keys. It seems that although I had already signed them, there was a problem with sending the signatures (probably my local mail settings, or my ISP thinking I’m spamming). After a few reminders from people, I finally got to do the signing for the FOSDEM party (including some people who gave me slips). It seems some people follow carefully who didn’t sign their key… I hope now everyone will be satisfied (: If you didn’t get my signature yet, please let me know… I don’t want to hear the same complaints next year (that wasn’t fun ): ). For obvious reasons I can only re-send you an existing signature I have. The fun part of the signing party is to meet people and ask them questions based on their e-mail addresses. Even better is to thank them for their work on free software I use. This year I thanked Patrick Brunschwig, the Enigmail author. I’d also like to thank Thijs Kinkhorst from SquirrelMail and Eike Rathke from OpenOffice. It was fun to meet some fellow Debian Developers I didn’t know from DebConfs.

8 August 2006

Ross Burton: Cornish Bliss (part 2)

Now, where was I... Carbis Bay Ah yes. After the horrendously grim weather had passed, the weather improved and we headed for the beach. On the way down we commented on how this was the classic British burning weather: bright sunshine, a strong breeze, and occasional clouds combine to burn skin without even feeling that hot. Of course knowing this meant nothing; we were too distracted with purchasing pasties and drink to think about putting a decent amount of sunblock on. Steve Relaxing Obviously the main thing to do at the beach, after we'd sat down, not applied sunblock, and scoffed a pasty, was to dig a hole. A huge hole. Spades were purchased and we took turns to help Pete dig The Hole. Digging Diggers Astute readers will notice the inevitable outcome of saying this is burning weather, not putting enough sunblock on, and digging a hole (an activity that results in the back being exposed to the sun). Ouch. After the hole had been dug we had to fill it in again to avoid trapping small children in it. Obviously this led to a series of hilarious scenes involving burying Pete up to his chest, modelling breasts and a penis, and so on. Finally the hole was flat again, at which point an impromptu long jump sand pit was arranged. I came first in the long jump, and failed miserably at the triple jump, although I swear my technique was best (it's all in the wrists). Pete Next was to explore the coastal path in the opposite direction towards Porthkidney beach. The beach is pretty huge by my standards, and due to the lack of facilities (no close car park, shops, toilets, and so on) it's almost deserted: there were a few other people there with dogs (the other local beaches are dog-free in summer) and that's about it. Googling to confirm the name of the beach reveals that there is a history of naturism and "inappropriate gay activity", but we didn't encounter any of that. ;) Progression Ross Footprints The coastal path was great, far rougher than the walk to St Ives (often just a foot-wide cutting in the ground), steep in places, and generally running very close to the cliff edge. The views were great, but I always think what a horror paths like these would be in winter, with the full force of the Atlantic winds pounding against the cliffs. As a finale it turns out that the coastal path follows the cliff all the way along the back of the beach, which would easily be another twenty minutes of walking to reach sand. There is a shortcut down some stone stairs to the beach, but we arrived at high tide and the bottom of the stairs (well, rocks) was a foot deep in water. Wading up to the beach was a fitting end to the walk, and made the beach feel like our own little desert island! Cliff Steps Wading To Land Limpets! Ross I'll have to explain the expression on Vicky in the above photo. As a child, when Vicky went to visit her father in Devon they used to go to the beach and spend the day annoying the wildlife: chasing crabs, kicking limpets off rocks and so on. When Vicky noticed that the rocks at the bottom of the cliff were covered in limpets, she shouted "limpets!" with a manic expression and proceeded to prod them frantically. After lots of sitting around and digging tunnels, we headed for the dunes for a spot of dune diving. This involves running at top speed down the dunes and throwing yourself into the sand at the bottom. Ah, the simple pleasures in life!
Porthkidney Beach Dune Diving After my dive I ran back up the hill in the manner of a mad man, arms outstretched to Vicky as I collapsed in front of her, gasping "It's". "What?", was the confused reply. This is terrible, I really need to get Vicky to watch the Best Of Monty Python DVD we have somewhere... It's! Possibly more to write, but Lost is on, so I'm off for now.

21 March 2006

Uwe Hermann: Computer Forensics Wiki

For everyone who might be interested in computer forensics, data recovery (e.g. ext2/vfat undelete), file system internals, digital evidence, or just playing around with dd, The Sleuth Kit or similar tools: check out Simson Garfinkel's Forensics Wiki, which gathers information on the above topics and many more. The content is licensed under the Creative Commons Attribution-ShareAlike 2.5 license. Contributors welcome!
