Search Results: "srivasta"

1 February 2016

Lunar: Reproducible builds: week 40 in Stretch cycle

What happened in the reproducible builds effort between January 24th and January 30th:

Media coverage Holger Levsen was interviewed by the FOSDEM team to introduce his talk on Sunday 31st.

Toolchain fixes Jonas Smedegaard uploaded d-shlibs/0.63 which makes the order of dependencies generated by d-devlibdeps stable across locales. Original patch by Reiner Herrmann.

Packages fixed The following 53 packages have become reproducible due to changes in their build dependencies: appstream-glib, aptitude, arbtt, btrfs-tools, cinnamon-settings-daemon, cppcheck, debian-security-support, easytag, gitit, gnash, gnome-control-center, gnome-keyring, gnome-shell, gnome-software, graphite2, gtk+2.0, gupnp, gvfs, gyp, hgview, htmlcxx, i3status, imms, irker, jmapviewer, katarakt, kmod, lastpass-cli, libaccounts-glib, libam7xxx, libldm, libopenobex, libsecret, linthesia, mate-session-manager, mpris-remote, network-manager, paprefs, php-opencloud, pisa, pyacidobasic, python-pymzml, python-pyscss, qtquick1-opensource-src, rdkit, ruby-rails-html-sanitizer, shellex, slony1-2, spacezero, spamprobe, sugar-toolkit-gtk3, tachyon, tgt.
The following packages became reproducible after getting fixed:
Some uploads fixed some reproducibility issues, but not all of them:
  • gnubg/1.05.000-4 by Russ Allbery.
  • grcompiler/4.2-6 by Hideki Yamane.
  • sdlgfx/2.0.25-5 fix by Felix Geyer, uploaded by Gianfranco Costamagna.
Patches submitted which have not made their way to the archive yet:
  • #812876 on glib2.0 by Lunar: ensure that functions are sorted using the C locale when giotypefuncs.c is generated.
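As an illustration of why such patches pin the locale: sort order depends on the current locale's collation rules, so generated files can differ between build environments unless the C locale is forced. On a typical glibc system:

$ printf 'a\nB\n' | LC_ALL=en_US.UTF-8 sort
a
B
$ printf 'a\nB\n' | LC_ALL=C sort
B
a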

diffoscope development diffoscope 48 was released on January 26th. It fixes several issues introduced by the retrieval of extra symbols from Debian debug packages. It also restores compatibility with older versions of binutils which do not support readelf --decompress.

strip-nondeterminism development strip-nondeterminism 0.015-1 was uploaded on January 27th. It fixes the handling of signed JAR files, which are now ignored to keep their signatures intact.

Package reviews 54 reviews have been removed, 36 added and 17 updated in the previous week. 30 new FTBFS bugs have been submitted by Chris Lamb, Michael Tautschnig, Mattia Rizzolo, and Tobias Frost.

Misc. Alexander Couzens and Bryan Newbold have been busy fixing more issues in OpenWrt. Version 1.6.3 of FreeBSD's package manager pkg(8) now supports SOURCE_DATE_EPOCH. Ross Karchner did a lightning talk about reproducible builds at his work place and shared the slides.
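The SOURCE_DATE_EPOCH convention mentioned above is simple to adopt: a build consumes the variable, when set, instead of the current clock. A minimal sketch for a shell-driven build (GNU date syntax assumed):

# Use SOURCE_DATE_EPOCH for any embedded timestamp, falling back to "now".
BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH:-$(date +%s)}" '+%Y-%m-%dT%H:%M:%SZ')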

24 January 2016

Lunar: Reproducible builds: week 39 in Stretch cycle

What happened in the reproducible builds effort between January 17th and January 23rd:

Toolchain fixes James McCoy uploaded subversion/1.9.3-2 which removes -Wdate-time from the CPPFLAGS passed to swig, enabling several packages to build again. The switch made in binutils/2.25-6 to use deterministic archives by default had the unfortunate effect of breaking a seldom used feature of make. Manoj Srivastava asked on debian-devel about the best way to communicate the changes to Debian users. Lunar quickly came up with a patch that displays a warning when Make encounters deterministic archives. Manoj made it available in make/4.1-2 together with a NEWS file advertising the change. Following Guillem Jover's comment on the latest patch to make mtimes of packaged files deterministic, Daniel Kahn Gillmor updated and extended the patch, adding the --clamp-mtime option to GNU Tar. Mattia Rizzolo updated texlive-bin in the reproducible experimental repository.
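To illustrate the --clamp-mtime semantics discussed above: used together with --mtime, it lowers only those member timestamps that are newer than the given time, leaving older mtimes untouched. A sketch, assuming a tar with the patch applied:

# Clamp any mtime newer than SOURCE_DATE_EPOCH; older files keep theirs.
tar --mtime="@${SOURCE_DATE_EPOCH}" --clamp-mtime -cf data.tar .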

Packages fixed The following packages became reproducible after getting fixed:
Some uploads fixed some reproducibility issues, but not all of them:
Patches submitted which have not made their way to the archive yet:

reproducible.debian.net Transition from reproducible.debian.net to the more general tests.reproducible-builds.org has started. More visual changes are coming. (h01ger) A plan on how to run tests for F-Droid has been worked out. (hc, mvdan, h01ger) A first step has been made by adding a Jenkins job to setup an F-Droid build environment. (h01ger)

diffoscope development diffoscope 46 was released on January 19th, followed up by version 47 made available on January 23rd. Try it online at try.diffoscope.org! The biggest visible change is the improvement to ELF file handling. Comparisons are now done section by section, using the most appropriate tool and options to get meaningful results, thanks to Dhole's work and Mike Hommey's suggestions. Also suggested by Mike, symbols for IP-relative ops are now filtered out to remove clutter. Understanding differences in ELF files belonging to Debian packages should also be much easier, as diffoscope will now try to extract debug information from the matching dbgsym package. This means the objdump disassembly should include line numbers for packages built with recent debhelper, as long as the associated debug package is in the same directory. As diff tends to consume huge amounts of memory on large inputs, diffoscope has a limit in place to prevent crashes. diffoscope used to display a difference every time the limit was hit. Because this was confusing when there were actually no differences, a hash is now computed internally so that a difference is only reported when one exists. Files in archives and other container members are now compared in their original order. This should not matter in most cases but overall gives more predictable results. Debian .buildinfo files are now supported. Amongst other minor fixes and improvements, diffoscope will now properly compare symlinks in directories. Thanks to Tuomas Tynkkynen for reporting the problem.
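Trying the new ELF handling locally is a one-liner, assuming diffoscope 47 is installed (file names here are hypothetical; --html writes the report to a file):

$ diffoscope --html report.html package_1.deb package_2.deb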

Package reviews 70 reviews have been removed, 125 added and 33 updated in the previous week, gcc-5 amongst others. 25 FTBFS issues have been filed by Chris Lamb, Daniel Stender, and Martin Michlmayr.

Misc. The 16th FOSDEM will happen in Brussels, Belgium on January 30-31st. Several talks will be about reproducible builds: h01ger about the general ecosystem, Fabian Keil about the security oriented ElectroBSD, Baptiste Daroussin about FreeBSD packages, Ludovic Courtès about Guix.

17 January 2016

Lunar: Reproducible builds: week 38 in Stretch cycle

What happened in the reproducible builds effort between January 10th and January 16th:

Toolchain fixes Benjamin Drung uploaded mozilla-devscripts/0.43 which sorts the file list in preferences files. Original patch by Reiner Herrmann. Lunar submitted an updated patch series to make timestamps in packages created by dpkg deterministic. To ensure that the mtimes in data.tar are reproducible, with the patches, dpkg-deb uses the --clamp-mtime option added in tar/1.28-1 when available. An updated package has been uploaded to the experimental repository. This removed the need for a modified debhelper as all required changes for reproducibility have been merged or are now covered by dpkg.

Packages fixed The following packages have become reproducible due to changes in their build dependencies: angband-doc, bible-kjv, cgoban, gnugo, pachi, wmpuzzle, wmweather, wmwork, xfaces, xnecview, xscavenger, xtrlock, virt-top.
The following packages became reproducible after getting fixed:
Some uploads fixed some reproducibility issues, but not all of them:
Untested changes:

reproducible.debian.net Once again, Vagrant Cascadian is providing another armhf build system, allowing 6 more armhf builder jobs to run right there. (h01ger) Stop requiring a modified debhelper and adapt to the latest dpkg experimental version by providing a predetermined identifier for the .buildinfo filename. (Mattia Rizzolo, h01ger) New X.509 certificates were set up for jenkins.debian.net and reproducible.debian.net using Let's Encrypt. Thanks to GlobalSign for providing certificates for the last year free of charge. (h01ger)

Package reviews 131 reviews have been removed, 85 added and 32 updated in the previous week. FTBFS issues filed: 29. Thanks to Chris Lamb, Mattia Rizzolo, and Niko Tyni. New issue identified: timestamps_in_manpages_added_by_golang_cobra.

Misc. Most of the minutes from the meetings held in Athens in December 2015 are now available to the public.

13 August 2014

Ian Donnelly: The New Deal: ucf Integration

Hi Everybody, A few days ago I posted an entry on this blog called dpkg Woes where I explained that, due to a lack of response, we were abandoning our plan to patch dpkg for my Google Summer of Code project, and I explained that we had a new solution. Well, today I would like to tell you about that solution. Instead of patching dpkg, which would take a long time and seemed like it would never make it upstream, we have added some new features to ucf which will allow my Google Summer of Code project to be realized.

If you don't know, ucf, which stands for Update Configuration File, is a popular Debian package whose goal is to preserve user changes to config files. It is meant to act as an alternative to considering a configuration file a conffile on systems that use dpkg. Instead, package maintainers can use ucf to handle these files in a conffile-like way. Where conffiles must work on all systems, because they are shipped with the package, configuration files that use ucf can be handled by maintainer scripts and can vary between systems. ucf exists as a script that allows conffile-like handling of non-conffile configuration files and allows much more flexibility than dpkg's conffile system. In fact, ucf even includes an option to perform a three-way merge on files it manages, though it currently only uses diff3 for the task. As you can see, ucf has a goal that, while different from ours, seems naturally compatible with our goal of automatic conffile merging.

Obviously, since ucf is a different tool than dpkg, we had to re-think how we were going to integrate with it. Luckily, integration with ucf proved to be much simpler than integration with dpkg. All we had to do was add a generic hook to attempt a three-way merge using any tool created for the task, such as Elektra and kdb merge. Felix submitted a pull request with the exact code almost a week ago, and we have talked with Manoj Srivastava, the developer of ucf, and he seemed to really like the idea. The only changes we made are to add an option for a three-way merge command; if one is present, the merge is attempted using the specified command. It's all pretty simple really.

Since we decided to include a generic hook for a three-way merge command instead of an Elektra-specific one (which would be less open and would create a dependency on Elektra), we also had to add functionality to Elektra to work with this hook. We ended up writing a new script, called elektra-merge, which is now included in our repository. All this script does is act as a liaison between the ucf --three-way-merge-command option and Elektra itself. The script automatically mounts the correct files for theirs and base and dest using the new remount command. Since the only parameters that are passed to the ucf merge command are the paths of ours, theirs, base and result, we were missing vital information on how to mount these files. Our solution was to create the remount command, which mirrors the backend configuration of an existing mountpoint to create a new mountpoint using a new file. So if ours is mounted to system/ours using ini, kdb remount /etc/theirs system/theirs system/ours will mount /etc/theirs to system/theirs using the same backend as ours. Since theirs, base, and result should all have the same backend as ours, we can use remount to mount these files even if all we know is their paths. Now, package maintainers can edit their scripts to utilize this new feature.
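As a rough sketch of what the maintainer-script side might look like (the package name and paths here are hypothetical; the --three-way-merge-command option is the one added by the pull request described above):

# In a postinst: ask ucf to attempt an automatic three-way merge via
# elektra-merge before falling back to its normal conflict handling.
ucf --three-way --three-way-merge-command elektra-merge \
    /usr/share/foo/foo.conf /etc/foo.conf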
If they want, package maintainers can specify a command to use to merge files using ucf during package upgrades. I will soon be posting a tutorial about how to integrate this feature into a package and how to use Elektra in your scripts in order to allow for automatic three-way merges during package upgrades. I will post a link to the tutorial here once it is published. Sincerely,
Ian S. Donnelly

31 October 2013

Russell Coker: Links October 2013

Wired has an interesting article by David Samuels about the Skybox, a small satellite (about the size of a bar fridge) that is being developed to provide cheap photographs of the Earth from low orbit [1]. Governments of major countries will probably try to limit what they do, but if they can prove that it's viable then someone else from a different jurisdiction will build similar satellites. Alice Dreger gave an interesting TED talk about the various ways that people can fall outside the expected genetic sex binary [2]. The short film Love is All You Need has an interesting way of showing the way that non-straight kids are treated [3]. The Guardian has an interesting article by Ranjana Srivastava about doctors and depression [4]. Don Marti wrote an interesting post about believing bullshit as a way of demonstrating group loyalty [5]. Zacqart Adam Green wrote an interesting article for the Falkvinge blog about the way that the Ouya gaming console can teach children about free software and political freedom [6]. Read more at www.ouya.tv [7]. It's a pity that the Ouya is not conveniently sold outside the US and the UK; with shipping it would probably cost a lot more than $99 in Australia. Tim Chevalier wrote an interesting post for Geek Feminism about the unintended consequences of some codes of conduct [8]. Tim Chevalier wrote an interesting Geek Feminism post about Wikipedia describing how the Neutral Point Of View is a way of representing the views of people in power [9]. Ramin Shokrizade wrote an interesting article for Gamasutra about Free 2 Play (F2P) techniques [10]. The concept of F2P games is that the game can be installed for free but requires regular small payments to make the game easier; apparently some people pay $3000 per year or more. The TED blog has an interesting interview with Jack Andraka, a teenager who invented a new test for pancreatic cancer (and also ovarian and lung cancer) that is cheaper, faster, and less invasive than other tests [11]. The blog post also has a link to Jack's TED talk.

19 September 2012

Stefano Zacchiroli: bits from the DPL for August 2012

DPL August report, posted on d-d-a a while ago (yep, I forgot to blog it until now! Sorry for the oldies).
Dear project members, August has been a month with a good deal of vacations for many of us, including yours truly. Therefore the monthly report of DPL activities will be briefer than usual. Which is good, as it'll leave all my readers more time to do NMUs and fix RC bugs!
Ongoing discussions
Assets
Core teams
Legal and RC fun
Hardware
See? It's been quick(er)! Talk to you here next month, with a much lower count of Wheezy RC Bugs on the horizon, hopefully.
Cheers.
PS the boring day-to-day activity log for August 2012 is available at master:/srv/leader/news/bits-from-the-DPL.txt.201208

16 August 2012

Raphaël Hertzog: Happy Birthday Debian! And memories of an old-timer

For Debian's birthday, Francesca Ciceri of the Debian Publicity team suggested that developers blog about their first experiences with Debian. I found this a good idea so I'm going to share my own early experience. It's quite different from what happens nowadays.

Before speaking of my early Debian experience, I have to set some context. In my youth, I had always been a Windows user and a fan of Bill Gates. That is, until I got Internet at home; at that point, I got involved in Usenet and made some friends there. One of those made me discover Perl, and it was somewhat of a revelation for me, who had only been programming in Visual Basic, Delphi or ObjectPal. Later the same friend explained to me that Perl worked much better on Linux and that Debian Linux installed it by default, so I should try this one. I had no idea of what Linux was, but given how I loved Perl, I was eager to try his advice. So I got myself a Tri-Linux CD with Debian/RedHat/Slackware on it and started the installation process (which involved preparing boot floppies). But I did not manage to get the graphical interface working despite lots of fiddling with XFree86's configuration file. So I ended up installing RedHat and used it for a few months. But since many of the smart guys in my Usenet community were Debian users, I persisted and finally managed to get it to work!

After a few months of usage, I was amazed at everything that was available for free and I wanted to give back. I filed my first bug report in July 1998, I created my first Debian packages in August 1998, and I got accepted as an official Debian developer in September 1998 (after a quick chat over the phone with Martin Schulze or James Troup; I never understood the name of my interlocutor on the phone, and I was so embarrassed to have to use my rusty English over the phone that I never asked). That's right, it took me less than 3 months to become a Debian developer (I was 19 years old back then). I learned a lot during those months by reading and interacting with other Debian developers. Many of those went away from Debian in the meantime but some of them are still involved (Joey Hess, Manoj Srivastava, Ian Jackson, Martin Schulze, Steve McIntyre, Bdale Garbee, Adam Heath, John Goerzen, Marco d'Itri, Phil Hands, Lars Wirzenius, Santiago Vila, Matthias Klose, Dan Jacobowitz, Michael Meskes, ...).

My initial Debian work was centered around Perl: I adopted dpkg-ftp (the FTP method for dselect) because it was written in Perl and had lots of outstanding bug reports. But I also got involved in more generic Quality Assurance work and tried to organize the nascent QA team. It was all really a lot of fun, I could take initiatives and it was clear to me that my work was appreciated. I don't know if you find this story interesting, but I had some fun digging through archives to find out the precise dates. If you want to learn more about what I did over the following years, I maintain a webpage for this purpose.


8 February 2012

C.J. Adams-Collier: SELinux on Wheezy

So, Collier Technologies LLC needs to pass annual audits to operate a certification authority recognized by the SoS. To this end, I'm working with the fine group of developers who maintain SELinux. It seems that the configuration of Xorg that I'm using while typing this here blog post does not have a policy set up for it in the Debian packages. Or if it does, I don't know enough about it to figure it out. I've been keeping logs and publishing them here: http://www.colliertech.org/federal/nsa/ I'll update this post as progress is made. [edit 20120608T1042] It looks like loading all .pp files (except alsa) makes X run:
cjac@foxtrot:/usr/share/selinux/default$ time sudo \
semodule -i `ls *.pp | grep -v -e 'base.pp' -e 'alsa.pp'`
real	0m24.148s
user	0m23.249s
sys	0m0.628s
I had to boot into single user mode and load the policies before switching to runlevel 2. To get the kernel args added to the grub command line, I modified /etc/default/grub to include this line:
cjac@foxtrot:/usr/share/selinux/default$ grep -i selinux /etc/default/grub
GRUB_CMDLINE_LINUX=" selinux=1 security=selinux"
Next steps: [edit 20120208T1305] It looks like the seinfo package has not been updated in the last 18 months.
cjac@foxtrot:/usr/src/git/debian/setools$ grep url .git/config
	url = git://git.debian.org/git/users/srivasta/debian/setools.git
cjac@foxtrot:/usr/src/git/debian/setools$ git log | head -4
commit 22a5d3e451d8a1e60a3c746466c865e63089a92a
Merge: fa238f0 149e283
Author: Manoj Srivastava <srivasta>
Date:   Tue Jul 20 23:10:06 2010 -0700
[edit 20120208T1346] Stephen tells me that the modules are persistent across re-boots.
> What's the best way to do this at boot?
You just do it once and it remains until/unless you remove it with
semodule -r.  No need to do it on each boot.  Normally it is done when
you install the policy package, but since your policy package apparently
didn't install all modules, I'm suggesting that you do so manually.

15 November 2010

Manoj Srivastava: Manoj: Dear Lazyweb: How do you refresh or recreate your kvm virt periodically?

Dear Lazyweb, how do all y'all using virts recreate the build machine setup periodically? I have tried and failed to get the qemu-make-debian-root script to work for me. Going through and redoing it from a netinst ISO is an option, but then I need debconf preseeding files, and I was wondering if there are some out there. And then there is the whole "Oh, by the way, upgrade from Squeeze to Sid, please" step. The less sexy alternative is going to the master copy and running a cron job to safe-upgrade each week, and re-creating any copy-on-write children. Would probably work, but I am betting there are less hackish solutions out there.

First, some background. It has been a year since I interviewed for the job I currently hold. And nearly 10 months since I have been really active in Debian (apart from DebConf 10). Partly it was trying to perform well at the new job, partly it was getting permissions to work on Debian from my employer. Now that I think I have a handle on the job, and the process for getting permissions is coming to a positive end, I am looking towards getting my Debian processes and infrastructure back up to snuff. Before the interregnum, I used to have a UML machine setup to do builds. It was generated from scratch weekly using cron, and ran SELinux strict mode, and I used to have an automated ssh based script to build packages, and dump them on my box to test them. I had local git porcelain to do all this and tag releases, in a nice, effortless work flow. Now, the glory days of UML are long gone, and all the cool kids are using KVM. I have set up a kvm box, using a netinst ISO (like the majority of the HOWTOs say). I used madduck's old /etc/network/interfaces setup to do networking using a public bridge (mostly because of how cool his solution was; virsh can talk natively to a bridge for us now) and I have NFS, SELinux, ssh, and my remote build infrastructure all done, so I am ready to hop back into the fray once the lawyers actually ink the agreements. All I have to do is decide on how to refresh my build machines periodically. And I guess I should set up virsh, instead of having a shell alias around kvm. Just haven't gotten around to that.
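For what it's worth, the "less sexy alternative" above fits in a few lines (paths and child names are hypothetical; newer qemu-img versions also want an explicit -F qcow2 for the backing format):

#!/bin/sh
# Weekly cron job sketch: upgrade the master image, then recreate the
# copy-on-write children so each build VM starts from a fresh base.
set -e
master=/var/lib/libvirt/images/sid-master.qcow2
# ... boot or chroot into $master and run an aptitude safe-upgrade ...
for child in build-sid build-squeeze; do
    qemu-img create -f qcow2 -b "$master" \
        "/var/lib/libvirt/images/$child.qcow2"
done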

25 August 2010

Manoj Srivastava: Manoj: Refreshing GNUPG keys sensibly

It has come up on the Planet recently, as well as on the gnupg-users mailing list: users need to refresh keys that they use, to get updated information on revocations and key expiration. And there are plenty of examples of simple additions to one's crontab to set up a key refresh. Of course, with me, things are rarely that simple. Firstly, I have my GNUPGHOME set to a non-standard location; and, secondly, I like having my Gnus tell me about signatures on mails to the Debian mailing lists, so I periodically sync debian-keyring.gpg into my GNUPGHOME. I add this as an additional keyring in my gpg.conf file, so that in normal operations Gnus has ready access to the keys; but I do not care to refresh all the keys in debian-keyring. I also prefer to trust and update keys in my keyring, so the commands grow a little complex. Also, I want to get keys for any signatures folks have kindly added to my key and uploaded to the key server (not everyone uses caff), so just refresh-keys does not serve. Linebreaks added for readability.
# refresh my keys
# Note how I have to dance around keyring specification
45      4    *  * 4   (/usr/bin/gpg2 --homedir ~/.sec --refresh-keys
    $(/usr/bin/gpg2 --options /dev/null --homedir ~/.sec
      --no-default-keyring --keyring pubring.gpg --with-colons
        --fixed-list-mode --list-keys | egrep '^pub' | cut -f5 -d: |
            sort -u) >/dev/null 2>&1)
# Get keys for new sigs on my keys (get my key by default, in case
# there are no unknown user IDs [do not want to re-get all keys])
44      4    *  * 5  (/usr/bin/gpg2 --homedir ~/.sec --recv-keys
    0xC5779A1C $(/usr/bin/gpg2 --options /dev/null --homedir ~/.sec
       --no-default-keyring --keyring pubring.gpg --with-colons
         --fixed-list-mode --list-sigs 0xC5779A1C | egrep '^sig:' |
            grep 'User ID not found' | cut -f5 -d: | sort -u) >/dev/null 2>&1)

28 March 2010

Manoj Srivastava: Manoj: Customer obsession: Early days at a new Job

I have been at Amazon.com for a very short while (I have only gotten one paycheck from them so far), but long enough for first impressions to have settled. Dress is casual, parking is limited. Cafeteria food is merely OK, and is not free. There is a very flat structure at Amazon. The front line work is done by "one-or-two pizza" teams (size measured by the number of large pizzas that can feed the team). Individual experiences with the company largely depend on what team you happen to end up with. I think I lucked out here. I get to work on interesting and challenging problems, at scales I had not experienced before. There is an ownership culture. Everyone, including developers, gets to own what they produce. You are responsible for your product, down to carrying pagers in rotation with others on your team, so that there is someone on call in case your product has a bug. RC (or customer impacting) bugs result in a conference call being invoked within 10-15 minutes, and all kinds of people and departments being folded in until the issue is resolved. Unlike others, I find the operations burden refreshing (I come from working as a federal government contractor). On-call pages are often opportunities to learn things, and I like the investigation of the current burning issue du jour. I also like the fact that I get to be my own support staff for the most part, though I have not yet installed Debian anywhere here. While it seems corny, customer obsession is a concept that pervades the company. I find it refreshing. The mantra that it's all about the customer experience is actually true and enforced. Whenever a tie needs to be broken on how something should work, the answer to this question is usually sufficient to break it. Most other places, the management was responsible for, and worried about, budgets for the department; this does not seem to be the case for lower to middle management here. We don't get infinite resources, but work is planned based on user experience, customer needs, and technical requirements, not following the drum beat of bean counters. The focus is on the job to be done, not the hours punched in. I can choose to work from home if I wish, modulo meetings (which one could dial in to, at a pinch). But then, I have a 5 mile, 12 minute commute. I have, to my surprise, started coming in to work at 7:30 in the morning (I used to rarely get out of bed before 9:30 before), and I plan on getting a bike and seeing if I can ride to work this summer. All in all, I like it here.

13 May 2009

Jonathan McDowell: Breaking the Web of Trust

With all the discussion about SHA-1 weaknesses and generation of new OpenPGP keys going on, there's some concern about how the web of trust will be affected. I'm particularly interested in the impact on Debian; while it's possible to add new keys and keep the old ones around, that hasn't worked so well for us with the migration away from PGPv3 keys. We still have 125 v3 keys left, many of them for users who also have a v4 key but haven't asked for the v3 key to be removed or responded to my email prodding them about it. I don't want to repeat that.

So if we're looking at key replacement we need to have some idea about where our Web of Trust currently stands, and what effect various changes might have on it. I managed to find the keyrings Debian shipped all the way back to slink and ran the keyanalyze and cwot stats against them. I then took the current keyring, pulled in all the updates for the keys in it (so that any signatures from newly generated keys would be included) and ran the stats again. Finally I took details of 12 key migrations (mostly from Debian Planet but also a couple of others I knew about) and calculated what the effect of removing each key would be. These stats are cumulative and I replaced the most well-connected (by centrality) keys first.

The results are below.

Date                      Total   SCS            Reachable        MSD      Centrality
1999-02-06 (slink)          228    36 (15.78%)     50 (21.92%)    2.9022
2000-01-03 (potato)         375   104 (27.73%)    180 (48.00%)    4.3382
2001-09-22 (woody)          948   538 (56.75%)    704 (74.26%)    4.7320   2008.6249
2005-05-28 (sarge/etch)    1106   883 (79.83%)    969 (87.61%)    3.3485   2074.6604
2007-12-04                 1191  1001 (84.04%)   1062 (89.16%)    3.1103   2113.3747
2009-01-18 (lenny)         1126   947 (84.10%)   1010 (89.69%)    3.0489   1941.2594
2009-04-04 (squeeze/sid)   1121   946 (84.38%)   1008 (89.91%)    3.0466   1936.9761
2009-05-06 (current)       1067   894 (83.78%)    958 (89.78%)    2.9670   1759.4363

Keyring                   Total   SCS            Reachable        MSD      Centrality
base                       1067   904 (84.72%)    959 (89.87%)    2.9640   1776.4389
update-93sam               1067   902 (84.53%)    958 (89.78%)    2.9734   1780.9874
update-joerg               1067   900 (84.34%)    958 (89.78%)    2.9776   1780.7578
update-aurel32             1067   898 (84.16%)    957 (89.69%)    2.9803   1779.2497
update-noodles             1067   896 (83.97%)    956 (89.59%)    2.9831   1777.8326
update-jaldhar             1067   896 (83.97%)    955 (89.50%)    2.9855   1779.9193
update-srivasta            1067   896 (83.97%)    955 (89.50%)    2.9904   1784.3382
update-ana                 1067   895 (83.88%)    954 (89.40%)    2.9926   1784.3102
update-nobse               1067   893 (83.69%)    953 (89.31%)    2.9947   1782.2392
update-neilm               1067   892 (83.59%)    951 (89.12%)    2.9974   1782.6098
update-reg                 1067   891 (83.50%)    950 (89.03%)    2.9977   1780.8515
update-rmayorga            1067   890 (83.41%)    949 (88.94%)    2.9984   1779.4910
update-evgeni              1067   889 (83.31%)    948 (88.84%)    2.9974   1776.6445

This is actually more hopeful than I thought. There's an obvious weakening as a result of the migrations, but the MSD stays under 3 and the centrality stays fairly constant too. The reachable/SCS counts do decrease, but at this point it looks fairly linear rather than an instant partition. Of course, the more keys that are removed the more likely this is to drop off suddenly. Counteracting that, DebConf9 is coming up, which will provide a good opportunity for normally geographically dispersed groups to cross-sign, reinforcing the WoT for these new keys.

Either way I at least have a better handle on the current state of play, which gives me something to work with when thinking about how to proceed. For now, bed.

5 May 2009

Manoj Srivastava: Manoj: Debian list spam reporting the Gnus way

So, recently our email overlords graciously provided means for us minions to help them in their toils and help clean up the spammish clutter in the mailing lists by reporting the spam. And they provided us with a dead simple means of reporting such spam to them. Now, us folks who knoweth that there is but one editor, the true editor, and its, err, proponent is RMS, use Gnus to follow the emacs mailing lists, either directly, or through gmane. There are plenty of examples out there showing how to automate reporting spam to gmane, so I won't bore y'all with the details. Here I only show how one serves our list overlords, and smites the spam at the same time. Some background, from the Gnus info page. I'll try to keep it brief. There is far more functionality present if you read the documentation, but you can see that for yourself.
The Spam package provides Gnus with a centralized mechanism for detecting and filtering spam. It filters new mail, and processes messages according to whether they are spam or ham. There are two contact points between the Spam package and the rest of Gnus: checking new mail for spam, and leaving a group. Checking new mail for spam is done in one of two ways: while splitting incoming mail, or when you enter a group. Identifying spam messages is only half of the Spam package's job. The second half comes into play whenever you exit a group buffer. At this point, the Spam package does several things: it can add the contents of the ham or spam message to the dictionary of the filtering software, and it can report mail to various places using different protocols.
All this is very pluggable and modular. The advantage is that you can use various plugin front ends to identify spam and ham, or mark messages as you go through a group, and when you exit the group, spam is reported, and ham and spam messages are copied to special destinations for future training of your filter. Since you inspect the marks put into the group buffer as you read the messages, there is a human involved in the processing, but as much as possible can be automated away. Do read the info page on the Spam package in Gnus, it is edifying. Anyway, here is a snippet from my etc/emacs/news/gnusrc.el file, which can help automate the tedium of reporting spam. This is perhaps more in keeping with how Gnus does things than having to press a special key for every spam message, which does nothing to help train your filter.
(add-to-list
 'gnus-parameters
 '("^nnml:\\(debian-.*\\)$"
   (to-address . "\\1@lists.debian.org")
   (to-list . "\\1@lists.debian.org")
   (admin-address . "\\1-request@lists.debian.org")
   (spam-autodetect . t)
   (spam-autodetect-methods spam-use-gmane-xref spam-use-hashcash spam-use-BBDB)
   (spam-process '(spam spam-use-resend))
   (spam-report-resend-to . "report-listspam@lists.debian.org")
   (subscribed . t)
   (total-expire . t)
   ))

4 May 2009

Manoj Srivastava: Manoj: Reflections on streetcars

Recently, I have made fairly major changes to kernel-package, and there were some reports that I had managed to mess up cross compilation. And, not having a cross-compilation tool chain handy, I had to depend on the kindness of strangers to address that issue. And, given that I am much less personable than Ms Vivien Leigh, this is not something I particularly look forward to repeating. At the outset, building a cross compiling tool chain seems a daunting task. This is not an activity one does frequently, and so one may be pardoned for being nonplussed by this. However, I have done this before, the most recent effort being creating one to compile rockbox binaries, so I had some idea where to start. Of course, since it is usually years between attempts to create cross-compiling tool chains, I generally forget how it is all done, and have to go hunting for details. Thank god for Google. Well, I am not the only one in the same pickle, apparently, for there are gobs of articles and HOWTOs out there, including some pretty comprehensive (and intimidating) general tool sets designed to create cross compilers in the most generic fashion possible. Using them was not really an option, since I would forget how to drive them in a few months, and have a miniature version of the current problem again. Also, you know, I don't feel comfortable using scripts that are too complex for me to understand; I mean, without understanding, how can there be trust? Also, this time around, I could not decide whether to cross compile for arm-elf, as I did the last time, or for the newfangled armel target. A means of quickly changing the target for the cross compiler build mechanism would be nice. Manually building the tool chain makes a wrong decision here expensive, and I hate that. I am also getting fed up with having to root around on the internet every time I want to build a cross compiler. I came across a script by Uwe Hermann, which started me down the path of creating a script, with a help option, to store the instructions, without trying to be too general and thus getting overly complex. However, Uwe's script hard-coded too many things, like version numbers and upstream source locations, and I knew I would rapidly find updating the script irritating. Using Debian source packages would fix both of these issues. I also wanted to use Debian sources as far as I could, to ensure that my cross compiler was as compatible as I could make it, though I did want to use newlib (I don't know why, except that I can, and the docs sound cool). And of course the script should have a help option and do proper command line parsing, so that editing the script would be unnecessary. Anyway, all this effort culminated in the following script: build cross toolchain, surprisingly compact. So I am now all set to try and cross compile a kernel the next time a kernel-package bug comes around. I thought that I would share this with the lazy web, while I was at it. Enjoy. The next thing, of course, is to get my script to create a qemu base image every week so I can move from User Mode Linux to the much more nifty kvm, which is what all the cool kids use. And then I can even create an arm virtual machine to test my kernels with, something that User Mode Linux can't easily do.
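Such a script is mostly bookkeeping around a well-known ordering; a heavily abridged sketch (the target triplet, directory names, and configure flags are placeholders, not the actual script):

TARGET=arm-elf            # or arm-linux-gnueabi for the armel flavour
PREFIX=$HOME/cross/$TARGET
# 1. binutils, configured for the target (concrete example below)
# 2. a stage-1 gcc (C only, no libc yet)
# 3. newlib (or another C library), built with the stage-1 gcc
# 4. the final gcc, now that a target libc exists
(cd binutils-build && ../binutils-src/configure --target="$TARGET" \
    --prefix="$PREFIX" && make && make install)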

23 April 2009

Manoj Srivastava: Manoj: Ontologies: Towards a generic, distribution agnostic tool for building packages from a VCS

This is a continuation from before. I am digressing a little in this post. One of the things I want to get out of this exercise is to learn more about ontologies and ontology editors, and on the principle that you can never learn something unless you build something with it (aka bone knowledge), this is gathering my thoughts to get started on creating an ontology for package building. Perhaps this has been done before, and better, but I'll probably learn more trying to create my own. Also, I am playing around with code, an odd melange of my package building porcelain, and gitpkg, and other ideas bruited on IRC, and I don't want to blog about something that would be embarrassing in the long run if some of the concepts I have milling around turn out to not meet the challenge of first contact with reality. I want to create an ontology related to packaging software. It should be general enough to cater to the needs of any packaging effort, in a distribution agnostic and version control agnostic manner. It should enable us to talk about packaging schemes and mechanisms, compare different methods, and perhaps work towards a common interchange mechanism good enough for people to share the efforts spent in packaging software. The ontology should be able to describe common practices in packaging, concepts of upstream sources, versioning, commits, package versions, and other meta-data related to packages.

[vcs-pkg concept diagram]

I am doing this ontology primarily for myself, but I hope this might be useful for other folks involved in packaging software. So, here follows a set of concepts related to packaging software; people who like pretty pictures can click on the thumbnail on the right:

19 April 2009

Manoj Srivastava: Manoj: Looking at porcelain: Towards a generic, distribution agnostic tool for building packages from a VCS

This is a continuation from before. Before I go plunging into writing code for a generic vcs-pkg implementation, I wanted to take a close look at my current, working, non-generic implementation: making sure that the generic implementation can support at least this one concrete work-flow will keep me grounded. One of the features of my home grown porcelain for building packages has been that I use a fixed layout for all the packages I maintain. There is a top level directory for all working trees. Each package gets a sub-directory under this working area. And in each package sub-directory are the upstream versions, the checked out VCS working directory, and anything else package related. With this layout, knowing the package name is enough to locate the working directory. This enables me to, for example, hack away at a package in Emacs, and when done, go to any open terminal window and say stage_release kernel-package or tag_releases ucf without needing to know what the current directory is (usually, the package's working directory is several levels deep: /usr/local/git/debian/make-dfsg/make-dfsg-3.91, for instance). However, this is less palatable for a generic tool: imposing a directory structure layout is pretty heavy. And I guess I can always create a function called cdwd, or something, to take away the tedium of typing out long cd commands (a sketch follows below). Anyway, looking at my code, there is the information that the scripts seem to need in order to do their work. So, if I do away with the whole working area layout convention, this can be reduced to just requiring the user to: Hmm. One user-specified directory, where the results are dumped. I can live with that. However, gitpkg has a different concept: it works purely on the git objects; you feed it up to three tree objects, the first being the tree with sources to build, and the second and third trees being looked at only if the upstream tar archive can not be located, and it passes the trees to pristine-tar to re-construct the upstream tar archive. The package name and version are constructed after the source tar archive is extracted to the staging area. I like the minimality of this. This is continued here.
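The cdwd helper alluded to above could be as small as this (the working-area root and the PKGWORK variable are assumptions based on the example path given):

cdwd () {
    # Jump to the named package's directory under the fixed working area.
    cd "${PKGWORK:-/usr/local/git/debian}/$1" || return 1
}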

16 April 2009

Manoj Srivastava: Manoj: Towards a generic, distribution agnostic tool for building packages from a VCS

I have been involved in vcs-pkg.org since around the time it started, a couple of years ago. The discussion has been interesting, and I learned a lot about the benefits and disadvantages of serializing patches (and collecting integration deltas in the feature branches and the specific ordering of the feature branches) versus maintaining integration branches (where the integration deltas are collected purely in the integration branch, but might tend to get lost in the history, with a fresh integration branch having to re-invent the integration deltas afresh). However, one of the things we have been lax about is getting down to brass tacks and getting around to being able to create generic packaging tools (though for the folks on the serializing patches side of the debate we have the excellent quilt and topgit packages). I have recently mostly automated my git based work-flow, and have built fancy porcelain around my git repository setup. During IRC discussion, the gitpkg script came up. This seems almost usable, apart from not having any built-in pristine-tar support, and also not supporting git submodules, which makes it a less useful alternative than my current porcelain. But it seems to me that we are pretty close to being able to create a distribution, layout, and patch handler agnostic script that builds distribution packages directly from version control, as long as we take care not to bind people into distribution- or tool-specific straitjackets. To these ends, I wanted to see what are the tasks that we want a package building script to perform. Here is what I came up with.
  1. Provide a copy of one or more upstream source tar-balls in the staging area where the package will be built. This staging area may or may not be the working directory checked out from the underlying VCS; my experience has been that most tools of the ilk have a temporary staging directory of some kind.
  2. Provide a directory tree of the sources from which the package is to be built in the staging area
  3. Run one or more commands or shell scripts in the staging area to create the package. This series of commands might be very complex, creating and running virtual machines, chroot jails, satisfying build dependencies, using copy-on-write mechanisms, running unit tests and lintian/piuparts checks on the results. But the package-building script may just punt these to a user specified hook.
The first and third steps above are pretty straightforward, and fairly uncontroversial. The upstream sources may be handled by one of these three alternatives:
  1. compressed tar archives of the upstream sources are available, and may be copied.
  2. There is a pristine-tar VCS branch, which, in conjunction with the upstream branch, may be used to reproduce the upstream tar archive
  3. Export and create an archive from the upstream branch, which may not have the same checksum as the original tar archive
The command to run may be supplied by the user in a configuration file or option, and may default, based on the native distribution, to dpkg-buildpackage or rpm. There are a number of already mature mechanisms to take a source directory and upstream tar archive and produce packages from that point, and the wheel need not be re-invented. So the hardest part of the task is to present, in the staging area, for further processing, a directory tree of the source package, ready for the distribution specific build commands. This part of the solution is likely to be VCS specific. This post is getting long, so I'll defer presenting my evolving implementation of a generic vcs-pkg tool, git flavour, to the next blog post. This is continued here.
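To make the three steps concrete, here is a minimal, hypothetical git-flavoured driver; every name in it is illustrative rather than an existing tool, and error handling is elided:

#!/bin/sh
set -e
pkg=$1; branch=${2:-master}
staging=$(mktemp -d)

# Step 1: provide the upstream tarball, preferring pristine-tar.
if git rev-parse --verify --quiet pristine-tar >/dev/null; then
    pristine-tar checkout "$staging/${pkg}.orig.tar.gz"
else
    git archive --format=tar --prefix="$pkg/" upstream | gzip \
        > "$staging/${pkg}.orig.tar.gz"
fi

# Step 2: provide the source tree in the staging area.
mkdir "$staging/$pkg"
git archive --format=tar "$branch" | tar -x -C "$staging/$pkg"

# Step 3: hand off to the distribution-specific build command.
(cd "$staging/$pkg" && ${BUILD_CMD:-dpkg-buildpackage -us -uc})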

Manoj Srivastava: Manoj: The glaring hole in most git tools, or the submodule Cinderella story

There are a lot of little git scripts and tools being written by a lot of people, including a lot of tools written by people I have a lot of respect for. And yet, they are mostly useless for me. Take git-pkg. Can't use it. Does not work with git submodules. Then there is our nice, new, shiny, incredibly bodacious 3.0 (git) source format. Again, useless: does not cater to submodules. I like submodules. They are nice. They allow for projects to take upstream sources, add Debian packaging instructions, and put them into git. They allow you to stitch together disparate projects, with different authors, and different release schedules and goals, into a coherent, integrated, software project. Yes, I use git submodules for my Debian packaging. I think it is conceptually and practically the correct solution.

Why submodules? Well, one of the first things I discovered was that most of the packaging for my packages was very similar, but not identical. Unfortunately, in the previous incarnation of my packages, with a monolithic rules file in each ./debian/ directory, it was easy for the rules files in packages to get out of sync, and there was no easy way to merge changes in the common portions in any sane automated fashion. The ./debian/ directories for all my packages live in a project separate from the packages they are instrumental in packaging. So, since I make the ./debian/ directories branches of the same project, it is far easier to package a new package, or to roll out a new feature when policy changes: the same commit can be applied across all the branches, and thus all my source packages, easily. With a separate debian-dir project, I can separate the management of the packaging rules from the package code itself. Also, I have abstracted out the really common bits across all my packages into a ./debian.common directory, which is yet another project, included as a submodule in all the packages, so there is a central place to change the common bits, without having to duplicate my efforts 30-odd times.

Now people are complaining since they have no idea how to clone my package repositories, since apparently no one actually pays attention to a file called .gitmodules, and even when they do, they, and the tools they use, have no clue what to do with it. I am tired of sending emails with one-off cluebats, and I am building my own porcelain around something I hope to present as a generic vcs-pkg implementation soon. The first step is a wrapper around git-clone that understands git submodules. So, here is the browsable code (there is a link in there to the downloadable sources too). Complete with a built-in man page. Takes the same arguments as git-clone, but with fewer options. Have fun.
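For anyone stuck cloning such a repository in the meantime, the manual incantation that a submodule-aware wrapper automates is short (plain git; the URL is a placeholder, and --recursive needs a reasonably recent git):

git clone git://example.org/some-package.git
cd some-package
git submodule update --init --recursive   # fetches ./debian, ./debian.common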

14 April 2009

Manoj Srivastava: Manoj: Yet another kernel hook script

With tonight's upload of kernel-package, the recent flurry of activity on this package (8 uploads in 6 days) is drawing to a close. I think most of the functionality I started to put into place is now in place, and all reported regressions and bugs in the new 12.XX version have been fixed. The only known deficiency is in the support of Xen dom0 images, and for that I am waiting for kernel version 2.6.30, where Linus has reportedly incorporated Xen patches. In the meanwhile, kernel-package seems to be working well, and I am turning my attention to other things. But, before I go, here is another example kernel postinst hook script (which, BTW, looks way better with syntax highlighting CSS on my blog than it does in an RSS feed or on an aggregator site).
#! /bin/sh

set -e

if [ -n "$INITRD" ] && [ "$INITRD" = 'No' ]; then
    exit 0
fi
version="$1"
vmlinuz_location="$2"

if [ -n "$DEB_MAINT_PARAMS" ]; then
    eval set -- "$DEB_MAINT_PARAMS"
    if [ -z "$1" ] || [ "$1" != "configure" ]; then
        exit 0;
    fi
fi

# passing the kernel version is required
[ -z "$version" ] && exit 1

if [ -n "$vmlinuz_location" ]; then
    # Where is the image located? We'll place the initrd there.
    boot=$(dirname "$vmlinuz_location")
    bootarg="-b $boot"
fi

# Update the initramfs
update-initramfs -c -t -k "$version" $bootarg

exit 0

12 April 2009

Manoj Srivastava: Manoj: Sample kernel symlink postinst hook script

With the new kernel-package hitting Sid today, and the fact that it no longer does symlink handling by default, I thought it was time that we had an example script that shows how to do that. This is a fairly full-featured script; feel free to cull it down to use just what you want. I'll post a couple of other scripts, if there is interest in this. BTW, this script does far more than the old kernel-package postinst script ever did. Have fun.
#!/bin/sh -
#                               -*- Mode: Sh -*-
#
# This is an example of a script that can be run as a postinst hook,
# and manages the symbolic links in a manner similar to the kernel
# image default behaviour, except that the latest two versions (as
# determined by ls -lct) are kept. You can modify this script
#
# Copyright 2003, 2004, 2005, 2006, 2007, 2008, 2009 Manoj Srivastava
# Copyright 2009 Darren Salt

set -e

# The dir where symlinks are managed
SYMLINKDIR=/

if [ $# -ne 2 ]; then
    echo Usage: $0 version location
    exit 2
fi

version="$1"
vmlinuz_location="$2"
vmlinuz_dir="$(dirname "$2")"

cd $SYMLINKDIR || exit 1

if [ -n "$DEB_MAINT_PARAMS" ]; then
    eval set -- "$DEB_MAINT_PARAMS"
fi

if [ -z "$1" ] || [ "$1" != "configure" ]; then
    exit 0;
fi

rm -f vmlinuz vmlinuz.old vmlinuz-rd vmlinuz-rd.old initrd.img initrd.img.old

# Create a temporary file safely
if [ -x /bin/tempfile ]; then
    outfile=$(tempfile -p outp -m 0600);
else
    set -e
    mkdir /tmp/kernel-image-$version-$$
    outfile=/tmp/kernel-image-$version-$$/output
fi

(cd "$vmlinuz_dir" && ls -ct vmlinuz-*) > $outfile

STD="$(head -n 1 $outfile | sed 's/vmlinuz-//')"
OLD="$(head -n 2 $outfile | tail -n 1 | sed 's/vmlinuz-//')"

if [ "X$STD" = "X" ]; then
    exit 0;
fi

# If you want version-specific links, here's how to start
STD24="$(grep vmlinuz-2.4 $outfile | head -n 1 | sed 's/vmlinuz-//')" || true
OLD24="$(grep vmlinuz-2.4 $outfile | head -n 2 | tail -n 1 | sed 's/vmlinuz-//')" || true

STD25="$(grep vmlinuz-2.5 $outfile | head -n 1 | sed 's/vmlinuz-//')" || true
OLD25="$(grep vmlinuz-2.5 $outfile | head -n 2 | tail -n 1 | sed 's/vmlinuz-//')" || true

echo Booting $STD, old is $OLD

if [ -f "$vmlinuz_dir/"initrd.img-$STD ] ; then
   ln -s "$vmlinuz_dir/"initrd.img-$STD initrd.img
   ln -s "$vmlinuz_dir/"vmlinuz-$STD vmlinuz-rd
else
   ln -s "$vmlinuz_dir/"vmlinuz-$STD vmlinuz
fi

if [ "X$OLD" != "X" ]; then
    if [ -f "$vmlinuz_dir/"initrd.img-$OLD ] ; then
    ln -s "$vmlinuz_dir/"initrd.img-$OLD initrd.img.old
    ln -s "$vmlinuz_dir/"vmlinuz-$OLD vmlinuz-rd.old
    else
    ln -s "$vmlinuz_dir/"vmlinuz-$OLD vmlinuz.old
    fi
fi

# if [ "X$STD24" != "X" ]; then
#     if [ -f "$vmlinuz_dir/"initrd.img-$STD24 ] ; then
#     ln -s "$vmlinuz_dir/"initrd.img-$STD24 initrd24.img
#     ln -s "$vmlinuz_dir/"vmlinuz-$STD24 vmlinuz24-rd
#     else
#     ln -s "$vmlinuz_dir/"vmlinuz-$STD24 vmlinuz24
#     fi
# fi
# if [ "X$OLD24" != "X" ]; then
#     if [ -f "$vmlinuz_dir/"initrd.img-$OLD24 ] ; then
#     ln -s "$vmlinuz_dir/"initrd.img-$OLD24 initrd24.img.old
#     ln -s "$vmlinuz_dir/"vmlinuz-$OLD24 vmlinuz24-rd.old
#     else
#     ln -s "$vmlinuz_dir/"vmlinuz-$OLD24 vmlinuz24.old
#     fi
# fi

# Run boot loaders here.
#lilo

rm -f $outfile
if [ -d /tmp/kernel-image-$version-$$ ]; then
    rmdir /tmp/kernel-image-$version-$$
fi

exit 0
