Search Results: "Mark Hymers"

8 May 2012

Vincent Sanders: Repaying a debt

Some debts are merely financial and easily repaid, but some require repayment in kind. Few debts are more important to me personally than a favour earned by a good friend.

Several years ago, before I started this blog, I replaced the kitchen in my house. Finances were tight at the time and I had to do the entire refit with only limited professional help. Because of this I imposed upon Mark Hymers and Steve Gran to come and assist me. They worked tirelessly for three days over a bank holiday for no immediate reward.

Mark and Steve with a drill
This weekend I had the opportunity to assist Mark with his own kitchen refit and repay my debt.

Although the challenges on this build were different, they were nonetheless present, including walls which were most definitely not square and cabinets affixed 10mm too high so the doors could not close.

We also got to make a hole for a 125mm extractor, which was physically demanding and not a little tiring (Steve, actually wielding the drill, had fabulous aim).

I took some photos to document the process, which has resulted in an image that is positively threatening, though the two of them are nice people really!

All in all it was a pleasant weekend with friends; the whole favour thing was really moot, as I would have done it for a friend anyway.

23 March 2012

Raphaël Hertzog: People behind Debian: Jörg Jaspert, FTPmaster, Debian Account Manager, and more

Photo by Wouter Verhelst

Jörg is a very active contributor within Debian, and has been for a long time. This explains why he holds so many roles (FTPmaster and Debian Account Manager being the 2 most important ones). Better known as Ganneff (his IRC nick), he's not exactly the typical hacker. He has no beard and used to drink milk instead of beer. :-) Check out his interview to learn more about some of the numerous ways one can get involved in Debian, managing its infrastructure and without having to be a packager.

Raphael: Who are you?

Jörg: My name is Jörg Jaspert and I'm 35 years old, working for a small company doing system administration and consulting work for our customers. I've been married for a little while now and sometime soon a little Ganneff will be crawling out of my wife. (Whoever didn't think of the movie Alien now is just boring.)

Raphael: How did you start contributing to Debian?

Jörg: I started using Debian somewhere around 2000, 2001. Before that I had the misfortune to try SuSE and RedHat, both with a user experience that let me fully understand why people think Linux is unusable. (Due to my work I'm in the unfortunate situation of having to use RedHat on two machines. Funny how they are still utter crap and worse than bad toys.) And all of this "let's get a Linux running here" came up because I was trying to find a replacement for my beloved OS/2 installation, which I had for some years. So after I got Debian installed, good old Potato, I got myself active on our mailing lists, starting with the German user one. A bit later I replied to a question asking whether someone could help as staff for a Debian booth somewhere. It was the most boring event I ever visited (very nice orga, unfortunately no visitors), but I got a few important things there: the software I packaged found me a sponsor, and voilà, maintainer I was. Some more packages got added and at some point my sponsor turned out to be my advocate. The NM process ran around 2 months, and mid April 2002 I got THE MAIL.

Raphael: Some Debian developers believe that you have too many responsibilities within Debian (DAM, FTPMaster, Debconf, Partners, Planet Debian, Mirrors, …). Do you agree that it can be problematic, and if yes, are you trying to scale down?

Jörg: It's DebConf, tssk. And yes, I do have some extra groups and roles. And you even only list some, leaving out all I do outside Debian. But simply counting the number of roles is a plain stupid way to go. Way more interesting is how much work is behind a role and how many other people are involved. And looking at those you listed I don't see any where I am a SPOF. Let's look at those you listed:

DAM: Here I did start out assisting James to get the huge backlog down which had accumulated over time. Nowadays I am merely the one with the longest term as DAM. Christoph Berg joined in April 2008 and Enrico Zini followed during October 2010, both very active. Especially Enrico, lately with the redesign of the NM webpages.

FTPMaster: The basic outline of the FTPMaster history is similar to the DAM one. I joined as an assistant, after the oh-so-famous Vancouver meeting in 2004. Together with Jeroen, we both then got the backlog down which had accumulated there. He did most of the removals while I had a fun time cleaning up NEW. And we both prepared patches for the codebase. And in 2007, as his last action as DPL, Sam made me FTPMaster. Since then I haven't been alone either. In fact we have much more rotation in the team than ever before, which is a good thing. Today we are 3 FTPMasters, 4 FTP Assistants and 1 Trainee.
Though we always like new blood and would welcome more volunteers.

DebConf: I am very far outside the central DebConf team. I am not even a delegate here. Currently I am merely an admin, though there are 4 others with the same rights on the DebConf machines. I've not taken any extra jobs this year, nor will I. Probably for next year again, but not 2012.

Planet: I am one of three again, but then Planet is mostly running itself. Debian developers can just edit the config, cron is doing the work, not much needed here. Occasional cleanups, every now and then a mail to answer, done. In short: no real workload attached.

Mirrors: My main part here is the ftpsync script set, which is a small part of the actual work. The majority of it, like checking mirrors, getting them to fix errors, etc., is done by Simon Paillard (and for some time now Raphael Geissert has been active there too; you might have heard about his http.debian.net).

Having said that, there is stuff I could have handled better or probably faster. There always is. Right now I have 2 outstanding things I want to do a (last) cleanup on and then give away.

Raphael: You got married last year. I know by experience that entertaining a relationship and/or a family takes time. How do you manage to combine this with your Debian involvement?

Jörg: Oh well, I first met my wife at the International Conference on OpenSource 2009 in Taiwan. So OpenSource, Debian and me being some tiny wheel in the system wasn't entirely news to her. And in the time since then she has learned that there is much more behind it when you are in a community like Debian, instead of just doing it for work. Even better, she has met Debian people multiple times already, and knows with whom I am quarreling. Also, she is currently attending a language school and has lots of homework in the evening. Gives me time for Debian stuff. :) How that turns out with the baby I have no idea yet. I do want to train it to like pressing the M key, so little-Ganneff can deal with NEW all on its own (M being Manual reject), but it might take a day or twenty before it gets that far. :)

Raphael: Thanks to the continuous work of many new volunteers, the NEW queue is no longer a bottleneck. What are the next challenges for the FTPmaster team?

Jörg: Bad link, try this one. :) Also, "no longer" sounds like it's recent. It's not; it's just that people usually recognize only the negative and not the positive parts. Well, there are a few challenges actually. The first one, even if it sounds simple, is an ongoing one: we need Debian Developers willing to do the work that is hidden behind those simple graphs. Yes, we currently have a great FTP Team doing splendid work in keeping that queue reasonably small; this is a/THE sisyphean task par excellence. There will always be something waiting for NEW; even if you just cleaned the queue, you turn around and there is something else back in already. Spreading this workload to more people helps avoid burning anyone out. So if one or more of the readers is interested, we always like new volunteers. You simply need to be an uploading DD and have a bit of free time. For the rest we do have training procedures in place.

Another one is getting the multi-archive stuff done. The goal is to end up with ONE host for all our archives. One dak installation. But separate overrides, trees, mirrors, policies and people (think RMs, backports team, security team).
While this is halfway easy to think of in terms of "merging backports into main", it gets an interesting side note when you think of "merging security into main". The security archive does have information that is limited to few people before the public release of a security announcement, and so we must make sure our database isn't leaking information. Or our filesystem layer handling. Or logs. Etc. Especially as the database is synced in (near) realtime to a DD-accessible machine. And the filesystem data too, just a little less often.

There is also a discussion about a good way to set up a "PPA for Debian" service. We do have a very far developed proposal here for how it should work, and I really should do the finishing touches and get it to the public. It might even get a GSoC project on it.

So far for some short to middle term goals. If you want to go really long term, I do think that we should get to the point where we get rid of the classical view of a source package being one (or more) tarballs plus the Debian changes, where a new version requires the full upload of one or more of those parts of the source package. I don't know exactly where it should end up. Sure, stuff like "one central DVCS, maintainers push there, the archive generates the source tarballs and prepares the mirrors" does sound good at a quick glance. But there are lots of troubles and pitfalls and probably some dragons hidden here.

Raphael: The Debian repositories are managed by DAK (Debian Archive Kit) which is not packaged. Thus Debian users pick tools like reprepro to manage their package repositories. Is that how things should be?

Jörg: Oh, Mark Hymers wants to do a package again. More power to him if he does, though yes, DAK is not exactly a quick-and-easy thing to install. But nowadays it is a trillion times easier than in the past; thanks to Mark's work people can now follow the instructions, scripts and whatever they find inside the setup directory. Still, it really depends on the archive size you are managing. A complex tool like dak does not make sense for someone who wants to publish one or a dozen of his own packages somewhere. That's just like doing a finger amputation with a chainsaw: it certainly works and is fun for the one with the chainsaw, but you probably end up overdoing it a little. I myself am using dpkg-scan[packages|sources] from a shell script but also mini-dinstall in places (I never got friendly with reprepro when I looked at it). Works, and for the few dozen packages those places manage it is more than enough. Also, using dak forces you into some ways of behaviour that are just what Debian wants but might not be what a user wants. Like the inability to overwrite an existing file (one of the reasons why mentors.debian.net won't work with dak). Or the use of a postgres database. Or that of gpg. Sure, if you end up having more than just a dozen packages, if you have many suites and also movement between them, then dak is surely a thing to look at. And "how should things be": however the users and admins of that certain install of reprepro, mini-dinstall, dak, whatever want it. This is not one-tool-for-all land. :)

Raphael: What is the role of the Debian Account Managers (DAM)? Do you believe that DAMs have a responsibility to shape Debian by defining limits in terms of who can join and what can be done within Debian?

Jörg: Quote from https://lists.debian.org/debian-devel-announce/2010/10/msg00010.html:
The Debian Account Managers (DAM) are responsible for maintaining the list of members of the Debian Project, also known as Debian Developers. DAMs are authoritative in deciding who is a member of the Debian Project and can take subsequent actions such as approving and expelling Project members.
Now, aside from this quote, my OWN PERSONAL OPINION, without wearing anything even vaguely resembling a DAM hat: DAM is the one post that is entitled to decide who is a member or not. Usually that is in the way of joining (or not), which is simple enough. But every now and then this also means acting on a request to do something about whatever behaviour of a Debian Project member. I hate that (and I think one can easily replace "I" with "WE" there). But it's our job. We usually aren't quick about it. And we don't act on our own initiative; when we do, we always have (numerous) other DDs complain/appeal/talk/whatever to us first. The "expulsion procedure", luckily not invoked that often, does guarantee a slow process and lots of input from others. Are we the best for it? Probably not, we are just some people out of a thousand who happen to have a very similar hobby, Debian. We aren't trained in dealing with the situations that can come up. But we are THE role inside Debian that is empowered to make such decisions, so naturally it ends up with us.

Raphael: You did a lot of things for Debian over the years. What did bring you the most joy? Are there things that you're still bitter about?

Jörg: The most joy? Hrm, without being involved in Debian and SPI I would never have met my wife.
Or my current job. Or a GR against me. Not many are running around with that badge, though I'm still missing my own personal "Serious problems with Mr. Jaspert" thread. Bad you all.
Or visited so many places. Think of all the DebConfs, QA meetings, BSPs and whatever events.
Or met so many people.
Or learned so many things I would never even have come near without being a DD.

Raphael: Is there someone in Debian that you admire for their contributions?

Jörg: Yes.
Thank you to Jörg for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Note that older interviews are indexed on wiki.debian.org/PeopleBehindDebian.

Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Google+, Twitter and Facebook.


22 October 2011

Vincent Sanders: I do not want anything NASty to happen

I have a lot of digital data to store: like most people I have photos, music, home movies, email and lots of other random data. Being a programmer I also tend to have huge piles of source code and builds lying about. If all that were not enough, I work from home so I have copious mountains of work data too.

Many years ago I decided I wanted a single robust, backed up, file server for all of this. So I slapped together a machine from leftovers, stuffed some drives in a software RAID array, served it over NFS and CIFS, and never looked back.

Over time the hardware has changed and the system has been upgraded, but the basic approach of a custom-built server has remained. When I needed a build engine to churn out hundreds of kernels a day for the ARM Linux autobuilder, the system was expanded to cope, and in mid-2009 the current instantiation was created.

Current full-height tower fileserver

The current system is a huge tower case (courtesy of Mark Hymers) containing a Core 2 Quad 2.33GHz (8 threads) with 8 Gigabytes of memory and 13 drives across four SATA controllers, split into several RAID arrays. Despite buying new drives at higher capacities I have tended to keep the old drives around for extra storage, resulting in what you see here.

I recently looked at the power usage of this monster and realised I was paying a lot of money to spin rust, which was simply uneconomic. Seriously, why did I have six sub-200 Gigabyte drives running when a single 2T drive to replace them would pay for itself in power saved in under a month? In addition I no longer required the compute power either; most definitely time for a downsize!

Several friends suggested an HP micro server might be just the thing. After examining and evaluating some other options (Thecus and QNAP NAS units) I decided the HP route was most definitely the best value for money.

The HP ProLiant MicroServer is a dual-core Athlon II 1.3GHz system with a Gigabyte of memory, space for four SATA hard drives and a single 5.25 inch bay for an optical drive. All this in a cube roughly 250mm on a side.

My HP ProLiant MicroServer

I went out and bought the server from ebuyer for £235 with free shipping and £100 cashback. I immediately sent off the cashback paperwork so I would not forget (what an odd way to get a discount), so the total cost for the unit was £135. I then used Crucial to select a suitable memory upgrade to take the total to 2 Gigabytes of RAM for £14.

The final piece of the solution was the drives for the storage. I decided the best capacity-to-cost ratio could be had from 2 TB drives, which with four bays available would give a raw capacity of 8 TB, or more usefully for this discussion 7.28 TiB.

I did an experiment with 3x1 TB 7200 RPM drives from the existing server and determined that the overall system would not really benefit enough to justify the 50% price premium of 7200 RPM drives over 5400 RPM devices. I ended up getting four Samsung Spinpoint F4EG 2 TB drives for £230.

I also bought a black LG DVD-RW drive for £16. I would also have required a SATA data cable and a Molex-to-SATA power cable if I had not already got them.

My HP MicroServer with the front door open

Putting the components together was really simple. The internal layout and design of the enclosure mean it is easy to work with, and it has the feel of build quality I usually associate with HP and IBM server kit, not something this small and inexpensive.

The provided documentation is good but unnecessary as most operations are obvious. They even provide the bolts to attach all the drives along with a wrench in the lockable front door, how thoughtful is that!

I then installed the system with Debian squeeze from the optical drive, principally because I happened to have a network installer CD to hand, although the BIOS does have network boot capability.

I used the installer to put the whole initial system together and did not have to resort to the command line even once; I am very impressed with how far D-I has come.

After asking several people for advice, the general consensus was that I should create two partitions on each drive: one for a RAID 1 /boot and one for a RAID 5 LVM area.

I did have to perform the entire install a second time because there is a gotcha with GUID Partition Table, RAID 1 boot drives and GRUB. You must have a small "BIOS" partition on the front of the drive or GRUB cannot install in the MBR and your system will not boot!

The partition layout I ended up with looks like:
Model: ATA SAMSUNG HD204UI (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17.4kB 32.0MB 32.0MB bios_grub
3 32.0MB 1000MB 968MB raid
2 1000MB 2000GB 1999GB raid
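
For reference, a layout like the one above can be produced with parted along these lines (device name and sizes are illustrative and rounded, and the partition numbering may come out in a different order than in the listing):

  parted -s /dev/sda mklabel gpt
  parted -s /dev/sda mkpart grub 1MiB 32MiB       # tiny partition for GRUB's core image on GPT
  parted -s /dev/sda set 1 bios_grub on           # without this flag GRUB has nowhere to embed itself
  parted -s /dev/sda mkpart boot 32MiB 1000MiB    # small RAID 1 member for /boot
  parted -s /dev/sda set 2 raid on
  parted -s /dev/sda mkpart lvm 1000MiB 100%      # the rest becomes the RAID 5 / LVM member
  parted -s /dev/sda set 3 raid on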

The small Gigabyte partition was configured as a RAID 1 across all four drives and formatted with ext2, with a mount point of /boot. The large space was configured as RAID 5 across all four drives with LVM on top. Logical volumes were allocated and formatted ext3 (on advice from several people about ext4 instability they had observed) for a 50 GiB root, 4 GiB swap and 1 TiB home space.
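
Done by hand rather than through the installer, the same assembly would look roughly like this (device names, volume group name and exact sizes are illustrative; as in the listing above, partition 3 is the small member and partition 2 the large one):

  mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]3   # RAID 1 for /boot
  mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2   # RAID 5 for the LVM area
  mkfs.ext2 /dev/md0
  pvcreate /dev/md1
  vgcreate vg0 /dev/md1
  lvcreate -L 50G -n root vg0
  lvcreate -L 4G -n swap vg0
  lvcreate -L 1T -n home vg0
  mkfs.ext3 /dev/vg0/root
  mkfs.ext3 /dev/vg0/home
  mkswap /dev/vg0/swap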

The normal Debian install proceeded and after the post-install reboot I was presented with a login prompt. Absolutely no surprises at all: no additional drivers required and a correctly running system.

Over the next few days I did the usual sysadmin stuff and rsynced data from the old fileserver, including creating logical volumes for the various arrays from the old server, none of which presented much of a problem. The 5.5TiB RAID 5 did, however, take a day or so to sync!
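
The copies themselves were plain rsync runs, something along these lines (hostname and paths invented for illustration), preserving permissions, ownership, hard links and numeric IDs:

  rsync -aH --numeric-ids --progress oldserver:/srv/home/ /srv/home/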

I used the microserver's eSATA port to attach the external drives I use for backup purposes, which has also not been an issue so far.

I am currently running both the new and old systems for a few days and rsyncing data to the microserver until I am sure of it. Actually I will make the switch this weekend and shut the old system down and leave it till next weekend before I scrub the old drives.

Before I made it live I decided to run some benchmarks and gather some data just for interest.
Bonnie (version 1.96) was run in the root logical volume (I repeated the tests in other volumes; there is sub-1% variation). The test used a 4GiB size and 16 files.
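
The invocation was presumably something along these lines (mount point and user are illustrative); -s sets the test file size and -n the number of files for the create/delete tests:

  bonnie++ -d /mnt/test -s 4g -n 16 -u nobody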

             Sequential Output              Sequential Input       Random   Sequential Create           Random Create
             Per Chr   Block     Rewrite    Per Chr   Block        Seeks    Create    Read     Delete    Create    Read     Delete
/sec         378K      41M       37M        2216K     330M         412.8    11697     +++++    18330     14246     +++++    14371
%CPU         97        11        8          91        30           15       24        +++      28        29        +++      22
Latency      109ms     681ms     324ms      116ms     93389µs      250ms    29021µs   814µs    842µs     362µs     51µs     61µs

There do not seem to be any notable issues there; the write speeds are a little lower than I might like, but that is the cost of RAID 5 and 5400 RPM drives.

The rsync operations used to sync up the live data seem to manage just short of 20MiB/s for the home partition, comprising 250GiB in two and a half million files with the expected mix of file sizes. The video partition managed 33MiB/s on 1TiB of data in nine thousand files.

The bonnie tests were then performed accessing the server over NFS, with a 24GiB size and 16 files.
             Sequential Output              Sequential Input       Random   Sequential Create           Random Create
             Per Chr   Block     Rewrite    Per Chr   Block        Seeks    Create    Read     Delete    Create    Read     Delete
/sec         1733K     29M       19M        4608K     106M         358.3    1465      3714     2402      1576      4082     1529
%CPU         98249310108109997
Latency      10894µs   23242ms   69159ms    49772µs   224ms        250ms    148ms     24821µs  157ms     108ms     2074µs   719ms

Or, alternatively, as percentages against the previous direct access values:

             Sequential Output              Sequential Input       Random   Sequential Create           Random Create
             Per Chr   Block     Rewrite    Per Chr   Block        Seeks    Create    Read     Delete    Create    Read     Delete
/sec         464       68        51         213       32           87       12        +++      13        11        +++      10
%CPU         1011850104347133+++3231+++31
Latency      925121324893227795093049186462983440661178688

Not that that tells us much, aside from the fact that write is a bit slower over the network, read is limited by gigabit network bandwidth, and disc latency over the network is generally poorer than direct access.

In summary, the total cost was £395 for a complete ready-to-use system with 5.5TiB of RAID 5 storage which can be served over NFS at nearly 900Mbit/s. Overall I am happy with the result; my only real issue is that the write performance is a little disappointing, but it is good enough for what I need.

3 April 2011

Raphaël Hertzog: March 2011 wrap up

Since I'm soliciting donations to support my Debian work, the least I can do is explain what I do. You can thus expect to see an article like this one every month.

Multi-Arch work

I updated the code to use another layout for the control files stored in /var/lib/dpkg/info/. Instead of using a sub-directory per architecture (arch/package.type), we decided to use package:arch.type, but only for packages which are "Multi-Arch: same". dpkg takes care of renaming the files the first time it is executed with write rights, and then updates /var/lib/dpkg/info/format to remember that the upgrade has been done and that we can rely on the new structure. I filed a few bugs on packages that are improperly accessing those internal files instead of using the appropriate dpkg-query interface. I sent a heads-up mail on -devel to make other people aware of those problems in the hope of discovering most of them as early as possible. After that, the work stalled because Guillem went away for 2 weeks and thus stopped his review of my work. I hope he will quickly resume the review and that we will get something final this month. With the arrival of dpkg 1.16.0, it's now possible to start converting libraries to multi-arch even if full multi-arch support has not yet landed in dpkg proper. See http://wiki.debian.org/Multiarch/Bootstrapping for the detailed plan. If you're curious about Multi-Arch, you might want to read this article of Steve Langasek as well.
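
To make the misuse concrete: the fix in the packages I filed bugs on is usually a small change from poking at dpkg's private directory to asking dpkg-query for the information, roughly like this (libc6 is just an example package name):

  # fragile: hard-codes dpkg's internal layout, which the multi-arch work changes
  ls /var/lib/dpkg/info/libc6.*

  # robust: let dpkg-query resolve the paths instead
  dpkg-query --control-path libc6 postinst   # path to one control file
  dpkg-query -L libc6                        # files installed by the package
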
Bug triage for dpkg in Launchpad

At the start of the month, there were close to 500 bugs reported against the dpkg package in Launchpad. Unfortunately most of it is noise: many of the reported bugs are misfiled, they show an upgrade problem of a random package, and that upgrade problem confuses update-manager, which tries to configure an already configured package. This generates a second error that apport attributes to dpkg, and the resulting bug report is thus filed on dpkg. There are literally hundreds of those that have to be reclassified. Michael Vogt and Brian Murray did some triaging, and I also spent quite some hours on this task. It's a bit frustrating, as I tend to mark many reports Incomplete because there's no way they can be acted upon, and many of them are so old that the reporter is unlikely to be able to provide supplementary information. But in the middle of this noise there are some useful bug reports, like LP#739179, which enabled me to fix a regression even before it reached Debian Unstable (because Ubuntu runs a snapshot of dpkg with multiarch support). I subscribed to the Launchpad bugs for dpkg via the Debian Package Tracking System (thanks to the derivatives-bugs keyword) and will try to keep up with the incoming reports.

Misc dpkg work

The ftpmasters came up with a request for a new field (see 619131) in source packages. After a quick discussion and a round of review on debian-policy@l.d.o, I implemented the new Package-List field. This should allow the ftpmasters to save some time in NEW processing, but we deferred the change to the next dpkg version (1.16.1) to ponder a bit more on the design of the field. I also fixed a bunch of bugs (#619541, #605719, #598922, #616096) and merged a patch of Mark Hymers to recognize the new Built-Using field.

Developers-reference work

The review process for changes to the developers-reference is not working as it should, and I suffered from it while trying to integrate the patch I wrote for the Developer duties chapter (see #548867). We purposely changed the maintainer field from debian-doc to debian-policy in the hope of having more reviews of suggested changes and of seeking some sort of consensus before committing anything. But we don't get more reviews, and deciding to commit a patch is now even harder than it was (except for trivial stuff where personal opinions can't interfere). In my case, I only got the feedback of Charles Plessy, which was very mixed to say the least. I tried to improve my patch based on what he expressed, but I also clearly disagreed with some of his assertions and was convinced that my wording was in line with the dominant point of view within Debian. We tried to involve the release team in the discussion because most of what I documented was about helping make stable releases happen, but nobody on the team answered. Instead of letting the situation (and my patch) rot, I solicited feedback from the DPL and from another developers-reference editor to see whether my patch was an improvement or not. After some more time, I went ahead and committed it. It was not pleasant for anyone. I don't know how we can improve this. Contrary to the policy, the developers-reference is a document that is not normative; I believe the result is better when we put some soul into it. But it's a real challenge when you seek consensus and the interest in reviewing changes is so low.

DVD shop listed on debian.org

In February, I launched a DVD shop whose profits are used to fund my Debian work. Shortly after the launch I used the official form to be added to the official listing of Debian CD vendors and offered a few suggestions to deal with vendors who are selling unofficial images (with firmware in my case). A few weeks later I had got no answer, neither for my request nor for my suggestions, so I mailed the cdvendors@debian.org team directly asking for a status update and quickly got an answer suggesting that Simon Paillard usually does the work and couldn't process the backlog due to some injury. At this point no concerns had been raised about adding me to the list. To save some time and some work for the team, I added myself to the list since I had commit rights, and I informed them that I had done it, so that they could review it. Shortly after I did that, Martin Zobel-Helas objected to my addition. I cleared up some misunderstandings, but the discussion also led to some changes to please everybody: the listing now indicates that some images are unofficial, and I have prepared a special landing page for people coming from the Debian website through this listing.

Debian column on OMG! Ubuntu

I have always been a firm believer that it's important for Debian to reach out to the widest public with its message of freedom. Thus when Benjamin Humphrey contacted the debian-publicity team to find volunteers to write a Debian column on OMG! Ubuntu, I immediately jumped in. I wrote 4 articles over there. The tone is very different from my articles on my blog and I like that duality. Check out "Debian is dying! Oh my word!", "Debian or Ubuntu, which is the best place to contribute?", "Are you contributing your share?" and "Ubuntu's CTO reveals DEX: an effort to close the gap with Debian". It's a great win-win situation: OMG! Ubuntu benefits from my articles, Debian's values are relayed further, and OMG! Ubuntu's large audience also helps me develop my own blog.

Work on my book

I had lots of paperwork to do this month (annual accounting stuff for my company) and I did not have as much time as I hoped for my book.
Still, I have updated a few more chapters of my French book and I certainly hope to complete the update during April. This means that work on the English translation could start in May.

Work on my blog

Just like for my book, it has been relatively difficult for me to keep up with my policy of two articles every week. But I still managed to get quite some good stuff out. I interviewed Christian Perrier (Debian's translation coordinator) and also Bdale Garbee (chair of Debian's technical committee). I finished my series of Debian Cleanup Tips with 2 supplementary articles. The removal of firmware is causing trouble for quite some users, so I wrote an article explaining how to deal with the problem. A regular reader also asked me to write an article about Jigdo; I obliged because it was a good idea and he has been very nice to me: Download ISO images of Debian CD/DVD at light speed with Jigdo. Last but not least, I shared my package maintainer pledge which inspired my developers-reference patch (see discussion above).

Thanks

Many thanks to all the people who showed their appreciation of my work. The 324.37 EUR that you gave me in February represented two and a half days of my time that I have spent working on the above projects. See you next month for a new summary of my activities.


26 March 2011

Philipp Kern: Debian ftpmaster Meeting: Almost over

So since the last progress report I also got round to taking a look at the following issues:
It was a productive hacking event for me, that's for sure. But now it's almost over and they're actually stealing an hour from us tonight. I would've liked to go home with fewer items on my to-do list, though (i.e. it just grew, it didn't shrink).

25 March 2011

Philipp Kern: Debian ftpmaster Meeting: Autosigning

Proposals for autosigning were floating around for quite some time. The most controversial parts were how we secure the machines that do the building (and in turn: how do we secure the key) and who's going to manage the keyring (because there are multiple teams involved; such discussions can indeed take quite a bit of time).

What we've agreed upon now is as follows:
Kudos to Mark Hymers and Joerg Jaspert (both ftpmasters) for implementing the necessary bits on the archive side. It turned out that dak had already grown support for most bits in the meantime, and it boiled down to sane key management, keyring distribution and setup. sbuild and buildd needed a bit more hackery, but a few patches later it seems to work fine.

So what's the point of this exercise? The main goal is to reduce the build turnaround time. This means clearing Dependency-Waits and Build-Depends-Uninstallable much more quickly than used to be the case. (With a signing run once a day and multiple dependency levels you'd need to wait some days for a leaf package to be buildable again.) This should help speed up transitions a fair bit. Autosigning also means getting security updates faster, at least if there's a buildd that is not otherwise occupied.

The key generation and configuration deployment will gradually happen in the next days and weeks. It will be used on the regular archive, the security archive and backports (i.e. the archives run by the ftpmasters). As some logs will still need regular signing, the deployment cannot happen entirely centrally, as the buildd admins need to cope with a new log format. But those steps are tiny, given that we can now add keys ourselves and the archive will even accept them.

24 November 2009

Vincent Sanders: Getting things done

It seems that despite having a very important work deadline today I am destined to be interrupted every five minutes. Unfortunately I need to concentrate to get this done. And while I managed to get "in the zone" earlier, now I cannot string two words together.

This seems to be a recurring theme recently: my productivity is horribly low because I get interrupted all the time for "can you just" jobs. I turn the phone ringer off and hide my email window, and then I get called on my mobile and asked "why don't you answer your phone?"

It's not so much the time lost to answering the specific query or doing the job, but more that the context-switch time to restore my previous task becomes huge. Somehow my switching time is absolutely non-linear, and today it has become so large I have decided to dump state altogether.

Because I know I have to do the school run soon and the interruptions continue, I have abandoned the critical task altogether for now in the hope I can get back to it fresh. Do others get this, I wonder, or is it a personal fault I should strive to fix? I know I used to be able to deal with this sort of thing with much less trouble; maybe I am getting less flexible as I age?

Fortunately (umm, you know what I mean) the Entropy Key software has some cleanups required, which is an easy job and does not matter if I get interrupted, so I do have a productive task to complete.

Oh, speaking of the Entropy Key, Niel blogged something silly I created the other day. I had a senior moment and filked the children's nursery rhyme "If You're Happy and You Know It", slapping some guitar chords on it.

It seems I cannot simply leave well enough alone, and last Friday night (with the help of Mark Hymers) I made it a whole lot worse. I present to you the score of "If you're happy with your ekey, blog your praise" for piano and guitar. In fact I wrote this out in Rosegarden so it can be turned into a MIDI file etc.; all the source files are available. It is released under a CC licence, so I suppose someone could put the lyrics back to the proper nursery rhyme and use it for something less tacky.

14 February 2009

Joerg Jaspert: Lenny Release

I just finished most of my jobs for the Lenny release. I started, together with Mark Hymers, at 10:23 UTC. This was ahead of schedule, which said 12:00 UTC, but better early than late. After disabling all our ftpmaster cronjobs and setting up our work environment and whatever else we needed, at 10:43 UTC we got the Release Team's GO. All of what we did got logged from screen, as we intend to write a dak tool for doing such releases. Anyway, I won't bore you with the detailed set of actions we had to take, way too much for now, just a few moments (all times in UTC): While writing this entry people are still doing work for the release. For example the CD building will take time; after all there are well over 500 images (and 36 live CDs). The current plan seems to be that we ftpmasters push the archive mirrors around 23:00 UTC, so it has time to sync everywhere. Some while later the CD mirror will get a push, and then the official announcement will happen. Thanks go out to everyone involved in preparing this release, be it in the past by fixing bugs, uploading packages, doing whatever was needed, as well as doing the work today.

30 November 2008

Jon Dowland: x40 suspend

Do you own a ThinkPad X40, or a similar device - specifically one which needed the S3 hacks to get the backlight back on after resume? If you aren't sure, lshal | grep quirk should tell you (if you have hal). You're looking for something like power_management.quirk.s3_bios = true (and s3_mode). Kernel 2.6.26 added a lot of quirk handling into the drivers, meaning an end to user-space hacks. pm-suspend, from the pm-utils package, assumes that all of these quirks have been resolved, but it seems that the S3 ones for my hardware have not. This means broken suspend/resume out of the box. This is fixed in a version of pm-utils to be released in a week or thereabouts, after the freeze begins. The X40 at one point seemed to be the Debian hacker's laptop of choice. I bought mine after glowing recommendations from Steve McIntyre, Stephen Gran, Mark Hymers, Amaya Rodrigo, Daniel Stone, Matthew Garrett, Rob McQueen and several others. It's one of the best pieces of hardware I've ever had the pleasure of using, and I'm concerned that we might even be going backwards in terms of supporting this device from release to release. If you do have such a device, can you please try the following:
  • install the 2.6.26 kernel
  • boot into single user mode
  • modprobe i915
  • echo mem > /sys/power/state
  • resume your laptop
Please try this and report to me whether the backlight came back on - either in the discussion page for this post, or via email (jon.backlight@alcopop.org). The kernel bug to try and get this fixed is http://bugzilla.kernel.org/show_bug.cgi?id=10985, but it might not be possible to backport a fix for this to the Lenny kernel. The bug against pm-utils regarding assuming 2.6.26 is quirk-free is https://bugs.freedesktop.org/show_bug.cgi?id=16453 (Debian bug http://bugs.debian.org/488144).
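
For convenience, here is the same test as a small script to run as root from single-user mode; purely a sketch of the steps above:

  #!/bin/sh
  lshal | grep -i quirk            # check which suspend quirks hal reports for this machine
  modprobe i915                    # load the Intel graphics driver
  echo mem > /sys/power/state      # suspend to RAM; wake the machine afterwards
  # after resume, note whether the backlight came back on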

17 August 2008

Wouter Verhelst: Hardware test

So something Bdale came up with this morning at breakfast (and during his talk too, apparently, which I didn't attend) was this idea of a "hardware compatibility test": a bootable image that hardware vendors could run to see whether their hardware would run Debian. Apparently all the other vendors have it, too, and the lack of it may be one of the main reasons why Debian isn't currently supported by a whole lot of hardware vendors yet. Such a test wouldn't have to do all that much; just boot the machine (if it can) with the kernel that would be used for the installer and the system that is eventually installed, then run through a check of the available hardware, and finally come up with some kind of score that tells the vendor whether their hardware is supported at all, or if not, what they could do to improve the score. It seemed to me (and to Mark Hymers, who was seated to my left) that this is something that could be done fairly easily with a slightly modified version of debian-installer. It would be okay if there was a different version for every Debian release we do; and I tend to think it's not even going to be a problem if the first time we don't make the release, but release such a test slightly after the release of Debian. Having such a test would certainly give hardware vendors an incentive to improve their Debian support, especially if it's a simple thing that they can have some summer student run over all their hardware, who'd then store the results in a database of sorts. Or so. Additionally, if we do this right, we could diversify between 'a wireless driver that will probably work if you load ndiswrapper or something similar' (which would get a score that tells them 'yes, it will run Debian', but no perfect score) and 'a wireless driver that works with free drivers and no additional firmware required' (which would get a perfect score if there's nothing more). By doing this, we would put Debian's collective driving force behind a move to better and more Linux-friendly hardware, which can only be a good thing. Bdale seems to think this could be an industry-changing thing, and I can't think of a reason why it wouldn't work. Except for one: I'm not sure I'll have the time to work on this myself; and even if I were sure of that, it's not going to be something that I can do all by myself -- other people would have to run the test on their hardware and communicate the results. So here's a request: is anyone else interested in this kind of thing? It doesn't sound like something too complicated; and given my business, it surely is something I have a personal interest in, so I will try to make time for it. But I can't do it alone...

9 August 2008

Wouter Verhelst: Scattergraphs

Debcamp is nice. Talking to people in person rather than having to do IRC to get anything done is a breeze, really. One thing I haven't been particularly happy about is the fact that there aren't any good statistics on what the build capacity of an architecture is; i.e., you don't know you're low on build capacity until you suddenly start backlogging and it's too late. So, since Jörg Jaspert, Mark Hymers, and Steve Gran were there, I asked them whether it'd be possible to add some data to the projectb database about installing a binary. One additional column with a default now() option later, and we now have interesting data that I can make statistics of. Of course, since that column was only added four days ago, during one of which the link to ftp-master.debian.org was down (meaning, no uploads), there's not much data there yet; but I can start making some graphs now. Of course, the hard part is trying to figure out how to present the data in a manner that one can actually get useful conclusions from, which is harder than it seems. Anyhow, I'm trying. For now, on my public space on merkel, you can find a (mostly empty) per-architecture scattergraph relating the size of a binary package to the time between the dinstall of a source package and that of its binary for that particular architecture. If the time between a binary upload and a source upload is "often" more than a day for small packages, then it's obvious that the architecture in question is having problems keeping up. It's not very useful (yet), but the graphs will be updated on a daily basis, so that hopefully one or two months from now they will contain useful data. Note that the scales are fixed to go from 0.1 day to 1000 days, and from 1k packages to 512M packages, so as to make sure one can actually compare graphs. I'm also trying to come up with a useful line-based graph, so that it is possible to compare architectures over time against each other. This will probably involve something with averages and standard deviations or some such, I guess. Not sure yet.
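
For the record, the schema change hinted at above ("one additional column with a default now() option") is conceptually tiny; something like the following, though the table and column names here are hypothetical rather than the real projectb schema:

  psql projectb -c "ALTER TABLE binaries ADD COLUMN install_date timestamp NOT NULL DEFAULT now();"

With an upload timestamp on the source side and an installation timestamp on each binary, the per-architecture lag plotted in the scattergraphs is simply the difference between the two.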

7 July 2008

Joerg Jaspert: New FTP Assistant

And as I just announced, we added a new FTP Assistant today! My condolence^Wcondo^Wcongratulations to Mark Hymers, who is now officially helping with NEW, removals and override changes. We still have two more volunteers in training; let's see if we can add both of them at a later point too. And maybe we get some more volunteers at some point. See our “Bits” mail for more details on what's required. But let me add one more important point: good IRC connectivity. We dislike people who disconnect or aren't even on IRC. There are teams where that works great; we aren't one of those. :)

10 May 2008

Joerg Jaspert: The annoyance continues: Ftpmaster, yet again

Lalala, it’s me again. Don’t shout, it’s not a long post! :) What did we do since I last dared to post with my Ftpmaster head on? (Yes, I should put a summary of all my blog posts into a “Bits from” mail sometime.)

23 November 2007

Joerg Jaspert: Today's work

Work. No, not at my workplace, for Debian. I decided to modify the ftp-master webpage a little bit, just adding some css magic (and the ability to conform with that xhtml1.0 strict thing out there). (And to be honest - about 95% of the “work” needed for this was done by Mark Hymers…) But that only happened after something else, which took away most of my time today (and yesterday and the day before). Namely - a nice overview of pending removals. The main purpose of that overview is of course “ftpteam members doing removals can take it for their work”, but I think it may also be nice for users to look there, instead of wading through all the bug reports against ftp.debian.org, which include bugs about totally different topics than removals. (And then I am lazy and want a commandline to paste… :) ) The design for that page was done by Martin Ferrari AKA Tincho (you don’t want to see what my design looked like… :) ), and the idea for it is from Jeroen van Wolffelaar, who wrote the first version of it in Perl (it is now implemented in Ruby). The removals html page is regenerated every hour, using the SOAP interface to the BTS, so at least the bugs information should be recent. The other information might have errors in it; don’t trust it too far. It should be right, but then - it’s only informational. :) And now: let’s do some of those removals!

30 September 2007

DebConf team: DebConf maintenance (Posted by Joerg Jaspert)

In the last few days we (as in DebConf admins Steve Gran, Mark Hymers and myself) have been doing some DebConf maintenance work. Some time ago one of DebConf’s permanent sponsors, ByteMark, sponsored a new machine, krabappel, which we use as a replacement for the older machine we have from them, cmburns. It was necessary, as good old cmburns had a lot of stuff running and so was always pretty loaded. Imagine it running most of our websites, the main mail and DNS server, and then also having 4 vserver instances on it. Those vservers themselves were also running things that use some resources, like our wiki and the main DebConf7 site, the PentaBarf test host and our LedgerSMB instance. While it was long decided to move stuff out, we just had to find time to do it. This weekend seems to be it: everything is migrated except cmburns itself, as that needs a fix in the reverse DNS entries for the new /28 we got. We are using xen-tools to make installation of new images easy. For that I wrote a script that is used as a role script, automating all the various setup tasks that one can automate easily, which means that it takes about 5 minutes to get a complete new XEN domain up and running (a sketch of such an invocation is at the end of this post). (Add some minutes to add it to nagios2, munin and our mail setup, and to sync the user database.) Mark and Steve also did another migration: gallery.debconf.org is now using gallery2. It still needs a nice layout designed to look similar to our normal DebConf website, but that shouldn’t be too hard to get. While they were at it they also integrated gallery.debian.net, so that it is back up again. Feel free to use both of them! gallery.debconf.org offers space for all DebConf related pictures, be it the main yearly DebConf (and related events) or regional DebConfs, like the one planned for Panama soon. gallery.debian.net is a place where pictures from all Debian related events/parties/whatever can be put. Basically, whenever people working on Debian meet (like a BSP, a work meeting, etc.), feel free to put pictures up there. Just put them into a folder matching the year it happened in… The basic goal is to get a huge set of pictures from people involved in Debian.
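
Back to the xen-tools point above: with the role script in place, creating a new guest boils down to a single command, something like the following sketch (hostname, role name, distribution and sizes are invented for illustration):

  xen-create-image --hostname=wiki.debconf.org --dist=etch \
      --size=10Gb --memory=512Mb --dhcp --role=debconf

The role script then runs against the freshly created image, which is where the easily automatable setup lives; the remaining manual minutes are the nagios2, munin, mail and user database steps mentioned above.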

24 June 2007

Dirk Eddelbuettel: New OpenMPI packages

Debian has had an OpenMPI package since early last year, when Florian Ragwitz made some initial stabs at packaging. The package has seen a number of NMUs and patches since then, but was generally gathering cobwebs ... which was too bad, because OpenMPI seems to have some wind behind its sails upstream. Unfortunately, little of that got packaged for Debian. After some discussions on and around the debian-science list, a new maintainer group was formed on Alioth under the pkg-openmpi name. Tilman Koschnick (who had already helped Florian with patches), Manuel Prinz, Sylvestre Ledru and myself have gotten things into good enough shape in a reasonably short time. And I have just uploaded a lintian-clean package set openmpi_1.2.3-0 to Debian, where it is expected to sit in the NEW queue for a little bit before moving on to the archive proper. The changelog entry (which will appear here eventually) shows twelve bugs closed. Our plan is to provide a stable and well-maintained MPI implementation for Debian. OpenMPI is the designated successor to LAM, and apart from MPICH2, everybody seems to have thrown their weight behind OpenMPI. So we will try to work with the other MPI maintainers to come up with sensible setups, alternatives priorities and the like. If you are interested in MPI and would like to help, come join us at the Alioth project pkg-openmpi. Last but not least, thanks to Florian for the initial packaging, and to Clint Adams, Mark Hymers, Andreas Barth, and Steve Langasek (twice even) for NMUs.