
23 August 2020

Enrico Zini: Doing things /together/

Here are the slides of the talk Ulrike and I gave: Doing things /together/.
They collect our thoughts about the cooperation aspects of doing things together. Sometimes in Debian we work together with others, and sometimes we are a number of people who work alone and happen to upload their work to the same place. In times when we have needed to take important decisions together, this distinction has become crucial, and some of us may have found that we were not as good at cooperation as we would have thought. This talk is intended for everyone who is part of a larger community. We show concepts and tools that we think could help understand and shape cooperation.
Video of the talk: The slides have extensive notes: you can use View Notes in LibreOffice Impress to see them. Here are the Inkscape sources for the graphs. Here are links to resources quoted in the talk. In the Q&A, pollo asked:
How can we still have a good code review process without making it a "you need to be perfect" scenario? I often find picky code reviews help me write better code.
Ulrike wrote a more detailed answer: Code reviews: from nitpicking to cooperation

27 August 2017

Carl Chenet: The Importance of Choosing the Correct Mastodon Instance

As a reminder, Mastodon is a new decentralized social network, based on free software, which is rapidly gaining users (there are already more than 1.5 million accounts). I created my account in June, quickly became an addict, and have already written several tools for this network: Feed2toot, Remindr and Boost (mostly written in Python).

Now, with all this experience, I have to stress the importance of choosing the correct Mastodon instance.

Some technical reminders on how Mastodon works

First, let's quickly clarify something about the decentralized part. In Mastodon, decentralization is achieved through a federation of dedicated servers, called "instances", each one with a completely independent administration. Your user account is created on one specific instance. You have two choices:

  • Create your own instance, which requires advanced technical knowledge.
  • Create your user account on a public instance, which is the easiest and fastest way to start using Mastodon.

You can move your user account from one instance to another, but you have to follow a special procedure which can take quite a while, depending on your appetite for technical manipulation and the number of followers you'll have to warn about your change. Essentially, you'll have to create another account on the new instance and import three lists: the accounts you follow, the accounts you have blocked, and the accounts you have muted.
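As a rough sketch of that import step in Python (the list exports are assumed here to be plain CSV files with one user@instance address per row; the file name is made up, and the actual follow/block/mute calls depend on your new instance's API or web UI):

```python
import csv

def load_account_list(path):
    """Read one exported Mastodon list: one user@instance address
    per row, skipping blank lines and duplicate entries."""
    seen, accounts = set(), []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if row and row[0] and row[0] not in seen:
                seen.add(row[0])
                accounts.append(row[0])
    return accounts

# The same loader works for all three exports, e.g.:
# for account in load_account_list("follows.csv"):
#     ...  # re-follow it from the new account
```

The loop at the bottom is deliberately left as a comment, since how you re-apply the list is instance-specific.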

Several technical and human factors in this process will interest us.

A good technical administration for your instance

As a social network, Mastodon is truly decentralized, with more than 1.5 million users on more than 2350 existing instances. As such, the most common approach is to create an account on an open instance; creating your own instance is way too difficult for the average user. Yet using an open instance creates a strong dependence on the technical administrator of the chosen instance.

The technical administrator has to deal with several obligations to ensure service continuity, with high-quality hardware and regular backups. All of this has a price, both in money and in time. Regarding the time factor, it is better to choose an administration team over an individual, as life events can change anyone's interests quite fast. For example, Framasoft, a French association dedicated to promoting Free Software, offers its own Mastodon instance, named Framapiaf. The creator of the Mastodon project also offers a quite solid instance, Mastodon.social (see below).

Regarding the money factor, many administrators of instances with a large number of users are currently asking for donations via Patreon, as hosting or renting an instance server costs money.

Mastodon.social, the first instance of the Mastodon network

The Ideological Trend Of Your Instance

While anybody could have guessed the previous technical points after the recent registration explosion on the Mastodon network, the following point took almost everyone by surprise. Little by little, different instances have been displaying their "culture", their protest actions, and their propaganda on the network.

As the instance administrator has full power over their instance, they can block it from interacting with certain other instances, or ban their instance's users from any interaction with other instances' users.

With everyone focused on the advantages of a federation of instances, this partial independence of some instances from the federation came as a huge surprise. One of the most recent examples was when the Unixcorn.xyz administrator banned their users from reading Aeris' account, which was hosted on his own instance. It was a cataclysm with several consequences, which I've named #AerisGate, as it revealed the different views on moderation and on its reception by various Mastodon users.

If you don't manage your own instance, when you choose the one where to create your account, make sure that the content you plan to toot is within the rules and compatible with the ideology of said instance's administrator. Yes, I know, it may seem surprising, but as stated above, by joining a public instance you become dependent on someone else's infrastructure, run by someone who may have an ideological way of conceiving their Mastodon hosting service. As such, if you're a nazi, for example, don't open your Mastodon account on a far-left LGBT instance. Your account wouldn't stay open for long.

The moderation rules are described in the about/more page of the instance, and may contain ideological elements.

To ease the process for newcomers, it is now possible to use a great tool to select which instance would be best to host your account.

Remember that, as stated above, Mastodon is decentralized, and as such there is no central authority you can appeal to if you have a conflict with your instance administrator. And nobody can force said administrator to follow their own rules, or not to change them on the fly.

Think Twice Before Creating Your Account

If you want to create an account on an instance you don't control, you need to check two things: the long-term availability of the instance hosting service, often tied to the administrator or administration group of said instance, and the ideological orientation of the instance. With these two elements checked, you'll be able to let your Mastodon account grow peacefully, without fearing an outage of your instance, or simply finding your account blocked one morning because it doesn't align with your instance's ideological line.

In Conclusion

To help me get involved in free software and write articles for this blog, please consider a donation through my Liberapay page, even if it's only a few cents per week. My Bitcoin and Monero addresses are also available on this page. Follow me on Mastodon. Translated from French to English by Stéphanie Chaptal.

30 December 2016

Antoine Beaupré: My free software activities, November and December 2016

Debian Long Term Support (LTS) These were my 8th and 9th months working on Debian LTS, the effort started by Raphael Hertzog at Freexian. I had trouble resuming work in November, as I had taken a long break during the month and only started looking at issues during its last week.

Imagemagick, again I have, again, spent a significant amount of time fighting the ImageMagick (IM) codebase. About 15 more vulnerabilities were found since the last upload, which resulted in DLA-756-1. In the advisory, I unfortunately forgot to mention CVE-2016-8677 and CVE-2016-9559, something that was noticed by my colleague Roberto after the upload... More details about the upload are available in the announcement. When you consider that I worked on IM back in October, which led to an upload near the end of November covering around 80 more vulnerabilities, it doesn't look good for the project at all. Of the 15 vulnerabilities I worked on, only 6 had CVEs assigned, and I had to request CVEs for the other 9 vulnerabilities plus 11 more that were still unassigned. This led to the assignment of 25 distinct CVE identifiers, as a lot of issues were found to be distinct enough to warrant their own CVEs. One could also question how many of those issues affect the fork, GraphicsMagick. A lot of the vulnerabilities were found through fuzzing searches that may not have been run against GraphicsMagick. It seems clear to me that a public corpus of test data should be available to test regressions and cross-project vulnerabilities. It's already hard enough to track issues within IM itself; I can't imagine what it would be like for the fork to keep track of those issues, especially since upstream doesn't systematically request CVEs for the issues they find, a questionable practice considering the number of issues we all need to keep track of.

Nagios I have also worked on the Nagios package and produced DLA 751-1, which fixed two fairly major issues (CVE-2016-9565 and CVE-2016-9566) that could allow remote root access under certain conditions. Fortunately, the restricted permissions set up by default in the Debian package limited both exploits to information disclosure, and to privilege escalation only if the debug log is enabled. This says a lot about how proper Debian packaging can help limit the attack surface of certain vulnerabilities. It was also "interesting" to have to re-learn dpatch to add patches to the package: I regret not converting it to quilt, as the operation is simple and quilt is so much easier to use. People new to Debian packaging may be curious to learn about the staggering number of patching systems historically used in Debian. On that topic, I started a conversation about how much we want to reuse existing frameworks when we work on those odd packages, and the feedback was interesting. Basically, the answer is "it depends"...

NSS I had already worked on this package in November and continued the work in December. Most of the work was done by Raphael, who fixed a lot of issues with the test suite. I tried to wrap this up by fixing CVE-2016-9074, the build on armel, and the test suite. Unfortunately, I had to stop again because I ran out of hours and the fips test suite was still failing, but fortunately Raphael was able to complete the work with DLA-759-1. As things stand now, the package is in better shape than in other suites, as the tests (Debian bug #806639) and autopkgtest (Debian bug #806207) are still not shipped in the sid or stable releases.

Other work For the second time, I forgot to formally assign myself a package before working on it, which meant that I wasted part of my hours working on the monit package. Those hours, of course, were not counted in my regular hours. I still spent some time reviewing mejo's patch to ensure it was done properly, and it turned out we had both made similar patches working independently, always a good sign. As I reported in my preliminary November report, I have also triaged issues in libxml2, ntp, openssl and tiff. Finally, I should mention my short review of the phpMyAdmin upload, among the many posts I sent to the LTS mailing list.

Other free software work One reason why I had so much trouble getting paid work done in November is that I was busy with unpaid work...

manpages.debian.org A major time hole for me was trying to tackle the manpages.debian.org service, which had been offline since August. After a thorough evaluation of the available codebases, I figured the problem space wasn't so hard and it was worth trying an implementation from scratch. The result is a tool called debmans. It took, obviously, way longer than I expected, as I experimented with Python libraries I had been keeping an eye on for a while. For the command-line interface, I used the click library, which is really a breeze to use, but a bit heavy for smaller scripts. For a web search service prototype, I looked at flask, which was also very interesting, as it is light and simple enough that I could get started quickly. It also, surprisingly, fares pretty well in the global TechEmpower benchmarking tests. Those interested in those tools may want to look at the source code, in particular the main command (using an interesting pattern itself, __main__.py) and the search prototype. Debmans is the first project for which I have tried the CII Best Practices Badge program, an interesting questionnaire to review best practices in software engineering. It is an excellent checklist I recommend every project manager and programmer get familiar with. I still need to complete my work on Debmans: as I write this, I couldn't get access to the new server the DSA team set up for this purpose. It was a bit of a frustrating experience to wait for all the bits to fall into place while I had a product ready to test. In the end, the existing manpages.d.o maintainer decided to deploy the existing codebase on the new server while the necessary dependencies are installed and accesses are granted. There's obviously still a bunch of work to be done for this to run in production, so I have postponed all this work to January.
My hope is that this tool can be reused by other distributions, but after talking with Ubuntu folks, I am not holding my breath: it seems everyone has something that is "good enough" and that they don't want to break it...

Monkeysign I spent a good chunk of time giving the Monkeysign project a kick, with the 2.2.2 release, which features contributions from two other developers, which may be a record for a single release. I am especially happy to have adopted a new code of conduct: it has been an interesting process to adapt a code of conduct for such a relatively small project. Monkeysign is becoming a bit of a template for how to do things properly in my Python projects: documentation on readthedocs.org including a code of conduct, support and contribution information, and so on. Even though the code now looks a bit old to me and I am embarrassed to read certain parts, I still think it is a solid project that is useful for a lot of people. I would love to have more time to spend on it.

LWN publishing As you may have noticed if you follow this blog, I have started publishing articles for the LWN magazine, filed here under the lwn tag. It is a way for me to actually get paid for some of my blogging work that used to be done for free. Reports like this one, for example, take up a significant amount of my time and are done without being paid. Converting parts of this work into paid work is part of my recent effort to reduce the amount of time I spend on the computer. A funny note: I always found the layout of the site to be a bit odd, until I looked at my articles posted there in a different web browser, which didn't have my normal ad blocker configuration. It turns out LWN uses ads, Google ones too, which surprised me. I definitely didn't want to publish my work under banner ads, and will never do so on this blog. But it seems fair that, since I get paid for this work, there is some sort of revenue stream associated with it. If you prefer to see my work without ads, you can wait for it to be published here, or become a subscriber, which allows you to get rid of the ads on the site. My experience with LWN has been great: they're great folks, and very supportive. It's my first experience with a real editor, and it really pushed me to improve my writing and produce better articles than I normally would here. Thanks to the LWN folks for their support! Expect more of those quality articles in 2017.

Debian packaging I have added a few packages to the Debian archive:
  • magic-wormhole: easy file-transfer tool, co-maintained with Jamie Rollins
  • slop: screenshot tool
  • xininfo: utility used by teiler
  • teiler (currently in NEW): GUI for screenshot and screencast tools
I have also updated sopel and atheme-services.

Other work Against my better judgment, I worked again on the borg project. This time I tried to improve the documentation, after a friend asked me for help on "how to make a quick backup". I realized I didn't have any good primer to send regular, non-sysadmin users to, and figured that, instead of writing a new one, I could improve the upstream documentation instead. I generated a surprising 18 commits of documentation during that time, mainly to fix display issues and streamline the documentation. My final attempt at refactoring the docs eventually failed, unfortunately, again reminding me of the difficulty I have collaborating on that project. I am not sure I succeeded in making the project more attractive to non-technical users, but maybe that's okay too: borg is a fairly advanced project and not currently aimed at such an audience. This leads to yet another project I am thinking of creating: a meta-backup program like backupninja that would implement the vision laid out by liw in his "A vision for backups in Debian" post, which was discarded by the Borg project. GitHub also tells me that I have opened 19 issues in 14 different repositories in November. I would like to particularly bring your attention to the linkchecker project, which seems to be dead upstream and for which I am looking for collaborators in order to create a healthy fork. Finally, I started reviving the stressant project and changing all my passwords; stay tuned for more!

25 October 2016

Laura Arjona Reina: Rankings, Condorcet and free software: Calculating the results for the Stretch Artwork Survey

We had 12 candidates for the Debian Stretch artwork, and a survey was set up to let people vote for the one they preferred.

The survey ran on my LimeSurvey instance, surveys.larjona.net. LimeSurvey is nice free software with a lot of features. It provides a Ranking question type, and it was very easy to let people vote in the Debian style (Debian uses the Condorcet method in its elections).

However, although LimeSurvey offers statistics and even graphics to show the results of many types of questions, its output for the Ranking type is not useful, so I had to export the data and use another tool to find the winner.
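For context, the Condorcet winner is the candidate who beats every other candidate in head-to-head comparisons, and there is not always one. A minimal sketch in Python (candidate names are placeholders, and every ballot is assumed to rank all candidates):

```python
def condorcet_winner(ballots):
    """Each ballot is a list of candidates, most preferred first.
    Returns the candidate who wins a strict majority in every
    pairwise comparison, or None when there is no Condorcet
    winner (e.g. a cycle)."""
    candidates = set(c for ballot in ballots for c in ballot)
    for cand in candidates:
        if all(
            # ballots ranking cand above other must be a strict majority
            sum(b.index(cand) < b.index(other) for b in ballots) * 2 > len(ballots)
            for other in candidates - {cand}
        ):
            return cand
    return None

# Three toy ballots ranking options A, B and C:
print(condorcet_winner([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]))  # A
```

Real tools like the ones below also handle partial rankings and tie-breaking methods (Schulze, Smith), which this sketch deliberately leaves out.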
Export the data from LimeSurvey
I've created a read-only user so you can visit the survey site. With this visitor account you can explore the survey questionnaire, its results, and export the data.
Username: stretch
Password: artwork
First attempt: the quick and easy way (and nonfree, I guess)
There is an online tool to calculate the Condorcet winner, http://www.ericgorr.net/condorcet/
The steps I followed to feed the tool with the data from LimeSurvey were these:
1.- Went to the admin interface of LimeSurvey, selected the stretch artwork survey, "Responses and statistics", "Export results to application".
2.- Selected "Completed responses only", "Question codes", "Answer codes", and exported to CSV (results_stretch1.csv).
3.- Opened the CSV with LibreOffice Calc, and removed these columns:
id submitdate lastpage startlanguage
4.- Removed the first row containing the headers and saved the result (results_stretch2.csv).
5.- In the command line:
sort results_stretch2.csv | uniq -c > results_stretch3.csv
6.- Opened results_stretch3.csv with LibreOffice Calc, merging delimiters when importing.
7.- Removed the first column (blank) and added a column between the numbers and the first ranked option, filling that column with the value ":". Saved (results_stretch4.csv).
8.- Opened results_stretch4.csv with my preferred editor, searched and replaced ",:," with ":", and after that searched and replaced "," with ">". Saved the result (results_stretch5.csv).
9.- Went to http://condorcet.ericgorr.net/, selected "Condorcet basic, tell me some things", and pasted the contents of results_stretch5.csv there.
The results are in results_stretch1.html
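Steps 5 to 8 above can also be done in a few lines of Python instead of the sort/uniq/spreadsheet dance. This sketch assumes the layout produced by step 4: one ranked response per row, option codes separated by commas, no header:

```python
import csv
from collections import Counter

def ballots_to_condorcet_lines(csv_path):
    """Collapse identical ranking rows and format each distinct
    ranking as 'count:first>second>...', the format the online
    solver accepts."""
    with open(csv_path, newline="") as f:
        rankings = [tuple(cell for cell in row if cell) for row in csv.reader(f)]
    counts = Counter(r for r in rankings if r)
    return ["%d:%s" % (n, ">".join(r)) for r, n in sorted(counts.items())]
```

The returned lines correspond to results_stretch5.csv and can be pasted straight into the solver.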
But where is the source code of this Condorcet tool?
I couldn't find the source code (nor the license) of the solver by Eric Gorr.
The tool is mentioned in http://www.accuratedemocracy.com/z_tools.htm, where other tools are listed; when a tool is libre software, that is noted, but not in this case.
There, I found another tool, VoteEngine, which is open source, so I tried with that.
Second attempt: VoteEngine, a free open source tool written in Python
I used a modification of voteengine-0.99 (the original zip is available at http://vote.sourceforge.net/); a diff with the changes I made (basically, Numeric -> numpy and Int -> int, in order to make it work in Debian stable) is here.
Steps 1 to 4 are the same as in the first attempt.
5.- Sorted the 12 different options alphabetically, and assigned a letter to each one (saved the assignments in a file called stretch_key.txt).
6.- Opened results_stretch2.csv with my favorite editor, and searched and replaced the names of the different options with their corresponding letters from the stretch_key.txt file. Then searched and replaced "," with " " (a space), and saved the results into results_stretch3_voteengine.csv.
7.- Copied the input.txt file from voteengine-0.99 into stretch.txt and edited the options to our needs. Pasted the contents of results_stretch3_voteengine.csv at the end of stretch.txt.
8.- In the command line:
./voteengine.py < stretch.txt > winner.txt
(winner.txt contains the results for the Condorcet method.)
9.- I edited stretch.txt again to change the method to Schulze and recalculated the results, and again with the Smith method. The winner with all 3 methods is the same. I pasted the summary of these 3 methods (Schulze and Smith provide a ranked list) in stretch_results.txt.
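The substitution in steps 5 and 6 is also easy to script. In this sketch the option names and the letter assignment (standing in for stretch_key.txt) are placeholders:

```python
def to_voteengine_rows(lines, key):
    """Replace each option name with its single-letter code from the
    key mapping and separate the codes with spaces, the ballot format
    VoteEngine reads."""
    return [" ".join(key[name] for name in line.split(",")) for line in lines]

key = {"option-one": "a", "option-two": "b", "option-three": "c"}  # placeholders
rows = to_voteengine_rows(["option-two,option-one,option-three"], key)
print(rows)  # ['b a c']
```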
If it can be done, it can be done with R
I found the algstat R package:
which includes a condorcet function, but I couldn't make it work with the data.
I'm not sure how the data needs to be shaped. I'm sure this can be done in R and the problem is me, in this case. Comments are welcome, and I'll ask a friend whose R skills are better than mine!
And another SaaS
I found https://www.condorcet.vote/ and its source code. It would be interesting to deploy a local instance to drive future surveys, but this time I didn't want to fight with PHP in order to use only the solver part, nor install another SaaS on my home server just to find that I need some other dependency or whatever.
I'll keep an eye on it, though, because it looks like a modern and active project.
Finally, devotee
Well, and which software does Debian use for its elections?
There is a git repository with devotee; you can clone it:
I found that although the tool is quite modular, it's written specifically for the Debian case (votes received by mail, GPG signed, with a quorum, and other particularities), and I was not sure I could use it with my data. It is written in Perl, which I understand less well than VoteEngine's Python.
Maybe I'll return to it, though, when I have more time, to try to put our data into the shape of a typical tally.txt file and then see if the module solving the Condorcet winner can work for me.
That's all, folks! (for now)
Comments
You can comment on this blog post in this pump.io thread.

Filed under: Tools Tagged: data mining, Debian, English, SaaS, statistics

1 March 2014

Sune Vuorela: Diploma thesis about media choice and usage in Free Software communities: I need your help

My hard-working KDE friend Mario asked me to help him get Debian people to help with his thesis. Here is what he writes: Dear Free Software contributor*, I'm currently in the process of writing my diploma thesis. I've worked hard during the last few weeks and months on a questionnaire which shall collect some data for my thesis. Furthermore, the data of this survey will be interesting for the Free Software communities as well. So please take some time, or add it to your todo list or, even better, go directly to my questionnaire and help me make a great diploma thesis and improve the Free Software community in some ways. The questionnaire takes some 20 to 30 minutes. At the end of the questionnaire you'll find a way to participate in a draw where you can even win something nice. In a first round I got the feedback that the length of the questionnaire and some of the questions (mostly the ones at the beginning, about the 12 different tasks) are quite abstract and difficult. But please try your best and take the time and brain power. The remaining part of the questionnaire [1] (after these two pages with the task questions) is quite easy and quickly done. And you have the possibility to come back to where you left off filling in the questionnaire after a shorter or longer break. And if there are any questions, feedback, or you need help, don't hesitate a moment to write me an email or ping me on IRC (freenode.net and oftc.net) as unormal. This survey will be open till Sunday, the 9th of March 2014, 23:59 UTC. Thanks to all for reading and helping, and towards the summer of 2014 you can read here what all the data you gave me showed us and where we can learn and improve. Thanks in advance and best regards
Mario Fux
* By contributor I mean not just developers but translators, artists, usability people, documentation writers and many more. Everybody who contributes in one way or the other to Free Software.

23 December 2013

Russell Coker: Political Advocacy in Clubs

One topic that often gets discussed when it's near election time is whether clubs and societies should be "political". Some organisations are limited in what they can do; for example, in some jurisdictions religious organisations can theoretically lose their tax-exempt status if they advocate for one party. In practice, any organisation that has a wide membership will have a variety of political views represented, so a policy of directly supporting one candidate or party is likely to lose some members. A common practice among some clubs is to send questionnaires to parties before elections. This might cause a policy change in the parties that do whatever it takes to get votes (as opposed to the parties who devise policy based on principle). But it also provides members a comparison of the parties on the basis of the criteria that matter to the club. I think that organisations such as Linux Australia [1] and the Linux Users of Victoria [2] should send such questionnaires and publish an analysis of the results. I previously suggested a few questions that could be asked [3]; the last one received some negative comments for being too tabloid, but the others got some agreement. Obviously there would need to be some discussion about which questions are in scope and how they should be asked. Such a discussion would take a while and would need to be started well before an election is called; I think if we start now we should be able to get it done before the next federal election. There is one Australian political party that has a consistent record of having IT policies in line with the general aims of Linux Australia, and which also has policies that meet the social standards generally agreed on by most of the membership (e.g. opposing discrimination).
But I know that there are some members of the Linux community who advocate various forms of discrimination and would vote accordingly, so advocating for that party would get some negative reactions. But if someone wants to vote for a party that advocates discrimination against minority groups, I don't think there's any harm in providing information that lets them vote for a pro-discrimination party that has a reasonable IT policy. In any case, it doesn't seem likely that we can get most of the membership of an organisation like Linux Australia to agree on which parties are unacceptable, so sending a questionnaire to all parties avoids that debate. I would like to see this sort of thing done by LUGs for all state and territory elections. I will be involved in the process with LUV for the Victorian elections, but I have to just hope that my blog posts inspire people in other states and territories; if anyone has already started on this then please let me know. I will also be involved with getting this done for the federal elections with Linux Australia; hopefully this post will help get people interested in that.

16 March 2013

Lucas Nussbaum: Ideas from the -vote@ DPL election discussions

After one week of campaigning on -vote@, many subjects have been mentioned already. I'm trying here to list the concrete, actionable ideas I found interesting (which does not necessarily mean that I agree with all of them) and that may be worth further discussion at a less busy time. There's obviously some amount of subjectivity in such a list, and I'm also slightly biased ;) . Feel free to point out missing ideas or references (when an idea appeared in several emails, I've generally tried to use the first reference). On the campaign itself, and having general discussions inside Debian: On getting new users and contributors to Debian: Infrastructure, processes, releases: Relationships with upstreams/downstreams: This list could be moved to wiki.d.o if others find it sufficiently useful to help maintain it.

25 January 2013

Josselin Mouette: So(lusOS) I herd U liek mudslingingz?

After this announcement about the future of GNOME in Debian, I received several requests to look at SolusOS and their Consort fork of GNOME. On paper, this doesn't look like too bad an idea, since they only forked gnome-panel and metacity, the two key components that are no longer maintained upstream. Still, starting a plain fork at the very moment when the former maintainer decided to hand the keys to anyone who wanted them looked like a very destructive way of acting. Even if the fork gains momentum (which remains speculative), the amount of effort needed to rename packages makes you think twice before such a switch. I decided to go talk about it with them on IRC nevertheless. Mind you, they work on a GNOME fork and a Debian derivative, but deliberately use their own IRC server (you'll soon understand why). Just in case you would want to cooperate with SolusOS, you'd have to use their infrastructure. It is a euphemism to say that the conversation didn't go well. This could have ended there, but thankfully for your already widening eyes, Ikey (the charming person I had the opportunity to discuss with) made the log public, in an attempt at public shaming that would soon gather his followers, chanting out loud how the revolutionary SolusOS would quickly replace every other Linux distribution. (As a side note, of the hundreds of such claims made public in the last 10 years, only one came true, and it was from a billionaire who hired dozens of developers. Just saying.) Then you can understand the reason for the specific IRC server: it is to K-line people who don't behave with enough deference to Ikey, because a kickban is not enough. This log and the claims made in the comments look very ironic, if only for presenting a person working in a team of 10+ on the most popular desktop on one of the most popular desktop distributions as a beggar. Even more ironic when it is the person who spent the most time making GNOME Classic available and working nicely for wheezy.
So when a dozen kids go chanting that Consort is so much better than GNOME and will replace it, while it just re-does the work I already did, let me at least smile. So what is SolusOS, after all? If you want to see where this story is going, there's a place that tells you: the consortium and consort-panel repositories. Here you will find interesting things. I've stopped looking there. We're facing an ego issue, not something that can improve the desktop for our users. In the end, I'm sorry for those who genuinely thought we could do better by sharing some maintenance load with SolusOS, but it doesn't look like a big loss anyway.

27 April 2012

Ana Beatriz Guerrero Lopez: Debian in the Google Summer of Code 2012

This year our efforts have paid off: despite there being more mentoring organizations than in 2011 (175 in 2011 versus 180 in 2012), Debian got 81 submissions this year versus 43 submissions in 2011.
You can see here the graphs of applications against time from this year: 2012 The result is that this year we'll have 15 students in Debian versus 9 students last year! Without further ado, here is the list of projects and students who will be working with us this summer: If you want to know more about these projects, follow the links and ask the students (and mentors)!

28 April 2011

Raphaël Hertzog: No freeze of Debian's development, what does it entail?

The main feature of rolling is that it would never freeze. This is not without consequences.

Possible consequences

It can divert developers from working on the release. No freeze means developers are free to continue their work as usual in unstable. Will it be more difficult to release because some people will spend their time working on a new upstream version instead of fixing RC bugs in the version that is frozen? Would we lose the work of the people who do lots of NMUs to help with the release?

It makes it more difficult to cherry-pick updates from unstable. No freeze also means that unstable is going to diverge sooner from testing, and it will be more difficult to cherry-pick updates from unstable into testing. And the release team likes to cherry-pick updates that have been tested in unstable, because updates that come through testing-proposed-updates have often not been tested and thus need a more careful review.

My responses to the objections

Those are the two major objections that we'll have to respond to. Let's try to analyze them a bit more.

It's not testing vs rolling. On the first objection I would like to respond that we must not put testing and rolling/unstable in opposition. The fact that a contributor can't do his work as usual in unstable does not mean that he will instead choose to work on fixing RC bugs in testing. Probably some do, but in my experience we simply spend our time differently, either working more on non-Debian stuff or doing mostly hidden work that is then released in big batches at the start of the next cycle (which tends to create problems of its own). I would also like to argue that by giving more exposure to rolling and encouraging developers to properly support their packages in rolling, the overall state of rolling should become gradually better compared to what we're currently used to with testing.
The objection that rolling would divert resources from getting testing into a releasable shape is difficult to prove and/or disprove. The best way to get some objective data would be to set up a questionnaire and ask all maintainers. Any volunteer for that?

Unstable as a test-bed for RC bugfixes?

It's true that unstable will quickly diverge from testing and that it will be more difficult to cherry-pick updates from unstable into testing. This cannot be refuted; it's a downside given the current workflow of the release team. But I wonder if the importance of this workflow is not overdone. The reason why they like to cherry-pick from unstable is that it gives them some confidence that the update has not caused other regressions and ensures that testing is improving. But if they're considering cherry-picking an update, it's because the current package in testing is plagued by an RC bug. Supposing that the updated package has introduced a regression, is it really better to keep the current RC bug compared to trading it for a new regression? It sure depends on the precise bugs involved, and that's why they prefer to know up-front about the regression instead of making a blind bet. Given this, I think we should use testing-proposed-updates (tpu) as a test-bed for RC bug fixes. We should ask beta-testers to activate this repository and to file RC bugs for any regression. And instead of requiring a full review by a release manager for all uploads to testing-proposed-updates, uploads should be auto-accepted provided that they do not change the upstream version and that they do not add/remove binary packages. Other uploads would still need manual approval by the release managers. On top of this, we can also add an infrastructure to encourage peer reviews of t-p-u uploads so that reviews become more opportunistic instead of systematic. Positive reviews would help reduce the aging required in t-p-u before a package is accepted into testing.
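For a beta-tester, activating the repository amounts to a small APT configuration change. A minimal sketch, assuming the standard mirror layout and the usual suite name (adjust the mirror URL for your location); the pin keeps t-p-u below testing so its packages are only installed on explicit request:

```
# /etc/apt/sources.list -- add testing-proposed-updates next to testing
deb http://ftp.debian.org/debian testing main
deb http://ftp.debian.org/debian testing-proposed-updates main

# /etc/apt/preferences -- pin t-p-u below testing, so its packages are
# only pulled in on request, e.g.:
#   apt-get install -t testing-proposed-updates <package>
Package: *
Pin: release a=testing-proposed-updates
Pin-Priority: 99
```

With a priority below 100, an already-installed testing version is never silently replaced by the t-p-u one, which matches the "opt-in beta-testing" idea above.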
This changes the balance by giving a bit more freedom to maintainers but still keeps the safety net that release managers need to have. It should also reduce the overall amount of work that the release team has to do.

Comments welcome

Do you see other important objections besides the two that I mentioned? Do you have other ideas to overcome those objections? What do you think of my responses? Does your experience contradict or confirm my point of view?


17 February 2011

Raphaël Hertzog: People behind Debian: Maximilian Attems, member of the kernel team

Maximilian, along with the other members of the Debian kernel team, has the overwhelming job of maintaining the Linux kernel in Debian. It's one of the largest packages and certainly one where dealing with bug reports is really difficult, as most of them are hardware-specific and thus difficult to reproduce. He's very enthusiastic and energetic, and does not fear criticizing when something doesn't please him. You'll see. My questions are in bold, the rest is by Maximilian.

Who are you?

My name is Maximilian Attems. I am a theoretical physicist in the last year of my PhD at the Technical University of Vienna. My main research area is the early phase of a Quark-Gluon Plasma as produced in heavy ion collisions at the LHC at CERN. I am developing simulations that take weeks on the Vienna Scientific Cluster (in the TOP 500 list). The rest of the lab is much less fancy and boils down to straight Intel boxes without any binary blobs or external drivers (although lately we add radeon graphics for decent free 3D). Mathematica and Maple are the rare exceptions to the many dev tools of Debian (LaTeX, editors, git, IDEs, Open MPI, ..) found at the institute, as those are unfortunately yet unmatched in Free Software for symbolic computations. The lab mostly runs a combination of Debian stable (testing starting from freeze) for desktops and oldstable/stable for servers. Debian has been in use for more than 10 years, so people in the institute know some ups and downs of the project. Newcomers like my room neighbors are always surprised how functional a free Debian desktop is. :)

What's your biggest achievement within Debian?

Building lots and lots of kernels, together with a growing uptake of the officially released linux images. I joined the Debian kernel team shortly after Herbert Xu departed. I had been upstream maintainer of the linux-2.6 janitor project for almost a year, brewing hundreds of small cleanups with quilt in a tree named kjt for early linux-2.6.
In Debian we had lots of fun in sorting out the troubles that the long 2.5 freeze had imposed: meaning we were sitting on a huge diverging monolithic semi-good patchset. It was great fun to prepare 2.6.8 for Sarge with a huge team enthusiastic about shipping something real close to mainline. (You have to imagine that back then you had no stable or longterm release nor any useful free tools like git. This involved passing patches around, hand-editing them and seeing what the result does.) From the Sarge install reports a common pattern emerged: the current Debian early userspace was causing lots of boot failures. This motivated me to develop an alternative using the new upstream initramfs features. So I got involved in early userspace. Thanks to a large and active development team, initramfs-tools got a nice ecosystem. It still tries to be as generic and flexible as possible and thus gains many nice features. Also H. Peter Anvin (hpa) gave me the official co-maintenance of klibc. klibc saw uptake and good patches from Google in the last 2 years. I am proud that the early userspace is working out fairly well these days, meaning you can shuffle discs around and see your box boot. Later on we focused on 2.6.18 for Etch, which turned out to be a good release and was picked up by several other distributions. Only very much later would we see such a sync again. With 2.6.26 for Lenny we got somewhat unlucky as we just missed the new longterm release by one release. We also pushed for another update very late (during freeze) in the release cycle, which turned out to semi-work as too many things depend on linux-2.6. For Squeeze, 2.6.32 got picked thanks to discussions at the Portland Linux Plumbers conference, and it turned out to be a good release picked up by many distributions and external patchsets. The long-term support is going very well. Greg KH is doing a great job in collecting various needed fixes for it.
Somehow we had hoped that the Squeeze freeze would start sooner and that the freeze duration would be shorter, since we were ready for a release starting from the actual freeze on. The only real big bastard on the cool 2.6.32 sync is Red Hat. Red Hat Enterprise 6.0 is shipping linux-2.6 2.6.32 in obfuscated form. They released their linux-2.6 as one big tarball, clashing with the spirit of the GPL. One can only mildly guess from the changelog which patches got applied. This is in sharp contrast to any previous Red Hat release and has not yet generated the sharp and snide comments in the press it deserves. Red Hat should really step back and not make such stupid management moves. Next to them even the semi-maintained Oracle Unbreakable 2.6.32 branch looks better: it is git-fetchable.

What are your plans and those of the kernel team for Debian Wheezy?

Since 2.6.32 many of the used patches have landed upstream or are on the way (speakup, Kbuild Debian-specific targets, ..). A proper vfs-based unionfs is something we'd be looking forward to. We haven't yet picked the next upstream release we will base Wheezy on, so currently we can happily jump to the most recent ones. There are plans for better interaction with Debian Installer thanks to generating our udebs properly in the linux-2.6 source itself. Also we are looking forward to using git as the maintenance tool. We'd hope that this will also allow for even better cross-distribution collaboration. Concerning early userspace, I plan to release an initramfs-tools with more generic userspace for the default case and finally also a klibc-only one for embedded or tuning cases.

What do you like most in Debian?

For one thing I do like the 2-year release cycle. It is not too long, so you don't end up with completely outdated software, and on the other hand it gives enough time to really see huge progress from release to release. Also at my institute the software is recent enough without too much admin overhead.
For servers the three years of support are a bit short, but on the manageable side. I do enjoy the testing distribution a lot. For my personal use it is very stable and thus I mainly run testing on my desktop and work boxes. (Occasionally mixing in things from sid to unbreak transitions or get newer security fixes.) Debian is independent and not a commercial entity. I think this is its main force and even more important these days. I enjoy using the Debian platform a lot at work, and in return this motivates me to contribute to Debian itself. I also like the fact that we strive for technical correctness.

Is there some recurrent problem that hinders the progress of Debian?

The New Maintainer process is a strange way to discourage people from contributing to Debian. It is particularly bureaucratic and a huge waste of time both for the applicant and his manager. It should be completely thrown overboard. One needs a more scalable approach for trust and credibility that also enhances the applicant's technical knowledge of coding and packaging. NM is currently set in stone as any outside criticism is automatically rejected. Young and energetic people are crucial for Debian and the long-term viability of the project; this is the reason why I'd consider the New Maintainer process Debian's biggest problem. (Note from Raphaël Hertzog: I must say I do not share this point of view on the New Maintainer process. I have witnessed lots of improvements lately thanks to the addition of the Debian Maintainer status, and to the fact that a good history of contribution can easily subsume the annoying Tasks & Skills questionnaire.) Another thing I miss is professional graphics input, both for the desktop theme and the website. I know that effort has been put in there lately and it is good to see movement, but the end result is still lacking. Another trouble of Debian is its marketing capabilities. It should learn to better sell itself.
It is the distribution users want to run and use, not the rebranded copies of itself with lock-in sugar. Debian is about choice and it offers plenty of it: it is a great default desktop.

Linus Torvalds doesn't find Debian (and/or Ubuntu) a good platform to hack on the kernel. Do you know why and what can we do about this?

The Fedora linux-2.6 receives contributions from several Red Hat-employed upstream sub-maintainers. Thus it typically carries huge patches which are not yet upstream. As a consequence, userland troubles get revealed quite quickly and are often seen there first. The cutting-edge nature of Fedora rawhide is appealing for many developers. The usual Debian package division into library development files and the library itself is traditionally an entry barrier for development on Debian. Debian got pretty easily usable these days, although we could and should improve a lot more in this sector. Personally I think that Linus hasn't tried Debian for years.

I have the feeling that the involvement of the Debian kernel team in LKML has been on the rise. Is that true and how do you explain this?

Ben Hutchings is the No. 1 contributor for 2.6.33. He is also top listed as an author of patches on stable 2.6.32. Debian is not listed as an organization because many send their linux-2.6 patches from their corporate or personal email addresses, and thus they won't be attributed to Debian. There is currently no means to see how many patches get forwarded for the stable tree, but I certainly forwarded more than fifty patches. I was very happy when Greg KH personally thanked me in the 2.6.32.12 release.

In the Squeeze kernel, the firmware files have been stripped and moved into separate packages in the non-free section. What should a user do to ensure his system keeps working?

There is a debconf warning on linux-2.6 installation. It is quite clear that the free linux-2.6 can't depend on firmware from the non-free archive (also, there is technically no strict dependency there).
On the terminal you'd also see warnings from update-initramfs during initramfs generation for drivers included in the initramfs. The debconf warning lists the filename(s) of the missing firmware(s). One can then apt-cache search for the firmware package name and install it via the non-free repository. The check runs against the currently loaded modules. The match is not 100% accurate for special cases such as a device that might be handled well by the driver without firmware, but it is accurate enough to warrant the warning.

The set of virtualization technologies that the official Debian kernel supports seems to change regularly. Which of the currently available options would you recommend to users who want to build on something that will last?

KVM has been a smooth ride from day zero. It got included upstream almost instantly. Its uptake is great as it sees development from both Intel and AMD. Together with libvirt, its management is easy. Also the performance of virtio is very good. Linux containers are the thing we are looking forward to for enhanced chroots in the Wheezy timeframe. They are also manageable by libvirt. Xen, being the bad outside boy, has an incredibly shrinking patchset, thus it is fair to expect to see it for Wheezy and beyond. For many it may come a bit late, but for old hardware without the relevant CPU features it is there. Many tend to overstate the importance of virtualization tech. I'd look much more forward to the better desktop support in newer linux-2.6. The desktop is important for Linux and something that is in heavy use. The much better graphics support of the radeon and nouveau drivers, the performance optimizations thanks to the dcache scalability work, and the neat automatic task-grouping for the CPU scheduler are very promising features for the usability of the Linux desktop. Another nice-to-have feature is the online defrag of ext4 and its faster mkfs.
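The recovery steps Maximilian describes boil down to a couple of standard APT commands. A sketch, using the Realtek firmware package as an example (the actual package name depends on the filename shown in the debconf warning, and non-free must already be enabled in sources.list):

```
# Find which non-free package ships the firmware file named in the warning
apt-cache search firmware realtek

# Install it, then regenerate the initramfs so the file gets included
apt-get install firmware-realtek
update-initramfs -u
```

The final update-initramfs run matters for drivers loaded at boot: without it, the freshly installed firmware file would not land in the early userspace image.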
Even cooler would be better scalability in ext4 (this area seems not to have seen enough effort lately).

Is there someone in Debian that you admire for their contributions?

Hans Peter Anvin and Ted Ts'o are a huge source of deep linux-2.6 knowledge and personal wisdom. I do enjoy all sorts of interactions with them. Christoph Hellwig, with Matthew Wilcox and also William Irwin, for setting up the Debian kernel team. Several Debian leaders, including the previous and the current one, for their engagement, which very often happens behind the scenes. The Debian GNOME team's work is great, and the interactions have always been easy and a pleasure. Martin Michlmayr and previously Thiemo Seufer do an incredible job in porting Debian to funny and interesting ARM and MIPS boxes. Debian has a lot of upcoming potential in this area. I'm looking forward to other young enthusiastic people there. Colin Watson is bridging Debian and Ubuntu, which is an immense task. Michael Prokop bases an excellent recovery boot CD on Debian: http://www.grml.org. I'd be happy if every Debian Developer coded and worked as carefully.
Thank you to Maximilian for the time spent answering my questions. I hope you enjoyed reading his answers as I did. Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Twitter and Facebook.


5 October 2010

Peter Eisentraut: Git User's Survey 2010

The Git User's Survey 2010 is up. Please devote a few minutes of your time to fill out the simple questionnaire; it'll help the Git community understand your needs, what you like about Git (and what you don't), and help improve it.

The survey is open from 1 September to 15 October, 2010.

Go to https://git.wiki.kernel.org/index.php/GitSurvey2010 for more information.

24 August 2010

Evgeni Golov: That was FrOSCon 2010

Well, FrOSCon is over and it's time to sum up the event a bit. First of all: it was great! But it was also hot (I heard the air-conditioning is off during weekends) and busy. And still, it was great! :) Why? Mostly because of the people of course! It was especially nice to meet Rhonda in person (almost all other people [except the grml ones] in the Debian-and-Friends corner were the usual suspects who you see at every (big) FOSS/Linux event). I had some nice chats with Enrico (about Geany and Xfce) and Leo (about bley), besides the usual "when will Squeeze be released?" and "how does X work in Debian?" with random people. When there are a lot of people, a key signing party isn't too far away. Thanks formorer for the orga! You still owe me a sig from FrOSCon 2008 ;) And if someone wonders why he/she got a sign-mail from me even when he/she wasn't at the FrOSCon KSP: I used the fact that I had to sign a lot of keys to sign even more (from the OpenRheinRuhr 2009 KSP and the BSP in Mönchengladbach at the beginning of this year [yes tg, I even signed yours, hope you liked the MIME]). As usual I missed almost every talk I wanted to hear and ended up with just one (the one about RegEx), which sadly wasn't good at all. I guess that's all I have to say about FrOSCon. Oh wait, no, there was a questionnaire which included the question whether I'll visit future FrOSCons. Of course! Hope to see ya all at MRMCD, OpenRheinRuhr, 27C3 etc. So Long, and Thanks for All the Frogs!

27 December 2009

Thorsten Glaser: speling[sic!]

With the Lintian 2.3.0 Saturday-after-Christmas release (by the way, over here if it's done twice it'll really become tradition) I've run its spelling tests over all of the MirOS CVS repository. The result: 293 kinds of typos in 35857 source files. (Although there are the case things too. Without them, I have 51 typos in 7206 files. Aside from false positives (I used fgrep -rwl[i], and -i and -w don't play well together, and -w mis-catches GTK+ as GTK) I probably can't (API, source code) or won't fix all of them though.) However, I have some rather hot Asia-style food to eat right now, and will need to get up early tomorrow for work, so I am not applying/fixing them right now. (bsiegert@ and gecko2@ however are enjoying themselves at 26C3, see their wlog entries.) Note that all of today's fixes will not make it into the next MirBSD snapshot already, since it's built (i386) and building X11 already (sparc). On the other hand, the next bunch of WTF *.deb files will have them. I also need to fix makefs upstream for Hurd and continue the T&S questionnaire *sigh* Update: I suppose this is my Hello, Planet Debian! posting (thanks aptituz!) well, my packages in the archive were already lintian clean, in case someone wonders (I did recheck with 2.3.0 though). My point was, why not use checking tools from one universe for another one, viceque versa? (Similar to synergy effects from knowledge.)
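The GTK+ mis-catch is easy to reproduce: grep's -w anchors on word-constituent characters, and since '+' is not one of them, a word-boundary search for the supposed typo "GTK" also matches the perfectly legitimate "GTK+". A minimal demonstration with GNU grep:

```shell
# '+' is not a word character, so -w sees a word boundary right after "GTK"
printf 'uses GTK+ toolkit\n' | grep -cw 'GTK'
# prints 1: the legitimate "GTK+" is counted as a hit for the "typo" GTK
```

This is why a -w match list needs manual review before mass-fixing: word-boundary matching cannot distinguish a standalone token from a prefix of a symbol-bearing name.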

31 August 2009

Raphaël Hertzog: Interdependence in Debian, how to suffer less from it

Listening to Martin Krafft's talk at Debconf (related to his PhD) shed some new light on the idea that I expressed last year: I wanted each maintainer to regularly answer a questionnaire so that he has to ask himself whether he does a good enough job with his packages. When thinking of this idea, I only saw the QA side of ensuring good maintenance of all packages; however, I believe that the root problem lies further down and this project would not be enough: we are interdependent but we are not equipped to deal with this reality. Martin's only merit has been to mention that we are interdependent, but it's worth analyzing a bit. Our organization is centered around individuals acting as package maintainers, and in theory each package maintainer can work in his corner and all goes well. We know that this model doesn't hold any more: transitions to testing require coordination of uploads and timely fixes of RC bugs, keeping up with the work frequently requires several volunteers that have to coordinate, etc. More and more of the work requires a level of availability that a single individual can't offer, yet in our day-to-day work we mainly interact with individuals. Wouldn't it be better if we could immediately know what we can expect from any Debian developer? All this information should be shared by all Debian maintainers (some of it is already available, but either not publicly or not in any machine-parseable way) and we should actively use it. Here are some examples of use: for each RC bug report, you could look up whether at least one maintainer is available and ping him explicitly if needed. When you plan an NMU, you could look up whether the maintainer is likely to respond in the next day or not, and possibly adjust the number of days spent in the DELAYED queue. When organizing a large-scale transition, you could extract a list of packages whose maintainers are not available and arrange immediate NMUs.
Furthermore, there are many cases where the project's usual expectations exceed what the maintainer is ready to do. Documenting what part of the job is done (or not) by the maintainer makes it clear for volunteers whether their help is needed and whether they could/would be a better maintainer for a given package. Designing solutions to all these problems is going to be the scope of the DEP2 that I reserved some time ago. It's likely to be some sort of dedicated web interface. I would welcome supplementary drivers for this DEP, so if you're interested, get in touch with me.

27 October 2008

Raphaël Hertzog: Debian membership reform

Following Ganneff’s post to debian-devel-announce, several discussions have again started on the topic of Debian’s membership and several proposals have been made. Unfortunately none of these proposals try to resolve the underlying trust problem that has been growing over the years. Despite the NM process (or maybe due to it), we managed to give DD status to people who are motivated but whose technical skills are doubtful (at that point people ask for an example, and as much as I hate fingerpointing, here’s an example with #499201. The same maintainer created troubles with libpng during the etch release cycle and tried to take over a base package like mawk recently). With our current model, all DDs can sponsor, NMU, introduce/adopt/hijack packages without review. This is fine as long as we trust the body of DDs to contain only skilled and reasonable people. I believe that premise to be somewhat broken since Debian has become too big for people to know everybody, and since the NM process had no way to grant partial rights to volunteers who were motivated but who clearly had not shown their ability to handle more complex stuff than what they had packaged during their NM period (like some trivial perl modules for example). Thus I strongly believe that any membership reform must provide a convincing answer to that trust problem before being implemented. I took several hours to draft a proposal last Friday and I’ve been somewhat disappointed that nobody commented on it. I hope to draw some attention to it with this blog post. The proposal builds on the idea that we should not have many classes of contributors but simply two: short-term contributors and long-term contributors (the latter are called Debian Developers and have the right to vote). But all contributors can be granted privileges as they need them for their work, and each privilege requires the contributor to fulfill some conditions.
The set of privileges and the conditions associated all need discussion (but I have personal opinions here, see below). There’s however one privilege that is somewhat particular: it’s the right to grant privileges to other contributors. Handling it as a privilege like any other is on purpose: it makes it clear that anyone can try to get that privilege and the procedure is clear. In practice, imagine that set of people as a big team encompassing the responsibilities split over DAM/AM/FD/DM-team and where all members can do all the steps required to grant/retire a privilege provided that 2 or 3 members agree and that nobody opposes (in case of opposition a specific procedure is probably needed). I called that set of people the Debian Community Managers. It should contain only skilled and dedicated developers. One of their main duties would be to retain the trust that the project as a whole must have in all its members. They would have the power to retire privileges if they discover someone that has not acted according to the (high) expectations of the project. Among the privileges would be limited upload rights (like DMs have currently), full upload rights (like DDs have currently, although it might be that we want to split that privilege further into the right to sponsor, the right to package new software, the right to maintain a package of priority > standard, etc.) and developer status (email + right to vote, once you can prove 6 months of contribution). There’s lots of stuff to discuss in such a proposal (like how to decide who gets what privileges among existing DDs) but I think it’s a good basis and needs some serious consideration by all the project members. The NM process is there only so that we can collectively trust that new members are as good as we expect them to be, and trust can only be built over time, so it’s good that we can grant privileges progressively. Some people believe that I’m reinventing a new NM process that will end up being very similar to the current one.
My answer is that the conditions associated with each privilege should be based on the work done by the contributor and the advocacy he managed to collect. It should not be a questionnaire like the Tasks and Skills one. This, together with the distribution of the power/work over many people, would render this system very different from today’s NM process. Some people believe that I’m copying Ubuntu when designing this since it’s somewhat similar to the process to become a MOTU and/or get upload rights to Ubuntu’s main component. Let me say that I’m not copying, deliberately at least; I simply took the problem from the most important side. But remember that many aspects of Ubuntu have been designed by Debian developers who tried to avoid known pitfalls of Debian, and maybe they got some things right (or better at least) while doing this.

15 August 2008

DebConf 8 video: Best practises in team-maintaining packages

a) Situation

During the last few years team-maintaining groups of packages has become
more and more widespread in Debian; the exact number of packaging teams is
unknown, estimates vary between 42 and 893 -- reality is probably somewhere
in between. [0]

Although the challenges for all teams are similar, there is a lack of
communication between those groups which leads to a situation where many
are "reinventing the wheel".

b) Objectives

- bring members of different packaging teams together
- get an overview of different work flows, tools, and challenges
- compile generally useful 'models of good practise'
- define possible areas for cooperation / tasks of mutual interest

c) Methods

Should the proposal for the BOF be accepted, a short questionnaire covering
"typical questions" for packaging teams will be prepared before Summer that
allows interested participants from packaging teams to prepare a short
overview of their team's situation and work flow.

This will allow the BOF to begin with short and structured presentations
dealing with common aspects of team packaging. From these presentations
common points of interest will be collected to form the "agenda" for the
next part of the BOF.

Ideally the session will conclude with a summary of "best practises,"
ideas for improving individual groups' work flows and ideas for
improved cooperation between teams. -- Findings will be made public in
the hope that they will be helpful to others.



[0]
http://wiki.debian.org/Teams lists 42 "Packaging teams";
http://krum.ethz.ch/ddc/teams-of-2007.txt has 218 entries;
http://alioth.debian.org/ talks about 789 "hosted projects", looking in
/home/groups on alioth shows 893 directories in general and 387 pkg-*
directories.
Full event details

DebConf 8 video: Best practises in team-maintaining packages

a) Situation

During the last few years team-maintaining groups of packages has become
more and more widespread in Debian; the exact number of packaging teams is
unknown, estimates vary between 42 and 893 -- reality is probably somewhere
in between. [0]

Although the challenges for all teams are similar, there is a lack of
communication between those groups which leads to a situation where many
are "reinventing the wheel".

b) Objectives

- bring members of different packaging teams together
- get an overview of different work flows, tools, and challenges
- compile generally useful 'models of good practise'
- define possible areas for cooperation / tasks of mutual interest

c) Methods

Should the proposal for the BOF be accepted, a short questionnaire covering
"typical questions" for packaging teams will be prepared before the summer,
allowing interested participants from packaging teams to prepare a short
overview of their team's situation and workflow.

This will allow the BOF to begin with short and structured presentations
dealing with common aspects of team packaging. From these presentations
common points of interest will be collected to form the "agenda" for the
next part of the BOF.

Ideally the session will conclude with a summary of "best practices,"
ideas for improving individual groups' workflows, and ideas for
improved cooperation between teams. Findings will be made public in
the hope that they will be helpful to others.



[0]
http://wiki.debian.org/Teams lists 42 "Packaging teams";
http://krum.ethz.ch/ddc/teams-of-2007.txt has 218 entries;
http://alioth.debian.org/ talks about 789 "hosted projects", looking in
/home/groups on alioth shows 893 directories in general and 387 pkg-*
directories.
Full event details

4 August 2008

Runa Sandvik: Assembly 2008

I had to learn it the hard way: really cheap plane tickets don't exist unless you buy them months in advance, and killing three hours at an airport can be done by writing a long post on how to kill three hours at an airport. But Finnair does have the funniest instruction videos I've ever seen. Going to Helsinki for Assembly 2008 was worth the money.

This was my first party outside Norway, and the one thing that really surprised me was that most of the gamers actually did kill all audio and lights when told to do so. Smash told me that some hardcore gamers just put a towel over their head and screen to continue playing, but I still think it deserves some credit. Being at the party place was OK, even with the security check. I think the scene booth organized by Truck was great, even though I don't have anything to compare it to. It was nice to see someone walk up to the booth and ask a lot of questions. I'm happy about missing out on Hesburger, and sad that it took me two days to find the Pizza Hut.

Boozembly was a lot more fun, even with the rain. I found out that Sauli can be a very nice guy, that you don't need toothpaste when you have mintu, that guys in kilts have nothing on underneath, and that kiitos means thanks. I sure hope information like this will be useful to someone someday. It's always nice to see old friends, like Spiikki, again, and even nicer knowing that you can still pick up where you left off two years ago. And it's fun meeting the skilled people you've admired for a long time. Actually telling them that got a bit easier when I managed to stop being so shy.

I always find it difficult to keep it short and still cover the most important things, so I just want to say the following: I think the 4k by Portal Process and TBC rocks, and the 64k by Fairlight did impress me. Thanks to Leia and Little Bitchard for letting me crash at their place. And thanks to Gargaj for being awesome.

7 May 2008

Steve Kemp: You're not too technical, just ugly, gross ugly

Well, a brief post about what I've been up to over the past few days.

An alioth project was created for the maintenance of the bash-completion package. I spent about 40 minutes yesterday committing fixes for some of the low-hanging fruit. I suspect I'll do a little more of that, and then back off. I only started looking at the package because there was a request-for-help bug filed against it. It works well enough for me with some small local additions.

The big decision for the bash-completion project is how to go forwards from the current situation, where the project is basically a large monolithic script. Ideally the openssh-client package should contain the completion for ssh, scp, etc. Making that transition will be hard. But interesting.

In other news I submitted a couple of "make-work" patches to the QPSMTPD SMTP proxy - just tidying up some minor cosmetic issues. I'm starting to get to the point where I understand the internals pretty well now, which is a good thing! I love working on QPSMTPD. It rocks. It is basically the core of my antispam service and a real delight to code for. I cannot emphasise that enough - some projects are just so obviously coded properly. Hard to replicate, easy to recognise... I've been working on my own pre-connection system which is a little more specialised, making use of the Class::Pluggable library - packaged for Debian by Sarah. (The world -> Pre-Connection/Load-Balancing Proxy -> QPSMTPD -> Exim4. No fragility there then ;)

Finally I made a tweak to the Debian Planet configuration. If you have Javascript disabled you'll no longer see the "Show Author"/"Hide Author" links. This is great for people who use Lynx, Links, or other minimal browsers.

TODO: I'm still waiting for the creation of the javascript project so that I can work on importing my jQuery package. I still need to sit down and work through the Apache2 bugs I identified as being simple to fix. I've got it building from SVN now, though, so progress is being made! Finally, this weekend I need to sit down and find the time to answer Steve's "Team Questionnaire". Leave it any longer and it'll never get answered. Sigh.
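As a rough sketch of the per-package split mentioned above, a package such as openssh-client could ship its own small completion file instead of relying on the monolithic script. Everything here is illustrative (the file path, function name, and the simple known_hosts parsing are assumptions, not the actual bash-completion code, and hashed known_hosts entries would defeat it):

```shell
# Hypothetical /etc/bash_completion.d/ssh -- a sketch of what a completion
# file shipped by openssh-client itself might look like.
# Offers host names taken from ~/.ssh/known_hosts when completing arguments.
_ssh_known_hosts()
{
    local cur=${COMP_WORDS[COMP_CWORD]}
    local hosts
    # The first space-separated field of each known_hosts line holds the
    # host names, possibly several of them separated by commas.
    hosts=$(cut -d ' ' -f 1 ~/.ssh/known_hosts 2>/dev/null | tr ',' '\n')
    COMPREPLY=( $(compgen -W "$hosts" -- "$cur") )
}
complete -F _ssh_known_hosts ssh scp
```

With each package owning the completions for its own commands, the central monolithic script could shrink away one file at a time.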
ObQuote: Shooting Fish
