Search Results: "Sven Mueller"

18 November 2012

Sven Mueller: Ingress

As some may have noticed, Google launched something that is unusual for the company: a game. A mobile alternate reality game, to be precise. See the allthingsd article for a bit more information. In my opinion, this game is most interesting to people who are at least mildly interested in geo-caching. You spend most of the gaming time either looking for the right portals (geo-locations that are interesting places for one reason or another, many located at sculptures, murals or (more rarely) interesting local businesses), or trying to solve the puzzles that appear in the game itself or on http://www.nianticproject.com/. I was one of the beta testers and really enjoy the game, especially in areas with more players. As the game is most fun when people coordinate a bit, I started a Google group for the Munich area for one of the two teams (the Enlightened), but I will only let people into that group who can be verified to be on that team.

On a side note: I still have a few invites to the game left, so if you are interested, please leave a comment and enter your gmail.com email address (you need a Google account to play anyway) while doing so. I will send out invites in the order I received the requests (which I admit might not always be the order in which they were submitted). First come, first served, and no guarantee you will get an invitation. The email address you enter when commenting isn't shown to anyone, and I don't share it with anyone but the game (and thus Google).

EDIT: Only a short time has passed, but I have already run out of invites (note that I didn't approve comments that really only said "please send me an invite", but I still sent those people an invite). Feel free to still ask me for one; I will send some out as I get new ones, but this might take anything between a few days and some weeks.

PS: For the lucky ones who got invites: you don't strictly need to do the training offered when you start the game for the first time, but I strongly encourage you to do so.

PPS: I'm opening my group to any Enlightened players in Germany. I will share some useful information about game details there in the near future (nothing company confidential of course, just information anyone can gather in the game or from the published information).

EDIT: As noted before, I don't have any more invites to give away. As I still get around 4 requests for invites per day, and now have more than 30 unfulfilled requests pending (with no new invites to give in sight), I'm now disabling comments.

Disclaimer: I have been working for Google for a few months now, but this post solely represents my own opinions and hasn't been endorsed by Google in any way.

14 August 2012

Sven Mueller: UK going completely crazy on cryptography law.

It seems that the UK government recently passed a law that makes it illegal to be unable to decrypt whatever law enforcement thinks is encrypted. From http://falkvinge.net/2012/07/12/in-the-uk-you-will-go-to-jail-not-just-for-encryption-but-for-astronomical-noise-too/:
But it's worse than that. Much worse. You're not going to be sent to jail for refusal to give up encryption keys. You're going to be sent to jail for an inability to unlock something that the police think is encrypted. Yes, this is where the hairs rise on our arms: if you have a recorded file with radio noise from the local telescope that you use for generation of random numbers, and the police asks you to produce the decryption key to show them the three documents inside the encrypted container that your radio noise looks like, you will be sent to jail for up to five years for your inability to produce the imagined documents.
This is just insane. Edit: The law was created several years ago, but the blog post somehow made me think it was more recent.

20 April 2012

Sven Mueller: Ex-TSA lead on air traffic security woes

I came across this article today and really found it noteworthy: http://online.wsj.com/article/SB10001424052702303815404577335783535660546.html (original title: "Why Airport Security Is Broken And How To Fix It"). Kip Hawley, TSA head from July 2005 to January 2009, writes about how the current TSA procedures came into being, how he failed at some of his goals (making the checks less annoying) during his tenure, and what could be done to fix the procedures. I especially liked these points:
By the time of my arrival, the agency was focused almost entirely on finding prohibited items. Constant positive reinforcement on finding items like lighters had turned our checkpoint operations into an Easter-egg hunt. When we ran a test, putting dummy bomb components near lighters in bags at checkpoints, officers caught the lighters, not the bomb parts.
(also quoted on LWN.net) And this one:
The public wants the airport experience to be predictable, hassle-free and airtight and for it to keep us 100% safe. But 100% safety is unattainable.
I think the most important thing he mentioned is the fifth and last of his action items to improve both experience by passengers and security:
5. Randomize security: Predictability is deadly. Banned-item lists, rigid protocols if terrorists know what to expect at the airport, they have a greater chance of evading our system.
He nailed it there, in my opinion: if security measures are predictable, the loopholes in them are also predictable, so you basically give attackers a handbook of what to avoid when planning an attack. This, by the way, isn't limited to physical security and air travel; it also applies to IT security (though it is much easier to hide your IT security measures and make them somewhat unpredictable that way than it is to do so with physical security and passenger screening at airports).

13 October 2011

Sven Mueller: Strange MySQL (5.0) issue with authentication

Dear Lazyweb: I ran into a very strange issue with MySQL authentication. The initial situation is as follows:
Two MySQL servers, let's name them a0.my.do.main and a1.my.do.main. Their my.cnf configuration is identical except for the server_id (equals 1 on a0 and 2 on a1).
Since I wanted to set up a master<->master replication, I created a user repl@%.my.do.main on a0, then shut down the mysql server on both hosts, copied over the mysql database on the filesystem layer (i.e. an rsync of the data directory), then started mysql on both hosts again. So far, so good: both servers came up nicely, and local (unix socket) login as the root user worked fine, as expected. However: logging in via TCP socket only works from a1 (to both a0 and a1). Logging in via TCP socket from a0 doesn't work to either host. I also created a test user 'sven'@'%.my.do.main' (I also tried 'sven'@'%') which has all privileges.
Here is what the commands output:

sven@a0 ~ > mysql -u sven -ptestpass -h a0.my.do.main
ERROR 1045 (28000): Access denied for user 'sven'@'a0.my.do.main' (using password: YES)
sven@a0 ~ > mysql -u sven -ptestpass -h a1.my.do.main
ERROR 1045 (28000): Access denied for user 'sven'@'a0.my.do.main' (using password: YES)
sven@a1 ~ > mysql -u sven -ptestpass -h a0.my.do.main
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 32
Server version: 5.0.77-log Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> Bye
sven@a1 ~ > mysql -u sven -ptestpass -h a1.my.do.main
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 24
Server version: 5.0.77-log Source distribution Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
All packages on both machines are up to date (CentOS 5 with all available updates). The server variables shown with "show variables" also seem identical (except for the hostname and the server id). Besides, it looks more like a client problem to me than a server problem. Last data point: if I define the mysql user sven with an explicit host (a0.my.do.main and a1.my.do.main) instead of a wildcard, everything works nicely, from and to both hosts. So, dear lazyweb: does anyone have an idea what might be causing this issue? Update: Due to some comments, let me add a few points of information:
  1. flush hosts doesn't help. Host resolution looks OK; according to the documentation, the hostname in the error message is what the server thinks is right, not what the client specified, and that documentation seems consistent with my experience.
  2. The db and user tables look as expected. And I'm quite sure they are not the issue here, as I can connect from other hosts that match the host pattern.
  3. There is no ~/.my.cnf on either host
  4. Copying the database files on the filesystem layer is one of the documented ways of initializing the database on the slave for replication. Anyway, I had the same issue when trying to set up the database on the slave by loading a dump.
  5. DNS looks fine, both forward and reverse mappings, both servers use the same DNS server.
  6. IPv6 is not involved, it is disabled on these servers
Thanks for the hint at serverfault.com, I will try that if I don't get sufficient info here. Update 2: Another strange data point:
When I disable the (non-privileged) anonymous access user in the mysql.user table (i.e. drop the rows with user=''), everything works as expected from both servers. However, why does it work from one host (client) to both DBs if anonymous access is allowed (with no privileges except being able to see that the test and mysql databases exist), but not from the other host (client)? It really looks like an issue on the client side, but I still don't get what exactly goes wrong.
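For anyone who wants to apply the same workaround: dropping the anonymous users boils down to standard statements like these (run as the MySQL root user; the credentials are of course placeholders for your own):

# show the anonymous-user rows first, as a sanity check
mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE User = '';"
# then drop them and reload the grant tables
mysql -u root -p -e "DELETE FROM mysql.user WHERE User = ''; FLUSH PRIVILEGES;"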

6 April 2011

Sven Mueller: Strange iptables error with kernels >= 2.6.32 solved

Alright. If you ever come into the same situation I was in and need a newer kernel (2.6.32 or up, perhaps also 2.6.31) on a system with an old iptables package (versions below 1.4.0 I think; 1.3.5 in my case: CentOS5/RHEL5), you might get this helpful error message when using the iprange module in your iptables rules:
iptables: Unknown error 18446744073709551615
Or, even more helpful, if you use iptables-restore to load your rules, you will get an error on the line containing the COMMIT statement (iptables-restore: line X failed). The reason for this is that the netfilter developers removed an interface of the iprange module in kernel version 2.6.31 or 2.6.32 (see my bug report at #711 of the netfilter bugzilla). Just posting this so it might hopefully help others who get into the same situation.
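For reference, a rule of this general shape (the addresses are made up) is enough to trigger the error on the affected combination of old iptables userspace and a newer kernel:

# any rule using the iprange match will do
iptables -A INPUT -m iprange --src-range 192.168.0.10-192.168.0.20 -j ACCEPT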

18 May 2009

Sven Mueller: Link collection 2009/05/18

That's it for now, will update the post if I find more interesting links in the next few days.

31 March 2009

Sven Mueller: Link collection 2009/03

Well, I normally despise things like this link collection, but I thought I might add it anyway, since these are useful links for me and if I don't post them here, I'm likely to forget where to find them in the near future:
  1. Sean Finney has a nice post about storing the list of parameters a (shell) script got in a way that it can be restored later. Quite handy if your script walks through the arguments, parsing them (and consuming them while doing so), but you want to be able to display them in a helpful way if the parsing fails at some point (see the small sketch at the end of this post).
  2. A while ago, Ingo Jürgensmann had a post that helps with retrieving files from lost+found after a filesystem check, provided that you run his helper script on a regular basis. The same approach can also be used if you have a backup of all files but lost the sorting work you did after the backup was done. This is possible because running the script can be done more often than you would normally do backups.
  3. He also has a small post about mtr oddities when IPv6 comes into play
  4. Adeodato Simó wrote about databases and when timestamps that store timezone information really are more useful than timestamps that don't.
  5. Adeodato also has a short post on using ssh as a socks proxy, which can be quite handy if you are behind a firewall.
Update: Fixed the link to Ingo's article on file retrieval from lost+found. Thanks to Patrick Schoenfeld, who pointed out the wrong link.
Also thanks to the anonymous poster who found an alternative way to store and (in a way) restore commandline parameters. The solution doesn't work in as general a way as the one by Sean Finney et al., but it is much shorter and therefore interesting where it can be used (i.e. when you control how the commandline parameters are processed). See the comments on this post for details.
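To illustrate the general idea behind item 1 (this is only a minimal sketch of my own, not Sean's solution and not the commenter's): keep a quoted copy of the original arguments around before the parsing loop consumes them, for example like this in bash:

#!/bin/bash
# save a re-displayable copy of the arguments before they get consumed
saved_args=""
for arg in "$@"; do
    saved_args="$saved_args $(printf '%q' "$arg")"
done

# ... option parsing that shifts away "$@" goes here ...

# if parsing fails, we can still show what the script was called with
echo "parse error; original invocation was: $0$saved_args" >&2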

26 March 2009

Sven Mueller: Apache SSL oddity

If you ever have the problem described below, you have to realize that Apache HTTPD selects the SSL certificate (and/or key) not based on your VirtualHost definition, but based on the ServerName in the current context. This means that if you have two virtual hosts with the same ServerName, but on two different IP:port combinations, and are wondering why the wrong key (and/or certificate) is used, it might be because of this. Assume we have the following (simplified) configuration:
<VirtualHost a.b.c.d:443>
ServerName a.b.c.d
SSLCertificateFile /etc/ssl/certs/a.b.c.d_443.pem
SSLCertificateKeyFile /etc/ssl/certs/a.b.c.d_443-key.pem
</VirtualHost>
<VirtualHost a.b.c.d:444>
ServerName a.b.c.d
SSLCertificateFile /etc/ssl/certs/a.b.c.d_444.pem
SSLCertificateKeyFile /etc/ssl/certs/a.b.c.d_444-key.pem
</VirtualHost>

(Note: this happened to me when I wanted to open an SSL server for subversion on a host that also had Apache running as a reverse/application proxy for some specific application that managed its own CA.) Both VirtualHosts will use the very same certificate and key (as far as I was able to tell, but at least the same key), in my case that of the specific application (which was (is?), in its own way, broken regarding SSL). This might be a bug or a design decision (AKA a feature), I don't know. However, the workaround that worked for us was to change one of the server names to something else (since it seems the ServerName is unused anyway, unless Apache decides to generate a self-referring URL, which it certainly won't do for the reverse/application proxy part). Update: Fixed the formatting of the configuration sample to make it clear this is not about name-based virtual hosts.
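Concretely, the workaround amounts to something like this (the ServerName value is just a placeholder; any name that differs from the other vhost's ServerName will do):

<VirtualHost a.b.c.d:444>
ServerName ssl-proxy.example.invalid
SSLCertificateFile /etc/ssl/certs/a.b.c.d_444.pem
SSLCertificateKeyFile /etc/ssl/certs/a.b.c.d_444-key.pem
</VirtualHost>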

1 March 2009

Sven Mueller: Rescuing files from lost+found

Oh well, I tried to search for it but failed miserably, so I'm blogging about it hoping the original author might read my post on planet.debian.org (where I read the original post): A friend of mine might need exactly the sort of scripts someone posted here:
  1. Creating a list of file checksums together with their path/filename
  2. Reading files from lost+found and moving them back to their original filenames by identifying them from the list in (1) (a rough sketch of the idea is at the end of this post)
However, I can't recall who posted about this, and I don't want to do the same work again. Please, Dear Lazyweb, help me ;-)
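Until someone points me at the original post, here is roughly how I imagine those two scripts (md5sum-based; the paths are made up and need adjusting):

# 1) while the filesystem is healthy, record checksums and paths
find /data -xdev -type f -exec md5sum {} + > /var/backups/data.md5

# 2) after fsck has filled lost+found, match the orphaned files
cd /data/lost+found || exit 1
md5sum ./* 2>/dev/null | while read -r sum name; do
    orig=$(awk -v s="$sum" '$1 == s { print $2; exit }' /var/backups/data.md5)
    # only print the mv commands, so the result can be reviewed before running
    [ -n "$orig" ] && echo mv -- "$name" "$orig"
done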

8 January 2009

Sven Mueller: About Usability

Sami Haahtinen wrote a nice post about usability. I mostly agree with him, except for one little thing: a few years back, I noticed that I often started as a kind of Joe User with new applications, and often even stayed that way, wishing that the UI of the application was simpler or at least made it more obvious which options are important and which I could most likely keep at their defaults.
Yet I am a system administrator and programmer/developer, so most people would probably consider me an advanced user by default (whether that is correct in any given context or not; for example, I still don't really grok GIMP).
I have also had quite a lot of contact with users who have very little experience, knowledge and interest in computers and software, and who only use them to achieve certain goals. Anyway, I'm quite sure most readers of this blog are aware that Linus Torvalds once complained about the Gnome printing dialog being simplified way too much. And at that time, I had to agree (note that I haven't checked out Gnome again since those days, at least 2 years ago).

What I really wonder (especially since I noticed this in some of my own applications) is why it is so hard for developers to add some additional checkbox or button that enables/disables advanced options. For example, let's take an application's printing dialog. By default, it would only allow the selection of a printer, the orientation (if sensible) and (if the selected printer supports more than one) the paper size. Advanced options might include duplex printing (automatic or manual), printing multiple pages on a single sheet of paper, printing in grey on color printers, and so on. These options might be of use to anyone, but would most likely confuse beginners. So what I did in most of my little applications was to just show the basic options and add an "advanced options" button which provided access to the additional functionality that was not normally needed.

As for the issue with Firefox updates mentioned by Sami: I don't think it is a problem that Firefox asks what to do with extensions. For one, it only lists extensions which are installed but claim not to support the new version of Firefox. Second, practically everyone I know who uses any extension can be considered an advanced user. And third: the dialog is pretty straightforward about what it is asking, and when I once hit it while a "dumb user" friend was sitting with me, he understood what it was about quite easily (I didn't explain the dialog, just what extensions were). So my idea of a good UI is:
As simple as possible for beginning users, but allow advanced users to get to the details ("Details" or "Advanced options" buttons/checkboxes/dialogs).

15 December 2008

Sven Mueller: another vote

Well, following some other posts on Planet Debian, I decided to also publish my vote on the GR regarding firmware blobs in Debian. Here is my vote:
V: 7112113
In other words, I ranked the first option as low as possible (7, below further discussion), ranked all options I think make sense above further discussion (which is at rank 3), and put all other options (except for the one empowering the release team to decide the issue) at the same, highest rank. The option empowering the release team is ranked at the second level.

My priority really is to get Lenny released. While I think we should do as much as we can to remove non-free stuff from Debian, I really don't think it is sensible to delay Lenny because of this. Lenny has far fewer non-free bits in it than Etch, and I'm also not really sure whether I want to consider firmware (which doesn't run on the CPU Debian is built for) as software in the sense of the DFSG. The reasoning here is that I don't see any difference between a hardware manufacturer A who embeds a (non-free) firmware in the hardware (for example by storing it in flashable memory) and a hardware manufacturer B who allows us to distribute the firmware blob (and avoids storing it in the hardware). In my opinion, both are equally (non-)free. And following this, I don't see why we should support hardware created by A natively while not directly supporting hardware made by B. One could of course argue that we are providing more of a service for B (as we are distributing the firmware for them, while they save some money otherwise spent on flash memory), but I would like to remind everyone that our foundation documents also include the following: "Our priorities are our users and free software". This puts free software and our users at the same level. So to cater for one, we might have to make sacrifices on the other priority.

In my opinion, the best compromise here is to not consider firmware blobs as software in the sense of the DFSG; alternatively, I would accept saying that we want to remove all such firmware from Debian, but not delay Lenny because of this process. Additionally, we should make it easy for users to add all affected drivers/firmware before/during installation of a system (or even include them on our installation media as some proposed, with a short question to the user whether he wants those firmware blobs to be installed).

Apart from my opinion on the subject itself, I must also mention my concerns about how this vote was written in a very manipulative way. Additionally, those wanting to get Lenny released proposed too many different options. This is generally a bad idea.

29 November 2008

Sven Mueller: Re: Silly translations

In Silly translations, Gintautas Miliauskas wrote about some rather silly translations. This reminded me of a finding by a colleague a few days ago.
He had recently updated his Ubuntu installation to KDE4 (using a german locale). After this, he reassigned a hotkey (to CTRL-SPACE). However, KDE showed STRG-Weltraum (Ctrl-"outer space") instead of STRG-Leertaste (Ctrl-space key) :-) UPDATE: Frederik (see comments) reminded me of a fact I already knew:
This silly translation is not the fault of the german KDE members, but a bug in the Ubuntu package, which is taking more or less random translations from Rosetta.

8 October 2008

Sven Mueller: How to solve a credit crisis

Anand Kumria wrote:
If IBM were to go bankrupt, would the government step in? Unlikely. Investors would lose (money), staff — another word for investors — would lose (jobs), but customers would win (their computers would keep working). Some customers would win more than others (especially those who had the equipment on lease); if no one is collecting, why pay?
I'm wondering where Anand got the idea that once a company goes bankrupt, you don't need to pay that company anymore. When a company goes bankrupt, at least in Germany the following happens: a trustee/liquidator is selected. This liquidator then collects the information about who owes money to the company and who still needs to get (how much) money from it. The liquidator also has to check the option of selling company assets (which might include the contracts of customers that still have to pay) to fulfill the debts of the bankrupt company. After he has turned all assets into money, the money is distributed among those who still have to get money from the bankrupt company.

Anyway, regarding his main argument that the (average) customer of a company (a bank in this case) should never have to pay for the failed speculations of that company, I somehow have to agree with him. Someone putting money into a regular bank account or papers with a fixed interest rate should never lose their money. But there are also customers buying bank shares with a chance of higher revenue than with fixed-interest papers. These should suffer from failure of the bank management, as they more or less explicitly wanted to be tied to the success (or failure) of the bank.

However, this is mostly irrelevant, since the failure of so many "investment banks" has side effects that might cost the average inhabitant of the affected countries even more than the discussed rescue plans. One of these effects is that the banks are now much more conservative regarding lease and mortgage plans, effectively leaving many home owners with no option to fulfill previous obligations (remaining debt after a previous mortgage expired can't be refinanced by a new mortgage), forcing them to sell their homes to pay off the first mortgage. This is in some way stupid, because it causes people who were paying their mortgage rates perfectly well to lose their house, while the bank that would be giving them a new mortgage could gain a new and good customer, improving its income. On the other hand, if the other side effects of the current crisis cause those "good" mortgage customers to lose their jobs, they might turn into bad customers who are unable to pay their rates. All in all, this is a spiral that could cause the whole economy to break down (a small example: the bank is not giving out mortgages, so no one will build new houses, so the builders lose their jobs, so they don't pay their mortgage rates anymore, and so on; over-simplified, but it still shows what I mean). Unless the spiral is terminated in time, before too drastic things happen.

All in all, I do understand why the politicians try to rescue those banks (or at least the customers of those banks), though I think that in an economy with slightly higher regulation, there wouldn't be the need for such a rescue plan. I know there are some german banks affected by the crisis as well (among them Hypo Real Estate and others), but the average private customer of such banks shouldn't lose money due to the regulations we have in place.
In general, there should be some security fund which makes sure that private customers never lose money put into regular bank accounts or fixed-interest papers. Vice versa, banks should calculate mortgages so that they can be pretty sure their customers are actually able to pay off their rates - it doesn't make sense if someone starts off having to pay $500/month for their mortgage and has to pay over $1000 a few years (as in 2-3 years) later, because the bank raised the interest that much. I have no problem with people losing money from shares of banks or other companies, whether held directly or indirectly through investment funds.

4 January 2008

Sven Mueller: User configuration

Anthony Towns wrote about user configuration (i.e. ~/.foo) and the XDG spec/proposal versus his own (and, according to him, simpler) version. The main goal of all this is to get rid of all the ~/.foo directories and files in a user's home directory. Apart from technically minor differences, I don't see why AJ's proposal would be simpler than the XDG approach. All it does is hardcode the values which are adjustable in XDG via XDG_*_DIRS.

There is one notable difference though: AJ proposes fallback values which are compatible with existing application paths (i.e. HOME_ETC/XDG_CONFIG_HOME should default to ~/, resulting in ~/.foo files/dirs). This is a good proposal in a way: if you patch an existing application to strictly follow AJ's proposal, it still finds its old configuration (and other files) if the new environment variables are not set. No need to move anything around. However, I think the XDG approach is better, since it makes sure that compliant applications don't clutter ~/ anymore. They do need some mechanism to move existing configuration and data around if the XDG-compliant files aren't already there but an old-style config exists.

All in all, I think it is worth following the XDG spec, even if it is slightly more complex than AJ's proposal. For one, it has already existed for a while and I think some applications have already started following it. It also makes slightly more sense to me to have the system fully configurable as to where applications store data, as opposed to the hardcoded fallback/default values AJ proposes (even if we changed AJ's proposal to use "$HOME-cleaning" default values).
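For illustration, the lookup a compliant application is supposed to do boils down to something like this (the defaults are the ones from the XDG Base Directory spec; "myapp" is a placeholder):

# per-user config and data locations, with the spec's fallback defaults
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"
# system-wide search paths for additional config/data directories
config_search_path="${XDG_CONFIG_DIRS:-/etc/xdg}"
data_search_path="${XDG_DATA_DIRS:-/usr/local/share:/usr/share}"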

22 October 2007

Sven Mueller: RE Anthonys some fun post

I really dislike posts like (sorry AJ, you are just one example) AJ Towns' blog post "Some fun". What do I dislike? Well, the post lacks critical information: which slashdot post inspired him? What data is he talking about? How did he turn the data into those graphics? Sorry AJ, your post is just the latest example of this style of post, and I really got frustrated over such posts; this is not meant as a personal attack. Edit:
To make my wish clear: please, fellow bloggers, don't assume that your readers follow your favourite web resources as closely as you do (and with the same specific interests). Explicitly say what you are writing about, reference the resources needed to understand what you are doing, and at least give readers a chance to find out what you did. In AJ's case, it would probably have been enough to reference the /. article or comment which inspired him.

17 October 2007

Sven Mueller: CPU feature flags and their meanings

Since I never really found a nice overview of what the various CPU flags (see /proc/cpuinfo) mean, I gathered some information from the web, the most notable sources being the BOINC FAQ entry on CPU Register Acronyms at [1] and the output of the nice little (though seemingly mostly unmaintained) cpuid utility. See my results at [2]. Any suggestions for enhancements and completions are highly welcome, just leave a comment on this post.
[1]: http://boincfaq.mundayweb.com/index.php?language=1&view=176
[2]: http://blog.incase.de/index.php/cpu-feature-flags-and-their-meanings/
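If you want to check which flags your own machine reports (so you can look them up in the table), something like this does the trick:

# list the feature flags of the first CPU, one per line, sorted
grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sed '/^$/d' | sort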

18 September 2007

Sven Mueller: Brane Dump The Thoughts of Matt Palmer

In "Documentation - the chicken and the egg" [1], Matt Palmer wrote about the problem that no one writes documentation because no one reads it, and that no one reads documentation because what little documentation exists usually isn't very good. So he, like me, has the habit of searching for help on the net instead of in a project's documentation. However, I disagree a bit with him about the consequences, because I noticed that for several projects with good documentation (subversion is an example that immediately comes to my mind, but postfix isn't too bad either), an internet search returns a reference to the docs more often than not. Of course, this means that the documentation needs to be searchable by those webcrawlers. So if you have the task of writing documentation, I strongly suggest making it searchable in some way. For company-internal projects, this obviously means that the documentation must be reachable via an internal search engine. In a project a few years back, we implemented an internal search engine which included company-internal information (even with user-based access rules, so that each user only got those results he could actually access) as well as an external search engine's results. It was closed source, but a pretty nice idea. It didn't, however, index any locally (user's desktop) stored documents, only what was on some company web page (but including .doc, .rtf, .pdf and the like, as long as they were retrievable via http).
[1] http://www.hezmatt.org/~mpalmer/blog/general/documentation_the_chicken_and_the_egg.html

1 September 2007

Sven Mueller: IFA - Force-Feedback-Vest with tickling attacks.

In the recent online article heise online - IFA special - Force-Feedback-Weste mit Kitzelattacke, Heise News (a german IT news site) had a really nice caption under one image showing a new force-feedback vest by Philips. The image looks like this:
[Image: the new Philips force-feedback vest]
The german caption is: "Philips' Force-Feedback-Weste ermöglicht tödliche Kitzelattacken in PC-Shootern."
Translated to English: "Philips' Force-Feedback Vest allows deadly tickling attacks in PC shooters." (Meaning first-person shooters.) Nice. That's at least a pleasant way of dying: being tickled to death. Of course, the article itself clears up the misunderstanding: the vest tickles the player rather than "punching" him, even if his game ego is killing most ferociously.

13 August 2007

Sven Mueller: Migrating aptitude's knowledge about auto-installed packages

Jonathan McDowell wonders how to transfer aptitude's knowledge about auto-installed packages from one computer to another. Well, I looked into the issue for a few minutes and found a solution, though it doesn't look nice.
# grep installed package names
for i in $(COLUMNS=200 dpkg -l | grep -E '^ii' | awk '{ print $2 }')
do
    # find package in /var/lib/aptitude/pkgstates and
    # check for the right state (1 seems to mean
    # auto-installed, 3 manually installed)
    if grep-available -s Package,State \
                      -F Package \
                      -X "$i" /var/lib/aptitude/pkgstates \
       | grep -q 'State: 1'
    then
        echo "$i"
    fi
done
This results in a list of auto-installed packages, AFAICT. If you change the "State: 1" into "State: 3", it seems you get only the manually installed packages. So if you take the latter and feed it to "aptitude install", your database should be right. If you take the former, you feed the list to "aptitude markauto". A cleaner solution (which works even if /var/lib/aptitude/pkgstates changes format) would involve checking "aptitude show" output, evaluating the "Automatically installed:" field. However, simply feeding "aptitude show" output to grep-dctrl (or any equivalent) results in the rather irritating error message "grep-dctrl: -:14: expected a colon.", with the 14 changing to a different line in the package's description (usually a line directly following an empty line). So I'm just giving you an idea here.
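Side note: if your aptitude is recent enough to support search patterns, the same lists can probably be produced much more directly. I haven't verified this on the exact versions involved, but something along these lines should work:

# packages that aptitude considers automatically installed
aptitude search -F '%p' '~i~M'
# and the manually installed ones
aptitude search -F '%p' '~i!~M'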

27 July 2007

MJ Ray: Debian Maintainers

I've voted in favour of the Debian Maintainers GR because, despite flaws like micro-managing the initial situation [Sven Mueller], I believe it is a useful step towards reforming the New Maintainer process. Those of you with long memories may recall that I think NM should be a modern portfolio-based qualification [-project, April 2006, but probably not the first time I explained it] instead of the current, inconsistent AM-dependent one which sees good people applying too early and sometimes being turned away, sometimes being accepted too quickly, but most often sitting in DAMnation. Despite some claims to be interested in fixing NM [Raphael Hertzog], it seems the current NM team requires throwing more people at the buggy system [Raphael Hertzog] before even considering fixing the damn bugs. Of course, I suspect any suggestion would be met with a claim that NM then wasn't performing too badly, now that it had more people. I think that's damage, so I hope the DM GR can be a first stepping stone in a new path to becoming a DD which routes around the damage.
