Search Results: "dburrows"

9 December 2021

David Kalnischkies: APT for Advent of Code

[Screenshot of my Advent of Code 2021 status page as of today]
Advent of Code, for those not in the know, is a yearly Advent calendar (since 2015) of coding puzzles many people participate in for a plethora of reasons ranging from speed coding to code golf, with stops at learning a new language or practicing already known ones. I usually write boring C++, but any language and then some can be used. There are reports of people implementing it in hardware, solving them by hand on paper or using Microsoft Excel. So, after solving a puzzle the easy way yesterday, this time I thought: CHALLENGE ACCEPTED! as I somehow remembered an old 2008 article about solving Sudoku with aptitude (Daniel Burrows via archive.org, as the blog is long gone) and the good old "a package management system that can solve [puzzles] based on package dependency rules is not something that I think would be useful or worth having" (Russell Coker).

Day 8 has a rather lengthy problem description and can reasonably be approached in a bunch of different ways. One unreasonable approach might be to massage the problem description into Debian packages and let apt help me solve the problem (specifically Part 2, which you unlock by solving Part 1. You can do that now, I will wait here.) Be warned: I am spoiling Part 2 in the following, so solve it yourself first if you are interested. I will try to be reasonably consistent in naming things in the following and so have chosen:

The input we get are lines like acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf. The letters are wires, mixed up and connected to the segments of the displays. A group of these letters is hence a digit: the first 10 groups represent the digits 0 to 9, and the four groups after the pipe are the displays, each of which matches (after sorting) one of those digits, which means this display shows that digit. We are interested in which digits are displayed to solve the puzzle. To help us we also know which segments form which digit, we just don't know the wiring in the back. So we should identify which wire maps to which segment!

We are introducing the packages wire-X-connects-to-Y for this, which each provide & conflict1 with the virtual packages segment-Y and wire-X-connects. The latter ensures that for a given wire we can only pick one segment and the former ensures that multiple wires cannot map onto the same segment. As an example: wire a's possible association with segment b is described as:
Package: wire-a-connects-to-b
Provides: segment-b, wire-a-connects
Conflicts: segment-b, wire-a-connects
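To make the generation concrete, here is a minimal hypothetical Python sketch that would emit one such stanza for each of the 7 × 7 possible wire-to-segment assignments; it is not the real skip-aoc (a shell testcase for apt), just an illustration:

# Hypothetical sketch, not the original skip-aoc script: emit one stanza
# per possible wire-to-segment assignment, 7 x 7 = 49 packages in total.
WIRES = SEGMENTS = "abcdefg"

def wire_stanzas():
    for wire in WIRES:
        for seg in SEGMENTS:
            yield (f"Package: wire-{wire}-connects-to-{seg}\n"
                   f"Version: 1\n"
                   f"Provides: segment-{seg}, wire-{wire}-connects\n"
                   f"Conflicts: segment-{seg}, wire-{wire}-connects\n")

if __name__ == "__main__":
    print("\n".join(wire_stanzas()))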
Note that we do not know if this is true! We generate packages for all possible (and then some) combinations and hope dependency resolution will solve the problem for us. So don't worry, the hard part will be done by apt, we just have to provide all (im)possibilities!

What we need now is to translate the 10 digits (and 4 outputs) from something like acedgfb into digit-0-is-eight and not, say, digit-0-is-one. A clever solution might realize that a one consists of only two segments, so a digit wiring up seven segments can not be a one (and must be an eight instead), but again we aren't here to be clever: We want apt to figure that out for us! So what we do is simply make every digit-0-is-N (im)possible choice available as a package and apply constraints: A given digit-N can only display one value and each value is unique among the digits, so for both we deploy Provides & Conflicts again.

We also need to reason about the segments in the digits: Each of the digit packages gets Depends on wire-X-connects-to-Y where X is each possible wire (e.g. acedgfb) and Y each segment forming the digit (e.g. cf for one). The different choices for X are or'ed together, so that any of them satisfies the Y. We know something else too though: The segments which are not used by the digit can not be wired to any of the Xs. We model this with Conflicts on wire-X-connects-to-Y. As an example: If digit-0's acedgfb were displaying a one (remember, it can't) the following package would be installable:
Package: digit-0-is-one
Version: 1
Depends: wire-a-connects-to-c | wire-c-connects-to-c | wire-e-connects-to-c | wire-d-connects-to-c | wire-g-connects-to-c | wire-f-connects-to-c | wire-b-connects-to-c,
         wire-a-connects-to-f | wire-c-connects-to-f | wire-e-connects-to-f | wire-d-connects-to-f | wire-g-connects-to-f | wire-f-connects-to-f | wire-b-connects-to-f
Provides: digit-0, digit-is-one
Conflicts: digit-0, digit-is-one,
  wire-a-connects-to-a, wire-c-connects-to-a, wire-e-connects-to-a, wire-d-connects-to-a, wire-g-connects-to-a, wire-f-connects-to-a, wire-b-connects-to-a,
  wire-a-connects-to-b, wire-c-connects-to-b, wire-e-connects-to-b, wire-d-connects-to-b, wire-g-connects-to-b, wire-f-connects-to-b, wire-b-connects-to-b,
  wire-a-connects-to-d, wire-c-connects-to-d, wire-e-connects-to-d, wire-d-connects-to-d, wire-g-connects-to-d, wire-f-connects-to-d, wire-b-connects-to-d,
  wire-a-connects-to-e, wire-c-connects-to-e, wire-e-connects-to-e, wire-d-connects-to-e, wire-g-connects-to-e, wire-f-connects-to-e, wire-b-connects-to-e,
  wire-a-connects-to-g, wire-c-connects-to-g, wire-e-connects-to-g, wire-d-connects-to-g, wire-g-connects-to-g, wire-f-connects-to-g, wire-b-connects-to-g
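Continuing the hypothetical generator sketch from above (again, not the real skip-aoc), the digit stanzas with their or'ed Depends and their Conflicts on the unused segments could be produced like this; the segment table is just the standard seven-segment layout:

SEGMENTS_OF = {
    "zero": "abcefg", "one": "cf", "two": "acdeg", "three": "acdfg",
    "four": "bcdf", "five": "abdfg", "six": "abdefg", "seven": "acf",
    "eight": "abcdefg", "nine": "abcdfg",
}

def digit_stanzas(patterns):
    # patterns: the ten scrambled wire groups before the pipe,
    # e.g. ["acedgfb", "cdfbe", ...]
    for n, wires in enumerate(patterns):
        for name, segs in SEGMENTS_OF.items():
            # one or-group per needed segment: any of this digit's wires may drive it
            depends = ",\n ".join(
                " | ".join(f"wire-{w}-connects-to-{s}" for w in wires)
                for s in segs)
            # none of this digit's wires may end up on an unused segment
            unused = [f"wire-{w}-connects-to-{s}"
                      for s in "abcdefg" if s not in segs for w in wires]
            yield (f"Package: digit-{n}-is-{name}\n"
                   f"Version: 1\n"
                   f"Depends: {depends}\n"
                   f"Provides: digit-{n}, digit-is-{name}\n"
                   f"Conflicts: digit-{n}, digit-is-{name}, "
                   + ", ".join(unused) + "\n")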
Repeat such stanzas for all 10 possible values of digit-0 and then repeat this for all the other nine digit-N. We produce pretty much the same stanzas for display-0(-is-one), just that we omit the second Provides & Conflicts from above (digit-is-one), as in the display digits can be repeated. The rest is the same (modulo using display instead of digit as name, of course). Lastly we create a package dubbed solution which depends on all 10 digit-N and 4 display-N, all of them virtual packages apt will have to choose an installable provider for, and we are nearly done! The resulting Packages file2 we can give to apt while requesting to install the package solution, and it will spit out not only the display values we are interested in but also which number each digit represents and which wire is connected to which segment. Nifty!
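In the same hypothetical vein, the display stanzas and the final solution package could be generated like this (reusing SEGMENTS_OF and the naming from the sketch above):

def display_stanzas(outputs):
    # outputs: the four wire groups after the pipe, e.g. ["cdfeb", "fcadb", ...]
    for n, wires in enumerate(outputs, start=1):
        for name, segs in SEGMENTS_OF.items():
            depends = ",\n ".join(
                " | ".join(f"wire-{w}-connects-to-{s}" for w in wires)
                for s in segs)
            unused = [f"wire-{w}-connects-to-{s}"
                      for s in "abcdefg" if s not in segs for w in wires]
            yield (f"Package: display-{n}-is-{name}\n"
                   f"Version: 1\n"
                   f"Depends: {depends}\n"
                   f"Provides: display-{n}\n"
                   f"Conflicts: display-{n}, " + ", ".join(unused) + "\n")

def solution_stanza():
    # the one package we will ask apt to install
    wants = [f"digit-{n}" for n in range(10)] + [f"display-{n}" for n in range(1, 5)]
    return "Package: solution\nVersion: 1\nDepends: " + ", ".join(wants) + "\n"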
$ ./skip-aoc 'acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf'
[ ]
The following additional packages will be installed:
  digit-0-is-eight digit-1-is-five digit-2-is-two digit-3-is-three
  digit-4-is-seven digit-5-is-nine digit-6-is-six digit-7-is-four
  digit-8-is-zero digit-9-is-one display-1-is-five display-2-is-three
  display-3-is-five display-4-is-three wire-a-connects-to-c
  wire-b-connects-to-f wire-c-connects-to-g wire-d-connects-to-a
  wire-e-connects-to-b wire-f-connects-to-d wire-g-connects-to-e
[ ]
0 upgraded, 22 newly installed, 0 to remove and 0 not upgraded.
We are only interested in the numbers on the display though, so grepping the apt output (-V is our friend here) a bit should let us end up with what we need. As calculating3 is (unsurprisingly) not a strong suit of our package relationship language, we need a few shell commands to help us with the rest.
$ ./skip-aoc 'acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf' -qq
5353
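To illustrate the post-processing those shell commands perform, here is a hypothetical Python equivalent that maps the chosen display-N-is-word packages back to a four-digit number (the real script greps apt's output instead):

WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

def display_value(apt_output):
    # pick out the display-<N>-is-<word> packages apt decided to install
    digits = {}
    for token in apt_output.split():
        if token.startswith("display-"):
            _, pos, _, word = token.split("-", 3)
            digits[int(pos)] = WORDS[word]
    return int("".join(str(digits[p]) for p in sorted(digits)))

# display_value("display-1-is-five display-2-is-three "
#               "display-3-is-five display-4-is-three")  ==  5353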
I have written the skip-aoc script as a testcase for apt, so to run it you need to place it in /path/to/source/of/apt/test/integration and build apt first, but that is only due to my laziness. We could write a standalone script interfacing with the system-installed apt directly, and with any apt version since ~2011. To hand in the solution for the puzzle we just need to run this on each line of the input (~200 lines) and add all the numbers together. In other words: Behold this beautiful shell one-liner: parallel -I '{}' ./skip-aoc '{}' -qq < input.txt | paste -s -d'+' - | bc (You may want to run parallel with -P to properly grill your CPU, as that process can take a while otherwise; and it still does anyhow, as I haven't optimized it at all and the testing framework does a lot of pointless things wasting time here, but we aren't aiming for the leaderboard, so…)

That might, or even likely will, fail though, as I have so far omitted a not unimportant detail: The default APT resolver is not able to solve this puzzle with the given problem description; we need another solver! Thankfully that is as easy as installing apt-cudf (and with it aspcud), which the script is using via --solver aspcud to make apt hand over the puzzle to a "proper" solver (or better: a solver which is supposed to be good at "answer set" questions). The buildds are using this for experimental and/or backports builds and also for installability checks via dose3 btw, so you might have encountered it before.

Be careful however: Just because aspcud can solve this puzzle doesn't mean it is a good default resolver for your day-to-day apt. One of the reasons the default resolver has such a hard time solving this here is that or-groups usually have an order in which the first alternative is preferred over every later option and so forth. This is of no concern here, as all these alternatives will collapse to a single solution anyhow, but if there are multiple viable solutions (which is often the case) picking the "wrong" alternative can have bad consequences. A classic example would be exim4 | postfix | nullmailer. They are all MTAs but behave very differently. The non-default solvers also tend to lack certain features like keeping track of auto-installed packages or installing Recommends/Suggests. That said, Julian is working on another solver as I write this, which might deal with more of these issues. And lastly: I am also relatively sure that with a bit of massaging the default resolver could be made to understand the problem, but I can't play all day with this; maybe some other day.

Disclaimer: Originally posted in the daily megathread on reddit, the version here is just slightly easier to understand as I have hopefully renamed all the packages to have more conventional names and tried to explain what I am actually doing. No cows were harmed in this improved version, either.

  1. If you would upload those packages somewhere, it would be good style to add Replaces as well, but it is of minor concern for apt so I am leaving them out here for readability.
  2. We have generated 49 wire packages, 100 digit packages, 40 display packages and 1 solution package for a grand total of 190 packages. We are also making use of a few purely virtual ones, but that doesn't add up to many packages in total. So few packages are practically child's play for apt, given it usually deals with a thousand times more. The installability of those packages is a lot worse though, as only 22 of the 190 packages we generated can (and will) be installed. Britney will hate you if your uploads to Debian unstable are even remotely as bad as this.
  3. What we could do is introduce 10,000 packages which denote every possible display value from 0000 to 9999. We would then need to duplicate our 10,190 packages for each line (namespace them) and then add a bit more than a million packages with the correct dependencies for summing up the individual packages, for apt to be able to display the final result all by itself. That would take a while though, as at that point we are looking at working with ~22 million packages with a gazillion dependencies, probably overworking every solver we would throw at it; a bit of shell glue seems the better option for now.
This article was written by David Kalnischkies on apt-get a life and republished here by pulling it from a syndication feed. You should check there for updates and more articles about apt and EDSP.

10 March 2013

Axel Beckert: Up to date Aptitude Documentation Online

Aptitude ships documentation in 7 languages as HTML files. However, the latest version available online was 0.4.11.2 from 2008, hosted on the server of the previous, now unfortunately inactive Aptitude maintainer, and only covered 5 languages. This lack of up to date online documentation even caused others to put more up to date versions online. Nevertheless they age, too, and the one I'm aware of is not up to date for Wheezy. So the idea was born to keep an up to date version online on Aptitude's Alioth webspace (which currently redirects to a subdirectory of the previous maintainer's personal website). But unfortunately we, the current Aptitude Team, are still lacking administrative rights on Aptitude's Alioth project, which would be necessary to assign new team members who could work on that. As an intermediate step, there's now a (currently ;-) up to date Aptitude User's Manual online in all 7 languages at

http://people.debian.org/~abe/aptitude/ and English at http://people.debian.org/~abe/aptitude/en/

As this location could also suffer from the same MIA issues as any other personal copy, the plan is to move this to somewhere under http://aptitude.alioth.debian.org/ as soon as we have full access to Aptitude's Alioth project. Our plans for then are:

P.S.: Anyone interested in doing a German translation of the Aptitude User's Manual? Sources are in DocBook, i.e. XML, and available via Git.

16 January 2013

Francois Marier: Moving from Blogger to Ikiwiki and Branchable

In order to move my blog to a free-as-in-freedom platform and support the great work that Joey (of git-annex fame) and Lars (of GTD for hackers fame) have put into their service, I decided to convert my Blogger blog to Ikiwiki and host it on Branchable. While the Ikiwiki tips page points to some old instructions, they weren't particularly useful to me. Here are the steps I followed.

Exporting posts and comments from Blogger
Thanks to Google letting people export their own data from their services, I was able to get a full dump (posts, comments and metadata) of my blog in Atom format. To do this, go into "Settings | Other" then look under "Blog tools" for the "Export blog" link.

Converting HTML posts to Markdown
Converting posts from HTML to Markdown involved a few steps:
  1. Converting the post content using a small conversion library to which I added a few hacks.
  2. Creating the file hierarchy that ikiwiki requires.
  3. Downloading images from Blogger and fixing their paths in the article text.
  4. Extracting comments and linking them to the right posts.
The Python script I wrote to do all of the above will hopefully be a good starting point for anybody wanting to migrate to Ikiwiki.
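For anyone curious what such a conversion involves, here is a rough, hypothetical Python sketch of the core steps; it is not the author's script, it assumes the html2text library, and it guesses at ikiwiki's posts/<slug>.mdwn layout:

# Minimal sketch (not the author's script): pull post entries out of the
# Blogger Atom export and turn their HTML bodies into Markdown files.
import xml.etree.ElementTree as ET
import html2text
import os
import re

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def export_posts(atom_path, outdir="posts"):
    os.makedirs(outdir, exist_ok=True)
    tree = ET.parse(atom_path)
    for entry in tree.getroot().findall(ATOM_NS + "entry"):
        # Blogger marks real posts with a category term ending in "kind#post"
        kinds = [c.get("term", "") for c in entry.findall(ATOM_NS + "category")]
        if not any(k.endswith("kind#post") for k in kinds):
            continue
        title = entry.findtext(ATOM_NS + "title") or "untitled"
        html = entry.findtext(ATOM_NS + "content") or ""
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
        with open(os.path.join(outdir, slug + ".mdwn"), "w") as f:
            f.write(html2text.html2text(html))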

Maintaining old URLs
In order to make sure I wouldn't break any existing links pointing to my blog on Blogger, I got the above Python script to output a list of Apache redirect rules and then found out that I could simply email these rules to Joey and Lars to get them added to my blog. My rules look like this:
# Tagged feeds
Redirect permanent /feeds/posts/default/-/debian http://feeding.cloud.geek.nz/tags/debian/index.rss
Redirect permanent /search/label/debian http://feeding.cloud.geek.nz/tags/debian
# Main feed (needs to come after the tagged feeds)
Redirect permanent /feeds/posts/default http://feeding.cloud.geek.nz/index.rss
# Articles
Redirect permanent /2012/12/keeping-gmail-in-separate-browser.html http://feeding.cloud.geek.nz/posts/keeping-gmail-in-separate-browser/
Redirect permanent /2012/11/prefetching-resources-to-prime-browser.html http://feeding.cloud.geek.nz/posts/prefetching-resources-to-prime-browser/
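Generating those rules is mechanical once the old Blogger path and the new slug of each post are known; a hypothetical helper in the same spirit might look like this:

def redirect_rules(posts, newbase="http://feeding.cloud.geek.nz"):
    # posts: iterable of (old Blogger path, new ikiwiki slug) pairs, e.g.
    # ("/2012/12/keeping-gmail-in-separate-browser.html",
    #  "keeping-gmail-in-separate-browser")
    for old_path, slug in posts:
        yield f"Redirect permanent {old_path} {newbase}/posts/{slug}/"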

Collecting analytics
Since I am no longer using Google Analytics on my blog, I decided to take advantage of the access log download feature that Joey recently added to Branchable. Every night, I download my blog's access log and then process it using awstats. Here is the cron job I use:
#!/bin/bash
BASEDIR=/home/francois/documents/branchable-logs
LOGDIR=/var/log/feedingthecloud
# Download the current access log
LANG=C LC_PAPER= ssh -oIdentityFile=$BASEDIR/branchable-logbot b-feedingthecloud@feedingthecloud.branchable.com logdump > $LOGDIR/access.log
It uses a separate SSH key I added through the Branchable control panel and outputs to a file that gets overwritten every day. Next, I installed the awstats Debian package, and configured it like this:
$ cat /etc/awstats/awstats.conf.local
SiteDomain=feedingthecloud.branchable.com
LogType=W
LogFormat=1
LogFile="/var/log/feedingthecloud/access.log"
Even if you're not interested in analytics, I recommend you keep an eye on the 404 errors for a little while after the move. This has helped me catch a critical redirection I had forgotten.
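As a hypothetical example of that kind of monitoring, a few lines of Python are enough to tally the 404s per path in the downloaded access log (assuming the usual combined log format):

import collections
import sys

def top_404s(log_path, limit=20):
    counts = collections.Counter()
    with open(log_path) as log:
        for line in log:
            # combined log format: ... "GET /path HTTP/1.1" 404 1234 "ref" "agent"
            parts = line.split('"')
            if len(parts) < 3:
                continue
            request = parts[1].split()
            status = parts[2].split()
            if len(request) > 1 and status and status[0] == "404":
                counts[request[1]] += 1
    return counts.most_common(limit)

if __name__ == "__main__":
    for path, hits in top_404s(sys.argv[1] if len(sys.argv) > 1 else "access.log"):
        print(hits, path)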

Limiting Planet feeds
One of the most common things that happen right after someone migrates to a new blogging platform is the flooding of any aggregator that subscribes to their blog. The usual cause is the change in post identifiers. Unsurprisingly, Ikiwiki already had a few ways to avoid this problem. I chose to simply modify each tagged feed and limit them to the posts added after the move to Branchable.

Switching DNS
Having always hosted my blog on a domain I own, all I needed to do to move over to the new platform without an outage was to change my CNAME to point to feedingthecloud.branchable.com. I've kept the Blogger blog alive and listening on feeding.cloud.geek.nz to ensure that clients using a broken DNS resolver (which caches records for longer than requested via the record's TTL) continue to see the old posts.

14 April 2012

Axel Beckert: Automatically hardlinking duplicate files under /usr/share/doc with APT

On my everyday netbook (a very reliable first generation ASUS EeePC 701 4G) the disk (4 GB as the product name suggests :-) is nearly always close to full. TL;DWTR? Jump directly to the HowTo. :-)

So I came up with a few techniques to save some more disk space. Installing localepurge was one of the earliest. Another one was to implement aptitude filters to do interactively what deborphan does non-interactively. Yet another one is to use du and friends a lot; ncdu is definitely my favourite du-like tool in the meanwhile.

Using du and friends I often noticed how much disk space /usr/share/doc takes up. But since I value the contents of /usr/share/doc a lot, I condemn how Nokia solved that on the N900: They let APT delete all files and directories under /usr/share/doc (including the copyright files!) via some package named docpurge. I also dislike Ubuntu's solution of truncating the shipped changelog files (you can still get the remainder of the files on the web somewhere) as they're an important source of information for me.

So when aptitude showed me that some package suddenly wanted to use up quite some more disk space, I noticed that the new package version included the upstream changelog twice. So I started searching for duplicate files under /usr/share/doc. There are quite some tools to find duplicate files in Debian; hardlink seemed most appropriate for this case. First I just looked for duplicate files per package, which even on that less than four gigabytes installation on my EeePC found nine packages which shipped at least one file twice. As recommended I rather opted for an according Lintian check (see the related bug reports). Niels Thykier kindly implemented such a check in Lintian and its findings are reported as the tags duplicate-changelog-files (Severity: normal, from Lintian 2.5.2 on) and duplicate-files (Severity: minor, experimental, from Lintian 2.5.0 on).

Nevertheless, some source packages generate several binary packages and all of them (of course) ship the same, in some cases quite large (Debian) changelog file. So I found myself running hardlink /usr/share/doc now and then to gain some more free disk space. But as I run Sid and package upgrades happen more than daily, I came to the conclusion that I should run this command more or less after each aptitude run, i.e. automatically. Having taken localepurge's APT hook as an example, I added the following content as /etc/apt/apt.conf.d/98-hardlink-doc to my system:
// Hardlink identical docs, changelogs, copyrights, examples, etc
DPkg
{
   Post-Invoke {"if [ -x /usr/bin/hardlink ]; then /usr/bin/hardlink -t /usr/share/doc; else exit 0; fi";};
};
So now installing a package which contains duplicate files looks like this:
~ # aptitude install perl-tk
The following NEW packages will be installed:
  perl-tk 
0 packages upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,522 kB of archives. After unpacking 6,783 kB will be used.
Get: 1 http://ftp.ch.debian.org/debian/ sid/main perl-tk i386 1:804.029-1.2 [2,522 kB]
Fetched 2,522 kB in 1s (1,287 kB/s)  
Selecting previously unselected package perl-tk.
(Reading database ... 121849 files and directories currently installed.)
Unpacking perl-tk (from .../perl-tk_1%3a804.029-1.2_i386.deb) ...
Processing triggers for man-db ...
Setting up perl-tk (1:804.029-1.2) ...
Mode:     real
Files:    15423
Linked:   3 files
Compared: 14724 files
Saved:    7.29 KiB
Duration: 4.03 seconds
localepurge: Disk space freed in /usr/share/locale: 0 KiB
localepurge: Disk space freed in /usr/share/man: 0 KiB
localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB
localepurge: Disk space freed in /usr/share/omf: 0 KiB
Total disk space freed by localepurge: 0 KiB
Sure, that wasn't the most space saving example, but on some installations I saved around 100 MB of disk space that way and I still haven't found a case where this caused unwanted damage. (Use of this advice at your own risk, though. Pointers to potential problems welcome. :-)

21 March 2012

Axel Beckert: aptitude-gtk will likely vanish

As Christian already wrote, there's an Aptitude revival ongoing. We already saw this young team releasing aptitude 0.6.5 about 6 weeks ago, more commits have been made, and now we're heading towards an 0.6.6 release quickly. But this revival mostly covers the well-known and loved curses interface (TUI) of aptitude and not the seldom installed GTK interface, which unfortunately never really took off: While aptitude itself (i.e. the curses and commandline interface) is installed on nearly 99% of all Debian installations which take part in Debian's Popularity Contest statistics, aptitude-gtk is only installed on 0.42% of all these installations. One reason is likely that aptitude-gtk still doesn't have all the neat features of the curses interface. And another reason is probably that it's still quite buggy. Since nobody from the current Aptitude Team has the experience, leisure or time to resurrect (or even complete) aptitude-gtk, the plan is to stop building aptitude-gtk from the aptitude source package soon, i.e. to remove it from Debian for now. Like the even less finished Qt interface of aptitude, its code will stay in the VCS, but will be unmaintained unless someone steps up to continue aptitude-gtk (or aptitude-qt, or both), maybe even as its own source package. So if you like aptitude-gtk so much that you're still using it and want to continue using it, please think about contributing by joining the Aptitude Team and getting aptitude's GUI interface(s) back in shape. Another option would be to find a mentor so that resurrecting (one of) aptitude's GUI interfaces could become (again) a potential project at Debian's participation in Google's Summer of Code. Please direct any questions about aptitude-gtk or aptitude-qt to the Aptitude Development Mailing List. Or even better, join the discussion in this thread.

19 January 2012

Obey Arthur Liu: Aptitude 0.5.0 (aka Aptitude-gtk) released

Long time no post. Anyway, I have some good news.

The Gtk code for Aptitude has been merged some time ago into the main development trunk and we now have a release in Experimental.



For those that don't know about it, here's what it's all about: "The new frontend is an effort to bring some of the design principles of the curses frontend to a GUI environment, while also exploiting the unique features a GUI gives us and exploring ways to deal with changes in the environment in the nine years since aptitude was first designed."

I had a very good time this summer working on Aptitude with Daniel Burrows in the Google Summer of Code program and I'm very glad we now have a real release. This version is by no means final or perfect but it's a good start.

Head for the blog post from Daniel for some more information: [Daniel Burrows]

11 November 2011

Felipe Sateler: Finding packages not directly depended upon

Today I was removing KDE from my laptop, when I realized that kde-window-manager would not go away (even though it was marked as automatically installed) since it provides x-window-manager, which is needed by gdm. I manually removed it, but that got me thinking that maybe I had other packages that were in this situation. So I picked aptitude and tried to build some search-fu. Here is the resulting query:
aptitude search '?for x:
?x:installed
?x:automatic
?x:provides(?virtual ?reverse-depends(?installed))
?not(?x:reverse-depends(?installed(?depends(?=x))))'


Now, let's go over the query. The first part ?for x: allows us to refer to the package currently being matched. See the documentation for details. We'll see later why we need this. Basically we are saying: give me all packages x that match the following pattern. The second and third parts are trivial: we only want automatically installed packages.
The fourth part, ?x:provides(?virtual ?reverse-depends(?installed)) was easy: find all packages that provide a virtual package that is depended upon by some other installed package.
The fifth and final part, ?not(?x:reverse-depends(?installed(?depends(?=x)))) is why we needed the bound variable x. We want to filter out (hence the ?not) the packages that have some reverse dependency that depends directly on the package x. That way, in my case, metacity won't show up since gnome-core directly depends on it and is installed.

Unfortunately, this query doesn't grok alternative dependencies. For example, if gnome-core depended on metacity | x-window-manager then this query wouldn't show metacity, since it is depended upon by a package. If someone can make this query understand that, so that it only eliminates packages directly depended upon with no alternative dependencies, please let me know!
Tags: aptitude, cleaning, tip, tips

9 April 2011

Axel Beckert: Finding packages for deinstallation on the commandline with aptitude

Although I often don't agree with Erich (especially if GNOME is involved ;-), he recently posted something on Planet Debian which I found very helpful. I also own a netbook where disk space is scarce. It's an ASUS EeePC 701 with just 4GB disk space. And it runs Debian Sid, so dependencies change often, leaving packages installed which formerly had hard dependencies pointing to them, but are now only pointed to by recommendations. Quite a few times I asked myself if it's possible to find those packages and if so, how to do it. Well, I don't have to ask myself that anymore, since Erich recently posted the appropriate filter patterns for my favourite package manager aptitude for this task in his posting "Finding packages for deinstallation". Thanks, Erich! Since those filters aren't very easy to remember, I'd like to extend the usefulness of his posting towards the commandline. I for myself added the following aliases to my shell setup:
alias aptitude-just-recommended='aptitude -o "Aptitude::Pkg-Display-Limit=!?reverse-depends(~i) ~M !?essential"'
alias aptitude-also-via-dependency='aptitude -o "Aptitude::Pkg-Display-Limit=~i !~M ?reverse-depends(~i) !?essential"'
As youam suggested on IRC, I also added the filter !?essential since we won't touch essential packages when cleaning up the list of installed packages anyway. Hope this helps further.

31 May 2010

Piotr Galiszewski: Hello World

Hello Planet Debian readers!

I've never thought that such a thing can ever happen, but I've started blogging ;) So now it is time to introduce myself.

My name is Piotr Galiszewski and I am a second year student of computer science at AGH - University of Science and Technology in Krakow (Poland). I have been a GNU/Linux user for about 5 years (mostly Debian based distributions).

Thanks to Debian and Google, this summer I will be working on creating a Qt-based user interface for aptitude as my Google Summer of Code project. I hope that my mentors Sune Vuorela and Daniel Burrows will be patient with me ;) I am sure I will learn a lot from them. Please look at the abstract of my project written by Debian GSoC administrator Obey Arthur Liu:
Qt GUI for aptitude. Currently, KDE users need to use Aptitude via the console interface, or install the newly developed GTK frontend, which does not fit well into the KDE desktop. Making a Qt frontend for Aptitude would solve this problem and bring an advanced and fully Debian-compliant graphical package manager to KDE.
As I wrote in my proposal I will split my work into three main parts:
  1. writing low-level classes which will abstract aptitude's signals and slots (which use sigc++) into Qt slots and signals.
  2. creating and evaluating GUI mockups
  3. implementing GUI on top of classes from the first point
Points 1 and 2 will be done simultaneously and will take all of May and half of June. The low-level classes should implement all functions necessary for further use in the GUI. These classes will allow me to avoid direct usage of non-Qt code in the GUI classes and also give me much more time to prepare complete and usable mockups. Every mockup version will be presented and discussed on this blog. The first version should be ready in the next few days and updates will appear every week (or two weeks).
After finishing these two steps I will start coding the GUI. With mockups and finished low-level classes this should not be complicated (yeah, I know that this is only my dream).

Full text of my proposal (including more precise time-line) can be found at Debian wiki.

Currently, the project is slightly behind schedule. This is caused by changes in my studies plan. The Juwanalia students' festival took place earlier, and it finished yesterday. But my first exam will take place one week later, on 18 May, so I will have more time to catch up with the time-line.

This project is my first direct contribution to Debian, but not my first involvement in the free software movement. I've been a Kadu Instant Messenger developer for more than two years. In the last two years I have been the second most active developer, with more than 700 commits in the master branch. During the GSoC period my Kadu activities will be limited. If time allows, I will still be contributing to Kadu. I can still be found at the Kadu forum or the #kadu channel on irc.freenode.net. I will also continue reviewing patches and fixing bugs that are not too time-consuming.

My plans for the next few days:
If you have any thoughts about this project, please add a comment to this post or contact me directly. I will be glad to read all your opinions.

Cheers

P.S. As you can see, English is not my mother tongue, so please forgive my mistakes.

6 April 2010

Daniel Burrows: aptitude-related projects for Summer of Code 2010

I wasn't planning to be a mentor this year, but it looks like I might have enough time to do a decent job at mentoring after all. So here are a few ideas I have kicking around that would probably be suitable for an SoC coder. I have lots more ideas, but these are the first ones that came to mind. These ideas are deliberately broad: as a student, you are responsible for collaborating with me to figure out the details of your project and put together a workable implementation plan. At the same time, this list is not all-encompassing; feel free to come up with your own idea and explain why I should act as a mentor for it.
  1. Improve the GTK+ interface. The GTK+ interface is only partly complete and I'd like it to be completely complete. A few open ends of interest are noted at aptitude-gtk-status-a-visual-tour, or you can just run the program yourself and see what's missing or ugly.
  2. Work on per-package Wiki pages (the idea is sketched out in my email on the subject). Most likely this can use the existing wiki.debian.org infrastructure.
  3. Create a service for tracking user reviews of Debian packages and implement support in one or more apt clients.
Note that the last two aren't aptitude-specific; they basically amount to working out a spec for clients to access the data in question, and then implementing the spec in one or more clients (possibly including aptitude, apt, synaptic, packages.debian.org, etc).

7 March 2010

Aigars Mahinovs: Linksys WRT54GL

Daniel, I also recently installed a WRT on the Linksys WRT54GL, but I used DD-WRT. Following the instructions, I went to http://www.dd-wrt.com/site/support/router-database, entered my model number and got direct download links to the firmware along with a README. One very important point in flashing these routers is to clear the NVRAM by doing a hard reset BEFORE and AFTER an upgrade to a different type of software on the device. Also, starting with a micro build is strongly recommended. In any case the hardware looks to be very solid, especially if you don't need gigabit ethernet and n wireless, but might need some advanced networking features.

Daniel Burrows: Installing OpenWRT on a Linksys WRT54GL v1.1

I finally got a few hours free this morning to check an item off my home system administration checklist: upgrading the wireless router's firmware to OpenWRT. There were a couple motivations for this, including the fact that my SoundBridge Radio couldn't maintain a connection to my firefly server using the built-in firmware, and I had read that OpenWRT would work better. Since this is posted on Planet Debian, I should mention why I didn't use DebianWRT. The basic answer is this text at the top of the DebianWRT homepage:
Currently the most common methods used to run Debian on these systems is to install OpenWRT or a similar firmware, add disk space either by USB storage or NFS, create a debian chroot by either running cdebootstrap from inside OpenWRT or debootstrap --foreign on a PC, and running Debian from this chroot. For example, instructions for the WLHDD.
I'm not sure whether this is because DebianWRT is experimental, or because its goal is to use routers as cheap general computers. Either way, it sounds way too complicated and/or fragile for what I'm interested in (i.e., a wireless router with better software). The goal here is to get something that does a better job than the built-in WRT firmware, but doesn't require too much tinkering to get working or to maintain. I have plenty of outlets for my tinkering urges already. Nothing here was exactly difficult, but it was hard to find all the information I needed to get things working. Hopefully documenting the steps I went through here will save someone else some time. Step 1: acquire the firmware This was trickier than you might think. If you click on the download link at the OpenWRT web site, you end up at the top of an FTP server populated with a vast quantity of stuff. Worse, when you find the actual firmware images, you'll quickly discover that there are piles and piles of them divided between sixteen directories, and no guidance as to which one to pick. And picking the wrong one will turn your beautiful router into a doorstop. Luckily, the OpenWRT documentation contains a section called "Getting Started". Unluckily, that section consists of the following text:
1.1 Getting started 1.1.1 Installation 1.1.2 Initial configuration 1.1.3 Failsafe mode
Whoops, someone forgot to execute on a TODO. :-) Undeterred, I consulted the usual fallback reference, Google. It pointed me at several references, some of which were hidden on other parts of the OpenWRT site. Armed with these, I was able to determine that:
  1. My WRT54GL v1.1 probably uses a Broadcom 5352 chipset.
  2. The correct firmware, according to multiple sources, is probably in the kamikaze top-level directory, the latest version's subdirectory, and the brcm-2.4 directory under the version (the link is to 8.09.2, which is current as of March 6, 2010). Apparently the brcm47xx directory doesn't have wireless support; it helpfully contains a file called NO-BROADCOM-WIRELESS to warn you off, but unhelpfully doesn't include any additional information in that file (like what exactly is broken or that you can find a working firmware in a sibling directory) ... oh well.
  3. The firmware file that you want is openwrt-wrt54g-squashfs.bin, even though it doesn't match the model number of the router. This directory could use a README file explaining which hardware is supported by each of the dozen or so firmware files it contains.
Step 2: install the firmware After Step 1, this was a real relief. I just used the built-in Linksys firmware installer, pointed it at the .bin file, and it went. Step 3: configure the router The installation guide I followed was pretty much silent about what to do after I got the firmware on. Luckily, this is just a software problem, meaning it's much more familiar territory for me.
  1. First things first: I checked that I could still get a DHCP lease. It worked.
  2. Armed with that, I tried telneting to the router. I used the resulting root prompt to set a password on the root account.
  3. I logged out and tried telneting in. Apparently the router is configured to disallow root logins over telnet if you don't have a password. Good for them (although why allow telnet at all?); oops for me.
  4. Luckily, an ssh server was installed by default. I like using keys to log in, so I tracked down the documentation on configuring the server to use public-key authentication; it turns out there's a single global key file named /etc/dropbear/authorized_keys that's exactly like OpenSSH's per-user authorized_keys file. No idea what would happen if I had multiple users, but I won't.
  5. The next obstacle: I didn't have an Internet connection. For some reason, my cable modem didn't want to give the router a DHCP lease. On the off-chance that it was remembering too much, I rebooted it and ran ifup wan. That fixed the problem. I still don't know why.
  6. Not a step, but a useful note: in the process of figuring the above out, I found readlog. It's basically dmesg for syslog files; it shows the most recent lines written to syslog. This is useful because there isn't a real syslog file, due to the fact that there would be no room to store it on the 54GL.
  7. Finally, I had to get wireless working. The documentation is very helpful when it comes to describing the syntax of the wireless configuration. Unfortunately, I read the list of encryption options and missed the section right below where their meanings are explained (although, to be honest, I might not have understood the implications of the explanation without the research I did anyway).
    option encryption none, wep, psk, psk2, wpa, wpa2
    I wanted WPA2 encryption, so I entered wpa2. And nothing worked. After a good hour of trying different options on the client, swapping software components in and out on the router, experimenting with the encryption key syntax, and crawling Google, I finally found my answer. If you just want WPA2 encryption, you must not use wpa2 as the encryption type. Instead, use psk2. It turns out that wpa2 actually means use WPA2 and also use an external RADIUS server for authentication. psk2 is the system you're familiar with from a typical consumer wireless router.
Step 4: enjoy And with that, it works. Unfortunately, contrary to what I wrote here originally, my Roku still doesn't work. On the other hand, having a real Linux installation is helpful for debugging it. Currently my suspicion is that the router isn't passing multicast packets between the wired and wireless interfaces (broadcast works fine, multicast doesn't). That said, it seems like if I restart Firefly just before I start playing music, I can play reasonably reliably -- as long as I don't stop, because if I do, the Roku forgets that the music server is there. Either way, I've spent about as much time fighting this as I can afford. :-/ One lingering worry I have is security; unlike Debian, which has both a security mailing list and tools to inform me when I need to install a security update, the OpenWRT firmware doesn't seem to have any mechanism for distributing security notices. True, there is an openwrt-security-announce list, but it appears to be entirely unused, as is openwrt-announce. Something to keep an eye on, then. Also (file under note to self), I need to remember to verify that the router isn't exposing services to the outside world. The default iptables configuration is hideously complex; with just a quick glance, it could be setting up a floral shop for all I can tell. I'll need to test this empirically and maybe analyze the rules in more depth.

3 May 2009

Daniel Burrows: aptitude-gtk status, a visual tour

As some of you may know, Arthur Liu did the initial work on a GTK+ interface for aptitude last summer as part of the Google Summer of Code program. Since that program ended, I've been working to complete the GTK+ interface, along with some unrelated but similarly large changes to other parts of aptitude. It's been a while since I've written anything about aptitude-gtk, though, mostly because I set it aside for a few months to improve the dependency solver. While I'll probably be doing more work on the backend over the summer, I also hope to get the GTK+ interface to a point where I can feel comfortable including it in a stable release of aptitude. There are two things that I want to be in place for the GTK+ interface before I release it:
  1. The interface should be functionally complete, meaning that you should be able to do anything you can do in the other interfaces.
  2. The interface should be reasonably straightforward to use and nice to look at. This is a never-ending task, but there are some pieces of the current interface that are particularly ugly or awkward that I'd like to fix before release.
I've gone through the major areas of the current UI and marked them up with notes about things that I think need to be done. This isn't a definitive or exhaustive list, but it should give some idea of where aptitude is and where it's going.
Dashboard Tab [screenshot: annotated dashboard tab] The first thing you see when you start aptitude is the dashboard. This is a display that gives you immediate access to some of the most common things you'll want to do: searching for a package, viewing the available upgrades, and upgrading as many packages as aptitude can figure out how to upgrade without removing anything. This is one of the most complete parts of the interface, but there are a few minor aesthetic things that I'd like to take care of. [UPDATE] Michael Johnson writes:
I agree with most of the notes on this screen. However, I'd like to see the Selected Package changelog stay. I often look more than one upgrade back. The summary doesn't show more than what's being upgraded. However, I do like the idea of a package description tab. The search box should definitely always be available. I found it annoying when I had to click back to a different tab to get it. One thing I definitely miss from the console version of aptitude is the list of recommended and suggested packages. I don't let aptitude auto-install recommended packages, but I like to know what they are.

Packages tab [screenshot: annotated packages tab] This is the view that's shown after you search for a package by name, keyword, or by other criteria. It's also fairly complete, but there are more things that could be added (mostly enhancements). Related to this is the dialog box for editing which columns are visible: [screenshot: annotated visible columns editor]
Dependency solver tab [screenshot: annotated dependency solver tab] The dependency solver got a lot of love in the 0.5.2.1 release, and it's now basically functional. There are some minor layout issues to be fixed, and if I have time I'd like to add more ways of controlling the resolver's behavior.
Package information tab [screenshot: annotated package information tab] The package information page is probably the weakest part of the GTK+ aptitude interface; it wouldn't be too much of a stretch to say that this tab is holding back a stable release of aptitude-gtk. The big problem I have with it is that I feel that the information tab should open up with an overview of the status of the package. What version is installed, what's available, what archive is it in, who's the maintainer, how big is it, etc. Instead, aptitude-gtk currently shows you a big tree of all the package's dependencies. While that's useful, it should be something the user chooses from. I also find this tab to be visually unappealing, but I'm not entirely sure how to fix that. [UPDATE] Gunnar Wolf writes:
As you have mentioned in your blog post (quote this mail if you so wish) that you want to make the package information screen "sexier": I think it would be natural to integrate screenshots (from http://screenshots.debian.net/) into it. Of course, it would be worth measuring whether it imposes too high a load into that system (maybe it should only get a screenshot if the user explicitly asks for it) - but surely would be a net win.
I think this is a great idea and I'm definitely planning to incorporate this feature. [UPDATE] Michael Johnson writes:
One of the things I really like about aptitude-gtk over console-aptitude is how obvious it makes some things. The version display on this tab is one of those things. Maybe it could be more obvious, I don't know. I agree, the dependency list is not the most important thing. And it's missing the reverse dependencies. I use that more than the dependencies. As far as the actual display of the list, I can't think of anything besides a tree. Although the current implementation does feel quite clunky. What I'd more like to see first is the list of files in the package. But I can see how other people would have different priorities. So it should be really easy to change the default view.

Preferences tab [screenshot: vast gaping hole where the preferences tab should be] One of the reasons that it is a stretch to say that the information tab is holding aptitude-gtk back is the preferences tab, which is a huge pile of not-yet-implemented-ness. In fact, it's not even designed, let alone implemented. Releasing a GUI program with no way to alter the settings strikes me as ... silly.
Preview tab [annotated screenshot] The preview tab is a bit weaker than the packages tab, in my opinion, but much stronger than the information tab. Like the information tab, I find it visually unappealing, but I'm not entirely sure how to improve it. I'm currently leaning towards the paned browser idea. [UPDATE] Michael Johnson writes:
I've never been comfortable using this tab. It never made much sense to me. I think the last one or two times I used aptitude-gtk it started making sense, but that seems to be too long a time. I think it's the mass of text at each top level entry. The console version has brief descriptions with more info in the details pane when the entry is selected. Perhaps healthy use of tool tips or an info pane would help with this. The paned browser idea sounds promising. I think it will be easier for new users to follow. And it provides a natural way to provide more information on what each section is about.

That's all folks!

25 April 2009

Daniel Burrows: aptitude 0.5.2 released

I've released version 0.5.2 of the aptitude package manager (release notes). You can download it from Debian's experimental archive, or from the aptitude package page. WARNING: This is an unfinished piece of software. Do not install it if you want a package manager that does everything you need perfectly all the time. It is buggy and incomplete. This update includes major changes to the resolver backend and a new and improved GUI frontend for the resolver: It also includes an assortment of other fixes and improvements. Click through to the release notes to see a list of what changed in this release, with screen-shots.

23 March 2009

Daniel Burrows: Merge of doom

daniel@emurlahn:~/programming/aptitude/head$ hg log -r tip
changeset:   2611:c81f562751d6
tag:         tip
parent:      1665:94b597a663a7
parent:      2610:78100f86756d
user:        Daniel Burrows <dburrows@debian.org>
date:        Mon Mar 23 10:49:47 2009 -0700
summary:     Merge the entire post-lenny branch into head.
daniel@emurlahn:~/programming/aptitude/head$ hg log -p -r tip | diffstat -m | tail --lines 1
 239 files changed, 105227 insertions(+), 16334 deletions(-), 69203 modifications(!)
That's right: active aptitude development is now taking place in the head branch again, after a hiatus of about a year during which I worked in a separate branch while waiting for Lenny to be released. Lenny is released now, and so the post-lenny and gtk branches have been merged into head and renamed to avoid accidental commits (you can find them under the names post-lenny-branch-is-closed and gtk-branch-is-closed). Future releases will come out of the head branch (http://hg.debian.org/hg/aptitude/head).

Daniel Burrows: Sometimes, tools are useful.

I generally try not to get caught up in writing auxiliary tools for my free software projects. My experience in the past has been that given how little spare time I have each week, a tool that takes more than an hour or two to write quickly ends up becoming a time sink that prevents me from making progress on the main project. But occasionally it's worth the effort. [screenshot] I spent the last couple weeks putting together a tool to extract information about a dependency search from a log trace of aptitude's resolver. The search is displayed in a log interface that shows both the steps of the search in order, and the tree structure of the search. It took a long time to write, but after working with it for just a few hours, I was able to track down the bugs in the resolver modifications I was trying to write; without a tool like this, it would have taken a lot of painful grepping through log files to get all the information I needed. In other news, the resolver now stores the choices it has made as a single set of choices, instead of using two different sets for the two types of choices it can make. There's no user-visible change (although I did fix another performance problem that affected safe-upgrade), but this will pave the way for adding more types of choices in a cleaner way: in particular, the resolver should get explicit support for package replacements in the near future. The code for this is available in the tools directory of aptitude's experimental Mercurial repository, http://hg.debian.org/hg/aptitude/post-lenny. When I get around to merging this with the main repository, it will also be available at http://hg.debian.org/hg/aptitude/head.
A brief digression regarding the implementation language The tool is implemented in Haskell, a decision that I'm not sure was the right one. Haskell's data model was perfect for this task: having algebraic datatypes allows me to precisely represent cases where parts of the structure can't be inferred from the log file, or can only be inferred incompletely. And of course there are the other well-known benefits of Haskell (pure, a strong type system, good support for higher-order programming, etc). But all those benefits were wiped out by the time I spent fighting the runtime. The thing is, Haskell is a lazy language. In theory, this means that it never evaluates anything you don't need and that you can work with arbitrarily large (even infinite) values and only allocate the parts you actually use. In practice, this means that you have to pay excruciatingly close attention to every single expression you write, or as soon as you run your program on non-trivial input, it will either run very slowly, eat all your memory, die with a stack overflow, or all of the above. Oh, and did I mention that laziness means that you can't have a runtime debugger, that trace statements are mostly-useless, because your program doesn't run in any predictable order, and that it's impossible to get backtraces when there's a fatal error? I think if I were doing this over again, I'd use O'Caml. I'm less familiar with it and its syntax is clunkier, but I have used it productively in the past and it provides the core stuff I wanted from Haskell (functional programming with algebraic datatypes). If someone could make a language that was pure, had Haskell's nice syntax and type features, but was also strict, and that had decent tool support and development resources, I think I'd be in nerd heaven.

6 March 2009

Daniel Burrows: Help make aptitude faster!

I'm working on aptitude's dependency resolver at the moment. Mostly I'm working on some changes to support better UI, but while I have all the code in my head I'd like to try to fix some of the nagging speed problems it has. I've already managed to clear up some simple but large problems with aptitude safe-upgrade, because I could get them to happen on my computer. The problem is, most of these problems don't happen on my computer. I have a half dozen or so specific ideas that might speed aptitude up, but I don't want to just randomly apply tweaks without some baseline tests that I can use to make sure that I'm actually making things better and not just perturbing the code for no reason. :-) Plus, seeing why aptitude is actually running slowly might help when it comes to finding ways to improve its performance. So, this is where you come in. If you are using lenny and aptitude takes a long time to resolve dependencies on your computer outside of aptitude safe-upgrade, I would like to hear from you. All you need to do is to run this command when aptitude is slow, before you upgrade any packages:
$ aptitude-create-state-bundle aptitude-state.tar.bz2
Then, find a way to get the resulting large file to me. Email will work if your outgoing server accepts it; send to dburrows@algebraicthunk.net. That will tar up /etc/apt, /var/lib/apt, /var/cache/apt, and a few other directories: run aptitude-create-state-bundle --print-inputs to see exactly what it will include. Thanks! [UPDATE]: PS: please also include a brief description of the problem that you're seeing so I know what to test. Something like, e.g., when I run aptitude full-upgrade it takes thirty minutes to complete. [UPDATE]: PPS: of course you have to run aptitude-create-state-bundle before you actually upgrade any packages! i.e., you must type n at aptitude's command-line prompt or quit the program if you're using the curses interface. Otherwise, your state snapshot will just tell me what you upgraded to, which is only half the information I need (I also need the state of all packages prior to the upgrade).

4 March 2009

Daniel Burrows: Well, that was easier than expected

daniel@emurlahn:~/programming/aptitude/post-lenny$ time /usr/bin/aptitude -sy safe-upgrade | tail --lines 3
real    0m46.016s
user    0m41.639s
sys     0m0.172s
daniel@emurlahn:~/programming/aptitude/post-lenny$ time ./src/aptitude -sy safe-upgrade | tail --lines 3
real    0m5.336s
user    0m4.096s
sys     0m0.136s
More of the same coming, although I doubt I'll be able to top those statistics. And all it took was deleting twelve lines of code. Well, that and rewriting the entire resolver framework so that those twelve lines were no longer necessary, but who's counting? I'll post more details once I have everything in place and I'm sure it works as expected (most likely in a week or two).

21 February 2009

Daniel Burrows: Access control FAIL

Seen at work today while grovelling through some old code:
class Foo
{
public:
  Foo(); // Do not use -- no body on purpose
};

12 February 2009

Daniel Burrows: Should we be scared of the future or not?

I recently received an email from Sigma Xi with various news items. One of them mentioned a Hawking quote:
Science fiction is useful both for stimulating the imagination and for diffusing fear of the future. -- Stephen Hawking?
Now, Sigma Xi is generally a pretty reputable organization, but I wondered: is a scientific organization really encouraging fear of the future? Mind, I think a certain degree of concern about the future is healthy, both individually and societally, and there is a long tradition in science fiction of writing novels about terrifying futures. But Sigma Xi is usually upbeat about the wonders of science and technological progress. Besides, there's a real dissonance in that quote between becoming more creative and imaginative, and becoming more afraid of the future. Maybe it was intentional, but it seems very out-of-place. So, I started to wonder whether this was really what Hawking said. Typing a few key words into Google, I looked at where else this quote showed up. According to the NSF, Sigma Xi's quote is correct.
According to renowned physicist Stephen Hawking, "science fiction is useful both for stimulating the imagination and for diffusing fear of the future." Interest in science fiction may affect the way people think about or relate to science...
ERIC, which is some sort of database of internal government reports, agrees:
According to renowned physicist Stephen Hawking, "science fiction is useful both for stimulating the imagination and for diffusing fear of the future." Indeed, several studies suggest that using science fiction movies as a teaching aid can improve both motivation and achievement.
So, maybe Hawking really does encourage being afraid of the future? Well, here was the third and final reference to this quote that I found, from (of all places) an unofficial transcript of a Larry King Live interview:
HAWKING: I think science fiction is useful, both for stimulating the imagination and for defusing fear of the future. But science fact can be even more amazing. Science fiction never suggested anything as strange as black holes.
So, it looks like this quote comes from a live TV interview. I don't know whether the different interpretations come from different transcripts (this is the only one I can find); defused and diffused are similar enough that it would be easy for either one to be reasonable -- and IIRC, he speaks through some sort of speech-generating machine, which would just make it easier to misunderstand him. But given that Hawking is a crazy-science-guy, I suspect that he would rather defuse fear of the future than spread it around (even if it became more diffuse as it was spread), and that the transcript above is what he meant to say. In which case, I wonder about the versions of the quote that I found everywhere else. Didn't anyone do a double-take when they were typing up those papers, Web pages and news items? Of course, there is a third possibility, which I rather like: the quote could be a play on words, a sort of slant pun (a phrase that I just invented, which is the funnier cousin of a slant rhyme). What's particularly cool about this particular ambiguity is that (a) both sentences are sensible, (b) they have related but opposite meanings, and (c) somewhat oddly, I agree with both of them: science fiction is good at defusing unwarranted fear of the future, while also diffusing fear where it's entirely warranted. I doubt Hawking was going for the double meaning, but it's a nice thought.
And for the inevitable people who can't figure out what this blog entry is about, here's a hint: diffuse:
Verb
  1. (transitive) To spread over or through as in air, water, or other matter, especially by fluid motion or passive means.
  2. (intransitive) To be spread over or through as in air, water, or other matter, especially by fluid motion or passive means.
defuse:
Verb
  1. To remove the fuse from a bomb, etc.
  2. To make something less dangerous, tense, or hostile.
