Search Results: "Antti-Juhani Kaijanaho"

9 November 2014

Antti-Juhani Kaijanaho: About to retire from Debian

I got involved with Debian development in, I think, 1998. In early 1999, I was accepted as a Debian developer. The next two or three years were a formative experience for me. I learned both software engineering and massively international collaboration; I also made two major contributions to Debian that are still around (of this, I am very proud). In consequence, being a Debian developer became a part of my identity. Even after my activity lessened more than a decade ago, after I was no longer a carefree student, it was very hard for me to let go. So I've hung on. Until now. I created my 4096-bit GPG key (B00B474C) in 2010, but never got around to collecting signatures for it. I've seen other people send me their key transition statements, but I have not signed any keys based on them. It just troubles me to endorse a better-secured key based on the fact that I once verified a less secure key and have a key signature chain to it. For this reason, I have not made any transition statements of my own. I've been meaning to set up key signing meetings with Debian people in Finland. I never got around to that, either. That, my friends, was my wakeup call. If I can't be bothered to do that, what business do I have clinging on to my Debian identity? My conclusion is that there is none. Therefore, I will be retiring from Debian. This is not a formal notice; I will be doing the formal stuff (including disposing of my packages) separately in the appropriate forums in the near future. I agree with the sentiment that Joey Hess wrote elsewhere: "It's become abundantly clear that this is no longer the project I originally joined." Unlike Joey, I think that is a good thing. Debian has grown, a lot. It's not perfect, but as my elementary-school teacher once said: a thing that is perfect was not made by people. Just remember to continue growing and getting better. And keep making the Universal Operating System. Thank you, all.

29 August 2014

Antti-Juhani Kaijanaho: Licentiate Thesis is now publicly available

My recently accepted Licentiate Thesis, which I posted about a couple of days ago, is now available in JyX. Here is the abstract again for reference: Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary
Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature.
Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages.
Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created.
Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing, have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method.
Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.
Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

23 August 2014

Antti-Juhani Kaijanaho: A milestone toward a doctorate

Yesterday I received my official diploma for the degree of Licentiate of Philosophy. The degree lies between a Master's degree and a doctorate, and is not a required step toward the latter; it consists of the coursework required for a doctorate, and a Licentiate Thesis, "in which the student demonstrates good conversance with the field of research and the capability of independently and critically applying scientific research methods" (official translation of the Government Decree on University Degrees 794/2004, Section 23 Paragraph 2). The title and abstract of my Licentiate Thesis follow:
Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary
Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature.
Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages.
Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created.
Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing, have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method.
Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.
Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis
A Licentiate Thesis is assessed by two examiners, usually drawn from outside the home university; they write (either jointly or separately) a substantiated statement about the thesis, in which they suggest a grade. The final grade is almost always the one suggested by the examiners. I was very fortunate to have such prominent scientists as Dr. Stefan Hanenberg and Prof. Stein Krogdahl as the examiners of my thesis. They recommended, and I received, the grade "very good" (4 on a scale of 1 to 5). The thesis has been published in our faculty's licentiate thesis series and has appeared in our university's electronic database (along with a very small number of printed copies). In the meantime, if anyone wants an electronic preprint, send me email at antti-juhani.kaijanaho@jyu.fi.
Figure 1 of the thesis: an overview of the mapping process
As you can imagine, the last couple of months in the spring were very stressful for me, as I pressed on to submit this thesis. After submission, it took me nearly two months to recover (which certain people who emailed me on Planet Haskell business during that period certainly noticed). It represents the fruit of almost four years of work (far more than is normally taken to complete a Licentiate Thesis, but never mind that), as I designed this study in Fall 2010.
Figure 8 of the thesis: Core studies per publication year
Recently, I have been writing in my blog a series of posts in which I have been trying to clear my head about certain foundational issues that irritated me during the writing of the thesis. The thesis contains some of that, but that part of it is not very strong, as my examiners put it, for various reasons. The posts have been a deliberately non-academic attempt to shape the thoughts into words, to see what they look like fixed into a tangible form. (If you go read them, be warned: many of them are deliberately provocative, and many of them are intended as tentative in fact if not in phrasing; the series also is very incomplete at this time.) I closed my previous post, the latest post in that series, as follows:
In fact, the whole of 20th Century philosophy of science is a big pile of failed attempts to explain science; not one explanation is fully satisfactory. [...] Most scientists enjoy not pondering it, for it's a bit like being a cartoon character: so long as you don't look down, you can walk on air.
I wrote my Master's Thesis (PDF) in 2002. It was about the formal method called B; but I took a lot of time and pages to examine the history and content of formal logic. My supervisor was, understandably, exasperated, but I did receive the highest possible grade for it (which I have never fully accepted I deserved). The main reason for that digression: I looked down, and I just had to go poke the bridge I was standing on to make sure I was not, in fact, walking on air. In the many years since, I've taken a lot of time to study foundations, first of mathematics, and more recently of science. It is one reason it took me about eight years to come up with a doable doctoral project (and I am still amazed that my department kept employing me; but I suppose they like my teaching, as do I). The other reason was, it took me that long to realize how to study the design of programming languages without going where everyone has gone before. Debian people, if any are still reading, may find it interesting that I found significant use for the dctrl-tools toolset I have been writing for Debian for about fifteen years: I stored my data collection as a big pile of dctrl-format files. I ended up making some changes to the existing tools (I should upload the new version soon, I suppose), and I wrote another toolset (unfortunately one that is not general purpose, like the dctrl-tools are) in the process. For the Haskell people, I mainly have an apology for not attending to Planet Haskell duties in the summer; but I am back in business now. I also note, somewhat to my regret, that I found very few studies dealing with Haskell. I just checked; I mention Haskell several times in the background chapter, but it is not mentioned in the results chapter (because there were no studies worthy of special notice). I am already working on extending this work into a doctoral thesis. I expect, and hope, to complete that one faster.

22 October 2011

Antti-Juhani Kaijanaho: Have you edited /etc/grep-dctrl.rc or ~/.grep-dctrlrc ?

If so, please tell me in the comments, how and for what purpose. I am going to remove this configuration feature as being an unnecessary complication, unless I hear compelling user stories.

11 September 2010

Antti-Juhani Kaijanaho: Have you edited /etc/grep-dctrl.rc or ~/.grep-dctrlrc ?

If so, please tell me in the comments, how and for what purpose. I'm pondering whether this configurability is unnecessary baggage.

28 August 2010

Antti-Juhani Kaijanaho: Dear Lazyweb: Does this software exist?

I've been wondering if the following kind of testing management software exists (preferably free software, of course). It would allow one to specify a number of test cases. For each, one should be able to describe preconditions, testing instructions and expected outcome. Also, file attachments should be supported in case a test case needs a particular data set. It would publish a web site describing each test case. A tester (who in the free software world could be anyone) would take a test case, follow the instructions given and observe whatever outcome occurs. The tester would then file a test report with this software, either a terse success report or a more verbose failure report. The software should maintain testing statistics so that testers could easily choose test cases that have a dearth of reports. As a bonus, it would be nice if the software could submit a failure report as a bug report. (Note that this would be useful for handling the sort of tests that cannot be automated. There are many good ways already to run automated test suites.)

13 August 2010

Antti-Juhani Kaijanaho: dctrl-tools translations

dctrl-tools 1.14 (targeting squeeze) has the following incomplete translations (as of right now in git):
ca: 89 translated messages, 4 fuzzy translations, 18 untranslated messages.
cs: 108 translated messages, 1 fuzzy translation, 2 untranslated messages.
de: 111 translated messages.
en_GB: 89 translated messages, 4 fuzzy translations, 18 untranslated messages.
es: 89 translated messages, 4 fuzzy translations, 18 untranslated messages.
fi: 111 translated messages.
fr: 108 translated messages, 1 fuzzy translation, 2 untranslated messages.
it: 65 translated messages, 8 fuzzy translations, 38 untranslated messages.
ja: 89 translated messages, 4 fuzzy translations, 18 untranslated messages.
pl: 49 translated messages, 2 fuzzy translations, 60 untranslated messages.
pt_BR: 89 translated messages, 4 fuzzy translations, 18 untranslated messages.
ru: 108 translated messages, 1 fuzzy translation, 2 untranslated messages.
sv: 84 translated messages, 4 fuzzy translations, 23 untranslated messages.
vi: 89 translated messages, 4 fuzzy translations, 18 untranslated messages.
I have put the relevant pot and po files up. This is not an archival URI, but I'll keep it available long enough for this. Submissions through the BTS are accepted, as usual. Debian developers and others with collab-maint access may, if they wish, push their updates directly to the Git repository. Please use the maint-2.14 branch and please read the README. I will NOT be gathering translations from Rosetta. All contributors and committers are asked to declare whether they can affirm the Developer's Certificate of Origin. The commit tag Signed-off-by is, by convention, interpreted as an affirmation of that document by the person identified on that tag line.

28 March 2010

Antti-Juhani Kaijanaho: Coloring line art in the Gimp, and some other stuff

I have written a tutorial on how to produce a colored comic strip from script to finished image, using pencils, ink, paper, scanner, and the Gimp. It, of course, suffers from the fact that I am myself fairly new to this stuff, but there it is. I hope someone finds it useful.
Drawing a speech bubble

29 November 2009

Antti-Juhani Kaijanaho: New Netnews (Usenet) RFCs

The RFC editor has finally released the new Netnews RFCs. They obsolete the venerable RFC 1036 (Standard for Interchange of USENET Messages) from December 1987. The new RFCs are the work of the IETF working group USEFOR, chartered November 1997 and concluded March 2009. I've heard it was not quite the longest-lived IETF working group ever. (I personally missed the group by a couple of months, since I started following Netnews and NNTP standardization in April, due to Alue.) Both RFCs are on the Standards Track, currently as Proposed Standards. In my opinion, they are a huge improvement over both RFC 1036 and son-of-1036 (which will probably be published as a Historic RFC real soon now).

20 November 2009

Antti-Juhani Kaijanaho: Socialized vaccination a narrative

Such was the scene I arrived at on Wednesday last week at the municipal health center at Kyllö, Jyväskylä, Finland. A queue extended a hundred meters beyond the door. It was not hard to guess what it was about, as it had been announced that the pandemic influenza A/H1N1 vaccine would be administered there to people in specific risk groups from 10 am to 3:30 pm. I should explain here the Finnish health care setup. There are three parallel systems: a comprehensive public health care system maintained by the municipalities to standards set by the state, a network of private for-profit health care providers, and a national foundation dedicated to university student health care. Employers are required by law to provide a minimal level of health care to their employees, and most of them also provide, free of charge to the employees, access to near-comprehensive general-practice-level care; most employers buy this service from the for-profit providers. The public system suffers from resource starvation at the general-practice level but provides excellent care in the central hospitals that handle the difficult cases. Anyway, the H1N1 vaccine is only available through the public system and through the foundation, free of charge if you qualify for the vaccine, and no amount of money buys it for you in this country if you don't. Thus, I and many others, normally cared for by the employer-provided services, found ourselves queuing up at a public health care institute. And clearly, the public health care system was overwhelmed on that first day. When I entered the queue, its tail was a traffic hazard. Fortunately, the queue moved faster than new people arrived thereafter, and the hazard ended. The queue moved surprisingly fast: it took me one hour to advance the 100 meters to the door. Even so, this was a failure in the system; there is no good reason to have people with underlying illnesses (and we all had them, to qualify for the vaccine) stand around in freezing cold weather for an hour!
Once, a nurse came and asked us if any of us was 63 years old or older. Apparently no-one was, since no-one was asked to leave (they will be vaccinated later). Later, another nurse asked everyone in the queue outside to show their KELA cards (KELA is the national health insurance system, and its card carries information about extra coverage one qualifies for due to underlying illnesses). Eventually, I reached the door. Two guards stopped anyone who tried to enter directly, sidestepping the queue, and let in only those who had legitimate business other than vaccination. The main hall was full of people, and I quickly realized that the queue took a full circle inside the hall to reach the multiplexing point. It took me another hour to slowly advance my way through the hall. At the multiplexing point, I was asked to wait a bit, and then I was assigned a vaccination room and a waiting number. Some twenty minutes later I was called in. The vaccination room I was assigned to was a nurse's office. Two nurses were there, one who would administer the vaccine, and another at the computer, to keep a record. I gave them my KELA card, shed my coat and my outer shirt, and bared my left shoulder. I was quickly given the pandemic vaccine; there was no question I was qualified for it, not with my obesity being obvious. Then they asked what my diagnosis was. "Primarily I'm here because of my obesity," I said. "But I also have paroxysmal atrial fibrillation." "That's not what your KELA card says," accused the nurse at the computer. "The diagnosis is so new," I countered. "There has been no time to do the paperwork for KELA." (And indeed, I later learned, it would come up in my next checkup in the spring.) They stared at each other. "I can show you my prescriptions," I said, making no move for them. Stares. I stared back. "Do you want the seasonal vaccine or not?" asked the nurse with the injectors. I laughed briefly. "I might as well." It had, honestly, never crossed my mind that I might qualify.
She injected me in my right shoulder. "You should stay out there for ten minutes." I picked up my clothes and found the cafeteria, with coffee and pastry. I left the building three hours after I arrived at the end of the queue. What? You think that's excessive? So do I and many others; queuing feels so Soviet Union! But honestly, while it did take time, it worked. Then the real fun started. The next day, I woke with horrible upper back pain. The thermometer showed mild fever; but since I didn't have any respiratory symptoms, I decided to go to work. In the evening, turning in bed was excruciatingly painful. It took days for the pain to subside. Still: I am vaccinated; are you?

2 November 2009

Antti-Juhani Kaijanaho: 0x20 040 32

Now.

16 May 2009

Antti-Juhani Kaijanaho: This is Alue

I have made a couple of references in my blog to the new software suite I am writing, which I am calling Alue. It is time to explain what it is all about. Alue will be a discussion forum system providing a web-based forum interface, an NNTP (Netnews) interface and an email interface, all with equal status. What will be unusual compared to most of the competition is that all these interfaces will be coequal views of the same abstract discussion, instead of the system being primarily one of these things and providing the others as bolted-on gateways. (I am aware of at least one other such system, but it is proprietary and thus not useful to my needs. Besides, I get to learn all kinds of fun things while doing this.) I have, over several years, come across the need for such systems many times and never found a good, free implementation. I am now building this software for the use of one new discussion site that is being formed (which is graciously willing to serve as my guinea pig), but I hope it will eventually be of use to many other places as well. I now have the first increment ready for beta testing. Note that this is not even close to being what I described above; it is merely a start. It currently provides a fully functional NNTP interface to a rudimentary (unreliable and unscalable) discussion database. The NNTP server implements most of RFC 3977 (the base NNTP spec; IHAVE, MODE-READER, NEWNEWS and HDR are missing), all of RFC 4642 (STARTTLS) and a part of RFC 4643 (AUTHINFO USER; the SASL part is missing). The article database is intended to support, with certain deliberate omissions, the upcoming Netnews standards (USEFOR and USEPRO), but currently omits most of the mandatory checks. There is a test installation at verbosify.org (port 119), which allows anonymous reading but requires identification and authentication for posting. I am currently handing out accounts only by invitation.
Code can be browsed in a Gitweb; git clone requests should be directed to git://git.verbosify.org/git/alue.git/. There are some tweaks to be done to the NNTP frontend, but after that I expect to be rewriting the message filing system to be at least reliable if not scalable. After that, it is time for a web interface.

14 May 2009

Antti-Juhani Kaijanaho: Asynchronous transput and gnutls

CC0
To the extent possible under law,
Antti-Juhani Kaijanaho has waived all copyright and related or neighboring rights to
Asynchronous transput and gnutls. This work is published from Finland.
GnuTLS is a wonderful thing. It even has a thick manual, but nevertheless its documentation is severely lacking from the programmer's point of view (and there don't even seem to be independent howtos floating on the net). My hope is to remedy that problem, in small part, with this post. I spent the weekend adding STARTTLS support to the NNTP (reading) server component of Alue. Since Alue is written in C++ and uses the Boost ASIO library as its primary concurrency framework, it seemed prudent to use ASIO's SSL sublibrary. However, the result wasn't stable, and debugging it looked unappetizing. So, I wrote my own TLS layer on top of ASIO, based on gnutls. Now, the gnutls API looks like it works only with synchronous transput: all TLS network operations are of the form "do this and return when done"; for example, gnutls_handshake returns once the handshake is finished. So how does one adapt this to asynchronous transput? Fortunately, there are (badly documented) hooks for this purpose. An application can tell gnutls to call application-supplied functions instead of the read(2) and write(2) system calls. Thus, when setting up a TLS session but before the handshake, I do the following:
                gnutls_transport_set_ptr(gs, this);
                gnutls_transport_set_push_function(gs, push_static);
                gnutls_transport_set_pull_function(gs, pull_static);
                gnutls_transport_set_lowat(gs, 0);
Here, gs is my private copy of the gnutls session structure, and push_static and pull_static are static member functions in my session wrapper class. The first line tells gnutls to give the current this pointer (a pointer to the current session wrapper) as the first argument to them. The last line tells gnutls not to try treating the this pointer as a Berkeley socket. The pull_static static member function just passes control on to a non-static member, for convenience:
ssize_t session::pull_static(void * th, void *b, size_t n)
{
        return static_cast<session *>(th)->pull(b, n);
}
The basic idea of the pull function is to try to return immediately with data from a buffer, and if the buffer is empty, to fail with an error code signalling the absence of data with the possibility that data may become available later (the POSIX EAGAIN code):
class session
{
        [...]
        std::vector<unsigned char> ins;
        size_t ins_low, ins_high;
        [...]
};
ssize_t session::pull(void *b, size_t n_wanted)
{
        unsigned char *cs = static_cast<unsigned char *>(b);
        if (ins_high - ins_low == 0)
        {
                errno = EAGAIN;
                return -1;
        }
        size_t n = ins_high - ins_low < n_wanted
                ?  ins_high - ins_low
                :  n_wanted;
        for (size_t i = 0; i < n; i++)
        {
                cs[i] = ins[ins_low+i];
        }
        ins_low += n;
        return n;
}
Here, ins_low is an index to the ins vector specifying the first byte which has not already been passed on to gnutls, while ins_high is an index to the ins vector specifying the first byte that does not contain data read from the network. The assertions 0 <= ins_low, ins_low <= ins_high and ins_high <= ins.size() are obvious invariants in this buffering scheme. The push case is simpler: all one needs to do is buffer the data that gnutls wants to send, for later transmission:
class session
{
        [...]
        std::vector<unsigned char> outs;
        size_t outs_low;
        [...]
};
ssize_t session::push(const void *b, size_t n)
{
        const unsigned char *cs = static_cast<const unsigned char *>(b);
        for (size_t i = 0; i < n; i++)
        {
                outs.push_back(cs[i]);
        }
        return n;
}
The low-water mark outs_low (indicating the first byte that has not yet been sent to the network) is not needed in the push function. It would be possible for the push callback to signal EAGAIN, but it is not necessary in this scheme (assuming that one does not need to establish hard buffer limits). Once gnutls receives an EAGAIN condition from the pull callback, it suspends the current operation and returns to its caller with the gnutls condition GNUTLS_E_AGAIN. The caller must arrange for more data to become available to the pull callback (in this case by scheduling an asynchronous write of the data in the outs buffer and an asynchronous read into the ins buffer) and then call the operation again, allowing it to resume. The code so far does not actually perform any network transput. For this, I have written two auxiliary methods:
class session
{
        [...]
        bool read_active, write_active;
        [...]
};
void session::post_write()
{
        if (write_active) return;
        if (outs_low > 0 && outs_low == outs.size())
        {
                outs.clear();
                outs_low = 0;
        }
        else if (outs_low > 4096)
        {
                outs.erase(outs.begin(), outs.begin() + outs_low);
                outs_low = 0;
        }
        if (outs_low < outs.size())
        {
                stream.async_write_some
                        (boost::asio::buffer(outs.data()+outs_low,
                                             outs.size()-outs_low),
                         boost::bind(&session::sent_some,
                                     this, _1, _2));
                write_active = true;
        }
}
void session::post_read()
{
        if (read_active) return;
        if (ins_low > 0 && ins_low == ins.size())
        {
                ins.clear();
                ins_low = 0;
                ins_high = 0;
        }
        else if (ins_low > 4096)
        {
                ins.erase(ins.begin(), ins.begin() + ins_low);
                ins_high -= ins_low;
                ins_low = 0;
        }
        if (ins_high + 4096 >= ins.size()) ins.resize(ins_high + 4096);
        stream.async_read_some(boost::asio::buffer(ins.data()+ins_high,
                                                   ins.size()-ins_high),
                               boost::bind(&session::received_some,
                                           this, _1, _2));
        read_active = true;
}
Both helpers prune the buffers when necessary. (I should really remove those magic 4096s and make them a symbolic constant.) The data members read_active and write_active ensure that at most one asynchronous read and at most one asynchronous write are pending at any given time. My first version did not have this safeguard (instead trying to rely on the ASIO stream reset method to cancel any outstanding asynchronous transput at need), and the code sent some TLS records twice, which is not good: sending the ServerHello twice is guaranteed to confuse the client. Once ASIO completes an asynchronous transput request, it calls the corresponding handler:
void session::received_some(boost::system::error_code ec, size_t n)
{
        read_active = false;
        if (ec) { pending_error = ec; return; }
        ins_high += n;
        post_pending_actions();
}
void session::sent_some(boost::system::error_code ec, size_t n)
{
        write_active = false;
        if (ec) { pending_error = ec; return; }
        outs_low += n;
        post_pending_actions();
}
Their job is to update the bookkeeping and to trigger the resumption of suspended gnutls operations (which is done by post_pending_actions). Now we have all the main pieces of the puzzle. The remaining pieces are obvious but rather messy, and I'd rather not repeat them here (not even in a cleaned-up form). But their essential idea goes as follows: When called by the application code or when resumed by post_pending_actions, an asynchronous wrapper of a gnutls operation first examines the session state for a saved error code. If one is found, it is propagated to the application using the usual ASIO techniques, and the operation is cancelled. Otherwise, the wrapper calls the actual gnutls operation. When it returns, the wrapper examines the return value. If successful completion is indicated, the handler given by the application is posted in the ASIO io_service for later execution. If GNUTLS_E_AGAIN is indicated, post_read and post_write are called to schedule actual network transput, and the wrapper is suspended (by pushing it into a queue of pending actions). If any other kind of failure is indicated, it is propagated to the application using the usual ASIO techniques. The post_pending_actions method merely empties the queue of pending actions and schedules the actions that it found in the queue for resumption. The code snippets above are not my actual working code. I have mainly removed from them some irrelevant details (mostly certain template parameters, debug logging and mutex handling). I don't expect the snippets to compile. I expect I will be able to post my actual git repository to the web in a couple of days. Please note that my (actual) code has received only rudimentary testing. I believe it is correct, but I won't be surprised to find it contains bugs in the edge cases. I hope this is, still, of some use to somebody :)

8 May 2009

Antti-Juhani Kaijanaho: Star Trek

It is curious to see that the eleventh movie in a series is the first to bear the series name with no adornment. It is apt, however: Star Trek is a clear attempt at rebooting the universe and basically forgetting most of the decades-heavy baggage. It seems to me that the reboot was fairly well done, too. The movie opens with the birth of James Tiberius Kirk, and follows his development into the Captain of the Enterprise. Along the way, we also see the growth of Spock from adolescence into Kirk's trusted sidekick and also into... well. Despite the fact that the action plot macguffins are time travel and planet-killer weaponry, it is mainly a story of personal vengeance, personal tragedy, and personal growth. Curiously enough, although Kirk gets a lot of screen time, it is really the personal story of Spock. Besides Kirk and Spock, we also get to meet reimagined versions of Uhura (I like!), McCoy, Sulu, Chekov and Scott. And Christopher Pike, the first Captain of the Enterprise. The appearance of Leonard Nimoy as the pre-reboot Spock merits a special mention and a special thanks. I overheard someone say in the theatre, after the movie ended, that the movie was a ripoff and had nothing to do with anything that had gone before. I respectfully disagree. The old Star Trek continuum had been weighed down by all the history into being a 600-pound elderly man who is unable to leave the couch on his own. This movie provided a clean reboot, ripping out most of the baggage, retaining the essence of classic Star Trek and giving a healthy, new platform for good new stories. One just hopes Paramount is wise enough not to foul it up again. It was worth it, I thought.

2 May 2009

Antti-Juhani Kaijanaho: dpkg tip

If your dpkg runs seem to take a long time in the reading database step, try this:

Step One: Clear the available file: dpkg --clear-avail

Step Two: Forget old unavailable packages: dpkg --forget-old-unavail

Step Three: If you use grep-available or other tools that rely on a useful available file, update the available file using sync-available (in the dctrl-tools package).

The few times I've tried it (all situations where the reading database step seemed to take ages), it has always sped the process up dramatically. There probably are situations where it won't make much difference, but I haven't run into them.
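The three steps above can be wrapped in a small shell function. This is a hypothetical convenience wrapper, not part of the post: the DPKG variable exists only so the sketch can be dry-run without touching a real dpkg database, and step three is skipped automatically when sync-available is not installed.

```shell
# Hypothetical wrapper for the three steps above.
refresh_dpkg_db() {
    dpkg_cmd="${DPKG:-dpkg}"
    "$dpkg_cmd" --clear-avail || return 1          # Step One
    "$dpkg_cmd" --forget-old-unavail || return 1   # Step Two
    # Step Three only matters if you rely on a useful available file:
    if command -v sync-available >/dev/null 2>&1; then
        sync-available
    fi
}
```

Run it as root (the dpkg database is not writable otherwise).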

29 April 2009

Antti-Juhani Kaijanaho: Initramfs problems with the new kernel-package, and a solution

I've been using Manoj's new kernel-package for some weeks now, and used it to compile two kernels (a reconfigured 2.6.29.1 and the new 2.6.29.2). Both times I've had trouble with initrd. As the documentation says, kernel-package kernel packages no longer do their own initramfs generation. One must copy the example scripts at /usr/share/doc/kernel-package/examples/etc.kernel/postinst.d/initramfs and /usr/share/doc/kernel-package/examples/etc.kernel/postrm.d/initramfs to the appropriate subdirectories of /etc/kernel/. However, this is not enough. My /etc/kernel-img.conf file had the usual posthook and prehook lines calling update-grub. Unfortunately, those hooks are called before the postinst.d hooks, and so update-grub never saw my initramfs images. Fix? I removed those lines from /etc/kernel-img.conf and created a very simple postinst.d and postrm.d script:
#!/bin/sh
update-grub
I call the script zzz-grub-local, to ensure that it runs last.
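The hook-copying step described above can be sketched as a shell function. This is a hypothetical helper, not from the post: the source and destination directories default to the real paths mentioned above but are parameterised so the sketch can be tried in a scratch directory first.

```shell
# Hypothetical sketch of the hook installation described above.
install_kernel_hooks() {
    src="${1:-/usr/share/doc/kernel-package/examples/etc.kernel}"
    dest="${2:-/etc/kernel}"
    for d in postinst.d postrm.d; do
        mkdir -p "$dest/$d" || return 1
        # The example hooks drive initramfs generation for new kernels.
        install -m 755 "$src/$d/initramfs" "$dest/$d/initramfs" || return 1
    done
}
```

With no arguments it installs into /etc/kernel/, which requires root.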

15 March 2009

Antti-Juhani Kaijanaho: I have inhibitions

There is a particular issue about which I am incapable of expressing my opinion. I do have an opinion about it, and I have developed it quite a bit in my own head over the years. However, if I try to voice it, something in me tells me not to. There is another issue about which I am capable of expressing my opinion, but I choose not to. It is a highly controversial topic, and it is very likely that if I expressed my opinion about it, it would be interpreted as being another opinion, which I do not hold. I do not know whether to call that Wittgenstein's curse or not.

Antti-Juhani Kaijanaho: Some things to avoid when triaging other people s bugs

DO NOT send your query for more information only to nnn@bugs.debian.org. That address (almost) never reaches the submitter. (The correct address is nnn-submitter@bugs.debian.org or you can CC the submitter directly.) DO NOT close a bug just because your query "can you still reproduce it" has not been promptly answered. And, actually: DO NOT close a bug if you do not have the maintainer's permission to do so. You may, if you wish, state in the bug logs that you think the bug should be closed. This ends today's public service announcement. Thank you for your attention.

7 March 2009

Antti-Juhani Kaijanaho: Eric Flint on copyright and DRM

[Originally posted in June 2006; updated with new links several times, most recently in March 2009] Eric Flint: A Matter of Principle, Jim Baen's Universe 1 (1), 2006.
Eric Flint: Copyright: What Are the Proper Terms for the Debate?, Jim Baen's Universe 1 (2), 2006.
Eric Flint: Copyright: How Long Should It Be?, Jim Baen's Universe 1 (3), 2006.
Eric Flint: What is Fair Use, Jim Baen's Universe 1 (4), 2006.
Eric Flint: Lies, and More Lies, Jim Baen's Universe 1 (5), 2007.
Eric Flint: There Ain't No Such Thing as a Free Lunch, Jim Baen's Universe 1 (6), 2007.
Eric Flint: Books: The Opaque Market, Jim Baen's Universe 2 (1), 2007.
Eric Flint: Spillage: or, The Way Fair Use Works in Favor of Authors and Publishers, Jim Baen's Universe 2 (2), 2007.
Eric Flint: The Economics of Writing, Jim Baen's Universe 2 (3), 2007.
Eric Flint: The Pig-in-a-Poke Factor, Jim Baen's Universe 2 (4), 2007.
Eric Flint: Paper books are not going to be joining the dodo any time soon. If ever., Jim Baen's Universe 2 (5), 2008.
Eric Flint: A Matter of Symbiosis, Jim Baen's Universe 2 (6), 2008.
Eric Flint: The Nature of Transitions, Jim Baen's Universe 3 (1), 2008.
Eric Flint: Adventures with a Search Engine, Jim Baen's Universe 3 (2), 2008.
Eric Flint: The Problem is Legal Scarcity, not Illegal Greed, Jim Baen's Universe 3 (3), 2008.
Eric Flint: Foam and Froth and Mighty (Upside-down) Pyramids, Jim Baen's Universe 3 (4), 2009. Eric Flint is a fairly successful sf author. These columns explore the evils of Digital Restrictions Management (DRM, also known as Don't Read Me).

5 March 2009

Colin Watson: Bug triage, redux

I've been a bit surprised by the strong positive response to my previous post. People generally seemed to think it was quite non-ranty; maybe I should clean the rust off my flamethrower. :-) My hope was that I'd be able to persuade people to change some practices, so I guess that's a good thing. Of course, there are many very smart people doing bug triage very well, and I don't want to impugn their fine work. Like its medical namesake, bug triage is a skilled discipline. While it's often repetitive, and there are lots of people showing up with similar symptoms, a triage nurse can really make a difference by spotting urgent cases, cleaning up some of the initial blood, and referring the patient quickly to a doctor for attention. Or, if a pattern of cases suddenly appears, a triage nurse might be able to warn of an incipient epidemic. [Note: I have no medical experience, so please excuse me if I'm talking crap here. :-)] The bug triagers who do this well are an absolute godsend; especially when they respond to repetitive tasks with tremendously useful pieces of automation like bughelper. The cases I have trouble with are more like somebody showing up untrained, going through everyone in the waiting room, and telling each of them that they just need to go home, get some rest, and stop complaining so much. Sometimes of course they'll be right, but without taking the time to understand the problem they're probably going to do more harm than good. Ian Jackson reminded me that it's worth mentioning the purpose of bug reports on free software: namely, to improve the software. The GNU Project has some advice to maintainers on this. I think sometimes we stray into regarding bug reports more like support tickets. In that case it would be appropriate to focus on resolving each case as quickly as possible, if necessary by means of a workaround rather than by a software change, and only bother the developers when necessary. This is the wrong way to look at bug reports, though. 
The reason that we needed to set up a bug triage community in Ubuntu was that we had a relatively low developer-to-package ratio and a very high user-to-developer ratio, and we were getting a lot of bug reports that weren't fleshed out enough for a developer to investigate them without spending a lot of time in back-and-forth with the reporter, so a number of people volunteered to take care of the initial back-and-forth so that good clear bug reports could be handed over to developers. This is all well and good, and indeed I encouraged it because I was personally finding myself unable to keep up with incoming bugs and actually fix anything at the same time. Somewhere along the way, though, some people got the impression that what we wanted was a first-line support firewall to try to defend developers from users, which of course naturally leads to ideas such as closing wishlist bugs containing ideas because obviously those important developers wouldn't want to be bothered by them, and closing old bugs because clearly they must just be getting in developers' way. Let me be clear about this now: I absolutely appreciate help getting bug reports into a state where I can deal with them efficiently, but I do not want to be defended from my users! I don't have a basis from which to state that all developers feel the same way, but my guess is that most do. Antti-Juhani Kaijanaho said he'd experienced most of these problems in Debian. I hadn't actually intended my post to go to Planet Debian - I'd forgotten that the "ubuntu" category on my blog goes there too, which generally I see as a feature, but if I'd remembered that I would have been a little clearer that I was talking about Ubuntu bug triage. If I had been talking about Debian bug triage I'd probably have emphasised different things. Nevertheless, it's interesting that at least one Debian (and non-Ubuntu) developer had experienced similar problems. 
Justin Dugger mentions a practice of marking duplicate bugs invalid that he has problems with. I agree that this is suboptimal and try not to do it myself. That said, this is not something I object to to the same extent. Given that the purpose of bugs is to improve the software, the real goal is to be able to spend more time fixing bugs, not to get bugs into the ideal state when the underlying problem has already been solved. If it's a choice between somebody having to spend time tracking down the exact duplicate bug number versus fixing another bug, I know which I'd take. Obviously, when doing this, it's worth apologising that you weren't able to find the original bug number, and explaining what the user can do if they believe that you're mistaken (particularly if it's a bug that's believed to be fixed); the stock text people often use for this doesn't seem informative enough to me. Sebastien Bacher commented that preferred bug triage practices differ among teams: for instance, the Ubuntu desktop team deals with packages that are very much to the forefront of users' attention and so get a lot of duplicate bugs. Indeed - and bug triagers who are working closely with the desktop team on this are almost certainly doing things the way the developers on the desktop team prefer, so I have no problem with that. The best advice I can give bug triagers is that their ultimate aim is to help developers, and so they should figure out which developers they need to work with and go and talk to them! That way, rather than duplicating work or being counterproductive, they can tailor their work to be most effective. Everybody wins.
