Search Results: "epg"

16 October 2022

Vincent Fourmond: Tutorial: analysis of multiwavelength fast kinetics data

The purpose of this post is to demonstrate a first approach to the analysis of multiwavelength kinetic data, like those obtained using stopped-flow experiments. To practice, we will use data that were acquired during the stopped-flow practicals of the MetBio summer school of FrenchBIC. During the practicals, the students monitored the reaction of myoglobin (in its Fe(III) state) with azide, which yields a fast and strong change in the absorbance spectrum of the protein, monitored using a diode array. The data is publicly available on Zenodo.

Aims of this tutorial The purpose of this tutorial is to teach you to use the free software QSoas to run a simple, multiwavelength exponential fit on the data, and to look at the results. This is not a kinetics lecture, so it will not go into depth about the use of the exponential fit and its meaning.

Getting started: loading the file First, make sure you have a working version of QSoas; you can download it (for free) there. Then download the data files from Zenodo. We will work only on the data file Azide-1.25mm_001.dat, but of course the purpose of this tutorial is to enable you to work on all of them. The data files contain the time evolution of the absorbance for all wavelengths, in a matrix format in which each row corresponds to a time point and each column to a wavelength. Start QSoas, and launch the command:
QSoas> load /comments='"'
Then, choose the Azide-1.25mm_001.dat data file. This should bring up a horizontal red line at the bottom of the data display, with X values between about 0 and 2.5. If you zoom in on the red line with the mouse wheel, you'll realize it is data. The /comments='"' part is very important, since it allows the extraction of the wavelengths from the data. We will look at what it means another day. At this stage, you can look at the loaded data using the command:
QSoas> edit
You should have a window looking like this:
The rows each correspond to a data point displayed on the window below. The first column corresponds to the X values, the second to the Y values, and all the other ones are extra Y columns (they are not displayed by default). What is especially interesting is the first row, which contains a nan as the X value and what is obviously the wavelength for all the Y values. To tell QSoas that it should take this line as the wavelength (which will be the perpendicular coordinate, the coordinate of the other direction of the matrix), first close the edit window and run:
QSoas> set-perp /from-row=0
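Outside QSoas, the same matrix layout can be handled with a short Python sketch (numpy assumed; the miniature data below merely imitates the layout of Azide-1.25mm_001.dat, with the first row holding the wavelengths):

```python
import io
import numpy as np

# Minimal stand-in for the data file: the first row has a missing X
# value (nan) followed by the wavelengths; later rows are time points.
raw = """nan 400.0 450.0 500.0
0.0 0.10 0.20 0.30
0.5 0.08 0.15 0.25
1.0 0.07 0.12 0.22"""

data = np.loadtxt(io.StringIO(raw))
wavelengths = data[0, 1:]   # the "perpendicular coordinate"
times = data[1:, 0]         # X values (time points)
absorbance = data[1:, 1:]   # one column per wavelength
```

This reproduces by hand what `load /comments='"'` plus `set-perp /from-row=0` do inside QSoas: the first row is peeled off as the perpendicular coordinate, and the remaining block is the time-by-wavelength matrix.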
Splitting and fitting Now, we have a single dataset containing a lot of Y columns. We want to fit all of them simultaneously with a (mono) exponential fit. For that, we first need to split the big matrix into a series of X,Y datasets (because fitting only works on the first Y). This is possible by running:
QSoas> expand /style=red-to-blue /flags=kinetics
Your screen should now look like this:
You're looking at the kinetics at all wavelengths at the same time (this may take some time to display on your computer; it is, after all, a rather large number of data points). The /style=red-to-blue option is not strictly necessary, but it gives the red-to-blue color gradient which makes things easier to look at (and cooler!). The /flags=kinetics option is there to attach a label (a flag) to the newly created datasets so we can easily manipulate all of them at the same time. Then it's time to fit, with the following command:
QSoas> mfit-exponential-decay flagged:kinetics
This should bring up a new window. After resizing it, you should have something that looks like this:
The bottom of the fit window is taken up by the parameters, each with two checkboxes on the right to set them fixed (i.e. not determined by the fitting mechanism) and/or global (i.e. with a single value for all the datasets, here all the wavelengths). The top shows the current dataset along with the corresponding fit (in green) and, below, the residuals. You can change the dataset by clicking on the horizontal arrows or using Ctrl+PgUp or Ctrl+PgDown (keep it held to scan fast). See the Z = 728.15 indicator, showing that QSoas has recognized that the currently displayed dataset corresponds to the wavelength 728.15. The equation fitted to the data is: $$y(x) = A_\infty + A_1 \times \exp\left(-(x - x_0)/\tau_1\right)$$ In this case, while the \(A_1\) and \(A_\infty\) parameters clearly depend on the wavelength, the time constant of the evolution should be independent of wavelength (the process happens at a certain rate regardless of the wavelength at which we observe it), so the \(\tau_1\) parameter should be common to all the datasets/wavelengths. Just click on the global checkbox at the right of the tau_1 parameter, make sure it is checked, and hit the Fit button... The fit should not take long (less than a minute), and then you end up with the results of the fit: all the parameters. The best way to look at the non-global parameters like \(A_1\) and \(A_\infty\) is to use the Show Parameters item from the Parameters menu. Using it and clicking on A_inf should give you a display like this one:
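The global fit just described (a single shared \(\tau_1\), per-wavelength amplitudes) can be sketched outside QSoas with scipy; the synthetic data and parameter values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0, 2.5, 50)
true_tau = 0.4
# Three "wavelengths", each with its own A_inf and A_1 but a common tau.
A_inf = np.array([0.1, 0.3, 0.2])
A_1 = np.array([0.5, -0.2, 0.4])
y = A_inf[:, None] + A_1[:, None] * np.exp(-t / true_tau)
y += 0.005 * rng.standard_normal(y.shape)

def residuals(p):
    # p = [tau, A_inf (x3), A_1 (x3)]: tau is "global", the rest are
    # per-dataset, mirroring the global checkbox in the fit window.
    tau, ainf, a1 = p[0], p[1:4], p[4:7]
    model = ainf[:, None] + a1[:, None] * np.exp(-t / tau)
    return (model - y).ravel()

p0 = np.concatenate([[1.0], np.zeros(3), np.ones(3)])
fit = least_squares(residuals, p0)
tau_fit = fit.x[0]
```

The key point is that all datasets contribute to one residual vector, so a single \(\tau_1\) is refined against every wavelength at once, exactly the effect of checking the global box in QSoas.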
The A_inf parameter corresponds to the spectrum at infinite time (of azide-bound heme), while the A_1 parameter corresponds to the difference spectrum between the initial (azide-free) and final (azide-bound) states. Now that the fit is finished, you can save the parameters, to reload them in a later fit, using the Parameters/Save menu item, or export them in a form more suitable for plotting using Parameters/Export (although QSoas can also display the parameters saved using Save). This concludes this first approach to fitting the data.

How to read the code above All the lines starting with QSoas> in the code areas above are meant to be typed into the QSoas command line (at the bottom of the window), and started by pressing Enter at the end. You must remove the QSoas> bit. The other lines (when applicable) show you the response of QSoas, in the terminal just above the command line. You may want to play with the QSoas tutorial to learn more about how to interact with QSoas.

About QSoas QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. Current version is 3.1. You can freely (and at no cost) download its source code or precompiled versions for MacOS and Windows there. Alternatively, you can clone from the GitHub repository.
Contact: find my email address there, or contact me on LinkedIn.

25 May 2021

Shirish Agarwal: Pandemic, Toolkit and India

Pandemic Situation in India. I don't know where I should start. This is probably a good start. I actually would recommend Indiacable, as they do attempt to share some of the things happening in India from day to day, but still there is a lot that they just can't cover; nobody can cover. There were two reports which kind of shook me all inside. One, sadly, came from the UK publication Independent, probably as no Indian publication would dare publish it. The other from Rural India. I have been privileged in many ways, including friends who have asked me if I need any financial help. But seeing reports like the above, these people need more help and guidance than I. While I'm never one to say give to foundations, if some people do want to help people from Maharashtra, then moneylifefoundation could be a good place where they could donate. FWIW, they usually use the foundation to help savers and investors be safe and to help in getting money back when taken by companies with dubious intentions. That is their drive. Two articles show their bent. The first one is about the Algo scam, which I have written about previously in this blog. Interestingly, when I talk about this scam, all Modi supporters are silent. The other one does give some idea as to why the Govt. is indifferent. That is going to be a heavy cross for all relatives to bear. There has been a lot that has been happening. Now, instead of being limited to cities, Covid has gone hinterland in a big way. One could also ask Praveen, as he probably knows what would be good for Kerala and surrounding areas. The biggest change, however, has been that India is now battling not just the pandemic but also mucormycosis, also known as black fungus, and its deadlier cousin the white fungus. Mucormycosis spread largely due to the ill-advised claim that applying cow dung gives protection against Corona. And many applied it due to faith. And people who know science do know that it in fact carries bacteria.
Sadly, those of us who are and were more interested in law, computer science, etc. have now also had to keep on top of what is happening in the medical field. It isn't that I hate it, but it has a lot of costs. From what I could gather on various social media and elsewhere, a single injection of the anti-fungal for the above costs INR 3k, it needs to be given 5 times a day, and the course has to run for three weeks. So even relatively wealthy people can and will become poor in no time. No wonder thousands went to the UK, US, Dubai or wherever they could find safe harbor from the pandemic, with no plans of coming back soon. There was also the whole bit about FBS, or Fetal Bovine Serum. India ordered millions of blood serum products from abroad and continues to. This was quickly shut down as news on social media. Apparently, it is only the Indian cow which is worthy of reverence; all other cows and their children are fair game according to those in power. Of course, that discussion was quickly shut down, as was the discussion about the IGP (Indian Genome Project). People over the years had asked me why India never participated in the HGP (Human Genome Project). I actually had no answer for that. Then in 2020, the idea of an IGP was put up, and it was quickly shot down, as the results could damage a political party's image. In fact, a note to people who want to join the Indian civil services tells the reason exactly. While many countries in the world are hypocrites, including the U.S., none can take the place that India has made for itself in that field.

The Online experience The vaccination process has been moved online and has led to severe heartburn and trouble for many, including many memes. For example:

Daily work, get up, have a bath, see if you got a slot on the app, sleep.
People trying desperately to get a slot, taken from the Hindi movie Dilwale Dulhania Le Jayenge.
Just to explain what is happening, one has to go to the website of cowin. Sharing a screenshot of the same.
Cowin app screenshot
I have deliberately taken a screenshot of the cowin app in U.P., which is one of the states where the ruling party, the BJP, is in power. I haven't taken my own state for the simple reason that, even if a slot is open, it is of no use, as there are no vaccines. As has been shared in India Cable as well as in many newspapers, it is the Central Govt. which holds the strings for the vaccines. Maharashtra did put up an international tender, but to no effect. All vaccine manufacturers want only the Central Govt. as purchaser, for multiple reasons. And the GOI is saying it has no money, even though recently it got loans as well as a dividend from the RBI to the tune of 99k crore. What all that money is for, we have no clue. Coming back, though, to the issue at hand: the cowin app exposes an open API. While normally people like us should be, and are, happy when an API is open, here it has made those who understand how to use git, compile, etc. better placed than others. A copy of the public repo showing how you can do the same can be found on GitHub. Now, obviously, for people like me and many others, it has ethical issues.

Kiran's Interview in the Times of India (TOI) There isn't much to say, apart from that I haven't used it. I just didn't want to. It just is unethical. Hopefully, in the coming days the GOI does something better. That is the only thing we are surviving on: hope.

The Toolkit saga A few days before, the GOI shared a toolkit apparently made by the Congress to defame the party in power. That toolkit was shared before the press, and Altnews did the investigation and promptly shredded the claims. The Congress promptly filed an FIR in Chhattisgarh, where it is in power. The gentleman who made the claims, Mr. Sambit Patra, refused to appear before the police without evidence, citing personal reasons and asking for 1 week to appear before them. Apart from Altnews, which did a great job, sadly many people didn't even know that there is something called WYSIWYG. I had to explain that so many industries, whether politics, the creative industries, legal, the ad industries, medical transcription, or imaging, all use this, and all the participants use the same version of the software. The reason being that in most industries there is a huge loss and an issue of legal liability if something untoward happens. For e.g., if medical transcription done in India is wrong (although his or her work will be checked by a superior in the West), but for whatever reason is not caught, and a wrong diagnosis is made (due to a wrong color or something), then a patient could die and the firm that does that work could face heavy penalties which could be the death of them. There is another myth that the Congress has unlimited or huge wealth. I asked, if that were the case, why didn't they shift to Macs? Of course, none have answers on this one. There is another reason why they didn't want to appear. The Rona Wilson investigation by Arsenal Experts has also made them cautious. Previously, they had a free run. Nowadays, software forensic tools are available to one and all. For e.g., Debian itself has a good variety of tools for the same. I remember Vipin's sharing a few years back. For those who want to start, just install the apps and try figuring them out. Expertise in using the tools takes years though, as you have to use the tools day in, day out.
Update 25/05/2021 Apparently because Twitter marked and showcased a few tweets as "Manipulated Media", those in Govt. are and were dead against it. So they conducted a raid on Twitter India's headquarters, knowing fully well that there would be nobody there except security. The moment I read this, my mind went to the whole "fruit of the poisonous tree" legal doctrine. Sadly though, India doesn't recognize it and in fact still believes, as in the pre-colonial era, that evidence, however collected, is good. A good explanation of the same can be found here. There are some exceptions to the rule, but they are drawn so finely that more often than not they can't be used in a court of law in India. Although a good RTI was shared by Mr. Saket Gokhale on the same issue, which does raise some interesting points:
Twitter India Raid, Saket Gokhale RTI 1
Saket Gokhale RTI query , Twitter India Raid 2
FWIW, Saket has been successful in getting his prayers heard, either as answers to RTI queries or by following them up in the various High Courts of India. Of course, those who are in the ruling party ridicule him but are unable to find faults in his application of logic. And quite a few times, I have learned from his applications, as well as the nuances of whatever is there in law, a judgment, or a guideline which he invokes in his prayer. For e.g., the Lalitha Kumari guidelines which the gentleman has shared in his prayer can be found here. Hence now, it would be up to the Delhi Police Cell to prove their case in response to the RTI. He has also trapped them, as he has shared that they can't give the excuses/exemptions which they have tried before. As I had shared earlier, High Courts in India have woken up, whether in Delhi, Mumbai, Aurangabad, Madhya Pradesh, Uttar Pradesh, Odisha or Kerala. Just today, i.e. on 25th May 2021, Justice Bela Trivedi and Justice Kalra asked how come all the hospitals don't have NOCs from the Fire Department. They also questioned the ASG (Assistant Solicitor General) as to how BU (Building Use) certificates have been granted when almost all of the 400 hospitals are in residential areas. To which the ASG replied, it is the same state in almost 4000 schools as well as 6000-odd factories in Ahmedabad alone, leaving aside the rest of the district and state. And this when strict instructions were passed last year. They chose to do nothing, sadly. I will share a link on this when Bar and Bench publishes one. The Hindu also shared the whole raid-on-Twitter saga.

Conclusion In conclusion, I sincerely do not know where we are headed. The only thing I know is that we cannot expect things to be better before year-end, and maybe even after that. It all depends on the vaccines and their availability. After that ruralindia article, I had to see quite a few movies and whatnot just to get it out of my head. And this is apart from the 1600-odd teachers and workers who have died on U.P. poll duty. What a loss, not just to the family members of the victims, but to a whole generation of school children who would not be able to get quality teaching and would be deprived of education. What will be their future, God only knows. The only good Bollywood movie which I saw was Ramprasad Ki Tehrvi. The movie was an accurate representation of most families in and around me. There was a movie called Sansar (1987) which showed the breakup of the joint family into nuclear families. This movie could very well have been a continuation of the same. Even Marathi movies, which at one time were very progressive, have gone back to the same boy-girl love story routine. Sameer, though released in late 2020, I was able to see only recently. Vakeel Saab was an OK copy of Pink. I loved Sameer as, unlike Salman Khan films, it showed pretty much the authentic human struggle of a person who goes to the Middle East without any qualifications and works as a laborer, and the trials he goes through. Somehow, Malayalam movies have a knack for showing truth without much of a budget. Most of the Indian web series didn't make an impact. I think many of them were just going through the motions; it seems everybody is concerned with the well-being of their near and dear ones. There was also this (Trigger Warning: This story discusses organized campaigns glorifying and advocating sexual violence against Muslim women.) Hoping people somehow make it to the other side of the pandemic.

22 October 2020

Vincent Fourmond: QSoas tips and tricks: generating smooth curves from a fit

Often, one would want to generate smooth data from a fit over a small number of data points. As an example, take the data in the following file. It contains (fake) experimental data points that obey Michaelis-Menten kinetics: $$v = \frac{v_m}{1 + K_m/s}$$ in which \(v\) is the measured rate (the y values of the data), \(s\) the concentration of substrate (the x values of the data), \(v_m\) the maximal rate and \(K_m\) the Michaelis constant. To fit this equation to the data, just use the fit-arb fit:
QSoas> l michaelis.dat
QSoas> fit-arb vm/(1+km/x)
After running the fit, the window should look like this:
Now, with the fit, we have reasonable values for \(v_m\) (vm) and \(K_m\) (km). But, for publication, one would want to generate a "smooth" curve going through the points... Saving the curve from "Data.../Save all" doesn't help, since the data has as many points as the original and looks very "jaggy" (like in the screenshot above)... So one needs a curve with more data points. Maybe the most natural solution is simply to use generate-buffer together with apply-formula, using the formula and the values of km and vm obtained from the fit, like:
QSoas> generate-buffer 0 20
QSoas> apply-formula y=3.51742/(1+3.69767/x)
By default, generate-buffer generates 1000 evenly spaced x values, but you can change their number using the /samples option. The two above commands can be combined into a single call to generate-buffer:
QSoas> generate-buffer 0 20 3.51742/(1+3.69767/x)
This works, but it is quite cumbersome, and it is not going to work well for complex formulas or for the results of differential equations or kinetic systems... This is why each fit- command has a corresponding sim- command that computes the result of the fit using a "saved parameters" file (here, michaelis.params, which you can also save yourself) and buffers as "models" for the X values:
QSoas> generate-buffer 0 20
QSoas> sim-arb vm/(1+km/x) michaelis.params 0
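Outside QSoas, the same fit-then-resample workflow can be sketched in Python with scipy; the data values below are invented, standing in for michaelis.dat:

```python
import numpy as np
from scipy.optimize import curve_fit

def mm(s, vm, km):
    # Michaelis-Menten rate law: v = vm / (1 + km/s)
    return vm / (1 + km / s)

# A handful of sparse "experimental" points (values invented here).
s = np.array([0.5, 1, 2, 4, 8, 16])
v = mm(s, 3.5, 3.7) + np.array([0.02, -0.03, 0.01, 0.0, -0.01, 0.02])

(vm, km), _ = curve_fit(mm, s, v, p0=(1, 1))

# Equivalent of generate-buffer 0 20 followed by sim-arb: evaluate the
# fitted formula on a dense grid (starting just above 0, since km/s
# diverges at s = 0).
s_smooth = np.linspace(0.01, 20, 1000)
v_smooth = mm(s_smooth, vm, km)
```

The dense (s_smooth, v_smooth) curve is the "smooth" version suitable for plotting, playing the role of the buffer produced by sim-arb.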
This strategy works with every single fit! As an added benefit, you even get the fit parameters as meta-data, which are displayed by the show command:
QSoas> show 0
Dataset generated_fit_arb.dat: 2 cols, 1000 rows, 1 segments, #0
Flags: 
Meta-data:	commands =	 sim-arb vm/(1+km/x) michaelis.params 0	fit =	 arb (formula: vm/(1+km/x))	km =	 3.69767
	vm =	 3.5174
They also get saved as comments if you save the data. Important note: the sim-arb command will be available only in the 3.0 release, although you can already enjoy it if you use the GitHub version.

About QSoas QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. Current version is 2.2. You can download its source code and compile it yourself, or buy precompiled versions for MacOS and Windows there.

21 March 2016

Lunar: Reproducible builds: week 47 in Stretch cycle

What happened in the reproducible builds effort between March 13th and March 19th 2016:

Toolchain fixes
  • Petter Reinholdtsen uploaded naturaldocs/1.51-1.1 which makes the output reproducible. Original patch by Chris Lamb.
  • Damyan Ivanov uploaded libpdf-api2-perl/2.025-2 which will make internal font ID reproducible.
  • Christian Hofstaedtler uploaded ruby2.3/2.3.0-5 which sets gzip embedded mtime field to fixed value for rdoc-generated compressed javascript data.

Packages fixed The following packages have become reproducible due to changes in their build dependencies: diction, doublecmd, ruby-hiredis, vdr-plugin-epgsearch. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them: Patches submitted which have not made their way to the archive yet:
  • #818128 on nethack by Reiner Herrmann: implement support for SOURCE_DATE_EPOCH, set LC_ALL to C, and ensure deterministic build order when running parallel builds.
  • #818111 on debian-keyring by Satyam Zode: fix the order of files in md5sums.
  • #818067 on ncurses by Niels Thykier: strip trailing whitespaces introduced when using dash as system shell.
  • #818230 on aircrack-ng by Reiner Herrmann: build assembly code as a separate .o file.
  • #818419 on mutt by Daniel Shahaf: use C locale when listing files to be put in README.Patches.
  • #818430 on ruby-coveralls by Dhole: ensure UTC is used as the timezone when generating the documentation.
  • #818686 on littlewizard by Reiner Herrmann: use the C locale in the script for iterating over the files.
  • #818704 on strigi by Reiner Herrmann: sort keys when traversing hashes in makecode.pl.
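Several of the patches above (e.g. #818128 on nethack) implement support for SOURCE_DATE_EPOCH, the convention used throughout these reports for clamping embedded timestamps. A minimal sketch of the idea (the helper name is ours, not from any specific patch):

```python
import os
import time

def build_timestamp():
    """Return the timestamp to embed in build output.

    If SOURCE_DATE_EPOCH is set, the current time is clamped to it, so
    that rebuilding the same source later produces identical output;
    otherwise the current time is used as-is.
    """
    now = int(time.time())
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    if sde is not None:
        return min(now, int(sde))
    return now
```

Any timestamp a build tool would normally take from the clock (man page dates, gzip mtime fields, generated documentation) goes through a clamp like this, which is what makes the packages reproducible.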

Package reviews 44 reviews have been removed, 40 added and 5 updated in the previous week. Chris Lamb has reported 16 FTBFS.

5 March 2016

Lunar: Reproducible builds: week 44 in Stretch cycle

What happened in the reproducible builds effort between February 21st and February 27th:

Toolchain fixes Didier Raboud uploaded pyppd/1.0.2-4 which makes PPD generation deterministic. Emmanuel Bourg uploaded plexus-maven-plugin/1.3.8-10 which sorts the components in the components.xml files generated by the plugin. Guillem Jover has implemented stable ordering for members of the control archives in .debs. Chris Lamb submitted another patch to improve reproducibility of files generated by cython.
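Stable member ordering, as in Guillem Jover's change to .deb control archives, matters because filesystem directory order is not deterministic. A minimal Python sketch of the technique (the function name is ours):

```python
import io
import tarfile

def deterministic_tar(members):
    """Pack (name, bytes) pairs into a tar archive with sorted member
    order and fixed metadata, so the same input always yields the
    same bytes regardless of the order members were collected in."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in sorted(members):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = 0          # fixed timestamp
            info.uid = info.gid = 0 # fixed ownership
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()
```

Sorting plus fixed metadata is the whole trick: two builds that produce the same file contents then produce byte-identical archives.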

Packages fixed The following packages have become reproducible due to changes in their build dependencies: dctrl-tools, debian-edu, dvdwizard, dymo-cups-drivers, ekg2, epson-inkjet-printer-escpr, expeyes, fades, foomatic-db, galternatives, gnuradio, gpodder, gutenprint, icewm, invesalius, jodconverter-cli, latex-mk, libiio, libimobiledevice, libmcrypt, libopendbx, lives, lttnganalyses, m2300w, microdc2, navit, po4a, ptouch-driver, pxljr, tasksel, tilda, vdr-plugin-infosatepg, xaos. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them:

tests.reproducible-builds.org The reproducibility tests for Debian now vary the provider of /bin/sh between bash and dash. (Reiner Herrmann)

diffoscope development diffoscope version 50 was released on February 27th. It adds a new comparator for PostScript files, makes the directory tests pass on slower hardware, and line ordering variations in .deb md5sums files will not be hidden anymore. Version 51 uploaded the next day re-added test data missing from the previous tarball. diffoscope is looking for a new primary maintainer.

Package reviews 87 reviews have been removed, 61 added and 43 updated in the previous week. New issues: captures_shell_variable_in_autofoo_script, varying_ordering_in_data_tar_gz_or_control_tar_gz. 30 new FTBFS have been reported by Chris Lamb, Antonio Terceiro, Aaron M. Ucko, Michael Tautschnig, and Tobias Frost.

Misc. The release team reported on their discussion about the topic of rebuilding all of Stretch to make it self-contained (with respect to reproducibility). Christian Boltz is hoping someone could talk about reproducible builds at the openSUSE conference happening June 22nd-26th in Nürnberg, Germany.

21 February 2016

Lunar: Reproducible builds: week 43 in Stretch cycle

What happened in the reproducible builds effort between February 14th and February 20th 2016:

Toolchain fixes Yaroslav Halchenko uploaded cython/0.23.4+git4-g7eed8d8-1 which makes its output deterministic. Original patch by Chris Lamb. Didier Raboud uploaded pyppd/1.0.2-3 to experimental, which now serializes PPDs deterministically. Lunar submitted two patches for lcms to add a way for clients to set the creation date/time in profile headers and to initialize all bytes when writing named colors.

Packages fixed The following packages have become reproducible due to changes in their build dependencies: dbconfig-common, dctrl-tools, dvdwizard, ekg2, expeyes, galternatives, gpodder, icewm, latex-mk, libiio, lives, navit, po4a, tasksel, tilda, vdr-plugin-infosatepg, xaos. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them: Unknown status:
  • tomcat7/7.0.68-1 by Emmanuel Bourg (test suite fails in test environment).
Patches submitted which have not made their way to the archive yet:
  • #814840 on tor by Petter Reinholdtsen: use the UTC timezone when calling asciidoc.
  • #815082 on arachne-pnr by Dhole: use the C locale to format the changelog date.
  • #815192 on manpages-de by Reiner Herrmann: tell grep to always treat the input as text so that it works with non-UTF-8 locales.
  • #815193 on razorqt by Reiner Herrmann: tell grep to always treat the input as text so that it works with non-UTF-8 locales.
  • #815250 on jacal by Reiner Herrmann: use the C locale to format the build date.
  • #815252 on colord by Lunar: remove extra timestamps when generating CMF and spectra and implement support for SOURCE_DATE_EPOCH.
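Several of the patches above (#814840, #815082, #815250) switch date formatting to the C locale and UTC, so the output no longer depends on the build machine's settings. A sketch of a locale-independent date formatter (the function name is ours):

```python
import email.utils

def changelog_date(epoch):
    """Format a timestamp in UTC using the fixed RFC 2822 month and
    day names, independent of the build machine's locale, so two
    builds of the same source emit identical date strings."""
    return email.utils.formatdate(epoch, usegmt=True)
```

Calling a locale-sensitive formatter (like the shell's `date` without LC_ALL=C) is exactly the kind of hidden nondeterminism these patches remove.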

reproducible.debian.net Two new package sets have been added: freedombox and freedombox_build-depends. (h01ger)

diffoscope development diffoscope version 49 was released on February 17th. It continues to improve handling of debug symbols for ELF files. Their content will now be compared separately to make them more readable. The search for matching debug packages is more efficient by looking only for .deb files in the same parent directory. Alongside more bug fixes, support for ICC profiles has been added, and libarchive is now also used to read metadata for ar archives.

strip-nondeterminism development Reiner Herrmann added support to normalize Gettext .mo files.

Package reviews 170 reviews have been removed, 172 added and 54 updated in the previous week. 34 new FTBFS bugs have been opened by Chris Lamb, h01ger and Reiner Herrmann. New issues added this week: lxqt_translate_desktop_binary_file_matched_under_certain_locales, timestamps_in_manpages_generated_by_autogen. Improvements to the prebuilder script: avoid ccache, skip disorderfs hook if device nodes cannot be created, compatibility with grsec trusted path execution (Reiner Herrmann), code cleanup (Esa Peuha).

Misc. Steven Chamberlain highlighted reproducibility problems due to differences in how Linux and FreeBSD handle permissions for symlinks. Some possible ways forward have been discussed on the reproducible-builds mailing list. Bernhard M. Wiedemann reported on some reproducibility tests made on OpenSuse mentioning the growing support for SOURCE_DATE_EPOCH. If you are eligible for Outreachy or Google Summer of Code, consider spending the summer working on reproducible builds!

26 July 2015

Lunar: Reproducible builds: week 13 in Stretch cycle

What happened in the reproducible builds effort this week: Toolchain fixes akira uploaded a new version of doxygen in the experimental reproducible repository, incorporating an upstream patch for SOURCE_DATE_EPOCH and now producing timezone-independent timestamps. Dhole updated Peter De Wachter's patch on ghostscript to use SOURCE_DATE_EPOCH and UTC as the timezone. A modified package is now being experimented with. Packages fixed The following 14 packages became reproducible due to changes in their build dependencies: bino, cfengine2, fwknop, gnome-software, jnr-constants, libextractor, libgtop2, maven-compiler-plugin, mk-configure, nanoc, octave-splines, octave-symbolic, riece, vdr-plugin-infosatepg. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which have not made their way to the archive yet: reproducible.debian.net Packages identified as failing to build from source with no bugs filed and older than 10 days are scheduled more often now (except in experimental). (h01ger) Package reviews 178 obsolete reviews have been removed, 59 added and 122 updated this week. New issue identified this week: random_order_in_ruby_rdoc_indices. 18 new bugs for packages failing to build from source have been reported by Chris West (Faux) and h01ger.

Lunar: Reproducible builds: week 12 in Stretch cycle

What happened in the reproducible builds effort this week: Toolchain fixes Eric Dorlan uploaded automake-1.15/1:1.15-2 which makes the output of mdate-sh deterministic. Original patch by Reiner Herrmann. Kenneth J. Pronovici uploaded epydoc/3.0.1+dfsg-8 which now honors SOURCE_DATE_EPOCH. Original patch by Reiner Herrmann. Chris Lamb submitted a patch to dh-python to make the order of the generated maintainer scripts deterministic. Chris also offered a fix for a source of non-determinism in dpkg-shlibdeps when packages have alternative dependencies. Dhole provided a patch to add support for SOURCE_DATE_EPOCH to gettext. Packages fixed The following 78 packages became reproducible in our setup due to changes in their build dependencies: chemical-mime-data, clojure-contrib, cobertura-maven-plugin, cpm, davical, debian-security-support, dfc, diction, dvdwizard, galternatives, gentlyweb-utils, gifticlib, gmtkbabel, gnuplot-mode, gplanarity, gpodder, gtg-trace, gyoto, highlight.js, htp, ibus-table, impressive, jags, jansi-native, jnr-constants, jthread, jwm, khronos-api, latex-coffee-stains, latex-make, latex2rtf, latexdiff, libcrcutil, libdc0, libdc1394-22, libidn2-0, libint, libjava-jdbc-clojure, libkryo-java, libphone-ui-shr, libpicocontainer-java, libraw1394, librostlab-blast, librostlab, libshevek, libstxxl, libtools-logging-clojure, libtools-macro-clojure, litl, londonlaw, ltsp, macsyfinder, mapnik, maven-compiler-plugin, mc, microdc2, miniupnpd, monajat, navit, pdmenu, pirl, plm, scikit-learn, snp-sites, sra-sdk, sunpinyin, tilda, vdr-plugin-dvd, vdr-plugin-epgsearch, vdr-plugin-remote, vdr-plugin-spider, vdr-plugin-streamdev, vdr-plugin-sudoku, vdr-plugin-xineliboutput, veromix, voxbo, xaos, xbae. 
The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which have not made their way to the archive yet: reproducible.debian.net The statistics on the main page of reproducible.debian.net are now updated every five minutes. A random unreviewed package is suggested in the look at a package form on every build. (h01ger) A new package set based on the Core Internet Infrastructure census has been added. (h01ger) Testing of FreeBSD has started, though no results yet. More details have been posted to the freebsd-hackers mailing list. The build is run on a new virtual machine running FreeBSD 10.1 with 3 cores and 6 GB of RAM, also sponsored by Profitbricks. strip-nondeterminism development Andrew Ayer released version 0.009 of strip-nondeterminism. The new version will strip locales from Javadoc, include the name of files causing errors, and ignore unhandled (but rare) zip64 archives. debbindiff development Lunar continued his major refactoring to enhance code reuse and pave the way to fuzzy-matching and parallel processing. Most file comparators have now been converted to the new class hierarchy. In order to support more archive formats, work has started on packaging Python bindings for libarchive. While getting support for more archive formats with a common interface is very nice, libarchive is a stream-oriented library and might have bad performance with how debbindiff currently works. Time will tell if better solutions need to be found. Documentation update Lunar started a Reproducible builds HOWTO intended to explain the different aspects of making software build reproducibly to the different audiences that might have to get involved, like software authors, producers of binary packages, and distributors. Package reviews 17 obsolete reviews have been removed, 212 added and 46 updated this week. 
15 new bugs for packages failing to build from sources have been reported by Chris West (Faux), and Mattia Rizzolo. Presentations Lunar presented Debian efforts and some recipes on making software build reproducibly at Libre Software Meeting 2015. Slides and a video recording are available. Misc. h01ger, dkg, and Lunar attended a Core Infrastructure Initiative meeting. The progress and tools made for the Debian efforts were shown. Several discussions also helped to get a better understanding of the needs of other free software projects regarding reproducible builds. The idea of a global append-only log, similar to the logs used for Certificate Transparency, came up on multiple occasions. Using such append-only logs for keeping records of sources and build results has gotten the name Binary Transparency Logs. They would at least help identifying a compromised software signing key. Whether the benefits of using such logs justify the costs needs more research.

24 November 2013

Petter Reinholdtsen: New chrpath release 0.15

After many years' break from the package and a vain hope that development would be continued by someone else, I finally pulled my act together this morning and wrapped up a new release of chrpath, the command line tool to modify the rpath and runpath of already compiled ELF programs. The update was triggered by the persistence of Isha Vishnoi at IBM, who needed a new config.guess file to get support for the ppc64le architecture (powerpc 64-bit Little Endian) he is working on. I checked the Debian, Ubuntu and Fedora packages for interesting patches (but failed to find the sources of the OpenSUSE and Mandriva packages), and found quite a few nice fixes. These are the release notes: New in 0.15 released 2013-11-24: You can download the new version 0.15 from alioth. Please let us know via the Alioth project if something is wrong with the new release. The test suite did not discover any old errors, so if you find a new one, please also include a testsuite check.

6 September 2013

Tanguy Ortolo: WebPG, a PGP addon for web browsers

WebPG logo, i.e. GnuPG logo with a web over a spider web One problem with PGP, at least with GnuPG, is that it does not interact with the web. There used to be a Firefox addon for that, called FireGPG, but its development was stopped. So, good news: a new addon has come to fill the gap it left: WebPG, an addon for Firefox and Chrome. I have been using it for a while, and it seems to work fine, being able to encrypt, sign, decrypt and check text blocks. Of course, it cannot handle PGP/MIME unless explicitly adapted to the webmail you use, but there seems to be some experimental support for GMail.

15 November 2012

Axel Beckert: Tools to handle archives conveniently

TL;DR: There's a summary at the end of the article. Today I wanted to see why a dependency in a .deb package from an external APT repository changed so that it became uninstallable. While dpkg-deb --info foobar.deb easily shows the control information, the changelog is in the filesystem part of the package. I could extract that with dpkg-deb, too, but I'd have to either extract it to some temporary directory or pipe it into tar, which can then extract a single file from the archive and send it to STDOUT:
dpkg-deb --fsys-tarfile foobar.deb | tar xOf - ./usr/share/doc/foobar/changelog.Debian.gz | zless
But that's tedious to type. The following command is clearly less to type and way easier to remember:
acat foobar.deb ./usr/share/doc/foobar/changelog.Debian.gz | zless
acat stands for "archive cat" and is part of the atool suite of commands:
als
lists files in an archive.
$ als foobar.tgz
drwxr-xr-x abe/abe           0 2012-11-15 00:19 foobar/
-rw-r--r-- abe/abe          13 2012-11-15 00:20 foobar/bar
-rw-r--r-- abe/abe          13 2012-11-15 00:20 foobar/foo
acat
extracts files in an archive to standard out.
$ acat foobar.tgz foobar/foo foobar/bar
foobar/bar
bar contents
foobar/foo
foo contents
adiff
generates a diff between two archives using diff(1).
$ als quux.zip
Archive:  quux.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2012-11-15 00:23   quux/
       16  2012-11-15 00:22   quux/foo
       13  2012-11-15 00:20   quux/bar
---------                     -------
       29                     3 files
$ adiff foobar.tgz quux.zip
diff -ru Unpack-3594/foobar/foo Unpack-7862/quux/foo
--- Unpack-3594/foobar/foo      2012-11-15 00:20:46.000000000 +0100
+++ Unpack-7862/quux/foo        2012-11-15 00:22:56.000000000 +0100
@@ -1 +1 @@
-foo contents
+foobar contents
arepack
repacks archives to a different format. It does this by first extracting all files of the old archive into a temporary directory, then packing all files extracted to that directory to the new archive. Use the --each (-e) option in combination with --format (-F) to repack multiple archives using a single invocation of atool. Note that arepack will not remove the old archive.
$ arepack foobar.tgz foobar.txz
foobar.tgz: extracted to  Unpack-7121/foobar'
foobar.txz: grew 36 bytes
apack
creates archives (or compresses files). If no file arguments are specified, filenames to add are read from standard in.
aunpack
extracts files from an archive. Often one wants to extract all files in an archive to a single subdirectory. However, some archives contain multiple files in their root directories. The aunpack program overcomes this problem by first extracting files to a unique (temporary) directory, and then moving its contents back if possible. This also prevents local files from being overwritten by mistake.
(atool subcommand descriptions are from the atool man page, which is licensed under GPLv3+. Examples by me.) I do, however, miss the existence of an agrep subcommand. Guess why? atool supports a wealth of archive types: tar (gzip-, bzip-, bzip2-, compress-/Z-, lzip-, lzop-, xz-, and 7zip-compressed), zip, jar/war, rar, lha/lzh, 7zip, alzip/alz, ace, ar, arj, arc, rpm, deb, cab, gzip, bzip, bzip2, compress/Z, lzip, lzop, xz, rzip, lrzip and cpio. (Not all subcommands support all archive types.) Similar Utilities There are some utilities which cover parts of what atool does, too: Tools from the mtools package Yes, they come from the package of tools to handle MS-DOS floppy disks; don't ask me why. :-)
uz
gunzips and extracts a gzip'd tar'd archive
Advantage over aunpack: Less to type. :-)
Disadvantage compared to aunpack: Supports only one archive format.
lz
gunzips and shows a listing of a gzip'd tar'd archive
Advantage over als: One character less to type. :-)
Disadvantage compared to als: Supports only one archive format.
unp
extracts one or more files given as arguments on the command line.
$ unp -s
Known archive formats and tools:
7z:           p7zip or p7zip-full
ace:          unace
ar,deb:       binutils
arj:          arj
bz2:          bzip2
cab:          cabextract
chm:          libchm-bin or archmage
cpio,afio:    cpio or afio
dat:          tnef
dms:          xdms
exe:          maybe orange or unzip or unrar or unarj or lha 
gz:           gzip
hqx:          macutils
lha,lzh:      lha
lz:           lzip
lzma:         xz-utils or lzma
lzo:          lzop
lzx:          unlzx
mbox:         formail and mpack
pmd:          ppmd
rar:          rar or unrar or unrar-free
rpm:          rpm2cpio and cpio
sea,sea.bin:  macutils
shar:         sharutils
tar:          tar
tar.bz2,tbz2: tar with bzip2
tar.lzip:     tar with lzip
tar.lzop,tzo: tar with lzop
tar.xz,txz:   tar with xz-utils
tar.z:        tar with compress
tgz,tar.gz:   tar with gzip
uu:           sharutils
xz:           xz-utils
zip,cbz,cbr,jar,war,ear,xpi,adf: unzip
zoo:          zoo
So it's very similar to aunpack, just with a shorter command, and it supports some more exotic archive formats which atool doesn't support. Also part of the unp package is ucat, which does more or less the same as acat, just with unp as backend. dtrx From the man page of dtrx:
In addition to providing one command to extract many different archive types, dtrx also aids the user by extracting contents consistently. By default, everything will be written to a dedicated directory that's named after the archive. dtrx will also change the permissions to ensure that the owner can read and write all those files. Supported archive formats: tar, zip (including self-extracting .exe files), cpio, rpm, deb, gem, 7z, cab, rar, and InstallShield. It can also decompress files compressed with gzip, bzip2, lzma, or compress.
dtrx -l lists the contents of an archive, i.e. works like als or lz. dtrx has two features not present in the other tools mentioned so far: Unfortunately you can't mix those two features. But you can use the following tool for that purpose: deepfind deepfind is a command from the package strigi-utils and recursively lists files in archives, including archives in archives. I've already written a detailed blog posting about deepfind and its friend deepgrep. tardiff tardiff was written to check what changed in source code tarballs from one release to another. By default it just lists the differences in the file lists, not in the files' contents, and hence works differently than adiff. Summary atool and friends are probably the first choice when it comes to DWIM archive handling, also because they have an easy-to-remember subcommand scheme. uz and lz are the shortest way to extract or list the contents of a .tar.gz file. But nothing more. And you have to install mtools even if you don't have a floppy drive. unp comes in handy for exotic archive formats atool doesn't support. And it's way easier to remember and type than aunpack. dtrx is neat if you want to extract archives in archives or if you want to extract metadata from some package files with just a few keystrokes. For listing all files in recursive archives, use deepfind.

30 August 2012

Axel Beckert: deepgrep: grep nested archives with one command

Several months ago, I wrote about grep everything and listed grep-like tools which can grep through compressed files or specific data formats. The blog posting sparked several magazine articles and talks by Frank Hofmann and me. Frank recently noticed that we had nevertheless missed one more or less mighty tool so far. We missed it, because it's mostly unknown, undocumented and hidden behind a package name which doesn't suggest a real recursive "grep everything": deepgrep deepgrep is part of the Debian package strigi-utils, a package which contains utilities related to the KDE desktop search Strigi. deepgrep especially eases searching through tarballs, even nested ones, but can also search through zip files and OpenOffice.org/LibreOffice documents (which are actually zip files). deepgrep seems to support at least the following archive and compression formats: A search in an archive which is deeply nested looks like this:
$ deepgrep bar foo.ar
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2/foo.txt.gz/foo.txt:foobar
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2/foo.txt.gz/foo.txt:bar
deepgrep, though, neither seems to support any LZMA-based compression (lzma, xz, lzip, 7z), nor does it support lzop, rzip, compress (.Z suffix), cab, cpio, xar, or rar. Further current drawbacks of deepgrep: deepfind If you just need the file names of the files in nested archives, the package also contains the tool deepfind, which does nothing other than list all files and directories in a given set of archives or directories:
$ deepfind foo.ar
foo.ar
foo.ar/foo.tar
foo.ar/foo.tar/foo.tar.gz
foo.ar/foo.tar/foo.tar.gz/foo.zip
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2/foo.txt.gz
foo.ar/foo.tar/foo.tar.gz/foo.zip/foo.tar.bz2/foo.txt.gz/foo.txt
As with deepgrep, deepfind does not implement any common options of its normal sister tool find. Dependencies The package strigi-utils doesn't pull in the complete Strigi framework (i.e. no daemon), just a few libraries (libstreams, libstreamanalyzer, and libclucene). On Wheezy it also pulls in some audio/video decoding libraries, which may make some server administrators less happy. Conclusion Both tools are quite limited to some basic use cases, but can be worth a fortune if you have to work with nested archives. Nevertheless, the claim in the Debian package description of strigi-utils that they're enhanced versions of their well-known counterparts is IMHO disproportionate. Most of the missing features and documentation can be explained by the primary purpose of these tools: being backends for desktop searches. I guess there wasn't much need for proper command-line usage yet. Until now. ;-) 42.zip And yes, I was curious enough to let deepfind have a look at 42.zip (the one from SecurityFocus; unzip seems unable to unpack 42.zip from unforgettable.dk due to a missing version compatibility) and since it just traverses the archive sequentially, it has no problem with that, needing just about 5 MB of RAM and a lot of time:
[ ]
42.zip/lib f.zip/book f.zip/chapter f.zip/doc f.zip/page e.zip
42.zip/lib f.zip/book f.zip/chapter f.zip/doc f.zip/page e.zip/0.dll
42.zip/lib f.zip/book f.zip/chapter f.zip/doc f.zip/page f.zip
42.zip/lib f.zip/book f.zip/chapter f.zip/doc f.zip/page f.zip/0.dll
deepfind 42.zip  11644.12s user 303.89s system 97% cpu 3:24:02.46 total
I won't try deepgrep on 42.zip, though. ;-)

8 September 2011

Wouter Verhelst: Why I think MySQL is a toy.

A commenter on my previous post asked why I think MySQL is a toy. I've actually blogged about that a number of times, but when wanting to point that out, I found that most of those posts point out just one thing, rather than having one post that enumerates them all. So let's remedy that, shall we? There are many things wrong with MySQL, including, but not limited to: So it's my opinion that any database which fails to store data correctly in its default settings can't be anything but a toy; or that a database which has a comparatively small feature set can't be anything but a toy. But maybe that's just me. [1] No, I haven't used all those features; but I have used asynchronous notification, sequences (other than for primary keys), kerberos auth, custom data types, and (obviously) I have enjoyed the extra peace of mind of knowing that my database is ACID compliant, meaning that it will either accept my transaction as a whole, or reject it as a whole (but usually the former). In addition, I've seen customers use the table inheritance feature.

2 July 2011

Guillaume Mazoyer: Status report Jigsaw num. 3 for GSoC 2011

At the beginning of the last two weeks, I was working on the final package of JTReg. I was working on generating man pages and I finally found a solution. I decided to use help2man to generate the man pages, but I had to patch the two scripts jtreg.sh and jtdiff.sh. Basically, these two scripts launch the main classes of the jtreg.jar JAR file. The problem with them is that they don't know where they can find the JAR file on Debian. So I patched the two shell scripts so they can locate JAR files in /usr/share/java.
To be able to generate man pages with help2man, I needed to be able to use the scripts. But with jtdiff.sh and jtreg.sh using jtreg.jar, which is not in /usr/share/java during the build, I had to find a way to make the scripts work. I decided to patch the scripts (again) and make them depend on some environment variables. So in each script, there are 4 variables used: So during the packaging only JTREG_HOME needed to be changed to be able to use the script. Once the scripts were patched, I finally wrote the rules file of the package. Once again, I used javahelper, so the rules file contains the following code:

#!/usr/bin/make -f
JAVA_HOME = /usr/lib/jvm/default-java

override_dh_auto_build:
	ant -f make/build.xml
	dh_auto_build
	JTREG_HOME=./dist/jtreg/lib/ help2man \
		--name="Regression Test Harness" \
		--help-option="-help" \
		./dist/jtreg/linux/bin/jtdiff > jtdiff.1
	JTREG_HOME=./dist/jtreg/lib/ help2man \
		--name="Regression Test Harness" \
		--help-option="-help" \
		./dist/jtreg/linux/bin/jtreg > jtreg.1

override_dh_auto_clean:
	rm -r dist || :
	rm -r build || :
	rm jtdiff.1 || :
	rm jtreg.1 || :
	dh_auto_clean

%:
	dh $@ --with javahelper


As you can see, it is not really complicated. The most interesting part is the one where the man pages are generated. The JTREG_HOME variable is set to ./dist/jtreg/lib/ because it is the path where jtreg.jar is once it is built. I also used the --help-option option of help2man because the help option of jtreg and jtdiff is -help and not --help.
The jtreg package is now on the Debian Java team SVN so anyone can get it using:

svn checkout svn+ssh://$SVN_USER@svn.debian.org/svn/pkg-java/trunk/jtreg

Thanks to Sylvestre Ledru, jtreg is now available in Sid and the ITP is now closed.

The second part of my work consisted of testing Jigsaw and fixing tests where necessary. All the packaging work is now useful since the tests need JTReg to be run. The problem with the current build system of Jigsaw is that it is not generic enough to be used on any system. So I had to patch several makefiles. To find them:

cd jigsaw-tests
find . -name 'Makefile' | grep '/test/Makefile'

Then, the path to find jtreg has to be modified to match /usr/bin/jtreg . After that it is possible to run the tests with the following command:

make test

Passing all the tests is something which takes a lot of time. After that, I identified 42 (that's a cool number, right?) failing tests. Some of them came from the lack of an X11 server and others from unreachable network hosts. Some hackers of IcedTea gave me some ideas to fix several tests. They told me to look here and here because I just had the same problems that they had before.
With this, I was able to fix the tests related to unreachable hosts and to the lack of X server. I used Xvfb to fix the X server depending tests.
After running the tests again, only 13 tests were still failing. That is a good result. Tom Marble and I decided to apply the patch of Alan Bateman and re-run the tests to see if it would break something:

cd jigsaw-tests
mkdir patches && cd patches
wget http://cr.openjdk.java.net/~alanb/jigsaw-mp-prototype1/webrev/jdk.patch
cd jdk
patch -p1 < ../patches/jdk.patch
make clean
make sanity
make all
make modules
xvfb-run -e xvfb-errors -a -s -ac make test -k


15 tests failed against 13 before. The 2 new failing tests are: I didn't go further but I'll try to understand why we have 15 failing tests in a few days. All the modifications that I have made on the source code of Jigsaw can be found in a patch format here:

These two weeks were also dedicated to the understanding of Jigsaw and its modules. Reading the jigsaw-dev mailing list, I have found some interesting conversations about Jigsaw and how to write modules. So I decided to write a module to see how it works. I followed the instructions of the quick start guide, but compiling the module didn't work. So I tried to understand why and then to fix it. I eventually compiled and installed my module and here is how I did it:

mkdir -p src/com.greetings/com/greetings/
mkdir -p src/org.astro/org/astro/
mkdir modules
vim src/com.greetings/module-info.java
vim src/com.greetings/com/greetings/Hello.java
vim src/org.astro/module-info.java
vim src/org.astro/org/astro/World.java
./jigsaw/build/linux-amd64/bin/javac -d modules \
-modulepath ./jigsaw/build/linux-amd64/modules/modules \
-sourcepath src `find src -name '*.java'`
./jigsaw/build/linux-amd64/bin/jmod create -L mlib
./jigsaw/build/linux-amd64/bin/jmod install \
modules org.astro com.greetings -L mlib
./jigsaw/build/linux-amd64/bin/java -L mlib -m com.greetings


And here it is, we have a nice hello world module. The documentation says to put the module-info.java of org.astro in src/org.astro/org/astro/ but it didn't work for me, so I tried to put it in src/org.astro/ and it worked. Also, the quick start guide says that we have to use the -modulepath option to specify where the modules are. I first tried to use ./jigsaw/build/linux-amd64/modules/ but javac told me it could not find java.lang, so I noticed that there is a subdirectory called modules in ./jigsaw/build/linux-amd64/modules/ so I used it as module path and it worked.

Writing a module and seeing how Jigsaw modules are made started to make me think about the packaging of Jigsaw. Dependencies being available, knowing how to build and test Jigsaw and seeing how modules are made, I think that I will soon try to start the packaging of Jigsaw and getting started with the packaging will be my goal for the next two weeks.

24 December 2009

Jonathan McDowell: Neat find of the day: inputlirc

I've recently been fixing up my VDR setup to work with FreeSat and make it brother/parent friendly. I've applied the EPG patches to get the 7-day guide, set up an auto-login user under gdm with vdr-sxfe running, and that left getting the remote working. For some reason my old serial dongle wasn't happy with lirc - it got detected ok, and would show some signal when buttons were pressed, but didn't work with the old config. The entire hardware of the box has changed, so it seems likely something isn't quite right (in particular the lirc drivers spew out warnings about SMP bits, so I should probably try the dongle under a single-core setup to rule that out, but there's also a move to 64-bit involved).

The easy solution to have something sorted for Christmas was to pick up a cheap remote from eBay. This ended up being a Cyberlink remote + USB dongle combo. It worked just fine when plugged in, turning up as a normal input device with the obvious keys doing the obvious things. I wanted all the keys to work though, as I'd got used to having a lot of the VDR functions instantly accessible rather than having to work my way through the menus. Various searches suggested I'd need to use LIRC to access the odder keys. That seemed a lot of hassle for something that was doing the decoding itself. Some playing with xev turned up keycodes for a number of the keys, but there were still a few missing (and important ones at that, such as Red/Green/Yellow/Blue). Further digging found me a suggestion of an Xorg keyboard map that would map KEY_RED etc. from the evdev device into something workable under X. And then I found inputlirc via the Debian package. This is really bloody neat - point it at an evdev device and it will present all of the KEY_* codes out as lirc keys. If you pass the -g parameter it makes sure the key presses only go to lirc as well. Exactly what I want and a doddle to set up - no messing with a big configuration file, just edit /etc/default/inputlirc to point to the correct /dev/input/by-id/ file, add the -g to the options in that file and restart.
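For reference, the defaults file only needs two variables set. The device name below is a made-up example; substitute the actual entry from your own /dev/input/by-id/ directory:

```sh
# /etc/default/inputlirc (example values; the device name is hypothetical)
EVENTS="/dev/input/by-id/usb-Example_USB_Remote-event-if00"
OPTIONS="-g"
```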

Now the main remaining task is to get it working with BBC/ITV HD.

11 November 2008

MJ Ray: Five goes free-to-air, ASTEFAQ updated

After seeing that Five is to launch on Freesat over on the DTG website, I tried rescanning 28e and a channel called 6335 (or similar) appeared in the channel list, but it had a FIVE logo in the corner. Maybe those DOGs do serve some purpose sometimes - but it would be better if they actually set the channel name correctly. So, if you have a free-to-air satellite set pointed at 28e (which most UK dishes are), then you now have Five. If you have Freesat, you’ve another week before it appears on your EPG. It’s better to be free-to-air. I’ve not seen much of five since leaving Norwich about a decade ago: North End King’s Lynn is “fringe” reception for even the main channels, while neither the Kewstoke nor Cardiff transmitters broadcast it yet. It looks like it’s changed quite a lot. Anyway, I’ve updated the alt.satellite.tv.europe FAQ to move five into the list of FTA channels. Anyone know about getting the Freesat EPG on MythTV yet?

16 September 2008

Runa Sandvik: Can has lolcode?

HAI
CAN HAS ATTENTION? About a year ago I wrote a blog post about syntax highlighting and indenting for lolcode in vim. A lot has happened with the programming language since then, and I haven’t had the time to update the two files. Chris (aka cheepguava) was nice enough to do just that. The things he added include multiline comments (OBTW … TLDR), special characters within strings (:) or :> or :o or :: or :") and more language keywords to get it closer to the 1.2 spec. You can find the tarball at http://www.indentedlines.net/lolcode/vim.tar.gz Please note that you have to rename the two files to lolcode.vim. Put the two files in $HOME/.vim/syntax and $HOME/.vim/indent respectively. You also need to have filetype plugin indent on in your .vimrc file. Using syntax on and filetype on is also a good idea. Because lolcode is a new filetype, you need to make sure it is detected by the system: mkdir ~/.vim/ftdetect Create a new file named lol.vim in that directory. In this file you need to have the following line: au BufRead,BufNewFile *.lol set filetype=lolcode Thanks to Chris for helping out, and I hope you enjoy! KTHXBYE

9 September 2008

Jan Wagner: life

1 July 2008

Stefano Zacchiroli: python-debian w dependency parsing

New python-debian feature: Dependency Parsing Since my merge commit of this afternoon, python-debian has grown dependency parsing support. (But first things first: you know about python-debian, don't you? If you don't, and you always wanted to program with Debian-related files in Python, then shame on you!) Thus far, playing with Packages-like files using dictionary-like objects was already as simple as (quoting from /usr/share/doc/python-debian/examples/deb822/grep-maintainer):
for pkg in deb822.Packages.iter_paragraphs(file('/var/lib/dpkg/status')):
    if pkg.has_key('Maintainer') and maint_RE.search(pkg['maintainer']):
        print pkg['package']

However, it was a bit unfortunate that Depends-like fields were only returned as strings. Now each package (as iterated upon in the above snippet) is equipped with a .relations property returning a dictionary of inter-package relationship fields. Looking up keys like "depends" you will get back a conjunctive normal form (CNF) representation of dependencies, together with parsed constraints on version or architectures, if any. A simple example is shipped as /usr/share/doc/python-debian/examples/deb822/depgraph, which outputs a labeled Graphviz script of all inter-package dependencies extracted from a Packages file. Here is its most relevant snippet:
name = pkg['package']
rels = pkg.relations
for deps in rels['depends']:
    if len(deps) == 1:
        emit_arc(name, deps[0]['name'])
    else:   # output an OR node
        or_node = get_id()
        emit_arc(name, or_node)
        emit_node(or_node, 'OR')
        for dep in deps:
            emit_arc(or_node, dep['name'].lower())

The appropriate sub-classes of Deb822 have been customized with the knowledge of which inter-package relationship fields are supported in their stanzas (e.g. Build-Depends are supported by Sources, but not by Packages; the other way around for Recommends). Testing from the git repo is more than welcome.
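To make the CNF shape concrete, here is a small self-contained sketch in plain Python (a hypothetical toy parser written just for this post, not python-debian's code) that turns a Depends-style string into the same list-of-OR-groups structure described above:

```python
# Hypothetical, simplified parser illustrating the CNF structure:
# a list of AND-ed groups, each group a list of OR-ed alternatives,
# each alternative carrying an optional version constraint.
import re

def parse_depends(field):
    cnf = []
    for group in field.split(","):        # commas separate AND-ed groups
        alternatives = []
        for alt in group.split("|"):      # pipes separate OR-ed alternatives
            m = re.match(r"\s*(\S+)\s*(?:\(([<>=]+)\s*([^)]+)\))?", alt)
            name, op, ver = m.groups()
            alternatives.append({"name": name,
                                 "version": (op, ver) if op else None})
        cnf.append(alternatives)
    return cnf

deps = parse_depends("libc6 (>= 2.7), exim4 | mail-transport-agent")
```

Here deps[0] is a single-alternative group with a version constraint, while deps[1] is an OR-group of two alternatives, which is exactly the nesting the depgraph example walks when it emits its OR nodes.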

24 April 2008

Joachim Breitner: Pausable IO actions for better GUI responsiveness

For a university seminar I'm currently writing a GUI program to view fractals based on simple iterated function systems (only similitudes allowed). It supports three different drawing algorithms and you can edit the IFS by dragging squares around on the screen. But this post is not about this program (I might present it later), but about how Haskell allowed me to solve a problem very nicely: The first instances of the code had a problem that a lot of GUI programmers know: the drawing of the fractal took quite some time, and during that time the gtk main loop is blocked and the program becomes unresponsive. At first I avoided this problem by manually splitting the drawing function (e.g. by repeatedly increasing the resolution) and kept re-drawing it at higher resolutions in an idle handler, so at least I could interact whenever one resolution had finished drawing. It worked somewhat, but it was not easily done for the other, not pixel-based, algorithms and it was ugly. So I wanted a way to (a) pause the drawing at any convenient point, to resume it later, and (b) safely abort the drawing if what I'm drawing has changed and I need to restart. A common solution to this would be to do the drawing in a separate thread, so (a) is actually not needed, but I did not know if I could safely do (b), and I have heard that threads cause problems with gtk. So I tried to dig deeper for the hidden treasures of advanced Haskell programming: I need a monad transformer! I expected the infamous ContT monad transformer to help, but I couldn't figure out how, and I started to create my own monad transformer, called CoroutineT. I tried to figure out what an action of type (CoroutineT IO a) should do, and I came up with this type signature:
pausingAction :: IO (Either (CoroutineT IO a) a)
which means that after the pausingAction is done, it is Either paused (and I get back another pausingAction to run when I want it), or it is done (and I get the result). Note that I'm writing IO here, but it can be any monad. The definition of the datatype and the monad instance came mostly from trying to make this type work (yay to Haskell's type system, less thinking required), and looks like this:
data CoroutineT m a = CoroutineT { unCoroutineT :: m (Either (CoroutineT m a) a) }

instance (Monad m) => Monad (CoroutineT m) where
    return v = CoroutineT (return (Right v))
    a >>= b = CoroutineT $ do
        r <- unCoroutineT a
        case r of
            Left paused    -> return $ Left (paused >>= b)
            Right unpaused -> unCoroutineT (b unpaused)
This translates to English like this: a call to return is not paused. When an action is already paused, further actions should be run after the paused action is resumed. When an action is not paused, further actions can happen now. Like every well-behaved monad transformer, I also need a "runCoroutineT" function to start the coroutine. I probably could have used unCoroutineT directly, but for my use case (GUI drawing) I did not need a return value, so this function is more handy:
runCoroutineT :: Monad m => CoroutineT m () -> m (Maybe (CoroutineT m ()))
runCoroutineT a = either Just (const Nothing) `liftM` unCoroutineT a
Nothing surprising here, basically just turning the Either into a Maybe. So it becomes clear how to do (b): we can just throw away the resume action returned by runCoroutineT (if any). The more interesting thing is how to do (a): we need a pause action of type (CoroutineT m ()). But how should it work? I did not really try to understand why it works, but by looking at the types, I came up with this:
pause :: Monad m => CoroutineT m ()
pause = CoroutineT (return (Left (CoroutineT (return (Right ())))))
Yes, it sounds like some dance step instructions (read the second line out loud!), but it works somehow.

So here is some example code: I have a pausable IO action that counts from one to ten, pausing after each number. I also have a function that resumes a pausable action up to n times:
example n = keepGoingFor n $ do
        liftIO $ putStrLn "This is the coroutine"
        forM_ [1..10] $ \i -> do
            liftIO $ putStrLn $ "Counting to " ++ show i ++ " while you keep calling it"
            pause
  where -- keepGoingFor :: Int -> CoroutineT IO () -> IO ()
        keepGoingFor 0 _   = putStrLn "Here I just abort the run"
        keepGoingFor n cor = do
            resume <- runCoroutineT cor
            case resume of
                Just runAgain -> keepGoingFor (n-1) runAgain
                Nothing       -> putStrLn "Finally stopped"
And here is the output of two different runs:
*CoroutineT> example 5
This is the coroutine
Counting to 1 while you keep calling it
Counting to 2 while you keep calling it
Counting to 3 while you keep calling it
Counting to 4 while you keep calling it
Counting to 5 while you keep calling it
Here I just abort the run
*CoroutineT> example 14
This is the coroutine
Counting to 1 while you keep calling it
Counting to 2 while you keep calling it
Counting to 3 while you keep calling it
Counting to 4 while you keep calling it
Counting to 5 while you keep calling it
Counting to 6 while you keep calling it
Counting to 7 while you keep calling it
Counting to 8 while you keep calling it
Counting to 9 while you keep calling it
Counting to 10 while you keep calling it
Finally stopped
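To connect this back to the GUI problem: the idea is to stash the resume action in a mutable slot and run one slice per idle callback. Here is a gtk-free sketch of that scheme using an IORef; the helper names (startDrawing, step, abortDrawing, liftC) and the Functor/Applicative instances needed on current GHC are mine, not from the original program:

```haskell
import Control.Monad (liftM, ap)
import Data.IORef
import Data.Maybe (isJust)

newtype CoroutineT m a = CoroutineT { unCoroutineT :: m (Either (CoroutineT m a) a) }

instance Monad m => Functor (CoroutineT m) where
    fmap = liftM
instance Monad m => Applicative (CoroutineT m) where
    pure v = CoroutineT (return (Right v))
    (<*>)  = ap
instance Monad m => Monad (CoroutineT m) where
    a >>= b = CoroutineT $ do
        r <- unCoroutineT a
        case r of
            Left paused    -> return (Left (paused >>= b))
            Right finished -> unCoroutineT (b finished)

runCoroutineT :: Monad m => CoroutineT m () -> m (Maybe (CoroutineT m ()))
runCoroutineT a = either Just (const Nothing) `liftM` unCoroutineT a

pause :: Monad m => CoroutineT m ()
pause = CoroutineT (return (Left (CoroutineT (return (Right ())))))

-- run an inner action without pausing (a stand-in for lift)
liftC :: Monad m => m a -> CoroutineT m a
liftC m = CoroutineT (liftM Right m)

-- (a) start: put the whole drawing action into the slot
startDrawing :: IORef (Maybe (CoroutineT IO ())) -> CoroutineT IO () -> IO ()
startDrawing slot cor = writeIORef slot (Just cor)

-- one idle-handler slice: run until the next pause, keep the resume
-- action, and report whether there is more work (the True/False
-- convention gtk idle callbacks use)
step :: IORef (Maybe (CoroutineT IO ())) -> IO Bool
step slot = do
    pending <- readIORef slot
    case pending of
        Nothing  -> return False
        Just cor -> do
            resume <- runCoroutineT cor
            writeIORef slot resume
            return (isJust resume)

-- (b) abort: just forget the resume action
abortDrawing :: IORef (Maybe (CoroutineT IO ())) -> IO ()
abortDrawing slot = writeIORef slot Nothing
```

Throwing away the IORef's contents is all that aborting takes, which is exactly the safety property (b) asked for: the paused computation is just a value, so dropping it cannot corrupt anything.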
So it really does work fine, and it proved very useful for my GUI drawing problem. For that, I created this nice control structure, which works like mapM_ but calls pause every n iterations, and therefore hides the pausing stuff almost completely:
pausingForM_ :: Monad m => Int -> [a] -> (a -> CoroutineT m ()) -> CoroutineT m ()
pausingForM_ period list action = pausing' 0 list
  where pausing' _ []     = return ()
        pausing' n (x:xs) = do
            action x
            if n == period then pause >> pausing' 0 xs
                           else pausing' (n+1) xs
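A quick sanity check of that pausing behaviour (the countResumes driver is my own, not part of the module): since the counter runs 0..period before pause fires, a period of 2 pauses after every third element, so a five-element list is interrupted exactly once.

```haskell
import Control.Monad (liftM, ap)

newtype CoroutineT m a = CoroutineT { unCoroutineT :: m (Either (CoroutineT m a) a) }

instance Monad m => Functor (CoroutineT m) where
    fmap = liftM
instance Monad m => Applicative (CoroutineT m) where
    pure v = CoroutineT (return (Right v))
    (<*>)  = ap
instance Monad m => Monad (CoroutineT m) where
    a >>= b = CoroutineT $ do
        r <- unCoroutineT a
        case r of
            Left paused    -> return (Left (paused >>= b))
            Right finished -> unCoroutineT (b finished)

runCoroutineT :: Monad m => CoroutineT m () -> m (Maybe (CoroutineT m ()))
runCoroutineT a = either Just (const Nothing) `liftM` unCoroutineT a

pause :: Monad m => CoroutineT m ()
pause = CoroutineT (return (Left (CoroutineT (return (Right ())))))

pausingForM_ :: Monad m => Int -> [a] -> (a -> CoroutineT m ()) -> CoroutineT m ()
pausingForM_ period list action = pausing' 0 list
  where pausing' _ []     = return ()
        pausing' n (x:xs) = do
            action x
            if n == period then pause >> pausing' 0 xs
                           else pausing' (n+1) xs

-- count how often the coroutine pauses before finishing
countResumes :: CoroutineT IO () -> IO Int
countResumes cor = do
    resume <- runCoroutineT cor
    case resume of
        Just rest -> fmap succ (countResumes rest)
        Nothing   -> return 0
```

For example, countResumes (pausingForM_ 2 [1..5] (const (return ()))) gives 1, and with period 1 a ten-element list pauses five times.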
I have put the complete module (including instances omitted here) in the darcs repository, which might later also contain the fractal drawing program.
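A note for readers building this on current GHC: since the Applicative-Monad changes in GHC 7.10, a Monad instance needs Functor and Applicative superclass instances, and the liftIO used in the example needs MonadTrans/MonadIO instances. A plausible reconstruction of those omitted instances (the actual darcs module may differ):

```haskell
import Control.Monad (liftM, ap)
import Control.Monad.IO.Class (MonadIO (..))
import Control.Monad.Trans.Class (MonadTrans (..))

newtype CoroutineT m a = CoroutineT { unCoroutineT :: m (Either (CoroutineT m a) a) }

instance Monad m => Functor (CoroutineT m) where
    fmap = liftM

instance Monad m => Applicative (CoroutineT m) where
    pure v = CoroutineT (return (Right v))
    (<*>)  = ap

instance Monad m => Monad (CoroutineT m) where
    a >>= b = CoroutineT $ do
        r <- unCoroutineT a
        case r of
            Left paused    -> return (Left (paused >>= b))
            Right finished -> unCoroutineT (b finished)

-- lift runs the inner action as a single unpausable step
instance MonadTrans CoroutineT where
    lift m = CoroutineT (liftM Right m)

instance MonadIO m => MonadIO (CoroutineT m) where
    liftIO = lift . liftIO
```

With these in scope, liftIO (putStrLn ...) in the counting example type-checks as-is.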
