Search Results: "lele"

22 December 2023

Joachim Breitner: The Haskell Interlude Podcast

It was pointed out to me that I have not blogged about this, so better now than never: Since 2021 I have been producing, together with four other hosts, a regular podcast about Haskell, the Haskell Interlude. Roughly every two weeks two of us interview someone from the Haskell community, and we chat for approximately an hour about how they came to Haskell, what they are doing with it, why they are doing it and what else is on their mind. Sometimes we talk to very famous people, like Simon Peyton Jones, and sometimes to people who maybe should be famous, but aren't quite yet. For most episodes we also have a transcript, so you can read the interviews instead, if you prefer, and you should find the podcast on most podcast apps as well. I do not know how reliable these statistics are, but supposedly we regularly have around 1300 listeners. We don't get much feedback, however, so if you like the show, or dislike it, or have feedback, let us know (for example on the Haskell Discourse, which has a thread for each episode). At the time of writing, we have released 40 episodes. For the benefit of my (likely hypothetical) fans, or those who want to train an AI voice model for nefarious purposes, here is the list of episodes co-hosted by me: Can't decide where to start? The one with Ryan Trinkle might be my favorite. Thanks to the Haskell Foundation and its sponsors for supporting this podcast (hosting, editing, transcription).

24 September 2023

Sahil Dhiman: Abraham Raji

Abraham with Polito. Man, you're no longer with us, but I am touched by the number of people you have positively impacted. Almost every DebConf23 presentation by locals that I saw after you carried how you were instrumental in bringing them there; how you were a dear friend and brother. It's a weird turn of events that you left us during the one thing we deeply cared about and worked towards making possible together for the last 3 years. Who would have known that "Sahil, I'm going back to my apartment tonight" and a casual bye post would be the last conversation we ever had. Things were terrible after I heard the news. I had a hard time convincing myself to come see you one last time during your funeral. That was the last time I was going to get to see you, and I kept on looking at you. You, there in front of me, all calm, gave me peace. I'll carry that image all my life now. Your smile will always remain with me. Now, who'll meet and receive me at the door at almost every Debian event (just by sheer coincidence?). Who'll help me speak out loud about all the Debian shortcomings (and then discuss solutions, when sober :)). Abraham and me during a Debian discussion at DebUtsav Kochi. It was a testament to the amount of time we had already spent together online that when we first met during MDC Palakkad, it didn't feel like we were physically meeting for the first time. The conversations just flowed. Now this song is associated with you due to your speech during the post-MiniDebConf Palakkad dinner. Hearing it reminds me of all the times we spent together chilling and talking community (which you cared deeply about). I guess now we can't stop caring for the community, because your energy was contagious. Now I can't directly dial your number to hear "Hey Sahil! What's up?" from the other end, or "Tell me, tell me" at any mention of a problem. Nor will I be able to send you references to your Debian packaging guide being used in the wild. You already know how popular this guide of yours is.
How many people that guide has helped get started with packaging. Our last Telegram text was me telling you about the guide's usage in Ravi's DebConf23 presentation. Did I ever tell you that I too got my first start with packaging from there? I started looking up to you from there, even before we met or talked. Now I have missed telling you that I was probably your biggest fan whenever you had the mic in hand and started speaking. You always surprised me with the insights and ideas you brought, and kept on impressing me, for someone who was just my age but way more mature. Reading recent toots from Raju Dev made me realize how much I loved your writings. You wrote "How the Future will remember Us", "Doing what's right" and many more. The level of depth in your thought was unparalleled. I loved reading those. That's why I kept pestering you to write more, which you slowly stopped. Now I fully understand why, though. You were busy; really busy helping people out or just working to make things better. You were doing Debian, upstream projects, web development, design, graphics, mentoring, and free software evangelism, all while being the go-to person for almost everyone around. Everyone depended on you, because you were too kind to turn anyone down. Abraham and me just chilling around. We met for the first time there. Man, I still get your spelling wrong :) Did I ever tell you that? That was the reason I used to use AR instead online. You'll be missed and will always be part of our conversations, because you have left a profound impact on me, our friends, Debian India and everyone around. See you! The coolest man around. In memory: PS - Just found out you even had a YouTube channel. You were one heck of a talented man.

24 February 2022

Dirk Eddelbuettel: #36: pub/sub for live market monitoring with R and Redis

Welcome to the 36th post of the really randomly reverberating R, or R4 for short, write-ups. Today's post is about using Redis, and especially RcppRedis, for live or (near) real-time monitoring with R. market monitor There is a saying that you can take the boy out of the valley, but you cannot take the valley out of the boy, so for those of us who spent a decade or two in finance and on trading floors, having some market price information available becomes second nature. And/or sometimes it is just good fun to program this. A good while back Josh posted a gist on a simple-yet-robust while loop. It (very cleverly) uses his quantmod package to access the SP500 in "real-time". (I use quotes here because at the end of retail broadband one is not seeing the same market action as someone co-located in a New Jersey data center. It is however not delayed: as an index, it is not immediately tradeable the way a stock, ETF, or derivative may be, all of which are only disseminated as delayed price information, usually by ten minutes.) I quite enjoyed the gist, used it, and started tinkering with it. For example, it collects data but only saves (i.e. persists) it after market close. If for whatever reason one needs to restart, recent history is gone. In any event, I used his code, generalized it a little, and published this about a year ago as function intradayMarketMonitor() in my dang package. (See this blog post announcing it.) The chart on the left shows this in action; the chart is a snapshot from a couple of days ago when the vignettes (more on them below) were written. As lovely as intradayMarketMonitor() is, it also limits itself to market hours. And sometimes you want to see, say, how the market opens on Sunday (futures usually restart at 17h Chicago time), or how news dissipates during the night, or where markets are pre-open, or ...
So I wanted both to complement this with futures, and also to cache the data locally so that, say, one machine might collect data and one (or several) others can visualize it. For such tasks, Redis is unparalleled. (Yet I also always felt Redis could do with another simple, short and sweet introduction stressing the key features of i) being multi-lingual: write in one language, consume in another; and ii) loose coupling: no linking, as one talks to Redis via standard tcp/ip networking. So I wrote a new intro vignette that is now in RcppRedis. I hope this comes in handy. Comments welcome!) Our RcppRedis package had long been used for such tasks, and it was easy to set it up. Standard use is to loop: fetch some data, push it to Redis, sleep, and start over. Clients do the same: fetch the most recent data, plot or report it, sleep, start over. That works, but it has a dual delay, as the sleeping client may miss the data update! The standard answer to this is called publish/subscribe, or pub/sub. Libraries such as 0mq or zeromq specialise in this. But it turns out Redis already has it. I had some initial difficulty adding it to RcppRedis, so for a trial I tested the marvellous rredis package by Bryan and simply instantiated two Redis clients. Now the data getter simply publishes a new data point in a given channel, by convention named after the security it tracks. Clients register with the Redis server, which does all the actual work of keeping track of who listens to what. The clients now simply listen (which is a blocking operation) and receive the data as soon as it comes in. market monitor This is quite mesmerizing when you just run two command-line clients (in a byobu session, say). As soon as the data is written (as shown on the console log) it is consumed. No measurable overhead. Just lovely. Bryan and I then talked a little, as he may or may not retire rredis. Having implemented the pub/sub logic for both sides once, he took a good hard look at RcppRedis and, just like that, added it there.
With some really clever wrinkles, such as an (optional) per-symbol callback as a closure attached to the instance. Truly amazeballs. And once we had it in there, publishing or subscribing to just one symbol easily generalizes to having one listener collect and publish for multiple symbols, and having one or more clients subscribe and listen to one, more, or even all symbols. All with ease, thanks to Redis. The second chart, also from a few days ago, shows four symbols for four (front-contract) futures for Bitcoin, Crude Oil, SP500, and Gold. As all this can get a little technical, I wrote a second vignette for RcppRedis on just this: market monitoring. Give this a read if interested; feedback on this one is most welcome too! All the code you need is included in the package; just run a local Redis instance. Before closing, one sour note. I uploaded all this in a new and much improved RcppRedis 0.2.0 to CRAN on March 13, ten days ago. Not only is it still not there, but CRAN in their most delightful way also refuses to answer any emails of mine. Just lovely. The package exhibited just one compiler warning: a C++ compiler objected to the (embedded) C library hiredis (included as a fallback) for using a C language construct. Yes. A C++ compiler complaining about C. It's a non-issue. Yet it's been ten days and we still have nothing. So irritating and demotivating. Anyway, you can get the package off its GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.
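The pattern described above (a data getter publishing each tick to a channel named after the security, clients blocking until data arrives, and a per-symbol callback closure handling each message) can be sketched language-agnostically. The real setup talks to an actual Redis server, via RcppRedis from R; the Python below is only an in-process stand-in for the broker, with illustrative names, meant to show the shape of pub/sub and per-symbol dispatch rather than any actual API.

```python
import queue
import threading
from collections import defaultdict

class MiniBroker:
    """In-process stand-in for a Redis server's pub/sub machinery."""
    def __init__(self):
        self._channels = defaultdict(list)   # channel -> subscriber queues
        self._lock = threading.Lock()

    def subscribe(self, channel):
        q = queue.Queue()
        with self._lock:
            self._channels[channel].append(q)
        return q  # a client blocks on q.get(), like a blocking SUBSCRIBE

    def publish(self, channel, message):
        with self._lock:
            receivers = list(self._channels[channel])
        for q in receivers:
            q.put(message)
        return len(receivers)  # Redis PUBLISH also reports the receiver count

class Listener:
    """One listener, many symbols: per-symbol callbacks held as closures."""
    def __init__(self, broker):
        self.broker = broker
        self.queues = {}
        self.callbacks = {}

    def register(self, symbol, callback):
        # By convention, the channel is named after the security it tracks.
        self.queues[symbol] = self.broker.subscribe(symbol)
        self.callbacks[symbol] = callback

    def poll_once(self, symbol):
        price = self.queues[symbol].get(timeout=1)  # blocks until data arrives
        self.callbacks[symbol](price)

broker = MiniBroker()
latest = {}
listener = Listener(broker)
listener.register("ES", lambda px: latest.__setitem__("ES", px))
listener.register("GC", lambda px: latest.__setitem__("GC", px))

broker.publish("ES", 5432.25)   # the data getter pushes a new tick...
listener.poll_once("ES")        # ...and it is consumed as soon as it is written
print(latest)                   # {'ES': 5432.25}
```

In the real setup the blocking read runs in a loop and each channel carries serialized market data; the blocking delivery is exactly what removes the dual polling delay of the fetch-sleep-repeat approach.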

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

1 August 2021

Paul Wise: FLOSS Activities July 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.




  • libusbgx/gt: triage issues
  • Debian packages: triaged bugs for reintroduced packages
  • Debian servers: debug lists mail issue, debug lists subscription issue
  • Debian wiki: unblock IP addresses, approve accounts

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The microsoft-authentication-library-for-python and purple-discord work was sponsored by my employer. All other work was done on a volunteer basis.

28 November 2011

Dirk Eddelbuettel: A Story of Life and Death. On CRAN. With Packages.

The Comprehensive R Archive Network, or CRAN for short, has been a major driver in the success and rapid proliferation of the R statistical language and environment. CRAN currently hosts around 3400 packages, and is growing at a rapid rate. Not too long ago, John Fox gave a keynote lecture at the annual R conference and provided a lot of quantitative insight into R and CRAN---including an estimate of an incredible growth rate of 40% as a near-perfect straight line on a log-log chart! So CRAN does in fact grow exponentially. (His talk morphed into this paper in the R Journal; see figure 3 for this chart.) The success of CRAN is due to a lot of hard work by the CRAN maintainers, led for many years and still today by Kurt Hornik, whose dedication is unparalleled. Even at the current growth rate of several packages a day, all submissions are still rigorously quality-controlled using strong testing features available in the R system. And for all its successes, and without trying to sound ungrateful, there have always been some things missing at CRAN. It has always been difficult to keep a handle on the rapidly growing archive. Task Views for particular fields, edited by volunteers with specific domain knowledge (including yours truly), help somewhat, but still cannot keep up with the flow. What is missing are regular updates on packages. What is also missing is a better review and voting system (and while Hadley Wickham mentored a Google Summer of Code student to write CRANtastic, it seems fair to say that this subproject didn't exactly take off either). Following useR! 2007 in Ames, I decided to do something about it and noodled over a first design on the drive back to Chicago. A weekend of hacking led to CRANberries. CRANberries uses existing R functions to learn which packages are available right now, and compares that to data stored in a local SQLite database. This is enough to learn two things: First, which new packages were added since the last run.
That is very useful information, and it feeds a website with blog subscriptions (for the technically minded: an RSS feed, at this URL). Second, it can also compare current version numbers with the most recent stored version number, and thereby learns about updated packages. This too is useful, and also feeds a website and RSS stream (at this URL; there is also a combined one for new and updated packages). CRANberries writes out little summaries for both new packages (essentially copying what the DESCRIPTION file contains) and a quick diffstat summary for updated packages. A static blog compiler munges this into static html pages which I serve from here, and creates the RSS feed data at the same time. All this has been operating since 2007. Google Reader tells me that the RSS feed averages around 137 posts per week, and has about 160 subscribers. It also feeds Planet R, which itself redistributes, so it is hard to estimate the absolute number of readers. My weblogs also indicate a steady number of visits to the html versions. The most recent innovation was to add tweeting earlier in 2011 under the @CRANberriesFeed Twitter handle. After all, the best way to address information overload and too many posts in our RSS readers surely is to ... just generate more information and add some Twitter noise. So CRANberries now tweets a message for each new package, and a summary message for each set of new packages (or several if the total length exceeds the 140 character limit). As of today, we have sent 1723 tweets to what are currently 171 subscribers. Tweets for updated packages were added a few months later. Which leads us to today's innovation. One feature which has truly been missing from CRAN was updates about withdrawn packages. Packages can be withdrawn for a number of reasons. Back in the day, CRAN carried so-called bundles carrying packages inside. Examples were VR and gregmisc.
Both had long been split into their component packages, making VR and gregmisc part of the set of packages no longer on the top page of CRAN, but only in its archive section. Other examples are packages such as Design, which its author Frank Harrell renamed to rms to match the title of the book covering its methodology. And then there are of course packages for which the maintainer disappeared, or lost interest, or was unable to keep up with the quality requirements imposed by CRAN. All these packages are of course still in the Archive section of CRAN. But how many packages did disappear? Well, compared to the information accumulated by CRANberries over the years, as of today a staggering 282 packages have been withdrawn for various reasons. And I, at least, would like to know more regularly when this happens, if only so I have a chance to see whether the retired package is one of the 120+ packages I still look after for Debian (as happened recently with two Rmetrics packages). So starting with the next scheduled run, CRANberries will also report removed packages, in its own subtree of the website and its own RSS feed (which should appear at this URL). I made the required code changes (all of about two dozen lines), and did some light testing. To not overwhelm us all with line noise while we catch up to the current steady state of packages, I have (temporarily) lowered the frequency with which CRANberries is called by cron. I also put a cap on the number of removed packages that are reported in each run. As always with new code, there may be a bug or two, but I will try to catch up in due course. I hope this is of interest and use to others. If so, please use the RSS feeds in your RSS readers, subscribe to the @CRANberriesFeed, and keep using CRAN, and let's all say thanks to Kurt, Stefan, Uwe, and everybody who is working on CRAN (or has been in the past).
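The comparison CRANberries performs on each run is compact enough to sketch. Assuming the previous run's (package, version) pairs live in a local SQLite table, as described above, three set operations yield the new, updated, and removed packages. The table name, schema, and sample data below are illustrative, not the actual CRANberries code.

```python
import sqlite3

# Previous run's state, persisted locally (schema is illustrative).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE packages (name TEXT PRIMARY KEY, version TEXT)")
db.executemany("INSERT INTO packages VALUES (?, ?)",
               [("Rcpp", "0.9.7"), ("VR", "7.3-6"), ("zoo", "1.7-4")])

stored = dict(db.execute("SELECT name, version FROM packages"))
current = {"Rcpp": "0.9.8", "zoo": "1.7-4", "xts": "0.8-2"}  # what CRAN has now

new_pkgs     = sorted(current.keys() - stored.keys())       # added since last run
removed_pkgs = sorted(stored.keys() - current.keys())       # withdrawn packages
updated_pkgs = sorted(p for p in current.keys() & stored.keys()
                      if current[p] != stored[p])           # version changed

print(new_pkgs, updated_pkgs, removed_pkgs)
# ['xts'] ['Rcpp'] ['VR']
```

The real tool then writes summaries (DESCRIPTION contents for new packages, diffstat output for updates) and refreshes the stored state for the next run.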

17 January 2011

David Welton: The Kindle and Book Sharing

I'm very pleased with the Kindle; all of a sudden, I can quickly, easily, and fairly cheaply buy lots of English language books that otherwise I would have had to order (not cheap, not quick). Furthermore, they're not cluttering up my house, either! Not that I view books as clutter, but sooner or later we're going to have to move out of the apartment we're in, and the less I have to haul around, the better. However, the fact that it's not easy to share the books with my wife is starting to annoy me. I could always give her the Kindle itself to read something, but that would deprive me of not only that book, but all the others I have as well. Indeed, the difficulty of the problem lies in the difference between digital goods and physical goods. Real books are: Ebooks are: Setting aside the costs and benefits to society of DRM, intellectual property, and so on, let's take the point of view of Amazon, book publishers and authors, who all want to maximize their income, and of readers, who want as much freedom as possible, and also, for the sake of argument, believe that paying for books leads to more money for authors, which leads to more books. I think the trick is to try and make the ebook market as similar as possible to that for real books. For instance, if you had the freedom to resell an ebook quickly and easily, the price would likely start dropping quickly, as people started selling their "used" copies of books they didn't want to keep. This would be good for the consumer, but potentially quite bad for authors, as fewer new copies would be sold. Sure, we have used book stores, but the practical limitations involved, and the fact that real books show some wear and tear, mean that there's still an incentive to buy new books. On the other hand, an ebook ought to be a bit cheaper than a real book if it's something you'd consider reading and then selling, because you no longer have that option, so the lack of freedom should be priced in to the original price. 
The other problem they seem to be having is with the lending of books. If you lend a real book, you can only lend it to someone who you see with some frequency, so there's already a big geographical limitation. Also, while they're reading it, you obviously can't. You have to have some trust in them, that they'll give it back to you sooner or later, so you won't just loan it to anyone: if you wanted a transaction with more finality, you'd simply sell it. Currently, Kindle books can only be lent for two weeks, and you can't access them while the other person has them. I think the latter part of that is fair, but two weeks seems like an arbitrary number. Remember though, that they don't want you to be able to just give someone a book, with no strings attached and no return-by date, because then you could also sell it, undercutting their store prices, and so on, as above. So to make lending work, they have to have some "strings attached". I think "two weeks" is a fairly arbitrary string though. What would work better? Perhaps a way that permanently differentiates the original purchaser from anyone borrowing it. For instance, a "recall now" button? This button would, when the original purchaser pressed it, disable the book on the borrower's device and re-enable it on the buyer's device. This would prevent secondary markets, or at least limit them, because the 'borrowed' books would be inferior to the originals. And yet it might still work well for lending; I could lend books to people who trust me without problems. Interestingly, I became aware of this site the other day: so it looks like whatever Amazon does, there are going to be tradeoffs involved. And of course, there is the point of view that all of the costs involved by DRM are too high, but that's a fairly involved discussion in its own right. What do you think?
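The "recall now" idea above amounts to a tiny state machine over who may currently open the book. The sketch below is entirely hypothetical (nothing like Amazon's actual system): lending transfers access to the borrower with no due date, and recalling returns it at the owner's request.

```python
class LendableBook:
    """Hypothetical lend/recall state machine for a DRM ebook."""
    def __init__(self, owner):
        self.owner = owner
        self.reader = owner  # whoever can currently open the book

    def lend(self, borrower):
        if self.reader != self.owner:
            raise RuntimeError("already lent out")
        self.reader = borrower  # owner loses access while it is lent

    def recall(self):
        self.reader = self.owner  # the "recall now" button

book = LendableBook("me")
book.lend("my wife")
print(book.reader)  # my wife: borrower can read, owner cannot
book.recall()
print(book.reader)  # me: access returns, with no fixed due date
```

Because a borrowed copy can be yanked back at any moment, it is strictly inferior to an owned one, which is what would limit a secondary market while keeping lending among trusted people painless.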

25 August 2010

Eddy Petrișor: [content:lang:ro] I wrote to my representative

This post concerns Romanian politics and internal affairs, and the improper direction of public funding towards new hideous churches at the expense of education, culture, health, technical development and old monuments. Thus I will write this in Romanian. Curious foreigners can try to read the text via Google's translation.

Because we are in the 21st century, and because I consider it immense stupidity to pour money into churches at the expense of education, culture, health, technical development and old monuments, while Romanian politics seems eager to hand money to religious denominations, I concluded it was time to write to my (theoretical) representative in Parliament, Deputy Florin Iordache. I found out which electoral college I belong to by looking here, and then in the official list of deputies.

I sent him the text below:

My name is Petrișor Eddy and I am one of the people you represent in the Romanian Parliament. This is the first time I have written to a member of parliament who represents me in the governance of the Romanian state, and I do so in order to express, clearly and emphatically, my ideas and my opinion on the subject I will discuss below. I am 30 years old, I am a programmer, and I have worked and paid my taxes ever since I was first employed, from my fourth year of university until today, that is, for some 6-7 years. No, this message is not a cry for help; I expect no handouts, I have never begged, I have always worked, even though I see that for some, fiscal parasitism has become a way of life. This is why I am outraged.

I am outraged because:
- although I pay taxes and duties amounting to over 60% of my gross income, I receive in return no services whatsoever, no infrastructure, no respect, no efficiency
- although I have been working for some time, I have noticed that everywhere I worked, competence and performance were a necessity; in the state apparatus this is not the case
- although there are plenty of people like me who pay taxes, our voices are ignored and our money is thrown out the window by those who administer the collected funds.

Mr. Deputy, I am outraged that for 15 years, on average, roughly 4 schools have disappeared every day while, again on average, a new church has appeared every 2 days, and this happens with the acquiescence of the political apparatus. The hospital in Caracal, like every other hospital in Romania, is poorly equipped, short of staff, and has no funds for patients' medication; in short, it is a disaster in the making. And despite all this, the money goes to churches. Schools and land are being donated to the church, illegally and abusively, right here in Olt county.

The last period in which religion ruled the world was called the Middle Ages, and I believe anyone who has lived even 5 years in the 21st century can see that it is not religion that has brought us great benefits, but science and technology. In the Middle Ages, treatment consisted of bloodletting, cuts under the tongue, the removal of quicksilver, and all sorts of other superstitions with no positive effect, or even with negative effects.

Meanwhile humanity has progressed: we discovered medicine and electricity, explored the universe, landed on the Moon, put dozens of artificial satellites into orbit, learned a great deal about the universe and the world around us, invented new materials (plastic, waterproof fabrics, Kevlar), and investigated the world without preconceptions using the scientific method, and all this has improved our lives in every aspect. Life expectancy has risen from roughly 30-40 years in the Middle Ages to 70-80 years today, all thanks to applied science. It is thanks to technology and science that I can write these words to you and know they will reach you, not thanks to incantations or rituals, not thanks to telepathy, although this transmission medium comes very close to what telepathy is defined to be :-) . And yet some consider that religion must be funded simply because it is religion, not even because it might have social programmes or real, quantifiable contributions; no, just because it is religion.

I have read a great many well-documented articles in the press, and I feel I am living in an oblivious world when I see how the money that should go into roads, schools, hospitals, research, medicine and social programmes with real impact goes instead into ever more concrete churches, while historical monuments, wooden churches and the truly valuable monasteries are left to decay. I watch with a feeling of continuous disgust how we mock those who save our lives, the doctors; those who should educate us, the teachers; those who put out our houses if they catch fire, the firefighters; and how we elevate religious mysticism to the rank of virtue. It is downright insulting and absurd that scientific literacy in the 21st century is so low that 42% of the population believes the Earth is the centre of the universe, that there are people who believe astrology is scientific, that we do NOT have even one university in the world's top 500; it is insulting that we are permanently at the bottom of every positive ranking and at the top of every negative one.

Recently an article was published which estimated, very conservatively, that the wealth of the Romanian Orthodox Church amounts to over 3 billion euros; it enjoys tax exemptions and runs dozens of businesses, and yet the state continues to exempt religious denominations from taxes, and even keeps handing them money. With such wealth, I do not believe there can be any talk of a need for state funding. People can decide to donate money directly to their churches, which would also avoid the losses inherent in the state's administration of those funds.

The state is not some separate organism; the state must work for me, the citizen, and I am against this kind of mockery and voluntary backwardness of ours, directed against ourselves. Alexandru Ioan Cuza secularized the estates of the monasteries dedicated to Mount Athos, that is, to foreign third parties, as we learn from the writings of the historian A.D. Xenopol, so the state is under no obligation whatsoever to finance an anachronistic institution:

A.D. Xenopol, "Domnia lui Cuza Vodă", Vol. I, chapter VIII, "The second Kogălniceanu ministry. The secularization", page 291:
"We shall touch only upon the circumstances that led to the recovery of the estates that had fallen into the possession of the foreign monks, addressing the question of the dedication of the native monasteries to those of the East only insofar as necessary in order to understand the measure of the secularization itself."

For these reasons and many others, and because you are also my representative in the Romanian Parliament, I ask you to vote for the bill initiated by Mr. Prigoană regarding the cutting of state funding for clerical personnel, as well as for any other laws that could lead to the elimination of the privileges enjoyed by religious denominations, given that the only real and correct reason an entity could benefit from tax exemptions or fiscal privileges is that it does not pursue financial interest and, at the same time, has a clearly quantifiable social impact.

If you wish, I would be glad to answer any questions or clear up anything in what I have written, should that be needed.

Thank you in advance!

I sent it on Thursday; I have not yet received any kind of reply. Step two is to phone. At least let them be driven mad, if they are going to be so insolent. So: you can find out which college you belong to, and then identify the person who represents you in Parliament. Write to them, call them; the time has come for them at least to get annoyed, if not to put on at least a veneer of doing their duty.

6 July 2010

Matt Zimmerman: We ve packaged all of the free software what now?

Today, virtually all of the free software available can be found in packaged form in distributions like Debian and Ubuntu. Users of these distributions have access to a library of thousands of applications, ranging from trivial to highly sophisticated software systems. Developers can find a vast array of programming languages, tools and libraries for constructing new applications. This is possible because we have a mature system for turning free software components into standardized modules (packages). Some software is more difficult to package and maintain, and I'm occasionally surprised to find something very useful which isn't packaged yet, but in general, the software I want is packaged and ready before I realize I need it. Even the long tail of niche software is generally packaged very effectively. Thanks to coherent standards, sophisticated management tools, and the principles of software freedom, these packages can be mixed and matched to create complete software stacks for a wide range of devices, from netbooks to supercomputing clusters. These stacks are tightly integrated, and can be tested, released, maintained and upgraded as a unit. The Debian system is unparalleled for this purpose, which is why Ubuntu is based on it. The vision, for a free software operating system which is highly modular and customizable, has been achieved.

Rough edges

This is a momentous achievement, and the Debian packaging system fulfills its intended purpose very well. However, there are a number of areas where it introduces friction, because the package model doesn't quite fit some new problems. Most of these are becoming more common over time as technology evolves and changes shape.

Why are we stuck?
I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
Abraham Maslow
The packaging ecosystem is very strong. Not only do we have powerful tools for working with packages, we also benefit from packages being a well-understood concept, and having established processes for developing, exchanging and talking about them. Once something is packaged, we know what it is and how to work with it, and it fits into everything else. So, it is tempting to package everything in sight, as we already know how to make sense of packages. However, this may not always be the right tool for the job. Various attempts have been made to extend the packaging concept to make it more general, for example: Other package managers aim to solve a specific problem, such as providing lightweight package management for embedded systems, or lazy dependency installation, or fixing the filesystem hierarchy. There is a long list of package managers of various levels which solve different problems. Most of these systems suffer from an important fundamental tradeoff: they are designed to manage the entire system, from the kernel through applications, and so they must be used wholesale in order to reap their full benefit. In other words, in their world, everything is a package, and anything which is not a package is out of scope. Therefore, each of these systems requires a separate collection of packages, and each time we invent a new one, its adherents set about packaging everything in the new format. It takes a very long time to do this, and most of them lose momentum before a mature ecosystem can form around them. This lock-in effect makes it difficult for new packaging technologies to succeed.

Divide and Conquer

No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won't account for the needs we will have tomorrow.
I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system.

But I like things how they are!

We don't have a choice. The world is changing around us, and distributions need to evolve with it. If we don't adapt, we will eventually give way to systems which do solve these problems. Take, for example, modern web browsers like Firefox and Chromium. Arguably the most vital application for users, the browser is coming under increasing pressure to keep up with the breakneck pace of innovation on the web. The next wave of real-time collaboration and multimedia applications relies on the rapid development of new capabilities in web browsers. Browser makers are responding by accelerating deployment in the field: both aggressively push new releases to their users. A report from Google found that Chrome upgrades 97% of their users within 21 days of a new release, and Firefox 85% (both impressive numbers). Mozilla recently changed their maintenance policies, discontinuing maintenance of stable releases and forcing Ubuntu to ship new upstream releases to users. These applications are just the leading edge of the curve, and the pressure will only increase. Equally powerful trends are pressing server applications, embedded systems, and data to adapt as well. The ideas I've presented here are only one possible way forward, and I'm sure there are more and better ideas brewing in distribution communities. I'm sure that I'm not the only one thinking about these problems. Whatever it looks like in the end, I have no doubt that change is ahead.

8 March 2010

Vincent Sanders: Music hath charms to soothe a savage breast, to soften rocks, or bend a knotted oak.

Since I last mentioned music back in January I have accumulated another ten albums, and unlike last time, where there were only a couple of standouts, this time I have the opposite problem.

The unordered list:

Justin Sandercoe - "Small town eyes"

I am learning to play the guitar and have been using Justin's course, which is very, very good. This album? Also very good. If you like melodic, guitar-led music with varied influences, this is for you. A couple of tracks made me immediately think of some Crowded House riffs (which is not a bad thing). The only minor niggle is the uneven levels on some of the louder pieces, but it really is a minor observation on an otherwise fine first album.

Molly Lewis - "I made you a CD, but I eated it."

Although this is only a short selection of original material from Molly, it is a very promising first album. I really like her voice, and although the ukulele is not generally the most well-respected of instruments, in her hands it has an odd charm. This album is available from DFTBA Records.

Rhett and Link - "Up to this point"

A pair of talented comedians who use music very effectively to highlight their humour. I originally stumbled across them on YouTube and decided to take a punt. The album is 27 short pieces which fit together surprisingly well. Difficult to categorise, but think a cross between Flight of the Conchords and Jonathan Coulton with a dash of YouTube immediacy.

They Might Be Giants - "Flood", "Apollo 18" and "John Henry"

Strictly a replacement for the old tapes, which have completely disintegrated in the intervening couple of decades since first purchased. Flood is still one of my favourite albums ever, certainly in my top 10. If you do not know them, TMBG are just ace; please try their music!

Seasick Steve - "Started out with nothin and i still got most of it left"

Well, it's a kinda fun album, primarily based on blues electric "guitar" (some of the instruments are little more than a stick with a nail in it and a guitar pickup). Nothing bad, easy to get along with, definitely worth a listen.

La Roux - "La Roux"

This synth pop album was on remainder in ASDA and I took a gamble. It's OK, I guess, and for three quid I cannot really complain.

Red Hot Chili Peppers - "By the way"

Not their best, but competent enough.

Aqua - "Aquarium" , "Aquarius"

Um... yes, I have a soft spot for '90s cheese, OK? Nothing more than a gross self-indulgence of my silly side. But they are fun ;-)

So that is my new music since January, all 166 tracks of it. Most of it is pretty good, certainly no lemons (well, aside from the Aqua, but that is supposed to be silly!)

Oh, and The XX has really grown on me since last time, and I am looking forward to their next release.

31 March 2009

Gunnar Wolf: E-voting and paper-based-voting - UNAM teaches us how to achieve the worst of all worlds

As my Institute's sysadmin, I was appointed as the person responsible for my Institute's certificate handling for today's voting session for the Universitary Council (Consejo Universitario). UNAM, Mexico's largest university, is moving towards an e-voting platform. I talked about this with our (sole) candidate for the Council, and she told me this has been used a couple of times already - and, as expected, it has led to having to repeat voting sessions, due in part to e-voting's inherent shortcomings: it is impossible to act on any kind of challenge. The only thing we have is an electronic vote trail, with no way to recount or to make sure that all votes got in. Besides, we had a completely unnatural and inadequate identification system, which means voters' identities cannot be trusted. Besides, we still have all the traditional university bureaucratic paper flow, which completely obscures any positive points this e-voting system might have had. Before going any further, if you are interested: there is a so-called security audit certificate for this system. In Spanish, yes. Take a look at it if you understand the language and want a few laughs. I will not make a detailed review of (what I could gather about) the setup. But to make things short: I had to go to the central administrative offices to get a CD-ROM with the monitoring station's SSL certificate. This certificate is tied to an IP address, so only one computer could be set up as a monitoring station. So far, so good. But what is the monitoring station's real role? You will probably laugh. The voting session (at my Institute - each dependency can specify its own opening and closing times) ran from 10:00 until 18:00. We were instructed to place this computer at a public location, from where: So, what is strange here? That there is a tremendous apparatus providing supposed security to... information that is completely worthless. Just protecting a number that is, for all purposes, public.
Oh, and the opening and closing of the booth - of course, the system could have flaws during the process, or inject spurious votes along the way, or flip-flop the votes cast whichever way. But, did I mention votes? So far I have not mentioned how people are supposed to vote. Together with our last paycheck, we got a piece of paper with all of the needed information: a randomly generated, 10-character-long password with mixed case and symbols, and the link to a web page2. This paper was folded, yes, but it was in no way secured - so whoever wanted to have all of our passwords could just go through the bunch of papers and get them. Now, contrasting with the strong perception of physical security surrounding the oh-so-important monitoring stations, how can a person vote? Oh, sure, just fire up your favorite browser and go to, produce your student number if you are a student or your full RFC3, select via checkboxes4, click on "submit", and voilà, you have voted. From any location, from any machine. Yes, the University's population is largely itinerant; many people will be voting from abroad and all. It is good to give them a voice. But... at what price? Let's see... The security audit mentions that the system is free from any malicious routine that can automatically alter the results, and that it has the minimum needed validations against spurious data injections from the most common Web browsers. However, if I were interested in modifying the results... I could put a trojan in a Faculty's laboratories which modifies the votes sent by their users (students vote as well). Yes, I'd have to know how the system works, but let's accept that security through obscurity does not work, and that this is a well-known system (as it has been used for over 3 years and is at version 3.5). PHP-based, for further points. Oh, and (if I recall correctly) a voter does not even get feedback as to which formula he voted for, so there is no way of knowing if the computer really sent the information I requested.
And given the low security of the password handling, I would not bet on it being worth much. Besides, this system was partly established to allow people to vote from abroad - as long as they picked up their March 10 paycheck. That excludes anybody who has spent over three weeks away! Many other things can be said. Last detail: e-voting's main selling point is that the results are known instantaneously, and (if no paper trail exists) no tedious re-counting is ever done, right? Meet university bureaucracy. Technology changes, but processes don't. The Local Electoral Surveillance Commission has the responsibility to enter the system once again after the vote has finished, and ask the server for the preliminary results. This consists of a tarball with the tally sheet (from the voters, who voted and who didn't), the total votes for each formula, and... one more file I don't remember. They also have to generate the signed legal documents where they testify to the received information. And then, ahem, they have to burn those files5 onto a CD-ROM, print them, and physically take them to the central administrative offices. Yes, take something from the server and get it to the server. For us it is not terrible (1.5 km can be readily done), but this same procedure must be done by people in other cities where there are University campuses holding elections. How nice! Anyway... the worst of both worlds: the inefficiencies of a paper-based election, together with the unaccountability of an e-voting election, sprinkled with a fake sense of security here and there. Bah.
  1. Except that it didn't. I guess they didn't stress-test the server, so every couple of minutes it returned a connection error. Of course, the page would no longer self-update. And after noticing that, I (and nobody else but me) had to go and give the password and certificate for the system to continue to operate.
  2. Which is - the Schooling Administration General Direction (DGAE), a university body which has no relation to electoral issues. DGAE made available a poster detailing how to vote... But, again, let's ignore that fact for now
  3. A nationwide ID number, largely derived from name and birth date data - both of which are often widely known, so it cannot be considered private in any way.
  4. Oh, for goodness' sake... The "ballot" has 1..n options, and each has a checkbox, not a radio button. That means you can select multiple options, which is of course invalid. Why? Because the electoral rules indicate that selecting more than one option on a ballot makes the ballot invalid, and thus a way of making it invalid must be provided. Isn't logic beautiful?!
  5. Want some more insight into what needs to be done? Take a look at the instructions. Don't forget to pay attention to the lexicon used - we are still asked to count the votes, an impossible feat given the vote is 100% system-based. Quote: "Los miembros de la CLVE realizarán, con base en el reporte del sistema, el cómputo de los votos depositados en la urna a favor de cada una de las fórmulas, declarando nulos los votos que procedan." ("The members of the CLVE will, based on the system's report, carry out the count of the votes deposited in the ballot box in favor of each of the formulas, declaring null those votes where applicable.")

3 January 2009

David Moreno: Ruby goodies: Modules and methods for my everyday Ruby

I do a lot of Web script processing - scraping, web services, systems administration, etc. Because I sometimes end up repeating small but useful chunks of code across different projects, I thought that, given that making new modules and methods is usually hassle-free in Ruby, I should make my own set of methods and goodies I constantly use. Example 1: I sometimes miss Perl's LWP::Simple simplicity, where I just pass a URL to a subroutine and get the content in a variable - a quick one-liner. Example 2: extract all the links on a given URL into an array that I can then iterate over and maybe fetch, given the first example. Getting all A links is very easy to do, say with a regex or with Hpricot (which should be the best way to parse HTML), but most of the time I (and other people, I'd bet) need absolute URLs, which is fairly more complex (relative and absolute URLs, BASE href declarations, etc. - the same case as in Feedbag). Well, for different cases like that one, I've started my own set of Ruby goodies. If you don't find them useful, I understand; they are mostly for my needs. I just want to make them public, because maybe some other people might indeed find them useful. Simple installation:
sudo gem install damog-goodies -s
As time and needs pass, I will be adding stuff to it. For the time being, here are both of the above examples in action:

>> require "goodies"
requiring /var/lib/gems/1.8/gems/damog-goodies-0.1/lib/goodies/array.rb
requiring /var/lib/gems/1.8/gems/damog-goodies-0.1/lib/goodies/lwr.rb
requiring /var/lib/gems/1.8/gems/damog-goodies-0.1/lib/goodies/html.rb
=> true
>> pp HTML::Links.find ""
=> nil
>> pp HTML::Links.find("").first(10)
=> nil
>> get ""
>> get("")[0, 100]
=> "<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" ""
Repository is, as usual, kindly hosted at GitHub on damog/goodies.
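The tricky part described above - resolving relative links against a page URL, with BASE href declarations taken into account - is essentially RFC 3986 reference resolution, which Ruby's stdlib URI already implements. Here is a minimal sketch of just that part; the absolutize name and its signature are illustrative assumptions, not the actual goodies API:

```ruby
require "uri"

# Hypothetical helper (not the actual damog-goodies API): turn a link found
# in a page into an absolute URL, honouring an optional <base href="..."> value.
def absolutize(page_url, href, base_href = nil)
  base = URI.parse(page_url)
  base += base_href if base_href  # a BASE declaration overrides the page URL
  (base + href).to_s              # URI#+ performs RFC 3986 reference resolution
end

puts absolutize("http://example.com/blog/index.html", "../about.html")
# => http://example.com/about.html
puts absolutize("http://example.com/blog/index.html", "post.html", "/archive/")
# => http://example.com/archive/post.html
```

Combined with a simple get wrapper around Net::HTTP, this covers both examples without any non-stdlib dependencies.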

6 December 2008

Biella Coleman: FLOSS Manuals: How to Bypass Internet Censorship

If you don’t know about this great project, you should. The nitty gritty details are to be found here. FLOSS Manuals Release Circumvention Book, How To Bypass Internet
Censorship December 4, 2008, Amsterdam A new book released by FLOSS Manuals, How to Bypass Internet Censorship,
describes circumvention tools and explains why you might want to use
them, and honestly describes the risks you must consider before
circumventing blockers or monitors. Blockers and monitors restrict
access to areas of the Internet, and this book describes simple
techniques for bypassing those restrictions. The book can be read or
downloaded for free as a PDF from, or you can purchase
a high-quality printed copy of the 200 page book through Lulu, an
on-demand printer, at for 10.83
($14.00). The growth of the Internet has been paralleled by attempts to control
how people use it, motivated by a desire to protect children,
businesses, personal information, the capacity of networks, or moral
interests, for example. Some of these concerns involve allowing people
to control their own experience of the Internet (for instance, letting
people use spam-filtering tools to prevent spam from being delivered to
their own e-mail accounts), but others involve restricting how other
people can use the Internet and what those other people can and can’t
access. The latter case causes significant conflicts and disagreements
when the people whose access is restricted don’t agree that the blocking
is appropriate or in their interest. Problems also arise when blocking
mechanisms and filters reduce access to useful business, health,
educational, and other information. Because of concerns about the effect of internet blocking mechanisms,
and the implications of censorship, many individuals and groups are
working hard to ensure that the Internet, and the information on it, are
freely available to everyone who wants it. There is a vast amount of
energy, from commercial, non-profit and volunteer groups, devoted to
creating tools and techniques to bypass Internet censorship. Some
techniques require no special software, just a knowledge of where to
look for the same information. Programmers have developed a variety of
more capable tools, which address different types of filtering and
blocking. These tools, often called “circumvention tools”, help Internet
users access information that they might not otherwise be able to see.
This book documents simple circumvention techniques such as a cached
file or web proxy, and also describes more complex methods using Tor,
which stands for The Onion Router, involving a sophisticated network of
proxy servers. How to Bypass Internet Censorship was written by eight writers in a
FLOSS Manuals ‘book sprint’ - a week-long intensive writing session, and
it also includes content from many different authors’ previous works on
the subject.
How to Bypass Internet Censorship will always be available for free from
the FLOSS Manuals Website. Each sale of the book generates $2.50 (USD).
100% of this income goes back into the development of more manuals about
free software. About FLOSS Manuals
FLOSS Manuals is a non-profit foundation and community creating a
collection of manuals that explain how to install and use a range of
free and open source software. The manuals are friendly and simple, and
they are intended to encourage people to explore the wide range of free
and open source software. FLOSS stands for Free, Libre Open Source
Software, and FLOSS Manuals intends to provide free manuals for free
software. The manuals on FLOSS Manuals are written by a community of
people; writers, editors, and technicians do a variety of things to keep
the manuals as up to date and accurate as possible. The way in which
FLOSS Manuals are written mirrors the way in which FLOSS software itself
is written: by a community who contribute to and maintain the content. FLOSS Manuals produces printed books, PDF books, and HTML output. Each
chapter from each manual can be recombined with other chapters to create
a new manual, which we call remix capability. An embed API lets you use
FLOSS Manuals to write the content and then embed the content into your
website. For more information contact:
Adam Hyde
Founder, FLOSS Manuals

10 March 2008

Dirk Eddelbuettel: PGApack 1.1: Almost as good as new

PGAPack is a rather nice and fairly small library for 'parallel' optimisation via genetic algorithms using the MPI message passing protocol. PGAPack 1.0 was written by David Levine while doing graduate work during the mid-1990s at Argonne Labs / University of Chicago. PGAPack has also been in or around Debian for a rather long time, but it suffered from benign neglect in the last few years. Some of this came to the fore in this bug report, which led to my offer to the then-maintainer Andreas to help with the relicensing request. After all, Argonne Labs is just a few miles from where I live, and I had already spent a little bit of time polishing and upgrading the package for my own exploratory use. So I called Rusty Lusk, head of the Mathematics and Computer Science division at Argonne, to try to sort this out. He was sympathetic and put me in email contact with David Levine. As we are all somewhat busy, this dragged on for a little longer than we thought --- but as of today, about one and a half years later, we have a new and shiny PGAPack 1.1 release, around twelve years after the initial 1.0 version came out. I have done a fair amount of polishing: there are now two library packages, for serial use (i.e. for debugging) as well as parallel use via MPI. We use Open MPI where available and LAM where not. All open Debian bugs have been addressed. One minor issue in the PostScript documentation remains, as David can no longer locate his LaTeX sources; I may just have to extract the text and re-LaTeX this from scratch to update it. One day. Anyway, for full reference, the changelog entry is below. The package is currently in the NEW queue (as the new sub-packages require manual inspection and approval) but should hit mirrors in a couple of days.
My thanks to the two previous Debian maintainers; to Rusty Lusk for helping with this from the MCS division's end at Argonne Labs and for suggesting the rather liberal and easy MPICH2 license (he happens to be one of the MPICH2 authors); and of course to David Levine for writing PGAPack in the first place, for agreeing to relicense it, and for giving valuable feedback on my repackaging of what is now version 1.1 on the MCS ftp server at Argonne --- this library has held up really well over the years; let's hope it will find more good use going forward.
pgapack (1.1-1) unstable; urgency=low
  * Really good news:  The MCS divsion of Argonne National Laboratories has
    agreed to relicense pgapack using the MPICH2 license. So pgapack
    is now Free Software and can move into Debian's main archive!
    Our thanks go to David Levine and Rusty Lusk to make this possible.
  * New maintainer, following Andreas' offer dated 2006-10-04 in #379388
  * debian/control: Change section to math		(Closes: #379388)
  * Added new brinary packages libpgapack-mpi1 and libpgapack-serial1
  * The MPI package is configured using Open MPI where available and LAM
    where not. 
  * debian/control: Changed Build-Depends: to use OpenMPI where available, 
    and LAM otherwise.
  * Finally acknowledges old NMUs 		(Closes: #379168,#359549)
  * source/integer.c: Apply patch for one-off error 	(Closes: #333381)
  * source/report.c:  Do not unconditionally print at generation 1
  * debian/rules: Remove a bashism 			(Closes: #379168)
  * debian/rules: Install examples directly 		(Closes: #134331)
  * debian/control: libpgapack-lam1 Depends on lam4 	(Closes: #60376)
  * debian/rules: Rewritten using debhelper
  * debian/control: Added Build-Depends: section for debhelper
  * No longer install mpi.h in /usr/include		(Closes: #404027)
  * debian/control: Updated Standards-Version: to current version
  * man/man1/PGAGetCharacterAllele.1: fix whatis entry 		(lintian)
 -- Dirk Eddelbuettel   Mon, 10 Mar 2008 18:03:34 -0500

16 October 2007

Marc 'Zugschlus' Haber: A thousand things I never wanted to know about X

In the last few days, I have replaced the two 20 inch CRT monitors that I have hardly used the last years with two 20 inch TFT displays, and my company (finally) gave me a 19 inch TFT display to accompany my notebook display at work. Maybe I should take that as a hint that they want to see me in the office more frequently rather than in my home office which I generously use these days. At home, I built a “new” computer from mainly used parts to drive the two 20 inchers. I have learned a lot about X in the last days, but spent too much time with it. I have been running my Desktop with older X and two CRTs for quite some time, but I hardly ever used it due to the setup of my now ex-flat. Back then, I learned that the “two monitors, one display, one screen, one desktop” option of X was called Xinerama and statically configured: You write down your display layout in xorg.conf and it uses it. Configuration changes mean restarting the X server. KDE notices Xinerama automatically, designates one monitor the “main monitor” and displays its panel there. You can move windows from one monitor to the other one, but changing virtual desktops always changes what both monitors display. This is not really what I want, since I often have “static” contents (news reader, mail reader, chat windows, help screens etc) on one monitor while I do my normal work on the other monitor, which involves changing virtual desktops. So I was rather happy when I learned upon the arrival of the external TFT display for the notebook that you can work without Xinerama. In that case, the X server comes up with one display and two screens, which KDE uses to display two independent desktops which can be independently configured, but can both be accessed with one mouse and one keyboard. 
For me, this means that I finally can independently change virtual desktops on both monitors, but I pay the price that I cannot display a certain virtual desktop on monitor 1 now and on monitor 2 in five minutes, and that a window opened on monitor 1 is never going to be shown on monitor 2 without serious interference. This looked OK to me. Additionally, you can use different default font sizes on the different monitors, which comes in handy if both monitors are blatantly different in size (which is the case with the notebook and its external monitor). However, with that setup, KDE seems to be challenged when it comes to saving the configuration. Some things (such as desktop background and panel config) seem to save fine, desktop-independent; other things (such as open windows and their positions) are lost when the session exits. My daily visit to Kevin&Kell (which used to be in a browser window that was saved with my session) has badly suffered since then. Then, a few weeks ago, somebody talked me into trying the X server from experimental. Which is when my dual-desktop setup ceased working. The nice people on #debian-x told me that Xinerama is a thing of the past, and is currently in the process of being replaced by Xrandr 1.2. The removal of “my” two-monitor, two-screens, two-desktops setup is collateral damage, and if I still want that behavior, the upstream considers this an issue to be handled by the Window Manager / Desktop Environment now. Gee, thanks. That’s what I call a regression. After spending half a day cursing and ranting, and even considering going back to lenny’s X server (or even etch’s), I finally settled on going with the “new” way, as I was going to lose my independent desktops sooner or later anyway. Again, with the friendly help of the people on #debian-x, I found out about a lot of advantages of the latest X servers. In most basic cases, the latest X server does not need a configuration file any more.
It autodetects your hardware and chooses whatever modes it finds optimal. It does a pretty decent job of doing so. If one wants to influence how the X system comes up, one can always write an xorg.conf file - the configfile-less server even writes the “virtual config file” it uses to the log as a starting point. Additionally, almost everything regarding screen layout can be reconfigured at run time using the xrandr binary from a command line. Even a hardware change (such as a new monitor plugged into the powered-on notebook) is correctly detected. Cute. This allows me to reconfigure my notebook depending on where it currently is and which external monitor (or projector) is connected. This could be perfected if I find out whether X offers a hook to plug a reconfiguration script into that is called when the X server starts or when the system is waking up from standby or hibernation with the X server running. A lot to do, and great potential for elegant stuff. However, while I am talking about hibernation, this is a downside of the new driver: when the system wakes up from hibernation, the X server sometimes confuses itself enough to become nearly unusable. I had it become psychotic when plugging in the external monitor (which can do 1600x1200 pixels) after waking it up from hibernation at the office. The internal display can only do 1400x1050, and the box thought that the external monitor only had 1050 y-axis pixels, displaying bit garbage in the lower part of the big monitor, and showing a usable, but unintelligible panel. When opening any browser or any other application with considerable amounts of text in the window, all fonts become unreadable once one scrolls down. Restarting the X server helps and everything is fine. But I do not use hibernation in order to be forced to log out of X after waking up.
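Such a reconfiguration hook might start life as a dry-run sketch like the following. The output names (LVDS-1, VGA-1) and the placement are assumptions for illustration, and the commands are only printed, not executed:

```shell
#!/bin/sh
# Dry-run sketch of a display reconfiguration hook (hypothetical output
# names). Replace the echo in run() with "$@" to actually run the commands.
run() { echo "+ $*"; }

run xrandr --auto                                    # re-detect all outputs
run xrandr --output VGA-1 --auto --right-of LVDS-1   # external monitor on the right
```

Hooked into a resume script, this is exactly the kind of "reconfigure depending on where the notebook currently is" automation described above.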
After going back to the “one monitor mode”, the X server now thinks that my virtual desktop is 1600x1200 and only shows 1400x1050, with dead space at the right and lower sides of the virtual desktop. xrandr does not allow me to resize the framebuffer below 1600x1200. No idea what’s going on here. In most of the cases, calling xrandr --auto for each output will fix the issues at run time, but sometimes a session restart is needed. The code needs to stabilize quite a bit before becoming usable in production. There are a few disadvantages that will remain: both monitors will change their contents when the virtual desktop is changed, and both displays will use the same default font size (which might be inappropriate if the displays are of different sizes). But I think, when the issues are cleared up, that I can live with these disadvantages. The “new” desktop system that I built has a Matrox G450 dual head graphics card, and unfortunately, the Xrandr 1.2 enabled X server from experimental has shown itself to be unusable: upon some mode changes, the X server goes into an endless loop and proves itself to be unkillable short of SIGKILL. And if it dies, it sometimes pulls the entire system down with it. There is a corresponding bug in the upstream BTS, but it hasn’t received any noticeable attention from the upstream developers. So I might find myself configuring one last Xinerama config before I can finally completely migrate to Xrandr 1.2 on the desktop as well. Then I’ll probably need to bug the KDE upstream people to implement a two-desktops-on-a-single-screen feature in one of their future versions so that I can get my independent-desktop things back. But that’s a far far away future.

26 October 2006

Mike Hommey: Facts about Debian, Mozilla Firefox and Ubuntu

or How Mozilla Corporation is FUDing on Debian. Again. This time, Christopher Beard, Mr Mozilla Marketing, is talking about Firefox in Ubuntu: that it’s great, that Ubuntu is a good boy, that it cooperates with Mozilla, and that by cooperating, it can use the great Firefox name and logo. To be honest, I don’t care that Ubuntu calls Firefox Firefox instead of Iceweasel or whatever. I don’t care that Ubuntu doesn’t care about freedom as much as Debian does. I could not care less. But, indeed, I do care about (yet other) false claims about Firefox in Debian. Let’s see what Christopher has to say about it:
I understand that Ubuntu is based upon Debian. Is that the same or different than the IceWeasel browser that Debian is shipping with their latest release?
It’s different. The patches that Debian applied to the Mozilla source code (which then resulted in their IceWeasel product) are more significant in scope than those in what Ubuntu is shipping (and branding as an official Mozilla Firefox release). Firefox in Ubuntu represents a somewhat more modest set of divergences from original Mozilla source code.
Let’s check out what it is about. And as I don’t want people to doubt my words, I’ll give you, readers, the ability to check for yourself what it is all about. First, please download the latest currently available patch for Firefox 2.0 beta 2 in Debian. It has been in experimental for some weeks now. It is a bit different from the patch I described earlier, so I’ll also tell you what has been changed since this 2.0b2 patch: (Also note that another change has been done since then, which is actually reverting a previous patch on gfx/src/gtk/mozilla-decoder.cpp, that was adding code that became dead code when we changed the way to fix the bug.) Next, please download the official and supported Firefox diff from Ubuntu. Uncompress both these diff files, and create two directories (let’s call them ubuntudiff and debdiff) which will contain the individual patches for each file. To fill these directories, please use the following commands:

$ filterdiff --strip=1 firefox_2.0+0dfsg-0ubuntu3.diff > ubuntu.diff
$ filterdiff --strip=1 firefox_1.99+2.0b2+dfsg-1.diff > deb.diff
$ cd ubuntudiff
$ splitdiff -d -a ../ubuntu.diff
$ cd ../debdiff
$ splitdiff -d -a ../deb.diff
If you don’t have the splitdiff and filterdiff utilities, you can get them from the patchutils tools (you will also need another of these tools later). Now, let’s have fun with these split patches… Files that are only patched in Debian: (excluding any file that would be in the debian subdirectory, that contains maintainer scripts for build and installation)

$ LANG=C diff -ru debdiff/ ubuntudiff/ | grep "^Only in deb" | grep -v debian | wc -l
6
So, that’s 6 files that Debian patches that Ubuntu doesn’t. Let’s check what they are:

$ LANG=C diff -ru debdiff/ ubuntudiff/ | grep "^Only in deb" | grep -v debian

(reordering result for convenience)

Only in debdiff/: browser_base_content_aboutDialog.xul

Adds rows=5 to the user agent textbox so as to display the user agent string uncut by default

Only in debdiff/: content_html_content_src_nsGenericHTMLElement.cpp
Only in debdiff/: content_html_content_src_nsHTMLInputElement.cpp
Only in debdiff/: dom_src_base_nsGlobalWindow.cpp

It’s a patch I submitted in bugzilla #343953 and that got applied in Firefox 2.0. No surprise the patch is not applied on Ubuntu.

Only in debdiff/:

Patch I submitted in bugzilla #354413 and that got applied in Firefox 2.0.

Only in debdiff/:

Patch from bugzilla #325148 that got applied in Firefox 2.0 (see above), and a small patch to build the NSS library with debugging symbols (put -g in CFLAGS). That leaves only 2 small patches: CFLAGS = -g added to security/coreconf/ and rows=5 added to browser/base/content/aboutDialog.xul.

Files that are only patched in Ubuntu: (same exceptions as for Debian)

$ LANG=C diff -ru debdiff/ ubuntudiff/ | grep "^Only in ubuntu" | grep -v debian | wc -l
Yes, that’s right, that’s 45 files that Ubuntu patches that Debian doesn’t. Most are window sizes and similar values that upstream can’t get right because they are tuned for Windows, but that also includes some changes to the code and some other things. Let’s now check the differences in files that are patched in both. It’s a bit of shell black magic, but you can check by hand that it does nothing wrong:

$ LANG=C diff -ru debdiff/ ubuntudiff/ | filterdiff -p1 -x configure -x "debian*" | lsdiff --strip=1 | while read f; do diff -u debdiff/$f ubuntudiff/$f | awk "! /^--- |^\+\+\+ |^ |^[-+]*@@/ { print \"$f\"; exit }"; done

That gives a list of patch files that differ by more than line-number changes, and that do not apply to the debian directory or the configure script (which is generated from configure.in, and thus not what we really want to compare here). You can take this list and run interdiff between these files from the ubuntudiff and debdiff directories. I’ll explain for you what these differences are: Overall, Ubuntu applies the same set of patches as Debian, plus some more. A “somewhat more modest set of divergences”, huh?!? For what it’s worth, Ubuntu, like Debian, builds its Firefox with flat chrome and pango enabled.
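For reference, the interdiff step can be sketched roughly like this. interdiff comes from the same patchutils package as the other tools used above; the file name changed.list is an assumption of mine (one patch filename per line, as produced by the pipeline above), and the sketch creates it empty so it runs as-is without real patch directories:

```shell
#!/bin/sh
# Sketch: run interdiff over each split patch that the previous pipeline
# reported as differing between the two trees.
# "changed.list" is an assumed name; created empty here so the loop is a
# no-op when you don't have the debdiff/ and ubuntudiff/ directories.
: > changed.list
count=0
while read f; do
    echo "=== $f ==="
    # show what Ubuntu's version of this patch changes relative to Debian's
    interdiff "debdiff/$f" "ubuntudiff/$f"
    count=$((count+1))
done < changed.list
echo "compared $count patches"   # prints: compared 0 patches (empty list)
```

With a real list, each iteration prints the incremental diff between Debian’s and Ubuntu’s version of the same split patch, which is exactly the per-file comparison described above.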
What’s different in the shipping Ubuntu version of Firefox than the proposed Debian version of Firefox (that didn’t ultimately ship)?
Technically, changes include fixes to the User Agent string and the feed preview, as well as addressing issues of coherent branding. More significant than any specific difference in code, however, is Ubuntu’s commitment to work together with Mozilla and our community on releases going forward to ensure product quality and integrity.
Why are you working with Ubuntu when you wouldn’t work with Debian?
We did try to work with Debian and would prefer a situation in which we work together. Ultimately, Debian took a position that was consistent with their own policies, and not compatible with some of the exceptions to Mozilla trademark policies that we offered. While we understand and respect their decision not to work with us under our branding guidelines, Mozilla believes that brands like Firefox are important for consumer protection. In any event, Ubuntu developers are working closely with Mozilla developers to ensure product quality and features that are what users expect when they use Mozilla Firefox, which means that they’ll ship (and will continue to ship) a fully branded version.
Reading between the lines, that means Debian is not working with Mozilla. It’s not like we’re submitting patches. No. Never. Ever. I’m also glad to hear from Asa Dotzler, in comments to Mr Beard’s article, that this great collaboration with Ubuntu will lead to patches applied to Firefox (I guess the paid Canonical employees having more time to deal with Mozilla than the volunteer Debian maintainers may have helped, especially considering they didn’t have to do all that we had already prepared). Anyway, it’s not like some of the patches we sent got applied. So, while I’m at it, here is an exhaustive list of the bugs where we took or sent the patches that are applied to Iceweasel: #51429, #161826, #252033, #258429, #273524, #287150, #289394, #294879, #307168, #307418, #314927, #319012, #322806, #323114, #325148, #326245, #330628, #331781, #331785, #331818, #333289, #333308, #343953, #345077, #345079, #345080, #345413. These don’t cover the following patches (see the rationale for these in my previous article): It’s so great to spend so much time on a package, send patches, try to understand how things work to get patches applied, and yet see such denial and false claims about our work. So please, Christopher, Asa, and the others, just stop talking about Debian; it will be better for everyone. PS for Rob in comments out there: No, ColorZilla won’t work on Ubuntu, because of the ABI incompatibility I explained in my previous entry, which Mozilla doesn’t seem to care much about.

2 October 2006

Holger Levsen: While my keyboard gently beeps

I intended to write this blog entry yesterday but then I was having too much fun with live-package, fai, vservers, munin and other more serious stuff...

Let's start with a disclaimer: While I still think that encoding videos in Flash is braindead, I now sometimes happily watch those videos, which I download through some web service. I haven't yet bothered to play with this command-line tool, simply because I need it less than once a week... But at least I don't need Flash in the browser (mplayer plays the following videos just fine, xine-based players have problems with the sound, and vlc didn't work at all)...

Anyway, on Saturday I was pointed to "White and Nerdy", which is completely hilarious, for both the lyrics and the video. And you collect additional geek bonus points by spotting the error in the formula without the help of the internet! :)

Also really awesome is "While My Guitar Gently Weeps" performed on a ukulele. This song alone is worth dealing with the Flash crap. Copy kills music?? Hahahaha. The music industry kills music. Culture improves through "copying" and stays alive through copying.

28 December 2005

Axel Beckert: I changed my mind. I want a camera mobile phone.

Today I read and wrote about Semapedia, a service and toolset to encode Wikipedia URLs (and others too) as dot-matrix barcodes and print them out on leaflets, together with a mention of Wikipedia and the URL. Any visitor with a modern camera cell phone can then take an image of the barcode and decode it with the right software on the phone, which passes the decoded URL directly to the phone's web browser. This is the first useful application of camera phones I have ever heard about. And I see it as so useful that I may consider buying a camera cell phone with the next contract renewal, although until now I had focused all my search for a worthy successor to my Nokia 6310i on non-camera phones. (Update: And I’m not alone with the wish for a useful mobile phone.) The 6310i had nearly everything I needed: a big memory, long standby times (1.5 to 2 weeks), WAP incl. a WAP browser for reading Symlink on the road, GPRS, GSM 900/1800, T9, infrared, gnokii support, the same battery bay as my former mobile phones (Nokia 6210 and 6130), and the typical Nokia user interface, very intuitive and usable blindly. (Siemens mobiles suck!) It also had some things I didn’t need yet, but which sounded useful: voice dialing and voice recording, Java for playing with my own programs, Bluetooth for a cableless headset or so, and GSM 1900, because perhaps countries other than the US also use that frequency band. (I refuse to travel to the US, so I won’t need GSM 1900 there.) It had nothing I didn’t want to have in a mobile phone: camera, radio, MP3 player, standby-time-munching color display, e-mail client, MMS, MP3 ring tones, or flip covers. The only things I missed were a more modern Java VM and even more memory when Opera Mini came out, and maybe polyphonic ring tones, so I could have the Monkey Island theme as a ring tone. ;-) So what now? Being able to use Opera Mini and Semapedia means having a mobile phone with a camera and — and that’s the drawback — a color display.
Does anyone know of a Nokia camera phone on which Opera Mini runs, but without a color display? And with the battery bay from the 6x10 series? No? Or maybe I should just stay with the 6310i and get a second one in better condition (no broken case) from eBay or so? There were also (yet unconfirmed) rumours that my GSM provider E-Plus will offer the Linux-based internet tablet Nokia 770 for a contract renewal plus 80€ to 90€… Difficult decision…