Search Results: "lars"

11 June 2020

Antoine Beaupré: CVE-2020-13777 GnuTLS audit: be scared

So CVE-2020-13777 came out while I wasn't looking last week. The GnuTLS advisory (GNUTLS-SA-2020-06-03) is pretty opaque so I'll refer instead to this tweet from @FiloSottile (Go team security lead):
PSA: don't rely on GnuTLS, please. CVE-2020-13777 Whoops, for the past 10 releases most TLS 1.0-1.2 connections could be passively decrypted and most TLS 1.3 connections intercepted. Trivially. Also, TLS 1.2-1.0 session tickets are awful.
You are reading this correctly: supposedly encrypted TLS connections made with affected GnuTLS releases are vulnerable to a passive cleartext recovery attack (and an active one for 1.3, but who uses that anyways). That is extremely bad. It's pretty close to just switching everyone to HTTP instead of HTTPS, more or less. I would have a lot more to say about the security of GnuTLS in particular -- and security in general -- but I am mostly concerned about patching holes in the roof right now, so this article is not about that. This article is about figuring out what, exactly, was exposed in our infrastructure because of this.

Affected packages Assuming you're running Debian, this will show a list of packages that Depends on GnuTLS:
apt-cache --installed rdepends libgnutls30 | grep '^ ' | sort -u
This assumes you run this only on hosts running Buster or above. Otherwise you'll need to figure out a way to pick machines running GnuTLS 3.6.4 or later. Note that this list shows only first-level dependencies! It is perfectly possible that another package uses GnuTLS without being listed here. For example, the above list includes libcurl3-gnutls, so to be really thorough I would actually need to recurse down the dependency tree (see the sketch below the list). On my desktop, this shows an "interesting" list of targets:
  • apt
  • cadaver - AKA WebDAV
  • curl & wget
  • fwupd - another attack on top of this one
  • git (through the libcurl3-gnutls dependency)
  • mutt - all your emails
  • weechat - your precious private chats
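For the recursion, apt-cache itself has a --recurse flag; the following Python sketch does the same walk explicitly, which makes it easy to inspect intermediate packages. This is just a rough illustration (the output parsing is an assumption about apt-cache's format, adjust as needed):

#!/usr/bin/env python3
# Rough sketch: recursively walk installed reverse dependencies of
# libgnutls30 by repeatedly calling apt-cache. apt-cache also has a
# --recurse flag that does this in one go; this version just makes the
# traversal explicit.
import subprocess

def installed_rdepends(pkg):
    out = subprocess.run(
        ["apt-cache", "--installed", "rdepends", pkg],
        capture_output=True, text=True, check=True).stdout
    # Dependent packages are printed as indented lines, possibly
    # prefixed with "|" for alternative dependencies.
    return {line.strip().lstrip("|") for line in out.splitlines()
            if line.startswith(" ")}

def walk(root):
    seen, todo = set(), {root}
    while todo:
        for dep in installed_rdepends(todo.pop()):
            if dep not in seen:
                seen.add(dep)
                todo.add(dep)
    return seen

if __name__ == "__main__":
    for pkg in sorted(walk("libgnutls30")):
        print(pkg)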
Arguably, fetchers like apt, curl, fwupd, and wget rely on HTTPS for "authentication" more than secrecy, although apt has its own OpenPGP-based authentication so that wouldn't matter anyways. Still, this is truly distressing. And I haven't mentioned here things like gobby, network-manager, systemd, and others - the scope of this is broad. Hell, even good old lynx links against GnuTLS. In our infrastructure, the magic command looks something like this:
cumin -o txt -p 0 'F:lsbdistcodename=buster' "apt-cache --installed rdepends libgnutls30 | grep '^ ' | sort -u" | tee gnutls-rdepds-per-host | awk '{print $NF}' | sort | uniq -c | sort -n
There, the result is even more worrisome, as those important packages seem to rely on GnuTLS for their transport security:
  • mariadb - all MySQL traffic and passwords
  • mandos - full disk encryption
  • slapd - LDAP passwords
mandos is especially distressing although it's probably not vulnerable because it seems it doesn't store the cleartext -- it's encrypted with the client's OpenPGP public key -- so the TLS tunnel never sees the cleartext either. Other reports have also mentioned the following servers link against GnuTLS and could be vulnerable:
  • exim
  • rsyslog
  • samba
  • various VNC implementations

Not affected Those programs are not affected by this vulnerability:
  • apache2
  • gnupg
  • python
  • nginx
  • openssh
This list is not exhaustive, naturally, but serves as an example of common software you don't need to worry about. The vulnerability only exists in GnuTLS, as far as we know, so programs linking against other libraries are not vulnerable. Because the vulnerability affects session tickets -- and those are set on the server side of the TLS connection -- only users of GnuTLS as a server are vulnerable. This means, for example, that while weechat uses GnuTLS, it will only suffer from the problem when acting as a server (which it does, in relay mode) or, of course, if the remote IRC server also uses GnuTLS. Same with apt, curl, wget, or git: it is unlikely to be a problem because it is only used as a client; the remote server is usually a webserver -- not git itself -- when using TLS.

Caveats Keep in mind that just because a package links against GnuTLS doesn't mean it actually uses it. For example, I have been told that, on Arch Linux, if both GnuTLS and OpenSSL are available, the mutt package will use the latter, so it's not affected. I haven't confirmed that myself, nor have I checked on Debian. Also, because the vulnerability relies on session tickets, there's a time window after which the ticket gets cycled and properly initialized. But that is apparently 6 hours by default, so it is only going to protect really long-lasting TLS sessions, which are uncommon, I would argue. My audit is limited. For example, it might have been better to walk the shared library dependencies directly, instead of relying on Debian package dependencies.
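For what it's worth, here is a minimal sketch of that more direct approach: walk the executables in $PATH with ldd and flag those that pull in libgnutls. It misses libraries loaded at runtime with dlopen(), so treat it only as a rough lower bound:

#!/usr/bin/env python3
# Rough sketch: list executables in $PATH whose shared library
# dependencies (as reported by ldd) include libgnutls.
import os
import subprocess

def links_gnutls(path):
    try:
        out = subprocess.run(["ldd", path], capture_output=True,
                             text=True, timeout=10).stdout
    except (OSError, subprocess.TimeoutExpired):
        return False
    return "libgnutls" in out

for directory in os.environ.get("PATH", "").split(os.pathsep):
    if not os.path.isdir(directory):
        continue
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.access(path, os.X_OK) and links_gnutls(path):
            print(path)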

Other technical details It seems the vulnerability might have been introduced in this merge request, itself following an (entirely reasonable) feature request to make it easier to rotate session tickets. The merge request was open for a few months and was thoroughly reviewed by a peer before being merged. Interestingly, the vulnerable function (_gnutls_initialize_session_ticket_key_rotation) explicitly says:
 * This function will not enable session ticket keys on the server side. That is done
 * with the gnutls_session_ticket_enable_server() function. This function just initializes
 * the internal state to support periodical rotation of the session ticket encryption key.
In other words, it thinks it is not responsible for session ticket initialization, yet it is. Indeed, the merge request fixing the problem unconditionally does this:
memcpy(session->key.initial_stek, key->data, key->size);
I haven't reviewed the code and the vulnerability in detail, so take the above with a grain of salt. The full patch is available here. See also the upstream issue 1011, the upstream advisory, the Debian security tracker, and the Redhat Bugzilla.
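To make "passive cleartext recovery" concrete: a session ticket encrypted under a key that is effectively all zeros is only obscured, not protected, because the attacker knows that key too. Here is a toy illustration with the pyca/cryptography package -- this is not GnuTLS's actual ticket format, just the general idea:

# Toy illustration only: encrypting with an all-zero key gives no secrecy,
# because any passive observer can use the same well-known key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ZERO_KEY = bytes(32)              # what the buggy code path effectively used
server = AESGCM(ZERO_KEY)
nonce = os.urandom(12)
ticket = server.encrypt(nonce, b"session resumption secrets", None)

# A passive attacker who captured the ticket (and nonce) just does the same:
attacker = AESGCM(bytes(32))
print(attacker.decrypt(nonce, ticket, None))  # b'session resumption secrets'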

Moving forward The impact of this vulnerability depends on the affected packages and how they are used. It can range from "meh, someone knows I downloaded that Debian package yesterday" to "holy crap my full disk encryption passwords are compromised, I need to re-encrypt all my drives", including "I need to change all LDAP and MySQL passwords". It promises to be a fun week for some people at least. Looking ahead, however, one has to wonder whether we should follow @FiloSottile's advice and stop using GnuTLS altogether. There are at least a few programs that link against GnuTLS because of the OpenSSL licensing oddities, but a fix for those was first announced in 2015, then definitively and clearly resolved in 2017 -- or maybe that was in 2018? Anyways it's fixed, pinky-promise-I-swear, except if you're one of those weirdos still using GPL-2, of course. Even though OpenSSL isn't the simplest or most secure TLS implementation out there, it could be preferable to GnuTLS, and maybe we should consider changing Debian packages to use it in the future. But then again, the last time something like this happened, it was Heartbleed and GnuTLS wasn't affected, so who knows... It is likely that people don't have OpenSSL in mind when they suggest moving away from GnuTLS and instead think of other TLS libraries like mbedtls (previously known as PolarSSL), NSS, BoringSSL, LibreSSL and so on. Not that those are totally sinless either... "This is fine", as they say...

2 June 2020

Lisandro Damián Nicanor Pérez Meyer: Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa) - Fighting COVID-19

I have been quite absent from Debian stuff lately, and even more so since COVID-19 hit us. In this blog post I'll try to sketch what I have been doing to help fight COVID-19 these last few months.

In the beginning

When the pandemic reached Argentina the government started a quarantine. We engineers (like engineers around the world) started to think about how to put our abilities to use to help with the situation. Some worked toward providing more protective equipment for medical staff, some towards increasing the number of ventilation machines available. Another group of people started thinking of other ways of helping. In Bahía Blanca the idea arose of monitoring some variables remotely and en masse.

Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa)

This is where the idea of remotely monitored devices came in, and MoSimPa (from the Spanish "monitoreo simplificado de pacientes en situación de internación masiva") started to take form. The idea is simple: oximetry (SpO2), heart rate and body temperature will be recorded and, instead of being shown on a display on the device itself, will be transmitted and monitored in one or more places. In this way medical staff don't have to reach a patient constantly, and one staff member can monitor more patients at the same time. In-place monitoring can also happen using a cellphone or tablet.

The devices do not have a screen of their own and almost no buttons, making them cheaper to build and thus more in line with the current economic reality of Argentina.


This is where the project Para Ayudar was created. The project aims to produce the aforementioned non-invasive device to be used in health institutions, hospitals, intra-hospital transports and homes.

It is worth noting that the system is designed as a complementary measure for continuous monitoring of a patient. Care should be taken to check that symptoms and overall patient status don't mean an immediate life threat. In other words, it is NOT designed for ICUs.

All of the above is done with Free/Libre/Open Source software and hardware designs. Any manufacturing company can then use them for mass production.

The importance of early pneumonia detection


We were already working on MoSimPa when an NYTimes article caught our attention: "The Infection That's Silently Killing Coronavirus Patients". From the article:

A vast majority of Covid pneumonia patients I met had remarkably low oxygen saturations at triage, seemingly incompatible with life, but they were using their cellphones as we put them on monitors. Although breathing fast, they had relatively minimal apparent distress, despite dangerously low oxygen levels and terrible pneumonia on chest X-rays.

This greatly reinforced the idea we were on the right track.

The project from a technical standpoint


As the project is primarily designed for and by Argentinians, the current system design and software documentation are written in Spanish, but the source code (or at least most of it) is written in English. Should anyone need it in English, please do not hesitate to ask me.

General system description

System schema

The system is comprised of the devices, a main machine acting as a server (in our case, for small setups, a Raspberry Pi) and the possibility of accessing data through cell phones, tablets or other PCs on the network.

The hardware


As of today this is the only part for which I still can't provide schematics, but I'll update this blog post and the technical documentation with them as soon as I get my hands on them.

Again, the design is meant to be built in Argentina, where getting our hands on hardware is not easy. Moreover, it needs to be as cheap as possible, especially now that the Argentinian currency, the peso, is more depreciated every day. So we decided on using an ESP32 as the main microprocessor and a set of Maxim sensor devices. Again, more info when I have them at hand.

The software


Here we have many more components to describe. Firstly the ESP32 code is done with the Arduino SDK. This part of the stack will receive many updates soon, as soon as the first hardware prototypes are out.

For the rest of the stack I decided to go ahead with whatever is available in Debian stable. Why? Well, Raspbian provides a Debian stable-based image and I'm a Debian Developer, so things should come naturally to me on that front. Of course each component has its own packaging. I'm one of Debian's Qt maintainers, so using Qt will also be quite natural for me. Plots? Qwt, of course. And with that I have most of my needs fulfilled. I chose PostgreSQL as the database server and Mosquitto as the MQTT broker.

Between the database and MQTT sits mosimpa-datakeeper. The piece of software from which medical staff monitor patients is unsurprisingly called mosimpa-monitor.
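I have not shown the mosimpa-datakeeper code here, but conceptually such a bridge is small. The following is a purely hypothetical sketch using paho-mqtt and psycopg2; the topic layout, table and column names are invented for illustration and do not describe the real MoSimPa schema:

# Hypothetical MQTT-to-PostgreSQL bridge in the spirit of mosimpa-datakeeper.
# Topics, table and column names are made up; paho-mqtt 1.x callback style.
import json
import paho.mqtt.client as mqtt
import psycopg2

db = psycopg2.connect(dbname="mosimpa", user="datakeeper")

def on_connect(client, userdata, flags, rc):
    client.subscribe("mosimpa/+/vitals")

def on_message(client, userdata, msg):
    # Payload assumed to look like {"spo2": 97, "heart_rate": 80, "temp": 36.7}
    reading = json.loads(msg.payload)
    device = msg.topic.split("/")[1]
    with db, db.cursor() as cur:
        cur.execute(
            "INSERT INTO readings (device, spo2, heart_rate, temperature)"
            " VALUES (%s, %s, %s, %s)",
            (device, reading.get("spo2"), reading.get("heart_rate"),
             reading.get("temp")))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()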

mosimpa-monitor
MoSimPa's monitor main screen

mosimpa-monitor plots
Plots of a patient's data


mosimpa-monitor-alarms-setup
Alarm thresholds setup


And for managing patients, devices, locations and internments (CRUD anyone?) there is currently a Qt-based application called mosimpa-abm.

mosimpa-abm
ABM main screen


mosimpa-abm-internments
ABM internments view

The idea is to replace it with a web service so it doesn't need to be confined to the RPi or require installation on other machines. I considered using WebAssembly, but I would have to also build PostgreSQL in order to compile Qt's plugin.

Translations? Of course! As I have already mentioned, the code is written in English. Qt makes it easy to translate applications, so I keep a Spanish translation up to date as the code changes (we are primarily targeting Spanish-speaking people). But of course this also means it can be easily translated to whichever language is necessary.
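For those unfamiliar with the mechanism: translatable strings are wrapped in tr()/translate(), extracted with lupdate, edited in Qt Linguist, and the compiled .qm file is loaded at startup. MoSimPa itself is C++/Qt, but the same API sketched from Python with PyQt5 looks roughly like this (file and directory names here are made up):

# Minimal sketch of loading a Qt translation at startup (PyQt5 used purely
# for illustration; MoSimPa itself is C++/Qt). Paths and names are invented.
import sys
from PyQt5.QtCore import QLocale, QTranslator
from PyQt5.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)

translator = QTranslator()
# Picks up e.g. translations/mosimpa-monitor_es.qm for a Spanish locale.
if translator.load(QLocale(), "mosimpa-monitor", "_", "translations"):
    app.installTranslator(translator)

# Strings passed through translate()/tr() are what lupdate extracts.
label = QLabel(QApplication.translate("main", "Heart rate"))
label.show()
sys.exit(app.exec_())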

Even though I am a packager, I still have some stuff to fix in the packaging itself, like letting datakeeper run with its own user. I just haven't gotten to it yet.



Certifications


We are working towards getting the system certified by ANMAT, the Argentinian equivalent of the USA's FDA.

Funding


While all the people involved are working ad honorem, funding is still required in order to buy materials, create the prototypes, etc. The project created payment links with Mercado Pago (in Spanish and in Argentinian pesos) and other bank methods (PDF, also in Spanish).

I repeat the links here with an approximation in US$.

- 500 AR$ (less than 8 US$)
- 1000 AR$ (less than 15 US$)
- 2000 AR$ (less than 30 US$)
- 3000 AR$ (less than 45 US$)
- 5000 AR$ (less than 75 US$)

You can check the current conversion rate at https://www.google.com/search?q=argentine+peso+to+us+dollars

The project was also presented at a funding call of the Argentinian Agencia de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación (Agencia I+D+i). More than 900 projects were presented and 64 funded, MoSimPa among them.

26 May 2020

Russ Allbery: Review: The Ten Thousand Doors of January

Review: The Ten Thousand Doors of January, by Alix E. Harrow
Publisher: Redhook
Copyright: September 2019
ISBN: 0-316-42198-7
Format: Kindle
Pages: 373
In 1901, at the age of seven, January found a Door. It was barely more than a frame in a ruined house in a field in Kentucky, but she wrote a story about opening it, and then did.
Once there was a brave and temeraryous (sp?) girl who found a Door. It was a magic Door that's why it has a capital D. She opened the Door.
The Door led to a bluff over the sea and above a city, a place very far from Kentucky, and she almost stayed, but she came back through the Door when her guardian, Mr. Locke, called. The adventure cost her a diary, several lectures, days of being locked in her room, and the remnants of her strained relationship with her father. When she went back, the frame of the Door was burned to the ground. That was the end of Doors for January for some time, and the continuation of a difficult childhood. She was cared for by her father's employer as a sort of exotic pet, dutifully attempting to obey, grateful for Mr. Locke's protection, and convinced that he was occasionally sneaking her presents through a box in the Pharaoh Room out of some hidden kindness. Her father appeared rarely, said little, and refused to take her with him. Three things helped: the grocery boy who smuggled her stories, an intimidating black woman sent by her father to replace her nurse, and her dog.
Once upon a time there was a good girl who met a bad dog, and they became the very best of friends. She and her dog were inseparable from that day forward.
I will give you a minor spoiler that I would have preferred to have had, since it would have saved me some unwarranted worry and some mental yelling at the author: The above story strains but holds. January's adventure truly starts the day before her seventeenth birthday, when she finds a book titled The Ten Thousand Doors in the box in the Pharaoh Room. As you may have guessed from the title, The Ten Thousand Doors of January is a portal fantasy, but it's the sort of portal fantasy that is more concerned with the portal itself than the world on the other side of it. (Hello to all of you out there who, like me, have vivid memories of the Wood between the Worlds.) It's a book about traveling and restlessness and the possibility of escape, about the ability to return home again, and about the sort of people who want to close those doors because the possibility of change created by people moving around freely threatens the world they have carefully constructed. Structurally, the central part of the book is told by interleaving chapters of January's tale with chapters from The Ten Thousand Doors. That book within a book starts with the framing of a scholarly treatment but quickly becomes a biography of a woman: Adelaide Lee Larson, a half-wild farm girl who met her true love at the threshold of a Door and then spent much of her life looking for him. I am not a very observant reader for plot details, particularly for books that I'm enjoying. I read books primarily for the emotional beats and the story structure, and often miss rather obvious story clues. (I'm hopeless at guessing the outcomes of mysteries.) Therefore, when I say that there are many things January is unaware of that are obvious to the reader, that's saying a lot. Even more clues were apparent when I skimmed the first chapter again, and a more observant reader would probably have seen them on the first read. Right down to Mr. Locke's name, Harrow is not very subtle about the moral shape of this world. That can make the early chapters of the book frustrating. January is being emotionally (and later physically) abused by the people who have power in her life, but she's very deeply trapped by false loyalty and lack of external context. Winning free of that is much of the story of the book, and at times it has the unpleasantness of watching someone make excuses for her abuser. At other times it has the unpleasantness of watching someone be abused. But this is the place where I thought the nested story structure worked marvelously. January escapes into the story of The Ten Thousand Doors at the worst moments of her life, and the reader escapes with her. Harrow uses the desire to switch scenes back to the more adventurous and positive story to construct and reinforce the emotional structure of the book. For me, it worked extremely well. It helps that the ending is glorious. The payoff is worth all the discomfort and tension-building in the first half of the book. Both The Ten Thousand Doors and the surrounding narrative reach deeply satisfying conclusions, ones that are entangled but separate in just the ways that they need to be. January's abilities, actions, and decisions at the end of the book were just the outcome that I needed but didn't entirely guess in advance. I could barely put down the last quarter of this story and loved every moment of the conclusion. This is the sort of book that can be hard to describe in a review because its merits don't rest on an original twist or easily-summarized idea. 
The elements here are all elements found in other books: portal fantasy, the importance of story-telling, coming of age, found family, irrepressible and indomitable characters, and the battle of the primal freedom of travel and discovery and belief against the structural forces that keep rulers in place. The merits of this book are in the small details: the way that January's stories are sparse and rare and sometimes breathtaking, the handling of tattoos, the construction of other worlds with a few deft strokes, and the way Harrow embraces the emotional divergence between January's life and Adelaide's to help the reader synchronize the emotional structure of their reading experience with January's.
She writes a door of blood and silver. The door opens just for her.
The Ten Thousand Doors of January is up against a very strong slate for both the Nebula and the Hugo this year, and I suspect it may be edged out by other books, although I wouldn't be unhappy if it won. (It probably has a better shot at the Nebula than the Hugo.) But I will be stunned if Harrow doesn't walk away with the Mythopoeic Award. This seems like exactly the type of book that award was created for. This is an excellent book, one of the best I've read so far this year. Highly recommended. Rating: 9 out of 10

12 May 2020

Daniel Silverstone: The Lars, Mark, and Daniel Club

Last night, Lars, Mark, and I discussed Jeremy Kun's The communicative value of using Git well post. While a lot of our discussion was spawned by the article, we did go off-piste a little, and I hope that my notes below will enlighten you all as to a bit of how we see revision control these days. It was remarkably pleasant to read an article where the comments section wasn't a cesspool of horror, so if this posting encourages you to go and read the article, don't stop when you reach the bottom -- the comments are good and useful too.
This was a fairly non-contentious article for us, though each of us had points we wished to bring up and chat about, and it turned into a very convivial chat. We saw the main thrust of the article as being about using the metadata of revision control to communicate intent, process, and decision making. We agreed that it must be possible to do so effectively with Mercurial (thus deciding that the mention of it was simply a bit of clickbait / red herring) and indeed Mark figured that he was doing at least some of this kind of thing way back with CVS. We all discussed how knowing the fundamentals of Git's data model improved our ability to work with the tool. Lars and I mentioned how jarring it had originally been to come to Git from revision control systems such as Bazaar (bzr) but how over time we came to appreciate Git for what it is. For Mark this was less painful because he came to Git early enough that there was little more than the fundamental data model, without much of the porcelain which now exists. One point which we all, though Mark in particular, felt was worth considering was that of distinguishing between published and unpublished changes. The article touches on it a little, but one of the benefits of the workflow which Jeremy espouses is that of using the revision control system as an integral part of the review pipeline. This is perhaps done very well with Git based workflows, but can be done with other VCSs. With respect to the points Jeremy makes regarding making commits which are good for reviewing, we had a general agreement that things were good and sensible, to a point, but that some things were missed out on. For example, I raised that commit messages often need to be more thorough than one-liners, but Jeremy's examples (perhaps through expedience for the article?) were all pretty trite one-liners which perhaps would have been entirely obvious from the commit content. Jeremy makes the point that large changes are hard to review, and Lars pointed out that Greg Wilson did research in this area, and at least one article mentions 200 lines as a maximum size of a reviewable segment. I had a brief warble at this point about how reviewing needs to be able to consider the whole of the intended change (i.e. a diff from base to tip) not just individual commits, which is also missing from Jeremy's article, but that such a review does not need to necessarily be thorough and detailed since the commit-by-commit review remains necessary. I use that high level diff as a way to get a feel for the full shape of the intended change, a look at the end-game if you will, before I read the story of how someone got to it. As an aside at this point, I talked about how Jeremy included a 'style fixes' commit in his example, but I loathe seeing such things and would much rather it was either not in the series because it's unrelated to it; or else the style fixes were folded into the commits they were related to. We discussed how style rules, as well as commit-bisectability, and other rules which may exist for a codebase -- adherence to which would form part of the checks that a code reviewer may perform -- are there to be held to when they help the project, and to be broken when they are in the way of good communication between humans. In this, Lars talked about how revision control histories provide high level valuable communication between developers.
Communication between humans is fraught with error and the rules are not always clear on what will work and what won't, since this depends on the communicators, the context, etc. However whatever communication rules are in place should be followed. We often say that it takes two people to communicate information, but when you're writing commit messages or arranging your commit history, the second party is often a nebulous "other", and so the code reviewer fulfils that role to concretise it for the purpose of communication. At this point, I wondered a little about what value there might be (if any) in maintaining the metachanges (pull request info, mailing list discussions, etc) for historical context of decision making. Mark suggested that this is useful for design decisions etc but not for the style/correctness discussions which often form a large section of review feedback. Maybe some of the metachange tracking is done automatically by the review process causing the augmentation of the changes (e.g. by comments, or inclusion of design documentation changes) to explain why changes are made. We discussed how the "rebase always vs. rebase never" feeling flip-flopped in us for many years until, as an example, what finally won Lars over was that he wants the history of the project to tell the story, in the git commits, of how the software has changed and evolved in an intentional manner. Lars said that he doesn't care about the meanderings, but rather about a clear story which can be followed and understood. I described this as the switch from "the revision history is about what I did to achieve the goal" to being more "the revision history is how I would hope someone else would have done this". Mark further refined that to "The revision history of a project tells the story of how the project, as a whole, chose to perform its sequence of evolution." We discussed how project history must necessarily then contain issue tracking, mailing list discussions, wikis, etc. There exist free software projects where part of their history is forever lost because, for example, the project moved from Sourceforge to Github, but made no effort (or was unable) to migrate issues or somesuch. Linkages between changes and the issues they relate to can easily be broken, though at least with mailing lists you can often rewrite URLs if you have something consistent like a Message-Id. We talked about how cover notes, pull request messages, etc. can thus also be lost to some extent. Is this an argument to always use merges whose message bodies contain those details, rather than always fast-forwarding? Or is it a reason to encapsulate all those discussions into git objects which can be forever associated with the changes in the DAG? We then diverted into discussion of CI, testing every commit, and the benefits and limitations of automated testing vs. manual testing; though I think that's a little too off-piste for even this summary. We also talked about how commit message audiences perhaps include software too, with the recent movement toward conventional commits, and how, with respect to commit messages for machine readability, it can become very complex/tricky to craft good commit messages once there are multiple disparate audiences. For projects the size of the Linux kernel this kind of thing would be nearly impossible, but for smaller projects, perhaps there's value. Finally, we all agreed that we liked the quote at the end of the article, and so I'd like to close out by repeating it for you all... Hal Abelson famously said:
Programs must be written for people to read, and only incidentally for machines to execute.
Jeremy agrees, as do we, and extends that to the metacommit information as well.

24 April 2020

Mark Brown: Book club: Our Software Dependency Problem

A short while ago Daniel, Lars and I met to discuss Russ Cox's excellent essay Our Software Dependency Problem. This essay looks at software reuse in general, especially in the context of modern distribution methods like PyPI and NPM which make the whole process much more frictionless than traditional distribution methods used with languages like C. Possibly our biggest conclusion was that the essay is so eminently sensible that we mostly just talked about how much we agreed with it and how comprehensive it was; we particularly admired the clarity with which it explores how to evaluate the quality of free software projects. Next time we'll have to pick something more controversial to discuss!

7 April 2020

Shirish Agarwal: GMRT 2020 and lots of stories

First of all, congratulations to all those who got us the 2022 Debconf, so we will finally have a Debconf in India. There is, of course, a lot of work to be done between now and then. For those looking forward to visiting India, and especially Kochi, I would suggest you listen to this enriching tale.
I am sorry I used a youtube link, but it is too good a podcast not to be shared. Those who don't want youtube can use the invidio.us link shared below. https://www.invidio.us/watch?v=BvjgKuKmnQ4 I am sure there are a lot more details, questions, answers etc., but I would gently direct those to Praveen, Shruti, Balasankar and the rest who are from Kochi, if you have any questions about that history.

National Science Day, GMRT 2020 First, as always, we are and were grateful to both NCRA as well as GMRT for taking such good care of us. Even though Akshat was not around, probably getting engaged, a few of us were there. About 6-7 were from Mozilla Nasik while the rest represented the FOSS community. Here is a small picture which commemorates the event
National Science Day, GMRT 2020
There is and was a lot to share about the event. For example, Akshay had bought an RPI-Zero as well as an RPI-2 (Raspberry Pis) and showed some things. He had also brought a Debian stable live drive with persistence, although the glare from the sun was so strong that we couldn't show it clearly to students. This was also the case with the RPIs, but still we shared what and how much we could. Maybe next year we will either ask them to have double screens or give us a dark room so we can showcase things much better. We did try playing with contrast and all, but it didn't have much of an effect. Of course, in another stall a few students had used RPIs as part of their projects, so at times we did tell some of the newbies to go to those stalls and see and ask about those projects so they would have a much wider experience of things. The Mozilla people were pushing VR as well as Mozilla Lite, the browser for mobile. We also gossiped quite a bit. I shared about indicatelts, a third-party certificate extension, although I dunno if I should file a wnpp about it or not. We didn't have a good experience when I had put up an RFP (Request for Package), which was accepted, for an extension with similar functionality, which we later came to know was calling home and sharing both the URLs people were visiting and the IP address they were using it from. Sadly, it didn't leave a good taste in the mouth.

Delhi Riots One thing I have been disappointed with is the lack of general awareness about things, especially among the youth. We have people who didn't know that, for example, in the Delhi riots which happened recently, law and order (the police) lies with the Home Minister of India, Amit Shah. This is perhaps the only capital in the world which has its own Chief Minister but doesn't have any say over its law and order. And this has been the case for the last 70 years, i.e. since independence. The closest parallel I know of so far is the UK, but they too changed their tune in 2012. India, and especially Delhi, seems to be in a time-capsule which, while being dysfunctional, somehow is made to work. In many ways, it's a body split into three personalities, which often makes governance a messy issue, but that probably is a topic for another day. In fact, Scroll had written a beautiful editorial arguing that full statehood for Delhi was not only Arvind Kejriwal's call (AAP) but also something that both the BJP as well as Congress had asked for in the past. In fact, nothing about the policing is in AAP's power. All salaries, postings and transfers of police personnel, everything is done by the Home Ministry, so if any blame has to be given it has to be given to the Home Ministry.

American Capitalism and Ventilators America has had a history of high-cost healthcare, as can be seen in this edition of USA Today from 2017. The Affordable Care Act was signed into law by President Obama in 2010, and Mr. Trump curtailed it when he came into power a couple of years back. An estimated 80,000 people died due to seasonal flu in 2018-19. Similarly, anywhere between 24,000 and 63,000 are supposed to have died from last October to February-March this year. So the richest country can't take care of its population, which is a third of the population of this country, while at the same time the United States has thrice the area that India has. I am sharing this because seasonal flu also strikes the elderly as well as young children more than adults. So in one sense the vulnerable groups overlap, although from some of the recent stats, for Covid-19 even those who are 20+ are also vulnerable, but that's another story altogether. If you see the CDC graph of the seasonal flu, it is clear that American health experts knew about it. Another common factor joining both the seasonal flu and covid is that both need ventilators for the most serious cases. So, in 2007 it was decided that the number of ventilators needed to be ramped up; they had approximately 62k ventilators at that point in time all over the U.S. In 2010 the U.S. asked for bids and got a bid from a small Californian company called Newport Medical Instruments. The price of ventilators at the time was approximately INR 700,000 at 2010 prices, while Newport said they would be able to mass-produce them at INR 200,000 at 2010 prices. The company got the order and started designing the model, which needed to be certified by the FDA. By 2011 they had the product ready, when a big company called Covidien bought Newport Medical and shut down the project. This was shared in a press release in 2012. The whole story was broken by the New York Times again just a few days ago, highlighting how American capitalism rode roughshod over public health and put people's lives unnecessarily in jeopardy. If those new-age ventilators had become a reality, then not just the U.S. but India and many other countries would have bought them, as every country has the same or similar needs but is unable to pay the high cost, which in many cases would be passed on to citizens either as the price of service, or by raising taxes, or a mixture of both, with the public being none the wiser. Due to the dearth of ventilators, of specialized people to operate them, and of space, there is a possibility that many countries, including India, may have to make tough choices like the Italian doctors had to make as to whom to give a ventilator, and carry the mental and emotional guilt associated with those choices.

Some science coverage about diseases in The Wire and other publications Since Covid coverage broke out, The Wire has been bringing various reports of India's handling of various epidemics and mysteries, some solved, some still unsolved due to lack of interest or funding or both. The Nipah virus has been amply discussed in the movie Virus (2019), which I shared in the last blog post, and how easily Kerala could have turned out similar to Italy. Thankfully, only 24 people, including a nurse, succumbed to that outbreak, as shared in the movie. I had shared about Kerala nurses' professionalism when I was in hospital a couple of years back. It's no wonder that their understanding of hygiene and nursing procedures is a cut above the rest, hence they are sought after not just in India but the world over, including the US, the UK and the Middle East. Another study on respiratory illness was brought to my attention by my friend Pavithran.

Possibility of extended lockdown in India There was talk in the media of an extended lockdown, or better put, an environment is being created so that an extended lockdown can be done. This is probably in part due to a mathematical model and its derivatives shared by two Indian-origin Cambridge scholars, who predicted about a week back that a minimum 49-day lockdown may be necessary to flatten the covid curve.
Predictions of the outcome of the current 21-day lockdown (Source: Rajesh Singh, R. Adhikari, Cambridge University)
Alternative lockdown strategies suggested by the Cambridge model (Source: Rajesh Singh, R. Adhikari, Cambridge University)
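For readers wondering what such models look like: the textbook SIR equations already show the basic trade-off the Cambridge group is quantifying. Their actual model is age- and contact-structured; the sketch below is a plain SIR toy with made-up parameters, purely to illustrate how cutting contacts flattens the peak:

# Plain SIR toy model, not the Cambridge age-structured model; parameters
# are invented only to illustrate how reducing contacts flattens the curve.
def sir_peak(beta, gamma=1 / 14, days=365, i0=1e-6):
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(days):              # simple one-day Euler steps
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

print(f"peak infected, no restrictions: {sir_peak(beta=0.35):.1%}")
print(f"peak infected, contacts halved: {sir_peak(beta=0.175):.1%}")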
India caving to US pressure on Hydroxychloroquine While there has been a lot of speculation in the U.S. about Hydroxychloroquine as a wonder cure, last night Mr. Trump, in a response to a reporter, threatened India with retaliation if Mr. Modi were to say no to sharing Hydroxychloroquine.
As shared before, if youtube is not your cup of tea you can see the same on invidio.us: https://www.invidio.us/watch?v=YP-ewgoJPLw There have been several instances in the past of the U.S. trying to bully India, going all the way back to 1954. In recent memory, there were sanctions on India by the US under the Atal Bihari Vajpayee government (BJP) in 1998, but he didn't buckle under the pressure. Now we see our current PM taking down our own notification from a day ago and sharing not just Hydroxychloroquine but also Paracetamol with other countries, so it would look as if India is sharing with other countries. Keep in mind that India and Brazil haven't seen eye to eye on trade agreements of late, and Paracetamol prices have risen in India. The price rise has been because the API (Active Pharmaceutical Ingredients) for it comes from China, where the supply chain will take time to be fixed, and we would also have to open up, although should we or should we not is another question altogether. I talk about supply chains because lean supply chains have been the talk since the late 90s, when the Japanese introduced just-in-time manufacturing, which led to lean supply chains as well as a lot of outsourcing as a consequence. Of course, the companies saved money, but at the cost of flexibility, and how this model was perhaps flawed was shown by a series of articles in The Economist as early as 2004, when there were a lot of shocks to that model, shocks which have only been exacerbated since then. There have been frequent shocks to this fragile ecosystem, more so since the 2008 financial meltdown, and this would put more companies out of business than ever before. The MSME sector in India had already been severely impacted, first by demonetization and then by the horrendous implementation of GST, whose cries can be heard from all sectors. Also, the frequent changing of GST taxes has made markets jumpy and investors unsure. Judgements such as retrospective taxes, AGR (Adjusted Gross Revenue) etc. have scared not only international investors but also domestic ones. The flight of capital has been noticeable. This I had shared before, when the Indian Government published the LRS report, which it hasn't done since then. In fact Outlook Business had an interesting article about it, which incidentally talked about localcircles, a community networking platform where you get to know of a lot of things and of which I am also a member. At the very end, I apologize for not sharing this blog post earlier, but I was feeling down; then again, I'm not the only one.

24 March 2020

Russ Allbery: Review: Lost in Math

Review: Lost in Math, by Sabine Hossenfelder
Publisher: Basic
Copyright: June 2018
ISBN: 0-465-09426-0
Format: Kindle
Pages: 248
Listening to experts argue can be one of the better ways to learn about a new field. It does require some basic orientation and grounding or can be confusing or, worse, wildly misleading, so some advance research or Internet searches are warranted. But it provides some interesting advantages over reading multiple popular introductions to a field. First, experts arguing with each other are more precise about their points of agreement and disagreement because they're trying to persuade someone who is well-informed. The points of agreement are often more informative than the points of disagreement, since they can provide a feel for what is uncontroversial among experts in the field. Second, internal arguments tend to be less starry-eyed. One of the purposes of popularizations of a field is to get the reader excited about it, and that can be fun to read. But to generate that excitement, the author has a tendency to smooth over disagreements and play up exciting but unproven ideas. Expert disagreements pull the cover off of the uncertainty and highlight the boundaries of what we know and how we know it. Lost in Math (subtitled How Beauty Leads Physics Astray) is not quite an argument between experts. That's hard to find in book form; most of the arguments in the scientific world happen in academic papers, and I rarely have the energy or attention span to read those. But it comes close. Hossenfelder is questioning the foundations of modern particle physics for the general public, but also for her fellow scientists. High-energy particle physics is facing a tricky challenge. We have a solid theory (the standard model) which explains nearly everything that we have currently observed. The remaining gaps are primarily at very large scales (dark matter and dark energy) or near phenomena that are extremely difficult to study (black holes). For everything else, the standard model predicts our subatomic world to an exceptionally high degree of accuracy. But physicists don't like the theory. The details of why are much of the topic of this book, but the short version is that the theory does not seem either elegant or beautiful. It relies on a large number of measured constants that seem to have no underlying explanation, which is contrary to a core aesthetic principle that physicists use to judge new theories. Accompanying this problem is another: New experiments in particle physics that may be able to confirm or disprove alternate theories that go beyond the standard model are exceptionally expensive. All of the easy experiments have been done. Building equipment that can probe beyond the standard model is incredibly expensive, and thus only a few of those experiments have been done. This leads to two issues: Particle physics has an overgrowth of theories (such as string theory) that are largely untethered from experiments and are not being tested and validated or disproved, and spending on new experiments is guided primarily by a sense of scientific aesthetics that may simply be incorrect. Enter Lost in Math. Hossenfelder's book picks up a thread of skepticism about string theory (and, in Hossenfelder's case, supersymmetry as well) that I previously read in Lee Smolin's The Trouble with Physics. But while Smolin's critique was primarily within the standard aesthetic and epistemological framework of particle physics, Hossenfelder is questioning that framework directly. Why should nature be beautiful? Why should constants be small? What if the universe does have a large number of free constants? 
And is the dislike of an extremely reliable theory on aesthetic grounds a good basis for guiding which experiments we fund?
Do you recall the temple of science, in which the foundations of physics are the bottommost level, and we try to break through to deeper understanding? As I've come to the end of my travels, I worry that the cracks we're seeing in the floor aren't really cracks at all but merely intricate patterns. We're digging in the wrong places.
Lost in Math will teach you a bit less about physics than Smolin's book, although there is some of that here. Smolin's book was about two-thirds physics and one-third sociology of science. Lost in Math is about two-thirds sociology and one-third physics. But that sociology is engrossing. It's obvious in retrospect, but I hadn't thought before about the practical effects of running out of unexplained data on a theoretical field, or about the transition from more data than we can explain to having to spend billions of dollars to acquire new data. And Hossenfelder takes direct aim at the human tendency to find aesthetically appealing patterns and unified explanations, and scores some palpable hits.
I went into physics because I don't understand human behavior. I went into physics because math tells it how it is. I liked the cleanliness, the unambiguous machinery, the command math has over nature. Two decades later, what prevents me from understanding physics is that I still don't understand human behavior. "We cannot give exact mathematical rules that define if a theory is attractive or not," says Gian Francesco Giudice. "However, it is surprising how the beauty and elegance of a theory are universally recognized by people from different cultures. When I tell you, 'Look, I have a new paper and my theory is beautiful,' I don't have to tell you the details of my theory; you will get why I'm excited. Right?" I don't get it. That's why I am talking to him. Why should the laws of nature care what I find beautiful? Such a connection between me and the universe seems very mystical, very romantic, very not me. But then Gian doesn't think that nature cares what I find beautiful, but what he finds beautiful.
The structure of this book is half tour of how physics judges which theories are worthy of investigation and half personal quest to decide whether physics has lost contact with reality. Hossenfelder approaches this second thread with multiple interviews of famous scientists in the field. She probes at their bases for preferring one theory over another, at how objective those preferences can or should be, and what it means for physics if they're wrong (as increasingly appears to be the case for supersymmetry). In so doing, she humanizes theory development in a way that I found fascinating. The drawback to reading about ongoing arguments is the lack of a conclusion. Lost in Math, unsurprisingly, does not provide an epiphany about the future direction of high-energy particle physics. Its conclusion, to the extent that it has one, is a plea to find a way to put particle physics back on firmer experimental footing and to avoid cognitive biases in theory development. Given the cost of experiments and the nature of humans, this is challenging. But I enjoyed reading this questioning, contrarian take, and I think it's valuable for understanding the limits, biases, and distortions at the edge of new theory development. Rating: 7 out of 10

23 March 2020

Bits from Debian: New Debian Developers and Maintainers (January and February 2020)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

20 March 2020

Molly de Blanc: Seven hundred words on Internet access

I wrote this a few months ago, and never published it. Here you go. In the summer of 2017, I biked from Boston, MA to Montreal, QC. I rode across Massachusetts, then up the New York/Vermont border, weaving between the two states over two days. I spent the night in Washington County, NY at a bed and breakfast that generously fed me dinner even though they weren t supposed to. One of the proprietors told me about his history as a physics teacher, and talked about volunteer work he was doing. He somewhat casually mentioned that in his town there isn t really internet access. At the time (at least) Washington County wasn t served by broadband companies. Instead, for $80 a month you could purchase a limited data package from a mobile phone company, and use that. A limited data package means limited access. This could mean no or limited internet in schools or libraries. This was not the first time I heard about failings of Internet penetration in the United States. When I first moved to Boston I was an intern at One Laptop Per Child. I spoke with someone interested in bringing internet access to their rural community in Maine. They had hope for mesh networks, linking computers together into a web of connectivity, bouncing signals from one machine to another in order to bring internet to everyone. Access to the Internet is a necessity. As I write this, 2020 is only weeks away, which brings our decennial, nationwide census. There had been discussions of making the census entirely online, but it was settled that people could fill it out online, by telephone, or via mail and that households can answer the questions on the internet or by phone in English and 12 Non-English languages. [1][2] This is important because a comprehensive census is important. A census provides, if nothing else, population and demographics information, which is used to assist in the disbursement of government funding and grants to geographic communities. Apportionment, or the redistribution of the 435 seats occupied by members of the House of Representatives, is done based on the population of a given state: more people, more seats. Researchers, students, and curious people use census data to carry out their work. Non-profits and activist organizations can better understand the populations they serve. As things like the Census increasingly move online, the availability of access becomes increasingly important. Some things are only available online including job applications, customer service assistance, and even education opportunities like courses, academic resources, and applications for grants, scholarships, and admissions. The Internet is also a necessary point of connection between people, and necessary for building our identities. Being acknowledged with their correct names and pronouns decreases the risk of depression and suicide among trans youths and one assumes adults as well. [3] Online spaces provide acknowledgment and recognition that is not being met in physical spaces and geographic communities. Internet access has been important to me in my own mental health struggles and understanding. My bipolar exhibits itself through long, crushing periods of depression during which I can do little more than wait for it to be over. I fill these quiet spaces by listening to podcasts and talking with my friends using apps like Signal to manage our communications. My story of continuous recovery includes a particularly gnarly episode of bulimia in 2015. 
I was only able to really acknowledge that I had a problem with food and purging, using both as opportunities to inflict violence onto myself, when reading Tumblr posts by people with eating disorders. This made it possible for me to talk about my purging with my therapist, my psychiatrist, and my doctor in order to modify my treatment plan and start getting the help I need. All of these things are made possible by having reliable, fast access to the Internet. We can respond to our needs immediately, regardless of where we are. We can find or build the communities we need, and serve the ones we already live in, whether they're physical or exist purely as digital. [1]: https://census.lacounty.gov/census/ Accessed 29.11.2019
[2]: https://www.census.gov/library/stories/2019/03/one-year-out-census-bureau-on-track-for-2020-census.html Accessed 29.11.2019
[3]: https://news.utexas.edu/2018/03/30/name-use-matters-for-transgender-youths-mental-health/ Accessed 29.11.2019

12 November 2017

Lars Wirzenius: Unit and integration testing: an analogy with cars

A unit is a part of your program you can test in isolation. You write unit tests to test all aspects of it that you care about. If all your unit tests pass, you should know that your unit works well. Integration tests are for testing that when your various well-tested, high quality units are combined, integrated, they work together. Integration tests test the integration, not the individual units. You could think of building a car. Your units are the ball bearings, axles, wheels, brakes, etc. Your unit tests for the ball bearings might test, for example, that they can handle a billion rotations, at various temperatures, etc. Your integration test would assume the ball bearings work, and should instead test that the ball bearings are installed in the right way so that the car, as a whole, can run for kilometers, accelerating and braking every kilometer, uses only so much fuel, produces only so much pollution, and doesn't kill passengers in case of a crash.
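To translate the analogy into code, here is a minimal sketch using Python's unittest. The Bearing and Car classes are invented just for the example; the unit test hammers one bearing in isolation, while the integration test assumes bearings work and only checks that the assembled car actually drives them:

# Illustrative only: a toy "car" to show unit vs. integration tests.
import unittest

class Bearing:
    def __init__(self):
        self.rotations = 0
    def rotate(self, times):
        self.rotations += times

class Car:
    def __init__(self, bearings):
        self.bearings = bearings
        self.km = 0
    def drive(self, km):
        for bearing in self.bearings:
            bearing.rotate(km * 1000)   # crude wheel-to-rotation conversion
        self.km += km

class TestBearingUnit(unittest.TestCase):
    def test_survives_many_rotations(self):
        bearing = Bearing()
        bearing.rotate(1_000_000_000)
        self.assertEqual(bearing.rotations, 1_000_000_000)

class TestCarIntegration(unittest.TestCase):
    def test_drive_turns_all_bearings(self):
        car = Car([Bearing() for _ in range(4)])
        car.drive(100)
        # Assumes each bearing works; only checks that driving the car
        # actually spins every installed bearing.
        self.assertTrue(all(b.rotations > 0 for b in car.bearings))

if __name__ == "__main__":
    unittest.main()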

19 October 2017

Steinar H. Gunderson: Introducing Narabu, part 2: Meet the GPU

Narabu is a new intraframe video codec. You may or may not want to read part 1 first. The GPU, despite being vastly more flexible than it was fifteen years ago, is still a very different beast from your CPU, and not all problems map well to it performance-wise. Thus, before designing a codec, it's useful to know what our platform looks like. A GPU has lots of special functionality for graphics (well, duh), but we'll be concentrating on the compute shader subset in this context, i.e., we won't be drawing any polygons. Roughly, a GPU (as I understand it!) is built up about as follows:
  • A GPU contains 1-20 cores; NVIDIA calls them SMs (shader multiprocessors), Intel calls them subslices. (Trivia: A typical mid-range Intel GPU contains two cores, and thus is designated GT2.) One such core usually runs the same program, although on different data; there are exceptions, but typically, if your program can't fill an entire core with parallelism, you're wasting energy.
  • Each core, in addition to tons (thousands!) of registers, also has some shared memory (also called local memory sometimes, although that term is overloaded), typically 32-64 kB, which you can think of in two ways: either as a sort-of explicit L1 cache, or as a way to communicate internally on a core. Shared memory is a limited, precious resource in many algorithms.
  • Each core/SM/subslice contains about 8 execution units (Intel calls them EUs, NVIDIA/AMD call them something else) and some memory access logic. These multiplex a bunch of threads (say, 32) and run them in a round-robin-ish fashion. This means that a GPU can handle memory stalls much better than a typical CPU, since it has so many streams to pick from; even though each thread runs in-order, it can just kick off an operation and then go to the next thread while the previous one is working.
  • Each execution unit has a bunch of ALUs (typically 16) and executes code in a SIMD fashion. NVIDIA calls these ALUs "CUDA cores", AMD calls them "stream processors". Unlike on a CPU, this SIMD has full scatter/gather support (although sequential access, especially in certain patterns, is much more efficient than random access), lane enable/disable so it can work with conditional code, etc. The typically fastest operation is a 32-bit float muladd; usually that's single-cycle. GPUs love 32-bit FP code. (In fact, in some GPU languages, you won't even have 8-, 16-bit or 64-bit types. This is annoying, but not the end of the world.)
  • The vectorization is not exposed to the user in typical code (GLSL has some vector types, but they're usually just broken up into scalars, so that's a red herring), although in some programming languages you can get to swizzle the SIMD stuff internally to gain advantage of that (there are also schemes for broadcasting bits by voting etc.). However, it is crucially important to performance; if you have divergence within a warp, this means the GPU needs to execute both sides of the if. So less divergent code is good. Such a SIMD group is called a "warp" by NVIDIA (I don't know if the others have names for it). NVIDIA has SIMD/warp width always 32; AMD used to be 64 but is now 16. Intel supports 4-32 (the compiler will autoselect based on a bunch of factors), although 16 is the most common.
The upshot of all of this is that you need massive amounts of parallelism to be able to get useful performance out of a GPU.
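As a back-of-the-envelope, multiplying the counts above (all assumed round figures; they vary a lot between GPUs) shows how quickly the required thread count grows:

# Back-of-the-envelope only; the exact numbers vary wildly between GPUs.
cores = 4            # SMs / subslices on a modest GPU
eus_per_core = 8     # execution units per core
simd_width = 16      # ALUs ganged per execution unit
in_flight = 4        # threads multiplexed per EU to hide memory stalls

print(cores * eus_per_core * simd_width * in_flight)   # 2048 threads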
A rule of thumb is that if you could have launched about a thousand threads for your problem on CPU, it's a good fit for a GPU, although this is of course just a guideline. There's a ton of APIs available to write compute shaders. There's CUDA (NVIDIA-only, but the dominant player), D3D compute (Windows-only, but multi-vendor), OpenCL (multi-vendor, but highly variable implementation quality), OpenGL compute shaders (all platforms except macOS, which has too old drivers), Metal (Apple-only) and probably some that I forgot. I've chosen to go for OpenGL compute shaders since I already use OpenGL shaders a lot, and this saves on interop issues. CUDA probably is more mature, but my laptop is Intel. :-) No matter which one you choose, the programming model looks very roughly like this pseudocode:
for (size_t workgroup_idx = 0; workgroup_idx < NUM_WORKGROUPS; ++workgroup_idx) {  // in parallel over cores
        char shared_mem[REQUESTED_SHARED_MEM];  // private for each workgroup
        for (size_t local_idx = 0; local_idx < WORKGROUP_SIZE; ++local_idx) {  // in parallel on each core
                main(workgroup_idx, local_idx, shared_mem);
        }
}
except in reality, the indices will be split in x/y/z for your convenience (you control all six dimensions, of course), and if you haven't asked for too much shared memory, the driver can silently use larger workgroups if it helps increase parallelism (this is totally transparent to you). main() doesn't return anything, but you can do reads and writes as you wish; GPUs have large amounts of memory these days, and staggering amounts of memory bandwidth. Now for the bad part: generally, you will have no debuggers, no way of logging and no real profilers (if you're lucky, you can get to know how long each compute shader invocation takes, but not what takes time within the shader itself). Especially the latter is maddening; the only real recourse you have is timers: placing timer probes, or commenting out sections of your code to see if something goes faster (a rough sketch of the timer-probe approach follows at the end of this post). If you don't get the answers you're looking for that way, forget printf: you need to set up a separate buffer, write some numbers into it, and pull that buffer back to the CPU. Profilers are an essential part of optimization, and I had really hoped the world would be more mature here by now. Even CUDA doesn't give you all that much insight. Sometimes I wonder if all of this is because GPU drivers and architectures are meant to be shrouded in mystery for competitiveness reasons, but I'm honestly not sure. So that's it for a crash course in GPU architecture. Next time, we'll start looking at the Narabu codec itself.
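As a rough illustration of the timer-probe workaround mentioned above, here is a hypothetical host-side C++ sketch (not from the Narabu code; it assumes an OpenGL 4.3 context and an already-compiled compute program, and uses libepoxy purely as an example loader) that times a single dispatch with a GL_TIME_ELAPSED query:

#include <epoxy/gl.h>   // any GL function loader will do; libepoxy is just an example

// Time one compute shader dispatch. 'program' is assumed to be a compiled
// and linked compute program; the dispatch dimensions are up to the caller.
double time_dispatch_ms(GLuint program, GLuint groups_x, GLuint groups_y)
{
    GLuint query;
    glGenQueries(1, &query);

    glUseProgram(program);
    glBeginQuery(GL_TIME_ELAPSED, query);
    glDispatchCompute(groups_x, groups_y, 1);
    // Make sure the shader's buffer writes are visible to later operations.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    glEndQuery(GL_TIME_ELAPSED);

    // Reading the result waits until the GPU has finished the work.
    GLuint64 elapsed_ns = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);
    glDeleteQueries(1, &query);

    return elapsed_ns / 1e6;
}

Note that this only tells you how long the whole invocation took; finding out what is slow inside the shader still comes down to commenting out code, or writing counters into a buffer and reading that back.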

10 October 2017

Lars Wirzenius: Debian and the GDPR

GDPR is a new EU regulation for privacy. The name is short for "General Data Protection Regulation" and it covers all organisations that handle personal data of EU citizens and EU residents. It will become enforceable on May 25, 2018 (Towel Day). This will affect Debian. I think it's time for Debian to start working on compliance, mainly because the GDPR requires sensible things. I'm not an expert on GDPR legislation, but here's my understanding of what we in Debian should do: There's more, but let's start with those. I think Debian has at least the following systems that will need to be reviewed with regard to the GDPR: There may be more; these are just off the top of my head. I expect that mostly Debian will be OK, but we can't just assume that.

8 October 2017

Michael Stapelberg: Debian stretch on the Raspberry Pi 3 (update)

I previously wrote about my Debian stretch preview image for the Raspberry Pi 3. Now, I'm publishing an updated version, containing the following changes: A couple of issues remain, notably the lack of WiFi and Bluetooth support (see wiki:RaspberryPi3 for details). Any help with fixing these issues is very welcome! As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2017-10-08/. To install the image, insert the SD card into your computer (I'm assuming it's available as /dev/sdb) and copy the image onto it:
$ wget https://people.debian.org/~stapelberg/raspberrypi3/2017-10-08/2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ bunzip2 2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ sudo dd if=2017-10-08-raspberry-pi-3-buster-PREVIEW.img of=/dev/sdb bs=5M
If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:
$ ssh root@rpi3
# Password is "raspberry"

2 October 2017

Lars Wirzenius: Attracting contributors to a new project

How do you attract contributors to a new free software project? I'm in the very early stages of a new personal project. It is irrelevant for this blog post what the new project actually is. Instead, I am thinking about the following question:
Do I want the project to be mainly for myself, and maybe a handful of others, or do I want to try to make it a more generally useful, possibly even a well-known, popular project? In other words, do I want to just solve a specific problem I have or try to solve it for a large group of people?
If it's a personal project, I'm all set. I can just start writing code. (In fact, I have.) If it's the latter, I'll need to attract contributions from others, and how do I do that? I asked that question on Twitter and Mastodon and got several suggestions. This is a summary of those, with some editorialising from me. I don't know if these things are all correct, or whether they're enough to grow a successful, popular project. Karl Fogel's seminal book Producing Open Source Software should also be mentioned.

26 September 2017

Norbert Preining: Debian/TeX Live 2017.20170926-1

A full month or more has passed since the last upload of TeX Live, so it was high time to prepare a new package. Nothing spectacular here, I have to say: two small bugs fixed and the usual long list of updates and new packages. From the new packages I found fontloader-luaotfload an interesting project. Loading fonts via Lua code in LuaTeX is by now standard, and this package allows for experiments with newer/alternative font loaders. Another very interesting newcomer is pdfreview, which lets you set pages of another PDF on a lined background and add notes to it, good for reviewing. Enjoy. New packages abnt, algobox, beilstein, bib2gls, cheatsheet, coelacanth, dijkstra, dynkin-diagrams, endofproofwd, fetchcls, fixjfm, fontloader-luaotfload, forms16be, hithesis, ifxptex, komacv-rg, ku-template, latex-refsheet, limecv, mensa-tex, multilang, na-box, notes-tex, octave, pdfreview, pst-poker, theatre, upzhkinsoku, witharrows. Updated packages 2up, acmart, acro, amsmath, animate, babel, babel-french, babel-hungarian, bangorcsthesis, beamer, beebe, biblatex-gost, biblatex-philosophy, biblatex-source-division, bibletext, bidi, bpchem, bxjaprnind, bxjscls, bytefield, checkcites, chemmacros, chet, chickenize, complexity, curves, cweb, datetime2-german, e-french, epstopdf, eqparbox, esami, etoc, fbb, fithesis, fmtcount, fnspe, fontspec, genealogytree, glossaries, glossaries-extra, hvfloat, ifptex, invoice2, jfmutil, jlreq, jsclasses, koma-script, l3build, l3experimental, l3kernel, l3packages, latexindent, libertinust1math, luatexja, lwarp, markdown, mcf2graph, media9, nddiss, newpx, newtx, novel, numspell, ocgx2, philokalia, phfqit, placeat, platex, poemscol, powerdot, pst-barcode, pst-cie, pst-exa, pst-fit, pst-func, pst-geometrictools, pst-ode, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst-vehicle, pst2pdf, pstricks, pstricks-add, ptex-base, ptex-fonts, pxchfon, quran, randomlist, reledmac, robustindex, scratch, skrapport, spectralsequences, tcolorbox, tetex, tex4ht, texcount, texdef, texinfo, texlive-docindex, texlive-scripts, tikzducks, tikzsymbols, tocloft, translations, updmap-map, uplatex, widetable, xepersian, xetexref, xint, xsim, zhlipsum.

27 August 2017

Andrew Cater: BBQ Cambridge 2017 - post 2

We were all up until about 0100 :) House full of folk talking about all sorts, a game of Mao. Garden full of people clustered round the barbeque or sitting chatting - I had a long chat about Debian, what it means and how it's often an easier world to deal with and move in than the world of work, office politics or whatever - being here is being at home.

Arguments in the kitchen over how far coffee "just happens" with the magic bean to cup machine, some folk are in the garden preparing for breakfast at noon.

I missed the significance of this week's date - the 26th anniversary of Linus' original announcement of Linux in 1991 fell on Friday. Probably the first user of Linux who installed it from scratch was Lars Wirzenius - who was here yesterday.

Debian's 24th birthday was just about ten days ago on 16th August, making it the second oldest distribution and I reckon I've been using it for twenty one of those years - I wouldn't change it for the world.

13 August 2017

Lars Wirzenius: Retiring Obnam

This is a difficult announcement to write. The summary is: if you use Obnam, you should switch to another backup program in the coming months. The first commit to Obnam's current code base is this:
commit 7eaf5a44534ffa7f9c0b9a4e9ee98d312f2fcb14
Author: Lars Wirzenius <liw@iki.fi>
Date:   Wed Sep 6 18:35:52 2006 +0300
    Initial commit.
It's followed by over 5200 more commits until the latest one, which is from yesterday. The NEWS file contains 58 releases. There are 20761 lines of Python, 15384 words in the English language manual, with translations in German and French. The yarn test suite, which is a kind of a manual, is another 13382 words in English and pseudo-English. That's a fair bit of code and prose. Not all of it mine, I've had help from some wonderful people. But most of it mine. I wrote all of that because backups were fun. It was pleasing to use my own program to guarantee the safety of my own data. The technical challenges of implementing the kind of backup program I wanted were interesting, and solving interesting problems is a big part of why I am a programmer. Obnam has a kind user base. It's not a large user base: the Debian "popularity contest" service estimates it at around 500. But it's a user base that is kind and has treated me well. I have tried to reciprocate. Unfortunately, that has changed: I have not had fun while developing Obnam for some time now. A few years ago, I lived in Manchester, UK, and commuted by train to work. It was a short train ride, about 15 minutes. At times I would sit on the floor with my laptop on my knees, writing code or the manual. Back then Obnam was a lot of fun. I was excited, and enthusiastic. In the past two years or so, I've not been able to feel that excitement again. My native language, Finnish, has an expression describing unpleasant tasks: something is as much fun as drinking tar. That describes Obnam in recent years for me. Obnam has not turned out well, from a maintainability point of view. It seems that every time I try to fix something, I break something else. Usually what breaks is speed or memory use: Obnam gets slower or starts using even more memory. For several years now I've been working on a new repository format for Obnam, code-named GREEN ALBATROSS. It was meant to solve Obnam's problems as far as extensibility, performance, and resource use were concerned. It seems to have failed. I'm afraid I've had enough. I'm going to retire Obnam as a project and as a program, and move on to doing something else, so I can feel excitement and pleasure again. After some careful thought, I fear that the maintainability problems of Obnam can realistically only be solved by a complete rewrite from scratch, and I'm not up to doing that. If you use Obnam, you should migrate to some other backup solution. Don't worry, you have until the end of the year. I will be around and I intend to fix any serious bugs in Obnam; in particular, security flaws. But you should start looking for a replacement sooner rather than later. I will be asking for Obnam to be removed from the Debian unstable and testing branches. The next Debian release (buster, Debian 10) won't include Obnam. The Obnam mailing lists are kindly hosted by Daniel Silverstone, and they will remain, but later this year I will change them to be moderated. The Obnam git repository will remain. The web site will remain, but I will add a note that Obnam is no longer maintained. Other Obnam online resources may disappear. If you would like to take over the Obnam project, and try to resolve the various issues, please contact me to discuss that. Thank you, and may you never need to restore.

Enrico Zini: Consensually doing things together?

On 2017-08-06 I gave a talk at DebConf17 in Montreal titled "Consensually doing things together?" (video). Here are the talk notes. Abstract At DebConf Heidelberg I talked about how Free Software has a lot to do with consensually doing things together. Is that always true, at least in Debian? I'd like to explore what motivates one to start a project and what motivates one to keep maintaining it. What are the energy levels required to manage bits of Debian as the project keeps growing. How easy it is to say no. Whether we have roles in Debian that require irreplaceable heroes to keep them going. What could be done to make life easier for heroes, easy enough that mere mortals can help, or take their place. Unhappy is the community that needs heroes, and unhappy is the community that needs martyrs. I'd like to try and make sure that now, or in the very near future, Debian is not such an unhappy community. Consensually doing things together I gave a talk in Heidelberg. Valhalla made stickers; Debian France distributed many of them. There's one on my laptop. Which reminds me of what we ought to be doing. Of what we have a chance to do, if we play our cards right. I'm going to talk about relationships. Consensual relationships. Relationships in short. Nonconsensual relationships are usually called abuse. I like to see Debian as a relationship between multiple people. And I'd like it to be a consensual one. I'd like it not to be abuse. Consent From Wikipedia:
In Canada "consent means the voluntary agreement of the complainant to engage in sexual activity" without abuse or exploitation of "trust, power or authority", coercion or threats.[7] Consent can also be revoked at any moment.[8] There are 3 pillars often included in the description of sexual consent, or "the way we let others know what we're up for, be it a good-night kiss or the moments leading up to sex." They are:
  • Knowing exactly what and how much I'm agreeing to
  • Expressing my intent to participate
  • Deciding freely and voluntarily to participate[20]
Saying "I've decided I won't do laundry anymore" when the other partner is tired, or busy doing things. Is different than saying "I've decided I won't do laundry anymore" when the other partner has a chance to say "why? tell me more" and take part in negotiation. Resources: Relationships Debian is the Universal Operating System. Debian is made and maintained by people. The long term health of debian is a consequence of the long term health of the relationship between Debian contributors. Debian doesn't need to be technically perfect, it needs to be socially healthy. Technical problems can be fixed by a healty community. graph showing relationship between avoidance, accomodation, compromise, competition, collaboration The Thomas-Kilmann Conflict Mode Instrument: source png. Motivations Quick poll: What are your motivations to be in a relationship? Which of those motivations are healthy/unhealthy? "Galadriel" (noun, by Francesca Ciceri): a task you have to do otherwise Sauron takes over Middle Earth See: http://blog.zouish.org/nonupdd/#/22/1 What motivates me to start a project or pick one up? What motivates me to keep maintaning a project? What motivates you? What's an example of a sustainable motivation? Is it really all consensual in Debian? Energy Energy that thing which is measured in spoons. The metaphore comes from people suffering with chronic health issues:
"Spoons" are a visual representation used as a unit of measure used to quantify how much energy a person has throughout a given day. Each activity requires a given number of spoons, which will only be replaced as the person "recharges" through rest. A person who runs out of spoons has no choice but to rest until their spoons are replenished.
For example, in Debian, I could spend: What is one person capable of doing? Have reasonable expectations, on others: Have reasonable expectations, on yourself: Debian is a shared responsibility. When spoons are limited, what takes more energy tends not to get done. As the project grows, project-wide tasks become harder. Are they still humanly achievable? I don't want Debian to have positions that require hero-types to fill them. Dictatorship of who has more spoons: Perfectionism You are in a relationship that is just perfect. All your friends look up to you. You give people relationship advice. You are safe in knowing that You Are Doing It Right. Then one day you have an argument in public. You don't just have to deal with the argument, but also with your reputation and self-perception shattering. One thing I hate about Debian: consistent technical excellence. I don't want to be required to always be right. One of my favourite moments in the history of Debian is the openssl bug. Debian doesn't need to be technically perfect, it needs to be socially healthy; technical problems can be fixed. I want to remove perfectionism from Debian: if we discover we've been wrong all the time in something important, it's not the end of Debian, it's the beginning of an improved Debian. Too good to be true There comes a point in most people's dating experience where one learns that when some things feel too good to be true, they might indeed be. There are people who cannot say no: There are people who cannot take a no: Note the diversity statement: it's not a problem to have one of those (and many other) tendencies, as long as one manages to keep interacting constructively with the rest of the community. Also, it is important to be aware of these patterns, to be able to compensate for one's own tendencies. What happens when an avoidant person meets a narcissistic person, and they are both unaware of the risks? Resources: Note: there are problems with the way these resources are framed: Red flag / green flag http://pervocracy.blogspot.ca/2012/07/green-flags.html Ask for examples of red/green flags in Debian. Green flags: Red flags: Apologies / Dealing with issues I don't see the usefulness of apologies that are about accepting blame, or making a person stop complaining. I see apologies as opportunities to understand the problem I caused, help fix it, and possibly find ways of avoiding causing that problem again in the future. A Better Way to Say Sorry lists a 4-step process, which is basically what we do in bug reports already: 1. Try to understand and reproduce the exact problem the person had. 2. Try to find the cause of the issue. 3. Try to find a solution for the issue. 4. Verify with the reporter that the solution does indeed fix the issue. This is just to say
My software ate
the files
that were in
your home directory and which
you were probably
needing
for work

Forgive me
it was so quick to write
without tests
and it worked so well for me
(inspired by a 1934 poem by William Carlos Williams) Don't be afraid to fail Don't be afraid to fail or drop the ball. I think that anything that has a label attached of "if you don't do it, nobody will", shouldn't fall on anybody's shoulders and should be shared no matter what. Shared or dropped. Share the responsibility for a healthy relationship Don't expect that the more experienced mates will take care of everything. In a project with active people counted by the thousand, it's unlikely that harassment isn't happening. Is anyone writing anti-harassment? Do we have stats? Is having an email address and a CoC giving us a false sense of security?
When you get involved in a new community, such as Debian, find out early where, if that happens, you can find support, understanding, and help to make it stop. If you cannot find any, or if the only thing you can find is people who say "it never happens here", consider whether you really want to be in that community.
(from http://www.enricozini.org/blog/2016/debian/you-ll-thank-me-later/)
There are some nice people in the world. I mean nice people, the sort I couldn t describe myself as. People who are friends with everyone, who are somehow never involved in any argument, who seem content to spend their time drawing pictures of bumblebees on flowers that make everyone happy. Those people are great to have around. You want to hold onto them as much as you can. But people only have so much tolerance for jerkiness, and really nice people often have less tolerance than the rest of us. The trouble with not ejecting a jerk whether their shenanigans are deliberate or incidental is that you allow the average jerkiness of the community to rise slightly. The higher it goes, the more likely it is that those really nice people will come around less often, or stop coming around at all. That, in turn, makes the average jerkiness rise even more, which teaches the original jerk that their behavior is acceptable and makes your community more appealing to other jerks. Meanwhile, more people at the nice end of the scale are drifting away.
(from https://eev.ee/blog/2016/07/22/on-a-technicality/) Give people freedom If someone tries something in Debian, try to acknowledge and accept their work. You can give feedback on what they are doing, and try not to stand in their way, unless what they are doing is actually hurting you. In that case, try to collaborate, so that you all can get what you need. It's ok if you don't like everything that they are doing. I personally don't care if people tell me I'm good when I do something, I perceive it a bit like "good boy" or "good dog". I would rather people show an interest, say "that looks useful" or "how does it work?" or "what do you need to deploy this?" Acknowledge that I've done something. I don't care if it's especially liked, give me the freedom to keep doing it. Don't give me rewards, give me space and dignity. Rather than feeding my ego, feed my freedom, and feed my possibility to create.

5 August 2017

Lars Wirzenius: Enabling TRIM/DISCARD on Debian, ext4, luks, and lvm

I realised recently that my laptop isn't set up to send TRIM or DISCARD commands to its SSD. That means the SSD firmware has a harder time doing garbage collection (see the linked Wikipedia page for more details). After some searching, I found two articles by Christopher Smart: one, update. Those, plus some additional reading of documentation, and a little experimentation, allowed me to do this. Since the information is a bit scattered, here are the details, for Debian stretch, as much for my own memory as to make sure this is collected into one place. Note that it seems to be a possible information leak to TRIM encrypted devices. I don't know the details, but if that bothers you, don't do it. I don't know of any harmful effects of enabling TRIM for everything, except the crypto bit above, so I wonder if it wouldn't make sense for the Debian installer to do this by default.

29 July 2017

Robert McQueen: Welcome, Flathub!

Alex Larsson talks about Flathub at GUADEC 2017.
At the Gtk+ hackfest in London earlier this year, we stole an afternoon from the toolkit folks (sorry!) to talk about Flatpak, and how we could establish a critical mass behind the Flatpak format. Bringing Linux container and sandboxing technology together with ostree, we've got a technology which solves real world distribution, technical and security problems which have arguably held back the Linux desktop space and frustrated ISVs and app developers for nearly 20 years. The problem we need to solve, like any ecosystem, is one of users and developers: without stuff you can easily get in Flatpak format, there won't be many users, and without many users, we won't have a strong or compelling incentive for developers to take their precious time to understand a new format and a new technology. As Alex Larsson said in his GUADEC talk yesterday: Decentralisation is good. Flatpak is a tool that is totally agnostic of who is publishing the software and where it comes from. For software freedom, that's an important thing because we want technology to empower users, rather than tell them what they can or can't do. Unfortunately, decentralisation makes for a terrible user experience. At present, the Flatpak webpage has a manually curated list of links to tens of places where you can find different Flatpaks and add them to your system. You can't easily search and browse to find apps to try out, so it's clear that if the current situation remains, we're not going to be able to get a critical mass of users and developers around Flatpak. Enter Flathub. The idea is that by creating an obvious center of gravity for the Flatpak community to contribute and build their apps, users will have one place to go and find the best that the Linux app ecosystem has to offer. We can take care of the boring stuff like running a build service, and empower Linux application developers to choose how and when their app gets out to their users. After the London hackfest we sketched out a minimum viable system (GitHub, Buildbot and a few workers) and got it going over the past few months, culminating in a mini-fundraiser to pay for the hosting of a production-ready setup. Thanks to the 20 individuals who supported our fundraiser, to Mythic Beasts who provided a server along with management, monitoring and heaps of bandwidth, and to Codethink and Scaleway who provide our ARM and Intel workers respectively. We inherit our core principles from the Flatpak project: we want the Flatpak technology to succeed at alleviating the issues faced by app developers in targeting a diverse set of Linux platforms. None of this stops you from building and hosting your own Flatpak repos, and we look forward to this being a wide and open playing field. We care about the success of the Linux desktop as a platform, so we are open to proprietary applications through Flatpak's "extra data" feature, where the client machine downloads 3rd party binaries. They are correctly labeled as such in the AppStream, so will only be shown if you or your OS has configured GNOME Software to show you apps with proprietary licenses, respecting the user's preference. The new infrastructure is up and running and I put it into production on Thursday. We rebuilt the whole repository on the new system over the course of the week, signing everything with our new 4096-bit key stored on a Yubikey smartcard USB key.
We have 66 apps at the moment, although Alex is working on bringing in the GNOME apps at present; we hope those will be joined soon by the KDE apps, and Endless is planning to move over as many of our 3rd party Flatpaks as possible over the coming months. So, thanks again to Alex and the whole Flatpak community, and the individuals and the companies who supported making this a reality. You can add the repository and get downloading right away. Welcome to Flathub! Go forth and flatten
