
17 November 2017

Jonathan Carter: I am now a Debian Developer

It finally happened

On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you're paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (the current draft is around 20 times longer than this entire post). So I decided I'd rather do a proper bio page one day and just do a super short version for now so that someone might end up actually reading it.

How it started

In 1999... no wait, I can't start there. As much as I want to, this is a short post, so... In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me since everything in Debian was really old and its boot-floppies installer program kept crashing on my very vanilla computers.

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work then later turned into a full-time job there. This was a big deal for me, because I didn't want to support Windows ever again, and I didn't ever think that it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just don't get it and I need some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called freshmeat that listed new releases of upstream software, and then I would download and compile it myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could find package state and maintainer scripts and fix them to get my system going again. Thomas told me that anyone could become a Debian Developer and maintain packages in Debian and that I should check it out, and joked that maybe I could eventually snap up highvoltage@debian.org. I just laughed because back then you might as well have told me that I could run for president of the United States; it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)

Ubuntu and beyond

Ubuntu 4.10 default desktop. Image from DistroWatch.

One day, Thomas told me that Mark is planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just "warty" written on it and said that I should install it on a server so that we could try it out. It was great, it used the new debian-installer and installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system is going to be called Ubuntu and the desktop edition has naked people on it. I wasn't sure what he meant and was kind of dumbfounded so I just laughed and said something like "Uh, ok". At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline "Linux for Human Beings". Fun fact: one of my first jobs at the foundation was to register the ubuntu.com domain name. Unfortunately I found it was already owned by a domain squatter and it was eventually handled by legal. Closer to Ubuntu's first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation and they were around for a few days getting some sun. Thomas kept saying "Go talk to them! Go talk to them!", but I felt so intimidated by them that I couldn't even bring myself to walk up and say hello. In the interest of keeping this short, I'm leaving out a lot of history but later on, I read through the Debian packaging policy and really started getting into packaging and also discovered Daniel Holbach's packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I'd like to do a similar video series that might help a new generation of packagers. I've also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would even attend one, and another 5 years after that I'd end up being on the DebConf Committee and have also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It's been a long journey for me and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to https://mentors.debian.net and if I have some spare time I'll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!

13 November 2017

Markus Koschany: My Free Software Activities in October 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

Debian LTS

This was my twentieth month as a paid contributor and I have been paid to work 19 hours on Debian LTS, a project started by Raphaël Hertzog. I will catch up with the remaining 1.75 hours in November. In that time I did the following:

Misc

Thanks for reading and see you next time.

08 November 2017

Neil McGovern: Software Freedom Law Center and Conservancy

Before I start, I would like to make it clear that the below is entirely my personal view, and not necessarily that of the GNOME Foundation, the Debian Project, or anyone else. There's been quite a bit of interest recently about the petition by the Software Freedom Law Center to cancel the Software Freedom Conservancy's trademark. A number of people have asked my views on it, so I thought I'd write up a quick blog on my experience with SFLC and Conservancy, both during my time as Debian Project Leader and since. It's clear to me that for some time, there's been quite a bit of animosity between SFLC and Conservancy, which for me started to become apparent around the time of the large debate over ZFS on Linux. I talked about this in my DebConf 16 talk, which fortunately was recorded (ZFS bit from 8:05 to 17:30).
This culminated in SFLC publishing a statement, and Conservancy also publishing their statement, backed up by the FSF. These obviously came to different conclusions, and it seems bizarre to me that SFLC, who were acting as Debian's legal counsel, published a position that was contrary to the position taken by Debian. Additionally, Conservancy and the FSF, who were not acting as counsel, mirrored the position of the project. Then I heard of an even more confusing move: SFLC has filed legal action against Conservancy, despite being the organisation they helped set up. This happened on the 22nd September, the day after SFLC announced corporate and support services for Free Software projects. SFLC has also published a follow up, in which they say that the act is "not an attack, let alone a bizarre attack", and that the response from Conservancy, who view it as such, was "like reading a declaration of war issued in response to a parking ticket". Then, as SFLC somehow regard the threat of having your trademark taken away as something other than an attack, they also state: "Any project working with the Conservancy that feels in any way at risk should contact us. We will immediately work with them to put in place measures fully ensuring that they face no costs and no risks in this situation." which I read as a direct pitch to try and pull projects away from Conservancy and over to SFLC.

Now, even if there is a valid claim here, despite the objections that were filed by a trademark lawyer who I have a great deal of respect for (disclosure: Pam also provides pro-bono trademark advice to my employer, the GNOME Foundation), the optics are pretty terrible. We have a case of one FOSS organisation taking another one to court, after many years of them being aware of the issue, and when wishing to promote a competing service. At best, this is a distraction from the supposed goals of Free Software organisations, and at worst is a direct attempt to interrupt the workings of an established and successful umbrella organisation which lots of projects rely on. I truly hope that this case is simply dropped, and if I was advising SFLC, that's exactly what I would suggest, along with an apology for the distress. Put it this way: if SFLC win, then they're simply displaying what would be viewed as an aggressive move to hold the term "software freedom" exclusively to themselves. If they lose, then it shows that they're willing to do so to another 501(c)3 without actually having a case.

Before I took on the DPL role, I was under the naive impression that although there were differences in approach, at least we were all trying to work together to promote software freedoms for the end user. Unfortunately, since then, I've now become a lot more jaded about exactly who, and which organisations, hold our best interests at heart. (Featured image by Nick Youngson CC-BY-SA-3.0 http://nyphotographic.com/)

07 November 2017

Don Armstrong: Autorandr: automatically adjust screen layout

Like many laptop users, I often plug my laptop into different monitor setups (multiple monitors at my desk, projector when presenting, etc.). Running xrandr commands or clicking through interfaces gets tedious, and writing scripts isn't much better. Recently, I ran across autorandr, which detects attached monitors using EDID (and other settings), saves xrandr configurations, and restores them. It can also run arbitrary scripts when a particular configuration is loaded. I've packaged it, and it is currently waiting in NEW. If you can't wait, the deb is here and the git repo is here. To use it, simply install the package, and create your initial configuration (in my case, undocked):
 autorandr --save undocked
then, dock your laptop (or plug in your external monitor(s)), change the configuration using xrandr (or whatever you use), and save your new configuration (in my case, workstation):
autorandr --save workstation
repeat for any additional configurations you have (or as you find new configurations). Autorandr has udev, systemd, and pm-utils hooks, and autorandr --change should be run any time that new displays appear. You can also run autorandr --change or autorandr --load workstation manually if you need to. You can also add your own ~/.config/autorandr/$PROFILE/postswitch script to run after a configuration is loaded. Since I run i3, my workstation configuration looks like this:
 #!/bin/bash
 xrandr --dpi 92
 xrandr --output DP2-2 --primary
 i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
 i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
 i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'
which fixes the dpi appropriately, sets the primary screen (possibly not needed?), and moves the i3 workspaces about. You can also arrange for configurations to never be run by adding a block hook in the profile directory. Check it out if you change your monitor configuration regularly!

02 November 2017

Steinar H. Gunderson: Introducing Narabu, part 5: Encoding

Narabu is a new intraframe video codec. You probably want to read part 1, part 2, part 3 and part 4 first. At this point, we've basically caught up with where I am, so things are less set in stone. However, let's look at what qualitatively differentiates encoding from decoding; unlike in interframe video codecs (where you need to do motion vector search and stuff), encoding and decoding are very much mirror images of each other, so we can intuitively expect them to be relatively similar in performance. The procedure is DCT, quantization, entropy coding, and that's it.

One important difference is in the entropy coding. Since our rANS encoding is non-adaptive (a choice made largely for simplicity, but also because our streams are so short), it works by first signaling a distribution and then encoding each coefficient using that distribution. However, we don't know that distribution until we've DCT-ed all blocks in the picture, so we can't just DCT each block and entropy code the coefficients on-the-fly. There are a few ways to deal with this: As you can see, tons of possible strategies here. For simplicity, I've ended up with the former, although this could very well be changed at some point.

There are some interesting subproblems, though: First of all, we need to decide the data type of this temporary array. The DCT tends to concentrate energy into fewer coefficients (which is a great property for compression!), so even after quantization, some of them will get quite large. This means we cannot store them in an 8-bit texture; however, even the bigger ones are very rarely bigger than 10 bits (including a sign bit), so using 16-bit textures wastes precious memory bandwidth. I ended up slicing the coefficients by horizontal index and then pairing them up (so that we generate pairs 0+7, 1+6, 2+5 and 3+4 for the first line of the 8x8 DCT block, 8+15, 9+14, 10+13 and 11+12 for the next line, etc.). This allows us to pack two coefficients into a 16-bit texture, for an average of 8 bits per coefficient, which is what we want. It makes for some slightly fiddly clamping and bit-packing since we are packing signed values, but nothing really bad.

Second, and perhaps surprisingly enough, counting efficiently is nontrivial. We want a histogram over which coefficients are used most often, i.e., for each coefficient, we want something like ++counts[dist][coeff] (recall we have four distinct distributions). However, since we're in a massively parallel algorithm, this add needs to be atomic, and since values like e.g. 0 are super-common, all of our GPU cores will end up fighting over the cache line containing counts[dist][0]. This is not fast. Think 10 ms/frame not fast. Local memory to the rescue again; all modern GPUs have fast atomic adds to local memory (basically integrating adders into the cache, as I understand it, although I might have misunderstood here). This means we just make a suitably large local group, build up our sub-histogram in local memory and then add all nonzero buckets (atomically) to the global histogram. This improved performance dramatically, to the point where it was well below 0.1 ms/frame.

However, our histogram is still a bit raw; it sums to 1280x720 = 921,600 values, but we want an approximation that sums to exactly 4096 (12 bits), with some additional constraints (like no nonzero coefficients). Charles Bloom has an exposition of a nearly optimal algorithm, although it took me a while to understand it.
The basic idea is: Make a good approximation by multiplying each frequency by 4096/921600 (rounding intelligently). This will give you something that very nearly rounds to 4096 either above or below, e.g. 4101. For each step you're above or below the total (five in this case), find the best single coefficient to adjust (most entropy gain, or least loss); Bloom is using a heap, but on the GPU, each core is slow but we have many of them, so it's better just to try all 256 possibilities in parallel and have a simple voting scheme through local memory to find the best one. And then finally, we want a cumulative distribution function, but that is simple through a parallel prefix sum on the 256 elements.

And then finally, we can take our DCT coefficients and the finished rANS distribution, and write the data! We'll have to leave some headroom for the streams (I've allowed 1 kB for each, which should be ample except for adversarial data, and we'll probably solve that just by truncating the writes and accepting the corruption), but we'll compact them when we write to disk.

Of course, the Achilles heel here is performance. Where decoding 720p (luma only) on my GTX 950 took 0.4 ms or so, encoding is 1.2 ms or so, which is just too slow. Remember that 4:2:2 is twice that, and we want multiple streams, so 2.4 ms per frame is eating too much. I don't really know why it's so slow; the DCT isn't bad, the histogram counting is fast, it's just the rANS shader that's slow for some reason I don't understand, and also haven't had the time to really dive deeply into. Of course, a faster GPU would be faster, but I don't think I can reasonably demand that people get a 1080 just to encode a few video streams.

Due to this, I haven't really worked out the last few kinks. In particular, I haven't implemented DC coefficient prediction (it needs to be done before tallying up the histograms, so it can be a bit tricky to do efficiently, although perhaps local memory will help again to send data between neighboring DCT blocks). And I also haven't properly done bounds checking in the encoder or decoder, but it should hopefully be simple as long as we're willing to accept that evil input decodes into garbage instead of flagging errors explicitly. It also depends on a GLSL extension, which my Haswell laptop doesn't have, to get 64-bit divides when precalculating the rANS tables; I've got some code to simulate 64-bit divides using 32-bit, but it doesn't work yet.

The code as it currently stands is in this Git repository; you can consider it licensed under GPLv3. It's really not very user-friendly at this point, though, and rather rough around the edges. Next time, we'll wrap up with some performance numbers. Unless I magically get more spare time in the meantime and/or some epiphany about how to make the encoder faster. :-)
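For the curious, here is a rough CPU-side Python sketch of that normalization step as I understand it from the description above (the GPU version replaces the serial greedy pick with a 256-wide parallel vote through local memory); the names and rounding details are mine, not taken from the Narabu code:

import math

def normalize_histogram(counts, target=4096):
    """Scale raw symbol counts so they sum to exactly `target`,
    keeping every observed symbol at a frequency of at least 1."""
    total = sum(counts)
    # Initial approximation: scale and round, never dropping an observed symbol to zero.
    freqs = [max(1, round(c * target / total)) if c else 0 for c in counts]

    def cost_delta(count, freq, new_freq):
        # Change in total code length (count * -log2(freq/target)) if this
        # symbol's frequency moves from freq to new_freq.
        if count == 0 or new_freq <= 0:
            return math.inf
        return count * (math.log2(freq) - math.log2(new_freq))

    error = sum(freqs) - target
    step = -1 if error > 0 else 1  # shrink if we overshot, grow if we undershot
    for _ in range(abs(error)):
        # Greedily adjust the single symbol with the most entropy gain (or least loss).
        best = min((i for i, c in enumerate(counts) if c),
                   key=lambda i: cost_delta(counts[i], freqs[i], freqs[i] + step))
        freqs[best] += step
    assert sum(freqs) == target
    return freqs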

Antoine Beaupré: October 2017 report: LTS, feed2exec beta, pandoc filters, git mediawiki

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This time I worked on the famous KRACK attack, git-annex, golang and the continuous stream of GraphicsMagick security issues.

WPA & KRACK update

I spent most of my time this month on the Linux WPA code, to backport it to the old (~2012) wpa_supplicant release. I first published a patchset based on the patches shipped after the embargo for the oldstable/jessie release. After feedback from the list, I also built packages for i386 and ARM. I have also reviewed the WPA protocol to make sure I understood the implications of the changes required to backport the patches. For example, I removed the patches touching the WNM sleep mode code as that was introduced only in the 2.0 release. Chunks of code regarding state tracking were also not backported as they are part of the state tracking code introduced later, in 3ff3323. Finally, I still have concerns about the nonce setup in patch #5. In the last chunk, you'll notice peer->tk is reset, so that a new TK can be negotiated. The other approach I considered was to backport 1380fcbd9f ("TDLS: Do not modify RNonce for an TPK M1 frame with same INonce") but I figured I would play it safe and not introduce further variations. I should note that I share Matthew Green's observations regarding the opacity of the protocol. Normally, network protocols are freely available and security researchers like me can easily review them. In this case, I would have needed to read the opaque 802.11i-2004 PDF, which is behind a TOS wall at the IEEE. I ended up reading up on the IEEE_802.11i-2004 Wikipedia article which gives a simpler view of the protocol. But it's a real problem to see such critical protocols developed behind closed doors like this. At Guido's suggestion, I sent the final patch upstream explaining the concerns I had with the patch. I have not, at the time of writing, received any response from upstream about this, unfortunately. I uploaded the fixed packages as DLA 1150-1 on October 31st.

Git-annex

The next big chunk on my list was completing the work on git-annex (CVE-2017-12976) that I started in August. It turns out doing the backport was simpler than I expected, even with my rusty experience with Haskell. Type-checking really helps in doing the right thing, especially considering how Joey Hess implemented the fix: by introducing a new type. So I backported the patch from upstream and notified the security team that the jessie and stretch updates would be similarly easy. I shipped the backport to LTS as DLA-1144-1. I also shared the updated packages for jessie (which required a similar backport) and stretch (which didn't), and Sébastien Delafond published those as DSA 4010-1.

Graphicsmagick

Up next was yet another security vulnerability in the GraphicsMagick stack. This involved the usual deep dive into intricate and sometimes just unreasonable C code to try and fit a round tree in a square sinkhole. I'm always unsure about those patches, but the test suite passes, smoke tests show the vulnerability as fixed, and that's pretty much as good as it gets. The announcement (DLA 1154-1) turned out to be a little special because I had previously noticed that the penultimate announcement (DLA 1130-1) was never sent out. So I made a merged announcement to cover both instead of re-sending the original 3 weeks late, which may have been confusing for our users.

Triage & misc

We always do a bit of triage even when not on frontdesk duty, so I: I also did smaller bits of work on: The latter reminded me of the concerns I have about the long-term maintainability of the golang ecosystem: because everything is statically linked, an update to a core library (say the SMTP library as in CVE-2017-15042, thankfully not affecting LTS) requires a full rebuild of all packages including the library in all distributions. So what would be a simple update in a shared library system could mean an explosion of work on statically linked infrastructures. This is a lot of work which can definitely be error-prone: as I've seen in other updates, some packages (for example the Ruby interpreter) just bit-rot on their own and eventually fail to build from source. We would also have to investigate all packages to see which ones include the library, something which we are not well equipped for at this point. Wheezy was the first release shipping golang packages but at least it's shipping only one... Stretch has shipped with two golang versions (1.7 and 1.8) which will make maintenance ever harder in the long term.
We build our computers the way we build our cities--over time, without a plan, on top of ruins. - Ellen Ullman

Other free software work

This month again, I was busy doing some serious yak shaving operations all over the internet, on top of publishing two of my largest LWN articles to date (2017-10-16-strategies-offline-pgp-key-storage and 2017-10-26-comparison-cryptographic-keycards).

feed2exec beta

Since I announced this new project last month I have released it as a beta and it entered Debian. I have also written useful plugins like the wayback plugin that saves pages on the Wayback machine for eternal archival. The archive plugin can also similarly save pages to the local filesystem. I also added bash completion, expanded unit tests and documentation, fixed default file paths and a bunch of bugs, and refactored the code. Finally, I also started using two external Python libraries instead of rolling my own code: the pyxdg and requests-file libraries, the latter of which I packaged in Debian (and fixed a bug in their test suite). The program is working pretty well for me. The only thing I feel is really missing now is a retry/fail mechanism. Right now, it's a little brittle: any network hiccup will yield an error email, which is readable to me but could be confusing to a new user. Strangely enough, I am particularly having trouble with (local!) DNS resolution that I need to look into, but that is probably unrelated to the software itself. Thankfully, the user can disable those with --loglevel=ERROR to silence WARNINGs. Furthermore, some plugins still have some rough edges. For example, the Transmission integration would probably work better as a distinct plugin instead of a simple exec call, because when it adds new torrents, the output is totally cryptic. That plugin could also leverage more feed parameters to save different files in different locations depending on the feed titles, something that would be hard to do safely with the exec plugin now. I am keeping a steady flow of releases. I wish there was a way to see how effective I am at reaching out with this project, but unfortunately GitLab doesn't provide usage statistics... And I have received only a few comments on IRC about the project, so maybe I need to reach out more like it says in the fine manual. Always feels strange to have to promote your project like it's some new bubbly soap... Next steps for the project are a final review of the API and a production-ready 1.0.0 release. I am also thinking of making a small screencast to show the basic capabilities of the software, maybe with asciinema's upcoming audio support?

Pandoc filters

As I mentioned earlier, I dove again into Haskell programming when working on the git-annex security update. But I also have a small Haskell program of my own - a Pandoc filter that I use to convert the HTML articles I publish on LWN.net into an Ikiwiki-compatible markdown version. It turns out the script was still missing a bunch of stuff: image sizes, proper table formatting, etc. I also worked hard on automating more bits of the publishing workflow by extracting the time from the article, which allowed me to simply extract the full article into an almost final copy just by specifying the article ID. The only thing left is to add tags, and the article is complete. In the process, I learned about new weird Haskell constructs. Take this code, for example:
-- remove needless blockquote wrapper around some tables
--
-- haskell newbie tips:
--
-- @ is the "at-pattern", allows us to define both a name for the
-- construct and inspect the contents at once
--
-- {} is the "empty record pattern": it basically means "match the
-- arguments but ignore the args"
cleanBlock (BlockQuote t@[Table {}]) = t
Here the idea is to remove <blockquote> elements needlessly wrapping a <table>. I can't specify the Table type on its own, because then I couldn't address the table as a whole, only its parts. I could reconstruct the whole table bits by bits, but it wasn't as clean. The other pattern was how to, at last, address multiple string elements, which was difficult because Pandoc treats spaces specially:
cleanBlock (Plain (Strong (Str "Notifications":Space:Str "for":Space:Str "all":Space:Str "responses":_):_)) = []
The last bit that drove me crazy was the date parsing:
-- the "GAByline" div has a date, use it to generate the ikiwiki dates
--
-- this is distinct from cleanBlock because we do not want to have to
-- deal with time there: it is only here we need it, and we need to
-- pass it in here because we do not want to mess with IO (time is I/O
-- in haskell) all across the function hierarchy
cleanDates :: ZonedTime -> Block -> [Block]
-- this mouthful is just the way the data comes in from
-- LWN/Pandoc. there could be a cleaner way to represent this,
-- possibly with a record, but this is complicated and obscure enough.
cleanDates time (Div (_, [cls], _)
                 [Para [Str month, Space, Str day, Space, Str year], Para _])
    | cls == "GAByline" = ikiwikiRawInline (ikiwikiMetaField "date"
                                           (iso8601Format (parseTimeOrError True defaultTimeLocale "%Y-%B-%e,"
                                                           (year ++ "-" ++ month ++ "-" ++ day) :: ZonedTime)))
                        ++ ikiwikiRawInline (ikiwikiMetaField "updated"
                                             (iso8601Format time))
                        ++ [Para []]
-- other elements just pass through
cleanDates time x = [x]
Now that seems just dirty, but it was even worse before. One thing I find difficult in adapting to coding in Haskell is that you need to take the habit of writing smaller functions. The language is really not well adapted to long discourse: it's more about getting small things connected together. Other languages (e.g. Python) discourage this because there's some overhead in calling functions (10 nanoseconds in my tests, but still), whereas functions are a fundamental and important construction in Haskell that are much more heavily optimized. So I constantly need to remind myself to split things up early, otherwise I can't do anything in Haskell. Other languages are more lenient, which does mean my code can be more dirty, but I feel I get things done faster that way. The oddity of Haskell makes it frustrating to work with. It's like doing construction work but you're not allowed to get the floor dirty. When I build stuff, I don't mind things being dirty: I can clean up afterwards. This is especially critical when you don't actually know how to make things clean in the first place, as Haskell will simply not let you do that at all. And obviously, I fought with Monads, or, more specifically, "I/O" or IO in this case. Turns out that getting the current time is IO in Haskell: indeed, it's not a "pure" function that will always return the same thing. But this means that I would have had to change the signature of all the functions that touched time to include IO. I eventually moved the time initialization up into main so that I had only one IO function and moved that timestamp downwards as a simple argument. That way I could keep the rest of the code clean, which seems to be an acceptable pattern. I would of course be happy to get feedback from my Haskell readers (if any) to see how to improve that code. I am always eager to learn.

Git remote MediaWiki

Few people know that there is a MediaWiki remote for Git which allows you to mirror a MediaWiki site as a Git repository. As a disaster recovery mechanism, I have been keeping such a historical backup of the Amateur radio wiki for a while now. This originally started as a homegrown Python script to also convert the contents to Markdown. My theory then was to see if we could switch from MediaWiki to Ikiwiki, but it took so long to implement that I never completed the work. When someone had the weird idea of renaming a page to some impossibly long name on the wiki, my script broke. I tried to look at fixing it and then remembered I also had a mirror running using the Git remote. It turns out it also broke on the same issue and that got me looking into the remote again. I got lost in a zillion issues, including fixing that specific issue, but I especially looked at the possibility of fetching all namespaces because I realized that the remote fetches only a part of the wiki by default. And that drove me to submit namespace support as a patch to the git mailing list. Finally, the discussion came back to how to actually maintain that contrib: in git core or outside? In the end, it looks like I'll be doing some maintenance of that project outside of git, as I was granted access to the GitHub organisation...

Galore Yak Shaving

Then there's the usual hodgepodge of fixes and random things I did over the month.
There is no [web extension] only XUL! - Inside joke

31 October 2017

Paul Wise: FLOSS Activities October 2017

Changes

Issues

Review

Administration
  • Debian: respond to mail debug request, redirect hardware access seeker to guest account, redirect hardware donors to porters, redirect interview seeker to DPL, reboot system with dead service
  • Debian mentors: security updates, reboot
  • Debian wiki: upgrade search db format, remove incorrect bans, whitelist email addresses, disable accounts with bouncing email, update email for accounts with bouncing email
  • Debian website: remove need for a website rebuild
  • Openmoko: restart web server, set web server process limits, install monitoring tool

Sponsors

The talloc/cmocka uploads and the remmina issue were sponsored by my employer. All other work was done on a volunteer basis.

30 October 2017

Russell Coker: Logic of Zombies

Most zombie movies feature shuffling hordes which prefer to eat brains but also generally eat any human flesh available. Because in most movies (pretty much everything but the 28 Days Later series [1]) zombies move slowly, they rely on flocking to be dangerous. Generally the main way of killing zombies is severe head injury, so any time zombies succeed in their aim of eating brains they won't get a new recruit for their horde. The TV series iZombie [2] has zombies that are mostly like normal humans as long as they get enough brains and are smart enough to plan to increase their horde. But most zombies don't have much intelligence and show no signs of restraint so can't plan to recruit new zombies. In 28 Days Later the zombies aren't smart enough to avoid starving to death, in contrast to most zombie movies where the zombies aren't smart enough to find food other than brains but seem to survive on magic. For a human to become a member of a shuffling horde of zombies they need to be bitten but not killed. They then need to either decide to refrain from a method of suicide that precludes becoming a zombie (gunshot to the head or jumping off a building) or be unable to go through with it. Most zombie movies (I think everything other than 28 Days Later) have the transition process taking some hours, so there's plenty of time for an infected person to kill themself or be killed by others. Then they need to avoid having other humans notice that they are infected and kill them before they turn into a zombie. This doesn't seem likely to be a common occurrence. It doesn't seem likely that shuffling zombies (as opposed to the zombies in 28 Days Later or iZombie) would be able to form a horde. In the unlikely event that shuffling zombies managed to form a horde that police couldn't deal with, I expect that earth-moving machinery could deal with them quickly. The fact that people don't improvise armoured vehicles capable of squashing zombies is almost as ridiculous as all the sci-fi movies that feature infantry. It's obvious that logic isn't involved in the choice of shuffling zombies. It's more of a choice of whether to have the jump-scare aspect of 28 Days Later, the human-drama aspect of zombies that pass for human in iZombie, or the terror of a slowly approaching horrible fate that you can't escape in most zombie movies. I wonder if any of the music streaming services have a horror-movie playlist that has screechy music to set your nerves on edge without the poor plot of a horror movie. Could listening to scary music in the dark become a thing?

27 October 2017

Sam Hartman: How Can Debian Turn Disagreement into Something that Makes us Stronger

Recently, when asked to engage with the Debian Technical Committee, a maintainer chose to orphan their package rather than discuss the issue brought before the committee. In another decision earlier this year, a maintainer orphaned their package indicating a lack of respect for the approach being taken and the process. Unfortunately, this joins an ever longer set of issues where people walk away from the TC process disheartened and upset.

For me personally the situations where maintainers walked away from the process were hard. People I respect and admire were telling me that they were unwilling to participate in our dispute resolution process. In one case the maintainer explicitly did not respect a process I had been heavily involved in. As someone who values understanding and building a team, I feel disappointed and hurt thinking about this.

Unfortunately, I don't feel much better when I consider several of the other issues brought before the TC. In a number of cases, the process has concluded, but it feels like a pyrrhic victory. We have an answer, but often we never reached an understanding of one of the key stakeholder's positions. People are sufficiently disappointed or frustrated that they reduce their involvement. That is, the answer is not one they can live with. Sometimes, I'm not even sure that answering the question was worth the cost.

My initial suspicion as I begin to consider issues that have come before the TC in the last few years is that as a necessary consequence of how the TC operates, we will drive a significant chunk of those who come to us away from our community. I will not be part of that. If I end up eventually agreeing with this suspicion, I will either work to improve the TC process or find a different way to contribute to the project that is more aligned with my goals.


Our Community is More Important than Technical Correctness

I firmly believe that Debian's strength is its community. Distributions, upstreams, free-software advocates, corporate interests, and everything in between work together. Because we are working on a concrete product, we actually have to understand how our needs conflict and complement each other. Also, like any organization, our ability to get things done is dependent on our contributors and the time we all put in.

I cannot think of anything more harmful we can do than drive away a constructive contributor. Technical quality is important: many of us will eventually go away if quality drops too much. However, Debian itself can be improved incrementally. Yes, new people do join Debian. However, it takes a lot of new people to make up for a situation where everyone involved is frustrated, and some leave and tell their friends to stay away from Debian.

When a Disagreement Reaches the TC

When a disagreement reaches the TC, we know a few things. First, people have not been able to resolve it on their own. Second, the disagreement is important enough to at least one of the parties to ask for outside help.

If the TC were to simply announce a decision, it is very likely that at least one party would feel frustrated and disappointed. If the decision is important to someone it almost always means that they care about the outcome. If previous efforts have not reached agreement then there is an outcome that is undesired. While it's possible under the constitution that two parties could bring an issue to the TC mutually where the dynamics could be different, this does not happen in practice.

When we "lose"

When a decision making process decides to select one of my undesired outcomes, I have strong negative feelings. First, there is the technical result itself. Because of an adverse decision it is generally harder for me to do my technical work, or some ethical position or group I care about is disadvantaged. However, the strongest feelings are related to needs about my place in the community, not a direct response to the technical decision. I worry about whether issues I care about will be valued in the future; I would likely feel weary or scared thinking of contributing in an environment where issues that matter to me aren't going to be considered. I tend to feel frustrated if my position was not adequately considered this time around.

Often, the strongest feelings do not stem from how I am impacted by the decision itself. Thus, if my more general needs are addressed, I will feel a lot better about not having my preferred outcome selected. Confidence that my position was understood is the single biggest factor in being able to accept the result. If the community took the time to understand me, then I have confidence that I am valued. I can advocate for change and be part of the ongoing discussion. If I understand the other positions then I can understand that the position was not arbitrary. Understanding the other positions is also a prerequisite to seeing how things I value were considered, especially when the tradeoff did not ultimately favor these values.

My conversations with others about their experience with conflict suggest my feelings and needs are common.


However, the TC focuses almost exclusively on the technical matter before it. I don't think the TC has done a good job of actually understanding maintainers or the people bringing issues to us. Especially when we're fairly sure we understand what the ultimate decision will be, we focus on getting to that decision. Of course part of actually understanding someone is considering what they have to say. Even when you have high confidence in an outcome, if you want someone to feel understood, you do have to listen at a point where you can consider what they bring forward.

Frustration of Being Questioned

On the other side, it's frustrating to have your decisions questioned. Reviewing a decision takes time. Even if the TC agrees with a maintainer, asking that maintainer to sit through a long process can create frustrations as deep as any I discussed in the previous section.

So the process is doubly bad for maintainers. Someone can bring up an issue which requires precious time. Then if the TC decides against the maintainer, we have all the problems I discussed previously.

I acknowledge this. A good process must respect the maintainer's time. However, it must respect the community members who disagree with maintainers. Everyone who brings an issue to the TC is a developer. They have contributed significantly to our community and are part of our team. Yes, we need to protect the process against abuse. But in a very real sense if we aren't willing to consider an issue and we aren't willing to engage with someone, understand why they think the issue should be considered, and work until they understand why we made our decision, we are saying we do not respect them enough to take that time. We should expect that to drive people away.

I wonder if the solution here involves a two-phase process. During the first phase we work with someone bringing an issue to us until we understand it. We either engage the maintainer at that point, spend the time to help the person bringing an issue to us understand why we are not engaging the maintainer, or decide the issue is abusive of the process or misdirected. For this to work maintainers would have to trust us enough to actually be willing to sit back and not spend a lot of their time.

Sam, It's Just Personalities

One criticism I've heard as I discuss this is that a lot of the negative feelings surround interpersonal conflict and are personality conflicts. Yes, personalities matter. Yes, we're still healing from the systemd discussion.

However, as a community, I think we need to figure out how we respect the inevitable personality conflicts. I can think of one or two developers I'm still upset with years after an issue. It happens. Perhaps if I were a maintainer when one of these developers raised a concern with my packages, I could ask that someone else be a primary advocate for an issue. Similarly, if a developer wanted to address an issue with me as a maintainer but would prefer not to deal with me, perhaps we could figure out some way that we could see if someone else shared the concern.

Dropping a Package is Sometimes an Answer

My concerns were sparked by instances where a maintainer dropped a package rather than interact with the TC. Sometimes that can be a healthy step in a transition. At Debconf this year, Enrico Zini gave a talk on consent culture in Debian. One of the key points of his talk is that we can find over time that what we're being asked to do in Debian is no longer consistent with our boundaries. If it isn't helping us move forward, if it isn't fun, then it is likely time to stop.

In that sense, approaching disagreement can be a great time to take stock and ask whether we still enjoy maintaining a package. If we don't, then stepping aside and letting others take it on can be a great way that we and they can be happier. I don't really think that's what happened in these instances though. Based on comments made to me, this sounded more like a lack of faith in being treated well or in our ability as a committee.

Even so, Enrico's talk applies in a number of ways here. He suggests that the approach we should take when someone is done and steps away is to thank them. I couldn't agree more. From the depth of my heart I offer thanks: Thank you for taking care of yourself and stepping away when it is no longer fun. Thank you for all the effort you have put into these packages. I hope to work with you on great things in the future.

25 October 2017

Petter Reinholdtsen: Locating IMDB IDs of movies in the Internet Archive using Wikidata

Recently, I needed to automatically check the copyright status of a set of The Internet Movie Database (IMDB) entries, to figure out which one of the movies they refer to can be freely distributed on the Internet. This proved to be harder than it sounds. IMDB for sure lists movies without any copyright protection, where the copyright protection has expired or where the movie is licensed using a permissive license like one from Creative Commons. These are mixed with copyright protected movies, and there seems to be no way to separate these classes of movies using the information in IMDB. First I tried to look up entries manually in IMDB, Wikipedia and The Internet Archive, to get a feel for how to do this. It is hard to know for sure using these sources, but it should be possible to be reasonably confident a movie is "out of copyright" with a few hours work per movie. As I needed to check almost 20,000 entries, this approach was not sustainable. I simply can not work around the clock for about 6 years to check this data set. I asked the people behind The Internet Archive if they could introduce a new metadata field in their metadata XML for IMDB ID, but was told that they leave it completely to the uploaders to update the metadata. Some of the metadata entries had IMDB links in the description, but I found no way to download all metadata files in bulk to locate those ones and put that approach aside. In the process I noticed several Wikipedia articles about movies had links to both IMDB and The Internet Archive, and it occurred to me that I could use the Wikipedia RDF data set to locate entries with both, to at least get a lower bound on the number of movies on The Internet Archive with an IMDB ID. This is useful based on the assumption that movies distributed by The Internet Archive can be legally distributed on the Internet. With some help from the RDF community (thank you DanC), I was able to come up with this query to pass to the SPARQL interface on Wikidata:
SELECT ?work ?imdb ?ia ?when ?label
WHERE
{
  ?work wdt:P31/wdt:P279* wd:Q11424.
  ?work wdt:P345 ?imdb.
  ?work wdt:P724 ?ia.
  OPTIONAL {
        ?work wdt:P577 ?when.
        ?work rdfs:label ?label.
        FILTER(LANG(?label) = "en").
  }
}
If I understand the query right, for every film entry anywhere in Wikipedia, it will return the IMDB ID and The Internet Archive ID, and when the movie was released and its English title, if either or both of the latter two are available. At the moment the result set contains 2338 entries. Of course, it depends on volunteers including both correct IMDB and The Internet Archive IDs in the Wikipedia articles for the movie. It should be noted that the result will include duplicates if the movie has entries in several languages. There are some bogus entries, either because The Internet Archive ID contains a typo or because the movie is not available from The Internet Archive. I did not verify the IMDB IDs, as I am unsure how to do that automatically. I wrote a small Python script to extract the data set from Wikidata and check if the XML metadata for the movie is available from The Internet Archive, and after around 1.5 hours it produced a list of 2097 free movies and their IMDB IDs. In total, 171 entries in Wikidata lack the referred Internet Archive entry. I assume the 70 "disappearing" entries (i.e. 2338-2097-171) are duplicate entries. This is not too bad, given that The Internet Archive reports to contain 5331 feature films at the moment, but it also means more than 3000 movies are missing on Wikipedia or are missing the pair of references on Wikipedia. I was curious about the distribution by release year, and made a little graph to show how the amount of free movies is spread over the years. I expect the relative distribution of the remaining 3000 movies to be similar. If you want to help, and want to ensure Wikipedia can be used to cross reference The Internet Archive and The Internet Movie Database, please make sure entries like this are listed under the "External links" heading on the Wikipedia article for the movie:
* {{Internet Archive film|id=FightingLady}}
* {{IMDb title|id=0036823|title=The Fighting Lady}}
Please verify the links on the final page, to make sure you did not introduce a typo. Here is the complete list, if you want to correct the 171 identified Wikipedia entries with broken links to The Internet Archive: Q1140317, Q458656, Q458656, Q470560, Q743340, Q822580, Q480696, Q128761, Q1307059, Q1335091, Q1537166, Q1438334, Q1479751, Q1497200, Q1498122, Q865973, Q834269, Q841781, Q841781, Q1548193, Q499031, Q1564769, Q1585239, Q1585569, Q1624236, Q4796595, Q4853469, Q4873046, Q915016, Q4660396, Q4677708, Q4738449, Q4756096, Q4766785, Q880357, Q882066, Q882066, Q204191, Q204191, Q1194170, Q940014, Q946863, Q172837, Q573077, Q1219005, Q1219599, Q1643798, Q1656352, Q1659549, Q1660007, Q1698154, Q1737980, Q1877284, Q1199354, Q1199354, Q1199451, Q1211871, Q1212179, Q1238382, Q4906454, Q320219, Q1148649, Q645094, Q5050350, Q5166548, Q2677926, Q2698139, Q2707305, Q2740725, Q2024780, Q2117418, Q2138984, Q1127992, Q1058087, Q1070484, Q1080080, Q1090813, Q1251918, Q1254110, Q1257070, Q1257079, Q1197410, Q1198423, Q706951, Q723239, Q2079261, Q1171364, Q617858, Q5166611, Q5166611, Q324513, Q374172, Q7533269, Q970386, Q976849, Q7458614, Q5347416, Q5460005, Q5463392, Q3038555, Q5288458, Q2346516, Q5183645, Q5185497, Q5216127, Q5223127, Q5261159, Q1300759, Q5521241, Q7733434, Q7736264, Q7737032, Q7882671, Q7719427, Q7719444, Q7722575, Q2629763, Q2640346, Q2649671, Q7703851, Q7747041, Q6544949, Q6672759, Q2445896, Q12124891, Q3127044, Q2511262, Q2517672, Q2543165, Q426628, Q426628, Q12126890, Q13359969, Q13359969, Q2294295, Q2294295, Q2559509, Q2559912, Q7760469, Q6703974, Q4744, Q7766962, Q7768516, Q7769205, Q7769988, Q2946945, Q3212086, Q3212086, Q18218448, Q18218448, Q18218448, Q6909175, Q7405709, Q7416149, Q7239952, Q7317332, Q7783674, Q7783704, Q7857590, Q3372526, Q3372642, Q3372816, Q3372909, Q7959649, Q7977485, Q7992684, Q3817966, Q3821852, Q3420907, Q3429733, Q774474
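For reference, here is a minimal Python sketch of the kind of script described above. This is my reconstruction, not the author's actual script; it uses the public Wikidata SPARQL endpoint and the Internet Archive metadata API, where an empty JSON response indicates that the item does not exist.

#!/usr/bin/env python3
# Rough sketch: list Wikidata films that have both an IMDB ID and an
# Internet Archive ID, then check whether the Internet Archive item exists.
import requests

QUERY = """
SELECT ?work ?imdb ?ia WHERE {
  ?work wdt:P31/wdt:P279* wd:Q11424.
  ?work wdt:P345 ?imdb.
  ?work wdt:P724 ?ia.
}
"""

def wikidata_films():
    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "free-movie-check/0.1"})
    r.raise_for_status()
    for row in r.json()["results"]["bindings"]:
        yield row["imdb"]["value"], row["ia"]["value"]

def archive_item_exists(ia_id):
    # The Internet Archive exposes item metadata as JSON; an empty body
    # ("{}") means there is no item with that identifier.
    r = requests.get("https://archive.org/metadata/%s" % ia_id)
    return bool(r.ok and r.json())

if __name__ == "__main__":
    for imdb, ia in wikidata_films():
        if archive_item_exists(ia):
            print("free movie: imdb=%s ia=%s" % (imdb, ia))
        else:
            print("missing on archive.org: imdb=%s ia=%s" % (imdb, ia))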

23 October 2017

Patrick Schoenfeld: Testing javascript in a dockerized rails application with rspec-rails

The other day I wanted to add support for tests of javascript functionality in a (dockerized) rails application using rspec-rails. Since rails 5.1 includes system tests with niceties like automatically taking a screenshot on failed tests, I hoped for a way to benefit from these features without changing to another test framework. Lucky me: only recently the authors of rspec-rails added support for so-called system specs. There is not much documentation so far (but there is a lot of useful information in the corresponding bug report #1838, and a friendly guy named Thomas Walpole (@twalpole) is helpfully answering questions in that issue). To make things a little bit more complicated: the application in question usually runs in a docker container and thus the tests of the application are also run in a docker container. I didn't want to change this, so here is what it took me to get this running.

Overview

Let's see what we want to achieve exactly: From a technical point of view, we will have the application under test (AUT) and the tests in one container (let's call it: web) and we will need another container running a javascript-capable browser (let's call it: browser). Thus we need the tests to drive a remotely running browser (at least when running in a docker environment), which needs to access the application under a different address than usual. Namely an address reachable by the chrome-container, since it will not be reachable via 127.0.0.1 as is (rightfully) assumed by default. If we want Warden authentication stubbing to work (as we do, since our application uses Devise) and transactional fixtures as well (e.g. rails handling database cleanup between tests without the database_cleaner gem) we also need to ensure that the application server is being started by the tests and the tests are actually run against that server. Otherwise we might run into problems.

Getting the containers ready

Assuming you already have a container setup (and are using docker-compose like we do) there is not that much to change on the docker front. Basically you need to add a new service called chrome, point it to an appropriate image and add a link to it in your existing web-container. I've decided to use standalone-chrome for the browser part, for which there are docker images provided by the selenium project (they also have images for other browsers). Kudos for that.
...
services:
  chrome:
    image: selenium/standalone-chrome
   
  web:
   
    links:
      - chrome
The link ensures that the chrome instance is available before we run the tests and that the web-container is able to resolve the name of this container. Unfortunately this is not true the other way round, so we need some magic in our test code to find out the ip-address of the web-container. More on this later. Other than that, you probably want to configure a volume for you to be able to access the screenshots, which get saved to tmp/screenshots
in the application directory.

Preparing the application for running system tests

There is a bit more to do on the application side. The steps are roughly:
  1. Add necessary depends / version constraints
  2. Register a driver for the remote chrome
  3. Configure capybara to use the appropriate host for your tests (and your configured driver)
  4. Add actual tests with type: :system and js: true
Let's walk them through.

Add necessary depends

What we need is the following: The required features are already part of 3.7.0, but this version is the version I used and it contains a bugfix, which may or may not be relevant. One comment about the rails version: for the tests to properly work it's vital to have puma use certain settings. Rails 5.1.4 (the version released at the time of writing this) uses the settings from config/puma.rb, which most likely collide with the necessary settings. You can ensure these settings yourself or use rails from branch 5-1-stable, which includes this change. I decided on the latter and pinned my Gemfile to the then current commit.

Register a driver for the remote chrome

To register the required driver, you'll have to add some lines to your rails_helper.rb:
if ENV['DOCKER']
  selenium_url =
      "http://chrome:4444/wd/hub"
  Capybara.register_driver :selenium_remote do |app|
    Capybara::Selenium::Driver.new(app,
         { :url => selenium_url, :browser => :remote, desired_capabilities: :chrome })
  end
end
Note that I added those lines conditionally (since I still want to be able to use a local chrome via chromedriver), namely if an environment variable DOCKER is set. We defined that environment variable in our Dockerfile and thus you might need to adapt this to your case. Also note that the selenium_url is hard-coded. You could very well take a different approach, e.g. using an externally specified SELENIUM_URL, but ultimately the requirement is that the driver needs to know that the chrome instance is running on host chrome, port 4444 (the container's default).

Configure capybara to use the appropriate host and driver

The next step is to ensure that javascript-requiring system tests are actually run with the given driver and use the right host. To achieve that we need to add a before-hook to the corresponding tests, or we can configure rspec to always include such a hook by modifying the rspec configuration in rails_helper.rb like this:
RSpec.configure do |config|
  ...
  config.before(:each, type: :system, js: true) do
    if ENV['DOCKER']
      driven_by :selenium_remote
      ip = Socket.ip_address_list.detect { |addr| addr.ipv4_private? }.ip_address
      host! "http://#{ip}:#{Capybara.server_port}"
    else
      driven_by :headless_chrome
    end
  end
end
Note the part with the ip-address: it tries to find an IPv4 private address for the web-container (the container running the tests) to ensure the chrome-container uses this address to access the application. The Capybara.server_port is important here, since it will correspond to the puma instance launched by the tests. That heuristic (first ipv4 private address) works for us at the moment, but it might not work for you. It is basically a workaround for the fact that I couldn't get web resolvable for the chrome container, which may be fixable on the docker side, but I was too lazy to investigate that further. If you change it: just make sure the host! method uses a URI pointing to an address of the web-container that is reachable from the chrome-container.

Define tests with type: :system and js: true

Last but certainly not least, you need actual tests of the required type and with or without js: true. This can be achieved by creating test files starting like this:
RSpec.feature "Foobar", type: :system, js: true do
Since the new rspec-style system tests are modelled on the feature specs that existed previously, the rest of the test is written exactly as described for feature specs. Run the tests To run the tests, a command line like the following should do: docker-compose run web rspec It won't make much noise about running the tests against chrome unless something fails; in that case you'll see a message telling you where the screenshot has been placed. Troubleshooting Below are some hints about problems I've seen while setting this up: Test failing, screenshot shows login screen In that case puma might be configured wrongly, or you are not using transactional fixtures. See the hints above about the rails version to use, which also include some pointers to helpful explanations. Note that rspec-rails by default suppresses the puma startup output, as it clutters the test output. For debugging purposes it might be helpful to change that by adding the following line to your tests:
ActionDispatch::SystemTesting::Server.silence_puma = false
Error message: Unable to find chromedriver This indicates that your driver is not configured properly, because the default for system tests is to be driven_by selenium, which tries to spawn its own chrome instance and is only suitable for non-dockerized tests. Check that your tests are marked as js: true (if you followed the instructions above) and that you properly added the before-hook to your rspec configuration. Collisions with VCR If you happen to have tests that make use of the vcr gem, you might see it complaining about not knowing what to do with the requests between the driver and the chrome instance. You can fix this by telling VCR to ignore those requests, adding a line where you configured VCR:
VCR.configure do |config|
  # required so we don't collide with capybara tests
  config.ignore_hosts 'chrome'
  ...
end

Russ Allbery: Review: Algorithms to Live By

Review: Algorithms to Live By, by Brian Christian & Tom Griffiths
Publisher: Henry Holt and Company
Copyright: April 2016
ISBN: 1-62779-037-3
Format: Kindle
Pages: 255
Another read for the work book club. This was my favorite to date, apart from the books I recommended myself. One of the foundations of computer science as a field of study is research into algorithms: how do we solve problems efficiently using computer programs? This is a largely mathematical field, but it's often less about ideal or theoretical solutions and more about making the most efficient use of limited resources and arriving at an adequate, if not perfect, answer. Many of these problems are either day-to-day human problems or are closely related to them; after all, the purpose of computer science is to solve practical problems with computers. The question asked by Algorithms to Live By is "can we reverse this?": can we learn lessons from computer science's approach to problems that would help us make day-to-day decisions? There's a lot of interesting material in the eleven chapters of this book, but there's also an amusing theme: humans are already very good at this. Many chapters start with an examination of algorithms and mathematical analysis of problems, dive into a discussion of how we can use those results to make better decisions, then talk about studies of the decisions humans actually make... and discover that humans are already applying ad hoc versions of the best algorithms we've come up with, given the constraints of typical life situations. It tends to undermine the stated goal of the book. Thankfully, it in no way undermines interesting discussion of general classes of problems, how computer science has tackled them, and what we've learned about the mathematical and technical shapes of those problems. There's a bit less self-help utility here than I think the authors had intended, but lots of food for thought. (That said, it's worth considering whether this congruence is less because humans are already good at this and more because our algorithms are designed from human intuition. Maybe our best algorithms just reflect human thinking. In some cases we've checked our solutions against mathematical ideals, but in other cases they're still just our best guesses to date.) This is the sort of a book where a chapter listing is an important part of the review. The areas of algorithms discussed here are optimal stopping, explore/exploit decisions (when to go with the best thing you've found and when to look for something better), sorting, caching, scheduling, Bayes's rule (and prediction in general), overfitting when building models, relaxation (solving an easier problem than your actual problem), randomized algorithms, a collection of networking algorithms, and finally game theory. Each of these has useful insights and thought-provoking discussion of how these sometimes-theoretical concepts map surprisingly well onto daily problems. The book concludes with a discussion of "computational kindness": an encouragement to reduce the required computation and complexity penalty for both yourself and the people you interact with. If you have a computer science background (as I do), many of these will be familiar concepts, and you might be dubious that a popularization would tell you much that's new. Give this book a shot, though; the analogies are less stretched than you might fear, and the authors are both careful and smart about how they apply these principles. This book passes with flying colors a key sanity check: the chapters on topics that I know well or have thought about a lot make few or no obvious errors and say useful and important things. 
For example, the scheduling chapter, which unsurprisingly is about time management, surpasses more than half of the time management literature by jumping straight to the heart of most time management problems: if you're going to do everything on a list, it rarely matters the order in which you do it, so the hardest scheduling problems are about deciding what not to do rather than deciding order. The point in the book where the authors won my heart completely was in the chapter on Bayes's rule. Much of the chapter is about Bayesian priors, and how one's knowledge of past events is a vital part of analysis of future probabilities. The authors then discuss the (in)famous marshmallow experiment, in which children are given one marshmallow and told that if they refrain from eating it until the researcher returns, they'll get two marshmallows. Refraining from eating the marshmallow (delayed gratification, in the psychological literature) was found to be associated with better life outcomes years down the road. This experiment has been used and abused for years for all sorts of propaganda about how trading immediate pleasure for future gains leads to a successful life, and how failure in life is because of inability to delay gratification. More evil analyses have (of course) tied that capability to ethnicity, with predictably racist results. I have kind of a thing about the marshmallow experiment. It's a topic that reliably sends me off into angry rants. Algorithms to Live By is the only book I have ever read to mention the marshmallow experiment and then apply the analysis that I find far more convincing. This is not a test of innate capability in the children; it's a test of their Bayesian priors. When does it make perfect sense to eat the marshmallow immediately instead of waiting for a reward? When their past experience tells them that adults are unreliable, can't be trusted, disappear for unpredictable lengths of time, and lie. And, even better, the authors supported this analysis with both a follow-up study I hadn't heard of before and with the observation that some children would wait for some time and then "give in." This makes perfect sense if they were subconsciously using a Bayesian model with poor priors. This is a great book. It may try a bit too hard in places (applicability of the math of optimal stopping to everyday life is more contingent and strained than I think the authors want to admit), and some of this will be familiar if you've studied algorithms. But the writing is clear, succinct, and very well-edited. No part of the book outlives its welcome; the discussion moves right along. If you find yourself going "I know all this already," you'll still probably encounter a new concept or neat explanation in a few more pages. And sometimes the authors make connections that never would have occurred to me but feel right in retrospect, such as relating exponential backoff in networking protocols to choosing punishments in the criminal justice system. Or the realization that our modern communication world is not constantly connected, it's constantly buffered, and many of us are suffering from the characteristic signs of buffer bloat. I don't think you have to be a CS major, or know much about math, to read this book. There is a lot of mathematical details in the end notes if you want to dive in, but the main text is almost always readable and clear, at least so far as I could tell (as someone who was a CS major and has taken a lot of math, so a grain of salt may be indicated). 
And it still has a lot to offer even if you've studied algorithms for years. The more I read of this book, the more I liked it. Definitely recommended if you like reading this sort of analysis of life. Rating: 9 out of 10

22 October 2017

Hideki Yamane: openSUSE.Asia Summit 2017 in Tokyo

This weekend a large typhoon is approaching Japan; however, I went to UEC (The University of Electro-Communications) to give a talk at openSUSE.Asia Summit 2017 in Tokyo.

"... hey, wait. openSUSE? Are you using openSUSE?"

Honestly, no, I'm not. My talk was "openSUSE tools on Debian" - the only session about Debian at that conference :)

Photo by Youngbin Han (Ubuntu Korea Loco), thanks!


In Debian we distribute some tools that come from openSUSE - OBS (Open Build Service) and Snapper - and I'm now working on openQA. So it was a good chance to get in contact with upstream (openSUSE) people, and I got some hints from them, thanks!


openSUSE tools on Debian from Hideki Yamane

17 October 2017

Antoine Beaupr : A comparison of cryptographic keycards

An earlier article showed that private key storage is an important problem to solve in any cryptographic system and established keycards as a good way to store private key material offline. But which keycard should we use? This article examines the form factor, openness, and performance of four keycards to try to help readers choose the one that will fit their needs. I have personally been using a YubiKey NEO, since a 2015 announcement on GitHub promoting two-factor authentication. I was also able to hook up my SSH authentication key into the YubiKey's 2048-bit RSA slot. It seemed natural to move the other subkeys onto the keycard, provided that performance was sufficient. The mail client that I use (Notmuch) blocks when decrypting messages, which could be a serious problem on large email threads from encrypted mailing lists. So I built a test harness and got access to some more keycards: I bought an FST-01 from its creator, Yutaka Niibe, at the last DebConf and Nitrokey donated a Nitrokey Pro. I also bought a YubiKey 4 when I got the NEO. There are of course other keycards out there, but those are the ones I could get my hands on. You'll notice none of those keycards have a physical keypad to enter passwords, so they are all vulnerable to keyloggers that could extract the key's PIN. Keep in mind, however, that even with the PIN, an attacker could only ask the keycard to decrypt or sign material but not extract the key that is protected by the card's firmware.

Form factor The Nitrokey Pro, YubiKey NEO (worn out), YubiKey 4, and FST-01 The four keycards have similar form factors: they all connect to a standard USB port, although both YubiKey keycards have a capacitive button by which the user triggers two-factor authentication and the YubiKey 4 can also require a button press to confirm private key use. The YubiKeys feel sturdier than the other two. The NEO has withstood two years of punishment in my pockets along with the rest of my "real" keyring and there is only minimal wear on the keycard in the picture. It's also thinner so it fits well on the keyring. The FST-01 stands out from the other two with its minimal design. Out of the box, the FST-01 comes without a case, so the circuitry is exposed. This is deliberate: one of its goals is to be as transparent as possible, both in terms of software and hardware design and you definitely get that feeling at the physical level. Unfortunately, that does mean it feels more brittle than other models: I wouldn't carry it in my pocket all the time, although there is a case that may protect the key a little better, but it does not provide an easy way to hook it into a keyring. In the group picture above, the FST-01 is the pink plastic thing, which is a rubbery casing I received along with the device when I got it. Notice how the USB connectors of the YubiKeys differ from the other two: while the FST-01 and the Nitrokey have standard USB connectors, the YubiKey has only a "half-connector", which is what makes it thinner than the other two. The "Nano" form factor takes this even further and almost disappears in the USB port. Unfortunately, this arrangement means the YubiKey NEO often comes loose and falls out of the USB port, especially when connected to a laptop. On my workstation, however, it usually stays put even with my whole keyring hanging off of it. I suspect this adds more strain to the host's USB port but that's a tradeoff I've lived with without any noticeable wear so far. Finally, the NEO has this peculiar feature of supporting NFC for certain operations, as LWN previously covered, but I haven't used that feature yet. The Nitrokey Pro looks like a normal USB key, in contrast with the other two devices. It does feel a little brittle when compared with the YubiKey, although only time will tell how much of a beating it can take. It has a small ring in the case so it is possible to carry it directly on your keyring, but I would be worried the cap would come off eventually. Nitrokey devices are also two times thicker than the Yubico models which makes them less convenient to carry around on keyrings.

Open and closed designs The FST-01 is as open as hardware comes, down to the PCB design available as KiCad files in this Git repository. The software running on the card is the Gnuk firmware that implements the OpenPGP card protocol, but you can also get it with firmware implementing a true random number generator (TRNG) called NeuG (pronounced "noisy"); the device is programmable through a standard Serial Wire Debug (SWD) port. The Nitrokey Start model also runs the Gnuk firmware. However, the Nitrokey website announces only ECC and RSA 2048-bit support for the Start, while the FST-01 also supports RSA-4096. Nitrokey's founder Jan Suhr, in a private email, explained that this is because "Gnuk doesn't support RSA-3072 or larger at a reasonable speed". Its devices (the Pro, Start, and HSM models) use a similar chip to the FST-01: the STM32F103 microcontroller. Nitrokey Pro with STM32F103TBU6 MCU Nitrokey also publishes its hardware designs, on GitHub, which shows the Pro is basically a fork of the FST-01, according to the ChangeLog. I opened the case to confirm it was using the STM MCU, something I should warn you against; I broke one of the pins holding it together when opening it so now it's even more fragile. But at least, I was able to confirm it was built using the STM32F103TBU6 MCU, like the FST-01. Nitrokey back side But this is where the comparison ends: on the back side, we find a SIM card reader that holds the OpenPGP card that, in turn, holds the private key material and does the cryptographic operations. So, in effect, the Nitrokey Pro is really a evolution of the original OpenPGP card readers. Nitrokey confirmed the OpenPGP card featured in the Pro is the same as the one shipped by the Free Software Foundation Europe (FSFE): the BasicCard built by ZeitControl. Those cards, however, are covered by NDAs and the firmware is only partially open source. This makes the Nitrokey Pro less open than the FST-01, but that's an inevitable tradeoff when choosing a design based on the OpenPGP cards, which Suhr described to me as "pretty proprietary". There are other keycards out there, however, for example the SLJ52GDL150-150k smartcard suggested by Debian developer Yves-Alexis Perez, which he prefers as it is certified by French and German authorities. In that blog post, he also said he was experimenting with the GPL-licensed OpenPGP applet implemented by the French ANSSI. But the YubiKey devices are even further away in the closed-design direction. Both the hardware designs and firmware are proprietary. The YubiKey NEO, for example, cannot be upgraded at all, even though it is based on an open firmware. According to Yubico's FAQ, this is due to "best security practices": "There is a 'no upgrade' policy for our devices since nothing, including malware, can write to the firmware." I find this decision questionable in a context where security updates are often more important than trying to design a bulletproof design, which may simply be impossible. And the YubiKey NEO did suffer from critical security issue that allowed attackers to bypass the PIN protection on the card, which raises the question of the actual protection of the private key material on those cards. According to Niibe, "some OpenPGP cards store the private key unencrypted. It is a common attitude for many smartcard implementations", which was confirmed by Suhr: "the private key is protected by hardware mechanisms which prevent its extraction and misuse". He is referring to the use of tamper resistance. 
After that security issue, there was no other option for YubiKey NEO users than to get a new keycard (for free, thankfully) from Yubico, which also meant discarding the private key material on the key. For OpenPGP keys, this may mean having to bootstrap the web of trust from scratch if the keycard was responsible for the main certification key. But at least the NEO is running free software based on the OpenPGP card applet and the source is still available on GitHub. The YubiKey 4, on the other hand, is now closed source, which was controversial when the new model was announced last year. It led the main Linux Foundation system administrator, Konstantin Ryabitsev, to withdraw his endorsement of Yubico products. In response, Yubico argued that this approach was essential to the security of its devices, which are now based on "a secure chip, which has built-in countermeasures to mitigate a long list of attacks". In particular, it claims that:
A commercial-grade AVR or ARM controller is unfit to be used in a security product. In most cases, these controllers are easy to attack, from breaking in via a debug/JTAG/TAP port to probing memory contents. Various forms of fault injection and side-channel analysis are possible, sometimes allowing for a complete key recovery in a shockingly short period of time.
While I understand those concerns, they eventually come down to the trust you have in an organization. Not only do we have to trust Yubico, but also hardware manufacturers and designs they have chosen. Every step in the hidden supply chain is then trusted to make correct technical decisions and not introduce any backdoors. History, unfortunately, is not on Yubico's side: Snowden revealed the example of RSA security accepting what renowned cryptographer Bruce Schneier described as a "bribe" from the NSA to weaken its ECC implementation, by using the presumably backdoored Dual_EC_DRBG algorithm. What makes Yubico or its suppliers so different from RSA Security? Remember that RSA Security used to be an adamant opponent to the degradation of encryption standards, campaigning against the Clipper chip in the first crypto wars. Even if we trust the Yubico supply chain, how can we trust a closed design using what basically amounts to security through obscurity? Publicly auditable designs are an important tradition in cryptography, and that principle shouldn't stop when software is frozen into silicon. In fact, a critical vulnerability called ROCA disclosed recently affects closed "smartcards" like the YubiKey 4 and allows full private key recovery from the public key if the key was generated on a vulnerable keycard. When speaking with Ars Technica, the researchers outlined the importance of open designs and questioned the reliability of certification:
Our work highlights the dangers of keeping the design secret and the implementation closed-source, even if both are thoroughly analyzed and certified by experts. The lack of public information causes a delay in the discovery of flaws (and hinders the process of checking for them), thereby increasing the number of already deployed and affected devices at the time of detection.
This issue with open hardware designs seems to be a recurring topic of conversation on the Gnuk mailing list. For example, there was a discussion in September 2017 regarding possible hardware vulnerabilities in the STM MCU that would allow extraction of encrypted key material from the key. Niibe referred to a talk presented at the WOOT 17 workshop, where Johannes Obermaier and Stefan Tatschner, from the Fraunhofer Institute, demonstrated attacks against the STMF0 family MCUs. It is still unclear if those attacks also apply to the older STMF1 design used in the FST-01, however. Furthermore, extracted private key material is still protected by the user's passphrase, but the Gnuk uses a weak key derivation function, so brute-forcing attacks may be possible. Fortunately, there is work in progress to make GnuPG hash the passphrase before sending it to the keycard, which should make such attacks harder if not completely pointless. When asked about the Yubico claims in a private email, Niibe did recognize that "it is true that there are more weak points in general purpose implementations than special implementations". During the last DebConf in Montreal, Niibe explained:
If you don't trust me, you should not buy from me. Source code availability is only a single factor: someone can maliciously replace the firmware to enable advanced attacks.
Niibe recommends to "build the firmware yourself", also saying the design of the FST-01 uses normal hardware that "everyone can replicate". Those advantages are hard to deny for a cryptographic system: using more generic components makes it harder for hostile parties to mount targeted attacks. A counter-argument here is that it can be difficult for a regular user to audit such designs, let alone physically build the device from scratch but, in a mailing list discussion, Debian developer Ian Jackson explained that:
You don't need to be able to validate it personally. The thing spooks most hate is discovery. Backdooring supposedly-free hardware is harder (more costly) because it comes with greater risk of discovery. To put it concretely: if they backdoor all of them, someone (not necessarily you) might notice. (Backdooring only yours involves messing with the shipping arrangements and so on, and supposes that you specifically are of interest.)
Given that, as far as we know, the STM microcontrollers are not backdoored, I would tend to favor those devices over proprietary ones, as such a backdoor would be more easily detectable than in a closed design. Even though physical attacks may be possible against those microcontrollers, in the end, if an attacker has physical access to a keycard, I consider the key compromised, even if it has the best chip on the market. In our email exchange, Niibe argued that "when a token is lost, it is better to revoke keys, even if the token is considered secure enough". So, like any other device, physical compromise of tokens may mean compromise of the key and should trigger key-revocation procedures.

Algorithms and performance To establish reliable performance results, I wrote a benchmark program naively called crypto-bench that could produce comparable results between the different keys. The program takes each algorithm/keycard combination and runs 1000 decryptions of a 16-byte file (one AES-128 block) using GnuPG, after priming it to get the password cached. I assume the overhead of GnuPG calls to be negligible, as it should be the same across all tokens, so comparisons are possible. AES encryption is constant across all tests as it is always performed on the host and fast enough to be irrelevant in the tests. I used the following:
  • Intel(R) Core(TM) i3-6100U CPU @ 2.30GHz running Debian 9 ("stretch"/stable amd64), using GnuPG 2.1.18-6 (from the stable Debian package)
  • Nitrokey Pro 0.8 (latest firmware)
  • FST-01, running Gnuk version 1.2.5 (latest firmware)
  • YubiKey NEO OpenPGP applet 1.0.10 (not upgradable)
  • YubiKey 4 4.2.6 (not upgradable)
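For illustration, the core of such a measurement can be sketched in a few lines of Ruby; this is a simplified stand-in, not the actual crypto-bench program, and the sample file name and iteration count are assumptions:

require "benchmark"

sample = "block.gpg"   # a 16-byte file encrypted to the key held on the keycard
runs   = 1000

# Prime gpg-agent so the passphrase/PIN is cached before timing starts
system("gpg", "--quiet", "--yes", "--output", "/dev/null", "--decrypt", sample)

total = Benchmark.realtime do
  runs.times { system("gpg", "--quiet", "--yes", "--output", "/dev/null", "--decrypt", sample) }
end
puts format("mean decryption time: %.3f s", total / runs)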
I ran crypto-bench for each keycard, which resulted in the following:
Algorithm        Device        Mean time (s)
ECDH-Curve25519  CPU           0.036
ECDH-Curve25519  FST-01        0.135
RSA-2048         CPU           0.016
RSA-2048         YubiKey-4     0.162
RSA-2048         Nitrokey-Pro  0.610
RSA-2048         YubiKey-NEO   0.736
RSA-2048         FST-01        1.265
RSA-4096         CPU           0.043
RSA-4096         YubiKey-4     0.875
RSA-4096         Nitrokey-Pro  3.150
RSA-4096         FST-01        8.218
Decryption graph There we see the performance of the four keycards I tested, compared with the same operations done without a keycard: the "CPU" device. That provides the baseline time of GnuPG decrypting the file. The first obvious observation is that using a keycard is slower: in the best scenario (FST-01 + ECC) we see a four-fold slowdown, but in the worst case (also FST-01, but RSA-4096), we see a catastrophic 200-fold slowdown. When I presented the results on the Gnuk mailing list, GnuPG developer Werner Koch confirmed those "numbers are as expected":
With a crypto chip RSA is much faster. By design the Gnuk can't be as fast - it is just a simple MCU. However, using Curve25519 Gnuk is really fast.
And yes, the FST-01 is really fast at doing ECC, but it's also the only keycard that handles ECC in my tests; the Nitrokey Start and Nitrokey HSM should support it as well, but I haven't been able to test those devices. Also note that the YubiKey NEO doesn't support RSA-4096 at all, so we can only compare RSA-2048 across keycards. We should note, however, that ECC is slower than RSA on the CPU, which suggests the Gnuk ECC implementation used by the FST-01 is exceptionally fast. In discussions about improving the performance of the FST-01, Niibe estimated the user tolerance threshold to be "2 seconds decryption time". In a new design using the STM32L432 microcontroller, Aurelien Jarno was able to bring the numbers for RSA-2048 decryption from 1.27s down to 0.65s, and for RSA-4096, from 8.22s down to 3.87s. RSA-4096 is still beyond the two-second threshold, but at least it brings the FST-01 close to the YubiKey NEO and Nitrokey Pro performance levels. We should also underline the superior performance of the YubiKey 4: whatever that thing is doing, it's doing it faster than anyone else. It does RSA-4096 faster than the FST-01 does RSA-2048, and almost as fast as the Nitrokey Pro does RSA-2048. We should also note that the Nitrokey Pro also fails to cross the two-second threshold for RSA-4096 decryption. For me, the FST-01's stellar performance with ECC outshines the other devices. Maybe it says more about the efficiency of the algorithm than the FST-01 or Gnuk's design, but it's definitely an interesting avenue for people who want to deploy those modern algorithms. So, in terms of performance, it is clear that both the YubiKey 4 and the FST-01 take the prize in their own areas (RSA and ECC, respectively).

Conclusion In the above presentation, I have evaluated four cryptographic keycards for use with various OpenPGP operations. What the results show is that the only efficient way of storing a 4096-bit encryption key on a keycard would be to use the YubiKey 4. Unfortunately, I do not feel we should put our trust in such closed designs so I would argue you should either stick with 2048-bit encryption subkeys or keep the keys on disk. Considering that losing such a key would be catastrophic, this might be a good approach anyway. You should also consider switching to ECC encryption: even though it may not be supported everywhere, GnuPG supports having multiple encryption subkeys on a keyring: if one algorithm is unsupported (e.g. GnuPG 1.4 doesn't support ECC), it will fall back to a supported algorithm (e.g. RSA). Do not forget your previously encrypted material doesn't magically re-encrypt itself using your new encryption subkey, however. For authentication and signing keys, speed is not such an issue, so I would warmly recommend either the Nitrokey Pro or Start, or the FST-01, depending on whether you want to start experimenting with ECC algorithms. Availability also seems to be an issue for the FST-01. While you can generally get the device when you meet Niibe in person for a few bucks (I bought mine for around $30 Canadian), the Seeed online shop says the device is out of stock at the time of this writing, even though Jonathan McDowell said that may be inaccurate in a debian-project discussion. Nevertheless, this issue may make the Nitrokey devices more attractive. When deciding on using the Pro or Start, Suhr offered the following advice:
In practice smart card security has been proven to work well (at least if you use a decent smart card). Therefore the Nitrokey Pro should be used for high security cases. If you don't trust the smart card or if Nitrokey Start is just sufficient for you, you can choose that one. This is why we offer both models.
So far, I have created a signing subkey and moved that and my authentication key to the YubiKey NEO, because it's a device I physically trust to keep itself together in my pockets and I was already using it. It has served me well so far, especially with its extra features like U2F and HOTP support, which I use frequently. Those features are also available on the Nitrokey Pro, so that may be an alternative if I lose the YubiKey. I will probably move my main certification key to the FST-01 and a LUKS-encrypted USB disk, to keep that certification key offline but backed up on two different devices. As for the encryption key, I'll wait for keycard performance to improve, or simply switch my whole keyring to ECC and use the FST-01 or Nitrokey Start for that purpose.
[The author would like to thank Nitrokey for providing hardware for testing.] This article first appeared in the Linux Weekly News.

15 October 2017

Iain R. Learmonth: Free Software Efforts (2017W41)

Here's my weekly report for week 41 of 2017. This week I have explored some Java 8 features, looked at automatic updates in a few Linux distributions, and decided that actually I don't need swap anymore.

Debian The issue that was preventing the migration of the Tasktools Packaging Team's mailing list from Alioth to Savannah has now been resolved. Ana's chkservice package that I sponsored last week has been ACCEPTED into unstable and has since MIGRATED to testing.

Tor Project I have produced a patch for the Tor Project website to update links to the Onionoo documentation now that this has moved (#23802). I've updated the Debian and Ubuntu relay configuration instructions to use systemctl instead of service where appropriate (#23048). When a Tor relay is less than 2 years old, an alert will now appear on Atlas to link to the new relay lifecycle blog post (#23767). This should hopefully help new relay operators understand why their relay is not immediately fully loaded but instead takes some time to ramp up. I have gone through the tickets for Tor Cloud and did not find any tickets that contain any important information that would be useful to someone reviving the project. I have closed out these tickets and the Tor Cloud component no longer has any non-closed tickets (#7763, #8544, #8768, #9064, #9751, #10282, #10637, #11153, #11502, #13391, #14035, #14036, #14073, #15821). I've continued to work on turning the Atlas application into an integrated part of Tor Metrics (#23518) and you can see some progress here. Finally, I've continued hacking on a Twitter bot to tweet factoids about the public Tor network and you can now enjoy some JavaDoc documentation if you'd like to learn a little about its internals. I am still waiting for a git repository to be created (#23799) but will be publishing the sources shortly after that ticket is actioned.

Sustainability I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report. I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remain 60.52. I do not believe that any hardware I rely on is looking at imminent failure. I'd like to thank Digital Ocean for providing me with further credit for their platform to support my open source work. I do not find it likely that I'll be travelling to Cambridge for the miniDebConf as the train alone would be around 350 and hotel accommodation a further 600 (to include both me and Ana).

13 October 2017

Shirish Agarwal: I need to speak up now X Economics

Dear all, This would be a longish blog post (as most of mine are) compiled over days but as there is so short a time and so much to share. I had previously thought to share beautiful photographs of Ganesh mandals taking out the procession at time of immersion of the idol or the last day of Durga Puja recent events around do not make my mood to share photos at this point in time. I may share some of them in a future blog post or two . Before going further, I would like to offer my sympathies and condolences to people hurt and dislocated in Hurricane Irma , the 2017 Central Mexico Earthquake and lastly the most recent Las Vegas shooting as well as Hurricane Maria in Puerto Rico . I am somewhat nonplussed as to why Americans always want to name, especially hurricanes which destroy people s lives and livelihood built over generations and why most of the hurricanes are named after women. A look at weather.com site unveiled the answer to the mystery. Ironically (or not) I saw some of the best science coverage about Earthquakes or anything scientific reporting and analysis after a long time in mainstream newspapers in India. On another note, I don t understand or even expect to understand why the gunman did what he did 2 days back. Country music AFAIK is one of the most chilled-out kind of music, in some ways very similar to classical Indian singing although they are worlds apart in style of singing, renditions, artists, the way they emote etc. I seriously wish that the gunman had not been shot but caught and reasons were sought about what he did, he did. While this is certainly armchair thinking as was not at the scene of crime, but if a Mumbai Police constable could do it around a decade ago armed only with a lathi could do it, why couldn t the American cops who probably are trained in innumerable ways to subdue people without killing them, did. While investigations are on, I suspect if he were caught just like Ajmal Kasab was caught then lot of revelations might have come up. From what is known, the gentleman was upwardly mobile i.e. he was white, rich and apparently had no reason to have beef with anybody especially a crowd swaying to some nice music, all of which makes absolutely no sense. Indian Economy Slowdown Anyways, back to one of the main reasons of writing this blog post. Few days back, an ex-finance Minister of India Yashwant Sinha wrote what was felt by probably millions of Indians, an Indian Express article called I need to speak up now While there have been many, many arguments made since then by various people. A simple search of I need to speak up would lead to lead to many a result besides the one I have shared above. The only exception I have with the article is the line Forty leading companies of the country are already facing bankruptcy proceedings. Many more are likely to follow suit. I would not bore you but you ask any entrepreneur trying to set up shop in India i.e. ones who actually go through the processes of getting all the licenses for setting up even a small businesses as to the numerous hurdles they have to overcome and laid-back corrupt bureaucracy which they have to overcome. I could have interviewed some of my friends who had the conviction and the courage to set up shop and spent more than half a decade getting all the necessary licenses and approval to set up but it probably would be too specific for one industry or the other and would lead to the same result. Co-incidentally, a new restaurant, leaf opened in my vicinity few weeks before. 
From the looks it looked like a high-brow, high-priced restaurant hence like many others I did not venture in. After a few days, they introduced south-Indian delicacies like Masala Dosa, Uttapam at prices similar to other restaurants around. So I ventured in and bought some south Indian food to consume between mum and me. Few days later, I became friends with the owner/franchisee and I suggested (in a friendly tone) that why he doesn t make it like a CCD play where many people including yours truly use the service to share, strategize and meet with clients. The CCD joints usually serve coffee and snacks (which are over-priced but still run out pretty fast) but people come as they have chilled-out atmosphere and Wi-Fi access which people need for their smartphones, although the Wi-Fi part may soon become redundant With Reliance Jio making a big play. I also shared why he doesn t add more variety and time (the south Indian items are time-limited) as I see/saw many empty chairs there. Anyways, the shop-owner/franchisee shared his gross costs including salary, stocking, electricity, rent and it doesn t pan out to be serving Rs.80/- dish (roughly a 1US dollar and 25 cents) then serving INR Rs. 400/- a dish (around 6 $USD). One round of INR 400/- + dishes make his costs for the day, around 12 tables were there. It s when they have two full rounds of dishes costing INR 400/- or more that he actually has profits and he is predicting loss for at least 6 months to a year before he makes a rebound. He needs steady customers rather than just walk-ins that will make his business work/click. Currently his family is bearing the costs. He didn t mention the taxes although I know apart from GST there are still some local body taxes that they will have to pay and comply with. There are a multitude of problems for shutting a shop legally as well as they have to again renavigate the bureaucracy for the same. I have seen more than a few retailers downing their shutters for 6-8 months and then either sell it to new management, let go of the lease or simply sell the property to a competitor. The Insolvency and Bankruptcy Code is probably the first proper exit policy for large companies. So the 40 odd companies that Mr. Sinha were talking about were probably sick for a long time. In India, there is also an additional shame of being a failed entrepreneur unlike in the west where Entrepreneurs start on their next venture. As seen from Retailing In India only 3.3% of the population or at the most 4% of the population is directly or indirectly linked with the retail trade. Most of the economy still derives its wealth from the agrarian sector which is still reeling under the pressure from demonetization which happened last year. Al jazeera surprisingly portrayed a truer picture of the effects demonetization had on common citizen than many Indian newspapers did at the time. Because of the South African Debconf, I had to resort to debit cards and hence was able to escape standing in long lines in which many an old and women perished. It is only yesterday that the Government has acknowledged which many prominent Indians have been saying for months now, that we are in a slowdown . Be aware of the terms being used for effect by the Prime Minister. There are two articles which outlines the troubles India is in atm. The only bright spot has been e-commerce which so far has eluded GST although the Govt. has claimed regulations to put it in check. 
Indian Education System Interestingly, Ravish Kumar has started a series on NDTV where he is showcasing how Indian education sector, especially public colleges have been left to teachers on contract basis, see the first four episodes on NDTV channel starting with the first one I have shared as a hyperlink. I apologize as the series is in Hindi as the channel is meant for Indians and is mostly limited to Northern areas of the Country (mostly) although he has been honest that it is because they lack resources to tackle the amount of information flowing to them. Ravish started the series with sharing information about the U.S. where the things are similar with some teachers needing to sleep in cars because of high-cost of living to some needing to turn to sex-work . I was shocked when I read the guardian article, that is no way to treat our teachers.I went on to read How the American University was Killed following the breadcrumbs along the way. Reading that it seems Indians have been following the American system playbook from the 1980 s itself. The article talks about HMO as well and that seems to have followed here as well with my own experience of hospital fees and drugs which I had to entail a few weeks/month ago. Few years ago, when me and some of my friends had the teaching bug and we started teaching in a nearby municipal school, couple of teachers had shared that they were doing 2-3 jobs to make ends meet. I don t know about others in my group, at least I was cynical because I thought all the teachers were permanent and they make good money only to realize now that the person was probably speaking the truth. When you have to do three jobs to make ends meet from where do you bring the passion to teach young people and that too outside the syllabus ? Also, with this new knowledge in hindsight, I take back all my comments I made last year and the year before for the pathetic education being put up by the State. With teachers being paid pathetically/underpaid and almost 60% teachers being ad-hoc/adjunct teachers they have to find ways to have some sense of security. Most teachers are bachelors as they are poor and cannot offer any security (either male or female) and for women, after marriage it actually makes no sense for them to continue in this profession. I salute all the professors who are ad-hoc in nature and probably will never get a permanent position in their life. I think in some way, thanx to him, that the government has chosen to give 7th pay commisson salary to teachers. While the numbers may appear be large, there are a lot of questions as to how many people will actually get paid. There needs to be lot of vacancies which need to be filled quickly but don t see any solution in the next 2-3 years as well. The Government has taken a position to use/re-hire retired teachers rather than have new young teachers as an unwritten policy. In this Digital India context how are retired teachers supposed to understand and then pass on digital concepts is beyond me when at few teacher trainings I have seen they lack even the most basic knowledge that I learnt at least a decade or two ago, the difference is that vast. I just don t know what to say to that. 
My own experience with my own mother who had pretty good education in her time and probably would have made a fine business-woman if she knew that she will have a child that she would have to raise by herself alone (along with maternal grand-parents) is testimonial to the fact how hard it is for older people to grasp technology and here I m talking just using the interface as a consumer rather than a producer or someone in-between who has the idea of how companies and governments profit from whatever data is shared one way or the other. After watching the series/episodes and discussing the issue with my mother it was revealed that both her and my late maternal grandfather were on casual/ad-hoc basis till 20-25 years in their service in the defense sector. If Ravish were to do a series on the defense sector he probably would find the same thing there. To add to that, the defense sector is a vital component to a country s security. If 60% of the defense staff in all defense establishments have temporary staff how do you ensure the loyalty of the people working therein. That brings to my mind Ignorance is bliss . Software development and deployment There is another worry that all are skirting around, the present dispensation/government s mantra is minimum government-maximum governance with digital technologies having all solutions which is leading to massive unemployment. Also from most of the stories/incidents I read in the newspapers, mainstream media and elsewhere it seems most software deployments done in India are done without having any system of internal checks and balances. There is no lintian for software to be implemented. Contracts seem to be given to big companies and there is no mention of what prerequisites or conditions were laid down by the Government for software development and deployment and if any checks were done to ensure that the software being developed was in according to government specifications or not. Ideally this should all be in public domain so that questions can be asked and responsibility fixed if things go haywire, as currently they do not. Software issues As my health been not that great, I have been taking a bit more time and depth while filing bugs. #877638 is a good example. I suspect though that part of the problem might be that mate has moved to gtk3 while guake still has gtk-2 bindings. I also reported the issue upstream both in mate-panel as well as guake . I haven t received any response from either or/and upstreams . I also have been fiddling around with gdb to better understand the tool so I can exploit/use this tool in a better way. There are some commands within the gdb interface which seem to be interesting and hopefully I ll try how the commands perform over days, weeks to a month. I hope we see more action on the mate-panel/guake bug as well as move of guake to gtk+3 but that what seemingly seemed like wait for eternity seems to have done by somebody in last couple of days. As shared in the ticket there are lots of things still to do but it seems the heavy lifting has been done but seems merging will be tricky as two developers have been trying to update to gtk+3 although aichingm seems to have a leg up with his 3! branch. Another interesting thing I saw is the below picture. 
Firefox is out of date on wordpress.com The firefox version I was using to test the site/wordpress-wp-admin was Mozilla Firefox 52.4.0, which AFAIK is a pretty recent one, and people using Debian stretch would probably be using the same version (firefox stable/LTS) rather than more recent versions. I went to the link it pointed to and it gave no indication as to why it thought my browser was out of date or what functionality was missing. I have found that wordpress support has declined quite a bit and people don't seem to use the forums as much as they used to. I also filed a few bugs for qalculate: #877716, where a supposedly transitional package removes the actual application; #877717, as the software has moved its repository to github.com along with tickets and other things in process; and lastly #877733. I had been searching for a calculator which can do currency calculations on the fly (say, for doing personal budgeting for the Taiwan debconf) without needing to manually enter the conversion rates and losing something in the middle. While the current version has support for some limited currencies, the new versions promise more, as other people probably have more diverse needs for currency conversions (people who go long or short on oil or stocks overseas are just one example, I am sure there are many others) than my simplistic ones.
Filed under: Miscellenous Tagged: #American Education System, #bug-filing, #Climate change, #Dignity, #e-commerce, #gtk+3, #gtk2, #Indian Economy 'Slowdown', #Indian Education System, #Insolvency and Bankruptcy Code, #Las Vegas shooting, #Modern Retail in India, #planet-debian, #qalculate, Ad-hoc and Adjunct Professors, wordpress.com

10 October 2017

Iain R. Learmonth: Automatic Updates

We have instructions for setting up new Tor relays on Debian. The only time the word upgrade is mentioned here is:
Be sure to set your ContactInfo line so we can contact you if you need to upgrade or something goes wrong.
This isn't great. We should have some decent instructions for keeping your relay up to date too. I've been compiling a set of documentation for enabling automatic updates on various Linux distributions; here's a taste of what I have so far:

Debian Make sure that unattended-upgrades is installed and then enable the installation of updates (as root):
apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
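On current Debian systems the dpkg-reconfigure step typically ends up writing a small snippet such as /etc/apt/apt.conf.d/20auto-upgrades (the exact file name may differ between releases) with contents along these lines, which you could also ship directly via configuration management:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";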

Fedora 22 or later Beginning with Fedora 22, you can enable automatic updates via:
dnf install dnf-automatic
In /etc/dnf/automatic.conf set:
apply_updates = yes
Now enable and start automatic updates via:
systemctl enable dnf-automatic.timer
systemctl start dnf-automatic.timer
(Thanks to Enrico Zini I know all about these timer units in systemd now.)

RHEL or CentOS For CentOS, RHEL, and older versions of Fedora, the yum-cron package is the preferred approach:
yum install yum-cron
In /etc/yum/yum-cron.conf set:
apply_updates = yes
Enable and start automatic updates via:
systemctl enable yum-cron.service
systemctl start yum-cron.service

I'd like to collect together instructions for other distributions too (and *BSD and Mac OS). Atlas knows which platform a relay is running on, so in the future there could be a link to some platform-specific instructions on how to keep your relay up to date.

09 October 2017

Gunnar Wolf: Achievement unlocked - Made with Creative Commons translated to Spanish! (Thanks, @xattack!)

I am very, very, very happy to report this, and I cannot believe we have achieved it so fast: back in June, I announced I'd start working on the translation of the Made with Creative Commons book into Spanish. Over the following few weeks, I worked out the most viable infrastructure, gathered input and commitments for help from a couple of friends, and submitted my project for inclusion in the Hosted Weblate translations site (and got it approved!). Then, we quietly and slowly started working. Then, as usually happens in late August and early September... the rush of the semester caught me in full, and I left this translation project for later. For the next semester, perhaps... Today, I received a mail that surprised me. That stunned me. 99% of translated strings! Of course, it does not look as neat as "100%" would, but there are several strings not to be translated. So, yay for collaborative work! Oh, and FWIW, thanks to everybody who helped. And really, really, really, hats off to Luis Enrique Amaya, a friend whom I see way less than I should. A LIDSOL graduate, and a nice guy all around. Why him specially? Well... this has several wrinkles to iron out, but, by number of translated lines: ...need I say more? Luis, I hope you enjoyed reading the book :-] There is still a lot of work to do, and I'm asking the rest of the team for some days so I can get my act together. From the mail I just sent, I need to:
  1. Review the Pandoc conversion process, to get the strings formatted again into a book; I had got this working somewhere in the process, but last I checked it broke. I expect this not to be too much of a hurdle, and it will help all other translations.
  2. Start the editorial process at my Institute. Once the book builds, I'll have to start the stylistic correction process again so the Institute agrees to print it out under its seal. This time, we have the hurdle that our correctors will probably hate us due to part of the work being done before we had actually agreed on some important Spanish language issues... which are different between Mexico, Argentina and Costa Rica (where the translators are from). Anyway, this sets the mood for a great start of the week. Yay!

04 October 2017

Steve Kemp: Tracking aircraft in real-time, via software-defined-radio

So my last blog-post was about creating a digital-radio, powered by an ESP8266 device; there's a joke there about wireless-control of a wireless. I'm not going to make it. Sticking with a theme, this post is also about radio: software-defined radio. I know almost nothing about SDR, except that it can be used to let your computer "do stuff" with radio. The only application I've ever read about that seemed interesting was tracking aircraft. This post is about setting up a Debian GNU/Linux system to do exactly that: show aircraft in real-time above your head! This was almost painless to set up. So I bought this USB device from AliExpress for the grand total of 8.46. I have no idea if that URL is stable, but I suspect it is probably not. Good luck finding something similar if you're living in the future! Once I connected the antenna to the USB stick, and inserted it into a spare slot, it showed up in the output of lsusb:
  $ lsusb
  ..
  Bus 003 Device 043: ID 0bda:2838 Realtek Semiconductor Corp. RTL2838 DVB-T
  ..
In more detail I see the major/minor numbers:
  idVendor           0x0bda Realtek Semiconductor Corp.
  idProduct          0x2838 RTL2838 DVB-T
So far, so good. I installed the development headers/library I needed:
  # apt-get install librtlsdr-dev libusb-1.0-0-dev
Once that was done I could clone antirez's repository, and build it:
  $ git clone https://github.com/antirez/dump1090.git
  $ cd dump1090
  $ make
And run it:
  $ sudo ./dump1090 --interactive --net
This failed initially as a kernel-module had claimed the device, but removing that was trivial:
  $ sudo rmmod dvb_usb_rtl28xxu
  $ sudo ./dump1090 --interactive --net
Once it was running I'd see live updates on the console, every second:
  Hex    Flight   Altitude  Speed   Lat       Lon       Track  Messages Seen       .
  --------------------------------------------------------------------------------
  4601fc          14200     0       0.000     0.000     0     11        1 sec
  4601f2          9550      0       0.000     0.000     0     58        0 sec
  45ac52 SAS1716  2650      177     60.252    24.770    47    26        1 sec
And opening a browser pointing at http://localhost:8080/ would show that graphically, like so: NOTE: In this view I'm in Helsinki, and the airport is at Vantaa, just outside the city. Of course there are tweaks to be made:

03 October 2017

Christoph Egger: Looking for a mail program + desktop environment

Seems it is now almost a decade since I migrated from Thunderbird to GNUS. And GNUS is an awesome mail program that I still rather like. However, GNUS is also heavily quirky. It's essentially single-threaded and synchronous, which means you either have to wait for the "IMAP check for new mails" to finish or you have to C-g abort it if you want the user interface to work; you have to wait for the "Move mail" to complete (which can take a while -- especially with dovecot-antispam training the filter) before you can continue working. It has its funny way around TLS and certificate validation. And it seems to hang from time to time until it is C-g interrupted. So when I set up my new desktop machine I decided to try something else. My first try was claws-mail, which seems OK but totally fails in the asynchronous area. While the GUI stays reactive, all actions that require IMAP interactions become incredibly slow when a background IMAP refresh is running. I do have quite a few mailboxes, and waiting the 5+ minutes after opening claws, or whenever it decides to do a refresh, is just too much. Now my last try has been Kmail -- also driven by the idea of having a more integrated setup with CalDAV and CardDAV around and similar goodies. And Kmail really compares nicely to claws in many ways. After all, I can use it while it's doing its things in the background. However, the KDE folks seem to have dropped all support for the \recent IMAP flag, which I heavily rely on. I do -- after all -- keep a GNUS-like workflow where all unread mail (ref \seen) still needs to be acted upon, which means there can easily be quite a few unread messages when I'm busy at the moment, and just having a quick look at the new (ref \recent) mail to see if there's something super-urgent is essential. So I'm now looking for useful suggestions for a mail program (ideally with desktop integration) with the following essential features:
