Search Results: "bab"

5 July 2020

Russell Coker: Debian S390X Emulation

I decided to set up some virtual machines for different architectures. One that I decided to try was S390X, the latest 64-bit version of the IBM mainframe. Here's how to do it. I tested on a host running Debian/Unstable, but Buster should work the same way. First you need to create a filesystem in an image file with commands like the following:
truncate -s 4g /vmstore/s390x
mkfs.ext4 /vmstore/s390x
mount -o loop /vmstore/s390x /mnt/tmp
Then visit the Debian Netinst page [1] to download the S390X net install ISO, and loopback mount it somewhere convenient like /mnt/tmp2. The package qemu-system-misc has the program for emulating an S390X system (among many others); the qemu-user-static package has the program for emulating S390X for a single program (i.e. a statically linked program or a chroot environment), which you need to run debootstrap. The following commands should be most of what you need.
# Install the basic packages you need
apt install qemu-system-misc qemu-user-static debootstrap
# List the support for different binary formats
update-binfmts --display
# qemu s390x needs exec stack to solve "Could not allocate dynamic translator buffer"
# so you probably need this on SE Linux systems
setsebool allow_execstack 1
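# loopback mount the netinst ISO downloaded earlier (the ISO path below is only a placeholder)
mkdir -p /mnt/tmp2
mount -o loop /path/to/debian-s390x-netinst.iso /mnt/tmp2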
# commands to do the main install
debootstrap --foreign --arch=s390x --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage
# set the apt sources
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://YOURLOCALMIRROR/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
# for a minimal install we do not want recommended packages
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf
# update to latest packages
chroot /mnt/tmp apt update
chroot /mnt/tmp apt dist-upgrade
# install kernel, ssh, and build-essential
chroot /mnt/tmp apt install bash-completion locales linux-image-s390x man-db openssh-server build-essential
chroot /mnt/tmp dpkg-reconfigure locales
echo s390x > /mnt/tmp/etc/hostname
chroot /mnt/tmp passwd
# copy kernel and initrd
mkdir -p /boot/s390x
cp /mnt/tmp/boot/vmlinuz* /mnt/tmp/boot/initrd* /boot/s390x
# setup /etc/fstab
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END
# clean up
umount /mnt/tmp
umount /mnt/tmp2
# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf
Some of the above should be considered more as pseudo-code in shell script than an exact way of doing things. While you could copy and paste all of the above into a command line and have a reasonable chance of it working, I think it would be better to look at each command and decide whether it's right for you and whether you need to alter it slightly for your system. To run qemu as non-root you need a helper program with extra capabilities to set up bridged networking. I've included that in the explanation because I think it's important to have all security options enabled. The -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 part gives entropy to the VM from the host, otherwise it will take ages to start sshd. Note that this is slightly but significantly different from the command used for other architectures (the ccw is the difference). I'm not sure if noresume on the kernel command line is required, but it doesn't do any harm. The net.ifnames=0 stops systemd from renaming Ethernet devices. For the virtual networking the ccw again is a difference from other architectures. Here is a basic command to run a QEMU virtual S390X system. If all goes well it should give you a login: prompt on a curses based text display; you can then login as root and should be able to run dhclient eth0 and other similar commands to set up networking and allow ssh logins.
qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume root=/dev/vda ro" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper
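Once the VM is up, the in-VM networking and ssh setup mentioned above might look like the following sketch; the PermitRootLogin change is an assumption on my part and only needed if you want to ssh in directly as root with a password:
dhclient eth0
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
systemctl restart ssh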
Here is a slightly more complete QEMU command. It has 2 block devices, for root and swap. It has SE Linux enabled for the VM (SE Linux works nicely on S390X). I added the lockdown=confidentiality kernel security option even though it's not supported in 4.19 kernels; it doesn't do any harm and when I upgrade systems to newer kernels I won't have to remember to add it.
qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -drive format=raw,file=/vmswap/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper
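The swap image for the second command isn't created by the earlier commands; a minimal sketch (the 1G size is just an example) would be:
truncate -s 1g /vmswap/s390x
# then, inside the running VM:
mkswap /dev/vdb
swapon /dev/vdb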
Try It Out
I've got an S390X system online for a while; ssh root@s390x.coker.com.au with password SELINUX to try it out.
PPC64
I've tried running a PPC64 virtual machine. I did the same things to set it up and then tried launching it with the following result:
qemu-system-ppc64 -drive format=raw,file=/vmstore/ppc64,if=virtio -nographic -m 1024 -kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le -curses -append "root=/dev/vda ro"
Above is the minimal qemu command that I'm using. Below is the result; it stops after the "4." from "4.19.0-9". Note that I had originally tried with a more complete and usable set of options, but I trimmed it to the minimum needed to demonstrate the problem.
  Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
  This program and the accompanying materials are made available
  under the terms of the BSD License available at
  http://www.opensource.org/licenses/bsd-license.php
Booting from memory...
Linux ppc64le
#1 SMP Debian 4.
The kernel is from the package linux-image-4.19.0-9-powerpc64le which is a dependency of the package linux-image-ppc64el in Debian/Buster. The program qemu-system-ppc64 is from version 5.0-5 of the qemu-system-ppc package. Any suggestions on what I should try next would be appreciated.

4 July 2020

Dirk Eddelbuettel: Rcpp now used by 2000 CRAN packages and one in eight!

As of yesterday, Rcpp stands at exactly 2000 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time. Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages in July 2016, 800 packages in October 2016, 900 packages early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, 1500 packages in November 2018 and then 1750 packages last August. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is available too. Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent in the summer of 2016, nine percent mid-December 2016, cracked ten percent in the summer of 2017 and eleven percent in 2018. We have now passed 12.5 percent, so one in every eight CRAN packages depends on Rcpp. Stunning. There is more detail in the chart: how CRAN seems to be pushing back more and removing more aggressively (which my CRANberries tracks but not in as much detail as it could), how the growth of Rcpp seems to be slowing somewhat outright and even more so as a proportion of CRAN as one would expect a growth curve to. To mark the occasion, I sent out two tweets yesterday: first a shorter one with just the numbers, followed by a second one also containing the few calculation steps. The screenshot from the second one is below. 2000 user packages is pretty mind-boggling. We can use the progression of CRAN itself compiled by Henrik in a series of posts and emails to the main development mailing list. Not that long ago CRAN itself did have only 1000 packages, then 5000, 10000, and here we are at just over 16000 with Rcpp at 12.5% and still growing (though maybe more slowly). Amazeballs. The Rcpp team continues to aim for keeping Rcpp as performant and reliable as it has been. A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: Review: The Light Brigade

Review: The Light Brigade, by Kameron Hurley
Publisher: Saga
Copyright: 2019
ISBN: 1-4814-4798-X
Format: Kindle
Pages: 355
In the wake of the Blink, which left a giant crater where São Paulo was, Dietz signed up for the military. To be a hero. To satisfy an oath of vengeance. To kill aliens. Corporations have consumed the governments that used to run Earth and have divided the world between them. Dietz's family, before the Blink, were ghouls in Tene-Silva territory, non-citizens who scavenged a precarious life on the margins. Citizenship is a reward for loyalty and a mechanism of control. The only people who don't fit into the corporate framework are the Martians, former colonists who went dark for ten years and re-emerged as a splinter group offering to use their superior technology to repair environmental damage to the northern hemisphere caused by corporate wars. When the Blink happens, apparently done with technology far beyond what the corporations have, corporate war with the Martians is the unsurprising result. Long-time SF readers will immediately recognize The Light Brigade as a response to Starship Troopers with far more cynical world-building. For the first few chapters, the parallelism is very strong, down to the destruction of a large South American city (São Paulo instead of Buenos Aires), a naive military volunteer, and horrific basic training. But, rather than dropships, the soldiers in Dietz's world are sent into battle via, essentially, Star Trek transporters. These still very experimental transporters send Dietz to a different mission than the one in the briefing. Advance warning that I'm going to talk about what's happening with Dietz's drops below. It's a spoiler, but you would find out not far into the book and I don't think it ruins anything important. (On the contrary, it may give you an incentive to stick through the slow and unappealing first few chapters.) I had so many suspension of disbelief problems with this book. So many. This starts with the technology. The core piece of world-building is Star Trek transporters, so fine, we're not talking about hard physics. Every SF story gets one or two free bits of impossible technology, and Hurley does a good job showing the transporters through a jaundiced military eye. But, late in the book, this technology devolves into one of my least-favorite bits of SF hand-waving that, for me, destroyed that gritty edge. Technology problems go beyond the transporters. One of the bits of horror in basic training is, essentially, torture simulators, whose goal is apparently to teach soldiers to dissociate (not that the book calls it that). One problem is that I never understood why a military would want to teach dissociation to so many people, but a deeper problem is that the mechanics of this simulation made no sense. Dietz's training in this simulator is a significant ongoing plot point, and it kept feeling like it was cribbed from The Matrix rather than something translatable into how computers work. Technology was the more minor suspension of disbelief problem, though. The larger problem was the political and social world-building. Hurley constructs a grim, totalitarian future, which is a fine world-building choice although I think it robs some nuance from the story she is telling about how militaries lie to soldiers. But the totalitarian model she uses is one of near-total information control. People believe what the corporations tell them to believe, or at least are indifferent to it. Huge world events (with major plot significance) are distorted or outright lies, and those lies are apparently believed by everyone.
The skepticism that exists is limited to grumbling about leadership competence and cynicism about motives, not disagreement with the provided history. This is critical to the story; it's a driver behind Dietz's character growth and is required to set up the story's conclusion. This is a model of totalitarianism that's familiar from Orwell's Nineteen Eighty-Four. The problem: The Internet broke this model. You now need North Korean levels of isolation to pull off total message control, which is incompatible with the social structure or technology level that Hurley shows. You may be objecting that the modern world is full of people who believe outrageous propaganda against all evidence. But the world-building problem is not that some people believe the corporate propaganda. It's that everyone does. Modern totalitarians have stopped trying to achieve uniformity (because it stopped working) and instead make the disagreement part of the appeal. You no longer get half a country to believe a lie by ensuring they never hear the truth. Instead, you equate belief in the lie with loyalty to a social or political group, and belief in the truth with affiliation with some enemy. This goes hand in hand with "flooding the zone" with disinformation and fakes and wild stories until people's belief in the accessibility of objective truth is worn down and all facts become ideological statements. This does work, all too well, but it relies on more information, not less. (See Zeynep Tufekci's excellent Twitter and Tear Gas if you're unfamiliar with this analysis.) In that world, Dietz would have heard the official history, the true history, and all sorts of wild alternative histories, making correct belief a matter of political loyalty. There is no sign of that. Hurley does gesture towards some technology to try to explain this surprising corporate effectiveness. All the soldiers have implants, and military censors can supposedly listen in at any time. But, in the story, this censorship is primarily aimed at grumbling and local disloyalty. There is no sign that it's being used to keep knowledge of significant facts from spreading, nor is there any sign of the same control among the general population. It's stated in the story that the censors can't even keep up with soldiers; one would have to get unlucky to be caught. And yet the corporation maintains preternatural information control. The place this bugged me the most is around knowledge of the current date. For reasons that will be obvious in a moment, Dietz has reasons to badly want to know what month and year it is and is unable to find this information anywhere. This appears to be intentional; Tene-Silva has a good (albeit not that urgent) reason to keep soldiers from knowing the date. But I don't think Hurley realizes just how hard that is. Take a look around the computer you're using to read this and think about how many places the date shows up. Apart from the ubiquitous clock and calendar app, there are dates on every file, dates on every news story, dates on search results, dates in instant messages, dates on email messages and voice mail... they're everywhere. And it's not just the computer. The soldiers can easily smuggle prohibited outside goods into the base; knowledge of the date would be much easier. And even if Dietz doesn't want to ask anyone, there are opportunities to go off base during missions. Somehow every newspaper and every news bulletin has its dates suppressed? It's not credible, and it threw me straight out of the story. 
These world-building problems are unfortunate, since at the heart of The Light Brigade is a (spoiler alert) well-constructed time travel story that I would have otherwise enjoyed. Dietz is being tossed around in time with each jump. And, unlike some of these stories, Hurley does not take the escape hatch of alternate worlds or possible futures. There is a single coherent timeline that Dietz and the reader experience in one order and the rest of the world experiences in a different order. The construction of this timeline is incredibly well-done. Time can only disconnect at jump and return points, and Hurley maintains tight control over the number of unresolved connections. At every point in the story, I could list all of the unresolved discontinuities and enjoy their complexity and implications without feeling overwhelmed by them. Dietz gains some foreknowledge, but in a way that's wildly erratic and hard to piece together fast enough for a single soldier to do anything about the plot. The world spins out of control with foreshadowing of grimmer and grimmer events, and then Hurley pulls it back together in a thoroughly satisfying interweaving of long-anticipated scenes and major surprises. I'm not usually a fan of time travel stories, but this is one of the best I've read. It also has a satisfying emotional conclusion (albeit marred for me by some unbelievable mystical technobabble), which is impressive given how awful and nasty Hurley makes this world. Dietz is a great first-person narrator, believably naive and cynical by turns, and piecing together the story structure alongside the protagonist built my emotional attachment to Dietz's character arc. Hurley writes the emotional dynamics of soldiers thoughtfully and well: shit-talking, fights, sudden moments of connection, shared cynicism over degenerating conditions, and the underlying growth of squad loyalty that takes over other motivations and becomes the reason to keep on fighting. Hurley also pulled off a neat homage to (and improvement on) Starship Troopers that caught me entirely by surprise and that I've hopefully not spoiled. This is a solid science fiction novel if you can handle the world-building. I couldn't, but I understand why it was nominated for the Hugo and Clarke awards. Recommended if you're less picky about technological and social believability than I am, although content warning for a lot of bloody violence and death (including against children) and a horrifically depressing world. Rating: 6 out of 10

30 June 2020

Russ Allbery: Review: The Fifth Risk

Review: The Fifth Risk, by Michael Lewis
Publisher: W.W. Norton
Copyright: 2018
Printing: 2019
ISBN: 0-393-35745-7
Format: Kindle
Pages: 254
The Fifth Risk starts with the presidential transition. Max Stier, the first person profiled by Lewis in this book, is the founder of the Partnership for Public Service. That foundation helped push through laws to provide more resources and structure for the transition of the United States executive branch from one president to the next. The goal was to fight wasted effort, unnecessary churn, and pointless disruption in the face of each administration's skepticism about everyone who worked for the previous administration.
"It's Groundhog Day," said Max. "The new people come in and think that the previous administration and the civil service are lazy or stupid. Then they actually get to know the place they are managing. And when they leave, they say, 'This was a really hard job, and those are the best people I've ever worked with.' This happens over and over and over."
By 2016, Stier saw vast improvements, despite his frustration with other actions of the Obama administration. He believed their transition briefings were one of the best courses ever produced on how the federal government works. Then that transition process ran into Donald Trump. Or, to be more accurate, that transition did not run into Donald Trump, because neither he nor anyone who worked for him were there. We'll never know how good the transition information was because no one ever listened to or read it. Meetings were never scheduled. No one showed up. This book is not truly about the presidential transition, though, despite its presence as a continuing theme. The Fifth Risk is, at its heart, an examination of government work, the people who do it, why it matters, and why you should care about it. It's a study of the surprising and misunderstood responsibilities of the departments of the United States federal government. And it's a series of profiles of the people who choose this work as a career, not in the upper offices of political appointees, but deep in the civil service, attempting to keep that system running. I will warn now that I am far too happy that this book exists to be entirely objective about it. The United States desperately needs basic education about the government at all levels, but particularly the federal civil service. The public impression of government employees is skewed heavily towards the small number of public-facing positions and towards paperwork frustrations, over which the agency usually has no control because they have been sabotaged by Congress (mostly by Republicans, although the Democrats get involved occasionally). Mental images of who works for the government are weirdly selective. The Coast Guard could say "I'm from the government and I'm here to help" every day, to the immense gratitude of the people they rescue, but Reagan was still able to use that as a cheap applause line in his attack on government programs. Other countries have more functional and realistic social attitudes towards their government workers. The United States is trapped in a politically-fueled cycle of contempt and ignorance. It has to stop. And one way to help stop it is someone with Michael Lewis's story-telling skills writing a different narrative. The Fifth Risk is divided into a prologue about presidential transitions, three main parts, and an afterword (added in current editions) about a remarkable government worker whom you likely otherwise would never hear about. Each of the main parts talks about a different federal department: the Department of Energy, the Department of Agriculture, and the Department of Commerce. In keeping with the theme of the book, the people Lewis profiles do not do what you might expect from the names of those departments. Lewis's title comes from his discussion with John MacWilliams, a former Goldman Sachs banker who quit the industry in search of more personally meaningful work and became the chief risk officer for the Department of Energy. Lewis asks him for the top five risks he sees, and if you know that the DOE is responsible for safeguarding nuclear weapons, you will be able to guess several of them: nuclear weapons accidents, North Korea, and Iran. If you work in computer security, you may share his worry about the safety of the electrical grid. But his fifth risk was project management. Can the government follow through on long-term hazardous waste safety and cleanup projects, despite constant political turnover? 
Can it attract new scientists to the work of nuclear non-proliferation before everyone with the needed skills retires? Can it continue to lay the groundwork with basic science for innovation that we'll need in twenty or fifty years? This is what the Department of Energy is trying to do. Lewis's profiles of other departments are similarly illuminating. The Department of Agriculture is responsible for food stamps, the most effective anti-poverty program in the United States with the possible exception of Social Security. The section on the Department of Commerce is about weather forecasting, specifically about NOAA (the National Oceanic and Atmospheric Administration). If you didn't know that all of the raw data and many of the forecasts you get from weather apps and web sites are the work of government employees, and that AccuWeather has lobbied Congress persistently for years to prohibit the NOAA from making their weather forecasts public so that AccuWeather can charge you more for data your taxes already paid for, you should read this book. The story of American contempt for government work is partly about ignorance, but it's also partly about corporations who claim all of the credit while selling taxpayer-funded resources back to you at absurd markups. The afterword I'll leave for you to read for yourself, but it's the story of Art Allen, a government employee you likely have never heard of but whose work for the Coast Guard has saved more lives than we are able to measure. I found it deeply moving. If you, like I, are a regular reader of long-form journalism and watch for new Michael Lewis essays in particular, you've probably already read long sections of this book. By the time I sat down with it, I think I'd read about a third in other forms on-line. But the profiles that I had already read were so good that I was happy to read them again, and the additional stories and elaboration around previously published material was more than worth the cost and time investment in the full book.
It was never obvious to me that anyone would want to read what had interested me about the United States government. Doug Stumpf, my magazine editor for the past decade, persuaded me that, at this strange moment in American history, others might share my enthusiasm.
I'll join Michael Lewis in thanking Doug Stumpf. The Fifth Risk is not a proposal for how to fix government, or politics, or polarization. It's not even truly a book about the Trump presidency or about the transition. Lewis's goal is more basic: The United States government is full of hard-working people who are doing good and important work. They have effectively no public relations department. Achievements that would result in internal and external press releases in corporations, not to mention bonuses and promotions, go unnoticed and uncelebrated. If you are a United States citizen, this is your government and it does important work that you should care about. It deserves the respect of understanding and thoughtful engagement, both from the citizenry and from the politicians we elect. Rating: 10 out of 10

Norbert Preining: TeX Live Debian update 20200629

More than a month has passed since the last update of TeX Live packages in Debian, so here is a new checkout!
All arch all packages have been updated to the tlnet state as of 2020-06-29, see the detailed update list below. Enjoy. New packages akshar, beamertheme-pure-minimalistic, biblatex-unified, biblatex-vancouver, bookshelf, commutative-diagrams, conditext, courierten, ektype-tanka, hvarabic, kpfonts-otf, marathi, menucard, namedef, pgf-pie, pwebmac, qrbill, semantex, shtthesis, tikz-lake-fig, tile-graphic, utf8add. Updated packages abnt, achemso, algolrevived, amiri, amscls, animate, antanilipsum, apa7, babel, bangtex, baskervillef, beamerappendixnote, beamerswitch, beamertheme-focus, bengali, bib2gls, biblatex-apa, biblatex-philosophy, biblatex-phys, biblatex-software, biblatex-swiss-legal, bibleref, bookshelf, bxjscls, caption, ccool, cellprops, changes, chemfig, circuitikz, cloze, cnltx, cochineal, commutative-diagrams, comprehensive, context, context-vim, cquthesis, crop, crossword, ctex, cweb, denisbdoc, dijkstra, doclicense, domitian, dps, draftwatermark, dvipdfmx, ebong, ellipsis, emoji, endofproofwd, eqexam, erewhon, erewhon-math, erw-l3, etbb, euflag, examplep, fancyvrb, fbb, fbox, fei, fira, fontools, fontsetup, fontsize, forest-quickstart, gbt7714, genealogytree, haranoaji, haranoaji-extra, hitszthesis, hvarabic, hyperxmp, icon-appr, kpfonts, kpfonts-otf, l3backend, l3build, l3experimental, l3kernel, latex-amsmath-dev, latexbangla, latex-base-dev, latexdemo, latexdiff, latex-graphics-dev, latexindent, latex-make, latexmp, latex-mr, latex-tools-dev, libertinus-fonts, libertinust1math, lion-msc, listings, logix, lshort-czech, lshort-german, lshort-polish, lshort-portuguese, lshort-russian, lshort-slovenian, lshort-thai, lshort-ukr, lshort-vietnamese, luamesh, lua-uca, luavlna, lwarp, marathi, memoir, mnras, moderntimeline, na-position, newcomputermodern, newpx, nicematrix, nodetree, ocgx2, oldstandard, optex, parskip, pdfcrop, pdfpc, pdftexcmds, pdfxup, pgf, pgfornament, pgf-pie, pgf-umlcd, pgf-umlsd, pict2e, plautopatch, poemscol, pst-circ, pst-eucl, pst-func, pstricks, pwebmac, pxjahyper, quran, rec-thy, reledmac, rest-api, sanskrit, sanskrit-t1, scholax, semantex, showexpl, shtthesis, suftesi, svg, tcolorbox, tex4ht, texinfo, thesis-ekf, thuthesis, tkz-doc, tlshell, toptesi, tuda-ci, tudscr, twemoji-colr, univie-ling, updmap-map, vancouver, velthuis, witharrows, wtref, xecjk, xepersian-hm, xetex-itrans, xfakebold, xindex, xindy, xltabular, yathesis, ydoc, yquant, zref.

27 June 2020

Russell Coker: Links June 2020

Bruce Schneier wrote an informative post about Zoom security problems [1]. He recommends Jitsi, which has a Debian package of their software and is free software. Axel Beckert wrote an interesting post about keyboards with small numbers of keys, as few as 28 [2]. It's not something I'd ever want to use, but interesting to read from a computer science and design perspective. The Guardian has a disturbing article explaining why we might never get a good Covid19 vaccine [3]. If that happens it will change our society for years if not decades to come. Matt Palmer wrote an informative blog post about private key redaction [4]. I learned a lot from that. Probably the simplest summary is that you should never publish sensitive data unless you are certain that all that you are publishing is suitable; if you don't understand it then you don't know if it's suitable to be published! This article by Umair Haque on eand.co has some interesting points about how Freedom is interpreted in the US [5]. This article by Umair Haque on eand.co has some good points about how messed up the US is economically [6]. I think that his analysis is seriously let down by omitting the savings that could be made by amending the US healthcare system without serious changes (e.g. by controlling drug prices) and by reducing the scale of the US military (there will never be another war like WW2 because any large scale war will be nuclear). If the US government could significantly cut spending in a couple of major areas they could then put the money towards fixing some of the structural problems and bootstrapping a first-world economic system. The American Conservative has an insightful article Seven Reasons Police Brutality is Systemic Not Anecdotal [7]. Scientific American has an informative article about how genetic engineering could be used to make a Covid-19 vaccine [8]. Rike wrote an insightful post about How Language Changes Our Concepts [9]. They cover the differences between the French, German, and English languages based on gender and on how the language limits thoughts. They conclude with the need to remove terms like master/slave and blacklist/whitelist from our software, with a focus on Debian but it's applicable to all software. Gunnar Wolf also wrote an insightful post On Masters and Slaves, Whitelists and Blacklists [10]; they started with why some people might not understand the importance of the issue and then explained some ways of addressing it. The list of suggested terms includes Primary-secondary, Leader-follower, and some other terms which have slightly different meanings and allow more precision in describing the computer science concepts used. We can be more precise when describing computer science while also not using terms that marginalise some groups of people, it's a win-win! Both Rike and Gunnar were responding to an LWN article about the plans to move away from Master/Slave and Blacklist/Whitelist in the Linux kernel [11]. One of the noteworthy points in the LWN article is that there are about 70,000 instances of words that need to be changed in the Linux kernel, so this isn't going to happen immediately. But it will happen eventually, which is a good thing.

25 June 2020

Russell Coker: How Will the Pandemic Change Things?

The Bulwark has an interesting article on why they can't Reopen America [1]. I wonder how many changes will be long term. According to the Wikipedia List of Epidemics [2] Covid-19 so far hasn't had a high death toll when compared to other pandemics of the last 100 years. People's reactions to this vary from doing nothing to significant isolation; the question is what changes in attitudes will be significant enough to change society.
Transport
One thing that has been happening recently is a transition in transport. It's obvious that we need to reduce CO2, and while electric cars will address the transport part of the problem in the long term, changing to electric public transport is the cheaper and faster way to do it in the short term. Before Covid-19 the peak hour public transport in my city was ridiculously overcrowded; having people unable to board trams due to overcrowding was really common. If the economy returns to its previous state then I predict fewer people on public transport, more traffic jams, and many more cars idling and polluting the atmosphere. Can we have mass public transport that doesn't give a significant disease risk? Maybe if we had significantly more trains and trams and better ventilation with more airflow designed to suck contaminated air out. But that would require significant engineering work to design new trams, trains, and buses as well as expense in refitting or replacing old ones. Uber and similar companies have been taking over from taxi companies; one major feature of those companies is that the vehicles are not dedicated as taxis. Dedicated taxis could easily be designed to reduce the spread of disease; the famed Black Cab AKA Hackney Carriage [3] design in the UK has a separate compartment for passengers with little air flow to/from the driver compartment. It would be easy to design such taxis to have entirely separate airflow, and if set up to only take EFTPOS and credit card payment they could avoid all contact between the driver and passengers. I would prefer to have a Hackney Carriage design of vehicle instead of a regular taxi or Uber. Autonomous cars have been shown to basically work. There are some concerns about safety issues as there are currently corner cases that car computers don't handle as well as people, but of course there are also things computers do better than people. Having an autonomous taxi would be a benefit for anyone who wants to avoid other people. Maybe approval could be rushed through for autonomous cars that are limited to 40Km/h (the maximum collision speed at which a pedestrian is unlikely to die); in central city areas and inner suburbs you aren't likely to drive much faster than that anyway. Car share services have been becoming popular; for many people they are significantly cheaper than owning a car due to the costs of regular maintenance, insurance, and depreciation. As the full costs of car ownership aren't obvious, people may focus on the disease risk and keep buying cars. Passenger jets are ridiculously cheap. But this relies on the airline companies being able to consistently fill the planes. If they were to add measures to reduce cross contamination between passengers which slightly reduce the capacity of planes then they need to increase ticket prices accordingly, which then reduces demand. If passengers are just scared of flying in close proximity and the airlines can't fill planes then they will have to increase prices, which again reduces demand and could lead to a death spiral.
If in the long term there aren't enough passengers to sustain the current number of planes in service then airline companies will have significant financial problems. Planes are expensive assets that are expected to last for a long time; if the airlines can't use them all and can't sell them then they will go bankrupt. It's not reasonable to expect that the same number of people will be travelling internationally for years (if ever). Due to relying on economies of scale to provide low prices I don't think it's possible to keep prices the same no matter what they do. A new economic balance of flights costing 2-3 times more than we are used to while having significantly fewer passengers seems likely. Governments need to spend significant amounts of money to improve trains to take over from flights that are cancelled or too expensive.
Entertainment
The article on The Bulwark mentions Las Vegas as a city that will be hurt a lot by reductions in travel and crowds; the same thing will happen to tourist regions all around the world. Australia has a significant tourist industry that will be hurt a lot. But the mention of Las Vegas makes me wonder what will happen to gambling in general. Will people avoid casinos and play poker with friends and relatives at home? It seems that small stakes poker games among friends will be much less socially damaging than casinos; will this be good for society? The article also mentions cinemas, which have been on the way out since the video rental stores all closed down. There's lots of prime real estate used for cinemas and little potential for them to make enough money to cover the rent. Should we just assume that most uses of cinemas will be replaced by Netflix and other streaming services? What about teenage dates, will kissing in the back rows of cinemas be replaced by "Netflix and chill"? What will happen to all the prime real estate used by cinemas? Professional sporting matches have been played for a TV-only audience during the pandemic. There's no reason that they couldn't make a return to live stadium audiences when there is a vaccine for the disease or the disease has been extinguished by social distancing. But I wonder if some fans will start to appreciate the merits of small groups watching large TVs and not want to go back to stadiums; can this change the typical behaviour of groups? Restaurants and cafes are going to do really badly. I previously wrote about my experience running an Internet Cafe and why reopening businesses soon is a bad idea [4]. The question is how long this will go for and whether social norms about personal space will change things. If in the long term people expect 25% more space in a cafe or restaurant, that's enough to make a significant impact on profitability for many small businesses. When I was young the standard thing was for people to have dinner at friends' homes. Meeting friends for dinner at a restaurant was uncommon. Recently it seemed to be the most common practice for people to meet friends at a restaurant. There are real benefits to meeting at a restaurant in terms of effort and location. Maybe meeting friends at their home for a delivered dinner will become a common compromise, avoiding the effort of cooking while avoiding the extra expense and disease risk of eating out. Food delivery services will do well in the long term; it's one of the few industry segments which might do better after the pandemic than before.
Work
Many companies are discovering the benefits of teleworking; getting it going effectively has required investing in faster Internet connections and hardware for employees. When we have a vaccine the equipment needed for teleworking will still be there and we will have a discussion about whether it should be used on a more routine basis. When employees spend more than 2 hours per day travelling to and from work (which is very common for people who work in major cities) that will obviously limit the amount of time per day that they can spend working. For the more enthusiastic permanent employees there seems to be a benefit to the employer in allowing working from home. It's obvious that some portion of the companies that were forced to try teleworking will find it effective enough to continue to some degree. One company that I work for has quit their coworking space in part because they were concerned that the coworking company might go bankrupt due to the pandemic. They seem to have become a 100% work from home company for the office part of the work (only on-site installation and stock management is done at corporate locations). Companies running coworking spaces and other shared offices will suffer first as their clients have short term leases. But all companies renting out office space in major cities will suffer due to teleworking. I wonder how this will affect the companies providing services to the office workers, the cafes and restaurants etc. Will there end up being so much unused space in central city areas that it's not worth converting the city cinemas into useful space? There's been a lot of news about Zoom and similar technologies. Lots of other companies are trying to get into that business. One thing that isn't getting much notice is remote access technologies for desktop support. If the IT people can't visit your desk because you are working from home then they need to be able to remotely access it to fix things. When people make working from home a large part of their work time, the issue of who owns peripherals and how they are tracked will get interesting. In a previous blog post I suggested that keyboards and mice not be treated as assets [5]. But what about monitors, 4G/Wifi access points, etc? Some people have suggested that there will be business sectors benefiting from the pandemic, such as telecoms and e-commerce. If you have a bunch of people forced to stay home who aren't broke (i.e. a large portion of the middle class in Australia) they will probably order delivery of stuff for entertainment. But in the long term e-commerce seems unlikely to change much; people will spend less due to economic uncertainty, so while they may shift some purchasing to e-commerce, apart from home delivery of groceries e-commerce probably won't go up overall. Generally telecoms won't gain anything from teleworking; the Internet access you need for good Netflix viewing is generally greater than that needed for good video-conferencing.
Money
I previously wrote about a Basic Income for Australia [6]. One of the most cited reasons for a Basic Income is to deal with robots replacing people. Now we are at the start of what could be a long term economic contraction caused by the pandemic which could reduce the scale of the economy by a similar degree while also improving the economic case for a robotic workforce. We should implement a Universal Basic Income now. I previously wrote about the make-work jobs and how we could optimise society to achieve the worthwhile things with less work [7].
My ideas about optimising public transport and using more car share services may not work so well after the pandemic, but the rest should work well.
Business
There are a number of big companies that are not aiming for profitability in the short term. WeWork and Uber are well documented examples. Some of those companies will hopefully go bankrupt and make room for more responsible companies. The co-working thing was always a precarious business. The companies renting out office space usually did so on a monthly basis as flexibility was one of their selling points, but they presumably rented buildings on an annual basis. As the profit margins weren't particularly high, having to pay rent on mostly empty buildings for a few months will hurt them badly. The long term trend in co-working spaces might be some sort of collaborative arrangement between the people who run them and the landlords, similar to the way some of the hotel chains have profit sharing agreements with land owners to avoid both the capital outlay for buying land and the risk involved in renting. Also city hotels are very well equipped to run office space; they have the staff and the procedures for running such a business, and most hotels also make significant profits from conventions and conferences. The way the economy has been working in first world countries has been about being as competitive as possible: just in time delivery to avoid using storage space, machines to package things in exactly the way that customers need, and no more machines than needed for regular capacity. This means that there's no spare capacity when things go wrong. A few years ago a company making bolts for the car industry went bankrupt because the car companies forced the prices down; then car manufacture stopped due to lack of bolts. This could have been a wake-up call but was ignored. Now we have had problems with toilet paper shortages due to it being packaged in wholesale quantities for offices and schools, not retail quantities for home use. Food was destroyed because it was created for restaurant packaging and couldn't be packaged for home use in a reasonable amount of time. Farmers' markets alleviate some of the problems with packaging food etc. But they aren't a good option when there's a pandemic, as disease risk makes them less appealing to customers and therefore less profitable for vendors.
Religion
Many religious groups have supported social distancing. Could this be the start of more decentralised religion? Maybe have people read the holy book of their religion and pray at home instead of being programmed at church? We can always hope.

24 June 2020

Ian Jackson: Renaming the primary git branch to "trunk"

I have been convinced by the arguments that it's not nice to keep using the word master for the default git branch. Regardless of the etymology (which is unclear), some people say they have negative associations for this word. Changing this upstream in git is complicated on a technical level and, sadly, contested. But git is flexible enough that I can make this change in my own repositories. Doing so is not even so difficult. So:
Announcement
I intend to rename master to trunk in all repositories owned by my personal hat. To avoid making things very complicated for myself I will just delete refs/heads/master when I make this change. So there may be a little disruption to downstreams. I intend to make this change everywhere eventually. But rather than front-loading the effort, I'm going to do this to repositories as I come across them anyway. That will allow me to update all the docs references, any automation, etc., at a point when I have those things in mind anyway. Also, doing it this way will allow me to focus my effort on the most active projects, and avoids me committing to a sudden large pile of fiddly clerical work. But: if you have an interest in any repository in particular that you want updated, please let me know so I can prioritise it.
Bikeshed
Why "trunk"? "Main" has been suggested elsewhere, and it is often a good replacement for "master" (for example, we can talk very sensibly about a disk's Main Boot Record, MBR). But "main" isn't quite right for the VCS case; for example a "main" branch ought to have better quality than is typical for the primary development branch. Conversely, there is much precedent for "trunk". "Trunk" was used to refer to this concept by at least SVN, CVS, RCS and CSSC (and therefore probably SCCS) - at least in the documentation, although in some of these cases the command line API didn't have a name for it. So "trunk" it is.
Aside: two other words - passlist, blocklist
People are (finally!) starting to replace "blacklist" and "whitelist". Seriously, why has it taken everyone this long? I have been using "blocklist" and "passlist" for these concepts for some time. They are drop-in replacements. I have also heard "allowlist" and "denylist" suggested, but they are cumbersome and cacophonous. Also "allow" and "deny" seem to more strongly imply an access control function than merely "pass" and "block", and the usefulness of passlists and blocklists extends well beyond access control: protocol compatibility and ABI filtering are a couple of other use cases.
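Back to the rename itself: for a single repository the mechanics are simple enough. A minimal sketch (assuming a remote named origin and that nothing else still needs refs/heads/master) looks like this:
# in a working clone: rename the branch and push it
git branch -m master trunk
git push origin trunk
# in the bare repository on the server: make trunk the default branch
git symbolic-ref HEAD refs/heads/trunk
# finally delete the old ref, accepting some disruption to downstreams
git push origin --delete master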


Raphaël Hertzog: Freexian's report about Debian Long Term Support, May 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In May, 198 work hours have been dispatched among 14 paid contributors. Their reports are available.
Evolution of the situation
In May 2020 we had our second (virtual) contributors meeting on IRC; logs and minutes are available online. Then we also moved our ToDo from the Debian wiki to the issue tracker on salsa.debian.org.
Sadly three contributors went inactive in May: Adrian Bunk, Anton Gladky and Dylan Aïssi. And while there are currently still enough active contributors to shoulder the existing work, we would like to use this opportunity to point out that we are always looking for new contributors. Please mail Holger if you are interested.
Finally, we would like to remind you one last time that the end of Jessie LTS is coming in less than a month!
In case you missed it (or missed acting on it), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian. The security tracker currently lists 6 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.
Thanks to our sponsors
New sponsors are in bold. With the upcoming start of Jessie ELTS, we are welcoming a few new sponsors and others should join soon.


22 June 2020

Evgeni Golov: mass-migrating modules inside an Ansible Collection

In the Foreman project, we've been maintaining a collection of Ansible modules to manage Foreman installations since 2017. That is, 2 years before Ansible had the concept of collections at all. For that you had to set library (and later module_utils and doc_fragment_plugins) in ansible.cfg and effectively inject our modules, their helpers and documentation fragments into the main Ansible namespace. Not the cleanest solution, but it worked quite well for us. When Ansible started introducing Collections, we quickly joined, as the idea of namespaced, easily distributable and usable content units was great and exactly matched what we had in mind. However, collections are only usable in Ansible 2.8, or actually 2.9, as 2.8 can consume them but the tooling around building and installing them is lacking. Because of that we've been keeping our modules usable outside of a collection. Until recently, when we decided it's time to move on, drop that compatibility (which cost us a few headaches over time) and release a shiny 1.0.0. One of the changes we wanted for 1.0.0 is renaming a few modules. Historically we had the module names prefixed with foreman_ and katello_, depending on whether they were designed to work with Foreman (and plugins) or Katello (which is technically a Foreman plugin, but has a way more complicated deployment and currently can't be easily added to an existing Foreman setup). This made sense as long as we were injecting into the main Ansible namespace, but with collections the names became theforeman.foreman.foreman_<something> and while we all love Foreman, that was a bit too much. So we wanted to drop that prefix. And while at it, also change some other names (like ptable, which became partition_table) to be more readable. But how? There is no tooling that would rename all files accordingly, adjust examples and tests. Well, bash to the rescue! I'm usually not a big fan of bash scripts, but renaming files, searching and replacing strings? That perfectly fits! First of all we need a way to map the old name to the new name. In most cases it's just "drop the prefix"; for the others you can have some if/elif/fi:
prefixless_name=$(echo ${old_name} | sed -E 's/^(foreman|katello)_//')
if [[ ${old_name} == 'foreman_environment' ]]; then
  new_name='puppet_environment'
elif [[ ${old_name} == 'katello_sync' ]]; then
  new_name='repository_sync'
elif [[ ${old_name} == 'katello_upload' ]]; then
  new_name='content_upload'
elif [[ ${old_name} == 'foreman_ptable' ]]; then
  new_name='partition_table'
elif [[ ${old_name} == 'foreman_search_facts' ]]; then
  new_name='resource_info'
elif [[ ${old_name} == 'katello_manifest' ]]; then
  new_name='subscription_manifest'
elif [[ ${old_name} == 'foreman_model' ]]; then
  new_name='hardware_model'
else
  new_name=${prefixless_name}
fi
That defined, we need to actually have a ${old_name}. Well, that's a for loop over the modules, right?
for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  ...
done
While we're looping over files, let's rename them and all the files that are associated with the module:
# rename the module
git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
# rename the tests and test fixtures
git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
  git mv ${testfile} $(echo ${testfile} | sed "s/${old_name}/${new_name}/")
done
Now comes the really tricky part: search and replace. Let's see where we need to replace first:
  1. in the module file
    1. module key of the DOCUMENTATION stanza (e.g. module: foreman_example)
    2. all examples (e.g. foreman_example: )
  2. in all test playbooks (e.g. foreman_example: )
  3. in pytest's conftest.py and other files related to test execution
  4. in documentation
sed -E -i "/^(\s+$ old_name  module):/ s/$ old_name /$ new_name /g" $ BASE /*.py
sed -E -i "/^(\s+$ old_name  module):/ s/$ old_name /$ new_name /g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
sed -E -i "/'$ old_name '/ s/$ old_name /$ new_name /" tests/conftest.py tests/test_crud.py
sed -E -i "/ $ old_name  / s/$ old_name /$ new_name /g' README.md docs/*.md
You've probably noticed I used ${BASE} and ${TESTS} and never defined them. Lazy me. But here is the full script, defining the variables and looping over all the modules.
#!/bin/bash
BASE=plugins/modules
TESTS=tests/test_playbooks
RUNTIME=meta/runtime.yml
echo "plugin_routing:" > ${RUNTIME}
echo "  modules:" >> ${RUNTIME}
for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  prefixless_name=$(echo ${old_name} | sed -E 's/^(foreman|katello)_//')
  if [[ ${old_name} == 'foreman_environment' ]]; then
    new_name='puppet_environment'
  elif [[ ${old_name} == 'katello_sync' ]]; then
    new_name='repository_sync'
  elif [[ ${old_name} == 'katello_upload' ]]; then
    new_name='content_upload'
  elif [[ ${old_name} == 'foreman_ptable' ]]; then
    new_name='partition_table'
  elif [[ ${old_name} == 'foreman_search_facts' ]]; then
    new_name='resource_info'
  elif [[ ${old_name} == 'katello_manifest' ]]; then
    new_name='subscription_manifest'
  elif [[ ${old_name} == 'foreman_model' ]]; then
    new_name='hardware_model'
  else
    new_name=${prefixless_name}
  fi
  echo "renaming ${old_name} to ${new_name}"
  git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
  git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
  git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
  for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
    git mv ${testfile} $(echo ${testfile} | sed "s/${old_name}/${new_name}/")
  done
  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
  sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
  sed -E -i "/ ${old_name} / s/${old_name}/${new_name}/g" README.md docs/*.md
  echo "    ${old_name}:" >> ${RUNTIME}
  echo "      redirect: ${new_name}" >> ${RUNTIME}
  git commit -m "rename ${old_name} to ${new_name}" ${BASE} tests/ README.md docs/ ${RUNTIME}
done
As a bonus, the script will also generate a meta/runtime.yml which can be used by Ansible 2.10+ to automatically use the new module names if the playbook contains the old ones. Oh, and yes, this is probably not the nicest script you'll read this year. Maybe not even today. But it got the job nicely done and I don't intend to need it again anyways.
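For illustration, the generated meta/runtime.yml simply maps old module names to new ones; using two of the renames hard-coded in the script above, the start of the file produced by those echo lines would look like this:
plugin_routing:
  modules:
    foreman_environment:
      redirect: puppet_environment
    katello_sync:
      redirect: repository_sync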

Junichi Uekawa: The situation is getting better in some ways because we are opening up in Tokyo.

The situation is getting better in some ways because we are opening up in Tokyo. However, fear remains the same: the fear of going out versus business as usual. I'm spending more time on my home network than ever before and I am learning more about IPv6 than before. I can probably explain MAP-E or DS-Lite to you like I never could before.

20 June 2020

Dima Kogan: OpenCV C API transition. A rant.

I just went through a debugging exercise that was so ridiculous, I just had to write it up. Some of this probably should go into a bug report instead of a rant, but I'm tired. And clearly I don't care anymore. OK, so I'm doing computer vision work. OpenCV has been providing basic functions in this area, so I have been using them for a while. Just for really, really basic stuff, like projection. The C API was kinda weird, and their error handling is a bit ridiculous (if you give it arguments it doesn't like, it asserts!), but it has been working fine for a while. At some point (around OpenCV 3.0) somebody over there decided that they didn't like their C API, and that this was now a C++ library. Except the docs still documented the C API, and the website said it supported C, and the code wasn't actually removed. They just kinda stopped testing it and thinking about it. So it would mostly continue to work, except some poor saps would see weird failures; like this and this, for instance. OpenCV 3.2 was the last version where it was mostly possible to keep using the old C code, even when compiling without optimizations. So I was doing that for years. So now, in 2020, Debian is finally shipping a version of OpenCV that definitively does not work with the old code, so I had to do something. Over time I stopped using everything about OpenCV, except a few cvProjectPoints2() calls. So I decided to just write a small C++ shim to call the new version of that function, expose that with extern "C" to the rest of my world, and I'd be done. And normally I would be, but this is OpenCV we're talking about. I wrote the shim, and it didn't work. The code built and ran, but the results were wrong. After some pointless debugging, I boiled the problem down to this test program:
#include <opencv2/calib3d.hpp>
#include <stdio.h>
int main(void)
{
    double fx = 1000.0;
    double fy = 1000.0;
    double cx = 1000.0;
    double cy = 1000.0;
    double _camera_matrix[] =
        { fx,  0, cx,
          0,  fy, cy,
          0,   0,  1 };
    cv::Mat camera_matrix(3,3, CV_64FC1, _camera_matrix);
    double pp[3] = { 1., 2., 10. };
    double qq[2] = { 444, 555 };
    int N=1;
    cv::Mat object_points(N,3, CV_64FC1, pp);
    cv::Mat image_points (N,2, CV_64FC1, qq);
    // rvec,tvec
    double _zero3[3] = {};
    cv::Mat zero3(1,3,CV_64FC1, _zero3);
    cv::projectPoints( object_points,
                       zero3,zero3,
                       camera_matrix,
                       cv::noArray(),
                       image_points,
                       cv::noArray(), 0.0);
    fprintf(stderr, "manually-projected no-distortion: %f %f\n",
            pp[0]/pp[2] * fx + cx,
            pp[1]/pp[2] * fy + cy);
    fprintf(stderr, "opencv says: %f %f\n", qq[0], qq[1]);
    return 0;
}
This is as trivial as it gets. I project one point through a pinhole camera, and print out the right answer (that I can easily compute, since this is trivial), and what OpenCV reports:
$ g++ -I/usr/include/opencv4 -o tst tst.cc -lopencv_calib3d -lopencv_core && ./tst
manually-projected no-distortion: 1100.000000 1200.000000
opencv says: 444.000000 555.000000
Well that's no good. The answer is wrong, but it looks like it didn't even write anything into the output array. Since this is supposed to be a thin shim to C code, I want this thing to be filling in C arrays, which is what I'm doing here:
double qq[2] = { 444, 555 };
int N=1;
cv::Mat image_points (N,2, CV_64FC1, qq);
This is how the C API has worked forever, and their C++ API works the same way, I thought. Nothing barfed, not at build time, or run time. Fine. So I went to figure this out. In the true spirit of C++, the new API is inscrutable. I'm passing in cv::Mat, but the API wants cv::InputArray for some arguments and cv::OutputArray for others. Clearly cv::Mat can be coerced into either of those types (and that's what you're supposed to do), but the details are not meant to be understood. You can read the snazzy C++-style documentation. Clicking on "OutputArray" in the doxygen gets you here. Then I guess you're supposed to click on "_OutputArray", and you get here. Understand what's going on now? Me neither. Stepping through the code revealed the problem. cv::projectPoints() looks like this:
void cv::projectPoints( InputArray _opoints,
                        InputArray _rvec,
                        InputArray _tvec,
                        InputArray _cameraMatrix,
                        InputArray _distCoeffs,
                        OutputArray _ipoints,
                        OutputArray _jacobian,
                        double aspectRatio )
{
    ....
    _ipoints.create(npoints, 1, CV_MAKETYPE(depth, 2), -1, true);
    ....
}
I.e. they're allocating a new data buffer for the output, and giving it back to me via the OutputArray object. This object already had a buffer, and that's where I was expecting the output to go. Instead it went to the brand-new buffer I didn't want. Well, that's just super. I can call the C++ function, copy the data into the place it's supposed to go to, and then deallocate the extra buffer. Or I can pull out the meat of the function I want into my project, and then I can drop the OpenCV dependency entirely. Clearly that's the way to go. So I go poking back into their code to grab what I need, and here's what I see:
static void cvProjectPoints2Internal( const CvMat* objectPoints,
                  const CvMat* r_vec,
                  const CvMat* t_vec,
                  const CvMat* A,
                  const CvMat* distCoeffs,
                  CvMat* imagePoints, CvMat* dpdr CV_DEFAULT(NULL),
                  CvMat* dpdt CV_DEFAULT(NULL), CvMat* dpdf CV_DEFAULT(NULL),
                  CvMat* dpdc CV_DEFAULT(NULL), CvMat* dpdk CV_DEFAULT(NULL),
                  CvMat* dpdo CV_DEFAULT(NULL),
                  double aspectRatio CV_DEFAULT(0) )
{
...
}
Looks familiar? It should. Because this is the original C-API function they replaced. So in their quest to move to C++, they left the original code intact, C API and everything, un-exposed it so you couldn't call it anymore, and made a new, shitty C++ wrapper for people to call instead. CvMat is still there. I have no words. Yes, this is a massive library, and maybe other parts of it indeed did make some sort of non-token transition, but this thing is ridiculous. In the end, here's the function I ended up with (licensed as OpenCV; see the comment)
// The implementation of project_opencv is based on opencv. The sources have
// been heavily modified, but the opencv logic remains. This function is a
// cut-down cvProjectPoints2Internal() to keep only the functionality I want and
// to use my interfaces. Putting this here allows me to drop the C dependency on
// opencv. Which is a good thing, since opencv dropped their C API
//
// from opencv-4.2.0+dfsg/modules/calib3d/src/calibration.cpp
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of the use
// of this software, even if advised of the possibility of such damage.
typedef union
{
    struct
    {
        double x,y;
    };
    double xy[2];
} point2_t;
typedef union
{
    struct
    {
        double x,y,z;
    };
    double xyz[3];
} point3_t;
void project_opencv( // outputs
                     point2_t* q,
                     point3_t* dq_dp,               // may be NULL
                     double* dq_dintrinsics_nocore, // may be NULL
                     // inputs
                     const point3_t* p,
                     int N,
                     const double* intrinsics,
                     int Nintrinsics)
{
    const double fx = intrinsics[0];
    const double fy = intrinsics[1];
    const double cx = intrinsics[2];
    const double cy = intrinsics[3];
    double k[12] = {};
    for(int i=0; i<Nintrinsics-4; i++)
        k[i] = intrinsics[i+4];
    for( int i = 0; i < N; i++ )
    {
        double z_recip = 1./p[i].z;
        double x = p[i].x * z_recip;
        double y = p[i].y * z_recip;
        double r2      = x*x + y*y;
        double r4      = r2*r2;
        double r6      = r4*r2;
        double a1      = 2*x*y;
        double a2      = r2 + 2*x*x;
        double a3      = r2 + 2*y*y;
        double cdist   = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
        double icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
        double xd      = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2+k[9]*r4;
        double yd      = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2+k[11]*r4;
        q[i].x = xd*fx + cx;
        q[i].y = yd*fy + cy;
        if( dq_dp )
        {
            double dx_dp[] = { z_recip, 0,       -x*z_recip };
            double dy_dp[] = { 0,       z_recip, -y*z_recip };
            for( int j = 0; j < 3; j++ )
            {
                double dr2_dp = 2*x*dx_dp[j] + 2*y*dy_dp[j];
                double dcdist_dp = k[0]*dr2_dp + 2*k[1]*r2*dr2_dp + 3*k[4]*r4*dr2_dp;
                double dicdist2_dp = -icdist2*icdist2*(k[5]*dr2_dp + 2*k[6]*r2*dr2_dp + 3*k[7]*r4*dr2_dp);
                double da1_dp = 2*(x*dy_dp[j] + y*dx_dp[j]);
                double dmx_dp = (dx_dp[j]*cdist*icdist2 + x*dcdist_dp*icdist2 + x*cdist*dicdist2_dp +
                                k[2]*da1_dp + k[3]*(dr2_dp + 4*x*dx_dp[j]) + k[8]*dr2_dp + 2*r2*k[9]*dr2_dp);
                double dmy_dp = (dy_dp[j]*cdist*icdist2 + y*dcdist_dp*icdist2 + y*cdist*dicdist2_dp +
                                k[2]*(dr2_dp + 4*y*dy_dp[j]) + k[3]*da1_dp + k[10]*dr2_dp + 2*r2*k[11]*dr2_dp);
                dq_dp[i*2 + 0].xyz[j] = fx*dmx_dp;
                dq_dp[i*2 + 1].xyz[j] = fy*dmy_dp;
            }
        }
        if( dq_dintrinsics_nocore )
        {
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 0] = fx*x*icdist2*r2;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 0] = fy*(y*icdist2*r2);
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 1] = fx*x*icdist2*r4;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 1] = fy*y*icdist2*r4;
            if( Nintrinsics-4 > 2 )
            {
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 2] = fx*a1;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 2] = fy*a3;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 3] = fx*a2;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 3] = fy*a1;
                if( Nintrinsics-4 > 4 )
                {
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 4] = fx*x*icdist2*r6;
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 4] = fy*y*icdist2*r6;
                    if( Nintrinsics-4 > 5 )
                    {
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 5] = fx*x*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 5] = fy*y*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 6] = fx*x*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 6] = fy*y*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 7] = fx*x*cdist*(-icdist2)*icdist2*r6;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 7] = fy*y*cdist*(-icdist2)*icdist2*r6;
                        if( Nintrinsics-4 > 8 )
                        {
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 8] = fx*r2; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 8] = fy*0; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 9] = fx*r4; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 9] = fy*0; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 10] = fx*0;//s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 10] = fy*r2; //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 11] = fx*0;//s4
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 11] = fy*r4; //s4
                        }
                    }
                }
            }
        }
    }
}
This does only the stuff I need: projection only (no geometric transformation), and gradients with respect to the point coordinates and distortions only. Gradients with respect to fxy and cxy are trivial, and I don't bother reporting them. So now I don't compile or link against OpenCV; my code builds and runs on Debian/sid and (surprisingly) it runs much faster than before. Apparently there was a lot of pointless overhead happening. Alright. Rant over.

19 June 2020

Russell Coker: Storage Trends

In considering storage trends for the consumer side I'm looking at the current prices from MSY (where I usually buy computer parts). I know that other stores will have slightly different prices but they should be very similar as they all have low margins and wholesale prices are the main factor. Small Hard Drives Aren't Viable The cheapest hard drive that MSY sells is $68 for 500G of storage. The cheapest SSD is $49 for 120G and the second cheapest is $59 for 240G. SSD is cheaper at the low end and significantly faster. If someone needed about 500G of storage there's a 480G SSD for $97 which costs $29 more than a hard drive. With a modern PC if you have no hard drives you will notice that it's quieter. For anyone who's buying a new PC spending an extra $29 is definitely worthwhile for the performance, low power use, and silence. The cheapest 1TB disk is $69 and the cheapest 1TB SSD is $159. Saving $90 on the cost of a new PC probably isn't worthwhile. For 2TB of storage the cheapest options are Samsung NVMe for $339, Crucial SSD for $335, or a hard drive for $95. Some people would choose to save $244 by getting a hard drive instead of NVMe, but if you are getting a whole system then allocating $244 to NVMe instead of a faster CPU would probably give more benefits overall. Computer stores typically have small margins and computer parts tend to quickly either become cheaper or be obsoleted by better parts. So stores don't want to stock parts unless they will sell quickly. Disks smaller than 2TB probably aren't going to be profitable for stores for very long. The trend of SSD and NVMe becoming cheaper is going to make 2TB disks non-viable in the near future. NVMe vs SSD M.2 NVMe devices are at comparable prices to SATA SSDs. For some combinations of quality and capacity NVMe is about 50% more expensive and for some it's slightly cheaper (e.g. Intel 1TB NVMe being cheaper than Samsung EVO 1TB SSD). Last time I checked about half the motherboards on sale had a single M.2 socket so for a new workstation that doesn't need more than 2TB of storage (the largest NVMe that MSY sells) it wouldn't make sense to use anything other than NVMe. The benefit of NVMe is NOT throughput (even though NVMe devices can often sustain over 4GB/s), it's low latency. Workstations can't properly take advantage of this because RAM is so cheap ($198 for 32G of DDR4) that compiles etc mostly come from cache and because most filesystem writes on workstations aren't synchronous. For servers a large portion of writes are synchronous, for example a mail server can't acknowledge receiving mail until it knows that it's really on disk, so there are a lot of small writes that block server processes and the low latency of NVMe really improves performance. If you are doing a big compile on a workstation (the most common workstation task that uses a lot of disk IO) then the writes aren't synchronised to disk and if the system crashes you will just do all the compilation again. While NVMe doesn't give a lot of benefit over SSD for workstation use (I've used laptops with SSD and NVMe and not noticed a great difference) of course I still want better performance. ;) Last time I checked I couldn't easily buy a PCIe card that supported 2*NVMe cards, I'm sure they are available somewhere but it would take longer to get and probably cost significantly more than twice as much. That means a RAID-1 of NVMe takes 2 PCIe slots if you don't have an M.2 socket on the motherboard.
This was OK when I installed 2*NVMe devices on a server that had 18 disks and lots of spare PCIe slots. But for some systems PCIe slots are an issue. My home server has all PCIe slots used by a video card and Ethernet cards and the BIOS probably won't support booting from NVMe. It's a Dell server so I can't just replace the motherboard with one that has more PCIe slots and M.2 on the motherboard. As it's running nicely and doesn't need replacing any time soon I won't be using NVMe for home server stuff. Small Servers Most servers that I am responsible for have less than 2TB of storage. For my clients I now only recommend SSD storage for small servers and am recommending SSD for replacing any failed disks. My home server has 2*500G SSDs in a BTRFS RAID-1 for the root filesystem, and 3*4TB disks in a BTRFS RAID-1 for storing big files. I bought the SSDs when 500G SSDs were about $250 each and bought 2*4TB disks when they were about $350 each. Currently that server has about 3.3TB of space used and I could probably get it down to about 2.5TB if I deleted things I don't really need. If I was getting storage for that server now I'd use 2*2TB SSDs and 3*1TB hard drives for the stuff that doesn't fit on SSDs (I have some spare 1TB disks that came with servers). If I didn't have spare hard drives I'd get 3*2TB SSDs for that sort of server which would give 3TB of BTRFS RAID-1 storage. Last time I checked Dell servers had a card for supporting M.2 as an optional extra so Dells probably won't boot from NVMe without extra expense. Ars Technica has an informative article about WD selling SMR disks as NAS disks [1]. The Shingled Magnetic Recording technology allows greater storage density on a platter which leads to either larger capacity or cheaper disks but at the cost of lower write performance and apparently extremely bad latency in some situations. NAS disks are supposed to be low latency as the expectation is that they will be used in a RAID array and kicked out of the array if they have problems. There are reports of ZFS kicking SMR disks from RAID sets. I think this will end the use of hard drives for small servers. For a server you don't want to deal with this sort of thing, by definition when a server goes down multiple people will stop work (small server implies no clustering). Spending extra to get SSDs just to avoid the risk of unexpected SMR would be a good plan. Medium Servers The largest SSD and NVMe devices that are readily available are 2TB but 10TB disks are commodity items, there are reports of 20TB hard drives being available but I can't find anyone in Australia selling them. If you need to store dozens or hundreds of terabytes then hard drives have to be part of the mix at this time. There's no technical reason why SSDs larger than 10TB can't be made (the 2.5" SATA form factor has more than 5* the volume of a 2TB M.2 card) and it's likely that someone sells them outside the channels I buy from, but probably at a price higher than what my clients are willing to pay. If you want 100TB of affordable storage then a mid-range server like the Dell PowerEdge T640 which can have up to 18*3.5" disks is good. One of my clients has a PowerEdge T630 with 18*3.5" disks in the 8TB-10TB range (we replace failed disks with the largest new commodity disks available, it used to have 6TB disks). ZFS version 0.8 introduced a Special VDEV Class which stores metadata and possibly small data blocks on faster media.
So you could have some RAID-Z groups on hard drives for large storage and the metadata on a RAID-1 on NVMe for fast performance. For medium size arrays on hard drives having a find / operation take hours is not uncommon, for large arrays having it take days isn't that uncommon. So far it seems that ZFS is the only filesystem to have taken the obvious step of storing metadata on SSD/NVMe while bulk data is on cheap large disks. One problem with large arrays is that the vibration of disks can affect the performance and reliability of nearby disks. The ZFS server I run with 18 disks was originally setup with disks from smaller servers that never had ZFS checksum errors, but when disks from 2 small servers were put in one medium size server they started getting checksum errors presumably due to vibration. This alone is a sufficient reason for paying a premium for SSD storage. Currently the cost of 2TB of SSD or NVMe is between the prices of 6TB and 8TB hard drives, and the ratio of price/capacity for SSD and NVMe is improving dramatically while the increase in hard drive capacity is slow. 4TB SSDs are available for $895 compared to a 10TB hard drive for $549, so it's 4* more expensive on a price per TB. This is probably good for Windows systems, but for Linux systems where ZFS and special VDEVs are an option it's probably not worth considering. Most Linux use cases where 4TB SSDs would work well would be better served by smaller NVMe and 10TB disks running ZFS. I don't think that 4TB SSDs are at all popular at the moment (MSY doesn't stock them), but prices will come down and they will become common soon enough. Probably by the end of the year SSDs will halve in price and no hard drives less than 4TB will be viable. For rack mounted servers 2.5" disks have been popular for a long time. It's common for vendors to offer 2 versions of a rack mount server for 2.5" and 3.5" disks where the 2.5" version takes twice as many disks. If the issue is total storage in a server 4TB SSDs can give the same capacity as 8TB HDDs. SMR vs Regular Hard Drives Rumour has it that you can buy 20TB SMR disks, I haven't been able to find a reference to anyone who's selling them in Australia (please comment if you know who sells them and especially if you know the price). I expect that the ZFS developers will soon develop a work-around to solve the problems with SMR disks. Then arrays of 20TB SMR disks with NVMe for special VDEVs will be an interesting possibility for storage. I expect that SMR disks will be the majority of the hard drive market by 2023 if hard drives are still on the market. SSDs will be large enough and cheap enough that only SMR disks will offer enough capacity to be worth using. I think that it is a possibility that hard drives won't be manufactured in a few years. The volume of a 3.5" disk is significantly greater than that of 10 M.2 devices so current technology obviously allows 20TB of NVMe or SSD storage in the space of a 3.5" disk. If the price of 16TB NVMe and SSD devices comes down enough (to perhaps 3* the price of a 20TB hard drive) almost no-one would want the hard drive and it wouldn't be viable to manufacture them. It's not impossible that in a few years' time 3D XPoint and similar fast NVM technologies occupy the first level of storage (the ZFS special VDEV, OS swap device, log device for database servers, etc) and NVMe occupies the level for bulk storage with no space left in the market for spinning media.
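To make the special VDEV idea mentioned above concrete, here is a sketch of what such a layout could look like; this is my illustration rather than a setup described in the post, and the device names are placeholders: bulk data on a RAID-Z2 of hard drives with metadata on a mirrored pair of NVMe devices.
# Bulk data on RAID-Z2 hard drives, metadata on a mirrored NVMe special vdev.
zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    special mirror /dev/nvme0n1 /dev/nvme1n1
# Optionally also send small data blocks to the special vdev:
zfs set special_small_blocks=32K tank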
Computer Cases For servers I expect that models supporting 3.5" storage devices will disappear. A 1RU server with 8*2.5" storage devices or a 2RU server with 16*2.5" storage devices will probably be of use to more people than a 1RU server with 4*3.5" or a 2RU server with 8*3.5". My first IBM PC compatible system had a 5.25" hard drive, a 5.25" floppy drive, and a 3.5" floppy drive in 1988. My current PC is almost a similar size and has a DVD drive (that I almost never use), 5 other 5.25" drive bays that have never been used, and 5*3.5" drive bays that I have never used (I have only used 2.5" SSDs). It would make more sense to have PC cases designed around 2.5" and maybe 3.5" drives with no more than one 5.25" drive bay. The Intel NUC SFF PCs are going in the right direction. Many of them only have a single storage device but some of them have 2*M.2 sockets allowing RAID-1 of NVMe and some of them support ECC RAM so they could be used as small servers. A USB DVD drive costs $36; it doesn't make sense to have every PC designed around the size of an internal DVD drive that will probably only be used to install the OS when a $36 USB DVD drive can be used for every PC you own. The only reason I don't have a NUC for my personal workstation is that I get my workstations from e-waste. If I was going to pay for a PC then a NUC is the sort of thing I'd pay to have on my desk.

16 June 2020

Ritesh Raj Sarraf: Kodi PS3 BD Remote

Setting up a Sony PS3 Blu-Ray Disc Remote Controller with Kodi TLDR; Since most of the articles on the internet were either obsolete or broken, I've chosen to write these notes down in the form of a blog post so that it helps me now and in the future, and hopefully others too.

Raspberry Pi All this time, I have been using the Raspberry Pi for my HTPC needs. The first RPi I acquired was in 2014 and I have been very very happy with the amount of support in the community and the quality of the HTPC offering it has. I also appreciate the RPi's form factor and the power consumption limits. And then, to add more sugar to it, it uses a derivative of Debian, Raspbian, which was very familiar and felt good to me.

Raspberry Pi Issues So primarily, I use my RPi with Kodi. There are a bunch of other (daemon) services but the primary use case is HTPC only. RPi + Kodi has a very very annoying issue wherein it loses its audio pitch during video playback. The loss is so bad that the audio is barely audible. The workaround is to seek the video playback either way and then it comes back to its actual audio level, just to fade again in a while. My suspicion was that it may be a problem with Kodi. Or at least, Kodi would have a workaround in software. But unfortunately, I wasted a lot of time in dealing with my suspicion with no fruitful result. This started becoming a PITA over time. And it seems the issue is with the hardware itself because after I moved my setup to a regular laptop, the audio loss is gone.

Laptop with Kodi Since I had my old Lenovo Yoga 2 13 lying around powered on all the time, it made sense to make some more use of it, using it as the HTPC. This machine comes with a Micro-HDMI Out port, so it felt ideal for my High Definition video rendering needs. It comes stock with just Intel HD Video with good driver support in Linux, so it was quite quick and easy getting Kodi charged up and running on it. And as I mentioned above, the sound issues are not seen on this setup. Some added benefits are that I get to run stock Debian on this machine. And I must say a big THANK YOU to the Debian Multimedia Maintainers, who've done a pretty good job maintaining Kodi under Debian.

HDMI CEC Only after I decommissioned my RPi did I come to notice how convenient the HDMI CEC functionality is. It turns out no standard laptops ship with CEC functionality. Even in the case of my laptop, which has a Micro HDMI Out port, there are still no CEC capabilities. As far as I know, the RPi came with the Pulse-Eight CEC module, so the obvious first thought was to opt for a compatible external module of the same; but it comes with a nice price tag, which I was not willing to spend.

WiFi Remotes Kodi has a very well implemented network interface for almost all its features. One could take the Yatse or Music Pump Kodi Remote Android applications that work very very well with Kodi. But wifi can be flaky sometimes. Especially, my experience with the Realtek network devices hasn't been very good. The driver support in Linux is okay but there are many firmware bugs to deal with. In my case, the machine will lose wifi signal/network every once in a while. And it turns out, for this machine, with this network device type, I'm not the only one running into such problems. And to add to that, this is an UltraBook, which means it doesn't have an Ethernet port. So I've had not much choice other than to live with and deal with it. The WiFi chip also provides the Bluetooth module, which so far I had not used much. From my /etc/modprobe.d/blacklist-memstick.conf, all relevant BT modules were added to the blacklist, all this time.
rrs@lenovo:~$ cat /etc/modprobe.d/blacklist-memstick.conf 
blacklist memstick
blacklist rtsx_usb_ms
# And bluetooth too
#blacklist btusb
#blacklist btrtl
#blacklist btbcm
#blacklist btintel
#blacklist bluetooth
21:21            
Also worth keeping in mind is that the driver for my card gives a very misleading kernel message, which is one of the many reasons for this blog post, so that I don't forget it a couple of months later. The missing firmware error message is okay to ignore, as per this upstream comment.
Jun 14 17:17:08 lenovo kernel: usbcore: registered new interface driver btusb
Jun 14 17:17:08 lenovo systemd[1]: Mounted /boot/efi.
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: examining hci_ver=06 hci_rev=000b lmp_ver=06 lmp_subver=8723
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: rom_version status=0 version=1
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8723b_fw.bin
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: firmware: direct-loading firmware rtl_bt/rtl8723b_fw.bin
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8723b_config.bin
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: firmware: failed to load rtl_bt/rtl8723b_config.bin (-2)
Jun 14 17:17:08 lenovo kernel: firmware_class: See https://wiki.debian.org/Firmware for information about missing firmware
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: Direct firmware load for rtl_bt/rtl8723b_config.bin failed with error -2
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: cfg_sz -2, total sz 22496
This device's network + BT are on the same chip.
01:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8723BE PCIe Wireless Network Adapter
And then, when the btusb module is initialized (along with the misleading driver message), you'll get the following in your USB device listing:
Bus 002 Device 005: ID 0bda:b728 Realtek Semiconductor Corp. Bluetooth Radio

Sony PlayStation 3 BD Remote Almost 10 years ago, I bought the PS3 and many of its accessories. The remote has just been rotting on the shelf. It had rusted so badly that it is better described with these pics.
Rusted inside
Rusted inside and cover
Rusted spring
The rust was so bad that the battery holding spring gave up. A little bit of scrubbing and cleaning has gotten it working. I hope it lasts for some time before I find time to open it up and give it a full clean-up.

Pairing the BD Remote to the laptop Honestly, with the condition of the hardware and software on both ends, I did not have much hope of getting this to work. And in all the years of my computer usage, I hardly recollect many days when I've made use of BT. Probably because the full BT stack wasn't that well integrated in Linux, earlier. And I mostly used to disable them in hardware and software to save on battery. All the results from the internet talked about tools/scripts that were either not working, pointing to broken links, etc. These days, bluez comes with a nice utility, bluetoothctl. It was a nice experience using it. First, start your bluetooth service and ensure that the device talks well with the kernel:
rrs@lenovo:~$ systemctl status bluetooth                                                                                                          
  bluetooth.service - Bluetooth service                                                                                                           
     Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)                                                      
     Active: active (running) since Mon 2020-06-15 12:54:58 IST; 3s ago                                                                           
       Docs: man:bluetoothd(8)                                                                                                                    
   Main PID: 310197 (bluetoothd)                                                                                                                  
     Status: "Running"                                                                                                                            
      Tasks: 1 (limit: 9424)                                                                                                                      
     Memory: 1.3M                                                                                                                                 
     CGroup: /system.slice/bluetooth.service                                                                                                      
              310197 /usr/lib/bluetooth/bluetoothd                                                                                               
                                                                                                                                                  
Jun 15 12:54:58 lenovo systemd[1]: Starting Bluetooth service...                                                                                  
Jun 15 12:54:58 lenovo bluetoothd[310197]: Bluetooth daemon 5.50                                                                                  
Jun 15 12:54:58 lenovo systemd[1]: Started Bluetooth service.                                                                                     
Jun 15 12:54:58 lenovo bluetoothd[310197]: Starting SDP server                                                                                    
Jun 15 12:54:58 lenovo bluetoothd[310197]: Bluetooth management interface 1.15 initialized                                                        
Jun 15 12:54:58 lenovo bluetoothd[310197]: Sap driver initialization failed.                                                                      
Jun 15 12:54:58 lenovo bluetoothd[310197]: sap-server: Operation not permitted (1)                                                                
12:55                                                                                                                                       
Next, then, is to discover and connect to your device:
rrs@lenovo:~$ bluetoothctl 
Agent registered
[bluetooth]# devices
Device E6:3A:32:A4:31:8F MI Band 2
Device D4:B8:FF:43:AB:47 MI RC
Device 00:1E:3D:10:29:0F BD Remote Control
[CHG] Device 00:1E:3D:10:29:0F Connected: yes
[BD Remote Control]# info 00:1E:3D:10:29:0F
Device 00:1E:3D:10:29:0F (public)
        Name: BD Remote Control
        Alias: BD Remote Control
        Class: 0x0000250c
        Paired: no
        Trusted: yes
        Blocked: no
        Connected: yes
        LegacyPairing: no
        UUID: Human Interface Device... (00001124-0000-1000-8000-00805f9b34fb)
        UUID: PnP Information           (00001200-0000-1000-8000-00805f9b34fb)
        Modalias: usb:v054Cp0306d0100
[bluetooth]# 
In case of the Sony BD Remote, there's no need to pair. In fact, trying to pair fails. It prompts for the PIN code, but neither 0000 nor 1234 is accepted. So, the working steps so far are to Trust the device and then Connect the device. For the sake of future use, I also populated /etc/bluetooth/input.conf based on suggestions on the internet. Note: The advertised keymappings in this config file do not work. Note: I'm only using it for the power-saving measure of instructing the BT connection to sleep after 3 minutes.
rrs@priyasi:/tmp$ cat input.conf 
# Configuration file for the input service
# This section contains options which are not specific to any
# particular interface
[General]
# Set idle timeout (in minutes) before the connection will
# be disconnect (defaults to 0 for no timeout)
IdleTimeout=3
# Enable HID protocol handling in userspace input profile
# Defaults to false (HIDP handled in HIDP kernel module)
#UserspaceHID=true
# Limit HID connections to bonded devices
# The HID Profile does not specify that devices must be bonded, however some
# platforms may want to make sure that input connections only come from bonded
# device connections. Several older mice have been known for not supporting
# pairing/encryption.
# Defaults to false to maximize device compatibility.
#ClassicBondedOnly=true
# LE upgrade security
# Enables upgrades of security automatically if required.
# Defaults to true to maximize device compatibility.
#LEAutoSecurity=true
#
#[00:1E:3D:10:29:0F]
[2c:33:7a:8e:d6:30]
[PS3 Remote Map]
# When the 'OverlayBuiltin' option is TRUE (the default), the keymap uses
# the built-in keymap as a starting point.  When FALSE, an empty keymap is
# the starting point.
#OverlayBuiltin = TRUE
#buttoncode = keypress    # Button label = action with default key mappings
#OverlayBuiltin = FALSE
0x16 = KEY_ESC            # EJECT = exit
0x64 = KEY_MINUS          # AUDIO = cycle audio tracks
0x65 = KEY_W              # ANGLE = cycle zoom mode
0x63 = KEY_T              # SUBTITLE = toggle subtitles
0x0f = KEY_DELETE         # CLEAR = delete key
0x28 = KEY_F8             # /TIME = toggle through sleep
0x00 = KEY_1              # NUM-1
0x01 = KEY_2              # NUM-2
0x02 = KEY_3              # NUM-3
0x03 = KEY_4              # NUM-4
0x04 = KEY_5              # NUM-5
0x05 = KEY_6              # NUM-6
0x06 = KEY_7              # NUM-7
0x07 = KEY_8              # NUM-8
0x08 = KEY_9              # NUM-9
0x09 = KEY_0              # NUM-0
0x81 = KEY_F2             # RED = red
0x82 = KEY_F3             # GREEN = green
0x80 = KEY_F4             # BLUE = blue
0x83 = KEY_F5             # YELLOW = yellow
0x70 = KEY_I              # DISPLAY = show information
0x1a = KEY_S              # TOP MENU = show guide
0x40 = KEY_M              # POP UP/MENU = menu
0x0e = KEY_ESC            # RETURN = back/escape/cancel
0x5c = KEY_R              # TRIANGLE/OPTIONS = cycle through recording options
0x5d = KEY_ESC            # CIRCLE/BACK = back/escape/cancel
0x5f = KEY_A              # SQUARE/VIEW = Adjust Playback timestretch
0x5e = KEY_ENTER          # CROSS = select
0x54 = KEY_UP             # UP = Up/Skip forward 10 minutes
0x56 = KEY_DOWN           # DOWN = Down/Skip back 10 minutes
0x57 = KEY_LEFT           # LEFT = Left/Skip back 5 seconds
0x55 = KEY_RIGHT          # RIGHT = Right/Skip forward 30 seconds
0x0b = KEY_ENTER          # ENTER = select
0x5a = KEY_F10            # L1 = volume down
0x58 = KEY_J              # L2 = decrease the play speed
0x51 = KEY_HOME           # L3 = commercial skip previous
0x5b = KEY_F11            # R1 = volume up
0x59 = KEY_U              # R2 = increase the play speed
0x52 = KEY_END            # R3 = commercial skip next
0x43 = KEY_F9             # PS button = mute
0x50 = KEY_M              # SELECT = menu (as per PS convention)
0x53 = KEY_ENTER          # START = select / Enter (matches terminology in mythwelcome)
0x30 = KEY_PAGEUP         # PREV = jump back (default 10 minutes)
0x76 = KEY_J              # INSTANT BACK (newer RCs only) = decrease the play speed
0x75 = KEY_U              # INSTANT FORWARD (newer RCs only) = increase the play speed
0x31 = KEY_PAGEDOWN       # NEXT = jump forward (default 10 minutes)
0x33 = KEY_COMMA          # SCAN BACK =  decrease scan forward speed / play
0x32 = KEY_P              # PLAY = play/pause
0x34 = KEY_DOT            # SCAN FORWARD decrease scan backward speed / increase playback speed; 3x, 5, 10, 20, 30, 60, 120, 180
0x60 = KEY_LEFT           # FRAMEBACK = Left/Skip back 5 seconds/rewind one frame
0x39 = KEY_P              # PAUSE = play/pause
0x38 = KEY_P              # STOP = play/pause
0x61 = KEY_RIGHT          # FRAMEFORWARD = Right/Skip forward 30 seconds/advance one frame
0xff = KEY_MAX
21:48                 
I have not spent much time finding out why not all the key presses work. Especially given that most places on the internet mention these mappings. For me, some of the key scan codes aren't even reported. For keys like L1, L2, L3, R1, R2, R3, Next_Item, Prev_Item, they generate no codes in the kernel. If anyone has suggestions, ideas or fixes, I'd appreciate it if you could drop a comment or email me privately. But given my limited use, to get a simple remote ready to be usable with Kodi, I was okay with only some of the keys working.

Mapping the keys in Kodi With the limited number of keys detected, mapping those keys to what Kodi could use was the next step. Kodi has a very nice and easy to use module, Keymap Editor. It is very simple to use and map detected keys to functionalities you want. With it, I was able to get a functioning remote to use with my Kodi HTPC setup.

Update: Wed Jun 17 11:38:20 2020 One annoying problem that breaks the overall experience is the following bug on the driver side, which results in connections not being established instantly. Once the device goes into sleep mode, in random attempts, waking up and re-establishing a BT connection can be a multi-poll affair. This can last from a couple of seconds to well over a minute. Random suggestions on the internet mention disabling the autosuspend functionality for the device in the driver with btusb.enable_autosuspend=n, but that did not help in this case. Given that this device is enumerated over the USB Bus, it probably needs this feature applied to the whole USB tree of the device's chain. Something to investigate over the weekend.
Jun 16 20:41:23 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 7
Jun 16 20:41:43 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 8
Jun 16 20:41:59 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 9
Jun 16 20:42:18 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:10/0005:054C:030>
Jun 16 20:42:18 lenovo kernel: sony 0005:054C:0306.0006: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
Jun 16 20:51:59 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:11/0005:054C:030>
Jun 16 20:51:59 lenovo kernel: sony 0005:054C:0306.0007: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Supervising 3 threads of 1 processes of 1 users.
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Successfully made thread 32747 of process 1646 owned by '1000' RT at priority 5.
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Supervising 4 threads of 1 processes of 1 users.
Jun 16 21:05:56 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 12
Jun 16 21:06:12 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 1
Jun 16 21:06:34 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 2
Jun 16 21:06:59 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:3/0005:054C:0306>
Jun 16 21:06:59 lenovo kernel: sony 0005:054C:0306.0008: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
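One possible avenue (a speculative sketch on my part, not something verified in the post) is to disable runtime power management for the whole USB device that hosts the Bluetooth function, instead of only tweaking the btusb module. The kernel logs above show the device sitting at bus-port 1-7, so something along these lines might be worth trying:
# Keep the USB device hosting the BT radio fully powered, i.e. disable
# autosuspend for the whole device rather than just the btusb driver.
# "1-7" matches the usb1/1-7 path seen in the kernel messages above.
echo on | sudo tee /sys/bus/usb/devices/1-7/power/control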

Others There's a package, kodi-eventclients-ps3, which can be used to talk to the BD Remote. Unfortunately, it isn't up-to-date. When trying to make use of it, I ran into a couple of problems. First, the easy one is:
rrs@lenovo:~/ps3pair$ kodi-ps3remote localhost 9777
usr/share/pixmaps/kodi//bluetooth.png
Traceback (most recent call last):
  File "/usr/bin/kodi-ps3remote", line 220, in <module>
  File "/usr/bin/kodi-ps3remote", line 208, in main
    xbmc.connect(host, port)
    packet = PacketHELO(self.name, self.icon_type, self.icon_file)
  File "/usr/lib/python3/dist-packages/kodi/xbmcclient.py", line 285, in __init__
    with open(icon_file, 'rb') as f:
11:16         => 1  
This one was simple as it was just a broken path. The second issue with the tool is a leftover from python2 to python3 conversion.
rrs@lenovo:/etc/bluetooth$ kodi-ps3remote localhost
/usr/share/pixmaps/kodi//bluetooth.png
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
Redmi (E8:5A:8B:73:57:44) in range
Living Room TV (E4:DB:6D:24:23:E9) in range
Could not find BD Remote Control. Trying again...
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
Living Room TV (E4:DB:6D:24:23:E9) in range
Redmi (E8:5A:8B:73:57:44) in range
Could not find BD Remote Control. Trying again...
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
BD Remote Control (00:1E:3D:10:29:0F) in range
Found BD Remote Control with address 00:1E:3D:10:29:0F
Attempting to pair with remote
Remote Paired.
Traceback (most recent call last):
  File "/usr/bin/kodi-ps3remote", line 221, in <module>
    main()
  File "/usr/bin/kodi-ps3remote", line 212, in main
    if process_keys(remote, xbmc):
  File "/usr/bin/kodi-ps3remote", line 164, in process_keys
    keycode = data.encode("hex")[10:12]
AttributeError: 'bytes' object has no attribute 'encode'
11:24         => 1  
Fixing that too did not give me the desired result of using the BD Remote in the way I want. So eventually, I gave up and used Kodi's Keymap Editor instead.

Next Next in line, when I can manage to get some free time, is to improve the Kodi Video Scraper to have a fallback mode. Currently, for files where it cannot determine the content, it rejects the file, resulting in those files not showing up in your collection at all. A better approach would be a fallback mode: when the scraper cannot determine the content, it should fall back to using the filename scraper.

15 June 2020

Russ Allbery: Radical haul

Along with the normal selection of science fiction and fantasy, a few radical publishers have done book giveaways due to the current political crisis in the United States. I've been feeling for a while like I've not done my homework on diverse political theory, so I downloaded those. (That's the easy part; making time to read them is the hard part, and we'll see how that goes.) Yarimar Bonilla & Marisol LeBrón (ed.) Aftershocks of Disaster (non-fiction anthology)
Jordan T. Camp & Christina Heatherton (ed.) Policing the Planet (non-fiction anthology)
Zachary D. Carter The Price of Peace (non-fiction)
Justin Akers Chacón & Mike Davis No One is Illegal (non-fiction)
Grace Chang Disposable Domestics (non-fiction)
Suzanne Collins The Ballad of Songbirds and Snakes (sff)
Angela Y. Davis Freedom is a Constant Struggle (non-fiction)
Danny Katch Socialism... Seriously (non-fiction)
Naomi Klein The Battle for Paradise (non-fiction)
Naomi Klein No is Not Enough (non-fiction)
Naomi Kritzer Catfishing on CatNet (sff)
Derek Künsken The Quantum Magician (sff)
Rob Larson Bit Tyrants (non-fiction)
Michael Löwy Ecosocialism (non-fiction)
Joe Macaré, Maya Schenwar, et al. (ed.) Who Do You Serve, Who Do You Protect? (non-fiction anthology)
Tochi Onyebuchi Riot Baby (sff)
Sarah Pinsker A Song for a New Day (sff)
Lina Rather Sisters of the Vast Black (sff)
Marta Russell Capitalism and Disability (non-fiction)
Keeanga-Yamahtta Taylor From #BlackLivesMatter to Black Liberation (non-fiction)
Keeanga-Yamahtta Taylor (ed.) How We Get Free (non-fiction anthology)
Linda Tirado Hand to Mouth (non-fiction)
Alex S. Vitale The End of Policing (non-fiction)
C.M. Waggoner Unnatural Magic (sff)
Martha Wells Network Effect (sff)
Kai Ashante Wilson Sorcerer of the Wildeeps (sff)

14 June 2020

Evgeni Golov: naked pings 2020

ajax' post about "ping" etiquette is over 10 years old, but holds true to this day. So true that my IRC client at work has a script that will reply with a link to it each time I get a naked ping. But IRC is not the only means of communication. There is also mail, (video) conferencing, and GitHub/GitLab. Well, at least in the software engineering context. Oh and yes, it's 2020 and I still (proudly) have no Slack account. Thankfully, (naked) pings are not really a thing for mail or conferencing, but I see an increasing amount of them on GitHub and it bothers me, a lot. As there is no direct messaging on GitHub, you might rightfully ask why, as there is always context in the form of the issue or PR the ping happened in, so lean back and listen ;-) notifications become useless While there might be context in the issue/PR, there is none (besides the title) in the notification mail, and not even the title in the notification from the Android app (which I have installed as I use it a lot for smaller reviews). So the ping will always force a full context switch to open the web view of the issue in question, removing the possibility to just swipe away the notification/mail as "not important right now". even some context is not enough context Even after visiting the issue/PR, the ping quite often remains non-actionable. Do you want me to debug/fix the issue? Review the PR? Merge it? Close it? I don't know! The only actionable ping is when the previous message is directed at me and has an actionable request in it and the ping is just a reminder that I have to do it. And even then, why not write "hey @evgeni, did you have time to process my last question?" or something similar? BTW, this is also what I dislike about ajax' minimal example "ping re bz 534027" - what am I supposed to do with that BZ?! why me anyways?! Unless I am the only maintainer of a repo or the author of the issue/PR, there is usually no reason to ping me directly. I might be sick, or on holiday, or currently not working on that particular repo/topic or whatever. Any of that will result in you thinking that your request will be prioritized, while in reality it won't. Even worse, somebody might come across it, see me mentioned and think "ok, that's Evgeni's playground, I'll look elsewhere". Most organizations have groups of people working on specific topics. If you know the group name and have enough permissions (I am not exactly sure which, just that GitHub has limits to avoid spam, sorry) you can ping @organization/group and everyone in that group will get a notification. That's far from perfect, but at least this will get the attention of the right people. Sometimes there is also a bot that will either automatically ping a group of people or that you can trigger to do so. Oh, and I'm getting paid for work on open source. So if you end up pinging me in a work-related repository, there is a high chance I will only process that during work hours, while another co-worker might have been available to help you out almost immediately. be patient Unless we talked on another medium before and I am waiting for it, please don't ping directly after creation of the issue/PR. Maintainers get notifications about new stuff and will triage and process it at some point. conclusion If you feel called out, please don't take it personally. Instead, please try to provide as much actionable information as possible and be patient, that's the best way to get a high quality result. I will ignore pings where I don't immediately know what to do, and so should you.
one more thing Oh, and if you ping me on IRC, with context, and then disconnect before I can respond... In the past you would sometimes get a reply by mail. These days the request will most probably be ignored. I don't like talking to the void. Sorry.

11 June 2020

Antoine Beaupr : CVE-2020-13777 GnuTLS audit: be scared

So CVE-2020-13777 came out while I wasn't looking last week. The GnuTLS advisory (GNUTLS-SA-2020-06-03) is pretty opaque so I'll refer instead to this tweet from @FiloSottile (Go team security lead):
PSA: don't rely on GnuTLS, please. CVE-2020-13777 Whoops, for the past 10 releases most TLS 1.0-1.2 connections could be passively decrypted and most TLS 1.3 connections intercepted. Trivially. Also, TLS 1.2-1.0 session tickets are awful.
You are reading this correctly: supposedly encrypted TLS connections made with affected GnuTLS releases are vulnerable to passive cleartext recovery attack (and active for 1.3, but who uses that anyways). That is extremely bad. It's pretty close to just switching everyone to HTTP instead of HTTPS, more or less. I would have a lot more to say about the security of GnuTLS in particular -- and security in general -- but I am mostly concerned about patching holes in the roof right now, so this article is not about that. This article is about figuring out what, exactly, was exposed in our infrastructure because of this.

Affected packages Assuming you're running Debian, this will show a list of packages that Depends on GnuTLS:
apt-cache --installed rdepends libgnutls30 | grep '^ ' | sort -u
This assumes you run this only on hosts running Buster or above. Otherwise you'll need to figure out a way to pick machines running GnuTLS 3.6.4 or later. Note that this lists only first-level dependencies! It is perfectly possible that another package uses GnuTLS without being listed here. For example, in the above list I have libcurl3-gnutls, so to be really thorough, I would actually need to recurse down the dependency tree. On my desktop, this shows an "interesting" list of targets:
  • apt
  • cadaver - AKA WebDAV
  • curl & wget
  • fwupd - another attack on top of this one
  • git (through the libcurl3-gnutls dependency)
  • mutt - all your emails
  • weechat - your precious private chats
Arguably, fetchers like apt, curl, fwupd, and wget rely on HTTPS for "authentication" more than secrecy, although apt has its own OpenPGP-based authentication so that wouldn't matter anyways. Still, this is truly distressing. And I haven't mentioned here things like gobby, network-manager, systemd, and others - the scope of this is broad. Hell, even good old lynx links against GnuTLS. In our infrastructure, the magic command looks something like this:
cumin -o txt -p 0 'F:lsbdistcodename=buster' "apt-cache --installed rdepends libgnutls30 | grep '^ ' | sort -u" | tee gnutls-rdepds-per-host | awk '{print $NF}' | sort | uniq -c | sort -n
There, the result is even more worrisome, as those important packages seem to rely on GnuTLS for their transport security:
  • mariadb - all MySQL traffic and passwords
  • mandos - full disk encryption
  • slapd - LDAP passwords
mandos is especially distressing although it's probably not vulnerable because it seems it doesn't store the cleartext -- it's encrypted with the client's OpenPGP public key -- so the TLS tunnel never sees the cleartext either. Other reports have also mentioned the following servers link against GnuTLS and could be vulnerable:
  • exim
  • rsyslog
  • samba
  • various VNC implementations

Not affected Those programs are not affected by this vulnerability:
  • apache2
  • gnupg
  • python
  • nginx
  • openssh
This list is not exhaustive, naturally, but serves as an example of common software you don't need to worry about. The vulnerability only exists in GnuTLS, as far as we know, so programs linking against other libraries are not vulnerable. Because the vulnerability affects session tickets -- and those are set on the server side of the TLS connection -- only users of GnuTLS as a server are vulnerable. This means, for example, that while weechat uses GnuTLS, it will only suffer from the problem when acting as a server (which it does, in relay mode) or, of course, if the remote IRC server also uses GnuTLS. Same with apt, curl, wget, or git: they are unlikely to be a problem because they are only used as clients; the remote server is usually a webserver -- not git itself -- when using TLS.

Caveats Keep in mind that just because a package links against GnuTLS does not mean it actually uses it. For example, I have been told that, on Arch Linux, if both GnuTLS and OpenSSL are available, the mutt package will use the latter, so it's not affected. I haven't confirmed that myself nor have I checked on Debian. Also, because it relies on session tickets, there's a time window after which the ticket gets cycled and properly initialized. But that is apparently 6 hours by default so it is going to protect only really long-lasting TLS sessions, which are uncommon, I would argue. My audit is limited. For example, it might have been better to walk the shared library dependencies directly, instead of relying on Debian package dependencies.

Other technical details It seems the vulnerability might have been introduced in this merge request, itself following an (entirely reasonable) feature request to make it easier to rotate session tickets. The merge request was open for a few months and was thoroughly reviewed by a peer before being merged. Interestingly, the vulnerable function (_gnutls_initialize_session_ticket_key_rotation) explicitly says:
 * This function will not enable session ticket keys on the server side. That is done
 * with the gnutls_session_ticket_enable_server() function. This function just initializes
 * the internal state to support periodical rotation of the session ticket encryption key.
In other words, it thinks it is not responsible for session ticket initialization, yet it is. Indeed, the merge request fixing the problem unconditionally does this:
memcpy(session->key.initial_stek, key->data, key->size);
I haven't reviewed the code and the vulnerability in detail, so take the above with a grain of salt. The full patch is available here. See also the upstream issue 1011, the upstream advisory, the Debian security tracker, and the Redhat Bugzilla.

Moving forward The impact of this vulnerability depends on the affected packages and how they are used. It can range from "meh, someone knows I downloaded that Debian package yesterday" to "holy crap my full disk encryption passwords are compromised, I need to re-encrypt all my drives", including "I need to change all LDAP and MySQL passwords". It promises to be a fun week for some people at least. Looking ahead, however, one has to wonder whether we should follow @FiloSottile's advice and stop using GnuTLS altogether. There are at least a few programs that link against GnuTLS because of the OpenSSL licensing oddities, but that was first announced in 2015, then definitely and clearly resolved in 2017 -- or maybe that was in 2018? Anyways it's fixed, pinky-promise-I-swear, except if you're one of those weirdos still using GPL-2, of course. Even though OpenSSL isn't the simplest or most secure TLS implementation out there, it could be preferable to GnuTLS and maybe we should consider changing Debian packages to use it in the future. But then again, the last time something like this happened, it was Heartbleed and GnuTLS wasn't affected, so who knows... It is likely that people don't have OpenSSL in mind when they suggest moving away from GnuTLS and instead think of other TLS libraries like mbedtls (previously known as PolarSSL), NSS, BoringSSL, LibreSSL and so on. Not that those are totally sinless either... "This is fine", as they say...

Holger Levsen: 20200611-stress-management

Stress management I've got a note hanging in my kitchen which is from an unknown source. So while I still can share it happily, I sadly cannot give proper credit. (Update: it was pointed out to me privately that the story is probably coming from Kathy Hadley, a life coach. Thanks for sharing, Kathy!) It reads:
A psychologist walked around a room while teaching stress management to an
audience. As she raised a glass of water, everyone expected they'd be asked
the "half empty or half full' question. Instead, with a smile on her face, she
inquired: "How heavy is this glass of water?"
Answers called out ranged from 8oz to 20oz.
She replied, "The absolute weight doesn't matter. It depends on how long I
hold it. If I hold it for a minute, it's not a problem. If I hold it for an
hour, I'll have an ache in my arm. If I hold it for a day, my arm will feel
numb and paralyzed. In each case, the weight of the glass doesn't change, but
the longer I hold it, the heavier it becomes."
She continued, "The stresses and worries in life are like that glass of water.
Think about them for a while and nothing happens. Think about them a bit
longer and they will begin to hurt. And if you think about them all day long,
you will feel paralyzed - incapable of doing anything."
Remember to put the glass down.
Especially in times like these, do remember to put the glass down!

10 June 2020

Joey Hess: bracketing and async exceptions in haskell

I've been digging into async exceptions in haskell, and getting more and more concerned. In particular, bracket seems to be often used in ways that are not async exception safe. I've found multiple libraries with problems. Here's an example:
withTempFile a = bracket setup cleanup a
  where
    setup = openTempFile "/tmp" "tmpfile"
    cleanup (name, h) = do
        hClose h
        removeFile name
This looks reasonably good: it makes sure to clean up after itself even when the action throws an exception. But in fact, that code can leave stale temp files lying around. If the thread receives an async exception when hClose is running, it will be interrupted before the file is removed. We normally think of bracket as masking exceptions, but it doesn't prevent async exceptions in all cases. See Control.Exception on "interruptible operations", which can receive async exceptions even when other exceptions are masked. It's a bit surprising, but hClose is such an interruptible operation, because it flushes the write buffer. The only way to know is to read the code. It can be quite hard to determine if an operation is interruptible, since it can come down to whether it retries an STM transaction, or uses an MVar that is not always full. I've been auditing libraries and I often have to look at code several dependencies away, and even then may not be sure if a library has this problem. So far, around half of the libraries I've looked at that use bracket or onException or the like probably have this problem. What can libraries do? My impression of the state of things now is that you should be very cautious using race or cancel or withAsync or the like, unless the thread is small and easy to audit for these problems. Kind of a shame, since I had wanted to be able to cancel a thread that is big and sprawling and uses all the libraries mentioned above.
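One way to harden that particular cleanup (a sketch of my own, not something from the post, and it comes with its own trade-off: the thread cannot be interrupted at all while the cleanup runs) is to wrap the cleanup in uninterruptibleMask_ from Control.Exception, so an async exception delivered during hClose can no longer skip the removeFile:
import Control.Exception (bracket, uninterruptibleMask_)
import System.Directory (removeFile)
import System.IO (Handle, hClose, openTempFile)

-- Variant of the example above with the cleanup made uninterruptible.
withTempFile' :: ((FilePath, Handle) -> IO a) -> IO a
withTempFile' = bracket setup cleanup
  where
    setup = openTempFile "/tmp" "tmpfile"
    -- uninterruptibleMask_ stops async exceptions from landing while we
    -- flush/close the handle and delete the file.
    cleanup (name, h) = uninterruptibleMask_ $ do
        hClose h
        removeFile name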
This work was sponsored by Jake Vosloo and Graham Spencer on Patreon.

Jonathan Dowland: Template Haskell and Stream-processing programs

I've written about what Template Haskell is and given an example of what it can be used for; now it's time to explain why I was looking at it in the context of my PhD work. Encoding stream-processing programs StrIoT is an experimental distributed stream-processing system that I and others are building in order to explore our research questions. A user of StrIoT writes a stream-processing program, using a set of 8 functional operators provided for the purpose. A simple example is
streamFn :: Stream Int -> Stream Int
streamFn = streamFilter (<15)
         . streamFilter (>5)
         . streamMap (*2)
Our system is distributed: we take a stream-processing program and partition it into sub-programs, which are distributed to and run on separate nodes (perhaps cloud instances, or embedded devices like Raspberry Pis etc.). In order to do that, we need to be able to manipulate the stream-processing program as data. We've initially opted for a graph data-structure, with the vertices in the graph defined as
data StreamVertex = StreamVertex
    { vertexId   :: Int
    , operator   :: StreamOperator
    , parameters :: [String]
    , intype     :: String
    , outtype    :: String
    } deriving (Eq,Show)
Here is a stream-processing program encoded this way, equivalent to the first example:
path [ StreamVertex 0 Map    ["(*2)"]  "Int" "Int"
     , StreamVertex 1 Filter ["(>5)"]  "Int" "Int"
     , StreamVertex 2 Filter ["(<15)"] "Int" "Int"
     ]
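(path is not defined in this excerpt; conceptually it just strings the listed vertices together into a linear pipeline. Purely as an illustration, using a simple edge-list stand-in rather than whatever graph type StrIoT actually uses, such a helper could be as small as this:)
-- Hypothetical stand-in, not StrIoT's real graph type: represent the pipeline
-- as a list of edges between consecutive vertices.
path :: [StreamVertex] -> [(StreamVertex, StreamVertex)]
path vs = zip vs (drop 1 vs)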
We can easily manipulate instances of such types, rewrite them, partition them and generate code from them. Unfortunately, this is quite a departure from the first simple code example from the perspective of a user writing their program. Template Haskell gives us the ability to manipulate code as a data structure, and also to inspect names to gather information about them (their type, etc.). I started looking at TH to see if we could build something where the user-supplied program was as close to that first case as possible. TH limitations There are two reasons that we can't easily manipulate a stream-processing definition written as in the first example. The following expressions are equivalent, in some sense, but are not equal, and so yield completely different expression trees when quasi-quoted:
[| streamFilter (<15) . streamFilter (>5) . streamMap (*2) |]
[| \s -> streamFilter (<15) (streamFilter (>5) (streamMap (*2) s)) |]
[| streamMap (*2) >>> streamFilter (>5) >>> streamFilter (<15) |]
[| \s -> s & streamMap (*2) & streamFilter (>5) & streamFilter (<15) |]
[| streamFn |] -- a named expression, defined outside the quasi-quotes
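To see the difference concretely, here is a small self-contained check (my own sketch, using plain map and filter from the Prelude since the StrIoT operators aren't defined here) that quotes two behaviourally equivalent pipelines and compares the resulting Exp trees:
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH (runQ)

main :: IO ()
main = do
  -- Quote two pipelines that compute the same thing...
  e1 <- runQ [| map (*2) . filter (>5) |]
  e2 <- runQ [| \s -> map (*2) (filter (>5) s) |]
  -- ...and observe that the syntax trees are different values.
  print (e1 == e2)  -- False: same behaviour, different trees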
In theory, reify can give you the definition of a function from its name, but in practice it doesn't, because this was never implemented. So at the very least we would need to insist that a user included the entirety of a stream-processing program within quasi-quotes, and not split it up into separate bits, with some bits defined outside the quotes and referenced within (as in the last case above). We would probably also have to insist on a consistent approach for composing operators together, such as always using (.) and never >>>, &, etc., which is limiting. Incremental approach After a while ruminating on this, and before moving onto something else, I thought I'd try approaching it from the other side. Could I introduce some TH into the existing approach, and improve it? The first thing I've tried is to change the parameters field to TH's ExpQ, meaning the map instance example above would be
StreamVertex 0 Map [ [| (*2) |] ] "Int" "Int"
I worked this through. It's an incremental improvement in ease and clarity for the user writing a stream-processing program. It catches a class of programming bugs that would otherwise slip through: the expressions in the brackets have to be syntactically valid (although they aren't type checked). Some of the StrIoT internals are also much improved, particularly the logical operator. Here's an excerpt from a rewrite rule that involves composing code embedded in strings, dealing with all the escaping rules and hoping we've accounted for all possible incoming expression encodings:
let f' = "(let f = ("++f++"); p = ("++p++"); g = ("++g++") in\
         \ \\ (a,b) v -> (f a v, if p v a then g b v else b))"
    a' = "("++a++","++b++")"
    q' = "(let p = ("++p++"); q = ("++q++") in \\v (y,z) -> p v y && q v z)"
And the same section after, manipulating ExpQ types:
let f' = [| \ (a,b) v -> ($(f) a v, if $(p) v a then $(g) b v else b) |]
    a' = [| ($(a), $(b)) |]
    q' = [| \v (y,z) -> $(p) v y && $(q) v z |]
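As a small aside on the "syntactically valid but not type checked" point above, here is a quick illustration of my own (not from the post): a malformed quote fails when the module is compiled, while the same mistake inside a string sails through:
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH (ExpQ)

ok :: ExpQ
ok = [| (*2) |]      -- parses at compile time; the expression is not type checked here

asString :: String
asString = "(*2"     -- the missing parenthesis is only noticed much later, if at all

-- bad :: ExpQ
-- bad = [| (*2 |]   -- uncommenting this line is a parse error at compile time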
I think the code-generation part of StrIoT could be radically refactored to take advantage of this change but I have not made huge inroads into that. Next steps This is, probably, where I am going to stop. This work is very interesting to me but not the main thrust of my research. But incrementally improving the representation gave me some ideas of what I could try next. The type could then collapse down to
data StreamVertex = StreamVertex
    { vertexId    :: Int
    , opAndParams :: ExpQ
    } deriving (Eq,Show)
Example instances might be
StreamVertex 0 [| streamMap (*2) |]
StreamVertex 1 [| streamExpand |]
StreamVertex 2 [| streamScan (\c _ -> c+1) 0 |]
The vertexId field is a bit of a wart, but we require it due to the graph data structure that we are using. A change there could eliminate it, too. By this point we are not that far away from where we started, and certainly much closer to the "pure" function application in the very first example.
