Search Results: "ema"

3 April 2020

Jonathan Dowland: More Switch games

Sonic Mania
Sonic Mania is a really lovely homage to the classic 90s Sonic the Hedgehog platform games. It features more or less the classic gameplay and expanded versions of the original levels, with lots of secrets, surprises and easter eggs for fans of the originals. On my recommendation a friend of mine bought it for her daughter's birthday recently, but her daughter will now have to prise her mum off it! It's currently on sale at 30% off (£11.19). The one complaint I have about it is the lack of females in the roster of 5 playable characters.

Butcher, with its Doom-esque aesthetic, is a very violent side-scrolling shooter/platformer, currently on sale at 70% off (just £2.69, the price of a coffee). I've played it for about 10 minutes during coffee breaks and it's fun, hard, and pretty intense. The soundtrack is great, and available to buy separately, but only if you own or buy the original game from the same store, which is a strange restriction. It's also on Spotify.

Norbert Preining: KDE/Plasma updates for Debian sid/testing

I have written before about getting updated packages for KDE/Plasma on Debian. In the meantime I have moved all package building to the openSUSE Build Service, thus I am able to provide builds for Debian/testing, both i386 and amd64 architectures.
For those in a hurry: new binary packages that can be used on both Debian/testing and Debian/sid can be obtained for both the i386 and amd64 archs here: Debian/testing:
deb http://download.opensuse.org/repositories/home:/npreining:/debian-plasma/Debian_Testing  ./
Debian/unstable:
deb http://download.opensuse.org/repositories/home:/npreining:/debian-plasma/Debian_Unstable  ./
To make these repositories work out of the box, you need to import my OBS gpg key: obs-npreining.asc. Best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc (a short example follows below). The sources for the above binaries are available at the OBS site for the debian-plasma sub-project, but I will also try to keep them apt-get-able on my server as before:
deb-src https://www.preining.info/debian unstable kde
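For example, a minimal sketch of the key import mentioned above (run as root; the URL is a placeholder, fetch obs-npreining.asc from wherever it is actually published):

  # place the OBS signing key into apt's trusted keyring directory
  # (placeholder URL; adjust to the real location of obs-npreining.asc)
  curl -fsSL https://example.org/obs-npreining.asc \
    -o /etc/apt/trusted.gpg.d/obs-npreining.asc
  apt-get update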
I have chosen the openSUSE Build Service because it makes it easy to push new packages and automatically resolves package dependencies within the same repository: no need to compile the packages myself, nor to work out the correct build order. I have also added a few new packages and updates (dolphin, umbrello, kwalletmanager, kompare, ...); at the moment we are at 131 packages that got updated. If you have requests for updates, drop me an email! Enjoy! Norbert

2 April 2020

Dirk Eddelbuettel: RQuantLib 0.4.12: Small QuantLib 1.18 update

A new release 0.4.12 of RQuantLib arrived on CRAN today, and was uploaded to Debian as well. QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language. This version does relatively little. When QuantLib 1.18 came out, I immediately did my usual bit of packaging it for Debian, as well as creating binaries via my Ubuntu PPA, so that I could test the package against it. A few calls from RQuantLib were now hitting interface functions marked as deprecated, leading to compiler nags. So I fixed that in PR #146. And today CRAN sent me an email asking to please fix this in the released version, so I rolled this up as 0.4.12. No other changes.

Changes in RQuantLib version 0.4.12 (2020-04-01)
  • Changes in RQuantLib code:
    • Calls deprecated-in-QuantLib 1.18 were updated (Dirk in #146).

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Mike Gabriel: Q: RoamingProfiles under GNU/Linux? What's your Best Practice?

This post is an open question to the wide range of GNU/Linux site admins out there. Possibly some of you have the joy of maintaining GNU/Linux also on user endpoint devices (i.e. user workstations, user notebooks, etc.), not only on corporate servers.

TL;DR; In the context of a customer project, I am researching ways of mimicking (or inventing anew) a feature well known (and sometimes also well hated) from the MS Windows world: Roaming User Profiles. If anyone does have any input on that, please contact me (OFTC/Freenode IRC, Telegram, email). I am curious what your solution may be.

The Use Case Scenario

In my use case, all user machines shall be mobile (notebooks, convertibles, etc.). The machines may be on-site most of the time, but they need offline capabilities so that the users can transparently move off-site and continue their work. At the same time, a copy of the home directory (or the home directory itself) shall be stored on some backend fileservers (for central backups as well as for providing the possibility for the user to log into another machine and be up-and-running more or less out-of-the-box).

The Vision

Initial Login: Ideally, I'd like to have a low-level file system feature that handles all of this. On corporate user logon (which must take place on-site and uses some LDAP database as backend), the user credentials get cached locally (and get re-mapped and re-cached with every later on-site login), and the home directory gets mounted from a remote server at first. Shortly after having logged in, everything in the user's home gets synced to a local cache in the background without the user noticing. At the end of the sync a GUI user notification would be nice, e.g. something like "All user data has been cached locally, you are good to go and leave off-site now with this machine."

Moving Off-Site: A day later, the user may be travelling or such. The user logs into the machine again, the machine senses being offline or on some alien (non-corporate) network, but the user can just continue their work, all in the local cache. Several days later, the same user with the same machine returns to the office, logs into the machine again, and immediately after login all cached data gets synced back to the user's server filespace.

Possible Conflict Policies: Now there might be cases where the user has been working locally for a while and all the profile data received slight changes. The user might have had the possibility to log into other corporate servers from the alien network he*she is on, and with that login some user profile files will probably have changed. Regarding client-server sync policies, one could enforce a client-always-wins policy that leads to changes being dropped server-side once the user's mobile workstation returns on-site. One could also set up a bi-directional sync policy for normal data files, but a client-always-wins policy for configuration files (.files and .folders). Etc. pp. (A rough sketch of such a mixed policy follows at the end of this post.)

Request for Feedback and Comments

I could go on further and further, making up edge and corner cases of all this. We had a little discussion on this some days ago on the #debian-devel IRC channel already; thanks to all contributors to that discussion. And again, if you have solved the above riddle at your site and are corporate-wise allowed to share the concept, I'd be happy about your feedback. Please get in touch! light+love
Mike (aka sunweaver on the Fediverse and in Debian)
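P.S.: To make the conflict policies above a bit more concrete, here is a very rough sh/rsync sketch of the mixed variant (bi-directional for normal data, client-always-wins for configuration files). The fileserver path is hypothetical, nothing here is tested, and a real deployment would more likely use a purpose-built sync tool such as unison:

  #!/bin/sh
  # Hypothetical backend location of this user's server-side home.
  REMOTE="fileserver:/srv/homes/$USER"

  # Normal data files: push and pull, but --update skips files that
  # are newer on the receiving side, so neither side blindly clobbers
  # newer work (a crude stand-in for bi-directional sync).
  rsync -a --update "$HOME/" "$REMOTE/"
  rsync -a --update "$REMOTE/" "$HOME/"

  # Configuration files (.files and .folders): client always wins,
  # i.e. push top-level dotfiles and dot-directories unconditionally.
  rsync -a --include='/.*' --include='/.*/**' --exclude='*' \
        "$HOME/" "$REMOTE/"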

1 April 2020

Joey Hess: DIN distractions

My offgrid house has an industrial automation panel.

[Image: a row of electrical devices mounted on a metal rail, with many wires neatly extending above and below into wire gutters.]

I started building this in February, before covid-19 was impacting us here, when lots of mail orders were no big problem, and getting an unusual 3D-printed DIN rail bracket for a SSD was just a couple of clicks. I finished a month later, deep into social isolation and quarantine, scrounging around the house for scrap wire, scavenging screws from unused stuff and cutting them to size, and hoping I would not end up in a "need just one more part that I can't get" situation. It got rather elaborate, and working on it was often a welcome distraction from the news when I couldn't concentrate on my usual work. I'm posting this now because people sometimes tell me they like hearing about my offgrid stuff, and perhaps you could use a distraction too.

The panel has my house's computer on it, as well as both AC and DC power distribution, breakers, and switching. Since the house is offgrid, the panel is designed to let every non-essential power drain be turned off, from my offgrid fridge to the 20 terabytes of offline storage to the inverter and satellite dish, the spring pump for my gravity flow water system, and even the power outlet by the kitchen sink. Saving power is part of why I'm using old-school relays and stuff and not IOT devices; the other reason is of course: IOT devices are horrible dystopian e-waste. I'm taking the utopian Star Trek approach, where I can command "full power to the vacuum cleaner!"

[Images: two circuit boards, connected by numerous ribbon cables and clearly hand-soldered, the smaller board suspended above the larger; an electrical schematic of moderate complexity.]

At the core of the panel, next to the cubietruck arm board, is a custom IO daughterboard. Designed and built by hand to fit into a DIN mount case, it uses every GPIO pin on the cubietruck's main GPIO header. Making this board took 40+ hours, and was about half the project. It got pretty tight in there.

This was my first foray into DIN rail mount, and it really is industrial lego -- a whole universe of parts that all fit together and are immensely flexible. Often priced at more than seems reasonable for a little bit of plastic and metal, until you look at the spec sheets and the ratings. (Total cost for my panel was $400.) It's odd that it's not more used outside its niche -- I came of age in the Bay Area, surrounded by rack mount equipment, but no DIN mount equipment. Hacking the hardware in a rack is unusual, but DIN invites hacking.

Admittedly, this is a second-system kind of project, replacing some unsightly shelves full of gear and wires everywhere with something kind of overdone. But it should be worth it in the long run as new gear gets clipped into place and it evolves for changing needs. Also, wire gutters, where have you been all my life?

[Images: a cramped utility room with an entire wall covered with electronic gear, including the DIN rail, surrounded by wire gutters; detail of a wire gutter with the cover removed, numerous large and small wires running along it and exiting here and there.]

Finally, if you'd like to know what everything on the DIN rail is, from left to right: ground block, 24v DC disconnect, fridge GFI, spare GFI, USB hub switch, computer switch, +24v block, -24v block, IO daughterboard, 1tb SSD, arm board, modem, 3 USB hubs, 5 relays, AC hot block, AC neutral block, DC-DC power converters, humidity sensor.

[Image: full width of the DIN rail.]

Russ Allbery: Review: A Grand and Bold Thing

Review: A Grand and Bold Thing, by Ann Finkbeiner
Publisher: Free Press
Copyright: August 2010
ISBN: 1-4391-9647-8
Format: Kindle
Pages: 200
With the (somewhat excessively long) subtitle of An Extraordinary New Map of the Universe Ushering In a New Era of Discovery, this is a history of the Sloan Digital Sky Survey. It's structured as a mostly chronological history of the project with background profiles on key project members, particularly James Gunn.

Those who follow my blog will know that I recently started a new job at Vera C. Rubin Observatory (formerly the Large Synoptic Survey Telescope). Our goal is to take a complete survey of the night sky several times a week for ten years. That project is the direct successor of the Sloan Digital Sky Survey, and its project team includes many people who formerly worked on Sloan. This book (and another one, Giant Telescopes) was recommended to me as a way to come up to speed on the history of this branch of astronomy.

Before reading this book, I hadn't understood how deeply the ready availability of the Sloan sky survey data had changed astronomy. Prior to the availability of that survey data, astronomers would develop theories and then try to book telescope time to make observations to test those theories. That telescope time was precious and in high demand, so was not readily available, and was vulnerable to poor weather conditions (like overcast skies) once the allocated time finally arrived. The Sloan project changed all of that. Its output was a comprehensive sky survey available digitally whenever and wherever an astronomer needed it. One could develop a theory and then search the Sloan Digital Sky Survey for relevant data and, for at least some types of theories, test that theory against the data without needing precious telescope time or new observations. It was a transformational change in astronomy, made possible by the radical decision, early in the project, to release all of the data instead of keeping it private to a specific research project.

The shape of that change is one takeaway from this book. The other is how many problems the project ran into trying to achieve that goal. About a third of the way into this book, I started wondering if the project was cursed. So many things went wrong, from institutional politics through equipment failures to software bugs and manufacturing problems with the telescope mirror. That makes it all the more impressive how much impact the project eventually had. It's also remarkable just how many bad things can happen to a telescope mirror without making the telescope unusable.

Finkbeiner provides the most relevant astronomical background as she tells the story so that the unfamiliar reader can get an idea of what questions the Sloan survey originally set out to answer (particularly about quasars), but this is more of a project history than a popular astronomy book. There's enough astronomy here for context, but not enough to satisfy curiosity. If you're like me, expect to have your curiosity piqued, possibly resulting in buying popular surveys of current astronomy research. (At least one review is coming soon.)

Obviously this book is of special interest to me because of my new field of work, my background at a research university, and because it features some of my co-workers. I'm not sure how interesting it will be to someone without that background and personal connection. But if you've ever been adjacent to or curious about how large-scale science projects are done, this is a fascinating story.
Both the failures and problems, and the way they were eventually solved, are different from how the more common stories of successful or failed companies are told. (It helps, at least for me, that the shared goal was to do science, rather than to make money for a corporation whose fortunes are loosely connected to those of the people doing the work.) Recommended if this topic sounds at all interesting. Rating: 7 out of 10

31 March 2020

Chris Lamb: Free software activities in March 2020

Here is my monthly update covering what I have been doing in the free software world during March 2020 (previous month): In addition, I did even more hacking on the Lintian static analysis tool for Debian packages, including:
Reproducible builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised (a toy illustration of this check follows below).

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter. This month, I:

In our tooling, I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, including preparing and uploading version 138 to Debian:

The Reproducible Builds project also operates a fully-featured and comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, I reworked the web-based package rescheduling tool to:
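As promised above, a toy illustration of that consensus check (a sketch only, not the project's actual tooling, which compares rebuilds across independent machines; the package filename is hypothetical): build a Debian source package twice and compare checksums, reaching for diffoscope when they differ.

  # first build; keep the produced binary package and its checksum
  dpkg-buildpackage -us -uc -b
  cp ../example_1.0-1_amd64.deb /tmp/first.deb   # hypothetical name
  sha256sum ../example_1.0-1_amd64.deb > checksums.first

  # rebuild, then verify the result is bit-for-bit identical; on a
  # mismatch, diffoscope pinpoints where the two builds differ
  dpkg-buildpackage -us -uc -b
  sha256sum -c checksums.first || \
      diffoscope /tmp/first.deb ../example_1.0-1_amd64.deb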
Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 8 hours on its sister Extended LTS project. You can find out more about the Debian LTS project via the following video:

Debian Uploads

For the Debian Privacy Maintainers team I requested that the pyptlib package be removed from the archive (#953429), as well as uploading onionbalance (0.1.8-6) to fix test failures under Pytest 3.x (#953535) and a new upstream release of nautilus-wipe. Finally, I sponsored an upload of bilibop (0.6.1) on behalf of Yann Amar.

30 March 2020

Jonathan Dowland: ephemeral note-taking wins

Some further thoughts on ephemeral versus preserve-everything note-taking. Note-taking is about capturing ideas, thoughts, and processes. You want as little friction as possible when doing so: you don't want to be thinking that the page is too small, or that the paper is drinking up the ink too quickly so the pen doesn't move smoothly, or to have similar such things distract from capturing what you are trying to capture.

I used my PhD notebook as an example of a preserve-everything approach. A serious drawback of the notebook as the sole place to capture work is the risk that it will be damaged or lost. I periodically photograph all the pages and store those photos digitally, alongside other things relating to the work. Those other things include two different private wiki instances that I use to capture notes when I'm working at the computer, as well as several Git repositories (some public, some private) for source code, experiments, drafts of papers, etc. There's also a not-insignificant amount of email correspondence. There have been several train journeys and several meetings where I've grabbed a cheap, larger-format pad of paper and a box of Pound-shop felt-tip pens to sketch ideas, whiteboard-style. At the time it just seemed easier to capture what we were doing in that way, rather than try to do so in the notebook. So the notebook is neither canonical nor comprehensive.

Ultimately the notebook is really just another example of ephemeral note-taking, and so I think the ephemeral model wins out. Use whatever notebook or paper or envelope or window pane is convenient and feels attractive at the time you need to capture something with the least amount of friction. Digitise that, and store, catalogue, adjust, derive, etc. from it in the digital domain.

Mike Gabriel: Mailman3 - Call for Translations (@Weblate)

TL;DR; please help localizing Mailman3 [1]. You can find it on hosted Weblate [2]. The next component releases are planned in 1-2 weeks from now. Thanks for your contribution! If you can't make it now, please consider working on Mailman3 translations at some later point in time. Thanks!

Time has come for Mailman3

Over the last months I have found an interest in Mailman3. Given the EOL of Python2 in January 2020, and being a heavy Mailman2 provider for various of my projects and also for customers, I felt it was time to look at Mailman2's successor: Mailman3 [1]. One great novelty in Mailman3 is the strict split between the backend (Mailman Core) and the frontend components (django-mailman3, Postorius, Hyperkitty). All three frontend components are Django applications. Postorius is the list management web frontend, whereas Hyperkitty is an archive viewer. Unlike in Mailman2, you can also drop list posts into Hyperkitty directly (instead of sending a mail to the list). This makes Hyperkitty also some sort of forum software with a mailing list core in the back. The django-mailman3 module knits the previous two together (and handles account management, the login dialog, profile settings, etc.).

Looking into Mailman3 Upstream Code

Some time back in mid 2019 I decided to deploy Mailman3 at a customer's site and also for my own business (which still is the test installation). Living and working in Germany, my customers' demand often is a fully localized WebUI. And at that time, Mailman3 could not provide this. Many exposed parts of the Mailman3 components were still not localized (or not localizable). Together with my employee I put some hours of effort into providing merge requests, filing bug reports, requesting better Weblate integration (meaning: hosted Weblate), improving the membership scripting support, etc. It felt a bit like setting the whole i18n thing in motion.

Call for Translations

Over the past months I had to focus on other work, and two days ago I was delighted that Abhilash Raj (one of the Mailman3 upstream maintainers) informed me (via closing one of the related bugs [3]) that Mailman3 is now fully integrated with the hosted Weblate service and a continuous translation workflow is set to go. The current translation status of the Mailman3 components is at ~10%. We can do better than this, I sense. So, if you are a non-native English speaker and feel like contributing to Mailman3, please visit the hosted Weblate site [2], sign up for an account (if you don't have one already), and chime in on the translation of one of the future mailing list software suites run by many FLOSS projects all around the globe. Thanks a lot for your help. As a side note: if you plan on working on translating Mailman Core into your language (and can't find it in the list of supported languages), please request this new language via the Weblate UI. All other components have all available languages enabled by default. References:

Axel Beckert: How do you type on a keyboard with only 46 or even 28 keys?

Some of you might have noticed that I've been into keyboards for a few years now; into mechanical keyboards, to be precise.

Preface

It basically started when the Swiss Mechanical Keyboard Meetup (whose website I started later on) was held in the hackerspace of the CCCZH. I mostly used TKL keyboards (i.e. keyboards just missing the number block, which is useless to me) and tried to get my hands on more keyboards with Trackpoints (but failed so far). At some point a year or two ago, I started looking into smaller keyboards so I would have a mechanical keyboard with me when travelling. I first bought a Vortex Core at Candykeys. The size was nice, and especially having all layers labelled on the keys was helpful, but nevertheless I soon noticed that the smaller the keyboards get, the more important it is that they're properly programmable. The Vortex Core is programmable, but not the keys in the bottom right corner, which are exactly the keys I wanted to change to get a cursor block down there. (Later I found out that there are possibilities to get this done, either with an alternative firmware and a hack of it, or by desoldering all switches and mounting an alternative PCB called Atom47.)

40% Keyboards

So at some point I ordered a MiniVan keyboard from The Van Keyboards (MiniVan keyboards will soon be available again at The Key Dot Company), here shown with GMK Paperwork (also bought from and designed by The Van Keyboards):
The MiniVan PCBs are fully programmable with the free and open source firmware QMK, and I started to use that keyboard more and more instead of bigger keyboards.

Layers

With the MiniVan I learned the concept of layers. Layers are similar to what many laptop keyboards do with the Fn key, and to some extent also what the German standard layout does with the AltGr key: layers are basically alternative key maps you can switch to with a special key (often called "Fn", "Fn1", "Fn2", etc., or, especially if there are two additional layers, "Raise" and "Lower"). There are several concepts for how these layers can be reached with these keys:

My MiniVan Layout

For the MiniVan, two additional layers suffice easily, but since I have a few characters on multiple layers and also have mouse control and media keys crammed in there, I have three additional layers on my MiniVan keyboards:

TRNS means transparent, i.e. use the settings from lower layers.
I also use a feature that allows me to bind different actions to a key depending on whether I just tap the key or hold it. Some also call this "tap dance". This is especially popular on the usually rather huge spacebar; there, the term "SpaceFn" has been coined, probably after this discussion on Geekhack. I use this for all my layer switching keys: With this layout I can type English texts as fast as I can type them on a standard or TKL layout. German umlauts are a bit more difficult because they require 4 to 6 key presses per umlaut, as I use the Compose key functionality (mapped to the Menu key between the spacebars and the cursor block). So to type an Ä on my MiniVan, I have to:
  1. press and release Menu (i.e. Compose); then
  2. press and hold either Shift-Spacebar (i.e. Shift-Fn1) or Slash (i.e. Fn2), then
  3. press N for a double quote (i.e. Shift-Fn1-N or Fn2-N) and then release all keys, and finally
  4. press and release the base character for the umlaut, in this case Shift-A.
And now just take these concepts and reduce the amount of keys to 28:

30% and Sub-30% Keyboards

In late 2019 I stumbled upon a nice little keyboard kit shop on Etsy called WorldspawnsKeebs, which I (and probably most other people in the mechanical keyboard scene) hadn't taken into account when looking for keyboards. They offer mostly kits for keyboards of 40% size and below, most of them rather simple and not expensive. For about €30 you get a complete sub-30% keyboard kit (without switches and keycaps though, but that's very common for keyboard kits as it leaves the choice of switches and key caps to you) named Alpha28, consisting of a minimal acrylic case and a PCB and electronics set. This Alpha28 keyboard is btw. fully open source, as the source code (i.e. design files) for the hardware is published under a free license (the MIT license) on GitHub. And here's how my Alpha28 looks with GMK Mitolet (part of the GMK Pulse group-buy) key caps:
So we only have character keys, Enter (labelled "Data" as there was no 1u Enter key with that row profile in that key cap set; I'll also call it Data for the rest of this posting) and a small spacebar, not even modifier keys.

The Default Alpha28 Layout

The original key layout by the developer of the Alpha28 uses the spacebar as Shift on hold and as space if just tapped, and the Data key always switches to the next layer, i.e. it switches the layer permanently on tap and not just on hold. This way that key rotates through all layers. In all other layers, V switches back to the default layer. I assume that the modifiers on the second layer are also on tap and apply to the next normal key. This has the advantage that you don't have to bend your fingers for some key combos, but you have to remember which layer you are on at the moment. (IIRC QMK allows you to show that via LEDs or similar.) Kinda just like vi.

My Alpha28 Layout

But maybe because I'm more an Emacs person, I dislike remembering states myself and don't mind bending my fingers. So I decided to develop my own layout, using tap-or-hold and only doing layer switches by holding down keys:

A triangle means that the settings from lower layers are used, N/A means the key does nothing.
It might not be very obvious, but on the default layer, all keys in the bottom row and most keys at the row ends have tap-or-hold configurations.

[Sections elided: basic ideas; bottom row if hold; other rows if hold; how the keys are divided into layers.]

Using the Alpha28

This layout works surprisingly well for me. Only for Minus, Equal, Single Quote and Semicolon do I still often have to think or try whether they're on layer 1 or 2, as on my 40%s (MiniVan, Zlant, etc.) I have them all on layer 1 (and in general one layer less overall). And for really seldom-used keys like Insert, PrintScreen, ScrollLock or Pause, I might have to consult my own documentation. They're somewhere in the middle of the keyboard, either on layer 1, 2, or 3. ;-) And of course, typing umlauts takes even two key presses more per umlaut than on the MiniVan since, on the one hand, Menu is not on the default layer and, on the other hand, I don't have this nice shifted number row and actually have to also press Shift to get a double quote. So to type an Ä on my Alpha28, I have to:
  1. press and release Space-F (i.e. Fn1-F) for Menu (i.e. Compose); then
  2. press and hold A-Spacebar-L (i.e. Shift-Fn1-L) for getting a double quote, then
  3. press and release the base character for the umlaut, in this case L-A for Shift-A (because we can't use A for Shift, as I can't hold a key and then press it again :-).
Conclusion

If the characters on upper layers are not labelled, like on the Vortex Core, i.e. especially on all self-made layouts, typing is a bit like playing that old children's game "Memory": as soon as you remember (or your muscle memory knows) where some special characters are, typing gets faster. Otherwise, you start with trial and error, or look at the documentation. Or give up. ;-) Nevertheless, typing on a sub-30% keyboard like the Alpha28 is much more difficult and slower than on a 40% keyboard like the MiniVan. So the Alpha28 very likely won't become my daily driver, while the MiniVan de facto already is my daily driver. But I like these kinds of challenges, as others like the game "Memory". So I ordered three more 30% and sub-30% keyboard kits at WorldspawnsKeebs for soldering on the upcoming weekend during the COVID19 lockdown: And if at some point I want to try to type with even fewer keys, I'll try a Butterstick keyboard with just 20 keys. It's a chorded keyboard where you have to press multiple keys at the same time to get one character: So to get an A from the missing middle row, you have to press Q and Z simultaneously; to get Escape, press Q and W simultaneously; to get Control, press Q, W, Z and X simultaneously, etc. And if that's not even enough, I already bought a keyboard kit named Ginny (or Ginni, the developer can't seem to decide) with just 10 keys from an acquaintance. Couldn't resist when offered his surplus kits. :-) It uses the ASETNIOP layout, which was initially developed for on-screen keyboards on tablets.

Shirish Agarwal: Covid 19 and the Indian response.

There have been a lot of stories about the Coronavirus, and with them a lot of political blame-games. The first step that India took, a lockdown, was and is a good step, but it came without a plan for how the poor and the needy, and especially the huge internal migrant population that India has, would be affected by it. A 2019 World Economic Forum report puts the figure at 139 million people. That is a huge number of people, and there is a variety of both push and pull factors which have displaced them. While there have been attempts in the past to address this, and probably will be more in the future, they will be hampered unless we have trustworthy data, and there is still a lot to be done on that front. In recent years, both primary and secondary data have generated a lot of controversy within India as well as abroad, so no point in rehashing all of that. Even the definition of who is a "migrant" needs to be well established, just as for who is a "farmer". The simplest lacuna in the latter is that those who own land are counted as farmers, but tenant farmers and their wives are not, hence the true numbers are never known. Whether this is an India-specific problem or similar definition issues exist in the rest of the world, I don't know.

How our Policies fail to reach the poor and the vulnerable

The sad part is that most policies in India are made in "castles in the air". An interview by The Wire shares the conundrum of those who are affected versus the policies which are enacted for them (it's a YouTube video, sorry):
If one watches the interview with an open and fresh mind, it is clear why there was a huge reverse migration from Indian cities to villages. The poor and marginalized have always seen the Indian state as an extortive force, so it doesn't make sense for them to stay in the cities. The Prime Minister's announcement of food for 3 months was a clear indication for the migrant population that for 3 months they would have no work. Faced with such a scenario, the best option for them was to return to their native places. While the videos shown were of huge numbers of migrants in Delhi, this was the scenario in most states and cities, including Pune, my own city. Another interesting point which was made is that most of the policies will need the migrants to be back in their villages: most of these are tied to accounts which are opened in the villages, so even if they want to have the benefits, they will have to migrate to the villages in order to use them. Of course, everybody in India knows how leaky the administration is. The late Shri Rajiv Gandhi once famously and infamously remarked how leaky the Public Distribution System and such systems are: only 10 paise out of a rupee reaches the poor. And he said this about 30 years ago. There have been numerous reports on both IPS (Indian Police Services) reforms and IAS (Indian Administrative Services) reforms over the years; many of the committee reports have been in the public domain and were in fact part of the election manifesto of the ruling party in 2014, but no movement has happened on that front. The only thing which has happened is that people from the ruling party have been appointed to various posts, which is the same as under earlier governments. I was discussing with a friend, who is a contractor and builder, the construction labour issues which were pointed out in the report, and whether it is true that many a time migrant labour is not counted. While he shared a number of cases he knew of, a more recent case in public memory is that of the labourers who died while building Amanora Mall, perhaps one of the largest malls in India. There were a few accidents while constructing the mall. Apparently, the insurance money which should have gone to the migrant labourers was taken by somebody close to the developers who were building the mall. I have a friend who lives in Jharkhand and is a labour officer. She has shared with me so many stories of how labourers are exploited. Keep in mind she is a labour officer appointed by the state and her salary is paid by the state. So she always has to maintain a balance between ensuring workers' rights and the interests of the state, private entities etc., which are usually in cahoots with the state, and a lot of the time the state wins over workers' rights. Again, as a labour officer she doesn't have that much power, and when she was new to the work she was often frustrated, but as she remarked a few months back, she has started taking it easy (it has become routine) as the frustration wasn't helping her in any good way. Also, there have been plenty of cases of labour officers being murdered, so it's easy to understand why one tries to retain some sanity while doing such a job.

The Indian response and the World Response

The Indian response has been the lockdown and very limited testing. We seem to be following the pattern of the UK and the U.S., which were slow to respond and slow to test. In the past Kerala showed the way, but this time even that is not enough. At the end of the day we need to test, test and test, just as the WHO chairman has said. India is trying to create its own cheap test kits with ICMR approval; for example, a firm from my own city of Pune, MyLab, has been given approval. We will know how good or bad the kits are only after they have been field-tested. For ventilators we have asked Mahindra and Mahindra, even though there are companies like Allied Medical and others who have exported to the EU and elsewhere, which the Govt. is still taking time to think through. This is similar to how in the UK some companies which are close to the Govt. but have no experience in making ventilators have been given orders, while those who have experience and were exporting to Germany and other countries have not. The playbook is eerily similar. In India, we don't have the infrastructure for any new patients, period. Heck, only a couple of states have done something proper for the anganwadi workers. In fact, last year there were massive strikes by anganwadi workers all over India, but only NDTV and some of the news channels from South India showed a bit of it; most mainstream channels chose to ignore it. On the world stage, how some of the other countries have responded perhaps needs sharing. For example, I didn't know that Cuba had so many doctors, or about the politics between it and Brazil. Or the interesting stats shared by Andreas Backhaus, which seem to show how distributed (age-wise) the issue is, rather than confined to just a few groups, as has been told in the Indian media. What was surprising for me is the 20-29 age group, the bulk of our population, which has not been discussed so much in the Indian media. The HBR article also makes a few key points which I hope both the general public and policymakers, in India as well as elsewhere, take note of. What is worrying, though, is that people can apparently be infected twice or more, as seems to be the case in Singapore, China and elsewhere. I have read enough Robin Cook and Michael Crichton books to be aware that viruses can do whatever: they will mutate over time, and how things will play out then is anybody's guess. What I found interesting is the World Economic Forum article which hypothesizes that it may be two viruses which got together, as well as a research paper recently published in a proteome research journal. The biggest myth flying around is that summer will halt or kill the spread, a myth even some of my friends have fallen victim to. While a part of me wants to believe them, the simple scientific fact is that viruses have been around us and evolved over time, just like we have. In fact, there have been cases of people dying due to the common cold and other such things. Viruses are so prevalent it's unbelievable. What is and was interesting to note is that bat-borne viruses as well as pangolin viruses had been theorized about and shared by Chinese researchers going all the way back to the 90s. The problem is, even if we killed all the bats in the world, some other virus would take their place for sure. One of the ideas I had (dunno if it's feasible or not) is that at least in places like airports, we should have some sort of screening and labs working on virology.
Of course, this would mean more expenses for flying passengers, but for public health and safety maybe it would be worth doing. In any case, virologists would have a field day cataloguing various viruses, and it would make it harder for viruses to spread as fast as this one has. The virus spread also showed a lack of leadership in most of our leaders, who didn't react fast enough. While one hopes people do learn from this, I am afraid the whole thing is far from over. These are unprecedented times; I hope all of you are maintaining social distancing and going out only when needed.

29 March 2020

Enrico Zini: Politics links

How tech's richest plan to save themselves after the apocalypse
politics privilege archive.org
Silicon Valley's elite are hatching plans to escape disaster, and when it comes, they'll leave the rest of us behind.
Heteronomy refers to action that is influenced by a force outside the individual, in other words the state or condition of being ruled, governed, or under the sway of another, as in a military occupation.
Early Warning Signs Of Fascism: Laurence W. Britt wrote about the common signs of fascism in April 2003, after researching seven fascist regimes: Hitler's Nazi Germany; Mussolini's Italy; Franco's Spain; Salazar's Portugal; Papadopoulos' Greece; Pinochet's Chile; Suharto's Indonesia. Text: Early Warning Signs of Fascism: Powerful and Continuing Nationalism; Disdain For Human Rights; Identification of Enemies As a Unifying Cause; Supremacy of the Military; Rampant Sexism; Controlled Mass Media; Obsession With National Security
Political and social scientist Stefania Milan writes about social movements, mobilization and organized collective action. On the one hand, interactions and networks achieve more visibility and become a proxy for a collective "we". On the other hand: law enforcement can exercise preemptive monitoring.
How new technologies and techniques pioneered by dictators will shape the 2020 election
A regional election offers lessons on combatting the rise of the far right, both across the Continent and in the United States.
The Italian diaspora is the large-scale emigration of Italians from Italy. There are two major Italian diasporas in Italian history. The first diaspora began more or less around 1880, a decade or so after the Unification of Italy (with most leaving after 1880), and ended in the 1920s to early-1940s with the rise of Fascism in Italy. The second diaspora started after the end of World War II and roughly concluded in the 1970s. These together constituted the largest voluntary emigration period in documented history. Between 1880-1980, about 15,000,000 Italians left the country permanently. By 1980, it was estimated that about 25,000,000 Italians were residing outside Italy. A third wave is being reported in present times, due to the socio-economic problems caused by the financial crisis of the early twenty-first century, especially amongst the youth. According to the Public Register of Italian Residents Abroad (AIRE), figures of Italians abroad rose from 3,106,251 in 2006 to 4,636,647 in 2015, growing by 49.3% in just ten years.

Molly de Blanc: Computing Under Quarantine

Under the current climate of lockdowns, self-isolation, shelter-in-place policies, and quarantine, it is becoming evident to more people what an integral role computers play in our lives. Students are learning entirely online, those who can are working from home, and our personal relationships are being carried largely by technology like video chats, online games, and group messages. When these things have become our only means of socializing with those outside our homes, we begin to realize how important they are, and the inequity inherent in many technologies. Someone was telling me how a neighbor doesn't have a printer, so they are printing off school assignments for their neighbor. People I know are sharing internet connections with people in their buildings, when possible, to help save on costs as people lose jobs. I worry now even more about people who have limited access to home devices or poor internet connections.

As we are forced into our homes and are increasingly limited in the resources we have available, we find ourselves potentially unable to easily fill material needs and desires. In my neighborhood, it's hard to find flour. A friend cannot find yeast. A coworker couldn't find eggs. Someone else is without dish soap. Supply chains are not designed to meet the demand currently being exerted on the system. This problem is mimicked in technology. If your computer breaks, it is much harder to fix it, and you lose a lot more than just a machine: you lose your source of connection with the world. If you run out of toner cartridges for your printer and only one particular brand works, the risk of losing your printer, and your access to school work, becomes a bigger deal. As an increasing number of things in our homes are wired, networked, and only able to function with a prescribed set of proprietary parts, gaps in supply chains become an even bigger issue. When you cannot use whatever is available, and instead need to wait for the particular thing, you find yourself either hoarding or going without. What happens when you can't get the toothbrush heads for your smart toothbrush, due to prioritization and scarcity in online ordering, when it's not so easy to just go to the pharmacy and get a regular toothbrush?

In response to COVID-19, Adobe is offering no-cost access to some of their services. If people allow themselves to rely on these free services, they end up in a bad situation when a cost is re-attached. Lock-in is always a risk, but when people are desperate, unemployed, and lacking the resources they need to survive, the implications of being trapped in these proprietary systems are much more painful. What worries me even more than this is the reliance on insecure communication apps. Zoom, which is becoming the default service in many fields right now, offers anti-features like attendee attention tracking and user reporting. We are now being required to use technologies designed to maximize opportunities for surveillance to learn, work, and socialize. This is worrisome to me for two main reasons: the violation of privacy and the normalization of a surveillance state. It is a violation of privacy to have our actions tracked. It also gets us used to being watched, which is dangerous as we look towards the future.

28 March 2020

Dirk Eddelbuettel: RProtoBuf 0.4.17: Robustified

A new release 0.4.17 of RProtoBuf is now on CRAN. RProtoBuf provides R with bindings for the Google Protocol Buffers ("ProtoBuf") data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language- and operating-system-agnostic protocol. This release contains small polishes related to release 0.4.16, which added JSON support for messages and switched to ByteSizeLong. This release now makes sure JSON functionality is only tested where available (on version 3 of the Protocol Buffers library), and that ByteSizeLong is only called where available (version 3.6.0 or later). Of course, older versions build as before and remain fully supported.
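The version gate might look roughly like the following fragment of a configure-style shell script; this is a sketch under assumptions, not the actual RProtoBuf configure code, and the RPROTOBUF_HAS_BYTESIZELONG flag name is made up for illustration:

  # ask pkg-config which Protocol Buffers version is installed
  pb_version=$(pkg-config --modversion protobuf)

  # enable ByteSizeLong() only on ProtoBuf >= 3.6.0; sort -V gives a
  # good-enough version comparison on GNU systems
  lowest=$(printf '%s\n' "3.6.0" "$pb_version" | sort -V | head -n1)
  if [ "$lowest" = "3.6.0" ]; then
      PKG_CPPFLAGS="$PKG_CPPFLAGS -DRPROTOBUF_HAS_BYTESIZELONG"
  fi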

Changes in RProtoBuf version 0.4.17 (2020-03-xx)
  • Condition use of ByteSizeLong() on building with ProtoBuf 3.6.0 or later (Dirk in #71 fixing #70).
  • The JSON unit tests are skipped if ProtoBuf 2.* is used (Dirk, also #71).
  • The configure script now extracts the version from the DESCRIPTION file (Dirk, also #71).

CRANberries provides the usual diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the quick overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 March 2020

Raphaël Hertzog: Freexian's report about Debian Long Term Support, February 2020

[Image: the Debian LTS logo]

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, 226 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

February began as a rather calm month, and the fact that more contributors have given back unused hours is an indicator of this calmness, and also an indicator that contributing to LTS has become more of a routine now, which is good. In the second half of February, Holger Levsen (from LTS) and Salvatore Bonaccorso (from the Debian Security Team) met at SnowCamp in Italy and discussed tensions and possible improvements from and for Debian LTS. The security tracker currently lists 25 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


24 March 2020

Anisa Kuci: Outreachy post 5 - Final report

This is my last Outreachy blog post, as my internship unfortunately has come to an end. It was a blast! Through the Outreachy internship I gained a lot of valuable knowledge, and even though I don't know what the future will bring, I am more confident about myself, my skills in fundraising and my technical abilities now.

During the contribution phase I did quite a lot of research so I could come up with relevant information to add to the Debian wiki as part of the contribution phase tasks. This was helpful for me to build a deeper understanding of the Debian project, the DebConf conferences and the general style of working within Debian. During this phase I completed most of the tasks given and got onto the mailing lists and IRC channels which are public. This was quite an intense experience by itself, as it was like digging into a new job, but in a competitive situation, as other applicants, of course, were also putting in their best to get one of the 50 Outreachy internships (two in Debian) that were available.

When I got selected I also got access to the sponsors@debconf.org email address and the private git repos, as they would be needed for me to work on fundraising. I was also provided with an @debconf.org email address, which is the one I use for sponsor communication. I learned what makes communication look professional and how to think of the recipient when formulating emails, sending replies or creating marketing messages.

As the internship continued I started to learn how the DebConf organizing structure works, with particular attention to the fundraising team. I was quickly given the responsibility of reaching out to 100 potential sponsors, which combined learning and working experience very nicely. I was given responsibility for working on the fundraising material (flyer and brochure) for DebConf20, so using LaTeX I updated the files in order to remove the translation system, and I added the Israeli currency and the visual design chosen from the public logo proposal contest that is held for each DebConf. My creative side could also be put to good use, as the team trusted me with selecting the images for the brochure and making sure that the usage rights granted and the attribution are all compliant with Free Software/Creative Commons licences. I have continuously maintained those materials and was very happy that long-term team members even asked me to approve changes they proposed to the fundraising material. The MiniDebConf team for Regensburg have taken the files I created and made them into their own fundraising material (brochure, flyer), which is great to see (see the MiniDebConf Regensburg commits). I have profited so much from other people's work, so I am very happy if community members can build on mine. After all, sharing is caring!

Through the Outreachy travel stipend I was able to attend FOSDEM 2020, and besides meeting many friends from the community and having a fundraising meeting with a sponsor, I was able to also support distributing the DebConf20 fundraising material, both in electronic form and as printed versions.

During the Outreachy internship I did two communication waves to potential sponsors and maintained the daily communication with them. I followed the documentation that was already available in the Debian wiki, guiding me through the process of invoicing sponsors through the organizations that Debian collaborates with.
As I have been in frequent contact with the sponsors, I have continuously updated the DebConf20 website with sponsor logos and links through git files, improving web pages on the wafer system and doing 18 merge requests in total (see the DebConf20 website merge requests). I completed six email templates for the DebConf sponsor communication campaigns. While two of them have already been very useful, I hope my other proposals will be used for the next waves of contacting sponsors. I committed them to the sponsors git repo so they can be easily accessible and well documented for other DebConf teams, or can be recycled for smaller Debian events. Unfortunately, due to the uncertainty about travel around the Corona virus pandemic, the third wave of communication, originally scheduled for the end of my internship in March, has been put on hold. I have committed to the team to do this once the travel restrictions have been lifted and we know that DebConf20 can proceed as planned for August 2020. So yeah, I am definitely going to stay around and continue to help with DebConf fundraising!

DebConf has been held for two decades and it also has a generic website, so I also updated the static sponsors page on the main website, as it is the first landing page for everyone, especially for people who are not necessarily insiders in Debian or DebConf. I have completed a Python tutorial, got a general understanding of the programming language and made some smaller contributions to improve existing tools in the fundraising team based on that learning.

The internship was also focused on documentation, so during the whole internship I have kept notes in order to be able to improve existing documentation, and I have also written new material, especially on the tasks that I have been working on more closely: Every DebConf has a new team every year, in a new country, so I hope this documentation I have worked on will be useful as a jump-starter for them to organize and finance their events.

I would like to take a moment to thank again my mentors Daniel and Karina for all their support; it has been great working with them. Having them as an example, I have learned a lot. Also warm thanks to the DebConf global and local teams, which have been very welcoming and always supportive during my internship. So, as you might already know if you have read my other Outreachy blog posts, this has been a great experience for me! Outreachy provides an awesome opportunity that would not be available if it were not for the generous sponsors and volunteers making such a program possible for people from underrepresented groups in the FLOSS community. I really encourage people from these groups to find the confidence within themselves and apply for Outreachy! I love Debian

Russ Allbery: Review: Lost in Math

Review: Lost in Math, by Sabine Hossenfelder
Publisher: Basic
Copyright: June 2018
ISBN: 0-465-09426-0
Format: Kindle
Pages: 248
Listening to experts argue can be one of the better ways to learn about a new field. It does require some basic orientation and grounding, or it can be confusing or, worse, wildly misleading, so some advance research or Internet searches are warranted. But it provides some interesting advantages over reading multiple popular introductions to a field. First, experts arguing with each other are more precise about their points of agreement and disagreement because they're trying to persuade someone who is well-informed. The points of agreement are often more informative than the points of disagreement, since they can provide a feel for what is uncontroversial among experts in the field. Second, internal arguments tend to be less starry-eyed. One of the purposes of popularizations of a field is to get the reader excited about it, and that can be fun to read. But to generate that excitement, the author has a tendency to smooth over disagreements and play up exciting but unproven ideas. Expert disagreements pull the cover off of the uncertainty and highlight the boundaries of what we know and how we know it.

Lost in Math (subtitled How Beauty Leads Physics Astray) is not quite an argument between experts. That's hard to find in book form; most of the arguments in the scientific world happen in academic papers, and I rarely have the energy or attention span to read those. But it comes close. Hossenfelder is questioning the foundations of modern particle physics for the general public, but also for her fellow scientists.

High-energy particle physics is facing a tricky challenge. We have a solid theory (the standard model) which explains nearly everything that we have currently observed. The remaining gaps are primarily at very large scales (dark matter and dark energy) or near phenomena that are extremely difficult to study (black holes). For everything else, the standard model predicts our subatomic world to an exceptionally high degree of accuracy. But physicists don't like the theory. The details of why are much of the topic of this book, but the short version is that the theory does not seem either elegant or beautiful. It relies on a large number of measured constants that seem to have no underlying explanation, which is contrary to a core aesthetic principle that physicists use to judge new theories.

Accompanying this problem is another: New experiments in particle physics that may be able to confirm or disprove alternate theories that go beyond the standard model are exceptionally expensive. All of the easy experiments have been done. Building equipment that can probe beyond the standard model is incredibly expensive, and thus only a few of those experiments have been done. This leads to two issues: Particle physics has an overgrowth of theories (such as string theory) that are largely untethered from experiments and are not being tested and validated or disproved, and spending on new experiments is guided primarily by a sense of scientific aesthetics that may simply be incorrect.

Enter Lost in Math. Hossenfelder's book picks up a thread of skepticism about string theory (and, in Hossenfelder's case, supersymmetry as well) that I previously read in Lee Smolin's The Trouble with Physics. But while Smolin's critique was primarily within the standard aesthetic and epistemological framework of particle physics, Hossenfelder is questioning that framework directly. Why should nature be beautiful? Why should constants be small? What if the universe does have a large number of free constants?
And is the dislike of an extremely reliable theory on aesthetic grounds a good basis for guiding which experiments we fund?
Do you recall the temple of science, in which the foundations of physics are the bottommost level, and we try to break through to deeper understanding? As I've come to the end of my travels, I worry that the cracks we're seeing in the floor aren't really cracks at all but merely intricate patterns. We're digging in the wrong places.
Lost in Math will teach you a bit less about physics than Smolin's book, although there is some of that here. Smolin's book was about two-thirds physics and one-third sociology of science. Lost in Math is about two-thirds sociology and one-third physics. But that sociology is engrossing. It's obvious in retrospect, but I hadn't thought before about the practical effects of running out of unexplained data on a theoretical field, or about the transition from more data than we can explain to having to spend billions of dollars to acquire new data. And Hossenfelder takes direct aim at the human tendency to find aesthetically appealing patterns and unified explanations, and scores some palpable hits.
I went into physics because I don't understand human behavior. I went into physics because math tells it how it is. I liked the cleanliness, the unambiguous machinery, the command math has over nature. Two decades later, what prevents me from understanding physics is that I still don't understand human behavior. "We cannot give exact mathematical rules that define if a theory is attractive or not," says Gian Francesco Giudice. "However, it is surprising how the beauty and elegance of a theory are universally recognized by people from different cultures. When I tell you, 'Look, I have a new paper and my theory is beautiful,' I don't have to tell you the details of my theory; you will get why I'm excited. Right?" I don't get it. That's why I am talking to him. Why should the laws of nature care what I find beautiful? Such a connection between me and the universe seems very mystical, very romantic, very not me. But then Gian doesn't think that nature cares what I find beautiful, but what he finds beautiful.
The structure of this book is half tour of how physics judges which theories are worthy of investigation and half personal quest to decide whether physics has lost contact with reality. Hossenfelder approaches this second thread with multiple interviews of famous scientists in the field. She probes at their bases for preferring one theory over another, at how objective those preferences can or should be, and what it means for physics if they're wrong (as increasingly appears to be the case for supersymmetry). In so doing, she humanizes theory development in a way that I found fascinating. The drawback to reading about ongoing arguments is the lack of a conclusion. Lost in Math, unsurprisingly, does not provide an epiphany about the future direction of high-energy particle physics. Its conclusion, to the extent that it has one, is a plea to find a way to put particle physics back on firmer experimental footing and to avoid cognitive biases in theory development. Given the cost of experiments and the nature of humans, this is challenging. But I enjoyed reading this questioning, contrarian take, and I think it's valuable for understanding the limits, biases, and distortions at the edge of new theory development. Rating: 7 out of 10

23 March 2020

Bits from Debian: New Debian Developers and Maintainers (January and February 2020)

The following contributors got their Debian Developer accounts in the last two months:

The following contributors were added as Debian Maintainers in the last two months:

Congratulations!

Gunnar Wolf: Made in what?

Say what? I just bought a 5-pack of 64GB USB keys, and am about to test them to ensure their actual capacity. And the label is… Actually true! For basically anything we are likely to encounter, especially in electronics. But still, it demands a photo before opening. How come I've never come across anything like this before? :-] Update: Of course, just opening the package yielded this much more traditional (and much more permanent) piece of information:

Dima Kogan: org-babel for documentation

So I just gave a talk at SCaLE 18x about numpysane and gnuplotlib, two libraries I wrote to make using numpy bearable. With these two, it's actually quite nice! Prior to the talk I overhauled the documentation for both these projects. The gnuplotlib docs now have a tutorial/gallery page, which is interesting enough to write about. Check it out! Mostly it is a sequence of plots, each paired with the code snippet that produced it.

Clearly you want the plots in the documentation to correspond to the code, so you want something to actually run each code snippet to produce each plot. Automatically. I don't want to maintain these manually, and periodically discover that the code doesn't make the plot I claim it does, or worse: that the code barfs. This is vaguely what Jupyter notebooks do, but they're ridiculous, so I'm doing something better: the documentation is a plain org-mode document, each code snippet is an org-babel block, and executing the buffer in emacs runs every snippet and writes out each plot (see the sketch below). That's it.

The git repo is hosted by github, which has a rudimentary renderer for .org documents. I'm committing the .svg files, so that's enough to get rendered documentation that looks nice. Note that the usual workflow is to use org to export to html, but here I'm outsourcing that job to github; I just make the .svg files, and that's enough. Look at the link again: gnuplotlib tutorial/gallery. This is just a .org file committed to the git repo. github is doing its normal org->html thing to display this file. This has drawbacks too: github is ignoring the :noexport: tag on the init section at the end of the file, so it's actually showing all the emacs lisp goop that makes this work (described below!). It's at the end, so I guess this is good-enough.

Those of us that use org-mode would be completely unsurprised to hear that the talk is also written as an .org document. And the slides that show gnuplotlib plots use the same org-babel system to render the plots. It's all oh-so-nice. As with anything as flexible as org-babel, it's easy to get into a situation where you're bending it to serve a not-quite-intended purpose. But since this all lives in emacs, you can make it do whatever you want with a bit of emacs lisp. I ended up advising a few things (mailing list post here). And I stumbled on an (arguable) bug in emacs that needed working around (mailing list post here). I'll summarize both here.
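A minimal sketch of that regeneration step (assuming the tutorial source is a file named guide.org; that filename is my guess from the guide-N.svg plot names used below):

;; A minimal sketch, not code from the post: re-run every org-babel block
;; in the tutorial, regenerating all the committed .svg plots in one pass.
;; "guide.org" is an assumed filename.
(with-current-buffer (find-file-noselect "guide.org")
  (org-babel-execute-buffer))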

Handling large Local Variables blocks
The advice I ended up with was longer than emacs expected, which made emacs not evaluate it when loading the buffer. As I discovered (see the mailing list post), the loading code looks for the string Local Variables in the last 3000 bytes of the buffer only, and I exceeded that. Stefan Monnier suggested a workaround in this post. Instead of the normal Local Variables block at the end:
Local Variables:
eval: (progn ... ...
             ... ...
             LONG chunk of emacs-lisp
      )
End:
I do this:
(progn ;;local-config
   lisp lisp lisp
   as long as I want
)
Local Variables:
eval: (progn (re-search-backward "^(progn ;;local-config") (eval (read (current-buffer))))
End:
So emacs sees a small chunk of code that searches backwards through the buffer (as far back as needed) for the real lisp to evaluate. As an aside, this blog is also an .org document, and the lisp snippets above are org-babel blocks that I'm not evaluating. The exporter knows to respect the emacs-lisp syntax highlighting, however.

Advice
OK, so what was all the stuff I needed to tell org-babel to do specially here? First off, org needed to be able to communicate to the Python session the name of the file to write the plot to. I do this by making the whole plist for this org-babel snippet available to python:
;; THIS advice makes all the org-babel parameters available to python in the
;; _org_babel_params dict. I care about _org_babel_params['_file'] specifically,
;; but everything is available
(defun dima-org-babel-python-var-to-python (var)
  "Convert an elisp value to a python variable.
  Like the original, but supports (a . b) cells and symbols
"
  (if (listp var)
      (if (listp (cdr var))
          (concat "[" (mapconcat #'org-babel-python-var-to-python var ", ") "]")
        (format "\"\"\"%s\"\"\"" var))
    (if (symbolp var)
        (format "\"\"\"%s\"\"\"" var)
      (if (eq var 'hline)
          org-babel-python-hline-to
        (format
         (if (and (stringp var) (string-match "[\n\r]" var)) "\"\"%S\"\"" "%S")
         (if (stringp var) (substring-no-properties var) var))))))
(defun dima-alist-to-python-dict (alist)
  "Generates a string defining a python dict from the given alist"
  (let ((keyvalue-list
         (mapcar (lambda (x)
                   (format "%s = %s, "
                           (replace-regexp-in-string
                            "[^a-zA-Z0-9_]" "_"
                            (symbol-name (car x)))
                           (dima-org-babel-python-var-to-python (cdr x))))
                 alist)))
    (concat
     "dict( "
     (apply 'concat keyvalue-list)
     ")")))
(defun dima-org-babel-python-pass-all-params (f params)
  (cons
   (concat
    "_org_babel_params = "
    (dima-alist-to-python-dict params))
   (funcall f params)))
(unless
    (advice-member-p
     #'dima-org-babel-python-pass-all-params
     #'org-babel-variable-assignments:python)
  (advice-add
   #'org-babel-variable-assignments:python
   :around #'dima-org-babel-python-pass-all-params))
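To see what this produces, you can call the helper directly (a minimal sketch with made-up parameter values; a real org-babel plist carries many more keys):

;; Hypothetical input: a tiny params alist like org-babel would pass
(dima-alist-to-python-dict '((:file . "guide-3.svg") (:results . "file")))
;; => the string: dict( _file = "guide-3.svg", _results = "file", )
;; so the python session starts with a line like:
;;   _org_babel_params = dict( _file = "guide-3.svg", _results = "file", )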
So if there's a :file plist key, the python code can grab that, and write the plot to that filename. But I don't really want to specify an output file for every single org-babel snippet. All I really care about is that each plot gets a unique filename. So I omit the :file key entirely, and use this advice to generate one for me:
;; This sets a default :file tag, set to a unique filename. I want each demo to
;; produce an image, but I don't care what it is called. I omit the :file tag
;; completely, and this advice takes care of it
(defun dima-org-babel-python-unique-plot-filename
    (f &optional arg info params)
  (funcall f arg info
           (cons (cons ':file
                       (format "guide-%d.svg"
                               (condition-case nil
                                   (setq dima-unique-plot-number (1+ dima-unique-plot-number))
                                 (error (setq dima-unique-plot-number 0)))))
                 params)))
(unless
    (advice-member-p
     #'dima-org-babel-python-unique-plot-filename
     #'org-babel-execute-src-block)
  (advice-add
   #'org-babel-execute-src-block
   :around #'dima-org-babel-python-unique-plot-filename))
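The counter bootstraps itself through the error handler (a minimal sketch of the behaviour, evaluated by hand):

;; First evaluation: dima-unique-plot-number is unbound, so (1+ ...) signals
;; an error and the handler initializes the counter to 0
(format "guide-%d.svg"
        (condition-case nil
            (setq dima-unique-plot-number (1+ dima-unique-plot-number))
          (error (setq dima-unique-plot-number 0))))
;; => "guide-0.svg" the first time, "guide-1.svg" the next, and so on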
This uses the dima-unique-plot-number integer, incremented for each plot, to keep the filenames unique. Getting closer. It isn't strictly required, but it'd be nice if each plot had the same output filename each time I regenerated the whole document. So I want to reset the plot number to 0 each time:
;; If I'm regenerating ALL the plots, I start counting the plots from 0
(defun dima-reset-unique-plot-number
    (&rest args)
    (setq dima-unique-plot-number 0))
(unless
    (advice-member-p
     #'dima-reset-unique-plot-number
     #'org-babel-execute-buffer)
  (advice-add
   #'org-babel-execute-buffer
   :after #'dima-reset-unique-plot-number))
Finally, I want to lie to the user a little bit. The code I'm actually executing writes each plot to an .svg. But the code I'd like the user to see should use the default output: an interactive, graphical window. I do that by tweaking the python session to tell the gnuplotlib object to write to .svg files from org by default, instead of using the graphical terminal:
;; I'm using github to display guide.org, so I'm not using the "normal" org
;; exporter. I want the demo text to not contain the hardcopy= tags, but clearly
;; I need the hardcopy tag when generating the plots. I add some python to
;; override gnuplotlib.plot() to add the hardcopy tag somewhere where the reader
;; won't see it. But where to put this python override code? If I put it into an
;; org-babel block, it will be rendered, and the :export tags will be ignored,
;; since github doesn't respect those (probably). So I put the extra stuff into
;; an advice. Whew.
(defun dima-org-babel-python-set-demo-output (f body params)
  (with-temp-buffer
    (insert body)
    (goto-char (point-min)) ; idiomatic form of beginning-of-buffer
    (when (search-forward "import gnuplotlib as gp" nil t)
      (end-of-line)
      (insert
       "\n"
       "if not hasattr(gp.gnuplotlib, 'orig_init'):\n"
       "    gp.gnuplotlib.orig_init = gp.gnuplotlib.__init__\n"
       "gp.gnuplotlib.__init__ = lambda self, *args, **kwargs: gp.gnuplotlib.orig_init(self, *args, hardcopy=_org_babel_params['_file'] if 'file' in _org_babel_params['_result_params'] else None, **kwargs)\n"))
    (setq body (buffer-substring-no-properties (point-min) (point-max))))
  (funcall f body params))
(unless
    (advice-member-p
     #'dima-org-babel-python-set-demo-output
     #'org-babel-execute:python)
  (advice-add
   #'org-babel-execute:python
   :around #'dima-org-babel-python-set-demo-output))
)
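To check what actually runs, you can feed this advice a toy snippet and inspect the rewritten body (a minimal sketch; the lambda is a stand-in for org-babel-execute:python, and the snippet is made up):

;; A minimal sketch: the stand-in executor just returns the rewritten body
(dima-org-babel-python-set-demo-output
 (lambda (body params) body)          ; stand-in for org-babel-execute:python
 "import gnuplotlib as gp\ngp.plot(x)"
 nil)
;; => the returned body contains the gp.gnuplotlib.__init__ override right
;;    after the import line, so plots are written to the .svg filename from
;;    _org_babel_params instead of an interactive window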
And that's it. The advice in the talk is slightly different, in uninteresting ways. Some of this should be upstreamed to org-babel somehow. Not entirely clear which part, but I'll cross that bridge when I get to it.
