Search Results: "pollo"

3 October 2021

Louis-Philippe Véronneau: ANC is not for me

Active noise cancellation (ANC) has been all the rage lately in the headphones and in-ear monitors market. It seems that after Apple got heavily praised for their AirPods Pro, every somewhat serious electronics manufacturer released their own design incorporating this technology.

The first headphones with ANC I remember trying on (in the early 2010s) were the Bose QuietComfort 15. Although the concept did work (they indeed cancelled some sounds), they weren't amazing and did a great job of convincing me ANC was some weird fad for people who flew often.

The Sony WH-1000XM3 folded in their case

As the years passed, chip size decreased, battery capacity improved and machine learning blossomed: truly a perfect storm for the wireless ANC headphones market. I had mostly stayed a sceptic of this tech until recently, when a kind friend offered to let me try a pair of Sony WH-1000XM3. Having tested them thoroughly, I have to say I'm really tempted to buy them from him, as they truly are fantastic headphones1. They are very light, comfortable, work without a proprietary app and sound very good with the ANC on2, if a little bass-heavy for my taste3.

The ANC itself is truly astounding and is leaps and bounds beyond what was available five years ago. It still isn't perfect and doesn't cancel ALL sounds, but it transforms the low hum of the subway I find myself sitting in too often these days into a light *swoosh*. When you turn the ANC on, HVAC noise simply disappears. Most impressive to me is the way they completely cancel the dreaded sound of your footsteps resonating in your headphones when you walk with them.

My old pair of Sennheiser HD 280 Pro, with aftermarket sheepskin earpads

I won't be keeping them though. Whilst I really like what Sony has achieved here, I've grown to understand ANC simply isn't for me. Some of its drawbacks bother me: the ear pressure it creates is tolerable, but it is an additional energy drain over long periods of time and eventually gives me headaches. I've also found ANC accentuates the motion sickness I suffer from, probably because it messes with some part of the inner ear balance system.

Most of all, I found that it didn't provide noticeable improvements over good passive noise cancellation solutions, at least in terms of how high I have to turn the volume up to hear music or podcasts clearly. The human brain works in mysterious ways and it seems ANC cancelling one class of noises (low hums, constant noises, etc.) makes other noises much more noticeable. People talking or bursty high-pitched noises bothered me much more with ANC on than without.

So for now, I'll keep using my trusty Sennheiser HD 280 Pro4 at work and good in-ear monitors with Comply foam tips on the go.

  1. This blog post certainly doesn't aim to be a comprehensive review of these headphones. See Zeos' review if you want something more in-depth.
  2. Like most ANC headphones, they don't sound as good when used passively through the 3.5mm port, but that's just a testament to how good a job Sony did of tuning the DSP.
  3. Easily fixed using an EQ.
  4. Retrofitted with aftermarket sheepskin earpads, they provide more than 32 dB of passive noise reduction.

13 August 2021

Leandro Doctors: Clojure CLI Tools in Debian - GSoC 2021 Partial Evaluation Report

NOTE: this blog post is based on my "Clojure CLI Tools in Debian" GSoC 2021 project Partial Evaluation Report.

Hi, everybody! For those who don't know me, my name is Leandro Doctors (allentiak on IRC), and I'm the Debian Clojure Team's GSoC 2021 intern. My mentor is Louis-Philippe Véronneau[*] (pollo on IRC). My co-mentor is Utkarsh Gupta (utkarsh2102 on IRC). My 'no-mentor' :) is Elana Hashman (ehashman on IRC).

In this message, I will summarize what I've been up to during the Coding I period (June 7th - July 16th).
TL;DR: my updated data-xml-clojure package was accepted[1] in experimental on Wednesday night :-)
[1] https://tracker.debian.org/news/1244465/accepted-data-xml-clojure-020alpha6-1-source-into-experimental/

Now, let's tell the full story. During the first days of the "Coding I" phase, I did some research on the clj dependencies. After the precious feedback from Louis-Philippe, Elana, Utkarsh, and Alex Miller (the upstream developer of clj, among many other libraries), I came up with the following packaging strategy:
  1. update org.clojure:data.xml (data-xml-clojure) to 0.2.0-alpha6.
  2. package org.clojure:tools.gitlibs (tools-gitlibs-clojure).
  3. consider updating libjsch-agent-proxy-java, jgit, libtools-cli-clojure (all of them already in Debian).
  4. consider packaging com.cognitect.aws:* packages.
According to my original proposal, I should have completed all four tasks during Coding I. Looking back, the main lesson from these past weeks is a known classic: my timeline was too optimistic. I definitely underestimated the difficulty of the packaging process, and out of the four tasks, I only finished the first one.

There were many challenges I had to overcome in order to update the library from version 0.0.8 to 0.2.0-alpha6. This is what I did: first, I patched the source code where needed; then, I completely overhauled the packaging code (that is, what goes inside debian/). All this improved the quality of the package. I also improved the Clojure Packaging Tutorial[2] to make the process easier to follow.

[2] https://wiki.debian.org/Clojure/PackagingTutorial

Looking back, it is almost as if I had started packaging the library from scratch... But, more than what I produced, I think the most important part of all this is what I learned during these weeks. As it was the first time I ever packaged anything in Debian, I had to learn the basics, bump by bump. (And, oh my, I surely did bump quite a few times!) Along the way, I got acquainted with sbuild[3], git-buildpackage[4], and quilt[5] (see the PS below for what a typical iteration looks like).

[3] https://wiki.debian.org/sbuild
[4] https://manpages.debian.org/unstable/git-buildpackage/gbp.1.en.html
[5] https://wiki.debian.org/UsingQuilt

Definitely, all this was worth it. After all, my updated data-xml-clojure package was accepted[1] in experimental on Wednesday :-)

[1] https://tracker.debian.org/news/1244465/accepted-data-xml-clojure-020alpha6-1-source-into-experimental/

Now that I've learned the basics of packaging and bumped enough to get my package accepted, I'm hopeful I can ramp up and catch up with (at least most of) my original schedule during Coding II. tools-gitlibs-clojure, you're next :-)

Thank you very much Louis-Philippe, Elana, and Utkarsh (plus a special mention to Alex) for your precious support during the last few weeks!
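PS: for those curious, a typical build iteration with these tools looks roughly like this. It's a simplified sketch, not the exact commands I ran, and the salsa URL is a guess at the team's usual layout:

gbp clone https://salsa.debian.org/clojure-team/data-xml-clojure.git  # get the packaging repo
cd data-xml-clojure
gbp import-orig --uscan                # import the new upstream release
gbp pq rebase                          # refresh the quilt patches against it
gbp buildpackage --git-builder=sbuild  # build in a clean chroot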

9 August 2021

Ian Wienand: nutdrv_qx setup for Synology DSM7

I have a cheap no-name UPS acquired from Jaycar and was wondering if I could get it to connect to my Synology DS918+. It rather unhelpfully identifies itself as MEC0003 and comes with some blob of non-working software on a CD; however, some investigation found it could maybe work on my Synology NAS using the Network UPS Tools (NUT) nutdrv_qx driver with the hunnox subdriver type. Unfortunately this is a fairly recent addition to the NUT source, requiring rebuilding the driver for DSM7. I don't fully understand the Synology environment, but I did get this working.

Firstly I downloaded the toolchain from https://archive.synology.com/download/ToolChain/toolchain/ and extracted it. I then used the script from https://github.com/SynologyOpenSource/pkgscripts-ng to download some sort of build environment. This appears to want root access and possibly sets up some sort of chroot. Anyway, for DSM7 on the DS918+ I ran EnvDeploy -v 7.0 -p apollolake and it downloaded some tarballs into toolkit_tarballs that I simply extracted into the same directory as the toolchain.

I then grabbed the NUT source from https://github.com/networkupstools/nut and built it similar to the following:
./autogen.sh
PATH_TO_TC=/home/your/path
export CC=${PATH_TO_TC}/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-gcc
export LD=${PATH_TO_TC}/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-ld
./configure \
  --prefix= \
  --with-statepath=/var/run/ups_state \
  --sysconfdir=/etc/ups \
  --with-sysroot=${PATH_TO_TC}/usr/local/sysroot \
  --with-usb=yes \
  --with-usb-libs="-L${PATH_TO_TC}/usr/local/x86_64-pc-linux-gnu/x86_64-pc-linux-gnu/sys-root/usr/lib/ -lusb" \
  --with-usb-includes="-I${PATH_TO_TC}/usr/local/sysroot/usr/include/"
make
The tricks to be aware of are setting the locations DSM wants for status/config files and overriding the USB detection done by configure, which doesn't seem to obey sysroot. If you would prefer to avoid this, you can try this prebuilt nutdrv_qx (sha256 ebb184505abd1ca1750e13bb9c5f991eaa999cbea95da94b20f66ae4bd02db41). SSH to the DSM7 machine; as root, move /usr/bin/nutdrv_qx out of the way to save it, then scp the new version over and move it into place. Catting /dev/bus/usb/devices, I found this device has a Vendor of 0001 and a ProdID of 0000:
T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  3 Spd=1.5  MxCh= 0
D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs=  1
P:  Vendor=0001 ProdID=0000 Rev= 1.00
S:  Product=MEC0003
S:  SerialNumber=ffffff87ffffffb7ffffff87ffffffb7
C:* #Ifs= 1 Cfg#= 1 Atr=80 MxPwr=100mA
I:* If#= 0 Alt= 0 #EPs= 2 Cls=03(HID  ) Sub=00 Prot=00 Driver=usbfs
E:  Ad=81(I) Atr=03(Int.) MxPS=   8 Ivl=10ms
E:  Ad=02(O) Atr=03(Int.) MxPS=   8 Ivl=10ms
DSM does a bunch of magic to autodetect and configure NUT when a UPS is plugged in. The first thing you'll need to do is edit /etc/nutscan-usb.sh and override where it tries to use the blazer_usb driver for this obviously incorrect vendor/product ID. The table should now look like:
static usb_device_id_t usb_device_table[] = {
    { 0x0001, 0x0000, "nutdrv_qx" },
    { 0x03f0, 0x0001, "usbhid-ups" },
  ... and so on ...
Then you want to edit the file /usr/syno/lib/systemd/scripts/ups-usb.sh to start nutdrv_qx; find the DRV_LIST in that file and update it like so:
local DRV_LIST="nutdrv_qx usbhid-ups blazer_usb bcmxcp_usb richcomm_usb tripplite_usb"
This is triggered by /usr/lib/systemd/system/ups-usb.service and is ultimately what tries to set up the UPS configuration. Lastly, you will need to edit the /etc/ups/ups.conf file. This will probably vary depending on your UPS. One important thing is to add user=root above the driver section; it seems recent NUT has become more secure and drops permissions, but the result is that it will not find USB devices in this environment (if you're getting something like no appropriate HID device found, this is likely the cause). So the configuration should look something like:
user=root
[ups]
driver = nutdrv_qx
port = auto
subdriver = hunnox
vendorid = "0001"
productid = "0000"
langid_fix = 0x0409
novendor
noscanlangid
#pollonly
#community =
#snmp_version = v2c
#mibs =
#secName =
#secLevel =
#authProtocol =
#authPassword =
#privProtocol =
#privPassword =
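If you want to poke the driver by hand before letting DSM manage it, a standard NUT driver invocation along these lines (a suggestion on my part, not a step from the original setup) should talk to the UPS and dump debug output:
# run the driver in the foreground with debugging, using the [ups] section above
/usr/bin/nutdrv_qx -a ups -u root -DD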
I then restarted the UPS daemon by enabling/disabling UPS support in the UI. This should tell you that your UPS is connected. You can also check /var/log/ups.log, which for me shows:
2021-08-09T18:14:51+10:00 synology synoups[11994]: =====log UPS status start=====
2021-08-09T18:14:51+10:00 synology synoups[11996]: device.mfr=
2021-08-09T18:14:51+10:00 synology synoups[11998]: device.model=
2021-08-09T18:14:51+10:00 synology synoups[12000]: battery.charge=
2021-08-09T18:14:51+10:00 synology synoups[12002]: battery.runtime=
2021-08-09T18:14:51+10:00 synology synoups[12004]: battery.voltage=13.80
2021-08-09T18:14:51+10:00 synology synoups[12006]: input.voltage=232.0
2021-08-09T18:14:51+10:00 synology synoups[12008]: output.voltage=232.0
2021-08-09T18:14:51+10:00 synology synoups[12010]: ups.load=31
2021-08-09T18:14:51+10:00 synology synoups[12012]: ups.status=OL
2021-08-09T18:14:51+10:00 synology synoups[12013]: =====log UPS status end=====
Which corresponds to the correct input/output voltage and state. Of course, this is all unsupported and likely to break, although I don't imagine many of these bits are updated very frequently. It will likely be OK until the UPS battery dies, at which point I would recommend buying a better UPS on the Synology support list.

29 June 2021

Louis-Philippe Véronneau: New Winter Bicycle

Although I'm about 5 months early and we're in the middle of a terrible heat wave where I live, I've just finished building my new winter bicycle! I've been riding all year round for about 8 years now and the winters in Montreal are harsh enough that a second, winter-ready bicycle is required to have a fun and safe cold season.

I'm saying "new bicycle" because a few months ago I totaled the frame of my previous winter bike. As you can see in the picture below (please disregard the salt crust), I broke the seat tube at the lug. I didn't notice it at first, and while riding I kept hearing a "bang bang" sound coming from the frame when going over bumps. To my horror, I eventually realised the sound was coming from the seat tube hitting the bottom bracket shell!

A large crack in my old bike frame

I'm sad to see this Lejeune French frame go, but it was probably built in the '70s or the early '80s: it had a good life. Another great example of how a high quality steel frame can last a lifetime. Sparing no expense, I decided to replace it with a brand new Surly Cross-Check frameset. Surly is known for making great bike frames and the Cross-Check is a very versatile model. I'll finally be able to fit my Schwalbe Marathon Winter Plus tires at the front and the back, with full length mud guards!!! Hard to ask for more.

A picture of my new winter bike with CX Pro tires

Summer has just started and I'm already looking forward to winter :)

27 June 2021

Louis-Philippe Véronneau: Writing QA Scripts for Debian Teams

Since I joined the Debian Python Team, I have had a lot of fun working on different QA issues. Although I'm still a Perl illiterate1, I've for example contributed to a few Lintian tags.

There are multiple ways to make mass QA changes to team-managed packages. Projects like the Debian Janitor are more than fantastic: they make for a robust, thorough and automated way to fix QA issues in the archive, and I don't have enough good words to describe the amazing work of Jelmer Vernooij on the toolsuite the Janitor uses. But with robustness comes complexity. The Janitor is currently based on 10 different subtools (silver-platter, ognibuild, lintian-brush, ...) and if you want to use it to fix a bug, you first need to make sure there's a Lintian tag that flags the issue you're working on. Then you need to write a lintian-brush fixer to fix said issue. Sadly, sometimes writing a new Lintian tag to flag a trivial change is not the appropriate course of action and only creates clutter.

All this to say that until now, I was missing a "quick and somewhat dirty2" way to make simple one-off changes to a bunch of packages. 200 lines of Python later, I'm happy to report I have a simple way to replace the old Clojure Team email in d/control with the new one for all of our packages. Even better, although this script doesn't aim to be a versatile tool like the Janitor is, most of its functions can be reused for other similar one-off scripts. Many thanks to Felix Lechner for showing me the very handy Lintian Query JSON interface!
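To give an idea, the core of such a one-off script fits in a few lines. Here is a minimal sketch, not the actual 200-line script: the two addresses, the packages/ directory of checkouts and the lack of any git handling are all simplifications.

#!/usr/bin/python3
"""Minimal sketch of a one-off QA script: swap a team address in d/control."""
from pathlib import Path

OLD = "old-team@lists.debian.org"    # hypothetical old address
NEW = "new-team@tracker.debian.org"  # hypothetical new address

def update_control(pkg_dir):
    """Rewrite debian/control in pkg_dir; return True if it was changed."""
    control = pkg_dir / "debian" / "control"
    if not control.is_file():
        return False
    text = control.read_text()
    if OLD not in text:
        return False
    control.write_text(text.replace(OLD, NEW))
    return True

# assumes all the team's packages are checked out under packages/
for pkg in sorted(Path("packages").iterdir()):
    if update_control(pkg):
        print(f"updated {pkg.name}")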

  1. I don't really enjoy coding in Perl, but it makes up so much of the current Debian infrastructure that I wish I did. I keep telling myself I should buy an "Introduction to Perl" book...
  2. A quick and dirty way to make those changes would've been to write a shell script, but one of my 2021 resolutions is to use Python for all my scripting needs.

24 June 2021

Louis-Philippe Véronneau: Hardening Weechat Relays Against RCE on Bullseye

I've been using weechat to connect to IRC since late 2016 and one of its killer features is relays. They let you use other frontends, like the Weechat Android app or the amazing Glowing Bear (packaged in Debian Bullseye by yours truly). Sadly, relays also used to be somewhat of a security risk: anyone with access to a relay1 could run scripts on the machine running weechat by using commands such as /exec or /script. Not great. Since version 2.5 (Buster had version 2.3), you can mitigate this risk by setting a command allowlist for relays. Later versions implemented a sane default by blocking a list of dangerous commands, but sadly, this default didn't make it into Bullseye. If you are running weechat and are using the relays feature, after upgrading to Bullseye I would recommend you run the following commands in the weechat TUI:
/set relay.weechat.commands *,!exec,!fset,!set,!unset,!plugin,!script,!python,!perl,!ruby,!lua,!tcl,!guile,!javascript,!php,!secure,!upgrade,!quit
/save

  1. For example, someone steals your phone and connects to IRC via the Weechat app...

10 June 2021

Louis-Philippe Véronneau: New Desktop Computer

I built my last desktop computer what seems like ages ago. In 2011, I was in a very different place, both financially and as a person. At the time, I was earning minimum wage at my school's café to pay rent. Since the café was owned by the school cooperative, I had an employee discount on computer parts. This gave me a chance to build my first computer from spare parts at a reasonable price. After 10 years of service1, the time has come to upgrade. Although this machine was still more than capable for day to day tasks like browsing the web or playing casual video games, it started to show its limits when the time came to do more serious work. Old computer specs:
CPU: AMD FX-8530
Memory: 8GB DDR3 1600Mhz
Motherboard: ASUS TUF SABERTOOTH 990FX R2.0
Storage: Samsung 850 EVO 500GB SATA
I first started considering an upgrade in September 2020: David Bremner was kindly fixing a bug in ledger that kept me from balancing my books and since it seemed like a class of bug that would've been easily caught by an autopkgtest, I decided to add one. After adding the necessary snippets to run the upstream testsuite (an easy task I've done multiple times now), I ran sbuild and... my computer froze and crashed. Somehow, what I thought was a simple Python package was maxing all the cores on my CPU and using all of the 8GB of memory I had available.2 A few months later, I worked on jruby and the builds took 20 to 30 minutes, long enough to completely disrupt my flow. The same thing happened when I wanted to work on lintian: the testsuite would take more than 15 minutes to run, making quick iterations impossible. Sadly, the pandemic completely wrecked the computer hardware market and prices here in Canada have only recently started to go down again. As a result, I had to wait longer than I would've liked to avoid paying scalper prices. New computer specs:
CPU: AMD Ryzen 5900X
Memory: 64GB DDR4 3200MHz
Motherboard: MSI MPG B550 Gaming Plus
Storage: Corsair MP600 500 GB Gen4 NVME
The difference between the two machines is pretty staggering: I've gone from a CPU with 2 cores and 8 threads, to one with 12 cores and 24 threads. Not only that, but single-threaded performance has also vastly increased in those 10 years. A good example would be building grammalecte, a package I've recently sponsored. I feel it's a good benchmark, since the build relies on single-threaded performance for the normal Python operations, while being threaded when it compiles the dictionaries. On the old computer:
Build needed 00:10:07, 273040k disk space
And as you can see, on the new computer the build time has been significantly reduced:
Build needed 00:03:18, 273040k disk space
Same goes for things like the lintian testsuite. Since it's a very multi-threaded workload, it now takes less than 2 minutes to run: a 750% improvement. All this to say I'm happy with my purchase. And lo and behold, I can now build ledger without a hitch, even though it maxes out my 24 threads and uses 28GB of RAM. Who would've thought...

Screen capture of htop showing how many resources ledger takes to build

  1. I managed to fry that PC's motherboard in 2016 and later replaced it with a brand new one. I also upgraded the storage along the way, from a very cheap cacheless 120GB SSD to a larger Samsung 850 EVO SATA drive.
  2. As it turns out, ledger is mostly written in C++ :)

29 March 2021

Louis-Philippe Véronneau: Montreal 2021 BSP

Last weekend, Debian Quebec held a Bug Squashing Party to try to fix some bugs in the upcoming Debian Bullseye. I wasn't convinced at first, but Tassia's contagious energy and willingness to help organise the event eventually won me over. And shocker! It was really fun.

Group picture of the BSP attendees on Jitsi Meet

We fixed a couple of RC bugs, held lightning talks and had a virtual pizza party! My lightning talk on autopkgtests was well received and a few people decided to migrate to sbuild and enable autopkgtests by default. Sergio's talk on debuginfod was incredibly interesting. I'm not a C programmer and the live demo made me understand how this service can make debugging C easier. Jerome's talk on using Yubikeys to unlock LUKS encrypted drives was also very good! It also served as a reminder that Yubico's products are much more featureful and convenient to use than other Open Hardware / Free Software hardware tokens. Hopefully that will change as enterprises like Nitrokey and Solokey mature. This was my third BSP; crazy how time flies... With the Bullseye release closing in, you should try to join or organise one!

13 March 2021

Louis-Philippe Véronneau: Preventing an OpenPGP Smartcard from caching the PIN eternally

While I'm overall very happy about my migration to an OpenPGP hardware token, the process wasn't entirely seamless and I had to hack around some issues, for example the PIN caching behavior in GnuPG. As described in this bug, the cache-ttl parameter in GnuPG is not implemented and thus does nothing. This means once you type in your PIN, it is cached for as long as the token stays plugged in. Security-wise, this is not great. Instead of manually disconnecting the token frequently, I've come up with a script that restarts scdaemon if the token hasn't been used during the last X minutes. It seems to work well and I call it using this cron entry:

*/5 * * * * my_user /usr/local/bin/restart-scdaemon

To get a log from scdaemon, you'll need a ~/.gnupg/scdaemon.conf file that looks like this:
debug-level basic
log-file /var/log/scdaemon.log
Hopefully it can be useful to others!
#!/usr/bin/python3
# Copyright 2021, Louis-Philippe Véronneau <pollo@debian.org>
#
# This script is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
# 
# This script is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
# 
# You should have received a copy of the GNU General Public License along with
# this script. If not, see <http://www.gnu.org/licenses/>.
"""
This script restarts scdaemon after X minutes of inactivity to reset the PIN
cache. It is meant to be run by cron every X/2 minutes.
This is needed because there is currently no way to set a cache time for
smartcards. See https://dev.gnupg.org/T3362#137811 for more details.
"""
import os
import sys
import subprocess
from datetime import datetime, timedelta
from argparse import ArgumentParser
p = ArgumentParser(description=__doc__)
p.add_argument('-l', '--log', default="/var/log/scdaemon.log",
               help='Path to the scdaemon log file.')
p.add_argument('-t', '--timeout', type=int, default=10,
               help=("Desired cache time in minutes."))
args = p.parse_args()
def get_last_line(scdaemon_log):
    """Returns the last line of the scdameon log file."""
    with open(scdaemon_log, 'rb') as f:
        f.seek(-2, os.SEEK_END)
        while f.read(1) != b'\n':
            f.seek(-2, os.SEEK_CUR)
        last_line = f.readline().decode()
    return last_line
def check_time(last_line, timeout):
    """Returns True if scdaemon hasn't been called since the defined timeout."""
    # We don't need to restart scdaemon if no gpg command has been run since
    # the last time it was restarted.
    should_restart = True
    if "OK closing connection" in last_line:
        should_restart = False
    else:
        last_time = datetime.strptime(last_line[:19], '%Y-%m-%d %H:%M:%S')
        now = datetime.now()
        delta = now - last_time
        if delta <= timedelta(minutes=timeout):
            should_restart = False
    return should_restart
def restart_scdaemon(scdaemon_log):
    """Restart scdaemon and verify the restart process was successful."""
    subprocess.run(['gpgconf', '--reload', 'scdaemon'], check=True)
    last_line = get_last_line(scdaemon_log)
    if "OK closing connection" not in last_line:
        sys.exit("Restarting scdameon has failed.")
def main():
    """Main function."""
    last_line = get_last_line(args.log)
    should_restart = check_time(last_line, args.timeout)
    if should_restart:
        restart_scdaemon(args.log)
if __name__ == "__main__":
    main()

7 March 2021

Louis-Philippe Véronneau: New Year, New OpenPGP Key

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Sun, 07 Mar 2021 13:00:17 -0500
I've recently set up a new OpenPGP key and will be transitioning away from my
old one.
It is a chance for me to start using a OpenPGP hardware token and to transition
to a new personal email address (my main public contact is still my
 @debian.org  address).
Please note that I've partially redacted some email addresses from this
statement to minimise the amount of spam I receive. It shouldn't be hard for
actual humans to follow the instructions below to find the complete addresses.
The old key will continue to be valid for a few months, but will eventually be
revoked.
You might know my old OpenPGP certificate as:
pub   rsa4096/0x7AEAC4EC6AAA0A97 2014-12-22 [expires: 2021-06-02]
      Key fingerprint = 677F 54F1 FA86 81AD 8EC0  BCE6 7AEA C4EC 6AAA 0A97
uid       Louis-Philippe Véronneau <REDACTED@riseup.net>
uid       Louis-Philippe Véronneau (alias) <REDACTED@riseup.net>
uid       Louis-Philippe Véronneau (debian) <REDACTED@debian.org>
My new OpenPGP certificate is:
pub   ed25519/0xE1E5457C8BAD4113 2021-03-06 [expires: 2022-03-06]
      Key fingerprint = F64D 61D3 21F3 CB48 9156  753D E1E5 457C 8BAD 4113
uid       Louis-Philippe Véronneau <REDACTED@veronneau.org>
uid       Louis-Philippe Véronneau <REDACTED@debian.org>
These days, I mostly use my key for Debian and to sign git commits. I don't
really expect you to sign my new key if you had signed my old one.
I've published the new certificate on keys.openpgp.org as well as on my
personal website. You can fetch it like this:
    $ wget -O- https://veronneau.org/media/openpgp.key | gpg --import
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEZ39U8fqGga2OwLzmeurE7GqqCpcFAmBFFM8ACgkQeurE7Gqq
CpcuchAAscAeszdtA+TlCI4YvK5nlk+nJnCnNBSnl7Et+jiNjq8kB/Fud+dWMTXC
Zag8oJkalbbxub0BT0bEAn+BiBunu58E0gd0Xq4syTbqZ5o5IN17S/tfxCD0k1hf
ewrnYZ2l0i5g4YvHGKC+Xv4D+Z84BylnIRaXHqlUdluOVfVYDfLybOAqoktO/KUH
I+vQBwXj0Fr/QAtgiz5Nwh/YHFiU9xMSvr5ozRwAFs6+xfIqFHuVPRRkEN5iVo4D
kkMIz+kFfkoh4aWIP4dgAu39XnEgxwTR9J+4yE8TzCCMzO7xCK0X6vqgPAxYMPvb
RuP4FnGWOnGnlcudCUAUkOaryrwRi+dPQTnNICHTYsvVc7dg+W0EhVUkwEuuEwpI
qtcB/Y5AGhqK0Cc11uXiFjIQwLTgwcUez4F0xrGeqsTtAM5gyRup2w0jbocTuYSh
ZRv/2zwrq/S3xVrUYGqdT+L5odmkBzz9zOwY5WlU2H9CMFOdh71XOv9wWQXan9ou
hLRodeOQ8MinIBP+sX36ol1zg/aP7mCHvRRSBzWt7l3WhVxgZFpNwIfp/RZqU0R4
IEq48mntFhPvHJjFmAKLKK/ckzNMtSn+HWQPJV3HTInKCTu5PTNMU3SAvPHOHEps
V6WWSOPB+1Lm/tlIULDc+0SopWoiWO4NObCSs8zMZHlYPBk5x/KIdQQBFgoAHRYh
BMqnQAcHqBawIC/DzfQlelCyHPqFBQJgRRTPAAoJEPQlelCyHPqFFVEA/1qScaAk
O+eBEE4q0BaJDsqweCS1XCcuQGkQCKi5Zv6kAQChQ96Ve7cKbN/wRkT9pdIhmx01
+CmIsnp3k6N0ZYLLCg==
=onl0
-----END PGP SIGNATURE-----

21 February 2021

Louis-Philippe Véronneau: dput-ng or: How I Learned to Stop Worrying and Love the Hooks

As my contributions to Debian continue to grow in number, I find myself uploading to the archive more and more often. Although I'm pretty happy with my current sbuild-based workflow, twice in the past few weeks I inadvertently made a binary upload instead of a source-only one.1 As it turns out, I am not the only DD who has had this problem before. As Nicolas Dandrimont kindly pointed out to me, dput-ng supports pre and post upload hooks that can be used to lint your uploads. Even better, it also ships with a check-debs hook that lets you block binary uploads. Pretty neat, right? In a perfect world, enabling the hook would only be a matter of adding it to the hook list of /etc/dput.d/metas/debian.json and using the following defaults:
"check-debs":  
    "enforce": "source",
    "skip": false
 ,
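In that perfect world, the addition to the hook list would look something like this (a sketch: dput-ng profiles can merge entries into an inherited list with a +hooks key, but double-check the profile documentation for the exact syntax):
{
    "+hooks": [
        "check-debs"
    ]
}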
Sadly, bug #983160 currently makes this whole setup more complex than it should be and forces me to use two different dput-ng profiles pointing to two different files in /etc/dput.d/metas: a default source-only one (ftp-master) and a binary-upload one (ftp-master-binary). Otherwise, one could use a single profile that disallows binary uploads and, when needed, override the hook using something like this:
$ dput --override "check-debs.enforce=debs" foo_1.0.0-1_amd64.changes
I did start debugging the --override issue in dput-ng, but I'm not sure I'll have time to submit a patch anytime soon. In the meantime, I'm happy to report I shouldn't be uploading the wrong .changes file by mistake again!

  1. Thanks to Holger Levsen and Adrian Bunk for catching those and notifying me.

Russ Allbery: Review: The Fated Sky

Review: The Fated Sky, by Mary Robinette Kowal
Series: Lady Astronaut #2
Publisher: Tor
Copyright: August 2018
ISBN: 0-7653-9893-1
Format: Kindle
Pages: 380
The Fated Sky is a sequel to The Calculating Stars, but you could start with this book if you wanted to. It would be obvious you'd missed a previous book in the series, and some of the relationships would begin in medias res, but the story is sufficiently self-contained that one could puzzle through. Mild spoilers follow for The Calculating Stars, although only to the extent of confirming that book didn't take an unexpected turn, and nothing that wouldn't already be spoiled if you had read the short story "The Lady Astronaut of Mars" that kicked this series off. (The short story takes place well after all of the books.) Also some minor spoilers for the first section of the book, since I have to talk about its outcome in broad strokes in order to describe the primary shape of the novel.

In the aftermath of worsening weather conditions caused by the Meteor, humans have established a permanent base on the Moon and are preparing a mission to Mars. Elma is not involved in the latter at the start of the book; she's working as a shuttle pilot on the Moon, rotating periodically back to Earth. But the political situation on Earth is becoming more tense as the refugee crisis escalates and the weather worsens, and the Mars mission is in danger of having its funding pulled in favor of other priorities. Elma's success in public outreach for the space program as the Lady Astronaut, enhanced by her navigation of a hostage situation when an Earth re-entry goes off course and is met by armed terrorists, may be the political edge supporters of the mission need.

The first part of this book is the hostage situation and other ground-side politics, but the meat of this story is the tense drama of experimental, pre-computer space flight. For those who aren't familiar with the previous book, this series is an alternate history in which a huge meteorite hit the Atlantic seaboard in 1952, potentially setting off runaway global warming and accelerating the space program by more than a decade. The Calculating Stars was primarily about the politics surrounding the space program. In The Fated Sky, we see far more of the technical details: the triumphs, the planning, and the accidents and other emergencies that each could be fatal in an experimental spaceship headed towards Mars. If what you were missing from the first book was more technological challenge and realistic detail, The Fated Sky delivers. It's edge-of-your-seat suspenseful and almost impossible to put down.

I have more complicated feelings about the secondary plot. In The Calculating Stars, the heart of the book was an incredibly well-told story of Elma learning to deal with her social anxiety. That's still a theme here but a lesser one; Elma has better coping mechanisms now. What The Fated Sky tackles instead is pervasive sexism and racism, and how Elma navigates that (not always well) as a white Jewish woman.

The centrality of sexism is about the same in both books. Elma's public outreach is tied closely to her gender and starts as a sort of publicity stunt. The space program remains incredibly sexist in The Fated Sky, something that Elma has to cope with but can't truly fix. If you found the sexism in the first book irritating, you're likely to feel the same about this installment. Racism is more central this time, though. In The Calculating Stars, Elma was able to help make things somewhat better for Black colleagues.
She has a much different experience in The Fated Sky: she ends up in a privileged position that hurts her non-white colleagues, including one of her best friends. The merits of taking a stand on principle are ambiguous, and she chooses not to. When she later tries to help Black astronauts, she does so in a way that's focused on her perceptions rather than theirs and is therefore more irritating than helpful. The opportunities she gets, in large part because she's seen as white, unfairly hurt other people, and she has to sit with that. It's a thoughtful and uncomfortable look at how difficult it is for a white person to live with discomfort they can't fix and to not make it worse by trying to wave it away or point out their own problems. That was the positive side of this plot, although I'm still a bit wary and would like to read a review by a Black reviewer to see how well this plot works from their perspective.

There are some other choices that I thought landed oddly. One is that the most racist crew member, the one who sparks the most direct conflict with the Black members of the international crew, is a white man from South Africa, which I thought let the United States off the hook too much and externalized the racism a bit too neatly. Another is that the three ships of the expedition are the Niña, the Pinta, and the Santa Maria, and no one in the book comments on this. Given the thoughtful racial themes of the book, I can't imagine this is an accident, and it is in character for the United States of this novel to pick those names, but it was an odd intrusion of an unremarked colonial symbol. This may be part of Kowal's attempt to show that Elma is embedded in a racist and sexist world, has limited room to maneuver, and can't solve most of the problems, which is certainly a theme of the series. But it left me unsettled on whether this book was up to fully handling the fraught themes Kowal is invoking.

The other part of the book I found a bit frustrating is that it never seriously engaged with the political argument against Mars colonization, instead treating most of the opponents of space travel as either deluded conspiracy believers or cynical villains. Science fiction is still arguing with William Proxmire even though he's been dead for fifteen years and out of office for thirty. The strong argument against a Mars colony in Elma's world is not funding priorities; it's that even if it's successful, only a tiny fraction of well-connected elites will escape the planet to Mars. This argument is made in the book and Elma dismisses it as a risk she's trying to prevent, but it is correct. There is no conceivable technological future that leads to evacuating the Earth to Mars, but The Fated Sky declines to grapple with the implications of that fact.

There's more that I haven't remarked on, including an ongoing excellent portrayal of the complicated and loving relationship between Elma and her husband, and a surprising development in her antagonistic semi-friendship with the sexist test pilot who becomes the mission captain. I liked how Kowal balanced technical problems with social problems on the long Mars flight; both are serious concerns and they interact with each other in complicated ways. The details of the perils and joys of manned space flight are excellent, at least so far as I can tell without having done the research that Kowal did. If you want a fictionalized Apollo 13 with higher stakes and less ground support, look no further; this is engrossing stuff.
The interpersonal politics and sociology were also fascinating and gripping, but unsettling, in both good ways and bad. I like the challenge that Kowal presents to a white reader, although I'm not sure she was completely in control of it. Cautiously recommended, although be aware that you'll need to grapple with a sexist and racist society while reading it. Also a content note for somewhat graphic gastrointestinal problems.

Followed by The Relentless Moon.

Rating: 8 out of 10

17 February 2021

Louis-Philippe Véronneau: What are the incentive structures of Free Software?

When I started my Master's degree in January 2018, I was confident I would be done in a year and a half. After all, I only had one year of classes and I figured 6 months to write a thesis would be plenty. Three years later, I'm finally done: the final version of my thesis was accepted on January 22nd, 2021. My thesis, entitled What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model, can be found here1. If you care about such things, both the data and the final document can be built from source with the code in this git repository.

Results and analysis

My thesis is divided in four main sections:
  1. an introduction to FOSS
  2. a chapter discussing the incentive structures of Free Software (and arguing the so-called Tragedy of the Commons isn't inevitable)
  3. a chapter trying to use empirical data to validate the theories presented in the previous chapter
  4. an annex on the various FOSS business models
If you're reading this blog post, chances are you'll find both sections 1 and 4 a tad boring, as you might already be familiar with these concepts.

Incentives

So, why do people contribute to Free Software? Unsurprisingly, it's complicated. Many economists have studied this topic, but for some reason, most research happened in the early 2000s. Although papers don't all agree with each other, most importantly on the variables' relative importance, the main incentives2 they identify are monetary gain expectancy, the pleasure of writing code, a sense of belonging to a community, and altruism. Giving weights to these variables is not an easy thing: the FOSS ecosystem is highly heterogeneous and thus people tend to write FOSS for different reasons. Moreover, incentives tend to shift with time as the ecosystem does. People writing Free Software in the 1990s probably did it for different reasons than people in 2021. These four variables can also be divided in two general categories: extrinsic and intrinsic incentives. Monetary gain expectancy is an extrinsic incentive (its value is delayed and mediated), whereas the three other ones are intrinsic (they have an immediate value by themselves).

Empirical analysis

Theory is nice, but it's even better when you can back it up with data. Sadly, most of the papers on the economic incentives of FOSS are either purely theoretical, or use sample sizes so small they might as well be. Using the data from the StackOverflow 2018 survey, I thus tried to see if I could somehow confirm my previous assumptions. With 129 questions and more than 100 000 respondents (which after statistical processing yields between 28 000 and 39 000 observations per variable of interest), the StackOverflow 2018 survey is a very large dataset compared to what economists are used to working with. Sadly, it wasn't entirely enough to come up with hard answers. There is a strong and significant correlation between writing Free Software and having a higher salary, but endogeneity problems3 made it hard to give a reliable estimate of how much money this would represent. Same goes for writing code as a hobby: it seems there is a strong and significant correlation, but the exact numbers I came up with cannot really be trusted.

The results on community as an incentive to writing FOSS were the ones that surprised me the most. Although I expected the relation to be quite strong, the coefficients predicted were in fact quite small. I theorise this is partly due to only 8% of the respondents declaring they didn't feel like they belonged in the IT community. With such a high level of adherence, the margin for improvement has to be smaller. As for altruism, I wasn't able to get any meaningful results. In my opinion this is mostly due to the fact there was no explicit survey question on this topic and I tried to make up for it by cobbling data together.

Kinda anti-climactic, isn't it? I would've loved to come up with decisive conclusions on this topic, but if there's one thing I learned while writing this thesis, it is that I don't know much after all.
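To illustrate the endogeneity problem mentioned above (see also footnote 3), here is a stylized version of the wage equation, in simplified notation rather than the exact specification from the thesis:

\ln(w_i) = \beta_0 + \beta_1\,\mathrm{FOSS}_i + X_i\gamma + \varepsilon_i

OLS only identifies \beta_1 when \mathrm{Cov}(\mathrm{FOSS}_i, \varepsilon_i) = 0. If some unobserved factor (say, ability) raises both wages and the probability of contributing to Free Software, that covariance is non-zero and the estimated \hat{\beta}_1 is biased; a valid instrument (correlated with \mathrm{FOSS}_i but not with \varepsilon_i) would solve this, but the survey doesn't provide one.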

  1. Note that the thesis is written in French.
  2. Of course, life is complex and so are people's motivations. One could come up with a dozen more reasons why people contribute to Free Software. The "fun" of theoretical modelling is trying to make complex things somewhat simpler.
  3. I'll spare you the details, but this means there is no way to know if this correlation is the result of a causal link between the two variables. There are ways to deal with this problem (using an instrumental variables model is a very popular one), but again, the survey didn't provide the proper instruments to do so. For example, it could very well be the correlation is due to omitted variables. If you are interested in this topic (and can read French), I talk about this issue in section 3.2.8.

23 January 2021

Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data, Revisited

In 2019, I got curious and asked the Société de transport de Montréal, Montreal's transit agency, for the foot traffic data of Montreal's subway. Since then, two years have passed and with COVID-19 still going strong, I wanted to see what impact the pandemic had had. And oh boy, what an impact it is. So here it is, data from 2001 to 2020, graphed the same way as in the original 2019 blog post. I could certainly juice this data, graph the pandemic using daily figures and come up with a long and interesting blog post analysing the main trends. I start teaching next Monday though and I still have prep work to do, so I'll leave that to someone else. By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Interactive map of Montreal's subway

9 January 2021

Louis-Philippe Véronneau: puppetserver 6: a Debian packaging post-mortem

I have been a Puppet user for a couple of years now, first at work, and eventually for my personal servers and computers. Although it can have a steep learning curve, I find Puppet both nimble and very powerful. I also prefer it to Ansible for its speed and the agent-server model it uses. Sadly, Puppet Labs hasn't been the most supportive upstream and tends to move pretty fast. Major versions rarely last for a whole Debian Stable release and the upstream .deb packages are full of vendored libraries.1

Since 2017, Apollon Oikonomopoulos has been the one doing most of the work on Puppet in Debian. Sadly, he's had less time for that lately and with Puppet 5 being deprecated in January 2021, Thomas Goirand, Utkarsh Gupta and I have been trying to package Puppet 6 in Debian for the last 6 months. With Puppet 6, the old Ruby Puppet server using Passenger is not supported anymore and has been replaced by puppetserver, written in Clojure and running on the JVM. That's quite a large change and although puppetserver does reuse some of the Clojure libraries puppetdb (already in Debian) uses, packaging it meant quite a lot of work.

Work in the Clojure team

As part of my efforts to package puppetserver, I had the pleasure of joining the Clojure team and learning a lot about the Clojure ecosystem. As I mentioned earlier, a lot of the Clojure dependencies needed for puppetserver were already in the archive. Unfortunately, when Apollon Oikonomopoulos packaged them, the leiningen build tool hadn't been packaged yet. This meant I had to rebuild a lot of packages, on top of packaging some new ones. Since then, thanks to the efforts of Elana Hashman, leiningen has been packaged and lets us run the upstream testsuites and create .jar artifacts closer to those upstream releases. During my work on puppetserver, I worked on the following packages:
List of packages
  • backport9
  • bidi-clojure
  • clj-digest-clojure
  • clj-helper
  • clj-time-clojure
  • clj-yaml-clojure
  • cljx-clojure
  • core-async-clojure
  • core-cache-clojure
  • core-match-clojure
  • cpath-clojure
  • crypto-equality-clojure
  • crypto-random-clojure
  • data-csv-clojure
  • data-json-clojure
  • data-priority-map-clojure
  • java-classpath-clojure
  • jnr-constants
  • jnr-enxio
  • jruby
  • jruby-utils-clojure
  • kitchensink-clojure
  • lazymap-clojure
  • liberator-clojure
  • ordered-clojure
  • pathetic-clojure
  • potemkin-clojure
  • prismatic-plumbing-clojure
  • prismatic-schema-clojure
  • puppetlabs-http-client-clojure
  • puppetlabs-i18n-clojure
  • puppetlabs-ring-middleware-clojure
  • puppetserver
  • raynes-fs-clojure
  • riddley-clojure
  • ring-basic-authentication-clojure
  • ring-clojure
  • ring-codec-clojure
  • shell-utils-clojure
  • ssl-utils-clojure
  • test-check-clojure
  • tools-analyzer-clojure
  • tools-analyzer-jvm-clojure
  • tools-cli-clojure
  • tools-reader-clojure
  • trapperkeeper-authorization-clojure
  • trapperkeeper-clojure
  • trapperkeeper-filesystem-watcher-clojure
  • trapperkeeper-metrics-clojure
  • trapperkeeper-scheduler-clojure
  • trapperkeeper-webserver-jetty9-clojure
  • url-clojure
  • useful-clojure
  • watchtower-clojure
If you want to learn more about packaging Clojure libraries and applications, I rewrote the Debian Clojure packaging tutorial and added a section about the quirks of using leiningen without a dedicated dh_lein tool.

Work left to get puppetserver 6 in the archive

Unfortunately, I was not able to finish the puppetserver 6 packaging work. It is thus unlikely it will make it into Debian Bullseye. If the issues described below are fixed, it would be possible to package puppetserver in bullseye-backports though. So what's left?

jruby

Although I tried my best (kudos to Utkarsh Gupta and Thomas Goirand for the help), jruby in Debian is still broken. It does build properly, but the testsuite fails with multiple errors. Those testsuite failures aside, I have not been able to use the jruby.deb the package currently builds in jruby-utils-clojure (testsuite failure). I had the same exact failure with the (more broken) jruby version that is currently in the archive, which leads me to think this is a LOAD_PATH issue in jruby-utils-clojure. More on that below.

To try to bypass these issues, I tried to vendor jruby into jruby-utils-clojure. At first I understood vendoring to mean including upstream pre-built artifacts (jruby-complete.jar) and shipping them directly. After talking with people on the #debian-mentors and #debian-ftp IRC channels, I now understand why this isn't a good idea (and why it's not permitted in Debian). Many thanks to the people who were patient and kind enough to discuss this with me and give me alternatives. As far as I now understand it, vendoring in Debian means "to have an embedded copy of the source code in another package". Code shipped that way still needs to be built from source. This means we need to build jruby ourselves, one way or another. Vendoring jruby in another package thus isn't terribly helpful. If fixing jruby the proper way isn't possible, I would suggest trying to build the package using embedded code copies of the external libraries jruby needs to build, instead of trying to use the Debian libraries.2 This should make it easier to replicate what upstream does and to have a final .jar that can be used.

jruby-utils-clojure

This package is a first-level dependency for puppetserver and is the glue between jruby and puppetserver. It builds fine, but the testsuite fails when using the Debian jruby package. I think the problem is caused by a jruby LOAD_PATH issue. The Debian jruby package plays with the LOAD_PATH a little to try to use Debian packages instead of downloading gems from the web, as upstream jruby does. This seems to clash with the gem-home, gem-path, and jruby-load-path variables in the jruby-utils-clojure package. The testsuite plays around with these variables and some Ruby libraries can't be found. I tried to fix this, but failed. Using the upstream jruby-complete.jar instead of the Debian jruby package, the testsuite passes fine. This package could clearly be uploaded to NEW right now by ignoring the testsuite failures (we're just packaging static .clj source files in the proper location in a .jar).

puppetserver

jruby issues aside, packaging puppetserver itself is 80% done. Using the upstream jruby-complete.jar artifact, the testsuite fails with a weird Clojure error I'm not sure I understand, but I haven't debugged it for very long. Upstream uses git submodules to vendor puppet (agent), hiera (3), facter and puppet-resource-api for the testsuite to run properly.
I haven't touched that, but I believe there are a couple of ways we could handle it. Without the testsuite actually running, it's hard to know what files are needed in those packages.

What now

Puppet 5 is now deprecated. If you or your organisation cares about Puppet in Debian,3 puppetserver really isn't far away from making it into the archive. Very talented Debian Developers are always eager to work on these issues and can be contracted for very reasonable rates. If you're interested in contracting someone to help iron out the last issues, don't hesitate to reach out.

As for me, I'm happy to say I got a new contract and will go back to teaching Economics for the Winter 2021 session. I might help out with some general Debian packaging work from time to time, but it'll be as a hobby instead of a job.

Thanks

The work I did during the last 6 weeks would not have been possible without the support of the Wikimedia Foundation, who were gracious enough to contract me. My particular thanks to Faidon Liambotis, Moritz Mühlenhoff and John Bond. Many, many thanks to Rob Browning, Thomas Goirand, Elana Hashman, Utkarsh Gupta and Apollon Oikonomopoulos for their direct and indirect help, without which all of this wouldn't have been possible.

  1. For example, the upstream package for the Puppet Agent vendors OpenSSL.
  2. One of the problems of using Ruby libraries already packaged in Debian is that jruby currently only supports Ruby 2.5. Ruby libraries in Debian are currently expected to work with Ruby 2.7, with the transition to Ruby 3.0 planned after the Bullseye release.
  3. If you run Puppet, you clearly should care: the .deb packages upstream publishes really aren't great and I would not recommend using them.

1 January 2021

Utkarsh Gupta: FOSS Activities in December 2020

Here's my (fifteenth) monthly update about the activities I've done in the F/L/OSS world.

Debian
This was my 24th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/ Amongst a lot of things, this month was crazy, hectic, adventurous, and the last of 2020; more on some parts later.
I finally finished my 7th semester (FTW!) and moved onto my last one! That said, I had been busy with other things but still did a bunch of Debian stuff. Here are the things I did this month:

Uploads and bug fixes:

Other $things:
  • Attended the Debian Ruby team meeting.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored golang-github-gorilla-css for Federico.

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support). This was my fifteenth month as a Debian LTS paid contributor and my sixth month as a Debian ELTS paid contributor.
I was assigned 26.00 hours for LTS and 38.25 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:
  • Issued DLA 2474-1, fixing CVE-2020-28928, for musl.
    For Debian 9 Stretch, these problems have been fixed in version 1.1.16-3+deb9u1.
  • Issued DLA 2481-1, fixing CVE-2020-25709 and CVE-2020-25710, for openldap.
    For Debian 9 Stretch, these problems have been fixed in version 2.4.44+dfsg-5+deb9u6.
  • Issued DLA 2484-1, fixing #969126, for python-certbot.
    For Debian 9 Stretch, these problems have been fixed in version 0.28.0-1~deb9u3.
  • Issued DLA 2487-1, fixing CVE-2020-27350, for apt.
    For Debian 9 Stretch, these problems have been fixed in version 1.4.11. The update was prepared by the maintainer, Julian.
  • Issued DLA 2488-1, fixing CVE-2020-27351, for python-apt.
    For Debian 9 Stretch, these problems have been fixed in version 1.4.2. The update was prepared by the maintainer, Julian.
  • Issued DLA 2495-1, fixing CVE-2020-17527, for tomcat8.
    For Debian 9 Stretch, these problems have been fixed in version 8.5.54-0+deb9u5.
  • Issued DLA 2488-2, for python-apt.
    For Debian 9 Stretch, these problems have been fixed in version 1.4.3. The update was prepared by the maintainer, Julian.
  • Issued DLA 2508-1, fixing CVE-2020-35730, for roundcube.
    For Debian 9 Stretch, these problems have been fixed in version 1.2.3+dfsg.1-4+deb9u8. The update was prepared by the maintainer, Guilhem.

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:
  • Front-desk duty from 21-12 until 27-12 and from 28-12 until 03-01 for both LTS and ELTS.
  • Triaged openldap, python-certbot, lemonldap-ng, qemu, gdm3, open-iscsi, gobby, jackson-databind, wavpack, cairo, nsd, tomcat8, and bouncycastle.
  • Marked CVE-2020-17527/tomcat8 as not-affected for jessie.
  • Marked CVE-2020-28052/bouncycastle as not-affected for jessie.
  • Marked CVE-2020-14394/qemu as postponed for jessie.
  • Marked CVE-2020-35738/wavpack as not-affected for jessie.
  • Marked CVE-2020-3550{3-6}/qemu as postponed for jessie.
  • Marked CVE-2020-3550{3-6}/qemu as postponed for stretch.
  • Marked CVE-2020-16093/lemonldap-ng as no-dsa for stretch.
  • Marked CVE-2020-27837/gdm3 as no-dsa for stretch.
  • Marked CVE-2020-{13987,13988,17437}/open-iscsi as no-dsa for stretch.
  • Marked CVE-2020-35450/gobby as no-dsa for stretch.
  • Marked CVE-2020-35728/jackson-databind as no-dsa for stretch.
  • Marked CVE-2020-28935/nsd as no-dsa for stretch.
  • Auto EOL'ed libpam-tacplus, open-iscsi, wireshark, gdm3, golang-go.crypto, jackson-databind, spotweb, python-autobahn, asterisk, nsd, ruby-nokogiri, linux, and motion for jessie.
  • General discussion on LTS private and public mailing list.

Other $things! \o/

Bugs and Patches

Well, I did report some bugs and issues and also sent some patches:
  • Issue #44 for github-activity-readme, asking for a feature to set a custom committer's email address.
  • Issue #711 for git2go, reporting build failure for the library.
  • PR #89 for rubocop-rails_config, bumping RuboCop::Packaging to v0.5.
  • Issue #36 for rubocop-packaging, asking to try out mutant :)
  • PR #212 for cucumber-ruby-core, bumping RuboCop::Packaging to v0.5.
  • PR #213 for cucumber-ruby-core, enabling RuboCop::Packaging.
  • Issue #19 for behance, asking to relax constraints on faraday and faraday_middleware.
  • PR #37 for rubocop-packaging, enabling tests against ruby3.0! \o/
  • PR #489 for cucumber-rails, bumping RuboCop::Packaging to v0.5.
  • Issue #362 for nheko, reporting a crash when opening the application.
  • PR #1282 for paper_trail, adding RuboCop::Packaging amongst other used extensions.
  • Bug #978640 for nheko Debian package, reporting a crash, as a result of libfmt7 regression.

Misc and Fun

Besides squashing bugs and submitting patches, I did some other things as well!
  • Participated in my first Advent of Code event! :)
    Whilst it was indeed fun, I didn't really complete it. No reason, really. But I'll definitely come back stronger next year, heh! :)
    All the solutions thus far could be found here.
  • Did a couple of reviews for some PRs and triaged some bugs here and there, meh.
  • Also did some cloud debugging, not so fun if you ask me, but cool enough to make me want to do it again! ^_^
  • Worked along with pollo, zigo, ehashman, rlb, et al for puppet and puppetserver in Debian. OMG, they're so lovely! <3
  • Ordered some interesting books to read January onward. New year resolution? Meh, not really. Or maybe. But nah.
  • Also did some interesting stuff this month but can't really talk about it now. Hopefully sooooon.

Until next time.
:wq for today.

17 November 2020

Louis-Philippe Véronneau: A better git diff

A few days ago I wrote a quick patch and missed a dumb mistake that made the program crash. When reviewing the merge request on Salsa, the problem became immediately apparent: Gitlab's diff is much better than what git diff shows by default in a terminal. Well, it turns out that since version 2.9, git bundles a better pager, diff-highlight. À la Gitlab, it will highlight what changed in the line.

The output of git diff using diff-highlight

Sadly, even though diff-highlight comes with the git package in Debian, it is not built by default (#925288). You will need to:
$ sudo make --directory /usr/share/doc/git/contrib/diff-highlight
You can then add this line to your .gitconfig file:
[core]
  pager = /usr/share/doc/git/contrib/diff-highlight/diff-highlight | less --tabs=4 -RFX
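If you want to test it before changing your configuration, you can pipe any diff through the script by hand. A minimal sketch, assuming the same Debian path as above:
$ git diff | /usr/share/doc/git/contrib/diff-highlight/diff-highlight | less --tabs=4 -RFX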
If you use tig, you'll also need to add this line in your tigrc:
set diff-highlight = /usr/share/doc/git/contrib/diff-highlight/diff-highlight

6 November 2020

Louis-Philippe Véronneau: Book Review: Working in Public by Nadia Eghbal

I have a lot of respect for Nadia Eghbal, partly because I can't help being jealous of her work on the economics of Free Software1. If you are not already familiar with Eghbal, she is the author of Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure, a great technical report published for the Ford Foundation in 2016. You may also have caught her excellent keynote at LCA 2017, entitled Consider the Maintainer. Her latest book, Working in Public: The Making and Maintenance of Open Source Software, published by Stripe Press a few months ago, is a great read and if this topic interests you, I highly recommend it.

The book itself is simply gorgeous: bright orange, textured hardcover binding, thick paper, wonderful typesetting; it has everything to please. Well, nearly everything. Sadly, it is only available on Amazon, exclusively in the United States. A real letdown for a book on Free and Open Source Software. The book is divided into five chapters, namely:
  1. Github as a Platform
  2. The Structure of an Open Source Project
  3. Roles, Incentives and Relationships
  4. The Work Required by Software
  5. Managing the Costs of Production
A picture of the book cover

Contrary to what I was expecting, the book feels more like an extension of the LCA keynote I previously mentioned than of Roads and Bridges. Indeed, as made apparent by the following quote, Eghbal doesn't believe funding to be the primary problem of FOSS anymore:
We still don't have a common understanding about who's doing the work, why they do it, and what work needs to be done. Only when we understand the underlying behavioral dynamics of open source today, and how it differs from its early origins, can we figure out where money fits in. Otherwise, we're just flinging wet paper towels at a brick wall, hoping that something sticks. p.184
That is to say, the behavior of maintainers and the challenges they face, not the eternal money problem, is the real topic of this book. And it feels refreshing. When was the last time you read something on the economics of Free Software without it being mostly about which licences projects should pick and how business models can be tacked onto them? I certainly can't remember.

To be clear, I'm not sure I agree with Eghbal on this. Her having worked at Github for a few years and having interviewed mostly people in the Ruby on Rails and Javascript communities certainly shows in the form of a strong selection bias. As she herself admits, this is a book on how software on Github is produced. As much as this choice irks me (the Free Software community certainly cannot be reduced to Github), this exercise had the merit of forcing me to look at my own selection biases.

As such, reading Working in Public did to me something I wasn't expecting it to do: it broke my Free Software echo chamber. Although I consider myself very familiar with the world of Free and Open Source Software, I now understand my somewhat ill-advised contempt for certain programming languages (mostly JS) skewed my understanding of what FOSS in 2020 really is. My Free Software world very much revolves around Debian, a project with a strong and opinionated view of Free Software, rooted in a historical and political understanding of the term. This, Eghbal argues, is not the case for a large swath of developers anymore. They are The Github Generation, people attached to Github as a platform first and foremost, and who feel "Open Source" is just a convenient way to make things.

Although I could intellectualise this, before reading the book I didn't really grok how communities akin to npm have been reshaping the modern FOSS ecosystem and how different they are from Debian itself. To be honest, I am not sure I like this tangent and it is certainly part of the reason why I had a tendency to dismiss it as a fringe movement I could safely ignore. Thanks to Nadia Eghbal, I come out of this reading more humble and certainly reminded that FOSS' heterogeneity is real and should not be idly dismissed.

This book is rich in content and although I could go on (my personal notes clock in at around 2000 words and I certainly disagree with a number of things), I'll stop here for now. Go and grab a copy already!

  1. She insists on using the term open source, but I won't :)

19 October 2020

Louis-Philippe Véronneau: Musings on long-term software support and economic incentives

Although I still read a lot, during my college sophomore years my reading habits shifted from novels to more academic works. Indeed, reading dry textbooks and economic papers for classes often kept me from reading anything else substantial. Nowadays, I tend to binge read novels: I won't touch a book for months on end, and suddenly, I'll read 10 novels back to back1. At the start of a novel binge, I always follow the same ritual: I take out my e-reader from its storage box, marvel at the fact the battery is still pretty full, turn on the WiFi and check if there are OS updates. And I have to admit, Kobo Inc. (now Rakuten Kobo) has done a stellar job of keeping my e-reader up to date. I've owned this model (a Kobo Aura 1st generation) for 7 years now and I'm still running the latest version of Kobo's Linux-based OS. Having recently had trouble updating my Nexus 5 (also manufactured 7 years ago) to Android 102, I asked myself:
Why is my e-reader still getting regular OS updates, while Google stopped issuing security patches for my smartphone four years ago?
To try to answer this, let us turn to economic incentives theory. Although not the be-all and end-all some think it is3, incentives theory is not a bad tool to analyse this particular problem. Executives at Google most likely followed a very business-centric logic when they decided to drop support for the Nexus 5. Likewise, Rakuten Kobo's decision to continue updating older devices certainly had very little to do with ethics or loyalty to their user base. So, what are the incentives that keep Kobo updating devices, and why are they different from smartphone manufacturers'?

A portrait of the current long-term software support offerings for smartphones and e-readers

Before delving deeper into economic theory, let's talk data. I'll be focusing on 2 brands of e-readers, Amazon's Kindle and Rakuten's Kobo. Although the e-reader market is highly segmented and differs a lot based on geography, Amazon was in 2015 the clear worldwide leader with 53% of worldwide e-reader sales, followed by Rakuten Kobo at 13%4. On the smartphone side, I'll be differentiating between Apple's iPhones and Android devices, taking Google as the barometer for that ecosystem. As mentioned below, Google is sadly the leader in long-term Android software support.

Rakuten Kobo

According to their website and to this Wikipedia table, the only e-readers Kobo has deprecated are the original Kobo eReader and the Kobo WiFi N289, both released in 2010. This makes their oldest still supported device the Kobo Touch, released in 2011. In my book, that's a pretty good track record. Long-term software support does not seem to be advertised or to be a clear selling point in their marketing.

Amazon

According to their website, Amazon has dropped support for all 8 devices produced before the Kindle Paperwhite 2nd generation, first sold in 2013. To put things in perspective, the first Kindle came out in 2007, 3 years before Kobo started selling devices. Like Rakuten Kobo, Amazon does not make promises of long-term software support as part of their marketing.

Apple

Apple has a very clear software support policy for all their devices:
Owners of iPhone, iPad, iPod or Mac products may obtain a service and parts from Apple or Apple service providers for five years after the product is no longer sold or longer, where required by law.
This means in the worst-case scenario of buying an iPhone model just as it is discontinued, one would get a minimum of 5 years of software support.

Android

Google's policy for their Android devices is to provide software support for 3 years after the launch date. If you buy a Pixel device just before the new one launches, you could theoretically only get 2 years of support. In 2018, Google decided OEMs would have to provide security updates for at least 2 years after launch, threatening not to license Google Apps and the Play Store if they didn't comply.

A question of cost structure

From the previous section, we can conclude that in general, e-readers seem to be supported longer than smartphones, and that Apple does a better job than Android OEMs, providing support for about twice as long. Even Fairphone, whose entire business is to build phones designed to last and to be repaired, was not able to keep the Fairphone 1 (2013) updated for more than a couple of years and seems to be struggling to keep the Fairphone 2 (2015) running an up to date version of Android.

Anyone who has ever worked in IT will tell you: maintaining software over time is hard work, and hard work by specialised workers is expensive. Most commercial electronic devices are sold and developed by for-profit enterprises, and software support all comes down to a question of cost structure. If companies like Google or Fairphone are expected to provide long-term support for the devices they manufacture, they have to be able to fund their work somehow.

In a perfect world, people would be paying for the cost of said long-term support, as it would likely be cheaper than buying new devices every few years and would certainly be better for the planet. Problem is, manufacturers aren't making them pay for it. Economists call this type of problem externalities: things that should be part of the cost of a good, but aren't, for one reason or another. A classic example of an externality is pollution. Clearly pollution is bad and leads to horrendous consequences, like climate change. Sane people agree we should drastically cut our greenhouse gas emissions, and yet, we aren't.

Neo-classical economic theory argues the way to fix externalities like pollution is to internalise these costs, in other words, to make people pay for the "real price" of the goods they buy. In the case of climate change and pollution, neo-classical economic theory is plain wrong (spoiler alert: it often is), but this is where band-aids like the carbon tax come from. Still, coming back to long-term software support, let's see what would happen if we were to try to internalise software maintenance costs. We can do this in multiple ways.

1 - Include the price of software maintenance in the cost of the device

This is the choice Fairphone makes. This might somewhat work out for them since they are a very small company, but it cannot scale, for the following reasons:
  1. This strategy relies on you giving your money to an enterprise now, and trusting them to "Do the right thing" years later. As the years go by, they will eventually look at their books, see how much ongoing maintenance is costing them, drop support for the device, apologise and move on. That is to say, enterprises have a clear economic incentive to promise long-term support and not deliver. One could argue a company's reputation would suffer from this kind of behaviour. Maybe sometimes it does, but most often people forget. Political promises are a great example of this.
  2. Enterprises go bankrupt all the time. Even if company X promises 15 years of software support for their devices, if they cease to exist, your device will stop getting updates. The internet is full of stories of IoT devices getting bricked when the parent company goes bankrupt and their servers disappear. This is related to point number 1: to some degree, you have a disincentive to pay for long-term support in advance, as the future is uncertain and there are chances you won't get the support you paid for.
  3. Selling your devices at a higher price to cover maintenance costs does not necessarily mean you will make more money overall (raising more money to fund maintenance costs being the goal here). To a certain point, smartphone models are substitute goods and prices higher than market prices will tend to drive consumers to buy cheaper ones. There is thus a disincentive to include the price of software maintenance in the cost of the device.
  4. People tend to be bad at rationalising the total cost of ownership over a long period of time. Economists call this phenomenon hyperbolic discounting. In our case, it means people are far more likely to buy a 500$ phone every 3 years than a 1000$ phone every 10 years, even though the latter works out cheaper per year (see the quick calculation below). Again, this means OEMs have a clear disincentive to include the price of long-term software maintenance in their devices.
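To make point 4 concrete, here is a back-of-the-envelope calculation using the example prices above (my own illustration, taking cost per year of ownership as the measure):

\[
\frac{500\,\$}{3\text{ years}} \approx 167\,\$/\text{year}
\qquad\text{versus}\qquad
\frac{1000\,\$}{10\text{ years}} = 100\,\$/\text{year}
\]

The longer-lived phone is about 40% cheaper per year of use, yet hyperbolic discounting makes the bigger upfront payment loom larger than a decade of savings.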
Clearly, life is more complex than how I portrayed it: enterprises are not perfect rational agents, altruism exists, not all enterprises aim solely for profit maximisation, etc. Still, in a capitalist economy, enterprises wanting to charge for software maintenance upfront have to overcome these hurdles one way or another if they want to avoid failing.

2 - The subscription model

Another way companies can try to internalise support costs is to rely on a subscription-based revenue model. This has multiple advantages over the previous option, mainly:
  1. It does not affect the initial purchase price of the device, making it easier to sell them at a competitive price.
  2. It provides a stable source of income, something that is very valuable to enterprises, as it reduces overall risks. This in return creates an incentive to continue providing software support as long as people are paying.
If this model is so interesting from an economic incentives point of view, why isn't any smartphone manufacturer offering that kind of program? The answer is, they are, but not explicitly5. Apple and Google can fund part of their smartphone software support via the 30% cut they take out of their respective app stores. A report from Sensor Tower shows that in 2019, Apple made an estimated US$ 16 billion from the App Store, while Google raked in US$ 9 billion from the Google Play Store. Although the Fortune 500 ranking tells us this respectively is "only" 5.6% and 6.5% of their gross annual revenue for 2019, the profit margins in this category are certainly higher than any of their other products.

This means Google and Apple have an important incentive to keep your device updated for some time: if your device works well and is updated, you are more likely to keep buying apps from their store. When software support for a device stops, there is a risk paying customers will buy a competitor device and leave their ecosystem. This also explains why OEMs who don't own app stores tend not to provide software support for very long periods of time. Most of them only make money when you buy a new phone. Providing long-term software support thus becomes a disincentive, as it directly reduces their sale revenues.

Same goes for Kindles and Kobos: the longer your device works, the more money they make with their electronic book stores. In my opinion, it's likely Amazon and Rakuten Kobo produce quarterly cost-benefit reports to decide when to drop support for older devices, based on ongoing support costs and the recurring revenues these devices bring in. Rakuten Kobo is also in a more precarious situation than Amazon is: considering Amazon's very important market share, if your device stops getting new updates, there is a greater chance people will replace their old Kobo with a Kindle. Again, they have an important economic incentive to keep devices running as long as they are profitable.

Can Free Software fix this?

Yes and no. Free Software certainly isn't a magic wand one can wave to make everything better, but it does provide major advantages in terms of security, user freedom and sometimes costs. The last piece of the puzzle explaining why Rakuten Kobo's software support is better than Google's is technological choices. Smartphones are incredibly complex devices and have become the main computing platform of many. Similar to the web, there is a race for features and complexity that tends to create bloat and make older devices slow and painful to use. On the other hand, e-readers are simpler devices built for a single task: displaying electronic books.

Control over the platform is also a key aspect of the cost structure of providing software updates. Whereas Apple controls both the software and hardware side of iPhones, Android is a sad mess of drivers and SoCs, all providing different levels of support over time6. If you take a look at the platforms the Kindle and Kobo are built on, you'll quickly see they both use Freescale i.MX SoCs. These processors are well known for their excellent upstream support in the Linux kernel and their relative longevity, chips being produced for either 10 or 15 years. This in turn makes updates much easier and less expensive to provide.

So clearly, open architectures, free drivers and open hardware help tremendously, but aren't enough on their own. One of the lessons we must learn from the (amazing) LineageOS project is how lack of funding hurts everyone.
If there is no one to do the volunteer work required to maintain a version of LOS for your device, it won't be supported. Worse, when purchasing a new device, users cannot know in advance how many years of LOS support they will get. This makes buying new devices a frustrating hit-and-miss experience. If you are lucky, you will get many years of support. Otherwise, you risk your device becoming an expensive insecure paperweight.

So how do we fix this? Anyone with a brain understands throwing away perfectly good devices every 2 years is not sustainable. Government regulations enforcing a minimum support life would be a step in the right direction, but at the end of the day, Capitalism is to blame. Like the aforementioned carbon tax, band-aid solutions can make things somewhat better, but won't fix our current economic system's underlying problems. For now though, I'll leave fixing the problem of Capitalism to someone else.

  1. My most recent novel binge has been focused on re-reading the Dune franchise. I first read the 6 novels written by Frank Herbert when I was 13 years old and only had vague and pleasant memories of his work. Great stuff.
  2. I'm back on LineageOS! Nice folks released an unofficial LOS 17.1 port for the Nexus 5 last January and have kept it updated since then. If you are to use it, I would also recommend updating TWRP to this version specifically patched for the Nexus 5.
  3. Very few serious economists actually believe neo-classical rational agent theory is a satisfactory explanation of human behavior. In my opinion, it's merely a (mostly flawed) lens to try to interpret certain behaviors, a tool amongst others that needs to be used carefully, preferably as part of a pluralism of approaches.
  4. Good data on the e-reader market is hard to come by and is mainly produced by specialised market research companies selling their findings at very high prices. Those particular statistics come from a MarketWatch analysis.
  5. If they were to tell people: You need to pay us 5$/month if you want to receive software updates, I'm sure most people would not pay. Would you?
  6. Coming back to Fairphones: if they had so many problems providing an Android 9 build for the Fairphone 2, it's because Qualcomm never provided Android 7+ support for the Snapdragon 801 SoC it uses.
