Search Results: "philipp"

24 June 2021

Louis-Philippe Véronneau: Hardening Weechat Relays Against RCE on Bullseye

I've been using weechat to connect to IRC since late 2016 and one of its killer features is relays. They let you use other frontends like the Weechat Android app or the amazing Glowing Bear (packaged in Debian Bullseye by yours truly). Sadly, relays also used to be somewhat of a security risk: anyone with access to a relay1 could run scripts on the machine running weechat by using commands such as /exec or /script. Not great. Since version 2.5 (Buster had version 2.3), you can mitigate this risk by setting a command allowlist for relays. Later versions implemented a sane default that blocks dangerous commands out of the box. Sadly, this default didn't make it into Bullseye. If you are running weechat and are using the relays feature, after upgrading to Bullseye, I would recommend you run the following commands in the weechat TUI:
/set relay.weechat.commands *,!exec,!fset,!set,!unset,!plugin,!script,!python,!perl,!ruby,!lua,!tcl,!guile,!javascript,!php,!secure,!upgrade,!quit
/save
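If you want to double-check that the allowlist took effect, asking weechat to display the option should print its current value (a quick sanity check, assuming a standard relay setup):
/set relay.weechat.commands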

  1. For example, someone steals your phone and connects to IRC via the Weechat app...

21 June 2021

Shirish Agarwal: Accessibility, Freenode and American imperialism.

Accessibility This is perhaps one of the strangest and yet also the most straightforward ways to start a blog post. For the past few weeks and months, I have been having a strange experience. I have been using a Logitech wireless keyboard and mouse for almost a decade. Between us, my mother and I share a single desktop computer, so we take turns being on it. At times the system sits idle and, after 30 minutes, goes into low-power/sleep mode. When you come back, you obviously have to give your login credentials. At times, though, the keyboard refuses to input any data on the login screen. Interestingly, the mouse still functions, and what is much more interesting is that both the mouse and the keyboard use the same transceiver to send data. I changed the batteries to ensure it was not a power issue, but still no input :(. My mother uses the power switch (I did teach her how to hold it and then let it go), but for myself I tried another thing: using the mouse, I logged out of the session, thinking perhaps some race condition or something in the session was keeping the keystrokes from being inputted into the system and a new session might resolve it. But this was not to be. Luckily, on that screen you do have the option to reboot or power off. I did a reboot and, lo and behold, the system was able to input characters again. And this has happened time and again. I tried to find GOK, forgetting that GOK had been retired. I looked up the accessibility page on the Debian wiki. Very interesting, very detailed, but sadly it did not and does not provide the backup I needed. I tried out florence but found that the app is buggy. Moreover, the instructions provided for the lightdm screen do not work: I do not get the on-screen keyboard when I follow them. Just to be clear, this is all on Debian testing, which is going to be Debian stable soonish. I even tried the same with xvkbd, but to no avail. I do use MATE as my desktop environment, so maybe the instructions need some refinement?
$ cat /etc/lightdm/lightdm-gtk-greeter.conf | grep keyboard
# a11y-states = states of accessibility features: name - save state on exit, -name - disabled at start (default value for unlisted), +name - enabled at start. Allowed names: contrast, font, keyboard, reader.
keyboard=xvkbd no-gnome focus &
# keyboard-position = x y[;width height] ("50%,center -0;50% 25%" by default) Works only for onboard
#keyboard=
Interestingly, Debian does provide two more on-screen keyboards, matchbox as well as onboard, which comes from Ubuntu, and I have both of them installed. I find xvkbd to be enough for my work; the only issue seems to be that I cannot get it from the accessibility drop-down box at the login screen. Just to make sure that I have not switched to the GNOME display manager, I did run

$ sudo dpkg-reconfigure gdm3
only to find out that I am indeed running lightdm. So I am a bit confused about why it doesn't come up as an option when I have that login window/login manager running. FWIW, I do run metacity as the window manager, as it plays nice with almost all the various desktop environments I have. So this is where I'm stuck. If I do get any help, I would probably also add those instructions to the wiki page, so it would be convenient for the next person who comes along with the same issue. I also need to figure out some way to know whether there is some race condition or something happening, though I have no clue how I would go about that without generating a whole lot of noise. I am sure there are others who may have more of an idea. FWIW, I did search unix.stackexchange as well as reddit/debian to see if I could find any meaningful posts, but came up empty.
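For what it's worth, based on the comments in lightdm-gtk-greeter.conf itself, I understand the relevant bit of /etc/lightdm/lightdm-gtk-greeter.conf should look something like the sketch below (using onboard, since the file says keyboard-position only works with it); I have not yet confirmed that this fixes the drop-down issue on my machine:

[greeter]
a11y-states = +keyboard
keyboard = onboard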

Freenode I had not been using IRC for quite some time now. The reason has been multiple issues with Riot (now Element) taking up the whole space on my desktop. I only got alerted to the whole Freenode affair about a week after it went down, when somebody messaged me in a DM. I *think* I had put up a thread or mini-thread about IRC in response to somebody praising Telegram/WhatsApp or one of those apps, and that probably triggered the DM. It took me a couple of minutes to hit upon this. I was angry and depressed seeing the behavior of the new overlords of Freenode. I did see that a lot of channels moved over to Libera. It was also interesting to see that some communities were thinking of moving to some other obscure platform, which again could be held hostage to the same thing. One could argue one way or the other, but that would be tiresome, and the fact is any network needs a lot of help to be grown and nurtured, whether it is online or offline. I also saw that Libera is running Solanum, which is IRCv3 compliant. Having done this initial investigation, it was time to move to an IRC client. The Libera documentation is and was pretty helpful in telling you which IRC clients work well with their network. So I first tried hexchat. I installed it and tried to add the Libera server credentials, but it didn't work. I did see that they had fixed the bug in sid/unstable and that the fix is now in testing, but at the time the fix was only in sid and I wanted something which just ran the damn thing. I chanced upon quassel. I had played around with quassel quite a number of times before, so I knew I could use it. Hence, I installed it and was able to use it on the first try. I did use the encrypted server and just had to tweak some settings, with some help from their documentation, before I could use it. Although, I have to say that even quassel upstream needs to get its documentation in order: it is just all over the place, and they haven't put any effort into streamlining it so that finding things becomes easier. But that can be said of many upstream projects.
There is one thing though that all of these IRC clients lack: a password manager. Until that is fixed it will suck, because you need another secure place to put your password(s). You either put it on your desktop somewhere (insecure) or store it in the cloud somewhere (somewhat secure, but again you need to remember that password); whatever you do is extra work. I am sure there will be a day when authenticating with NickServ will be an automated task and people can just get on with talking on channels and figuring out how to be part of the various communities. As can be seen, even now there is a bit of a learning curve for both newbies and people who know a bit about systems to get it working. Now, I know there are a lot of things that need to be fixed on the anonymity and security front, if I put on that sort of hat. For example, wouldn't it be cool if either the IRC client or one of its add-ons gave out throwaway usernames and complex passwords? This would make things easier for people who are paranoid about security, and many are or would be. As an example, we can look at Fuchs. If the gentleman or lady is working in a professional capacity and people come to know their real identity and perceive, rightly or wrongly, the role of that person, it will affect their career. Now, should it? I am sure a lot of people would be divided on the issue.
Personally, as far as I am concerned, I would say no, because whether right or wrong, whatever they were doing, they were doing it on their own time, not on company time, so it doesn't concern the company at all. If we were to let companies police behaviour outside of work hours, individuals would be in a lot of trouble. Although, I have to say that is a trend that has been seen in companies firing people on either the left or the right. A recent example that comes to mind is Emily Wilder, who was fired by the Associated Press. Interestingly, she was interviewed by Democracy Now, and it did come out that she is a Jew. As can be seen and understood, there is a lot of nuance to her story, and not just in the way she was fired. It doesn't leave a good taste in the mouth, but then getting fired never does. On a few forums, people shared stories of people getting fired from their jobs because they were dancing (cops). Again, it all depends; for me, hats off to anybody who feels like dancing or whatever, because there are just so many depressing stories all around.

Banned and FOE On a few forums I was banned because I was talking about Brexit and American imperialism, both of which seem to ruffle a few feathers in quite a few places. For instance, many people, for obvious reasons, do not like this video.

Now, I'm sorry that I have not been able to give invidious links for the past few months. The reason is that invidious itself went through some changes, and the changes are both good and bad. For example, you now need to share your Google ID with a third party, which at least to my mind is not a good idea. But that is probably another story altogether and will need its own place. Coming back to the video itself, it was shared by Anthony Hazard and the title is "The Atlantic slave trade: What too few textbooks told you". I saw this video quite a few years ago and still find it hard to swallow that tens of millions of Africans were brought as slaves to the Americas, although to be fair it does start with the Spanish settlement of the land that would later be called the U.S., and they brought slaves with them. They even enslaved the American natives, i.e. people from the different tribes which made up America at that point. One point to note is that the U.S. got its independence on July 4, 1776, so all the people before that are referred to as European settlers, for want of a better word. Some or many of these European settlers would have been convicts who were sent from the UK. But as shared in the article, that discussion will only happen once the U.S. itself is mature and open enough for it. Going back to the original point though, these European or American settlers brought a lot of slaves from Africa. The video also sheds light on some of the cruelty the Europeans or Americans inflicted on the slaves, men and women, in different ways. The most revelatory part though, which I also forget many a time, is that because so many people were taken from Africa, many of them men, it led to imbalances in African societies, not just in weddings but in economics in general. It also developed a theory, called Critical Race Theory, which tries to paint the Africans as an inferior race, for otherwise how would Christianity work when their own good book says "All men are born equal"? That does in part explain why the African countries are still so far behind their European or American counterparts. But Africa can still be proud, as they are richer than us, yup, India. Sadly, I don't think America is ready to have that conversation anytime soon, or if ever. And if it were to, it would have to out-do any truth and reconciliation committee which the world has seen. A mere apology or two would just not cut it. The problems of America sadly are not limited to just Africans but also the natives of the land, for e.g. the Lakota people. In 1868, they signed a treaty stating they would give the land back to the Lakota people forever, but then the gold rush happened. In 2007, when the Lakota stated their proposal for independence, the U.S. denied it through force. So much for the paper it was written on. Now, from what I have come to know over the years, the American natives are called "First Nations". Time and time again the American Govt. has been foul towards them. Some of the examples include the Yucca Mountain nuclear waste repository. The same is and was the case with the Keystone pipeline, which is now dead. Now one could say that it is America's internal matter and I would fully agree, but when they speak of the internal matters of other countries, then we should have the same freedom. And this is not restricted to just internal matters, sadly. Since the 1950s, i.e. the advent of the Cold War, America's foreign policy has made regime changes all around the world. Sharing some of the examples from the Cold War:

Iran 1953
Guatemala 1954
Democratic Republic of the Congo 1960
Republic of Ghana 1966
Iraq 1968
Chile 1973
Argentina 1976
Afghanistan 1978-1980s
Grenada
Nicaragua 1981-1990
1. Destabilization through CIA assets
2. Arming the Contras
El Salvador 1980-92
Philippines 1986 Even after the Cold War ended, the situation was analogous, meaning they still continued with their old behavior. After the end of the Cold War:

Guatemala 1993
Serbia 2000
Iraq 2003-
Afghanistan 2001 ongoing There is a helpful Wikipedia article titled History of CIA which basically lists most of the covert regime changes done by the U.S. The above is merely a subset of the actions done by the U.S. Now, are all the behaviors above those of a civilized nation? And if one cares to notice, one would notice that all the countries in the list which had a regime change had either oil or precious metals. So the U.S. is and was being what it accuses China of being: a profiteer. But this isn't just a U.S. vs. China story; it is more about the American abuse of its power. My own country, India, paid IMF loans till 1991 and we paid through the nose. There were economic sanctions against India. But then, this is again not just about the U.S. and India. Even with Europe, or more precisely Norway, which didn't want to side with America because their intelligence showed that no WMDs were present in Iraq, the relationship still has issues.

Pandemic and the World So I do find this whole blaming of China by the U.S. quite theatrical and full of double and triple standards. Very early during the debates, it came to light that the Spanish Flu actually originated in Kansas, U.S.

What was also interesting, as I found in the Pentagon Papers well before the Watergate scandal came out, is that the U.S. had realized that China would be more of a competitor than Russia. And this was in the 1960s. This shows the level of intelligence that the Americans had. From what I can recollect of whatever I have read of that era, China was still mostly an agrarian economy, so how the U.S. was able to deduce that China would surpass other economies is beyond me even now. They surely must have known something that even we today do not. One of the other interesting observations and understandings I got while researching is that every year we transfer an average of 7,500 diseases from animals to humans, and that should be a scary figure. I think more than anything else, loss of habitat and the use of animals for food, clothing and medicine is probably the reason we are getting such diseases. I am also sure there probably are and have been a similar number of transfers of diseases from humans to animals, but because of well-known biases and whatnot those studies are either not done or are under-funded. There are and have been reports of something like 850,000 undiscovered viruses which various mammals and birds carry. I also found that most such pandemics are hard to trace: SARS 1 took about 15 years, we don't know to this day where Ebola came from, and even HIV still has questions for us. Hell, even why hearing goes away is a mystery to us. In all of this, we want to say China is culpable. And while China may or may not be culpable, only time will tell, this is surely the opportunity for all countries to spend and build capacity in public health. Countries which take lessons from it and improve their public healthcare models will hopefully not suffer as much as those who are suffering and will continue to suffer now. To those who feel that habitat loss for animals is untrue, I would suggest they watch Sherni, which depicts the human/animal conflict in all its brutality. I am gonna warn in advance that the ending is not nice, but what can you expect from a country in which forest cover has constantly declined and the Govt. itself is only interested in headline management?

The only positive story I can share from India is that the Modi Govt. has finally said it will do free vaccine immunization for everybody, although the pace is nothing to write home about. One additional thing they relaxed: instead of going through Cowin or any other portal, people can simply walk in using their identity papers. Still, given the pace of vaccinations, it is going to take anywhere between 13 and 18 months or more, depending on the availability of vaccines.

Looking forward to any and all replies on how to have a virtual keyboard at the login screen, preferably xvkbd, as that is good enough for my use-case.

10 June 2021

Louis-Philippe Véronneau: New Desktop Computer

I built my last desktop computer what seems like ages ago. In 2011, I was in a very different place, both financially and as a person. At the time, I was earning minimum wage at my school's café to pay rent. Since the café was owned by the school cooperative, I had an employee discount on computer parts. This gave me a chance to build my first computer from spare parts at a reasonable price. After 10 years of service1, the time has come to upgrade. Although this machine was still more than capable for day to day tasks like browsing the web or playing casual video games, it started to show its limits when the time came to do more serious work. Old computer specs:
CPU: AMD FX-8530
Memory: 8GB DDR3 1600Mhz
Motherboard: ASUS TUF SABERTOOTH 990FX R2.0
Storage: Samsung 850 EVO 500GB SATA
I first started considering an upgrade in September 2020: David Bremner was kindly fixing a bug in ledger that kept me from balancing my books and since it seemed like a class of bug that would've been easily caught by an autopkgtest, I decided to add one. After adding the necessary snippets to run the upstream testsuite (an easy task I've done multiple times now), I ran sbuild and ... my computer froze and crashed. Somehow, what I thought was a simple Python package was maxing all the cores on my CPU and using all of the 8GB of memory I had available.2 A few months later, I worked on jruby and the builds took 20 to 30 minutes, long enough to completely disrupt my flow. The same thing happened when I wanted to work on lintian: the testsuite would take more than 15 minutes to run, making quick iterations impossible. Sadly, the pandemic completely wrecked the computer hardware market and prices here in Canada have only recently started to go down again. As a result, I had to wait longer than I would've liked in order not to pay scalper prices. New computer specs:
CPU: AMD Ryzen 5900X
Memory: 64GB DDR4 3200MHz
Motherboard: MSI MPG B550 Gaming Plus
Storage: Corsair MP600 500 GB Gen4 NVME
The difference between the two machines is pretty staggering: I've gone from a CPU with 2 cores and 8 threads, to one with 12 cores and 24 threads. Not only that, but single-threaded performance has also vastly increased in those 10 years. A good example would be building grammalecte, a package I've recently sponsored. I feel it's a good benchmark, since the build relies on single-threaded performance for the normal Python operations, while being threaded when it compiles the dictionaries. On the old computer:
Build needed 00:10:07, 273040k disk space
And as you can see, on the new computer the build time has been significantly reduced:
Build needed 00:03:18, 273040k disk space
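If you want to reproduce this kind of comparison, timing an sbuild run of the same source package on both machines is all it takes; something like the following should do (the .dsc filename is a placeholder, use whatever version is current):
$ sbuild -d unstable grammalecte_<version>.dsc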
Same goes for things like the lintian testsuite. Since it's a very multi-threaded workload, it now takes less than 2 minutes to run; a 750% improvement. All this to say I'm happy with my purchase. And lo and behold I can now build ledger without a hitch, even though it maxes my 24 threads and uses 28GB of RAM. Who would've thought... Screen capture of htop showing how much resources ledger takes to build

  1. I managed to fry that PC's motherboard in 2016 and later replaced it with a brand new one. I also upgraded the storage along the way, from a very cheap cacheless 120GB SSD to a larger Samsung 850 EVO SATA drive.
  2. As it turns out, ledger is mostly written in C++ :)

29 March 2021

Louis-Philippe Véronneau: Montreal 2021 BSP

Last weekend Debian Quebec held a Bug Squashing Party to try to fix some bugs in the upcoming Debian Bullseye. I wasn't convinced at first, but Tassia's contagious energy and willingness to help organise the event eventually won me over. And shockers! It was really fun. Group picture of the BSP attendees on Jitsi Meet We fixed a couple of RC bugs, held lightning talks and had a virtual pizza party! My lightning talk on autopkgtests was well received and a few people decided to migrate to sbuild and enable autopkgtests by default. Sergio's talk on debuginfod was incredibly interesting. I'm not a C programmer and the live demo made me understand how this service can help make debugging C easier. Jerome's talk on using Yubikeys to unlock LUKS encrypted drives was also very good! It also served as a reminder that Yubico's products are much more featureful and convenient to use than other Open Hardware / Free Software hardware tokens. Hopefully that will change as enterprises like Nitrokey and Solokey mature. This was my third BSP, crazy how time flies... With the Bullseye release closing in, you should try to join or organise one!

13 March 2021

Louis-Philippe Véronneau: Preventing an OpenPGP Smartcard from caching the PIN eternally

While I'm overall very happy about my migration to an OpenPGP hardware token, the process wasn't entirely seamless and I had to hack around some issues, for example the PIN caching behavior in GnuPG. As described in this bug, the cache-ttl parameter in GnuPG is not implemented and thus does nothing. This means once you type in your PIN, it is cached for as long as the token is plugged in. Security-wise, this is not great. Instead of manually disconnecting the token frequently, I've come up with a script that restarts scdaemon if the token hasn't been used during the last X minutes. It seems to work well and I call it using this cron entry:
*/5 * * * * my_user /usr/local/bin/restart-scdaemon
To get a log from scdaemon, you'll need a ~/.gnupg/scdaemon.conf file that looks like this:
debug-level basic
log-file /var/log/scdaemon.log
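Once that file is in place, scdaemon has to be reloaded (or the token replugged) for the options to take effect; the same gpgconf call the script below uses should do the trick:
$ gpgconf --reload scdaemon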
Hopefully it can be useful to others!
 #!/usr/bin/python3
# Copyright 2021, Louis-Philippe Véronneau <pollo@debian.org>
#
# This script is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
# 
# This script is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
# 
# You should have received a copy of the GNU General Public License along with
# this script. If not, see <http://www.gnu.org/licenses/>.
"""
This script restarts scdaemon after X minutes of inactivity to reset the PIN
cache. It is meant to be run by cron every X/2 minutes.
This is needed because there is currently no way to set a cache time for
smartcards. See https://dev.gnupg.org/T3362#137811 for more details.
"""
import os
import sys
import subprocess
from datetime import datetime, timedelta
from argparse import ArgumentParser
p = ArgumentParser(description=__doc__)
p.add_argument('-l', '--log', default="/var/log/scdaemon.log",
               help='Path to the scdaemon log file.')
p.add_argument('-t', '--timeout', type=int, default="10",
               help=("Desired cache time in minutes."))
args = p.parse_args()
def get_last_line(scdaemon_log):
    """Returns the last line of the scdameon log file."""
    with open(scdaemon_log, 'rb') as f:
        f.seek(-2, os.SEEK_END)
        while f.read(1) != b'\n':
            f.seek(-2, os.SEEK_CUR)
        last_line = f.readline().decode()
    return last_line
def check_time(last_line, timeout):
    """Returns True if scdaemon hasn't been called since the defined timeout."""
    # We don't need to restart scdaemon if no gpg command has been run since
    # the last time it was restarted.
    should_restart = True
    if "OK closing connection" in last_line:
        should_restart = False
    else:
        last_time = datetime.strptime(last_line[:19], '%Y-%m-%d %H:%M:%S')
        now = datetime.now()
        delta = now - last_time
        if delta <= timedelta(minutes = timeout):
            should_restart = False
    return should_restart
def restart_scdaemon(scdaemon_log):
    """Restart scdaemon and verify the restart process was successful."""
    subprocess.run(['gpgconf', '--reload', 'scdaemon'], check=True)
    last_line = get_last_line(scdaemon_log)
    if "OK closing connection" not in last_line:
        sys.exit("Restarting scdameon has failed.")
def main():
    """Main function."""
    last_line = get_last_line(args.log)
    should_restart = check_time(last_line, args.timeout)
    if should_restart:
        restart_scdaemon(args.log)
if __name__ == "__main__":
    main()
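For completeness, to match the cron entry above the script needs to be saved as /usr/local/bin/restart-scdaemon and made executable; assuming you saved it locally as restart-scdaemon, something along these lines should work:
$ sudo install -m 0755 restart-scdaemon /usr/local/bin/restart-scdaemon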

7 March 2021

Louis-Philippe Véronneau: New Year, New OpenPGP Key

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Sun, 07 Mar 2021 13:00:17 -0500
I've recently set up a new OpenPGP key and will be transitioning away from my
old one.
It is a chance for me to start using an OpenPGP hardware token and to transition
to a new personal email address (my main public contact is still my
 @debian.org  address).
Please note that I've partially redacted some email addresses from this
statement to minimise the amount of spam I receive. It shouldn't be hard for
actual humans to follow the instructions below to find the complete addresses.
The old key will continue to be valid for a few months, but will eventually be
revoked.
You might know my old OpenPGP certificate as:
pub   rsa4096/0x7AEAC4EC6AAA0A97 2014-12-22 [expires: 2021-06-02]
      Key fingerprint = 677F 54F1 FA86 81AD 8EC0  BCE6 7AEA C4EC 6AAA 0A97
uid       Louis-Philippe Véronneau <REDACTED@riseup.net>
uid       Louis-Philippe Véronneau (alias) <REDACTED@riseup.net>
uid       Louis-Philippe Véronneau (debian) <REDACTED@debian.org>
My new OpenPGP certificate is:
pub   ed25519/0xE1E5457C8BAD4113 2021-03-06 [expires: 2022-03-06]
      Key fingerprint = F64D 61D3 21F3 CB48 9156  753D E1E5 457C 8BAD 4113
uid       Louis-Philippe Véronneau <REDACTED@veronneau.org>
uid       Louis-Philippe Véronneau <REDACTED@debian.org>
These days, I mostly use my key for Debian and to sign git commits. I don't
really expect you to sign my new key if you had signed my old one.
I've published the new certificate on keys.openpgp.org as well as on my
personal website. You can fetch it like this:
    $ wget -O- https://veronneau.org/media/openpgp.key | gpg --import
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEZ39U8fqGga2OwLzmeurE7GqqCpcFAmBFFM8ACgkQeurE7Gqq
CpcuchAAscAeszdtA+TlCI4YvK5nlk+nJnCnNBSnl7Et+jiNjq8kB/Fud+dWMTXC
Zag8oJkalbbxub0BT0bEAn+BiBunu58E0gd0Xq4syTbqZ5o5IN17S/tfxCD0k1hf
ewrnYZ2l0i5g4YvHGKC+Xv4D+Z84BylnIRaXHqlUdluOVfVYDfLybOAqoktO/KUH
I+vQBwXj0Fr/QAtgiz5Nwh/YHFiU9xMSvr5ozRwAFs6+xfIqFHuVPRRkEN5iVo4D
kkMIz+kFfkoh4aWIP4dgAu39XnEgxwTR9J+4yE8TzCCMzO7xCK0X6vqgPAxYMPvb
RuP4FnGWOnGnlcudCUAUkOaryrwRi+dPQTnNICHTYsvVc7dg+W0EhVUkwEuuEwpI
qtcB/Y5AGhqK0Cc11uXiFjIQwLTgwcUez4F0xrGeqsTtAM5gyRup2w0jbocTuYSh
ZRv/2zwrq/S3xVrUYGqdT+L5odmkBzz9zOwY5WlU2H9CMFOdh71XOv9wWQXan9ou
hLRodeOQ8MinIBP+sX36ol1zg/aP7mCHvRRSBzWt7l3WhVxgZFpNwIfp/RZqU0R4
IEq48mntFhPvHJjFmAKLKK/ckzNMtSn+HWQPJV3HTInKCTu5PTNMU3SAvPHOHEps
V6WWSOPB+1Lm/tlIULDc+0SopWoiWO4NObCSs8zMZHlYPBk5x/KIdQQBFgoAHRYh
BMqnQAcHqBawIC/DzfQlelCyHPqFBQJgRRTPAAoJEPQlelCyHPqFFVEA/1qScaAk
O+eBEE4q0BaJDsqweCS1XCcuQGkQCKi5Zv6kAQChQ96Ve7cKbN/wRkT9pdIhmx01
+CmIsnp3k6N0ZYLLCg==
=onl0
-----END PGP SIGNATURE-----
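If you import the new certificate, you can double-check that the fingerprint matches the one quoted above; assuming a standard gpg setup, something like this should display it:
$ gpg --fingerprint 0xE1E5457C8BAD4113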

21 February 2021

Louis-Philippe Véronneau: dput-ng or: How I Learned to Stop Worrying and Love the Hooks

As my contributions to Debian continue to grow in number, I find myself uploading to the archive more and more often. Although I'm pretty happy with my current sbuild-based workflow, twice in the past few weeks I inadvertently made a binary upload instead of a source-only one.1 As it turns out, I am not the only DD who has had this problem before. As Nicolas Dandrimont kindly pointed out to me, dput-ng supports pre and post upload hooks that can be used to lint your uploads. Even better, it also ships with a check-debs hook that lets you block binary uploads. Pretty neat, right? In a perfect world, enabling the hook would only be a matter of adding it in the hook list of /etc/dput.d/metas/debian.json and using the following defaults:
"check-debs":  
    "enforce": "source",
    "skip": false
 ,
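For reference, enabling the hook means listing it alongside the hooks already present in that file's hook array; a rough sketch of the relevant bit of /etc/dput.d/metas/debian.json might look like this (untested, and keep whatever hooks are already listed):
"hooks": [
    "check-debs"
],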
Sadly, bug #983160 currently makes this whole setup more complex than it should be and forces me to use two different dput-ng profiles pointing to two different files in /etc/dput.d/metas: a default source-only one (ftp-master) and a binary upload one (ftp-master-binary). Otherwise, one could use a single profile that disallows binary uploads and when needed, override the hook using something like this:
$ dput --override "check-debs.enforce=debs" foo_1.0.0-1_amd64.changes
I did start debugging the --override issue in dput-ng, but I'm not sure I'll have time to submit a patch anytime soon. In the meantime, I'm happy to report I shouldn't be uploading the wrong .changes file by mistake again!
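With the two profiles set up as described above, explicitly picking the binary-upload one when needed would look something like this (the profile name is the one I mentioned, the .changes file is just an example):
$ dput ftp-master-binary foo_1.0.0-1_amd64.changes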

  1. Thanks to Holger Levsen and Adrian Bunk for catching those and notifying me.

17 February 2021

Louis-Philippe Véronneau: What are the incentive structures of Free Software?

When I started my Master's degree in January 2018, I was confident I would be done in a year and a half. After all, I only had one year of classes and I figured 6 months to write a thesis would be plenty. Three years later, I'm finally done: the final version of my thesis was accepted on January 22nd 2021. My thesis, entitled What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model, can be found here1. If you care about such things, both the data and the final document can be built from source with the code in this git repository. Results and analysis My thesis is divided into four main sections:
  1. an introduction to FOSS
  2. a chapter discussing the incentive structures of Free Software (and arguing the so called Tragedy of the Commons isn't inevitable)
  3. a chapter trying to use empirical data to validate the theories presented in the previous chapter
  4. an annex on the various FOSS business models
If you're reading this blog post, chances are you'll find both sections 1 and 4 a tad boring, as you might already be familiar with these concepts. Incentives So, why do people contribute to Free Software? Unsurprisingly, it's complicated. Many economists have studied this topic, but for some reason, most research happened in the early 2000s. Although papers don't all agree with each other, most importantly about the variables' importance, the main incentives2 can be summarized as: monetary gain expectancy, writing code for fun or as a hobby, the sense of belonging to a community, and altruism. Giving weights to these variables is not an easy thing: the FOSS ecosystem is highly heterogeneous and thus people tend to write FOSS for different reasons. Moreover, incentives tend to shift with time as the ecosystem does. People writing Free Software in the 1990s probably did it for different reasons than people in 2021. These four variables can also be divided into two general categories: extrinsic and intrinsic incentives. Monetary gain expectancy is an extrinsic incentive (its value is delayed and mediated), whereas the three other ones are intrinsic (they have an immediate value by themselves). Empirical analysis Theory is nice, but it's even better when you can back it up with data. Sadly, most of the papers on the economic incentives of FOSS are either purely theoretical, or use sample sizes so small they might as well be. Using the data from the StackOverflow 2018 survey, I thus tried to see if I could somehow confirm my previous assumptions. With 129 questions and more than 100 000 respondents (which after statistical processing yields between 28 000 and 39 000 observations per variable of interest), the StackOverflow 2018 survey is a very large dataset compared to what economists are used to working with. Sadly, it wasn't entirely enough to come up with hard answers. There is a strong and significant correlation between writing Free Software and having a higher salary, but endogeneity problems3 made it hard to give a reliable estimate of how much money this would represent. Same goes for writing code as a hobby: it seems there is a strong and significant correlation, but the exact numbers I came up with cannot really be trusted. The results on community as an incentive to writing FOSS were the ones that surprised me the most. Although I expected the relation to be quite strong, the coefficients predicted were in fact quite small. I theorise this is partly due to only 8% of the respondents declaring they didn't feel like they belonged in the IT community. With such a high level of adherence, the margin for improvement has to be smaller. As for altruism, I wasn't able to get any meaningful results. In my opinion this is mostly due to the fact that there was no explicit survey question on this topic and I tried to make up for it by cobbling data together. Kinda anti-climactic, isn't it? I would've loved to come up with decisive conclusions on this topic, but if there's one thing I learned while writing this thesis, it is that I don't know much after all.

  1. Note that the thesis is written in French.
  2. Of course, life is complex and so are people's motivations. One could come up with a dozen more reasons why people contribute to Free Software. The "fun" of theoretical modelisation is trying to make complex things somewhat simpler.
  3. I'll spare you the details, but this means there is no way to know if this correlation is the result of a causal link between the two variables. There are ways to deal with this problem (using an instrumental variables model is a very popular one), but again, the survey didn't provide the proper instruments to do so. For example, it could very well be the correlation is due to omitted variables. If you are interested in this topic (and can read French), I talk about this issue in section 3.2.8.

23 January 2021

Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data, Revisited

In 2019, I got curious and asked the Société de Transport de Montréal, Montreal's transit agency, for the foot traffic data of Montreal's subway. Since then, two years have passed and with COVID-19 still going strong, I wanted to see what impact the pandemic had had. And oh boy, what an impact it is. So here it is, data from 2001 to 2020, graphed the same way as in the original 2019 blog post. I could certainly juice this data, graph the pandemic using daily figures and come up with a long and interesting blog post analysing the main trends. I start teaching next Monday though and I still have prep work to do, so I'll leave that to someone else. By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic. Interactive Map of Montreal's Subway

9 January 2021

Louis-Philippe Véronneau: puppetserver 6: a Debian packaging post-mortem

I have been a Puppet user for a couple of years now, first at work, and eventually for my personal servers and computers. Although it can have a steep learning curve, I find Puppet both nimble and very powerful. I also prefer it to Ansible for its speed and the agent-server model it uses. Sadly, Puppet Labs hasn't been the most supportive upstream and tends to move pretty fast. Major versions rarely last for a whole Debian Stable release and the upstream .deb packages are full of vendored libraries.1 Since 2017, Apollon Oikonomopoulos has been the one doing most of the work on Puppet in Debian. Sadly, he's had less time for that lately and with Puppet 5 being deprecated in January 2021, Thomas Goirand, Utkarsh Gupta and I have been trying to package Puppet 6 in Debian for the last 6 months. With Puppet 6, the old ruby Puppet server using Passenger is not supported anymore and has been replaced by puppetserver, written in Clojure and running on the JVM. That's quite a large change and although puppetserver does reuse some of the Clojure libraries puppetdb (already in Debian) uses, packaging it meant quite a lot of work. Work in the Clojure team As part of my efforts to package puppetserver, I had the pleasure to join the Clojure team and learn a lot about the Clojure ecosystem. As I mentioned earlier, a lot of the Clojure dependencies needed for puppetserver were already in the archive. Unfortunately, when Apollon Oikonomopoulos packaged them, the leiningen build tool hadn't been packaged yet. This meant I had to rebuild a lot of packages, on top of packaging some new ones. Since then, thanks to the efforts of Elana Hashman, leiningen has been packaged and lets us run the upstream testsuites and create .jar artifacts closer to those upstream releases. During my work on puppetserver, I worked on the following packages:
List of packages
  • backport9
  • bidi-clojure
  • clj-digest-clojure
  • clj-helper
  • clj-time-clojure
  • clj-yaml-clojure
  • cljx-clojure
  • core-async-clojure
  • core-cache-clojure
  • core-match-clojure
  • cpath-clojure
  • crypto-equality-clojure
  • crypto-random-clojure
  • data-csv-clojure
  • data-json-clojure
  • data-priority-map-clojure
  • java-classpath-clojure
  • jnr-constants
  • jnr-enxio
  • jruby
  • jruby-utils-clojure
  • kitchensink-clojure
  • lazymap-clojure
  • liberator-clojure
  • ordered-clojure
  • pathetic-clojure
  • potemkin-clojure
  • prismatic-plumbing-clojure
  • prismatic-schema-clojure
  • puppetlabs-http-client-clojure
  • puppetlabs-i18n-clojure
  • puppetlabs-ring-middleware-clojure
  • puppetserver
  • raynes-fs-clojure
  • riddley-clojure
  • ring-basic-authentication-clojure
  • ring-clojure
  • ring-codec-clojure
  • shell-utils-clojure
  • ssl-utils-clojure
  • test-check-clojure
  • tools-analyzer-clojure
  • tools-analyzer-jvm-clojure
  • tools-cli-clojure
  • tools-reader-clojure
  • trapperkeeper-authorization-clojure
  • trapperkeeper-clojure
  • trapperkeeper-filesystem-watcher-clojure
  • trapperkeeper-metrics-clojure
  • trapperkeeper-scheduler-clojure
  • trapperkeeper-webserver-jetty9-clojure
  • url-clojure
  • useful-clojure
  • watchtower-clojure
If you want to learn more about packaging Clojure libraries and applications, I rewrote the Debian Clojure packaging tutorial and added a section about the quirks of using leiningen without a dedicated dh_lein tool. Work left to get puppetserver 6 in the archive Unfortunately, I was not able to finish the puppetserver 6 packaging work. It is thus unlikely it will make it into Debian Bullseye. If the issues described below are fixed, it would be possible to package puppetserver in bullseye-backports though. So what's left? jruby Although I tried my best (kudos to Utkarsh Gupta and Thomas Goirand for the help), jruby in Debian is still broken. It does build properly, but the testsuite fails with multiple errors. jruby testsuite failures aside, I have not been able to use the jruby.deb the package currently builds in jruby-utils-clojure (testsuite failure). I had the same exact failure with the (more broken) jruby version that is currently in the archive, which leads me to think this is a LOAD_PATH issue in jruby-utils-clojure. More on that below. To try to bypass these issues, I tried to vendor jruby into jruby-utils-clojure. At first I understood vendoring to mean including upstream pre-built artifacts (jruby-complete.jar) and shipping them directly. After talking with people on the #debian-mentors and #debian-ftp IRC channels, I now understand why this isn't a good idea (and why it's not permitted in Debian). Many thanks to the people who were patient and kind enough to discuss this with me and give me alternatives. As far as I now understand it, vendoring in Debian means "to have an embedded copy of the source code in another package". Code shipped that way still needs to be built from source. This means we need to build jruby ourselves, one way or another. Vendoring jruby in another package thus isn't terribly helpful. If fixing jruby the proper way isn't possible, I would suggest trying to build the package using embedded code copies of the external libraries jruby needs to build, instead of trying to use the Debian libraries.2 This should make it easier to replicate what upstream does and to have a final .jar that can be used. jruby-utils-clojure This package is a first-level dependency for puppetserver and is the glue between jruby and puppetserver. It builds fine, but the testsuite fails when using the Debian jruby package. I think the problem is caused by a jruby LOAD_PATH issue. The Debian jruby package plays with the LOAD_PATH a little to try to use Debian packages instead of downloading gems from the web, as upstream jruby does. This seems to clash with the gem-home, gem-path, and jruby-load-path variables in the jruby-utils-clojure package. The testsuite plays around with these variables and some Ruby libraries can't be found. I tried to fix this, but failed. Using the upstream jruby-complete.jar instead of the Debian jruby package, the testsuite passes fine. This package could clearly be uploaded to NEW right now by ignoring the testsuite failures (we're just packaging static .clj source files in the proper location in a .jar). puppetserver jruby issues aside, packaging puppetserver itself is 80% done. Using the upstream jruby-complete.jar artifact, the testsuite fails with a weird Clojure error I'm not sure I understand, but I haven't debugged it for very long. Upstream uses git submodules to vendor puppet (agent), hiera (3), facter and puppet-resource-api for the testsuite to run properly.
I haven't touched that, but I believe there are a couple of ways we could deal with it. Without the testsuite actually running, it's hard to know what files are needed in those packages. What now Puppet 5 is now deprecated. If you or your organisation cares about Puppet in Debian,3 puppetserver really isn't far away from making it into the archive. Very talented Debian Developers are always eager to work on these issues and can be contracted for very reasonable rates. If you're interested in contracting someone to help iron out the last issues, don't hesitate to reach out. As for me, I'm happy to say I got a new contract and will go back to teaching Economics for the Winter 2021 session. I might help out with some general Debian packaging work from time to time, but it'll be as a hobby instead of a job. Thanks The work I did during the last 6 weeks would not have been possible without the support of the Wikimedia Foundation, who were gracious enough to contract me. My particular thanks to Faidon Liambotis, Moritz Mühlenhoff and John Bond. Many, many thanks to Rob Browning, Thomas Goirand, Elana Hashman, Utkarsh Gupta and Apollon Oikonomopoulos for their direct and indirect help, without which all of this wouldn't have been possible.

  1. For example, the upstream package for the Puppet Agent vendors OpenSSL.
  2. One of the problems of using Ruby libraries already packaged in Debian is that jruby currently only supports Ruby 2.5. Ruby libraries in Debian are currently expected to work with Ruby 2.7, with the transition to Ruby 3.0 planned after the Bullseye release.
  3. If you run Puppet, you clearly should care: the .deb packages upstream publishes really aren't great and I would not recommend using them.

17 November 2020

Louis-Philippe Véronneau: A better git diff

A few days ago I wrote a quick patch and missed a dumb mistake that made the program crash. When reviewing the merge request on Salsa, the problem became immediately apparent: Gitlab's diff is much better than what git diff shows by default in a terminal. Well, it turns out that since version 2.9, git has bundled a better pager, diff-highlight. À la Gitlab, it will highlight what changed within the line. The output of git diff using diff-highlight Sadly, even though diff-highlight comes with the git package in Debian, it is not built by default (#925288). You will need to:
$ sudo make --directory /usr/share/doc/git/contrib/diff-highlight
You can then add this line to your .gitconfig file:
[core]
  pager = /usr/share/doc/git/contrib/diff-highlight/diff-highlight | less --tabs=4 -RFX
If you use tig, you'll also need to add this line in your tigrc:
set diff-highlight = /usr/share/doc/git/contrib/diff-highlight/diff-highlight
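If you'd rather try diff-highlight out before changing your .gitconfig, overriding the pager for a single invocation should also work:
$ git -c core.pager='/usr/share/doc/git/contrib/diff-highlight/diff-highlight | less --tabs=4 -RFX' diff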

6 November 2020

Louis-Philippe Véronneau: Book Review: Working in Public by Nadia Eghbal

I have a lot of respect for Nadia Eghbal, partly because I can't help but be jealous of her work on the economics of Free Software1. If you are not already familiar with Eghbal, she is the author of Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure, a great technical report published for the Ford Foundation in 2016. You may also have caught her excellent keynote at LCA 2017, entitled Consider the Maintainer. Her latest book, Working in Public: The Making and Maintenance of Open Source Software, published by Stripe Press a few months ago, is a great read and if this topic interests you, I highly recommend it. The book itself is simply gorgeous; bright orange, textured hardcover binding, thick paper, wonderful typesetting: it has everything to please. Well, nearly everything. Sadly, it is only available on Amazon, exclusively in the United States. A real let down for a book on Free and Open Source Software. The book is divided into five chapters, namely:
  1. Github as a Platform
  2. The Structure of an Open Source Project
  3. Roles, Incentives and Relationships
  4. The Work Required by Software
  5. Managing the Costs of Production
A picture of the book cover Contrary to what I was expecting, the book feels more like an extension of the LCA keynote I previously mentioned than Roads and Bridges. Indeed, as made apparent by the following quote, Eghbal doesn't believe funding to be the primary problem of FOSS anymore:
We still don't have a common understanding about who's doing the work, why they do it, and what work needs to be done. Only when we understand the underlying behavioral dynamics of open source today, and how it differs from its early origins, can we figure out where money fits in. Otherwise, we're just flinging wet paper towels at a brick wall, hoping that something sticks. p.184
That is to say, the behavior of maintainers and the challenges they face, not the eternal money problem, is the real topic of this book. And it feels refreshing. When was the last time you read something on the economics of Free Software without it being mostly about what licences projects should pick and how business models can be tacked onto them? I certainly can't remember. To be clear, I'm not sure I agree with Eghbal on this. Her having worked at Github for a few years and having interviewed mostly people in the Ruby on Rails and Javascript communities certainly shows, in the form of a strong selection bias. As she herself admits, this is a book on how software on Github is produced. As much as this choice irks me (the Free Software community certainly cannot be reduced to Github), this exercise had the merit of forcing me to look at my own selection biases. As such, reading Working in Public did something to me I wasn't expecting it to do: it broke my Free Software echo chamber. Although I consider myself very familiar with the world of Free and Open Source Software, I now understand my somewhat ill-advised contempt for certain programming languages, mostly JS, skewed my understanding of what FOSS in 2020 really is. My Free Software world very much revolves around Debian, a project with a strong and opinionated view of Free Software, rooted in a historical and political understanding of the term. This, Eghbal argues, is not the case for a large swath of developers anymore. They are The Github Generation, people attached to Github as a platform first and foremost, and who feel "Open Source" is just a convenient way to make things. Although I could intellectualise this, before reading the book, I didn't really grok how communities akin to npm have been reshaping the modern FOSS ecosystem and how different they are from Debian itself. To be honest, I am not sure I like this tangent and it is certainly part of the reason why I had a tendency to dismiss it as a fringe movement I could safely ignore. Thanks to Nadia Eghbal, I come out of this reading more humble and certainly reminded that FOSS' heterogeneity is real and should not be idly dismissed. This book is rich in content and although I could go on (my personal notes clock in at around 2000 words and I certainly disagree with a number of things), I'll stop here for now. Go and grab a copy already!

  1. She insists on using the term open source, but I won't :)

19 October 2020

Louis-Philippe Véronneau: Musings on long-term software support and economic incentives

Although I still read a lot, during my college sophomore years my reading habits shifted from novels to more academic works. Indeed, reading dry textbooks and economic papers for classes often kept me from reading anything else substantial. Nowadays, I tend to binge read novels: I won't touch a book for months on end, and suddenly, I'll read 10 novels back to back1. At the start of a novel binge, I always follow the same ritual: I take out my e-reader from its storage box, marvel at the fact the battery is still pretty full, turn on the WiFi and check if there are OS updates. And I have to admit, Kobo Inc. (now Rakuten Kobo) has done a stellar job of keeping my e-reader up to date. I've owned this model (a Kobo Aura 1st generation) for 7 years now and I'm still running the latest version of Kobo's Linux-based OS. Having recently had trouble updating my Nexus 5 (also manufactured 7 years ago) to Android 102, I asked myself:
Why is my e-reader still getting regular OS updates, while Google stopped issuing security patches for my smartphone four years ago?
To try to answer this, let us turn to economic incentives theory. Although not the be-all and end-all some think it is3, incentives theory is not a bad tool to analyse this particular problem. Executives at Google most likely followed a very business-centric logic when they decided to drop support for the Nexus 5. Likewise, Rakuten Kobo's decision to continue updating older devices certainly had very little to do with ethics or loyalty to their user base. So, what are the incentives that keep Kobo updating devices and why are they different than smartphone manufacturers'? A portrait of the current long-term software support offerings for smartphones and e-readers Before delving deeper in economic theory, let's talk data. I'll be focusing on 2 brands of e-readers, Amazon's Kindle and Rakuten's Kobo. Although the e-reader market is highly segmented and differs a lot based on geography, Amazon was in 2015 the clear worldwide leader with 53% of the worldwide e-reader sales, followed by Rakuten Kobo at 13%4. On the smartphone side, I'll be differentiating between Apple's iPhones and Android devices, taking Google as the barometer for that ecosystem. As mentioned below, Google is sadly the leader in long-term Android software support. Rakuten Kobo According to their website and to this Wikipedia table, the only e-readers Kobo has deprecated are the original Kobo eReader and the Kobo WiFi N289, both released in 2010. This makes their oldest still supported device the Kobo Touch, released in 2011. In my book, that's a pretty good track record. Long-term software support does not seem to be advertised or to be a clear selling point in their marketing. Amazon According to their website, Amazon has dropped support for all 8 devices produced before the Kindle Paperwhite 2nd generation, first sold in 2013. To put things in perspective, the first Kindle came out in 2007, 3 years before Kobo started selling devices. Like Rakuten Kobo, Amazon does not make promises of long-term software support as part of their marketing. Apple Apple has a very clear software support policy for all their devices:
Owners of iPhone, iPad, iPod or Mac products may obtain service and parts from Apple or Apple service providers for five years after the product is no longer sold, or longer where required by law.
This means in the worst-case scenario of buying an iPhone model just as it is discontinued, one would get a minimum of 5 years of software support. Android Google's policy for their Android devices is to provide software support for 3 years after the launch date. If you buy a Pixel device just before the new one launches, you could theoretically only get 2 years of support. In 2018, Google decided OEMs would have to provide security updates for at least 2 years after launch, threatening not to license Google Apps and the Play Store if they didn't comply. A question of cost structure From the previous section, we can conclude that in general, e-readers seem to be supported longer than smartphones, and that Apple does a better job than Android OEMs, providing support for about twice as long. Even Fairphone, whose entire business is to build phones designed to last and to be repaired, was not able to keep the Fairphone 1 (2013) updated for more than a couple of years and seems to be struggling to keep the Fairphone 2 (2015) running an up-to-date version of Android. Anyone who has ever worked in IT will tell you: maintaining software over time is hard work, and hard work by specialised workers is expensive. Most commercial electronic devices are sold and developed by for-profit enterprises and software support all comes down to a question of cost structure. If companies like Google or Fairphone are expected to provide long-term support for the devices they manufacture, they have to be able to fund their work somehow. In a perfect world, people would be paying for the cost of said long-term support, as it would likely be cheaper than buying new devices every few years and would certainly be better for the planet. Problem is, manufacturers aren't making them pay for it. Economists call this type of problem externalities: things that should be part of the cost of a good, but aren't, for one reason or another. A classic example of an externality is pollution. Clearly pollution is bad and leads to horrendous consequences, like climate change. Sane people agree we should drastically cut our greenhouse gas emissions, and yet, we aren't. Neo-classical economic theory argues the way to fix externalities like pollution is to internalise these costs, in other words, to make people pay for the "real price" of the goods they buy. In the case of climate change and pollution, neo-classical economic theory is plain wrong (spoiler alert: it often is), but this is where band-aids like the carbon tax come from. Still, coming back to long-term software support, let's see what would happen if we were to try to internalise software maintenance costs. We can do this multiple ways. 1 - Include the price of software maintenance in the cost of the device This is the choice Fairphone makes. This might somewhat work out for them since they are a very small company, but it cannot scale for the following reasons:
  1. This strategy relies on you giving your money to an enterprise now, and trusting them to "Do the right thing" years later. As the years go by, they will eventually look at their books, see how much ongoing maintenance is costing them, drop support for the device, apologise and move on. That is to say, enterprises have a clear economic incentive to promise long-term support and not deliver. One could argue a company's reputation would suffer from this kind of behaviour. Maybe sometimes it does, but most often people forget. Political promises are a great example of this.
  2. Enterprises go bankrupt all the time. Even if company X promises 15 years of software support for their devices, if they cease to exist, your device will stop getting updates. The internet is full of stories of IoT devices getting bricked when the parent company goes bankrupt and their servers disappear. This is related to point number 1: to some degree, you have a disincentive to pay for long-term support in advance, as the future is uncertain and there are chances you won't get the support you paid for.
  3. Selling your devices at a higher price to cover maintenance costs does not necessarily mean you will make more money overall (raising more money to fund maintenance costs being the goal here). To a certain point, smartphone models are substitute goods and prices higher than market prices will tend to drive consumers to buy cheaper ones. There is thus a disincentive to include the price of software maintenance in the cost of the device.
  4. People tend to be bad at rationalising the total cost of ownership over a long period of time. Economists call this phenomenon hyperbolic discounting. In our case, it means people are far more likely to buy a 500$ phone every 3 years than a 1000$ phone every 10 years (the toy calculation after this list illustrates why). Again, this means OEMs have a clear disincentive to include the price of long-term software maintenance in their devices.
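To make the hyperbolic discounting argument concrete, here is a toy calculation. It is mine, not taken from any real market data: the prices mirror the example above, while the 30-year horizon and the discount parameter k are arbitrary assumptions.

# Toy illustration of hyperbolic discounting: compare a 500$ phone replaced
# every 3 years with a 1000$ phone replaced every 10 years. The discount
# parameter k and the 30-year horizon are made up for the sake of the example.

def perceived_cost(cost, delay_years, k=0.5):
    # A cost D years in the future "feels" like cost / (1 + k * D)
    return cost / (1 + k * delay_years)

horizon = 30

cheap_nominal = sum(500 for _ in range(0, horizon, 3))       # 5000$ actually spent
durable_nominal = sum(1000 for _ in range(0, horizon, 10))   # 3000$ actually spent

cheap_felt = sum(perceived_cost(500, y) for y in range(0, horizon, 3))
durable_felt = sum(perceived_cost(1000, y) for y in range(0, horizon, 10))

print(cheap_nominal, durable_nominal)          # 5000 3000
print(round(cheap_felt), round(durable_felt))  # 1213 1258

Even though the durable phone is the cheaper option over 30 years, the discounted "felt" cost of the frequent cheap upgrades is lower, which is exactly the bias OEMs benefit from.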
Clearly, life is more complex than how I portrayed it: enterprises are not perfect rational agents, altruism exists, not all enterprises aim solely for profit maximisation, etc. Still, in a capitalist economy, enterprises wanting to charge for software maintenance upfront have to overcome these hurdles one way or another if they want to avoid failing.

2 - The subscription model

Another way companies can try to internalise support costs is to rely on a subscription-based revenue model. This has multiple advantages over the previous option, mainly:
  1. It does not affect the initial purchase price of the device, making it easier to sell them at a competitive price.
  2. It provides a stable source of income, something that is very valuable to enterprises, as it reduces overall risks. This in turn creates an incentive to continue providing software support as long as people are paying.
If this model is so interesting from an economic incentives point of view, why isn't any smartphone manufacturer offering that kind of program? The answer is, they are, but not explicitly5. Apple and Google can fund part of their smartphone software support via the 30% cut they take out of their respective app stores. A report from Sensor Tower shows that in 2019, Apple made an estimated US$ 16 billion from the App Store, while Google raked in US$ 9 billion from the Google Play Store. Although the Fortune 500 ranking tells us this is respectively "only" 5.6% and 6.5% of their gross annual revenue for 2019, the profit margins in this category are certainly higher than on any of their other products.

This means Google and Apple have an important incentive to keep your device updated for some time: if your device works well and is updated, you are more likely to keep buying apps from their store. When software support for a device stops, there is a risk paying customers will buy a competitor device and leave their ecosystem. This also explains why OEMs who don't own app stores tend not to provide software support for very long periods of time. Most of them only make money when you buy a new phone. Providing long-term software support thus becomes a disincentive, as it directly reduces their sale revenues.

Same goes for Kindles and Kobos: the longer your device works, the more money they make with their electronic book stores. In my opinion, it's likely Amazon and Rakuten Kobo produce quarterly cost-benefit reports to decide when to drop support for older devices, based on ongoing support costs and the recurring revenues these devices bring in. Rakuten Kobo is also in a more precarious situation than Amazon is: considering Amazon's much larger market share, if your device stops getting new updates, there is a greater chance people will replace their old Kobo with a Kindle. Again, they have an important economic incentive to keep devices running as long as they are profitable.

Can Free Software fix this?

Yes and no. Free Software certainly isn't a magic wand one can wave to make everything better, but it does provide major advantages in terms of security, user freedom and sometimes costs.

The last piece of the puzzle explaining why Rakuten Kobo's software support is better than Google's is technological choices. Smartphones are incredibly complex devices and have become the main computing platform of many. Similar to the web, there is a race for features and complexity that tends to create bloat and make older devices slow and painful to use. On the other hand, e-readers are simpler devices built for a single task: displaying electronic books.

Control over the platform is also a key aspect of the cost structure of providing software updates. Whereas Apple controls both the software and hardware side of iPhones, Android is a sad mess of drivers and SoCs, all providing different levels of support over time6. If you take a look at the platforms the Kindle and Kobo are built on, you'll quickly see they both use Freescale i.MX SoCs. These processors are well known for their excellent upstream support in the Linux kernel and their relative longevity, chips being produced for either 10 or 15 years. This in turn makes updates much easier and less expensive to provide.

So clearly, open architectures, free drivers and open hardware help tremendously, but they aren't enough on their own. One of the lessons we must learn from the (amazing) LineageOS project is how lack of funding hurts everyone.
If there is no one to do the volunteer work required to maintain a version of LOS for your device, it won't be supported. Worse, when purchasing a new device, users cannot know in advance how many years of LOS support they will get. This makes buying new devices a frustrating hit-and-miss experience. If you are lucky, you will get many years of support. Otherwise, you risk your device becoming an expensive insecure paperweight.

So how do we fix this? Anyone with a brain understands throwing away perfectly good devices every 2 years is not sustainable. Government regulations enforcing a minimum support life would be a step in the right direction, but at the end of the day, Capitalism is to blame. Like the aforementioned carbon tax, band-aid solutions can make things somewhat better, but they won't fix our current economic system's underlying problems. For now though, I'll leave fixing the problem of Capitalism to someone else.

  1. My most recent novel binge has been focused on re-reading the Dune franchise. I first read the 6 novels written by Frank Herbert when I was 13 years old and only had vague and pleasant memories of his work. Great stuff.
  2. I'm back on LineageOS! Nice folks released an unofficial LOS 17.1 port for the Nexus 5 last January and have kept it updated since then. If you are to use it, I would also recommend updating TWRP to this version specifically patched for the Nexus 5.
  3. Very few serious economists actually believe neo-classical rational agent theory is a satisfactory explanation of human behaviour. In my opinion, it's merely a (mostly flawed) lens to try to interpret certain behaviours, a tool amongst others that needs to be used carefully, preferably as part of a pluralism of approaches.
  4. Good data on the e-reader market is hard to come by and is mainly produced by specialised market research companies selling their findings at very high prices. Those particular statistics come from a MarketWatch analysis.
  5. If they were to tell people: You need to pay us 5$/month if you want to receive software updates, I'm sure most people would not pay. Would you?
  6. Coming back to Fairphones, if they had so many problems providing an Android 9 build for the Fairphone 2, it's because Qualcomm never provided Android 7+ support for the Snapdragon 801 SoC it uses.

11 September 2020

Louis-Philippe Véronneau: Hire me!

I'm happy to announce I handed in my Master's Thesis last Monday. I'm not publishing the final copy just yet1, as it still needs to go through the approval committee. If everything goes well, I should have my Master of Economics diploma before Christmas! It sure hasn't been easy, and although I regret nothing, I'm also happy to be done with university.

Looking for a job

What an odd time to be looking for a job, right? Turns out that for the first time in 12 years, I don't have an employer. It's oddly freeing, but also a little scary. I'm certainly not bitter about it though, and it's nice to have some time on my hands to work on various projects and read things other than academic papers. Look out for my next blog posts on using the NeTV2 as an OSHW HDMI capture card, on hacking at security tokens and much more!

I'm not looking for anything long term (I'm hoping to teach Economics again next Winter), but for the next few months, my calendar is wide open.

For the last 6 years, I worked as a Linux system administrator, mostly using a LAMP stack in conjunction with Puppet, Shell and Python. Although I'm most comfortable with Puppet, I also have decent experience with Ansible, thanks to my work in the DebConf Videoteam.

I'm not the most seasoned Debian Developer, but I have some experience packaging Python applications and libraries. Although I'm no expert at it, lately I've also been working on Clojure packages, as I'm trying to get Puppet 6 in Debian in time for the Bullseye freeze. At the rate it's going though, I doubt we're going to make it... If your company depends on Puppet and cares about having a version in Debian 11 that is maintained (Puppet 5 is EOL in November 2020), I'm your guy!

Oh, and I guess I'm a soon-to-be Master of Economics specialising in Free and Open Source Software business models and incentives theory. Not sure I'll ever get paid to put that into practice, but hey, who knows.

If any of that resonates with you, contact me and let's have a chat! I promise I don't bite :)

  1. The title of the thesis is What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model. Once the final copy is approved, I'll be sure to write a longer blog post about my findings here.

23 August 2020

Philipp Kern: Self-service buildd givebacks now use Salsa auth

As client certificates are on the way out and Debian's SSO solution is effectively not maintained any longer, I switched self-service buildd givebacks over to Salsa authentication. It lives again at https://buildd.debian.org/auth/giveback.cgi. For authorization you still need to be in the "debian" group for now, i.e. be a regular Debian member. For convenience, the package status web interface now features an additional column "Actions" with generated "giveback" links.

Please remember to file bugs if you give builds back because of flakiness of the package rather than the infrastructure, and resist the temptation to use this excessively to let your package migrate. We do not want to end up with packages that require multiple givebacks to actually build in stable, as that would hold up both security and stable updates needlessly and complicate development.

16 July 2020

Louis-Philippe Véronneau: DebConf Videoteam Sprint Report -- DebConf20@Home

DebConf20 starts in about 5 weeks, and as always, the DebConf Videoteam is working hard to make sure it'll be a success. As such, we held a sprint from July 9th to 13th to work on our new infrastructure. A remote sprint certainly ain't as fun as an in-person one, but we nonetheless managed to enjoy ourselves. Many thanks to those who participated, namely: We also wish to extend our thanks to Thomas Goirand and Infomaniak for providing us with virtual machines to experiment on and host the video infrastructure for DebConf20.

Advice for presenters

For DebConf20, we strongly encourage presenters to record their talks in advance and send us the resulting video. We understand this is more work, but we think it'll make for a more agreeable conference for everyone. Video conferencing is still pretty wonky and there is nothing worse than a talk ruined by a flaky internet connection or hardware failures. As such, if you are giving a talk at DebConf this year, we are asking you to read and follow our guide on how to record your presentation. Fear not: we are not getting rid of the Q&A period at the end of talks. Attendees will ask their questions either on IRC or on a collaborative pad and the Talkmeister will relay them to the speaker once the pre-recorded video has finished playing.

New infrastructure, who dis?

Organising a virtual DebConf implies migrating from our battle-tested on-premise workflow to a completely new remote one. One of the major changes this means for us is the addition of Jitsi Meet to our infrastructure. We normally have 3 different video sources in a room: two cameras and a slides grabber. With the new online workflow, directors will be able to play pre-recorded videos as a source, will get a feed from a Jitsi room and will see the audience questions as a third source. This might seem simple at first, but is in fact a very major change to our workflow and required a lot of work to implement.
== On-premise ==

  Camera 1 --+
             |
  Slides ----+--> Voctomix --> Backend --+--> Frontend
             |                           +--> Frontend
  Camera 2 --+                           +--> Frontend

== Online ==

  Jitsi              --+
                       |
  Questions -----------+--> Voctomix --> Backend --+--> Frontend
                       |                           +--> Frontend
  Pre-recorded video --+                           +--> Frontend
In our tests, playing back pre-recorded videos to voctomix worked well, but was sometimes unreliable due to inconsistent encoding settings. Presenters will thus upload their pre-recorded talks to SReview so we can make sure there aren't any obvious errors. Videos will then be re-encoded to ensure a consistent encoding and to normalise audio levels. This process will also let us stitch the Q&As at the end of the pre-recorded videos more easily prior to publication.

Reducing the stream latency

One of the pitfalls of the streaming infrastructure we have been using since 2016 is high video latency. In a worst-case scenario, remote attendees could get up to 45 seconds of latency, making participation in events like BoFs arduous. In preparation for DebConf20, we added a new way to stream our talks: RTMP. Attendees will thus have the option of using either an HLS stream with higher latency or an RTMP stream with lower latency. Here is a comparative table that can help you decide between the two protocols:
HLS
  Pros:
    • Can be watched from a browser
    • Auto-selects a stream encoding
    • Single URL to remember
  Cons:
    • Higher latency (up to 45s)

RTMP
  Pros:
    • Lower latency (~5s)
  Cons:
    • Requires a dedicated video player (VLC, mpv)
    • Specific URLs for each encoding setting
Live mixing from home with VoctoWeb

Since DebConf16, we have been using voctomix, a live video mixer developed by the CCC VOC. voctomix is conveniently divided in two: voctocore is the backend server, while voctogui is a GTK+ UI frontend directors can use to live-mix. Although voctogui can connect to a remote server, it was primarily designed to run either on the same machine as voctocore or on the same LAN. Trying to use voctogui from a machine at home to connect to a voctocore running in a datacenter proved unreliable, especially for high-latency and low-bandwidth connections. Inspired by the setup FOSDEM uses, we instead decided to go with a web frontend for voctocore. We initially used FOSDEM's code as a proof of concept, but quickly reimplemented it in Python, a language we are more familiar with as a team. Compared to the FOSDEM PHP implementation, voctoweb implements A/B source selection (akin to voctogui) as well as audio control, two very useful features. In the following screen captures, you can see the old PHP UI on the left and the new shiny Python one on the right.

The old PHP voctoweb / The new Python3 voctoweb

Voctoweb is still under development and is likely to change quite a bit until DebConf20. Still, the current version seems to work well enough to be used in production if you ever need to.

Python GeoIP redirector

We run multiple geographically-distributed streaming frontend servers to minimize the load on our streaming backend and to reduce overall latency. Although users can connect to the frontends directly, we typically point them to live.debconf.org and redirect connections to the nearest server. Sadly, 6 months ago MaxMind decided to change the licence on their GeoLite2 database and left us scrambling. To fix this annoying issue, Stefano Rivera wrote a Python program that uses the new database and reworked our ansible frontend server role (a rough sketch of what such a redirector can look like is included at the end of this post). Since the new database cannot be redistributed freely, you'll have to get a (free) license key from MaxMind if you want to use this role.

Ansible & CI improvements

Infrastructure as code is a living process and needs constant care to fix bugs, follow changes in DSL and to implement new features. All that to say a large part of the sprint was spent making our ansible roles and continuous integration setup more reliable, less buggy and more featureful. All in all, we merged 26 separate ansible-related merge requests during the sprint! As always, if you are good with ansible and wish to help, we accept merge requests on our ansible repository :)
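For the curious, here is a rough, minimal sketch of what a GeoIP redirector like the one described above could look like. To be clear, this is not the actual DebConf code: the frontend list, coordinates, database path and port are all made-up assumptions, and it relies on the geoip2 Python library together with a locally downloaded GeoLite2-City database.

# Minimal GeoIP redirector sketch: look up the client's location in a
# GeoLite2 database and redirect to the closest streaming frontend.
# The frontends and their coordinates below are purely hypothetical.
import math
from http.server import BaseHTTPRequestHandler, HTTPServer

import geoip2.database  # pip install geoip2

FRONTENDS = {
    "https://fsn.live.debconf.example.org": (50.47, 12.37),   # made-up
    "https://bhs.live.debconf.example.org": (45.31, -73.86),  # made-up
}

reader = geoip2.database.Reader("/var/lib/GeoIP/GeoLite2-City.mmdb")

def angular_distance(a, b):
    # Rough great-circle distance, good enough to pick the nearest server
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    return math.acos(min(1.0, math.sin(lat1) * math.sin(lat2)
                         + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)))

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            loc = reader.city(self.client_address[0]).location
            client = (loc.latitude, loc.longitude)
            target = min(FRONTENDS, key=lambda url: angular_distance(FRONTENDS[url], client))
        except Exception:
            target = next(iter(FRONTENDS))  # unknown location: fall back to a default
        self.send_response(302)
        self.send_header("Location", target + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()

The real role obviously needs health checks, logging and a smarter fallback, but the core idea (look up the client, pick the nearest frontend, redirect) fits in a few lines.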

11 June 2020

Louis-Philippe Véronneau: How to capture a remote IRC session live

DebConf20 will be held online this year and I've started doing some work for the DebConf videoteam to prepare what's to come. One thing I want us to do is capture a live IRC session and use it as a video input in Voctomix, the live video mixer we use. This way, at the end of a talk, we could show both the attendees asking questions on IRC and the presenter replying to them side-by-side.

A mockup of a side-by-side voctogui window with someone on the left and a terminal running weechat on the right

Capturing a live video of an IRC client on a remote headless server is somewhat more complicated than you might think; as far as I know, neither ffmpeg nor gstreamer support recording a live ssh pseudoterminal1. Worse, neither weechat nor irssi run on X: they use ncurses... Although you can capture an X11 window with ffmpeg -f x11grab, I wasn't able to get them to run with Xvfb.

Capturing the framebuffer

One thing I dislike with this method is the framebuffer isn't always easy to access on remote machines. If you don't have a serial connection, you can try using a VNC server that can access the framebuffer. I did my tests in a VM on a KVM hypervisor and used virt-manager to access the framebuffer. I had a hard time setting the framebuffer resolution to a 16:9 aspect ratio. The winning combination was passing the nomodeset kernel parameter at boot and setting these parameters in /etc/default/grub2:
GRUB_GFXMODE=1280x720
GRUB_GFXPAYLOAD_LINUX=keep
To make the text more readable, this is the /etc/default/console-setup file that seemed to make the most sense:
# CONFIGURATION FILE FOR SETUPCON
# Consult the console-setup(5) manual page.
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="Lat15"
FONTFACE="TerminusBold"
FONTSIZE="12x24"
Once that is done, the only thing left is to run the IRC client and launch ffmpeg. The magic command to record the framebuffer seems to be something like:
ffmpeg -f fbdev -framerate 60 -i /dev/fb0 -c:v libvpx -crf 10 -b:v 1M -auto-alt-ref 0 output.webm
Here is what I ended up with:

  1. We need something similarly flexible and featureful that can output to a TCP socket.
  2. Don't forget to run update-grub before rebooting!

8 April 2020

Louis-Philippe Véronneau: Using Jitsi Meet with Puppet for self-hosted video conferencing

Here's a blog post I wrote for the puppet.com blog. Many thanks to Ben Ford and all their team!

With everything that is currently happening around the world, many of us IT folks have had to solve complex problems in a very short amount of time. Pretty quickly at work, I was tasked with finding a way to make virtual meetings easy, private and secure. Whereas many would have turned to a SaaS offering, we decided to use Jitsi Meet, a modern and fully on-premise FOSS videoconferencing solution. Jitsi works on all platforms by running in a browser and comes with nifty Android and iOS applications. We've been using our instance quite a bit, and so far everyone from technical to non-technical users has been pretty happy with it.

Jitsi Meet is powered by WebRTC and can be broken into multiple parts across multiple machines if needed. In addition to the webserver running the Jitsi Meet JavaScript code, the base configuration uses the Videobridge to manage users' video feeds, Jicofo as a conference focus to manage media sessions and the Prosody XMPP server to tie it all together. Here's a network diagram I took from their documentation to show how those applications interact:

A network diagram that shows how the different bits of Jitsi Meet work together

Getting started with the Jitsi Puppet module

First of all, you'll need a valid domain name and a server with decent bandwidth. Jitsi has published a performance evaluation of the Videobridge to help you spec your instance appropriately. You will also need to open TCP ports 443 and 4443 and UDP port 10000 in your firewall. The puppetlabs/firewall module could come in handy here. Once that is done, you can use the smash/jitsimeet Puppet module on a Debian 10 (Buster) server to spin up an instance. A basic configuration would look like this:
  class { 'jitsimeet':
    fqdn                 => 'jitsi.example.com',
    repo_key             => puppet:///files/apt/jitsimeet.gpg,
    manage_certs         => true,
    jitsi_vhost_ssl_key  => '/etc/letsencrypt/live/jitsi.example.com/privkey.pem',
    jitsi_vhost_ssl_cert => '/etc/letsencrypt/live/jitsi.example.com/cert.pem',
    auth_vhost_ssl_key   => '/etc/letsencrypt/live/auth.jitsi.example.com/privkey.pem',
    auth_vhost_ssl_cert  => '/etc/letsencrypt/live/auth.jitsi.example.com/cert.pem',
    jvb_secret           => 'mysupersecretstring',
    focus_secret         => 'anothersupersecretstring',
    focus_user_password  => 'yetanothersecret',
    meet_custom_options  => {
      'enableWelcomePage'         => true,
      'disableThirdPartyRequests' => true,
    },
  }
The jitsimeet module is still pretty young: it clearly isn't perfect and some external help would be greatly appreciated. If you have some time, here are a few things that would be nice to work on: If you use this module to manage your Jitsi Meet instance, please send patches and bug reports our way!

31 March 2020

Chris Lamb: Free software activities in March 2020

Here is my monthly update covering what I have been doing in the free software world during March 2020 (previous month): In addition, I did even more hacking on the Lintian static analysis tool for Debian packages, including:
Reproducible builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I: In our tooling, I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, including preparing and uploading version 138 to Debian: The Reproducible Builds project also operates a fully-featured and comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, I reworked the web-based package rescheduling tool to:
Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 8 hours on its sister Extended LTS project. You can find out more about the Debian LTS project via the following video:
Debian Uploads

For the Debian Privacy Maintainers team I requested that the pyptlib package be removed from the archive (#953429) as well as uploading onionbalance (0.1.8-6) to fix test failures under Pytest 3.x (#953535) and a new upstream release of nautilus-wipe. Finally, I sponsored an upload of bilibop (0.6.1) on behalf of Yann Amar.

30 March 2020

Louis-Philippe Véronneau: Using Zoom's web client on Linux

TL;DR: The Zoom meeting link you have probably looks like this:
https://zoom.us/j/123456789
To use the web client, use this instead:
https://zoom.us/wc/join/123456789
Foreword

Like too many institutions, the school where I teach chose to partner up with Zoom. I wasn't expecting anything else, as my school's IT department is a Windows shop. Well, I guess I'm still a little disappointed.

Although I had vaguely heard of Zoom before, I had never thought I'd be forced to use it. Lucky for me, my employer decided not to force us to use it. To finish the semester, I plan to record myself and talk with my students on a Jitsi Meet instance. I will still have to attend meetings on Zoom though. I'm well aware of Zoom's bad privacy record and I will not install their desktop application.

Zoom does offer a web client. Sadly, on Linux you need to jump through hoops to be able to use it.

Using Zoom's web client on Linux

Zoom's web client apparently works better on Chrome, so I decided to use Chromium. Without already having the desktop client installed on your machine, the standard procedure to use the web client would be:
  1. Open the link to the meeting in Chromium
  2. Click on the "download & run Zoom" link showed on the page
  3. Click on the "join from your browser" link that then shows up
Sadly, that's not what happens on Linux. When you click on the "download & run Zoom" link, it brings you to a page with instructions on how to install the desktop client on Linux. You can thwart that stupid behaviour by changing your browser's user agent to make it look like you are using Windows. This is the UA string I've been using:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36
With that, when you click on the "download & run Zoom" link, it will try to download a .exe file. Cancel the download and you should now see the infamous "join from your browser" link. Upon closer inspection, it seems you can get to the web client simply by changing the meeting's URL. The Zoom meeting link you have probably looks like this:
https://zoom.us/j/123456789
To use the web client, use this instead:
https://zoom.us/wc/join/123456789
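If you have to do this often, a tiny helper can do the rewriting for you. This is just a convenience snippet of mine, not something Zoom provides; it merely applies the URL substitution shown above:

# Rewrite a regular Zoom invite URL into its web client equivalent.
import re

def zoom_web_client_url(url):
    # /j/<meeting id> becomes /wc/join/<meeting id>
    return re.sub(r"/j/(\d+)", r"/wc/join/\1", url)

print(zoom_web_client_url("https://zoom.us/j/123456789"))
# -> https://zoom.us/wc/join/123456789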
Jitsi Meet Puppet Module

I've been playing around with Jitsi Meet quite a bit recently and I've written a Puppet module to install and configure an instance! The module certainly isn't perfect, but it should yield a working Jitsi instance. If you already have a Puppet setup, please give it a go! I'm looking forward to receiving feedback (and patches) to improve it.
