Search Results: "pm"

6 February 2022

Jonathan McDowell: Free Software Activities for 2021

About a month later than I probably should have posted it, here's a recap of my Free Software activities in 2021. For previous years see 2019 + 2020. Again, this year had fewer contributions than I'd like thanks to continuing fatigue about the state of the world, and trying to work on separation between work and leisure while working from home. I've made some effort to improve that balance but it's still a work in progress.

Conferences No surprise, I didn't attend any in-person conferences in 2021. I find virtual conferences don't do a lot for me (a combination of my not carving time out for them in the same way, because not being at the conference means other things will inevitably intrude, and the lack of the social side) but I did get to attend a few of the DebConf21 talks, which was nice. I'm hoping to make it to DebConf22 this year in person.

Debian Most of my contributions to Free software continue to happen within Debian. As part of the Data Protection Team I responded to various inbound queries to that team. Some of this involved chasing up other project teams who had been slow to respond - folks, if you're running a service that stores personal data about people then you need to be responsive to requests about it. Some of this was dealing with what look like automated scraping tools which send no information about the person making the request, and in all the cases we've seen so far there's been no indication of any data about that person on any systems we have access to. Further team time was wasted dealing with the Princeton-Radboud Study on Privacy Law Implementation (though Matthew did the majority of the work on this).

The Debian Keyring was possibly my largest single point of contribution. We're in a roughly 3 month rotation of who handles the keyring updates, and I handled 2021.03.24, 2021.04.09, 2021.06.25, 2021.09.25 + 2021.12.24.

For Debian New Members I'm mostly inactive as an application manager - we generally seem to have enough available recently. If that changes I'll look at stepping in to help, but I don't see that happening. I continue to be involved in Front Desk, having various conversations throughout the year with the rest of the team, but there's no doubt Mattia and Pierre-Elliott are the real doers at present. I did take part in an NM Committee appeals process.

In terms of package uploads I continued to work on gcc-xtensa-lx106, largely doing uploads to deal with updates to the GCC version or packaging (8 + 9). sigrok had a few minor updates, libsigrok 0.5.2-3, pulseview 0.4.2-3 as well as a new upstream release of sigrok CLI 0.7.2-1. There was a last minute pre-release upload of libserialport 0.1.1-4 thanks to a kernel change in v5.10.37 which removed termiox support. Despite still not writing any VHDL these days I continue to keep an eye on ghdl, because I found it a useful tool in the past. Last year that was just a build fix for LLVM 11.1.0 - 1.0.0+dfsg+5. Andreas Bombe has largely taken over more proactive maintenance, which is nice to see. I uploaded OpenOCD 0.11.0~rc1-2, cleaning up some packaging / dependency issues. This was followed by 0.11.0~rc2-1 as a newer release candidate. Sadly 0.11.0 did not make it in time for bullseye, but rc2 was fairly close and I uploaded 0.11.0-1 once bullseye was released. Finally I did a drive-by upload for garmin-forerunner-tools 0.10repacked-12, cleaning up some packaging issues and uploading it to salsa. My Forerunner 305 has died (after 11 years of sterling service) and the Forerunner 45 I've replaced it with uses a different set of tools, so I decided it didn't make sense to pick up longer term ownership of the package.

Linux My Linux contributions continued to revolve around pushing MikroTik RB3011 support upstream. There was a minor code change to Set FIFO sizes for ipq806x (which fixed up the allowed MTU for the internal switch + VLANs). The rest was DTS related - adding ADM DMA + NAND definitions now that the ADM driver was merged, adding tsens details, adding USB port info and adding the L2CC and RPM details for IPQ8064. Finally I was able to update the RB3011 DTS to enable NAND + USB. With all those in I'm down to 4 local patches against a mainline kernel, all of which are hacks that aren't suitable for submission upstream. 2 are for patching in details of the root device and ethernet MAC addresses, one is dealing with the fact the IPQ8064 has some reserved memory that doesn't play well with AUTO_ZRELADDR (there keep being efforts to add some support for this via devicetree, but unfortunately they get shot down every time), and the final one is a hack to turn off the LCD backlight by treating it as an LED (actually supporting the LCD properly is on my TODO list).

Personal projects 2021 didn't see any releases of onak. It's not dead, just resting, but Sequoia PGP is probably where you should be looking for a modern OpenPGP implementation. I continued work on my Desk Viking project, which is an STM32F103 based debug tool inspired by the Bus Pirate. The main addition was some CCLib support (forking it in the process to move to Python 3 and add some speed ups) to allow me to program my Zigbee dongles, but I also added some 1-Wire search logic and some support for Linux emulation mode with VCD output to allow for a faster development cycle. I really want to try and get OpenOCD JTAG mode supported at some point, and have vague plans for an STM32F4 based version that have suffered from a combination of a silicon shortage and a lack of time. That wraps up 2021. I'd like to say I'm hoping to make more Free Software contributions this year, but I don't have a concrete plan yet for how that might happen, so I'll have to wait and see.

5 February 2022

Thorsten Alteholz: My Debian Activities in January 2022

FTP master This month I accepted 342 and rejected 57 packages. The overall number of packages that got accepted was 366. Lately I was asked: Is it ftpmaster's opinion and policy that there is no difference in NEW queue review process between bin and src? This is a yes/no question and in this generality the answer is clearly: every package in NEW needs a full review. Of course there are circumstances with exceptions. For example after an upload of -1, which would get a full review, the upload of -2 afterwards, introducing a new binary package, would get a much faster review. In this case it would make sense to ping on IRC and draw attention to this. Nevertheless the evaluation of a light review might differ between the maintainer and the person doing the review. Debian LTS This was my ninety-first month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 40h. During that time I did LTS and normal security uploads of: I also started to work on security support for golang packages. Though this sounds like an easy task, the devel is in the details.
As CVEs need to be fixed in unstable first, at the moment it looks like this is the most time consuming task. I will report later on my journey to fix open CVEs in golang-github-russellhaering-goxmldsig. Further I worked on packages in NEW on security-master and injected missing sources. Last but not least I did some days of frontdesk duties and attended an LTS meeting on IRC. Debian ELTS This month was the forty-third ELTS month. During my allocated time I uploaded: Further I worked on an update for apache2. Last but not least I did some days of frontdesk duties. Debian Printing I was finally able to upload a new version of hplip and Ubuntu is now able to build new snaps for their next release.
Altogether I uploaded new upstream versions or improved packaging of: Now the dashboard looks rather good and my next task for February is an update of cups. Debian Astro As there was a release of version 1.9.4 of INDI and indi-3rdparty, I also uploaded the new version of all INDI drivers and related libs from indi-3rdparty. Other stuff This month I uploaded lots of new upstream releases of golang packages.

4 February 2022

Ian Jackson: EUDCC QR codes vs NHS Travel barcodes vs TAC Verify

The EU Digital Covid Certificate scheme is a format for (digitally signed) vaccination status certificates. Not only EU countries participate - the UK is now a participant in this scheme. I am currently on my way to go skiing in the French Alps, so I needed a certificate that would be accepted in France. AFAICT the official way to do this is to get the international certificate from the NHS, and take it to a French pharmacy who will convert it into something suitably French. (AIUI the NHS international barcode is the same regardless of whether you get it via the NHS website, the NHS app, or a paper letter. NB that there is one barcode per vaccine dose so you have to get the right one - probably that means your booster since there's a 9 month rule!)

I read on a forum somewhere that you could use the French TousAntiCovid app to convert the barcode. So I thought I would try that. The TousAntiCovid app is Free Software and on F-Droid, so I was happy to install and use it for this. I also used the French TAC Verify app to check to see what barcodes were accepted. (I found an official document addressed to French professionals recommending this as an option for verifying the status of visitors to one's establishment.) Unfortunately this involves a googlified phone, but one could use a burner phone or ask a friend who's bitten that bullet already. I discovered that, indeed, TAC Verify accepted the barcode produced by TousAntiCovid but rejected the original NHS one.

This made me curious. I used a QR code reader to decode both barcodes. The decodings were identical! A long string of guff starting HC1:. AIUI it is an encoded JWT. But there was a difference in the framing: Binary Eye reported that the NHS barcode used error correction level M (medium, aka 15%). The TousAntiCovid barcode used level L (low, 7%). I had my QR code software regenerate a QR code at level M for the data from the TousAntiCovid code. The result was a QR code which is identical (pixel-wise) to the one from the NHS. So the only difference is the error correction level. Curiously, both L (low, generated by TousAntiCovid, accepted by TAC Verify) and M (medium, generated by NHS, rejected by TAC Verify) are lower than the Q (25%) recommended by what I think is the specification.

This is all very odd. But the upshot is that I think you can convert the NHS international barcode into something that should work in France simply by passing it through any QR code software to re-encode it at error correction level L (7%). But if you're happy to use the TousAntiCovid app it's probably a good way to store them. I guess I'll find out when I get to France if the converted NHS barcodes work in real establishments. Thanks to the folks behind sanipasse.fr for publishing some helpful background info and operating a Free Software backed public verification service.

Footnote To compare the QR codes pixelwise, I roughly cropped the NHS PDF image using a GUI tool, and then on each of the two images used pnmcrop (to trim the border), pnmscale (to rescale the one-pixel-per-pixel output from Binary Eye) and pnmarith -difference to compare them (producing a pretty squirgly image showing just the pixel edges due to antialiasing).
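
For anyone who wants to try the same conversion themselves, here is a minimal sketch using the Python qrcode library (my choice of tool, not something from the original post; any encoder that lets you pick the error correction level should do). The HC1: string is a placeholder for whatever your own QR reader decodes:
import qrcode

# Placeholder: paste the exact string your QR reader decoded from the NHS barcode.
data = "HC1:..."

# Re-encode at error correction level L (7%), the level the TousAntiCovid app produces.
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_L)
qr.add_data(data)
qr.make(fit=True)
qr.make_image().save("eudcc-level-l.png")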


1 February 2022

Paul Wise: FLOSS Activities January 2022

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages
  • Debian servers: ping folks about mail forwarding issues
  • Debian wiki: unblock IP addresses, approve accounts

Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors The oci-cli, oci-python-sdk, circuitbreaker, autoconf-archive, libpst, purple-discord, sptag work was sponsored. All other work was done on a volunteer basis.

30 January 2022

Russell Coker: Links Jan 2022

Washington Post has an interesting article on how gender neutral language is developing in different countries [1]. pimaker has an interesting blog post about how they wrote a RISCV CPU emulator to boot a Linux kernel in a pixel shader in the VR Chat platform [2]. ZD has an interesting article about the new Solo Bumblebee platform for writing EBPF programs to run inside the Linux kernel [3]. EBPF is an interesting platform and it's good to have new tools to help people develop for it. Big Think has an interesting article about augmented reality suggesting that it could be worse than social media for driving disputes [4]. Some people would want tags of racial status on all people they see. Vice has an insightful article making the case for 8 hours of work per week [5]. The Guardian has an insightful article about how our attention is being stolen by modern technology [6]. Interesting article about the Ilobleed rootkit that targets the HP ILO server management system [7]; it's apparently designed for persistent attacks as it bypasses the firmware upgrade process and updates only the version number, so anyone who thinks that a firmware update will fix it is horribly misled. Nick Bostrom wrote an insightful and disturbing article for Aeon titled None of Our Technologies has Managed to Destroy Humanity Yet [8] about the existential risk of new technologies and how they might be mitigated, much of which involves authoritarian governments unlike any we have seen before. As a counterpoint the novel A Deepness in the Sky claims that universal surveillance would be as damaging to the future of a society as planet-buster bombs. Euronews has an informative article about eco-fascism [9]. The idea that the solution to ecological problems is to have fewer people in the world and particularly fewer non-white people is a gateway from green politics to fascism. The developer of the colors and faker npm libraries (for NodeJS) recently uploaded corrupted versions of those libraries as a protest against companies that use them without paying him [10]. This is the wrong way to approach the issue, but it does demonstrate a serious problem with systems like NPM that allow automatic updates. It gives a better result to just use software packaged by a distribution which has QA checks applied, including on security updates. VentureBeat has an interesting article on the way that AI is taking over jobs [11]. Lots of white collar jobs can be replaced by machine learning systems; we need to plan for the ways this will change the economy. The Atlantic has an interesting article about tapeworms that infect some ants [12]. The tapeworms make the ants live longer and produce pheromones to make the other ants serve them. This increases the chance that the ants will be eaten by birds, which is the next stage in the tapeworm lifecycle.

29 January 2022

Abiola Ajadi: Debci - An introduction for beginners!

Hello again! It's been a minute! In this blog post I will continue from my previous article where I explained Debci - you can read more about it here. In my previous article I mentioned that Debci stands for Debian Continuous Integration and it exists to make sure packages still work correctly after an update, by testing all of the packages that have tests written for them and making sure nothing is broken. For my internship, I am working on improving the user experience through the UI of the debci site, making it easier to use.

Debci consist of the following major terms:
  • Suite
  • Architecture
  • Packages
  • Trigger

How it works together There are three releases in active maintenance, which are Unstable, Testing, and Stable (these are known as the suites). What do they mean? Unstable: This is where active development occurs and packages are initially uploaded. This distribution is always called sid. Testing: After packages have undergone some degree of testing in unstable, they are installed into the testing directory. This directory contains packages that have not yet been accepted into the stable release but are on their way there. Stable: The stable distribution includes Debian's most recent officially released distribution. The current stable release, which is Debian 11, is codenamed Bullseye. We also have oldstable, which is the previous stable release: Debian 10 is now oldstable and was codenamed Buster. Architectures: These are the CPU architectures, and there are various ports like amd64, arm64, i386, etc. An example scenario: if a user wants to test a package such as acorn in Testing on arm64 along with a package X from Unstable, package X would be a pin-package (pin packages are packages that need to be obtained from a different suite than the main suite that was selected), meaning it is the package the user wants to test alongside the initially selected package. Finally, a trigger is the (optional) name of the test job. This testing is done to check whether those packages in unstable can be migrated to Testing. This is a breakdown of Debci and I hope you enjoyed learning about what my internship entails. Till next time! References: Debian releases. Ports

26 January 2022

Timo Jyrinki: Unboxing Dell XPS 13 - openSUSE Tumbleweed alongside preinstalled Ubuntu

A look at the 2021 model of Dell XPS 13 - available with Linux pre-installed
I received a new laptop for work - a Dell XPS 13. Dell has long been famous for offering certain models with pre-installed Linux as a supported option, and opting for those is nice for moving some euros/dollars from a certain PC desktop OS monopoly towards Linux desktop engineering costs. Notably Lenovo also offers Ubuntu and Fedora options on many models these days (like the Carbon X1 and P15 Gen 2).
black box

opened box

accessories and a leaflet about Linux support

laptop lifted from the box, closed

laptop with lid open

Ubuntu running

openSUSE running
Obviously a smooth, ready-to-rock Ubuntu installation is nice for most people already, but I need openSUSE, so after checking everything is fine with Ubuntu, I continued to install openSUSE Tumbleweed as a dual boot option. As I'm a funny little tinkerer, I obviously went with some special things. I wanted:
  • Ubuntu to remain as the reference supported OS on a small(ish) partition, useful to compare to if trying out new development versions of software on openSUSE and finding oddities.
  • openSUSE as the OS consuming most of the space.
  • LUKS encryption for openSUSE without LVM.
  • ext4's new fancy fast_commit feature in use during filesystem creation.
  • As a result of all that, I ended up juggling back and forth between installation screens a couple of times (even more than shown below, and also because I forgot I wanted to use encryption the first time around).
First boots to pre-installed Ubuntu and installation of openSUSE Tumbleweed as the dual-boot option:
(if the embedded video is not shown, use a direct link)
Some notes from the openSUSE installation:
  • openSUSE installer's partition editor apparently does not support resizing or automatically installing side-by-side with another Linux distribution, so I did part of the setup completely on my own.
  • Installation package download hung a couple of times, and only passed when I entered a mirror manually. On my TW I've also noticed download problems recently; there might be a problem with some mirror I need to escalate.
  • The installer doesn't very clearly show encryption status of the target installation - it took me a couple of attempts before I even noticed the small encrypted column and icon (well, very small, see below), which also did not spell out the device mapper name but only the main partition name. In the end it was going to do the right thing right away and use my pre-created encrypted target partition as I wanted, but it could be a better UX. Then again I was doing my very own tweaks anyway.
  • Let's not go into the details of why I'm so old-fashioned and use ext4 :)
  • openSUSE's installer does not work well with HiDPI screens. Funnily, the tty consoles seem to be fine, with a big font.
  • At the end of the video I install the two GNOME extensions I can't live without, Dash to Dock and Sound Input & Output Device Chooser.

Russell Coker: Australia/NZ Linux Meetings

I am going to start a new Linux focused FOSS online meeting for people in Australia and nearby areas. People can join from anywhere but the aim will be to support people in nearby areas. To cover the time zone range for Australia this requires a meeting on a weekend; I'm thinking of the first Saturday of the month at 1PM Melbourne/Sydney time, which would be 10AM in WA and 3PM in NZ. We may have corner cases of daylight savings starting and ending on different days, but that shouldn't be a big deal as I think those times can vary by an hour either way without being too inconvenient for anyone. Note that I describe the meeting as Linux focused because my plans include having a meeting dedicated to different versions of BSD Unix and a meeting dedicated to the HURD. But those meetings will be mainly for Linux people to learn about other Unix-like OSs. One focus I want to have for the meetings is hands-on work, live demonstrations, and short highly time relevant talks. There are more lectures on YouTube than anyone could watch in a lifetime (see the Linux.conf.au channel for some good ones [1]). So I want to run events that give benefits that people can't gain from watching YouTube on their own. Russell Stuart and I have been kicking around ideas for this for a while. I think that the solution is to just do it. I know that Saturday won't work for everyone (no day will) but it will work for many people. I am happy to discuss changing the start time by an hour or two if that seems likely to get more people. But I'm not particularly interested in trying to make it convenient for people in Hawaii or India; my idea is for an Australia/NZ focused event. I would be more than happy to share lecture notes etc with people in other countries who run similar events. As an aside I'd be happy to give a talk for an online meeting at a Hawaiian LUG as the timezone is good for me. Please pencil in 1PM Melbourne time on the 5th of Feb for the first meeting. The meeting requirements will be a PC with good Internet access running a recent web browser and an ssh client for the hands-on stuff. A microphone or webcam is NOT required; any questions you wish to ask can be done with text if that's what you prefer. Suggestions for the name of the group are welcome.
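
As a quick sanity check of those times (my own addition, using Python's standard zoneinfo module rather than anything from the original post), converting 1PM Melbourne on the 5th of February to the other zones mentioned:
from datetime import datetime
from zoneinfo import ZoneInfo

# First meeting: Saturday 5 February 2022, 1PM Melbourne/Sydney time (AEDT)
melbourne = datetime(2022, 2, 5, 13, 0, tzinfo=ZoneInfo("Australia/Melbourne"))
for zone in ("Australia/Perth", "Pacific/Auckland"):
    print(zone, melbourne.astimezone(ZoneInfo(zone)).strftime("%H:%M"))
# Australia/Perth 10:00, Pacific/Auckland 15:00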

23 January 2022

Matthieu Caneill: Debsources, python3, and funky file names

Rumors are running that python2 is not a thing anymore. Well, I'm certainly late to the party, but I'm happy to report that sources.debian.org is now running python3. Wait, it wasn't? Back when development started, python3 was very much a real language, but it was hard to adopt because it was not supported by many libraries. So python2 was chosen, meaning print-based debugging was used in lieu of print()-based debugging, and str were bytes, not unicode. And things were working just fine. One day python2 EOL was announced, with a date far in the future. Far enough to procrastinate for a long time. Combine this with a codebase that is stable enough to not see many commits, and the fact that Debsources is a volunteer-based project that happens at best on week-ends, and you end up with a dormant software and a missed deadline. But, as dormant as the codebase is, the instance hosted at sources.debian.org is very popular and gets 200k to 500k hits per day. Largely enough to be worth proper maintenance and a transition to python3. Funky file names While transitioning to python3 and juggling left and right with str, bytes and unicode for internal objects, files, database entries and HTTP content, I stumbled upon a bug that has been there since day 1. Quick recap if you're unfamiliar with this tool: Debsources displays the content of the source packages in the Debian archive. In other words, it's a bit like GitHub, but for the Debian source code. And some pieces of software out there, that ended up in Debian packages, happen to contain files whose names can't be decoded to UTF-8. Interestingly enough, there's no such thing as a standard for file names: with a few exceptions that vary by operating system, any sequence of bytes can be a legit file name. And some sequences of bytes are not valid UTF-8. Of course those files are rare, and using ASCII characters to name a file is a much more common practice than using bytes in a non-UTF-8 character encoding. But when you deal with almost 100 million files on which you have no control (those files come from free software projects, and make their way into Debian without any renaming), it happens. Now back to the bug: when trying to display such a file through the web interface, it would crash because it can't convert the file name to UTF-8, which is needed for the HTML representation of the page. Bugfix An often valid approach when trying to represent invalid UTF-8 content is to ignore errors, and replace them with ? or . This is what Debsources actually does to display non-UTF-8 file content. Unfortunately, this best-effort approach is not suitable for file names, as file names are also identifiers in Debsources: among other places, they are part of URLs. If an URL were to use placeholder characters to replace those bytes, there would be no deterministic way to match it with a file on disk anymore. The representation of binary data into text is a known problem. Multiple lossless solutions exist, such as base64 and its variants, but URLs looking like https://sources.debian.org/src/Y293c2F5LzMuMDMtOS4yL2Nvd3NheS8= are not readable at all compared to https://sources.debian.org/src/cowsay/3.03-9.2/cowsay/. Plus, not backwards-compatible with all existing links. The solution I chose is to use double-percent encoding: this allows the representation of any byte in an URL, while keeping allowed characters unchanged - and preventing CGI gateways from trying to decode non-UTF-8 bytes. 
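As a rough sketch of the idea (my own illustration, not the actual Debsources code), double-percent encoding of raw filename bytes could look something like this:
SAFE = frozenset(b"abcdefghijklmnopqrstuvwxyz"
                 b"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                 b"0123456789-._~")

def double_percent_encode(name: bytes) -> str:
    # Unreserved URL characters pass through unchanged; every other byte becomes
    # %25XX, so that one decoding pass (e.g. by a CGI gateway) yields a literal
    # %XX and the original byte value stays recoverable.
    return ''.join(chr(b) if b in SAFE else '%%25%02X' % b for b in name)

print(double_percent_encode('íslenska.alias'.encode('latin-1')))
# -> %25EDslenska.alias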
This is the best of both worlds: regular file names get to appear normally and are human-readable, and funky file names only have percent signs and hex numbers where needed. Here is an example of such an URL: https://sources.debian.org/src/aspell-is/0.51-0-4/%25EDslenska.alias/. Notice the %25ED to represent the percentage symbol itself (%25) followed by an invalid UTF-8 byte (%ED). Transitioning to this was quite a challenge, as those file names don't only appear in URLs, but also in web pages themselves, log files, database tables, etc. And everything was done with str: made sense in python2 when str were bytes, but not much in python3. What are those files? What's their network? I was wondering too. Let's list them!
import os
# Walking the tree with a bytes path keeps file names as raw, undecoded bytes.
with open('non-utf-8-paths.bin', 'wb') as f:
    for root, folders, files in os.walk(b'/srv/sources.debian.org/sources/'):
        for path in folders + files:
            try:
                path.decode('utf-8')
            except UnicodeDecodeError:
                # Not valid UTF-8: record the full path for later inspection.
                f.write(root + b'/' + path + b'\n')
Running this on the Debsources main instance, which hosts pretty much all Debian packages that were part of a Debian release, I could find 307 files (among a total of almost 100 million files). Without looking deep into them, they seem to fall into 2 categories: That last point hits home, as it was clearly lacking in Debsources. A funky file name is now part of its test suite. ;)

Antoine Beaupré: Switching from OpenNTPd to Chrony

A friend recently reminded me of the existence of chrony, a "versatile implementation of the Network Time Protocol (NTP)". The excellent introduction is worth quoting in full:
It can synchronise the system clock with NTP servers, reference clocks (e.g. GPS receiver), and manual input using wristwatch and keyboard. It can also operate as an NTPv4 (RFC 5905) server and peer to provide a time service to other computers in the network. It is designed to perform well in a wide range of conditions, including intermittent network connections, heavily congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature), and systems that do not run continuously, or run on a virtual machine. Typical accuracy between two machines synchronised over the Internet is within a few milliseconds; on a LAN, accuracy is typically in tens of microseconds. With hardware timestamping, or a hardware reference clock, sub-microsecond accuracy may be possible.
Now that's already great documentation right there. What it is, why it's good, and what to expect from it. I want more. They have a very handy comparison table between chrony, ntp and openntpd.

My problem with OpenNTPd Following concerns surrounding the security (and complexity) of the venerable ntp program, I have, a long time ago, switched to using openntpd on all my computers. I hadn't thought about it until I recently noticed a lot of noise on one of my servers:
jan 18 10:09:49 curie ntpd[1069]: adjusting local clock by -1.604366s
jan 18 10:08:18 curie ntpd[1069]: adjusting local clock by -1.577608s
jan 18 10:05:02 curie ntpd[1069]: adjusting local clock by -1.574683s
jan 18 10:04:00 curie ntpd[1069]: adjusting local clock by -1.573240s
jan 18 10:02:26 curie ntpd[1069]: adjusting local clock by -1.569592s
You read that right, openntpd was constantly rewinding the clock, sometimes in less than two minutes. The above log was taken while doing diagnostics, looking at the last 30 minutes of logs. So, on average, one 1.5-second rewind per 6 minutes! That might be due to a dying real time clock (RTC) or some other hardware problem. I know for a fact that the CMOS battery on that computer (curie) died and I wasn't able to replace it (!). So that's partly garbage-in, garbage-out here. But still, I was curious to see how chrony would behave... (Spoiler: much better.) But I also had trouble on another workstation, that one a much more recent machine (angela). First, it seems OpenNTPd would just fail at boot time:
anarcat@angela:~(main)$ sudo systemctl status openntpd
  openntpd.service - OpenNTPd Network Time Protocol
     Loaded: loaded (/lib/systemd/system/openntpd.service; enabled; vendor pres>
     Active: inactive (dead) since Sun 2022-01-23 09:54:03 EST; 6h ago
       Docs: man:openntpd(8)
    Process: 3291 ExecStartPre=/usr/sbin/ntpd -n $DAEMON_OPTS (code=exited, sta>
    Process: 3294 ExecStart=/usr/sbin/ntpd $DAEMON_OPTS (code=exited, status=0/>
   Main PID: 3298 (code=exited, status=0/SUCCESS)
        CPU: 34ms
jan 23 09:54:03 angela systemd[1]: Starting OpenNTPd Network Time Protocol...
jan 23 09:54:03 angela ntpd[3291]: configuration OK
jan 23 09:54:03 angela ntpd[3297]: ntp engine ready
jan 23 09:54:03 angela ntpd[3297]: ntp: recvfrom: Permission denied
jan 23 09:54:03 angela ntpd[3294]: Terminating
jan 23 09:54:03 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 09:54:03 angela systemd[1]: openntpd.service: Succeeded.
After a restart, somehow it worked, but it took a long time to sync the clock. At first, it would just not consider any peer at all:
anarcat@angela:~(main)$ sudo ntpctl -s all
0/20 peers valid, clock unsynced
peer
   wt tl st  next  poll          offset       delay      jitter
159.203.8.72 from pool 0.debian.pool.ntp.org
    1  5  2    6s    6s             ---- peer not valid ----
138.197.135.239 from pool 0.debian.pool.ntp.org
    1  5  2    6s    7s             ---- peer not valid ----
216.197.156.83 from pool 0.debian.pool.ntp.org
    1  4  1    2s    9s             ---- peer not valid ----
142.114.187.107 from pool 0.debian.pool.ntp.org
    1  5  2    5s    6s             ---- peer not valid ----
216.6.2.70 from pool 1.debian.pool.ntp.org
    1  4  2    2s    8s             ---- peer not valid ----
207.34.49.172 from pool 1.debian.pool.ntp.org
    1  4  2    0s    5s             ---- peer not valid ----
198.27.76.102 from pool 1.debian.pool.ntp.org
    1  5  2    5s    5s             ---- peer not valid ----
158.69.254.196 from pool 1.debian.pool.ntp.org
    1  4  3    1s    6s             ---- peer not valid ----
149.56.121.16 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
162.159.200.123 from pool 2.debian.pool.ntp.org
    1  4  3    1s    6s             ---- peer not valid ----
206.108.0.131 from pool 2.debian.pool.ntp.org
    1  4  1    6s    9s             ---- peer not valid ----
205.206.70.40 from pool 2.debian.pool.ntp.org
    1  5  2    8s    9s             ---- peer not valid ----
2001:678:8::123 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
    1  4  3    2s    6s             ---- peer not valid ----
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
    1  4  4    1s    6s             ---- peer not valid ----
209.115.181.110 from pool 3.debian.pool.ntp.org
    1  5  2    5s    6s             ---- peer not valid ----
205.206.70.42 from pool 3.debian.pool.ntp.org
    1  4  2    0s    6s             ---- peer not valid ----
68.69.221.61 from pool 3.debian.pool.ntp.org
    1  4  1    2s    9s             ---- peer not valid ----
162.159.200.1 from pool 3.debian.pool.ntp.org
    1  4  3    4s    7s             ---- peer not valid ----
Then it would accept them, but still wouldn't sync the clock:
anarcat@angela:~(main)$ sudo ntpctl -s all
20/20 peers valid, clock unsynced
peer
   wt tl st  next  poll          offset       delay      jitter
159.203.8.72 from pool 0.debian.pool.ntp.org
    1  8  2    5s    6s         0.672ms    13.507ms     0.442ms
138.197.135.239 from pool 0.debian.pool.ntp.org
    1  7  2    4s    8s         1.260ms    13.388ms     0.494ms
216.197.156.83 from pool 0.debian.pool.ntp.org
    1  7  1    3s    5s        -0.390ms    47.641ms     1.537ms
142.114.187.107 from pool 0.debian.pool.ntp.org
    1  7  2    1s    6s        -0.573ms    15.012ms     1.845ms
216.6.2.70 from pool 1.debian.pool.ntp.org
    1  7  2    3s    8s        -0.178ms    21.691ms     1.807ms
207.34.49.172 from pool 1.debian.pool.ntp.org
    1  7  2    4s    8s        -5.742ms    70.040ms     1.656ms
198.27.76.102 from pool 1.debian.pool.ntp.org
    1  7  2    0s    7s         0.170ms    21.035ms     1.914ms
158.69.254.196 from pool 1.debian.pool.ntp.org
    1  7  3    5s    8s        -2.626ms    20.862ms     2.032ms
149.56.121.16 from pool 2.debian.pool.ntp.org
    1  7  2    6s    8s         0.123ms    20.758ms     2.248ms
162.159.200.123 from pool 2.debian.pool.ntp.org
    1  8  3    4s    5s         2.043ms    14.138ms     1.675ms
206.108.0.131 from pool 2.debian.pool.ntp.org
    1  6  1    0s    7s        -0.027ms    14.189ms     2.206ms
205.206.70.40 from pool 2.debian.pool.ntp.org
    1  7  2    1s    5s        -1.777ms    53.459ms     1.865ms
2001:678:8::123 from pool 2.debian.pool.ntp.org
    1  6  2    1s    8s         0.195ms    14.572ms     2.624ms
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
    1  7  3    6s    9s         2.068ms    14.102ms     1.767ms
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
    1  6  2    4s    9s         0.254ms    21.471ms     2.120ms
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
    1  7  4    5s    9s        -1.706ms    21.030ms     1.849ms
209.115.181.110 from pool 3.debian.pool.ntp.org
    1  7  2    0s    7s         8.907ms    75.070ms     2.095ms
205.206.70.42 from pool 3.debian.pool.ntp.org
    1  7  2    6s    9s        -1.729ms    53.823ms     2.193ms
68.69.221.61 from pool 3.debian.pool.ntp.org
    1  7  1    1s    7s        -1.265ms    46.355ms     4.171ms
162.159.200.1 from pool 3.debian.pool.ntp.org
    1  7  3    4s    8s         1.732ms    35.792ms     2.228ms
It took a solid five minutes to sync the clock, even though the peers were considered valid within a few seconds:
jan 23 15:58:41 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 15:58:58 angela ntpd[84086]: peer 142.114.187.107 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 198.27.76.102 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 207.34.49.172 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 209.115.181.110 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 159.203.8.72 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 138.197.135.239 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 162.159.200.123 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 2607:5300:201:3100::345c now valid
jan 23 15:59:00 angela ntpd[84086]: peer 2606:4700:f1::1 now valid
jan 23 15:59:00 angela ntpd[84086]: peer 158.69.254.196 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 216.6.2.70 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 68.69.221.61 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 205.206.70.40 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 205.206.70.42 now valid
jan 23 15:59:02 angela ntpd[84086]: peer 162.159.200.1 now valid
jan 23 15:59:04 angela ntpd[84086]: peer 216.197.156.83 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 206.108.0.131 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 2001:678:8::123 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 149.56.121.16 now valid
jan 23 15:59:07 angela ntpd[84086]: peer 2607:5300:205:200::1991 now valid
jan 23 16:03:47 angela ntpd[84086]: clock is now synced
That seems kind of odd. It was also frustrating to have very little information from ntpctl about the state of the daemon. I understand it's designed to be minimal, but it could inform me of its known offset, for example. It does tell me about the offset with the different peers, but not as clearly as one would expect. It's also unclear how it disciplines the RTC at all.

Compared to chrony Now compare with chrony:
jan 23 16:07:16 angela systemd[1]: Starting chrony, an NTP client/server...
jan 23 16:07:16 angela chronyd[87765]: chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
jan 23 16:07:16 angela chronyd[87765]: Initial frequency 3.814 ppm
jan 23 16:07:16 angela chronyd[87765]: Using right/UTC timezone to obtain leap second data
jan 23 16:07:16 angela chronyd[87765]: Loaded seccomp filter
jan 23 16:07:16 angela systemd[1]: Started chrony, an NTP client/server.
jan 23 16:07:21 angela chronyd[87765]: Selected source 206.108.0.131 (2.debian.pool.ntp.org)
jan 23 16:07:21 angela chronyd[87765]: System clock TAI offset set to 37 seconds
First, you'll notice there's none of that "clock synced" nonsense, it picks a source, and then... it's just done. Because the clock on this computer is not drifting that much, and openntpd had (presumably) just sync'd it anyways. And indeed, if we look at detailed stats from the powerful chronyc client:
anarcat@angela:~(main)$ sudo chronyc tracking
Reference ID    : CE6C0083 (ntp1.torix.ca)
Stratum         : 2
Ref time (UTC)  : Sun Jan 23 21:07:21 2022
System time     : 0.000000311 seconds slow of NTP time
Last offset     : +0.000807989 seconds
RMS offset      : 0.000807989 seconds
Frequency       : 3.814 ppm fast
Residual freq   : -24.434 ppm
Skew            : 1000000.000 ppm
Root delay      : 0.013200894 seconds
Root dispersion : 65.357254028 seconds
Update interval : 1.4 seconds
Leap status     : Normal
We see that we are nanoseconds away from NTP time. That was run very quickly after starting the server (literally in the same second as chrony picked a source), so stats are a bit weird (e.g. the Skew is huge). After a minute or two, it looks more reasonable:
Reference ID    : CE6C0083 (ntp1.torix.ca)
Stratum         : 2
Ref time (UTC)  : Sun Jan 23 21:09:32 2022
System time     : 0.000487002 seconds slow of NTP time
Last offset     : -0.000332960 seconds
RMS offset      : 0.000751204 seconds
Frequency       : 3.536 ppm fast
Residual freq   : +0.016 ppm
Skew            : 3.707 ppm
Root delay      : 0.013363549 seconds
Root dispersion : 0.000324015 seconds
Update interval : 65.0 seconds
Leap status     : Normal
Now it's learning how good or bad the RTC clock is ("Frequency"), and is smoothly adjusting the System time to follow the average offset (RMS offset, more or less). You'll also notice the Update interval has risen, and will keep expanding as chrony learns more about the internal clock, so it doesn't need to constantly poll the NTP servers to sync the clock. In the above, we're 487 microseconds (less than a millisecond!) away from NTP time. (People interested in the explanation of every single one of those fields can read the excellent chronyc manpage. That thing made me want to nerd out on NTP again!) On the machine with the bad clock, chrony also did a 1.5 second adjustment, but just once, at startup:
jan 18 11:54:33 curie chronyd[2148399]: Selected source 206.108.0.133 (2.debian.pool.ntp.org) 
jan 18 11:54:33 curie chronyd[2148399]: System clock wrong by -1.606546 seconds 
jan 18 11:54:31 curie chronyd[2148399]: System clock was stepped by -1.606546 seconds 
jan 18 11:54:31 curie chronyd[2148399]: System clock TAI offset set to 37 seconds 
Then it would still struggle to keep the clock in sync, but not as badly as openntpd. Here's the offset a few minutes after the startup above:
System time     : 0.000375352 seconds slow of NTP time
And again a few seconds later:
System time     : 0.001793046 seconds slow of NTP time
I don't currently have access to that machine, and will update this post with the latest status, but so far I've had a very good experience with chrony on that machine, which is a testament to its resilience, and it also just works on my other machines as well.
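
As an aside, to put those ppm ("parts per million") frequency figures in perspective, here is a back-of-the-envelope conversion (my own numbers, not something chrony reports directly):
# A frequency error in parts per million, converted to seconds of drift per day.
def drift_per_day(ppm: float) -> float:
    return ppm * 1e-6 * 86400

print(drift_per_day(3.814))  # ~0.33 s/day at the ~3.8 ppm chrony measured on angela

# And the reverse: if curie's ~1.5 s rewind every ~6 minutes were pure clock drift,
# that would correspond to roughly 4200 ppm -- a spectacularly bad clock.
print(1.5 / 360 * 1e6)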

Extras On top of "just working" (as demonstrated above), I feel that chrony's feature set is so much superior... Here's an excerpt of the extras in chrony, taken from the comparison table:
  • source frequency tracking
  • source state restore from file
  • temperature compensation
  • ready for next NTP era (year 2036)
  • replace unreachable / falseticker servers
  • aware of jitter
  • RTC drift tracking
  • RTC trimming
  • Restore time from file w/o RTC
  • leap seconds correction, in slew mode
  • drops root privileges
I even understand some of that stuff. I think. So kudos to the chrony folks, I'm switching.

Caveats One thing to keep in mind in the above, however, is that it's quite possible chrony does as bad a job as openntpd on that old machine, and just doesn't tell me about it. For example, here's another log sample from another server (marcos):
jan 23 11:13:25 marcos ntpd[1976694]: adjusting clock frequency by 0.451035 to -16.420273ppm
I get those basically every day, which seems to show that it's at least trying to keep track of the hardware clock. In other words, it's quite possible I have no idea what I'm talking about and you definitely need to take this article with a grain of salt. I'm not an NTP expert. Update: I should also mention that I haven't evaluated systemd-timesyncd, for a few reasons:
  1. I have enough things running under systemd
  2. I wasn't aware of it when I started writing this
  3. I couldn't find good documentation on it... later I found the above manpage and of course the Arch Wiki but that is very minimal
  4. therefore I can't tell how it compares with chrony or (open)ntpd, so I don't see an enticing reason to switch
It has a few things going for it though:
  • it's likely shipped with your distribution already
  • it drops privileges (possibly like chrony, unclear if it also has seccomp filters)
  • it's minimalist: it only does SNTP so not the server side
  • the status command is good enough that you can tell the clock frequency, precision, and so on (especially when compared to openntpd's ntpctl)
So I'm reserving judgement over it, but I'd certainly note that I'm always a little wary of trusting systemd daemons with the network, and would prefer to keep that attack surface to a minimum. Diversity is a good thing, in general, so I'll keep chrony for now. It would certainly be nice to see it added to chrony's comparison table.

Switching to chrony Because the default configuration in chrony (at least as shipped in Debian) is sane (good default peers, no open network by default), installing it is as simple as:
apt install chrony
And because it somehow conflicts with openntpd, that also takes care of removing that cruft as well.

Update: Debian defaults So it seems like I managed to write this entire blog post without putting it in relation with the original reason I had to think about this in the first place, which is odd and should be corrected. This conversation came about on an IRC channel that mentioned that the ntp package (and upstream) is in bad shape in Debian. In that discussion, chrony and ntpsec were discussed as possible replacements, but when we had the discussion on chat, I mentioned I was using openntpd, and promptly realized I was actually unhappy with it. A friend suggested chrony, I tried it, and it worked amazingly, I switched, wrote this blog post, end of story. Except today (2022-02-07, two weeks later), I actually read that thread and realized that something happened in Debian I wasn't actually aware of. In bookworm, systemd-timesyncd was not only shipped, but it was installed by default, as it was marked as a hard dependency of systemd. That was "fixed" in systemd-247.9-2 (see bug 986651), but only by making the dependency a Recommends and marking it as Priority: important. So in effect, systemd-timesyncd became the default NTP daemon in Debian in bookworm, which I find somewhat surprising. timesyncd has many things going for it (as mentioned above), but I do find it a bit annoying that systemd is replacing all those utilities in such a way. I also wonder what is going to happen on upgrades. This is all a little frustrating too because there is no good comparison between the other NTP daemons and timesyncd anywhere. The chrony comparison table doesn't mention it, and an audit by the Core Infrastructure Initiative from 2017 doesn't mention it either, even though timesyncd was announced in 2014. (Same with this blog post from Facebook.)

21 January 2022

Neil McGovern: Further investments in desktop Linux

This was originally posted on the GNOME Foundation news feed The GNOME Foundation was supported during 2020-2021 by a grant from Endless Network which funded the Community Engagement Challenge, strategy consultancy with the board, and a contribution towards our general running costs. At the end of last year we had a portion of this grant remaining, and after the success of our work in previous years directly funding developer and infrastructure work on GTK and Flathub, we wanted to see whether we could use these funds to invest in GNOME and the wider Linux desktop platform. We're very pleased to announce that we got approval to launch three parallel contractor engagements, which started over the past few weeks. These projects aim to improve our developer experience, make more applications available on the GNOME platform, and move towards equitable and sustainable revenue models for developers within our ecosystem. Thanks again to Endless Network for their support on these initiatives. Flathub Verified apps, donations and subscriptions (Codethink and James Westman) This project is described in detail on the Flathub Discourse but the goal is to add a process to verify first-party apps on Flathub (ie uploaded by a developer or an authorised representative) and then make it possible for those developers to collect donations or subscriptions from users of their applications. We also plan to publish a separate repository that contains only these verified first-party uploads (without any of the community contributed applications), as well as providing a repository with only free and open source applications, allowing users to choose what they are comfortable installing and running on their system. Creating the user and developer login system to manage your apps will also set us up well for future enhancements, such as managing tokens for direct binary uploads (eg from a CI/CD system hosted elsewhere, as is already done with Mozilla Firefox and OBS) and making it easier to publish apps from systems such as Electron which can be hard to use within a flatpak-builder sandbox. For updates on this project you can follow the Discourse thread, check out the work board on GitHub or join us on Matrix. PWAs Integrating Progressive Web Apps in GNOME (Phaedrus Leeds) While everyone agrees that native applications can provide the best experience on the GNOME desktop, the web platform, and particularly PWAs (Progressive Web Apps) which are designed to be downloadable as apps and offer offline functionality, makes it possible for us to offer equivalent experiences to other platforms for app publishers who have not specifically targeted GNOME. This allows us to attract and retain users by giving them the choice of using applications from a wider range of publishers than are currently directly targeting the Linux desktop. The first phase of the GNOME PWA project involves adding back support to Software for web apps backed by GNOME Web, and making this possible when Web is packaged as a Flatpak. So far some preparatory pull requests have been merged in Web and libportal to enable this work, and development is ongoing to get the feature branches ready for review. Discussions are also in progress with the Design team on how best to display the web apps in Software and on the user interface for web apps installed from a browser. 
There has also been discussion among various stakeholders about what web apps should be included as available with Software, and how they can provide supplemental value to users without taking priority over apps native to GNOME. Finally, technical discussion is ongoing in the portal issue tracker to ensure that the implementation of a new dynamic launcher portal meets all security and robustness requirements, and is potentially useful not just to GNOME Web but Chromium and any other app that may want to install desktop launchers. Adding support for the launcher portal in upstream Chromium, to facilitate Chromium-based browsers packaged as a Flatpak, and adding support for Chromium-based web apps in Software are stretch goals for the project should time permit. GTK4 / Adwaita To support the adoption of Gtk4 by the community (Emmanuele Bassi) With the release of GTK4 and renewed interest in GTK as a toolkit, we want to continue improving the developer experience and ease of use of GTK and ensure we have a complete and competitive offering for developers considering using our platform. This involves identifying missing functionality or UI elements that applications need to move to GTK4, as well as informing the community about the new widgets and functionality available. We have been working on documentation and bug fixes for GTK in preparation for the GNOME 42 release and have also started looking at the missing widgets and API in Libadwaita, in preparation for the next release. The next steps are to work with the Design team and the Libadwaita maintainers and identify and implement missing widgets that did not make the cut for the 1.0 release. In the meantime, we have also worked on writing a beginners tutorial for the GNOME developers documentation, including GTK and Libadwaita widgets so that newcomers to the platform can easily move between the Interface Guidelines and the API references of various libraries. To increase the outreach of the effort, Emmanuele has been streaming it on Twitch, and published the VOD on YouTube as well.

20 January 2022

Caleb Adepitan: I'm Thinking About You Right Now!

Just in case you stumbled on this incidentally and you wonder Who in the seven fat worlds is this mysterious...? Ha! That was what I was thinking about you while you were thinking about me. You gerrit!? I heard you listening to my thoughts; I listened to yours too. I wonder if you heard me too. I would like to talk, today, about what it is I do at Debian as an Outreachy Intern under the JavaScript team. I woke up this morning and decided to bore you with so many details. I must have woken up glorified!

A Broader View My sole role at Debian alongside my teammate, aided by our mentors, is to facilitate the Node.js 16 and Webpack 5 transition. What exactly does that mean? Node.js 16, as of the time of this writing, is the active LTS release from the Node.js developers, while Webpack 5 is also the current release from the Webpack developers. At Debian we have to work towards supporting these packages. Debian as an OS comes with a package manager called the Advanced Package Tool, or simply APT, on which command-line programs specific to Debian and its many-flavored distributions - apt, apt-get, apt-cache - are based. This means that before the conception of yarn and npm, the typical JavaScript developer's package managers, apt already existed. Debian, unlike yarn and npm, ideally only supports one version of a piece of software at any point in time, and on edge cases may have to support an extra one, as noted in this chat between my mentor and a member. To provide support for Webpack 5 and Node.js 16, which as far as Debian is concerned are currently in experimental and can only be migrated to unstable after our transition, we have to test, reverse build, report and fix bugs till a certain level of compatibility has been attained with dependent packages currently in unstable. Webpack and Node.js have their respective dependencies, but there are certain software and packages also dependent on Webpack and/or Node.js; these are termed reverse-dependencies. We have to test and build these reverse-dependencies, and report and fix bugs and incompatibilities with the new versions of Webpack and Node.js. For reverse-dependent packages not yet supporting Webpack 5 and/or Node.js 16, we'll open an issue in the form of a feature request in the upstream repository asking for Webpack 5 and/or Node.js 16 support. Ideally, Debian manages a repository of all supported packages on a GitLab-managed Git based VCS. For JavaScript packages maintained by the JS Team, the home of those packages sits at https://salsa.debian.org/js-team/. Supported packages are pulled from the upstream repository, mostly GitHub, using certain packaging tools provided by Debian. The pulled source cannot be directly modified or it will break the build. So there exists a dedicated folder named debian where certain configuration files, scripts and rules to convey to the Debian package builder live. In some cases, source code needs to be modified; this is done via patching, which means the modifications won't live in the source but in a dedicated patch file inside the debian/patches/ folder. The modifications are diffed line by line with the original source (just as with git) and the result is output in a file managed by the Debian utility tool Quilt. The contents of the debian folder are instructions on how to build the source into binaries or an installable archive .deb (like Java's .jar or Android's .apk).

Understanding Debian Software Release Cycle There are quite some interesting things about the software release cycle at Debian to get familiar with. Listed here are some release repositories alongside their codenames as of Debian 11:
  1. Unstable (Sid)
  2. Testing (Bookworm)
  3. Stable (Bullseye)
  4. Old stable (Buster)
  5. Old old stable (Stretch)
Ha! Isn't it ironic that unstable is the only one with a stable codename? Some of these, if not all, have codenames subject to change after every new release and/or migration. Only unstable, which is referred to as Sid, never changes. The current stable release, which is Debian 11, is codenamed Bullseye. The next stable release, which will be Debian 12, will be codenamed Bookworm because the current testing repository will be migrated to stable and released as Debian 12. The previous stable release, which was Debian 10 and is now old stable, was codenamed Buster. To better understand Debian releases you may take a look at this wiki that completely defines them. Basically, as explained by one of my mentors and remixed in my own words, experimental software is migrated to unstable after (as I said earlier) it has attained a certain level of compatibility with dependent software. It remains in unstable for a long period of time undergoing testing, autopkgtest tests, regression tests, etc. At this point bugs are reported and fixed to a satisfactory level. The unstable repository is then migrated to testing, where release-critical bugs are reported and fixed to a satisfactory level at which one can comfortably say "testing is almost stable", and voilà (!), testing is released as a Debian stable version. This happens roughly every two years. Some months before a new stable release, a soft freeze is turned on such that no new versions or transitions should be uploaded to unstable. Only fixes will be uploaded at this point. In like 4-6 weeks before the release, a hard freeze is turned on that completely disallows uploading to unstable, not even fixes. In due time, testing becomes the new stable release and the freeze is lifted.

References
  1. Packaging pre-requisites
  2. Working with chroots
  3. Sbuild (clean builds)
  4. Updating a Debian Package by Abraham Raji

17 January 2022

Wouter Verhelst: Different types of Backups

In my previous post, I explained how I recently set up backups for my home server to be synced using Amazon's services. I received a (correct) comment on that by Iustin Pop which pointed out that while it is reasonably cheap to upload data into Amazon's offering, the reverse -- extracting data -- is not as cheap. He is right, in that extracting data from S3 Glacier Deep Archive costs over an order of magnitude more than it costs to store it there on a monthly basis -- in my case, I expect to have to pay somewhere in the vicinity of 300-400 USD for a full restore. However, I do not consider this to be a major problem, as these backups are only meant to cover the rarer of the two types of backup cases. There are two reasons why you should have backups. The first is the most common one: "oops, I shouldn't have deleted that file". This happens reasonably often; people will occasionally delete or edit a file that they did not mean to, and then they will want to recover their data. At my first job, a significant part of my job was to handle recovery requests from users who had accidentally deleted a file that they still needed. Ideally, backups to handle this type of situation are easily accessible to end users, and are performed reasonably frequently. A system that automatically creates and deletes filesystem snapshots (such as the zfsnap script for ZFS snapshots, which I use on my server) works well. The crucial bit here is to ensure that it is easier to copy an older version of a file than it is to start again from scratch -- if a user must file a support request that may or may not be answered within a day or so, it is likely they will not do so for a file they were working on for only half a day, which means they lose half a day of work in such a case. If, on the other hand, they can just go into the snapshots directory themselves and it takes them all of two minutes to copy their file, then they will also do that for files they only created half an hour ago, so they don't even lose half an hour of work and can get right back to it. This means that backup strategies to mitigate the "oops I lost a file" case ideally do not involve off-site file storage, and instead are performed online. The second case is the much rarer one, but (when required) has the much bigger impact: "oops, the building burned down". Variants of this can involve things like lightning strikes, thieves, earthquakes, and the like; in all cases, the point is that you want to be able to recover all your files, even if every piece of equipment you own is no longer usable. That being the case, you will first need to replace that equipment, which is not going to be cheap, and it is also not going to be an overnight thing. In order to still be useful after you have lost all your equipment, these backups must also be stored off-site, and should preferably be offline backups, too. Since replacing your equipment is going to cost you time and money, it's fine if restoring the backups is going to take a while -- you can't really restore from backup any time soon anyway. And since you will be unable to create new content for a number of days while you rebuild, it's also fine if you lose a few days of content that you will have to re-create.
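To make the "go into the snapshots directory yourself" path concrete, here is a minimal Python sketch of that first kind of recovery, assuming a ZFS dataset that exposes the usual hidden .zfs/snapshot/ directory and zfsnap-style snapshot names; all paths are purely illustrative:

```python
#!/usr/bin/env python3
"""Minimal sketch of the "oops, I deleted that file" recovery path: copy a
file back out of the newest ZFS snapshot that still contains it. Assumes
the dataset exposes the hidden .zfs/snapshot/ directory and that snapshot
names (as with zfsnap) embed a timestamp, so a reverse name sort is roughly
newest-first. All paths are illustrative."""

import shutil
from pathlib import Path


def restore_from_snapshot(dataset_root: Path, relative_path: str,
                          destination: Path) -> Path:
    snapdir = dataset_root / ".zfs" / "snapshot"
    # zfsnap-style names start with a timestamp, so reverse-sorting by name
    # approximates "most recent snapshot first".
    for snapshot in sorted(snapdir.iterdir(), reverse=True):
        candidate = snapshot / relative_path
        if candidate.exists():
            shutil.copy2(candidate, destination)
            return destination
    raise FileNotFoundError(f"{relative_path} not found in any snapshot")


if __name__ == "__main__":
    restore_from_snapshot(Path("/home"), "wouter/report.txt",
                          Path("/home/wouter/report.txt"))
```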
All in all, the two types of backups have opposing requirements: "oops I lost a file" backups should be performed often and should be easily available; "oops I lost my building" backups should not be easily available, and are ideally done less often, so you don't pay a large amount of money for storing your off-site copies. In my opinion, if you have good "lost my file" backups, then it's also fine if the recovery of your off-site backups is a bit more expensive. You don't expect to ever have to pay for these; you may end up in a situation where you don't have a choice, and then you'll be happy that the choice is there, but as long as you can reasonably pay for the worst-case scenario of a full restore, it's not a case you should worry about much. As such, and given that a full restore from Amazon Storage Gateway is going to be somewhere between 300 and 400 USD in my case -- a price I can afford, although it's not something I want to pay every day -- I don't think it's a major issue that extracting data is significantly more expensive than uploading data. But of course, this is something everyone should consider for themselves...

Matthew Garrett: Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling.

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.
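To illustrate the shape of step 2 above, here is a small Python sketch of the decision being proposed: compare the current PCR 7 value against the value recorded at sealing time, and fall back to a recovery flow on mismatch. It assumes a TPM 2.0 exposed via the kernel's /sys/class/tpm/tpm0/pcr-sha256/ interface, and the expected-value file and recovery hook are invented for the example; in a real implementation the TPM itself enforces this through the sealing policy rather than a userspace comparison.

```python
#!/usr/bin/env python3
"""Sketch of the decision in step 2: block disk-encryption key release and
redirect to recovery if PCR 7 no longer matches the value recorded when the
key was sealed. Reading PCRs from /sys/class/tpm/tpm0/pcr-sha256/ assumes a
TPM 2.0 and a reasonably recent kernel; the expected-value file and the
recovery hook are invented for the example. A real implementation would let
the TPM enforce this through the sealing policy itself."""

from pathlib import Path

PCR7 = Path("/sys/class/tpm/tpm0/pcr-sha256/7")
EXPECTED = Path("/var/lib/example-sealing/pcr7.expected")  # written at seal time


def pcr7_changed() -> bool:
    current = PCR7.read_text().strip().lower()
    expected = EXPECTED.read_text().strip().lower()
    return current != expected


if __name__ == "__main__":
    if pcr7_changed():
        # Firmware signing key or signature validity changed since sealing:
        # do not release the key; send the user to key recovery/attestation.
        print("PCR 7 mismatch: blocking key release, starting recovery flow")
    else:
        print("PCR 7 matches the sealed state: key release may proceed")
```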

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


16 January 2022

Chris Lamb: Favourite films of 2021

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' novels that I enjoyed reading the most in 2021. But in the very last of my 2021 roundup posts, I'll be going over some of my favourite movies. (Saying that, these are perhaps less my 'favourite films' than the ones worth remarking on; after all, nobody needs to hear that The Godfather is a good movie.) It's probably helpful to remark that I took a self-directed course in film history in 2021, based around the first volume of Roger Ebert's The Great Movies. This collection of 100-odd movie essays aims to make a tour of the landmarks of the first century of cinema, and I watched all but a handful before the year was out. I am slowly making my way through volume two in 2022. This tome was tremendously useful, and not simply due to the background context that Ebert added to each film: it also brought me into contact with films I would hardly have come across through other means. Would I have ever discovered the sly comedy of Trouble in Paradise (1932) or the touching proto-realism of L'Atalante (1934) any other way? It also helped me to 'get around' to watching films I may have put off watching forever: the influential Battleship Potemkin (1925), for instance, and the ur-epic Lawrence of Arabia (1962) spring to mind here. Choosing a 'worst' film is perhaps more difficult than choosing the best. There are first those that left me completely dry (Ready or Not, Written on the Wind, etc.), and those that were simply poorly executed. And there are those that failed to meet their own high opinions of themselves, such as the 'made for Reddit' Tenet (2020) or the inscrutable Vanilla Sky (2001), the latter being an almost perfect example of late-20th century cultural exhaustion. But I must save my most severe judgement for those films where I took a visceral dislike to how their subjects were portrayed. The sexually problematic Sixteen Candles (1984) and the pseudo-Catholic vigilantism of The Boondock Saints (1999) both spring to mind here, the latter of which combines so many things I dislike into such a short running time that I'd need an entire essay to adequately express how much I disliked it.

Dogtooth (2009) A father, a mother, a brother and two sisters live in a large and affluent house behind a very high wall and an always-locked gate. Only the father ever leaves the property, driving to the factory that he happens to own. Dogtooth goes far beyond any allusion to Josef Fritzl's cellar, though, as the children's education is a grotesque parody of home-schooling. Here, the parents deliberately teach their children the wrong meaning of words (e.g. a yellow flower is called a 'zombie'), all of which renders the outside world utterly meaningless and unreadable, and completely mystifies its very existence. It is this creepy strangeness within a 'regular' family unit in Dogtooth that is both socially and epistemically horrific, and I'll say nothing here of its sexual elements as well. Despite its cold, inscrutable and deadpan surreality, Dogtooth invites all manner of potential interpretations. Is this film about the artificiality of the nuclear family that the West insists is the benchmark of normality? Or is it, as I prefer to believe, something more visceral altogether: an allegory for the various forms of ontological violence wrought by fascism, as well as a sobering nod towards some of fascism's inherent appeals? (Perhaps it is both. In 1972, French poststructuralists Gilles Deleuze and Félix Guattari wrote Anti-Oedipus, which plays with the idea of the family unit as a metaphor for the authoritarian state.) The Greek-language Dogtooth, elegantly shot, thankfully provides no easy answers.

Holy Motors (2012) There is an infamous scene in Un Chien Andalou, the 1929 film collaboration between Luis Buñuel and famed artist Salvador Dalí. A young woman is cornered in her own apartment by a threatening man, and she reaches for a tennis racquet in self-defence. But the man suddenly picks up two nearby ropes and drags into the frame two large grand pianos... each laden with a dead donkey, a stone tablet, a pumpkin and a bewildered priest. This bizarre sketch serves as a better introduction to Leos Carax's Holy Motors than any elementary outline of its plot, which ostensibly follows 24 hours in the life of a man who must play a number of extremely diverse roles around Paris... all for no apparent reason. (And is he even a man?) Surrealism as an art movement gets a pretty bad rap these days, and perhaps justifiably so. But Holy Motors and Un Chien Andalou serve as a good reminder that surrealism can be, well, 'good, actually'. And if not quite high art, Holy Motors at least demonstrates that surrealism can still be unnerving and hilariously funny. Indeed, recalling the whimsy of the plot to a close friend, the tears of laughter came unbidden to my eyes once again. ("And then the limousines...!") Still, it is unclear how Holy Motors truly refreshes surrealism for the twenty-first century. Surrealism was, in part, a reaction to the mechanical and unfeeling brutality of World War I and ultimately sought to release the creative potential of the unconscious mind. Holy Motors cannot be responding to another continental conflagration, and so it appears to me to be some kind of commentary on the roles we exhibit in an era of 'post-postmodernity': a sketch on our age of performative authenticity, perhaps, or an idle doodle on the psychosocial function of work. Or perhaps not. After all, this film was produced in a time that offers the near-universal availability of mind-altering substances, and this certainly changes the context in which this film was both created and, how can I put it, intended to be watched.

Manchester by the Sea (2016) An absolutely devastating portrayal of a character who is unable to forgive himself and is hesitant to engage with anyone ever again. It features a near-ideal balance between portraying unrecoverable anguish and tender warmth, and is paradoxically grandiose in its subtle intimacy. The mechanics of life led me to watch this lying on a bed in a chain hotel by Heathrow Airport, and if this colourless circumstance blunted the film's emotional impact on me, I am probably thankful for it. Indeed, I find myself reduced in this review to fatuously recalling my favourite interactions instead of providing any real commentary. You could write a whole essay about one particular incident: its surfaces, subtexts and angles... all despite nothing of any substance ever being communicated. Truly stunning.

McCabe & Mrs. Miller (1971) Roger Ebert called this movie "one of the saddest films I have ever seen, filled with a yearning for love and home that will not ever come." But whilst it is difficult to disagree with his sentiment, Ebert's choice of 'sad' is somehow not quite the right word. Indeed, I've long regretted that our dictionaries don't have more nuanced blends of tragedy and sadness; perhaps the Ancient Greeks can loan us some. Nevertheless, the plot of this film concerns a gambler and a prostitute who become business partners in a new and remote mining town called Presbyterian Church. However, as their town and enterprise boom, they come to the attention of a large mining corporation that wants to bully or buy its way into the action. What makes this film stand out is not the plot itself, however, but its mood and tone: the town and its inhabitants seem to be thrown together out of raw lumber, covered alternately in mud or frozen ice, and their days (and their personalities) are both short and dark in equal measure. As a brief aside, if you haven't seen a Robert Altman film before, this has all the trappings of being a good introduction. As Ebert went on to observe: "This is not the kind of movie where the characters are introduced. They are all already here." Furthermore, we can see some of Altman's trademarks: conversations that overlap, a superb handling of ensemble casts, and a quietly subversive view of the tyranny of 'genre'... and the latter at a time when the appetite for revisionist portrayals of the West was not very strong. All of these 'Altmanian' trademarks can be ordered in much stronger measures in his later films: in particular, his comedy-drama Nashville (1975) has 24 main characters, and my jejune interpretation of Gosford Park (2001) is that it is purposefully designed to poke fun at those who take a reductionist view of 'genre', or at least at the audience's expectations. (In this case, an Edwardian-era English murder mystery in the style of Agatha Christie, but where no real murder or detection really takes place.) On the other hand, McCabe & Mrs. Miller is actually a poor introduction to Altman. The story is told at a suitably deliberate and slow tempo, and the two stars of the film are shown thoroughly defrocked of any 'star status', in both the visual and moral dimensions. All of these traits are, however, this film's strength, adding up to a credible, fascinating and riveting portrayal of the old West.

Detour (1945) Detour was filmed in less than a week, and it's difficult to decide whether the actors or the screenplay is its weakest point... Yet it still somehow seemed to drag me in. The plot revolves around luckless Al, who is hitchhiking to California. Al gets a lift from a man called Haskell who quickly falls down dead from a heart attack. Al quickly buries the body and takes Haskell's money, car and identification, believing that the police would otherwise assume Al murdered him. An unstable element is soon introduced in the guise of Vera, who, through a set of coincidences that stretches credulity, knows that this 'new' Haskell (i.e. Al pretending to be him) is not who he seems. Vera then attaches herself to Al in order to blackmail him, and the world starts to spin out of his control. It must be understood that none of this is executed very well. Rather, what makes Detour so interesting to watch is that its 'errors' lend a distinctively creepy and unnatural hue to the film. Indeed, in the early twentieth century, Sigmund Freud used the word unheimlich to describe the experience of something that is not simply mysterious, but creepy in a strangely familiar way. This is almost the perfect description of watching Detour: its eerie nature means that we are not only frequently second-guessing where the film is going, but are often uncertain whether we are watching the usual objective perspective offered by cinema. In particular, are all the ham-fisted segues, stilted dialogue and inscrutable character motivations actually a product of Al inventing a story for the viewer? Did he murder Haskell after all, despite the film 'showing' us that Haskell died of natural causes? In other words, are we watching what Al wants us to believe? Regardless of the answers to these questions, the film succeeds precisely because of its accidental or inadvertent choices, so it is an implicit reminder that seeking the director's original intention in any piece of art is a complete mirage. Detour is certainly not a good film, but it just might be a great one. (It is a short film too, and, being out of copyright, it is available online for free.)

Safe (1995) Safe is a subtly disturbing film about an upper-middle-class housewife who begins to complain about vague symptoms of illness. Initially claiming that she doesn't "feel right," Carol starts to have unexplained headaches, a dry cough and nosebleeds, and eventually begins to have trouble breathing. Carol's family doctor treats her concerns with little care, and suggests to her husband that she see a psychiatrist. Yet Carol's episodes soon escalate. For example, as a 'homemaker' with nothing else to occupy her, Carol orders a new couch for a party. But when the store delivers the wrong one (although it is not altogether clear that they did), Carol has a near breakdown. Unsure where to turn, Carol sees an 'allergist', who tells her she has "Environmental Illness," and so Carol eventually checks herself into a new-age commune filled with alternative therapies. On the surface, Safe is thus a film about the increasing amount of pesticides and chemicals in our lives, something that was clearly felt far more viscerally in the 1990s. But it is also a film about how the lack of genuine healthcare for women must be seen as a critical factor in the rise of crank medicine. (Indeed, it made for something of an uncomfortable watch during the coronavirus lockdown.) More interestingly, however, Safe gently-yet-critically examines the psychosocial causes that may be aggravating Carol's illnesses, including her vacant marriage, her hollow friends and the 'empty calorie' stimulus of suburbia. None of this should be especially new to anyone: the gendered Victorian term 'hysterical' is all but spoken throughout this film, and perhaps since the very invention of modern medicine, women's symptoms have regularly been minimised or outright dismissed. (Hilary Mantel's 2003 memoir, Giving Up the Ghost, is especially harrowing on this.) As I said in opening this review, the film is subtle in its messaging. Just to take one example from many, the sound of the cars is always just a fraction too loud: there's a scene where a group is eating dinner with a road in the background, and the total effect can be seen as representing the toxic fumes of modernity invading our social lives and health. I won't spoil the conclusion of this quietly devastating film, but don't expect a happy ending.

The Driver (1978) Critics grossly misunderstood The Driver when it was first released. They interpreted the cold and unemotional affect of the characters as a lack of developmental depth, rather than as representing their dissociation from the society around them. This reading was encouraged by the fact that the principal actors aren't given real names and are instead known simply by their archetypes: 'The Driver', 'The Detective', 'The Player' and so on. This sort of quasi-Jungian erudition is common in many crime films today (Reservoir Dogs, Kill Bill, Layer Cake, Fight Club), so the critics' misconceptions were entirely reasonable in 1978. The plot of The Driver involves the eponymous Driver, a noted getaway driver for robberies in Los Angeles. His exceptional talent has prevented him from being captured thus far, so the Detective attempts to catch the Driver by pardoning another gang if they help convict the Driver via a set-up robbery. To give himself an edge, however, The Driver seeks help from the femme fatale 'Player' in order to mislead the Detective. If this all sounds eerily familiar, you would not be far wrong. The film was essentially remade by Nicolas Winding Refn as Drive (2011) and in Edgar Wright's 2017 Baby Driver. Yet The Driver offers something that these neon-noir variants do not. In particular, the car chases around Los Angeles are some of the most captivating I've seen: they aren't thrilling in the sense of tyre squeals, explosions and flying boxes, but rather the vehicles come across like wild animals hunting one another. This feels especially so when the police are hunting The Driver, which feels less like a low-stakes game of cat and mouse than like a pack of feral animals working together: a gang who will tear apart their prey if they find him. In contrast to the undercar neon glow of the Fast & Furious franchise, the urban-realist backdrop of The Driver's LA metropolis contributes to a sincere feeling of artistic fidelity as well. To be sure, most of this is present in the truly excellent Drive, where the chase scenes do really communicate a credible sense of stakes. But the substitution of The Driver's grit with Drive's soft neon tilts it slightly towards that common affliction of crime movies: style over substance. Nevertheless, I can highly recommend watching The Driver and Drive together, as it can tell you a lot about the disconnected socioeconomic practices of the 1980s compared to the 2010s. More than that, however, the pseudo-1980s synthwave soundtrack of Drive captures something crucial to analysing the world of today. In particular, these 'sounds from the past filtered through the present' bring to mind the increasing role of nostalgia for lost futures in the culture of today, where temporality and pop culture references are almost exclusively citational and commemorational.

The Souvenir (2019) The ostensible outline of this quietly understated film follows a shy but ambitious film student who falls into an emotionally fraught relationship with a charismatic but untrustworthy older man. But that doesn't quite cover the plot at all, for not only is The Souvenir a film about a young artist who is inspired, derailed and ultimately strengthened by a toxic relationship, it is also partly a coming-of-age drama, a subtle portrait of class and, finally, a film about the making of a film. Still, one of the geniuses of this truly heartbreaking movie is that none of these many elements crowds out the other. It never, ever feels rushed. Indeed, there are many scenes where the camera simply 'sits there' and quietly observes what is going on. Other films might smother themselves through references to 18th-century oil paintings, but The Souvenir somehow evades this too. And there's a certain ring of credibility to the story as well, no doubt in part due to the fact it is based on director Joanna Hogg's own experiences at film school. A beautifully observed and multi-layered film; I'll be happy if the sequel is one-half as good.

The Wrestler (2008) Randy 'The Ram' Robinson is long past his prime, but he is still rarin' to go in the local pro-wrestling circuit. Yet after a brutal beating that seriously threatens his health, Randy hangs up his tights and pursues a serious relationship... and even tries to reconnect with his estranged daughter. But Randy can't resist the lure of the ring, and readies himself for a comeback. The stage is thus set for Darren Aronofsky's The Wrestler, which is essentially about what drives Randy back to the ring. To be sure, Randy derives much of his money from wrestling, as well as his 'fitness', self-image, self-esteem and self-worth. Oh, it's no use insisting that wrestling is fake, for the sport is, needless to say, Randy's identity; it's not for nothing that this film is called The Wrestler. In a number of ways, The Sound of Metal (2019) is both a reaction to (and a quiet remake of) The Wrestler, if only because both movies utilise 'cool' professions to explore such questions of identity. But perhaps simply when The Wrestler was produced makes it the superior film. Indeed, the role of time feels very important for The Wrestler. In the first instance, time is clearly taking its toll on Randy's body, but I felt it more strongly in the sense that this is very much a pre-2008 film, released on the cliff-edge of the global financial crisis and the concomitant precarity of the 2010s. Indeed, it is curious to consider that you couldn't make The Wrestler today, although not because the relationship to work has changed in any fundamental way. (Indeed, isn't it somewhat depressing to realise that, the start of the pandemic and the 'work from home' trend to one side, we now require even more people to wreck their bodies and mental health to cover their bills?) No, what I mean to say here is that, post-2016, you cannot portray wrestling on-screen without, how can I put it, unwelcome connotations. All of which then reminds me of Minari's notorious red hat... But I digress. The Wrestler is a grittily stark, darkly humorous look into the life of a desperate man and a sorrowful world, all through one tragic profession.

Thief (1981) Frank is an expert professional safecracker who specialises in high-profile diamond heists. He plans to use his ill-gotten gains to retire from crime and build a life for himself with a wife and kids, so he signs on with a top gangster for one last big score. This, of course, could be the plot of any number of heist movies, but Thief does something different. Similar to The Wrestler and The Driver (see above) and a number of other films that I watched this year, Thief seems to be saying something about our relationship to work and family in modernity and postmodernity. Indeed, the 'heist film', we are told, is an understudied genre, but part of the pleasure of watching these films is said to arise from how they portray our desired relationship to work. In particular, Frank's desire to pull off that last big job feels less about the money it would bring him than a displacement from (or proxy for) fulfilling some deep-down desire to have a family or indeed any relationship at all. Because in theory, of course, Frank could enter into a fulfilling long-term relationship right away, without stealing millions of dollars in diamonds... but that's kinda the entire point: Frank needing just one more theft is an excuse to not pursue a relationship and put it off indefinitely in favour of 'work'. (And, these being Federal crimes, it also means Frank cannot put down meaningful roots in a community.) All this is communicated extremely subtly in the justly-lauded lowkey diner scene, by far the best scene in the movie. The visual aesthetic of Thief is as if you set The Warriors (1979) in a similarly filthy Chicago, with the Xenophon-inspired plot of The Warriors replaced with an almost deliberate lack of plot development... and the allure of The Warriors' fantastical criminal gangs (with their alluringly well-defined social identities) substituted by a bunch of amoral individuals with no solidarity beyond the immediate moment. A tale of our time, perhaps. I should warn you that the ending of Thief is famously weak, but this is a gritty, intelligent and strangely credible heist movie before you get there.

Uncut Gems (2019) The most exhausting film I've seen in years; the cinematic equivalent of four cups of double espresso. I didn't even bother trying to sleep after downing Uncut Gems late one night. Directed by the Safdie brothers, it often felt like I was watching two films that had been made at the same time. (Or do I mean two films at 2X speed?) No, whatever clumsy metaphor you choose to adopt, the unavoidable effect of this film's finely-tuned chaos is an uncompromising and anxiety-inducing piece of cinema. The plot follows Howard, a man lost to his countless vices, mostly gambling, with a significant side hustle in adultery, but you get the distinct impression he would be happy with anything that would give him another high. A true junkie's junkie, you might say. You know right from the beginning that it's going to end in some kind of disaster; the only question remaining is precisely how, and what kind. Portrayed by an (almost unrecognisable) Adam Sandler, there's an uncanny sense of distance in the emotional chasm between 'Sandler-as-junkie' and 'Sandler-as-regular-star-of-goofy-comedies'. Yet instead of being distracting and reducing the film's affect, this possibly-deliberate intertextuality somehow adds to the masterfully-controlled mayhem. My heart races just at the memory. Oof.

Woman in the Dunes (1964) I ended up watching three films that feature sand this year: Denis Villeneuve's Dune (2021), Lawrence of Arabia (1962) and Woman in the Dunes. But it is this last film, from 1964 and directed by Hiroshi Teshigahara, that will stick in my mind in the years to come. Sure, there is none of the Medician intrigue of Dune or the Super Panavision-70 of Lawrence of Arabia (or its quasi-orientalist score, itself likely stolen from Anton Bruckner's 6th Symphony), but Woman in the Dunes doesn't have to assert its confidence so boldly, and it reveals the enormity of its plot slowly and deliberately instead. Woman in the Dunes never rushes to get to the film's central dilemma, and it uncovers its terror in little hints and insights, all whilst establishing the daily rhythm of life. Woman in the Dunes has something of the same uncanny horror as Dogtooth (see above), as well as its broad range of potential interpretations. Both films permit a wide array of readings without resorting to being deliberately obscurantist or just plain random; it is perhaps for this reason that I enjoyed them so much. It is true that asking 'So what does the sand mean?' sounds tediously sophomoric shorn of any context, but it somehow applies to this thoughtfully self-contained piece of cinema.

A Quiet Place (2018) Although A Quiet Place was not actually one of the best films I saw this year, I'm including it here as it is certainly one of the better 'mainstream' Hollywood franchises I came across. Not only is the film very ably constructed and engaging on a visceral level, but it is also rare that I can empathise with the peril of conventional horror movies (I usually prefer to focus on their cultural and political aesthetics), and I did here. The conceit of this particular post-apocalyptic world is that a family is forced to live in almost complete silence while hiding from creatures that hunt by sound alone. Still, A Quiet Place engages on an intellectual level too, and this probably works in tandem with the purely 'horrorific' elements to make it stick in your mind. In particular, and to my mind at least, A Quiet Place is a deeply American conservative film below the surface: it exalts the family structure and a certain kind of sacrifice for your family. (The music often had a passacaglia-like strain too, forming a tombeau for America.) Moreover, you survive in this dystopia by staying quiet, that is to say, by staying stoic, suggesting that in the wake of any conflict that might beset the world, the best thing to do is to keep quiet. Even communicating with your loved ones can be deadly to both of you, so do not emote, acquiesce quietly to your fate, and don't, whatever you do, speak up. (Or join a union.) I could go on, but A Quiet Place is more than this. It's taut and brief, and despite cinema being an increasingly visual medium, it encourages its audience to develop a new relationship with sound.

14 January 2022

Norbert Preining: Future of my packages in Debian

After having been (again) demoted (timed perfectly to my round birthday!) based on flimsy arguments, I have been forced to rethink the level of contribution I want to make to Debian. Considering in particular that I have switched my main desktop to dual-boot into Arch Linux (all on the same btrfs fs with subvolumes, great!) and have run Arch now for several days exclusively, I think it is time to review the packages I am somehow responsible for (full list of packages). After about 20 years in Debian, it is time to send off quite some stuff that has accumulated over time. KDE/Plasma, frameworks, Gears, and related packages All these packages are group maintained, so there is not much to worry about. Furthermore, a few new faces have joined the team and are actively working on the packages, although mostly on Qt6. I guess that with me not taking action, frameworks, gears, and plasma will fall behind over time (frameworks: Debian 5.88 versus current 5.90, gears: Debian 21.08 versus current 21.12, plasma up to date at the moment). With respect to my packages on OBS, they will probably also go stale over time. Using Arch nowadays I lack the development tools necessary to build Debian packages, and above all, the motivation. I am sorry for all those who have learned to rely on my OBS packages over the last years, bringing modern and up-to-date KDE/Plasma to Debian/stable; please direct your complaints at the responsible entities in Debian. Cinnamon As I have written already here, I have reduced my involvement quite a lot, and nowadays Fabio and Joshua are doing the work. But both are not even DM (AFAIR) and I am the only one doing uploads (I got DM upload permissions for it). But I am not sure how long I will continue doing this. This also means that in the near future, Cinnamon will also go stale. TeX related packages Hilmar has DM upload permissions and is very actively caring for the packages, so I don't see any source of concern here. New packages will need to find a new uploader, though. With myself also being part of upstream, I can surely help out in the future with difficult problems. Calibre and related packages Yokota-san (another DM I have sponsored) has DM upload permissions and is very actively caring for the packages, so here too there is not much of concern. Onedrive This is already badly outdated, and I recommend using the OBS builds, which are current and provide binaries for Ubuntu and Debian for various versions. ROCm Here, fortunately, a new generation of developers has taken over maintenance and everything is going smoothly, much better than I could have done; yeah to that! Qalculate related packages These are group maintained, but unfortunately nobody else but me has touched the repos for quite some time. I fear that the packages will go stale rather soon. isync/mbsync I have recently salvaged this package, and use it daily, but I guess it needs to be orphaned sooner or later. CafeOBJ While I am also part of upstream here, I guess it will be orphaned. Julia Julia is group maintained, but unfortunately nobody else but me has touched the repo for quite some time, and we are already far behind the normal releases (and julia got removed from testing). It will go stale/orphaned; I recommend installing upstream binaries. python-mechanize Another package that is group maintained in the Python team, but with only me as uploader I guess it will go stale and effectively be orphaned soon. xxhash Has already been orphaned. qpdfview No upstream development, so not much to do, but it will be orphaned, too.

11 January 2022

Russ Allbery: Review: Hench

Review: Hench, by Natalie Zina Walschots
Publisher: William Morrow
Copyright: September 2020
ISBN: 0-06-297859-4
Format: Kindle
Pages: 403
Anna Tromedlov is a hench, which means she does boring things for terrible people for money. Supervillains need a lot of labor to keep their bases and criminal organizations running, and they get that labor the same way everyone else does: through temporary agencies. Anna does spreadsheets, preferably from home on her couch. On-site work was terrifying and she tried to avoid it, but the lure of a long-term contract was too strong. The Electric Eel, despite being a creepy sleazeball, seemed to be a manageable problem. He needed some support at a press conference, which turns out to be code for being a diversity token in front of the camera, but all she should have to do is stand there. That's how Anna ends up holding the mind control device to the head of the mayor's kid when the superheroes attack, followed shortly by being thrown across the room by Supercollider. Left with a complex fracture of her leg that will take months to heal, a layoff notice and a fruit basket from Electric Eel's company, and a vaguely menacing hospital conversation with the police (including Supercollider in a transparent disguise) in which it's made clear to her that she is mistaken about Supercollider's hand-print on her thigh, Anna starts wondering just how much damage superheroes have done. The answer, when analyzed using the framework for natural disasters, is astonishingly high. Anna's resulting obsession with adding up the numbers leads to her starting a blog, the Injury Report, with a growing cult following. That, in turn, leads to a new job and a sponsor: the mysterious supervillain Leviathan. To review this book properly, I need to talk about Watchmen. One of the things that makes superheroes interesting culturally is the straightforwardness of their foundational appeal. The archetypal superhero story is an id story: an almost pure power fantasy aimed at teenage boys. Like other pulp mass media, they reflect the prevailing cultural myths of the era in which they're told. World War II superheroes are mostly all-American boy scouts who punch Nazis. 1960s superheroes are a more complex mix of outsider misfits with a moral code and sarcastic but earnestly ethical do-gooders. The superhero genre is vast, with numerous reinterpretations, deconstructions, and alternate perspectives, but its ur-story is a good versus evil struggle of individual action, in which exceptional people use their powers for good to defeat nefarious villains. Watchmen was not the first internal critique of the genre, but it was the one that everyone read in the 1980s and 1990s. It takes direct aim at that moral binary. The superheroes in Watchmen are not paragons of virtue (some of them are truly horrible people), and they have just as much messy entanglement with the world as the rest of us. It was superheroes re-imagined for the post-Vietnam, post-Watergate era, for the end of the Cold War when we were realizing how many lies about morality we had been told. But it still put superheroes and their struggles with morality at the center of the story. Hench is a superhero story for the modern neoliberal world of reality TV and power inequality in the way that Watchmen was a superhero story for the Iran-Contra era and the end of the Cold War. Whether our heroes have feet of clay is no longer a question. Today, a better question is whether the official heroes, the ones that are celebrated as triumphs of individual achievement, are anything but clay. 
Hench doesn't bother asking whether superheroes have fallen short of their ideal; that answer is obvious. What Hench asks instead is a question familiar to those living in a world full of televangelists, climate denialism, manipulative advertising, and Facebook: are superheroes anything more than a self-perpetuating scam? Has the good superheroes supposedly do ever outweighed the collateral damage? Do they care in the slightest about the people they're supposedly protecting? Or is the whole system of superheroes and supervillains a performance for an audience, one that chews up bystanders and spits them out mangled while delivering simplistic and unquestioned official morality? This sounds like a deeply cynical premise, but Hench is not a cynical book. It is cynical about superheroes, which is not the same thing. The brilliance of Walschots's approach is that Anna has a foot in both worlds. She works for a supervillain and, over the course of the book, gains access to real power within the world of superheroic battles. But she's also an ordinary person with ordinary problems: not enough money, rocky friendships, deep anger at the injustices of the world and the way people like her are discarded, and now a disability and PTSD. Walschots perfectly balances the tension between those worlds and maintains that tension straight to the end of the book. From the supervillain world, Anna draws support, resources, and a mission, but all of the hope, true morality, and heart of this book comes from the ordinary side. If you had the infrastructure of a supervillain at your disposal, what would you do with it? Anna's answer is to treat superheroes as a destructive force like climate change, and to do whatever she can to drive them out of the business and thus reduce their impact on the world. The tool she uses for that is psychological warfare: make them so miserable that they'll snap and do something too catastrophic to be covered up. And the raw material for that psychological warfare is data. That's the foot in the supervillain world. In descriptions of this book, her skills with data are often called her superpower. That's not exactly wrong, but the reason why she gains power and respect is only partly because of her data skills. Anna lives by the morality of the ordinary people world: you look out for your friends, you treat your co-workers with respect as long as they're not assholes, and you try to make life a bit better for the people around you. When Leviathan gives her the opportunity to put together a team, she finds people with skills she admires, funnels work to people who are good at it, and worries about the team dynamics. She treats the other ordinary employees of a supervillain as people, with lives and personalities and emotions and worth. She wins their respect. Then she uses their combined skills to destroy superhero lives. I was fascinated by the moral complexity in this book. Anna and her team do villainous things by the morality of the superheroic world (and, honestly, by the morality of most readers), including some things that result in people's deaths. By the end of the book, one could argue that Anna has been driven by revenge into becoming an unusual sort of supervillain. And yet, she treats the people around her so much better than either the heroes or the villains do. Anna is fiercely moral in all the ordinary person ways, and that leads directly to her becoming a villain in the superhero frame. 
Hench doesn't resolve that conflict; it just leaves it on the page for the reader to ponder. The best part about this book is that it's absurdly grabby, unpredictable, and full of narrative momentum. Walschots's pacing kept me up past midnight a couple of times and derailed other weekend plans so that I could keep reading. I had no idea where the plot was going even at the 80% mark. The ending is ambiguous and a bit uncomfortable, just like the morality throughout the book, but I liked it the more I thought about it. One caveat, unfortunately: Hench has some very graphic descriptions of violence and medical procedures, and there's an extended torture sequence with some incredibly gruesome body horror that I thought went on far too long and was unnecessary to the plot. If you're a bit squeamish like I am, there are some places where you'll want to skim, including one sequence that's annoyingly intermixed with important story developments. Otherwise, though, this is a truly excellent book. It has a memorable protagonist with a great first-person voice, an epic character arc of empowerment and revenge, a timely take on the superhero genre that uses it for sharp critique of neoliberal governance and reality TV morality, a fascinatingly ambiguous and unsettled moral stance, a gripping and unpredictable plot, and some thoroughly enjoyable competence porn. I had put off reading it because I was worried that it would be too cynical or dark, but apart from the unnecessary torture scene, it's not at all. Highly recommended. Rating: 9 out of 10

9 January 2022

Matthew Garrett: Pluton is not (currently) a threat to software freedom

At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on page 12 and 13 of this slidedeck, suggest that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There are various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)


8 January 2022

John Goerzen: Make the Internet Yours Again With an Instant Mesh Network

I'm going to lead with the technical punch line, and then explain it: Yggdrasil Network is an opportunistic mesh that can be deployed privately or as part of a global-scale network. Each node gets a stable IPv6 address (or even an entire /64) that is derived from its public key, is bound to that node as long as the node wants it (of course, it can generate a new keypair anytime), and is valid wherever the node joins the mesh. All traffic is end-to-end encrypted. Yggdrasil will automatically discover peers on a LAN via broadcast beacons, and requires zero configuration to peer in such a way. It can also run as an overlay network atop the public Internet. Public peers serve as places to join the global network, and since it's a mesh, if one device on your LAN joins the global network, the others will automatically have visibility on it also, thanks to the mesh routing. It neatly solves a lot of problems of portability (my ssh sessions stay live as I move networks, for instance), VPN (incoming ports aren't required since local nodes can connect to a public peer via an outbound connection), security, and so forth.

Now on to the explanation:

The Tyranny of IP rigidity

Every device on the Internet, at one time, had its own globally-unique IP address. This number was its identifier to the world; with an IP address, you can connect to any machine anywhere. Even now, when you connect to a computer to download a webpage or send a message, under the hood, your computer is talking to the other one by IP address. Only, now it's hard to get one. The Internet protocol we all grew up with, version 4 (IPv4), didn't have enough addresses for the explosive growth we've seen. Internet providers and IT departments had to use a trick called NAT (Network Address Translation) to give you a sort of fake IP address, so they could put hundreds or thousands of devices behind a single public one. That, plus the mobility of devices changing IPs whenever they change locations, has meant that a fundamental rule of the old Internet is now broken: every participant is an equal peer. (Well, not any more.) Nowadays, you can't host your own website from your phone. Or share files from your house. (Without, that is, the use of some third-party service that locks you down and acts as an intermediary.) Back in the 90s, I worked at a university, and I, like every other employee, had a PC on my desk with an unfirewalled public IP. I installed a webserver, and poof: instant website. Nowadays, running a website from home is just about impossible. You may not have a public IP, and if you do, it likely changes from time to time. And even then, your ISP probably blocks you from running servers on it. In short, you have to buy your way into the resources to participate on the Internet. I wrote about these problems in more detail in my article Recovering Our Lost Free Will Online.

Enter Yggdrasil

I already gave away the punch line at the top. But what does all that mean?
  • Every device that participates gets an IP address that is fully live on the Yggdrasil network.
  • You can host a website, or a mail server, or whatever you like with your Yggdrasil IP.
  • Encryption and authentication are smaller (though not nonexistent) worries thanks to the built-in end-to-end encryption.
  • You can travel the globe, and your IP will follow you: onto a plane, from continent to continent, wherever. Yggdrasil will find you.
  • I've set up /etc/hosts on my laptop to use the Yggdrasil IPs for other machines on my LAN. Now I can just ssh foo and it will work: from home, from a coffee shop, from a 4G tether, wherever. (A small sketch of serving something over a Yggdrasil address follows below.)

Now, other tools like tinc can do this, obviously. And I could stop there; I could have a completely closed, private Yggdrasil network. Or, I can join the global Yggdrasil network. Each device, in addition to accepting peers it finds on the LAN, can also be configured to establish outbound peering connections or accept inbound ones over the Internet. Put a public peer or two in your configuration and you've joined the global network. Most people will probably want to do that on every device (because why not?), but you could also do that from just one device on your LAN. Again, there's no need to explicitly build routes via it; your other machines on the LAN will discover the route's existence and use it. This is one of many projects that are working to democratize and decentralize the Internet. So far, it has been quite successful, growing to over 2000 nodes. It is the direct successor to the earlier cjdns/Hyperboria and BATMAN networks, and aims to be both a proof of concept and a viable tool for global expansion. Finally, think about how much easier development is when you don't necessarily have to worry about TLS complexity in every single application. When you don't have to worry about port forwarding and firewall penetration. It's what the Internet should be.
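As a concrete (if contrived) sketch of what that buys you: below is an ordinary Node HTTP server bound to a node's Yggdrasil address. The address shown is a made-up placeholder (real ones live in 200::/7 and are derived from the node's public key); any other node on the mesh, local or global, can then reach the service with no port forwarding, because the overlay handles routing and encryption.

    import { createServer } from "node:http";

    // Placeholder Yggdrasil IPv6 address for this node (not a real one).
    const YGG_ADDR = "200:1234:5678:9abc:def0:1234:5678:9abc";

    // An ordinary HTTP server, bound to the mesh address like any other IPv6
    // address. Reachable from any peer as http://[200:1234:...]:8080/
    createServer((req, res) => {
      res.end("hello from the mesh\n");
    }).listen(8080, YGG_ADDR);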

    Ayoyimika Ajibade: Nodejs 16 and Webpack 5 transition in Debian

    What is Debian? Debian, also known as Debian GNU/Linux, is a free, open-source operating system (OS) currently based on the Linux kernel or the FreeBSD kernel, developed by the community-supported Debian Project; efforts are also in place to provide Debian for other kernels, primarily the Hurd.

    Fun facts about Debian
    • Debian was the first Linux distribution to include a package management system for easy installation and removal of software. It was also the first Linux distribution that could be upgraded without requiring reinstallation.
    • To protect your system against Trojan horses and other malevolent software, Debian's servers verify that uploaded packages come from their registered Debian maintainers.
    • Debian comes with over 59000 packages as of this writing (precompiled software that is bundled up in a nice format for easy installation on your machine), a package manager (APT), and other utilities that make it possible to manage thousands of packages on thousands of computers as easily as installing a single application. All of it is FREE!
    • Debian is also the basis for many other distributions, most notably Ubuntu.

    What is webpack? Webpack is a static module bundler for modern JavaScript applications. When webpack processes your application, it internally builds a dependency graph from one or more entry points and then combines every module your project needs into one or more bundles, which are static assets to serve your content from.
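    For readers who haven't seen one, here is a minimal sketch of a webpack 5 configuration (assuming a hypothetical entry file at ./src/index.js): webpack walks the dependency graph starting from entry and emits the resulting bundle into dist/.

        // webpack.config.ts - a minimal sketch, not a complete real-world config
        import * as path from "path";
        import type { Configuration } from "webpack";

        const config: Configuration = {
          mode: "production",
          entry: "./src/index.js",        // root of the dependency graph
          output: {
            path: path.resolve(__dirname, "dist"),
            filename: "bundle.js",        // the static asset webpack emits
          },
        };

        export default config;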

    What is nodejs? Node.js is an open-source, cross-platform JavaScript runtime built on Chrome's V8 JavaScript engine for easily building fast and scalable network applications, including server-side applications. Here, JavaScript code is no longer limited to the traditional method of running in the web browser.
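    As a tiny illustration (a hypothetical hello.ts, run with Node after compiling), the same JavaScript/TypeScript code runs directly on the machine, with access to OS facilities a browser would never expose:

        // hello.ts - runs outside any browser, with access to the OS
        import { hostname } from "node:os";

        console.log(`Hello from Node.js on ${hostname()}`);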

    What does transitioning mean in Debian? Transitioning is a concept in Debian about maintaining only one version of a library like webpack or nodejs. There is a bottleneck, as other libraries and applications may not support the version we have in Debian, so we have to port that software to the new version. For example, node-mini-css-extract-plugin, node-mermaid and many other packages use webpack. In buster we had webpack 4, and in bullseye we want to update it to webpack 5. node-mini-css-extract-plugin already supports webpack 5, but others like node-mermaid don't support it yet. So either we wait or we help those projects update their webpack version. Check out this chat between my mentor and a community member on the transitioning of rails 6.

    Getting Started with Creating or Updating Packages in Debian: To be able to create or maintain packages suitable for uploading to Debian you must be in a sid/unstable environment or distribution. See the recommended instructions on how to set up Debian sid via this link, see this link on how to debianize a new package, and see this link for brief steps on how to update a package to its new upstream version. For more detailed content on the whys and hows of updating a package to its new upstream version, visit here. Note: in updating to the new upstream version we have to watch out for breaking changes caused by either minor or major updates. As per https://semver.org, major updates of libraries at version 1.0 or greater (e.g. if the current version is 2.3.4, then 3.0 is a major update) and minor updates of libraries below version 1.0 (e.g. if the current version is 0.10, then 0.11 is a minor update) can have breaking changes.
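    A small sketch of that rule (illustrative only, ignoring patch versions and pre-release tags): given a current and a candidate version, decide whether the upgrade may contain breaking changes.

        // May this upgrade contain breaking changes, per the rule above?
        function mayBreak(current: string, next: string): boolean {
          const [curMajor, curMinor] = current.split(".").map(Number);
          const [nextMajor, nextMinor] = next.split(".").map(Number);
          if (curMajor >= 1) {
            // At or above 1.0, a major bump signals potential breakage.
            return nextMajor > curMajor;
          }
          // Below 1.0, a minor bump can already break the API.
          return nextMajor > curMajor || nextMinor > curMinor;
        }

        console.log(mayBreak("2.3.4", "3.0.0"));   // true  (major bump)
        console.log(mayBreak("0.10.0", "0.11.0")); // true  (minor bump below 1.0)
        console.log(mayBreak("2.3.4", "2.4.0"));   // false (minor bump at >= 1.0)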

    The overall flow of the webpack 5 and nodejs 16 transitions in Debian: After grasping the fundamental process and flow of how to update a package, you are well on your way to transitioning. Transitioning webpack or nodejs involves building and testing the packages that depend on webpack or nodejs respectively, called reverse dependencies. These reverse dependencies are built and tested against the new version, which is usually uploaded to the experimental distribution. If the reverse dependencies build and test successfully, both the reverse dependencies and the dependency itself (in this case nodejs or webpack) are then uploaded to the unstable/sid distribution for further processing.

    The major guidelines to follow while transitioning are:
    • Find the list of reverse dependencies to fix (a sketch for listing them follows below)
    • See if new upstream versions of the reverse dependencies are available that support the version being transitioned to
    • Check whether those reverse dependencies actually build and work with the new version
    • Report bugs found while rebuilding and testing reverse dependencies in Debian
    • Forward bugs found while rebuilding and testing reverse dependencies upstream
    • Fix or update packages and forward patches upstream
    After a successful transitioning phase, users of the Debian OS have access to the latest as well as older versions of these packages via apt, based on their preferences. This means more features, bug fixes and updated security patches from those packages, all made possible by a community of amazing people.
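    As promised above, here is a small sketch of the first step, listing reverse dependencies by shelling out to apt-cache rdepends on a Debian system (the package name webpack is an assumption here; substitute whichever package is being transitioned):

        import { execSync } from "node:child_process";

        // List the reverse dependencies of a package so they can be rebuilt
        // and tested against the new version. Assumes apt-cache is available.
        function reverseDeps(pkg: string): string[] {
          const out = execSync(`apt-cache rdepends ${pkg}`, { encoding: "utf8" });
          return out
            .split("\n")
            .map((line) => line.trim())
            .filter((line) => line && line !== pkg && !line.startsWith("Reverse Depends"));
        }

        console.log(reverseDeps("webpack"));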
