Search Results: "fer"

27 January 2022

Jonathan Dowland: Using an iPad for note-taking in talks

I've found that using a laptop during conference talks means you either end up doing something else and missing important bits of the talk, or at least look like you're doing something else. But it's extremely helpful to be able to look up the person who is talking, or their project, or expand an acronym that's mentioned or read around the subject. At December's uksystems21 conference, I experimented with using an iPad as a kind of compromise. Modern iOS versions let you split the display between two apps [1], so I put the built-in Notes app on one side and a web browser on the other. I took notes using the Apple Pencil. I've got a "paper-like" rough surface display protector on the front which vastly improves the experience of using the Apple Pencil for writing [2].
An example of note-taking and researching talks on an iPad.
I mostly took notes on the active talk, but I also tweaked my own slides [3], looked up supplementary information about the topic and the talker, and things like that. It worked really well: much better than I expected. Apple's split-screen implementation is clunky but essential to make this work. The textured surface protector is a serious improvement over the normal surface for writing. But most importantly I didn't lose focus over the talks, and I don't think I looked like I did either.

  1. Yes, this is something that some Android vendors have supported for years. I remember playing around with Samsung Galaxy Notes when I was still in IT and being pretty impressed. On the other hand, I'd bet not 1% of those tablets are still running today. My iPad Mini from that time still is, albeit vastly diminished.
  2. It's still not as good as the Remarkable, but that's a topic for another blog post.
  3. A tiny bit. Not serious reworking. Just nervous last minute tweaks that I could probably have not bothered with at all. I'm one of those people who does that right up to the wire.

Russ Allbery: Review: I Didn't Do the Thing Today

Review: I Didn't Do the Thing Today, by Madeleine Dore
Publisher: Avery
Copyright: 2022
ISBN: 0-593-41914-6
Format: Kindle
Pages: 291
At least from my narrow view of it, the world of productivity self-help literature is a fascinating place right now. The pandemic overturned normal work patterns and exacerbated schedule inequality, creating vastly different experiences for the people whose work continued to be in-person and the people whose work could become mostly or entirely remote. Self-help literature, which is primarily aimed at the more affluent white-collar class, primarily tracked the latter disruption: newly-remote work, endless Zoom meetings, the impossibility of child care, the breakdown of boundaries between work and home, and the dawning realization that much of the mechanics of day-to-day office work are neither productive nor defensible. My primary exposure these days to the more traditional self-help productivity literature is via Cal Newport. The stereotype of the productivity self-help book is a collection of life hacks and list-making techniques that will help you become a more efficient capitalist cog, but Newport has been moving away from that dead end for as long as I've been reading him, and his recent work focuses more on structural issues with the organization of knowledge work. He also shares with the newer productivity writers a willingness to tell people to use the free time they recover via improved efficiency on some life goal other than improved job productivity. But he's still prickly and defensive about the importance of personal productivity and accomplishing things. He gives lip service on his podcast to the value of the critique of productivity, but then usually reverts to characterizing anti-productivity arguments as saying that productivity is a capitalist invention to control workers. (Someone has doubtless said this on Twitter, but I've never seen a serious critique of productivity make this simplistic of an argument.) On the anti-productivity side, as it's commonly called, I've seen a lot of new writing in the past couple of years that tries to break the connection between productivity and human worth so endemic to US society. This is not a new analysis; disabled writers have been making this point for decades, it's present in both Keynes and in Galbraith's The Affluent Society, and Kathi Weeks's The Problem with Work traces some of its history in Marxist thought. But what does feel new to me is its widespread mainstream appearance in newspaper articles, viral blog posts, and books such as Jenny Odell's How to Do Nothing and Devon Price's Laziness Does Not Exist. The pushback against defining life around productivity is having a moment. Entering this discussion is Madeleine Dore's I Didn't Do the Thing Today. Dore is the author of the Extraordinary Routines blog and host of the Routines and Ruts podcast. Extraordinary Routines began as a survey of how various people organize their daily lives. I Didn't Do the Thing Today is, according to the preface, a summary of the thoughts Dore has had about her own life and routines as a result of those interviews. As you might guess from the subtitle (Letting Go of Productivity Guilt), Dore's book is superficially on the anti-productivity side. Its chapters are organized around gentle critiques of productivity concepts, with titles like "The Hopeless Search for the Ideal Routine," "The Myth of Balance," or "The Harsh Rules of Discipline." But I think anti-productivity is a poor name for this critique; its writers are not opposed to being productive, only to its position as an all-consuming focus and guilt-generating measure of personal worth. 
Dore structures most chapters by naming an aspect, goal, or concern of a life defined by productivity, such as wasted time, ambition, busyness, distraction, comparison, or indecision. Each chapter sketches the impact of that idea and then attempts to gently dismantle the grip that it may have on the reader's life. All of these discussions are nuanced; it's rare for Dore to say that one of these aspects has no value, and she anticipates numerous objections. But her overarching goal is to help the reader be more comfortable with imperfection, more willing to live in the moment, and less frustrated with the limitations of life and the human brain. If striving for productivity is like lifting weights, Dore's diagnosis is that we've tried too hard for too long, and have overworked that muscle until it is cramping. This book is a gentle massage to induce the muscle to relax and let go. Whether this will work is, as with all self-help books, individual. I found it was best read in small quantities, perhaps a chapter per day, since it otherwise began feeling too much the same. I'm also not the ideal audience; Dore is a creative freelancer and primarily interviewed other creative people, which I think has a different sort of productivity rhythm than the work that I do. She's also not a planner to the degree that I am; more on that below. And yet, I found this book worked on me anyway. I can't say that I was captivated all the way through, but I found myself mentally relaxing while I was reading it, and I may re-read some chapters from time to time. How does this relate to the genre of productivity self-help? With less conflict than I think productivity writers believe, although there seems to be one foundational difference of perspective. Dore is not opposed to accomplishing things, or even to systems that help people accomplish things. She is more attuned than the typical productivity writer to the guilt and frustration that can accumulate when one has a day in which one does not do the thing, but her goal is not to talk you out of attempting things. It is, instead, to convince you to hold those attempts and goals more lightly, to allow them to move and shift and change, and to not treat a failure to do the thing today as a reason for guilt. This is wholly compatible with standard productivity advice. It's adding nuance at one level of abstraction higher: how tightly to cling to productivity goals, and what to do when they don't work out. Cramping muscles are not strong muscles capable of lifting heavy things. If one can massage out the cramp, one's productivity by even the strict economic definition may improve. Where I do see a conflict is that most productivity writers are planners, and Dore is not. This is, I think, a significant blind spot in productivity self-help writing. Cal Newport, for example, advocates time-block planning, where every hour of the working day has a job. David Allen advocates a complex set of comprehensive lists and well-defined next actions. Mark Forster builds a flurry of small systems for working through lists. The standard in productivity writing is to add structure to your day and cultivate the self-discipline required to stick to that structure. For many people, including me, this largely works. I'm mostly a planner, and when my life gets chaotic, adding more structure and focusing on that structure helps me. But the productivity writers I've read are quite insistent that their style of structure will work for everyone, and on that point I am dubious.
Newport, for example, advocates time-block planning for everyone without exception, insisting that it is the best way to structure a day. Dore, in contrast, describes spending years trying to perfect a routine before realizing that elastic possibilities work better for her than routines. For those who are more like Dore than Newport, I Didn't Do the Thing Today is more likely to be helpful than Newport's instructions. This doesn't make Newport's ideas wrong; it simply makes them not universal, something that the productivity self-help genre seems to have trouble acknowledging. Even for readers like myself who prefer structure, I Didn't Do the Thing Today is a valuable corrective to the emphasis on ever-better systems. For those who never got along with too much structure, I think it may strike a chord. The standard self-help caveat still applies: Dore has the most to say to people who are in a similar social class and line of work as her. I'm not sure this book will be of much help to someone who has to juggle two jobs with shift work and child care, where the problem is more sharp external constraints than internalized productivity guilt. But for its target audience, I think it's a valuable, calming message. Dore doesn't have a recipe to sort out your life, but may help you feel better about the merits of life unsorted. Rating: 7 out of 10

Michael Ablassmeier: Qemu backup on Debian Bullseye

In my last article I showed how to use the new features included in Debian Bullseye to easily create backups of your libvirt managed domains. A few years ago, when this topic came to my interest, I also implemented a rather small utility (a POC) to create full and incremental backups from standalone qemu processes: qmpbackup. The workflow for this is a little bit different from the approach I have taken with virtnbdbackup. While with libvirt managed virtual machines the libvirt API provides all necessary API calls to create backups, a running qemu process only provides the QMP protocol socket to get things going. Using the QMP protocol it's possible to create bitmaps for the attached disks and make Qemu push the contents of the bitmaps to a specified target directory. As the bitmaps keep track of the changes on the attached block devices, you can create incremental backups too. The nice thing here is that the Qemu process actually does this all by itself and you don't have to care about which blocks are dirty, like you would have to do with the pull-based approach. So how does it work? The utility requires you to start your qemu process with an active QMP socket attached, like so:
 qemu-system-<arch> <options> -qmp unix:/tmp/socket,server,nowait
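For illustration, here is a rough sketch of what talking to that QMP socket can look like. QMP is a line-based JSON protocol: after a greeting you negotiate capabilities and can then issue commands such as block-dirty-bitmap-add, which is the kind of building block a tool like qmpbackup relies on. This is a simplified sketch and not the actual qmpbackup code; the socket path is the one from the command above, the node name "ide0-hd0" matches the disk shown in the logs below, and events sent by qemu are simply ignored.
#!/usr/bin/env python3
# Simplified QMP client sketch (illustration only, not the qmpbackup implementation).
import json
import socket

def qmp_session(path, commands):
    # Open the QMP socket, negotiate capabilities, then run the given commands in order.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        chan = sock.makefile("rw")
        chan.readline()  # the QMP greeting banner (one JSON line)
        replies = []
        for cmd in [{"execute": "qmp_capabilities"}] + commands:
            chan.write(json.dumps(cmd) + "\n")
            chan.flush()
            replies.append(json.loads(chan.readline()))  # simplified: assumes no interleaved events
        return replies[1:]

# Create a persistent dirty bitmap; incremental backups then only need the blocks it marks as dirty.
print(qmp_session("/tmp/socket", [
    {"execute": "block-dirty-bitmap-add",
     "arguments": {"node": "ide0-hd0", "name": "backup0", "persistent": True}},
]))
If the disk is a qcow2 image, a persistent bitmap like this survives restarts of the virtual machine, which is what makes incremental runs across restarts possible.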
Now you can easily make qemu push the latest data for a created bitmap to a given target directory:
# qmpbackup --socket /tmp/socket backup --level full --target /tmp/backup/
[2022-01-27 19:41:33,819]    INFO  Version: 0.10
[2022-01-27 19:41:33,819]    INFO  Qemu version: [5.0.2] [Debian 1:5.2+dfsg-11+deb11u1]
[2022-01-27 19:41:33,825]    INFO  Guest Agent socket connected
[2022-01-27 19:41:33,825]    INFO  Trying to ping guest agent
[2022-01-27 19:41:38,827] WARNING  Unable to reach Guest Agent: cant freeze file systems.
[2022-01-27 19:41:38,828]    INFO  Backup target directory: /tmp/backup/
[2022-01-27 19:41:38,828]    INFO  FULL Backup operation: "/tmp/backup//ide0-hd0/FULL-1643308898"
[2022-01-27 19:41:38,836]    INFO  Wrote Offset: 0% (0 of 2147483648)
[2022-01-27 19:41:39,838]    INFO  Wrote Offset: 25% (541065216 of 2147483648)
[2022-01-27 19:41:40,840]    INFO  Wrote Offset: 33% (701890560 of 2147483648)
[2022-01-27 19:41:41,841]    INFO  Wrote Offset: 40% (867041280 of 2147483648)
[2022-01-27 19:41:42,844]    INFO  Wrote Offset: 50% (1073741824 of 2147483648)
[2022-01-27 19:41:43,846]    INFO  Wrote Offset: 59% (1269760000 of 2147483648)
[2022-01-27 19:41:44,847]    INFO  Wrote Offset: 75% (1610612736 of 2147483648)
[2022-01-27 19:41:45,848]    INFO  Saved disk: [ide0-hd0]
The resulting directory now contains a full backup image of the disk attached. From this point on, it's possible to create further incremental backups:
# qmpbackup  --socket /tmp/socket backup --level inc --target /tmp/backup/
[2022-01-27 19:42:03,930]    INFO  Version: 0.10
[2022-01-27 19:42:03,931]    INFO  Qemu version: [5.0.2] [Debian 1:5.2+dfsg-11+deb11u1]
[2022-01-27 19:42:03,933]    INFO  Guest Agent socket connected
[2022-01-27 19:42:03,933]    INFO  Trying to ping guest agent
[2022-01-27 19:42:08,938] WARNING  Unable to reach Guest Agent: cant freeze file systems.
[2022-01-27 19:42:08,939]    INFO  Backup target directory: /tmp/backup/
[2022-01-27 19:42:08,939]    INFO  INC Backup operation: "/tmp/backup//ide0-hd0/INC-1643308928"
[2022-01-27 19:42:08,953]    INFO  Wrote Offset: 0% (0 of 1835008)
[2022-01-27 19:42:09,957]    INFO  Saved disk: [ide0-hd0]
The target directory will now have multiple data backups:
/tmp/backup/ide0-hd0/
  FULL-1643308898
  INC-1643308928
Restoring the image
Using the qmprebase utility you can now rebase the images to the latest state. The --dry-run option gives a good impression of which command sequences are required; if you only want to rebase to a specific incremental backup, that's possible using the --until option.
# qmprebase rebase --dir /tmp/backup/ide0-hd0/ --dry-run
[2022-01-27 17:18:08,790]    INFO  Version: 0.10
[2022-01-27 17:18:08,790]    INFO  Dry run activated, not applying any changes
[2022-01-27 17:18:08,790]    INFO  qemu-img check /tmp/backup/ide0-hd0/INC-1643308928
[2022-01-27 17:18:08,791]    INFO  qemu-img rebase -b "/tmp/backup/ide0-hd0/FULL-1643308898" "/tmp/backup/ide0-hd0/INC-1643308928" -u
[2022-01-27 17:18:08,791]    INFO  qemu-img commit "/tmp/backup/ide0-hd0/INC-1643308928"
[2022-01-27 17:18:08,791]    INFO  Rollback of latest [FULL]<-[INC] chain complete, ignoring older chains
[2022-01-27 17:18:08,791]    INFO  Image files rollback successful.
Filesystem consistency
The backup utility also supports freezing and thawing the virtual machine's file systems in case qemu is started with a guest agent socket and the guest agent is reachable during the backup operation. Check out the README for the full feature set.
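The guest agent speaks a similar line-based JSON protocol on a separate socket, without the capabilities handshake, so a freeze/thaw cycle around a backup can be sketched roughly like this. Again, this is only an illustration under assumptions: the socket path /tmp/qga.sock is made up, and a real client would also resync with the agent and handle errors.
#!/usr/bin/env python3
# Sketch of quiescing guest file systems via the QEMU guest agent (illustration only).
import json
import socket

def qga_command(path, execute):
    # Send one guest agent command and return its parsed reply.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        sock.sendall(json.dumps({"execute": execute}).encode() + b"\n")
        return json.loads(sock.makefile("r").readline())

print(qga_command("/tmp/qga.sock", "guest-fsfreeze-freeze"))  # returns the number of frozen file systems
# ... take the backup while the guest file systems are quiesced ...
print(qga_command("/tmp/qga.sock", "guest-fsfreeze-thaw"))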

26 January 2022

Gunnar Wolf: Progvis Now in Debian proper! (unstable)

Progvis finally made it into Debian! What is it, you ask? It is a great tool to teach about memory management and concurrency. I first saw progvis in the poster presentation its author, Filip Strömbäck, gave last year at the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE), immediately recognizing it as a tool I wanted to use in my classes and, it being free software, to make available for all interested Debian users. Quoting from Progvis' web page:
This is a program visualization tool aimed at concurrent programs and related issues. The tool itself is mostly language agnostic, and relies on Storm to compile the provided code and provide basic debug information. The generated code is then inspected and instrumented to provide an experience similar to a basic debugger. The tool emphasizes a visual representation of the object hierarchy that is manipulated by the executed program to make it easy to understand how it looks. In particular, a visual representation is beneficial over a text representation since it makes it easier to find shared data that might need to be synchronized in a concurrent program. As mentioned, the tool is aimed at concurrent programs. Therefore, it allows spawning multiple threads running the same program to see if that affects the program's execution (this is mostly interesting if global variables are used). Furthermore, any spawned threads also appear in the tool, and the user may control them independently to explore possible race conditions or other synchronization errors. If enabled from the menu bar, the tool keeps track of reads and writes to the data structure in order to highlight basic race conditions in addition to deadlocks.
So, what is this Storm thing? Filip promptly informed me that Progvis is not just a pedagogical tool; rather, it is part of something bigger. Progvis is a program built using the Storm programming language platform. Storm is more than a compiler; it presents itself as a framework for creating languages, designed to make it easy to implement languages that can be extended with new syntax and semantics. Storm is much more than what I have explored, and can be used as an interactive compiler, or as a language server providing highlighting and completion in IDEs. But I won't dig much more into Storm (which is, of course, now also available in Debian, as are the libraries built from the same source). Back to progvis: it implements a very-close-to-C++ language, with some details changed to better suit its purpose (e.g. instead of using the usual pthread implementation, its own thread model is used; thread creation is handled via int thread_id = thread_name(funcname, &params) instead of the more complex pthread_create() function, including details such as the thread object being passed by reference as a parameter). All in all, while I have not yet taken full advantage of this tool in my teaching, it has helped me show somewhat hard-to-grasp concepts such as: All in all, a great tool. I hope you find it useful and enjoyable as well! PS: I suggest you install the progvis-examples package to get started. You will find some dozens of sample programs in /usr/share/doc/progvis-examples/examples; playing with them will help you better understand the tool and be able to better write your own programs.

Timo Jyrinki: Unboxing Dell XPS 13 - openSUSE Tumbleweed alongside preinstalled Ubuntu

A look at the 2021 model of Dell XPS 13 - available with Linux pre-installed
I received a new laptop for work - a Dell XPS 13. Dell has long been famous for offering certain models with pre-installed Linux as a supported option, and opting for those is nice for moving some euros/dollars from a certain PC desktop OS monopoly towards Linux desktop engineering costs. Notably, Lenovo also offers Ubuntu and Fedora options on many models these days (like the Carbon X1 and P15 Gen 2).
[Photos: black box; opened box; accessories and a leaflet about Linux support; laptop lifted from the box, closed; laptop with lid open; Ubuntu running; openSUSE running.]
Obviously a smooth, ready-to-rock Ubuntu installation is nice for most people already, but I need openSUSE, so after checking everything is fine with Ubuntu, I continued to install openSUSE Tumbleweed as a dual boot option. As I'm a funny little tinkerer, I obviously went with some special things. I wanted:
  • Ubuntu to remain as the reference supported OS on a small(ish) partition, useful to compare to if trying out new development versions of software on openSUSE and finding oddities.
  • openSUSE as the OS consuming most of the space.
  • LUKS encryption for openSUSE without LVM.
  • ext4's new fancy fast_commit feature in use during filesystem creation.
  • As a result of all that, I ended up juggling back and forth between installation screens a couple of times (even more than shown below, and also because I forgot I wanted to use encryption the first time around).
First boots to pre-installed Ubuntu and installation of openSUSE Tumbleweed as the dual-boot option:
(if the embedded video is not shown, use a direct link)
Some notes from the openSUSE installation:
  • The openSUSE installer's partition editor apparently does not support resizing, or automatically installing side by side with another Linux distribution, so I did part of the setup completely on my own.
  • Installation package download hung a couple of times, and only proceeded when I entered a mirror manually. On my TW I've also noticed download problems recently; there might be a problem with some mirror I need to escalate.
  • The installer doesn't very clearly show the encryption status of the target installation - it took me a couple of attempts before I even noticed the small "encrypted" column and icon (well, very small, see below), which also did not spell out the device mapper name but only the main partition name. In the end it was going to do the right thing right away and use my pre-created encrypted target partition as I wanted, but it could be a better UX. Then again I was doing my very own tweaks anyway.
  • Let's not go into the details of why I'm so old-fashioned and use ext4 :)
  • openSUSE's installer does not work well with a HiDPI screen. Funnily, the tty consoles seem to be fine, and with a big font.
  • At the end of the video I install the two GNOME extensions I can't live without, Dash to Dock and Sound Input & Output Device Chooser.

Russell Coker: Australia/NZ Linux Meetings

I am going to start a new Linux-focused FOSS online meeting for people in Australia and nearby areas. People can join from anywhere but the aim will be to support people in nearby areas. To cover the time zone range for Australia this requires a meeting on a weekend; I'm thinking of the first Saturday of the month at 1PM Melbourne/Sydney time, which would be 10AM in WA and 3PM in NZ. We may have corner cases of daylight savings starting and ending on different days, but that shouldn't be a big deal as I think those times can vary by an hour either way without being too inconvenient for anyone. Note that I describe the meeting as Linux-focused because my plans include having a meeting dedicated to different versions of BSD Unix and a meeting dedicated to the HURD. But those meetings will be mainly for Linux people to learn about other Unix-like OSs. One focus I want to have for the meetings is hands-on work, live demonstrations, and short, highly time-relevant talks. There are more lectures on YouTube than anyone could watch in a lifetime (see the Linux.conf.au channel for some good ones [1]). So I want to run events that give benefits that people can't gain from watching YouTube on their own. Russell Stuart and I have been kicking around ideas for this for a while. I think that the solution is to just do it. I know that Saturday won't work for everyone (no day will) but it will work for many people. I am happy to discuss changing the start time by an hour or two if that seems likely to get more people. But I'm not particularly interested in trying to make it convenient for people in Hawaii or India; my idea is for an Australia/NZ focused event. I would be more than happy to share lecture notes etc with people in other countries who run similar events. As an aside I'd be happy to give a talk for an online meeting at a Hawaiian LUG as the timezone is good for me. Please pencil in 1PM Melbourne time on the 5th of Feb for the first meeting. The meeting requirements will be a PC with good Internet access running a recent web browser and an ssh client for the hands-on stuff. A microphone or webcam is NOT required; any questions you wish to ask can be done with text if that's what you prefer. Suggestions for the name of the group are welcome.

23 January 2022

Matthieu Caneill: Debsources, python3, and funky file names

Rumors are running that python2 is not a thing anymore. Well, I'm certainly late to the party, but I'm happy to report that sources.debian.org is now running python3. Wait, it wasn't? Back when development started, python3 was very much a real language, but it was hard to adopt because it was not supported by many libraries. So python2 was chosen, meaning print-based debugging was used in lieu of print()-based debugging, and str were bytes, not unicode. And things were working just fine. One day python2 EOL was announced, with a date far in the future. Far enough to procrastinate for a long time. Combine this with a codebase that is stable enough to not see many commits, and the fact that Debsources is a volunteer-based project that happens at best on weekends, and you end up with dormant software and a missed deadline. But, as dormant as the codebase is, the instance hosted at sources.debian.org is very popular and gets 200k to 500k hits per day. Largely enough to be worth proper maintenance and a transition to python3.
Funky file names
While transitioning to python3 and juggling left and right with str, bytes and unicode for internal objects, files, database entries and HTTP content, I stumbled upon a bug that has been there since day 1. Quick recap if you're unfamiliar with this tool: Debsources displays the content of the source packages in the Debian archive. In other words, it's a bit like GitHub, but for the Debian source code. And some pieces of software out there, that ended up in Debian packages, happen to contain files whose names can't be decoded to UTF-8. Interestingly enough, there's no such thing as a standard for file names: with a few exceptions that vary by operating system, any sequence of bytes can be a legit file name. And some sequences of bytes are not valid UTF-8. Of course those files are rare, and using ASCII characters to name a file is a much more common practice than using bytes in a non-UTF-8 character encoding. But when you deal with almost 100 million files on which you have no control (those files come from free software projects, and make their way into Debian without any renaming), it happens. Now back to the bug: when trying to display such a file through the web interface, it would crash because it can't convert the file name to UTF-8, which is needed for the HTML representation of the page.
Bugfix
An often valid approach when trying to represent invalid UTF-8 content is to ignore errors, and replace them with ? or �. This is what Debsources actually does to display non-UTF-8 file content. Unfortunately, this best-effort approach is not suitable for file names, as file names are also identifiers in Debsources: among other places, they are part of URLs. If an URL were to use placeholder characters to replace those bytes, there would be no deterministic way to match it with a file on disk anymore. The representation of binary data into text is a known problem. Multiple lossless solutions exist, such as base64 and its variants, but URLs looking like https://sources.debian.org/src/Y293c2F5LzMuMDMtOS4yL2Nvd3NheS8= are not readable at all compared to https://sources.debian.org/src/cowsay/3.03-9.2/cowsay/. Plus, not backwards-compatible with all existing links. The solution I chose is to use double-percent encoding: this allows the representation of any byte in an URL, while keeping allowed characters unchanged - and preventing CGI gateways from trying to decode non-UTF-8 bytes.
This is the best of both worlds: regular file names get to appear normally and are human-readable, and funky file names only have percent signs and hex numbers where needed. Here is an example of such an URL: https://sources.debian.org/src/aspell-is/0.51-0-4/%25EDslenska.alias/. Notice the %25ED to represent the percentage symbol itself (%25) followed by an invalid UTF-8 byte (%ED). Transitioning to this was quite a challenge, as those file names don't only appear in URLs, but also in web pages themselves, log files, database tables, etc. And everything was done with str: made sense in python2 when str were bytes, but not much in python3. What are those files? What's their network? I was wondering too. Let's list them!
import os
with open('non-utf-8-paths.bin', 'wb') as f:
    for root, folders, files in os.walk(b'/srv/sources.debian.org/sources/'):
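        # os.walk() was given a bytes path, so root, folders and files are all bytes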
        for path in folders + files:
            try:
                path.decode('utf-8')
            except UnicodeDecodeError:
                f.write(root + b'/' + path + b'\n')
Running this on the Debsources main instance, which hosts pretty much all Debian packages that were part of a Debian release, I could find 307 files (among a total of almost 100 million files). Without looking deep into them, they seem to fall into 2 categories: That last point hits home, as it was clearly lacking in Debsources. A funky file name is now part of its test suite. ;)
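To make the double-percent encoding described above a bit more concrete, here is a minimal sketch of the idea. It is an illustration of the principle only, not the actual Debsources code; the helper names are made up, and the example file name is the Icelandic one from the URL above.
#!/usr/bin/env python3
# Sketch of double-percent encoding for arbitrary file name bytes (illustration only).
from urllib.parse import quote, unquote, unquote_to_bytes

def encode_filename(raw: bytes) -> str:
    # Turn arbitrary bytes into a reversible, ASCII-only URL path component.
    once = quote(raw, safe="")    # non-ASCII and reserved bytes become %XX escapes
    return quote(once, safe="")   # escape the '%' itself, hence "double-percent"

def decode_filename(component: str) -> bytes:
    # Recover the original file name bytes from the URL path component.
    once = unquote(component)         # first level, normally done by the web stack
    return unquote_to_bytes(once)     # second level, back to the raw bytes

name = b"\xedslenska.alias"           # latin-1 "íslenska.alias", not valid UTF-8
component = encode_filename(name)
print(component)                      # %25EDslenska.alias, matching the URL above
assert decode_filename(component) == name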

Antoine Beaupré: Switching from OpenNTPd to Chrony

A friend recently reminded me of the existence of chrony, a "versatile implementation of the Network Time Protocol (NTP)". The excellent introduction is worth quoting in full:
It can synchronise the system clock with NTP servers, reference clocks (e.g. GPS receiver), and manual input using wristwatch and keyboard. It can also operate as an NTPv4 (RFC 5905) server and peer to provide a time service to other computers in the network. It is designed to perform well in a wide range of conditions, including intermittent network connections, heavily congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature), and systems that do not run continuously, or run on a virtual machine. Typical accuracy between two machines synchronised over the Internet is within a few milliseconds; on a LAN, accuracy is typically in tens of microseconds. With hardware timestamping, or a hardware reference clock, sub-microsecond accuracy may be possible.
Now that's already great documentation right there. What it is, why it's good, and what to expect from it. I want more. They have a very handy comparison table between chrony, ntp and openntpd.

My problem with OpenNTPd
Following concerns surrounding the security (and complexity) of the venerable ntp program, I have, a long time ago, switched to using openntpd on all my computers. I hadn't thought about it until I recently noticed a lot of noise on one of my servers:
jan 18 10:09:49 curie ntpd[1069]: adjusting local clock by -1.604366s
jan 18 10:08:18 curie ntpd[1069]: adjusting local clock by -1.577608s
jan 18 10:05:02 curie ntpd[1069]: adjusting local clock by -1.574683s
jan 18 10:04:00 curie ntpd[1069]: adjusting local clock by -1.573240s
jan 18 10:02:26 curie ntpd[1069]: adjusting local clock by -1.569592s
You read that right, openntpd was constantly rewinding the clock, sometimes in less than two minutes. The above log was taken while doing diagnostics, looking at the last 30 minutes of logs. So, on average, one 1.5-second rewind per 6 minutes! That might be due to a dying real time clock (RTC) or some other hardware problem. I know for a fact that the CMOS battery on that computer (curie) died and I wasn't able to replace it (!). So that's partly garbage-in, garbage-out here. But still, I was curious to see how chrony would behave... (Spoiler: much better.) But I also had trouble on another workstation, that one a much more recent machine (angela). First, it seems OpenNTPd would just fail at boot time:
anarcat@angela:~(main)$ sudo systemctl status openntpd
  openntpd.service - OpenNTPd Network Time Protocol
     Loaded: loaded (/lib/systemd/system/openntpd.service; enabled; vendor pres>
     Active: inactive (dead) since Sun 2022-01-23 09:54:03 EST; 6h ago
       Docs: man:openntpd(8)
    Process: 3291 ExecStartPre=/usr/sbin/ntpd -n $DAEMON_OPTS (code=exited, sta>
    Process: 3294 ExecStart=/usr/sbin/ntpd $DAEMON_OPTS (code=exited, status=0/>
   Main PID: 3298 (code=exited, status=0/SUCCESS)
        CPU: 34ms
jan 23 09:54:03 angela systemd[1]: Starting OpenNTPd Network Time Protocol...
jan 23 09:54:03 angela ntpd[3291]: configuration OK
jan 23 09:54:03 angela ntpd[3297]: ntp engine ready
jan 23 09:54:03 angela ntpd[3297]: ntp: recvfrom: Permission denied
jan 23 09:54:03 angela ntpd[3294]: Terminating
jan 23 09:54:03 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 09:54:03 angela systemd[1]: openntpd.service: Succeeded.
After a restart, somehow it worked, but it took a long time to sync the clock. At first, it would just not consider any peer at all:
anarcat@angela:~(main)$ sudo ntpctl -s all
0/20 peers valid, clock unsynced
peer
   wt tl st  next  poll          offset       delay      jitter
159.203.8.72 from pool 0.debian.pool.ntp.org
    1  5  2    6s    6s             ---- peer not valid ----
138.197.135.239 from pool 0.debian.pool.ntp.org
    1  5  2    6s    7s             ---- peer not valid ----
216.197.156.83 from pool 0.debian.pool.ntp.org
    1  4  1    2s    9s             ---- peer not valid ----
142.114.187.107 from pool 0.debian.pool.ntp.org
    1  5  2    5s    6s             ---- peer not valid ----
216.6.2.70 from pool 1.debian.pool.ntp.org
    1  4  2    2s    8s             ---- peer not valid ----
207.34.49.172 from pool 1.debian.pool.ntp.org
    1  4  2    0s    5s             ---- peer not valid ----
198.27.76.102 from pool 1.debian.pool.ntp.org
    1  5  2    5s    5s             ---- peer not valid ----
158.69.254.196 from pool 1.debian.pool.ntp.org
    1  4  3    1s    6s             ---- peer not valid ----
149.56.121.16 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
162.159.200.123 from pool 2.debian.pool.ntp.org
    1  4  3    1s    6s             ---- peer not valid ----
206.108.0.131 from pool 2.debian.pool.ntp.org
    1  4  1    6s    9s             ---- peer not valid ----
205.206.70.40 from pool 2.debian.pool.ntp.org
    1  5  2    8s    9s             ---- peer not valid ----
2001:678:8::123 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
    1  4  3    2s    6s             ---- peer not valid ----
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
    1  4  4    1s    6s             ---- peer not valid ----
209.115.181.110 from pool 3.debian.pool.ntp.org
    1  5  2    5s    6s             ---- peer not valid ----
205.206.70.42 from pool 3.debian.pool.ntp.org
    1  4  2    0s    6s             ---- peer not valid ----
68.69.221.61 from pool 3.debian.pool.ntp.org
    1  4  1    2s    9s             ---- peer not valid ----
162.159.200.1 from pool 3.debian.pool.ntp.org
    1  4  3    4s    7s             ---- peer not valid ----
Then it would accept them, but still wouldn't sync the clock:
anarcat@angela:~(main)$ sudo ntpctl -s all
20/20 peers valid, clock unsynced
peer
   wt tl st  next  poll          offset       delay      jitter
159.203.8.72 from pool 0.debian.pool.ntp.org
    1  8  2    5s    6s         0.672ms    13.507ms     0.442ms
138.197.135.239 from pool 0.debian.pool.ntp.org
    1  7  2    4s    8s         1.260ms    13.388ms     0.494ms
216.197.156.83 from pool 0.debian.pool.ntp.org
    1  7  1    3s    5s        -0.390ms    47.641ms     1.537ms
142.114.187.107 from pool 0.debian.pool.ntp.org
    1  7  2    1s    6s        -0.573ms    15.012ms     1.845ms
216.6.2.70 from pool 1.debian.pool.ntp.org
    1  7  2    3s    8s        -0.178ms    21.691ms     1.807ms
207.34.49.172 from pool 1.debian.pool.ntp.org
    1  7  2    4s    8s        -5.742ms    70.040ms     1.656ms
198.27.76.102 from pool 1.debian.pool.ntp.org
    1  7  2    0s    7s         0.170ms    21.035ms     1.914ms
158.69.254.196 from pool 1.debian.pool.ntp.org
    1  7  3    5s    8s        -2.626ms    20.862ms     2.032ms
149.56.121.16 from pool 2.debian.pool.ntp.org
    1  7  2    6s    8s         0.123ms    20.758ms     2.248ms
162.159.200.123 from pool 2.debian.pool.ntp.org
    1  8  3    4s    5s         2.043ms    14.138ms     1.675ms
206.108.0.131 from pool 2.debian.pool.ntp.org
    1  6  1    0s    7s        -0.027ms    14.189ms     2.206ms
205.206.70.40 from pool 2.debian.pool.ntp.org
    1  7  2    1s    5s        -1.777ms    53.459ms     1.865ms
2001:678:8::123 from pool 2.debian.pool.ntp.org
    1  6  2    1s    8s         0.195ms    14.572ms     2.624ms
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
    1  7  3    6s    9s         2.068ms    14.102ms     1.767ms
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
    1  6  2    4s    9s         0.254ms    21.471ms     2.120ms
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
    1  7  4    5s    9s        -1.706ms    21.030ms     1.849ms
209.115.181.110 from pool 3.debian.pool.ntp.org
    1  7  2    0s    7s         8.907ms    75.070ms     2.095ms
205.206.70.42 from pool 3.debian.pool.ntp.org
    1  7  2    6s    9s        -1.729ms    53.823ms     2.193ms
68.69.221.61 from pool 3.debian.pool.ntp.org
    1  7  1    1s    7s        -1.265ms    46.355ms     4.171ms
162.159.200.1 from pool 3.debian.pool.ntp.org
    1  7  3    4s    8s         1.732ms    35.792ms     2.228ms
It took a solid five minutes to sync the clock, even though the peers were considered valid within a few seconds:
jan 23 15:58:41 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 15:58:58 angela ntpd[84086]: peer 142.114.187.107 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 198.27.76.102 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 207.34.49.172 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 209.115.181.110 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 159.203.8.72 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 138.197.135.239 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 162.159.200.123 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 2607:5300:201:3100::345c now valid
jan 23 15:59:00 angela ntpd[84086]: peer 2606:4700:f1::1 now valid
jan 23 15:59:00 angela ntpd[84086]: peer 158.69.254.196 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 216.6.2.70 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 68.69.221.61 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 205.206.70.40 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 205.206.70.42 now valid
jan 23 15:59:02 angela ntpd[84086]: peer 162.159.200.1 now valid
jan 23 15:59:04 angela ntpd[84086]: peer 216.197.156.83 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 206.108.0.131 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 2001:678:8::123 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 149.56.121.16 now valid
jan 23 15:59:07 angela ntpd[84086]: peer 2607:5300:205:200::1991 now valid
jan 23 16:03:47 angela ntpd[84086]: clock is now synced
That seems kind of odd. It was also frustrating to have very little information from ntpctl about the state of the daemon. I understand it's designed to be minimal, but it could inform me of its known offset, for example. It does tell me about the offset with the different peers, but not as clearly as one would expect. It's also unclear how it disciplines the RTC at all.

Compared to chrony
Now compare with chrony:
jan 23 16:07:16 angela systemd[1]: Starting chrony, an NTP client/server...
jan 23 16:07:16 angela chronyd[87765]: chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
jan 23 16:07:16 angela chronyd[87765]: Initial frequency 3.814 ppm
jan 23 16:07:16 angela chronyd[87765]: Using right/UTC timezone to obtain leap second data
jan 23 16:07:16 angela chronyd[87765]: Loaded seccomp filter
jan 23 16:07:16 angela systemd[1]: Started chrony, an NTP client/server.
jan 23 16:07:21 angela chronyd[87765]: Selected source 206.108.0.131 (2.debian.pool.ntp.org)
jan 23 16:07:21 angela chronyd[87765]: System clock TAI offset set to 37 seconds
First, you'll notice there's none of that "clock synced" nonsense, it picks a source, and then... it's just done. Because the clock on this computer is not drifting that much, and openntpd had (presumably) just sync'd it anyways. And indeed, if we look at detailed stats from the powerful chronyc client:
anarcat@angela:~(main)$ sudo chronyc tracking
Reference ID    : CE6C0083 (ntp1.torix.ca)
Stratum         : 2
Ref time (UTC)  : Sun Jan 23 21:07:21 2022
System time     : 0.000000311 seconds slow of NTP time
Last offset     : +0.000807989 seconds
RMS offset      : 0.000807989 seconds
Frequency       : 3.814 ppm fast
Residual freq   : -24.434 ppm
Skew            : 1000000.000 ppm
Root delay      : 0.013200894 seconds
Root dispersion : 65.357254028 seconds
Update interval : 1.4 seconds
Leap status     : Normal
We see that we are nanoseconds away from NTP time. That was run very quickly after starting the server (literally in the same second as chrony picked a source), so stats are a bit weird (e.g. the Skew is huge). After a minute or two, it looks more reasonable:
Reference ID    : CE6C0083 (ntp1.torix.ca)
Stratum         : 2
Ref time (UTC)  : Sun Jan 23 21:09:32 2022
System time     : 0.000487002 seconds slow of NTP time
Last offset     : -0.000332960 seconds
RMS offset      : 0.000751204 seconds
Frequency       : 3.536 ppm fast
Residual freq   : +0.016 ppm
Skew            : 3.707 ppm
Root delay      : 0.013363549 seconds
Root dispersion : 0.000324015 seconds
Update interval : 65.0 seconds
Leap status     : Normal
Now it's learning how good or bad the RTC clock is ("Frequency"), and is smoothly adjusting the System time to follow the average offset (RMS offset, more or less). You'll also notice the Update interval has risen, and will keep expanding as chrony learns more about the internal clock, so it doesn't need to constantly poll the NTP servers to sync the clock. In the above, we're 487 microseconds (less than a millisecond!) away from NTP time. (People interested in the explanation of every single one of those fields can read the excellent chronyc manpage. That thing made me want to nerd out on NTP again!) On the machine with the bad clock, chrony also did a 1.5-second adjustment, but just once, at startup:
jan 18 11:54:33 curie chronyd[2148399]: Selected source 206.108.0.133 (2.debian.pool.ntp.org) 
jan 18 11:54:33 curie chronyd[2148399]: System clock wrong by -1.606546 seconds 
jan 18 11:54:31 curie chronyd[2148399]: System clock was stepped by -1.606546 seconds 
jan 18 11:54:31 curie chronyd[2148399]: System clock TAI offset set to 37 seconds 
Then it would still struggle to keep the clock in sync, but not as badly as openntpd. Here's the offset a few minutes after the above startup:
System time     : 0.000375352 seconds slow of NTP time
And again a few seconds later:
System time     : 0.001793046 seconds slow of NTP time
I don't currently have access to that machine, and will update this post with the latest status, but so far I've had a very good experience with chrony on that machine, which is a testament to its resilience, and it also just works on my other machines as well.

Extras
On top of "just working" (as demonstrated above), I feel that chrony's feature set is so much superior... Here's an excerpt of the extras in chrony, taken from the comparison table:
  • source frequency tracking
  • source state restore from file
  • temperature compensation
  • ready for next NTP era (year 2036)
  • replace unreachable / falseticker servers
  • aware of jitter
  • RTC drift tracking
  • RTC trimming
  • Restore time from file w/o RTC
  • leap seconds correction, in slew mode
  • drops root privileges
I even understand some of that stuff. I think. So kudos to the chrony folks, I'm switching.

Caveats
One thing to keep in mind in the above, however, is that it's quite possible chrony does as bad a job as openntpd on that old machine, and just doesn't tell me about it. For example, here's another log sample from another server (marcos):
jan 23 11:13:25 marcos ntpd[1976694]: adjusting clock frequency by 0.451035 to -16.420273ppm
I get those basically every day, which seems to show that it's at least trying to keep track of the hardware clock. In other words, it's quite possible I have no idea what I'm talking about and you definitely need to take this article with a grain of salt. I'm not an NTP expert. Update: I should also mention that I haven't evaluated systemd-timesyncd, for a few reasons:
  1. I have enough things running under systemd
  2. I wasn't aware of it when I started writing this
  3. I couldn't find good documentation on it... later I found the above manpage and of course the Arch Wiki but that is very minimal
  4. therefore I can't tell how it compares with chrony or (open)ntpd, so I don't see an enticing reason to switch
It has a few things going for it though:
  • it's likely shipped with your distribution already
  • it drops privileges (possibly like chrony, unclear if it also has seccomp filters)
  • it's minimalist: it only does SNTP so not the server side
  • the status command is good enough that you can tell the clock frequency, precision, and so on (especially when compared to openntpd's ntpctl)
So I'm reserving judgement on it, but I'd certainly note that I'm always a little wary of trusting systemd daemons with the network, and would prefer to keep that attack surface to a minimum. Diversity is a good thing, in general, so I'll keep chrony for now. It would certainly be nice to see it added to chrony's comparison table.

Switching to chrony
Because the default configuration in chrony (at least as shipped in Debian) is sane (good default peers, no open network by default), installing it is as simple as:
apt install chrony
And because it somehow conflicts with openntpd, that also takes care of removing that cruft as well.

Update: Debian defaults
So it seems like I managed to write this entire blog post without putting it in relation with the original reason I had to think about this in the first place, which is odd and should be corrected. This conversation came about on an IRC channel that mentioned that the ntp package (and upstream) is in bad shape in Debian. In that discussion, chrony and ntpsec were discussed as possible replacements, but when we had the discussion on chat, I mentioned I was using openntpd, and promptly realized I was actually unhappy with it. A friend suggested chrony, I tried it, and it worked amazingly, I switched, wrote this blog post, end of story. Except today (2022-02-07, two weeks later), I actually read that thread and realized that something happened in Debian I wasn't actually aware of. In bookworm, systemd-timesyncd was not only shipped, but it was installed by default, as it was marked as a hard dependency of systemd. That was "fixed" in systemd-247.9-2 (see bug 986651), but only by making the dependency a Recommends and marking it as Priority: important. So in effect, systemd-timesyncd became the default NTP daemon in Debian in bookworm, which I find somewhat surprising. timesyncd has many things going for it (as mentioned above), but I do find it a bit annoying that systemd is replacing all those utilities in such a way. I also wonder what is going to happen on upgrades. This is all a little frustrating too because there is no good comparison between the other NTP daemons and timesyncd anywhere. The chrony comparison table doesn't mention it, and an audit by the Core Infrastructure Initiative from 2017 doesn't mention it either, even though timesyncd was announced in 2014. (Same with this blog post from Facebook.)

22 January 2022

Louis-Philippe Véronneau: Homebrewing recipes

Looking at my blog, it seems I haven't written anything about homebrewing in a while. In fact, the last time I did was when I had a carboy blow out on me in the middle of the night... Fear not, I haven't stopped brewing since then. I have in fact decided to publish my homebrew recipes. Not on this blog though, as it would get pretty repetitive. So here are my recipes. So far, I've brewed around 30 different beers! The format is pretty simple (no fancy HTML, just plain markdown) and although I'm not the most scientific brewer, you should be able to replicate some of those if that's what you want to try. Cheers!

21 January 2022

Neil McGovern: Further investments in desktop Linux

This was originally posted on the GNOME Foundation news feed. The GNOME Foundation was supported during 2020-2021 by a grant from Endless Network which funded the Community Engagement Challenge, strategy consultancy with the board, and a contribution towards our general running costs. At the end of last year we had a portion of this grant remaining, and after the success of our work in previous years directly funding developer and infrastructure work on GTK and Flathub, we wanted to see whether we could use these funds to invest in GNOME and the wider Linux desktop platform. We're very pleased to announce that we got approval to launch three parallel contractor engagements, which started over the past few weeks. These projects aim to improve our developer experience, make more applications available on the GNOME platform, and move towards equitable and sustainable revenue models for developers within our ecosystem. Thanks again to Endless Network for their support on these initiatives.
Flathub: Verified apps, donations and subscriptions (Codethink and James Westman)
This project is described in detail on the Flathub Discourse but the goal is to add a process to verify first-party apps on Flathub (ie uploaded by a developer or an authorised representative) and then make it possible for those developers to collect donations or subscriptions from users of their applications. We also plan to publish a separate repository that contains only these verified first-party uploads (without any of the community contributed applications), as well as providing a repository with only free and open source applications, allowing users to choose what they are comfortable installing and running on their system. Creating the user and developer login system to manage your apps will also set us up well for future enhancements, such as managing tokens for direct binary uploads (eg from a CI/CD system hosted elsewhere, as is already done with Mozilla Firefox and OBS) and making it easier to publish apps from systems such as Electron which can be hard to use within a flatpak-builder sandbox. For updates on this project you can follow the Discourse thread, check out the work board on GitHub or join us on Matrix.
PWAs: Integrating Progressive Web Apps in GNOME (Phaedrus Leeds)
While everyone agrees that native applications can provide the best experience on the GNOME desktop, the web platform, and particularly PWAs (Progressive Web Apps) which are designed to be downloadable as apps and offer offline functionality, makes it possible for us to offer equivalent experiences to other platforms for app publishers who have not specifically targeted GNOME. This allows us to attract and retain users by giving them the choice of using applications from a wider range of publishers than are currently directly targeting the Linux desktop. The first phase of the GNOME PWA project involves adding back support to Software for web apps backed by GNOME Web, and making this possible when Web is packaged as a Flatpak. So far some preparatory pull requests have been merged in Web and libportal to enable this work, and development is ongoing to get the feature branches ready for review. Discussions are also in progress with the Design team on how best to display the web apps in Software and on the user interface for web apps installed from a browser.
There has also been discussion among various stakeholders about what web apps should be included as available with Software, and how they can provide supplemental value to users without taking priority over apps native to GNOME. Finally, technical discussion is ongoing in the portal issue tracker to ensure that the implementation of a new dynamic launcher portal meets all security and robustness requirements, and is potentially useful not just to GNOME Web but Chromium and any other app that may want to install desktop launchers. Adding support for the launcher portal in upstream Chromium, to facilitate Chromium-based browsers packaged as a Flatpak, and adding support for Chromium-based web apps in Software are stretch goals for the project should time permit.
GTK4 / Adwaita: To support the adoption of Gtk4 by the community (Emmanuele Bassi)
With the release of GTK4 and renewed interest in GTK as a toolkit, we want to continue improving the developer experience and ease of use of GTK and ensure we have a complete and competitive offering for developers considering using our platform. This involves identifying missing functionality or UI elements that applications need to move to GTK4, as well as informing the community about the new widgets and functionality available. We have been working on documentation and bug fixes for GTK in preparation for the GNOME 42 release and have also started looking at the missing widgets and API in Libadwaita, in preparation for the next release. The next steps are to work with the Design team and the Libadwaita maintainers and identify and implement missing widgets that did not make the cut for the 1.0 release. In the meantime, we have also worked on writing a beginners' tutorial for the GNOME developers documentation, including GTK and Libadwaita widgets so that newcomers to the platform can easily move between the Interface Guidelines and the API references of various libraries. To increase the outreach of the effort, Emmanuele has been streaming it on Twitch, and published the VOD on YouTube as well.

20 January 2022

Caleb Adepitan: I'm Thinking About You Right Now!

Just in case you stumbled on this incidentally and you wonder "Who in the seven fat worlds is this mysterious...?" Ha! That was what I was thinking about you: you were thinking about me. You gerrit!? I heard you listening to my thoughts; I listened to yours too. I wonder if you heard me too. I would like to talk, today, about what it is I do at Debian as an Outreachy Intern under the JavaScript team. I woke up this morning and decided to bore you with so many details. I must have woken up glorified!

A Broader View
My sole role at Debian alongside my teammate, aided by our mentors, is to facilitate the Node.js 16 and Webpack 5 transition. What exactly does that mean? Node.js 16, as of the time of this writing, is the active LTS release from the Node.js developers, while Webpack 5 is the current release from the Webpack developers. At Debian we have to work towards supporting these packages. Debian as an OS comes with a package manager coined Advanced Package Tool, or simply APT, on which command-line programs specific to Debian and its many-flavored distributions, apt, apt-get, apt-cache, are based. This means apt was around before the conception of yarn and npm, the typical JavaScript developer's package managers. Debian, unlike yarn and npm, ideally only supports one version of a piece of software at any point in time, and in edge cases may have to support an extra one, as noted in this chat between my mentor and a member. To provide support for Webpack 5 and Node.js 16, which as regards Debian are currently in experimental and can only be migrated to unstable after our transitioning, we have to test, reverse build, report and fix bugs until a certain level of compatibility has been attained with dependent packages currently in unstable. Webpack and Node.js have their respective dependencies, but there are certain software and packages also dependent on Webpack and/or Node.js; these are termed reverse-dependencies. We have to test and build these reverse-dependencies, and report and fix bugs and incompatibilities with the new versions of Webpack and Node.js. For reverse-dependent packages not yet supporting Webpack 5 and/or Node.js 16, we'll open an issue in the form of a feature request in the upstream repository asking for Webpack 5 and/or Node.js 16 support. Ideally, Debian manages a repository of all supported packages on a GitLab-managed, Git-based VCS. For JavaScript packages maintained by the JS Team, the home of those packages sits at https://salsa.debian.org/js-team/. Supported packages are pulled from the upstream repository, mostly GitHub, using certain packaging tools provided by Debian. The pulled source cannot be directly modified or it will break the build. So there exists a dedicated folder named debian where certain configuration files, scripts and rules for the debian package builder live. In some cases, source code needs to be modified; this is done via patching, which means the modifications won't live in the source but in a dedicated patch file inside the debian/patches/ folder. The modifications are diffed line by line with the original source (just as with git) and the result is output in a file managed by the debian utility tool Quilt. The contents of the debian folder are instructions on how to build the source into binaries or an installable archive, .deb (like Java's .jar or Android's .apk).

Understanding Debian Software Release Cycle There are quite a few interesting things about the software release cycle at Debian to get familiar with. Listed here are the release repositories alongside their codenames as of Debian 11:
  1. Unstable (Sid)
  2. Testing (Bookworm)
  3. Stable (Bullseye)
  4. Old stable (Buster)
  5. Old old stable (Stretch)
Ha! Isn't it ironic that unstable is the only one with a stable codename? Some of these, if not all, have codenames subject to change after every new release and/or migration. Only unstable, which is referred to as Sid, never changes. The current stable release, Debian 11, is codenamed Bullseye. The next stable release, Debian 12, will be codenamed Bookworm, because the current testing repository will be migrated to stable and released as Debian 12. The previous stable release, Debian 10, now old stable, was codenamed Buster. To better understand Debian releases you may take a look at this wiki, which defines them completely. Basically, as explained by one of my mentors and remixed in my own words, experimental software is migrated to unstable after (as I said earlier) it has attained a certain level of compatibility with dependent software. It remains in unstable for a long period of time, undergoing testing, autopkgtest runs, regression tests, and so on. At this point bugs are reported and fixed to a satisfactory level. Packages in unstable then migrate to testing, where release-critical bugs are reported and fixed to the point that one can comfortably say testing is almost stable, and voilà, testing is released as the new Debian stable version. This happens roughly every two years. Some months before a new stable release, a soft freeze is turned on, such that no new versions or transitions should be uploaded to unstable; only fixes will be uploaded at this point. Around 4-6 weeks before the release, a hard freeze is turned on that completely disallows uploads to unstable, not even fixes. In due time, testing becomes the new stable release and the freeze is lifted.
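If you want to see where a given package currently sits in this pipeline, the rmadison tool from the devscripts package queries the Debian archive and prints one line per suite; something like the following is enough to follow a transition from experimental through to stable:
rmadison nodejs      # lists the versions in oldstable, stable, testing, unstable and experimental
apt policy nodejs    # on an installed system, shows which configured suites offer which candidate versions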

References
  1. Packaging pre-requisites
  2. Working with chroots
  3. Sbuild (clean builds)
  4. Updating a Debian Package by Abraham Raji

19 January 2022

Joerg Jaspert: Funny CPU usage

Munin plugin and its CPU usage (shell fixup) So at work we do have a munin server running, and one of the graphs we do for every system is a network statistics one with a resolution of 1 second. That's a simple enough script to have, and it is working nicely - on 98% of our machines. You just don't notice the data gatherer at all, which is why we also have some other graphs done with a 1 second resolution. For some, this really helps.

Basics The basic code for this is simple. There is a bunch of stuff to start the background gathering, some to print out the config, and some to hand out the data when munin wants it. Plenty standard. The interesting bit that goes wrong and uses too much CPU on one Linux Distribution is this:
run_acquire() {
   echo "$$" > ${pidfile}
   while :; do
     TSTAMP=$(date +%s)
     echo ${IFACE}_tx.value ${TSTAMP}:$(cat /sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
     echo ${IFACE}_rx.value ${TSTAMP}:$(cat /sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
     # Sleep for the rest of the second
     sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
   done
}
That code works, and on none of Debian wheezy, stretch and buster, nor RedHat 6 or 7, does it show up at all: it just works, no noticeable load generated. Now, Oracle Linux 7 thinks differently. The above code, run there, generates between 8 and 15% CPU usage (on fairly recent Intel CPUs, but that shouldn't matter). (CPU usage measured with the highly accurate method of running top and looking at what it tells me.) Whyever.

Fixing Ok, well, the code above isn't the nicest shell, actually. There is room for improvement. But beware: the older the bash, the less one can fix it.
  • So, first off, there are two useless uses of cat. Bash can do that for us, just use the $(< /PATH/TO/FILE) way.
  • Oh, Bash 5 knows the epoch directly, so we can replace the date call for the timestamp and use ${EPOCHSECONDS}.
  • Too bad Bash 4 can't do that. But hey, its builtin printf can help out: a nice TSTAMP=$(printf '%(%s)T\n' -1) works.
  • Unfortunately that is Bash 4.2 and later, not 4.1, and meh, we have a 4.1 system, so that one has to stay with the date call.
Taking that, we end up with 3 different possible versions, depending on the Bash on the system.
obtain5() {
  ## Purest bash version, Bash can tell us epochs directly
  echo ${IFACE}_tx.value ${EPOCHSECONDS}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${EPOCHSECONDS}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

obtain42() {
  ## Bash can't tell us epochs directly, but the builtin printf can
  TSTAMP=$(printf '%(%s)T\n' -1)
  echo ${IFACE}_tx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

obtain41() {
  ## Bash needs help from an external tool to get the epoch, which means one exec() every time
  TSTAMP=$(date +%s)
  echo ${IFACE}_tx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

run_acquire() {
   echo "$$" > ${pidfile}
   case ${BASH_VERSINFO[0]} in
     5) while :; do
          obtain5
        done
        ;;
     4) if [[ ${BASH_VERSINFO[1]} -ge 2 ]]; then
          while :; do
            obtain42
          done
        else
          while :; do
            obtain41
          done
        fi
        ;;
   esac
}

Does it help? Oh yes, it does. Oracle Linux 7 appears to use Bash 4.2, so it uses obtain42, and hey, after removing one date and two cat calls it has a sane CPU usage of 0 (again, a highly accurate number generated from top). It appears OL7 is doing heck-what-do-I-know extra work when calling other tools, for whatever gains; removing those calls does help (who would have thought). (Neither RedHat nor Oracle Linux has SELinux turned on, so that one shouldn't bite. But it is clear OL7 is doing something extra for everything that bash spawns.)
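If one wanted to confirm where that extra work actually goes, attaching strace to the running gatherer and comparing the syscall summaries between a well-behaved host and the OL7 one would be a reasonable next step. A rough sketch; the pidfile path is whatever your plugin writes, the one below is made up:
# Count syscalls of the gatherer (and its children) for ten seconds, then print the summary
timeout -s INT 10 strace -c -f -p "$(cat /run/munin/if1sec_eth0.pid)"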

17 January 2022

Wouter Verhelst: Different types of Backups

In my previous post, I explained how I recently set up backups for my home server to be synced using Amazon's services. I received a (correct) comment on that by Iustin Pop which pointed out that while it is reasonably cheap to upload data into Amazon's offering, the reverse -- extracting data -- is not as cheap. He is right, in that extracting data from S3 Glacier Deep Archive costs over an order of magnitude more than it costs to store it there on a monthly basis -- in my case, I expect to have to pay somewhere in the vicinity of 300-400 USD for a full restore. However, I do not consider this to be a major problem, as these backups are only to fulfill the rarer of the two types of backup cases. There are two reasons why you should have backups. The first is the most common one: "oops, I shouldn't have deleted that file". This happens reasonably often; people will occasionally delete or edit a file that they did not mean to, and then they will want to recover their data. At my first job, a significant part of my job was to handle recovery requests from users who had accidentally deleted a file that they still needed. Ideally, backups to handle this type of situation are easily accessible to end users, and are performed reasonably frequently. A system that automatically creates and deletes filesystem snapshots (such as the zfsnap script for ZFS snapshots, which I use on my server) works well. The crucial bit here is to ensure that it is easier to copy an older version of a file than it is to start again from scratch -- if a user must file a support request that may or may not be answered within a day or so, it is likely they will not do so for a file they were working on for only half a day, which means they lose half a day of work in such a case. If, on the other hand, they can just go into the snapshots directory themselves and it takes them all of two minutes to copy their file, then they will also do that for files they only created half an hour ago, so they don't even lose half an hour of work and can get right back to it. This means that backup strategies to mitigate the "oops I lost a file" case ideally do not involve off-site file storage, and instead are performed online. The second case is the much rarer one, but (when required) has the much bigger impact: "oops the building burned down". Variants of this can involve things like lightning strikes, thieves, earthquakes, and the like; in all cases, the point is that you want to be able to recover all your files, even if every piece of equipment you own is no longer usable. That being the case, you will first need to replace that equipment, which is not going to be cheap, and it is also not going to be an overnight thing. In order to still be useful after you have lost all your equipment, these backups must also be stored off-site, and should preferably be offline backups, too. Since replacing your equipment is going to cost you time and money, it's fine if restoring the backups is going to take a while -- you can't really restore from backup any time soon anyway. And since you will lose a number of days of content that you can't create when you can only fall back on your off-site backups, it's fine if you also lose a few days of content that you will have to re-create.
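As a concrete illustration of that self-service recovery path: with ZFS, every snapshot that a tool like zfsnap creates is browsable through the hidden .zfs/snapshot directory of the dataset, so recovering yesterday's version of a file is just a copy. The dataset, snapshot and file names below are made up:
ls /home/.zfs/snapshot/
cp /home/.zfs/snapshot/daily-2022-01-16/alice/thesis.tex /home/alice/thesis.tex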
All in all, the two types of backups have opposing requirements: "oops I lost a file" backups should be performed often and should be easily available; "oops I lost my building" backups should not be easily available, and are ideally done less often, so you don't pay a high amount of money for storage of your off-sites. In my opinion, if you have good "lost my file" backups, then it's also fine if the recovery of your backups is a bit more expensive. You don't expect to have to ever pay for these; you may end up in a situation where you don't have a choice, and then you'll be happy that the option is there, but as long as you can reasonably pay for the worst case scenario of a full restore, it's not a case you should be worried about much. As such, and given that a full restore from Amazon Storage Gateway is going to be somewhere between 300 and 400 USD for my case -- a price I can afford, although it's not something I want to pay every day -- I don't think it's a major issue that extracting data is significantly more expensive than uploading data. But of course, this is something everyone should consider for themselves...

Matthew Garrett: Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.
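(As an aside, on a Linux machine with the tpm2-tools package installed you can look at the record that measured boot leaves behind; the event log below is exposed by the kernel on most TPM 2.0 systems, and this is only an illustration, not part of the argument here.)
# Dump the measured-boot event log written by the firmware and bootloader
tpm2_eventlog /sys/kernel/security/tpm0/binary_bios_measurements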

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.
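To make the re-sealing step concrete: on a distribution with tpm2-tools and a recent systemd, it would look roughly like the sketch below. The LUKS device path is just an example, and PCR 7 is the register suggested above.
# Look at the current firmware and Secure Boot measurements
tpm2_pcrread sha256:0,7
# Re-enroll the disk encryption key against the new PCR 7 value after a firmware change
systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3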

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


16 January 2022

Chris Lamb: Favourite films of 2021

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' novels that I enjoyed reading the most in 2021. But in the very last of my 2021 roundup posts, I'll be going over some of my favourite movies. (Saying that, these are perhaps less my 'favourite films' than the ones worth remarking on; after all, nobody needs to hear that The Godfather is a good movie.) It's probably helpful to mention that I took a self-directed course in film history in 2021, based around the first volume of Roger Ebert's The Great Movies. This collection of 100-odd movie essays aims to make a tour of the landmarks of the first century of cinema, and I watched all but a handful before the year was out. I am slowly making my way through volume two in 2022. This tome was tremendously useful, and not simply due to the background context that Ebert added to each film: it also brought me into contact with films I would hardly have come across by any other means. Would I have ever discovered the sly comedy of Trouble in Paradise (1932) or the touching proto-realism of L'Atalante (1934) any other way? It also helped me to 'get around' to watching films I might have put off forever: the influential Battleship Potemkin (1925), for instance, and the ur-epic Lawrence of Arabia (1962) spring to mind here. Choosing a 'worst' film is perhaps more difficult than choosing the best. There are, first, those that left me completely dry (Ready or Not, Written on the Wind, etc.), and those that were simply poorly executed. And there are those that failed to meet their own high opinions of themselves, such as the 'made for Reddit' Tenet (2020) or the inscrutable Vanilla Sky (2001), the latter being an almost perfect example of late-20th-century cultural exhaustion. But I must save my most severe judgement for those films where I took a visceral dislike to how their subjects were portrayed. The sexually problematic Sixteen Candles (1984) and the pseudo-Catholic vigilantism of The Boondock Saints (1999) both spring to mind here, the latter of which combines so many things I dislike into such a short running time that I'd need an entire essay to adequately express how much I disliked it.

Dogtooth (2009) A father, a mother, a brother and two sisters live in a large and affluent house behind a very high wall and an always-locked gate. Only the father ever leaves the property, driving to the factory that he happens to own. Dogtooth goes far beyond any allusion to Josef Fritzl's cellar, though, as the children's education is a grotesque parody of home-schooling. Here, the parents deliberately teach their children the wrong meaning of words (e.g. a yellow flower is called a 'zombie'), all of which renders the outside world utterly meaningless and unreadable, and completely mystifies its very existence. It is this creepy strangeness within a 'regular' family unit in Dogtooth that is both socially and epistemically horrific, and I'll say nothing here of its sexual elements. Despite its cold, inscrutable and deadpan surreality, Dogtooth invites all manner of potential interpretations. Is this film about the artificiality of the nuclear family that the West insists is the benchmark of normality? Or is it, as I prefer to believe, something more visceral altogether: an allegory for the various forms of ontological violence wrought by fascism, as well as a sobering nod towards some of fascism's inherent appeals? (Perhaps it is both. In 1972, French poststructuralists Gilles Deleuze and Félix Guattari wrote Anti-Oedipus, which plays with the idea of the family unit as a metaphor for the authoritarian state.) The Greek-language Dogtooth, elegantly shot, thankfully provides no easy answers.

Holy Motors (2012) There is an infamous scene in Un Chien Andalou, the 1929 film collaboration between Luis Buñuel and famed artist Salvador Dalí. A young woman is cornered in her own apartment by a threatening man, and she reaches for a tennis racquet in self-defence. But the man suddenly picks up two nearby ropes and drags into the frame two large grand pianos... each laden with a dead donkey, a stone tablet, a pumpkin and a bewildered priest. This bizarre sketch serves as a better introduction to Leos Carax's Holy Motors than any elementary outline of its plot, which ostensibly follows 24 hours in the life of a man who must play a number of extremely diverse roles around Paris... all for no apparent reason. (And is he even a man?) Surrealism as an art movement gets a pretty bad rap these days, and perhaps justifiably so. But Holy Motors and Un Chien Andalou serve as a good reminder that surrealism can be, well, 'good, actually'. And if not quite high art, Holy Motors at least demonstrates that surrealism can still be unnerving and hilariously funny. Indeed, recalling the whimsy of the plot to a close friend, the tears of laughter came unbidden to my eyes once again. ("And then the limousines...!") Still, it is unclear how Holy Motors truly refreshes surrealism for the twenty-first century. Surrealism was, in part, a reaction to the mechanical and unfeeling brutality of World War I and ultimately sought to release the creative potential of the unconscious mind. Holy Motors cannot be responding to another continental conflagration, and so it appears to me to be some kind of commentary on the roles we exhibit in an era of 'post-postmodernity': a sketch on our age of performative authenticity, perhaps, or an idle doodle on the role and psychosocial function of work. Or perhaps not. After all, this film was produced in a time that offers the near-universal availability of mind-altering substances, and this certainly changes the context in which this film was both created and, how can I put it, intended to be watched.

Manchester by the Sea (2016) An absolutely devastating portrayal of a character who is unable to forgive himself and is hesitant to engage with anyone ever again. It features a near-ideal balance between portraying unrecoverable anguish and tender warmth, and is paradoxically grandiose in its subtle intimacy. The mechanics of life led me to watch this lying on a bed in a chain hotel by Heathrow Airport, and if this colourless circumstance blunted the film's emotional impact on me, I am probably thankful for it. Indeed, I find myself reduced in this review to fatuously recalling my favourite interactions instead of providing any real commentary. You could write a whole essay about one particular incident: its surfaces, subtexts and angles... all despite nothing of any substance ever being communicated. Truly stunning.

McCabe & Mrs. Miller (1971) Roger Ebert called this movie "one of the saddest films I have ever seen, filled with a yearning for love and home that will not ever come." But whilst it is difficult to disagree with his sentiment, Ebert's choice of 'sad' is somehow not quite the right word. Indeed, I've long regretted that our dictionaries don't have more nuanced blends of tragedy and sadness; perhaps the Ancient Greeks can loan us some. Nevertheless, the plot of this film concerns a gambler and a prostitute who become business partners in a new and remote mining town called Presbyterian Church. However, as their town and enterprise booms, it comes to the attention of a large mining corporation who want to bully or buy their way into the action. What makes this film stand out is not the plot itself, however, but its mood and tone: the town and its inhabitants seem to be thrown together out of raw lumber, covered alternately in mud or frozen ice, and their days (and their personalities) are both short and dark in equal measure. As a brief aside, if you haven't seen a Robert Altman film before, this has all the trappings of being a good introduction. As Ebert went on to observe: "This is not the kind of movie where the characters are introduced. They are all already here." Furthermore, we can see some of Altman's trademarks: conversations that overlap, a superb handling of ensemble casts, and a quietly subversive view of the tyranny of 'genre'... and the latter in a time when the appetite for revisionist portrayals of the West was not very strong. All of these 'Altmanian' trademarks can be found in much stronger measures in his later films: in particular, his comedy-drama Nashville (1975) has 24 main characters, and my jejune interpretation of Gosford Park (2001) is that it is purposefully designed to poke fun at those who take a reductionist view of 'genre', or at least of the audience's expectations. (In this case, an Edwardian-era English murder mystery in the style of Agatha Christie, but where no real murder or detection really takes place.) On the other hand, McCabe & Mrs. Miller is actually a poor introduction to Altman. The story is told at a suitably deliberate and slow tempo, and the two stars of the film are shown thoroughly defrocked of any 'star status', in both the visual and the moral dimensions. All of these traits are, however, this film's strengths, adding up to a credible, fascinating and riveting portrayal of the old West.

Detour (1945) Detour was filmed in less than a week, and it's difficult to decide out of the actors and the screenplay which is its weakest point.... Yet it still somehow seemed to drag me in. The plot revolves around luckless Al who is hitchhiking to California. Al gets a lift from a man called Haskell who quickly falls down dead from a heart attack. Al quickly buries the body and takes Haskell's money, car and identification, believing that the police will believe Al murdered him. An unstable element is soon introduced in the guise of Vera, who, through a set of coincidences that stretches credulity, knows that this 'new' Haskell (ie. Al pretending to be him) is not who he seems. Vera then attaches herself to Al in order to blackmail him, and the world starts to spin out of his control. It must be understood that none of this is executed very well. Rather, what makes Detour so interesting to watch is that its 'errors' lend a distinctively creepy and unnatural hue to the film. Indeed, in the early twentieth century, Sigmund Freud used the word unheimlich to describe the experience of something that is not simply mysterious, but something creepy in a strangely familiar way. This is almost the perfect description of watching Detour its eerie nature means that we are not only frequently second-guessed about where the film is going, but are often uncertain whether we are watching the usual objective perspective offered by cinema. In particular, are all the ham-fisted segues, stilted dialogue and inscrutable character motivations actually a product of Al inventing a story for the viewer? Did he murder Haskell after all, despite the film 'showing' us that Haskell died of natural causes? In other words, are we watching what Al wants us to believe? Regardless of the answers to these questions, the film succeeds precisely because of its accidental or inadvertent choices, so it is an implicit reminder that seeking the director's original intention in any piece of art is a complete mirage. Detour is certainly not a good film, but it just might be a great one. (It is a short film too, and, out of copyright, it is available online for free.)

Safe (1995) Safe is a subtly disturbing film about an upper-middle-class housewife who begins to complain about vague symptoms of illness. Initially claiming that she doesn't feel right, Carol starts to have unexplained headaches, a dry cough and nosebleeds, and eventually begins to have trouble breathing. Carol's family doctor treats her concerns with little care, and suggests to her husband that she see a psychiatrist. Yet Carol's episodes soon escalate. For example, as a 'homemaker' with nothing else to occupy her, Carol orders a new couch for a party. But when the store delivers the wrong one (although it is not altogether clear that they did), Carol has a near breakdown. Unsure where to turn, an 'allergist' tells Carol she has "Environmental Illness," and so Carol eventually checks herself into a new-age commune filled with alternative therapies. On the surface, Safe is thus a film about the increasing amount of pesticides and chemicals in our lives, something that was clearly felt far more viscerally in the 1990s. But it is also a film about how the lack of genuine healthcare for women must be seen as a critical factor in the rise of crank medicine. (Indeed, it made for something of an uncomfortable watch during the coronavirus lockdown.) More interestingly, however, Safe gently-yet-critically examines the psychosocial causes that may be aggravating Carol's illnesses, including her vacant marriage, her hollow friends and the 'empty calorie' stimulus of suburbia. None of this should be especially new to anyone: the gendered Victorian term 'hysterical' is often all but spoken throughout this film, and perhaps since the very invention of modern medicine, women's symptoms have regularly been minimised or outright dismissed. (Hilary Mantel's 2003 memoir, Giving Up the Ghost, is especially harrowing on this.) As I said at the start of this review, the film is subtle in its messaging. Just to take one example from many, the sound of the cars is always just a fraction too loud: there's a scene where a group is eating dinner with a road in the background, and the total effect can be seen as representing the toxic fumes of modernity invading our social lives and health. I won't spoil the conclusion of this quietly devastating film, but don't expect a happy ending.

The Driver (1978) Critics grossly misunderstood The Driver when it was first released. They interpreted the cold and unemotional affect of the characters as a lack of developmental depth, rather than as a representation of their dissociation from the society around them. This reading was encouraged by the fact that the principal actors aren't given real names and are instead known simply by their archetypes: 'The Driver', 'The Detective', 'The Player' and so on. This sort of quasi-Jungian erudition is common in many crime films today (Reservoir Dogs, Kill Bill, Layer Cake, Fight Club), so the critics' misconceptions were entirely reasonable in 1978. The plot of The Driver involves the eponymous Driver, a noted getaway driver for robberies in Los Angeles. His exceptional talent has prevented him from being captured thus far, so the Detective attempts to catch the Driver by pardoning another gang if they help convict the Driver via a set-up robbery. To give himself an edge, however, The Driver seeks help from the femme fatale 'Player' in order to mislead the Detective. If this all sounds eerily familiar, you would not be far wrong. The film was essentially remade by Nicolas Winding Refn as Drive (2011) and in Edgar Wright's 2017 Baby Driver. Yet The Driver offers something that these neon-noir variants do not. In particular, the car chases around Los Angeles are some of the most captivating I've seen: they aren't thrilling in the sense of tyre squeals, explosions and flying boxes, but rather the vehicles come across like wild animals hunting one another. This feels especially so when the police are hunting The Driver, which feels less like a low-stakes game of cat and mouse than a pack of feral animals working together: a gang who will tear apart their prey if they find him. In contrast to the undercar neon glow of the Fast & Furious franchise, the urban-realism backdrop of The Driver's LA metropolis contributes to a sincere feeling of artistic fidelity as well. To be sure, most of this is present in the truly excellent Drive, where the chase scenes really do communicate a credible sense of stakes. But the substitution of The Driver's grit with Drive's soft neon tilts it slightly towards that common affliction of crime movies: style over substance. Nevertheless, I can highly recommend watching The Driver and Drive together, as the pairing can tell you a lot about the disconnected socioeconomic practices of the 1980s compared to the 2010s. More than that, however, the pseudo-1980s synthwave soundtrack of Drive captures something crucial to analysing the world of today. In particular, these 'sounds from the past filtered through the present' bring to mind the increasing role of nostalgia for lost futures in the culture of today, where temporality and pop culture references are almost exclusively citational and commemorational.

The Souvenir (2019) The ostensible outline of this quietly understated film follows a shy but ambitious film student who falls into an emotionally fraught relationship with a charismatic but untrustworthy older man. But that doesn't quite cover the plot at all, for not only is The Souvenir a film about a young artist who is inspired, derailed and ultimately strengthened by a toxic relationship, it is also partly a coming-of-age drama, a subtle portrait of class and, finally, a film about the making of a film. Still, one of the geniuses of this truly heartbreaking movie is that none of these many elements crowds out the other. It never, ever feels rushed. Indeed, there are many scenes where the camera simply 'sits there' and quietly observes what is going on. Other films might smother themselves through references to 18th-century oil paintings, but The Souvenir somehow evades this too. And there's a certain ring of credibility to the story as well, no doubt in part due to the fact it is based on director Joanna Hogg's own experiences at film school. A beautifully observed and multi-layered film; I'll be happy if the sequel is one-half as good.

The Wrestler (2008) Randy 'The Ram' Robinson is long past his prime, but he is still rarin' to go in the local pro-wrestling circuit. Yet after a brutal beating that seriously threatens his health, Randy hangs up his tights and pursues a serious relationship... and even tries to reconnect with his estranged daughter. But Randy can't resist the lure of the ring, and readies himself for a comeback. The stage is thus set for Darren Aronofsky's The Wrestler, which is essentially about what drives Randy back to the ring. To be sure, Randy derives much of his money from wrestling, as well as his 'fitness', self-image, self-esteem and self-worth. Oh, it's no use insisting that wrestling is fake, for the sport is, needless to say, Randy's identity; it's not for nothing that this film is called The Wrestler. In a number of ways, The Sound of Metal (2019) is both a reaction to (and a quiet remake of) The Wrestler, if only because both movies utilise 'cool' professions to explore such questions of identity. But perhaps it is simply when The Wrestler was produced that makes it the superior film. Indeed, the role of time feels very important for The Wrestler. In the first instance, time is clearly taking its toll on Randy's body, but I felt it more strongly in the sense that this was very much a pre-2008 film, released on the cliff-edge of the global financial crisis and the concomitant precarity of the 2010s. Indeed, it is curious to consider that you couldn't make The Wrestler today, although not because the relationship to work has changed in any fundamental way. (Indeed, isn't it somewhat depressing to realise that, putting the start of the pandemic and the 'work from home' trend to one side, we now require even more people to wreck their bodies and mental health to cover their bills?) No, what I mean to say here is that, post-2016, you cannot portray wrestling on-screen without, how can I put it, unwelcome connotations. All of which then reminds me of Minari's notorious red hat... But I digress. The Wrestler is a grittily stark, darkly humorous look into the life of a desperate man and a sorrowful world, all through one tragic profession.

Thief (1981) Frank is an expert professional safecracker and specialises in high-profile diamond heists. He plans to use his ill-gotten gains to retire from crime and build a life for himself with a wife and kids, so he signs on with a top gangster for one last big score. This, of course, could be the plot of any number of heist movies, but Thief does something different. Similar to The Wrestler and The Driver (see above) and a number of other films that I watched this year, Thief seems to be saying something about our relationship to work and family in modernity and postmodernity. Indeed, the 'heist film', we are told, is an understudied genre, but part of the pleasure of watching these films is said to arise from how they portray our desired relationship to work. In particular, Frank's desire to pull off that last big job feels less about the money it would bring him than a displacement from (or proxy for) fulfilling some deep-down desire to have a family, or indeed any relationship at all. Because in theory, of course, Frank could enter into a fulfilling long-term relationship right away, without stealing millions of dollars in diamonds... but that's kinda the entire point: Frank needing just one more theft is an excuse not to pursue a relationship and to put it off indefinitely in favour of 'work'. (And since his crimes are Federal ones, it also means Frank cannot put down meaningful roots in a community.) All this is communicated extremely subtly in the justly-lauded lowkey diner scene, by far the best scene in the movie. The visual aesthetic of Thief is as if you set The Warriors (1979) in a similarly filthy Chicago, with the Xenophon-inspired plot of The Warriors replaced with an almost deliberate lack of plot development... and the allure of The Warriors' fantastical criminal gangs (with their alluringly well-defined social identities) substituted by a bunch of amoral individuals with no solidarity beyond the immediate moment. A tale of our time, perhaps. I should warn you that the ending of Thief is famously weak, but this is a gritty, intelligent and strangely credible heist movie before you get there.

Uncut Gems (2019) The most exhausting film I've seen in years; the cinematic equivalent of four cups of double espresso, I didn't even bother trying to sleep after downing Uncut Gems late one night. Directed by the Safdie brothers, it often felt like I was watching two films that had been made at the same time. (Or do I mean two films at 2X speed?) No, whatever clumsy metaphor you choose to adopt, the unavoidable effect of this film's finely-tuned chaos is an uncompromising and anxiety-inducing piece of cinema. The plot follows Howard as a man lost to his countless vices, mostly gambling, with a significant side hustle in adultery, but you get the distinct impression he would be happy with anything that will give him another high. A true junkie's junkie, you might say. You know right from the beginning it's going to end in some kind of disaster; the only question remaining is precisely how, and what. Portrayed by an (almost unrecognisable) Adam Sandler, there's an uncanny sense of distance in the emotional chasm between 'Sandler-as-junkie' and 'Sandler-as-regular-star-of-goofy-comedies'. Yet instead of being distracting and reducing the film's affect, this possibly-deliberate intertextuality somehow adds to the masterfully-controlled mayhem. My heart races just at the memory. Oof.

Woman in the Dunes (1964) I ended up watching three films that feature sand this year: Denis Villeneuve's Dune (2021), Lawrence of Arabia (1962) and Woman in the Dunes. But it is this last 1964 film by Hiroshi Teshigahara that will stick in my mind in the years to come. Sure, there is none of the Medician intrigue of Dune or the Super Panavision-70 of Lawrence of Arabia (or its quasi-orientalist score, itself likely stolen from Anton Bruckner's 6th Symphony), but Woman in the Dunes doesn't have to assert its confidence so boldly, and it reveals the enormity of its plot slowly and deliberately instead. Woman in the Dunes never rushes to get to the film's central dilemma, and it uncovers its terror in little hints and insights, all whilst establishing the daily rhythm of life. Woman in the Dunes has something of the same uncanny horror as Dogtooth (see above), as well as its broad range of potential interpretations. Both films permit a wide array of readings without resorting to being deliberately obscurantist or just plain random; it is perhaps for this reason that I enjoyed them so much. It is true that asking 'So what does the sand mean?' sounds tediously sophomoric shorn of any context, but it somehow applies to this thoughtfully self-contained piece of cinema.

A Quiet Place (2018) Although A Quiet Place was not actually one of the best films I saw this year, I'm including it here as it is certainly one of the better 'mainstream' Hollywood franchises I came across. Not only is the film very ably constructed, it also engages on a visceral level: it is rare that I can empathise with the peril of conventional horror movies (I perhaps prefer to focus on their cultural and political aesthetics), but I did here. The conceit of this particular post-apocalyptic world is that a family is forced to live in almost complete silence while hiding from creatures that hunt by sound alone. Still, A Quiet Place engages on an intellectual level too, and this probably works in tandem with the pure 'horrific' elements and makes it stick in your mind. In particular, and to my mind at least, A Quiet Place is a deeply conservative American film below the surface: it exalts the family structure and a certain kind of sacrifice for your family. (The music often had a passacaglia-like strain too, forming a tombeau for America.) Moreover, you survive in this dystopia by staying quiet, that is to say, by staying stoic, suggesting that in the wake of any conflict that might beset the world, the best thing to do is to keep quiet. Even communicating with your loved ones can be deadly to both of you, so do not emote, acquiesce quietly to your fate, and don't, whatever you do, speak up. (Or join a union.) I could go on, but A Quiet Place is more than this. It's taut and brief, and despite cinema being an increasingly visual medium, it encourages its audience to develop a new relationship with sound.

Wouter Verhelst: Backing up my home server with Bacula and Amazon Storage Gateway

I have a home server. Initially conceived and sized so I could digitize my (rather sizeable) DVD collection, I started using it for other things; I added a few play VMs on it, started using it as a destination for the deja-dup-based backups of my laptop and the Time Machine-based ones of the various Macs in the house, and used it as the primary location of all the photos I've taken with my cameras over the years (currently taking up somewhere around 500G) as well as those that were taken at our wedding (another 100G). To add to that, I've copied the data that my wife had on various older laptops and external hard drives onto this home server as well, so that we don't lose the data should something happen to one or more of these bits of older hardware. Needless to say, the server was running full, so a few months ago I replaced the 4x2T hard drives that I originally put in the server with 4x6T ones, and there was much rejoicing. But then I started considering what I was doing. Originally, the intent was for the server to contain DVD rips of my collection; if I were to lose the server, I could always re-rip the collection and recover that way (unless something happened that caused me to lose both at the same time, of course, but I consider that sufficiently unlikely that I don't want to worry about it). Much of the new data on the server, however, cannot be recovered like that; if the server dies, I lose my photos forever, with no way of recovering them. Obviously that can't be okay. So I started looking at options to create backups of my data, preferably in ways that make it easily doable for me to automate the backups -- because backups that have to be initiated by hand are backups that will be forgotten, and backups that are forgotten are backups that don't exist. So let's not try that. When I was still self-employed in Belgium and running a consultancy business, I sold a number of lower-end tape libraries for which I then configured bacula, and I preferred a solution that would be similar to that without costing an arm and a leg. I did have a look at a few second-hand tape libraries, but even second hand these are still way outside what I can budget for this kind of thing, so that was out too. After looking at a few solutions that seemed very hackish and would require quite a bit of handholding (which I don't think is a good idea), I remembered that a few years ago, I had a look at the Amazon Storage Gateway for a customer. This gateway provides a virtual tape library with 10 drives and 3200 slots (half of which are import/export slots) over iSCSI. The idea is that you install the VM on a local machine, you connect it to your Amazon account, you connect your backup software to it over iSCSI, and then it syncs the data that you write to Amazon S3, with the ability to archive data to S3 Glacier or S3 Glacier Deep Archive. I didn't end up using it at the time because it required a VMWare virtualization infrastructure (which I'm not interested in), but I found out that these days, they also provide VM images for Linux KVM-based virtual machines (amongst others), so that changes things significantly. After making a few calculations, I figured out that for the amount of data that I would need to back up, I would require a monthly budget of somewhere between 10 and 20 USD if the bulk of the data were on S3 Glacier Deep Archive. This is well within my means, so I gave it a try.
The VM's technical requirements state that you need to assign four vCPUs and 16GiB of RAM, which just so happens to be the exact amount of RAM and CPU that my physical home server has. Obviously we can't do that. I tried getting away with 4GiB and 2 vCPUs, but that didn't work; the backup failed out after about 500G out of 2T had been written, due to the VM running out of resources. On the VM's console I found complaints that it required more memory, and I saw it mention something in the vicinity of 7GiB instead, so I decided to try again, this time with 8GiB of RAM rather than 4. This worked, and the backup was successful. As far as bacula is concerned, the tape library is just a (very big...) normal tape library, and I got data throughput of about 30M/s while the VM's upload buffer hadn't run full yet, with things slowing down to pretty much my Internet line speed when it had. With those speeds, Bacula finished the backup successfully in "1 day 6 hours 43 mins 45 secs", although the storage gateway was still uploading things to S3 Glacier for a few hours after that. All in all, this seems like a viable backup solution for large(r) amounts of data, although I haven't yet tried to perform a restore.
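For anyone curious what "connect your backup software to it over iSCSI" looks like on the Linux side, the virtual tape library simply shows up as ordinary SCSI tape devices once the initiator logs in; a rough sketch with open-iscsi and lsscsi (the gateway's IP address is a placeholder):
iscsiadm -m discovery -t sendtargets -p 192.168.1.50     # ask the gateway VM which targets it exports
iscsiadm -m node --login                                 # log in to all discovered targets
lsscsi -g                                                # the ten tape drives and the changer now appear as /dev/st* and /dev/sg*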

14 January 2022

Debian Social Team: Some site updates

Dirk Eddelbuettel: Rcpp 1.0.8: Updated, Strict Headers

rcpp logo The Rcpp team is thrilled to share the news of the newest release 1.0.8 of Rcpp, which hit CRAN today and has already been uploaded to Debian as well. Windows and macOS builds should appear at CRAN in the next few days. This release continues the six-month cycle started with release 1.0.5 in July 2020. As a reminder, interim dev or rc releases will always be available in the Rcpp drat repo; this cycle there were once again seven (!!) times two, as we also tested the modified header (more below). These rolling releases tend to work just as well, and are also fully tested against all reverse dependencies. Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2478 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 242 in BioConductor. This release finally brings a change we have worked on quite a bit over the last few months. The idea of enforcing the setting of STRICT_R_HEADERS was proposed years ago, in 2016 and again in 2018. But making such a change against a widely-deployed code base has repercussions, and we were not ready then. Last April, this was revisited in issue #1158. Over the course of numerous lengthy runs of tests of a changed Rcpp package against (essentially) all reverse dependencies (i.e. packages which use Rcpp), we identified ninety-four packages in total which needed a change. We provided either a patch we emailed, or a GitHub pull request, to all ninety-four. And we are happy to say that eighty cases were resolved via a new CRAN upload, with seven more having merged the pull request but not yet uploaded. Hence, we could make the case to CRAN (who were always CC'ed on the monthly nag emails we sent to maintainers of packages needing a change) that an upload was warranted. And after a brief period for their checks and inspection, our January 11 release of Rcpp 1.0.8 arrived on CRAN on January 13. So with that, a big and heartfelt Thank You! to all eighty maintainers for updating their packages to permit this change at the Rcpp end, to CRAN for the extra checking, and to everybody else whom I bugged with the numerous emails and updates to the seemingly never-ending issue #1158. We all got this done, and that is a Good Thing (TM). Other than the aforementioned change, which will now automatically set STRICT_R_HEADERS (unless opted out of, which one can), a number of nice pull requests by a number of contributors are included in this release. The full list of details follows.

Changes in Rcpp release version 1.0.8 (2022-01-11)
  • Changes in Rcpp API:
    • STRICT_R_HEADERS is now enabled by default, see extensive discussion in #1158 closing #898.
    • A new #define allows default setting of finalizer calls for external pointers (Iñaki in #1180 closing #1108).
    • Rcpp:::CxxFlags() now quotes the include path generated (Kevin in #1189 closing #1188).
    • New header files Rcpp/Light, Rcpp/Lighter, Rcpp/Lightest and default Rcpp/Rcpp for fine-grained access to features (and compilation time) (Dirk #1191 addressing #1168).
  • Changes in Rcpp Attributes:
    • A new option signature allows customization of function signatures (Travers Ching in #1184 and #1187 fixing #1182)
  • Changes in Rcpp Documentation:
    • The Rcpp FAQ has a new entry on how not to grow a vector (Dirk in #1167).
    • Some long-spurious calls to RNGScope have been removed from examples (Dirk in #1173 closing #1172).
    • DOI references in the bibtex files have been updated per JSS request (Dirk in #1186).
  • Changes in Rcpp Deployment:
    • Some continuous integration components have been updated (Dirk in #1174, #1181, and #1190).

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2822 previous questions. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 January 2022

Daniel Lange: Leveling the playing field for non-native speakers

Wordle game screenshot of bash, grep and pipes
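(The exact pipeline from the screenshot is not reproduced here, but the idea is simple enough to sketch: filter a local dictionary with one grep per clue. The word list path and the letters below are just an example, not the original grep-foo.)
# five-letter lowercase words that contain an "a", have "r" in third position,
# and use none of the letters already ruled out
grep -x '[a-z]\{5\}' /usr/share/dict/words | grep a | grep '^..r' | grep -v '[esto]'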

Updates 24.01.2022: What I love about the community is the playful creativity that inspires a game like Wordle and that in turn inspires others to create fun tools around it: Robert Reichel has reverse engineered the Wordle application, so in case you want to play tomorrow's word today... you can. Or have that one-guess "Genius" solution experience. JP Fosterson created a Wordle helper that is very much the Python version of my grep-foo above, in case you play regularly and can use a hand. And Tom Lockwood wrote a Wordle solver, also in Python. He blogged about it and... is pondering rewriting things in Rust:

I've decided to explore Rust for this, and so far what was taking 1GB of RAM in Python is taking, literally 1MB in Rust!
Welcome to 2022. 01.02.2022: OMG. Wordle has been bought by the New York Times "for a price in the low seven figures" (Source). Joey Rees-Hill put it well in The Death of Wordle:
Today's Web is dominated by platforms. The average Web user will spend most of their time on large platforms such as Instagram, Facebook, Twitter, TikTok, Google Drive/Docs, YouTube, Netflix, Spotify, Gmail, and Google Calendar, along with sites operated by large publishers such as The New York Times or The Washington Post. [..]
The Web wasn't always this way. I'm not old enough to remember this, but things weren't always so centralized. Web users might run their own small website, and certainly would visit a good variety of smaller sites. With the increasing availability of internet access, the Web has become incredibly commercialized, with a handful of companies concentrating Web activity on their own properties.
Wordle was a small site that gained popularity despite not being part of a corporate platform. It was wonderful to see an independent site gain attention for being simple and fun. Wordle was refreshingly free of attention-manipulating dark patterns and pushy monetization. That's why it's a shame to see it absorbed, to inevitably become just another feature of one large media company's portfolio.
Still, kudos to Josh Wardle: a million pounds for Wordle. Well done! It was fun while it lasted. Let's see what the next Wordle will be. This one has just been absorbed into the Borg collective.
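
As a rough illustration of the bash/grep/pipes approach the screenshot and the grep-foo remark refer to, here is a minimal sketch; the dictionary path and the example letter constraints are illustrative assumptions, not the actual commands from the post.
WORDS=/usr/share/dict/words                  # assumed path to a word list
grep -xE '[a-z]{5}' "$WORDS" |               # keep only five-letter lowercase words
  grep -v '[sret]' |                         # grey: letters ruled out by earlier guesses
  grep 'o' | grep -v '^o' |                  # yellow: an "o" somewhere, but not first
  grep '^.a' |                               # green: "a" confirmed in the second position
  sort -u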

11 January 2022

Ritesh Raj Sarraf: ThinkPad AMD Debian

After a hiatus of 6 years, it was nice to be back with the ThinkPad. This blog post briefly touches upon my impressions of the current-generation ThinkPad T14 Gen2 AMD variant.
ThinkPad T14 Gen2 AMD

Lenovo It took 8 weeks to get my hands on the machine. Given the pandemic, restrictions and uncertainties, I am not sure if I should call it an on-time delivery. This was a CTO (Customise-to-Order), so it was nice to get rid of things I really didn't care about or use much. On the other hand, it also meant I could save on some power. It also came out comparatively cheaper overall.
  • No fingerprint reader
  • No Touch screen
There are still parts where Lenovo could improve, or frustrate a customer less. I don't understand why a company would provide a full customization option on their portal while, at the same time, not providing an explicit option to choose the make/model of the hardware one wants. Lenovo deliberately chooses not to show/specify which WiFi adapter one could choose. So, as I suspected, I ended up with a MEDIATEK Corp. Device 7961 WiFi adapter.

AMD For the first time in my computing life, I'm now using AMD at the core. I was pretty frustrated with annoying Intel Graphics bugs, so I decided to take the plunge and give AMD/ATI a shot, knowing that the radeon driver does have decent support. So far, on the graphics side of things, I'm glad that things look bright. The stock in-kernel radeon driver has been working perfectly for my needs and I haven't had to tinker even once so far, in my 30 days of use. On overall system performance, I have not done any benchmarks, nor do I intend to. But on the whole, system performance is smooth.

Power/Thermal This is where things need more improvement on the AMD side. This AMD laptop draws a terrible amount of power in suspend mode. And it isn't just this machine; the previous T14 Gen1 has similar problems. I'm not sure if this is a generic ThinkPad problem or an AMD-specific problem. But coming from the Dell XPS 13 9370 Intel, this does draw a lot more power. So much so that I chose to use hibernation instead. Similarly, on the thermal side, this machine doesn't cool down as well as the Dell XPS Intel one. When idle, its temperatures are comparatively higher. Looking at powertop reports, it shows an average draw of around 10 watts even while idle. I'm hoping these are Linux integration issues and that Lenovo/AMD will improve things in the coming months. But given the user feedback on the ThinkPad T14 Gen1 thread, it may just be wishful thinking.

Linux The overall hardware support has been surprisingly decent. The MediaTek WiFi driver had some glitches, but with Linux 5.15+ things have considerably improved, and I hope the trend will continue with forthcoming Linux releases. My previous device-driver experience with MediaTek wasn't good, but I took the plunge, considering that in the worst case I'd have the option to swap the card. There's a lot of marketing about Linux + Intel, but I took a chance on Linux + AMD. There are glitches, but nothing so far that has been a dealbreaker. If anything, I wish Lenovo/AMD would seriously work on the power/thermal issues.

Migration Other than what's mentioned above, I haven't had any serious issues. I may have had some occasional hangs, but they've been so infrequent that I haven't spent time investigating them. Upon receiving the machine, my biggest requirement was how to switch my current workstation from the Dell XPS to the Lenovo ThinkPad. I've been using btrfs for some time now, and over the years I have built my own practice on how to structure it. Provisioning [sub]volumes based on use case is one example: keeping separate subvols for cache/temporary data, copy-on-write data, swap, etc. I wish these things could be simplified, either on the btrfs tooling side or in some different tool on top of it. Below is a filtered list of the subvols created over the years that were worth moving to the new machine (a short provisioning sketch follows the listing).
rrs@priyasi:~$ cat btrfs-volume-layout 
ID 550 gen 19166 top level 5 path home/foo/.cache
ID 552 gen 1522688 top level 5 path home/rrs
ID 553 gen 1522688 top level 552 path home/rrs/.cache
ID 555 gen 1426323 top level 552 path home/rrs/rrs-home/Libvirt-Images
ID 618 gen 1522672 top level 5 path var/spool/news
ID 634 gen 1522670 top level 5 path var/tmp
ID 635 gen 1522688 top level 5 path var/log
ID 639 gen 1522226 top level 5 path var/cache
ID 992 gen 1522670 top level 5 path disk-tmp
ID 1018 gen 1522688 top level 552 path home/rrs/NoBackup
ID 1196 gen 1522671 top level 5 path etc
ID 23721 gen 775692 top level 5 path swap
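
As a rough illustration of provisioning such a per-use-case layout, here is a minimal sketch; the paths mirror the listing above, while the chattr +C (nodatacow) and swapfile steps are assumptions about common btrfs practice rather than the exact recipe used here.
# Illustrative sketch of a per-use-case btrfs layout similar to the listing above.
# Paths mirror the listing; the nodatacow and swapfile steps are assumptions.
sudo btrfs subvolume create /home/rrs/.cache                 # throwaway cache data
sudo btrfs subvolume create /home/rrs/rrs-home/Libvirt-Images
sudo chattr +C /home/rrs/rrs-home/Libvirt-Images             # disable copy-on-write for VM images

# dedicated swap subvolume with a nodatacow swapfile inside it
sudo btrfs subvolume create /swap
sudo truncate -s 0 /swap/swapfile
sudo chattr +C /swap/swapfile                                # must be set while the file is empty
sudo dd if=/dev/zero of=/swap/swapfile bs=1M count=8192      # 8 GiB, fully allocated (no holes)
sudo chmod 600 /swap/swapfile
sudo mkswap /swap/swapfile
sudo swapon /swap/swapfile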

btrfs send/receive This did come in handy, but I sorely missed some features. Maybe they aren't there, or they are and I didn't look closely enough. Over the years, different attributes were set on different subvols, and over time I forget what attribute was added where. But from a migration point of view, it'd be nice to say "take this volume and take it with all its attributes". I didn't find that functionality in send/receive. There's get/set-property, which I noticed later, but by then it was late. So some sort of tooling, ideally something like btrfs migrate or some such, would be nicer. In the file system world, we already have nice tools to take care of similar scenarios: with rsync, for example, I can request it to carry all file attributes. Also, IIRC, send/receive works only on ro volumes. So there's more work one needs to do:
  1. create ro vol
  2. send
  3. receive
  4. don't forget to set the rw property
  5. And then somehow find out the other properties set on each individual subvol and [re]apply them on the destination (a rough sketch of this step follows the list)
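
A rough sketch of that last step, using the get/set-property interface mentioned above; the paths and the name=value parsing of btrfs property get output are illustrative assumptions, so treat it as a sketch rather than a tested recipe.
# Rough sketch of step 5: re-apply each source subvolume's properties on the
# received copy.  Paths and the name=value parsing are assumptions for illustration.
SRC=/media/user/TOSHIBA/Migrate
DST=/media/user/BTRFSROOT

for vol in "$DST"/*; do
    name=$(basename "$vol")
    [ -d "$SRC/$name" ] || continue                  # only subvols that exist on the source
    sudo btrfs property get -ts "$SRC/$name" | while IFS='=' read -r key value; do
        [ "$key" = "ro" ] && continue                # keep the received copies writable
        [ -n "$value" ] || continue                  # skip unset/empty properties
        sudo btrfs property set -ts "$vol" "$key" "$value"
    done
done
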
I wish all of this could be condensed into a sub-command. For my own sake, for this migration, the steps used were:
user@debian:~$ for volume in $(sudo btrfs sub list /media/user/TOSHIBA/Migrate/ | cut -d ' ' -f9 | grep -v ROOTVOL | grep -v etc | grep -v btrbk); do echo $volume; sudo btrfs send /media/user/TOSHIBA/$volume | sudo btrfs receive /media/user/BTRFSROOT/ ; done
Migrate/snapshot_disk-tmp
At subvol /media/user/TOSHIBA/Migrate/snapshot_disk-tmp
At subvol snapshot_disk-tmp
Migrate/snapshot-home_foo_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_foo_.cache
At subvol snapshot-home_foo_.cache
Migrate/snapshot-home_rrs
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs
At subvol snapshot-home_rrs
Migrate/snapshot-home_rrs_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_.cache
At subvol snapshot-home_rrs_.cache
ERROR: crc32 mismatch in command
Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol snapshot-home_rrs_rrs-home_Libvirt-Images
ERROR: crc32 mismatch in command
Migrate/snapshot-var_spool_news
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_spool_news
At subvol snapshot-var_spool_news
Migrate/snapshot-var_lib_machines
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_lib_machines
At subvol snapshot-var_lib_machines
Migrate/snapshot-var_lib_machines_DebianSidTemplate
..... snipped .....
And then, follow-up with:
user@debian:~$ for volume in $(sudo btrfs sub list /media/user/BTRFSROOT/ | cut -d ' ' -f9); do echo $volume; sudo btrfs property set -ts /media/user/BTRFSROOT/$volume ro false; done
ROOTVOL
ERROR: Could not open: No such file or directory
etc
snapshot_disk-tmp
snapshot-home_foo_.cache
snapshot-home_rrs
snapshot-var_spool_news
snapshot-var_lib_machines
snapshot-var_lib_machines_DebianSidTemplate
snapshot-var_lib_machines_DebSidArmhf
snapshot-var_lib_machines_DebianJessieTemplate
snapshot-var_tmp
snapshot-var_log
snapshot-var_cache
snapshot-disk-tmp
And then finally, renaming everything to match proper:
user@debian:/media/user/BTRFSROOT$ for x in snapshot*; do vol=$(echo $x | cut -d '-' -f2 | sed -e "s|_|/|g"); echo $x $vol; sudo mv $x $vol; done
snapshot-var_lib_machines var/lib/machines
snapshot-var_lib_machines_Apertisv2020ospackTargetARMHF var/lib/machines/Apertisv2020ospackTargetARMHF
snapshot-var_lib_machines_Apertisv2021ospackTargetARM64 var/lib/machines/Apertisv2021ospackTargetARM64
snapshot-var_lib_machines_Apertisv2022dev3ospackTargetARMHF var/lib/machines/Apertisv2022dev3ospackTargetARMHF
snapshot-var_lib_machines_BusterArm64 var/lib/machines/BusterArm64
snapshot-var_lib_machines_DebianBusterTemplate var/lib/machines/DebianBusterTemplate
snapshot-var_lib_machines_DebianJessieTemplate var/lib/machines/DebianJessieTemplate
snapshot-var_lib_machines_DebianSidTemplate var/lib/machines/DebianSidTemplate
snapshot-var_lib_machines_DebianSidTemplate_var_lib_portables var/lib/machines/DebianSidTemplate/var/lib/portables
snapshot-var_lib_machines_DebSidArm64 var/lib/machines/DebSidArm64
snapshot-var_lib_machines_DebSidArmhf var/lib/machines/DebSidArmhf
snapshot-var_lib_machines_DebSidMips var/lib/machines/DebSidMips
snapshot-var_lib_machines_JenkinsApertis var/lib/machines/JenkinsApertis
snapshot-var_lib_machines_v2019 var/lib/machines/v2019
snapshot-var_lib_machines_v2019LinuxSupport var/lib/machines/v2019LinuxSupport
snapshot-var_lib_machines_v2020 var/lib/machines/v2020
snapshot-var_lib_machines_v2021dev3Slim var/lib/machines/v2021dev3Slim
snapshot-var_lib_machines_v2021dev3SlimTarget var/lib/machines/v2021dev3SlimTarget
snapshot-var_lib_machines_v2022dev2OspackMinimal var/lib/machines/v2022dev2OspackMinimal
snapshot-var_lib_portables var/lib/portables
snapshot-var_log var/log
snapshot-var_spool_news var/spool/news
snapshot-var_tmp var/tmp

snapper Entirely independent of this, but indirectly related: I use snapper as my snapshotting tool. It worked perfectly on my previous machine. While everything got migrated, the only thing that fell apart was snapper. It just wouldn't start/run properly. The funny thing is that I just removed the snapper configs and reinitialized with the exact same config again, and voilà, snapper was happy.

Conclusion That was pretty much it. With the above done, it was just a matter of also migrating /boot and chrooting in to install the boot loader. At some point I'd like to explore other boot options, but given that that is such a non-essential task, it is low on the list. The good part was that I booted into my new machine with my exact workstation setup as it was, all the way to the user cache and the desktop session. So that was nice. But I surely think there's room for a better migration experience here; if not directly as btrfs migrate, then maybe as an independent tool. The problem is that such a tool is going to be used once every few years, so I didn't find the motivation to write one. But this surely would be a good use case for the distribution vendors.
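
For what it's worth, the wished-for btrfs migrate could probably be approximated with a small wrapper along the lines sketched below; it is untested and illustrative (read-only snapshot, send | receive, restore rw), with assumed paths and no real error handling.
#!/usr/bin/env bash
# Untested, illustrative sketch of a "btrfs migrate"-style wrapper:
# read-only snapshot, send | receive, make the copy writable again, clean up.
set -euo pipefail

src="$1"        # existing subvolume, e.g. /home/rrs
dstroot="$2"    # mounted destination btrfs root, e.g. /media/user/BTRFSROOT

name=$(basename "$src")
snap="${src%/*}/.migrate-${name}"

sudo btrfs subvolume snapshot -r "$src" "$snap"                    # 1. read-only snapshot
sudo btrfs send "$snap" | sudo btrfs receive "$dstroot"            # 2.+3. send | receive
sudo btrfs property set -ts "$dstroot/.migrate-${name}" ro false   # 4. writable again
sudo mv "$dstroot/.migrate-${name}" "$dstroot/$name"               # give it its final name
sudo btrfs subvolume delete "$snap"                                # drop the temporary snapshot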
