
16 January 2022

Chris Lamb: Favourite films of 2021

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' novels that I enjoyed reading the most in 2021. But in the very last of my 2021 roundup posts, I'll be going over some of my favourite movies. (Saying that, these are perhaps less my 'favourite films' than the ones worth remarking on; after all, nobody needs to hear that The Godfather is a good movie.) It's probably helpful to mention that I took a self-directed course in film history in 2021, based around the first volume of Roger Ebert's The Great Movies. This collection of 100-odd movie essays aims to make a tour of the landmarks of the first century of cinema, and I watched all but a handful before the year was out. I am slowly making my way through volume two in 2022. This tome was tremendously useful, and not simply due to the background context that Ebert added to each film: it also brought me into contact with films I would hardly have encountered through other means. Would I have ever discovered the sly comedy of Trouble in Paradise (1932) or the touching proto-realism of L'Atalante (1934) any other way? It also helped me to 'get around' to watching films I may have put off watching forever: the influential Battleship Potemkin (1925), for instance, and the ur-epic Lawrence of Arabia (1962) spring to mind here. Choosing a 'worst' film is perhaps more difficult than choosing the best. There are first those that left me completely dry (Ready or Not, Written on the Wind, etc.), and those that were simply poorly executed. And there are those that failed to meet their own high opinions of themselves, such as the 'made for Reddit' Tenet (2020) or the inscrutable Vanilla Sky (2001), the latter being an almost perfect example of late-20th-century cultural exhaustion. But I must save my most severe judgement for those films where I took a visceral dislike to how their subjects were portrayed. 
The sexually problematic Sixteen Candles (1984) and the pseudo-Catholic vigilantism of The Boondock Saints (1999) both spring to mind here, the latter of which combines so many things I dislike into such a short running time I'd need an entire essay to adequately express how much I disliked it.

Dogtooth (2009) A father, a mother, a brother and two sisters live in a large and affluent house behind a very high wall and an always-locked gate. Only the father ever leaves the property, driving to the factory that he happens to own. Dogtooth goes far beyond any allusion to Josef Fritzl's cellar, though, as the children's education is a grotesque parody of home-schooling. Here, the parents deliberately teach their children the wrong meaning of words (e.g. a yellow flower is called a 'zombie'), all of which renders the outside world utterly meaningless and unreadable, and makes its very existence completely mystifying. It is this creepy strangeness within a 'regular' family unit in Dogtooth that is both socially and epistemically horrific, and I'll say nothing here of its sexual elements as well. Despite its cold, inscrutable and deadpan surreality, Dogtooth invites all manner of potential interpretations. Is this film about the artificiality of the nuclear family that the West insists is the benchmark of normality? Or is it, as I prefer to believe, something more visceral altogether: an allegory for the various forms of ontological violence wrought by fascism, as well as a sobering nod towards some of fascism's inherent appeals? (Perhaps it is both. In 1972, French poststructuralists Gilles Deleuze and Félix Guattari wrote Anti-Oedipus, which plays with the idea of the family unit as a metaphor for the authoritarian state.) The Greek-language Dogtooth, elegantly shot, thankfully provides no easy answers.

Holy Motors (2012) There is an infamous scene in Un Chien Andalou, the 1929 film collaboration between Luis Buñuel and famed artist Salvador Dalí. A young woman is cornered in her own apartment by a threatening man, and she reaches for a tennis racquet in self-defence. But the man suddenly picks up two nearby ropes and drags into the frame two large grand pianos... each laden with a dead donkey, a stone tablet, a pumpkin and a bewildered priest. This bizarre sketch serves as a better introduction to Leos Carax's Holy Motors than any elementary outline of its plot, which ostensibly follows 24 hours in the life of a man who must play a number of extremely diverse roles around Paris... all for no apparent reason. (And is he even a man?) Surrealism as an art movement gets a pretty bad rap these days, and perhaps justifiably so. But Holy Motors and Un Chien Andalou serve as a good reminder that surrealism can be, well, 'good, actually'. And if not quite high art, Holy Motors at least demonstrates that surrealism can still be unnerving and hilariously funny. Indeed, recalling the whimsy of the plot to a close friend, the tears of laughter came unbidden to my eyes once again. ("And then the limousines...!") Still, it is unclear how Holy Motors truly refreshes surrealism for the twenty-first century. Surrealism was, in part, a reaction to the mechanical and unfeeling brutality of World War I and ultimately sought to release the creative potential of the unconscious mind. Holy Motors cannot be responding to another continental conflagration, and so it appears to me to be some kind of commentary on the roles we exhibit in an era of 'post-postmodernity': a sketch on our age of performative authenticity, perhaps, or an idle doodle on the nature and psychosocial function of work. Or perhaps not. 
After all, this film was produced in a time that offers the near-universal availability of mind-altering substances, and this certainly changes the context in which this film was both created and, how can I put it, intended to be watched.

Manchester by the Sea (2016) An absolutely devastating portrayal of a character who is unable to forgive himself and is hesitant to engage with anyone ever again. It features a near-ideal balance between portraying unrecoverable anguish and tender warmth, and is paradoxically grandiose in its subtle intimacy. The mechanics of life led me to watch this lying on a bed in a chain hotel by Heathrow Airport, and if this colourless circumstance blunted the film's emotional impact on me, I am probably thankful for it. Indeed, I find myself reduced in this review to fatuously recalling my favourite interactions instead of providing any real commentary. You could write a whole essay about one particular incident: its surfaces, subtexts and angles... all despite nothing of any substance ever being communicated. Truly stunning.

McCabe & Mrs. Miller (1971) Roger Ebert called this movie "one of the saddest films I have ever seen, filled with a yearning for love and home that will not ever come." But whilst it is difficult to disagree with his sentiment, Ebert's choice of 'sad' is somehow not quite the right word. Indeed, I've long regretted that our dictionaries don't have more nuanced blends of tragedy and sadness; perhaps the Ancient Greeks can loan us some. Nevertheless, the plot of this film is of a gambler and a prostitute who become business partners in a new and remote mining town called Presbyterian Church. However, as their town and enterprise booms, it comes to the attention of a large mining corporation who want to bully or buy their way into the action. What makes this film stand out is not the plot itself, however, but its mood and tone: the town and its inhabitants seem to be thrown together out of raw lumber, covered alternately in mud or frozen ice, and their days (and their personalities) are both short and dark in equal measure. As a brief aside, if you haven't seen a Robert Altman film before, this has all the trappings of being a good introduction. As Ebert went on to observe: "This is not the kind of movie where the characters are introduced. They are all already here." Furthermore, we can see some of Altman's trademark conversations that overlap, a superb handling of ensemble casts, and a quietly subversive view of the tyranny of 'genre'... and the latter in a time when the appetite for revisionist portrayals of the West was not very strong. All of these 'Altmanian' trademarks can be ordered in much stronger measures in his later films: in particular, his comedy-drama Nashville (1975) has 24 main characters, and my jejune interpretation of Gosford Park (2001) is that it is purposefully designed to poke fun at those who take a reductionist view of 'genre', or at least at the audience's expectations. 
(In this case, an Edwardian-era English murder mystery in the style of Agatha Christie, but where no real murder or detection really takes place.) On the other hand, McCabe & Mrs. Miller is actually a poor introduction to Altman. The story is told in a suitably deliberate and slow tempo, and the two stars of the film are shown thoroughly defrocked of any 'star status', in both the visual and moral dimensions. All of these traits are, however, this film's strengths, adding up to a credible, fascinating and riveting portrayal of the old West.

Detour (1945) Detour was filmed in less than a week, and it's difficult to decide out of the actors and the screenplay which is its weakest point... Yet it still somehow seemed to drag me in. The plot revolves around luckless Al who is hitchhiking to California. Al gets a lift from a man called Haskell who quickly falls down dead from a heart attack. Al quickly buries the body and takes Haskell's money, car and identification, believing that the police will think Al murdered him. An unstable element is soon introduced in the guise of Vera, who, through a set of coincidences that stretches credulity, knows that this 'new' Haskell (i.e. Al pretending to be him) is not who he seems. Vera then attaches herself to Al in order to blackmail him, and the world starts to spin out of his control. It must be understood that none of this is executed very well. Rather, what makes Detour so interesting to watch is that its 'errors' lend a distinctively creepy and unnatural hue to the film. Indeed, in the early twentieth century, Sigmund Freud used the word unheimlich to describe the experience of something that is not simply mysterious, but something creepy in a strangely familiar way. This is almost the perfect description of watching Detour: its eerie nature means that we are not only frequently second-guessing where the film is going, but are often uncertain whether we are watching the usual objective perspective offered by cinema. In particular, are all the ham-fisted segues, stilted dialogue and inscrutable character motivations actually a product of Al inventing a story for the viewer? Did he murder Haskell after all, despite the film 'showing' us that Haskell died of natural causes? In other words, are we watching what Al wants us to believe? 
Regardless of the answers to these questions, the film succeeds precisely because of its accidental or inadvertent choices, so it is an implicit reminder that seeking the director's original intention in any piece of art is a complete mirage. Detour is certainly not a good film, but it just might be a great one. (It is a short film too, and, out of copyright, it is available online for free.)

Safe (1995) Safe is a subtly disturbing film about an upper-middle-class housewife who begins to complain about vague symptoms of illness. Initially claiming that she doesn't feel right, Carol starts to have unexplained headaches, a dry cough and nosebleeds, and eventually begins to have trouble breathing. Carol's family doctor treats her concerns with little care, and suggests to her husband that she see a psychiatrist. Yet Carol's episodes soon escalate. For example, as a 'homemaker' and with nothing else to occupy her, Carol orders a new couch for a party. But when the store delivers the wrong one (although it is not altogether clear that they did), Carol has a near breakdown. Unsure where to turn, an 'allergist' tells Carol she has "Environmental Illness," and so Carol eventually checks herself into a new-age commune filled with alternative therapies. On the surface, Safe is thus a film about the increasing amount of pesticides and chemicals in our lives, something that was clearly felt far more viscerally in the 1990s. But it is also a film about how a lack of genuine healthcare for women must be seen as a critical factor in the rise of crank medicine. (Indeed, it made for something of an uncomfortable watch during the coronavirus lockdown.) More interestingly, however, Safe gently-yet-critically examines the psychosocial causes that may be aggravating Carol's illnesses, including her vacant marriage, her hollow friends and the 'empty calorie' stimulus of suburbia. None of this should be especially new to anyone: the gendered Victorian term 'hysterical' is often all but spoken throughout this film, and perhaps since the very invention of modern medicine, women's symptoms have regularly been minimised or outright dismissed. (Hilary Mantel's 2003 memoir, Giving Up the Ghost, is especially harrowing on this.) As I remarked in opening this review, the film is subtle in its messaging. 
Just to take one example from many, the sound of the cars is always just a fraction too loud: there's a scene where a group is eating dinner with a road in the background, and the total effect can be seen as representing the toxic fumes of modernity invading our social lives and health. I won't spoil the conclusion of this quietly devastating film, but don't expect a happy ending.

The Driver (1978) Critics grossly misunderstood The Driver when it was first released. They interpreted the cold and unemotional affect of the characters as a lack of developmental depth, instead of as representing their dissociation from the society around them. This reading was encouraged by the fact that the principal actors aren't given real names and are instead known simply by their archetypes: 'The Driver', 'The Detective', 'The Player' and so on. This sort of quasi-Jungian erudition is common in many crime films today (Reservoir Dogs, Kill Bill, Layer Cake, Fight Club), so the critics' misconceptions were entirely reasonable in 1978. The plot of The Driver involves the eponymous Driver, a noted getaway driver for robberies in Los Angeles. His exceptional talent has prevented him from being captured thus far, so the Detective attempts to catch the Driver by pardoning another gang if they help convict the Driver via a set-up robbery. To give himself an edge, however, The Driver seeks help from the femme fatale 'Player' in order to mislead the Detective. If this all sounds eerily familiar, you would not be far wrong. The film was essentially remade by Nicolas Winding Refn as Drive (2011) and in Edgar Wright's 2017 Baby Driver. Yet The Driver offers something that these neon-noir variants do not. In particular, the car chases around Los Angeles are some of the most captivating I've seen: they aren't thrilling in the sense of tyre squeals, explosions and flying boxes, but rather the vehicles come across like wild animals hunting one another. This is especially so when the police are hunting The Driver, which feels less like a low-stakes game of cat and mouse than a pack of feral animals working together: a gang who will tear apart their prey if they find him. 
In contrast to the undercar neon glow of the Fast & Furious franchise, the urban-realist backdrop of The Driver's LA metropolis contributes to a sincere feeling of artistic fidelity as well. To be sure, most of this is present in the truly-excellent Drive, where the chase scenes do really communicate a credible sense of stakes. But the substitution of The Driver's grit with Drive's soft neon tilts it slightly towards that common affliction of crime movies: style over substance. Nevertheless, I can highly recommend watching The Driver and Drive together, as it can tell you a lot about the disconnected socioeconomic practices of the 1980s compared to the 2010s. More than that, however, the pseudo-1980s synthwave soundtrack of Drive captures something crucial to analysing the world of today. In particular, these 'sounds from the past filtered through the present' bring to mind the increasing role of nostalgia for lost futures in the culture of today, where temporality and pop culture references are almost-exclusively citational and commemorational.

The Souvenir (2019) The ostensible outline of this quietly understated film follows a shy but ambitious film student who falls into an emotionally fraught relationship with a charismatic but untrustworthy older man. But that doesn't quite cover the plot at all, for not only is The Souvenir a film about a young artist who is inspired, derailed and ultimately strengthened by a toxic relationship, it is also partly a coming-of-age drama, a subtle portrait of class and, finally, a film about the making of a film. Still, one of the geniuses of this truly heartbreaking movie is that none of these many elements crowds out the other. It never, ever feels rushed. Indeed, there are many scenes where the camera simply 'sits there' and quietly observes what is going on. Other films might smother themselves through references to 18th-century oil paintings, but The Souvenir somehow evades this too. And there's a certain ring of credibility to the story as well, no doubt in part due to the fact it is based on director Joanna Hogg's own experiences at film school. A beautifully observed and multi-layered film; I'll be happy if the sequel is one-half as good.

The Wrestler (2008) Randy 'The Ram' Robinson is long past his prime, but he is still rarin' to go in the local pro-wrestling circuit. Yet after a brutal beating that seriously threatens his health, Randy hangs up his tights and pursues a serious relationship... and even tries to reconnect with his estranged daughter. But Randy can't resist the lure of the ring, and readies himself for a comeback. The stage is thus set for Darren Aronofsky's The Wrestler, which is essentially about what drives Randy back to the ring. To be sure, Randy derives much of his money from wrestling, as well as his 'fitness', self-image, self-esteem and self-worth. Oh, it's no use insisting that wrestling is fake, for the sport is, needless to say, Randy's identity; it's not for nothing that this film is called The Wrestler. In a number of ways, The Sound of Metal (2019) is both a reaction to (and a quiet remake of) The Wrestler, if only because both movies utilise 'cool' professions to explore such questions of identity. But perhaps simply when The Wrestler was produced makes it the superior film. Indeed, the role of time feels very important for The Wrestler. In the first instance, time is clearly taking its toll on Randy's body, but I felt it more strongly in the sense that this was very much a pre-2008 film, released on the cliff-edge of the global financial crisis and the concomitant precarity of the 2010s. Indeed, it is curious to consider that you couldn't make The Wrestler today, although not because the relationship to work has changed in any fundamental way. (Indeed, isn't it somewhat depressing to realise that, even since the start of the pandemic and setting the 'work from home' trend to one side, we now require even more people to wreck their bodies and mental health to cover their bills?) No, what I mean to say here is that, post-2016, you cannot portray wrestling on-screen without, how can I put it, unwelcome connotations. All of which then reminds me of Minari's notorious red hat... 
But I digress. The Wrestler is a grittily stark, darkly humorous look into the life of a desperate man and a sorrowful world, all through one tragic profession.

Thief (1981) Frank is an expert professional safecracker and specialises in high-profile diamond heists. He plans to use his ill-gotten gains to retire from crime and build a life for himself with a wife and kids, so he signs on with a top gangster for one last big score. This, of course, could be the plot to any number of heist movies, but Thief does something different. Similar to The Wrestler and The Driver (see above) and a number of other films that I watched this year, Thief seems to be saying something about our relationship to work and family in modernity and postmodernity. Indeed, the 'heist film', we are told, is an understudied genre, but part of the pleasure of watching these films is said to arise from how they portray our desired relationship to work. In particular, Frank's desire to pull off that last big job feels less about the money it would bring him than a displacement from (or proxy for) fulfilling some deep-down desire to have a family or indeed any relationship at all. Because in theory, of course, Frank could enter into a fulfilling long-term relationship right away, without stealing millions of dollars in diamonds... but that's kinda the entire point: Frank needing just one more theft is an excuse to not pursue a relationship and put it off indefinitely in favour of 'work'. (And being Federal crimes, it also means Frank cannot put down meaningful roots in a community.) All this is communicated extremely subtly in the justly-lauded lowkey diner scene, by far the best scene in the movie. The visual aesthetic of Thief is as if you set The Warriors (1979) in a similarly-filthy Chicago, with the Xenophon-inspired plot of The Warriors replaced with an almost deliberate lack of plot development... and the allure of The Warriors' fantastical criminal gangs (with their alluringly well-defined social identities) substituted by a bunch of amoral individuals with no solidarity beyond the immediate moment. A tale of our time, perhaps. 
I should warn you that the ending of Thief is famously weak, but this is a gritty, intelligent and strangely credible heist movie before you get there.

Uncut Gems (2019) The most exhausting film I've seen in years; the cinematic equivalent of four cups of double espresso, I didn't even bother trying to sleep after downing Uncut Gems late one night. Directed by the Safdie brothers, it often felt like I was watching two films that had been made at the same time. (Or do I mean two films at 2X speed?) No, whatever clumsy metaphor you choose to adopt, the unavoidable effect of this film's finely-tuned chaos is an uncompromising and anxiety-inducing piece of cinema. The plot follows Howard as a man lost to his countless vices, mostly gambling with a significant side hustle in adultery, but you get the distinct impression he would be happy with anything that will give him another high. A true junkie's junkie, you might say. You know right from the beginning it's going to end in some kind of disaster; the only question remaining is precisely how and what. Portrayed by an (almost unrecognisable) Adam Sandler, there's an uncanny sense of distance in the emotional chasm between 'Sandler-as-junkie' and 'Sandler-as-regular-star-of-goofy-comedies'. Yet instead of being distracting and reducing the film's affect, this possibly-deliberate intertextuality somehow adds to the masterfully-controlled mayhem. My heart races just at the memory. Oof.

Woman in the Dunes (1964) I ended up watching three films that feature sand this year: Denis Villeneuve's Dune (2021), Lawrence of Arabia (1962) and Woman in the Dunes. But it is this last 1964 film by Hiroshi Teshigahara that will stick in my mind in the years to come. Sure, there is none of the Medicean intrigue of Dune or the Super Panavision-70 of Lawrence of Arabia (or its quasi-orientalist score, itself likely stolen from Anton Bruckner's 6th Symphony), but Woman in the Dunes doesn't have to assert its confidence so boldly, and it reveals the enormity of its plot slowly and deliberately instead. Woman in the Dunes never rushes to get to the film's central dilemma, and it uncovers its terror in little hints and insights, all whilst establishing the daily rhythm of life. Woman in the Dunes has something of the same uncanny horror as Dogtooth (see above), as well as its broad range of potential interpretations. Both films permit a wide array of readings, without resorting to being deliberately obscurantist or just plain random; it is perhaps for this reason that I enjoyed them so much. It is true that asking 'So what does the sand mean?' sounds tediously sophomoric shorn of any context, but it somehow applies to this thoughtfully self-contained piece of cinema.

A Quiet Place (2018) Although A Quiet Place was not actually one of the best films I saw this year, I'm including it here as it is certainly one of the better 'mainstream' Hollywood franchises I came across. Not only is the film very ably constructed, it engages on a visceral level: it is rare that I can empathise with the peril of conventional horror movies (I perhaps prefer to focus on their cultural and political aesthetics), but I did here. The conceit of this particular post-apocalyptic world is that a family is forced to live in almost complete silence while hiding from creatures that hunt by sound alone. Still, A Quiet Place engages on an intellectual level too, and this probably works in tandem with the pure 'horrific' elements and makes it stick in your mind. In particular, and to my mind at least, A Quiet Place is a deeply conservative American film below the surface: it exalts the family structure and a certain kind of sacrifice for your family. (The music often had a passacaglia-like strain too, forming a tombeau for America.) Moreover, you survive in this dystopia by staying quiet, that is to say, by staying stoic, suggesting that in the wake of any conflict that might beset the world, the best thing to do is to keep quiet. Even communicating with your loved ones can be deadly to both of you, so do not emote, acquiesce quietly to your fate, and don't, whatever you do, speak up. (Or join a union.) I could go on, but A Quiet Place is more than this. It's taut and brief, and despite cinema being an increasingly visual medium, it encourages its audience to develop a new relationship with sound.

Wouter Verhelst: Backing up my home server with Bacula and Amazon Storage Gateway

I have a home server. Initially conceived and sized so I could digitize my (rather sizeable) DVD collection, I started using it for other things; I added a few play VMs on it, started using it as a destination for the deja-dup-based backups of my laptop and the time machine-based ones of the various macs in the house, and used it as the primary location of all the photos I've taken with my cameras over the years (currently taking up somewhere around 500G) as well as those that were taken at our wedding (another 100G). To add to that, I've copied the data that my wife had on various older laptops and external hard drives onto this home server as well, so that we don't lose the data should something happen to one or more of these bits of older hardware. Needless to say, the server was running full, so a few months ago I replaced the 4x2T hard drives that I originally put in the server with 4x6T ones, and there was much rejoicing. But then I started considering what I was doing. Originally, the intent was for the server to contain DVD rips of my collection; if I were to lose the server, I could always re-rip the collection and recover that way (unless something happened that caused me to lose both at the same time, of course, but I consider that sufficiently unlikely that I don't want to worry about it). Much of the new data on the server, however, cannot be recovered like that; if the server dies, I lose my photos forever, with no way of recovering them. Obviously that can't be okay. So I started looking at options to create backups of my data, preferably in ways that make it easily doable for me to automate the backups -- because backups that have to be initiated are backups that will be forgotten, and backups that are forgotten are backups that don't exist. So let's not try that. 
When I was still self-employed in Belgium and running a consultancy business, I sold a number of lower-end tape libraries for which I then configured bacula, and I preferred a solution that would be similar to that without costing an arm and a leg. I did have a look at a few second-hand tape libraries, but even second hand these are still way outside what I can budget for this kind of thing, so that was out too. After looking at a few solutions that seemed very hackish and would require quite a bit of handholding (which I don't think is a good idea), I remembered that a few years ago, I had a look at the Amazon Storage Gateway for a customer. This gateway provides a virtual tape library with 10 drives and 3200 slots (half of which are import/export slots) over iSCSI. The idea is that you install the VM on a local machine, you connect it to your Amazon account, you connect your backup software to it over iSCSI, and then it syncs the data that you write to Amazon S3, with the ability to archive data to S3 Glacier or S3 Glacier Deep Archive. I didn't end up using it at the time because it required a VMWare virtualization infrastructure (which I'm not interested in), but I found out that these days, they also provide VM images for Linux KVM-based virtual machines (amongst others), so that changes things significantly. After making a few calculations, I figured out that for the amount of data that I would need to back up, I would require a monthly budget of somewhere between 10 and 20 USD if the bulk of the data would be on S3 Glacier Deep Archive. This is well within my means, so I gave it a try. The VM's technical requirements state that you need to assign four vCPUs and 16GiB of RAM, which just so happens to be the exact amount of RAM and CPU that my physical home server has. Obviously we can't do that. 
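As a rough sketch of that budget calculation, the storage component alone can be estimated as follows. The per-GB rate used here is my assumption (roughly the published S3 Glacier Deep Archive price of about $0.00099 per GB-month in us-east-1), and it ignores request, retrieval, upload-buffer and gateway charges:

```python
# Back-of-the-envelope S3 Glacier Deep Archive storage cost.
# The rate below is an assumption (approximate us-east-1 pricing) and
# excludes request, retrieval and Storage Gateway charges.
DEEP_ARCHIVE_USD_PER_GB_MONTH = 0.00099

def monthly_storage_cost(gigabytes: float) -> float:
    """Estimated monthly storage-only cost in USD."""
    return gigabytes * DEEP_ARCHIVE_USD_PER_GB_MONTH

# Roughly 2T of backed-up data, as described in the post:
print(f"{monthly_storage_cost(2000):.2f} USD/month")
```

At that rate, ~2T of data costs about 2 USD/month to store, so presumably most of the 10 to 20 USD monthly budget covers everything the gateway does around the raw storage.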
I tried getting away with 4GiB and 2 vCPUs, but that didn't work; the backup failed after about 500G out of 2T had been written, due to the VM running out of resources. On the VM's console I found complaints that it required more memory, and I saw it mention something in the vicinity of 7GiB instead, so I decided to try again, this time with 8GiB of RAM rather than 4. This worked, and the backup was successful. As far as bacula is concerned, the tape library is just a (very big...) normal tape library, and I got data throughput of about 30M/s while the VM's upload buffer hadn't run full yet, with things slowing down to pretty much my Internet line speed when it had. With those speeds, Bacula finished the backup successfully in "1 day 6 hours 43 mins 45 secs", although the storage gateway was still uploading things to S3 Glacier for a few hours after that. All in all, this seems like a viable backup solution for large(r) amounts of data, although I haven't yet tried to perform a restore.
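Those figures hang together, too. Assuming (my assumption, not stated in the post) that roughly 500G was written at the full 30M/s before the upload buffer ran full, and that the remaining 1.5T was throttled to the Internet uplink, one can back out the implied line speed from the reported duration:

```python
# Back out the implied uplink speed from the figures reported above.
# Assumptions: ~500G written at ~30 MB/s before the upload buffer
# filled; the remaining ~1.5T limited by the Internet line speed.
total_bytes = 2e12   # ~2T backed up
fast_bytes = 500e9   # written before the buffer ran full
fast_rate = 30e6     # ~30 MB/s initial throughput

# "1 day 6 hours 43 mins 45 secs" as reported by Bacula:
total_secs = 1 * 86400 + 6 * 3600 + 43 * 60 + 45

fast_secs = fast_bytes / fast_rate
line_speed = (total_bytes - fast_bytes) / (total_secs - fast_secs)
print(f"implied line speed: {line_speed / 1e6:.1f} MB/s")
```

That works out to about 16 MB/s, i.e. roughly a 130 Mbit/s uplink, which is a plausible residential line speed.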

Russ Allbery: Review: The Brightest Fell

Review: The Brightest Fell, by Seanan McGuire
Series: October Daye #11
Publisher: DAW
Copyright: 2017
ISBN: 0-698-18352-5
Format: Kindle
Pages: 353
This is the eleventh book in the October Daye urban fantasy series, not counting various novellas and side stories. You really cannot start here, particularly given how many ties this book has to the rest of the series. I would like to claim there's some sort of plan or strategy in how I read long series, but there are just a lot of books to read and then I get distracted and three years have gone by. The advantage of those pauses, at least for writing reviews, is that I return to the series with fresh eyes and more points of comparison. My first thought this time around was "oh, these books aren't that well written, are they," followed shortly thereafter by staying up past midnight reading just one more chapter. Plot summaries are essentially impossible this deep into a series, when even the names of the involved characters can be a bit of a spoiler. What I can say is that we finally get the long-awaited confrontation between Toby and her mother, although it comes in an unexpected (and unsatisfying) form. This fills in a few of the gaps in Toby's childhood, although there's not much there we didn't already know. It fills in considerably more details about the rest of Toby's family, most notably her pure-blood sister. The writing is indeed not great. This series is showing some of the signs I've seen in other authors (Mercedes Lackey, for instance) who wrote too many books per year to do each of them justice. I have complained before about McGuire's tendency to reuse the same basic plot structure, and this instance seemed particularly egregious. The book opens with Toby enjoying herself and her found family, feeling like they can finally relax. Then something horrible happens to people she cares about, forcing her to go solve the problem. This in theory requires her to work out some sort of puzzle, but in practice is fairly linear and obvious because, although I love Toby as a character, she can't puzzle her way out of a wet sack. 
Everything is (mostly) fixed in the end, but there's a high cost to pay, and everyone ends the book with more trauma. The best books of this series are the ones where McGuire manages to break with this formula. This is not one of them. The plot is literally on magical rails, since The Brightest Fell skips even pretending that Toby is an actual detective (although it establishes that she's apparently still working as one in the human world, a detail that I find baffling) and gives her a plot compass that tells her where to go. I don't really mind this since I read this series for emotional catharsis rather than Toby's ingenuity, but alas that's mostly missing here as well. There is a resolution of sorts, but it's the partial and conditional kind that doesn't include awful people getting their just deserts. This is also not a good series entry for world-building. McGuire has apparently been dropping hints for this plot back at least as far as Ashes of Honor. I like that sort of long-term texture to series like this, but the unfortunate impact on this book is a lot of revisiting of previous settings and very little in the way of new world-building. The bit with the pixies was very good; I wanted more of that, not the trip to an Ashes of Honor setting to pick up a loose end, or yet another significant scene in Borderland Books. As an aside, I wish authors would not put real people into their books as characters, even when it's with permission as I'm sure it was here. It's understandable to write a prominent local business into a story as part of the local color (although even then I would rather it not be a significant setting in the story), but having the actual owner and staff show up, even in brief cameos, feels creepy and weird to me. It also comes with some serious risks because real people are not characters under the author's control. (All the content warnings for that link, which is a news story from three years after this book was published.) 
So, with all those complaints, why did I stay up late reading just one more chapter? Part of the answer is that McGuire writes very grabby books, at least for me. Toby is a full-speed-ahead character who is constantly making things happen, and although the writing in this book had more than the usual amount of throat-clearing and rehashing of the same internal monologue, the plot still moved along at a reasonable clip. Another part of the answer is that I am all-in on these characters: I like them, I want them to be happy, and I want to know what's going to happen next. It helps that McGuire has slowly added characters over the course of a long series and given most of them a chance to shine. It helps even more that I like all of them as people, and I like the style of banter that McGuire writes. Also, significant screen time for the Luidaeg is never a bad thing.

I think this was the weakest entry in the series in a while. It wrapped up some loose ends that I wasn't that interested in wrapping up, introduced a new conflict that it doesn't resolve, spent a bunch of time with a highly unpleasant character I didn't enjoy reading about, didn't break much new world-building ground, and needed way more faerie court politics. But some of the banter was excellent, the pixies and the Luidaeg were great, and I still care a lot about these characters. I am definitely still reading.

Followed by Night and Silence.

Continuing a pattern from Once Broken Faith, the ebook version of The Brightest Fell includes a bonus novella. (I'm not sure if it's also present in the print version.) "Of Things Unknown": As is usual for the short fiction in this series, this is a side story from the perspective of someone other than Toby. In this case, that's April O'Leary, first introduced all the way back in A Local Habitation, and the novella focuses on loose ends from that novel. Loose ends are apparently the theme of this book. This was... fine.
I like April, I enjoyed reading a story from her perspective, and I'm always curious to see how Toby looks from the outside. I thought the plot was strained and the resolution a bit too easy and painless, and I was not entirely convinced by April's internal thought processes. It felt like McGuire left some potential for greater plot complications on the table here, and I found it hard to shake the impression that this story was patching an error that McGuire felt she'd made in the much earlier novel. But it was nice to have an unambiguously happy ending after the more conditional ending of the main story. (6) Rating: 6 out of 10

14 January 2022

Norbert Preining: Future of my packages in Debian

After having been (again) demoted (timed perfectly to my round birthday!) based on flimsy arguments, I have been forced to rethink the level of contribution I want to make to Debian. Considering in particular that I have switched my main desktop to dual-boot into Arch Linux (all on the same btrfs filesystem with subvolumes, great!) and have now run Arch exclusively for several days, I think it is time to review the packages I am somehow responsible for (full list of packages). After about 20 years in Debian, it is time to send off quite a lot of stuff that has accumulated over time.

KDE/Plasma, frameworks, Gears, and related packages: All these packages are group-maintained, so there is not much to worry about. Furthermore, a few new faces have joined the team and are actively working on the packages, although mostly on Qt6. I guess that with me not taking action, frameworks, gears, and plasma will fall behind over time (frameworks: Debian 5.88 versus current 5.90, gears: Debian 21.08 versus current 21.12, plasma up to date at the moment). My packages on OBS will probably also go stale over time. Using Arch nowadays, I lack the development tools necessary to build Debian packages, and above all, the motivation. I am sorry for all those who have come to rely on my OBS packages over the last years, bringing a modern and up-to-date KDE/Plasma to Debian/stable; please direct your complaints at the responsible entities in Debian.

Cinnamon: As I have already written here, I have reduced my involvement quite a lot, and nowadays Fabio and Joshua are doing the work. But neither is even a DM (AFAIR), and I am the only one doing uploads (I got DM upload permissions for it). I am not sure how long I will continue doing this, which means that in the near future Cinnamon will also go stale.

TeX related packages: Hilmar has DM upload permissions and is very actively caring for the packages, so I don't see any source of concern here.
New packages will need to find a new uploader, though. With myself also being part of upstream, I can surely help out with difficult problems in the future.

Calibre and related packages: Yokota-san (another DM I have sponsored) has DM upload permissions and is very actively caring for the packages, so here too there is not much concern.

Onedrive: This is already badly outdated, and I recommend using the OBS builds, which are current and provide binaries for various versions of Ubuntu and Debian.

ROCm: Fortunately, a new generation of developers has taken over maintenance here and everything is going smoothly, much better than I could have done; yay to that!

Qalculate related packages: These are group-maintained, but unfortunately nobody but me has touched the repos for quite some time. I fear the packages will go stale rather soon.

isync/mbsync: I recently salvaged this package, and use it daily, but I guess it needs to be orphaned sooner or later.

CafeOBJ: While I am also part of upstream here, I guess it will be orphaned.

Julia: Julia is group-maintained, but unfortunately nobody but me has touched the repo for quite some time, and we are already far behind the normal releases (and julia got removed from testing). It will go stale/orphaned; I recommend installing upstream binaries.

python-mechanize: Another package that is group-maintained in the Python team, but with only me as uploader I guess it will go stale and effectively be orphaned soon.

xxhash: Has already been orphaned.

qpdfview: No upstream development, so not much to do, but it will be orphaned, too.

Dirk Eddelbuettel: Rcpp 1.0.8: Updated, Strict Headers

rcpp logo The Rcpp team is thrilled to share the news of the newest release 1.0.8 of Rcpp, which hit CRAN today and has already been uploaded to Debian as well. Windows and macOS builds should appear at CRAN in the next few days. This release continues the six-month cycle started with release 1.0.5 in July 2020. As a reminder, interim dev or rc releases will always be available in the Rcpp drat repo; this cycle there were once again seven (!!) times two, as we also tested the modified header (more below). These rolling releases tend to work just as well, and are also fully tested against all reverse dependencies. Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2478 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 242 in BioConductor.

This release finally brings a change we have worked on quite a bit over the last few months. The idea of enforcing the setting of STRICT_R_HEADERS was proposed years ago, in 2016 and again in 2018. But making such a change against a widely-deployed code base has repercussions, and we were not ready then. Last April, this was revisited in issue #1158. Over the course of numerous lengthy runs of tests of a changed Rcpp package against (essentially) all reverse dependencies (i.e. packages which use Rcpp), we identified ninety-four packages in total which needed a change. We provided either a patch we emailed, or a GitHub pull request, to all ninety-four. And we are happy to say that eighty cases were resolved via a new CRAN upload, with seven more having merged the pull request but not yet uploaded. Hence, we could make the case to CRAN (who were always CC'ed on the monthly nag emails we sent to maintainers of packages needing a change) that an upload was warranted. And after a brief period for their checks and inspection, our January 11 release of Rcpp 1.0.8 arrived on CRAN on January 13. So with that, a big and heartfelt Thank You! to all eighty maintainers for updating their packages to permit this change at the Rcpp end, to CRAN for the extra checking, and to everybody else whom I bugged with the numerous emails and updates to the seemingly never-ending issue #1158. We all got this done, and that is a Good Thing (TM).

Other than the aforementioned change, which now automatically sets STRICT_R_HEADERS (unless one opts out, which one can), a number of nice pull requests by a number of contributors are included in this release. The full list of details follows.

Changes in Rcpp release version 1.0.8 (2022-01-11)
  • Changes in Rcpp API:
    • STRICT_R_HEADERS is now enabled by default, see extensive discussion in #1158 closing #898.
    • A new #define allows default setting of finalizer calls for external pointers (Iñaki in #1180 closing #1108).
    • Rcpp:::CxxFlags() now quotes the include path generated, (Kevin in #1189 closing #1188).
    • New header files Rcpp/Light, Rcpp/Lighter, Rcpp/Lightest and default Rcpp/Rcpp for fine-grained access to features (and compilation time) (Dirk #1191 addressing #1168).
  • Changes in Rcpp Attributes:
    • A new option signature allows customization of function signatures (Travers Ching in #1184 and #1187 fixing #1182)
  • Changes in Rcpp Documentation:
    • The Rcpp FAQ has a new entry on how not to grow a vector (Dirk in #1167).
    • Some long-spurious calls to RNGScope have been removed from examples (Dirk in #1173 closing #1172).
    • DOI references in the bibtex files have been updated per JSS request (Dirk in #1186).
  • Changes in Rcpp Deployment:
    • Some continuous integration components have been updated (Dirk in #1174, #1181, and #1190).

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2822 previous questions. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible Builds (diffoscope): diffoscope 200 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 200. This version includes the following changes:
* Even if a Sphinx .inv inventory file is labelled "The remainder of this
  file is compressed using zlib", it might not actually be. In this case,
  don't traceback, and simply return the original content.
  (Closes: reproducible-builds/diffoscope#299)
* Update "X has been modified after NT_GNU_BUILD_ID has been applied" message
  to, for instance, not duplicating the full filename in the primary
  diffoscope's output.
You can find out more by visiting the project homepage.
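The zlib fallback described in the first change above can be sketched as follows. This is a behavioural sketch in Python, not diffoscope's actual code, and the function name is illustrative:

```python
import zlib

def decompress_or_passthrough(data: bytes) -> bytes:
    # A Sphinx .inv file may *claim* the remainder is zlib-compressed;
    # if decompression fails, hand back the original bytes unchanged
    # instead of letting a traceback propagate.
    try:
        return zlib.decompress(data)
    except zlib.error:
        return data

print(decompress_or_passthrough(zlib.compress(b"inventory")))  # b'inventory'
print(decompress_or_passthrough(b"not actually zlib"))         # b'not actually zlib'
```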

11 January 2022

Ritesh Raj Sarraf: ThinkPad AMD Debian

After a hiatus of 6 years, it was nice to be back with the ThinkPad. This blog post briefly touches upon my impressions with the current generation ThinkPad T14 Gen2 AMD variant.
ThinkPad T14 Gen2 AMD
ThinkPad T14 Gen2 AMD

Lenovo It took 8 weeks to get my hands on the machine. Given the pandemic, restrictions and uncertainties, I'm not sure if I should call it an on-time delivery. This was a CTO (Customise-To-Order) machine, so it was nice to get rid of things I didn't really care about or use much. On the other side, it also meant I could save some power. It also came comparatively cheaper overall.
  • No fingerprint reader
  • No Touch screen
There are still areas where Lenovo could improve, or at least frustrate customers less. I don't understand why a company would provide a full customization option on its portal while at the same time not providing an explicit option to choose the make/model of the hardware one wants. Lenovo deliberately chooses not to show/specify which WiFi adapter one gets. So, as I suspected, I ended up with a MEDIATEK Corp. Device 7961 WiFi adapter.

AMD For the first time in my computing life, I'm now using AMD at the core. I was pretty frustrated with annoying Intel graphics bugs, so I decided to take the plunge and give AMD/ATI a shot, knowing that the radeon driver has decent support. So far, on the graphics side of things, I'm glad that things look bright. The stock in-kernel radeon driver has been working perfectly for my needs, and I haven't had to tinker even once in my 30 days of use. On overall system performance, I have not done any benchmarks, nor do I want to. But on the whole, system performance is smooth.

Power/Thermal This is where things need more improvement on the AMD side. This AMD laptop draws a terrible amount of power in suspend mode. And it isn't just this machine; the previous T14 Gen1 has similar problems. I'm not sure if this is a generic ThinkPad problem or an AMD-specific problem. But coming from the Dell XPS 13 9370 (Intel), this draws a lot more power. So much that I chose to use hibernation instead. Similarly, on the thermal side, this machine doesn't cool down as well as the Dell XPS Intel one. Idle, its temperatures are comparatively higher. Looking at powertop reports, it shows an average draw of 10 watts even while idle. I'm hoping these are Linux integration issues and that Lenovo/AMD will improve things in the coming months. But given the user feedback on the ThinkPad T14 Gen1 thread, it may just be wishful thinking.

Linux The overall hardware support has been surprisingly decent. The MediaTek WiFi driver had some glitches, but with Linux 5.15+ things have considerably improved, and I hope the trend will continue with forthcoming Linux releases. My previous device-driver experience with MediaTek wasn't good, but I took the plunge, considering that in the worst case I'd have the option to swap the card. There's a lot of marketing about Linux + Intel, but I took a gamble on Linux + AMD. There are glitches, but nothing so far that has been a dealbreaker. If anything, I wish Lenovo/AMD would seriously work on the power/thermal issues.

Migration Other than what's mentioned above, I haven't had any serious issues. I may have had some rare occasional hangs, but they've been so infrequent that I haven't spent time investigating them. Upon receiving the machine, my biggest requirement was how to switch my current workstation from the Dell XPS to the Lenovo ThinkPad. I've been using btrfs for some time now, and over the years I have built my own practice of how to structure it: provisioning [sub]volumes based on use case, e.g. keeping separate subvols for cache/temporary data, copy-on-write data, swap, etc. I wish these things could be simplified, either on the btrfs tooling side or in some tool on top of it. Below is a filtered list of the subvols created over the years that were worth moving to the new machine.
rrs@priyasi:~$ cat btrfs-volume-layout 
ID 550 gen 19166 top level 5 path home/foo/.cache
ID 552 gen 1522688 top level 5 path home/rrs
ID 553 gen 1522688 top level 552 path home/rrs/.cache
ID 555 gen 1426323 top level 552 path home/rrs/rrs-home/Libvirt-Images
ID 618 gen 1522672 top level 5 path var/spool/news
ID 634 gen 1522670 top level 5 path var/tmp
ID 635 gen 1522688 top level 5 path var/log
ID 639 gen 1522226 top level 5 path var/cache
ID 992 gen 1522670 top level 5 path disk-tmp
ID 1018 gen 1522688 top level 552 path home/rrs/NoBackup
ID 1196 gen 1522671 top level 5 path etc
ID 23721 gen 775692 top level 5 path swap

btrfs send/receive This did come in handy, but I sorely missed some features. Maybe they aren't there, or they are and I didn't look closely enough. Over the years, different attributes were set on different subvols, and over time I forget what was set where. From a migration point of view, it'd be nice to say, "Take this volume with all its attributes." I didn't find that functionality in send/receive. There's get/set-property, which I noticed later, but by then it was too late. So some sort of tooling, ideally something like btrfs migrate or somesuch, would be nicer. In the file-system world, we already have nice tools for similar scenarios: with rsync, for instance, I can ask it to carry all file attributes. Also, iirc, send/receive works only on read-only volumes. So there's more work one needs to do:
  1. create ro vol
  2. send
  3. receive
  4. don't forget to set the rw property
  5. And then somehow find out other properties set on each individual subvols and [re]apply the same on the destination
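These five steps could be wrapped in a small helper. Here is a hedged, dry-run sketch in Python: it only builds the command strings for one subvolume rather than executing anything (the real calls need root and real btrfs filesystems), and the paths and the final `btrfs property list` inspection step are illustrative assumptions, not a tested tool:

```python
def migrate_commands(subvol: str, src: str, dst: str) -> list[str]:
    # Steps 1-5 above for a single subvolume, as shell command strings.
    snap = f"{src}/{subvol}-ro"
    return [
        f"btrfs subvolume snapshot -r {src}/{subvol} {snap}",  # 1. ro snapshot (send needs ro)
        f"btrfs send {snap} | btrfs receive {dst}/",           # 2+3. send | receive
        f"btrfs property set -ts {dst}/{subvol}-ro ro false",  # 4. make the copy rw again
        f"btrfs property list {src}/{subvol}",                 # 5. inspect remaining properties
    ]

for cmd in migrate_commands("home/rrs", "/media/user/TOSHIBA", "/media/user/BTRFSROOT"):
    print(cmd)
```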
I wish all of this could be condensed into a sub-command. For my own sake, the steps used for this migration were:
user@debian:~$ for volume in $(sudo btrfs sub list /media/user/TOSHIBA/Migrate/ | cut -d ' ' -f9 | grep -v ROOTVOL | grep -v etc | grep -v btrbk); do echo $volume; sudo btrfs send /media/user/TOSHIBA/$volume | sudo btrfs receive /media/user/BTRFSROOT/ ; done
Migrate/snapshot_disk-tmp
At subvol /media/user/TOSHIBA/Migrate/snapshot_disk-tmp
At subvol snapshot_disk-tmp
Migrate/snapshot-home_foo_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_foo_.cache
At subvol snapshot-home_foo_.cache
Migrate/snapshot-home_rrs
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs
At subvol snapshot-home_rrs
Migrate/snapshot-home_rrs_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_.cache
At subvol snapshot-home_rrs_.cache
ERROR: crc32 mismatch in command
Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol snapshot-home_rrs_rrs-home_Libvirt-Images
ERROR: crc32 mismatch in command
Migrate/snapshot-var_spool_news
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_spool_news
At subvol snapshot-var_spool_news
Migrate/snapshot-var_lib_machines
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_lib_machines
At subvol snapshot-var_lib_machines
Migrate/snapshot-var_lib_machines_DebianSidTemplate
..... snipped .....
And then, follow-up with:
user@debian:~$ for volume in $(sudo btrfs sub list /media/user/BTRFSROOT/ | cut -d ' ' -f9); do echo $volume; sudo btrfs property set -ts /media/user/BTRFSROOT/$volume ro false; done
ROOTVOL
ERROR: Could not open: No such file or directory
etc
snapshot_disk-tmp
snapshot-home_foo_.cache
snapshot-home_rrs
snapshot-var_spool_news
snapshot-var_lib_machines
snapshot-var_lib_machines_DebianSidTemplate
snapshot-var_lib_machines_DebSidArmhf
snapshot-var_lib_machines_DebianJessieTemplate
snapshot-var_tmp
snapshot-var_log
snapshot-var_cache
snapshot-disk-tmp
And then finally, renaming everything to match proper:
user@debian:/media/user/BTRFSROOT$ for x in snapshot*; do vol=$(echo $x | cut -d '-' -f2 | sed -e "s|_|/|g"); echo $x $vol; sudo mv $x $vol; done
snapshot-var_lib_machines var/lib/machines
snapshot-var_lib_machines_Apertisv2020ospackTargetARMHF var/lib/machines/Apertisv2020ospackTargetARMHF
snapshot-var_lib_machines_Apertisv2021ospackTargetARM64 var/lib/machines/Apertisv2021ospackTargetARM64
snapshot-var_lib_machines_Apertisv2022dev3ospackTargetARMHF var/lib/machines/Apertisv2022dev3ospackTargetARMHF
snapshot-var_lib_machines_BusterArm64 var/lib/machines/BusterArm64
snapshot-var_lib_machines_DebianBusterTemplate var/lib/machines/DebianBusterTemplate
snapshot-var_lib_machines_DebianJessieTemplate var/lib/machines/DebianJessieTemplate
snapshot-var_lib_machines_DebianSidTemplate var/lib/machines/DebianSidTemplate
snapshot-var_lib_machines_DebianSidTemplate_var_lib_portables var/lib/machines/DebianSidTemplate/var/lib/portables
snapshot-var_lib_machines_DebSidArm64 var/lib/machines/DebSidArm64
snapshot-var_lib_machines_DebSidArmhf var/lib/machines/DebSidArmhf
snapshot-var_lib_machines_DebSidMips var/lib/machines/DebSidMips
snapshot-var_lib_machines_JenkinsApertis var/lib/machines/JenkinsApertis
snapshot-var_lib_machines_v2019 var/lib/machines/v2019
snapshot-var_lib_machines_v2019LinuxSupport var/lib/machines/v2019LinuxSupport
snapshot-var_lib_machines_v2020 var/lib/machines/v2020
snapshot-var_lib_machines_v2021dev3Slim var/lib/machines/v2021dev3Slim
snapshot-var_lib_machines_v2021dev3SlimTarget var/lib/machines/v2021dev3SlimTarget
snapshot-var_lib_machines_v2022dev2OspackMinimal var/lib/machines/v2022dev2OspackMinimal
snapshot-var_lib_portables var/lib/portables
snapshot-var_log var/log
snapshot-var_spool_news var/spool/news
snapshot-var_tmp var/tmp
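As an aside, the `cut -d '-' -f2` in the rename loop above can truncate snapshot names that themselves contain a hyphen (e.g. `disk-tmp` or `rrs-home`). A hedged Python sketch of the same name-to-path mapping that sidesteps this, by stripping the fixed prefix instead of cutting at the first hyphen (the function name is mine, not from the post):

```python
def snapshot_to_path(name: str) -> str:
    # Map e.g. "snapshot-var_lib_machines" -> "var/lib/machines".
    # Stripping a fixed prefix keeps hyphenated names such as "rrs-home"
    # intact; one of the snapshots above uses "snapshot_" rather than
    # "snapshot-", so both prefixes are accepted.
    for prefix in ("snapshot-", "snapshot_"):
        if name.startswith(prefix):
            return name[len(prefix):].replace("_", "/")
    raise ValueError(f"not a snapshot name: {name!r}")

print(snapshot_to_path("snapshot-var_lib_machines"))  # var/lib/machines
```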

snapper Entirely independent of this, but indirectly related: I use snapper as my snapshotting tool. It worked perfectly on my previous machine. While everything else migrated fine, the one thing that fell apart was snapper; it just wouldn't start/run properly. The funny thing is that I simply removed the snapper configs and reinitialized with the exact same config, and voilà, snapper was happy.

Conclusion That was pretty much it. With the above done, I also migrated /boot and then just chrooted in to install the boot loader. At some point I'd like to explore other boot options, but given that that is such a non-essential task, it is low on the list. The good part was that I booted into my new machine with my exact workstation setup as it was, all the way down to the user cache and the desktop session, which was nice. But I surely think there's room for a better migration experience here, if not directly as btrfs migrate then maybe as an independent tool. The problem is that such a tool would be used once in years, so I didn't find the motivation to write one. But this would surely be a good use case for the distribution vendors.

Russ Allbery: Review: Hench

Review: Hench, by Natalie Zina Walschots
Publisher: William Morrow
Copyright: September 2020
ISBN: 0-06-297859-4
Format: Kindle
Pages: 403
Anna Tromedlov is a hench, which means she does boring things for terrible people for money. Supervillains need a lot of labor to keep their bases and criminal organizations running, and they get that labor the same way everyone else does: through temporary agencies. Anna does spreadsheets, preferably from home on her couch. On-site work was terrifying and she tried to avoid it, but the lure of a long-term contract was too strong. The Electric Eel, despite being a creepy sleazeball, seemed to be a manageable problem. He needed some support at a press conference, which turns out to be code for being a diversity token in front of the camera, but all she should have to do is stand there. That's how Anna ends up holding the mind control device to the head of the mayor's kid when the superheroes attack, followed shortly by being thrown across the room by Supercollider. Left with a complex fracture of her leg that will take months to heal, a layoff notice and a fruit basket from Electric Eel's company, and a vaguely menacing hospital conversation with the police (including Supercollider in a transparent disguise) in which it's made clear to her that she is mistaken about Supercollider's hand-print on her thigh, Anna starts wondering just how much damage superheroes have done. The answer, when analyzed using the framework for natural disasters, is astonishingly high. Anna's resulting obsession with adding up the numbers leads to her starting a blog, the Injury Report, with a growing cult following. That, in turn, leads to a new job and a sponsor: the mysterious supervillain Leviathan. To review this book properly, I need to talk about Watchmen. One of the things that makes superheroes interesting culturally is the straightforwardness of their foundational appeal. The archetypal superhero story is an id story: an almost pure power fantasy aimed at teenage boys. Like other pulp mass media, they reflect the prevailing cultural myths of the era in which they're told. 
World War II superheroes are mostly all-American boy scouts who punch Nazis. 1960s superheroes are a more complex mix of outsider misfits with a moral code and sarcastic but earnestly ethical do-gooders. The superhero genre is vast, with numerous reinterpretations, deconstructions, and alternate perspectives, but its ur-story is a good versus evil struggle of individual action, in which exceptional people use their powers for good to defeat nefarious villains. Watchmen was not the first internal critique of the genre, but it was the one that everyone read in the 1980s and 1990s. It takes direct aim at that moral binary. The superheroes in Watchmen are not paragons of virtue (some of them are truly horrible people), and they have just as much messy entanglement with the world as the rest of us. It was superheroes re-imagined for the post-Vietnam, post-Watergate era, for the end of the Cold War when we were realizing how many lies about morality we had been told. But it still put superheroes and their struggles with morality at the center of the story. Hench is a superhero story for the modern neoliberal world of reality TV and power inequality in the way that Watchmen was a superhero story for the Iran-Contra era and the end of the Cold War. Whether our heroes have feet of clay is no longer a question. Today, a better question is whether the official heroes, the ones that are celebrated as triumphs of individual achievement, are anything but clay. Hench doesn't bother asking whether superheroes have fallen short of their ideal; that answer is obvious. What Hench asks instead is a question familiar to those living in a world full of televangelists, climate denialism, manipulative advertising, and Facebook: are superheroes anything more than a self-perpetuating scam? Has the good superheroes supposedly do ever outweighed the collateral damage? Do they care in the slightest about the people they're supposedly protecting? 
Or is the whole system of superheroes and supervillains a performance for an audience, one that chews up bystanders and spits them out mangled while delivering simplistic and unquestioned official morality? This sounds like a deeply cynical premise, but Hench is not a cynical book. It is cynical about superheroes, which is not the same thing. The brilliance of Walschots's approach is that Anna has a foot in both worlds. She works for a supervillain and, over the course of the book, gains access to real power within the world of superheroic battles. But she's also an ordinary person with ordinary problems: not enough money, rocky friendships, deep anger at the injustices of the world and the way people like her are discarded, and now a disability and PTSD. Walschots perfectly balances the tension between those worlds and maintains that tension straight to the end of the book. From the supervillain world, Anna draws support, resources, and a mission, but all of the hope, true morality, and heart of this book comes from the ordinary side. If you had the infrastructure of a supervillain at your disposal, what would you do with it? Anna's answer is to treat superheroes as a destructive force like climate change, and to do whatever she can to drive them out of the business and thus reduce their impact on the world. The tool she uses for that is psychological warfare: make them so miserable that they'll snap and do something too catastrophic to be covered up. And the raw material for that psychological warfare is data. That's the foot in the supervillain world. In descriptions of this book, her skills with data are often called her superpower. That's not exactly wrong, but the reason why she gains power and respect is only partly because of her data skills. 
Anna lives by the morality of the ordinary people world: you look out for your friends, you treat your co-workers with respect as long as they're not assholes, and you try to make life a bit better for the people around you. When Leviathan gives her the opportunity to put together a team, she finds people with skills she admires, funnels work to people who are good at it, and worries about the team dynamics. She treats the other ordinary employees of a supervillain as people, with lives and personalities and emotions and worth. She wins their respect. Then she uses their combined skills to destroy superhero lives. I was fascinated by the moral complexity in this book. Anna and her team do villainous things by the morality of the superheroic world (and, honestly, by the morality of most readers), including some things that result in people's deaths. By the end of the book, one could argue that Anna has been driven by revenge into becoming an unusual sort of supervillain. And yet, she treats the people around her so much better than either the heroes or the villains do. Anna is fiercely moral in all the ordinary person ways, and that leads directly to her becoming a villain in the superhero frame. Hench doesn't resolve that conflict; it just leaves it on the page for the reader to ponder. The best part about this book is that it's absurdly grabby, unpredictable, and full of narrative momentum. Walschots's pacing kept me up past midnight a couple of times and derailed other weekend plans so that I could keep reading. I had no idea where the plot was going even at the 80% mark. The ending is ambiguous and a bit uncomfortable, just like the morality throughout the book, but I liked it the more I thought about it. One caveat, unfortunately: Hench has some very graphic descriptions of violence and medical procedures, and there's an extended torture sequence with some incredibly gruesome body horror that I thought went on far too long and was unnecessary to the plot. 
If you're a bit squeamish like I am, there are some places where you'll want to skim, including one sequence that's annoyingly intermixed with important story developments. Otherwise, though, this is a truly excellent book. It has a memorable protagonist with a great first-person voice, an epic character arc of empowerment and revenge, a timely take on the superhero genre that uses it for sharp critique of neoliberal governance and reality TV morality, a fascinatingly ambiguous and unsettled moral stance, a gripping and unpredictable plot, and some thoroughly enjoyable competence porn. I had put off reading it because I was worried that it would be too cynical or dark, but apart from the unnecessary torture scene, it's not at all. Highly recommended. Rating: 9 out of 10

9 January 2022

Russell Coker: Video Conferencing (LCA)

I've just done a tech check for my LCA lecture. I had initially planned to do what I had done before and use my phone for recording audio and video and my PC for other stuff. The problem is that I wanted to get an external microphone going, and plugging in a USB microphone turned off the speaker in the phone (it seemed to direct audio to a non-existent USB audio output). I tried using Bluetooth headphones with the USB microphone and that didn't work. Eventually a viable option seemed to be using USB headphones on my PC with the phone for camera and microphone. Then it turned out that my phone (Huawei Mate 10 Pro) didn't support resolutions higher than VGA with Chrome (it didn't have the advanced settings menu to select resolution); this is probably an issue of Android build features. So the best option is to use a webcam on the PC. I was recommended a Logitech C922, but OfficeWorks only has a Logitech C920, which is apparently OK.

The free connection test from freeconference.com [1] is good for testing out how your browser works for videoconferencing. It tests each feature separately and is easy to run.

After buying the C920 webcam I found that it sometimes worked and sometimes caused a kernel panic like the following (partial panic log included for the benefit of people Googling this Logitech C920 problem):
[95457.805417] BUG: kernel NULL pointer dereference, address: 0000000000000000
[95457.805424] #PF: supervisor read access in kernel mode
[95457.805426] #PF: error_code(0x0000) - not-present page
[95457.805429] PGD 0 P4D 0 
[95457.805431] Oops: 0000 [#1] SMP PTI
[95457.805435] CPU: 2 PID: 75486 Comm: v4l2src0:src Not tainted 5.15.0-2-amd64 #1  Debian 5.15.5-2
[95457.805438] Hardware name: HP ProLiant ML110 Gen9/ProLiant ML110 Gen9, BIOS P99 02/17/2017
[95457.805440] RIP: 0010:usb_ifnum_to_if+0x3a/0x50 [usbcore]
...
[95457.805481] Call Trace:
[95457.805484]  
[95457.805485]  usb_hcd_alloc_bandwidth+0x23d/0x360 [usbcore]
[95457.805507]  usb_set_interface+0x127/0x350 [usbcore]
[95457.805525]  uvc_video_start_transfer+0x19c/0x4f0 [uvcvideo]
[95457.805532]  uvc_video_start_streaming+0x7b/0xd0 [uvcvideo]
[95457.805538]  uvc_start_streaming+0x2d/0xf0 [uvcvideo]
[95457.805543]  vb2_start_streaming+0x63/0x100 [videobuf2_common]
[95457.805550]  vb2_core_streamon+0x54/0xb0 [videobuf2_common]
[95457.805555]  uvc_queue_streamon+0x2a/0x40 [uvcvideo]
[95457.805560]  uvc_ioctl_streamon+0x3a/0x60 [uvcvideo]
[95457.805566]  __video_do_ioctl+0x39b/0x3d0 [videodev]
It turns out that Ubuntu Launchpad bug #1827452 has great information on this problem [2]. Apparently if the device decides it doesn't have enough power then it will reconnect and get a different USB bus device number, and this often happens while the kernel is initialising it. There's a race condition in the kernel code in which the code to initialise the device won't realise that the device has been detached, will dereference a NULL pointer, and will then mess up other things in USB device management. The end result for me is that all USB devices become unusable in this situation: commands like lsusb hang, and a regular shutdown/reboot hangs because it can't kill the user session because something is blocked on USB. One of the comments on the Launchpad bug is that a powered USB hub can alleviate the problem, while a USB extension cable (which I had been using) can exacerbate it. Officeworks currently advertises only one powered USB hub; it's described as USB 3 but also "maximum speed 480 Mbps" (USB 2 speed). So basically they are selling a USB 2 hub for 4x the price that USB 2 hubs used to sell for.

When debugging this I used the cheese webcam utility program and ran it in a KVM virtual machine. The KVM parameters -device qemu-xhci -usb -device usb-host,hostbus=1,hostaddr=2 (where 1 and 2 are replaced by the Bus and Device numbers from lsusb) allow the USB device to be passed through to the VM. Doing this meant that I didn't have to reboot my PC every time a webcam test failed.

For audio I'm using the Sades Wand gaming headset I wrote about previously [3].
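As an aside on the KVM passthrough parameters above: the Bus and Device numbers come straight out of lsusb output. A small sketch of extracting them automatically (the lsusb line below is a sample for illustration, not output from the author's machine):

```shell
# Sample lsusb line for a C920-class webcam (assumed, for illustration).
line='Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920'

# Pull out the bus and device numbers, stripping leading zeros and the colon.
bus=$(echo "$line" | awk '{print $2}' | sed 's/^0*//')
dev=$(echo "$line" | awk '{print $4}' | tr -d ':' | sed 's/^0*//')

# Emit the qemu arguments for passing this device through to the VM.
echo "-device qemu-xhci -usb -device usb-host,hostbus=${bus},hostaddr=${dev}"
```

Note that because the flaky device renumbers itself on reconnect, these values can change between test runs, so regenerating them each time is worthwhile.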

Russ Allbery: Review: Redemptor

Review: Redemptor, by Jordan Ifueko
Series: Raybearer #2
Publisher: Amulet Books
Copyright: 2021
ISBN: 1-68335-720-5
Format: Kindle
Pages: 328
Redemptor is the second half of a duology that started with Raybearer. You could read the first book without the second, but reading the second without the first will not make much sense. I'm going to be a bit elliptical in my plot description since there's a lot of potential for spoilers for the first book. Tarisai has reached a point of stability and power, but she's also committed herself to a goal, one that will right a great historical and ongoing injustice. She's also now in a position to both notice and potentially correct numerous other injustices in the structure of her society, and plans to start by defending those closest to her. But in the midst of her opening gambit to save someone she believes is unjustly imprisoned, the first murderous undead child appears, attacking both Tarisai's fragile sense of security and her self-esteem and self-worth. Before long, she's drowning in feelings of inadequacy and isolation, and her grand plans for reordering the world have turned into an anxiety loop of self-flagellating burnout. I so much wanted to like this book. Argh. I think I see what Ifueko was aiming for, and it's a worthy topic for a novel. In Raybearer, Tarisai got the sort of life that she previously could only imagine, but she's also the sort of person who shoulders massive obligations. Imposter syndrome, anxiety, overwork, and burnout are realistic risks, and are also important topics to write about. There are some nicely subtle touches buried in this story, such as the desire of her chosen family to have her present and happy without entirely understanding why she isn't, and without seeing the urgency that she sees in the world's injustice. The balancing act of being effective without overwhelming oneself is nearly impossible, and Tarisai has very little preparation or knowledgeable support. But this story is told with the subtlety of a sledgehammer, and in a way that felt forced rather than arising naturally from the characters. 
If the point of emphasis had been a disagreement with her closest circle over when and how much the world should be changed, I think this would be a better book. In the places where this drives the plot, it is a better book. But Ifueko instead externalizes anxiety and depression in the form of obviously manipulative demonic undead children who (mostly) only Tarisai can see, and it's just way too much. Her reactions are manipulated and sometimes externally imposed in a way that turns what should have been a character vs. self plot into a character vs. character plot in which the protagonist is very obviously making bad decisions and the antagonist is an uninteresting cliche.

The largest problem I had with this book is that I found it thuddingly obvious, in part because the plot felt like it was on narrowly constrained rails to ensure it hit all of the required stops. When the characters don't want the plot to go somewhere, they're sidelined, written out of the story, or otherwise forcibly overridden. Tarisai has to feel isolated, so all the people who, according to the events of the previous book and the established world-building rules, would not let her be isolated are pushed out of her life. When this breaks the rules of magic in this world, those rules are off-handedly altered. Characters that could have had their own growth arcs after Raybearer become static and less interesting, since there's no room for them in the plot. Instead, we get all new characters, which gives Redemptor a bit of a cast size problem.

Underneath this, there is an occasional flash of great writing. Ifueko chooses to introduce a dozen mostly-new characters to an already large cast and I was still able to mostly keep them straight, which shows real authorial skill. She is very good with short bursts of characterization to make new characters feel fresh and interesting.
Even the most irritating of the new characters (Crocodile, whose surprise twist I thought was obvious and predictable) is an interesting archetype to explore in a book about activism and activist burnout. I can see some pieces of a better book here. But I desperately wanted something to surprise me, for Tarisai or one of the other characters to take the plot in some totally unexpected direction, the way that Raybearer did. It never happened.

That leads directly to another complaint: I liked Raybearer in part because of the freshness of a different mythological system and a different storytelling tradition than what we typically get in fantasy novels. I was hoping for more of the same in Redemptor, which meant I was disappointed when I got a mix of Christianity and Greek mythology. As advertised by Raybearer, the central mythological crisis of Redemptor concerns the Underworld. This doesn't happen until about 80% into the book (which is also a bit of a problem; the ending felt rushed given how central it was to the plot), so I can't talk about it in detail without spoiling it. But what I think I can say is that unfortunately the religious connotations of the title are not an accident. Rather than something novel that builds on the excellent idea of the emi-ehran spirit animal, there is a lot of Christ symbolism mixed with an underworld that could have come from an Orpheus retelling. There's nothing inherently wrong with this (although the Christian bits landed poorly for me), but it wasn't what I was hoping for from the mythology of this world.

I rarely talk much about the authors in fiction reviews. I prefer to let books stand on their own without trying too hard to divine the author's original intentions. But here, I think it's worth acknowledging Ifueko's afterword in which she says that writing Redemptor in the middle of a pandemic, major depression, and the George Floyd protests was the most difficult thing she'd ever done.
I've seen authors write similar things in afterwords when the effect on the book was minimal or invisible, but I don't think that was the case here. Redemptor is furious, anxious, depressed, and at points despairing, and while it's okay for novels to be all of those things when it's under the author's control, here they felt like emotions that were imposed on the story from outside.

Raybearer was an adventure story about found family and ethics that happened to involve a lot of politics. Redemptor is a story about political activism and governance, but written in a universe whose bones are set up for an adventure story. The mismatch bothered me throughout; not only did these not feel like the right characters to tell this story with, but the politics were too simple, too morally clear-cut, and too amenable to easy solutions for a good political fantasy. Raybearer focused its political attention on colonialism. That's a deep enough topic by itself to support a duology (or more), but Redemptor adds in property rights, land reform, economic and social disparity, unfair magical systems, and a grab bag of other issues, and it overwhelms the plot. There isn't space and time to support solutions with sufficient complexity to satisfyingly address the problems. Ifueko falls back on benevolent dictator solutions, and I understand why, but that's not the path to a satisfying resolution in an overtly political fantasy.

This is the sort of sequel that leaves me wondering if I can recommend reading the first book and not the second, and that makes me sad. Redemptor is not without its occasional flashes of brilliance, but I did not have fun reading this book and I can't recommend the experience. That said, I think this is a book problem, not an author problem; I will happily read Ifueko's next novel, and I suspect it will be much better. Rating: 5 out of 10

Matthew Garrett: Pluton is not (currently) a threat to software freedom

At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on pages 12 and 13 of this slidedeck, suggests that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.
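The point that signature enforcement needs no TPM at all can be illustrated with a toy allowlist check. This is not how Windows code signing actually works (Authenticode verifies X.509 certificate chains, not bare hashes); it only shows that the decision to run or refuse code is made entirely in OS software:

```python
import hashlib

# Toy OS-side execution policy: run a binary only if its SHA-256 digest is
# on an allowlist. The names and data here are illustrative, not a real API.
ALLOWED = {hashlib.sha256(b"trusted-binary-contents").hexdigest()}

def may_execute(binary: bytes) -> bool:
    # The check happens in ordinary OS code; no security chip is consulted.
    return hashlib.sha256(binary).hexdigest() in ALLOWED

assert may_execute(b"trusted-binary-contents")
assert not may_execute(b"tampered-binary")
```

The same logic would work whether the machine had a Pluton, a discrete TPM, or neither, which is the article's point: enforcement capability already exists in the OS today.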

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.
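The measurement side of attestation can be sketched as the TPM's PCR-extend operation: each boot component is folded into a running hash, so the final PCR value commits to the entire boot sequence in order. This is a simplified model (real TPMs have multiple PCR banks, algorithms, and localities), but the hash chain itself is the core idea:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR_new = SHA-256(PCR_old || SHA-256(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start out zeroed
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)

# Changing any earlier component, or reordering them, changes the final
# value, which is what a remote verifier compares against an expected
# "golden" measurement during attestation.
```

Note what this does and does not enable: the chain records what booted, but nothing in it can stop a different kernel from booting; the remote site can only decline to talk to you afterwards, and only if the OS cooperates in producing the quote.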

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There are various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that.)


8 January 2022

John Goerzen: Make the Internet Yours Again With an Instant Mesh Network

I'm going to lead with the technical punch line, and then explain it: Yggdrasil Network is an opportunistic mesh that can be deployed privately or as part of a global-scale network. Each node gets a stable IPv6 address (or even an entire /64) that is derived from its public key and is bound to that node as long as the node wants it (of course, it can generate a new keypair anytime) and is valid wherever the node joins the mesh. All traffic is end-to-end encrypted. Yggdrasil will automatically discover peers on a LAN via broadcast beacons, and requires zero configuration to peer in such a way. It can also run as an overlay network atop the public Internet. Public peers serve as places to join the global network, and since it's a mesh, if one device on your LAN joins the global network, the others will automatically have visibility on it also, thanks to the mesh routing. It neatly solves a lot of problems of portability (my ssh sessions stay live as I move networks, for instance), VPN (incoming ports aren't required since local nodes can connect to a public peer via an outbound connection), security, and so forth.

Now on to the explanation:

The Tyranny of IP rigidity Every device on the Internet, at one time, had its own globally-unique IP address. This number was its identifier to the world; with an IP address, you can connect to any machine anywhere. Even now, when you connect to a computer to download a webpage or send a message, under the hood, your computer is talking to the other one by IP address. Only, now it's hard to get one. The Internet protocol we all grew up with, version 4 (IPv4), didn't have enough addresses for the explosive growth we've seen. Internet providers and IT departments had to use a trick called NAT (Network Address Translation) to give you a sort of fake IP address, so they could put hundreds or thousands of devices behind a single public one.
That, plus the mobility of devices changing IPs whenever they change locations, has meant that a fundamental rule of the old Internet is now broken: every participant is an equal peer. (Well, not any more.) Nowadays, you can't host your own website from your phone. Or share files from your house. (Without, that is, the use of some third-party service that locks you down and acts as an intermediary.) Back in the 90s, I worked at a university, and I, like every other employee, had a PC on my desk with an unfirewalled public IP. I installed a webserver, and poof: instant website. Nowadays, running a website from home is just about impossible. You may not have a public IP, and if you do, it likely changes from time to time. And even then, your ISP probably blocks you from running servers on it. In short, you have to buy your way into the resources to participate on the Internet. I wrote about these problems in more detail in my article Recovering Our Lost Free Will Online.

Enter Yggdrasil I already gave away the punch line at the top. But what does all that mean?
  • Every device that participates gets an IP address that is fully live on the Yggdrasil network.
  • You can host a website, or a mail server, or whatever you like with your Yggdrasil IP.
  • Encryption and authentication are smaller (though not nonexistent) worries thanks to the built-in end-to-end encryption.
  • You can travel the globe, and your IP will follow you: onto a plane, from continent to continent, wherever. Yggdrasil will find you.
  • I've set up /etc/hosts on my laptop to use the Yggdrasil IPs for other machines on my LAN. Now I can just ssh foo and it will work from home, from a coffee shop, from a 4G tether, wherever.

Now, other tools like tinc can do this, obviously. And I could stop there; I could have a completely closed, private Yggdrasil network. Or, I can join the global Yggdrasil network. Each device, in addition to accepting peers it finds on the LAN, can also be configured to establish outbound peering connections or accept inbound ones over the Internet. Put a public peer or two in your configuration and you've joined the global network. Most people will probably want to do that on every device (because why not?), but you could also do that from just one device on your LAN. Again, there's no need to explicitly build routes via it; your other machines on the LAN will discover the route's existence and use it. This is one of many projects that are working to democratize and decentralize the Internet. So far, it has been quite successful, growing to over 2000 nodes. It is the direct successor to the earlier cjdns/Hyperboria and BATMAN networks, and aims to be a proof of concept and a viable tool for global expansion. Finally, think about how much easier development is when you don't have to necessarily worry about TLS complexity in every single application. When you don't have to worry about port forwarding and firewall penetration. It's what the Internet should be.
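Joining the global network really is just a matter of listing a public peer or two in the node's configuration. A sketch of the relevant fragment of /etc/yggdrasil.conf (the file is HJSON as produced by yggdrasil -genconf; the peer address below is a placeholder, not a real public peer - pick working ones from the project's public peers list):

```
{
  # Public peers to dial out to. Everything else can stay at the defaults
  # that `yggdrasil -genconf` generates, including the node's keypair.
  Peers: [
    tls://peer.example.org:443
  ]
}
```

LAN peering needs no configuration at all; this section only matters for the device (or devices) bridging your LAN onto the global mesh.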

7 January 2022

Ingo Juergensmann: Moving my repositories from Github to Codeberg.org

Some weeks ago I moved my repositories from Github (evil, Microsoft, blabla) to Codeberg. Codeberg is a non-profit organisation located in Germany. When you really dislike Microsoft products it is somewhat a natural reaction (at least for me) to move away from Github, which was bought by Microsoft, to some more independent service provider for hosting source code. A nice thing about Codeberg as well is that it offers a migration tool from Github to Codeberg. Additionally, Codeberg is also on Mastodon. If you are looking for a good service for hosting your git repositories and want to move away from Github as well, please give Codeberg a try. So, please update your git settings from https://github.com/ingoj to https://codeberg.org/Windfluechter (or the specific repo).
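For an existing clone, updating "your git settings" amounts to pointing the remote at the new host. A sketch (the repository name "demo" is an assumption for illustration; substitute the actual repo):

```shell
# Work in a throwaway clone for demonstration purposes.
cd "$(mktemp -d)"
git init -q demo && cd demo
git remote add origin https://github.com/ingoj/demo.git

# Switch the existing remote from Github to Codeberg.
git remote set-url origin https://codeberg.org/Windfluechter/demo.git

# Confirm the new URL took effect.
git remote get-url origin
```

After this, fetch, pull, and push all go to Codeberg with no other changes to the local repository.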

6 January 2022

Jacob Adams: Linux Hibernation Documentation

Recently I've been curious about how hibernation works on Linux, as it's an interesting interaction between hardware and software. There are some notes in the Arch wiki and the kernel documentation (as well as some kernel documentation on debugging hibernation and on sleep states more generally), and of course the ACPI Specification.

The Formal Definition ACPI (Advanced Configuration and Power Interface) is, according to the spec, an architecture-independent power management and configuration framework that forms a subsystem within the host OS and defines a hardware register set for power states. ACPI defines four global system states: G0 (working/on), G1 (sleeping), G2 (soft off), and G3 (mechanical off)[1]. Within G1 there are 4 sleep states, numbered S1 through S4. There are also S0 and S5, which are equivalent to G0 and G2 respectively[2].

Sleep According to the spec, the ACPI S1-S4 states all do the same thing from the operating system's perspective, but each saves progressively more power, so the operating system is expected to pick the deepest of these states when entering sleep. However, most operating systems[3] distinguish between S1-S3, which are typically referred to as sleep or suspend, and S4, which is typically referred to as hibernation.

S1: CPU Stop and Cache Wipe The CPU caches are wiped and then the CPU is stopped, which the spec notes is equivalent to the WBINVD instruction followed by the STPCLK signal on x86. However, nothing is powered off.

S2: Processor Power off The system stops the processor and most system clocks (except the real time clock), then powers off the processor. Upon waking, the processor will not continue what it was doing before, but instead use its reset vector[4].

S3: Suspend/Sleep (Suspend-to-RAM) Mostly equivalent to S2, but hardware ensures that only memory and whatever other hardware memory requires are powered.

S4: Hibernate (Suspend-to-Disk) In this state, all hardware is completely powered off and an image of the system is written to disk, to be restored from upon reapplying power. Writing the system image to disk can be handled by the operating system if supported, or by the firmware.

Linux Sleep States Linux has its own set of sleep states which mostly correspond with ACPI states.

Suspend-to-Idle This is a software-only sleep that puts all hardware into the lowest power state it can, suspends timekeeping, and freezes userspace processes. All userspace and some kernel threads[5], except those tagged with PF_NOFREEZE, are frozen before the system enters a sleep state. Frozen tasks are sent to the __refrigerator(), where they set TASK_UNINTERRUPTIBLE and PF_FROZEN and infinitely loop until PF_FROZEN is unset[6]. This prevents these tasks from doing anything during the imaging process. Any userspace process running on a different CPU while the kernel is trying to create a memory image would cause havoc. This is also done because any filesystem changes made during this would be lost and could cause the filesystem and its related in-memory structures to become inconsistent. Also, creating a hibernation image requires about 50% of memory free, so no tasks should be allocating memory, which freezing also prevents.

Standby This is equivalent to ACPI S1.

Suspend-to-RAM This is equivalent to ACPI S3.

Hibernation Hibernation is mostly equivalent to ACPI S4 but does not require S4, only requiring low-level code for resuming the system to be present for the underlying CPU architecture according to the Linux sleep state docs. To hibernate, everything is stopped and the kernel takes a snapshot of memory. Then, the system writes out the memory image to disk. Finally, the system either enters S4 or turns off completely. When the system restores power it boots a new kernel, which looks for a hibernation image and loads it into memory. It then overwrites itself with the hibernation image and jumps to a resume area of the original kernel[7]. The resumed kernel restores the system to its previous state and resumes all processes.

Hybrid Suspend Hybrid suspend does not correspond to an official ACPI state, but instead is effectively a combination of S3 and S4. The system writes out a hibernation image, but then enters suspend-to-RAM. If the system wakes up from suspend it will discard the hibernation image, but if the system loses power it can safely restore from the hibernation image.
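The Linux-to-ACPI correspondence described above fits into a small lookup table. This is just a summary of this post's sections, not an official kernel enumeration:

```python
# Linux sleep states and the ACPI states they (mostly) map to, per the
# sections above. None means no ACPI sleep state is required at all.
LINUX_TO_ACPI = {
    "suspend-to-idle": None,         # software-only sleep
    "standby": "S1",
    "suspend-to-ram": "S3",
    "hibernation": "S4",             # mostly equivalent; S4 itself optional
    "hybrid-suspend": ("S3", "S4"),  # image written to disk, then suspend-to-RAM
}

for linux_state, acpi in LINUX_TO_ACPI.items():
    print(f"{linux_state}: {acpi}")
```

Note the asymmetry the table hides: hibernation only needs architecture-specific resume code, so it can work even on systems whose firmware lacks S4 support.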
    1. The difference between soft and mechanical off is that mechanical off is entered and left by a mechanical means (for example, turning off the system s power through the movement of a large red switch)
    2. It s unclear to me why G and S states overlap like this. I assume this is a relic of an older spec that only had S states, but I have not as yet found any evidence of this. If someone has any information on this, please let me know and I ll update this footnote.
    3. Of the operating systems I know of that support ACPI sleep states (I checked Windows, Mac, Linux, and the three BSDs8), only MacOS does not allow the user to deliberately enable hibernation, instead supporting a hybrid suspend it calls safe sleep
    4. The reset vector of a processor is the default location where, upon a reset, the processor will go to find the first instruction to execute. In other words, the reset vector is a pointer or address where the processor should always begin its execution. This first instruction typically branches to the system initialization code. Xiaocong Fan, Real-Time Embedded Systems, 2015
    5. All kernel threads are tagged with PF_NOFREEZE by default, so they must specifically opt-in to task freezing.
    6. This is not from the docs, but from kernel/freezer.c which also notes Refrigerator is place where frozen processes are stored :-).
    7. This is the operation that requires special architecture-specific low-level code .
    8. Interestingly, NetBSD has a setting to enable hibernation, but does not actually support hibernation.

    5 January 2022

    Reproducible Builds: Reproducible Builds in December 2021

    Welcome to the December 2021 report from the Reproducible Builds project! In these reports, we try and summarise what we have been up to over the past month, as well as what else has been occurring in the world of software supply-chain security. As a quick recap of what reproducible builds is trying to address, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. As always, if you would like to contribute to the project, please get in touch with us directly or visit the Contribute page on our website.
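The promise of bit-for-bit identical results can be checked mechanically: hash the artifact from two independent builds and compare. A toy sketch of that core comparison (file paths hypothetical; real rebuilder infrastructure such as rebuilderd layers attestation and scheduling on top of exactly this check):

```python
import hashlib


def file_digest(path, algo="sha256"):
    """Hash a build artifact in chunks so large binaries don't fill RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def builds_reproducible(artifact_a, artifact_b):
    """Two builds 'reproduce' when their artifacts are bit-for-bit identical."""
    return file_digest(artifact_a) == file_digest(artifact_b)
```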
    Early in December, Julien Voisin blogged about setting up a rebuilderd instance in order to reproduce Tails images. Building on previous work from 2018, Julien has now set up a public-facing instance which is providing build attestations. As Julien dryly notes in his post, "Currently, this isn't really super-useful to anyone, except maybe some Tails developers who want to check that the release manager didn't backdoor the released image." Naturally, we would contend sincerely that this is indeed useful.
    Tor, the secure/anonymous networking software, now supports reproducible source releases. According to the project's changelog, version 0.4.7.3-alpha of Tor can now build reproducible tarballs via the make dist-reprod command. This issue was tracked via Tor issue #26299.
    Fabian Keil posted a question to our mailing list this month asking how they might analyse differences in images produced with FreeBSD's and ElectroBSD's mkimg and makefs commands:
    After rebasing ElectroBSD from FreeBSD stable/11 to stable/12
    I recently noticed that the "memstick" images are unfortunately
    still not 100% reproducible.
    Fabian's original post generated a short back-and-forth with Chris Lamb regarding how diffoscope might be able to support the particular format of images generated by this command set.

    diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 195, 196, 197 and 198 to Debian, as well as making the following changes:
    • Support showing Ordering differences only within .dsc field values. [ ]
    • Add support for XMLb files. [ ]
    • Also add, for example, /usr/lib/x86_64-linux-gnu to our local binary search path. [ ]
    • Support OCaml versions 4.11, 4.12 and 4.13. [ ]
    • Drop some unnecessary has_same_content_as logging calls. [ ]
    • Replace token variable with an anonymously-named variable instead to remove extra lines. [ ]
    • Don't use the runtime platform's native endianness when unpacking .pyc files. This fixes test failures on big-endian machines. [ ]
    Mattia Rizzolo also made a number of changes to diffoscope this month as well, such as:
    • Also recognize GnuCash files as XML. [ ]
    • Support the pgpdump PGP packet visualiser version 0.34. [ ]
    • Ignore the new Lintian tag binary-with-bad-dynamic-table. [ ]
    • Fix the Enhances field in debian/control. [ ]
    Finally, Brent Spillner fixed the version detection for the Black "uncompromising code formatter" [ ], Jelle van der Waa added an external tool reference for Arch Linux [ ] and Roland Clobus added support for reporting when the GNU_BUILD_ID field has been modified [ ]. Thank you for your contributions!

    Distribution work In Debian this month, 70 reviews of packages were added, 27 were updated and 41 were removed, adding to our database of knowledge about specific issues. A number of new issue types were created as well. Additionally, strip-nondeterminism version 1.13.0-1 was uploaded to Debian unstable by Holger Levsen. It included contributions already covered in previous months as well as new ones from Mattia Rizzolo, particularly that the dh_strip_nondeterminism Debian integration interface uses the new get_non_binnmu_date_epoch() utility when available: this is important to ensure that strip-nondeterminism does not break some kinds of binNMUs.
    In the world of openSUSE, meanwhile, Bernhard M. Wiedemann posted his monthly reproducible builds status report.
    In NixOS, work towards the longer-term goal of making the graphical installation image reproducible is ongoing. For example, Artturin made the gnome-desktop package reproducible.

    Upstream patches The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. In December, we wrote a large number of such patches, including:

    Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
    • Holger Levsen:
      • Run the Debian scheduler less often. [ ]
      • Fix the name of the Debian testing suite. [ ]
      • Detect builds that are rescheduling due to problems with the diffoscope container. [ ]
      • No longer special-case particular machines having a different /boot partition size. [ ]
      • Automatically fix failed apt-daily and apt-daily-upgrade services [ ], failed e2scrub_all.service & user@ systemd units [ ][ ] as well as generic build failures [ ].
      • Simplify a script to powercycle arm64 architecture nodes hosted at/by codethink.co.uk. [ ]
      • Detect if the udd-mirror.debian.net service is down. [ ]
      • Various miscellaneous node maintenance. [ ][ ]
    • Roland Clobus (Debian live image generation):
      • If the latest snapshot is not complete yet, try to use the previous snapshot instead. [ ]
      • Minor: whitespace correction + comment correction. [ ]
      • Use unique folders and reports for each Debian version. [ ]
      • Turn off debugging. [ ]
      • Add a better error description for incorrect/missing arguments. [ ]
      • Report non-reproducible issues in Debian sid images. [ ]
    Lastly, Mattia Rizzolo updated the automatic logfile parsing rules in a number of ways (e.g. to ignore a warning about the Python setuptools deprecation) [ ][ ] and Vagrant Cascadian adjusted the config for the Squid caching proxy on a node. [ ]

    If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

    4 January 2022

    Russell Coker: Terrorists Inspired by Fiction

    The Tom Clancy book Debt of Honor, published in August 1994, first introduced the concept of a heavy passenger aircraft being used as a weapon by terrorists against a well-defended building. In April 1994 there was an attempt to hijack and deliberately crash FedEx flight 705. It's possible for a book to be changed 4 months before publication, but it seems unlikely that a significant plot point in a series of books was changed in such a small amount of time, so it's likely that Tom Clancy got the idea first. There have been other variations on that theme, such as the Yokosuka MXY-7 kamikaze flying bomb (known to the Allies as "Baka", which is Japanese for "idiot"). But Tom Clancy seemed to pioneer the idea of a commercial passenger jet being subverted for the purpose of ground attack. 7 years after Tom Clancy's book was published the 9/11 hijackings happened. The TV series Black Mirror first aired in 2011, and the first episode was about terrorists kidnapping a princess and demanding that the UK PM perform an indecent act with a pig for her release. While the plot was a little extreme (the entire series is extreme) the basic concept of sexual extortion based on terrorist acts is something that could be done in real life, and if terrorists were inspired by this they are taking longer than expected to do it. Most democracies seem to end up with two major parties that are closely matched. Even if a government was strict about not negotiating with terrorists, it seems likely that terrorists demanding that a politician perform an unusual sex act on TV would change things: supporters would be divided into groups that support and oppose negotiating. Discussions wouldn't be as civil as when the negotiation involves money or freeing prisoners. If an election result was perceived to have been influenced by such terrorism then supporters of the side that lost would claim it to be unfair and reject the result. 
If the goal of terrorists was to cause chaos then that would be one way of achieving it, and they have had over 10 years to consider this possibility. Are we overdue for a terror attack inspired by Black Mirror?

    Jelmer Vernooij: Personal Streaming Audio Server

    For a while now, I've been looking for a good way to stream music from my home music collection on my phone. There are quite a few options for music servers that support streaming. However, Android apps that can stream music from one of those servers tend to be unmaintained, clunky or slow (or more than one of those). It is possible to use something that runs in a web browser, but that means no offline caching - which can be quite convenient in spots without connectivity, such as the Underground or other random bits of London with poor cell coverage.
    Server Most music servers today support some form of the Subsonic API. I've tried a couple, with mixed results:
    • supysonic; Python. Slow. Ran into some issues with subsonic clients. No real web UI.
    • gonic; Go. Works well & fast enough. Minimal web UI, i.e. no ability to play music from a browser.
    • airsonic; Java. Last in a chain of (abandoned) forks. More effort to get to work, and resource intensive.
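All of these servers speak (a dialect of) the Subsonic REST API, which authenticates with a salted MD5 token rather than sending the password directly. A sketch of building a ping request (server URL and credentials below are hypothetical placeholders):

```python
import hashlib
import secrets
from urllib.parse import urlencode


def subsonic_url(base, user, password, endpoint="ping", client="demo"):
    """Build a Subsonic REST request using the API's token auth scheme:
    t = md5(password + salt), s = salt, so the password never travels."""
    salt = secrets.token_hex(8)
    token = hashlib.md5((password + salt).encode()).hexdigest()
    params = {"u": user, "t": token, "s": salt,
              "v": "1.16.1", "c": client, "f": "json"}
    return f"{base}/rest/{endpoint}?{urlencode(params)}"
```

Fetching the resulting URL against a running server should return a JSON `subsonic-response` with `"status": "ok"`.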
    Eventually, I've settled on Navidrome. It's got a couple of things going for it:
    • Good subsonic implementation that worked with all the Android apps I used it with.
    • Great Web UI for use in a browser.
    I run Navidrome in Kubernetes. It's surprisingly easy to get going. Here's the deployment I'm using:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: navidrome
    spec:
     replicas: 1
     selector:
       matchLabels:
         app: navidrome
     template:
       metadata:
         labels:
           app: navidrome
       spec:
         containers:
           - name: navidrome
             image: deluan/navidrome:latest
             imagePullPolicy: Always
             resources:
               limits:
                 cpu: ".5"
                 memory: "2Gi"
               requests:
                 cpu: "0.1"
                 memory: "10M"
             ports:
               - containerPort: 4533
             volumeMounts:
               - name: navidrome-data-volume
                 mountPath: /data
               - name: navidrome-music-volume
                 mountPath: /music
             env:
               - name: ND_SCANSCHEDULE
                 value: 1h
               - name: ND_LOGLEVEL
                 value: info
               - name: ND_SESSIONTIMEOUT
                 value: 24h
               - name: ND_BASEURL
                 value: /navidrome
             livenessProbe:
                httpGet:
                  path: /navidrome/app
                  port: 4533
                initialDelaySeconds: 30
                periodSeconds: 3
                timeoutSeconds: 90
         volumes:
            - name: navidrome-data-volume
              hostPath:
               path: /srv/navidrome
               type: Directory
            - name: navidrome-music-volume
              hostPath:
                path: /srv/media/music
                type: Directory
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: navidrome
    spec:
      ports:
        - port: 4533
          name: web
      selector:
        app: navidrome
      type: ClusterIP
    
    At the moment, this deployment is still tied to the machine with my music on it since it relies on hostPath volumes, but I'm planning to move that to Ceph in the future. I then expose this service on /navidrome on my private domain (here replaced with example.com) using an Ingress:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: navidrome
    spec:
      ingressClassName: nginx
      rules:
      - host: example.com
        http:
          paths:
          - backend:
              service:
                name: navidrome
                port:
                  name: web
            path: /navidrome(/|$)(.*)
            pathType: Prefix
    
    Client On the desktop, I usually just use Navidrome's web interface. Clementine's support for Subsonic is also okay. sublime-music is meant to be a music player specifically for Subsonic, but I've not really found it stable enough for day-to-day usage. There are various Android clients for Subsonic, but I've only really considered the Open Source ones that are hosted on F-Droid. Most of those are abandoned, but D-Sub works pretty well - as does my preferred option, Subtracks.

    Jonathan McDowell: Upgrading from a CC2531 to a CC2538 Zigbee coordinator

    Previously I set up a CC2531 as a Zigbee coordinator for my home automation. This has turned out to be a good move, with the 4-gang wireless switch being particularly useful. However the range of the CC2531 is fairly poor; it has a simple PCB antenna. It's also a very basic device. I set about trying to improve the range and scalability and settled upon a CC2538 + CC2592 device, which features an MMCX antenna connector. This device also has the advantage that it's ARM based, which I'm hopeful means I might be able to build some firmware myself using a standard GCC toolchain. For now I fetched the JetHome firmware from https://github.com/jethome-ru/zigbee-firmware/tree/master/ti/coordinator/cc2538_cc2592 (JH_2538_2592_ZNP_UART_20211222.hex) - while it's possible to do USB directly with the CC2538, my board doesn't have those bits so going the external USB UART route is easier. The device had some existing firmware on it, so I needed to erase this to force a drop into the boot loader. That means soldering up the JTAG pins and hooking it up to my Bus Pirate for OpenOCD goodness.
    OpenOCD config
    source [find interface/buspirate.cfg]
    buspirate_port /dev/ttyUSB1
    buspirate_mode normal
    buspirate_vreg 1
    buspirate_pullup 0
    transport select jtag
    source [find target/cc2538.cfg]
    
    Steps to erase
    $ telnet localhost 4444
    Trying ::1...
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    Open On-Chip Debugger
    > mww 0x400D300C 0x7F800
    > mww 0x400D3008 0x0205
    > shutdown
    shutdown command invoked
    Connection closed by foreign host.
    
    At that point I can switch to the UART connection (on PA0 + PA1) and flash using cc2538-bsl:
    $ git clone https://github.com/JelmerT/cc2538-bsl.git
    $ cc2538-bsl/cc2538-bsl.py -p /dev/ttyUSB1 -e -w -v ~/JH_2538_2592_ZNP_UART_20211222.hex
    Opening port /dev/ttyUSB1, baud 500000
    Reading data from /home/noodles/JH_2538_2592_ZNP_UART_20211222.hex
    Firmware file: Intel Hex
    Connecting to target...
    CC2538 PG2.0: 512KB Flash, 32KB SRAM, CCFG at 0x0027FFD4
    Primary IEEE Address: 00:12:4B:00:22:22:22:22
        Performing mass erase
    Erasing 524288 bytes starting at address 0x00200000
        Erase done
    Writing 524256 bytes starting at address 0x00200000
    Write 232 bytes at 0x0027FEF88
        Write done
    Verifying by comparing CRC32 calculations.
        Verified (match: 0x74f2b0a1)
    
    I then wanted to migrate from the old device to the new without having to repair everything. So I shut down Home Assistant and backed up the CC2531 network information using zigpy-znp (which is already installed for Home Assistant):
    python3 -m zigpy_znp.tools.network_backup /dev/zigbee > cc2531-network.json
    
    I copied the backup to cc2538-network.json and modified the coordinator_ieee to be the new device's MAC address (rather than end up with 2 devices claiming the same MAC if/when I reuse the CC2531) and did:
    python3 -m zigpy_znp.tools.network_restore --input cc2538-network.json /dev/ttyUSB1
    
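That edit between backup and restore can be scripted rather than done by hand. A small sketch based on the coordinator_ieee field named above (the MAC addresses in the usage below are placeholders, matching the anonymised ones in this post):

```python
import json


def retarget_backup(src, dst, new_ieee):
    """Copy a zigpy-znp network backup, pointing coordinator_ieee at the
    new stick's MAC so two devices never claim the same address."""
    with open(src) as f:
        backup = json.load(f)
    backup["coordinator_ieee"] = new_ieee
    with open(dst, "w") as f:
        json.dump(backup, f, indent=2)
```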
    The old CC2531 needed to be unplugged first, otherwise I got a "RuntimeError: Network formation refused, RF environment is likely too noisy. Temporarily unscrew the antenna or shield the coordinator with metal until a network is formed." error. After that I updated my udev rules to map the CC2538 to /dev/zigbee and restarted Home Assistant. To my surprise it came up and detected the existing devices without any extra effort on my part. However that resulted in 2 coordinators being shown in the visualisation, with the old one turning up as unk_manufacturer. Fixing that involved editing /etc/homeassistant/.storage/core.device_registry and removing the entry which had the old MAC address, removing the device entry in /etc/homeassistant/.storage/zha.storage for the old MAC and then finally firing up sqlite to modify the Zigbee database:
    $ sqlite3 /etc/homeassistant/zigbee.db
    SQLite version 3.34.1 2021-01-20 14:10:07
    Enter ".help" for usage hints.
    sqlite> DELETE FROM devices_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM endpoints_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM in_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM neighbors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11' OR device_ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM node_descriptors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM out_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> .quit
    
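The six near-identical DELETE statements can be generated in a loop. A sketch using Python's sqlite3 module, with the table list taken from the session above (back up zigbee.db before running anything like this):

```python
import sqlite3

# Tables observed in the sqlite session above; neighbors_v6 is handled
# separately because it references the address in two columns.
SINGLE_IEEE_TABLES = ["devices_v6", "endpoints_v6", "in_clusters_v6",
                      "node_descriptors_v6", "out_clusters_v6"]


def purge_old_coordinator(db_path, old_ieee):
    """Delete every row referencing the retired coordinator's IEEE address."""
    con = sqlite3.connect(db_path)
    with con:  # commits on success, rolls back on error
        for table in SINGLE_IEEE_TABLES:
            con.execute(f"DELETE FROM {table} WHERE ieee = ?", (old_ieee,))
        con.execute(
            "DELETE FROM neighbors_v6 WHERE ieee = ? OR device_ieee = ?",
            (old_ieee, old_ieee))
    con.close()
```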
    So far it all seems a bit happier than with the CC2531; I've been able to pair a light bulb that was previously detected but would not integrate, which suggests the range is improved. (This post is another in the set of things I should write down so I can just grep my own website when I forget what I did to do "foo".)

    Russell Coker: Big Smart TVs

    Recently a relative who owned a 50" plasma TV asked me for advice on getting a new TV. Looking at the options, all the TVs seem to be smart TVs (running Android with built-in support for YouTube and Netflix) and most of them seem to be 4K resolution. 4K doesn't provide much benefit now as most people don't have Blu-ray players and discs, there aren't a lot of 4K YouTube videos, and most streaming services don't offer 4K resolution. But as 4K doesn't cost much more it doesn't make sense not to get it. I gave my relative a list of good options from Kogan (the Australian company that has the cheapest consumer electronics) and they chose a 65" 4K smart TV from Kogan. That only cost $709 plus delivery, which is reasonably affordable for something that will presumably last for a long time and be used by many people. Netflix in a web browser won't do more than FullHD resolution unless you use Edge on Windows 10. But Netflix on the smart TV has a row advertising 4K shows, which indicates that 4K is supported. There are some 4K videos on YouTube but not a lot at this time. Size It turns out that 65" is very big. It didn't fit on the table that had been used for the 50" plasma TV. Rtings.com has a good article about TV size vs distance [1]. According to their calculations, if you want to sit 2 meters away from a TV and have a 30 degree field of view (recommended for mixed use) then a 45" TV is ideal. According to their calculations on pixel sizes, if you have a FullHD display (or the common modern case of a FullHD signal displayed on a 4K monitor) that is between 1.8 and 2.5 meters away from you then a 45" TV is the largest that will be useful. To take proper advantage of a monitor larger than 45" at a distance of 2 meters you need a 4K signal. If you have a 4K signal then you can get the best results by having a 45" monitor less than 1.8 meters away from you. As most TV watching involves fewer than 3 people it shouldn't be inconvenient to be less than 1.8 meters away from the TV. 
The 65" TV weighs 21kg according to the specs. That isn't a huge amount for something small, but for something as large and inconvenient as a 65" TV it's impossible for one person to safely move. Kogan sells 43" TVs that weigh 6kg; that's something that most adults could move with one hand. I think that a medium size TV that can be easily moved to a convenient location would probably give an equivalent viewing result to an extremely large TV that can't be moved at all. I currently have a 40" LCD TV; the only reason I have that is because a friend didn't need it, and the previous 32" TV that I used was adequate for my needs. Most of my TV viewing is on a 28" monitor, which I find adequate for 2 or 3 people. So I generally wouldn't recommend a 65" TV for anyone. Android for TVs Android wasn't designed for TVs and doesn't work that well on them. Having buttons on the remote for Netflix and YouTube is handy, but it would be nice if there were programmable buttons for other commonly used apps, or a way to switch between the last few apps (like ALT-TAB on a PC). One good feature of Android for TV is that it can display a set of rows of shows (similar to the Netflix method of displaying) where each row is from a different app. The apps I've installed on that TV which support the row view are Netflix, YouTube, YouTube Music, ABC iView (that's the Australian ABC), 7plus, 9now, and SBS on Demand. That's nice; now we just need channel 10's app to support that to have coverage of all Australian free TV stations in the Android TV interface. Conclusion It's a nice TV and it generally works well. Android is OK for TV use but far from great. It is running Android version 9; maybe a newer version of Android works better on TVs. It's too large for reasonable people to use in a home. I've seen smaller TVs used for 20 people in an office in a video conference. It's cheap enough that most people can afford it, but it's easier and more convenient to have something smaller and lighter.
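The field-of-view figures quoted from Rtings follow from simple trigonometry. A small sketch (assuming a 16:9 panel, which both FullHD and 4K are) computing the diagonal that fills a given viewing angle at a given distance:

```python
import math


def diagonal_for_fov(distance_m, fov_deg, aspect=(16, 9)):
    """Diagonal (in inches) of a screen whose width subtends fov_deg
    degrees when viewed from distance_m metres away."""
    width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    w, h = aspect
    diagonal_m = width_m * math.hypot(w, h) / w
    return diagonal_m / 0.0254  # metres to inches
```

At 2 meters with a 30 degree field of view this gives roughly 48", the same ballpark as the ~45" recommendation quoted above.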

    Russell Coker: Curiosity Stream

    I have recently signed up for the Curiosity Stream [1] documentary site, which is designed to be like Netflix but for non-fiction content only. The service costs $US15 per annum or $US52 per annum for 4K (I think the 4K service was about $US120 per annum when I signed up). The extra price for 4K seems excessive: while it is in line with the bandwidth requirements, a large portion of the costs of the service would be about user support and running the service reliably, for which 4K makes little difference. My aim in subscribing was to just get a service like Netflix with new documentary content, as I have watched every documentary I want to watch on Netflix (I think I've watched over 1000 hours of Netflix documentaries). So naturally I compare the service to Netflix, and I found that it doesn't compare well. Curiosity Stream (CS) has no button to skip the intro and has a problem with using the right arrow to skip forward (it seems to only work once and then I have to use the mouse); this costs me about 30 seconds for each episode, which adds up when watching documentaries at 1.5* speed. The method of controlling the viewing is a little clunky: sometimes the popup menu at the bottom of the screen to control playback doesn't disappear by itself until you mouse over it, and the space bar doesn't select pause, instead it selects the last action. CS allows selecting individual episodes for your watch list instead of entire series; this could be useful for some people but I just find it annoying, though it might be good for classroom use. The method of searching for new shows to watch isn't as good as the Netflix method or the way things are displayed on Android TV (which seems to have an API for multiple video providers to show a list of shows with one row per provider). Some of these things might seem OK if you haven't used other services, but if you are used to Netflix and Amazon Prime then it will seem clunky. 
The amount of content on Curiosity Stream doesn't seem that large. I don't know how to get a full measure of it, but when I search for things I seem to get fewer results than on Netflix; that could be something to do with what I'm searching for. In terms of value for money, $US15 per annum for the content that CS provides is a good deal. Netflix overall offers better value for home users, having fiction as well as non-fiction in large quantities. But for documentary content $US15 per annum is pretty cheap. I recommend signing up for CS, but for most people signing up for Netflix first will be a good option.
    [1] https://curiositystream.com/

    Next.