Search Results: "lus"

16 September 2021

Chris Lamb: On Colson Whitehead's Harlem Shuffle

Colson Whitehead's latest novel, Harlem Shuffle, was always going to be widely reviewed, if only because his last two books won Pulitzer prizes. Still, after enjoying both The Underground Railroad and The Nickel Boys, I was certainly going to read his next book, regardless of what the critics were saying; indeed, it was actually quite agreeable to float above the manufactured energy of the book's launch. Saying that, I was encouraged to listen to an interview with the author by Ezra Klein. Now I had heard Whitehead speak once before when he accepted the Orwell Prize in 2020, and once again he came across as a pretty down-to-earth guy. Or, if I were to emulate the detached and cynical tone Whitehead embodied in The Nickel Boys: after winning so many literary prizes in the past few years, he has clearly rehearsed how to respond to the clichéd questions authors must be asked in every interview. With the obligatory throat-clearing of 'so, how did you get into writing?', for instance, Whitehead replies with his part of the catechism that 'It seemed like being a writer could be a cool job. You could work from home and not talk to people.' The response is the right combination of cute and self-effacing... and with its slight tone-deafness towards enforced isolation, it was no doubt honed before Covid-19.

Harlem Shuffle tells three separate stories about Ray Carney, a furniture salesman and 'fence' for stolen goods in New York in the 1960s. Carney doesn't consider himself a genuine criminal though, and there's a certain logic to his relativistic morality. After all, everyone in New York City is on the take in some way, and if some 'lightly used items' in Carney's shop happened to have had 'previous owners', well, that's not quite his problem. 'Nothing solid in the city but the bedrock,' as one character dryly observes. Yet as Ezra pounces on in his NYT interview mentioned above, the focus on the Harlem underworld means there are very few women in the book, and Whitehead's circular response (ah well, it's a book about the criminals at that time!) was a little unsatisfying. Not only did it feel uncharacteristically slippery of someone justly lauded for his unflinching power of observation (after all, it was the author who decided what to write about in the first place), it foreclosed on the opportunity to delve into why the heist and caper genres (from The Killing, The Feather Thief, Ocean's 11, etc.) have historically been a 'male' mode of storytelling. Perhaps knowing this to be the case, the conversation quickly steered towards Ray Carney's wife, Elizabeth, the only woman in the book who could be said to possess some plausible interiority. The following off-hand remark from Whitehead caught my attention:
My wife is convinced that [Elizabeth] knows everything about Carney's criminal life, and is sort of giving him a pass. And I'm not sure if that's true. I have to figure out exactly what she knows and when she knows it and how she feels about it.
I was quite taken by this, although not simply due to its effect on the story itself. As in, it immediately conjured up a charming picture of Whitehead's domestic arrangements: not only does Whitehead's wife feel free to disagree about what one of Whitehead's 'own' characters knows or believes, but Colson has no problem whatsoever sharing that disagreement with the public at large. (It feels somehow natural that Whitehead's wife believes her counterpart knows more than she lets on, whilst Whitehead himself imbues the protagonist's wife with a kind of neo-Victorian innocence.) I'm minded to agree with Whitehead's partner myself, if only due to the passages where Elizabeth is studiously ignoring Carney's otherwise unexplained freak-outs. But all of these meta-thoughts simply underline just how emancipatory the Death of the Author can be. This product of academic literary criticism (the term was coined in Roland Barthes' 1967 essay of the same name) holds that the original author's intentions, ideas or biographical background carry no especial weight in determining how others should interpret their work. It is usually understood as meaning that a writer's own views are no more valid or 'correct' than the views held by someone else. (As an aside, I've found that most readers who encounter this concept for the first time have been reading books in this way since they were young. But the opposite is invariably true with cinephiles, who often have a bizarre obsession with researching or deciphering the 'true' interpretation of a film.) And with all that in mind, can you think of a more wry example of the freeing (and fun) nature of the Death of the Author than an author's own partner dissenting from their (Pulitzer Prize-winning) husband on the position of a lynchpin character?
The 1964 Harlem riot began after James Powell, a 15-year-old African American, was shot and killed by Thomas Gilligan, an NYPD police officer in front of 10s of witnesses. Gilligan was subsequently cleared by a grand jury.
As it turns out, the reviews for Harlem Shuffle have been almost universally positive, and after reading it in the two days after its release, I would certainly agree it is an above-average book. But it didn't quite take hold of me in the way that The Underground Railroad or The Nickel Boys did, especially the later chapters of The Nickel Boys that were set in contemporary New York and could thus make some (admittedly fairly explicit) connections from the 1960s to the present day; that kind of connection is not there in Harlem Shuffle, or at least I did not pick up on it during my reading. I can see why one might take exception to that, though. For instance, it is certainly true that the week-long Harlem Riot forms a significant part of the plot, and some events in particular are entirely contingent on the ramifications of this momentous event. But it's difficult to argue that the riot's impact is truly integral to the story, so not only is this uprising against police brutality almost regarded as a background event, any contemporary allusion to the murder of George Floyd is subsequently watered down. It's nowhere near the historical rubbernecking of Forrest Gump (1994), of course, but that's not a battle you should ever be fighting. Indeed, whilst a certain smoothness of affect is to be priced into the Whitehead reading experience, my initial overall reaction to Harlem Shuffle was fairly flat, despite all the action and intrigue on the page. The book perhaps belies its origins as a work conceived during quarantine: after all, it is essentially composed of three loosely connected novellas, almost as if the unreality and mental turbulence of lockdown prevented the author from performing the psychological 'deep work' of producing a novel-length text with his usual depth of craft. A few other elements chimed with this being a 'lockdown novel' as well, particularly the book's preoccupation with the sheer physicality of the city compared to the usual complex interplay between its architecture and its inhabitants. This felt like it had been directly absorbed into the book from the author walking around his deserted city, and thus being able to take in details for the first time:
The doorways were entrances into different cities; no, different entrances into one vast, secret city. Ever close, adjacent to all you know, just underneath. If you know where to look.
And I can't fail to mention that you can almost touch Whitehead's sublimated hunger to eat out again as well:
Stickups were chops: they cook fast and hot, you're in and out. A stakeout was ribs: fire down low, slow, taking your time. [...] Sometimes when Carney jumped into the Hudson when he was a kid, some of that stuff got into his mouth. The Big Apple Diner served it up and called it coffee.
More seriously, however, the relatively thin personalities of minor characters then reminded me of the simulacrum of Zoom-based relationships, and the essentially unsatisfactory endings to the novellas felt reminiscent of lockdown pseudo-events that simply fizzle out without a bang. One of the stories ties up loose ends with: 'These things were usually enough to terminate a mob war, and they appeared to end the hostilities in this case as well.' They did? Well, okay, I guess.
The corner of 125th Street and Morningside Avenue in 2019, the purported location of Carney's fictional furniture store. Signage plays a prominent role in Harlem Shuffle, possibly due to the author's quarantine walks.
Still, it would be unfair to characterise myself as 'disappointed' with the novel, and none of this piece should be taken as really deep criticism. The book certainly was entertaining enough, and pretty funny in places as well:
Carney didn't have an etiquette book in front of him, but he was sure it was bad manners to sit on a man's safe. [...] The manager of the laundromat was a scrawny man in a saggy undershirt painted with sweat stains. Launderer, heal thyself.
Yet I can't shake the feeling that every book you write is a book that you don't, and so we might need to hold out a little longer for Whitehead's 'George Floyd novel'. (Although it is for others to say how much of this sentiment is the expectations of a White Reader for The Black Author to ventriloquise the pain of 'their' community.) Some room for personal critique is surely permitted. I dearly missed the junk food energy of the dry and acerbic observations that run through Whitehead's previous work. At one point he had a good line on the model tokenisation that lurks behind 'The First Negro to...' labels, but the callbacks to this idea ceased without any payoff. Similar things happened with the not-so-subtle critiques of the American Dream:
'Entrepreneur?' Pepper said the last part like manure. 'That's just a hustler who pays taxes.' [...] 'One thing I've learned in my job is that life is cheap, and when things start getting expensive, it gets cheaper still.'
Ultimately, though, I think I just wanted more. I wanted a deeper exploration of how the real power in New York is not wielded by individual street hoodlums or even the cops but in the form of real estate, essentially serving as a synecdoche for Capital as a whole. (A recent take on this can be felt in Jed Rothstein's 2021 documentary, WeWork: Or the Making and Breaking of a $47 Billion Unicorn, and it is perhaps pertinent to remember that the US President at the time this novel was written was affecting to be a real estate tycoon.) Indeed, just like the concluding scenes of J. J. Connolly's Layer Cake, although you can certainly pull off a cool heist against the Man, power ultimately resides in those who control the means of production... and a homespun furniture salesman on the corner of 125 & Morningside just ain't that. There are some nods to this kind of analysis in the conclusion of the final story ('Their heist unwound as if it had never happened, and Van Wyck kept throwing up buildings.'), but, again, I would have simply liked more. And when I attempted to file this book away into the broader media landscape, given the current cultural visibility of 1960s pop culture (e.g. One Night in Miami (2020), Judas and the Black Messiah (2021), Summer of Soul (2021), etc.), Harlem Shuffle also seemed like a missed opportunity to critically analyse our (highly-qualified) longing for the civil rights era. I can certainly understand why we might look fondly on the cultural products from a period when politics was less alienated, when society was less atomised, and when it was still possible to imagine meaningful change, but in this dimension at least, Harlem Shuffle seems to merely contribute to this nostalgic escapism.

14 September 2021

Jonathan Dowland: GHC rewrite rules

The Glasgow Haskell Compiler (GHC) has support for user-supplied rewrite rules, which are applied during one of the compiler's optimisation stages. An example rule is
 {-# RULES
      "streamFilter fuse" forall f g xs.
          streamFilter f (streamFilter g xs) = streamFilter (f.g) xs
  #-}
I spent some time today looking at these more closely. In order for rewrite rules to be applied, optimisation needs to be enabled. This conflicts with interactive use of GHC, so you can't explore these things in GHCi. I think rewrite rules are enabled by default (with optimisation), but you can ask for them explicitly. When investigating these it's also useful to ask ghc to always recompile, otherwise you have to change the source or manually remove .hi or .o files (etc.) between invocations. A starting set of command-line options to use, then, is:
-O -fenable-rewrite-rules -fforce-recomp
GHC runs several compilation stages, and the source program is transformed into several different languages or language dialects as it goes. Before the phase where rewrite rules are applied, some other optimisations take place, and the source gets desugared. You can see the results of the desugaring by passing the argument -ddump-ds. Here's an example program
main = print (unStream (streamFilter (>3) (streamFilter (<10)
    (mkStream [0..20]))))
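(For reference, the example assumes stream definitions roughly like the following; the real ones come from the striot library, so treat this as an illustrative sketch only.)

-- Sketch of the stream API the example relies on (not striot's actual code).
newtype Stream a = Stream { unStream :: [a] }

-- Wrap a plain list as a Stream.
mkStream :: [a] -> Stream a
mkStream = Stream

-- Keep only the elements satisfying the predicate.
streamFilter :: (a -> Bool) -> Stream a -> Stream a
streamFilter p (Stream xs) = Stream (filter p xs)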
And here's what the example program looks like after the first pass of optimisation and desugaring:
main
  = print
      ($fShow[] $fShowInteger)
      (unStream
         (streamFilter
            (let {
               ds
               ds = 3 } in
             \ ds -> > $fOrdInteger ds ds)
            (streamFilter
               (let {
                  ds
                  ds = 10 } in
                 \ ds -> < $fOrdInteger ds ds)
               (mkStream (enumFromTo $fEnumInteger 0 20)))))
(Note: I used -dsuppress-all and -dsuppress-uniques to improve the clarity of the above output. See Suppressing unwanted information for further details.) Those short-hand sections ((>3) and (<10)) in the input program are expanded to something quite a lot longer. Out of curiosity I tried it again with plain lambdas, not sections, and the result was smaller:
main
  = print
      ($fShow[] $fShowInteger)
      (unStream
         (streamFilter
            (\ x -> > $fOrdInteger x 3)
            (streamFilter
               (\ x -> < $fOrdInteger x 10)
               (mkStream (enumFromTo $fEnumInteger 0 20)))))
Rewrite rules happen after this. Once they're done (and several other passes), the program is translated into an intermediate representation called Tidy Core. This language faintly resembles the input Haskell. GHC will output the Tidy Core program if you supply the argument -ddump-simpl. Here's most of the program in Tidy Core (I've substituted some constants for clarity):
main  = hPutStr' stdout main1 True
main1 = $fShowInteger_$cshowList (catMaybes1 (main_go 0)) []
main_go
  = \ x ->
      case gtInteger# x 20 of {
        DEFAULT ->
          case ltInteger# x 10 of {
            DEFAULT -> main_go (plusInteger x 1);
            1# ->
              case gtInteger# x 3 of {
                DEFAULT -> main_go (plusInteger x 1);
                1# -> : (Just x) (main_go (plusInteger x 1))
              }
          };
        1# -> []
      }
After Tidy Core, GHC continues to translate the program into other forms (including STG, CMM, ASM) and you can ask GHC to dump those representations too, but this is all downstream from the rewrites so not relevant to them. The rewrite rule at the top of this blog post is never applied: it doesn't get a chance, because the function it operates on (streamFilter) is inlined by GHC. GHC can detect this in some circumstances (-Winline-rule-shadowing). You can instruct GHC to report on which rules fired with -ddump-rule-firings and see before-and-after snippets of Tidy Core for each rule applied with -ddump-rule-rewrites. I played around with adding {-# NOINLINE functionName #-} pragmas to disable inlining of various functions to try and provoke a situation where the above rule could match, but it was a losing battle: GHC's built-in optimisations are just too good. But it's also moot: the outcome I want (the filters to be fused) is happening; it's just the built-in rewrite rules that are achieving it, once striot's functions have been inlined away.
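As a hedged sketch of that experiment (assuming the example lives in a file called Main.hs), one would mark the function with a NOINLINE pragma and compile with the rule-reporting flags mentioned above:

-- Keep streamFilter out of the inliner so the "streamFilter fuse" rule has a
-- chance to match; as noted above, GHC's own optimisations may still get
-- there first, so this is illustrative only.
{-# NOINLINE streamFilter #-}

-- Compile with, for example:
--   ghc -O -fenable-rewrite-rules -fforce-recomp -ddump-rule-firings -ddump-rule-rewrites Main.hs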

12 September 2021

Russ Allbery: DocKnot 5.00

This release is the culmination of a project that I've been wanting to do for two years, but nearly all the work was done in the past week. That experience made me rethink some of my skepticism, but I'll get to that part of the story later. In March of 1999, I got tired of writing HTML by hand and wrote a small program called spin that implemented a macro language that translated into HTML. This makes it one of the oldest programs for which I have a continuous development history, predating podlators by three months. I think only News::Gateway (now very dormant) and Term::ANSIColor (still under active development but very stable) are older, as long as I'm not counting orphaned packages like newsyslog. I've used spin continuously ever since. It's grown features and an ecosystem of somewhat hackish scripts to do web publishing things I've wanted over the years: journal entries like this one, book reviews, a simple gallery (with some now-unfortunate decisions about maximum image size), RSS feeds, and translation of lots of different input files into HTML. But the core program itself, in all those years, has been one single Perl script written mostly in my Perl coding style from the early 2000s before I read Perl Best Practices. My web site is long overdue for an overhaul. Just to name a couple of obvious problems, it looks like trash on mobile browsers, and I'm using URL syntax from the early days of the web that, while it prompts some nostalgia for tildes, means all the URLs are annoyingly long and embed useless information such as the fact that each page is written in HTML. Its internals also use a lot of ad hoc microformats (a bit of RFC 2822 here, a text-based format with significant indentation there, a weird space-separated database) and are supported by programs that extract meaning from human-written pages and perform automated updates to them rather than having a clear separation between structure and data. This will be a very large project, but it's the sort of quixotic personal project that I enjoy. Maintaining my own idiosyncratic static site generator is almost certainly not an efficient use of my time compared to, say, converting everything to Hugo. But I have 3,428 pages (currently) written in the thread macro language, plus numerous customizations that cater to my personal taste and interests, and, most importantly, I like having a highly customized system that I know exactly how to automate. The blocker has been that I didn't want to work on spin as it existed. It badly needed a structural overhaul and modernization, and even more badly needed a test suite since every release involved tedious manual testing by poring over diffs between generations of the web site. And that was enough work to be intimidating, so I kept putting it off. I've separately been vaguely aware that I have been spending too much time reading Twitter (specifically) and the news (in general). It would be one thing if I were taking in that information to do something productive about it, but I haven't been. It's just doomscrolling. I've been thinking about taking a break for a while but it kept not sticking, so I decided to make a concerted effort this week. It took about four days to stop wanting to check Twitter and forcing myself to go do something else productive or at least play a game instead. Then I managed to get started on my giant refactoring project, and holy shit, Twitter has been bad for my attention span! I haven't been able to sustain this level of concentration for hours at a time in years.
Twitter's not the only thing to blame (there are a few other stressors that I've fixed in the past couple of years), but it's obviously a huge part. Anyway, this long personal ramble is prelude to the first release of DocKnot that includes my static site generator. This is not yet the full tooling from my old web tools page; specifically, it's missing faq2html, cl2xhtml, and cvs2xhtml. (faq2html will get similar modernization treatment, cvs2xhtml will probably be rewritten in Perl since I have some old, obsolete scripts that may live in CVS forever, and I may retire cl2xhtml since I've stopped using the GNU ChangeLog format entirely.) But DocKnot now contains the core of my site generation system, including the thread macro language, POD conversion (by way of Pod::Thread), and RSS feeds. Will anyone else ever use this? I have no idea; realistically, probably not. If you were starting from scratch, I'm sure you'd be better off with one of the larger and more mature static site generators that's not the idiosyncratic personal project of one individual. It is packaged for Debian because it's part of the tool chain for generating files (specifically README.md) that are included in every package I maintain, and thus is part of the transitive closure of Debian main, but I'm not sure anyone will install it from there for any other purpose. But for once making something for someone else isn't the point. This is my quirky, individual way to maintain web sites that originated in an older era of the web and that I plan to keep up-to-date (I'm long overdue to figure out what they did to HTML after abandoning the XHTML approach) because it brings me joy to do things this way. In addition to adding the static site generator, this release also has the regular sorts of bug fixes and minor improvements: better formatting of software pages for software that's packaged for Debian, not assuming every package has a TODO file, and ignoring Autoconf 2.71 backup files when generating distribution tarballs. You can get the latest version of DocKnot from CPAN as App-DocKnot, or from its distribution page. I know I haven't yet updated my web tools page to reflect this move, or changed the URL in the footer of all of my pages. This transition will be a process over the next few months and will probably prompt several more minor releases.

Vincent Bernat: Short feedback on Cisco pyATS and Genie Parser

Cisco pyATS is a framework for network automation and testing. It includes, among other things, an open-source multi-vendor set of parsers and models, Genie Parser. It features 2700 parsers for various commands over many network OSes. On paper, this seems like a great tool!
>>> from genie.conf.base import Device
>>> device = Device("router", os="iosxr")
>>> # Hack to parse outputs without connecting to a device
>>> device.custom.setdefault("abstraction", {})["order"] = ["os", "platform"]
>>> cmd = "show route ipv4 unicast"
>>> output = """
... Tue Oct 29 21:29:10.924 UTC
...
... O    10.13.110.0/24 [110/2] via 10.12.110.1, 5d23h, GigabitEthernet0/0/0/0.110
... """
>>> device.parse(cmd, output=output)
{'vrf': {'default': {'address_family': {'ipv4': {'routes': {'10.13.110.0/24': {'route': '10.13.110.0/24',
       'active': True,
       'route_preference': 110,
       'metric': 2,
       'source_protocol': 'ospf',
       'source_protocol_codes': 'O',
       'next_hop': {'next_hop_list': {1: {'index': 1,
          'next_hop': '10.12.110.1',
          'outgoing_interface': 'GigabitEthernet0/0/0/0.110',
          'updated': '5d23h'}}}}}}}}}}
First disappointment: pyATS is closed-source, with some exceptions. This is quite annoying if you run into some issues outside Genie Parser. For example, although pyATS is using the ssh command, it cannot leverage my ssh_config file: pyATS resolves hostnames before providing them to ssh. There is no plan to open source pyATS. Then, Genie Parser has two problems:
  1. The data models used are dependent on the vendor and OS, despite the documentation saying otherwise. For example, the data model used for IPv4 interfaces is different between NX-OS and IOS-XR.
  2. The parsers rely on line-by-line regular expressions to extract data and some Python code as glue. This is fragile and may break silently.
To illustrate the second point, let's assume the output of show ipv4 vrf all interface is:
  Loopback10 is Up, ipv4 protocol is Up
    Vrf is default (vrfid 0x60000000)
    Internet protocol processing disabled
  Loopback30 is Up, ipv4 protocol is Down [VRF in FWD reference]
    Vrf is ran (vrfid 0x0)
    Internet address is 203.0.113.17/32
    MTU is 1500 (1500 is available to IP)
    Helper address is not set
    Directed broadcast forwarding is disabled
    Outgoing access list is not set
    Inbound  common access list is not set, access list is not set
    Proxy ARP is disabled
    ICMP redirects are never sent
    ICMP unreachables are always sent
    ICMP mask replies are never sent
    Table Id is 0x0
Because the regular expression to parse an interface name does not expect the extra data after the interface state, Genie Parser ignores the line starting the definition of Loopback30 and parses the output to this structure:1
 
  "Loopback10":  
    "int_status": "up",
    "oper_status": "up",
    "vrf": "ran",
    "vrf_id": "0x0",
    "ipv4":  
      "203.0.113.17/32":  
        "ip": "203.0.113.17",
        "prefix_length": "32"
       ,
      "mtu": 1500,
      "mtu_available": 1500,
      "broadcast_forwarding": "disabled",
      "proxy_arp": "disabled",
      "icmp_redirects": "never sent",
      "icmp_unreachables": "always sent",
      "icmp_replies": "never sent",
      "table_id": "0x0"
     
   
 
While this bug is simple to fix, this is an uphill battle. Any existing or future slight variation in the output of a command could trigger another similar undetected bug, despite the extended test coverage. I have reported and fixed several other silent parsing errors: #516, #529, and #530. A more robust alternative would have been to use TextFSM and to trigger a warning when some output is not recognized, like Batfish, a configuration analysis tool, does. In the future, we should rely on YANG for data extraction, but it is currently not widely supported. SNMP is still a valid possibility but much information is not accessible through this protocol. In the meantime, I would advise you to only use Genie Parser with caution.

  1. As an exercise, the astute reader is asked to write the code to extract the IPv4 from this structure.
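For what it's worth, here is one possible sketch of an answer to that exercise, operating on the structure shown above (the function name is mine, not part of Genie Parser):

def extract_ipv4(parsed):
    """Return, for each interface, the list of IPv4 prefixes reported."""
    result = {}
    for interface, data in parsed.items():
        ipv4 = data.get("ipv4", {})
        # Prefix entries are the keys whose value is a dict carrying an "ip"
        # field; scalar keys such as "mtu" or "proxy_arp" are skipped.
        result[interface] = [
            prefix for prefix, value in ipv4.items()
            if isinstance(value, dict) and "ip" in value
        ]
    return result

# With the (incorrectly merged) output above, this yields:
# {"Loopback10": ["203.0.113.17/32"]}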

6 September 2021

Vincent Bernat: Switching to the i3 window manager

I have been using the awesome window manager for 10 years. It is a tiling window manager, configurable and extendable with the Lua language. Using a general-purpose programming language to configure every aspect is a double-edged sword. Due to laziness and the apparent difficulty of adapting my configuration (about 3000 lines) to newer releases, I was stuck with the 3.4 version, whose last release is from 2013. It was time for a rewrite. Instead, I have switched to the i3 window manager, lured by the possibility of migrating to Wayland and Sway later with minimal pain. Using an embedded interpreter for configuration is not as important to me as it was in the past: it brings both complexity and brittleness.
i3 dual screen setup
Dual screen desktop running i3, Emacs, some terminals, including a Quake console, Firefox, Polybar as the status bar, and Dunst as the notification daemon.
The window manager is only one part of a desktop environment. There are several options for the other components. I am also introducing them in this post.

i3: the window manager i3 aims to be a minimal tiling window manager. Its documentation can be read from top to bottom in less than an hour. i3 organizes windows in a tree. Each non-leaf node contains one or several windows and has an orientation and a layout. This information arbitrates the window positions. i3 features three layouts: split, stacking, and tabbed. They are demonstrated in the screenshot below:
Example of layouts
Demonstration of the layouts available in i3. The main container is split horizontally. The first child is split vertically. The second one is tabbed. The last one is stacking.
Tree representation of the previous screenshot
Tree representation of the previous screenshot.
Most of the other tiling window managers, including the awesome window manager, use predefined layouts. They usually feature a large area for the main window and another area divided among the remaining windows. These layouts can be tuned a bit, but you mostly stick to a couple of them. When a new window is added, the behavior is quite predictable. Moreover, you can cycle through the various windows without thinking too much as they are ordered. While i3 is more flexible with its ability to build any layout on the fly, it can feel quite overwhelming as you need to visualize the tree in your head. At first, it is not unusual to find yourself with a complex tree with many useless nested containers. Moreover, you have to navigate windows using directions. It takes some time to get used to. I set up a split layout for Emacs and a few terminals, but most of the other workspaces are using a tabbed layout. I don't use the stacking layout. You can find many scripts trying to emulate other tiling window managers, but I tried to keep my setup free of such attempts and give myself a chance to get familiar with i3 itself. i3 can also save and restore layouts, which is quite a powerful feature. My configuration is quite similar to the default one and has less than 200 lines.

i3 companion: the missing bits i3's philosophy is to keep a minimal core and let the user implement missing features using the IPC protocol:
Do not add further complexity when it can be avoided. We are generally happy with the feature set of i3 and instead focus on fixing bugs and maintaining it for stability. New features will therefore only be considered if the benefit outweighs the additional complexity, and we encourage users to implement features using the IPC whenever possible. Introduction to the i3 window manager
While this is not as powerful as an embedded language, it is enough for many cases. Moreover, as high-level features may be opinionated, delegating them to small, loosely coupled pieces of code keeps them more maintainable. Libraries exist for this purpose in several languages. Users have published many scripts to extend i3: automatic layout and window promotion to mimic the behavior of other tiling window managers, window swallowing to put a new app on top of the terminal launching it, and cycling between windows with Alt+Tab. Instead of maintaining a script for each feature, I have centralized everything into a single Python process, i3-companion using asyncio and the i3ipc-python library. Each feature is self-contained into a function. It implements the following components:
make a workspace exclusive to an application
When a workspace contains Emacs or Firefox, I would like other applications to move to another workspace, except for the terminal which is allowed to intrude into any workspace. The workspace_exclusive() function monitors new windows and moves them if needed to an empty workspace or to one with the same application already running.
implement a Quake console
The quake_console() function implements a drop-down console available from any workspace. It can be toggled with Mod+ . This is implemented as a scratchpad window.
back and forth workspace switching on the same output
With the workspace back_and_forth command, we can ask i3 to switch to the previous workspace. However, this feature is not restricted to the current output. I prefer to have one keybinding to switch to the workspace on the next output and one keybinding to switch to the previous workspace on the same output. This behavior is implemented in the previous_workspace() function by keeping a per-output history of the focused workspaces.
create a new empty workspace or move a window to an empty workspace
To create a new empty workspace or move a window to an empty workspace, you have to locate a free slot and use workspace number 4 or move container to workspace number 4. The new_workspace() function finds a free number and uses it as the target workspace; a rough sketch of this idea appears after this list.
restart some services on output change
When adding or removing an output, some actions need to be executed: refresh the wallpaper, restart some components unable to adapt their configuration on their own, etc. i3 triggers an event for this purpose. The output_update() function also takes an extra step to coalesce multiple consecutive events and to check if there is a real change with the low-level library xcffib.
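As mentioned above, here is a rough sketch of the free-slot idea behind new_workspace(), using the i3ipc-python async API (illustrative only, not the actual implementation):

async def new_workspace(i3, event):
    """Switch to the first unused workspace number (sketch)."""
    workspaces = await i3.get_workspaces()
    used = {w.num for w in workspaces}
    free = next(n for n in range(1, 100) if n not in used)
    await i3.command(f"workspace number {free}")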
I will detail the other features as this post goes on. On the technical side, each function is decorated with the events it should react to:
@on(CommandEvent("previous-workspace"), I3Event.WORKSPACE_FOCUS)
async def previous_workspace(i3, event):
    """Go to previous workspace on the same output."""
The CommandEvent() event class is my way to send a command to the companion, using either i3-msg -t send_tick or binding a key to a nop command. The latter is used to avoid spawning a shell and an i3-msg process just to send a message. The companion listens to binding events and checks if this is a nop command.
bindsym $mod+Tab nop "previous-workspace"
There are other decorators to avoid code duplication: @debounce() to coalesce multiple consecutive calls, @static() to define a static variable, and @retry() to retry a function on failure. The whole script is a bit more than 1000 lines. I think this is worth a read as I am quite happy with the result.
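To illustrate, a @debounce() decorator along those lines could look roughly like the following sketch (based on asyncio, as the companion is; this is not the actual code):

import asyncio
import functools

def debounce(delay):
    """Coalesce calls happening within `delay` seconds into a single call."""
    def decorator(fn):
        task = None
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            nonlocal task
            # Drop any call still waiting to run and schedule a fresh one.
            if task is not None and not task.done():
                task.cancel()
            async def deferred():
                await asyncio.sleep(delay)
                await fn(*args, **kwargs)
            task = asyncio.create_task(deferred())
        return wrapper
    return decorator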

dunst: the notification daemon Unlike the awesome window manager, i3 does not come with a built-in notification system. Dunst is a lightweight notification daemon. I am running a modified version with HiDPI support for X11 and recursive icon lookup. The i3 companion has a helper function, notify(), to send notifications using DBus. container_info() and workspace_info() use it to display information about the container or the tree for a workspace.
Notification showing i3 tree for a workspace
Notification showing i3's tree for a workspace

polybar: the status bar i3 bundles i3bar, a versatile status bar, but I have opted for Polybar. A wrapper script runs one instance for each monitor. The first module is the built-in support for i3 workspaces. To not have to remember which application is running in a workspace, the i3 companion renames workspaces to include an icon for each application. This is done in the workspace_rename() function. The icons are from the Font Awesome project. I maintain a mapping between applications and icons. This is a bit cumbersome but it looks great.
i3 workspaces in Polybar
i3 workspaces in Polybar
For CPU, memory, brightness, battery, disk, and audio volume, I am relying on the built-in modules. Polybar's wrapper script generates the list of filesystems to monitor, and they only get displayed when available space is low. The battery widget turns red and blinks slowly when running out of power. Check my Polybar configuration for more details.
Various modules for Polybar
Polybar displaying various information: CPU usage, memory usage, screen brightness, battery status, Bluetooth status (with a connected headset), network status (connected to a wireless network and to a VPN), notification status, and speaker volume.
For Bluetooth, network, and notification statuses, I am using Polybar's ipc module: the next version of Polybar can receive arbitrary text on an IPC socket. The module is defined with a single hook to be executed at the start to restore the latest status.
[module/network]
type = custom/ipc
hook-0 = cat $XDG_RUNTIME_DIR/i3/network.txt 2> /dev/null
initial = 1
It can be updated with polybar-msg action "#network.send.XXXX". In the i3 companion, the @polybar() decorator takes the string returned by a function and pushes the update through the IPC socket. The i3 companion reacts to DBus signals to update the Bluetooth and network icons. The @on() decorator accepts a DBusSignal() object:
@on(
    StartEvent,
    DBusSignal(
        path="/org/bluez",
        interface="org.freedesktop.DBus.Properties",
        member="PropertiesChanged",
        signature="sa{sv}as",
        onlyif=lambda args: (
            args[0] == "org.bluez.Device1"
            and "Connected" in args[1]
            or args[0] == "org.bluez.Adapter1"
            and "Powered" in args[1]
        ),
    ),
)
@retry(2)
@debounce(0.2)
@polybar("bluetooth")
async def bluetooth_status(i3, event, *args):
    """Update bluetooth status for Polybar."""
The middle of the bar is occupied by the date and a weather forecast. The latter also uses the IPC mechanism, but the source is a Python script triggered by a timer.
Date and weather in Polybar
Current date and weather forecast for the day in Polybar. The data is retrieved with the OpenWeather API.
I don't use the system tray integrated with Polybar. The embedded icons usually look horrible and they all behave differently. A few years back, Gnome removed the system tray. Most of the problems are fixed by the DBus-based Status Notifier Item protocol, also known as Application Indicators or Ayatana Indicators for GNOME. However, Polybar does not support this protocol. In the i3 companion, the implementation of Bluetooth and network icons, including displaying notifications on change, takes about 200 lines. I got to learn a bit about how DBus works and I get exactly the info I want.

picom: the compositor I like having slightly transparent backgrounds for terminals and to reduce the opacity of unfocused windows. This requires a compositor.1 picom is a lightweight compositor. It works well for me, but it may need some tweaking depending on your graphics card.2 Unlike the awesome window manager, i3 does not handle transparency, so the compositor needs to decide by itself the opacity of each window. Check my configuration for details.

systemd: the service manager I use systemd to start i3 and the various services around it. My xsession script only sets some environment variables and lets systemd handle everything else. Have a look at this article from Michał Góral for the rationale. Notably, each component can be easily restarted and their logs are not mangled inside the ~/.xsession-errors file.3 I am using a two-stage setup: i3.service depends on xsession.target to start services before i3:
[Unit]
Description=X session
BindsTo=graphical-session.target
Wants=autorandr.service
Wants=dunst.socket
Wants=inputplug.service
Wants=picom.service
Wants=pulseaudio.socket
Wants=policykit-agent.service
Wants=redshift.service
Wants=spotify-clean.timer
Wants=ssh-agent.service
Wants=xiccd.service
Wants=xsettingsd.service
Wants=xss-lock.service
Then, i3 executes the second stage by invoking the i3-session.target:
[Unit]
Description=i3 session
BindsTo=graphical-session.target
Wants=wallpaper.service
Wants=wallpaper.timer
Wants=polybar-weather.service
Wants=polybar-weather.timer
Wants=polybar.service
Wants=i3-companion.service
Wants=misc-x.service
Have a look at my configuration files for more details.

rofi: the application launcher Rofi is an application launcher. Its appearance can be customized through a CSS-like language and it comes with several themes. Have a look at my configuration for mine.
Rofi as an application launcher
Rofi as an application launcher
It can also act as a generic menu application. I have a script to control a media player and another one to select the wifi network. It is quite a flexible application.
Rofi as a wifi network selector
Rofi to select a wireless network

xss-lock and i3lock: the screen locker i3lock is a simple screen locker. xss-lock invokes it reliably on inactivity or before a system suspend. For inactivity, it uses the XScreenSaver events. The delay is configured using the xset s command. The locker can be invoked immediately with xset s activate. X11 applications know how to prevent the screen saver from running. I have also developed a small dimmer application that is executed 20 seconds before the locker to give me a chance to move the mouse if I am not away.4 Have a look at my configuration script.
Demonstration of xss-lock, xss-dimmer and i3lock with a 4× speedup.

The remaining components
  • autorandr is a tool to detect the connected display, match them against a set of profiles, and configure them with xrandr.
  • inputplug executes a script for each new mouse and keyboard plugged in. This is quite useful to load the appropriate keyboard map. See my configuration.
  • xsettingsd provides settings to X11 applications, not unlike xrdb but it notifies applications for changes. The main use is to configure the Gtk and DPI settings. See my article on HiDPI support on Linux with X11.
  • Redshift adjusts the color temperature of the screen according to the time of day.
  • maim is a utility to take screenshots. I use Prt Scn to trigger a screenshot of a window or a specific area and Mod+Prt Scn to capture the whole desktop to a file. Check the helper script for details.
  • I have a collection of wallpapers I rotate every hour. A script selects them using advanced machine learning algorithms and stitches them together on multi-screen setups. The selected wallpaper is reused by i3lock.

  1. Apart from the eye candy, a compositor also helps to get tear-free video playbacks.
  2. My configuration works with both Haswell (2014) and Whiskey Lake (2018) Intel GPUs. It also works with AMD GPU based on the Polaris chipset (2017).
  3. You cannot manage two different displays this way e.g. :0 and :1. In the first implementation, I did try to parametrize each service with the associated display, but this is useless: there is only one DBus user session and many services rely on it. For example, you cannot run two notification daemons.
  4. I have only discovered later that XSecureLock ships such a dimmer with a similar implementation. But mine has a cool countdown!

5 September 2021

Antoine Beaupré: Automating major Debian upgrades

It's major upgrade time again! The Debian project just published the Debian 11 "bullseye" release, and it's pretty awesome! This made me realize that I have never written here about my peculiar upgrade process, and I figured it was worth bringing that up to a wider audience. My upgrade process also has a notable changes section which includes major version changes (e.g. Inkscape 1.0!), new packages (e.g. podman!) and important behavior changes (e.g. driverless scanning and printing!). I'm particularly interested to hear about any significant change I might have missed. If you know of a cool new package that shipped with bullseye and that I forgot, do let me know! But that's for the cool new stuff. We need to talk about the problems with Debian major upgrades.

Background I have been maintaining detailed upgrade guides, on my wiki, starting with the jessie release, but I have actually written such guides for Koumbit.org as far back as Debian squeeze in 2011 (another worker wrote the older Debian lenny upgrade guide in 2009). Koumbit, since then, has kept maintaining those guides all the way to the latest bullseye upgrade, through 7 major releases! Over the years, those guides evolved from a quick "cheat-sheet" format copied from the release notes into a more or less "scripted" form that I currently use. Each guide has a procedure made of a few steps that can be basically copy-pasted to batch-upgrade a host (or multiple hosts in parallel) as quickly as possible. There is also the predict-os script which allows you to keep track of progress of the upgrades in a Puppet cluster.

Limitations of the official procedure In comparison with my procedure, the official upgrade guide is mostly designed to upgrade a single machine, typically a workstation, with a rather slow and exhaustive process. The PDF version of the upgrade guide is 14 pages long! This, obviously, does not work when you have tens or hundreds of machines to upgrade. Debian upgrades are notorious for being extremely reliable, but we have a lot of packages, and there are always corner cases where the upgrade will just fail because of a bug specific to your environment. Those will only be fixed after some back and forth in the community (and that's assuming users report those bugs, which is not always the case). There's no obvious way to deploy "hot fixes" in this context, at least not without fixing the package and publishing it on an unofficial Debian archive while the official ones catch up. This is slow and difficult. Or some packages require manual labor. Examples of this are the PostgreSQL or Ganeti packages which require you to upgrade your clusters by hand, while the old and new packages live side by side. Debian packages bring you far in the upgrade process, but sometimes not all the way. Which means every Debian install needs to be manually upgraded and inspected when a new release comes out. That's slow and error prone and we can do better.

How to automate major upgrades I have a proposal to automate this. It's been mostly dormant in the Debian wiki for 5 years now. Fundamentally, this is a hard problem: Debian gets installed in so many different environments, from workstations to physical servers to virtual machines, embedded systems and so on, that it's extremely hard to come up with a "one size fits all" system. The (manual) procedure I'm using is mostly targeting servers, but I'm also using it on workstations. And I'll note that it's specific to my home setup: I have a different procedure at work, although it has a lot of common code. To automate this, I would factor out that common code with hooks where you could easily inject special code like "you need to upgrade ferm first", "you need an extra reboot here", or "this is how you finish the PostgreSQL upgrade". With Debian getting closer to a 2-year release cycle, with the previous release being supported basically only one year after the new stable comes out, I feel more and more strongly that this needs better automation. So I'm thinking that I should write a prototype for this. Ubuntu has do-release-upgrade, but that is too Ubuntu-specific to be reused. An attempt at collaborating on this has been mostly met with silence from Ubuntu's side as well. I'm thinking of using something like Fabric, Mitogen, or Transilience: anything that will allow me to write simple, portable Python code that can run transparently on a local machine (for single-system upgrades, possibly with a GUI frontend) or on remote servers (for large clusters of servers, maybe with canaries and grouping using Cumin). I'll note that Koumbit started experimenting with Puppet Bolt in the bullseye upgrade process, but that feels too site-specific to be useful more broadly.
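To make the shape of this concrete, a bare-bones sketch with Fabric could look like the following (hostnames and steps are hypothetical placeholders, not an actual upgrade procedure):

# Hypothetical sketch: run the same batch of upgrade steps on a list of hosts
# with Fabric. A real guide has many more steps (sources.list changes, checks,
# reboots); this only shows the "scripted steps over many machines" shape.
from fabric import Connection

HOSTS = ["host1.example.org", "host2.example.org"]
STEPS = [
    "apt-get update",
    "DEBIAN_FRONTEND=noninteractive apt-get -y full-upgrade",
]

for host in HOSTS:
    conn = Connection(host)
    for step in STEPS:
        conn.sudo(step)  # or conn.run(step) when connecting as root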

Trade-offs I am not sure where this stands in the XKCD time trade-off evaluation, because the table doesn't actually cover the frequency of Debian releases (which is basically "biennial") or the amount of time the upgrade would take across a cluster (which varies a lot, but which I estimate to be between one and six hours per machine). Assuming I have 80 machines to upgrade, that is 80 to 480 hours (between roughly 3 and 20 days) of work! It's unclear how much work such an automated system would shave off, however. Assuming things are an order of magnitude faster (say I upgrade 10 machines at a time), I would shave off between 3 and 18 days of work, which implies I might allow myself to spend a minimum of 5 days working on such a project.

The other option: never upgrade Before people mention those: I am aware of containers, Kubernetes, and other deployment mechanisms. Indeed, those may be a long-term solution; however, we can't afford to migrate everything over to containers right now: that is a huge migration and a total paradigm shift. At that point, whatever is left might not even be Debian in the first place. And besides, if you run Kubernetes, you still need to run some OS underneath and upgrade that, so that problem never completely disappears. Still, maybe that's the final answer: never upgrade. For some stateless machines like DNS replicas or load balancers, that might make a lot of sense as there's no or little data to carry to the new host. But this implies a seamless and fast provisioning process, and we don't have that either: at my work, installing a machine takes about as long as upgrading it, and that's after a significant amount of work automating that process, partly by writing my own Debian installer with Fabric (!).

What is your process? I'm curious to hear what people think of those ideas. It strikes me as really odd that no one has really tackled that problem yet, considering how many clusters of Debian machines are out there. Surely people are upgrading those, and not following that slow step by step guide, right? I suspect everyone is doing the same thing: we all have our little copy-paste script we batch onto multiple machines, sometimes in parallel. That is what the Debian.org sysadmins are doing as well. There must be a better way. What is yours?

My upgrades so far So far, I have upgraded 2 out of my 3 home machines running buster -- others have been installed directly in bullseye -- with only my main, old, messy server left. Upgrades have been pretty painless so far (see another report, for example), much better than the previous buster upgrade. Obviously, for my personal use, automating this is pointless. Work-side, however, is another story: we have over 80 boxes to upgrade there and that will take a while. The last stretch-to-buster cycle took about two years to complete, so we might be done by the time the next release (12, "bookworm") is released, but that's actually a full year after "buster" becomes EOL, so it's actually too late... At least I fixed the installers so that the new machines we create all ship with bullseye, so we stopped accumulating new buster hosts...
Thanks to lelutin and pabs for reviewing a draft of this post.

1 September 2021

Paul Wise: FLOSS Activities August 2021

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian servers: expand LV, fix debbugs config
  • Debian wiki: unblock IP addresses, approve accounts
  • Debian QA services: deploy changes

Communication

Sponsors The pyemd, pytest-rerunfailures, libpst, sptag, librecaptcha work was sponsored by my employer. All other work was done on a volunteer basis.

31 August 2021

Benjamin Mako Hill: Returning to DebConf

I first started using Debian sometime in the mid 90s and started contributing as a developer and package maintainer more than two decades ago. My very first scholarly publication, collaborative work led by Martin Michlmayr that I did when I was still an undergrad at Hampshire College, was about quality and the reliance on individuals in Debian. To this day, many of my closest friends are people I first met through Debian. I met many of them at Debian's annual conference, DebConf. Given my strong connections to Debian, I find it somewhat surprising that although all of my academic research has focused on peer production, free culture, and free software, I haven't actually published any Debian-related research since that first paper with Martin in 2003! So it felt like coming full circle when, several days ago, I was able to sit in the virtual DebConf audience and watch two of my graduate student advisees, Kaylea Champion and Wm Salt Hale, present their research about Debian at DebConf21. Salt presented his master's thesis work, which tried to understand the social dynamics behind organizational resilience among free software projects. Kaylea presented her work on a new technique she developed to identify at-risk software packages that are lower quality than we might hope given their popularity (you can read more about Kaylea's project in our blog post from earlier this year). If you missed either presentation, check out the blog post my research collective put up or watch the videos below. If you want to hear about new work we're doing, including work on Debian, you should follow our research group blog, and/or follow or engage with us in the Fediverse (@communitydata@social.coop), or on Twitter (@comdatasci). And if you're interested in joining us, perhaps to do more research on FLOSS and/or Debian, and/or a graduate degree of your own, please be in touch with me directly!
Wm Salt Hale's presentation plus Q&A. (WebM available)
Kaylea Champion's presentation plus Q&A. (WebM available)

30 August 2021

Andrew Cater: Oh, my goodness, where's the fantastic barbeque [OMGWTFBBQ 2021]

I'm guessing the last glasses will be through the dishwasher (again) and Pepper the dog can settle down without having to cope with so many people. For those who don't know - Steve and his wife Jo (Sledge and Randombird) hold a barbeque in their garden every August Bank Holiday weekend [UK Bank Holiday on the last Monday in August]. The barbeque is not small - it's the dominating feature in the suburban garden, brick built, with a dedication stone, lights, electricity. The garden is small, generally made smaller by forty or so Debian friends and allies standing and sitting around. People are talking, arguing, hugging people they've not seen for (literal) years and putting the world to rights. This is Debian's central point - with large quantities of meat and salads, an amount of beer/alcohol and "Cambridge gin" and general goodwill. This year was more than usually atmospheric because for some of us it was the first time with a large group of people in a while. Side conversations abound: for me it was learning something about the high energy particle physics community, how to precision-build helicopters, fly quadcopters and precision 3D print anything, the maths of Isy counting crochet stitches to sew together randomly sized squares ... and, of course, obligatory things like how random is random and what's good enough entropy. And a few sessions of the game of our leader.
This is also a place for stuff to get done: I was unashamedly using this to upgrade the storage in my laptop while there were sensible engineers around. A corner of the table, a RattusRattus, and it was quickly sorted - then a discussion around the internals of Thinkpads as he took his apart. Then getting a full install - Gb Ethernet to the Debian mirror in the cupboard six feet away is faster bandwidth than a jumbo jet full of tapes. Then getting mail to work again - it's handy when the mailserver owner is next to you, having come in from the garden to help - and finally IRC. And not just me: "You need a GPG key signed - there's three DPLs here, there's a release manager - but you've just missed one of the DAMs." plus an in-depth GPG how-to session on the other side of the table. I was the luckiest one with the most comfortable bed in the house overnight but I couldn't stay for last night. Thanks once again to all involved but especially Steve and Jo who do this for the love of it, and the fun, and the community and the family. Oh, and thanks to Lenovo - not just for being a platinum sponsor of DebConf but also for providing the official laptop of this and most Debian occasions.

25 August 2021

John Goerzen: Excellent Experience with Debian Bullseye

I've appreciated the bullseye upgrade, like most Debian upgrades. I'm not quite sure how, since I was already running a backports kernel, but somehow the entire system is snappier. Maybe newer X or something? I'm really pleased with it. Hardware integration is even nicer now, particularly the automatic driverless support for scanners in addition to the existing support for printers. All in all, a very nice upgrade, and pretty painless. I experienced a few odd situations. For one, I had been using Gnome Flashback. Since xmonad-log-applet didn't compile there (due to bitrot in the log applet, not flashback), and I had been finding Gnome Flashback to be a rather dusty and forgotten corner of Gnome for a long time, I decided to try Mate. Mate just seemed utterly unable to handle a situation with a laptop and an external monitor very well. I want to use only the external monitor when the laptop lid is closed, and it just couldn't remember how to do the right thing: external monitor on, laptop monitor off, laptop not put into suspend. gdm3 also didn't seem to be able to put the external monitor to sleep, either, causing a few nights of wasted power. So off I went to XFCE, which I had been using for years on my workstation anyhow. Lots more settings available in XFCE, plus things Just Worked there. Odd that XFCE, the thin and light DE, is now the one that has the most relevant settings. It seems the Gnome "let's remove a bunch of features" approach has extended to MATE as well. When I switched to XFCE, I also removed gdm3 from my system, leaving lightdm as the only DM on it. That matched what my desktop machine was using, and also what task-xfce-desktop called for. But strangely, the XFCE settings for lightdm were completely different between the laptop and the desktop. It turns out that with lightdm, you can have the lightdm-gtk-greeter and the accompanying lightdm-gtk-greeter-settings, or slick-greeter and the accompanying lightdm-settings. One machine had one greeter and settings, and the other had the other. Why, I don't know. But lightdm-gtk-greeter-settings had the necessary options for putting monitors to sleep on the login screen, so I went with it. This does highlight a bit of a weakness in Debian upgrades. There is SO MUCH choice in Debian, which I highly value. At some point, almost certainly without my conscious choice, one machine got one greeter and another got the other. Despite both having task-xfce-desktop installed, they got different desktop experiences. There isn't a great way to say "OK, I know I had a bunch of things installed before, but NOW I want the default bullseye experience." But overall, it is an absolutely fantastic distribution. It is great to see this nonprofit community distribution continue to have such quality on such an immense scale. And hard to believe I've been a Debian developer for 25 years. That seems almost impossible!

Raphaël Hertzog: Freexian's report about Debian Long Term Support, July 2021

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian project funding

In July, we put aside 2400 EUR to fund Debian projects. We haven't received project proposals to fund in the last months, so we have scheduled a discussion during Debconf to try to figure out why that is and how we can fix it. Join us on August 26th at 16:00 UTC on this link. We are pleased to announce that Jeremiah Foster will help out to make this initiative a success: he can help Debian members to come up with solid proposals, he can look for people willing to do the work once the project has been formalized and approved, and he will make sure that the project implementation stays on track once the actual work has begun. We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In July, 12 contributors were paid to work on Debian LTS; their reports are available.

Evolution of the situation

In July we released 30 DLAs. We were also glad to welcome Neil Williams and Lee Garrett, who became active contributors. The security tracker currently lists 63 packages with a known CVE and the dla-needed.txt file has 17 packages needing an update. We would like to thank Holger Levsen for the years of work during which he managed and coordinated the paid LTS contributors; Jeremiah Foster will take over his duties.

Thanks to our sponsors

Sponsors that joined recently are in bold.

23 August 2021

Leandro Doctors: Clojure CLI Tools in Debian - GSoC 2021 Final Report

NOTE: this blog post is based on my "Clojure CLI Tools in Debian" GSoC 2021 project Final Report.

Intro

My name is Leandro Doctors (allentiak on IRC), and I've been the GSoC intern working with the Debian Clojure Team during 2021. This is my final report. You can also check my original proposal and my first report.

Summary

Whereas the raw data may not sound very positive by itself, my personal conclusion is. That is, although I didn't fully finish the required deliverables envisioned in my original proposal, I do feel I am much closer to, eventually, becoming a Debian Developer. So, by all means, I consider this project to have had a positive outcome.

Project

The goal of the Clojure Build Tools in Debian project was to provide Clojure Debian users with some of the latest advanced build tools and libraries the upstream Clojure developers have lately been working on. These include tools.deps.alpha, a library for dependency graph resolution and classpath building, and the CLI tool clj, for REPL interaction. If time permitted, I was also to improve the quality of both new and existing Clojure packages, and the overall Debian Clojure packaging process. My mentor was Louis-Philippe Véronneau, and my co-mentor was Utkarsh Gupta.

Motivation

Why this project? On the one hand, if you're a Clojure lover like me, you may have noticed that the Clojure experience in Debian is, as of mid-2021, well... still quite limited. Additionally, this project aligned with my own background in Free Software community building and my research interest in Peer Production.

When I mention how limited today's Clojure experience in Debian is, I can see two reasons for this, deeply intertwined. The first one is that there currently aren't many Clojure-specific packaging tools in Debian (such as a clojure-debian-helper). The second reason, and probably the root of the previous one, is that many core build tools and libraries for the language have simply not been packaged yet. My project aimed to attack that root cause.

As I said, another reason for choosing this project is my own experience as the co-founder and leader of probably the first Free Software community in my hometown of San Juan, Argentina. That interest in Free Software evolved into a first PhD attempt in what is now known as the field of Peer Production, a subject that has stayed with me as a research interest during my day job at a university.

Being a Clojure fan, it felt only logical to combine all those interests somehow. And this project seemed like the ideal combination.

The Debian Clojure Team

I've been working with a small, yet very warm team. The current incarnation of the Debian Clojure Team exists thanks to the hard work of three people. Elana Hashman (aka "the Clojure necromancer") revived the team around three years ago. Later on, the team gained the invaluable presence of Louis-Philippe Véronneau and Utkarsh Gupta (my mentor and co-mentor, respectively). Together, these Three Musketeers have kept the team alive, allowing us, Debian users, to enjoy Clojure.

Status

During the first part of my project, I mainly worked on learning the basics of Debian packaging, and got my first package uploaded. I have to thank Louis-Philippe, Utkarsh, and Elana for their immense patience and support during that part, as it took me quite some effort to grasp the basics of Debian packaging.

During the second part of my project, I worked on my last packages, and almost completed the originally required scope of the project. I only have to finish working on the transition from the currently provided set of packages (based on a Debian-specific clojure runner) to the newly provided upstream clojure and clj runners.

Unfortunately, I didn't have much time left to start working on the opportunities for improvement already identified by the Debian Clojure Team and originally outlined in my proposal. Whereas I did update one older Clojure package not built using leiningen (tools-data-xml-clojure), I didn't write any Lintian tags to make Clojure packaging in Debian more robust, nor did I work towards the automation of Clojure unit tests in autopkgtests via autodep8.

Deliverables: Data vs. Conclusions

If we are to talk about deliverables, we should start with the data. According to my original proposal, I was required to provide both new and updated Clojure packages accepted into Debian "unstable", and updated Clojure packaging documentation. Additionally, if time permitted, I was to also provide new Clojure Lintian tags merged by the Lintian maintainers, and new Clojure autodep8 scripts merged by the autodep8 maintainers. Whereas I partially accomplished both required tasks, I didn't manage to start working on any of the optional deliverables.

When looked at in isolation, those numbers may look somewhat disappointing to some people. However, I can draw a much more positive conclusion. Why? Firstly, GSoC is supposed to be a learning experience. Moreover, as I said in my original proposal, I approach[ed] this project as "a great opportunity to, finally, start my journey towards becoming a Debian Developer". In that sense, I consider the time invested in this project fruitful. I have learned the basics of packaging, learned how to interact with the Debian Clojure Team, and already got my first packages accepted. Plus, I'm looking forward to continuing to work with the Debian Clojure Team so I can attain the original scope of the project. Therefore, all things considered, I would call this experience a moderate success.

Lessons Learned

Technically speaking, if I have learned one thing during these weeks, it is that packaging, although easy to underestimate, is by no means a trivial process. As any Debian Developer surely knows, the onboarding process can take some time. Plus, what is easy for some people can be difficult for others. In my case, this was quite evident. Whereas I can speak several languages and learning new ones takes me little effort, grasping the basics of packaging took me (literally!) blood, sweat and tears. Indeed, the packaging learning curve was quite steep for me. That being said, I did learn a thing or two about packaging. So, if I managed to get here, I'm sure many others can. It may take them more or less time than it took me, but learning (at least the basics of) packaging is an achievable goal.

Technical skills aside, I value very highly the non-technical skills I have improved during this project. For instance, I learned that it can take some time to adapt to real-time online communication. Before this project, remote working meant either exchanging emails or getting into video or audio calls, with a low emphasis on chat-based interaction. Early on, I realized that the Debian Clojure Team interacts almost exclusively via, well... chat! And those two approaches are very different indeed. It has taken me some time to adjust, but I've improved greatly in this aspect as well.

Finally, improving my time management skills has also been a key part of this process. Whereas I had already been working remotely for over a year and a half, my day job is not as interaction-dependent as this project (especially in the beginning). So it took me some time to adapt to this way of working, and to plan my workload so I could use those waiting moments to advance other parts of the project. There is still a lot to improve here, but I am improving nevertheless.

Acknowledgments

I first have to thank upstream; more specifically, one of the upstream developers of the clojure-tools, Alex Miller. Every time I needed specific information on what specific parts of the Clojure CLI tools' codebase (such as tools.deps.alpha) do, he popped up a reply in a matter of hours. He has shown genuine interest in the success of this project by carefully replying to my emails with detailed explanations of code intent and form, both in private and in public conversations. Thank you for all that, Alex!

Let's move on to the Debian Clojure Team.

First, Elana. I thank Elana for her initial openness when I first contacted her about this idea. It was *her* who initially contacted Louis-Philippe so he would become my mentor. I wouldn't have started to work on this project if it weren't for her. Plus, she provided quite a bit of advice on more than one occasion. So, thank you very much for all that, Elana!

I also thank Utkarsh, my co-mentor, for his overall technical advice. And a special mention for his initial help setting up my Matrix client for OFTC chat. At that moment, it was *him* who took the time to help me in real time so I could solve that problem. So, thank you very much for all that, Utkarsh!

I finally have to thank Louis-Philippe, my mentor, for his patient guidance during the whole process. His dedication and hard work have been *instrumental* for my progress. And a special mention for his tolerance with respect to some unforeseen personal circumstances I had to endure during the first weeks. When one is playing the newbie, times abound when one depends on other people's feedback. And Debian is made of volunteers, who have a life outside it. Every time I asked, Louis-Philippe was there. I wouldn't have gotten here if it weren't for him. So, thank you *so* much for all that, Louis-Philippe!

Final Words

I would like to close this report with a reflection. I have been using Debian for many, many years now, and I had been looking for a way to contribute back to the project for some time. I even did some work on a non-packaging Debian project. That being said, I never managed to deliver much, really.

So, the very existence of outreach programs such as this one is, in my humble opinion, crucial. In my case, the funding I got through the GSoC program was instrumental in being able to allocate time for this endeavor and to finally get started contributing to Debian. Plus, it has had a very positive impact on me in many ways, some of which I am only starting to discover now that the project is ending.

When I put things into perspective, this project is very important to me. Actually, it is nothing but the first step in a long-term journey: becoming a Debian member. Hopefully, I will be able to apply for Debian membership by the end of this year.

Questions?

Thank you very much for your time reading this! I look forward to hearing (or reading) your feedback. Please come and meet the Debian Clojure Team; I will be in the Clojure BoF at DebConf 2021. Moreover, do not hesitate to send me an email.

Data

Task Status
  • Required Tasks:
    • T1: Setting up a full Debian packaging development environment and learning the basics of Debian packaging.
      • Successfully completed the first part during the Application period.
      • Successfully completed the second part during the Coding periods.
    • T2: Identifying and packaging the missing dependencies to package clojure-cli.
      • Successfully completed as of the end of Coding II.
    • T3: Packaging clojure-cli.
      • 90% done as of the end of Coding II.
    • T4: Updating clojure to use clojure-cli.
      • To be completed after GSoC.
    • T5: Updating the Clojure Packaging Guide with information on how to use the new clojure-cli scripts.
      • Improved existing documentation. To be completed after GSoC.
  • Optional Tasks:
    • T6: Writing Lintian tags to make Clojure packaging in Debian more robust.
      • To be completed after GSoC.
    • T7: Working to automate Clojure unit testing in autopkgtests using autodep8.
      • To be completed after GSoC.
    • T8: Updating older Clojure packages not built using leiningen or clojure-cli.
      • To be completed after GSoC.

Packages
  1. Updated packages:
  2. New packages:
    • tools-gitlibs-clojure -- Clojure API for programmatically accessing git libraries. ITP: #905543 in NEW.
    • ITP: tools-deps-alpha-clojure -- functional API for dependency management and classpath creation (https://bugs.debian.org/891136). Needs to be uploaded by Louis-Philippe.
  3. In-Progress packages:
    • ITP: clojure-cli -- upstream CLI entry points for Clojure (https://bugs.debian.org/891141). 90% done - package completed; I only need to finish implementing the transition from the existing clojure scripts. To be completed after GSoC.
  4. Opened ITPs:
  5. Reported bugs

Other
  1. Interactions with upstream in the Debian-Clojure mailing list:
    • Many productive interactions with one of the upstream developers, Alex Miller (June, August).
  2. Wiki page:
    1. https://wiki.debian.org/Clojure/Goals/ClojureCLIToolsInDebian

15 August 2021

Dirk Eddelbuettel: RcppBDT 0.2.4 on CRAN: Updates

After a seven-year break (!!), the RcppBDT package has been updated on CRAN. The RcppBDT package is an early adopter of Rcpp and was one of the first packages utilizing Boost and its Date_Time library. The now more widely-used package anytime is a direct descendant of RcppBDT. In fact, the last time RcppBDT was released, anytime did not yet exist, and some of the changes now finally released in this version are some of the first steps made towards what became anytime. RcppBDT is broader in scope and provides a wider range of functionality, but in a somewhat rougher form, as we never sat down to write higher-end wrappers in R for all the potential use cases. When we wrote the first RcppBDT versions, many other popular date/time packages were all in R code and not compiled, and this package showed how things could be done at the compiled level. Now other packages, including anytime, have filled the void, so fully polishing RcppBDT may never happen. In any event, this release refreshes the package and brings it to full R CMD check --as-cran compliance. The NEWS entry follows:

Changes in version 0.2.4 (2021-08-15)
  • New utility function toPOSIXct which can take multiple input format (integer, floating point or character) vectors and convert them by relying on a wide variety of standard formats. This predates what has long since been split off into a new package anytime which is more functional and featureful.
  • New demo 'toPOSIXct' illustrating the feature.
  • New demo 'toPOSIXctTiming' benchmarking it.
  • Documentation for new functions was added as well.
  • CI now uses run.sh from r-ci.
  • Functions getLastDayOfWeekInMonth and getFirstDayOfWeekInMonth now use dow argument.
  • The shared library is now registered when loaded from NAMESPACE.
  • C level entry points are now registered as R now recommends.
  • Several badges were added to the README.md file.
  • Several fields were added to the DESCRIPTION file, and/or updated.
  • Documentation URLs were both updated as needed and converted to https.

Courtesy of my CRANberries, there is also a diffstat report for this release. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 August 2021

Leandro Doctors: Clojure CLI Tools in Debian - GSoC 2021 Partial Evaluation Report

NOTE: this blog post is based on my "Clojure CLI Tools in Debian" GSoC 2021 project Partial Evaluation Report.

Hi, everybody! For those who don't know me, my name is Leandro Doctors (allentiak on IRC), and I'm the Debian Clojure Team's GSoC 2021 intern. My mentor is Louis-Philippe Véronneau[*] (pollo on IRC). My co-mentor is Utkarsh Gupta (utkarsh2102 on IRC). My 'no-mentor' :) is Elana Hashman (ehashman on IRC). In this message, I will summarize what I've been up to during the Coding I period (June 7th - July 16th).
TL;DR: my updated data-xml-clojure package was accepted[1] in experimental on Wednesday night :-)
[1] https://tracker.debian.org/news/1244465/accepted-data-xml-clojure-020alpha6-1-source-into-experimental/

Now, let's tell the full story. During the first days of the "Coding I" phase, I did some research on the clj dependencies. And after the precious feedback from Louis-Philippe, Elana, Utkarsh, and Alex Miller (the upstream developer of clj, among many other libraries), I came up with the following packaging strategy.
  1. update org.clojure:data.xml (data-xml-clojure) to 0.2.0-beta6.
  2. package org.clojure:tools.gitlibs (tools-gitlibs-clojure).
  3. consider updating libjsch-agent-proxy-java, jgit, libtools-cli-clojure (all of them already in Debian).
  4. consider packaging com.cognitect.aws:* packages.
According to my original proposal, I should have completed all four tasks during Coding I. Looking back, the main lesson from these past weeks is a known classic: my timeline was too optimistic. I definitely underestimated the difficulty of the packaging process. Out of the four tasks, I only finished the first one.

There were many challenges I had to overcome in order to update the library from version 0.0.8 to 0.2.0-alpha6. This is what I did: first, I patched the source code; then, I completely overhauled the packaging code (that is, what goes inside debian/). All this improved the quality of the package. I also improved the Clojure Packaging Tutorial[2] to make the process easier to follow.

[2] https://wiki.debian.org/Clojure/PackagingTutorial

Looking back, it is almost as if I had started packaging the library from scratch... But, more than what I produced, I think the most important part of all this is what I learned during these weeks. As it was the first time I ever packaged anything in Debian, I had to learn the basics, bump by bump. (And, oh my, I surely did bump quite a few times!)

[3] https://wiki.debian.org/sbuild
[4] https://manpages.debian.org/unstable/git-buildpackage/gbp.1.en.html
[5] https://wiki.debian.org/UsingQuilt

Definitely, all this was worth it. After all, my updated data-xml-clojure package was accepted[1] in experimental on Wednesday :-)

[1] https://tracker.debian.org/news/1244465/accepted-data-xml-clojure-020alpha6-1-source-into-experimental/

Now that I've learned the basics of packaging and bumped enough to get my package accepted, I'm hopeful I can ramp up and catch up with (at least most of) my original schedule during Coding II. tools-gitlibs-clojure, you're next :-)

Thank you very much Louis-Philippe, Elana, and Utkarsh (plus a special mention to Alex) for your precious support during the last few weeks!

9 August 2021

Dirk Eddelbuettel: nanotime 0.3.3 on CRAN: Some Updates

Leonardo and I are pleased to share that a new nanotime version 0.3.3 was released today, and arrived on CRAN. This release brings a new (plotting) demo, an updated documentation site, additional nanoduration and nanoperiod functionality, and enhanced testing. nanotime relies on the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from work by co-author Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations. The NEWS snippet adds full details.

Changes in version 0.3.3 (2021-08-09)
  • New demo ggplot2Example.R (Leonardo and Dirk).
  • New documentation website using mkdocs-material (Dirk).
  • Updated unit test to account for r-devel POSIXct changes, and re-enable full testing under r-devel (Dirk).
  • Additional nanoduration and character ops plus tests (Colin Umansky in #88 addressing #87).
  • New plus and minus functions for periods (Leonardo in #91).

Thanks to CRANberries there is also a diff to the previous version. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Ian Wienand: nutdrv_qx setup for Synology DSM7

I have a cheap no-name UPS acquired from Jaycar and was wondering if I could get it to connect to my Synology DS918+. It rather unhelpfully identifies itself as MEC0003 and comes with some blob of non-working software on a CD; however, some investigation found it could maybe work on my Synology NAS using the Network UPS Tools nutdrv_qx driver with the hunnox subdriver type. Unfortunately this is a fairly recent addition to the NUT source, requiring rebuilding the driver for DSM7. I don't fully understand the Synology environment but I did get this working. Firstly I downloaded the toolchain from https://archive.synology.com/download/ToolChain/toolchain/ and extracted it. I then used the script from https://github.com/SynologyOpenSource/pkgscripts-ng to download some sort of build environment. This appears to want root access and possibly sets up some sort of chroot. Anyway, for DSM7 on the DS918+ I ran EnvDeploy -v 7.0 -p apollolake and it downloaded some tarballs into toolkit_tarballs that I simply extracted into the same directory as the toolchain. I then grabbed the NUT source from https://github.com/networkupstools/nut and built it similarly to the following
./autogen.sh
PATH_TO_TC=/home/your/path
export CC=${PATH_TO_TC}/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-gcc
export LD=${PATH_TO_TC}/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-ld
./configure \
  --prefix= \
  --with-statepath=/var/run/ups_state \
  --sysconfdir=/etc/ups \
  --with-sysroot=${PATH_TO_TC}/usr/local/sysroot \
  --with-usb=yes \
  --with-usb-libs="-L${PATH_TO_TC}/usr/local/x86_64-pc-linux-gnu/x86_64-pc-linux-gnu/sys-root/usr/lib/ -lusb" \
  --with-usb-includes="-I${PATH_TO_TC}/usr/local/sysroot/usr/include/"
make
The tricks to be aware of are setting the locations where DSM wants status/config files, and overriding the USB detection done by configure, which doesn't seem to obey sysroot. If you would prefer to avoid this you can try this prebuilt nutdrv_qx (ebb184505abd1ca1750e13bb9c5f991eaa999cbea95da94b20f66ae4bd02db41). SSH to the DSM7 machine; as root, move /usr/bin/nutdrv_qx out of the way to save it; scp the new version over and move it into place. Looking at /dev/bus/usb/devices, I found this device has Vendor 0001 and ProdID 0000.
T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  3 Spd=1.5  MxCh= 0
D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs=  1
P:  Vendor=0001 ProdID=0000 Rev= 1.00
S:  Product=MEC0003
S:  SerialNumber=ffffff87ffffffb7ffffff87ffffffb7
C:* #Ifs= 1 Cfg#= 1 Atr=80 MxPwr=100mA
I:* If#= 0 Alt= 0 #EPs= 2 Cls=03(HID  ) Sub=00 Prot=00 Driver=usbfs
E:  Ad=81(I) Atr=03(Int.) MxPS=   8 Ivl=10ms
E:  Ad=02(O) Atr=03(Int.) MxPS=   8 Ivl=10ms
DSM does a bunch of magic to autodetect and configure NUTs when a UPS is plugged in. The first thing you'll need to do is edit /etc/nutscan-usb.sh and override where it tries to use the blazer_usb driver for this obviously incorrect vendor/product id. The line should now look like
static usb_device_id_t usb_device_table[] = {
    { 0x0001, 0x0000, "nutdrv_qx" },
    { 0x03f0, 0x0001, "usbhid-ups" },
  ... and so on ...
Then you want to edit the file /usr/syno/lib/systemd/scripts/ups-usb.sh to start the nutdrv_qx; find the DRV_LIST in that file and update it like so:
local DRV_LIST="nutdrv_qx usbhid-ups blazer_usb bcmxcp_usb richcomm_usb tripplite_usb"
This is triggered by /usr/lib/systemd/system/ups-usb.service and is ultimately what tries to set up the UPS configuration. Lastly, you will need to edit the /etc/ups/ups.conf file. This will probably vary depending on your UPS. One important thing is to add user=root above the driver; it seems recent NUT has become more secure and drops permissions, but the result is that it will not find USB devices in this environment (if you're getting something like "no appropriate HID device found" this is likely the cause). So the configuration should look something like:
user=root
[ups]
driver = nutdrv_qx
port = auto
subdriver = hunnox
vendorid = "0001"
productid = "0000"
langid_fix = 0x0409
novendor
noscanlangid
#pollonly
#community =
#snmp_version = v2c
#mibs =
#secName =
#secLevel =
#authProtocol =
#authPassword =
#privProtocol =
#privPassword =
I then restarted the UPS daemon by enabling/disabling UPS support in the UI. This should tell you that your UPS is connected. You can also check /var/log/ups.log which shows for me
2021-08-09T18:14:51+10:00 synology synoups[11994]: =====log UPS status start=====
2021-08-09T18:14:51+10:00 synology synoups[11996]: device.mfr=
2021-08-09T18:14:51+10:00 synology synoups[11998]: device.model=
2021-08-09T18:14:51+10:00 synology synoups[12000]: battery.charge=
2021-08-09T18:14:51+10:00 synology synoups[12002]: battery.runtime=
2021-08-09T18:14:51+10:00 synology synoups[12004]: battery.voltage=13.80
2021-08-09T18:14:51+10:00 synology synoups[12006]: input.voltage=232.0
2021-08-09T18:14:51+10:00 synology synoups[12008]: output.voltage=232.0
2021-08-09T18:14:51+10:00 synology synoups[12010]: ups.load=31
2021-08-09T18:14:51+10:00 synology synoups[12012]: ups.status=OL
2021-08-09T18:14:51+10:00 synology synoups[12013]: =====log UPS status end=====
Which corresponds to the correct input/output voltage and state. Of course this is all unsupported and likely to break -- although I don't imagine much of these bits are updated very frequently. It will likely be OK until the UPS battery dies, at which point I would recommend buying a better UPS from the Synology support list.

Russ Allbery: Review: The Last Battle

Review: The Last Battle, by C.S. Lewis
Illustrator: Pauline Baynes
Series: Chronicles of Narnia #7
Publisher: Collier Books
Copyright: 1956
Printing: 1978
ISBN: 0-02-044210-6
Format: Mass market
Pages: 184
The Last Battle is the seventh and final book of the Chronicles of Narnia in every reading order. It ties together (and spoils) every previous Narnia book, so you do indeed want to read it last (or skip it entirely, but I'll get into that). In the far west of Narnia, beyond the Lantern Waste and near the great waterfall that marks Narnia's western boundary, live a talking ape named Shift and a talking donkey named Puzzle. Shift is a narcissistic asshole who has been gaslighting and manipulating Puzzle for years, convincing the poor donkey that he's stupid and useless for anything other than being Shift's servant. At the start of the book, a lion skin washes over the waterfall and into the Cauldron Pool. Shift, seeing a great opportunity, convinces Puzzle to retrieve it. The king of Narnia at this time is Tirian. I would tell you more about Tirian except, despite being the protagonist, that's about all the characterization he gets. He's the king, he's broad-shouldered and strong, he behaves in a correct kingly fashion by preferring hunting lodges and simple camps to the capital at Cair Paravel, and his close companion is a unicorn named Jewel. Other than that, he's another character like Rilian from The Silver Chair who feels like he was taken from a medieval Arthurian story. (Thankfully, unlike Rilian, he doesn't talk like he's in a medieval Arthurian story.) Tirian finds out about Shift's scheme when a dryad appears at Tirian's camp, calling for justice for the trees of Lantern Waste who are being felled. Tirian rushes to investigate and stop this monstrous act, only to find the beasts of Narnia cutting down trees and hauling them away for Calormene overseers. When challenged on why they would do such a thing, they reply that it's at Aslan's orders. The Last Battle is largely the reason why I decided to do this re-read and review series. It is, let me be clear, a bad book. The plot is absurd, insulting to the characters, and in places actively offensive. It is also, unlike the rest of the Narnia series, dark and depressing for nearly all of the book. The theology suffers from problems faced by modern literature that tries to use the Book of Revelation and related Christian mythology as a basis. And it is, most famously, the site of one of the most notorious authorial betrayals of a character in fiction. And yet, The Last Battle, probably more than any other single book, taught me to be a better human being. It contains two very specific pieces of theology that I would now critique in multiple ways but which were exactly the pieces of theology that I needed to hear when I first understood them. This book steered me away from a closed, judgmental, and condemnatory mindset at exactly the age when I needed something to do that. For that, I will always have a warm spot in my heart for it. I'm going to start with the bad parts, though, because that's how the book starts. MAJOR SPOILERS BELOW. First, and most seriously, this is a second-order idiot plot. Shift shows up with a donkey wearing a lion skin (badly), only lets anyone see him via firelight, claims he's Aslan, and starts ordering the talking animals of Narnia to completely betray their laws and moral principles and reverse every long-standing political position of the country... and everyone just nods and goes along with this. This is the most blatant example of a long-standing problem in this series: Lewis does not respect his animal characters. 
They are the best feature of his world, and he treats them as barely more intelligent than their non-speaking equivalents and in need of humans to tell them what to do. Furthermore, despite the assertion of the narrator, Shift is not even close to clever. His deception has all the subtlety of a five-year-old who doesn't want to go to bed, and he offers the Narnians absolutely nothing in exchange for betraying their principles. I can forgive Puzzle for going along with the scheme since Puzzle has been so emotionally abused that he doesn't know what else to do, but no one else has any excuse, especially Shift's neighbors. Given his behavior in the book, everyone within a ten mile radius would be so sick of his whining, bullying, and lying within a month that they'd never believe anything he said again. Rishda and Ginger, a Calormene captain and a sociopathic cat who later take over Shift's scheme, do qualify as clever, but there's no realistic way Shift's plot would have gotten far enough for them to get involved. The things that Shift gets the Narnians to do are awful. This is by far the most depressing book in the series, even more than the worst parts of The Silver Chair. I'm sure I'm not the only one who struggled to read through the first part of this book, and raced through it on re-reads because everything is so hard to watch. The destruction is wanton and purposeless, and the frequent warnings from both characters and narration that these are the last days of Narnia add to the despair. Lewis takes all the beautiful things that he built over six books and smashes them before your eyes. It's a lot to take, given that previous books would have treated the felling of a single tree as an unspeakable catastrophe. I think some of these problems are due to the difficulty of using Christian eschatology in a modern novel. An antichrist is obligatory, but the animals of Narnia have no reason to follow an antichrist given their direct experience with Aslan, particularly not the aloof one that Shift tries to give them. Lewis forces the plot by making everyone act stupidly and out of character. Similarly, Christian eschatology says everything must become as awful as possible right before the return of Christ, hence the difficult-to-read sections of Narnia's destruction, but there's no in-book reason for the Narnians' complicity in that destruction. One can argue about whether this is good theology, but it's certainly bad storytelling. I can see the outlines of the moral points Lewis is trying to make about greed and rapacity, abuse of the natural world, dubious alliances, cynicism, and ill-chosen prophets, but because there is no explicable reason for Tirian's quiet kingdom to suddenly turn to murderous resource exploitation, none of those moral points land with any force. The best moral apocalypse shows the reader how, were they living through it, they would be complicit in the devastation as well. Lewis does none of that work, so the reader is just left angry and confused. The book also has several smaller poor authorial choices, such as the blackface incident. Tirian, Jill, and Eustace need to infiltrate Shift's camp, and use blackface to disguise themselves as Calormenes. That alone uncomfortably reveals how much skin tone determines nationality in this world, but Lewis makes it far worse by having Tirian comment that he "feel[s] a true man again" after removing the blackface and switching to Narnian clothes. 
All of this drags on and on, unlike Lewis's normally tighter pacing, to the point that I remembered this book being twice the length of any other Narnia book. It's not; it's about the same length as the rest, but it's such a grind that it feels interminable. The sum total of the bright points of the first two-thirds of the book is the arrival of Jill and Eustace, Jill's one moment of true heroism, and the loyalty of a single Dwarf. The rest is all horror and betrayal and doomed battles and abject stupidity. I do, though, have to describe Jill's moment of glory, since I complained about her and Eustace throughout The Silver Chair. Eustace is still useless, but Jill learned forestcraft during her previous adventures (not that we saw much sign of this previously) and slips through the forest like a ghost to steal Puzzle and his lion costume out from under the nose of the villains. Even better, she finds Puzzle and the lion costume hilarious, which is the one moment in the book where one of the characters seems to understand how absurd and ridiculous this all is. I loved Jill so much in that moment that it makes up for all of the pointless bickering of The Silver Chair. She doesn't get to do much else in this book, but I wish the Jill who shows up in The Last Battle had gotten her own book. The end of this book, and the only reason why it's worth reading, happens once the heroes are forced into the stable that Shift and his co-conspirators have been using as the stage for their fake Aslan. Its door (for no well-explained reason) has become a door to Aslan's Country and leads to a reunion with all the protagonists of the series. It also becomes the frame of Aslan's final destruction of Narnia and judging of its inhabitants, which I suspect would be confusing if you didn't already know something about Christian eschatology. But before that, this happens, which is sufficiently and deservedly notorious that I think it needs to be quoted in full.
"Sir," said Tirian, when he had greeted all these. "If I have read the chronicle aright, there should be another. Has not your Majesty two sisters? Where is Queen Susan?" "My sister Susan," answered Peter shortly and gravely, "is no longer a friend of Narnia." "Yes," said Eustace, "and whenever you've tried to get her to come and talk about Narnia or do anything about Narnia, she says 'What wonderful memories you have! Fancy your still thinking about all those funny games we used to play when we were children.'" "Oh Susan!" said Jill. "She's interested in nothing nowadays except nylons and lipstick and invitations. She always was a jolly sight too keen on being grown-up." "Grown-up indeed," said the Lady Polly. "I wish she would grow up. She wasted all her school time wanting to be the age she is now, and she'll waste all the rest of her life trying to stay that age. Her whole idea is to race on to the silliest time of one's life as quick as she can and then stop there as long as she can."
There are so many obvious and dire problems with this passage, and so many others have written about it at length, that I will only add a few points. First, I find it interesting that neither Lucy nor Edmund says a thing. (I would like to think that Edmund knows better.) The real criticism comes from three characters who never interacted with Susan in the series: the two characters introduced after she was no longer allowed to return to Narnia, and a character from the story that predated hers. (And Eustace certainly has some gall to criticize someone else for treating Narnia as a childish game.) It also doesn't say anything good about Lewis that he puts his rather sexist attack on Susan into the mouths of two other female characters. Polly's criticism is a somewhat generic attack on puberty that could arguably apply to either sex (although "silliness" is usually reserved for women), but Jill makes the attack explicitly gendered. It's the attack of a girl who wants to be one of the boys on a girl who embraces things that are coded feminine, and there's a whole lot of politics around the construction of gender happening here that Lewis is blindly reinforcing and not grappling with at all. Plus, this is only barely supported by single sentences in The Voyage of the Dawn Treader and The Horse and His Boy and directly contradicts the earlier books. We're expected to believe that Susan the archer, the best swimmer, the most sensible and thoughtful of the four kids has abruptly changed her whole personality. Lewis could have made me believe Susan had soured on Narnia after the attempted kidnapping (and, although left unstated, presumably eventual attempted rape) in The Horse and His Boy, if one ignores the fact that incident supposedly happens before Prince Caspian where there is no sign of such a reaction. But not for those reasons, and not in that way. Thankfully, after this, the book gets better, starting with the Dwarfs, which is one of the two passages that had a profound influence on me. Except for one Dwarf who allied with Tirian, the Dwarfs reacted to the exposure of Shift's lies by disbelieving both Tirian and Shift, calling a pox on both their houses, and deciding to make their own side. During the last fight in front of the stable, they started killing whichever side looked like they were winning. (Although this is horrific in the story, I think this is accurate social commentary on a certain type of cynicism, even if I suspect Lewis may have been aiming it at atheists.) Eventually, they're thrown through the stable door by the Calormenes. However, rather than seeing the land of beauty and plenty that everyone else sees, they are firmly convinced they're in a dark, musty stable surrounded by refuse and dirty straw. This is, quite explicitly, not something imposed on them. Lucy rebukes Eustace for wishing Tash had killed them, and tries to make friends with them. Aslan tries to show them how wrong their perceptions are, to no avail. Their unwillingness to admit they were wrong is so strong that they make themselves believe that everything is worse than it actually is.
"You see," said Aslan. "They will not let us help them. They have chosen cunning instead of belief. Their prison is only in their own minds, yet they are in that prison; and so afraid of being taken in that they cannot be taken out."
I grew up with the US evangelical version of Hell as a place of eternal torment, which in turn was used to justify religious atrocities in the name of saving people from Hell. But there is no Hell of that type in this book. There is a shadow into which many evil characters simply disappear, and there's this passage. Reading this was the first time I understood the alternative idea of Hell as the absence of God instead of active divine punishment. Lewis doesn't use the word "Hell," but it's obvious from context that the Dwarfs are in Hell. But it's not something Aslan does to them and no one wants them there; they could leave any time they wanted, but they're too unwilling to be wrong. You may have to be raised in conservative Christianity to understand how profoundly this rethinking of Hell (which Lewis tackles at greater length in The Great Divorce) undermines the system of guilt and fear that's used as motivation and control. It took me several re-readings and a lot of thinking about this passage, but this is where I stopped believing in a vengeful God who will eternally torture nonbelievers, and thus stopped believing in all of the other theology that goes with it. The second passage that changed me is Emeth's story. Emeth is a devout Calormene, a follower of Tash, who volunteered to enter the stable when Shift and his co-conspirators were claiming Aslan/Tash was inside. Some time after going through, he encounters Aslan, and this is part of his telling of that story (and yes, Lewis still has Calormenes telling stories as if they were British translators of the Arabian Nights):
[...] Lord, is it then true, as the Ape said, that thou and Tash are one? The Lion growled so that the earth shook (but his wrath was not against me) and said, It is false. Not because he and I are one, but because we are opposites, I take to me the services which thou hast done to him. For I and he are of such different kinds that no service which is vile can be done to me, and none which is not vile can be done to him. Therefore if any man swear by Tash and keep his oath for the oath's sake, it is by me that he has truly sworn, though he know it not, and it is I who reward him. And if any man do a cruelty in my name, then, though he says the name Aslan, it is Tash whom he serves and by Tash his deed is accepted. Dost thou understand, Child? I said, Lord, thou knowest how much I understand. But I said also (for the truth constrained me), Yet I have been seeking Tash all my days. Beloved, said the Glorious One, unless thy desire had been for me, thou wouldst not have sought so long and so truly. For all find what they truly seek.
So, first, don't ever say this to anyone. It's horribly condescending and, since it's normally said by white Christians to other people, usually explicitly colonialist. Telling someone that their god is evil but since they seem to be a good person they're truly worshiping your god is only barely better than saying yours is the only true religion. But it is better, and as someone who, at the time, was wholly steeped in the belief that only Christians were saved and every follower of another religion was following Satan and was damned to Hell, this passage blew my mind. This was the first place I encountered the idea that someone who followed a different religion could be saved, or that God could transcend religion, and it came with exactly the context and justification that I needed given how close-minded I was at the time. Today, I would say that the Christian side of this analysis needs far more humility, and fobbing off all the evil done in the name of the Christian God by saying "oh, those people were really following Satan" is a total moral copout. But, nonetheless, Lewis opened a door for me that I was able to step through and move beyond to a less judgmental, dismissive, and hostile view of others. There's not much else in the book after this. It's mostly Lewis's charmingly Platonic view of the afterlife, in which the characters go inward and upward to truer and more complete versions of both Narnia and England and are reunited (very briefly) with every character of the series. Lewis knows not to try too hard to describe the indescribable, but it remains one of my favorite visions of an afterlife because it makes so explicit that this world is neither static nor the last, but only the beginning of a new adventure. This final section of The Last Battle is deeply flawed, rather arrogant, a little bizarre, and involves more lectures on theology than precise description, but I still love it. By itself, it's not a bad ending for the series, although I don't think it has half the beauty or wonder of the end of The Voyage of the Dawn Treader. It's a shame about the rest of the book, and it's a worse shame that Lewis chose to sacrifice Susan on the altar of his prejudices. Those problems made it very hard to read this book again and make it impossible to recommend. Thankfully, you can read the series without it, and perhaps most readers would be better off imagining their own ending (or lack of ending) to Narnia than the one Lewis chose to give it. But the one redeeming quality The Last Battle will always have for me is that, despite all of its flaws, it was exactly the book that I needed to read when I read it.

Rating: 4 out of 10

2 August 2021

Colin Watson: Launchpad now runs on Python 3!

After a very long porting journey, Launchpad is finally running on Python 3 across all of our systems. I wanted to take a bit of time to reflect on why my emotional responses to this port differ so much from those of some others who've done large ports, such as the Mercurial maintainers. It's hard to deny that we've had to burn a lot of time on this, which I'm sure has had an opportunity cost, and from one point of view it's essentially running to stand still: there is no single compelling feature that we get solely by porting to Python 3, although it's clearly a prerequisite for tidying up old compatibility code and being able to use modern language facilities in the future. And yet, on the whole, I found this a rewarding project and enjoyed doing it.

Some of this may be because by inclination I'm a maintenance programmer and actually enjoy this sort of thing. My default view tends to be that software version upgrades may be a pain but it's much better to get that pain over with as soon as you can rather than trying to hold back the tide; you can certainly get involved and try to shape where things end up, but rightly or wrongly I can't think of many cases when a righteously indignant user base managed to arrange for the old version to be maintained in perpetuity so that they never had to deal with the new thing (OK, maybe Perl 5 counts here).

I think a more compelling difference between Launchpad and Mercurial, though, may be that very few other people really had a vested interest in what Python version Launchpad happened to be running, because it's all server-side code (aside from some client libraries such as launchpadlib, which were ported years ago). As such, we weren't trying to do this with the internet having Strong Opinions at us. We were doing this because it was obviously the only long-term-maintainable path forward, and in more recent times because some of our library dependencies were starting to drop support for Python 2 and so it was obviously going to become a practical problem for us sooner or later; but if we'd just stayed on Python 2 forever then fundamentally hardly anyone else would really have cared directly, only maybe about some indirect consequences of that. I don't follow Mercurial development so I may be entirely off-base, but if other people were yelling at me about how late my project was to finish its port, that in itself would make me feel more negatively about the project even if I thought it was a good idea. Having most of the pressure come from ourselves rather than from outside meant that wasn't an issue for us.

I'm somewhat inclined to think of the process as an extreme version of paying down technical debt. Moving from Python 2.7 to 3.5, as we just did, means skipping over multiple language versions in one go, and if similar changes had been made more gradually it would probably have felt a lot more like the typical dependency update treadmill. I appreciate why not everyone might want to think of it this way: maybe this is just my own rationalization.

Reflections on porting to Python 3

I'm not going to defend the Python 3 migration process; it was pretty rough in a lot of ways. Nor am I going to spend much effort relitigating it here, as it's already been done to death elsewhere, and as I understand it the core Python developers have got the message loud and clear by now.
At a bare minimum, a lot of valuable time was lost early in Python 3's lifetime hanging on to flag-day-type porting strategies that were impractical for large projects, when it should have been providing for bilingual strategies (code that runs in both Python 2 and 3 for a transitional period), which is where most libraries and most large migrations ended up in practice. For instance, the early advice to library maintainers to maintain two parallel versions or perhaps translate dynamically with 2to3 was entirely impractical in most non-trivial cases and wasn't what most people ended up doing, and yet the idea that 2to3 is all you need still floats around Stack Overflow and the like as a result. (These days, I would probably point people towards something more like Eevee's porting FAQ as somewhere to start.)

There are various fairly straightforward things that people often suggest could have been done to smooth the path, and I largely agree: not removing the u'' string prefix only to put it back in 3.3, fewer gratuitous compatibility breaks in the name of tidiness, and so on. But if I had a time machine, the number one thing I would ask to have been done differently would be introducing type annotations in Python 2 before Python 3 branched off. It's true that it's technically possible to do type annotations in Python 2, but the fact that it's a different syntax that would have to be fixed later is offputting, and in practice it wasn't widely used in Python 2 code. To make a significant difference to the ease of porting, annotations would need to have been introduced early enough that lots of Python 2 library code used them, so that porting code didn't have to be quite so much of an exercise of manually figuring out the exact nature of string types from context.

Launchpad is a complex piece of software that interacts with multiple domains: for example, it deals with a database, HTTP, web page rendering, Debian-format archive publishing, and multiple revision control systems, and there's often overlap between domains. Each of these tends to imply different kinds of string handling. Web page rendering is normally done mainly in Unicode, converting to bytes as late as possible; revision control systems normally want to spend most of their time working with bytes, although the exact details vary; HTTP is of course bytes on the wire, but Python's WSGI interface has some string type subtleties. In practice I found myself thinking about at least four string-like types (that is, things that in a language with a stricter type system I might well want to define as distinct types and restrict conversion between them): bytes, text, ordinary native strings (str in either language, encoded to UTF-8 in Python 2), and native strings with WSGI's encoding rules (see the short sketch below). Some of these are emergent properties of writing in the intersection of Python 2 and 3, which is effectively a specialized language of its own without coherent official documentation, whose users must intuit its behaviour by comparing multiple sources of information or by referring to unofficial porting guides: not a very satisfactory situation. Fortunately much of the complexity collapses once it becomes possible to write solely in Python 3. Some of the difficulties we ran into are not ones that are typically thought of as Python 2-to-3 porting issues, because they were changed later in Python 3's development process.
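To make those four string-like types a little more concrete, here is a minimal sketch using the six compatibility library (six.ensure_binary, six.ensure_text and six.ensure_str are real six helpers; the sample values and the WSGI round trip are illustrative assumptions, not Launchpad code):

# Hedged illustration of the four string-like types discussed above.
from six import ensure_binary, ensure_str, ensure_text

payload = ensure_binary(u"caf\u00e9", "utf-8")   # bytes: what goes on the wire
label = ensure_text(payload, "utf-8")            # text: what page rendering wants
native = ensure_str(payload, "utf-8")            # native str: bytes on 2, text on 3

# WSGI "native strings" follow their own rule: on Python 3 they are bytes
# decoded as ISO-8859-1, so UTF-8 data takes a round trip through latin-1.
wsgi_value = payload.decode("ISO-8859-1")
assert wsgi_value.encode("ISO-8859-1") == payload

On Python 2 the same calls hand back byte strings wherever a native str is expected, which is exactly the kind of context-dependent juggling described above.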
One example of such a late change: the email module was substantially improved in around the 3.2/3.3 timeframe to handle Python 3's bytes/text model more correctly, and since Launchpad sends quite a few different kinds of email messages and has some quite picky tests for exactly what it emits, this entailed a lot of work in our email sending code and in our test suite to account for that. (It took me a while to work out whether we should be treating raw email messages as bytes or as text; bytes turned out to work best.) 3.4 made some tweaks to the implementation of quoted-printable encoding that broke a number of our tests in ways that took some effort to fix, because the tests needed to work on both 2.7 and 3.5. The list goes on. I got quite proficient at digging through Python's git history to figure out when and why some particular bit of behaviour had changed.

One of the thorniest problems was parsing HTTP form data. We mainly rely on zope.publisher for this, which in turn relied on cgi.FieldStorage; but cgi.FieldStorage is badly broken in some situations on Python 3. Even if that bug were fixed in a more recent version of Python, we can't easily use anything newer than 3.5 for the first stage of our port due to the version of the base OS we're currently running, so it wouldn't help much. In the end I fixed some minor issues in the multipart module (and was kindly given co-maintenance of it) and converted zope.publisher to use it. Although this took a while to sort out, it seems to have gone very well.

A couple of other interesting late-arriving issues were around pickle. For most things we normally prefer safer formats such as JSON, but there are a few cases where we use pickle, particularly for our session databases. One of my colleagues pointed out that I needed to remember to tell pickle to stick to protocol 2, so that we'd be able to switch back and forward between Python 2 and 3 for a while; quite right, and we later ran into a similar problem with marshal too. A more surprising problem was that datetime.datetime objects pickled on Python 2 require special care when unpickling on Python 3; rather than the approach that ended up being implemented and documented for Python 3.6, though, I preferred a custom unpickler, both so that things would work on Python 3.5 and so that I wouldn't have to risk affecting the decoding of other pickled strings in the session database. (A generic sketch of both pickle points follows at the end of this section.)

General lessons

Writing this over a year after Python 2's end-of-life date, and certainly nowhere near the leading edge of Python 3 porting work, it's perhaps more useful to look at this in terms of the lessons it has for other large technical debt projects.

I mentioned in my previous article that I used the approach of an enormous and frequently-rebased git branch as a working area for the port, committing often and sometimes combining and extracting commits for review once they seemed to be ready. A port of this scale would have been entirely intractable without a tool of similar power to git rebase, so I'm very glad that we finished migrating to git in 2019. I relied on this right up to the end of the port, and it also allowed for quick assessments of how much more there was to land. git worktree was also helpful, in that I could easily maintain working trees built for each of Python 2 and 3 for comparison. As is usual for most multi-developer projects, all changes to Launchpad need to go through code review, although we sometimes make exceptions for very simple and obvious changes that can be self-reviewed.
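As promised above, here is a generic sketch of the two pickle points: pinning the protocol, and loading Python 2 datetime pickles. The encoding="latin-1" route is the one the standard library documents for Python 2 datetimes, not Launchpad's custom unpickler:

import datetime
import pickle

# Write with protocol 2 so Python 2 processes can still read the data
# during the transition (protocols 3 and up are Python-3-only).
blob = pickle.dumps({"when": datetime.datetime.now()}, protocol=2)

# When the blob really was written by Python 2, its 8-bit strings (and
# datetime state in particular) need encoding="latin-1" to load on Python 3.
session = pickle.loads(blob, encoding="latin-1")
print(session["when"])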
General lessons

Writing this over a year after Python 2's end-of-life date, and certainly nowhere near the leading edge of Python 3 porting work, it's perhaps more useful to look at this in terms of the lessons it has for other large technical debt projects.

I mentioned in my previous article that I used the approach of an enormous and frequently-rebased git branch as a working area for the port, committing often and sometimes combining and extracting commits for review once they seemed to be ready. A port of this scale would have been entirely intractable without a tool of similar power to git rebase, so I'm very glad that we finished migrating to git in 2019. I relied on this right up to the end of the port, and it also allowed for quick assessments of how much more there was to land. git worktree was also helpful, in that I could easily maintain working trees built for each of Python 2 and 3 for comparison.

As is usual for most multi-developer projects, all changes to Launchpad need to go through code review, although we sometimes make exceptions for very simple and obvious changes that can be self-reviewed. Since I knew from the outset that this was going to generate a lot of changes for review, I structured my work to make it as easy as possible for my colleagues to review. This generally involved keeping most changes to a somewhat manageable size of 800 lines or less (although this wasn't always possible), and arranging commits mainly according to the kind of change they made rather than their location. For example, when I needed to fix issues with / in Python 3 being true division rather than floor division, I did so in one commit across the various places where it mattered and took care not to mix it with other unrelated changes. This is good practice for nearly any kind of development, but it was especially important here, since it allowed reviewers to consider a clear explanation of what I was doing in the commit message and then skim-read the rest of it much more quickly.

It was vital to keep the codebase in a working state at all times, and to deploy to production reasonably often: this way, if something went wrong, the amount of code we had to debug to figure out what had happened was always tractable. (Although I can't seem to find it now to link to it, I saw an account a while back of a company that had taken a flag-day approach instead with a large codebase. It seemed to work for them, but I'm certain we couldn't have made it work for Launchpad.)

I can't speak too highly of Launchpad's test suite, much of which originated before my time. Without extensive coverage of all sorts of interesting edge cases at both the unit and functional level, and a corresponding culture of maintaining that test suite well when making new changes, it would have been impossible to be anything like as confident of the port as we were.

As part of the porting work, we split out a couple of substantial chunks of the Launchpad codebase that could easily be decoupled from the core: its Mailman integration and its code import worker. Both of these had substantial dependencies with complex requirements for porting to Python 3, and arranging to be able to do these separately on their own schedule was absolutely worth it. Like disentangling balls of wool, any opportunity you can take to make things less tightly-coupled is probably going to make it easier to disentangle the rest. (I can see a tractable way forward to porting the code import worker, so we may well get that done soon. Our Mailman integration will need to be rewritten, though, since it currently depends on the Python-2-only Mailman 2, and Mailman 3 has a different architecture.)

Python lessons

Our database layer was already in pretty good shape for a port, since at least the modern bits of its table modelling interface were already strict about using Unicode for text columns. If you have any kind of pervasive low-level framework like this, then making it be pedantic at you in advance of a Python 3 port will probably incur much less swearing in the long run, as you won't be trying to deal with quite so many bytes/text issues at the same time as everything else.

Early in our port, we established a standard set of __future__ imports and started incrementally converting files over to them, mainly because we weren't yet sure what else to do and it seemed likely to be helpful. absolute_import was definitely reasonable (and not often a problem in our code), and print_function was annoying but necessary.
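In concrete terms, such a standard block looks like this (a sketch built from the three imports discussed in this section, not a verbatim excerpt from Launchpad's source):

    # At the top of each ported module, after the copyright header.
    from __future__ import absolute_import, print_function, unicode_literals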
In hindsight I'm not sure about unicode_literals, though. For files that only deal with bytes and text it was reasonable enough, but as I mentioned above there were also a number of cases where we needed literals of the language's native str type, i.e. bytes in Python 2 and text in Python 3: this was particularly noticeable in WSGI contexts, but also cropped up in some other surprising places. We generally either omitted unicode_literals or used six.ensure_str in such cases, but it was definitely a bit awkward, and maybe I should have listened more to people telling me it might be a bad idea.

A lot of Launchpad's early tests used doctest, mainly in the style where you have text files that interleave narrative commentary with examples. The development team later reached consensus that this was best avoided in most cases, but by then there were far too many doctests to conveniently rewrite in some other form. Porting doctests to Python 3 is really annoying. You run into all the little changes in how objects are represented as text (particularly u'...' versus '...', but plenty of other cases as well); you have next to no tools to do anything useful like skipping individual bits of a doctest that don't apply; using __future__ imports requires the rather obscure approach of adding the relevant names to the doctest's globals in the relevant DocFileSuite or DocTestSuite; dealing with many exception tracebacks requires something like zope.testing.renormalizing; and whatever code refactoring tools you're using probably don't work properly. Basically, don't have done that. It did all turn out to be tractable for us in the end, and I managed to avoid using much in the way of fragile doctest extensions aside from the aforementioned zope.testing.renormalizing, but it was not an enjoyable experience.

Regressions

I know of nine regressions that reached Launchpad's production systems as a result of this porting work; of course there were various other regressions caught by CI or in manual testing. (Considering the size of this project, I count it as a resounding success that there were only nine production issues, and that for the most part we were able to fix them quickly.)

Equality testing of removed database objects

One of the things we had to do while porting to Python 3 was to implement the __eq__, __ne__, and __hash__ special methods for all our database objects. This was quite conceptually fiddly, because doing this requires knowing each object's primary key, and that may not yet be available if we've created an object in Python but not yet flushed the actual INSERT statement to the database (most of our primary keys are auto-incrementing sequences). We thus had to take care to flush pending SQL statements in such cases in order to ensure that we know the primary keys. However, it's possible to have a problem at the other end of the object lifecycle: that is, a Python object might still be reachable in memory even though the underlying row has been DELETEd from the database. In most cases we don't keep removed objects around for obvious reasons, but it can happen in caching code, and buildd-manager crashed as a result (in fact while it was still running on Python 2). We had to take extra care to avoid this problem.
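A stripped-down sketch of the shape of that pattern, not Launchpad's actual implementation (the store object and its flush method here are hypothetical stand-ins for whatever ORM session is in use):

    class DatabaseObject(object):
        """Illustrative base class: id is an auto-incrementing primary key."""

        def __init__(self, store):
            self.store = store
            self.id = None  # assigned by the database when the INSERT happens

        def _ensure_id(self):
            if self.id is None:
                # Flush pending SQL so that the primary key becomes known.
                self.store.flush()
            return self.id

        def __eq__(self, other):
            return (
                self.__class__ is other.__class__
                and self._ensure_id() == other._ensure_id()
            )

        def __ne__(self, other):  # still needed explicitly on Python 2
            return not self.__eq__(other)

        def __hash__(self):
            return hash((self.__class__, self._ensure_id()))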
Debian imports crashed on non-UTF-8 filenames

Python 2 has some unfortunate behaviour around passing bytes or Unicode strings (depending on the platform) to shutil.rmtree, and the combination of some porting work and a particular source package in Debian that contained a non-UTF-8 file name caused us to run into this. The fix was to ensure that the argument passed to shutil.rmtree is a str regardless of Python version. We'd actually run into something similar before: it's a subtle porting gotcha, since it's quite easy to end up passing Unicode strings to shutil.rmtree if you're in the process of porting your code to Python 3, and you might easily not notice if the file names in your tests are all encoded using UTF-8.

lazr.restful ETags

We eventually got far enough along that we could switch one of our four appserver machines (we have quite a number of other machines too, but the appservers handle web and API requests) to Python 3 and see what happened. By this point our extensive test suite had shaken out the vast majority of the things that could go wrong, but there was always going to be room for some interesting edge cases.

A member of the Ubuntu kernel team reported that they were seeing an increase in 412 Precondition Failed errors in some of their scripts that use our webservice API. These can happen when you're trying to modify an existing resource: the underlying protocol involves sending an If-Match header with the ETag that the client thinks the resource has, and if this doesn't match the ETag that the server calculates for the resource then the client has to refresh its copy of the resource and try again. We initially thought that this might be legitimate, since it can happen in normal operation if you collide with another client making changes to the same resource, but it soon became clear that something stranger was going on: we were getting inconsistent ETags for the same object even when it was unchanged. Since we'd recently switched a quarter of our appservers to Python 3, that was a natural suspect.

Our lazr.restful package provides the framework for our webservice API, and roughly speaking it generates ETags by serializing objects into some kind of canonical form and hashing the result. Unfortunately the serialization was dependent on the Python version in a few ways, and in particular it serialized lists of strings, such as lists of bug tags, differently: Python 2 used [u'foo', u'bar', u'baz'] where Python 3 used ['foo', 'bar', 'baz']. In lazr.restful 1.0.3 we switched to using JSON for this, removing the Python version dependency and ensuring consistent behaviour between appservers.

Memory leaks

This problem took the longest to solve. We noticed fairly quickly from our graphs that the appserver machine we'd switched to Python 3 had a serious memory leak. Our appservers had always been a bit leaky, but now it wasn't so much "a small hole that we can bail occasionally" as "the boat is sinking rapidly":

[Graph: a serious memory leak]

(Yes, this got in the way of working out what was going on with ETags for a while.) I spent ages messing around with various attempts to fix this. Since only a quarter of our appservers were affected, and we could get by on 75% capacity for a while, it wasn't urgent but it was definitely annoying. After spending some quality time with objgraph, for some time I thought traceback reference cycles might be at fault, and I sent a number of fixes to various upstream projects for those (e.g. zope.pagetemplate). Those didn't help the leaks much though, and after a while it became clear to me that this couldn't be the sole problem: Python has a cyclic garbage collector that will eventually collect reference cycles as long as there are no strong references to any objects in them, although it might not happen very quickly. Something else must be going on.
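To give a flavour of what that sort of investigation involves, here is roughly the kind of objgraph session I mean; this is a generic sketch rather than a transcript, and 'SuspectType' is a hypothetical stand-in for whichever type looks like it is accumulating:

    import gc
    import objgraph

    gc.collect()                    # collect cyclic garbage out of the way first
    objgraph.show_growth(limit=10)  # which types have grown since the last call?

    suspects = objgraph.by_type('SuspectType')  # hypothetical suspect type
    if suspects:
        # Render a graph of what is keeping the most recent instance alive.
        objgraph.show_backrefs(suspects[-1], max_depth=5, filename='backrefs.png')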
Debugging reference leaks in any non-trivial and long-running Python program is extremely arduous, especially with ORMs that naturally tend to end up with lots of cycles and caches. After a while I formed a hypothesis that zope.server might be keeping a strong reference to something, although I never managed to nail it down more firmly than that. This was an attractive theory, as we were already in the process of migrating to Gunicorn for other reasons anyway, and Gunicorn also has a convenient max_requests setting that's good at mitigating memory leaks. Getting this all in place took some time, but once we did we found that everything was much more stable:

[Graph: a rather flat memory graph]

This isn't completely satisfying, as we never quite got to the bottom of the leak itself, and it's entirely possible that we've only papered over it using max_requests: I expect we'll gradually back off on how frequently we restart workers over time to try to track this down. However, pragmatically, it's no longer an operational concern.

Mirror prober HTTPS proxy handling

After we switched our script servers to Python 3, we had several reports of mirror probing failures. (Launchpad keeps lists of Ubuntu archive and image mirrors, and probes them every so often to check that they're reasonably complete and up to date.) This only affected HTTPS mirrors when probed via a proxy server, support for which is a relatively recent feature in Launchpad and involved some code that we never managed to unit-test properly: of course this is exactly the code that went wrong. Sadly I wasn't able to sort out that gap, but at least the fix was simple.

Non-MIME-encoded email headers

As I mentioned above, there were substantial changes in the email package between Python 2 and 3, and indeed between minor versions of Python 3. Our test coverage here is pretty good, but it's an area where it's very easy to have gaps. We noticed that a script that processes incoming email was crashing on messages with headers that were non-ASCII but not MIME-encoded (and indeed then crashing again when it tried to send a notification of the crash!). The only examples of these I looked at were spam, but we still didn't want to crash on them. The fix involved being somewhat more careful about both the handling of headers returned by Python's email parser and the building of outgoing email notifications. This seems to be working well so far, although I wouldn't be surprised to find the odd other incorrect detail in this sort of area.

Failure to handle non-ISO-8859-1 URL-encoded form input

Remember how I said that parsing HTTP form data was thorny? After we finished upgrading all our appservers to Python 3, people started reporting that they couldn't post Unicode comments to bugs. This turned out to happen only if the attempt was made using JavaScript, and was because I hadn't quite managed to get URL-encoded form data working properly with zope.publisher and multipart. The current standard describes the URL-encoded format for form data as "in many ways an aberrant monstrosity", so this was no great surprise. Part of the problem was some very strange choices in zope.publisher dating back to 2004 or earlier, which I attempted to clean up and simplify. The rest was that Python 2's urlparse.parse_qs unconditionally decodes percent-encoded sequences as ISO-8859-1 if they're passed in as part of a Unicode string, so multipart needs to work around this on Python 2.
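A small demonstration of the difference, assuming a UTF-8 'é' submitted as %C3%A9 (illustrative transcripts, not code from multipart or zope.publisher):

    # On Python 2, percent-escapes in a Unicode query string come back
    # decoded as ISO-8859-1, so a UTF-8 'é' turns into mojibake:
    >>> from urlparse import parse_qs
    >>> parse_qs(u'field=%C3%A9')
    {u'field': [u'\xc3\xa9']}

    # On Python 3, urllib.parse.parse_qs decodes percent-escapes as UTF-8
    # by default, giving the expected text:
    >>> from urllib.parse import parse_qs
    >>> parse_qs('field=%C3%A9')
    {'field': ['é']}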
I'm still not completely confident that this is correct in all situations, but at least now that we're on Python 3 everywhere the matrix of cases we need to care about is smaller.

Inconsistent marshalling of Loggerhead's disk cache

We use Loggerhead for providing web browsing of Bazaar branches. When we upgraded one of its two servers to Python 3, we immediately noticed that the one still on Python 2 was failing to read back its revision information cache, which it stores in a database on disk. (We noticed this because it caused a deployment to fail: when we tried to roll out new code to the instance still on Python 2, Nagios checks had already caused an incompatible cache to be written for one branch from the Python 3 instance.) This turned out to be a similar problem to the pickle issue mentioned above, except this one was with marshal, which I didn't think to look for because it's a relatively obscure module mostly used for internal purposes by Python itself; I'm not sure that Loggerhead should really be using it in the first place. The fix was relatively straightforward, complicated mainly by now needing to cope with throwing away unreadable cache data. Ironically, if we'd just gone ahead and taken the nominally riskier path of upgrading both servers at the same time, we might never have had a problem here.

Intermittent bzr failures

Finally, after we upgraded one of our two Bazaar codehosting servers to Python 3, we had a report of intermittent bzr branch hangs. After some digging I found this in our logs:
Traceback (most recent call last):
  ...
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/conch/ssh/channel.py", line 136, in addWindowBytes
    self.startWriting()
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/lazr/sshserver/session.py", line 88, in startWriting
    resumeProducing()
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/internet/process.py", line 894, in resumeProducing
    for p in self.pipes.itervalues():
builtins.AttributeError: 'dict' object has no attribute 'itervalues'
I'd seen this before in our git hosting service: it was a bug in Twisted's Python 3 port, fixed after 20.3.0 but unfortunately after the last release that supported Python 2, so we had to backport that patch. Using the same backport dealt with this.

Onwards!

1 August 2021

François Marier: Time-stretch in Kodi

VLC has a really neat feature which consists of time-stretching audio to allow users to speed up or slow video playback with the [ and ] keys without affecting the pitch of the sound. I recently switched to Kodi as my video player of choice and I was looking for the equivalent feature.

Kodi equivalent

To enable this feature in Kodi, you first need to enable "Sync playback to display" in Settings > Player > Videos. Then map the tempoup and tempodown commands to the same keyboard shortcuts as VLC. In my case, however, I wanted to map these functions to buttons on my Streamzap remote and so I put the following in my ~/.kodi/userdata/keymaps/remote.xml:
  <FullscreenVideo>
    <remote>
      <pageminus>PlayerControl(tempodown)</pageminus>
      <pageplus>PlayerControl(tempoup)</pageplus>
    </remote>
  </FullscreenVideo>
which allows me to press the Ch + and Ch - buttons on the remote to adjust the speed while the video is playing (in full-screen mode only, not with the menu displayed).

Examples

Here are three ways I use this functionality:
  • I set it to 0.9x for movies in languages I'm not totally proficient in.
  • I set it to 1.1x for almost everything since the difference is not especially perceptible, but it still allows me to watch 10% more movies in the same amount of time :)
  • I set it to 1.2x for Rick & Morty because it makes Rick even more hilariously reckless and impatient.
Unfortunately, I haven't found a way to set the default tempo value. The closest setting I could find is the one which allows you to set the maximum tempo value (maxtempo). If you know of a way, please leave a comment!
