Search Results: "jak"

23 February 2022

Joey Hess: announcing zephyr-copilot

I recently learned about the Zephyr Project, which is a rather neat embedded OS for devices too small to run Linux. This led me to wonder if I could adapt arduino-copilot to target Zephyr, and so be able to program any of the 350+ boards it supports using Haskell. At the same time I had an opportunity to give a talk at the Houston Functional Programmers group. On February 1st I decided to give that talk about arduino-copilot. That left 2 weeks to buy some hardware supported by Zephyr and port arduino-copilot to it. The result is zephyr-copilot, and I was able to demo it during my talk. This example can be used with any of 293 different boards, and will blink an on-board LED:
module Examples.Blink.Demo where
import Copilot.Zephyr.Board.Generic
main :: IO ()
main = zephyr $ do
        led0 =: blinking
        delay =: MilliSeconds (constant 100)
Doing much more than that needs a board-specific module to set up GPIO pins etc. So far I have only written those for a couple of boards I have, but they are fairly easy to write. I'd be happy to help anyone who wants to contribute one. Due to the time constraints I have not implemented serial port support, or PWM or ADC yet, although all should be fairly easy. Zephyr also has no end of other capabilities, from networking to file systems to sensors, that could perhaps be supported in zephyr-copilot. My talk has now been published on YouTube. I really enjoyed presenting again for the first time in 4 years(!), and to a very nice group of people. Thanks to Claude Rubinson for his persistence in getting me to give a talk.
Development of zephyr-copilot was sponsored by Mark Reidenbach, Erik Bjäreholt, Jake Vosloo, and Graham Spencer on Patreon.

9 December 2021

David Kalnischkies: APT for Advent of Code

Screenshot of my Advent of Code 2021 status page as of today Advent of Code 2021
Advent of Code, for those not in the know, is a yearly Advent calendar (since 2015) of coding puzzles many people participate in for a plethora of reasons ranging from speed coding to code golf, with stops at learning a new language or practicing already known ones. I usually write boring C++, but any language and then some can be used. There are reports of people implementing it in hardware, solving them by hand on paper or using Microsoft Excel, so, after solving a puzzle the easy way yesterday, this time I thought: CHALLENGE ACCEPTED! as I somehow remembered an old 2008 article about solving Sudoku with aptitude (Daniel Burrows via archive.org, as the blog is long gone) and the same good old "a package management system that can solve [puzzles] based on package dependency rules is not something that I think would be useful or worth having" (Russell Coker).

Day 8 has a rather lengthy problem description and can reasonably be approached in a bunch of different ways. One unreasonable approach might be to massage the problem description into Debian packages and let apt help me solve the problem (specifically Part 2, which you unlock by solving Part 1. You can do that now, I will wait here.) Be warned: I am spoiling Part 2 in the following, so solve it yourself first if you are interested.

I will try to be reasonably consistent in naming things in the following and so have chosen: The input we get are lines like acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf. The letters are wires, mixed up and connected to the segments of the displays: a group of these letters is hence a digit (the first 10) which represents one of the digits 0 to 9, and (after the pipe) the four displays which match (after sorting) one of the digits, which means this display shows this digit. We are interested in which digits are displayed to solve the puzzle. To help us we also know which segments form which digit, we just don't know the wiring in the back. So we should identify which wire maps to which segment!

We are introducing the packages wire-X-connects-to-Y for this, which each provide & conflict1 with the virtual packages segment-Y and wire-X-connects. The latter ensures that for a given wire we can only pick one segment, and the former ensures that not multiple wires map onto the same segment. As an example: wire a's possible association with segment b is described as:
Package: wire-a-connects-to-b
Provides: segment-b, wire-a-connects
Conflicts: segment-b, wire-a-connects
Note that we do not know if this is true! We generate packages for all possible (and then some) combinations and hope dependency resolution will solve the problem for us. So don't worry, the hard part will be done by apt, we just have to provide all (im)possibilities! What we need now is to translate the 10 digits (and 4 outputs) from something like acedgfb into digit-0-is-eight and not, say, digit-0-is-one. A clever solution might realize that a one consists only of two segments, so a digit wiring up seven segments can not be a 1 (and must be 8 instead), but again we aren't here to be clever: we want apt to figure that out for us! So what we do is simply making every digit-0-is-N (im)possible choice available as a package and apply constraints: a given digit-N can only display one number and each N is unique as digit, so for both we deploy Provides & Conflicts again. We also need to reason about the segments in the digits: each of the digit packages gets Depends on wire-X-connects-to-Y, where X is each possible wire (e.g. acedgfb) and Y each segment forming the digit (e.g. cf for one). The different choices for X are or'ed together, so that either of them satisfies the Y. We know something else too, though: the segments which are not used by the digit can not be wired to any of the Xs. We model this with Conflicts on wire-X-connects-to-Y. As an example: if digit-0's acedgfb would be displaying a one (remember, it can't) the following package would be installable:
Package: digit-0-is-one
Version: 1
Depends: wire-a-connects-to-c | wire-c-connects-to-c | wire-e-connects-to-c | wire-d-connects-to-c | wire-g-connects-to-c | wire-f-connects-to-c | wire-b-connects-to-c,
         wire-a-connects-to-f | wire-c-connects-to-f | wire-e-connects-to-f | wire-d-connects-to-f | wire-g-connects-to-f | wire-f-connects-to-f | wire-b-connects-to-f
Provides: digit-0, digit-is-one
Conflicts: digit-0, digit-is-one,
  wire-a-connects-to-a, wire-c-connects-to-a, wire-e-connects-to-a, wire-d-connects-to-a, wire-g-connects-to-a, wire-f-connects-to-a, wire-b-connects-to-a,
  wire-a-connects-to-b, wire-c-connects-to-b, wire-e-connects-to-b, wire-d-connects-to-b, wire-g-connects-to-b, wire-f-connects-to-b, wire-b-connects-to-b,
  wire-a-connects-to-d, wire-c-connects-to-d, wire-e-connects-to-d, wire-d-connects-to-d, wire-g-connects-to-d, wire-f-connects-to-d, wire-b-connects-to-d,
  wire-a-connects-to-e, wire-c-connects-to-e, wire-e-connects-to-e, wire-d-connects-to-e, wire-g-connects-to-e, wire-f-connects-to-e, wire-b-connects-to-e,
  wire-a-connects-to-g, wire-c-connects-to-g, wire-e-connects-to-g, wire-d-connects-to-g, wire-g-connects-to-g, wire-f-connects-to-g, wire-b-connects-to-g
Repeat such stanzas for all 10 possible digits for digit-0 and then repeat this for all the other nine digit-N. We produce pretty much the same stanzas for display-0(-is-one), just that we omit the second Provides & Conflicts from above (digit-is-one), as in the display digits can be repeated. The rest is the same (modulo using display instead of digit as name, of course). Lastly we create a package dubbed solution which depends on all 10 digit-N and 4 display-N, all of them virtual packages apt will have to choose an installable provider from, and we are nearly done! The resulting Packages file2 we can give to apt while requesting to install the package solution, and it will spit out not only the display values we are interested in but also which number each digit represents and which wire is connected to which segment. Nifty!
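As an illustration of the generation step, here is a minimal Python sketch (my own illustration, not the actual skip-aoc testcase) that emits the wire-X-connects-to-Y stanzas described above:

# Minimal sketch: emit the 49 wire-X-connects-to-Y stanzas from the post.
# Each stanza provides & conflicts with segment-Y and wire-X-connects, so
# every wire maps to exactly one segment and no segment is claimed twice.
WIRES = "abcdefg"
SEGMENTS = "abcdefg"

def wire_stanzas() -> str:
    stanzas = []
    for x in WIRES:
        for y in SEGMENTS:
            stanzas.append(
                f"Package: wire-{x}-connects-to-{y}\n"
                f"Version: 1\n"
                f"Provides: segment-{y}, wire-{x}-connects\n"
                f"Conflicts: segment-{y}, wire-{x}-connects\n"
            )
    return "\n".join(stanzas)

print(wire_stanzas())

The digit and display stanzas can be generated along the same lines from the sorted wire groups of each input line. Running the real thing looks like this: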
$ ./skip-aoc 'acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf'
[…]
The following additional packages will be installed:
  digit-0-is-eight digit-1-is-five digit-2-is-two digit-3-is-three
  digit-4-is-seven digit-5-is-nine digit-6-is-six digit-7-is-four
  digit-8-is-zero digit-9-is-one display-1-is-five display-2-is-three
  display-3-is-five display-4-is-three wire-a-connects-to-c
  wire-b-connects-to-f wire-c-connects-to-g wire-d-connects-to-a
  wire-e-connects-to-b wire-f-connects-to-d wire-g-connects-to-e
[…]
0 upgraded, 22 newly installed, 0 to remove and 0 not upgraded.
We are only interested in the numbers on the display though, so grepping the apt output (-V is our friend here) a bit should let us end up with what we need, as calculating3 is (unsurprisingly) not a strong suit of our package relationship language, so we need a few shell commands to help us with the rest.
$ ./skip-aoc 'acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf' -qq
5353
I have written the skip-aoc script as a testcase for apt, so to run it you need to place it in /path/to/source/of/apt/test/integration and build apt first, but that is only due to my laziness. We could write a standalone script interfacing with the system-installed apt directly, and in any apt version since ~2011. To hand in the solution for the puzzle we just need to run this on each line of the input (~200 lines) and add all numbers together. In other words: Behold this beautiful shell one-liner: parallel -I '{}' ./skip-aoc '{}' -qq < input.txt | paste -s -d'+' - | bc (You may want to run parallel with -P to properly grill your CPU, as that process can take a while otherwise; it still does anyhow, as I haven't optimized it at all and the testing framework does a lot of pointless things wasting time here, but we aren't aiming for the leaderboard, so…)

That might, or even likely will, fail though, as I have so far omitted a not unimportant detail: the default APT resolver is not able to solve this puzzle with the given problem description, we need another solver! Thankfully that is as easy as installing apt-cudf (and with it aspcud), which the script is using via --solver aspcud to make apt hand over the puzzle to a "proper" solver (or better: a solver who is supposed to be good at "answer set" questions). The buildds are using this for experimental and/or backports builds and also for installability checks via dose3 btw, so you might have encountered it before.

Be careful however: just because aspcud can solve this puzzle doesn't mean it is a good default resolver for your day-to-day apt. One of the reasons the default resolver has such a hard time solving this here is that or-groups usually have an order in which the first is preferred over every later option, and so forth. This is of no concern here as all these alternatives will collapse to a single solution anyhow, but if there are multiple viable solutions (which is often the case) picking the "wrong" alternative can have bad consequences. A classic example would be exim4 | postfix | nullmailer. They are all MTAs but behave very differently. The non-default solvers also tend to lack certain features like keeping track of auto-installed packages or installing Recommends/Suggests. That said, Julian is working on another solver as I write this which might deal with more of these issues. And lastly: I am also relatively sure that with a bit of massaging the default resolver could be made to understand the problem, but I can't play all day with this, maybe some other day.

Disclaimer: Originally posted in the daily megathread on reddit, the version here is just slightly better understandable, as I have hopefully renamed all the packages to have more conventional names and tried to explain what I am actually doing. No cows were harmed in this improved version, either.

  1. If you would upload those packages somewhere, it would be good style to add Replaces as well, but it is of minor concern for apt so I am leaving them out here for readability.
  2. We have generated 49 wires, 100 digits, 40 display and 1 solution package for a grand total of 190 packages. We are also making use of a few purely virtual ones, but that doesn't add up to many packages in total. So few packages are practically child's play for apt, given it usually deals with a thousand times more. The installability of those packages tends to be a lot worse though, as only 22 of the 190 packages we generated can (and will) be installed. Britney will hate you if your uploads to Debian unstable are even remotely as bad as this.
  3. What we could do is introduce 10,000 packages which denote every possible display value from 0000 to 9999. We would then need to duplicate our 10,190 packages for each line (namespace them) and then add a bit more than a million packages with the correct dependencies for summing up the individual packages, for apt to be able to display the final result all by itself. That would take a while though, as at that point we are looking at working with ~22 million packages with a gazillion dependencies, probably overworking every solver we would throw at it; a bit of shell glue seems the better option for now.
This article was written by David Kalnischkies on apt-get a life and republished here by pulling it from a syndication feed. You should check there for updates and more articles about apt and EDSP.

21 November 2021

Julian Andres Klode: APT Z3 Solver Basics

Z3 is a theorem prover developed at Microsoft Research and available as a dynamically linked C++ library in Debian-based distributions. While the library is a whopping 16 MB and the solver is a tad slow, its permissive licensing and the number of tactics offered give it huge potential for use in solving dependencies in a wide variety of applications. Z3 does not need normalized formulas, but offers higher-level abstractions like atmost, atleast and implies, which we will make use of together with boolean variables to translate the dependency problem to a form Z3 understands. In this post, we'll see how we can apply Z3 to the dependency resolution in APT. We'll only discuss the basics here; a future post will explore optimization criteria and Recommends.

Translating the universe

APT's package universe consists of 3 relevant things: packages (the tuple of name and architecture), versions (basically a .deb), and dependencies between versions. While we could translate our entire universe to Z3 problems, we instead will construct a root set from packages that were manually installed and versions marked for installation, and then build the transitive root set from it by translating all versions reachable from the root set. For each package P in the transitive root set, we create a boolean literal P. We then translate each version P1, P2, and so on. Translating a version means building a boolean literal for it, e.g. P1, and then translating the dependencies as shown below. We now need to create two more clauses to satisfy the basic requirements for debs:
  1. If a version is installed, the package is installed; and vice versa. We can encode this requirement for P above as P == atleast({P1, P2}, 1).
  2. There can only be one version installed. We add an additional constraint of the form atmost({P1, P2}, 1).
We also encode the requirements of the operation.
  1. For each package P that is manually installed, add a constraint P.
  2. For each version V that is marked for install, add a constraint V.
  3. For each package P that is marked for removal, add a constraint !P.

Dependencies

Packages in APT have dependencies of two basic forms: Depends and Conflicts, as well as variations like Breaks (identical to Conflicts in solving terms) and Recommends (soft Depends) - we'll ignore those for now. We'll discuss Conflicts in the next section. Let's take a basic dependency list: A Depends: X | Y, Z. To represent that dependency, we expand each name to a list of versions that can satisfy the dependency, for example X1 | X2 | Y1, Z1. Translating this dependency list to our Z3 solver, we create boolean variables X1, X2, Y1, Z1 and define two rules:
  1. A implies atleast({X1, X2, Y1}, 1)
  2. A implies atleast({Z1}, 1)
If there actually was nothing that satisfied the Z requirement, we'd have added a rule not A. It would be possible to simply not tell Z3 about the version at all as an optimization, but that adds more complexity, and the not A constraint should not cause too many problems.

Conflicts

Conflicts cannot have or in them. A dependency B Conflicts: X, Y means that only one of B, X, and Y can be installed. We can directly encode this in Z3 by using the constraint atmost({B, X, Y}, 1). This is an optimized encoding of the constraint: we could have encoded each conflict in the form !B or !X, !B or !Y, and so on. Usually this leads to worse performance as it introduces additional clauses.

Complete example

Let's assume we start with an empty install and want to install the package a below.
Package: a
Version: 1
Depends: c | b
Package: b
Version: 1
Package: b
Version: 2
Conflicts: x
Package: d
Version: 1
Package: x
Version: 1
The translation in Z3 rules looks like this:
  1. Package rules for a:
    1. a == atleast({a1}, 1) - package is installed iff one version is
    2. atmost({a1}, 1) - only one version may be installed
    3. a - a must be installed
  2. Dependency rules for a
    1. implies(a1, atleast({b2, b1}, 1)) - the translated dependency above. Note that c is gone, it's not reachable.
  3. Package rules for b:
    1. b == atleast({b1, b2}, 1) - package is installed iff one version is
    2. atmost({b1, b2}, 1) - only one version may be installed
  4. Dependencies for b (= 2):
    1. atmost({b2, x1}, 1) - the conflicts between x and b (= 2) above
  5. Package rules for x:
    1. x == atleast({x1}, 1) - package is installed iff one version is
    2. atmost({x1}, 1) - only one version may be installed
The package d is not translated, as it is not reachable from the root set {a1}; the transitive root set is {a1, b1, b2, x1}.
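For the record, the same translation can be written down almost literally with Z3's Python bindings. A minimal sketch, assuming the z3-solver package (my illustration, not APT code), with variable names mirroring the rules above:

from z3 import AtLeast, AtMost, Bool, Implies, Solver, is_true, sat

a, b, x = Bool('a'), Bool('b'), Bool('x')                        # package literals
a1, b1, b2, x1 = Bool('a1'), Bool('b1'), Bool('b2'), Bool('x1')  # version literals

s = Solver()
# Package rules: a package is installed iff one of its versions is,
# and at most one version may be installed.
s.add(a == AtLeast(a1, 1), AtMost(a1, 1))
s.add(b == AtLeast(b1, b2, 1), AtMost(b1, b2, 1))
s.add(x == AtLeast(x1, 1), AtMost(x1, 1))
s.add(a)                                # operation: a must be installed
s.add(Implies(a1, AtLeast(b2, b1, 1)))  # dependency "c | b" (c is unreachable)
s.add(AtMost(b2, x1, 1))                # conflict between b (= 2) and x

if s.check() == sat:
    m = s.model()
    installed = [v for v in (a1, b1, b2, x1)
                 if is_true(m.eval(v, model_completion=True))]
    print(installed)  # a1 plus exactly one version of b

Checking the solver then yields a model that installs a1 together with one version of b, as expected.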

Next iteration: Optimization

We have now constructed the basic set of rules that allows us to solve our dependency problems (equivalent to SAT); however, it might lead to suboptimal solutions where it removes automatically installed packages, or installs more packages than necessary, to name a few examples. In our next iteration, we have to look at introducing optimization: for example, have the minimum number of removals, the minimal number of changed packages, or satisfy as many Recommends as possible. We will also look at the upgrade problem (upgrade as many packages as possible) and the autoremove problem (remove as many automatically installed packages as possible).

22 October 2021

Enrico Zini: Scanning for imports in Python scripts

I had to package a nontrivial Python codebase, and I needed to put dependencies in setup.py. I could do git grep -h import | sort -u, then review the output by hand, but I lacked the motivation for it. Much better to take a stab at solving the general problem. The result is at https://github.com/spanezz/python-devel-tools. One fun part is scanning a directory tree, using ast to find import statements scattered around the code:
import ast
import logging
import os
import stat
from typing import Set, TextIO

log = logging.getLogger()
# dirfd_open and re_python_shebang are helpers defined elsewhere in the project.


class Scanner:
    def __init__(self):
        self.names: Set[str] = set()
    def scan_dir(self, root: str):
        for dirpath, dirnames, filenames, dir_fd in os.fwalk(root):
            for fn in filenames:
                if fn.endswith(".py"):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                        self.scan_file(fd, os.path.join(dirpath, fn))
                st = os.stat(fn, dir_fd=dir_fd)
                if st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                        try:
                            lead = fd.readline()
                        except UnicodeDecodeError:
                            continue
                        if re_python_shebang.match(lead):
                            fd.seek(0)
                            self.scan_file(fd, os.path.join(dirpath, fn))
    def scan_file(self, fd: TextIO, pathname: str):
        log.info("Reading file %s", pathname)
        try:
            tree = ast.parse(fd.read(), pathname)
        except SyntaxError as e:
            log.warning("%s: file cannot be parsed", pathname, exc_info=e)
            return
        self.scan_tree(tree)
    def scan_tree(self, tree: ast.AST):
        for stm in tree.body:
            if isinstance(stm, ast.Import):
                for alias in stm.names:
                    if not isinstance(alias.name, str):
                        print("NAME", repr(alias.name), stm)
                    self.names.add(alias.name)
            elif isinstance(stm, ast.ImportFrom):
                if stm.module is not None:
                    self.names.add(stm.module)
            elif hasattr(stm, "body"):
                self.scan_tree(stm)
Another fun part is grouping the imported module names by where in sys.path they have been found:
    scanner = Scanner()
    scanner.scan_dir(args.dir)
    sys.path.append(args.dir)
    by_sys_path: Dict[str, List[str]] = collections.defaultdict(list)
    for name in sorted(scanner.names):
        spec = importlib.util.find_spec(name)
        if spec is None or spec.origin is None:
            by_sys_path[""].append(name)
        else:
            for sp in sys.path:
                if spec.origin.startswith(sp):
                    by_sys_path[sp].append(name)
                    break
            else:
                by_sys_path[spec.origin].append(name)
    for sys_path, names in sorted(by_sys_path.items()):
        print(f" sys_path or 'unidentified' :")
        for name in names:
            print(f"   name ")
An example. It's kind of nice how it can at least tell apart stdlib modules so one doesn't need to read through those:
$ ./scan-imports ~/himblick
unidentified:
  changemonitor
  chroot
  cmdline
  mediadir
  player
  server
  settings
  static
  syncer
  utils
~/himblick:
  himblib.cmdline
  himblib.host_setup
  himblib.player
  himblib.sd
/usr/lib/python3.9:
  __future__
  argparse
  asyncio
  collections
  configparser
  contextlib
  datetime
  io
  json
  logging
  mimetypes
  os
  pathlib
  re
  secrets
  shlex
  shutil
  signal
  subprocess
  tempfile
  textwrap
  typing
/usr/lib/python3/dist-packages:
  asyncssh
  parted
  progressbar
  pyinotify
  setuptools
  tornado
  tornado.escape
  tornado.httpserver
  tornado.ioloop
  tornado.netutil
  tornado.web
  tornado.websocket
  yaml
built-in:
  sys
  time
Maybe such a tool already exists and works much better than this? From a quick search I didn't find it, and it was fun to (re)invent it. Updates: Jakub Wilk pointed me to an old python-modules script that finds Debian dependencies. The AST scanning code should be refactored to use ast.NodeVisitor.
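For reference, a minimal sketch of such a NodeVisitor-based scanner (illustrative names, not code from the repository); since generic_visit walks all child nodes, it also finds imports nested in places the body-attribute check above misses:

import ast
from typing import Set

class ImportVisitor(ast.NodeVisitor):
    # Collect imported module names from a parsed tree.
    def __init__(self) -> None:
        self.names: Set[str] = set()

    def visit_Import(self, node: ast.Import) -> None:
        for alias in node.names:
            self.names.add(alias.name)

    def visit_ImportFrom(self, node: ast.ImportFrom) -> None:
        if node.module is not None:
            self.names.add(node.module)

visitor = ImportVisitor()
with open("example.py") as fd:  # any Python source file
    visitor.visit(ast.parse(fd.read(), "example.py"))
print(sorted(visitor.names))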

5 July 2021

Bálint Réczey: Hello zstd compressed .debs in Ubuntu!

When Julian Andres Klode and I added initial Zstandard compression support to Ubuntu's APT and dpkg in Ubuntu 18.04 LTS, we planned on getting the changes accepted into Debian quickly and making Ubuntu 18.10 the first release where the new compression could speed up package installations and upgrades. Well, it took slightly longer than that. Since then many other packages have been updated to support zstd-compressed packages, and read-only compression has been back-ported to the 16.04 Xenial LTS release, too, on Ubuntu's side. In Debian, zstd support is available now in APT, debootstrap and reprepro (thanks Dimitri!). It is still under review for inclusion in Debian's dpkg (BTS bug 892664). Given that there is sufficient archive-wide support for zstd, Ubuntu is switching to zstd-compressed packages in Ubuntu 21.10, the current development release. Please welcome hello/2.10-2ubuntu3, the first zstd-compressed Ubuntu package, which will be followed by many others built with dpkg (>= 1.20.9ubuntu2), and enjoy the speed!

20 June 2021

Julian Andres Klode: Migrating away from apt-key

This is an edited copy of an email I sent to provide guidance to users of apt-key as to how to handle things in a post-apt-key world. The manual page already provides all you need to know for replacing apt-key add usage:
Note: Instead of using this command a keyring should be placed directly in the /etc/apt/trusted.gpg.d/ directory with a descriptive name and either gpg or asc as file extension
So it's kind of surprising people need step-by-step instructions for how to copy/download a file into a directory. I'll also discuss the alternative security-snakeoil approach with signed-by that's become popular. Maybe we should not have added signed-by; people seem to forget that debs still run maintainer scripts as root. Aside from this email, Debian users should look into extrepo, which manages curated external repositories for you.

Direct translation

Assume you currently have:
wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -
To translate this directly for bionic and newer, you can use:
sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.asc https://myrepo.example/myrepo.asc
or to avoid downloading as root:
wget -qO- https://myrepo.example/myrepo.asc | sudo tee -a /etc/apt/trusted.gpg.d/myrepo.asc
Older (and all) releases only support unarmored files with an extension .gpg. If you care about them, provide one, and use
sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.gpg https://myrepo.example/myrepo.gpg
Some people will tell you to download the .asc and pipe it to gpg --dearmor, but gpg might not be installed, so really, just offer a .gpg one instead that is supported on all systems. wget might not be available everywhere so you can use apt-helper:
sudo /usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /etc/apt/trusted.gpg.d/myrepo.asc
or, to avoid downloading as root:
/usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /tmp/myrepo.asc && sudo mv /tmp/myrepo.asc /etc/apt/trusted.gpg.d

Pretending to be safer by using signed-by

People say it's good practice to not use trusted.gpg.d and instead install the file elsewhere, and then refer to it from the sources.list entry by using signed-by=<path to the file>. So this looks a lot safer, because now your key can't sign other unrelated repositories. In practice, the security increase is minimal, since package maintainer scripts run as root anyway. But I guess it's better for publicity :) As an example, here are the instructions to install signal-desktop from signal.org. As mentioned, the gpg --dearmor use in there is not a good idea, and I'd personally not tell people to modify /usr as it's supposed to be managed by the package manager, but we don't have an /etc/apt/keyrings or similar at the moment; it's fine though if the keyring is installed by the package. You can also just add the file there as a starting point, and then install a keyring package overriding it (pretend there is a signal-desktop-keyring package below that would override the .gpg we added).
# NOTE: These instructions only work for 64 bit Debian-based
# Linux distributions such as Ubuntu, Mint etc.
# 1. Install our official public software signing key
wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null
# 2. Add our repository to your list of repositories
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' |\
  sudo tee -a /etc/apt/sources.list.d/signal-xenial.list
# 3. Update your package database and install signal
sudo apt update && sudo apt install signal-desktop
I do wonder why they do wget | gpg --dearmor, pipe that into the file and then cat | sudo tee it, instead of having that all in one pipeline. Maybe they want nicer progress reporting.

Scenario-specific guidance

We have three scenarios:

For system image building, shipping the key in /etc/apt/trusted.gpg.d seems reasonable to me; you are the vendor, sort of, so it can be globally trusted.

Chrome-style debs and repository config debs: If you ship a deb, embedding the sources.list.d snippet (calling it $myrepo.list) and shipping a $myrepo.gpg in /usr/share/keyrings is the best approach. Whether you ship that in product debs aka vscode/chromium, or provide a repository configuration deb (let's call it myrepo-repo.deb) and then tell people to run apt update followed by apt install <package inside the repo>, depends on how many packages are in the repo, I guess.

Manual instructions (signal style): The third case, where you tell people to run wget themselves, I find tricky. As we see in signal, just stuffing keyring files into /usr/share/keyrings is popular, despite /usr being supposed to be managed by the package manager. We don't have another dir inside /etc (or /usr/local), so it's hard to suggest something else. There's no significant benefit from actually using signed-by, so it's kind of extra work for little gain, though.

Addendum: Future work

This part is new, just for this blog post. Let's look at upcoming changes and how they make things easier.

Bundled .sources files

Assuming I get my merge request merged, the next version of APT (2.4/2.3.something) will do away with all the complexity and allow you to embed the key directly into a deb822 .sources file (which have been available for some time now):
Types: deb
URIs: https://myrepo.example/ https://myotherrepo.example/
Suites: stable not-so-stable
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mDMEYCQjIxYJKwYBBAHaRw8BAQdAD/P5Nvvnvk66SxBBHDbhRml9ORg1WV5CvzKY
 CuMfoIS0BmFiY2RlZoiQBBMWCgA4FiEErCIG1VhKWMWo2yfAREZd5NfO31cFAmAk
 IyMCGyMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQREZd5NfO31fbOwD6ArzS
 dM0Dkd5h2Ujy1b6KcAaVW9FOa5UNfJ9FFBtjLQEBAJ7UyWD3dZzhvlaAwunsk7DG
 3bHcln8DMpIJVXht78sL
 =IE0r
 -----END PGP PUBLIC KEY BLOCK-----
Then you can just provide a .sources file to users, they place it into sources.list.d, and everything magically works. We'll probably add a nice apt add-source command for it, I guess. Well, python-apt's aptsources package still does not support deb822 sources, and never will; we'll need an aptsources2 for that for backwards-compatibility reasons, and then port software-properties and other users to it.

OpenPGP vs aptsign

We do have a better, tighter replacement for gpg in the works which uses Ed25519 keys to sign Release files. It's temporarily named aptsign, but it's a generic signer for single-section deb822 files, similar to signify/minisign. We believe that this solves the security nightmare that our OpenPGP integration is, while reducing complexity at the same time. Keys are much shorter, so the bundled sources file above will look much nicer.
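To illustrate the primitive involved, here is a minimal Python sketch of Ed25519 signing and verification using the cryptography library; this only shows the cryptographic building block, not aptsign's actual key or signature format:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()       # 32-byte private key
with open("Release", "rb") as fd:        # file to sign, e.g. a Release file
    data = fd.read()
signature = key.sign(data)               # 64-byte detached signature

public = key.public_key()
public.verify(signature, data)           # raises InvalidSignature on mismatch
print("signature ok,", len(signature), "bytes")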

10 June 2021

Vincent Bernat: Serving WebP & AVIF images with Nginx

WebP and AVIF are two image formats for the web. They aim to produce smaller files than JPEG and PNG. They both support lossy and lossless compression, as well as alpha transparency. WebP was developed by Google and is a derivative of the VP8 video format.1 It is supported on most browsers. AVIF is using the newer AV1 video format to achieve better results. It is supported by Chromium-based browsers and has experimental support for Firefox.2

Converting and optimizing images

For this blog, I am using the following shell snippets to convert and optimize JPEG and PNG images. Skip to the next section if you are only interested in the Nginx setup.

JPEG images

JPEG images are converted to WebP using cwebp.
find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -q 84 -af '{}' -o '{}'.webp
They are converted to AVIF using avifenc from libavif:
find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      avifenc --codec aom --yuv 420 --min 20 --max 25 '{}' '{}'.avif
Then, they are optimized using jpegoptim built with Mozilla's improved JPEG encoder, via Nix. This is one reason I love Nix.
jpegoptim=$(nix-build --no-out-link \
      -E 'with (import <nixpkgs> {}); jpegoptim.override { libjpeg = mozjpeg; }')
find media/images -type f -name '*.jpg' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      ${jpegoptim}/bin/jpegoptim --max=84 --all-progressive --strip-all

PNG images

PNG images are down-sampled to an 8-bit RGBA palette using pngquant. The conversion reduces file sizes significantly while being mostly invisible.
find media/images -type f -name '*.png' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      pngquant --skip-if-larger --strip \
               --quiet --ext .png --force
Then, they are converted to WebP with cwebp in lossless mode:
find media/images -type f -name '*.png' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -z 8 '{}' -o '{}'.webp
No conversion is done to AVIF: lossless compression is not as efficient as pngquant and lossy compression is only marginally better than what I get with WebP.

Keeping only the smallest files

I am only keeping WebP and AVIF images if they are at least 10% smaller than the original format: decoding is usually faster for JPEG and PNG, and JPEG images can be decoded progressively.3
for f in media/images/**/*.{webp,avif}; do
  orig=$(stat --format %s ${f%.*})
  new=$(stat --format %s $f)
  (( orig*0.90 > new )) || rm $f
done
I only keep AVIF images if they are smaller than WebP.
for f in media/images/**/*.avif; do
  [[ -f ${f%.*}.webp ]] || continue
  orig=$(stat --format %s ${f%.*}.webp)
  new=$(stat --format %s $f)
  (( $orig > $new )) || rm $f
done
We can compare how many images are kept when converted to WebP or AVIF:
printf "     %10s %10s %10s\n" Original WebP AVIF
for format in png jpg; do
  printf " $ format:u  %10s %10s %10s\n" \
    $(find media/images -name "*.$format"   wc -l) \
    $(find media/images -name "*.$format.webp"   wc -l) \
    $(find media/images -name "*.$format.avif"   wc -l)
done
AVIF is better than MozJPEG for most JPEG files while WebP beats MozJPEG only for one file out of two:
       Original       WebP       AVIF
 PNG         64         47          0
 JPG         83         40         74

Further reading

I didn't detail my choices for quality parameters and there is not much science in it. Here are two resources providing more insight on AVIF:

Serving WebP & AVIF with Nginx

To serve WebP and AVIF images, there are two possibilities:
  1. use <picture> to let the browser pick the format it supports, or
  2. use content negotiation to let the server send the best-supported format.
I use the second approach. It relies on inspecting the Accept HTTP header in the request. For Chrome, it looks like this:
Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8
I configure Nginx to serve the AVIF image, then the WebP image, and to fall back to the original JPEG/PNG image depending on what the browser advertises:4
http {
  map $http_accept $webp_suffix {
    default        "";
    "~image/webp"  ".webp";
  }
  map $http_accept $avif_suffix {
    default        "";
    "~image/avif"  ".avif";
  }
}

server {
  # […]
  location ~ ^/images/.*\.(png|jpe?g)$ {
    add_header Vary Accept;
    try_files $uri$avif_suffix$webp_suffix $uri$avif_suffix $uri$webp_suffix $uri =404;
  }
}
For example, let's suppose the browser requests /images/ont-box-orange@2x.jpg. If it supports WebP but not AVIF, $webp_suffix is set to .webp while $avif_suffix is set to the empty string. The server tries to serve the first existing file in this list:
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
If the browser supports both AVIF and WebP, Nginx walks the following list:
  • /images/ont-box-orange@2x.jpg.avif.webp (it never exists)
  • /images/ont-box-orange@2x.jpg.avif
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
Eugene Lazutkin explains in more detail how this works. I have only presented a variation of his setup supporting both WebP and AVIF.
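To make the negotiation explicit, here is a small Python sketch (an illustration only, not part of the Nginx setup) of what the map and try_files directives above compute:

import os

def pick_image(uri: str, accept: str, root: str = "/var/www") -> str:
    # Mirror the two map blocks: a suffix is set only if the client accepts it.
    avif = ".avif" if "image/avif" in accept else ""
    webp = ".webp" if "image/webp" in accept else ""
    # Mirror try_files: the first existing candidate wins.
    for candidate in (uri + avif + webp, uri + avif, uri + webp, uri):
        if os.path.exists(root + candidate):
            return candidate
    raise FileNotFoundError(uri)  # corresponds to =404

# e.g. pick_image("/images/ont-box-orange@2x.jpg",
#                 "image/avif,image/webp,image/apng,image/*,*/*;q=0.8")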

  1. VP8 is only used for lossy compression. Lossless compression is using an unrelated format.
  2. Firefox support was scheduled for Firefox 86, but because of the lack of proper color space support, it is still not enabled by default.
  3. Progressive decoding is not planned for WebP but could be implemented using low-quality thumbnail images for AVIF. See this issue for a discussion.
  4. The Vary header ensures an intermediary cache (a proxy or a CDN) checks the Accept header before using a cached response. Internet Explorer has trouble with this header and may not be able to cache the resource properly. There is a workaround, but Internet Explorer's market share is now so small that it is pointless to implement it.

27 May 2021

Michael Prokop: What to expect from Debian/bullseye #newinbullseye

Bullseye Banner, Copyright 2020 Juliette Taka

Debian v11 with codename bullseye is supposed to be released as the new stable release soon-ish (let's hope for June 2021! :)). Similar to what we had with #newinbuster and previous releases, now it's time for #newinbullseye! I was the driving force at several of my customers to be well prepared for bullseye before its freeze, and since then we're on a good track there overall. In my opinion, Debian's release team did (and still does) a great job; I'm very happy about how unblock requests (not only mine but also ones I kept an eye on) were handled so far. As usual with major upgrades, there are some things to be aware of, and hereby I'm starting my public notes on bullseye that might also be useful for other folks. My focus is primarily on server systems, looking at things from a sysadmin perspective.

Further readings

Of course start with taking a look at the official Debian release notes; make sure to especially go through "What's new in Debian 11" + "Issues to be aware of for bullseye". Chris published notes on upgrading to Debian bullseye, and anarcat published upgrade notes for bullseye as well.

Package versions

As a starting point, let's look at some selected packages and their versions in buster vs. bullseye as of 2021-05-27 (mainly having amd64 in mind):
Package buster/v10 bullseye/v11
ansible 2.7.7 2.10.8
apache 2.4.38 2.4.46
apt 1.8.2.2 2.2.3
bash 5.0 5.1
ceph 12.2.11 14.2.20
docker 18.09.1 20.10.5
dovecot 2.3.4 2.3.13
dpkg 1.19.7 1.20.9
emacs 26.1 27.1
gcc 8.3.0 10.2.1
git 2.20.1 2.30.2
golang 1.11 1.15
libc 2.28 2.31
linux kernel 4.19 5.10
llvm 7.0 11.0
lxc 3.0.3 4.0.6
mariadb 10.3.27 10.5.10
nginx 1.14.2 1.18.0
nodejs 10.24.0 12.21.0
openjdk 11.0.9.1 11.0.11+9 + 17~19
openssh 7.9p1 8.4p1
openssl 1.1.1d 1.1.1k
perl 5.28.1 5.32.1
php 7.3 7.4+76
postfix 3.4.14 3.5.6
postgres 11 13
puppet 5.5.10 5.5.22
python2 2.7.16 2.7.18
python3 3.7.3 3.9.2
qemu/kvm 3.1 5.2
ruby 2.5.1 2.7+2
rust 1.41.1 1.48.0
samba 4.9.5 4.13.5
systemd 241 247.3
unattended-upgrades 1.11.2 2.8
util-linux 2.33.1 2.36.1
vagrant 2.2.3 2.2.14
vim 8.1.0875 8.2.2434
zsh 5.7.1 5.8
Linux Kernel

The bullseye release will ship a Linux kernel based on v5.10 (v5.10.28 as of 2021-05-27, with v5.10.38 pending in unstable/sid), whereas buster shipped kernel 4.19. As usual there are plenty of changes in the kernel area and this might warrant a separate blog entry, but to highlight some issues:

One surprising change might be that the scrollback buffer (Shift + PageUp) is gone from the Linux console. Make sure to always use screen/tmux, or handle output through a pager of your choice, if you need all of it and you're in the console.

The kernel provides BTF support (via CONFIG_DEBUG_INFO_BTF, see #973870), which means it's no longer necessary to install LLVM, Clang, etc. (requiring >100MB of disk space); see Gregg's excellent blog post regarding the underlying rationale. Sadly the libbpf-tools packaging didn't make it into bullseye (#978727), but if you want to use your own self-made Debian packages, my notes might be useful.

With kernel version 5.4, SUBDIRS support was removed from kbuild, so if an out-of-tree kernel module (like a *-dkms package) fails to compile on bullseye, make sure to use a recent version of it which uses M= or KBUILD_EXTMOD= instead.

Unprivileged user namespaces are enabled by default (see #898446 + #987777), so programs can create more restricted sandboxes without the need to run as root or via a setuid-root helper. If you prefer to keep this feature restricted (or tools like web browsers, WebKitGTK, Flatpak, etc. don't work), use sysctl -w kernel.unprivileged_userns_clone=0.

The /boot/System.map file(s) no longer provide the actual data; you need to switch to the dbg package if you rely on that information:
% cat /boot/System.map-5.10.0-6-amd64 
ffffffffffffffff B The real System.map is in the linux-image-<version>-dbg package
Be aware though that the *-dbg package requires ~5GB of additional disk space.

Systemd

systemd v247 made it into bullseye (updated from v241). Same as for the kernel, this might warrant a separate blog entry, but to mention some highlights: Systemd in bullseye activates its persistent journal functionality by default (storing its files in /var/log/journal/, see #717388). systemd-timesyncd is no longer part of the systemd binary package itself, but available as a standalone package. This allows usage of ntp, chrony, openntpd, etc. without having systemd-timesyncd installed (which prevents race conditions like #889290, which was biting me more than once). journalctl gained new options:
--cursor-file=FILE      Show entries after cursor in FILE and update FILE
--facility=FACILITY...  Show entries with the specified facilities
--image=IMAGE           Operate on files in filesystem image
--namespace=NAMESPACE   Show journal data from specified namespace
--relinquish-var        Stop logging to disk, log to temporary file system
--smart-relinquish-var  Similar, but NOP if log directory is on root mount
systemctl gained new options:
clean UNIT...                       Clean runtime, cache, state, logs or configuration of unit
freeze PATTERN...                   Freeze execution of unit processes
thaw PATTERN...                     Resume execution of a frozen unit
log-level [LEVEL]                   Get/set logging threshold for manager
log-target [TARGET]                 Get/set logging target for manager
service-watchdogs [BOOL]            Get/set service watchdog state
--with-dependencies                 Show unit dependencies with 'status', 'cat', 'list-units', and 'list-unit-files'
 -T --show-transaction              When enqueuing a unit job, show full transaction
 --what=RESOURCES                   Which types of resources to remove
--boot-loader-menu=TIME             Boot into boot loader menu on next boot
--boot-loader-entry=NAME            Boot into a specific boot loader entry on next boot
--timestamp=FORMAT                  Change format of printed timestamps
If you use systemctl edit to adjust overrides, then you'll now also get the existing configuration file listed as a comment, which I consider very helpful. The MACAddressPolicy behavior with systemd naming schema v241 changed for virtual devices (I plan to write about this in a separate blog post). There are plenty of new manual pages. systemd also gained new unit configurations related to security hardening. Another new unit configuration is SystemCallLog=, which supports listing the system calls to be logged. This is very useful for auditing, or temporarily when constructing system call filters. The cgroupv2 change is also documented in the release notes, but to explicitly mention it also here, quoting from /usr/share/doc/systemd/NEWS.Debian.gz:
systemd now defaults to the unified cgroup hierarchy (i.e. cgroupv2).
This change reflects the fact that cgroups2 support has matured
substantially in both systemd and in the kernel.
All major container tools nowadays should support cgroupv2.
If you run into problems with cgroupv2, you can switch back to the previous,
hybrid setup by adding systemd.unified_cgroup_hierarchy=false to the
kernel command line.
You can read more about the benefits of cgroupv2 at
https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html
Note that cgroup-tools (lssubsys + lscgroup etc.) don't work in the cgroup2/unified hierarchy yet (see #959022 for the details).

Configuration management

puppet's upstream doesn't provide packages for bullseye yet (see PA-3624 + MODULES-11060), and sadly neither v6 nor v7 made it into bullseye, so when using the packages from Debian you're still stuck with v5.5 (also see #950182). ansible is also available, and while it looked like only version 2.9.16 would make it into bullseye (see #984557 + #986213), actually version 2.10.8 made it in. chef was removed from Debian and is not available with bullseye (due to trademark issues).

Prometheus stack

The Prometheus server was updated from v2.7.1 to v2.24.1, and the prometheus service by default applies some systemd hardening now. Also, all the usual exporters are still there, but bullseye gained some new ones as well.

Virtualization

docker (v20.10.5), ganeti (v3.0.1), libvirt (v7.0.0), lxc (v4.0.6), openstack, qemu/kvm (v5.2) and xen (v4.14.1) are all still around, though what's new and noteworthy is that podman version 3.0.1 (a tool for managing OCI containers and pods) made it into bullseye. If you're using the docker packages from upstream, be aware that they still don't seem to understand Debian package version handling. The docker* packages will not be automatically considered for upgrade, as 5:20.10.6~3-0~debian-buster is considered newer than 5:20.10.6~3-0~debian-bullseye:
% apt-cache policy docker-ce
  docker-ce:
    Installed: 5:20.10.6~3-0~debian-buster
    Candidate: 5:20.10.6~3-0~debian-buster
    Version table:
   *** 5:20.10.6~3-0~debian-buster 100
          100 /var/lib/dpkg/status
       5:20.10.6~3-0~debian-bullseye 500
          500 https://download.docker.com/linux/debian bullseye/stable amd64 Packages
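You can check this ordering yourself with python-apt (assuming the python3-apt package is installed); plain Debian version comparison is what makes the buster string sort as newer, since "buster" sorts after "bullseye":

import apt_pkg

apt_pkg.init()
buster = "5:20.10.6~3-0~debian-buster"
bullseye = "5:20.10.6~3-0~debian-bullseye"
# version_compare() returns a value > 0 if the first argument is newer.
print(apt_pkg.version_compare(buster, bullseye))  # positive: buster "wins"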
Vagrant is available in version 2.2.14; the package from upstream works perfectly fine on bullseye as well. If you're relying on VirtualBox, be aware that upstream doesn't provide packages for bullseye yet, but the package from Debian/unstable (v6.1.22 as of 2021-05-27) works fine on bullseye (VirtualBox hasn't been shipped with stable releases for quite some time due to lack of cooperation from upstream on security support for older releases, see #794466). If you rely on the virtualbox-guest-additions-iso and its shared-folders support, you might be glad to hear that v6.1.22 made it into bullseye (see #988783), properly supporting more recent kernel versions like the one present in bullseye.

debuginfod

There's a new service debuginfod.debian.net (see debian-devel-announce and the Debian Wiki), which makes the debugging experience way smoother. You no longer need to download the debugging Debian packages (*-dbgsym/*-dbg), but can instead fetch them on demand, by exporting the following variables (before invoking gdb or alike):
% export DEBUGINFOD_PROGRESS=1    # for optional download progress reporting
% export DEBUGINFOD_URLS="https://debuginfod.debian.net"
BTW: if you can't rely on debuginfod (for whatever reason), I'd like to point your attention towards find-dbgsym-packages from the debian-goodies package.

Vim

Sadly Vim 8.2 once again makes another change towards bad defaults (hello, mouse behavior!). When incsearch is set, it now also applies to :substitute. This makes it veeeeeeeeeery annoying when running something like :%s/\s\+$// to get rid of trailing whitespace characters, because if there are no matches it jumps to the beginning of the file and then back, sigh. To get the old behavior back, you can use this:
au CmdLineEnter : let s:incs = &incsearch | set noincsearch
au CmdLineLeave : let &incsearch = s:incs
rsync

rsync was updated from v3.1.3 to v3.2.3. It provides various checksum enhancements (see option --checksum-choice). We got new capabilities (hardlink-specials, atimes, optional protect-args, stop-at, no crtimes) and the addition of the zstd and lz4 compression algorithms. And we got new options.

OpenSSH

OpenSSH was updated from v7.9p1 to v8.4p1, so if you're interested in all the changes, check out the release notes between those versions (8.0, 8.1, 8.2, 8.3 + 8.4). Let's highlight some notable new features.

Misc unsorted

18 February 2021

Julian Andres Klode: APT 2.2 released

APT 2.2.0 marks the freeze of the 2.1 development series and the start of the 2.2 stable series. Let's have a look at what changed compared to 2.0. Many of you who run Debian testing or unstable, or Ubuntu groovy or hirsute, will already have seen most of those changes.

New features
  • Various patterns related to dependencies, such as ?depends are now available (2.1.16)
  • The Protected field is now supported. It replaces the previous Important field and is like Essential, but only for installed packages (some more minor differences remain, maybe in terms of ordering the installs).
  • The update command has gained an --error-on=any option that makes it error out on any failure, not just what it considers persistent ones.
  • The rred method can now be used as a standalone program to merge pdiff files
  • APT now implements phased updates. Phasing is used in Ubuntu to slow down and control the roll out of updates in the -updates pocket, but has previously only been available to desktop users using update-manager.

Other behavioral changes
  • The kernel autoremoval helper code has been rewritten from shell to C++ and now runs at run-time, rather than at kernel install time, in order to correctly protect the kernel that is running now, rather than the kernel that was running when we were installing the newest one. It also now protects only up to 3 kernels, instead of up to 4, as was originally intended and was the case before the 1.1 series. This prevents /boot partitions from running out of space, especially on Ubuntu which has boot partitions sized for the original spec.

Performance improvements
  • The cache is now hashed using XXH3 instead of Adler32 (or CRC32c on SSE4.2 platforms)
  • The hash table size has been increased

Bug fixes
  • * wildcards work normally again (since 2.1.0)
  • The cache file now includes all translation files in /var/lib/apt/lists, so multi-user systems with different locales correctly show translated descriptions now.
  • URLs are no longer dequoted on redirects only to be requoted again, fixing some redirects where servers did not expect different quoting.
  • Immediate configuration is now best-effort, and failure is no longer fatal.
  • various changes to solver marking leading to different/better results in some cases (since 2.1.0)
  • The lower level I/O bits of the HTTP method have been rewritten to hopefully improve stability
  • The HTTP method no longer infinitely retries downloads on some connection errors
  • The pkgnames command no longer accidentally includes source packages
  • Various fixes from fuzzing efforts by David

Security fixes
  • Out-of-bound reads in ar and tar implementations (CVE-2020-3810, 2.1.2)
  • Integer overflows in ar and tar (CVE-2020-27350, 2.1.13)
(all of which have been backported to all stable series, back all the way to 1.0.9.8.* series in jessie eLTS)

Incompatibilities
  • N/A - there were no breaking changes in apt 2.2 that we are aware of.

Deprecations
  • apt-key(1) is scheduled to be removed for Q2/2022, and several new warnings have been added. apt-key was made obsolete in version 0.7.25.1, released in January 2010, by /etc/apt/trusted.gpg.d becoming a supported place to drop additional keyring files, and was since then only intended for deleting keys in the legacy trusted.gpg keyring. Please manage files in trusted.gpg.d yourself; or place them in a different location such as /etc/apt/keyrings (or make up your own, there's no standard location) or /usr/share/keyrings, and use signed-by in the sources.list.d files. The legacy trusted.gpg keyring still works, but will also stop working eventually. Please make sure you have all your keys in trusted.gpg.d. Warnings might be added in the upcoming months when a signature could not be verified using just trusted.gpg.d. Future versions of APT might switch away from GPG.
  • As a reminder, regular expressions and wildcards other than * inside package names are deprecated (since 2.0). They are not available anymore in apt(8), and will be removed for safety reasons in apt-get in a later release.

3 October 2020

Julian Andres Klode: Google Pixel 4a: Initial Impressions

Yesterday I got a fresh new Pixel 4a, to replace my dying OnePlus 6. The OnePlus had developed some faults over time: It repeatedly loses connection to the AP and the network, and it got a bunch of scratches and scuffs from falling on various surfaces without any protection over the past year.

Why get a Pixel?

Camera: OnePlus focuses on stuffing as many sensors as it can into a phone, rather than a good main sensor, resulting in pictures that are mediocre blurry messes - the dreaded oil painting effect. Pixels have some of the best cameras in the smartphone world. Sure, other hardware is far more capable, but the Pixels manage consistent results, so you need to take fewer pictures because they don't come out blurry half the time, and the post-processing is so good that the pictures you get are just great. Other phones can shoot better pictures, sure - on a tripod.

Security updates: Pixels provide 3 years of monthly updates, with security updates being published on the 5th of each month. OnePlus only provides updates every 2 months, and then the updates they do release are almost a month out of date, not counting that they are only 1st-of-month patches, meaning vendor blob updates included in the 5th-of-month updates are even a month older. Given that all my banking runs on the phone, I don't want it to be constantly behind.

Feature updates: Of course, Pixels also get Beta Android releases and the newest Android release faster than any other phone, which is advantageous for Android development and being nerdy.

Size and weight: OnePlus phones keep getting bigger and bigger. By today's standards, the OnePlus 6 at 6.18" and 177g is a small and lightweight device. Their latest phone, the Nord, has 6.44" and weighs 184g; the OnePlus 8 comes in at 180g with a 6.55" display. This is becoming unwieldy. Eschewing glass and aluminium for plastic, the Pixel 4a comes in at 144g.

First impressions

Accessories

The Pixel 4a comes in a small box with a charger, a USB-C to USB-C cable, a USB-OTG adapter, and a sim tray ejector. No pre-installed screen protector or bumper are provided, as we've grown accustomed to from Chinese manufacturers like OnePlus or Xiaomi. The sim tray ejector has a circular end instead of the standard oval one - I assume so it looks like the o in Google? Google sells you fabric cases for 45 €. That seems a bit excessive, although I like that a lot of it is recycled.

Haptics Coming from a 6.18" phablet, the Pixel 4a with its 5.81" feels tiny. In fact, it s so tiny my thumb and my index finger can touch while holding it. Cute! Bezels are a bit bigger, resulting in slightly less screen to body. The bottom chin is probably impracticably small, this was already a problem on the OnePlus 6, but this one is even smaller. Oh well, form over function. The buttons on the side are very loud and clicky. As is the vibration motor. I wonder if this Pixel thinks it s a Model M. It just feels great. The plastic back feels really good, it s that sort of high quality smooth plastic you used to see on those high-end Nokia devices. The finger print reader, is super fast. Setup just takes a few seconds per finger, and it works reliably. Other phones (OnePlus 6, Mi A1/A2) take like half a minute or a minute to set up.

Software

The software - stock Android 11 - is fairly similar to OnePlus' OxygenOS. It's a clean experience, without a ton of added bloatware (even OnePlus now ships Facebook out of the box, eww). It's cleaner than OxygenOS in some ways - there are no duplicate photos apps, for example. On the other hand, it also has quite a bunch of Google stuff I could not care less about, like YT Music. To be fair, those are minor noise once all 130 apps were transferred from the old phone. There are various things I miss coming from OnePlus, such as off-screen gestures, a network transfer rate indicator in quick settings, or a circular battery icon. But the Pixel has an always-on display, which is kind of nice. Most of the cool Pixel features, like call screening or live transcriptions, are unfortunately not available in Germany. The display is set to show the same amount of content as my 6.18" OnePlus 6 did, so everything is a bit tinier. This usually takes me a week or two to adjust to, and then when I look at the OnePlus again I'll be like "Oh, the font is huge", but right now it feels a bit small on the Pixel. You can configure three colour profiles for the Pixel 4a: Natural, Boosted, and Adaptive. I have mine set to Adaptive. I'd love to see stock Android learn what OnePlus has here: the ability to adjust the colour temperature manually, as I prefer to keep my devices closer to 5500K than 6500K, as I feel it's a bit easier on the eyes. Or well, just give me the ability to load an ICM profile (though I'd need to calibrate the screen then - work!).

Migration experience Restoring the apps from my old phone only restored settings for a handful out of the 130, which is disappointing. I had to spend an hour or two logging in to all the other apps, and I had to fiddle far too long with openScale to get it to take its data over. It's a mystery to me why people do not allow their apps to be backed up, especially something as innocent as a weight tracking app. One of my banking apps restored its logins, which I did not really like. KeePass2Android settings were restored as well, but at least the key file was not. I did not opt in to restoring my device settings, as I feel that restoring device settings when changing manufacturers is bound to mess some things up. For example, I remember people migrating to OnePlus phones and getting their old DND schedule without any way to change it, because OnePlus had hidden the DND settings. I assume that's the reason some accounts, like my work GSuite account, were not migrated (it said it would migrate accounts during setup). I've set up Bitwarden as my auto-fill service, so I could log in to most of my apps and websites using the stored credentials. I found that this often did not work. Chrome does autofill fine once, but if I then want to autofill again, I have to kill and restart it, otherwise I don't get the auto-fill menu. Other apps did not allow any auto-fill at all, and only gave me the option to copy and paste. Yikes - auto-fill on Android still needs a lot of work.

Performance It hangs a bit sometimes, but this was likely due to me having set 2 million iterations on my Bitwarden KDF and using Bitwarden a lot, and then opening up all 130 apps to log into them, which overwhelmed the phone a bit. Apart from that, it does not feel worse than the OnePlus 6, which was to be expected given that the benchmarks only show a slight loss in performance. Photos do take a few seconds to process after taking them, which is annoying but understandable given how much Google relies on computation to produce decent pictures.

Audio The Pixel has dual speakers, with the earpiece delivering a tiny sound and the bottom-firing speaker doing most of the work. Still, it's better than just having the bottom-firing speaker, as it does provide a more immersive experience. Bass makes this thing vibrate a lot. It does not feel like a resonance sort of thing, but you can feel the bass in your hands. I've never had this before, and it will take some time getting used to.

Final thoughts This is a boring phone. There's no wow factor at all. It's neither huge, nor does it have high-res 48 or 64 MP cameras, nor does it have a ton of sensors. But everything it does, it does well. It does not pretend to be a flagship like its competition, it doesn't want to wow you, it just wants to be the perfect phone for you. The build is solid, the buttons make you think of a Model M, the camera is one of the best in any smartphone, and you of course get the latest updates before anyone else. It does not feel like an only-€350 phone, yet it is. 128GB of storage is plenty, 1080p resolution is plenty, 12.2MP is, you guessed it, plenty. The same applies to the other two Pixel phones - the 4a 5G and the 5. Neither is a particularly exciting phone, and I personally find it hard to justify spending €620 on the Pixel 5 when the Pixel 4a does the job for me, but the 4a 5G might appeal to users looking for larger phones. As for 5G, I wouldn't get much use out of it, seeing as it's not available anywhere I am. Because I'm on Vodafone. If you have a Telekom contract or live outside of Germany, you might just have good 5G coverage already, and it might make sense to get a 5G phone rather than sticking to the budget choice.

Outlook The big question for me is whether I'll be able to adjust to the smaller display. I now have a tablet, so I'm using the phone less often (which my hands thank me for), which means that a smaller phone is probably a good call. Oh, while we're talking about calls - I only have a data-only SIM in it, so I could not test calling. I'm transferring to a new phone contract this month, and I'll give it a go then. This will be the first time I get VoLTE and WiFi calling; although it is Vodafone, so quality might just be worse than Telekom on 2G, who knows. A big shoutout to congstar for letting me cancel with a simple button click, and to @vodafoneservice on twitter for quickly setting up my benefits of an additional 5GB per month and a €10 discount for being an existing cable customer. I'm also looking forward to playing around with the camera (especially night sight), and eSIM. And I'm getting a case from China, which was handed over to the airline on Sep 17 according to Aliexpress, so I guess it should arrive in the next weeks. Oh, and the screen protector is not here yet, so I can't really judge the screen quality much, as I still have the factory protection film on it, and that's just a blurry mess - but good enough for setting it up. Please Google, pre-apply a screen protector on future phones and include a simple bumper case. I might report back in two weeks when I have spent some more time with the device.

29 July 2020

Dirk Eddelbuettel: Installing and Running Ubuntu on a 2015-ish MacBook Air

So a few months ago kiddo one dropped an apparently fairly large cup of coffee onto her one and only trusted computer. With a few months (then) to graduation (which by now happened), and with the apparent genius bar verdict of "it's a goner", a new one was ordered. As it turns out, this supposedly dead one coped well enough with the coffee that after a few weeks of drying it booted again. But given the newer one, its apparent age and whatnot, it was deemed surplus. So I poked around a little on the interwebs and concluded that yes, this could work. Fast forward a few months and I finally got hold of it, and had some time to play with it. First, a bootable usbstick was prepared, and as the machine's content was really (really, and check again: really) no longer needed, I got hold of it for good. tl;dr It works just fine. It is a little heavier than I thought (and isn't "air" supposed to be weightless?). The ergonomics seem quite nice. The keyboard is decent. Screen resolution on this pre-retina simple Air is so-so at 1440 pixels. But battery life seems ok, and e.g. the camera is way better than what I have in my trusted Lenovo X1 or at my desktop. So just as a zoom client it may make a lot of sense; otherwise just walking around with it as a quick portable machine seems perfect (especially as my Lenovo X1 still (ahem) suffers from one broken key I really need to fix). Below are some lightly edited notes from the installation. Initial steps were quick: maybe an hour or less? Customizing a machine takes longer than I remembered; this took a few minutes here and there quite a few times, but always incremental.

Initial Steps
  • Download of Ubuntu 20.04 LTS image: took a few moments, even on broadband; feels slower than the normal (fast!) Ubuntu package updates, maybe a lesser CDN or bad luck
  • Startup Disk Creator using a so-far unused 8gb usb drive
  • Plug into USB, recycle power, press Option on macOS keyboard: voila
  • After a quick hunch: no to live/test only, yes to install on the whole disk
  • install easy, very few questions, somehow skips wifi
  • so activated wifi manually, and everything pretty much works

Customization
  • First deal with the fn and ctrl key swap. Installed git and followed this github repo, which worked just fine. Yay. First (manual) Linux kernel module build needed in half a decade? Longer?
  • Fire up firefox, go to "download chrome", install chrome. Sign in. Turn on syncing. Sign into Pushbullet and Momentum.
  • syncthing which is excellent. Initially via apt, later from their PPA. Spend some time remembering how to set up the mutual handshakes between devices. Now syncing desktop/server, lenovo x1 laptop, android phone and this new laptop
  • keepassx via apt and set up using Sync/ folder. Now all (encrypted) passwords synced.
  • Discovered synergy is no longer really free, so after a quick search found and installed barrier (via apt) to have one keyboard/mouse from the desktop reach the laptop.
  • Added emacs via apt, so far "empty", no config files yet
  • Added ssh via apt, need to propagate keys to github and gitlab
  • Added R via add-apt-repository --yes "ppa:marutter/rrutter4.0" and add-apt-repository --yes "ppa:c2d4u.team/c2d4u4.0+". Added littler and then RStudio
  • Added wajig (apt frontend) and byobu, both via apt
  • Created ssh key, shipped it to server and github + gitlab
  • Cloned (not-public) dotfiles repo and linked some dotfiles in
  • Cloned git repo for nord-theme for gnome terminal and installed it; also added it to RStudio via this repo
  • Emacs installed, activated dotfiles, then incrementally install a few elpa-* packages and a few M-x package-install including nord-theme, of course
  • Installed JetBrains Mono font from my own local package; activated for Gnome Terminal and Emacs
  • Install gnome-tweak-tool via apt, adjusted a few settings
  • Ran gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'
  • Set up camera following this useful GH repo
  • At some point also added slack and zoom, because, well, it is 2020
  • STILL TODO:
    • docker
    • bother with email setup?
    • maybe atom/code/ ?

26 June 2020

Chris Lamb: On the pleasure of hating

People love to tell you that they "don't watch sports" but the story of Lance Armstrong provides a fascinating lens through which to observe our culture at large. For example, even granting all that he did and all the context in which he did it, why do sports cheats act like a lightning rod for such an instinctive hatred? After all, the sheer level of distaste directed at people such as Lance eludes countless other criminals in our society, many of whom have taken a lot more with far fewer scruples. The question is not one of logic or rationality, but of proportionality. In some ways it should be unsurprising. In all areas of life, we instinctively prefer binary judgements to moral ambiguities, and the sports cheat is a cliché of moral bankruptcy: cheating at something so seemingly trivial as a sport actually makes it more, not less, offensive to us. But we then find ourselves strangely enthralled by them, drawn together in admiration of their outlaw-like tenacity, placing them strangely close to criminal folk heroes. Clearly, sport is not as unimportant as we like to claim it is. In Lance's case in particular though, there is undeniably a Shakespearean quality to the story and we are forced to let go of our strict ideas of right and wrong and appreciate all the nuance.

There is a lot of this nuance in Marina Zenovich's new documentary. In fact, there's a lot of everything. At just under four hours, ESPN's Lance combines the duration of a Tour de France stage with the depth of the peloton: an endurance event compared to the bite-sized hagiography of Michael Jordan's The Last Dance. Even for those who follow Armstrong's story like a mini-sport in itself, Lance reveals new sides to this man for all seasons. For me, not only was this captured in his clumsy approximations at being a father figure but also in him being asked something I had not read in countless tell-all books: did his earlier experiments in drug-taking contribute to his cancer? But even in 2020 there are questions that remain unanswered. By needlessly returning to the sport in 2009, did Lance subconsciously want to get caught? Why does he not admit he confessed to Betsy Andreu back in 1999 but will happily apologise to her today for slurring her publicly on this very point? And why does he remain so vindictive towards former-teammate Floyd Landis? In all of Armstrong's evasions and masterful control of the narrative, there is the gnawing feeling that we don't even know what questions we should be asking. As ever, the questions are more interesting than the answers.

Lance also reminded me of professional cycling's obsession with national identity. Although I was intuitively aware of it to some degree, I had not fully grasped how much this kind of stereotyping runs through the veins of the sport itself, just like the drugs themselves. Journalist Daniel Friebe first offers us the portrait of:
Spaniards tend to be modest, very humble. Very unpretentious. And the Italians are loud, vain and outrageous showmen.
Former directeur sportif Johan Bruyneel then asserts that "Belgians are hard workers... they are ambitious to a certain point, but not overly ambitious", and cyclist Jörg Jaksche concludes with:
The Germans are very organised and very structured. And then the French, now I have to be very careful because I am German, but the French are slightly superior.
This kind of lazy caricature is nothing new, especially for those brought up on a solid diet of Tintin and Asterix, but although all these examples are seemingly harmless, why does the underlying idea of ascribing moral, social or political significance to genetic lineage remain so durable in today's age of anti-racism? To be sure, culture is not quite the same thing as race, but being judged by the character of one's ancestors rather than the actions of an individual is, at its core, one of the many conflations at the heart of racism. There is certainly a large amount of cognitive dissonance at work, especially when Friebe elaborates:
East German athletes were like incredible robotic figures, fallen off a production line somewhere behind the Iron Curtain...
... but then übermensch Jan Ullrich is immediately described as "emotional" and as having "struggled to live the life of a professional cyclist 365 days a year". The habit of stereotyping is so ingrained that even in the face of this obvious contradiction, Friebe unironically excuses Ullrich's failure to live up to his German roots due to him actually being "Mediterranean".

I mention all this as I am known within my circles for remarking on these national characters, even collecting stereotypical examples of Italians 'being Italian' and the French 'being French' at times. Contrary to evidence, I don't believe in this kind of innate quality but what I do suspect is that people generally behave how they think they ought to behave, perhaps out of sheer imitation or the simple pleasure of conformity. As the novelist Will Self put it:
It's quite a complicated collective imposture, people pretending to be British and people pretending to be French, and then they get really angry with each other over what they're pretending to be.
The really remarkable thing about this tendency is that even if we consciously notice it, there is seemingly no escape; even I could not smirk when I considered that a brash Texan winning the Tour de France actually combines two of America's cherished obsessions: winning... and annoying the French.

10 June 2020

Joey Hess: bracketing and async exceptions in haskell

I've been digging into async exceptions in haskell, and getting more and more concerned. In particular, bracket seems to be often used in ways that are not async exception safe. I've found multiple libraries with problems. Here's an example:
withTempFile a = bracket setup cleanup a
  where
    setup = openTempFile "/tmp" "tmpfile"
    cleanup (name, h) = do
        hClose h
        removeFile name
This looks reasonably good: it makes sure to clean up after itself even when the action throws an exception. But, in fact, that code can leave stale temp files lying around. If the thread receives an async exception while hClose is running, it will be interrupted before the file is removed. We normally think of bracket as masking exceptions, but it doesn't prevent async exceptions in all cases. See Control.Exception on "interruptible operations", which can receive async exceptions even when other exceptions are masked. It's a bit surprising, but hClose is such an interruptible operation, because it flushes the write buffer. The only way to know is to read the code. It can be quite hard to determine if an operation is interruptible, since it can come down to whether it retries an STM transaction, or uses an MVar that is not always full. I've been auditing libraries and I often have to look at code several dependencies away, and even then may not be sure if a library has this problem. So far, around half of the libraries I've looked at that use bracket or onException or the like probably have this problem. What can libraries do? My impression of the state of things now is that you should be very cautious using race or cancel or withAsync or the like, unless the thread is small and easy to audit for these problems. Kind of a shame, since I had wanted to be able to cancel a thread that is big and sprawling and uses all the libraries mentioned above.
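One possible mitigation - a minimal sketch, not necessarily what the audited libraries ended up doing - is to wrap the cleanup in uninterruptibleMask_, so an async exception cannot strike between hClose and removeFile. The trade-off is that the cleanup must be guaranteed not to block indefinitely, or the thread becomes unkillable while it runs:
import Control.Exception (bracket, uninterruptibleMask_)
import System.Directory (removeFile)
import System.IO (hClose, openTempFile)

withTempFile a = bracket setup cleanup a
  where
    setup = openTempFile "/tmp" "tmpfile"
    -- uninterruptibleMask_ shields the whole cleanup, including the
    -- interruptible hClose, so once cleanup has started the temp file
    -- is always removed
    cleanup (name, h) = uninterruptibleMask_ $ do
        hClose h
        removeFile name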
This work was sponsored by Jake Vosloo and Graham Spencer on Patreon.

9 June 2020

Julian Andres Klode: Review: Chromebook Duet

Sporting a beautiful 10.1" 1920x1200 display, the Lenovo IdeaPad Duet Chromebook, or Duet Chromebook, is one of the latest Chromebooks released, and one of the few slate-style tablets, and it's only about 300 EUR (300 USD). I've had one for about 2 weeks now, and here are my thoughts.

Build & Accessories The tablet is a fairly Pixel-style affair, in that the back has two components: one softer blue one housing the camera, and a metal-feeling gray one. Build quality is fairly good. The volume and power buttons are located on the right side of the tablet, and this is one of the main issues: you end up accidentally pressing the power button when you want to turn your volume lower, despite the power button having a different texture. Alongside the tablet, you also find a kickstand with a textile back, and a keyboard, both of which attach via magnets (and pogo pins for the keyboard). The keyboard is crammed, with punctuation keys being halved in size, and it feels mushy compared to my usual experience of ThinkPads and Model Ms, but it's on par with other Chromebooks, which is surprising, given it's a tablet attachment.
fully assembled chromebook duet
I mostly use the Duet as a tablet, and only attach the keyboard occasionally. Typing with the keyboard on your lap is suboptimal. My first Duet had a few bunches of dead pixels, so I returned it, as I had also ordered a second one that I could not cancel. Oh dear. That one was fine!

Hardware & Connectivity The Chromebook Duet is powered by a Mediatek Helio P60T SoC, 4GB of RAM, and a choice of 64 or 128 GB of main storage. The tablet provides one USB-C port for charging, audio output (a 3.5mm adapter is provided in the box), USB hub, and video output; though, sadly, the latter is restricted to a maximum of 1080p30, or 1440x900 at 60 Hz. It can be charged using the included 10W charger, or draw up to (I believe) 18W from a higher-powered USB-C PD charger. I've successfully used the Chromebook with a USB-C monitor with attached keyboard, mouse, and DAC without any issues. On the wireless side, the tablet provides 2x2 WiFi AC and Bluetooth 4.2. WiFi reception seemed just fine, though I have not done any speed testing, missing a sensible connection at the moment. I used Bluetooth to connect to my smartphone for instant tethering, and to my Sony WH1000XM2 headphones, both of which worked without any issues. The screen is a bright 400-nit display with excellent viewing angles, and the speakers do a decent job, meaning you can easily use this for watching a movie when you're alone in a room and idling around. It has a resolution of 1920x1200. The device supports styluses following the USI standard. As of right now, the only such stylus I know about is an HP one, and it costs about €70 or so. Cameras are provided on the front and the rear, but produce terrible images.

Software: The tablet experience The Chromebook Duet runs Chrome OS, and comes with access to Android apps using the play store (and sideloading in dev mode) as well as access to full Linux environments powered by LXD inside VMs. The 1920x1200 screen is scaled to a ridiculous 1080x675 by default, which is good for being able to tap buttons and stuff, but provides next to no content. Scaling it to 1350x844 makes things more balanced. The Linux integration is buggy. Touches register in different places than where they happened, and the screen is cut off in full-screen extremetuxracer, making it hard to recommend for such uses. Android apps generally work fine. There are some issues with the back gesture not registering, but otherwise I have not found issues I can remember. One major drawback as a portable media consumption device is that Android apps only work with Widevine level 3, and hence do not have access to HD content, and the web apps of Netflix and co do not support downloading. Though one of the Duets actually said L1 in check apps at some point (reported in issue 1090330). It's also worth noting that Amazon Prime Video only renders in HD if you change your user agent to say you are Chrome on Windows - bad Amazon! The tablet experience also lags in some other ways: the palm rejection is overly extreme, causing it to reject valid clicks close to the edge of the display (reported in issue 1090326). The on-screen keyboard is terrible. It only does one language at a time, forcing me to switch between German and English all the time, and it does not behave as you'd expect when editing existing words - it does not know about them and thinks you are starting a new one. It does provide a small keyboard that you can move around, as well as a draw-your-letters keyboard, which could come in handy for stylus users, I guess. In any case, it's miles away from gboard on Android. Stability is a mixed bag right now. As of Chrome OS 83, sites (well, only Disney+ so far) sometimes get killed with SIGILL or SIGTRAP, and the device rebooted on its own once or twice. Android apps that use DRM sometimes do not start, and the Netflix Android app sometimes reports it cannot connect to the servers.

Performance Performance is decent to sluggish, with micro-stuttering in a lot of places. The Mediatek CPU is comparable to Intel Atoms, and with only 4GB of RAM and an entire Android container running, it's starting to show how weak it is. I found that Google Docs worked perfectly fine, as did websites such as Mastodon, Twitter, and Facebook. Where the device really struggled was Reddit, where closing or opening a post, or getting a reply box, could take 5 seconds or more. If you are looking for a Reddit browsing device, this is not for you. Performance in Netflix was fine, and Disney+ was fairly slow but still usable. All in all, it's acceptable, and given the price point and the build quality, probably the compromise you'd expect.

Summary tl;dr:
  • good: Build quality, bright screen, low price, included accessories
  • bad: DRM issues, performance, limited USB-C video output, charging speed, on-screen keyboard, software bugs
The Chromebook Duet, or IdeaPad Duet Chromebook, is a decent tablet that is built well above its price point. Its lackluster performance and DRM woes make it hard to give a general recommendation, though. It's not a good laptop. I can see this as the perfect note-taking device for students, as a cheap tablet for couch surfing, or as your on-the-go laptop replacement, if you need one only occasionally. I cannot see anyone using this as their main laptop, although I guess some people only have phones these days, so: what do I know? I can see you getting this device if you want to tinker with Linux on ARM, as Chromebooks are quite nice to tinker with, and a tablet is super nice.

4 June 2020

Reproducible Builds: Reproducible Builds in May 2020

Welcome to the May 2020 report from the Reproducible Builds project. One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. Nonetheless, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes. In these reports we outline the most important things that we and the rest of the community have been up to over the past month.

News The Corona-Warn app that helps trace infection chains of SARS-CoV-2/COVID-19 in Germany had a feature request filed against it that it be built reproducibly. A number of academics from Cornell University have published a paper titled Backstabber's Knife Collection which reviews various open source software supply chain attacks:
Recent years saw a number of supply chain attacks that leverage the increasing use of open source during software development, which is facilitated by dependency managers that automatically resolve, download and install hundreds of open source packages throughout the software life cycle.
In related news, the LineageOS Android distribution announced that a hacker had access to the infrastructure of their servers after exploiting an unpatched vulnerability. Marcin Jachymiak of the Sia decentralised cloud storage platform posted on their blog that their siac and siad utilities can now be built reproducibly:
This means that anyone can recreate the same binaries produced from our official release process. Now anyone can verify that the release binaries were created using the source code we say they were created from. No single person or computer needs to be trusted when producing the binaries now, which greatly reduces the attack surface for Sia users.
Synchronicity is a distributed build system for Rust build artifacts which have been published to crates.io. The goal of Synchronicity is to provide a distributed binary transparency system which is independent of any central operator. The Comparison of Linux distributions article on Wikipedia now features a Reproducible Builds column indicating whether distributions approach and progress towards achieving reproducible builds.

Distribution work In Debian this month: In Alpine Linux, an issue was filed and closed regarding the reproducibility of .apk packages. Allan McRae of the ArchLinux project posted their third Reproducible builds progress report to the arch-dev-public mailing list which includes the following call for help:
We also need help to investigate and fix the packages that fail to reproduce that we have not investigated as of yet.
In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Software development

diffoscope Chris Lamb made the changes listed below to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. He also prepared and uploaded versions 142, 143, 144, 145 and 146 to Debian, PyPI, etc.
  • Comparison improvements:
    • Improve fuzzy matching of JSON files as file now supports recognising JSON data. (#106)
    • Refactor .changes and .buildinfo handling to show all details (including the GnuPG header and footer components) even when referenced files are not present. (#122)
    • Use our BuildinfoFile comparator (etc.) regardless of whether the associated files (such as the orig.tar.gz and the .deb) are present. [ ]
    • Include GnuPG signature data when comparing .buildinfo, .changes, etc. [ ]
    • Add support for printing Android APK signatures via apksigner(1). (#121)
    • Identify iOS App Zip archive data as .zip files. (#116)
    • Add support for Apple Xcode .mobileprovision files. (#113)
  • Bug fixes:
    • Don't print a traceback if we pass a single, missing argument to diffoscope (e.g. a JSON diff to re-load). [ ]
    • Correct a "differences" typo in the ApkFile handler. (#127)
  • Output improvements:
    • Never emit the same id="foo" anchor reference twice in the HTML output, otherwise identically-named parts will not be able to be linked to via a #foo anchor. (#120)
    • Never emit an empty id anchor either; it is not possible to link to #. [ ]
    • Don't pretty-print the output when using the --json presenter; it will usually be too complicated to be readable by a human anyway. [ ]
    • Use the SHA256 hash over MD5 when generating page names for the HTML directory-style presenter. (#124)
  • Reporting improvements:
    • Clarify the message when we truncate the number of lines printed to standard error [ ] and reduce the maximum number of lines printed to 25, as usually the error is obvious by then [ ].
    • Print the amount of free space that we have available in our temporary directory as a debugging message. [ ]
    • Clarify "Command [ ] failed with exit code" messages to remove the duplicated "exited with exit" wording, but also to note that diffoscope is interpreting this as an error. [ ]
    • Don't leak the full path of the temporary directory in "Command [ ] exited with 1" messages. (#126)
    • Clarify the warning message when we cannot import the debian Python module. [ ]
    • Don't repeat stderr if both commands emit the same output. [ ]
    • Clarify that an external command emits output for both files, otherwise it can look like diffoscope is repeating itself when, in reality, the command is being run twice. [ ]
  • Testsuite improvements:
    • Prevent apksigner test failures due to lack of binfmt_misc, eg. on Salsa CI and elsewhere. [ ]
    • Drop .travis.yml as we use Salsa instead. [ ]
  • Dockerfile improvements:
    • Add a .dockerignore file to whitelist files we actually need in our container. (#105)
    • Use ARG instead of ENV when setting up the DEBIAN_FRONTEND environment variable at runtime. (#103)
    • Run as a non-root user in container. (#102)
    • Install/remove the build-essential package during build so we can install the recommended packages from Git. [ ]
  • Codebase improvements:
    • Bump the officially required version of Python from 3.5 to 3.6. (#117)
    • Drop the (default) shell=False keyword argument to subprocess.Popen so that the potentially-unsafe shell=True is more obvious. [ ]
    • Perform string normalisation in Black [ ] and include the Black output in the assertion failure too [ ].
    • Inline MissingFile's special handling of deb822 to prevent leaking through abstract layers. [ ][ ]
    • Allow a bare try/except block when cleaning up temporary files with respect to the flake8 quality assurance tool. [ ]
    • Rename in_dsc_path to dsc_in_same_dir to clarify the use of this variable. [ ]
    • Abstract out the duplicated parts of the debian_fallback class [ ] and add descriptions for the file types. [ ]
    • Various commenting and internal documentation improvements. [ ][ ]
    • Rename the Openssl command class to OpenSSLPKCS7 to accommodate other command names with this prefix. [ ]
  • Misc:
    • Rename the --debugger command-line argument to --pdb. [ ]
    • Normalise filesystem stat(2) birth times (ie. st_birthtime) in the same way we do with the stat(1) command's Access: and Change: times to fix a nondeterministic build failure in GNU Guix. (#74)
    • Ignore case when ordering our file format descriptions. [ ]
    • Drop, add and tidy various module imports. [ ][ ][ ][ ]
In addition:
  • Jean-Romain Garnier fixed a general issue where, for example, LibarchiveMember's has_same_content method was called regardless of the underlying type of file. [ ]
  • Daniel Fullmer fixed an issue where some filesystems could only be mounted read-only. (!49)
  • Emanuel Bronshtein provided a patch to prevent a build of the Docker image containing parts of the build. (#123)
  • Mattia Rizzolo added an entry to debian/py3dist-overrides to ensure the rpm-python module is used in package dependencies (#89) and moved to using the new execute_after_* and execute_before_* Debhelper rules [ ].

Chris Lamb also performed a huge overhaul of diffoscope's website:
  • Add a completely new design. [ ][ ]
  • Dynamically generate our contributor list [ ] and supported file formats [ ] from the main Git repository.
  • Add a separate, canonical page for every new release. [ ][ ][ ]
  • Generate a "latest release" section and display that with the corresponding date on the homepage. [ ]
  • Add an RSS feed of our releases [ ][ ][ ][ ][ ] and add to Planet Debian [ ].
  • Use Jekyll's absolute_url and relative_url where possible [ ][ ] and move a number of configuration variables to _config.yml [ ][ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Other tools Elsewhere in our tooling: strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. In May, Chris Lamb uploaded version 1.8.1-1 to Debian unstable and Bernhard M. Wiedemann fixed an off-by-one error when parsing PNG image modification times. (#16) In disorderfs, our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues, Chris Lamb replaced the term "dirents" with "directory entries" in human-readable output/log messages [ ] and applied the astyle source code formatter with the default settings to the main disorderfs.cpp source file [ ]. Holger Levsen bumped the debhelper-compat level to 13 in disorderfs [ ] and reprotest [ ], and for the GNU Guix distribution Vagrant Cascadian updated the versions of disorderfs to version 0.5.10 [ ] and diffoscope to version 145 [ ].

Project documentation & website
  • Carl Dong:
  • Chris Lamb:
    • Rename the "Who" page to "Projects". [ ]
    • Ensure that Jekyll enters the _docs subdirectory to find the _docs/index.md file after an internal move. (#27)
    • Wrap ltmain.sh etc. in preformatted quotes. [ ]
    • Wrap the SOURCE_DATE_EPOCH Python examples onto more lines to prevent visual overflow on the page. [ ]
    • Correct a preferred spelling error. [ ]
  • Holger Levsen:
    • Sort our Academic publications page by publication year [ ] and add Trusting Trust and Fully Countering Trusting Trust through Diverse Double-Compiling [ ].
  • Juri Dispan:

Testing framework We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org that, amongst many other tasks, tracks the status of our reproducibility efforts as well as identifies any regressions that have been introduced. Holger Levsen made the following changes:
  • System health status:
    • Improve page description. [ ]
    • Add more weight to proxy failures. [ ]
    • More verbose debug/failure messages. [ ][ ][ ]
    • Work around strangeness in the Bash shell: let VARIABLE=0 exits with an error. [ ]
  • Debian:
    • Fail loudly if there are more than three .buildinfo files with the same name. [ ]
    • Fix a typo which prevented /usr merge variation on Debian unstable. [ ]
    • Temporarily ignore PHP's horde (https://www.horde.org/) packages in Debian bullseye. [ ]
    • Document how to reboot all nodes in parallel, working around molly-guard. [ ]
  • Further work on a Debian package rebuilder:
    • Workaround and document various issues in the debrebuild script. [ ][ ][ ][ ]
    • Improve output in the case of errors. [ ][ ][ ][ ]
    • Improve documentation and future goals [ ][ ][ ][ ], in particular documenting two real-world test cases for an impossible-to-recreate build environment [ ].
    • Find the right source package to rebuild. [ ]
    • Increase the frequency we run the script. [ ][ ][ ][ ]
    • Improve downloading and selection of the sources to build. [ ][ ][ ]
    • Improve version string handling. [ ]
    • Handle build failures better. [ ][ ][ ]
    • Also consider architecture "all" .buildinfo files. [ ][ ]
In addition:
  • kpcyrd, for Alpine Linux, updated the alpine_schroot.sh script now that a patch for abuild had been released upstream. [ ]
  • Alexander Couzens of the OpenWrt project renamed the brcm47xx target to bcm47xx. [ ]
  • Mattia Rizzolo fixed the printing of the build environment during the second build [ ][ ][ ] and made a number of improvements to the script that deploys Jenkins across our infrastructure [ ][ ][ ].
Lastly, Vagrant Cascadian clarified in the documentation that you need to be the jenkins user to run the blacklist command [ ], and the usual build node maintenance was performed by Holger Levsen [ ][ ][ ], Mattia Rizzolo [ ][ ] and Vagrant Cascadian [ ][ ][ ].

Mailing list: There were a number of discussions on our mailing list this month: Paul Spooren started a thread titled Reproducible Builds Verification Format which reopens the discussion around a schema for sharing the results from distributed rebuilders:
To make the results accessible, storable and create tools around them, they should all follow the same schema, a reproducible builds verification format. The format tries to be as generic as possible to cover all open source projects offering precompiled source code. It stores the rebuilder results of what is reproducible and what not.
Hans-Christoph Steiner of the Guardian Project also continued his previous discussion regarding making our website translatable. Lastly, Leo Wandersleb posted a detailed request for feedback on a question of supply chain security and other issues of software review; Leo is the founder of the Wallet Scrutiny project which aims to prove the security of Android Bitcoin Wallets:
Do you own your Bitcoins, or do you trust that your app allows you to use your coins while they are actually controlled by "them"? Do you have a backup? Do they have a copy they didn't tell you about? Did anybody check the wallet for deliberate backdoors or vulnerabilities? Could anybody check the wallet for those?
Elsewhere, Leo had posted instructions on his attempts to reproduce the binaries for the BlueWallet Bitcoin wallet for iOS and Android platforms.


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

This month s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

25 April 2020

Julian Andres Klode: An - EPYC - Focal Upgrade

Ubuntu Focal Fossa 20.04 was released two days ago, so I took the opportunity yesterday and this morning to upgrade my VPS from Ubuntu 18.04 to 20.04. The VPS provides: I rebooted one more time than necessary, though, as my cloud provider Hetzner recently started offering 2nd generation EPYC instances, which I upgraded to from my Skylake Xeon based instance. I switched from the CX21 for €5.83/mo to the CPX11 for €4.15/mo. This involved a RAM downgrade - from 4GB to 2GB, but that's fine; the maximum usage I saw was about 1.3 GB when running dose-distcheck (running hourly). And it's good for everyone that AMD is giving Intel some good competition, I think. Anyway, to get back to the distribution upgrade - it was fairly boring. I started yesterday by taking a copy of the server and launching it locally in a lxd container, and then tested the upgrade in there, to make sure I'm prepared for the real thing :) I got a confusing prompt from postfix as to which site I'm operating (which is a normal prompt, but I don't know why I see it on an upgrade), and a few prompts about config files I had changed locally. As the server is managed by ansible, I just installed the distribution config files and dropped my changes (setting DPkg::Options { "--force-confnew"; }; in apt.conf), and then after the upgrade, ran ansible to redeploy the changes (after checking what changes it would do and adjusting a few things). There are two remaining flaws:
  1. I run rspamd from the upstream repository, and that's not built for focal yet. So I'm still using the bionic binary, and have to keep bionic's icu 60 and libhyperscan4 around for it. This is still preventing CI of the ansible config from passing for focal, because it won't have the needed bionic packages around.
  2. I run weechat from the upstream repository, and apt can't tell the versions apart. Well, it can for the repositories, because they have Size fields - but the status file does not. Hence, it merges the installed version with the first repository it sees. What happens is that it installs from weechat.org, but then it believes the installed version is from archive.ubuntu.com and replaces it on each dist-upgrade. I worked around it by moving the weechat.org repo to the front of sources.list, so that it gets merged with that instead of the archive.ubuntu.com one, as it should be, but that's a bit ugly (a pinning-based alternative is sketched below).
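     For reference, a pinning-based sketch - untested here, with the glob and priority chosen purely for illustration - that would tie the weechat packages to the weechat.org origin regardless of sources.list order, via a file in /etc/apt/preferences.d/:
     Package: weechat*
     Pin: origin weechat.org
     Pin-Priority: 1001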
I also should start the migration to EC certificates for TLS, and 0-RTT handshakes, so that the initial visit experience is faster. I guess I'll have to move away from certbot for that, but I have not investigated this recently.

15 April 2020

Antoine Beaupré: OpenDKIM configuration to send debian.org email

Debian.org added support for DKIM in 2020. To configure this on my side, I had to do the following, on top of my email configuration.
  1. add this line to /etc/opendkim/signing.table:
    *@debian.org marcos-debian.anarcat.user
    
  2. add this line to /etc/opendkim/key.table:
    marcos-debian.anarcat.user debian.org:marcos-debian.anarcat.user:/etc/opendkim/keys/marcos-debian.anarcat.user.private
    
    Yes, that's quite a mouthful! That magic selector is long in that way because it needs a special syntax (specifically the .anarcat.user suffix) for Debian to be happy. The -debian string is to tell me where the key is published. The marcos prefix is to remind me where the private key is used.
  3. generate the key with:
    opendkim-genkey --directory=/etc/opendkim/keys/ --selector=marcos-debian.anarcat.user --domain=debian.org --verbose
    
    This creates the DNS record in /etc/opendkim/keys/marcos-debian.anarcat.user.txt (alongside the private key in .key).
  4. restart OpenDKIM:
    service opendkim restart
    
    The DNS record will look something like this:
    marcos-debian.anarcat.user._domainkey   IN  TXT ( "v=DKIM1; h=sha256; k=rsa; "
    "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtKzBK2f8vg5yV307WAOatOhypQt3ANQ95iDaewkVehmx42lZ6b4PzA1k5DkIarxjkk+7m6oSpx5H3egrUSLMirUiMGsIb5XVGBPFmKZhDVmC7F5G1SV7SRqqKZYrXTufRRSne1eEtA31xpMP0B32f6v6lkoIZwS07yQ7DDbwA9MHfyb6MkgAvDwNJ45H4cOcdlCt0AnTSVndcl"
    "pci5/2o/oKD05J9hxFTtlEblrhDXWRQR7pmthN8qg4WaNI4WszbB3Or4eBCxhUdvAt2NF9c9eYLQGf0jfRsbOcjSfeus0e2fpsKW7JMvFzX8+O5pWfSpRpdPatOt80yy0eqpm1uQIDAQAB" )  ; ----- DKIM key marcos-debian.anarcat.user for debian.org
    
  5. The "p=MIIB..." string needs to be joined together, without the quotes and the p=, and sent in a signed email to changes@db.debian.org:
    -----BEGIN PGP SIGNED MESSAGE-----
    dkimPubKey: marcos.anarcat.user MIIB[...]
    -----BEGIN PGP SIGNATURE-----
    [...]
    
  6. Wait a few minutes for DNS to propagate. You can check if they have with:
    host -t TXT marcos-debian.anarcat.user._domainkey.debian.org nsp.dnsnode.net
    
    (nsp.dnsnode.net being one of the NS records of the debian.org zone.)
If all goes well, the tests should pass when sending from your server as anarcat@debian.org.
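You can also ask OpenDKIM itself to verify that the published record matches the private key, using opendkim-testkey from the opendkim-tools package (paths as set up above):
    opendkim-testkey -d debian.org -s marcos-debian.anarcat.user \
        -k /etc/opendkim/keys/marcos-debian.anarcat.user.private -vvv
With -vvv it reports each verification step; a "key not secure" warning just means the zone is not DNSSEC-validated, not that the key is wrong.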

Testing Test messages can be sent to dkimvalidator, mail-tester.com or check-auth@verifier.port25.com. Those tools will run Spamassassin on the received emails and report the results. What you are looking for is:
  • -0.1 DKIM_VALID: Message has at least one valid DKIM or DK signature
  • -0.1 DKIM_VALID_AU: Message has a valid DKIM or DK signature from author's domain
  • -0.1 DKIM_VALID_EF: Message has a valid DKIM or DK signature from envelope-from domain
If one of those is missing, then you are doing something wrong and your "spamminess" score will be worse. The latter is especially tricky, as it validates the "Envelope From", which is the MAIL FROM: header as sent by the originating MTA, and which you see as from=<> in the postfix logs. The following will happen anyway, as soon as you have a signature; that's normal:
  • 0.1 DKIM_SIGNED: Message has a DKIM or DK signature, not necessarily valid
And this might happen if you have a ADSP record but do not correctly sign the message with a domain field that matches the record:
  • 1.1 DKIM_ADSP_ALL No valid author signature, domain signs all mail
That's bad and will affect your spam score badly. I fixed that issue by using a wildcard key in the key table:
--- a/opendkim/key.table
+++ b/opendkim/key.table
@@ -1 +1 @@
-marcos anarc.at:marcos:/etc/opendkim/keys/marcos.private
+marcos %:marcos:/etc/opendkim/keys/marcos.private

References This is a copy of a subset of my more complete email configuration.

23 March 2020

Joey Hess: quarantimer: a coronavirus quarantine timer for your things

I am trying to avoid bringing coronavirus into my house on anything, and I also don't want to sterilize a lot of stuff. (Tedious and easy to make a mistake.) Currently it seems that the best approach is to leave stuff to sit undisturbed someplace safe for long enough for the virus to degrade away. Following that policy, I've quickly ended up with a porch full of stuff in different stages of quarantine, and I am quickly losing track of how long things have been in quarantine. If you have the same problem, here is a solution: https://quarantimer.app/ Open it on your mobile device, and you can take photos of each thing, select the kind of surfaces it has, and it will track the quarantine time for you. You can share the link to other devices or other people to collaborate. I anticipate the javascript and css will improve, but it's good enough for now. I will provide this website until the crisis is over. Of course, it's free software and you can also host your own. If this seems useful, please tell your friends and family about it. Be well!
This is made possible by my supporters on Patreon, particularly Jake Vosloo.

14 March 2020

Dirk Eddelbuettel: RcppAPT 0.0.6

A new version of RcppAPT, our interface from R to the C++ library behind the awesome apt, apt-get, and apt-cache commands and their cache powering Debian, Ubuntu and the like, is now on CRAN. RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations. This new version corrects build failures under the new and shiny APT 2.0 release (and pre-releases like the 1.9.* series in Ubuntu) as some header files moved around. My thanks to Kurt Hornik for the heads-up. I accommodated the change in the (very simple and shell-based) configure script by a) asking pkg-config about the version of apt-pkg and then using that to b) compare to a threshold value of 1.9.0 and c) setting another compiler #define if needed so that d) these headers could get included if defined. The neat part is that a) and b) are done in an R one-liner, and the whole script is still in shell. Now, CRAN being CRAN, I split the script into two: one almost-empty one not using bash, which passes the "omg but bash is not portable" test, and which calls a second bash script doing the work. Fun and games. The full set of changes follows.
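A rough sketch of that a)-d) dance - variable and define names here are invented for illustration, not the ones the package actually uses:
    #!/bin/sh
    # a) ask pkg-config for the installed apt-pkg version
    APT_VERSION=$(pkg-config --modversion apt-pkg)
    # b) compare against the 1.9.0 threshold via an R one-liner (prints TRUE or FALSE)
    NEW_APT=$(Rscript -e "cat(utils::compareVersion('${APT_VERSION}', '1.9.0') >= 0)")
    # c) add a compiler define when building against the newer headers
    if [ "${NEW_APT}" = "TRUE" ]; then
        PKG_CPPFLAGS="-DAPT_NEW_HEADERS"
    fi
    # d) hand the flag to the build via src/Makevars so the headers get included
    echo "PKG_CPPFLAGS = ${PKG_CPPFLAGS}" > src/Makevars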

Changes in version 0.0.6 (2020-03-14)
  • Accommodate Apt 2.0 code changes by including more header files
  • Change is backwards compatible and conditional
  • Added configure call using pkg-config and package version comparison (using R) to determine if the define is needed
  • Softened unit tests as we cannot assume optional source deb information to be present, so demo code runs but zero results are tolerated

Courtesy of CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

7 March 2020

Julian Andres Klode: APT 2.0 released

After brewing in experimental for a while, and getting a first outing in the Ubuntu 19.10 release (both as 1.9), APT 2.0 is now landing in unstable. 1.10 would be a boring, weird number, eh? Compared to the 1.8 series, the APT 2.0 series features several new features, as well as improvements in performance and hardening. A lot of code has been removed as well, reducing the size of the library.

Highlighted Changes Since 1.8

New Features
  • Commands accepting package names now accept aptitude-style patterns. The syntax of patterns is mostly a subset of aptitude's; see apt-patterns(7) for more details, and the short example after this list.
  • apt(8) now waits for the dpkg locks - indefinitely, when connected to a tty, or for 120s otherwise.
  • When apt cannot acquire the lock, it prints the name and pid of the process that currently holds the lock.
  • A new satisfy command has been added to apt(8) and apt-get(8).
  • Pins can now be specified by source package, by prepending src: to the name of the package, e.g.:
    Package: src:apt
    Pin: version 2.0.0
    Pin-Priority: 990
    
    This will pin all binaries of the native architecture produced by the source package apt to version 2.0.0. To pin packages across all architectures, append :any.
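To make the patterns and satisfy features above concrete, a couple of invocations (the dependency string is a made-up example):
    apt list '?obsolete'     # installed packages no longer available in any repository
    apt remove '?garbage'    # packages that autoremove would remove
    apt satisfy 'foo (>= 2.0), bar | baz'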

Performance
  • APT now uses libgcrypt for hashing instead of embedded reference implementations of MD5, SHA1, and SHA2 hash families.
  • Distribution of rred and decompression work during update has been improved to take into account the backlog instead of randomly assigning a worker, which should yield higher parallelization.

Incompatibilities
  • The apt(8) command no longer accepts regular expressions or wildcards as package arguments, use patterns (see New Features).

Hardening
  • Credentials specified in auth.conf now only apply to HTTPS sources, preventing malicious actors from reading credentials after they redirected users from an HTTP source to an HTTP URL matching the credentials in auth.conf. Another protocol can be specified, see apt_auth.conf(5) for the syntax.
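    For illustration, apt_auth.conf entries follow a netrc-style format; a hypothetical entry that spells out the protocol so it can only ever match the HTTPS source:
    machine https://deb.example.org/debian
    login apt
    password hunter2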

Developer changes
  • A more extensible cache format, allowing us to add new fields without breaking the ABI
  • All code marked as deprecated in 1.8 has been removed
  • Implementations of CRC16, MD5, SHA1, SHA2 have been removed
  • The apt-inst library has been merged into the apt-pkg library.
  • apt-pkg can now be found by pkg-config
  • The apt-pkg library now compiles with hidden visibility by default.
  • Pointers inside the cache are now statically typed. They cannot be compared against integers (except 0 via nullptr) anymore.

python-apt 2.0 python-apt 2.0 is not yet ready; I'm hoping to add a new, cleaner API for cache access before making the jump from 1.9 to 2.0 versioning.

libept 1.2 I've moved the maintenance of libept to the APT team. We need to investigate how to EOL this properly and provide facilities inside APT itself to replace it. There are no plans to provide new features, only bugfixes / rebuilds for new apt versions.

Next.