Search Results: "Julian Andres Klode"

16 November 2023

Dimitri John Ledkov: Ubuntu 23.10 significantly reduces the installed kernel footprint


Photo by Pixabay
Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically the installation was optimized for metered download size only. However, kernel size growth and usage patterns no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement many optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8 GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that install neither linux-firmware nor modules-extra, the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)


Hopefully, trading a larger download for the disk space and initrd savings is a win for the majority of platforms and use cases. For users on extremely expensive, metered connections, the best saving is likely to receive air-gapped updates or to skip updates.

This was achieved by precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with a matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze, whilst leveraging the experience and some of the design choices from implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.
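To make the split-cpio idea concrete, here is a minimal sketch of the assembly described above, with illustrative paths ($MODULES holding the already zstd-compressed modules/firmware tree, $USERSPACE the rest); the real work is done by the Ubuntu initramfs tooling, not these two lines:
# uncompressed cpio segment for files that are already zstd-compressed
( cd "$MODULES" && find . | cpio -o -H newc ) > initrd.img
# compressed cpio segment for the userspace portion, appended afterwards
( cd "$USERSPACE" && find . | cpio -o -H newc | zstd -19 ) >> initrd.img
The kernel unpacks concatenated cpio segments in order at boot, so the pre-compressed files are never recompressed, and in-kernel module decompression handles them at load time.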

The bugs discovered in the kernel module loading code likely affect systems that use the LoadPin LSM with in-kernel module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or you know, just use Ubuntu kernels, as they do get fixes and features like these first.

The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself, Dimitri John Ledkov, ensuring the most optimal solution was implemented, that everything landed on time, and even implementing portions of the final solution.

Hi, it's me. I am a Staff Engineer at Canonical, and we are hiring: https://canonical.com/careers.

Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, plus bikeshedding of all the things, follow below.

For questions and comments, please post to the Kernel section on Ubuntu Discourse.



10 October 2023

Julian Andres Klode: Divergence - A case for different upgrade approaches

APT currently knows about three types of upgrades, all of which are necessary to deal with upgrades within a distribution release. Yes, sometimes even removals may be needed, because bug fixes require adding a Conflicts somewhere. In Ubuntu we have a further type of upgrade, handled by a separate tool: release upgrades. ubuntu-release-upgrader changes your sources.list and applies various quirks to the upgrade. In this post, I want to look not at the quirk aspects but discuss how dependency solving should differ between intra-release and inter-release upgrades.

Previous solver projects (such as Mancoosi) operated under the assumption that minimizing the number of changes performed should ultimately be the main goal of a solver. This makes sense, as every change causes risks. However, it ignores a different risk, which especially applies when upgrading from one distribution release to a newer one: increasing divergence from the norm.

Consider a person who installs foo in Debian 12. foo depends on a | b, so a will be automatically installed to satisfy the dependency. A release later, a has some known issues and b is preferred, so the dependency now reads: b | a. A classic solver would keep a installed because it was installed before, leading upgraded installs to have foo, a installed whereas new systems have foo, b installed. As systems get upgraded over and over, they diverge further and further from new installs, to the point that it adds substantial support effort.

My proposal for the new APT solver is that when we perform release upgrades, we forget which packages were previously automatically installed. We effectively perform a normalization: all systems with the same set of manually installed packages will end up with the same set of automatically installed packages. Consider the solver starting with an empty set and then installing the latest version of each previously manually installed package: it will now see that foo depends on b | a and install b (and a will be removed later on, as it is not part of the solution).

Another case of divergence is Suggests handling. Consider that foo also Suggests s. You now install another package bar that depends on s, hence s gets installed. Upon removing bar, s is not removed automatically, because foo still suggests it (and you may have grown used to foo's integration of s). This is because apt considers Suggests to be important: they won't be automatically installed, but they will not be automatically removed either. In Ubuntu, we unset that policy on release upgrades to normalize the systems. The reasoning for that is simple: while you may have grown to use s as part of foo during the release, an upgrade to the next release is already big enough that removing s is going to have less of an impact; breakage of workflows is expected between release upgrades.

I believe that apt release-upgrade will benefit from both of these design choices, and in the end it boils down to a simple mantra.
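As an aside, the two package sets involved in this normalization are easy to inspect on any Debian/Ubuntu system with apt-mark; under the proposal, only the first set would be carried across a release upgrade as solver input:
apt-mark showmanual   # packages you chose: kept as input to the solver
apt-mark showauto     # packages the solver added: forgotten and re-derived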

1 February 2023

Julian Andres Klode: Ubuntu 2022v1 secure boot key rotation and friends

This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.

taking a step back: how does secure boot on Ubuntu work? Booting on Ubuntu involves three components after the firmware:
  1. shim
  2. grub
  3. linux
Each of these is a PE binary signed with a key. The shim is signed by Microsoft's 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA. In Ubuntu's case, the CA certificate is sharded: multiple people each have a part of the key, and they need to meet to be able to combine it and sign things, such as new code signing certificates.
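If you want to inspect this chain on your own machine, here is a hedged example using sbverify from the sbsigntool package; the ESP paths below are the usual Ubuntu locations and may differ on your system:
sbverify --list /boot/efi/EFI/ubuntu/shimx64.efi   # signed by Microsoft's 3rd party key
sbverify --list /boot/efi/EFI/ubuntu/grubx64.efi   # signed by a Canonical-CA-issued certificate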

BootHole When BootHole happened in 2020, travel was suspended and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims, and fwupds by their hashes. This generated a very large vendor dbx, which caused lots of issues, as shim exported it to a UEFI variable and not everyone had enough space for such large variables. Sigh. We decided we wanted to rotate our signing key next time. This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.

Spring 2022 CVEs We still were not ready for travel in 2021, but during BootHole we had developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable. We actually missed rotating the shim this cycle, as a new vulnerability was reported immediately after it, and we decided to hold on to the rotation.

2022 key rotation and the fall CVEs This caused some problems when the 2nd CVE round came, as we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream to not bump the shim SBAT requirements just yet. Sigh. Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs were set up to sign with them. We also submitted a shim 15.7 with the old keys revoked, which came back at around the same time. Now we were in a hurry. The 22.04.2 point release was scheduled for around the middle of February, and we had nothing signed with the new keys yet; but our new shim, which we needed for the point release (so the point release media remains bootable after the next round of CVEs), required the new keys. So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?

upgrade ordering grub and fwupd are simple cases: For grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04) and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks. (Actually, we also had a backport of the CVEs for 2.04-based grub, and we did publish that for 20.04 signed with the old keys before backporting 2.06 to it.) Kernels are a different story: there are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them. So to our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we'd simply add Breaks: linux-image-generic (<< 5.19.0-31), and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured this would be the safest option. This however caused concern, because it could be that apt decides to remove the kernel metapackage. I explored checking the kernels at runtime and aborting in preinst if we don't have a trusted kernel. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:
  1. It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
  2. We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack shim first, and then the preinst would fail because no kernel is present yet.
Ultimately we believed the danger to be too large, given that no kernels had yet been released to users. If we had had kernels pushed out for 1-2 months already, this would have been a viable choice. So in the end, I ended up modifying the shim packaging to install both the latest shim and the previous one, with an update-alternatives alternative to select between the two: in its post-installation maintainer script, shim-signed checks whether all kernels with a version greater than or equal to the running one are not revoked, and if so, it will set up the latest alternative with priority 100 and the previous one with a priority of 50. If one or more of those kernels was signed with a revoked key, it will swap the priorities around, so that the previous version is preferred. Now this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is being installed. It will then update the alternatives based on the current set of kernels, and if it now points to the latest shim, reinstall shim and grub to the ESP. Ultimately this means that once you install your 2nd non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so it's not upgraded immediately. This has a benefit in that you will most likely have two kernels you can boot without disabling secure boot.
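A rough sketch of the alternatives setup this describes (simplified; the real logic, including the priority swap for revoked kernels, lives in the shim-signed maintainer scripts):
# priorities as set up when no installed kernel >= the running one is revoked;
# the postinst swaps 100/50 when a revoked kernel is detected
update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed \
    /usr/lib/shim/shimx64.efi.signed.latest 100
update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed \
    /usr/lib/shim/shimx64.efi.signed.previous 50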

regressions Of course, the first version I uploaded still had some remaining hardcoded shimx64 in the scripts and so failed to install on arm64, where shimaa64 is used. And if that were not enough, I also forgot to include support for gzip-compressed kernels there. Sigh. I need better testing infrastructure to be able to easily run arm64 tests as well (I only tested the actual booting there, not the scripts). shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working, because the new shim was installed into images but no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we need to be careful rolling this out for image building.

another grub update for OOM issues We had two grubs to release: first there was the security update for the recent set of CVEs, then there was also an OOM issue for large initrds which was blocking critical OEM work. We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the Red Hat patches to the loader that we take from there. This ended up a fairly large patch set, and I was hesitant to tie the security update to it, so I ended up pushing the security update everywhere first, and then pushed the OOM fixes this week. With the OOM patches, you should be able to boot initrds of between 400MB and 1GB; it also depends on the memory layout of your machine and your screen resolution and background images. So the OEM team had success testing 400MB irl, and I tested up to, I think it was, 1.2GB in qemu; I ran out of FAT space then and stopped going higher :D

other features in this round
  • Intel TDX support in grub and shim
  • Kernels are now allocated as CODE rather than DATA, as per the upstream mm changes; this might fix boot on the X13s

am I using this yet? The new signing keys are used in:
  • shim-signed 1.54 on 22.10+, 1.51.3 on 22.04, 1.40.9 on 20.04, 1.37~18.04.13 on 18.04
  • grub2-signed 1.187.2~ or newer (binary packages grub-efi-amd64-signed or grub-efi-arm64-signed), 1.192 on 23.04.
  • fwupd-signed 1.51~ or newer
  • various linux updates. Check apt changelog linux-image-unsigned-$(uname -r) to see whether "Revoke & rotate to new signing key (LP: #2002812)" is mentioned; if so, it is signed with the new key. A one-liner for this check follows below.
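A minimal one-liner for that check (assuming the changelog entry text matches exactly):
apt changelog linux-image-unsigned-$(uname -r) | grep -F 'LP: #2002812'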
If you were able to install shim-signed, your grub and fwupd-efi will have the correct version, as that is ensured by packaging. However, your shim may still point to the old one. To check which shim will be used by grub-install, you can check the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in .latest:
$ update-alternatives --display shimx64.efi.signed
shimx64.efi.signed - auto mode
  link best version is /usr/lib/shim/shimx64.efi.signed.latest
  link currently points to /usr/lib/shim/shimx64.efi.signed.latest
  link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
/usr/lib/shim/shimx64.efi.signed.latest - priority 100
/usr/lib/shim/shimx64.efi.signed.previous - priority 50
If it does not, but you have installed a new kernel compatible with the new shim, you can switch immediately to the new shim after rebooting into that kernel by running dpkg-reconfigure shim-signed. You'll see in the output whether the shim was updated, or you can check the output of update-alternatives as above after the reconfiguration has finished. For the out-of-memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).

how do I test this (while it's in proposed)?
  1. upgrade your kernel to proposed and reboot into that
  2. upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.
If you already upgraded your shim before your kernel, don't worry:
  1. upgrade your kernel and reboot
  2. run dpkg-reconfigure shim-signed
And you'll be all good to go.

deep dive: uploading signed boot assets to Ubuntu For each signed boot asset, we build one version in the latest stable release and one in the development release. We then binary-copy the built binaries from the latest stable release to older stable releases. This process ensures two things: we know the next stable release is able to build the assets, and we also minimize the number of signed assets. OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing. The entire workflow looks something like this:
  1. Upload the unsigned package to the appropriate build PPA
  2. Upload the signed package to the same PPA
  3. For stable release uploads:
    • Copy the unsigned package back across all stable releases in the PPA
    • Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
  4. Submit a request to canonical-signing-jobs to sign the uploads. The signing-job helper copies the binary -unsigned packages to the primary-2022v1 PPA, where they are signed, creating a signing tarball; then it copies the source package for the -signed package to the same PPA, which then downloads the signing tarball during build and places the signed assets into the -signed deb. The resulting binaries will be placed into the proposed PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed
  5. Review the binaries themselves
  6. Unembargo and binary copy the binaries from the proposed PPA to the proposed-public PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed-public. This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private proposed PPA.
  7. Binary copy from proposed-public to the proposed queue(s) in the primary archive
Lots of steps!

WIP As of writing, only the grub updates have been released; other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead, like later release series do.

21 November 2021

Julian Andres Klode: APT Z3 Solver Basics

Z3 is a theorem prover developed at Microsoft Research and available as a dynamically linked C++ library in Debian-based distributions. While the library is a whopping 16 MB and the solver is a tad slow, its permissive licensing and the number of tactics offered give it huge potential for use in solving dependencies in a wide variety of applications. Z3 does not need normalized formulas, but offers higher-level abstractions like atmost, atleast, and implies, which we will use together with boolean variables to translate the dependency problem into a form Z3 understands. In this post, we'll see how we can apply Z3 to dependency resolution in APT. We'll only discuss the basics here; a future post will explore optimization criteria and Recommends.

Translating the universe APT's package universe consists of 3 relevant things: packages (the tuple of name and architecture), versions (basically a .deb), and dependencies between versions. While we could translate our entire universe to Z3 problems, we instead construct a root set from packages that were manually installed and versions marked for installation, and then build the transitive root set from it by translating all versions reachable from the root set. For each package P in the transitive root set, we create a boolean literal P. We then translate each version P1, P2, and so on. Translating a version means building a boolean literal for it, e.g. P1, and then translating the dependencies as shown below. We now need to create two more clauses to satisfy the basic requirements for debs:
  1. If a version is installed, the package is installed; and vice versa. We can encode this requirement for P above as P == atleast({P1, P2}, 1).
  2. There can only be one version installed. We add an additional constraint of the form atmost({P1, P2}, 1).
We also encode the requirements of the operation.
  1. For each package P that is manually installed, add a constraint P.
  2. For each version V that is marked for install, add a constraint V.
  3. For each package P that is marked for removal, add a constraint !P.

Dependencies Packages in APT have dependencies of two basic forms: Depends and Conflicts, as well as variations like Breaks (identical to Conflicts in solving terms) and Recommends (soft Depends) - we'll ignore those for now. We'll discuss Conflicts in the next section. Let's take a basic dependency list: A Depends: X | Y, Z. To represent that dependency, we expand each name to a list of versions that can satisfy the dependency, for example X1 | X2 | Y1, Z1. Translating this dependency list to our Z3 solver, we create boolean variables X1, X2, Y1, Z1 and define two rules:
  1. A implies atleast({X1, X2, Y1}, 1)
  2. A implies atleast({Z1}, 1)
If there actually was nothing that satisfied the Z requirement, we'd have added a rule not A. It would be possible to simply not tell Z3 about the version at all as an optimization, but that adds more complexity, and the not A constraint should not cause too many problems.

Conflicts Conflicts cannot have "or" in them. A dependency B Conflicts: X, Y means that only one of B, X, and Y can be installed. We can directly encode this in Z3 by using the constraint atmost({B, X, Y}, 1). This is an optimized encoding of the constraint: we could have encoded each conflict in the form !B or !X, !B or !Y, and so on. Usually this leads to worse performance, as it introduces additional clauses.

Complete example Let's assume we start with an empty install and want to install the package a below.
Package: a
Version: 1
Depends: c | b
Package: b
Version: 1
Package: b
Version: 2
Conflicts: x
Package: d
Version: 1
Package: x
Version: 1
The translation in Z3 rules looks like this:
  1. Package rules for a:
    1. a == atleast({a1}, 1) - package is installed iff one version is
    2. atmost({a1}, 1) - only one version may be installed
    3. a - a must be installed
  2. Dependency rules for a:
    1. implies(a1, atleast({b2, b1}, 1)) - the translated dependency above. Note that c is gone; it's not reachable.
  3. Package rules for b:
    1. b == atleast({b1, b2}, 1) - package is installed iff one version is
    2. atmost({b1, b2}, 1) - only one version may be installed
  4. Dependencies for b (= 2):
    1. atmost({b2, x1}, 1) - the conflict between x and b (= 2) above
  5. Package rules for x:
    1. x == atleast({x1}, 1) - package is installed iff one version is
    2. atmost({x1}, 1) - only one version may be installed
The package d is not translated, as it is not reachable from the root set {a1}; the transitive root set is {a1, b1, b2, x1}. A machine-checkable sketch of this encoding follows below.
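For the curious, here is a hedged, machine-checkable rendering of the rules above in SMT-LIB, runnable with the z3 command-line tool (package z3 in Debian/Ubuntu). The file name is illustrative, and the encoding simply mirrors the atleast/atmost notation above; it is not APT's actual implementation:
cat > /tmp/example.smt2 <<'EOF'
(declare-const a Bool) (declare-const a1 Bool)
(declare-const b Bool) (declare-const b1 Bool) (declare-const b2 Bool)
(declare-const x Bool) (declare-const x1 Bool)
(assert (= a ((_ at-least 1) a1)))        ; package rule: a iff one of its versions
(assert ((_ at-most 1) a1))               ; at most one version of a
(assert a)                                ; a must be installed
(assert (=> a1 ((_ at-least 1) b2 b1)))   ; a1 Depends: b (c is unreachable)
(assert (= b ((_ at-least 1) b1 b2)))
(assert ((_ at-most 1) b1 b2))
(assert ((_ at-most 1) b2 x1))            ; b2 Conflicts: x1
(assert (= x ((_ at-least 1) x1)))
(assert ((_ at-most 1) x1))
(check-sat)
(get-model)
EOF
z3 /tmp/example.smt2
z3 should answer sat, with a model in which a and a1 hold and at least one of b1/b2 holds.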

Next iteration: Optimization We have now constructed the basic set of rules that allows us to solve our dependency problems (equivalent to SAT); however, it might lead to suboptimal solutions where it removes automatically installed packages, or installs more packages than necessary, to name a few examples. In our next iteration, we have to look at introducing optimization: for example, having the minimum number of removals, the minimal number of changed packages, or satisfying as many Recommends as possible. We will also look at the upgrade problem (upgrade as many packages as possible) and the autoremove problem (remove as many automatically installed packages as possible).

5 July 2021

Bálint Réczey: Hello zstd compressed .debs in Ubuntu!

When Julian Andres Klode and I added initial Zstandard compression support to Ubuntu's APT and dpkg in Ubuntu 18.04 LTS, we planned on getting the changes accepted into Debian quickly and making Ubuntu 18.10 the first release where the new compression could speed up package installations and upgrades. Well, it took slightly longer than that. Since then many other packages have been updated to support zstd-compressed packages, and read-only compression has been back-ported to the 16.04 Xenial LTS release, too, on Ubuntu's side. In Debian, zstd support is available now in APT, debootstrap and reprepro (thanks Dimitri!). It is still under review for inclusion in Debian's dpkg (BTS bug 892664). Given that there is sufficient archive-wide support for zstd, Ubuntu is switching to zstd-compressed packages in Ubuntu 21.10, the current development release. Please welcome hello/2.10-2ubuntu3, the first zstd-compressed Ubuntu package, which will be followed by many others built with dpkg (>= 1.20.9ubuntu2), and enjoy the speed!
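Since a .deb is just an ar archive whose member names carry the compression extension, you can check what a package uses yourself; the output below is illustrative:
ar t hello_2.10-2ubuntu3_amd64.deb
# debian-binary
# control.tar.zst   (may also be control.tar.gz, depending on the dpkg version)
# data.tar.zst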

20 June 2021

Julian Andres Klode: Migrating away from apt-key

This is an edited copy of an email I sent to provide guidance to users of apt-key as to how to handle things in a post-apt-key world. The manual page already provides all you need to know for replacing apt-key add usage:
Note: Instead of using this command a keyring should be placed directly in the /etc/apt/trusted.gpg.d/ directory with a descriptive name and either gpg or asc as file extension
So it's kind of surprising that people need step-by-step instructions for how to copy/download a file into a directory. I'll also discuss the alternative security-snakeoil approach with signed-by that's become popular. Maybe we should not have added signed-by; people seem to forget that debs still run maintainer scripts as root. Aside from this email, Debian users should look into extrepo, which manages curated external repositories for you.

Direct translation Assume you currently have:
wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -
To translate this directly for bionic and newer, you can use:
sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.asc https://myrepo.example/myrepo.asc
or to avoid downloading as root:
wget -qO- https://myrepo.example/myrepo.asc | sudo tee -a /etc/apt/trusted.gpg.d/myrepo.asc
Older (and all) releases only support unarmored files with an extension .gpg. If you care about them, provide one, and use
sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.gpg https://myrepo.example/myrepo.gpg
Some people will tell you to download the .asc and pipe it to gpg --dearmor, but gpg might not be installed; so really, just offer a .gpg one instead, as that is supported on all systems. wget might not be available everywhere, so you can use apt-helper:
sudo /usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /etc/apt/trusted.gpg.d/myrepo.asc
or, to avoid downloading as root:
/usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /tmp/myrepo.asc && sudo mv /tmp/myrepo.asc /etc/apt/trusted.gpg.d

Pretending to be safer by using signed-by People say it's good practice to not use trusted.gpg.d and instead install the file elsewhere, then refer to it from the sources.list entry by using signed-by=<path to the file>. This looks a lot safer, because now your key can't sign other unrelated repositories. In practice, the security increase is minimal, since package maintainer scripts run as root anyway. But I guess it's better for publicity :) As an example, here are the instructions to install signal-desktop from signal.org. As mentioned, the gpg --dearmor use in there is not a good idea, and I'd personally not tell people to modify /usr, as it's supposed to be managed by the package manager, but we don't have an /etc/apt/keyrings or similar at the moment; it's fine though if the keyring is installed by the package. You can also just add the file there as a starting point, and then install a keyring package overriding it (pretend there is a signal-desktop-keyring package below that would override the .gpg we added).
# NOTE: These instructions only work for 64 bit Debian-based
# Linux distributions such as Ubuntu, Mint etc.
# 1. Install our official public software signing key
wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null
# 2. Add our repository to your list of repositories
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' |\
  sudo tee -a /etc/apt/sources.list.d/signal-xenial.list
# 3. Update your package database and install signal
sudo apt update && sudo apt install signal-desktop
I do wonder why they do wget | gpg --dearmor, pipe that into a file, and then cat | sudo tee it, instead of having it all in one pipeline. Maybe they want nicer progress reporting.
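For what it's worth, a sketch of the collapsed pipeline (same key URL and keyring path as in their instructions):
wget -O- https://updates.signal.org/desktop/apt/keys.asc \
  | gpg --dearmor \
  | sudo tee /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null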

Scenario-specific guidance We have three scenarios: For system image building, shipping the key in /etc/apt/trusted.gpg.d seems reasonable to me; you are sort of the vendor, so it can be globally trusted. Chrome-style debs and repository-config debs: if you ship a deb, embedding the sources.list.d snippet (calling it $myrepo.list) and shipping a $myrepo.gpg in /usr/share/keyrings is the best approach. Whether you ship that in product debs (aka vscode/chromium) or provide a repository-configuration deb (let's call it myrepo-repo.deb) and then tell people to run apt update followed by apt install <package inside the repo> depends on how many packages are in the repo, I guess. Manual instructions (signal style): the third case, where you tell people to run wget themselves, I find tricky. As we see with signal, just stuffing keyring files into /usr/share/keyrings is popular, despite /usr being supposed to be managed by the package manager. We don't have another dir inside /etc (or /usr/local), so it's hard to suggest something else. There's no significant benefit from actually using signed-by, so it's kind of extra work for little gain, though.

Addendum: Future work This part is new, just for this blog post. Let's look at upcoming changes and how they make things easier.

Bundled .sources files Assuming I get my merge request merged, the next version of APT (2.4/2.3.something) will do away with all the complexity and allow you to embed the key directly into a deb822 .sources file (which has been available for some time now):
Types: deb
URIs: https://myrepo.example/ https://myotherrepo.example/
Suites: stable not-so-stable
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mDMEYCQjIxYJKwYBBAHaRw8BAQdAD/P5Nvvnvk66SxBBHDbhRml9ORg1WV5CvzKY
 CuMfoIS0BmFiY2RlZoiQBBMWCgA4FiEErCIG1VhKWMWo2yfAREZd5NfO31cFAmAk
 IyMCGyMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQREZd5NfO31fbOwD6ArzS
 dM0Dkd5h2Ujy1b6KcAaVW9FOa5UNfJ9FFBtjLQEBAJ7UyWD3dZzhvlaAwunsk7DG
 3bHcln8DMpIJVXht78sL
 =IE0r
 -----END PGP PUBLIC KEY BLOCK-----
Then you can just provide a .sources file to users; they place it into sources.list.d, and everything magically works (a usage sketch follows below). Probably we'll add a nice apt add-source command for it, I guess. Well, python-apt's aptsources package still does not support deb822 sources, and never will; we'll need an aptsources2 for that, for backwards-compatibility reasons, and then port software-properties and other users to it.
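Deploying such a file is then a single copy (the file name is illustrative):
sudo cp myrepo.sources /etc/apt/sources.list.d/myrepo.sources
sudo apt update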

OpenPGP vs aptsign We do have a better, tighter replacement for gpg in the works, which uses Ed25519 keys to sign Release files. It's temporarily named aptsign, but it's a generic signer for single-section deb822 files, similar to signify/minisign. We believe that this solves the security nightmare that our OpenPGP integration is, while reducing complexity at the same time. Keys are much shorter, so the bundled sources file above will look much nicer.

27 May 2021

Michael Prokop: What to expect from Debian/bullseye #newinbullseye

Bullseye banner, Copyright 2020 Juliette Taka

Debian v11 with codename bullseye is supposed to be released as the new stable release soon-ish (let's hope for June 2021! :)). Similar to what we had with #newinbuster and previous releases, now it's time for #newinbullseye! I was the driving force at several of my customers to be well prepared for bullseye before its freeze, and since then we're on a good track there overall. In my opinion, Debian's release team did (and still does) a great job; I'm very happy about how unblock requests (not only mine but also ones I kept an eye on) were handled so far. As usual with major upgrades, there are some things to be aware of, and hereby I'm starting my public notes on bullseye that might be worthwhile also for other folks. My focus is primarily on server systems, looking at things from a sysadmin perspective. Further readings Of course start by taking a look at the official Debian release notes; make sure to especially go through "What's new in Debian 11" + "Issues to be aware of for bullseye". Chris published notes on upgrading to Debian bullseye, and anarcat also published upgrade notes for bullseye. Package versions As a starting point, let's look at some selected packages and their versions in buster vs. bullseye as of 2021-05-27 (mainly having amd64 in mind):
Package buster/v10 bullseye/v11
ansible 2.7.7 2.10.8
apache 2.4.38 2.4.46
apt 1.8.2.2 2.2.3
bash 5.0 5.1
ceph 12.2.11 14.2.20
docker 18.09.1 20.10.5
dovecot 2.3.4 2.3.13
dpkg 1.19.7 1.20.9
emacs 26.1 27.1
gcc 8.3.0 10.2.1
git 2.20.1 2.30.2
golang 1.11 1.15
libc 2.28 2.31
linux kernel 4.19 5.10
llvm 7.0 11.0
lxc 3.0.3 4.0.6
mariadb 10.3.27 10.5.10
nginx 1.14.2 1.18.0
nodejs 10.24.0 12.21.0
openjdk 11.0.9.1 11.0.11+9 + 17~19
openssh 7.9p1 8.4p1
openssl 1.1.1d 1.1.1k
perl 5.28.1 5.32.1
php 7.3 7.4+76
postfix 3.4.14 3.5.6
postgres 11 13
puppet 5.5.10 5.5.22
python2 2.7.16 2.7.18
python3 3.7.3 3.9.2
qemu/kvm 3.1 5.2
ruby 2.5.1 2.7+2
rust 1.41.1 1.48.0
samba 4.9.5 4.13.5
systemd 241 247.3
unattended-upgrades 1.11.2 2.8
util-linux 2.33.1 2.36.1
vagrant 2.2.3 2.2.14
vim 8.1.0875 8.2.2434
zsh 5.7.1 5.8
Linux Kernel The bullseye release will ship a Linux kernel based on v5.10 (v5.10.28 as of 2021-05-27, with v5.10.38 pending in unstable/sid), whereas buster shipped kernel 4.19. As usual there are plenty of changes in the kernel area, and this might warrant a separate blog entry, but to highlight some issues: One surprising change might be that the scrollback buffer (Shift + PageUp) is gone from the Linux console. Make sure to always use screen/tmux, or handle output through a pager of your choice, if you need all of it and you're on the console. The kernel provides BTF support (via CONFIG_DEBUG_INFO_BTF, see #973870), which means it's no longer necessary to install LLVM, Clang, etc. (requiring >100MB of disk space); see Gregg's excellent blog post regarding the underlying rationale. Sadly the libbpf-tools packaging didn't make it into bullseye (#978727), but if you want to use your own self-made Debian packages, my notes might be useful. With kernel version 5.4, SUBDIRS support was removed from kbuild, so if an out-of-tree kernel module (like a *-dkms package) fails to compile on bullseye, make sure to use a recent version of it which uses M= or KBUILD_EXTMOD= instead. Unprivileged user namespaces are enabled by default (see #898446 + #987777), so programs can create more restricted sandboxes without the need to run as root or via a setuid-root helper. If you prefer to keep this feature restricted (or tools like web browsers, WebKitGTK, Flatpak, etc. don't work), use sysctl -w kernel.unprivileged_userns_clone=0. The /boot/System.map file(s) no longer provide the actual data; you need to switch to the dbg package if you rely on that information:
% cat /boot/System.map-5.10.0-6-amd64 
ffffffffffffffff B The real System.map is in the linux-image-<version>-dbg package
Be aware though that the *-dbg package requires ~5GB of additional disk space.

Systemd systemd v247 made it into bullseye (updated from v241). Same as for the kernel, this might warrant a separate blog entry, but to mention some highlights: systemd in bullseye activates its persistent journal functionality by default (storing its files in /var/log/journal/, see #717388). systemd-timesyncd is no longer part of the systemd binary package itself, but available as a standalone package. This allows usage of ntp, chrony, openntpd, etc. without having systemd-timesyncd installed (which prevents race conditions like #889290, which was biting me more than once). journalctl gained new options:
--cursor-file=FILE      Show entries after cursor in FILE and update FILE
--facility=FACILITY...  Show entries with the specified facilities
--image=IMAGE           Operate on files in filesystem image
--namespace=NAMESPACE   Show journal data from specified namespace
--relinquish-var        Stop logging to disk, log to temporary file system
--smart-relinquish-var  Similar, but NOP if log directory is on root mount
systemctl gained new options:
clean UNIT...                       Clean runtime, cache, state, logs or configuration of unit
freeze PATTERN...                   Freeze execution of unit processes
thaw PATTERN...                     Resume execution of a frozen unit
log-level [LEVEL]                   Get/set logging threshold for manager
log-target [TARGET]                 Get/set logging target for manager
service-watchdogs [BOOL]            Get/set service watchdog state
--with-dependencies                 Show unit dependencies with 'status', 'cat', 'list-units', and 'list-unit-files'
 -T --show-transaction              When enqueuing a unit job, show full transaction
 --what=RESOURCES                   Which types of resources to remove
--boot-loader-menu=TIME             Boot into boot loader menu on next boot
--boot-loader-entry=NAME            Boot into a specific boot loader entry on next boot
--timestamp=FORMAT                  Change format of printed timestamps
If you use systemctl edit to adjust overrides, you'll now also get the existing configuration file listed as a comment, which I consider very helpful. The MACAddressPolicy behavior with systemd naming schema v241 changed for virtual devices (I plan to write about this in a separate blog post). There are plenty of new manual pages. systemd also gained new unit configuration options related to security hardening. Another new unit configuration option is SystemCallLog=, which supports listing the system calls to be logged. This is very useful for auditing, or temporarily when constructing system call filters. The cgroupv2 change is also documented in the release notes, but to explicitly mention it here as well, quoting from /usr/share/doc/systemd/NEWS.Debian.gz:
systemd now defaults to the unified cgroup hierarchy (i.e. cgroupv2).
This change reflects the fact that cgroups2 support has matured
substantially in both systemd and in the kernel.
All major container tools nowadays should support cgroupv2.
If you run into problems with cgroupv2, you can switch back to the previous,
hybrid setup by adding systemd.unified_cgroup_hierarchy=false to the
kernel command line.
You can read more about the benefits of cgroupv2 at
https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html
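A quick way to check which hierarchy a system actually ended up on:
stat -fc %T /sys/fs/cgroup/   # cgroup2fs = unified (cgroupv2); tmpfs = legacy/hybrid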
Note that cgroup-tools (lssubsys + lscgroup etc.) don't work in the cgroup2/unified hierarchy yet (see #959022 for the details).

Configuration management puppet's upstream doesn't provide packages for bullseye yet (see PA-3624 + MODULES-11060), and sadly neither v6 nor v7 made it into bullseye, so when using the packages from Debian you're still stuck with v5.5 (also see #950182). ansible is also available, and while it looked like only version 2.9.16 would make it into bullseye (see #984557 + #986213), actually version 2.10.8 made it in. chef was removed from Debian and is not available with bullseye (due to trademark issues).

Prometheus stack Prometheus server was updated from v2.7.1 to v2.24.1, and the prometheus service by default applies some systemd hardening now. Also all the usual exporters are still there, but bullseye also gained some new ones.

Virtualization docker (v20.10.5), ganeti (v3.0.1), libvirt (v7.0.0), lxc (v4.0.6), openstack, qemu/kvm (v5.2), and xen (v4.14.1) are all still around, though what's new and noteworthy is that podman version 3.0.1 (a tool for managing OCI containers and pods) made it into bullseye. If you're using the docker packages from upstream, be aware that they still don't seem to understand Debian package version handling. The docker* packages will not be automatically considered for upgrade, as 5:20.10.6~3-0~debian-buster is considered newer than 5:20.10.6~3-0~debian-bullseye:
% apt-cache policy docker-ce
  docker-ce:
    Installed: 5:20.10.6~3-0~debian-buster
    Candidate: 5:20.10.6~3-0~debian-buster
    Version table:
   *** 5:20.10.6~3-0~debian-buster 100
          100 /var/lib/dpkg/status
       5:20.10.6~3-0~debian-bullseye 500
          500 https://download.docker.com/linux/debian bullseye/stable amd64 Packages
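One workaround, assuming you explicitly want the bullseye build, is to request that version directly, since the solver will not pick it on its own:
sudo apt-get install docker-ce=5:20.10.6~3-0~debian-bullseye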
Vagrant is available in version 2.2.14; the package from upstream works perfectly fine on bullseye as well. If you're relying on VirtualBox, be aware that upstream doesn't provide packages for bullseye yet, but the package from Debian/unstable (v6.1.22 as of 2021-05-27) works fine on bullseye (VirtualBox hasn't shipped with stable releases for quite some time, due to a lack of cooperation from upstream on security support for older releases, see #794466). If you rely on the virtualbox-guest-additions-iso and its shared-folders support, you might be glad to hear that v6.1.22 made it into bullseye (see #988783), properly supporting more recent kernel versions like those present in bullseye.

debuginfod There's a new service debuginfod.debian.net (see debian-devel-announce and the Debian Wiki), which makes the debugging experience way smoother. You no longer need to download the debugging Debian packages (*-dbgsym/*-dbg); instead you can fetch them on demand, by exporting the following variables (before invoking gdb or alike):
% export DEBUGINFOD_PROGRESS=1    # for optional download progress reporting
% export DEBUGINFOD_URLS="https://debuginfod.debian.net"
BTW: if you can't rely on debuginfod (for whatever reason), I'd like to point your attention towards find-dbgsym-packages from the debian-goodies package.

Vim Sadly Vim 8.2 once again makes another change for bad defaults (hello, mouse behavior!). When incsearch is set, it now also applies to :substitute. This makes it veeeeeeeeeery annoying when running something like :%s/\s\+$// to get rid of trailing whitespace characters, because if there are no matches it jumps to the beginning of the file and then back, sigh. To get the old behavior back, you can use this:
au CmdLineEnter : let s:incs = &incsearch | set noincsearch
au CmdLineLeave : let &incsearch = s:incs
rsync rsync was updated from v3.1.3 to v3.2.3. It provides various checksum enhancements (see option --checksum-choice). We got new capabilities (hardlink-specials, atimes, optional protect-args, stop-at, no crtimes) and the addition of the zstd and lz4 compression algorithms. And we got new options.

OpenSSH OpenSSH was updated from v7.9p1 to v8.4p1; if you're interested in all the changes, check out the release notes between those versions (8.0, 8.1, 8.2, 8.3 + 8.4), which highlight some notable new features.

Misc unsorted

18 February 2021

Julian Andres Klode: APT 2.2 released

APT 2.2.0 marks the freeze of the 2.1 development series and the start of the 2.2 stable series. Let's have a look at what changed compared to 2.0. Many of you who run Debian testing or unstable, or Ubuntu groovy or hirsute, will already have seen most of those changes.

New features
  • Various patterns related to dependencies, such as ?depends are now available (2.1.16)
  • The Protected field is now supported. It replaces the previous Important field and is like Essential, but only for installed packages (with some more minor differences, maybe in terms of ordering the installs).
  • The update command has gained an --error-on=any option that makes it error out on any failure, not just what it considers persistent ones.
  • The rred method can now be used as a standalone program to merge pdiff files.
  • APT now implements phased updates. Phasing is used in Ubuntu to slow down and control the rollout of updates in the -updates pocket, but has previously only been available to desktop users using update-manager. (A configuration sketch follows this list.)
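For reference, a hedged apt.conf sketch for opting in or out of phasing on a given machine; the option names are the documented ones, while the file name is arbitrary:
# /etc/apt/apt.conf.d/99-phased-updates
APT::Get::Always-Include-Phased-Updates "true";    # install phased updates immediately
# APT::Get::Never-Include-Phased-Updates "true";   # or: never install them early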

Other behavioral changes
  • The kernel autoremoval helper code has been rewritten from shell to C++ and now runs at run time, rather than at kernel install time, in order to correctly protect the kernel that is running now, rather than the kernel that was running when we were installing the newest one. It also now protects only up to 3 kernels, instead of up to 4, as was originally intended and was the case before the 1.1 series. This prevents /boot partitions from running out of space, especially on Ubuntu, which has boot partitions sized for the original spec.

Performance improvements
  • The cache is now hashed using XXH3 instead of Adler32 (or CRC32c on SSE4.2 platforms)
  • The hash table size has been increased

Bug fixes
  • * wildcards work normally again (since 2.1.0)
  • The cache file now includes all translation files in /var/lib/apt/lists, so multi-user systems with different locales correctly show translated descriptions now.
  • URLs are no longer dequoted on redirects only to be requoted again, fixing some redirects where servers did not expect different quoting.
  • Immediate configuration is now best-effort, and failure is no longer fatal.
  • various changes to solver marking leading to different/better results in some cases (since 2.1.0)
  • The lower level I/O bits of the HTTP method have been rewritten to hopefully improve stability
  • The HTTP method no longer infinitely retries downloads on some connection errors
  • The pkgnames command no longer accidentally includes source packages
  • Various fixes from fuzzing efforts by David

Security fixes
  • Out-of-bound reads in ar and tar implementations (CVE-2020-3810, 2.1.2)
  • Integer overflows in ar and tar (CVE-2020-27350, 2.1.13)
(all of which have been backported to all stable series, back all the way to 1.0.9.8.* series in jessie eLTS)

Incompatibilities
  • N/A - there were no breaking changes in apt 2.2 that we are aware of.

Deprecations
  • apt-key(1) is scheduled to be removed in Q2/2022, and several new warnings have been added. apt-key was made obsolete in version 0.7.25.1, released in January 2010, by /etc/apt/trusted.gpg.d becoming a supported place to drop additional keyring files, and has since then only been intended for deleting keys in the legacy trusted.gpg keyring. Please manage files in trusted.gpg.d yourself; or place them in a different location such as /etc/apt/keyrings (or make up your own, there's no standard location) or /usr/share/keyrings, and use signed-by in the sources.list.d files. The legacy trusted.gpg keyring still works, but will also stop working eventually. Please make sure you have all your keys in trusted.gpg.d; a hedged migration sketch follows after this list. Warnings might be added in the upcoming months when a signature could not be verified using just trusted.gpg.d. Future versions of APT might switch away from GPG.
  • As a reminder, regular expressions and wildcards other than * inside package names are deprecated (since 2.0). They are not available anymore in apt(8), and will be removed for safety reasons in apt-get in a later release.
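A hedged sketch of that migration for a single key; KEYID is a placeholder to be taken from the apt-key list output, and the .asc form requires a release that accepts armored keys (see the earlier post above):
apt-key list                                                      # find KEYID in the legacy trusted.gpg
apt-key export KEYID | sudo tee /etc/apt/trusted.gpg.d/myrepo.asc > /dev/null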

30 October 2020

Kees Cook: combining apt install and apt dist-upgrade?

I frequently see a pattern in image build/refresh scripts where a set of packages is installed, and then all packages are updated:
apt update
apt install -y pkg1 pkg2 pkg3
apt dist-upgrade -y
While it's not much, this results in redundant work: for example, reading/writing the package database and potentially running triggers (man-page refresh, ldconfig, etc.). The internal package dependency resolution stuff isn't actually different: install will also do upgrades of needed packages, etc. Combining them should be entirely possible, but I haven't found a clean way to do this yet. The best I've got so far is:
apt update
apt-cache dumpavail | dpkg --merge-avail -
(for i in pkg1 pkg2 pkg3; do echo "$i install"; done) | dpkg --set-selections
apt-get dselect-upgrade
This gets me the effect of running install and upgrade at the same time, but not dist-upgrade (which has slightly different resolution logic that I'd prefer to use). Also, it includes the overhead of what should be an unnecessary update of dpkg's database. Anyone know a better way to do this? Update: Julian Andres Klode pointed out that dist-upgrade actually takes package arguments too, just like install. *face palm* I didn't even try it; I believed the man-page and the -h output. It works perfectly!
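In other words, the whole dance above collapses to (a sketch of the combined invocation):
apt update
apt dist-upgrade -y pkg1 pkg2 pkg3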

2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License (CC BY-SA 4.0).

3 October 2020

Julian Andres Klode: Google Pixel 4a: Initial Impressions

Yesterday I got a fresh new Pixel 4a to replace my dying OnePlus 6. The OnePlus had developed some faults over time: it repeatedly loses connection to the AP and the network, and it got a bunch of scratches and scuffs from falling on various surfaces without any protection over the past year.

Why get a Pixel? Camera: OnePlus focuses on stuffing as many sensors as it can into a phone, rather than a good main sensor, resulting in pictures that are mediocre blurry messes - the dreaded oil-painting effect. Pixels have some of the best cameras in the smartphone world. Sure, other hardware is far more capable, but the Pixels manage consistent results, so you need to take fewer pictures because they don't come out blurry half the time, and the post-processing is so good that the pictures you get are just great. Other phones can shoot better pictures, sure - on a tripod. Security updates: Pixels provide 3 years of monthly updates, with security updates being published on the 5th of each month. OnePlus only provides updates every 2 months, and then the updates they do release are almost a month out of date, not counting that they are only 1st-of-month patches, meaning vendor blob updates included in the 5th-of-month updates are even a month older. Given that all my banking runs on the phone, I don't want it to be constantly behind. Feature updates: Of course, Pixels also get Beta Android releases and the newest Android release faster than any other phone, which is advantageous for Android development and being nerdy. Size and weight: OnePlus phones keep getting bigger and bigger. By today's standards, the OnePlus 6 at 6.18" and 177g is a small and lightweight device. Their latest phone, the Nord, has 6.44" and weighs 184g; the OnePlus 8 comes in at 180g with a 6.55" display. This is becoming unwieldy. Eschewing glass and aluminium for plastic, the Pixel 4a comes in at 144g.

First impressions

Accessories The Pixel 4a comes in a small box with a charger, a USB-C to USB-C cable, a USB-OTG adapter, and a sim tray ejector. No pre-installed screen protector or bumper is provided, as we've grown accustomed to from Chinese manufacturers like OnePlus or Xiaomi. The sim tray ejector has a circular end instead of the standard oval one - I assume so it looks like the o in Google? Google sells you fabric cases for 45 €. That seems a bit excessive, although I like that a lot of it is recycled.

Haptics Coming from a 6.18" phablet, the Pixel 4a with its 5.81" feels tiny. In fact, it s so tiny my thumb and my index finger can touch while holding it. Cute! Bezels are a bit bigger, resulting in slightly less screen to body. The bottom chin is probably impracticably small, this was already a problem on the OnePlus 6, but this one is even smaller. Oh well, form over function. The buttons on the side are very loud and clicky. As is the vibration motor. I wonder if this Pixel thinks it s a Model M. It just feels great. The plastic back feels really good, it s that sort of high quality smooth plastic you used to see on those high-end Nokia devices. The finger print reader, is super fast. Setup just takes a few seconds per finger, and it works reliably. Other phones (OnePlus 6, Mi A1/A2) take like half a minute or a minute to set up.

Software The software - stock Android 11 - is fairly similar to OnePlus' OxygenOS. It's a clean experience, without a ton of added bloatware (even OnePlus now ships Facebook out of the box, eww). It's cleaner than OxygenOS in some ways - there are no duplicate photos apps, for example. On the other hand, it also has quite a bunch of Google stuff I could not care less about, like YT Music. To be fair, those are minor noise once all 130 apps were transferred from the old phone. There are various things I miss coming from OnePlus, such as off-screen gestures, a network transfer rate indicator in quick settings, or a circular battery icon. But the Pixel has an always-on display, which is kind of nice. Most of the cool Pixel features, like call screening or live transcriptions, are unfortunately not available in Germany. The display is set to show the same amount of content as my 6.18" OnePlus 6 did, so everything is a bit tinier. This usually takes me a week or two to adjust to, and then when I look at the OnePlus again I'll be like "oh, the font is huge", but right now it feels a bit small on the Pixel. You can configure three colour profiles for the Pixel 4a: Natural, Boosted, and Adaptive. I have mine set to Adaptive. I'd love to see stock Android learn what OnePlus has here: the ability to adjust the colour temperature manually, as I prefer to keep my devices closer to 5500K than 6500K, as I feel it's a bit easier on the eyes. Or well, just give me the ability to load an ICM profile (though I'd need to calibrate the screen then - work!).

Migration experience Restoring the apps from my old phone only restored settings for a handful out of 130, which is disappointing. I had to spend an hour or two logging in to all the other apps, and I had to fiddle far too long with openScale to get it to take its data over. It's a mystery to me why people do not allow their apps to be backed up, especially something innocent like a weight-tracking app. One of my banking apps restored its logins, which I did not really like. KeePass2Android settings were restored as well, but at least the key file was not restored. I did not opt in to restoring my device settings, as I feel that restoring device settings when changing manufacturers is bound to mess up some things. For example, I remember people migrating to OnePlus phones and getting their old DND schedule without any way to change it, because OnePlus had hidden the DND stuff. I assume that's the reason some accounts, like my work GSuite account, were not migrated (it said it would migrate accounts during setup). I've set up Bitwarden as my auto-fill service, so I could log in to most of my apps and websites using the stored credentials. I found that often that did not work. Like, Chrome does autofill fine once, but if I then want to autofill again, I have to kill and restart it, otherwise I don't get the auto-fill menu. Other apps did not allow any auto-fill at all, and only gave me the option to copy and paste. Yikes - auto-fill on Android still needs a lot of work.

Performance It hangs a bit sometimes, but this was likely due to me having set 2 million iterations on my Bitwarden KDF and using Bitwarden a lot, and then opening up all 130 apps to log into them, which overwhelmed the phone a bit. Apart from that, it does not feel worse than the OnePlus 6, which was to be expected, given that the benchmarks only show a slight loss in performance. Photos do take a few seconds to process after taking them, which is annoying, but understandable given how much Google relies on computation to provide decent pictures.

Audio The Pixel has dual speakers, with the earpiece delivering a tiny sound and the bottom-firing speaker doing most of the work. Still, it's better than just having the bottom-firing speaker, as it does provide a more immersive experience. Bass makes this thing vibrate a lot. It does not feel like a resonance sort of thing, but you can feel the bass in your hands. I've never had this before, and it will take some time getting used to.

Final thoughts This is a boring phone. There's no wow factor at all. It's neither huge, nor does it have high-res 48 or 64 MP cameras, nor does it have a ton of sensors. But everything it does, it does well. It does not pretend to be a flagship like its competition, it doesn't want to wow you, it just wants to be the perfect phone for you. The build is solid, the buttons make you think of a Model M, the camera is one of the best in any smartphone, and you of course get the latest updates before anyone else. It does not feel like an only-350 € phone, and yet it is. 128GB of storage is plenty, 1080p resolution is plenty, 12.2MP is, you guessed it, plenty. The same applies to the other two Pixel phones - the 4a 5G and the 5. Neither is a particularly exciting phone, and I personally find it hard to justify spending 620 € on the Pixel 5 when the Pixel 4a does the job for me, but the 4a 5G might appeal to users looking for larger phones. As to 5G, I wouldn't get much use out of it, seeing as it's not available anywhere I am. Because I'm on Vodafone. If you have a Telekom contract or live outside of Germany, you might just have good 5G coverage already, and it might make sense to get a 5G phone rather than sticking to the budget choice.

Outlook The big question for me is whether I'll be able to adjust to the smaller display. I now have a tablet, so I'm less often using the phone (which my hands thank me for), which means that a smaller phone is probably a good call. Oh, while we're talking about calls - I only have a data-only SIM in it, so I could not test calling. I'm transferring to a new phone contract this month, and I'll give it a go then. This will be the first time I get VoLTE and WiFi calling, although it is Vodafone, so quality might just be worse than Telekom on 2G, who knows. A big shoutout to congstar for letting me cancel with a simple button click, and to @vodafoneservice on Twitter for quickly setting up my benefits of an additional 5GB per month and a €10 discount for being an existing cable customer. I'm also looking forward to playing around with the camera (especially night sight), and eSIM. And I'm getting a case from China, which was handed over to the airline on Sep 17 according to Aliexpress, so I guess it should arrive in the next weeks. Oh, and the screen protector is not here yet, so I can't really judge the screen quality much, as I still have the factory protection film on it, and that's just a blurry mess - but good enough for setting it up. Please Google, pre-apply a screen protector on future phones and include a simple bumper case. I might report back in two weeks when I have spent some more time with the device.

9 June 2020

Julian Andres Klode: Review: Chromebook Duet

Sporting a beautiful 10.1" 1920x1200 display, the Lenovo IdeaPad Duet Chromebook, or Duet Chromebook for short, is one of the latest Chromebooks released and one of the few slate-style tablets, and it's only about 300 EUR (300 USD). I've had one for about 2 weeks now, and here are my thoughts.

Build & Accessories The tablet is a fairly Pixel-style affair, in that the back has two components: a softer blue one housing the camera and a metal-feeling gray one. Build quality is fairly good. The volume and power buttons are located on the right side of the tablet, and this is one of the main issues: you end up accidentally pressing the power button when you want to turn your volume lower, despite the power button having a different texture. Alongside the tablet, you also find a kickstand with a textile back, and a keyboard, both of which attach via magnets (and pogo pins for the keyboard). The keyboard is crammed, with punctuation keys halved in size, and it feels mushy compared to my usual ThinkPads and Model Ms, but it's on par with other Chromebooks, which is surprising, given it's a tablet attachment.
fully assembled Chromebook Duet
I mostly use the Duet as a tablet, and only attach the keyboard occasionally. Typing with the keyboard on your lap is suboptimal. My first Duet had a few patches of dead pixels, so I returned it; I had also ordered a second one, which I could not cancel. Oh dear. That one was fine!

Hardware & Connectivity The Chromebook Duet is powered by a Mediatek Helio P60T SoC, 4GB of RAM, and a choice of 64 or 128 GB of main storage. The tablet provides one USB-C port for charging, audio output (a 3.5mm adapter is provided in the box), USB hub, and video output; though, sadly, the latter is restricted to a maximum of 1080p30, or 1440x900 at 60 Hz. It can be charged using the included 10W charger, or with, I believe, up to 18W from a higher-powered USB-C PD charger. I've successfully used the Chromebook with a USB-C monitor with attached keyboard, mouse, and DAC without any issues. On the wireless side, the tablet provides 2x2 WiFi AC and Bluetooth 4.2. WiFi reception seemed just fine, though I have not done any speed testing, lacking a sensible connection at the moment. I used Bluetooth to connect to my smartphone for instant tethering, and to my Sony WH1000XM2 headphones, both of which worked without any issues. The 1920x1200 screen is a bright 400-nit display with excellent viewing angles, and the speakers do a decent job, meaning you can easily use this for watching a movie when you're alone in a room and idling around. The device supports styluses following the USI standard. As of right now, the only such stylus I know about is an HP one, and it costs about €70 or so. Cameras are provided on the front and the rear, but produce terrible images.

Software: The tablet experience The Chromebook Duet runs Chrome OS, and comes with access to Android apps via the Play Store (and sideloading in dev mode) as well as full Linux environments powered by LXD inside VMs. The 1920x1200 screen is scaled to a ridiculous 1080x675 by default, which is good for being able to tap buttons, but provides next to no content. Scaling it to 1350x844 makes things more balanced. The Linux integration is buggy. Touches register in different places than where they happened, and the screen is cut off in full-screen extremetuxracer, making it hard to recommend for such uses. Android apps generally work fine. There are some issues with the back gesture not registering, but otherwise I have not found issues I can remember. One major drawback as a portable media consumption device is that Android apps only work with Widevine level 3, and hence do not have access to HD content, and the web apps of Netflix and co do not support downloading. Though one of the Duets actually reported L1 in checker apps at some point (reported in issue 1090330). It's also worth noting that Amazon Prime Video only renders in SD, unless you change your user agent to say you are Chrome on Windows - bad Amazon! The tablet experience also lags in some other ways, as the palm rejection is overly aggressive, causing it to reject valid clicks close to the edge of the display (reported in issue 1090326). The on-screen keyboard is terrible. It only does one language at a time, forcing me to switch between German and English all the time, and does not behave as you'd expect when editing existing words - it does not know about them and thinks you are starting a new one. It does provide a small keyboard that you can move around, as well as a draw-your-letters keyboard, which could come in handy for stylus users, I guess. In any case, it's miles away from gboard on Android. Stability is a mixed bag right now. As of Chrome OS 83, sites (well, only Disney+ so far) sometimes get killed with SIGILL or SIGTRAP, and the device rebooted on its own once or twice. Android apps that use DRM sometimes do not start, and the Netflix Android app sometimes reports it cannot connect to the servers.

Performance Performance is decent to sluggish, with micro-stuttering in a lot of places. The Mediatek CPU is comparable to Intel Atoms, and with only 4GB of RAM and an entire Android container running, it's starting to show how weak it is. I found that Google Docs worked perfectly fine, as did websites such as Mastodon, Twitter, and Facebook. Where the device really struggled was Reddit, where closing or opening a post, or getting a reply box, could take 5 seconds or more. If you are looking for a Reddit browsing device, this is not for you. Performance in Netflix was fine, and Disney+ was fairly slow but still usable. All in all, it's acceptable, and given the price point and the build quality, probably the compromise you'd expect.

Summary tl;dr:
  • good: Build quality, bright screen, low price, included accessories
  • bad: DRM issues, performance, limited USB-C video output, charging speed, on-screen keyboard, software bugs
The Chromebook Duet or IdeaPad Duet Chromebook is a decent tablet that is built well above its price point. Its lackluster performance and DRM woes make it hard to give a general recommendation, though. It's not a good laptop. I can see this as the perfect note-taking device for students, as a cheap tablet for couch surfing, or as your on-the-go laptop replacement, if you need it only occasionally. I cannot see anyone using this as their main laptop, although I guess some people only have phones these days, so: what do I know? I can see you getting this device if you want to tinker with Linux on ARM, as Chromebooks are quite nice to tinker with, and a tablet is super nice.

25 April 2020

Julian Andres Klode: An - EPYC - Focal Upgrade

Ubuntu Focal Fossa 20.04 was released two days ago, so I took the opportunity yesterday and this morning to upgrade my VPS from Ubuntu 18.04 to 20.04. The VPS provides: I rebooted one more time than necessary, though, as my cloud provider Hetzner recently started offering 2nd generation EPYC instances, which I upgraded to from my Skylake-Xeon-based instance. I switched from the CX21 for €5.83/mo to the CPX11 for €4.15/mo. This involved a RAM downgrade - from 4GB to 2GB - but that's fine; the maximum usage I saw was about 1.3 GB when running dose-distcheck (running hourly). And it's good for everyone that AMD is giving Intel some good competition, I think. Anyway, to get back to the distribution upgrade - it was fairly boring. I started yesterday by taking a copy of the server and launching it locally in a lxd container, and then tested the upgrade in there, to make sure I'm prepared for the real thing :) I got a confusing prompt from postfix as to which site I'm operating (which is a normal prompt, but I don't know why I see it on an upgrade), and a few prompts about config files I had changed locally. As the server is managed by ansible, I just installed the distribution config files and dropped my changes (setting DPkg::Options { "--force-confnew"; }; in apt.conf - see the sketch at the end of this post), and then after the upgrade, ran ansible to redeploy the changes (after checking what changes it would do and adjusting a few things). There are two remaining flaws:
  1. I run rspamd from the upstream repository, and that's not built for focal yet. So I'm still using the bionic binary, and have to keep bionic's icu 60 and libhyperscan4 around for it. This is still preventing CI of the ansible config from passing for focal, because it won't have the needed bionic packages around.
  2. I run weechat from the upstream repository, and apt can't tell the versions apart. Well, it can for the repositories, because they have Size fields - but the status file does not. Hence, it merges the installed version with the first repository it sees. What happens is that it installs from weechat.org, but then it believes the installed version is from archive.ubuntu.com and replaces it on each dist-upgrade. I worked around it by moving the weechat.org repo to the front of sources.list, so that the installed version gets merged with that one instead of the archive.ubuntu.com one, as it should be, but that's a bit ugly.
I also should start the migration to EC certificates for TLS, and 0-RTT handshakes, so that the initial visit experience is faster. I guess I'll have to move away from certbot for that, but I have not investigated this recently.
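As a footnote on the confnew dance above, here is a minimal sketch of how such an upgrade could be driven. This is an illustration rather than the exact commands from this post; the playbook name is made up:

    # /etc/apt/apt.conf.d/90force-confnew - temporary, only for the upgrade:
    # take the distribution's config files, drop local modifications
    DPkg::Options { "--force-confnew"; };

    # then, in a shell:
    sudo do-release-upgrade                    # 18.04 -> 20.04
    ansible-playbook site.yml --check --diff   # preview config changes
    ansible-playbook site.yml                  # redeploy local changes

Removing the temporary snippet afterwards restores the usual conffile prompts for later upgrades.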

7 March 2020

Julian Andres Klode: APT 2.0 released

After brewing in experimental for a while, and getting a first outing in the Ubuntu 19.10 release (both as 1.9), APT 2.0 is now landing in unstable. 1.10 would have been a boring, weird number, eh? Compared to the 1.8 series, the APT 2.0 series features several new features, as well as improvements in performance and hardening. A lot of code has been removed as well, reducing the size of the library.

Highlighted Changes Since 1.8

New Features
  • Commands accepting package names now accept aptitude-style patterns. The syntax of patterns is mostly a subset of aptitude's; see apt-patterns(7) for more details, and the example after this list.
  • apt(8) now waits for the dpkg locks - indefinitely, when connected to a tty, or for 120s otherwise.
  • When apt cannot acquire the lock, it prints the name and pid of the process that currently holds the lock.
  • A new satisfy command has been added to apt(8) and apt-get(8).
  • Pins can now be specified by source package, by prepending src: to the name of the package, e.g.:
    Package: src:apt
    Pin: version 2.0.0
    Pin-Priority: 990
    
    Will pin all binaries of the native architecture produced by the source package apt to version 2.0.0. To pin packages across all architectures, append :any.
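To make the pattern and satisfy features concrete, a small usage sketch; the package names passed to satisfy are made up:

    # aptitude-style pattern, per apt-patterns(7): list automatically
    # installed packages that no longer have reverse dependencies
    apt list '?garbage'

    # the new satisfy command resolves a dependency string directly
    sudo apt satisfy 'libfoo-dev (>= 1.0), libbar-dev'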

Performance
  • APT now uses libgcrypt for hashing instead of embedded reference implementations of MD5, SHA1, and SHA2 hash families.
  • Distribution of rred and decompression work during update has been improved to take into account the backlog instead of randomly assigning a worker, which should yield higher parallelization.

Incompatibilities
  • The apt(8) command no longer accepts regular expressions or wildcards as package arguments; use patterns instead (see New Features).

Hardening
  • Credentials specified in auth.conf now only apply to HTTPS sources, preventing malicious actors from reading credentials after redirecting users from an HTTP source to an HTTP URL matching the credentials in auth.conf. Another protocol can be specified explicitly; see apt_auth.conf(5) for the syntax, and the sketch below.
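For illustration, here is what an entry could look like, following the netrc-style format of apt_auth.conf(5); the host and credentials are made up:

    machine private.example.org
    login apt
    password s3cret

With this change, these credentials are only used for https://private.example.org/... sources; to allow plain http you would have to prefix the machine entry with the protocol explicitly (not recommended).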

Developer changes
  • A more extensible cache format, allowing us to add new fields without breaking the ABI
  • All code marked as deprecated in 1.8 has been removed
  • Implementations of CRC16, MD5, SHA1, SHA2 have been removed
  • The apt-inst library has been merged into the apt-pkg library.
  • apt-pkg can now be found by pkg-config; a usage sketch follows this list.
  • The apt-pkg library now compiles with hidden visibility by default.
  • Pointers inside the cache are now statically typed. They cannot be compared against integers (except 0 via nullptr) anymore.
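As a quick illustration of the pkg-config change - assuming the module is exported under the name apt-pkg, and with a made-up source file name:

    # query compile and link flags for libapt-pkg
    pkg-config --cflags --libs apt-pkg
    # e.g. build a small tool against it
    g++ -std=c++17 -o cachetool cachetool.cc $(pkg-config --cflags --libs apt-pkg)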

python-apt 2.0 python-apt 2.0 is not yet ready; I'm hoping to add a new, cleaner API for cache access before making the jump from 1.9 to 2.0 versioning.

libept 1.2 I've moved the maintenance of libept to the APT team. We need to investigate how to EOL this properly and provide facilities inside APT itself to replace it. There are no plans to provide new features, only bugfixes / rebuilds for new apt versions.

23 October 2017

Julian Andres Klode: APT 1.6 alpha 1 - seccomp and more

I just uploaded APT 1.6 alpha 1, introducing a very scary thing: seccomp sandboxing for methods, the programs downloading files from the internet and decompressing or compressing stuff. With seccomp I reduced the number of system calls these methods can use from 430 to 149. Specifically, we excluded most ways of IPC, xattrs, and, most importantly, the ability for methods to clone(2), fork(2), or execve(2) (or execveat(2)). Yes, that's right: methods can no longer execute programs. This was a real problem, because the http method did in fact execute programs - there is this small option called ProxyAutoDetect or Proxy-Auto-Detect where you can specify a script to run for a URL, and the script outputs a (list of) proxies. In order to be able to seccomp the http method, I moved the invocation of the script to the parent process. The parent process now executes the script within the sandbox user, but without seccomp (obviously). I tested the code on amd64, ppc64el, s390x, arm64, mipsel, i386, and armhf. I hope it works on all other architectures libseccomp is currently built for in Debian, but I did not check that, so your apt might be broken now if you use powerpc, powerpcspe, armel, mips, mips64el, hppa, or x32 (I don't think you can even really use x32). Also, apt-transport-https is gone for good now. When installing the new apt release, any installed apt-transport-https package is removed (apt breaks apt-transport-https now, but it also provides it, versioned, so any dependencies should still be satisfiable). David also did a few cool bug fixes again, finally teaching apt-key to ignore unsupported GPG key files instead of causing weird errors.
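For reference, a minimal sketch of the proxy auto-detection setup mentioned above; the paths and proxy URL are made up. The apt.conf side:

    Acquire::http::Proxy-Auto-Detect "/usr/local/bin/apt-proxy-detect";

And the script itself, which receives the URL being fetched and prints a proxy URL (or DIRECT) on stdout:

    #!/bin/sh
    echo "http://proxy.example.com:3128"

With this release, that script runs in the parent process, outside seccomp but inside the sandbox user.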
Filed under: Uncategorized

30 September 2017

Chris Lamb: Free software activities in September 2017

Here is my monthly update covering what I have been doing in the free software world in September 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:
  • Published a short blog post about how to determine which packages on your system are reproducible. [...]
  • Submitted a pull request for Numpy to make the generated config.py files reproducible. [...]
  • Provided a patch to GTK upstream to ensure the immodules.cache files are reproducible. [...]
  • Within Debian:
    • Updated isdebianreproducibleyet.com, moving it to HTTPS, adding cache-busting, as well as keeping the number up-to-date.
    • Submitted the following patches to fix reproducibility-related toolchain issues:
      • gdk-pixbuf: Make the output of gdk-pixbuf-query-loaders reproducible. (#875704)
      • texlive-bin: Make PDF IDs reproducible. (#874102)
    • Submitted a patch to fix a reproducibility issue in doit.
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Chaired our monthly IRC meeting. [...]
  • Worked on publishing our weekly reports. (#123, #124, #125, #126 & #127)


I also made the following changes to our tooling:
reproducible-check

reproducible-check is our script to determine which of the packages actually installed on your system are reproducible or not; a usage sketch follows below.

  • Handle multi-architecture systems correctly. (#875887)
  • Use the "restricted" data file to mask transient issues. (#875861)
  • Expire the cache file after one day and base the local cache filename on the remote name. [...] [...]
I also blogged about this utility. [...]
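A usage sketch, assuming the script is installed and can fetch the reproducibility data it compares against:

    # list installed packages known not to build reproducibly
    reproducible-check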
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues; a usage sketch follows the change list below.

  • Filed an issue attempting to identify the causes behind an increased number of timeouts visible in our CI infrastructure, including running a number of benchmarks of recent versions. (#875324)
  • New features:
    • Add "binwalking" support to analyse concatenated CPIO archives such as initramfs images. (#820631).
    • Print a message if we are reading data from standard input. [...]
  • Bug fixes:
    • Loosen matching of file(1)'s output to ensure we correctly also match TTF files under file version 5.32. [...]
    • Correct references to path_apparent_size in comparators.utils.file and self.buf in diffoscope.diff. [...] [...]
  • Testing:
    • Make failing some critical flake8 tests result in a failed build. [...]
    • Check we identify all CPIO fixtures. [...]
  • Misc:
    • No need for try-assert-except block in setup.py. [...]
    • Compare types with identity not equality. [...] [...]
    • Use logging.py's lazy argument interpolation. [...]
    • Remove unused imports. [...]
    • Numerous PEP8, flake8, whitespace, other cosmetic tidy-ups.
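A usage sketch with made-up file names, comparing two builds of the same package and writing an HTML report:

    diffoscope --html report.html first.deb second.deb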

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Log which handler processed a file. (#876140). [...]

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.



Debian My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.
Lintian I made a large number of changes to Lintian, the static analysis tool for Debian packages. It reports on various errors, omissions and general quality-assurance issues to maintainers. I also blogged specifically about the Lintian 2.5.54 release.

Patches contributed
  • debconf: Please add a context manager to debconf.py. (#877096)
  • nm.debian.org: Add pronouns to ALL_STATUS_DESC. (#875128)
  • user-setup: Please drop set_special_users hack added for "the convenience of heavy testers". (#875909)
  • postgresql-common: Please update README.Debian for PostgreSQL 10. (#876438)
  • django-sitetree: Should not mask test failures. (#877321)
  • charmtimetracker:
    • Missing binary dependency on libqt5sql5-sqlite. (#873918)
    • Please drop "Cross-Platform" from package description. (#873917)
I also submitted 5 patches for packages with incorrect calls to find(1) in debian/rules against hamster-applet, libkml, pyferret, python-gssapi & roundcube.

Debian LTS

This month I have been paid to work 15 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Documented an example usage of autopkgtests to test security changes.
  • Issued DLA 1084-1 and DLA 1085-1 for libidn and libidn2-0 to fix integer overflow vulnerabilities in Punycode handling.
  • Issued DLA 1091-1 for unrar-free to prevent a directory traversal vulnerability from a specially-crafted .rar archive. This update introduces a regression test.
  • Issued DLA 1092-1 for libarchive to prevent malicious .xar archives causing a denial of service via a heap-based buffer over-read.
  • Issued DLA 1096-1 for wordpress-shibboleth, correcting a cross-site scripting vulnerability in the Shibboleth identity provider module.

Uploads
  • python-django:
    • 1.11.5-1 New upstream security release. (#874415)
    • 1.11.5-2 Apply upstream patch to fix QuerySet.defer() with "super" and "subclass" fields. (#876816)
    • 2.0~alpha1-2 New upstream alpha release of Django 2.0, dropping support for Python 2.x.
  • redis:
    • 4.0.2-1 New upstream release.
    • 4.0.2-2 Update 0004-redis-check-rdb autopkgtest test to ensure that the redis.rdb file exists before testing against it.
    • 4.0.2-2~bpo9+1 Upload to stretch-backports.
  • aptfs (0.11.0-1) New upstream release, moving away from using /var/lib/apt/lists internals. Thanks to Julian Andres Klode for a helpful bug report. (#874765)
  • lintian (2.5.53, 2.5.54) New upstream releases. (Documented in more detail above.)
  • bfs (1.1.2-1) New upstream release.
  • docbook-to-man (1:2.0.0-39) Tighten autopkgtests and enable testing via travis.debian.net.
  • python-daiquiri (1.3.0-1) New upstream release.

I also made the following non-maintainer uploads (NMUs):

Debian bugs filed
  • clipit: Please choose a sensible startup default in "live" mode. (#875903)
  • git-buildpackage: Please add a --reset option to gbp pull. (#875852)
  • bluez: Please default Device "friendly name" to hostname without domain. (#874094)
  • bugs.debian.org: Please explicitly link to packages.debian.org & tracker.debian.org. (#876746)
  • Requests for packaging:
    • selfspy: log everything you do on the computer. (#873955)
    • shoogle: use the Google API from the shell. (#873916)

FTP Team

As a Debian FTP assistant I ACCEPTed 86 packages: bgw-replstatus, build-essential, caja-admin, caja-rename, calamares, cdiff, cockpit, colorized-logs, comptext, comptty, copyq, django-allauth, django-paintstore, django-q, django-test-without-migrations, docker-runc, emacs-db, emacs-uuid, esxml, fast5, flake8-docstrings, gcc-6-doc, gcc-7-doc, gcc-8, golang-github-go-logfmt-logfmt, golang-github-google-go-cmp, golang-github-nightlyone-lockfile, golang-github-oklog-ulid, golang-pault-go-macchanger, h2o, inhomog, ip4r, ldc, libayatana-appindicator, libbson-perl, libencoding-fixlatin-perl, libfile-monitor-lite-perl, libhtml-restrict-perl, libmojo-rabbitmq-client-perl, libmoosex-types-laxnum-perl, libparse-mime-perl, libplack-test-agent-perl, libpod-projectdocs-perl, libregexp-pattern-license-perl, libstring-trim-perl, libtext-simpletable-autowidth-perl, libvirt, linux, mac-fdisk, myspell-sq, node-coveralls, node-module-deps, nov-el, owncloud-client, pantomime-clojure, pg-dirtyread, pgfincore, pgpool2, pgsql-asn1oid, phpliteadmin, powerlevel9k, pyjokes, python-evdev, python-oslo.db, python-pygal, python-wsaccel, python3.7, r-cran-bindrcpp, r-cran-dotcall64, r-cran-glue, r-cran-gtable, r-cran-pkgconfig, r-cran-rlang, r-cran-spatstat.utils, resolvconf-admin, retro-gtk, ring-ssl-clojure, robot-detection, rpy2-2.8, ruby-hocon, sass-stylesheets-compass, selinux-dbus, selinux-python, statsmodels, webkit2-sharp & weston. I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files: comptext, comptext, ldc & python-oslo.concurrency.

24 September 2017

Julian Andres Klode: APT 1.5 is out

APT 1.5 is out, almost 3 months after the release of 1.5 alpha 1, and almost six months since the release of 1.4 on April 1st. This release cycle was unusually short, as 1.4 was both the stretch release series and the zesty release series, and we waited for the latter of these releases before we started 1.5. In related news, 1.4.8 hit stretch-proposed-updates today, and is waiting in the unapproved queue for zesty. This release series moves https support from apt-transport-https into apt proper, bringing with it support for https:// proxies, and support for auto-detect proxy scripts that return http, https, and socks5h proxies for both http and https. Unattended updates and upgrades now work better: the dependency on network-online was removed and we introduced a meta wait-online helper with support for NetworkManager, systemd-networkd, and connman that allows us to wait for network even if we want to run updates directly after a resume (which might or might not have worked before, depending on whether the update ran before or after the network was back up again). This also fixes a boot performance regression for systems with rc.local files: the rc.local.service unit specified After=network-online.target, login stuff was After=rc.local.service, and apt-daily.timer was Wants=network-online.target, causing network-online.target to be pulled into the boot and the rc.local.service ordering dependency to take effect, significantly slowing down the boot. An earlier, less intrusive variant of that fix is in 1.4.8: it just moves the network-online.target Want/After from apt-daily.timer to apt-daily.service, so most boots are uncoupled now. I hope we get the full solution into stretch in a later point release, but we should gather some experience first before discussing this with the release team. Balint Reczey also provided a patch to increase the timeout before killing the daily upgrade service to 15 minutes, to actually give unattended-upgrades some time to finish an in-progress update. Honestly, I'd have thought the machine had hung and force-rebooted it after 5 seconds already. (This patch is also in 1.4.8.) We also made sure that unreadable config files no longer cause an error, but only a warning, as that was sort of a regression from previous releases; and we added documentation for /etc/apt/auth.conf, so people actually know the preferred way to place sensitive data like passwords (and can make their sources.list files world-readable again). We also fixed apt-cdrom to support discs without MD5 hashes for Sources (the Files field), and re-enabled support for udev-based detection of cdrom devices, which had been accidentally broken for 4 years: it was trying to load libudev.so.0 at runtime, but that library had an SONAME change to libudev.so.1; we now link against it normally. Furthermore, if certain information in Release files changes, like the codename, apt will now request confirmation from the user, avoiding a scenario where a user has stable in their sources.list and accidentally upgrades to the next release when it becomes stable. Paul Wise contributed patches to allow configuring the apt-daily intervals more easily: apt-daily is invoked twice a day by systemd but has more fine-grained internal timestamp files. You can now specify the intervals in seconds, minutes, hours, and day units, or specify always to always run (that is, up to twice a day on systemd, once per day on non-systemd platforms).
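To make the configuration side of this concrete, a hedged apt.conf sketch; the proxy URL is made up, and the interval values simply illustrate the unit suffixes described above:

    # https support in apt proper, including https:// proxies
    Acquire::https::Proxy "https://proxy.example.com:3128/";

    # apt-daily intervals with unit suffixes (or "always")
    APT::Periodic::Update-Package-Lists "12h";
    APT::Periodic::Unattended-Upgrade "1d";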
Development of the 1.6 series has started, and I intend to upload a first alpha to unstable in about a week, removing the apt-transport-https package and enabling compressed index files by default (saving space, a lot of space, at not much performance cost thanks to lz4). There will also be some small cleanups in there, but I don't expect any life-changing changes for now. I think our new approach of uploading development releases directly to unstable instead of parking them in experimental is working out well. Some people are confused why alpha releases appear in unstable, but let me just say one thing: these labels basically just indicate feature-completeness, not stability. An alpha is just very likely to get a lot more features, a beta is less likely (all the big stuff is in), and the release candidates just fix bugs. Also, we now have 3 active stable series: the 1.2 LTS series, 1.4 medium LTS, and 1.5. 1.2 receives updates as part of Ubuntu 16.04 (xenial), 1.4 as part of Debian 9.0 (stretch) and Ubuntu 17.04 (zesty); whereas 1.5 will only be supported for 9 months (as part of Ubuntu 17.10). I think the stable release series are working well, although 1.4 is a bit tricky, being shared by stretch and zesty right now (but zesty is history soon, so…).
Filed under: Debian, Ubuntu

17 August 2017

Julian Andres Klode: Why TUF does not shine (for APT repositories)

In DebConf17 there was a talk about The Update Framework, TUF for short. TUF claims to be a plug-in solution to software updates, but while it has the same practical level of security as apt, it also has the same shortcomings, including no way to effectively revoke keys. TUF divides signing responsibilities into roles: a root role, a targets role (signing stuff to download), a snapshots role (signing metadata), and a timestamp role (signing a timestamp file). There is also a mirror role for signing a list of mirrors, but we can ignore that for now. It strongly recommends that all keys except for timestamp and mirrors are kept offline, which is not applicable for APT repositories: Ubuntu updates the repository every 30 minutes; imagine doing that with offline keys. An insane proposal. In APT repositories, we effectively only have a snapshots role: the only thing we sign are Release files, and trust is then chained down by hashes (Release files carry hashes of Packages index files, and those carry hashes of individual packages). The keys used to sign repositories are online keys; after all, the metadata files change every 30 minutes (Ubuntu) or 6 hours (Debian), so it's impossible to sign them by hand. The timestamp role is replaced by a field in the Release file specifying until when the Release file is considered valid (see the excerpt below). Let's check the attacks TUF protects against: as we can see, APT addresses all attacks TUF addresses. But both do not handle key revocation. So, if a key & mirror get compromised (or just the key, and the mirror is MITMed), we cannot inform the user that the key has been compromised and block updates from the compromised repository. I just wrote up a proposal to allow APT to query for revoked keys from a different host, with a key revocation list (KRL) file that is signed by keys different from the repository's. This would solve the problem of key revocation easily: even if the repository host is MITMed or compromised, we can still revoke the keys signing the repository from a different location.
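For context, the validity field looks like this in a Release file; an illustrative excerpt, not copied from a real archive:

    Origin: Debian
    Suite: stable
    Date: Sat, 12 Aug 2017 10:02:58 UTC
    Valid-Until: Sat, 19 Aug 2017 10:02:58 UTC

apt refuses metadata past its Valid-Until date, which bounds how long a frozen or replayed repository can be served to clients.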
Filed under: Debian, Ubuntu

14 February 2017

Julian Andres Klode: jak-linux.org moved / backing up

In the past two days, I moved my main web site jak-linux.org (and jak-software.de) from a very old contract at STRATO over to something else: the domains are registered with INWX and the hosting is handled by uberspace.de. Encryption is provided by Let's Encrypt. I requested the domain transfer from STRATO on Monday at 16:23, received the auth codes at 20:10, and the .de domain was transferred completely by 20:36 (about 20 minutes if you count my overhead). The .org domain I had to ACK, which I did at 20:46, and at 03:00 I received the notification that the transfer was successful (I think there was some registrar ACKing involved there). So the whole transfer took about 10 1/2 hours, or 7 hours since I retrieved the auth code. I think that's quite a good time! And, for those of you who don't know: uberspace is a shared hoster that basically just gives you an SSH shell account, directories for you to drop files in for the http server, and various tools to add subdomains, certificates, and virtual users to the mailserver. You can also run your own custom-built software and open ports in their firewall. That's quite cool. I'm considering migrating the blog away from WordPress at some point in the future - having a more integrated experience is a bit nicer than having my web presence split over two sites. I'm unsure whether I shouldn't add something like Cloudflare there - I don't want to overload the servers (but I only serve static pages, so how much load is this really going to get?). In other news: off-site backups. I also recently started doing off-site backups via borg to a server operated by the wonderful rsync.net; a sketch follows below. For those of you who do not know rsync.net: you basically get SSH to a server where you can upload your backups via common tools like rsync, scp, or you can go crazy and use git-annex, borg, attic; or you could even just plain zfs send your stuff there. The normal price is $0.08 per GB per month, but there is a special borg price of $0.03 (that price does not include snapshotting or support, really). You can also get a discounted normal account for $0.04 if you find the correct code on Hacker News, or other discounts for open source developers, students, etc. - you just have to send them an email. Finally, I must say that uberspace and rsync.net feel similar in spirit. Both heavily emphasise the command line, and don't really have any fancy click stuff. I like that.
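A minimal sketch of such a borg setup; user, host, and paths are placeholders rather than my actual configuration:

    export BORG_REPO='ssh://user@host.rsync.net/./backups/main'
    borg init --encryption=repokey        # one-time repository setup
    borg create --stats ::'{hostname}-{now}' /home /etc
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6

Some hosts want borg invoked through a specific remote binary; check the provider's docs for a --remote-path setting if the init step fails.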
Filed under: General

21 December 2016

Hideki Yamane: considering package delta


From Android Developers Blog: Saving Data: Reducing the size of App Updates by 65%

We should consider providing delta packages, especially for update packages from security.debian.org, IMO.

Update:
Yes, Bálint Réczey and others pointed out via email that there's debdelta.debian.net for this purpose. But for general usage, it should be integrated into the infrastructure, without any extra manual setup. Probably apt (as Julian Andres Klode said in his talk at DebConf16) and also the infrastructure (dak?) would need to be modified to implement it.

Applying deltas to daily unstable/testing updates may be hard, but security update packages from security.debian.org and stable point releases are worth the effort at least, IMO.

Some RPM distros (Fedora and openSUSE) already provide delta packages, so we can do it too. Right? :-)
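For the curious, the manual workflow today is roughly this; a sketch, assuming the debdelta.debian.net service covers the sources in question:

    sudo apt install debdelta
    sudo debdelta-upgrade   # fetch deltas, rebuild full .debs into the apt cache
    sudo apt-get upgrade    # install the locally rebuilt packages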

25 November 2016

Julian Andres Klode: Starting the faster, more secure APT 1.4 series

We just released the first beta of APT 1.4 to Debian unstable (beta here means that we don't know of any other big stuff to add to it, but are still open to further extensions). This is the release series that will be released with Debian stretch, Ubuntu zesty, and possibly Ubuntu zesty+1 (if the Debian freeze takes a very long time, even zesty+2 is possible). It should reach the master archive in a few hours, and your mirrors shortly after that. Security changes APT 1.4 by default disables support for repositories signed with SHA1 keys. I announced back in January that it was my intention to do this during the summer for development releases, but I only remembered the Jan 1st deadline for stable releases supporting that (APT 1.2 and 1.3), so better late than never. Around January 1st, the same or a similar change will occur in the APT 1.2 and 1.3 series in Ubuntu 16.04 and 16.10 (subject to approval by Ubuntu's release team). This should mean that repository providers had about one year to fix their repositories, and more than 8 months since the release of 16.04. I believe that 8 months is a reasonable time frame to upgrade a repository signing key, and hope that providers who have not updated their repositories yet will do so as soon as possible. Performance work APT 1.4 provides a 10-20% performance increase in cache generation (and according to callgrind, we went from approx 6.8 billion to 5.3 billion instructions for my laptop's configuration, a reduction of more than 21%). The major improvements are: We switched the parsing of Deb822 files (such as Packages files) to my perfect hash function TrieHash. TrieHash - which generates C code from a set of words - is about equally fast as or twice as fast as the previously used hash function (and two to three times faster than gperf), and we save an additional 50% of that time as we only have to hash once during parsing now, instead of during lookup as well. APT 1.4 marks the first time TrieHash is used in any software. I hope that it will spread to dpkg and other software at a later point in time. Another important change was to drop normalization of Description-MD5 values, the fields mapping a description in a Packages file to a translated description. We used to parse the hex digits into a native binary stream, and then converted it back to hex digits for comparisons, which cost us about 5% of the run-time performance. We also optimized one of our hash functions - the VersionHash, which hashes the important fields of a package to recognize packages with the same version but different content - to not normalize data to a temporary buffer anymore. This buffer had been the subject of some bugs (overflow, incompleteness) in the recent past, and also caused some slowdown due to the additional writes to the stack. Instead, we now pass the bytes we are interested in directly to our CRC code, one byte at a time. There were also some other micro-optimisations: for example, the hash tables in the cache used to be ordered by standard compare (alphabetical, followed by shortest). They are now ordered by size first, meaning we can avoid data comparisons for strings of different lengths. We also got rid of a std::string that cannot use short string optimisation in a hot path of the code. Finally, we converted our case-insensitive djb hashes to not use a normal tolower_ascii(), but introduced tolower_ascii_unsafe(), which just sets the lowercase bit (| 0x20) in the character; a quick illustration follows below. Others For a more complete overview of all changes, consult the changelog.
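The lowercase-bit trick, and why the new function is unsafe for non-letters, is easy to see with shell arithmetic:

    printf '%x\n' $(( 0x41 | 0x20 ))   # 'A' -> 61, i.e. 'a'
    printf '%x\n' $(( 0x5B | 0x20 ))   # '[' -> 7b, i.e. '{' - hence "unsafe"

That is presumably fine for hashing known-ASCII field names, where the occasional extra collision is harmless.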
Filed under: Debian, Ubuntu
