Search Results: "tar"

20 October 2021

Russell Coker: Strange Apache Reload Issue

I recently had to renew the SSL certificate for my web server, nothing exciting about that, but Certbot created a new directory for the key because I had removed some domains (moved to a different web server). This normally isn't a big deal: change the Apache configuration to the new file names and run the reload command. My monitoring system initially said that the SSL certificate wasn't going to expire in the near future, so it looked fine. Then an hour later my monitoring system told me that the certificate was about to expire; apparently the old certificate came back! I viewed my site with my web browser and the new certificate was being used, which seemed strange. Then I did more tests with gnutls-cli which revealed that exactly half the connections got the new certificate and half got the old one. Because my web server isn't doing anything particularly demanding, the mpm_event configuration only starts 2 servers, and even that may be excessive for what it does. So it seems that the Apache reload command had reloaded the configuration on one mpm_event server but not the other! Fortunately this was something that was easy to test and was something that was automatically tested. If the change that didn't get accepted had been something small, it would be a particularly insidious bug. I haven't yet tried to reproduce this, but if I get the time I'll do so and file a bug report.
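For anyone wanting to reproduce this kind of check, repeatedly sampling which certificate a server hands out can be scripted with gnutls-cli; a minimal sketch, assuming a hypothetical host name:
for i in $(seq 1 10); do
    echo | gnutls-cli -p 443 www.example.com 2>&1 | grep -o 'expires[^,]*'
done
With two mpm_event processes serving connections in turn, the expiry dates printed would alternate between the old and the new certificate.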

15 October 2021

Sven Hoexter: ThinkPad P15v Gen1, Xorg and a Samsung QHD Display

Wasted quite some hours until I found a working Modeline in this Stack Exchange post, so the ThinkPad works with an HDMI-attached Samsung QHD display. The internal display of the ThinkPad is an FHD display detected as eDP-1; the external one is DP-3 and, according to the packaging, known by Samsung as S24A600NWU. The auto-detected EDID modes for QHD - 2560x1440 - did not work at all, the display simply stays dark. After a lot of back and forth with the i915 driver vs nouveau vs nvidia/nvidia-drm, with and without modesetting, the following Modeline did the magic:
xrandr --newmode "2560x1440_54.97" 221.00 2560 2608 2640 2720 1440 1443 1447 1478 +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary
Modelines for 50Hz and 60Hz generated with cvt 2560 1440 60 did not work, neither did the one extracted with edid-decode -X from the hex blob found in .local/share/xorg/Xorg.0.log. Of the auto-detected Modelines, FHD - 1920x1080 - did work. In case someone struggles with a similar setup, that might be a starting point. Fun part: if I attach my several-years-old Dell E7470, everything is just fine out of the box. But that one has just an Intel GPU and not the unholy combination I have here:
$ lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation CometLake-H GT2 [UHD Graphics] (rev 05)
01:00.0 3D controller: NVIDIA Corporation GP107GLM [Quadro P620] (rev ff)
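To avoid redoing the mode setup by hand after each login, the three xrandr calls can go into a small startup script (a sketch; ~/.xprofile is one common hook depending on the display manager, and --newmode will complain if the mode already exists):
#!/bin/sh
# re-create and select the working mode for the Samsung QHD display
xrandr --newmode "2560x1440_54.97" 221.00 2560 2608 2640 2720 1440 1443 1447 1478 +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary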

Adnan Hodzic: Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

12 October 2021

Antonio Terceiro: Triaging Debian build failure logs with collab-qa-tools

The Ruby team is now working on transitioning to ruby 3.0. Even though most packages will work just fine, there is a substantial number of packages that require some work to adapt. We have been doing test rebuilds for a while during transitions, but usually triaged the problems manually. This time I decided to try collab-qa-tools, a set of scripts Lucas Nussbaum uses when he does archive-wide rebuilds. I'm really glad that I did, because those tools save a lot of time when processing a large number of build failures. In this post, I will go through how to triage a set of build logs using collab-qa-tools. I have sent some improvements to the code. Given that my last merge request is very new and has not been merged yet, a few of the things I mention here may apply only to my own ruby3.0 branch. collab-qa-tools also contains a few tools to perform the builds in the cloud, but since we already had the builds done, I will not be mentioning that part and will write exclusively about the triaging tools. Installing collab-qa-tools The first step is to clone the git repository. Make sure you have the dependencies from debian/control installed (a few Ruby libraries). One of the patches I sent, which was already accepted, adds the ability to run the tools without the need to install:
source /path/to/collab-qa-tools/activate.sh
This will add the tools to your $PATH. Preparation The first thing you need to do is get all your build logs into a directory. The tools assume a .log file extension, and the files can be named ${PACKAGE}_*.log or just ${PACKAGE}.log. Creating a TODO file
cqa-scanlogs | grep -v OK > todo
todo will contain one line for each log with a summary of the failure, if it's able to find one. collab-qa-tools has a large set of regular expressions for finding errors in the build logs. It's a good idea to split the TODO file into multiple ones. This can easily be done with split(1), and can be used to delimit triaging sessions, and/or to split the triaging between multiple people. For example, this will split todo into todo00, todo01, ..., each containing 30 lines:
split --lines=30 --numeric-suffixes todo todo
Triaging You can now do the triaging. Let's say we split the TODO files, and will start with todo01. The first step is calling cqa-fetchbugs (it does what it says on the tin):
cqa-fetchbugs --TODO=todo01
Then, cqa-annotate will guide you through the logs and allow you to report bugs:
cqa-annotate --TODO=todo01
I wrote myself a process.sh wrapper script for cqa-fetchbugs and cqa-annotate that looks like this:
#!/bin/sh
set -eu
for todo in "$@"; do
  # force downloading bugs
  awk '{ print(".bugs." $1) }' "$todo" | xargs rm -f
  cqa-fetchbugs --TODO="$todo"
  cqa-annotate \
    --template=template.txt.jinja2 \
    --TODO="$todo"
done
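A triaging session over one of the chunks created earlier then looks like this (or several chunks at once, since the script loops over its arguments):
./process.sh todo01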
The --template option is a recent contribution of mine. This is a template for the bug reports you will be sending. It uses Liquid templates, which are very similar to Jinja2 for Python. You will notice that I am even pretending it is Jinja2 to trick vim into doing syntax highlighting for me. The template I'm using looks like this:
From: {{ fullname }} <{{ email }}>
To: submit@bugs.debian.org
Subject: {{ package }}: FTBFS with ruby3.0: {{ summary }}
Source: {{ package }}
Version: {{ version | split:'+rebuild' | first }}
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: debian-ruby@lists.debian.org
Usertags: ruby3.0
Hi,
We are about to enable building against ruby3.0 on unstable. During a test
rebuild, {{ package }} was found to fail to build in that situation.
To reproduce this locally, you need to install ruby-all-dev from experimental
on an unstable system or build chroot.
Relevant part (hopefully):
{% for line in extract %}> {{ line }}
{% endfor %}
The full build log is available at
https://people.debian.org/~kanashiro/ruby3.0/round2/builds/3/{{ package }}/{{ filename | replace:".log",".build.txt" }}
The cqa-annotate loop cqa-annotate will parse each log file, display an extract of what it found as possibly being the relevant part, and wait for your input:
######## ruby-cocaine_0.5.8-1.1+rebuild1633376733_amd64.log ########
--------- Error:
     Failure/Error: undef_method :exitstatus
     FrozenError:
       can't modify frozen object: pid 2351759 exit 0
     # ./spec/support/unsetting_exitstatus.rb:4:in `undef_method'
     # ./spec/support/unsetting_exitstatus.rb:4:in `singleton class'
     # ./spec/support/unsetting_exitstatus.rb:3:in `assuming_no_processes_have_been_run'
     # ./spec/cocaine/errors_spec.rb:55:in `block (2 levels) in <top (required)>'
Deprecation Warnings:
Using `should` from rspec-expectations' old `:should` syntax without explicitly enabling the syntax is deprecated. Use the new `:expect` syntax or explicitly enable `:should` with `config.expect_with(:rspec) { |c| c.syntax = :should }` instead. Called from /<<PKGBUILDDIR>>/spec/cocaine/command_line/runners/backticks_runner_spec.rb:19:in `block (2 levels) in <top (required)>'.
If you need more of the backtrace for any of these deprecations to
identify where to make the necessary changes, you can configure
`config.raise_errors_for_deprecations!`, and it will turn the
deprecation warnings into errors, giving you the full backtrace.
1 deprecation warning total
Finished in 6.87 seconds (files took 2.68 seconds to load)
67 examples, 1 failure
Failed examples:
rspec ./spec/cocaine/errors_spec.rb:54 # When an error happens does not blow up if running the command errored before execution
/usr/bin/ruby3.0 -I/usr/share/rubygems-integration/all/gems/rspec-support-3.9.3/lib:/usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/lib /usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/exe/rspec --pattern ./spec/\*\*/\*_spec.rb --format documentation failed
ERROR: Test "ruby3.0" failed:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s i r f]:
You can then choose one of the options. When there are existing bugs in the package, cqa-annotate will list them among the options. If you choose a bug number, the TODO file will be annotated with that bug number and new runs of cqa-annotate will not ask about that package anymore. For example, after I reported a bug for ruby-cocaine for the issue listed above, I aborted with Ctrl-C, and when I ran my process.sh script again I got this prompt:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
1: 996206 serious ruby-cocaine: FTBFS with ruby3.0: ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus  
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s i 1 r f]:
Choosing 1 will annotate the TODO file with the bug number, and I'm done with this package. Only a few hundred more to go.

10 October 2021

Ben Hutchings: Debian LTS work, September 2021

In August I was assigned 12.75 hours of work by Freexian's Debian LTS initiative and carried over 18 hours from earlier months. I worked 2 hours and will carry over the remainder. I started work on an update to the linux package, but did not make an upload yet.

9 October 2021

Thorsten Alteholz: My Debian Activities in September 2021

FTP master This month I accepted 224 and rejected 47 packages. This is almost three times the rejects of last month. Please be more careful and check your package twice before uploading. The overall number of packages that got accepted was 233. Debian LTS This was my eighty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 24.75h. During that time I did LTS and normal security uploads of: I also started to work on exiv2 and faad2. Last but not least I did some days of frontdesk duties. Debian ELTS This month was the thirty-ninth ELTS month. Unfortunately during my allocated time I could not process any upload. I worked on openssl, curl and squashfs-tools, but for one reason or another the prepared packages didn't pass all tests. In order to avoid regressions, I postponed the uploads (meanwhile an ELA for curl was published). Last but not least I did some days of frontdesk duties. Other stuff On my neverending golang challenge I again uploaded some packages either for NEW or as source uploads. As Odyx took a break from all Debian activities, I volunteered to take care of the printing packages. Please be merciful when something breaks after I did an upload. My first printing upload was hplip.

7 October 2021

Kentaro Hayashi: Sharing mentoring a new Debian contributor experience, lots of fun

I recently mentored a new Debian contributor. This was carried out within the OSS Gate on-boarding framework. oss-gate.github.io In OSS Gate on-boarding, a new contributor who wants to keep contributing is recruited, and a corporation sponsors one of its employees as a mentor, so that the employee can do the mentoring as part of their job. During the August-October period, I worked with a new Debian contributor for two hours a week. This experience was a lot of fun, and I learned new things as well. The most important point is that the new Debian contributor aims to continue the work even though the mentoring period has finished. Some of the work has been completed, but not all of it; I tried to transfer the knowledge needed for the rest, and I look forward to him moving things forward with the help of others. Here is the report about my activity as a mentor: First OSS Gate onboarding (the article is written in Japanese). I can't afford to translate the original blog entry, so I'll just paste a link to Google Translate as a hint. I hope someone can make a similar attempt too! For the record, I worked with the new Debian contributor on:

6 October 2021

Reproducible Builds: Reproducible Builds in September 2021

The goal behind reproducible builds is to ensure that no deliberate flaws have been introduced during compilation processes, by promising or mandating that identical results are always generated from a given source. This allows multiple third parties to come to an agreement on whether a build was compromised via a system of distributed consensus. In these reports we outline the most important things that have been happening in the world of reproducible builds in the past month:
First mentioned in our March 2021 report, Martin Heinz published two blog posts on sigstore, a project that endeavours to offer software signing as a public good, "[the] software-signing equivalent to Let's Encrypt". The first post, entitled Sigstore: A Solution to Software Supply Chain Security, outlines more about the project and justifies its existence:
Software signing is not a new problem, so there must be some solution already, right? Yes, but signing software and maintaining keys is very difficult especially for non-security folks and UX of existing tools such as PGP leave much to be desired. That's why we need something like sigstore - an easy to use software/toolset for signing software artifacts.
The second post (titled Signing Software The Easy Way with Sigstore and Cosign) goes into some technical details of getting started.
There was an interesting thread in the /r/Signal subreddit that started from the observation that Signal's apk doesn't match with the source code:
Some time ago I checked Signal's reproducibility and it failed. I asked others to test in case I did something wrong, but nobody made any reports. Since then I tried to test the Google Play Store version of the apk against one I compiled myself, and that doesn't match either.

BitcoinBinary.org was announced this month, which aims to be a repository of "Reproducible Build Proofs for Bitcoin Projects":
Most users are not capable of building from source code themselves, but we can at least get them able enough to check signatures and shasums. When reputable people can tell everyone they were able to reproduce the project's build, others at least have a secondary source of validation.

Distribution work Frédéric Pierret announced a new testing service at beta.tests.reproducible-builds.org, showing actual rebuilds of binaries distributed by both the Debian and Qubes distributions. In Debian specifically, 51 reviews of Debian packages were added to, 31 updated in, and 31 removed from our database of classified issues this month. As part of this, Chris Lamb refreshed a number of notes, including the build_path_in_record_file_generated_by_pybuild_flit_plugin issue. Elsewhere in Debian, Roland Clobus posted his Fourth status update about reproducible live-build ISO images in Jenkins to our mailing list, which mentions (amongst other things) that:
  • All major configurations are still built regularly using live-build and bullseye.
  • All major configurations are reproducible now; Jenkins is green.
    • I've worked around the issue for the Cinnamon image.
    • The patch was accepted and released within a few hours.
  • My main focus for the last month was on the live-build tool itself.
Related to this, there was continuing discussion on how to embed/encode the build metadata for the Debian live images which were being worked on by Roland Clobus.
Ariadne Conill published another detailed blog post related to various security initiatives within the Alpine Linux distribution. After summarising some conventional security work being done (e.g. with sudo and the release of OpenSSL version 3.0), Ariadne included another section on reproducible builds: "The main blocker [was] determining what to do about storing the build metadata so that a build environment can be recreated precisely". Finally, Bernhard M. Wiedemann posted his monthly reproducible builds status report.

Community news On our website this month, Bernhard M. Wiedemann fixed some broken links [ ] and Holger Levsen made a number of changes to the Who is Involved? page [ ][ ][ ]. On our mailing list, Magnus Ihse Bursie started a thread with the subject Reproducible builds on Java, which begins as follows:
I'm working for Oracle in the Build Group for OpenJDK which is primary responsible for creating a built artifact of the OpenJDK source code. [ ] For the last few years, we have worked on a low-effort, background-style project to make the build of OpenJDK itself building reproducible. We've come far, but there are still issues I'd like to address. [ ]

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 183, 184 and 185 as well as performed significant triaging of merge requests and other issues in addition to making the following changes:
  • New features:
    • Support a newer format version of the R language's .rds files. [ ]
    • Update tests for OCaml 4.12. [ ]
    • Add a missing format_class import. [ ]
  • Bug fixes:
    • Don't call close_archive when garbage collecting Archive instances, unless open_archive definitely returned successfully. This prevents, for example, an AttributeError where PGPContainer's cleanup routines were rightfully assuming that its temporary directory had actually been created. [ ]
    • Fix (and test) the comparison of R language's .rdb files after refactoring temporary directory handling. [ ]
    • Ensure that RPM archives exists in the Debian package description, regardless of whether python3-rpm is installed or not at build time. [ ]
  • Codebase improvements:
    • Use our assert_diff routine in tests/comparators/test_rdata.py. [ ]
    • Move diffoscope.versions to diffoscope.tests.utils.versions. [ ]
    • Reformat a number of modules with Black. [ ][ ]
However, the following changes were also made:
  • Mattia Rizzolo:
    • Fix an autopkgtest caused by the androguard module not being in the (expected) python3-androguard Debian package. [ ]
    • Appease a shellcheck warning in debian/tests/control.sh. [ ]
    • Ignore a warning from h5py in our tests that doesn't concern us. [ ]
    • Drop a trailing .1 from the Standards-Version field as it's not required. [ ]
  • Zbigniew Jędrzejewski-Szmek:
    • Stop using the deprecated distutils.spawn.find_executable utility. [ ][ ][ ][ ][ ]
    • Adjust an LLVM-related test for LLVM version 13. [ ]
    • Update invocations of llvm-objdump. [ ]
    • Adjust a test with a one-byte text file for file version 5.40. [ ]
And, finally, Benjamin Peterson added a --diff-context option to control unified diff context size [ ] and Jean-Romain Garnier fixed the Macho comparator for architectures other than x86-64 [ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework The Reproducible Builds project runs a testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Drop my package rebuilder prototype as it's not useful anymore. [ ]
    • Schedule old packages in Debian bookworm. [ ]
    • Stop scheduling packages for Debian buster. [ ][ ]
    • Don't include PostgreSQL debug output in package lists. [ ]
    • Detect Python library mismatches during build in the node health check. [ ]
    • Update a note on updating the FreeBSD system. [ ]
  • Mattia Rizzolo:
    • Silence a warning from Git. [ ]
    • Update a setting to reflect that Debian bookworm is the new testing. [ ]
    • Upgrade the PostgreSQL database to version 13. [ ]
  • Roland Clobus (Debian live image generation):
    • Workaround non-reproducible config files in the libxml-sax-perl package. [ ]
    • Use the new DNS for the snapshot service. [ ]
  • Vagrant Cascadian:
    • Note that the armhf architecture also systematically varies by the kernel. [ ]

Contributing If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Dirk Eddelbuettel: littler 0.3.14: Updates

[max-heap image] The fifteenth release of littler as a CRAN package just landed, following in the now fifteen-year history (!!) of a package started by Jeff in 2006 and joined by me a few weeks later. littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only started to do in recent years. littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended, see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette. This release updates the helper scripts that download nightlies of RStudio Server and Desktop to their new naming scheme, adds a downloader for Quarto, extends the roxy.r wrapper with a new option, and updates the configure settings as requested by CRAN and more. See the NEWS file entry below for more.

Changes in littler version 0.3.14 (2021-10-05)
  • Changes in examples
    • Updated RStudio download helper to changed file names
    • Added a new option to roxy.r wrapper
    • Added a downloader for Quarto command-line tool
  • Changes in package
    • The configure files were updated to the standard of autoconf version 2.69 following a CRAN request

My CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and now also on the new package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

4 October 2021

Paul Wise: FLOSS Activities September 2021

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian BTS: reopened bugs closed by a spammer
  • Debian wiki: unblock IP addresses, approve accounts

Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors The purple-discord/harmony/pyemd/librecaptcha/esprima-python work was sponsored by my employer. All other work was done on a volunteer basis.

3 October 2021

Junichi Uekawa: Using podman for most of my local development environment.

Using podman for most of my local development environment. For my personal/upstream development I started using podman instead of lxc, pbuilder, and other tooling. Most projects provide reasonable docker images (such as rust) and I am happier keeping my environment as a whole stable while I iterate. I have a Dockerfile for the development environment like this:
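For flavour, a one-off build inside such an upstream image might look like this (a sketch only, not the Dockerfile mentioned above; the image name and mount paths are examples):
podman run --rm -it -v "$PWD":/src -w /src docker.io/library/rust:latest cargo build
Because podman runs rootless by default, the container can be thrown away while the source tree on the host keeps the build artifacts.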

Louis-Philippe Véronneau: ANC is not for me

Active noise cancellation (ANC) has been all the rage lately in the headphones and in-ear monitors market. It seems after Apple got heavily praised for their AirPods Pro, every somewhat serious electronics manufacturer released their own design incorporating this technology. The first headphones with ANC I remember trying on (in the early 2010s) were the Bose QuietComfort 15. Although the concept did work (they indeed cancelled some sounds), they weren't amazing and did a great job of convincing me ANC was some weird fad for people who flew often. [Image: The Sony WH-1000X M3 folded in their case] As the years passed, chip size decreased, battery capacity improved and machine learning blossomed: truly a perfect storm for the wireless ANC headphones market. I had mostly stayed a sceptic of this tech until recently, when a kind friend offered to let me try a pair of Sony WH-1000X M3. Having tested them thoroughly, I have to say I'm really tempted to buy them from him, as they truly are fantastic headphones1. They are very light, comfortable, work without a proprietary app and sound very good with the ANC on2, if a little bass-heavy for my taste3. The ANC itself is truly astounding and is leaps and bounds beyond what was available five years ago. It still isn't perfect and doesn't cancel ALL sounds, but it transforms the low hum of the subway I find myself sitting in too often these days into a light *swoosh*. When you turn the ANC on, HVAC simply disappears. Most impressive to me is the way they completely cancel the dreaded sound of your footsteps resonating in your headphones when you walk with them. [Image: My old pair of Sennheiser HD 280 Pro, with aftermarket sheepskin earpads] I won't be keeping them though. Whilst I really like what Sony has achieved here, I've grown to understand ANC simply isn't for me. Some of the drawbacks of ANC somewhat bother me: the ear pressure it creates is tolerable, but is an additional energy drain over long periods of time and eventually gives me headaches. I've also found ANC accentuates the motion sickness I suffer from, probably because it messes with some part of the inner ear balance system. Most of all, I found that it didn't provide noticeable improvements over good passive noise cancellation solutions, at least in terms of how high I have to turn the volume up to hear music or podcasts clearly. The human brain works in mysterious ways and it seems ANC cancelling a class of noises (low hums, constant noises, etc.) makes other noises so much more noticeable. People talking or bursty high-pitched noises bothered me much more with ANC on than without. So for now, I'll keep using my trusty Sennheiser HD 280 Pro4 at work and good in-ear monitors with Comply foam tips on the go.

  1. This blog post certainly doesn't aim to be a comprehensive review of these headphones. See Zeos' review if you want something more in-depth.
  2. Like most ANC headphones, they don't sound as good when used passively through the 3.5mm port, but that's just a testament to how good a job Sony did of tuning the DSP.
  3. Easily fixed using an EQ.
  4. Retrofitted with aftermarket sheepskin earpads, they provide more than 32 dB of passive noise reduction.

2 October 2021

Jacob Adams: SSH Port Forwarding and the Command Cargo Cult

Someone is Wrong on the Internet If you look up how to only forward ports with ssh, you may come across solutions like this:
ssh -nNT -L 8000:example.com:80 user@bastion.example.com
Or perhaps this, if you also wanted to send ssh to the background:
ssh -NT -L 3306:db.example.com:3306 example.com &
Both of these use at least one option that is entirely redundant, and the second can cause ssh to fail to connect if you happen to be using password authentication. However, they seem to still persist in various articles about ssh port forwarding. I myself was using the first variation until just recently, and I figured I would write this up to inform others who might be still using these solutions. The correct option for this situation is not -nNT but simply -N, as in:
ssh -N -L 8000:example.com:80 user@bastion.example.com
If you want to also send ssh to the background, then you'll want to add -f instead of using your shell's built-in & feature, because you can then input passwords into ssh if necessary1. Honestly, that's the point of this article, so you can stop here if you want. If you're looking for a detailed explanation of what each of these options actually does, or if you have no idea what I'm talking about, read on!

What is SSH Port Forwarding? ssh is a powerful tool for remote access to servers, allowing you to execute commands on a remote machine. It can also forward ports through a secure tunnel with the -L and -R options. Basically, you can forward a connection to a local port to a remote server like so:
ssh -L 8080:other.example.com:80 ssh.example.com
In this example, you connect to ssh.example.com and then ssh forwards any traffic on your local machine port 80802 to other.example.com port 80 via ssh.example.com. This is a really powerful feature, allowing you to jump3 inside your firewall with just an ssh server exposed to the world. It can work in reverse as well with the -R option, allowing connections on a remote host in to a server running on your local machine. For example, say you were running a website on your local machine on port 8080 but wanted it accessible on example.com port 804. You could use something like:
ssh -R 8080:example.com:80 example.com
The trouble with ssh port forwarding is that, absent any additional options, you also open a shell on the remote machine. If you're planning to both work on a remote machine and use it to forward some connection, this is fine, but if you just need to forward a port quickly and don't care about a shell at that moment, it can be annoying, especially since if the shell closes ssh will close the forwarding port as well. This is where the -N option comes in.

SSH just forwarding ports In the ssh manual page5, -N is explained like so:
Do not execute a remote command. This is useful for just forwarding ports.
This is all we need. It instructs ssh to run no commands on the remote server, just forward the ports specified in the -L or -R options. But people seem to think that there are a bunch of other necessary options, so what do those do?

SSH and stdin -n controls how ssh interacts with standard input, specifically telling it not to:
Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)

SSH passwords and backgrounding -f sends ssh to the background, freeing up the terminal in which you ran ssh for other things.
Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.
As indicated in the description of -n, this does the same thing as using the shell's & feature with -n, but allows you to put in any necessary passwords first.
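For example, the backgrounded equivalent of the earlier forward-only command is:
ssh -f -N -L 8000:example.com:80 user@bastion.example.com
ssh will prompt for any password or passphrase first, then detach, leaving only the port forwarding running.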

SSH and pseudo-terminals -T is a little more complicated than the others and has a very short explanation:
Disable pseudo-terminal allocation.
It has a counterpart in -t, which is explained a little better:
Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
As the description of -t indicates, ssh is allocating a pseudo-terminal on the remote machine, not the local one. However, I have confirmed6 that -N doesn't allocate a pseudo-terminal either, since it doesn't run any commands. Thus this option is entirely unnecessary.

What's a pseudo-terminal? This is a bit complicated, but basically it's an interface used in UNIX-like systems, like Linux or BSD, that pretends to be a terminal (thus pseudo-terminal). Programs like your shell, or any text-based menu system made in libraries like ncurses, expect to be connected to one (when used interactively at least). Basically it fakes as if the input it is given (over the network, in the case of ssh) was typed on a physical terminal device, and does things like raise an interrupt (SIGINT) if Ctrl+C is pressed.

Why? I don't know why these incorrect uses of ssh got passed around as correct, but I suspect it's a form of cargo cult, where we use example commands others provide and don't question what they do. One Stack Overflow answer I read that provided these options seemed to think -T was disabling the local pseudo-terminal, which might go some way towards explaining why they thought it was necessary. I guess the moral of this story is to question everything and actually read the manual, instead of just googling it.
  1. Not that you SHOULD be using ssh with password authentication anyway, but people do.
  2. Only on your loopback address by default, so that you're not allowing random people on your network to use your tunnel.
  3. In fact, ssh even supports Jump Hosts, allowing you to automatically forward an ssh connection through another machine.
  4. I can't say I recommend a setup like this for anything serious, as you'd need to ssh as root to forward ports less than 1024. SSH forwarding is not for permanent solutions, just short-lived connections to machines that would be otherwise inaccessible.
  5. Specifically, my source is the ssh(1) manual page in OpenSSH 8.4, shipped as 1:8.4p1-5 in Debian bullseye.
  6. I just forwarded ports with -N and then logged in to that same machine and looked at pseudo-terminal allocations via ps ux. No terminal is associated with ssh connections using just the -N option.
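That check looks something like this on the remote machine (a sketch; the bracketed character just stops grep from matching itself):
ps ux | grep '[s]shd'
Connections made with just -N show up as sshd: user@notty, with ? in the TTY column, confirming that no pseudo-terminal was allocated.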

1 October 2021

Russell Coker: Getting Started With Kali

Kali is a Debian-based distribution aimed at penetration testing. I haven't felt a need to use it in the past because Debian has packages for all the scanning tools I regularly use, and all the rest are free software that can be obtained separately. But I recently decided to try it. Here's the URL to get Kali [1]. For a VM you can get VMWare or VirtualBox images; I chose VMWare as it's the most popular image format and also a much smaller download (2.7G vs 4G). For unknown reasons the torrent for it didn't work (might be a problem with my torrent client). The download link for it was extremely slow in Australia, so I downloaded it to a system in Germany and then copied it from there. I don't want to use either VMWare or VirtualBox because I find KVM/Qemu sufficient to do everything I want and they are in the Main section of Debian, so I needed to convert the image files. Some of the documentation on converting image formats to use with QEMU/KVM says to use a program called kvm-img which doesn't seem to exist; I used qemu-img from the qemu-utils package in Debian/Bullseye. The man page qemu-img(1) doesn't list the types of output format supported by the -O option, and the examples returned by a web search show using "-O qcow2". It turns out that the following command will convert the image to raw format, which is the format I prefer. I use BTRFS for storing all my VM images and that does all the copy-on-write I need.
qemu-img convert Kali-Linux-2021.3-vmware-amd64.vmdk ../kali
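As an aside, the target format can also be named explicitly with -O, which avoids relying on the default (a sketch with a placeholder output name; qemu-img --help lists the supported format names that the man page omits):
qemu-img convert -O raw Kali-Linux-2021.3-vmware-amd64.vmdk kali.img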
After converting it the file was 500M smaller than the VMWare files (10.2 vs 10.7G). Probably the Kali distribution file could be reduced in size by converting it to raw and then back to VMWare format. The Kali VMWare image is compressed with 7zip, which has a good compression ratio; I waited almost 90 minutes for zstd to compress it with -19 and the result was 12% larger than the 7zip file. VMWare apparently likes to use an emulated SCSI controller, and I spent some time trying to get that going in KVM. Apparently recent versions of QEMU changed the way this works and therefore older web pages aren't helpful. Also allegedly the SCSI emulation is buggy and unreliable (but I didn't manage to get it going so can't be sure). It turns out that the VM is configured to work with the virtio interface; the initramfs.conf has the configuration option MODULES=most which makes it boot on all common configurations (good work by the initramfs-tools maintainers). The image works well with the Spice display interface, so it doesn't capture my mouse: the window for the VM works the same way as other windows on my desktop and doesn't capture the mouse cursor. I don't know if this level of Spice integration is in Debian now; last time I tested it didn't work that way. I also downloaded Metasploitable [2] which is a VM image designed to be full of security flaws for testing the tools that are in Kali. Again it worked nicely after converting from VMWare to raw format. One thing to note about Metasploitable is that you must not make it available on the public Internet. My home network has NAT for IPv4 but all systems get public IPv6 addresses. It's usually nice that those things just work on VMs, but not for this. So I added an iptables command to block IPv6 to /etc/rc.local (a sketch follows below). Conclusion Installing VMs for both these distributions was quite easy. Most of my time was spent downloading from a slow server, trying to get SCSI emulation working, working out how to convert image files, and testing different compression options. The time spent doing stuff once I knew what to do was very small. Kali has zsh as the default shell; it's quite nice. I've been happy with bash for decades, but I might end up trying zsh out on other machines.
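The IPv6 block mentioned above might look something like this (a sketch; ip6tables is the IPv6 counterpart of iptables, and the address is a documentation-prefix placeholder for the VM's public address):
ip6tables -I FORWARD -d 2001:db8::42/128 -j DROP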

29 September 2021

Ingo Juergensmann: LetsEncrypt CA Chain Issues with Ejabberd

UPDATE:
It's not as simple as described below, I'm afraid. It appears that it's not that easy to obtain new/correct certs from LetsEncrypt that are not cross-signed by the DST Root X3 CA. Additionally, older OpenSSL versions (1.0.x) seem to have problems. So even when you think that your system is now ok, the remote server might refuse to accept your SSL cert. The same is valid for the SSL check on xmpp.net, which seems to be very outdated and beyond repair. Honestly, I think the solution needs to be provided by LetsEncrypt.
I was having some strange issues on my ejabberd XMPP server the other day: some users complained that they couldn't connect anymore to the MUC rooms on my server and in the logfiles I discovered some weird warnings about LetsEncrypt certificates being expired although they were just new and valid until end of December. It looks like this:
[warning] <0.368.0>@ejabberd_pkix:log_warnings/1:393 Invalid certificate in /etc/letsencrypt.sh/certs/buildd.net/fullchain.pem: at line 37: certificate is no longer valid as its expiration date has passed
and
[warning] <0.18328.2>@ejabberd_s2s_out:process_closed/2:157 Failed to establish outbound s2s connection nerdica.net -> forum.friendi.ca: Stream closed by peer: Your server's certificate is invalid, expired, or not trusted by forum.friendi.ca (not-authorized); bouncing for 237 seconds
When checking out with some online tools like SSLlabs or XMPP.net the result was strange, because SSLlabs reported everything was ok while XMPP.net was showing the chain with X3 and D3 certs as having a short term validity of a few days:
After some days of fiddling around with the issue, trying to find a solution, it appears that there is a problem in Ejabberd when it finds some old SSL certificates that are using the old CA chain. Ejabberd has a really nice feature where you can just configure an SSL cert directory (or a path containing wildcards). Ejabberd then reads all of the SSL certs and compares them to the list of configured domains to see which it will need and which not. What helped (for me at least) was deleting all expired SSL certs from my directory, downloading the current CA file PEMs from LetsEncrypt (see their blog post from September 2020), running update-ca-certificates, and doing ejabberdctl restart (instead of just ejabberdctl reload-config); a shell sketch of these steps follows below. UPDATE: be sure to use dpkg-reconfigure ca-certificates to uncheck the DST Root X3 cert (and others if necessary) before renewing the certs or running update-ca-certificates. Otherwise the update will bring in the expired cert again. Currently I see at least two other XMPP domains in my server logs having certificate issues, and in some MUCs there are reports of other domains as well. Disclaimer: Again: this helped me in my case. I don't know if this is a bug in Ejabberd, nor whether this procedure will help in your case or is the proper solution. But maybe my story will help you solve your issue if you have experienced SSL cert issues in the last few days, especially now that the R3 cert has already expired and the X3 cert is following in a few hours.
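For reference, the steps above as a shell sketch (the certificate directory is the one from my logs; which files are expired will differ, so the rm target is a placeholder name):
dpkg-reconfigure ca-certificates   # uncheck the expired DST Root X3 cert first
rm /etc/letsencrypt.sh/certs/buildd.net/old-expired-cert.pem   # placeholder
update-ca-certificates
ejabberdctl restart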

Ian Jackson: Rust for the Polyglot Programmer

Rust is definitely in the news. I'm definitely on the bandwagon. (To me it feels like I've been wanting something like Rust for many years.) There're a huge number of intro tutorials, and of course there's the Rust Book. A friend observed to me, though, that while there's a lot of "write your first simple Rust program" there's a dearth of material aimed at the programmer who already knows a dozen diverse languages, and is familiar with computer architecture, basic type theory, and so on. Or indeed, for the impatient and confident reader more generally. I thought I would have a go. Rust for the Polyglot Programmer is the result. Compared to much other information about Rust, Rust for the Polyglot Programmer is: After reading Rust for the Polyglot Programmer, you won't know everything you need to know to use Rust for any project, but should know where to find it. Thanks are due to Simon Tatham, Mark Wooding, Daniel Silverstone, and others, for encouragement, and helpful reviews including important corrections. Particular thanks to Mark Wooding for wrestling pandoc and LaTeX into producing a pretty good-looking PDF. Remaining errors are, of course, mine. Comments are welcome of course, via the Dreamwidth comments or Salsa issue or MR. (If you're making a contribution, please indicate your agreement with the Developer Certificate of Origin.)
edited 2021-09-29 16:58 UTC to fix Salsa link target, and 17:01 and 17:21 for minor grammar fixes



27 September 2021

Russ Allbery: Review: The Problem with Work

Review: The Problem with Work, by Kathi Weeks
Publisher: Duke University Press
Copyright: 2011
ISBN: 0-8223-5112-9
Format: Kindle
Pages: 304
One of the assumptions baked deeply into US society (and many others) is that people are largely defined by the work they do, and that work is the primary focus of life. Even in Marxist analysis, which is otherwise critical of how work is economically organized, work itself reigns supreme. This has been part of the feminist critique of both capitalism and Marxism, namely that both devalue domestic labor that has traditionally been unpaid, but even that criticism is normally framed as expanding the definition of work to include more of human activity. A few exceptions aside, we shy away from fundamentally rethinking the centrality of work to human experience. The Problem with Work begins as a critical analysis of that centrality of work and a history of some less-well-known movements against it. But, more valuably for me, it becomes a discussion of the types and merits of utopian thinking, including why convincing other people is not the only purpose for making a political demand. The largest problem with this book will be obvious early on: the writing style ranges from unnecessarily complex to nearly unreadable. Here's an excerpt from the first chapter:
The lack of interest in representing the daily grind of work routines in various forms of popular culture is perhaps understandable, as is the tendency among cultural critics to focus on the animation and meaningfulness of commodities rather than the eclipse of laboring activity that Marx identifies as the source of their fetishization (Marx 1976, 164-65). The preference for a level of abstraction that tends not to register either the qualitative dimensions or the hierarchical relations of work can also account for its relative neglect in the field of mainstream economics. But the lack of attention to the lived experiences and political textures of work within political theory would seem to be another matter. Indeed, political theorists tend to be more interested in our lives as citizens and noncitizens, legal subjects and bearers of rights, consumers and spectators, religious devotees and family members, than in our daily lives as workers.
This is only a quarter of a paragraph, and the entire book is written like this. I don't mind the occasional use of longer words for their precise meanings ("qualitative," "hierarchical") and can tolerate the academic habit of inserting mostly unnecessary citations. I have less patience with the meandering and complex sentences, excessive hedge words ("perhaps," "seem to be," "tend to be"), unnecessarily indirect phrasing ("can also account for" instead of "explains"), or obscure terms that are unnecessary to the sentence (what is "animation of commodities"?). And please have mercy and throw a reader some paragraph breaks. The writing style means substantial unnecessary effort for the reader, which is why it took me six months to read this book. It stalled all of my non-work non-fiction reading and I'm not sure it was worth the effort. That's unfortunate, because there were several important ideas in here that were new to me. The first was the overview of the "wages for housework" movement, which I had not previously heard of. It started from the common feminist position that traditional "women's work" is undervalued and advocated taking the next logical step of giving it equality with paid work by making it paid work. This was not successful, obviously, although the increasing prevalence of day care and cleaning services has made it partly true within certain economic classes in an odd and more capitalist way. While I, like Weeks, am dubious this was the right remedy, the observation that household work is essential to support capitalist activity but is unmeasured by GDP and often uncompensated both economically and socially has only become more accurate since the 1970s. Weeks argues that the usefulness of this movement should not be judged by its lack of success in achieving its demands, which leads to the second interesting point: the role of utopian demands in reframing and expanding a discussion. I normally judge a political demand on its effectiveness at convincing others to grant that demand, by which standard many activist campaigns (such as wages for housework) are unsuccessful. Weeks points out that making a utopian demand changes the way the person making the demand perceives the world, and this can have value even if the demand will never be granted. For example, to demand wages for housework requires rethinking how work is defined, what activities are compensated by the economic system, how such wages would be paid, and the implications for domestic social structures, among other things. That, in turn, helps in questioning assumptions and understanding more about how existing society sustains itself. Similarly, even if a utopian demand is never granted by society at large, forcing it to be rebutted can produce the same movement in thinking in others. In order to rebut a demand, one has to take it seriously and mount a defense of the premises that would allow one to rebut it. That can open a path to discussing and questioning those premises, which can have long-term persuasive power apart from the specific utopian demand. It's a similar concept as the Overton Window, but with more nuance: the idea isn't solely to move the perceived range of accepted discussion, but to force society to examine its assumptions and premises well enough to defend them, or possibly discover they're harder to defend than one might have thought. Weeks applies this principle to universal basic income, as a utopian demand that questions the premise that work should be central to personal identity. 
I kept thinking of the Black Lives Matter movement and the demand to abolish the police, which (at least in popular discussion) is a more recent example than this book but follows many of the same principles. The demand itself is unlikely to be met, but to rebut it requires defending the existence and nature of the police. That in turn leads to questions about the effectiveness of policing, such as clearance rates (which are far lower than one might have assumed). Many more examples came to mind. I've had that experience of discovering problems with my assumptions I'd never considered when debating others, but had not previously linked it with the merits of making demands that may be politically infeasible. The book closes with an interesting discussion of the types of utopias, starting from the closed utopia in the style of Thomas More in which the author sets up an ideal society. Weeks points out that this sort of utopia tends to collapse with the first impossibility or inconsistency the reader notices. The next step is utopias that acknowledge their own limitations and problems, which are more engaging (she cites Le Guin's The Dispossessed). More conditional than that is the utopian manifesto, which only addresses part of society. The least comprehensive and the most open is the utopian demand, such as wages for housework or universal basic income, which asks for a specific piece of utopia while intentionally leaving unspecified the rest of the society that could achieve it. The demand leaves room to maneuver; one can discuss possible improvements to society that would approach that utopian goal without committing to a single approach. I wish this book were better-written and easier to read, since as it stands I can't recommend it. There were large sections that I read but didn't have the mental energy to fully decipher or retain, such as the extended discussion of Ernst Bloch and Friedrich Nietzsche in the context of utopias. But that way of thinking about utopian demands and their merits for both the people making them and for those rebutting them, even if they're not politically feasible, will stick with me. Rating: 5 out of 10

23 September 2021

Dirk Eddelbuettel: prrd 0.0.5: Incremental Mode

prrd facilitates the parallel running [of] reverse dependency [checks] when preparing R packages. It is used extensively for Rcpp, RcppArmadillo, RcppEigen, BH, and others. [prrd screenshot image] The key idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development that is easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is no or little interdependency between them (besides maybe shared build dependencies). See the (dated) screenshot (running six parallel workers, arranged in a split byobu session). This release brings some new features I used of late when testing and re-testing reverse dependencies for Rcpp. Enqueuing jobs can now consider the most recent prior job queue file. This allows us to find new packages that were not part of the previous runs. We added a second toggle to also add those packages that failed in the previous run. Finally, the dequeue interface allows specifying a date (rather than defaulting to the current date), which is useful for long-running jobs or restarts. The release is summarised in the NEWS entry:

Changes in prrd version 0.0.5 (2021-09-22)
  • Some remaining http URLs were changed to https.
  • The dequeueJobs script has a new argument date to help specify a queue file.
  • The enqueueJobs script can now compute just a delta of (new) packages relative to a given prior queuefile and run.
  • When running in delta mode, previously failed packages can also be selected.

My CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 September 2021

Ian Jackson: Tricky compatibility issue - Rust's io::ErrorKind

This post is about some changes recently made to Rust's ErrorKind, which aims to categorise OS errors in a portable way. Audiences for this post Background and context Error handling principles Handling different errors differently is often important (although, sadly, often neglected). For example, if a program tries to read its default configuration file, and gets a "file not found" error, it can proceed with its default configuration, knowing that the user hasn't provided a specific config. If it gets some other error, it should probably complain and quit, printing the message from the error (and the filename). Otherwise, if the network fileserver is down (say), the program might erroneously run with the default configuration and do something entirely wrong. Rust's portability aims The Rust programming language tries to make it straightforward to write portable code. Portable error handling is always a bit tricky. One of Rust's facilities in this area is std::io::ErrorKind which is an enum which tries to categorise (and, sometimes, enumerate) OS errors. The idea is that a program can check the error kind, and handle the error accordingly. That these ErrorKinds are part of the Rust standard library means that to get this right, you don't need to delve down and get the actual underlying operating system error number, and write separate code for each platform you want to support. You can check whether the error is ErrorKind::NotFound (or whatever). Because ErrorKind is so important in many Rust APIs, some code which isn't really doing an OS call can still have to provide an ErrorKind. For this purpose, Rust provides a special category ErrorKind::Other, which doesn't correspond to any particular OS error. Rust's stability aims and approach Another thing Rust tries to do is keep existing code working. More specifically, Rust tries to:
  1. Avoid making changes which would contradict the previously-published documentation of Rust's language and features.
  2. Tell you if you accidentally rely on properties which are not part of the published documentation.
By and large, this has been very successful. It means that if you write code now, and it compiles and runs cleanly, it is quite likely that it will continue work properly in the future, even as the language and ecosystem evolves. This blog post is about a case where Rust failed to do (2), above, and, sadly, it turned out that several people had accidentally relied on something the Rust project definitely intended to change. Furthermore, it was something which needed to change. And the new (corrected) way of using the API is not so obvious. Rust enums, as relevant to io::ErrorKind (Very briefly:) When you have a value which is an io::ErrorKind, you can compare it with specific values:
    if error.kind() == ErrorKind::NotFound { ... }
But in Rust it's more usual to write something like this (which you can read like a switch statement):
    match error.kind() {
        ErrorKind::NotFound => use_default_configuration(),
        _ => panic!("could not read config file {}: {}", &file, &error),
    }
Here _ means "anything else". Rust insists that match statements are exhaustive, meaning that each one covers all the possibilities. So if you left out the line with the _, it wouldn't compile. Rust enums can also be marked non_exhaustive, which is a declaration by the API designer that they plan to add more kinds. This has been done for ErrorKind, so the _ is mandatory, even if you write out all the possibilities that exist right now: this ensures that if new ErrorKinds appear, they won't stop your code compiling.

Improving the error categorisation

The set of error categories stabilised in Rust 1.0 was too small. It missed many important kinds of error. This makes writing error-handling code awkward. In any case, we expect to add new error categories occasionally. I set about trying to improve this by proposing new ErrorKinds. This obviously needed considerable community review, which is why it took about 9 months.

The trouble with Other and tests

Rust has to assign an ErrorKind to every OS error, even ones it doesn't really know about. Until recently, it mapped all errors it didn't understand to ErrorKind::Other - reusing the category for "not an OS error at all". Serious people who write serious code like to have serious tests. In particular, testing error conditions is really important. For example, you might want to test your program's handling of disk full, to make sure it didn't crash, or corrupt files. You would set up some contraption that would simulate a full disk. And then, in your tests, you might check that the error was correct. But until very recently (still now, in Stable Rust), there was no ErrorKind::StorageFull. You would get ErrorKind::Other. If you were diligent you would dig out the OS error code (and check for ENOSPC on Unix, corresponding Windows errors, etc.). But that's tiresome. The more obvious thing to do is to check that the kind is Other. Obvious but wrong. ErrorKind is non_exhaustive, implying that more error kinds will appear, and, naturally, these would more finely categorise previously-Other OS errors. Unfortunately, the documentation note
Errors that are Other now may move to a different or a new ErrorKind variant in the future.
was only added in May 2020. So the wrongness of the "obvious" approach was, itself, not very obvious. And even with that docs note, there was no compiler warning or anything. The unfortunate result is that there is a body of code out there in the world which might break any time an error that was previously Other becomes properly categorised. Furthermore, there was nothing stopping new people writing new obvious-but-wrong code.

Chosen solution: Uncategorized

The Rust developers wanted an engineered safeguard against the bug of assuming that a particular error shows up as Other. They chose the following solution:
  • There is now a new ErrorKind::Uncategorized, which is now used for all OS errors for which there isn't a more specific categorisation. The fallback translation of unknown errors was changed from Other to Uncategorized. This is de jure justified by the fact that this enum has always been marked non_exhaustive. But in practice, because this bug wasn't previously detected, there is such code in the wild. That code now breaks (usually, in the form of failing test cases). Usually when Rust starts to detect a particular programming error, it is reported as a new warning, which doesn't break anything. But that's not possible here, because this is a behavioural change.
  • The new ErrorKind::Uncategorized is marked unstable. This makes it impossible to write code on Stable Rust which insists that an error comes out as Uncategorized. So, one cannot now write code that will break when new ErrorKinds are added. That's the intended effect.
The downside is that this does break old code, and, worse, it is not as clear as it should be what the fixed code looks like.

Alternatives considered and rejected by the Rust developers

Not adding more ErrorKinds: this was not tenable. The existing set is already too small, and error categorisation is in any case expected to improve over time.

Just adding ErrorKinds as had been done before: this would mean occasionally breaking test cases (or, possibly, production code) when an error that was previously Other becomes categorised. The broken code would have been "obvious", but de jure wrong, just as it is now. So this option amounts to expecting this broken code to continue to be written and continuing to break it occasionally.

Somehow using Rust's Edition system: the Rust language has a system to allow language evolution, where code declares its Edition (2015, 2018, 2021). Code from multiple editions can be combined, so that the ecosystem can upgrade gradually. It's not clear how this could be used for ErrorKind, though. Errors have to be passed between code with different editions. If those different editions had different categorisations, the resulting programs would have incoherent and broken error handling. Also, some of the schemes for making this change would mean that new ErrorKinds could only be stabilised about once every 3 years, which is far too slow.

How to fix code broken by this change

Most main-line error handling code already has a fallback case for unknown errors. Simply replacing any occurrence of Other with _ is right.

How to fix thorough tests

The tricky problem is tests. Typically, a thorough test case wants to check that the error is "precisely as expected" (as far as the test can tell). Now that unknown errors come out as an unstable Uncategorized variant, that's not so easy. If the test is expecting an error that is currently not categorised, you want to write code that says "if the error is any of the recognised kinds, call it a test failure".
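For concreteness, a minimal sketch of what such test support machinery might look like (the helper name and the exact frozen list here are illustrative, not from any particular codebase; use whatever set of kinds was stable when your tests were written):

    use std::io::ErrorKind;

    // For main-line code the fix is simply a `_` fallback arm rather
    // than matching ErrorKind::Other. For tests, the obvious but wrong
    // check was:
    //   assert_eq!(error.kind(), ErrorKind::Other);
    // Note that insisting on ErrorKind::Uncategorized instead is not
    // possible on Stable Rust: the variant is unstable, by design.

    // Frozen list of the ErrorKinds recognised when this test support
    // code was written. Deliberately NOT kept in step with the stdlib;
    // update it consciously, then fix any resulting test failures.
    const KNOWN_KINDS: &[ErrorKind] = &[
        ErrorKind::NotFound,
        ErrorKind::PermissionDenied,
        ErrorKind::ConnectionRefused,
        ErrorKind::ConnectionReset,
        ErrorKind::ConnectionAborted,
        ErrorKind::NotConnected,
        ErrorKind::AddrInUse,
        ErrorKind::AddrNotAvailable,
        ErrorKind::BrokenPipe,
        ErrorKind::AlreadyExists,
        ErrorKind::WouldBlock,
        ErrorKind::InvalidInput,
        ErrorKind::InvalidData,
        ErrorKind::TimedOut,
        ErrorKind::WriteZero,
        ErrorKind::Interrupted,
        ErrorKind::UnexpectedEof,
    ];

    // Test helper: fail if the error has become one of the recognised
    // kinds, i.e. if the stdlib has started categorising it.
    fn assert_not_yet_categorised(kind: ErrorKind) {
        assert!(!KNOWN_KINDS.contains(&kind),
                "error is now categorised as {:?}; update KNOWN_KINDS", kind);
    }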
What does "any of the recognised kinds" mean here? It doesn't mean any of the kinds recognised by the version of the Rust stdlib that is actually in use. That set might get bigger. When the test is compiled and run later, perhaps years later, the error in this test case might indeed be categorised. What you actually mean is "the error must not be any of the kinds which existed when the test was written". IMO therefore the right solution for such a test case is to cut and paste the current list of stable ErrorKinds into your code. This will seem wrong at first glance, because the list in your code and in Rust can get out of step. But when they do get out of step you want your version, not the stdlib's. So freezing the list at a point in time is precisely right. You probably only want to maintain one copy of this list, so put it somewhere central in your codebase's test support machinery. Periodically, you can update the list deliberately - and fix any resulting test failures. Unfortunately this approach is not suggested by the documentation. In theory you could work all this out yourself from first principles, given even the situation prior to May 2020, but it seems unlikely that many people have done so. In particular, cutting and pasting the list of recognised errors would seem very unnatural.

Conclusions

This was not an easy problem to solve well. I think Rust has done a plausible job given the various constraints, and the result is technically good. It is a shame that this change to make the error handling stability more correct caused the most trouble for the most careful people who write the most thorough tests. I also think the docs could be improved.
edited shortly after posting, and again 2021-09-22 16:11 UTC, to fix HTML slips



21 September 2021

Russell Coker: Links September 2021

Matthew Garrett wrote an interesting and insightful blog post about the license of software developed or co-developed by machine-learning systems [1]. One of his main points is that people in the FOSS community should aim for less copyright protection.

The USENIX ATC 21/OSDI 21 Joint Keynote Address, titled "It's Time for Operating Systems to Rediscover Hardware", has some insightful points to make [2]. Timothy Roscoe makes some incendiary points but backs them up with evidence. Is Linux really an OS? I recommend that everyone who's interested in OS design watch this lecture.

Cory Doctorow wrote an interesting set of 6 articles about Disneyland, ride pricing, and crowd control [3]. He proposes some interesting ideas for reforming Disneyland.

Benjamin Bratton wrote an insightful article about how philosophy failed in the pandemic [4]. He focuses on the Italian philosopher Giorgio Agamben, who has a history of writing stupid articles that match Qanon talking points but with better language skills.

Arstechnica has an interesting article about penetration testers extracting an encryption key from the bus used by the TPM on a laptop [5]. It's not a likely attack in the real world, as most networks can be broken more easily by other methods. But it's still interesting to learn about how the technology works.

The Portalist has an article about David Brin's Startide Rising series of novels and his thoughts on the concept of Uplift (which he denies inventing) [6].

Jacobin has an insightful article titled "You're Not Lazy But Your Boss Wants You to Think You Are" [7]. Making people identify as lazy is bad for them and bad for getting them to do work. But this is the first time I've seen it described as a facet of abusive capitalism.

Jacobin has an insightful article about free public transport [8]. Apparently there are already many regions that have free public transport (Tallinn, the capital of Estonia, being one example). Fare-free public transport allows bus drivers to concentrate on driving not taking fares, removes the need for ticket inspectors, and generally provides a better service. It allows passengers to board buses and trams faster, thus reducing traffic congestion, and encourages more people to use public transport instead of driving, which reduces road maintenance costs.

Interesting research from Israel about bypassing facial ID [9]. Apparently they can make a set of 9 images that can pass for over 40% of the population. I didn't expect facial recognition to be an effective form of authentication, but I didn't expect it to be that bad.

Edward Snowden wrote an insightful blog post about types of conspiracies [10].

Kevin Rudd wrote an informative article about Sky News in Australia [11]. We need to have a Royal Commission now before we have our own 6th Jan event.

Steve from Big Mess O' Wires wrote an informative blog post about USB-C and 4K 60Hz video [12]. Basically you can't have a single USB-C hub do 4K 60Hz video and be a USB 3.x hub unless you have compression software running on your PC (slow and only works on Windows), or have DisplayPort 1.4 or Thunderbolt (both not well supported). All of the options are not well documented on online store pages, so lots of people will get unpleasant surprises when their deliveries arrive. Computers suck.

Steinar H. Gunderson wrote an informative blog post about GaN technology for smaller power supplies [13]. A 65W USB-C PSU that fits the usual wall wart form factor is an interesting development.
