eDP-1, the external one is
DP-3 and, according to the packaging, known by Samsung as S24A600NWU. The auto-detected EDID modes for QHD (2560x1440) did not work at all; the display simply stays dark. After a lot of back and forth with the i915 driver vs nouveau vs nvidia/nvidia-drm, with and without modesetting, the following Modeline did the magic:
xrandr --newmode 2560x1440_54.97 221.00 2560 2608 2640 2720 1440 1443 1447 1478 +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary

Modelines for 50Hz and 60Hz generated with cvt 2560 1440 60 did not work, neither did the one extracted with
edid-decode -X from the hex blob found in
.local/share/xorg/Xorg.0.log. From the auto-detected Modelines, FHD (1920x1080) did work. In case someone struggles with a similar setup, that might be a starting point. Fun part: if I attach my several years old Dell E7470, everything is just fine out of the box. But that one just has an Intel GPU and not the unholy combination I have here:
$ lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation CometLake-H GT2 [UHD Graphics] (rev 05)
01:00.0 3D controller: NVIDIA Corporation GP107GLM [Quadro P620] (rev ff)
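To re-create the custom mode at every login, the xrandr commands above can live in a small startup script. A sketch only (the ~/.xprofile location is my assumption; use whatever your session runs at startup):

```shell
#!/bin/sh
# Sketch: re-create the custom 2560x1440 mode at session start,
# e.g. sourced from ~/.xprofile (an assumption, adjust to your setup).
# "|| true" keeps the script going if the mode already exists.
xrandr --newmode 2560x1440_54.97 221.00 2560 2608 2640 2720 1440 1443 1447 1478 +HSync -VSync || true
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary
```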
ruby3.0 branch. collab-qa-tools also contains a few tools to perform the builds in the cloud, but since we already had the builds done, I will not be mentioning that part and will write exclusively about the triaging tools. Installing collab-qa-tools The first step is to clone the git repository. Make sure you have the dependencies from
debian/control installed (a few Ruby libraries). One of the patches I sent, which was already accepted, is the ability to run it without the need to install:
This will add the tools to your $PATH. Preparation The first thing you need to do is to get all your build logs into a directory. The tools assume a
.log file extension, and they can be named
${PACKAGE}_*.log or just
${PACKAGE}.log. Creating a TODO file
cqa-scanlogs | grep -v OK > todo
todo will contain one line for each log with a summary of the failure, if it's able to find one. collab-qa-tools has a large set of regular expressions for finding errors in the build logs. It's a good idea to split the TODO file into multiple ones. This can easily be done with
split(1), and can be used to delimit triaging sessions, and/or to split the triaging between multiple people. For example, this will create
todo01, ..., each containing 30 lines:
split --lines=30 --numeric-suffixes todo todo

Triaging You can now do the triaging. Let's say we split the TODO files, and will start with
todo01. The first step is calling
cqa-fetchbugs (it does what it says on the tin):
cqa-annotate will guide you through the logs and allow you to report bugs:
I wrote myself a
process.sh wrapper script for
cqa-annotate that looks like this:
#!/bin/sh

set -eu

for todo in "$@"; do
  # force downloading bugs
  awk '{ print(".bugs." $1) }' "${todo}" | xargs rm -f
  cqa-fetchbugs --TODO="${todo}"
  cqa-annotate \
    --template=template.txt.jinja2 \
    --TODO="${todo}"
done
The --template option is a recent contribution of mine. This is a template for the bug reports you will be sending. It uses Liquid templates, which are very similar to Jinja2 for Python. You will notice that I am even pretending it is Jinja2 to trick vim into doing syntax highlighting for me. The template I'm using looks like this:
The cqa-annotate loop
From: {{ fullname }} <{{ email }}>
To: email@example.com
Subject: {{ package }}: FTBFS with ruby3.0: {{ summary }}

Source: {{ package }}
Version: {{ version | split:'+rebuild' | first }}
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: firstname.lastname@example.org
Usertags: ruby3.0

Hi,

We are about to enable building against ruby3.0 on unstable. During a test rebuild, {{ package }} was found to fail to build in that situation.

To reproduce this locally, you need to install ruby-all-dev from experimental on an unstable system or build chroot.

Relevant part (hopefully):
{% for line in extract %}
> {{ line }}
{% endfor %}

The full build log is available at
https://people.debian.org/~kanashiro/ruby3.0/round2/builds/3/{{ package }}/{{ filename | replace:".log",".build.txt" }}
cqa-annotate will parse each log file, display an extract of what it found as possibly being the relevant part, and wait for your input:
You can then choose one of the options:
######## ruby-cocaine_0.5.8-1.1+rebuild1633376733_amd64.log ######## --------- Error: Failure/Error: undef_method :exitstatus FrozenError: can't modify frozen object: pid 2351759 exit 0 # ./spec/support/unsetting_exitstatus.rb:4:in `undef_method' # ./spec/support/unsetting_exitstatus.rb:4:in `singleton class' # ./spec/support/unsetting_exitstatus.rb:3:in `assuming_no_processes_have_been_run' # ./spec/cocaine/errors_spec.rb:55:in `block (2 levels) in <top (required)>' Deprecation Warnings: Using `should` from rspec-expectations' old `:should` syntax without explicitly enabling the syntax is deprecated. Use the new `:expect` syntax or explicitly enable `:should` with `config.expect_with(:rspec) { |c| c.syntax = :should }` instead. Called from /<<PKGBUILDDIR>>/spec/cocaine/command_line/runners/backticks_runner_spec.rb:19:in `block (2 levels) in <top (required)>'. If you need more of the backtrace for any of these deprecations to identify where to make the necessary changes, you can configure `config.raise_errors_for_deprecations!`, and it will turn the deprecation warnings into errors, giving you the full backtrace.
1 deprecation warning total Finished in 6.87 seconds (files took 2.68 seconds to load) 67 examples, 1 failure Failed examples: rspec ./spec/cocaine/errors_spec.rb:54 # When an error happens does not blow up if running the command errored before execution /usr/bin/ruby3.0 -I/usr/share/rubygems-integration/all/gems/rspec-support-3.9.3/lib:/usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/lib /usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/exe/rspec --pattern ./spec/\*\*/\*_spec.rb --format documentation failed ERROR: Test "ruby3.0" failed: ---------------- ERROR: Test "ruby3.0" failed: Failure/Error: undef_method :exitstatus ---------------- package: ruby-cocaine lines: 30 ------------------------------------------------------------------------ s: skip i: ignore this package permanently r: report new bug f: view full log ------------------------------------------------------------------------ Action [s i r f]:
s - skip this package and do nothing. You can run
cqa-annotate again later and come back to it.
i - ignore this package completely. New runs of
cqa-annotate won't ask about it again. This is useful if the package only fails in your rebuilds due to another package, and would just work when that other package gets fixed. In the Ruby transition this happens when A depends on B, while B builds a C extension and failed to build against the new Ruby. So once B is fixed, A should just work (in principle). But even if A has problems of its own, we can't really know until B is fixed and we can retry A.
r - report a bug.
cqa-annotate will expand the template with the data from the current log, and feed it to mutt. This is currently a limitation: you have to use mutt to report bugs. After you report the bug,
cqa-annotate will ask if it should edit the TODO file. In my opinion it's best not to do this, and to annotate the package with a bug number when you have one (see below).
f - view the full log. This is useful when the extract displayed doesn't have enough info, or you want to inspect something that happened earlier (or later) during the build.
If there are existing bugs for the package,
cqa-annotate will list them among the options. If you choose a bug number, the TODO file will be annotated with that bug number and new runs of
cqa-annotate will not ask about that package anymore. For example, after I reported a bug for
ruby-cocaine for the issue listed above, I aborted with a
ctrl-c, and when I ran my
process.sh script again, I got this prompt:
---------------- ERROR: Test "ruby3.0" failed: Failure/Error: undef_method :exitstatus ---------------- package: ruby-cocaine lines: 30 ------------------------------------------------------------------------ s: skip i: ignore this package permanently 1: 996206 serious ruby-cocaine: FTBFS with ruby3.0: ERROR: Test "ruby3.0" failed: Failure/Error: undef_method :exitstatus r: report new bug f: view full log ------------------------------------------------------------------------ Action [s i 1 r f]:
Choosing 1 will annotate the TODO file with the bug number, and I'm done with this package. Only a few hundred others to go.
Software signing is not a new problem, so there must be some solution already, right? Yes, but signing software and maintaining keys is very difficult, especially for non-security folks, and the UX of existing tools such as PGP leaves much to be desired. That's why we need something like sigstore: an easy-to-use software/toolset for signing software artifacts. The second post (titled Signing Software The Easy Way with Sigstore and Cosign) goes into some technical details of getting started.
Some time ago I checked Signal's reproducibility and it failed. I asked others to test in case I did something wrong, but nobody made any reports. Since then I tried to test the Google Play Store version of the apk against one I compiled myself, and that doesn't match either.
Most users are not capable of building from source code themselves, but we can at least get them able enough to check signatures and shasums. When reputable people can tell everyone they were able to reproduce the project's build, others at least have a secondary source of validation.
Related to this, there was continuing discussion on how to embed/encode the build metadata for the Debian live images which were being worked on by Roland Clobus.
- All major configurations are still built regularly using live-build and bullseye.
- All major configurations are reproducible now; Jenkins is green.
- I've worked around the issue for the Cinnamon image.
- The patch was accepted and released within a few hours.
- My main focus for the last month was on the live-build tool itself.
- It will properly use the proxy for all HTTP traffic.
I'm working for Oracle in the Build Group for OpenJDK, which is primarily responsible for creating a built artifact of the OpenJDK source code. [ ] For the last few years, we have worked on a low-effort, background-style project to make the build of OpenJDK itself reproducible. We've come far, but there are still issues I'd like to address. [ ]
185 as well as performed significant triaging of merge requests and other issues, in addition to making the following changes:
close_archive when garbage collecting
open_archive definitely returned successfully. This prevents, for example, an
PGPContainer's cleanup routines were rightfully assuming that its temporary directory had actually been created. [ ]
.rdb files after refactoring temporary directory handling. [ ]
python3-rpm is installed or not at build time. [ ]
androguard module not being in the (expected)
python3-androguard Debian package. [ ]
debian/tests/control.sh. [ ]
h5py in our tests that doesn't concern us. [ ]
Standards-Version field as it's required. [ ]
--diff-context option to control unified diff context size [ ] and Jean-Romain Garnier fixed the Macho comparator for architectures other than
Rscript. It allows for piping as well as for shebang scripting via
#!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which
Rscript only started to do in recent years. littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your
PATH. A few examples are highlighted at the Github repo, as well as in the examples vignette. This release updates the helper scripts to download nightlies of RStudio Server and Desktop to their new naming scheme, adds a downloader for Quarto, extends the
roxy.r wrapper with a new option, and updates the
configure settings as requested by CRAN, and more. See the
NEWS file entry below for more.
My CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and now also on the new package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via
Changes in littler version 0.3.14 (2021-10-05)
- Changes in examples
- Updated RStudio download helper to changed file names
- Added a new option to the roxy.r wrapper
- Added a downloader for Quarto command-line tool
- Changes in package
configure files were updated to the standard of
autoconf version 2.69 following a CRAN request
install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.
ssh, you may come across solutions like this:
ssh -nNT -L 8000:example.com:80 email@example.com
Or, using the shell to send ssh to the background:
ssh -NT -L 3306:db.example.com:3306 example.com &
ssh to fail to connect if you happen to be using password authentication. However, they seem to still persist in various articles about
ssh port forwarding. I myself was using the first variation until just recently, and I figured I would write this up to inform others who might still be using these solutions. The correct option for this situation is just
-N, as in:
ssh -N -L 8000:example.com:80 firstname.lastname@example.org
If you want to send ssh to the background, then you'll want to add
-f instead of using your shell's built-in
& feature, because you can then input passwords into
ssh if necessary.¹ Honestly, that's the point of this article, so you can stop here if you want. If you're looking for a detailed explanation of what each of these options actually does, or if you have no idea what I'm talking about, read on!
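In other words, the two useful invocations boil down to the following (host and user names here are placeholders):

```shell
# Foreground tunnel, no remote shell: -N alone is enough.
ssh -N -L 8000:example.com:80 user@ssh.example.com

# Background it with -f rather than the shell's "&", so you can still
# type a password or passphrase before ssh detaches.
ssh -f -N -L 8000:example.com:80 user@ssh.example.com
```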
ssh is a powerful tool for remote access to servers, allowing you to execute commands on a remote machine. It can also forward ports through a secure tunnel with the
-L and -R options. Basically, you can forward connections to a local port to a remote server like so:
ssh -L 8080:other.example.com:80 ssh.example.com
ssh forwards any traffic on your local machine's port 8080² to
other.example.com port 80 via
ssh.example.com. This is a really powerful feature, allowing you to jump³ inside your firewall with just an
ssh server exposed to the world. It can work in reverse as well with the
-R option, allowing connections on a remote host in to a server running on your local machine. For example, say you were running a website on your local machine on port 8080 but wanted it accessible on
example.com port 80⁴. You could use something like:
ssh -R 80:localhost:8080 example.com
One annoyance with ssh port forwarding is that, absent any additional options, you also open a shell on the remote machine. If you're planning to both work on a remote machine and use it to forward some connection, this is fine, but if you just need to forward a port quickly and don't care about a shell at that moment, it can be annoying, especially since if the shell closes,
ssh will close the forwarded port as well. This is where the
-N option comes in.
-N is explained like so:
Do not execute a remote command. This is useful for just forwarding ports.

This is all we need. It instructs
ssh to run no commands on the remote server, just forward the ports specified in the
-L and -R options. But people seem to think that there are a bunch of other necessary options, so what do those do?
-n controls how ssh interacts with standard input, specifically telling it not to:
Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)
Combined with the shell's &, this sends ssh to the background, freeing up the terminal in which you ran
ssh to do other things.
Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.

As indicated in the description of
-n, this does the same thing as using the shell's
&, but allows you to put in any necessary passwords first.
-T is a little more complicated than the others and has a very short explanation:
Disable pseudo-terminal allocation.

It has a counterpart in
-t, which is explained a little better:
Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.

As the description of
-t implies, ssh is allocating a pseudo-terminal on the remote machine, not the local one. However, I have confirmed⁶ that
-N doesn't allocate a pseudo-terminal either, since it doesn't run any commands. Thus this option is entirely unnecessary.
ssh) was typed on a physical terminal device and do things like raise an interrupt (SIGINT) if Ctrl+C is pressed.
Somehow these options to ssh got passed around as correct, but I suspect it's a form of cargo cult, where we use example commands others provide and don't question what they do. One Stack Overflow answer I read that provided these options seemed to think
-T was disabling the local pseudo-terminal, which might go some way towards explaining why they thought it was necessary. I guess the moral of this story is to question everything and actually read the manual, instead of just googling it.
You probably shouldn't be using ssh with password authentication anyway, but people do.
ssh even supports Jump Hosts, allowing you to automatically forward an
ssh connection through another machine.
You would need to run ssh as root to forward ports less than 1024. SSH forwarding is not for permanent solutions, just short-lived connections to machines that would be otherwise inaccessible.
I ran ssh with -N and then logged in to that same machine and looked at pseudo-terminal allocations via
ps ux. No terminal is associated with
ssh connections using just the -N option.
qemu-img convert Kali-Linux-2021.3-vmware-amd64.vmdk ../kali

After converting it, the file was 500M smaller than the VMWare files (10.2 vs 10.7G). Probably the Kali distribution file could be reduced in size by converting it to raw and then back to VMWare format. The Kali VMWare image is compressed with 7zip, which has a good compression ratio; I waited almost 90 minutes for zstd to compress it with -19 and the result was 12% larger than the 7zip file.

VMWare apparently likes to use an emulated SCSI controller, and I spent some time trying to get that going in KVM. Apparently recent versions of QEMU changed the way this works and therefore older web pages aren't helpful. Also, allegedly the SCSI emulation is buggy and unreliable (but I didn't manage to get it going so can't be sure). It turns out that the VM is configured to work with the virtio interface; the initramfs.conf has the configuration option MODULES=most, which makes it boot on all common configurations (good work by the initramfs-tools maintainers).

The image works well with the Spice display interface: it doesn't capture my mouse, and the window for the VM works the same way as other windows on my desktop. I don't know if this level of Spice integration is in Debian now; last time I tested, it didn't work that way.

I also downloaded Metasploitable, which is a VM image designed to be full of security flaws for testing the tools that are in Kali. Again it worked nicely after converting from VMWare to raw format. One thing to note about Metasploitable is that you must not make it available on the public Internet. My home network has NAT for IPv4 but all systems get public IPv6 addresses. It's usually nice that those things just work on VMs, but not for this, so I added an iptables command to block IPv6 to /etc/rc.local.

Conclusion

Installing VMs for both these distributions was quite easy.
Most of my time was spent downloading from a slow server, trying to get SCSI emulation working, working out how to convert image files, and testing different compression options. The time spent doing stuff once I knew what to do was very small. Kali has zsh as the default shell; it's quite nice. I've been happy with bash for decades, but I might end up trying zsh out on other machines.
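For reference, the rc.local addition mentioned above could look something like this. A sketch only: the post doesn't give the actual rule, and the address is a placeholder from the IPv6 documentation range:

```shell
# /etc/rc.local fragment: keep the Metasploitable VM off the public
# Internet by dropping forwarded IPv6 traffic to it.
# 2001:db8::42 is a placeholder address; substitute the VM's real one.
ip6tables -A FORWARD -d 2001:db8::42/128 -j DROP
```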
[warning] <0.368.0>@ejabberd_pkix:log_warnings/1:393 Invalid certificate in /etc/letsencrypt.sh/certs/buildd.net/fullchain.pem: at line 37: certificate is no longer valid as its expiration date has passed
[warning] <0.18328.2>@ejabberd_s2s_out:process_closed/2:157 Failed to establish outbound s2s connection nerdica.net -> forum.friendi.ca: Stream closed by peer: Your server's certificate is invalid, expired, or not trusted by forum.friendi.ca (not-authorized); bouncing for 237 seconds

When checking with some online tools like SSLlabs or XMPP.net the result was strange, because SSLlabs reported everything was ok while XMPP.net was showing the chain with X3 and D3 certs as having a short-term validity of a few days. After some days of fiddling around with the issue, trying to find a solution, it appears that there is a problem in Ejabberd when some old SSL certificates using the old CA chain are found by Ejabberd. Ejabberd has a really nice feature where you can just configure a SSL cert directory (or a path containing wildcards). Ejabberd then reads all of the SSL certs and compares them against the list of configured domains to see which ones it will need and which not. What helped (for me at least) was to delete all expired SSL certs from my directory, download the current CA file pems from Let's Encrypt (see their blog post from September 2020), and run
ejabberdctl restart (instead of just
ejabberdctl reload-config).

UPDATE: be sure to use dpkg-reconfigure ca-certificates to uncheck the DST Root X3 cert (and others if necessary) before renewing the certs or running update-ca-certificates. Otherwise the update will bring in the expired cert again.

Currently I see at least two other XMPP domains in my server logs having certificate issues, and in some MUCs there are reports of other domains as well. Disclaimer: Again: this helped me in my case. I don't know if this is a bug in Ejabberd, nor if this procedure will help you in your case, nor if this is the proper solution. But maybe my story will help you solve your issue if you experience SSL cert issues in the last few days, especially now that the R3 cert has already expired and the X3 cert follows in a few hours.
Publisher: Duke University Press
The lack of interest in representing the daily grind of work routines in various forms of popular culture is perhaps understandable, as is the tendency among cultural critics to focus on the animation and meaningfulness of commodities rather than the eclipse of laboring activity that Marx identifies as the source of their fetishization (Marx 1976, 164-65). The preference for a level of abstraction that tends not to register either the qualitative dimensions or the hierarchical relations of work can also account for its relative neglect in the field of mainstream economics. But the lack of attention to the lived experiences and political textures of work within political theory would seem to be another matter. Indeed, political theorists tend to be more interested in our lives as citizens and noncitizens, legal subjects and bearers of rights, consumers and spectators, religious devotees and family members, than in our daily lives as workers.

This is only a quarter of a paragraph, and the entire book is written like this. I don't mind the occasional use of longer words for their precise meanings ("qualitative," "hierarchical") and can tolerate the academic habit of inserting mostly unnecessary citations. I have less patience with the meandering and complex sentences, excessive hedge words ("perhaps," "seem to be," "tend to be"), unnecessarily indirect phrasing ("can also account for" instead of "explains"), or obscure terms that are unnecessary to the sentence (what is "animation of commodities"?). And please have mercy and throw a reader some paragraph breaks. The writing style means substantial unnecessary effort for the reader, which is why it took me six months to read this book. It stalled all of my non-work non-fiction reading and I'm not sure it was worth the effort. That's unfortunate, because there were several important ideas in here that were new to me. The first was the overview of the "wages for housework" movement, which I had not previously heard of.
It started from the common feminist position that traditional "women's work" is undervalued and advocated taking the next logical step of giving it equality with paid work by making it paid work. This was not successful, obviously, although the increasing prevalence of day care and cleaning services has made it partly true within certain economic classes in an odd and more capitalist way. While I, like Weeks, am dubious this was the right remedy, the observation that household work is essential to support capitalist activity but is unmeasured by GDP and often uncompensated both economically and socially has only become more accurate since the 1970s. Weeks argues that the usefulness of this movement should not be judged by its lack of success in achieving its demands, which leads to the second interesting point: the role of utopian demands in reframing and expanding a discussion. I normally judge a political demand on its effectiveness at convincing others to grant that demand, by which standard many activist campaigns (such as wages for housework) are unsuccessful. Weeks points out that making a utopian demand changes the way the person making the demand perceives the world, and this can have value even if the demand will never be granted. For example, to demand wages for housework requires rethinking how work is defined, what activities are compensated by the economic system, how such wages would be paid, and the implications for domestic social structures, among other things. That, in turn, helps in questioning assumptions and understanding more about how existing society sustains itself. Similarly, even if a utopian demand is never granted by society at large, forcing it to be rebutted can produce the same movement in thinking in others. In order to rebut a demand, one has to take it seriously and mount a defense of the premises that would allow one to rebut it. 
That can open a path to discussing and questioning those premises, which can have long-term persuasive power apart from the specific utopian demand. It's a similar concept as the Overton Window, but with more nuance: the idea isn't solely to move the perceived range of accepted discussion, but to force society to examine its assumptions and premises well enough to defend them, or possibly discover they're harder to defend than one might have thought. Weeks applies this principle to universal basic income, as a utopian demand that questions the premise that work should be central to personal identity. I kept thinking of the Black Lives Matter movement and the demand to abolish the police, which (at least in popular discussion) is a more recent example than this book but follows many of the same principles. The demand itself is unlikely to be met, but to rebut it requires defending the existence and nature of the police. That in turn leads to questions about the effectiveness of policing, such as clearance rates (which are far lower than one might have assumed). Many more examples came to mind. I've had that experience of discovering problems with my assumptions I'd never considered when debating others, but had not previously linked it with the merits of making demands that may be politically infeasible. The book closes with an interesting discussion of the types of utopias, starting from the closed utopia in the style of Thomas More in which the author sets up an ideal society. Weeks points out that this sort of utopia tends to collapse with the first impossibility or inconsistency the reader notices. The next step is utopias that acknowledge their own limitations and problems, which are more engaging (she cites Le Guin's The Dispossessed). More conditional than that is the utopian manifesto, which only addresses part of society. 
The least comprehensive and the most open is the utopian demand, such as wages for housework or universal basic income, which asks for a specific piece of utopia while intentionally leaving unspecified the rest of the society that could achieve it. The demand leaves room to maneuver; one can discuss possible improvements to society that would approach that utopian goal without committing to a single approach. I wish this book were better-written and easier to read, since as it stands I can't recommend it. There were large sections that I read but didn't have the mental energy to fully decipher or retain, such as the extended discussion of Ernst Bloch and Friedrich Nietzsche in the context of utopias. But that way of thinking about utopian demands and their merits for both the people making them and for those rebutting them, even if they're not politically feasible, will stick with me. Rating: 5 out of 10
My CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.
Changes in prrd version 0.0.5 (2021-09-22)
- Some remaining http URLs were changed to https.
- The dequeueJobs script has a new argument date to help specify a queue file.
- enqueueJobs can now compute just a delta of (new) packages relative to a given prior queuefile and run.
- When running in delta mode, previously failed packages can also be selected.
ErrorKind, which aims to categorise OS errors in a portable way. Audiences for this post
ErrorKinds are part of the Rust standard library, which means that to get this right, you don't need to delve down and get the actual underlying operating system error number, and write separate code for each platform you want to support. You can check whether the error is
ErrorKind::NotFound (or whatever). Because
ErrorKind is so important in many Rust APIs, some code which isn't really doing an OS call can still have to provide an
ErrorKind. For this purpose, Rust provides a special category
ErrorKind::Other, which doesn't correspond to any particular OS error. Rust's stability aims and approach Another thing Rust tries to do is keep existing code working. More specifically, Rust tries to:
io::ErrorKind (Very briefly:) When you have a value which is an
io::ErrorKind, you can compare it with specific values:
if error.kind() == ErrorKind::NotFound { ... }

But in Rust it's more usual to write something like this (which you can read like a
match error.kind() {
    ErrorKind::NotFound => use_default_configuration(),
    _ => panic!("could not read config file {}: {}", &file, &error),
}

Here
_ means "anything else". Rust insists that
match statements are exhaustive, meaning that each one covers all the possibilities. So if you left out the line with the
_, it wouldn't compile. Rust enums can also be marked
non_exhaustive, which is a declaration by the API designer that they plan to add more kinds. This has been done for
ErrorKind, so the
_ is mandatory, even if you write out all the possibilities that exist right now: this ensures that if new
ErrorKinds appear, they won't stop your code compiling.

Improving the error categorisation

The set of error categories stabilised in Rust 1.0 was too small. It missed many important kinds of error. This makes writing error-handling code awkward. In any case, we expect to add new error categories occasionally. I set about trying to improve this by proposing new ErrorKinds. This obviously needed considerable community review, which is why it took about 9 months.

The trouble with Other and tests

Rust has to assign an ErrorKind to every OS error, even ones it doesn't really know about. Until recently, it mapped all errors it didn't understand to
ErrorKind::Other, reusing the category for "not an OS error at all". Serious people who write serious code like to have serious tests. In particular, testing error conditions is really important. For example, you might want to test your program's handling of disk full, to make sure it didn't crash, or corrupt files. You would set up some contraption that would simulate a full disk. And then, in your tests, you might check that the error was correct. But until very recently (still now, in Stable Rust), there was no ErrorKind::StorageFull. You would get ErrorKind::Other. If you were diligent you would dig out the OS error code (and check for ENOSPC on Unix, corresponding Windows errors, etc.). But that's tiresome. The more obvious thing to do is to check that the kind is Other. Obvious but wrong.
non_exhaustive, implying that more error kinds will appear, and, naturally, these would more finely categorise previously-Other OS errors. Unfortunately, the documentation note "Errors that are Other now may move to a different or a new ErrorKind variant in the future" was only added in May 2020. So the wrongness of the "obvious" approach was, itself, not very obvious. And even with that docs note, there was no compiler warning or anything. The unfortunate result is that there is a body of code out there in the world which might break any time an error that was previously Other becomes properly categorised. Furthermore, there was nothing stopping new people writing new obvious-but-wrong code.

Chosen solution: Uncategorized

The Rust developers wanted an engineered safeguard against the bug of assuming that a particular error shows up as
Other. They chose the following solution: There is now a new ErrorKind::Uncategorized, which is used for all OS errors for which there isn't a more specific categorisation. The fallback translation of unknown errors was changed from Other to Uncategorized. This is de jure justified by the fact that this enum has always been marked non_exhaustive. But in practice, because this bug wasn't previously detected, there is such code in the wild. That code now breaks (usually, in the form of failing test cases). Usually when Rust starts to detect a particular programming error, it is reported as a new warning, which doesn't break anything. But that's not possible here, because this is a behavioural change. The new Uncategorized variant is marked unstable. This makes it impossible to write code on Stable Rust which insists that an error comes out as Uncategorized. So, one cannot now write code that will break when new
ErrorKinds are added. That's the intended effect. The downside is that this does break old code, and, worse, it is not as clear as it should be what the fixed code looks like.

Alternatives considered and rejected by the Rust developers

Not adding more ErrorKinds

This was not tenable. The existing set is already too small, and error categorisation is in any case expected to improve over time.

Just adding ErrorKinds as had been done before

This would mean occasionally breaking test cases (or, possibly, production code) when an error that was previously Other becomes categorised. The broken code would have been "obvious", but de jure wrong, just as it is now. So this option amounts to expecting this broken code to continue to be written and continuing to break it occasionally.

Somehow using Rust's Edition system

The Rust language has a system to allow language evolution, where code declares its Edition (2015, 2018, 2021). Code from multiple editions can be combined, so that the ecosystem can upgrade gradually. It's not clear how this could be used for
ErrorKind, though. Errors have to be passed between code with different editions. If those different editions had different categorisations, the resulting programs would have incoherent and broken error handling. Also, some of the schemes for making this change would mean that new ErrorKinds could only be stabilised about once every 3 years, which is far too slow.

How to fix code broken by this change

Most main-line error handling code already has a fallback case for unknown errors. Simply replacing any occurrence of Other with _ is right.

How to fix thorough tests

The tricky problem is tests. Typically, a thorough test case wants to check that the error is "precisely as expected" (as far as the test can tell). Now that unknown errors come out as an unstable
Uncategorized variant, that's not so easy. If the test is expecting an error that is currently not categorised, you want to write code that says "if the error is any of the recognised kinds, call it a test failure". What does "any of the recognised kinds" mean here? It doesn't mean any of the kinds recognised by the version of the Rust stdlib that is actually in use. That set might get bigger. When the test is compiled and run later, perhaps years later, the error in this test case might indeed be categorised. What you actually mean is "the error must not be any of the kinds which existed when the test was written".

IMO therefore the right solution for such a test case is to cut and paste the current list of stable ErrorKinds into your code. This will seem wrong at first glance, because the list in your code and in Rust can get out of step. But when they do get out of step you want your version, not the stdlib's. So freezing the list at a point in time is precisely right. You probably only want to maintain one copy of this list, so put it somewhere central in your codebase's test support machinery. Periodically, you can update the list deliberately, and fix any resulting test failures. Unfortunately this approach is not suggested by the documentation. In theory you could work all this out yourself from first principles, given even the situation prior to May 2020, but it seems unlikely that many people have done so. In particular, cutting and pasting the list of recognised errors would seem very unnatural.

Conclusions

This was not an easy problem to solve well. I think Rust has done a plausible job given the various constraints, and the result is technically good. It is a shame that this change to make the error handling stability more correct caused the most trouble for the most careful people who write the most thorough tests. I also think the docs could be improved.

edited shortly after posting, and again 2021-09-22 16:11 UTC, to fix HTML slips