
prefork-interp

I found the result very straightforward to deploy. prefork-interp is a small C program which wraps a script, plus a scripting language library to cooperate with the wrapper program. Together they achieve the following:
The script is invoked in the usual way, via its #! line.

prefork-interp uses an AF_UNIX socket (hopefully in /run/user/UID, but in ~ if not) for rendezvous. We can try to connect without locking, but we must protect the socket with a separate lockfile to avoid two concurrent restart attempts.
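A minimal sketch of that rendezvous logic (my illustration, not prefork-interp's actual source; lock_path, sock_path and start_server are hypothetical names):

#include <sys/socket.h>
#include <sys/un.h>
#include <sys/file.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Try to connect to an existing server; returns a fd, or -1. */
static int try_connect(const char *sock_path) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, sock_path, sizeof(sa.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
        return fd;
    close(fd);
    return -1;
}

int rendezvous(const char *sock_path, const char *lock_path) {
    int fd = try_connect(sock_path);   /* optimistic, lock-free */
    if (fd >= 0) return fd;
    int lock = open(lock_path, O_CREAT | O_RDWR, 0600);
    flock(lock, LOCK_EX);              /* serialise restart attempts */
    fd = try_connect(sock_path);       /* maybe another invocation won */
    if (fd < 0) {
        /* fd = start_server(sock_path);  -- would unlink any stale
         * socket, fork the daemonic server, and wait for readiness */
    }
    flock(lock, LOCK_UN);
    close(lock);
    return fd;
}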
We want stderr from the script setup (pre-initialisation) to be delivered to the caller, so the script ought to inherit our stderr and then will need to replace it later. Twice, in fact, because the daemonic server process can't have a stderr.
When a script is restarted for any reason, any old socket will be removed. We want the old server process to detect that and quit. (If it hung about, it would wait for the idle timeout; if this happened a lot - eg, a constantly changing set of services - we might end up running out of pids or something.) Spotting the socket disappearing, without polling, involves use of a library capable of using inotify (or the equivalent elsewhere). Choosing a C library to do this is not so hard, but portable interfaces to this functionality can be hard to find in scripting languages, and also we don't want every language binding to have to reimplement these checks. So for this purpose there's a little watcher process, and associated IPC.
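For illustration, watching for the socket's deletion with inotify looks roughly like this (an assumed sketch, not the actual watcher code; dir and name are hypothetical parameters):

#include <sys/inotify.h>
#include <string.h>
#include <unistd.h>

/* Block until "name" is removed from directory "dir". */
void wait_for_removal(const char *dir, const char *name) {
    char buf[4096];
    int ifd = inotify_init1(IN_CLOEXEC);
    inotify_add_watch(ifd, dir, IN_DELETE | IN_MOVED_FROM);
    for (;;) {
        ssize_t n = read(ifd, buf, sizeof(buf));  /* blocks: no polling */
        for (ssize_t off = 0; off < n; ) {
            struct inotify_event *ev = (struct inotify_event *)(buf + off);
            if (ev->len && strcmp(ev->name, name) == 0) {
                close(ifd);
                return;               /* our socket is gone: time to quit */
            }
            off += sizeof(*ev) + ev->len;
        }
    }
}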
When an invoking instance of prefork-interp is killed, we must arrange for the executing service instance to stop reading from its stdin (and, ideally, writing its stdout). Otherwise it's stealing input from prefork-interp's successors (maybe the user's shell)!
Cleanup ought not to depend on positive actions by failing processes, so each element of the system has to detect failures of its peers by means such as EOF on sockets/pipes.
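As a sketch (my illustration, assuming a simple blocking reader), EOF-based peer-failure detection is just:

#include <unistd.h>

/* Returns when the peer goes away: read() returning 0 means EOF,
 * which we see whether the peer exited cleanly or was killed. */
void watch_peer(int fd) {
    char c;
    for (;;) {
        ssize_t n = read(fd, &c, 1);
        if (n <= 0) return;   /* EOF or error: treat the peer as failed */
        /* otherwise: real data; a real program would process it */
    }
}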
Obtaining prefork-interp

I put this new tool in my chiark-utils package, which is a collection of useful miscellany. It's available from git.
Currently I make releases by uploading to Debian, where prefork-interp has just hit Debian unstable, in chiark-utils 7.0.0.
Support for other scripting languages
I would love Python to be supported. If any pythonistas reading this think you might like to help out, please get in touch. The specification for the protocol, and what the script library needs to do, is documented in the source code.
Future plans for chiark-utils
chiark-utils as a whole is in need of some tidying up of its build system and packaging.
I intend to try to do some reorganisation. Currently I think it would be better to organise the source tree more strictly, with a directory for each included facility, rather than grouping compiled programs and scripts together.
The Debian binary packages should be reorganised more fully according to their dependencies, so that installing a program will ensure that it works.
I should probably move the official git repo from my own git+gitweb to a forge (so we can have MRs and issues and so on).
And there should be a lot more testing, including Debian autopkgtests.
edited 2022-08-23 10:30 +01:00 to improve the formatting
For this you need the swtpm and swtpm-tools packages. Then add a TPM device to the VM's libvirt configuration:

<devices>
  <tpm model='tpm-tis'>
    <backend type='emulator' version='2.0'/>
  </tpm>
</devices>
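One way to apply that snippet (my assumption; the post doesn't prescribe a method) is libvirt's built-in XML editor:

virsh edit w10    # "w10" is a hypothetical domain name; add the <tpm> element under <devices>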
Alternatively, if you prefer the graphical interface, click on the "Add hardware" button in the VM properties, choose the TPM, set it to Emulated, model TIS, and set its version to 2.0.

My VM, however, was using the i440fx chipset. This one is limited to PCI and IDE, unlike the more modern q35 chipset (which supports PCIe and SATA, and does not support IDE nor SATA in IDE mode).
There is a UEFI/Secure Boot-capable BIOS for qemu, but it apparently requires the q35 chipset.
Fun fact (which I found out the hard way): Windows stores where its boot
partition is somewhere. If you change the hard drive controller from an
IDE one to a SATA one, you will get a BSOD at startup. In order to fix
that, you need a recovery
drive.
To create the virtual USB disk, go to the VM properties, click "Add
hardware", choose "Storage", choose the USB bus, and then under
"Advanced options", select the "Removable" option, so it shows up as a
USB stick in the VM. Note: this takes a while to do (took about an hour
on my system), and your virtual USB drive needs to be 16G or larger (I
used the libvirt default of 20G).
There is no possibility, using the buttons in the virt-manager
GUI, to
convert the machine from i440fx
to q35
. However, that doesn't mean
it's not possible to do so. I found that the easiest way is to use the
direct XML editing capabilities in the virt-manager
interface; if you
edit the XML in an editor it will produce error messages if something
doesn't look right and tell you to go and fix it, whereas the
virt-manager
GUI will actually fix things itself in some cases (and
will produce helpful error messages if not).
What I did was:

- Change the machine attribute of the domain.os.type element, so that it says pc-q35-7.0.
- Find the domain.devices.controller element that has pci in its type attribute and pci-root in its model one, and set the model attribute to pcie-root instead.
- Find all domain.devices.disk.target elements, changing their dev=hdX to dev=sdX, and bus="ide" to bus="sata".
- Find the domain.devices.controller with type="usb", and set its model to qemu-xhci. You may also want to add ports="15" if you didn't have that yet.
- Add a handful of pcie-root-port controllers:

<controller type="pci" index="1" model="pcie-root-port"/>
<controller type="pci" index="2" model="pcie-root-port"/>
<controller type="pci" index="3" model="pcie-root-port"/>
If virt-manager gives you an error when you hit the Apply
button, compare notes against the VM that you're in the process of
button, compare notes against the VM that you're in the process of
creating, and copy/paste things from there to the old VM to make the
errors go away. As long as you don't remove configuration that is
critical for things to start, this shouldn't break matters permanently
(but hey, use your backups if you do break -- you have backups, right?)
OK, cool, so now we have a Windows VM that is... unable to boot.
Remember what I said about Windows storing where the controller is?
Yeah, there you go. Boot from the virtual USB disk that you created
above, and select the "Fix the boot" option in the menu. That will fix
it.
Ha ha, only kidding. Of course it doesn't.
I honestly can't tell you everything that I fiddled with, but I
think the bit that eventually fixed it was where I chose "safe mode",
which caused the system to do a hiccup, a regular reboot, and then
suddenly everything was working again. Meh.
Don't throw the virtual USB disk away yet, you'll still need it.
Anyway, once you have it booting again, you will now have a machine that
theoretically supports Secure Boot, but you're still running off an
MBR partition. I found a
procedure
on how to convert things from MBR to GPT that was written almost 10
years ago, but surprisingly it still works, except for the bit where the
procedure suggests you use diskmgmt.msc (for one thing, that was
renamed; and for another, it can't touch the partition table of the
system disk either).
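(As an aside, not part of the original procedure: Windows 10 1703 and later ship a built-in mbr2gpt tool which can do this conversion; a hedged sketch, assuming the system disk is disk 0:

mbr2gpt /validate /disk:0 /allowFullOS
mbr2gpt /convert /disk:0 /allowFullOS

/allowFullOS lets it run from the running OS rather than from Windows PE.)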
The last step in that procedure says to "restart your computer!",
which is fine, except at this point you obviously need to switch over to
the TianoCore firmware, otherwise you're trying to read a UEFI boot
configuration on a system that only supports MBR booting, which
obviously won't work. In order to do that, you need to add a loader
element to the domain.os
element of your libvirt configuration:
<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.ms.fd</loader>
When you do this, you'll note that virt-manager
automatically adds an
nvram
element. That's fine, let it.
I figured this out by looking at the documentation for enabling Secure
Boot in a VM on the
Debian wiki, and using the same trick as for how to switch chipsets that
I explained above.
Okay, yay, so now secure boot is enabled, and we can install Windows 11!
All good? Well, almost.
I found that once I enabled secure boot, my display reverted to a
1024x768 screen. This turned out to be because I was using older
unsigned drivers, and since we're using Secure Boot, that's no longer
allowed, which means Windows reverts to the default VGA driver, and that
only supports the 1024x768 resolution. Yeah, I know. The solution is to download the virtio-win ISO from one of the links in the virtio-win github project, connect it to the VM, go to Device manager, select the display controller, click on the "Update driver" button, tell the system that you have the driver on your computer, browse to the CD-ROM drive, tick the "include subdirectories" option, and then let Windows do its thing. While there, it might be good to do the same for any other unrecognized devices in the device manager.
So, all I have to do next is to get used to the completely different
user interface of Windows 11. Sigh.
Oh, and to rename the "w10" VM to "w11", or some such. Maybe.
[…] /usr/local.)
chiark's install is also at the very high end of the installation complexity, and customisation, scale: reinstalling it completely would be an enormous amount of work. And it's unique.

chiark's upgrade history

chiark's last major OS upgrade was to jessie (Debian 8, released in April 2015). That was in 2016. Since then we have been relying on Debian's excellent security support posture, and the Debian LTS and more recently Freexian's Debian ELTS projects, and some local updates. The use of ELTS - which supports only a subset of packages - was particularly uncomfortable.
Additionally, chiark was installed with 32-bit x86 Linux (Debian i386), since that was what was supported and available at the time. But 32-bit is looking very long in the tooth.
Why do a skip upgrade
So, I wanted to move to the fairly recent stable release - Debian 11 (bullseye), which is just short of a year old. And I wanted to crossgrade (as it's called) to 64-bit.
In the past, I have found I have had greater success by doing direct upgrades, skipping intermediate releases, rather than by following the officially-supported path of going via every intermediate release.
Doing a skip upgrade avoids exposure to any packaging bugs which were present only in intermediate release(s). Debian does usually fix bugs, but Debian has many cautious users, so it is not uncommon for bugs to be found after release, and then not be fixed until the next one.
A skip upgrade avoids the need to try to upgrade to already-obsolete releases (which can involve messing about with multiple snapshots from snapshot.debian.org). It is also significantly faster and simpler, which is important not only because it reduces downtime, but also because it removes opportunities (and reduces the time available) for things to go badly.
One downside is that sometimes maintainers aggressively remove compatibility measures for older releases. (And compatibility packages are generally removed quite quickly by even cautious maintainers.) That means that the sysadmin who wants to skip-upgrade needs to do more manual fixing of things that haven't been dealt with automatically. And occasionally one finds compatibility problems that show up only when mixing very old and very new software, that no-one else has seen.
Crossgrading
Crossgrading is fairly complex and hazardous. It is well supported by the low level tools (eg, dpkg) but the higher-level packaging tools (eg, apt) get very badly confused.
Nowadays the system is so complex that downloading things by hand and manually feeding them to dpkg is impractical, other than as a very occasional last resort.
The approach, generally, has been to set the system up to want to be the new architecture, run apt in a download-only mode, and do the package installation manually, with some fixing up and retrying, until the system is coherent enough for apt to work.
This is the approach I took. (In current releases, there are tools that will help but they are only in recent releases and I wanted to go direct. I also doubted that they would work properly on chiark, since it's so unusual.)
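For flavour, the skeleton of such a crossgrade looks something like this (a much-simplified sketch of the general recipe, not my actual scripts):

# tell dpkg about the new architecture
dpkg --add-architecture amd64
apt-get update

# swap the core toolchain first: installing dpkg:amd64 makes amd64
# the native architecture
apt-get --download-only -y install dpkg:amd64 tar:amd64 apt:amd64
dpkg --install /var/cache/apt/archives/*_amd64.deb

# then download everything the new system wants and feed it to dpkg
# manually, fixing up and retrying until apt is coherent again
apt-get --download-only -y dist-upgrade
dpkg --install /var/cache/apt/archives/*.deb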
Peril and planning
Overall, this was a risky strategy to choose. The package dependencies wouldn't necessarily express all of the sequencing needed. But it still seemed that if I could come up with a working recipe, I could do it.
I restored most of one of chiark's backups onto a scratch volume on my laptop. With the LVM snapshot tools and chroots, I was able to develop and test a set of scripts that would perform the upgrade. This was a very effective approach: my super-fast laptop, with local caches of the package repositories, was able to do many edit, test, debug cycles.
My recipe made heavy use of snapshot.debian.org, to make sure that it wouldn't rot between testing and implementation.
When I had a working scheme, I told my users about the planned downtime. I warned everyone it might take even 2 or 3 days. I made sure that my access arrangements to the data centre were in place, in case I needed to visit in person. (I have remote serial console and power cycler access.)
Reality - the terrible rescue install
My first task on taking the service down was to check that the emergency rescue installation worked: chiark has an ancient USB stick in the back, which I can boot to from the BIOS. The idea being that many things that go wrong could be repaired from there.
I found that that install was too old to understand chiark's storage arrangements. mdadm tools gave very strange output. So I needed to upgrade it. After some experiments, I rebooted back into the main install, bringing chiark's service back online.
I then used the main install of chiark as a kind of meta-rescue-image for the rescue-image. The process of getting the rescue image upgraded (not even to amd64, but just to something not totally ancient) was fraught. Several times I had to rescue it by copying files in from the main install outside. And, the rescue install was on a truly ancient 2G USB stick which was terribly terribly slow, and also very small.
I hadn't done any significant planning for this subtask, because it was low-risk: there was little way to break the main install. Due to all these adverse factors, sorting out the rescue image took five hours.
If I had known how long it would take, at the beginning, I would have skipped it. 5 hours is more than it would have taken to go to London and fix something in person.
Reality - the actual core upgrade
I was able to start the actual upgrade in the mid-afternoon. I meticulously checked and executed the steps from my plan.
The terrifying scripts which sequenced the critical package updates ran flawlessly. Within an hour or so I had a system which was running bullseye amd64, albeit with many important packages still missing or unconfigured.
So I didn't need the rescue image after all, nor to go to the datacentre.
Fixing all the things
Then I had to deal with all the inevitable fallout from an upgrade.
Notable incidents:
exim4 has a new tainting system
This is to try to help the sysadmin avoid writing unsafe string interpolations. ("Little Bobby Tables.") This was done by Exim upstream in a great hurry as part of a security response process.
The new checks meant that the mail configuration did not work at all. I had to turn off the taint check completely. I'm fairly confident that this is correct, because I am hyper-aware of quoting issues and all of my configuration is written to avoid the problems that tainting is supposed to avoid.
One particular annoyance is that the approach taken for sqlite lookups makes it totally impossible to use more than one sqlite database. I think the sqlite quoting operator which one uses to interpolate values produces tainted output? I need to investigate this properly.
LVM now ignores PVs which are directly contained within LVs by default
chiark has LVM-on-RAID-on-LVM. This generally works really well.
However, there was one edge case where I ended up without the intermediate RAID layer. The result is LVM-on-LVM.
But recent versions of the LVM tools do not look at PVs inside LVs, by default. This is to help you avoid corrupting the state of any VMs you have on your system. I didn't know that at the time, though. All I knew was that LVM was claiming my PV was "unusable", and wouldn't explain why.
I was about to start on a thorough reading of the 15,000-word essay that is the commentary in the default /etc/lvm/lvm.conf
to try to see if anything was relevant, when I received a helpful tipoff on IRC pointing me to the scan_lvs
option.
I need to file a bug asking for the LVM tools to explain why they have declared a PV unusable.
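For reference, the relevant knob lives in the devices section of /etc/lvm/lvm.conf (a minimal excerpt; the comment is mine):

devices {
    # allow commands to scan LVs for PV signatures, as needed for
    # LVM-on-LVM stacks; the default is 0 (off)
    scan_lvs = 1
}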
apache2's default config no longer read one of my config files
I had to do a merge (of my changes vs the maintainers' changes) for /etc/apache2/apache2.conf. When doing this merge I failed to notice that the file /etc/apache2/conf.d/httpd.conf was no longer included by default. My merge dropped that line. There were some important things in there, and until I found this the webserver was broken.
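The cure was to put an include line back (an assumed reconstruction of the shape of the fix, not the exact line):

# in /etc/apache2/apache2.conf: re-include the legacy conf.d file
IncludeOptional /etc/apache2/conf.d/httpd.conf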
dpkg --skip-same-version DTWT during a crossgrade

(This is not a "fix all the things" - I found it when developing my upgrade process.)
When doing a crossgrade, one often wants to say to dpkg "install all these things, but don't reinstall things that have already been done". That's what --skip-same-version is for.
However, the logic had not been updated as part of the work to support multiarch, so it was wrong. I prepared a patched version of dpkg, and inserted it in the appropriate point in my prepared crossgrade plan.
The patch is now filed as bug #1014476 against dpkg upstream.
Mailman
Mailman is no longer in bullseye. It's only available in the previous release, buster.
bullseye has Mailman 3, which is a totally different system - requiring, basically, a completely new install and configuration. To even preserve existing archive links (a very important requirement) is decidedly nontrivial.
I decided to punt on this whole situation. Currently chiark is running buster's version of Mailman. I will have to deal with this at some point and I'm not looking forward to it.
Python
Of course, that Mailman is Python 2. The Python project's extremely badly handled transition includes a recommendation to change the meaning of #!/usr/bin/python from Python 2 to Python 3.
But Python 3 is a new language, barely compatible with Python 2 even in the most recent iterations of both, and it is usual to need to coinstall them.
Happily Debian have provided the python-is-python2
package to make things work sensibly, albeit with unpleasant imprecations in the package summary description.
USENET news
Oh my god. INN uses many non-portable data formats, which just depend on your C types. And there are complicated daemons, statically linked libraries which cache on-disk data, and much to go wrong.
I had numerous problems with this, and several outages and malfunctions. I may write about that on a future occasion.
(edited 2022-07-20 11:36 +01:00 and 2022-07-30 12:28 +01:00 to fix typos)

--remap-path-prefix solves this problem and has been used to great effect in build systems that rely on reproducibility (Bazel, Nix) to work at all, and there are efforts to teach cargo about it here.
The ctx hosted project on PyPI was taken over via user account compromise and replaced with a malicious project which contained runtime code which collected the content of os.environ.items() when instantiating Ctx objects. The captured environment variables were sent as a base64 encoded query parameter to a Heroku application [ ]. As their announcement later goes on to state, version-pinning using hash-checking mode can prevent this attack, although this does depend on specific installations using this mode, rather than a prevention that can be applied systematically.
Although running diffoscope against the .jar may have been unnecessary, given that it would have identified the […], it must be said that there is something to be said for occasionally delving into seemingly low-level details, as well as describing any debugging process. Indeed, as vanitasvitae writes:
"Yes, this would have spared me from 3h of debugging. But I probably would also not have gone onto this little dive into the JAR/ZIP format, so in the end I'm not mad."
[…] (e.g. KBUILD_BUILD_TIMESTAMP) in order to prepare my build with the "known to disrupt code layout" options disabled.
[…] nondeterministic_checksum_generated_by_coq and nondetermistic_js_output_from_webpack.
After Holger Levsen found hundreds of packages in the bookworm distribution that lack .buildinfo files, he uploaded 404 source packages to the archive (with no meaningful source changes). Currently bookworm now shows only 8 packages without .buildinfo files, and those 8 are fixed in unstable and should migrate shortly. By contrast, Debian unstable will always have packages without .buildinfo files, as this is how they come through the NEW queue. However, as these packages were not built on the official build servers (ie. they were uploaded by the maintainer) they will never migrate to Debian testing. In the future, therefore, testing should never have packages without .buildinfo files again.
Roland Clobus posted yet another in-depth status report about his progress making the Debian Live images build reproducibly to our mailing list. In this update, Roland mentions that all major desktops build reproducibly with bullseye, bookworm and sid but also goes on to outline the progress made with automated testing of the generated images using openQA.
[…] FORCE_SOURCE_DATE=1 in the environment of all builds in order to fix numerous timestamp issues in documentation generation tools.
[…] the maradns package, as it appears to embed a random prime number. (Patch)
This paper focuses on one research question: how can [Guix](https://www.gnu.org/software/guix/) and similar systems allow users to securely update their software? [ ] Our main contribution is a model and tool to authenticate new Git revisions. We further show how, building on Git semantics, we build protections against downgrade attacks and related threats. We explain implementation choices. This work has been deployed in production two years ago, giving us insight on its actual use at scale every day. The Git checkout authentication at its core is applicable beyond the specific use case of Guix, and we think it could benefit to developer teams that use Git.
A full PDF of the text is available.
[…] versions 215, 216 and 217 were uploaded to Debian unstable. Chris Lamb also made the following changes:

- […] --profile […] and we were killed via a TERM signal. This should help in situations where diffoscope is terminated due to some sort of timeout. [ ]
- Catch IndexError exceptions (in addition to ValueError) when parsing .pyc files. (#1012258)
- […] argcomplete module. [ ]
- […] readelf (ie. binutils), as it appears that this patch level version change resulted in a change of output, not the minor version. [ ]
- Prefer the @skip_unless_tool_is_at_least decorator (NB. at_least) over @skip_if_tool_version_is (NB. is) to fix tests under Debian stable. [ ]
- […] TERM signal. [ ]

[…] build-compare caused a regression for a few days.

[…]
- python-fasttext (CPU-related issue).
- node-dommatrix.
- rtpengine.
- sphinxcontrib-mermaid.
- yaru-theme.
- mapproxy (forwarded upstream).
- libxsmm.
- yt-dlp (forwarded upstream).

[…] installed the lz4, lzop and xz-utils packages on all nodes in order to detect running kernels. [ ]

[…] SOURCE_DATE_EPOCH environment variable [ ]. In addition, Sebastian Crane very-helpfully updated the screenshot of salsa.debian.org's request access button on the How to join the Salsa group page. [ ]
You can get in touch with us via #reproducible-builds on irc.oftc.net, or via our mailing list, rb-general@lists.reproducible-builds.org.
I use a git repository to track the ledger file, as well as a notes.txt "diary" that documents my planning around its structure and how to use it, and an import.txt which documents what account data I have imported and confirmed that the resulting balances match those reported on monthly statements.
For this evaluation, I decided to bite the bullet and track both family and
personal finances at the same time. I'm still keeping them conceptually very
separate. To reflect that I've organised my account names around that: all accounts relating to family are prefixed family:, and likewise personal ones jon:. Some example accounts:
family:assets:shared - shared bank account
family:dues:jon - I owe to family
family:expenses:cat - budget category for the cat
income - where money enters this universe
jon:assets:current - my personal account
jon:dues:peter - money Peter owes me
jon:expenses:snacks - budget category for coffees etc
jon:liabilities:amex - a personal credit card
I decided to make the calendar year a strict cut-over point: my personal
opening balances in hledger
are determined by what GNUCash reports. It's
possible those will change over this year, as adjustments are made to last
year's data: but it's easy enough to go in and update the opening balances
in hledger
to reflect that.
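For instance, an opening-balances transaction might look like this (the figures and the equity account name are illustrative, not from my real ledger):

2022-01-01 opening balances per GNUCash
    jon:assets:current       £1234.56
    jon:liabilities:amex      £-78.90
    equity:opening-balances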
Credit cards are a small exception. January's credit card bills are paid in
January but cover transactions from mid-December. I import those transactions
into hledger
to balance the credit card payment. As a consequence, the "spend
per month" view of my data is a bit skewed: All the transactions in December
should be thought of as in January since that's when they were paid. I need to
explore options to fix this.
When I had family and personal managed separately, occasionally something
would be paid for on the wrong card and end up in the wrong data. The solution
I used last year was to keep an account dues:family
to which I posted those
and periodically I'd settle it with a real-world bank transfer.
I've realised that this doesn't work so well when I manage both together:
I can't track both dues and expense categorisation with just one posting.
The solution I'm using for now is hledger's unbalanced virtual postings:
a third posting for the transaction to the budget category, which is not
balanced, e.g.:
2022-01-02 ZTL*RELISH
    family:liabilities:creditcard   £-3.00
    family:dues:jon                  £3.00
    (jon:expenses:snacks)            £3.00
This works, but it completely side-steps double-entry book keeping, which
is the whole point of using a double-entry system. There's also no check
and balance that the figure I put in the virtual posting (£3) matches the
figure in the rest of the transaction. I'm therefore open to other ideas.
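One idea (my suggestion, not something the post settles on): hledger also has balanced virtual postings, written in square brackets, which must sum to zero among themselves, restoring a check at the cost of one more virtual account (jon:budget here is a hypothetical name):

2022-01-02 ZTL*RELISH
    family:liabilities:creditcard   £-3.00
    family:dues:jon                  £3.00
    [jon:expenses:snacks]            £3.00
    [jon:budget]                    £-3.00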
There are some places in hledger where default account names are used, such as the default place that expenses are posted to during CSV imports: expenses:unknown, which obviously doesn't fit my family:/jon: prefix scheme. The solution is to make sure I specify a default posting-to account in all my
CSV import rules.
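A sketch of such a rules file (hypothetical file name and field layout; the directives are standard hledger CSV rules):

# amex.csv.rules
skip 1
fields date, description, amount
account1 jon:liabilities:amex
# default posting-to account, so nothing lands outside the prefix scheme
account2 jon:expenses:unknown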
Review: Princess Floralinda and the Forty-Flight Tower, by Tamsyn Muir

Publisher: Subterranean Press
Copyright: 2020
ISBN: 1-59606-992-9
Format: Kindle
Pages: 111
"You are displaying a very small-minded attitude," said the fairy, who seemed genuinely grieved by this. "Consider the orange-peel, which by itself has many very nice properties. Now, if you had a more educated brain (I cannot consider myself educated; I have only attempted to better my situation) you would have immediately said, 'Why, if I had some liquor, or even very hot water, I could extract some oil from this orange-peel, which as everyone knows is antibacterial; that may well do my hands some good,' and you wouldn't be in such a stupid predicament."On balance, I think this style worked. It occasionally annoyed me, but it has some charm. About halfway through, I was finding the story lightly entertaining, although I would have preferred a bit less grime, illness, and physical injury. Unfortunately, the rest of the story didn't work for me. The dynamic between Floralinda and Cobweb turns into a sort of D&D progression through monster fights, and while there are some creative twists to those fights, they become all of a sameness. And while I won't spoil the ending, it didn't work for me. I think I see what Muir was trying to do, and I have some intellectual appreciation for the idea, but it wasn't emotionally satisfying. I think my root problem with this story is that Muir sets up a rather interesting world, one in which witches artistically imprison princesses, and particularly bright princesses (with the help of amateur chemist fairies) can use the trappings of a magical tower in ways the witch never intended. I liked that; it has a lot of potential. But I didn't feel like that potential went anywhere satisfying. There is some relationship and characterization work, and it reached some resolution, but it didn't go as far as I wanted. And, most significantly, I found the end point the characters reached in relation to the world to be deeply unsatisfying and vaguely irritating. I wanted to like this more than I did. I think there's a story idea in here that I would have enjoyed more. Unfortunately, it's not the one that Muir wrote, and since so much of my problem is with the ending, I can't provide much guidance on whether someone else would like this story better (and why). But if the idea of taking apart a fairy-tale tower and repurposing the pieces sounds appealing, and if you get along better with Muir's illness motif than I do, you may enjoy this more than I did. Rating: 5 out of 10
Review: The Story of the Treasure Seekers, by E. Nesbit

Publisher: Amazon
Copyright: 1899
Printing: May 2012
ASIN: B0082ZBXSI
Format: Kindle
Pages: 136
This is the story of the different ways we looked for treasure, and I think when you have read it you will see that we were not lazy about the looking. There are some things I must tell before I begin to tell about the treasure-seeking, because I have read books myself, and I know how beastly it is when a story begins, "Alas!" said Hildegarde with a deep sigh, "we must look our last on this ancestral home" and then some one else says something and you don't know for pages and pages where the home is, or who Hildegarde is, or anything about it.

The first-person narrator of The Story of the Treasure Seekers is one of the six kids.
It is one of us that tells this story but I shall not tell you which: only at the very end perhaps I will.

The narrator then goes on to elaborately praise one of the kids, occasionally accidentally uses "I" instead of their name, and then remembers and tries to hide who is telling the story again. It's beautifully done and had me snickering throughout the book. It's not much of a mystery (you will figure out who is telling the story very quickly), but Nesbit captures the writing style of a kid astonishingly well without making the story poorly written. Descriptions of events have a headlong style that captures a child's sense of adventure and heedless immortality mixed with quiet observations that remind the reader that kids don't miss as much as people think they do.

I think the most skillful part of this book is the way Nesbit captures a kid's disregard of literary convention. The narrator in a book written by an adult tends to fit into a standard choice of story-telling style and follow it consistently. Even first-person narrators who break some of those rules feel like intentionally constructed characters. The Story of the Treasure Seekers is instead half "kid telling a story" and half "kid trying to emulate the way stories are told in books" and tends to veer wildly between the two when the narrator gets excited, as if they're vaguely aware of the conventions they're supposed to be following but are murky on the specifics. It feels exactly like the sort of book a smart and well-read kid would write (with extensive help from an editor).

The other thing that Nesbit handles exceptionally well is the dynamic between the six kids. This is a collection of fairly short stories, so there isn't a lot of room for characterization. The kids are mostly sketched out with one or two memorable quirks. But Nesbit puts a lot of effort into the dynamics that arise between the children in a tight-knit family, properly making the group of kids as a whole and in various combinations a sort of character in their own right. Never for a moment does either the reader or the kids forget that they have siblings. Most adventures involve some process of sorting out who is going to come along and who is going to do other things, and there's a constant but unobtrusive background rhythm of bickering, making up, supporting each other, being frustrated by each other, and getting exasperated at each other's quirks. It's one of the better-written sibling dynamics that I've read.

I somehow managed to miss Nesbit entirely as a kid, probably because she didn't write long series and child me was strongly biased towards books that were part of long series. (One book was at most a pleasant few hours; there needed to be a whole series attached to get any reasonable amount of reading out of the world.) This was nonetheless a fun bit of nostalgia because it was so much like the books I did read: kids finding adventures and making things up, getting into various trouble but getting out of it by being honest and kind, and only occasional and spotty adult supervision. Reading as an adult, I can see the touches of melancholy of loss that Nesbit embeds into this quest for riches, but part of the appeal of the stories is that the kids determinedly refuse to talk about it except as a problem to be solved.

Nesbit was a rather famous progressive, but this is still a book of its time, which means there's one instance of the n-word and the kids have grown up playing the very racist version of cowboys and indians.
The narrator also does a lot of stereotyping of boys and girls, although Nesbit undermines that a bit by making Alice a tomboy. I found all of this easier to ignore because the story is narrated by one of the kids who doesn't know any better, but your mileage may vary.

I am always entertained by how anyone worth writing about in a British children's novel of this era has servants. You know the Bastables have fallen upon hard times because they only have one servant. The kids don't have much respect for Eliza, which I found a bit off-putting, and I wondered what this world looks like from her perspective. She clearly did a lot of the work of raising these motherless kids, but the kids view her as the hired help or an obstacle to be avoided, and there's not a lot of gratitude present.

As the stories unfold, it becomes more and more clear that there's a quiet conspiracy of surrounding adults to watch out for these kids, which the kids never notice. This says good things about society, but it does undermine the adventures a little, and by the end of the book the sameness of the stories was wearing a bit thin. The high point of the book is probably chapter eight, in which the kids make their own newspaper, the entirety of which is reproduced in the book and is a note-perfect recreation of what an enterprising group of kids would come up with. In the last two stories, Nesbit tacks on an ending that was probably obligatory, but which I thought undermined some of the emotional subtext of the rest of the book. I'm not sure how else one could have put an ending on this book, but the ending she chose emphasized the degree to which the adventures really were just play, and the kids are rewarded in these stories for their ethics and their circumstances rather than for anything they concretely do. It's a bit unsatisfying.

This is mostly a nostalgia read, but I'm glad I read it. If this book was not part of your childhood, it's worth reading if only for how well Nesbit captures a child's narrative voice.

Rating: 7 out of 10