Search Results: "ssm"

19 March 2023

Michael Ablassmeier: small standalone sshds in go

Been looking into some existing sshd implementations in Go. Most of the projects on GitHub seem to use the standard x/crypto/ssh lib. During testing, I just wanted to see which banner these kinds of ssh servers provide, using the simple command:
 nc localhost <port>
And noticed that at least some of these sshds did not accept any further connections. Simple DoS via netcat, nice. To this day, the Go documentation is missing the crucial hint that the function handling the connection should be called as a goroutine, otherwise it simply blocks any further incoming connections. Created some pull requests on the most starred projects I found; it seems even experienced Go devs missed this part.
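A minimal sketch of the fix, assuming an x/crypto/ssh-based server (the throwaway host key, the port, and the bare-bones handleConn are illustrative stand-ins for whatever a real server does):

 package main

 import (
 	"crypto/rand"
 	"crypto/rsa"
 	"log"
 	"net"

 	"golang.org/x/crypto/ssh"
 )

 func main() {
 	// Generate a throwaway host key so the sketch is self-contained; a real
 	// server would load a persistent key from disk.
 	key, err := rsa.GenerateKey(rand.Reader, 2048)
 	if err != nil {
 		log.Fatal(err)
 	}
 	signer, err := ssh.NewSignerFromKey(key)
 	if err != nil {
 		log.Fatal(err)
 	}
 	config := &ssh.ServerConfig{NoClientAuth: true}
 	config.AddHostKey(signer)

 	listener, err := net.Listen("tcp", "localhost:2222")
 	if err != nil {
 		log.Fatal(err)
 	}
 	for {
 		conn, err := listener.Accept()
 		if err != nil {
 			log.Println(err)
 			continue
 		}
 		// The crucial part: hand each connection to its own goroutine.
 		// Calling handleConn(conn, config) directly would block the accept
 		// loop until this client's handshake finishes -- the netcat DoS above.
 		go handleConn(conn, config)
 	}
 }

 func handleConn(conn net.Conn, config *ssh.ServerConfig) {
 	defer conn.Close()
 	// ssh.NewServerConn performs the handshake; a netcat client that never
 	// speaks SSH just stalls here without affecting other connections.
 	_, chans, reqs, err := ssh.NewServerConn(conn, config)
 	if err != nil {
 		return
 	}
 	go ssh.DiscardRequests(reqs)
 	for newChan := range chans {
 		newChan.Reject(ssh.Prohibited, "not implemented")
 	}
 }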

24 February 2023

Scarlett Gately Moore: Snowstorms, Kittens and Shattered dreams

Icy morning, Witch Wells, AZ
Long ago I applied for my dream job at a company I have wanted to work for since its beginning, and I wasn't ready technically. Fast forward to now: I am ready! A big thank you goes out to Blue Systems for that. So I go out and find the perfect role and start the application process. The process was months long, but it was going very well; the interviews went well and I passed the technical with flying colors. I got to the end, where the hiring lead told me he was submitting my offer. I was so excited, so much so that I told my husband and parents I got the job! I know, I jinxed myself there. Soon I receive the "There was a problem..." One obscure assessment called GIA came back not so good. I remember that day: we were in the middle of a long series of winter storms, and when I took the test my kitten decided right then it was "me time". I couldn't very well throw her out into the snowstorm, so I continued on the best I could. It is my fault; it clearly states to be distraction free. So I speak again to the hiring lead and we both feel that with my experience and technical knowledge and abilities we can still move forward. I still had hope. After some time passed, I asked for an update and got the dreaded rejection. I am told it wasn't just the GIA, but that I am not a good overall fit for the company. In one fell swoop my dreams are dashed and final, for this and all roles within that company. I wasn't given a reason either. I am devastated, heartbroken, and shocked. I get along with everyone, I exceed the technical requirements, and I work well in the community. Dream door closed.

I will not let this get me down. I am moving on. I will find my place where I fit in. With that said, I no longer have the will, passion, or drive to work on snaps anymore. I will leave instructions with Jonathon as to what needs to be done to move forward. The good news is my core22 kde-neon extension was merged into upstream snapcraft, so whoever takes over will have a much easier time knocking them out. @kubuntu-council: I will do whatever it takes to pay back the money for the hardware you provided me to do snaps; I am truly sorry about this.

What does my future hold? I will still continue with my Debian efforts. In fact, I have ventured out from the KDE umbrella and joined the go-team. I am finalizing my packaging for https://github.com/charmbracelet/gum and its dependencies: roff, mango, mango-kong. I had my first golang patch for a failing test and have submitted it upstream. I will upload these to experimental while the freeze is on. I will be moving all the libraries in the mycroft team to the python umbrella, as they are useful for other things and mycroft is no more. During the holidays I was tinkering around with selenium UI testing and stumbled on some accessibility issues within KDE, so I think this is a good place for me to dive into for my KDE contributions. I have been approached to collaborate with OpenOS on a few things; time permitting, I will see what I can do there. I have a possible gig to do some websites while I move forward in my job hunt. I will not give up! I will find my place where I fit in. Meanwhile, I must ask for donations to get us by. Anything helps, thank you for your consideration. https://gofund.me/a9c36b87

23 February 2023

Paul Tagliamonte: Announcing hz.tools

Interested in future updates? Follow me on mastodon at @paul@soylent.green. Posts about hz.tools will be tagged #hztools.

If you're on the Fediverse, I'd very much appreciate boosts on my announcement toot!
Ever since 2019, I've been learning about how radios work, and trying to learn about using them the hard way: by writing as much of the stack as is practical (for some value of practical) myself. I wrote my first Hello World in 2018, which was a simple FM radio player that used librtlsdr to read in an IQ stream, did some filtering, and played the real-valued audio stream via pulseaudio. Over 4 years this has slowly grown, through persistence, lots of questions to too many friends to thank (although I will try), and the eternal patience of my wife hearing about radios nonstop for years, into a number of Go repos that can do quite a bit and support a handful of radios. I've resisted making the repos public not out of embarrassment or a desire to keep secrets, but rather as an attempt to keep myself free of any maintenance obligations to users so that I could freely break my own API, and add and remove API surface as I saw fit. The worst case was to have this project feel like work, and I can imagine that happening if I felt frustrated by PRs getting ahead of me, solving problems I didn't yet know about, or fixing bugs I didn't understand the fix for. As my rate of changes to the most central dependencies has slowed, I've begun to entertain the idea of publishing them. After a bit of back and forth, I've decided it's time to make a number of them public, and to start working on them in the open, as I've built up a bit of knowledge in the space and I feel confident that the repo doesn't contain overt lies. That's not to say it doesn't contain lies, but those lies are likely hidden and lurking in the dark. Beware. That being said, it shouldn't be a surprise to say I've not published everything yet, for the same reasons as above. I plan to open repos as the rate of changes slows and I understand the problems the library solves well enough, or if the project dead-ends and I've stopped learning.

Intention behind hz.tools

It's my sincere hope that my repos help to make Software Defined Radio (SDR) code a bit easier to understand, and serve as an understandable framework to learn with. It's a large codebase, but one that is possible to sit down and understand because, well, it was written by a single person. Frankly, I'm also not productive enough in my free time in the middle of the night and on weekends and holidays to create a codebase that's too large to understand, I hope! I remain wary of this project turning into work, so my goal is to be very upfront about my boundaries, and the limits of what classes of contributions I'm interested in seeing. Here are some goals of open-sourcing these repos:
  • I do want this library to be used to learn with. Please go through it all and use it to learn about radios and how software can control them!
  • I am interested in bugs if there's a problem you discover. Such bugs are likely a great chance for me to fix something I've misunderstood or typoed.
  • I am interested in PRs fixing bugs you find. I may need a bit of back and forth to fully understand the problem if I do not understand the bug and fix yet. I hope you may have some grace if it's taking a long time.
Here's a list of some anti-goals of open-sourcing these repos.
  • I do not want this library to become a critical dependency of an important project, since I do not have the time to deal with the maintenance burden. Putting me in that position is going to make me very uncomfortable.
  • I am not interested in feature requests; the features have grown as I've hit problems, and I'm not interested in building or maintaining features for features' sake. The API surface should be exposed enough to allow others to experiment with such things out-of-tree.
  • I'm not interested in clever code replacing clear code without a very compelling reason.
  • I use GNU/Linux (specifically Debian), and from time to time I've made sure that my code runs on OpenBSD too. Platforms beyond that will likely not be supported at the expense of either of those two. I'll take fixes for bugs that fix a problem on another platform, but not damage the code to work around issues or lack of features on other platforms (like Windows).
I'm not saying all this to be a jerk; I do it to make sure I can continue on my journey to learn about how radios work without my full-time job becoming maintaining a radio framework single-handedly for other people to use, even if it means I need to close PRs or bugs without merging or fixing them. With all that out of the way, I'm very happy to announce that the repos are now public under github.com/hztools.

Should you use this?

Probably not. The intent here is not to provide a general-purpose Go SDR framework for everyone to build on, although I am keenly aware it looks and feels like it, since that's what it is to me. This is a learning project, so anyone with a use beyond joining me in learning should use something like GNU Radio or a similar framework that has a community behind it. In fact, I suspect most contributors ought to be contributing to GNU Radio, and not this project. If I can encourage people to do so: contribute to GNU Radio! Nothing makes me happier than seeing GNU Radio continue to be the go-to, and well supported. Consider donating to GNU Radio!

hz.tools/rf - Frequency types

The hz.tools/rf library contains the abstract concept of frequency, and some very basic helpers to interact with frequency ranges (such as helpers to deal with frequency ranges, or frequency range math) as well as frequencies, some very basic conversions (to meters, etc.) and parsers (to parse values like "10MHz"). This ensures that all the hz.tools libraries have a shared understanding of Frequencies, a standard way of representing ranges of Frequencies, and the ability to handle the IO boundary with things like CLI arguments, JSON or YAML. The git repo can be found at github.com/hztools/go-rf, and is importable as hz.tools/rf.
 // Parse a frequency using hz.tools/rf.ParseHz, and print it to stdout.
 freq := rf.MustParseHz("-10kHz")
 fmt.Printf("Frequency: %s\n", freq+rf.MHz)
 // Prints: 'Frequency: 990kHz'

 // Return the Intersection between two RF ranges, and print
 // it to stdout.
 r1 := rf.Range{rf.KHz, rf.MHz}
 r2 := rf.Range{rf.Hz(10), rf.KHz * 100}
 fmt.Printf("Range: %s\n", r1.Intersection(r2))
 // Prints: Range: 1000Hz->100kHz
These can be used to represent tons of things - ranges can be used for things like the tunable range of an SDR, the bandpass of a filter or the frequencies that correspond to a bin of an FFT, while frequencies can be used for things such as frequency offsets or the tuned center frequency.
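As a small illustrative sketch (the scenario and values here are my own, built only from the rf.MHz constant and the rf.Range literal shown above): describe the span an SDR observes when tuned to a 100MHz center frequency with 2MHz of bandwidth.

 center := 100 * rf.MHz
 bandwidth := 2 * rf.MHz
 seen := rf.Range{center - bandwidth/2, center + bandwidth/2}
 fmt.Printf("Tuned span: %s\n", seen)
 // Should print something like: Tuned span: 99MHz->101MHz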

hz.tools/sdr - SDR I/O and IQ Types

This is the big one. This library represents the majority of the shared types and bindings, and is likely the most useful place to look at when learning about the IO boundary between a program and an SDR. The git repo can be found at github.com/hztools/go-sdr, and is importable as hz.tools/sdr. This library is designed to look like (and in some cases, mirror) the Go io idioms so that this library feels as idiomatic as it can, so that Go builtins interact with IQ in a way that's possible to reason about, and to avoid reinventing the wheel by designing new API surface. While some of the API looks like (and is even called) the same thing as a similar function in io, the implementation is usually a lot more naive, and may have unexpected sharp edges such as concurrency issues or performance problems. The following IQ types are implemented using the sdr.Samples interface. The hz.tools/sdr package contains helpers for conversion between types, and some basic manipulation of IQ streams.
IQ Format | hz.tools Name | Underlying Go Type
Interleaved uint8 (rtl-sdr) | sdr.SamplesU8 | [][2]uint8
Interleaved int8 (hackrf, uhd) | sdr.SamplesI8 | [][2]int8
Interleaved int16 (pluto, uhd) | sdr.SamplesI16 | [][2]int16
Interleaved float32 (airspy, uhd) | sdr.SamplesC64 | []complex64
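Since each sample type is (per the table above) a thin wrapper over an ordinary Go slice, buffers behave like normal slices; a tiny sketch of my own (the buffer size is arbitrary):

 // sdr.SamplesC64 is []complex64 underneath, so make, len and indexing all
 // work as for any Go slice.
 iq := make(sdr.SamplesC64, 4096)
 iq[0] = complex(0.5, -0.5)
 fmt.Println(len(iq)) // 4096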
The following SDRs have implemented drivers in-tree.
SDR | Format | RX/TX | State
rtl | u8 | RX | Good
HackRF | i8 | RX/TX | Good
PlutoSDR | i16 | RX/TX | Good
rtl kerberos | u8 | RX | Old
uhd | i16/c64/i8 | RX/TX | Good
airspyhf | c64 | RX | Exp
The following major packages and subpackages exist at the time of writing:
Import | What is it?
hz.tools/sdr | Core IQ types, supporting types and implementations that interact with the byte boundary.
hz.tools/sdr/rtl | sdr.Receiver implementation using librtlsdr.
hz.tools/sdr/rtl/kerberos | Helpers to enable coherent RX using the Kerberos SDR.
hz.tools/sdr/rtl/e4k | Helpers to interact with the E4000 RTL-SDR dongle.
hz.tools/sdr/fft | Interfaces for performing an FFT, which are implemented by other packages.
hz.tools/sdr/rtltcp | sdr.Receiver implementation for rtl_tcp servers.
hz.tools/sdr/pluto | sdr.Transceiver implementation for the PlutoSDR using libiio.
hz.tools/sdr/uhd | sdr.Transceiver implementation for UHD radios, specifically the B210 and B200mini.
hz.tools/sdr/hackrf | sdr.Transceiver implementation for the HackRF using libhackrf.
hz.tools/sdr/mock | Mock SDR for testing purposes.
hz.tools/sdr/airspyhf | sdr.Receiver implementation for the AirspyHF+ Discovery with libairspyhf.
hz.tools/sdr/internal/simd | SIMD helpers for IQ operations, written in Go ASM. This isn't the best place to learn from, and it contains pure Go implementations alongside.
hz.tools/sdr/stream | Common Reader/Writer helpers that operate on IQ streams.

hz.tools/fftw - hz.tools/sdr/fft implementation

The hz.tools/fftw package contains bindings to libfftw3 to implement the hz.tools/sdr/fft.Planner type to transform between the time and frequency domain. The git repo can be found at github.com/hztools/go-fftw, and is importable as hz.tools/fftw. This is the default throughout most of my codebase, although that default is only expressed at the leaf package: libraries should not hardcode the use of this library, and should instead take an fft.Planner, unless it's used as part of testing. There are a bunch of ways to do an FFT out there; things like clFFT or a pure-Go FFT implementation could be plugged in depending on what's being solved for.
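A sketch of that dependency-injection pattern (the Waterfall type and its constructor are illustrative stand-ins of my own, not part of hz.tools): a library takes an fft.Planner as a parameter rather than importing hz.tools/fftw directly, and the caller passes fftw.Plan, exactly as the FM demodulation example below does with its Planner field.

 // Hypothetical library type that needs an FFT but does not care which
 // implementation provides it.
 type Waterfall struct {
 	planner fft.Planner
 }

 // NewWaterfall takes the FFT implementation as an argument; tests can inject
 // a pure-Go planner, real use can pass fftw.Plan.
 func NewWaterfall(planner fft.Planner) *Waterfall {
 	return &Waterfall{planner: planner}
 }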

hz.tools/{fm,am} - analog audio demodulation and modulation

The hz.tools/fm and hz.tools/am packages contain demodulators for AM analog radio and FM analog radio. This code is a bit old, so it has a lot of room for cleanup, but it'll do a very basic demodulation of IQ to audio. The git repos can be found at github.com/hztools/go-fm and github.com/hztools/go-am, and are importable as hz.tools/fm and hz.tools/am. As a bonus, the hz.tools/fm package also contains a modulator, which has been tested on the air and with some of my handheld radios. This code is a bit old, since the hz.tools/fm code is effectively the first IQ processing code I'd ever written, but it still runs and I run it from time to time.
 // Basic sketch for playing FM radio using a reader stream from
 // an SDR or other IQ stream.

 bandwidth := 150 * rf.KHz
 reader, err = stream.ConvertReader(reader, sdr.SampleFormatC64)
 if err != nil {
 	...
 }
 demod, err := fm.Demodulate(reader, fm.DemodulatorConfig{
 	Deviation:  bandwidth / 2,
 	Downsample: 8, // some value here depending on sample rate
 	Planner:    fftw.Plan,
 })
 if err != nil {
 	...
 }
 speaker, err := pulseaudio.NewWriter(pulseaudio.Config{
 	Format:     pulseaudio.SampleFormatFloat32NE,
 	Rate:       demod.SampleRate(),
 	AppName:    "rf",
 	StreamName: "fm",
 	Channels:   1,
 	SinkName:   "",
 })
 if err != nil {
 	...
 }

 buf := make([]float32, 1024*64)
 for {
 	i, err := demod.Read(buf)
 	if err != nil {
 		...
 	}
 	if i == 0 {
 		panic("...")
 	}
 	if err := speaker.Write(buf[:i]); err != nil {
 		...
 	}
 }

hz.tools/rfcap - byte serialization for IQ data

The hz.tools/rfcap package is the reference implementation of the rfcap spec, and is how I store IQ captures locally and how I send them across a byte boundary. The git repo can be found at github.com/hztools/go-rfcap, and is importable as hz.tools/rfcap. If you're interested in storing IQ in a way others can use, the better approach is to use SigMF; rfcap exists for cases like using UNIX pipes to move IQ around, through APIs, or when I send IQ data through an OS socket, to ensure the sample format (and other metadata) is communicated with it. rfcap has a number of limitations; for instance, it can not express a change in frequency or sample rate during the capture, since the header is fixed at the beginning of the file.

19 February 2023

Russell Coker: New 18 Core CPU and NVMe

I just got an E5-2696 v3 CPU for my ML110 Gen9 home workstation; this has a Passmark score of 23326, which is roughly 2.5 times that of the E5-2620 v4, which rated 9224. Previously it took over 40 minutes of real time to compile a 6.10 kernel that was based on the Debian kernel configuration; now it takes 14 minutes of real time, 202 minutes of user time, and 37 minutes of system CPU time. That's a definite benefit of having a faster CPU: I don't often compile kernels, but when I do I don't want to wait 40+ minutes for a result. I also expanded the system from 96G of RAM to 128G; most of the time I don't need so much RAM, but it's better to have too much than too little, particularly as my friend got me a good deal on RAM. The extra RAM might have helped improve performance too; going from 6/8 DIMM slots full to 8/8 might help the CPU balance access.

That series of HP machines has a plastic mounting bracket for the CPU, see this video about the HP Proliant Smart Socket for details [1]. I was working on this with a friend who has the same model of HP server as I do; after buying myself a system I was so happy with it that I bought another the same when I saw it going for a good price, and then sold it to my friend when I realised that I had too many tower servers at home. It turns out that getting the same model of computer as a friend is a really good strategy, so then you can work together to solve problems with it. My friend's first idea was to try and buy new clips for the new CPUs (which would have delayed things and cost more money), but Reddit and some blog posts suggested that you can just skip the smart-socket guide clip; when the chip was resting in the socket it felt secure, as the protrusions on the sides of the socket fit firmly enough into the notches in the CPU to prevent it moving far enough to short a connection. Testing on 2 systems showed that you don't need the clip. As an aside, it would be nice if Intel made every CPU that fits a particular socket have the same physical dimensions so clips and heatsinks can work well on all CPUs.

The TDP of the new CPU is 145W and the old one was 85W. One would hope that in a server-class system that wouldn't make a lot of difference, but unfortunately the difference was significant. Previously I could have the system running 7/8 cores with BOINC 24*7 and I wouldn't notice the fans being louder. It is possible that 100% CPU use on a hot day might make the fans sound louder if I didn't have an air-conditioner on that was loud enough to drown them out, but the noteworthy fact is that with the previous CPU the system fans were a minor annoyance. Now if I have 16 cores running BOINC it's quite loud, the sort of noise that makes most people avoid using tower servers as workstations! I've found that if I limit it to 4 or 5 cores then the system is about as quiet as it was before. As a rough approximation, I can use as much CPU power as before without making the fans louder, but if I use more CPU power than was previously available it gets noisy.

I also got some new NVMe devices. I was previously using 2*Crucial 1TB P1 NVMes in a BTRFS RAID-1 and now I have 2*Crucial 1TB P3 NVMes (where P1 is the slowest Crucial offering, P3 is better and more expensive, P5 is even better, etc). When doing the BTRFS migrations to move my workstation to new NVMe devices and my server to the old NVMe devices, I found that the P3 series seem to have a limit of about 70MB/s for sustained random writes and the P1 series about 35MB/s.
Apparently the cheaper NVMe devices slow down if you do lots of random writes; it's a pity that all the review articles talking about GB/s speeds don't mention this. To see how bad reviews are, Google some reviews of these SSDs: you will find a couple of comment threads on places like Reddit about them slowing down with lots of writes, and lots of review articles on well-known sites that don't mention it. Generally I'd recommend not upgrading from P1 to P3 NVMe devices; the benefit isn't enough to cover the effort. For every capacity of NVMe device the most expensive devices cost more than twice as much as the cheapest devices, and sometimes it will be worth the money. Getting the most expensive device won't guarantee great performance, but getting cheap devices will guarantee that it's slow. It seems that CPU development isn't progressing as well as it used to; the CPU I just bought was released in 2015 and scored 23,343 according to Passmark [2]. The most expensive Intel CPU on offer at my local computer store is the i9-13900K, which was released this year and scores 62,914 [3]. One might say that CPUs designed for servers are different from ones designed for desktop PCs, but the i9 in question has a TDP Up of 253W which is too big for the PSU I have! According to the HP web site the new ML110 Gen10 servers aren't sold with a CPU as fast as the E5-2696 v3! In the period from 1988 to about 2015, every year there were new CPUs with new capabilities that were worth an upgrade. Now for the last 8 years or so there hasn't been much improvement at all. Buy a new PC for better USB ports or something, not for a faster CPU!

2 February 2023

Matt Brown: 2023 Writing Plan

To achieve my goal of publishing one high-quality piece of writing per week this year, I've put together a draft writing plan and a few organisational notes. Please let me know what you think - what's missing? What would you like to read more or less of from me? I aim for each piece of writing to generate discussion, inspire further writing, and raise my visibility and profile with potential customers and peers. Some of the writing will be opinion, but I expect a majority of it will take a learning-by-teaching approach - aiming to explain and present useful information to the reader while helping me learn more!

Topic Backlog

The majority of my writing is going to fit into 4 series, allowing me to plan out a set of posts and a narrative rather than having to come up with something novel to write about every week. To start with, for February my aim is to get an initial post in each series out the door. Long-term, it's likely that the order of posts will reflect my work focus (e.g. if I'm spending a few weeks deep-diving into a particular product idea then expect more writing on that), but I will try and maintain some variety across the different series as well. This backlog will be maintained as a living page at https://www.mattb.nz/w/queue.

Thoughts on SRE

This series of posts will be pitched primarily at potential consulting customers who want to understand how I approach the development and operations of distributed software systems. Initial topics to cover include:
  • What is SRE? My philosophy on how it relates to DevOps, Platform Engineering and various other hot terms.
  • How SRE scales up and down in size.
  • My approach to managing oncall responsibilities, toil and operational work.
  • How to grow an SRE team, including the common futility of "SRE transformations".
  • Learning from incidents, postmortems, incident response, etc.
Business plan drafts

I have an ever-growing list of potential software opportunities and products which I think would be fun to build, but which generally don't ever leave my head due to lack of time to develop the idea, or being unable to convince myself that there's a viable business case or market for it. I'd like to start sharing some very rudimentary business plan sketches for some of these ideas as a way of getting some feedback on my assessment of their potential. Whether that's confirmation that it's not worth pursuing, an expression of interest in the product, or potential partnership/collaboration opportunities - anything is better than the idea just sitting in my head. Initial ideas include:
  • Business oriented Mastodon hosting.
  • PDF e-signing - e.g. a DocuSign competitor, but with a local twist through RealMe or driver's license validation.
  • A framework to enable simple, performant per-tenant at-rest encryption for SaaS products - stop the data leaks.
Product development updates

For any product ideas that show merit and develop into a project, and particularly for the existing product ideas I've already committed to exploring, I plan to document my product investigation and market research findings as a way of structuring and driving my learning in the space. To start with this will involve:
  • A series of explanatory posts diving into how NZ's electricity system works, with a particular focus on how operational data that will be critical to managing a more dynamic grid flows (or doesn't flow!) today, and what opportunities or needs exist for generating, managing or distributing data that might be solvable with a software system I could build.
  • A series of product reviews and deep dives into existing farm management software and platforms in use by NZ farmers today, looking at the functionality they provide, how they integrate, and generally testing the anecdotal feedback I have to date that they're clunky, hard to use and not well integrated.
  • For co2mon.nz the focus will be less on market research and more on exploring potential distribution channels (e.g. direct advertising vs partnership with air conditioning suppliers) and pricing models (e.g. buy vs rent).
Debugging walk-throughs

Being able to debug and fix a system that you're not intimately familiar with is a valuable skill and something that I've always enjoyed, but it's also a skill that I observe many engineers are uncomfortable with. There's a set of techniques and processes that I've honed and developed over the years for doing this which I think make the task of debugging an unfamiliar system more approachable. The idea is that each post will take a problem or situation I've encountered, from the initial symptom or problem report, and walk through the process of how to narrow down and identify the trigger or root cause of the behaviour, discussing along the way the techniques used and their pros and cons. In addition to learning about the process of debugging itself, the aim is to illustrate lessons that can be applied when designing and building software systems that facilitate and improve our experiences in the operational stage of a system's lifecycle, where debugging takes place.

Miscellaneous topics

In addition to the regular series above, stand-alone posts on other topics may include:
  • The pros/cons I see of bootstrapping a business vs taking VC or other funding.
  • Thoughts on remote work and hiring staff.
  • AI - a confessional on how I didn't think it would progress in my lifetime, but maybe I was wrong.
  • Reflections on 15 years at Google and thoughts on subsequent events since my departure.
  • AWS vs GCP. Fight! Or with less click-bait, a level-headed comparison of the pros/cons I see in each platform.

Logistics

Discussion and comments

A large part of my motivation for writing regularly is to seek feedback and generate discussion on these topics. Typically this is done by including comment functionality within the website itself. I've decided not to do this - on-site commenting creates extra infrastructure to maintain, and limits the visibility and breadth of discussion to existing readers and followers. To provide opportunities for comment and feedback I plan to share and post notifications and summarised snippets of selected posts to various social media platforms. Links to these social media posts will be added to each piece of writing to provide a path for readers to engage and discuss further, while enabling the discussion and visibility of the post to grow and extend beyond my direct followers and subscribers. My current thinking is that I'll distribute via the following platforms:
  • Mastodon @matt@mastodon.nz - every post.
  • Twitter @xleem - selected posts. I'm trying to reduce Twitter usage in favour of Mastodon, but there's no denying that it's still where a significant number of people and discussions are happening.
  • LinkedIn - probably primarily for posts in the business plan series, and notable milestones in the product development process.
In all cases, my aim will be to post a short teaser or summary paragraph that poses a question or relays an interesting fact, to give some immediate value and signal to readers as to whether they want to click through, rather than simply spamming links into the feed.

Feedback

In addition to social media discussion, I also plan to add a direct feedback path, particularly for readers who don't have the time or inclination to participate in written discussion, by providing a simple thumbs up/thumbs down feedback widget at the bottom of each post, including those delivered via RSS and email.

Organisation

To enable subscription to subsets of my writing (particularly for places like Planet Debian, etc. where the more business-focused content is likely to be off-topic), I plan to place each post into a set of categories:
  • Business
  • Technology
  • General
In addition to the categories, I'll also use more free-form tags to group writing with linked themes or that falls within one of the series described above.

28 January 2023

Emmanuel Kasper: Table of correspondence between AWS / Azure / Red Hat OpenShift Container Platform / upstream projects

If you know the Amazon Web Services or Azure portfolio, and you are interested in OpenShift or the OKD OpenShift community distribution, this is a table of corresponding technologies. OpenShift is Red Hat's Kubernetes distribution: it is basically upstream Kubernetes delivered with monitoring, logging, CI/CD, an underlying OS, and tested upgrade paths not found with a manual kubernetes.io kubeadm install. After passing the two corresponding certifications, my opinion on cloud operators is that they are very much a step back in the direction of proprietary software. You can rebuild their cloud stack with open-source components, but it is also a lot of integration work, similar to using the Linux From Scratch distribution instead of something like Debian. A good middle point are the OpenShift and OKD Kubernetes distributions, which integrate the most common cloud components but allow an installation on your own hardware or the cloud provider of your choice.
AWS | Azure | OpenShift | OpenShift upstream project
CloudTrail | | Kubernetes API Server audit log | Kubernetes
CloudWatch | Azure Monitor, Azure Log Analytics | OpenShift Monitoring | Prometheus, Kubernetes Metrics
AWS Artifact | | Compliance Operator | OpenSCAP
AWS Trusted Advisor | Azure Advisor | Insights |
AWS Marketplace | | Red Hat Marketplace | Operator Hub
AWS Identity and Access Management (IAM) | Azure Active Directory, Azure AD DS | Red Hat SSO | Keycloak
AWS Elastic Beanstalk | Azure App Services | OpenShift Source2Image (S2I) | Source2Image (S2I)
AWS S3 | Azure Blob Storage** | ODF Rados Gateway | Rook RGW
AWS Elastic Block Storage | Azure Disk Storage | ODF Rados Block Device | Rook RBD
AWS Elastic File System | Azure Files | ODF Ceph FS | Rook CephFS
AWS ELB Classic | Azure Load Balancer | MetalLB Operator | MetalLB
AWS ELB Application Load Balancer | Azure Application Gateway | OpenShift Router | HAProxy
Amazon Simple Notification Service | | OpenShift Streams for Apache Kafka | Apache Kafka
Amazon GuardDuty | Microsoft Defender for Cloud | API Server audit log review, ACS Runtime detection | Stackrox
Amazon Inspector | Microsoft Defender for Cloud | Quay.io container scanner, ACS Vulnerability Assessment | Clair, Stackrox
AWS Lambda | Azure Serverless | OpenShift Serverless* | Knative
AWS Key Management System | Azure Key Vault | could be done with Hashicorp Vault | Vault
AWS WAF | | NGINX Ingress Controller Operator with ModSecurity | NGINX ModSecurity
Amazon ElastiCache | | Redis Enterprise Operator | Redis, memcached as alternative
AWS Relational Database Service | Azure SQL | Crunchy Data Operator | PostgreSQL
| Azure Arc | OpenShift ACM | Open Cluster Management
AWS Scaling Group | Azure Scale Set | OpenShift Autoscaler | OKD Autoscaler
* OpenShift Serverless requires the application to be packaged as a container, something AWS Lambda does not require.
** Azure Blob Storage covers the object storage use case of S3, but is itself not S3 compatible.

9 January 2023

Russ Allbery: Review: Black Stars

Review: Black Stars, edited by Nisi Shawl & Latoya Peterson
Publisher: Amazon Original Stories
Copyright: August 2021
ISBN: 1-5420-3272-5
ISBN: 1-5420-3270-9
ISBN: 1-5420-3271-7
ISBN: 1-5420-3273-3
ISBN: 1-5420-3268-7
ISBN: 1-5420-3269-5
Format: Kindle
Pages: 168
This is a bit of an odd duck from a metadata standpoint. Black Stars is a series of short stories (maybe one creeps into novelette range) published by Amazon for Kindle and audiobook. Each one can be purchased separately (or "borrowed" with Amazon Prime), and they have separate ISBNs, so my normal practice would be to give each its own review. They're much too short for that, though, so I'm reviewing the whole group as an anthology. The cover in the sidebar is for the first story of the series. The other covers have similar designs. I think the one for "We Travel the Spaceways" was my favorite. Each story is by a Black author and most of them are science fiction. ("The Black Pages" is fantasy.) I would classify them as afrofuturism, although I don't have a firm grasp on its definition. This anthology included several authors I've been meaning to read and was conveniently available, so I gave it a try, even though I'm not much of a short fiction reader. That will be apparent in the forthcoming grumbling.

"The Visit" by Chimamanda Ngozi Adichie: This is a me problem rather than a story problem, and I suspect it's partly because the story is not for me, but I am very done with gender-swapped sexism. I get the point of telling stories of our own society with enough alienation to force the reader to approach them from a fresh angle, but the problem with a story where women are sexist and condescending to men is that you're still reading a story of condescending sexism. That's particularly true when the analogies to our world are more obvious than the internal logic of the story world, as they are here. "The Visit" tells the story of a reunion between two college friends, one of whom is now a stay-at-home husband and the other of whom has stayed single. There's not much story beyond that, just obvious political metaphor (the Male Masturbatory Act to ensure no potential child is wasted, blatant harassment of the two men by female cops) and depressing character studies. Everyone in this story is an ass except maybe Obinna's single friend Eze, which means there's nothing to focus on except the sexism. The writing is competent and effective, but I didn't care in the slightest about any of these people or anything that was happening in their awful, dreary world. (4)

"The Black Pages" by Nnedi Okorafor: Issaka has been living in Chicago, but the story opens with him returning to Timbouctou where he grew up. His parents know he's coming for a visit, but he's a week early as a surprise. Unfortunately, he's arriving at the same time as an al-Qaeda attack on the library. They set it on fire, but most of the books they were trying to destroy were already saved by his father and are now in Issaka's childhood bedroom. Unbeknownst to al-Qaeda, one of the books they did burn was imprisoning a djinn. A djinn who is now free and resident in Issaka's iPad. This was a great first chapter of a novel. The combination of a modern setting and a djinn trapped in books with an instant affinity with technology was great. Issaka is an interesting character who is well-placed to introduce the reader to the setting, and I was fully invested in Issaka and Faro negotiating their relationship. Then the story just stopped. I didn't understand the ending, which was probably me being dim, but the real problem was that I was not at all ready for an ending. I would read the novel this was setting up, though. (6)

"2043... (A Merman I Should Turn to Be)" by Nisi Shawl: This is another story that felt like the setup for a novel, although not as good of a novel. The premise is that the United States has developed biological engineering that allows humans to live underwater for extended periods (although they still have to surface occasionally for air, like whales). The use to which that technology is being put is a rerun of Liberia with less colonialism: Blacks are given the option to be modified into merpeople and live under the sea off the US coast as a solution. White supremacists are not happy, of course, and try to stop them from claiming their patch of ocean floor. This was fine, as far as it went, but I wasn't fond of the lead character and there wasn't much plot. There was some sort of semi-secret plan that the protagonist stumbles across and that never made much sense to me. The best parts of the story were the underwater setting and the semi-realistic details about the merman transformation. (6)

"These Alien Skies" by C.T. Rwizi: In the far future, humans are expanding across the galaxy via automatically-constructed wormhole gates. Msizi's job is to be the first ship through a new wormhole to survey the system previously reached only by the AI construction ship. The wormhole is not supposed to explode shortly after he goes through, leaving him stranded in an alien system with only his companion Tariro, who is not who she seems to be. This was a classic SF plot, but I still hadn't guessed where it was going, or the relevance of some undiscussed bits of Tariro's past. Once the plot happens, it's a bit predictable, but I enjoyed it despite the depressed protagonist. (6)

"Clap Back" by Nalo Hopkinson: Apart from "The Visit," this was the most directly political of the stories. It opens with Wenda, a protest artist, whose final class project uses nanotech to put racist tchotchkes to an unexpected use. This is intercut with news clippings about a (white and much richer) designer who has found a way to embed memories into clothing and is using this to spread quotes of rather pointed "forgiveness" from a Malawi quilt. This was one of the few entries in this anthology that fit the short story shape for me. Wenda's project and Burri's clothing interact fifty years later in a surprising way. This was the second-best story of the group. (7)

"We Travel the Spaceways" by Victor LaValle: Grimace (so named because he wears a huge purple coat) is a homeless man in New York who talks to cans. Most of his life is about finding food, but the cans occasionally give him missions and provide minor assistance. Apart from his cans, he's very much alone, but when he comforts a woman in McDonald's (after getting caught thinking about stealing her cheeseburger), he hopes he may have found a partner. If, that is, she still likes him when she discovers the nature of the cans' missions. This was the best-written story of the six. Grimace is the first-person narrator, and LaValle's handling of characterization and voice is excellent. Grimace makes perfect sense from inside his head, but the reader can also see how unsettling he is to those around him. This could have been a disturbing, realistic story about a schizophrenic man. As one may have guessed from the theme of the anthology, that's not what it is. I admired the craft of this story, but I found Grimace's missions too horrific to truly like it. There is an in-story justification for them; suffice it to say that I didn't find it believable.
An expansion with considerably more detail and history might have bridged that gap, but alas, short fiction. (6) Rating: 6 out of 10

30 December 2022

Chris Lamb: Favourite books of 2022: Non-fiction

In my three most recent posts, I went over the memoirs and biographies, classics and fiction books that I enjoyed the most in 2022. But in the last of my book-related posts for 2022, I'll be going over my favourite works of non-fiction. Books that just missed the cut here include Adam Hochschild's King Leopold's Ghost (1998) on the role of Leopold II of Belgium in the Congo Free State, Johann Hari's Stolen Focus (2022) (a personal memoir relating to how technology is increasingly fragmenting our attention), Amia Srinivasan's The Right to Sex (2021) (a misleadingly named set of philosophic essays on feminism), Dana Heller et al.'s The Selling of 9/11: How a National Tragedy Became a Commodity (2005), John Berger's mindbending Ways of Seeing (1972) and Louise Richardson's What Terrorists Want (2006).

The Great War and Modern Memory (1975)
Wartime: Understanding and Behavior in the Second World War (1989)
Paul Fussell

Rather than describe the battles, weapons, geopolitics or big personalities of the two World Wars, Paul Fussell's The Great War and Modern Memory & Wartime are focused instead on how the two wars have been remembered by their everyday participants. Drawing on the memoirs and memories of soldiers and civilians, along with a brief comparison with the actual events that shaped them, Fussell's two books are compassionate, insightful and moving pieces of analysis. Fussell primarily sets himself against the admixture of nostalgia and trauma that obscures the origins and unimaginable experience of participating in these wars; two wars that were, in his view, a "perceptual and rhetorical scandal from which total recovery is unlikely." He takes particular aim at the dishonesty of hindsight:
For the past fifty years, the Allied war has been sanitised and romanticised almost beyond recognition by the sentimental, the loony patriotic, the ignorant and the bloodthirsty. I have tried to balance the scales. [And] in unbombed America especially, the meaning of the war [seems] inaccessible.
The author does not engage in any of the customary rose-tinted views of war, yet he remains understanding and compassionate towards those who try to locate a reason within what was quite often senseless barbarism. If anything, his despondency and pessimism about the Second World War (the war that Fussell himself fought in) shine through quite acutely, and this is especially the case in what he chooses to quote from others:
"It was common [ ] throughout the [Okinawa] campaign for replacements to get hit before we even knew their names. They came up confused, frightened, and hopeful, got wounded or killed, and went right back to the rear on the route by which they had come, shocked, bleeding, or stiff. They were forlorn figures coming up to the meat grinder and going right back out of it like homeless waifs, unknown and faceless to us, like unread books on a shelf."
It would take a rather heartless reader to fail to be sobered by this final simile, and an even colder one to view Fussell's citation of such an emotive anecdote as manipulative. Still, stories and cruel ironies like this one infuse this often-angry book, but it is not without astute and shrewd analysis as well, especially on the many qualitative differences between the two conflicts that simply cannot be captured by facts and figures alone. For example:
A measure of the psychological distance of the Second [World] War from the First is the rarity, in 1914-1918, of drinking and drunkenness poems.
Indeed so. In fact, what makes Fussell's project so compelling and perhaps even unique is that he uses these non-quantitative measures to try and take stock of what happened. After all, this was a war conducted by humans, not the abstract school of statistics. And what is the value of a list of armaments destroyed by such-and-such a regiment when compared with truly consequential insights into how the war affected, say, the psychology of postwar literature ("Prolonged trench warfare, whether enacted or remembered, fosters paranoid melodrama, which I take to be a primary mode in modern writing."), the specific words adopted by combatants ("It is a truism of military propaganda that monosyllabic enemies are easier to despise than others") as well as the very grammar of interaction:
The Field Service Post Card [in WW1] has the honour of being the first widespread exemplary of that kind of document which uniquely characterises the modern world: the "Form". [And] as the first widely known example of dehumanised, automated communication, the post card popularised a mode of rhetoric indispensable to the conduct of later wars fought by great faceless conscripted armies.
And this wouldn't be a book review without argument-ending observations that:
Indicative of the German wartime conception [of victory] would be Hitler and Speer's elaborate plans for the ultimate reconstruction of Berlin, which made no provision for a library.
Our myths about the two world wars possess an undisputed power, in part because they contain an essential truth: the atrocities committed by Germany and its allies were not merely extreme or revolting, but their full dimensions (embodied in the Holocaust and the Holodomor) remain essentially inaccessible within our current ideological framework. Yet the two wars are better understood as an abyss in which we were all dragged into the depths of moral depravity, rather than a battle pitched by the forces of light against the forces of darkness. Fussell is one of the few observers that can truly accept and understand this truth and is still able to speak to us cogently on the topic from the vantage point of experience. The Second World War, which looms so large in our contemporary understanding of the modern world (see below), may have been necessary and unavoidable, but Fussell convinces his reader that it was morally complicated "beyond the power of any literary or philosophic analysis to suggest," and that the only way to maintain a naïve belief in the myth that these wars were a Manichaean fight between good and evil is to overlook reality. There are many texts on the two World Wars that can either stir the intellect or move the emotions, but Fussell's two books do both. A uniquely perceptive and intelligent commentary; outstanding.

Longitude (1995)
Dava Sobel

Since Man first decided to sail the oceans, knowing one's location has always been critical. Yet doing so reliably used to be a serious problem: if you didn't know where you were, you were far more likely to die and/or lose your valuable cargo. But whilst finding one's latitude (ie. your north-south position) had effectively been solved by the beginning of the 17th century, finding one's (east-west) longitude was far from trustworthy in comparison. This book, first published in 1995, is therefore something of an anachronism. As in, we readily use the GPS facilities of our phones today without hesitation, so we find it difficult to imagine a reality in which knowing something fundamental like your own location was essentially unthinkable. It became clear in the 18th century, though, that in order to accurately determine one's longitude, what you actually needed was an accurate clock. In Longitude, therefore, we read of the remarkable story of John Harrison and his quest to create a timepiece that would not only keep time during a long sea voyage but would survive the rough ocean conditions as well. Self-educated and a carpenter by trade, Harrison made a number of important breakthroughs in keeping accurate time at sea, and Longitude describes his novel breakthroughs in a way that is both engaging and without talking down to the reader. Still, this book covers much more than that, including the development of accurate longitude going hand-in-hand with advancements in cartography as well as in scientific experiments to determine the speed of light: experiments that led to the formulation of quantum mechanics. It also outlines the work being done by Harrison's competitors. 'Competitors' is indeed the correct word here, as Parliament offered a huge prize to whoever could create such a device, and the ramifications of this tremendous financial incentive are an essential part of this story. For the most part, though, Longitude sticks to the story of Harrison and his evolving obsession with creating the perfect timepiece. Indeed, one reason that Longitude is so resonant with readers is that many of the tropes of the archetypical 'English inventor' are embedded within Harrison himself. That is to say, here is a self-made man pushing against the establishment of the time, with his groundbreaking ideas being underappreciated in his life, or dishonestly purloined by his intellectual inferiors. At the level of allegory, then, I am minded to interpret this portrait of Harrison as a symbolic distillation of postwar Britain: a nation acutely embarrassed by the loss of the Empire that is now repositioning itself as a resourceful but plucky underdog; a country that, with a combination of the brains of boffins and a healthy dose of charisma and PR, can still keep up with the big boys. (It is this same search for postimperial meaning I find in the fiction of John le Carré, and, far more famously, in the James Bond franchise.) All of this is left to the reader, of course, as what makes Longitude singularly compelling is its gentle manner and tone. Indeed, at times it was as if the doyenne of sci-fi Ursula K. Le Guin had a sideline in popular non-fiction. I realise it's a mark of critical distinction to downgrade the importance of popular science in favour of erudite academic texts, but Longitude is ample evidence that so-called 'pop' science need not be patronising or reductive at all.

Closed Chambers: The Rise, Fall, and Future of the Modern Supreme Court (1998)
Edward Lazarus

After the landmark decision by the U.S. Supreme Court in Dobbs v. Jackson Women's Health Organization that ended the Constitutional right to abortion conferred by Roe v. Wade, I prioritised a few books in the queue about the judicial branch of the United States. One of these books was Closed Chambers, which attempts to assay, according to its subtitle, "The Rise, Fall and Future of the Modern Supreme Court". This book is not merely a learned guide to the history and functioning of the Court (although it is completely creditable in this respect); it's actually an 'insider' view of the workings of the institution, as Lazarus was a clerk for Justice Harry Blackmun during the October term of 1988. Lazarus has therefore combined his experience as a clerk and his personal reflections (along with a substantial body of subsequent research) in order to communicate the collapse in comity between the Justices. Part of this book is therefore a pure history of the Court, detailing its important nineteenth-century judgements (such as Dred Scott, which ruled that the Constitution did not consider Blacks to be citizens; and Plessy v. Ferguson, which failed to find protection in the Constitution against racial segregation laws), as well as many twentieth-century cases that touch on the rather technical principle of substantive due process. Other layers of Lazarus' book are explicitly opinionated, however, and they capture the author's assessment of the Court's actions in the past and present [1998] day. Given the role in which he served at the Court, particular attention is given by Lazarus to the function of its clerks. These are revealed as being far more than the mere amanuenses they were hitherto believed to be. Indeed, the book is potentially unique in its claim that the clerks have played a pivotal role in the deliberations, machinations and eventual rulings of the Court. By implication, then, the clerks have played a crucial role in the internal controversies that surround many of the high-profile Supreme Court decisions; decisions that, to the outsider at least, are presented as disinterested interpretations of the Constitution of the United States. This is of especial importance given that, to Lazarus, "for all the attention we now pay to it, the Court remains shrouded in confusion and misunderstanding." Throughout his book, Lazarus complicates the commonplace view that the Court is divided into two simple right vs. left political factions, and instead documents an ever-evolving series of loosely held but strongly felt cabals, quid pro quo exchanges, outright equivocation and pure personal prejudices. (The age and concomitant illnesses of the Justices also appear to have a not insignificant effect on the Court's rulings as well.) In other words, Closed Chambers is not a book that will be read in a typical civics class in America, and the only time the book resorts to the customary breathless rhetoric about the US federal government is in its opening chapter:
The Court itself, a Greek-style temple commanding the crest of Capitol Hill, loomed above them in the dim light of the storm. Set atop a broad marble plaza and thirty-six steps, the Court stands in splendid isolation appropriate to its place at the pinnacle of the national judiciary, one of the three independent and "coequal" branches of American government. Once dubbed the Ivory Tower by architecture critics, the Court has a Corinthian colonnade and massive twenty-foot-high bronze doors that guard the single most powerful judicial institution in the Western world. Lights still shone in several offices to the right of the Court's entrance, and [...]
Et cetera, et cetera. But, of course, this encomium to the inherent 'nobility' of the Supreme Court is quickly revealed to be a narrative foil, as Lazarus soon razes this dangerously naïve conception to the ground:
[The] institution is [now] broken into unyielding factions that have largely given up on a meaningful exchange of their respective views or, for that matter, a meaningful explication or defense of their own views. It is of Justices who in many important cases resort to transparently deceitful and hypocritical arguments and factual distortions as they discard judicial philosophy and consistent interpretation in favor of bottom-line results. This is a Court so badly splintered, yet so intent on lawmaking, that shifting 5-4 majorities, or even mere pluralities, rewrite whole swaths of constitutional law on the authority of a single, often idiosyncratic vote. It is also a Court where Justices yield great and excessive power to immature, ideologically driven clerks, who in turn use that power to manipulate their bosses and the institution they ostensibly serve.
Lazarus does not put forward a single, overarching thesis, but in the final chapters, he does suggest a potential future for the Court:
In the short run, the cure for what ails the Court lies solely with the Justices. It is their duty, under the shield of life tenure, to recognize the pathologies affecting their work and to restore the vitality of American constitutionalism. Ultimately, though, the long-term health of the Court depends on our own resolve on whom [we] select to join that institution.
Back in 1998, Lazarus might have had room for this qualified optimism. But from the vantage point of 2022, it appears that the "resolve" of the United States citizenry was not muscular enough to meet his challenge. After all, Lazarus was writing before Bush v. Gore in 2000, which arrogated to the judicial branch the ability to decide a presidential election; the disillusionment that followed the stonewalling of Barack Obama's nominated replacement for Scalia; and many other missteps in the Court as well. All of which have now been compounded by the Trump administration's appointment of three Republican-friendly justices to the Court, including hypocritically appointing Justice Barrett a mere 38 days before the 2020 election. And, of course, the leaking and ruling in Dobbs v. Jackson, the true extent of which has not yet been felt. Not a bit of this is Lazarus' fault, of course, but the Court's recent decisions (as well as the liberal hagiographies of 'RBG') must perforce affect one's reading of the concluding chapters. The other slight defect of Closed Chambers is that, whilst it often implies the importance of the federal and state courts within the judiciary, it only briefly positions the Supreme Court's decisions in relation to what was happening in the House, Senate and White House at the time. This seems to be increasingly relevant as time goes on: after all, it seems fairly clear even to this Brit that relying on an activist Supreme Court to enact progressive laws must be interpreted as a failure of the legislative branch to overcome the perennial problems of the filibuster, culture wars and partisan bickering. Nevertheless, Lazarus' book is in equal parts ambitious, opinionated, scholarly and, dare I admit it, wonderfully gossipy. By juxtaposing history, memoir, and analysis, Closed Chambers combines an exacting evaluation of the Court's decisions with a lively portrait of the intellectual and emotional intensity that has grown within the Supreme Court's pseudo-monastic environment, all while it struggles with the most impactful legal issues of the day. This book is an excellent and well-written achievement that will likely never be repeated, and a must-read for anyone interested in this ever-increasingly important branch of the US government.

Crashed: How a Decade of Financial Crises Changed the World (2018)
Shutdown: How Covid Shook the World's Economy (2021) Adam Tooze The economic historian Adam Tooze has often been labelled as an unlikely celebrity, but in the fourteen years since the global financial crisis of 2008, a growing audience has been looking for answers about the various failures of the modern economy. Tooze, a professor of history at New York's Columbia University, has written much that is penetrative and thought-provoking on this topic, and as a result, he has generated something of a cult following amongst economists, historians and the online left. I actually read two Tooze books this year. The first, Crashed (2018), catalogues the scale of government intervention required to prop up global finance after the 2008 financial crisis, and it characterises the different ways that countries around the world failed to live up to the situation, such as doing far too little, or taking action far too late. The connections between the high-risk subprime loans, credit default swaps and the resulting liquidity crisis in the US in late 2008 are fairly well known today, in part thanks to films such as Adam McKay's 2015 The Big Short and much improved economic literacy in media reportage. But Crashed makes the implicit claim that, whilst the specific and structural origins of the 2008 crisis are worth scrutinising in exacting detail, it is the reaction of states in the months and years after the crash that has been overlooked as a result. After all, this is a reaction that has not only shaped a new economic order, it has created one that does not fit any conventional idea about the way the world 'ought' to be run. Tooze connects the original American banking crisis to the (multiple) European debt crises and to a larger crisis of liberalism. Indeed, Tooze somehow manages to cover all these topics and more, weaving in Trump, Brexit and Russia's 2014 annexation of Crimea, as well as the evolving role of China in the post-2008 economic order. Where Crashed focused on the constellation of consequences that followed the events of 2008, Shutdown is a clear and comprehensive account of the way the world responded to the economic impact of Covid-19. The figures are often jaw-dropping: soon after the disease spread around the world, 95% of the world's economies contracted simultaneously, and at one point, the global economy shrank by approximately 20%. Tooze's keen and sobering analysis of what happened is made all the more remarkable by the fact that it came out whilst the pandemic was still unfolding. In fact, this leads quickly to one of the book's few flaws: by being published so quickly, Shutdown prematurely over-praises China's 'zero Covid' policy, and these remarks will make a reader today squirm in their chair. Still, despite the regularity of these references (after all, mentioning China is very useful when one is directly comparing economic figures in early 2021, for example), these are actually minor blemishes on the book's overall thesis. That is to say, Shutdown is not merely a retelling of what happened in such-and-such a country during the pandemic; it offers in effect a prediction about what might be coming next. Whilst the economic responses to Covid averted what could easily have been another Great Depression (and thus showed that the world had learned some lessons from 2008), they did so only by truly discarding the economic rule book.
The by-product of inverting this set of written and unwritten conventions that have governed the world for the past 50 years, this 'Washington consensus' if you will, has yet to be fully felt. Of course, there are many parallels between these two books by Tooze. Both the liquidity crisis outlined in Crashed and the economic response to Covid in Shutdown exposed the fact that one of the central tenets of the modern economy, i.e. that financial markets can be trusted to regulate themselves, was entirely untrue, and likely was false from the very beginning. And whilst Adam Tooze does not offer a singular piercing insight (conveying a sense of rigorous mastery instead), he may as well be asking whether we're simply going to lurch along from one crisis to the next, relying on the technocrats in power to fix problems when everything blows up again. The answer may very well be yes.

Looking for the Good War: American Amnesia and the Violent Pursuit of Happiness (2021) Elizabeth D. Samet Elizabeth D. Samet's Looking for the Good War answers the following question: what would be the result if you asked a professor of English to disentangle the complex mythology we have about WW2 in the context of the recent US exit from Afghanistan? Samet's book acts as a twenty-first-century update of a kind to Paul Fussell's two books (reviewed above), as well as a deeper meditation on the idea that each new war is seen through the lens of the previous one. Indeed, like The Great War and Modern Memory (1975) and Wartime (1989), Samet's book is a perceptive work of demystification, but whilst Fussell seems to have been inspired by his own traumatic war experience, Samet is informed not only by her teaching of West Point military cadets but by the physical and ontological wars that have occurred during her own life as well. A more scholarly and dispassionate text is the result of Samet's relative distance from armed combat, but it doesn't mean Looking for the Good War lacks energy or inspiration. Samet shares John Adams' belief that no political project can entirely shed the innate corruptions of power and ambition, and so it is crucial to analyse and re-analyse the role of WW2 in contemporary American life. She is surely correct that the Second World War has been universally elevated as a special, 'good' war. Even those with exceptionally giddy minds seem to treat WW2 as hallowed:
It is nevertheless telling that one of the few occasions to which Trump responded with any kind of restraint while he was in office was the 75th anniversary of D-Day in 2019.
What is the source of this restraint, and what has nurtured its growth in the eight decades since WW2 began? Samet posits several reasons for this, including the fact that almost all of the media about the Second World War is not only suffused with symbolism and nostalgia but, less obviously, has been made by people who have no experience of the events that they depict. Take Stephen Ambrose, the historian whose book became Steven Spielberg's Band of Brothers miniseries: "I was 10 years old when the war ended," Samet quotes Ambrose as saying. "I thought the returning veterans were giants who had saved the world from barbarism. I still think so. I remain a hero worshiper." If Looking for the Good War has a primary thesis, then, it is that childhood hero worship is no basis for a system of government, let alone a crusading foreign policy. There is a straight line (to quote this book's subtitle) from the "American Amnesia" that obscures the reality of war to the "Violent Pursuit of Happiness." Samet's book doesn't merely provide a modern appendix to Fussell's two works, however, as it adds further layers and dimensions he overlooked. For example, Samet provides some excellent insight on the role of Western, gangster and superhero movies, and she is especially good when looking at noir films as a kind of kaleidoscopic response to the Second World War:
Noir is a world ruled by bad decisions but also by bad timing. Chance, which plays such a pivotal role in war, bleeds into this world, too.
Samet rightfully weaves the role of women into the narrative as well. Women in film noir are often celebrated as 'independent' and sassy, correctly reflecting their newly-found independence gained during WW2. But these 'liberated' roles are not exactly a ringing endorsement of this independence: the 'femme fatale' and the 'tart', etc., reflect a kind of conditional freedom permitted to women by a post-War culture which is still wedded to an outmoded honour culture. In effect, far from being novel and subversive, these roles for women actually underwrote the ambient cultural disapproval of women's presence in the workforce. Samet later connects this highly-conditional independence with the liberation of Afghan women, which:
is inarguably one of the more palatable outcomes of our invasion, and the protection of women's rights has been invoked on the right and the left as an argument for staying the course in Afghanistan. How easily consequence is becoming justification. How flattering it will be one day to reimagine it as original objective.
Samet has ensured her book has a predominantly US angle as well, for she ends it with a chapter on the pseudohistorical Lost Cause of the Civil War. The legacy of the Civil War is still visible in the physical phenomena of Confederate statues, but it also exists in deep-rooted racial injustice that has been shrouded in euphemism and other psychological devices for over 150 years. Samet believes that a key part of what drives the American mythology about the Second World War is the way in which it subconsciously cleanses the horrors of brother-on-brother murder that were seen in the Civil War. This is a book that is not only of interest to historians of the Second World War; it is a work for anyone who wishes to understand almost any American historical event, social issue, politician or movie that has appeared since the end of WW2. That is, for better or worse, everyone on earth.

27 December 2022

Chris Lamb: Favourite books of 2022: Fiction

This post marks the beginning of my yearly roundups of the favourite books and movies that I read and watched in 2022, which I plan to publish over the next few days. Just as I did for 2020 and 2021, I won't reveal precisely how many books I read in the last year. I didn't get through as many books as I did in 2021, but that's partly due to reading a significant number of long nineteenth-century novels, a fair number of those books that American writer Henry James once referred to as "large, loose, baggy monsters." However, in today's post I'll be looking at my favourite books that are typically filed under fiction, with 'classic' fiction following tomorrow. Works that just missed the cut here include John O'Brien's Leaving Las Vegas, Colson Whitehead's Sag Harbor and possibly The Name of the Rose by Umberto Eco, or Elif Batuman's The Idiot. I also feel obliged to mention (or is that show off?) that I also read the 1,079-page Infinite Jest by David Foster Wallace, but I can't say it was a favourite, let alone recommend it to others unless they are in the market for a good-quality under-monitor stand.

Mona (2021) Pola Oloixarac Mona is the story of a young woman who has just been nominated for the 'most important literary award in Europe'. Mona sees the nomination as a chance to escape her substance abuse on a Californian campus and so speedily decamps to the small village in the depths of Sweden where the nominees must convene for a week before the overall winner is announced. Mona didn't disappear merely to avoid pharmacological misadventures, though, but also to avoid the growing realisation that she is being treated as something of an anthropological curiosity at her university: a female writer of colour treasured for her flourish of exotic diversity that reflects well upon her department. But Mona is now stuck in the company of her literary competitors who all have now gathered from around the world in order to do what writers do: harbour private resentments, exchange empty flattery, embody the selfsame racialised stereotypes that Mona left the United States to avoid, stab rivals in the back, drink too much, and, of course, go to bed together. But as I read Mona, I slowly started to realise that something else is going on. Why does Mona keep finding traces of violence on her body, the origins of which she cannot or refuses to remember? There is something eerily defensive about her behaviour and sardonic demeanour in general as well. A genre-bending and mind-expanding novel unfolded itself, and, without getting into spoiler territory, Mona concludes with such a surprising ending that, according to Adam Thirlwell:
Perhaps we need to rethink what is meant by a gimmick. If a gimmick is anything that we want to reject as extra or excessive or ill-fitting, then it may be important to ask what inhibitions or arbitrary conventions have made it seem like excess, and to revel in the exorbitant fictional constructions it produces. [...]
Mona is a savage satire of the literary world, but it's also a very disturbing exploration of trauma and violence. The success of the book comes in equal measure from the author's commitment to both ideas, but also from the way the psychological damage component creeps up on you. And, as implied above, the last ten pages are quite literally out of this world.

My Brilliant Friend (2011)
The Story of a New Name (2012)
Those Who Leave and Those Who Stay (2013)
The Story of the Lost Child (2014) Elena Ferrante Elena Ferrante's Neapolitan Quartet follows two girls, both brilliant in their own way. Our protagonist-narrator is Elena, a studious girl from the lower rungs of the middle class of Naples who is inspired to be more by her childhood friend, Lila. Lila is, in turn, far more restricted by her poverty and class, but can transcend it at times through her fiery nature, which also brands her as somewhat unique within their inward-looking community. The four books follow the two girls from the perspective of Elena as they grow up together in post-war Italy, where they drift in-and-out of each other's lives due to the vicissitudes of change and the consequences of choice. All the time this is unfolding, however, the narrative is always slightly charged by the background knowledge revealed on the very first page that Lila will, many years later, disappear from Elena's life. Whilst the quartet has the formal properties of a bildungsroman, its subject and conception are almost entirely different. In particular, the books are driven far more by character and incident than spectacular adventures in picturesque Italy. In fact, quite the opposite takes place: these are four books where ordinary-seeming occurrences take on an unexpected radiance against a background of poverty, ignorance, violence and other threats, often bringing to mind the films of the Italian neorealism movement. The books are brilliantly rendered from beginning to end, and Ferrante has a seemingly studious eye for interpreting interactions and the psychology of adolescence and friendship. Some utterances, indeed perhaps even some glances, are dissected at length over multiple pages, something that Vittorio De Sica's classic Bicycle Thieves (1948) could never do. Potential readers should not take any notice of the saccharine cover illustrations on most editions of the books. The quartet could even win an award for the most misleading artwork, potentially rivalling even Vladimir Nabokov's Lolita. I wouldn't be at all surprised if it is revealed that the drippy illustrations and syrupy blurbs ("a rich, intense and generous-hearted story ...") turn out to be part of a larger metatextual game that Ferrante is playing with her readers. This idiosyncratic view of mine is partially supported by the fact that each of the four books has been given a misleading title, the true ambiguity of which often only becomes clear as each of the four books comes into sharper focus. Readers of the quartet often fall into debating which is the best of the four. I've heard from more than one reader that one has 'too much Italian politics' and another doesn't have enough 'classic' Lina moments. The first book then possesses the twin advantages of both establishing the environs and finishing with a breathtaking ending that is both satisfying and a cliffhanger as well, but does this make it 'the best'? I prefer to liken the quartet to the different seasons of The Wire (2002-2008) where, personal favourites and preferences aside, although each season is undoubtedly unique, it would take a certain kind of narrow-minded view of art to make the claim that, say, series one of The Wire is 'the best' or that the season that focuses on the Baltimore docks 'is boring'. Not to sound like a neo-Wagnerian, but each of them adds to the final result in its own way. That is to say, both The Wire and the Neapolitan Quartet achieve the rare feat of making the magisterial simultaneously intimate.

Out There: Stories (2022) Kate Folk Out There is a riveting collection of disturbing short stories by first-time author Kate Folk. The title story, which first appeared in the New Yorker in early 2020, imagines a near-future setting where a group of uncannily handsome artificial men called 'blots' have arrived on the San Francisco dating scene with the secret mission of sleeping with women, before stealing their personal data from their laptops and phones and then (quite literally) evaporating into thin air. Folk's satirical style is not at all didactic, so it rarely feels like she is making her points in a pedantic manner. But it's clear that the narrator of Out There is recounting her frustration with online dating in a way that will resonate with anyone who's spent time with dating apps or indeed the contemporary hyper-centralised platform-based internet in general. Part social satire, part ghost story and part comic tale, the blurring of the lines between these elements is only one of the things that makes these stories so compelling. But whilst Folk constructs crazy scenarios and intentionally strange worlds, she also manages to populate them with characters that feel real and genuinely sympathetic. Indeed, I challenge you not to feel some empathy for the 'blot' in the companion story Big Sur which concludes the collection, and it complicates any primary-coloured view of the dating world as consisting entirely of predatory men. And all of this is leavened with a few stories that are just plain surreal. I don't know what the deal is with Dating a Somnambulist (available online on Hobart Pulp), but I know that I like it.

Solaris (1961) Stanislaw Lem When Kelvin arrives at the planet Solaris to study the strange ocean that covers its surface, instead of finding an entirely physical scientific phenomenon, he soon discovers a previously unconscious memory embodied in the physical manifestation of a long-dead lover. The other scientists on the space station slowly reveal that they are also plagued with their own repressed corporeal memories. Many theories are put forward as to why all this is occurring, including the idea that Solaris is a massive brain that creates these incarnate memories. Yet if that is the case, the planet's purpose in doing so is entirely unknown, forcing the scientists to shift focus and wonder whether they can truly understand the universe without first understanding what lies within their own minds and in their desires. This would be an interesting outline for any good science fiction book, but one of the great strengths of Solaris is not only that it withholds from the reader why the planet is doing anything it does, but that the book is so forcefully didactic in its dislike of the hubris, destructiveness and colonial thinking that can accompany scientific exploration. In one of its most vitriolic passages, Lem's own anger might be reaching out to the reader:
We are humanitarian and chivalrous; we don't want to enslave other races, we simply want to bequeath them our values and take over their heritage in exchange. We think of ourselves as the Knights of the Holy Contact. This is another lie. We are only seeking Man. We have no need of other worlds. We need mirrors. We don't know what to do with other worlds. A single world, our own, suffices us; but we can't accept it for what it is. We are searching for an ideal image of our own world: we go in quest of a planet, of a civilisation superior to our own, but developed on the basis of a prototype of our primaeval past. At the same time, there is something inside us that we don't like to face up to, from which we try to protect ourselves, but which nevertheless remains since we don't leave Earth in a state of primal innocence. We arrive here as we are in reality, and when the page is turned, and that reality is revealed to us, that part of our reality that we would prefer to pass over in silence, then we don't like it anymore.
An overwhelming preoccupation with this idea infuses Solaris, and it turns out to be a common theme in a lot of Lem's work of this period, such as in his 1959 'anti-police procedural' The Investigation. Perhaps it is not a dislike of exploration in general or the modern scientific method in particular, but rather a savage critique of the arrogance and self-assuredness that accompanies most forms of scientific positivism, or at least pursuits that cloak themselves under the guise of being a laudatory 'scientific' pursuit:
Man has gone out to explore other worlds and other civilizations without having explored his own labyrinth of dark passages and secret chambers and without finding what lies behind doorways that he himself has sealed.
I doubt I need to cite specific instances of contemporary scientific pursuits that might meet Lem's punishing eye today, and the fact that his critique works both in 2022 and 1961 perhaps tells us more about the human condition than we'd care to know. Another striking thing about Solaris isn't just the specific Star Trek and Stargate SG-1 episodes that I retrospectively realised were purloined from the book, but that almost the entire register of Star Trek: The Next Generation in particular seems to be rehearsed here. That is to say, TNG presents itself as hard and fact-based 'sci-fi' on the surface, but, at its core, there are often human, existential and sometimes enormously emotionally devastating themes being discussed, such as memory, loss and grief. To take one example from many, the painful memories that the planet Solaris physically materialises in effect ask us to seriously consider what is actually taking place when we 'love' another person: is it merely another 'mirror' of ourselves? (And, if that is the case, is that... bad?) It would be ahistorical to claim that all popular science fiction today can be found rehearsed in Solaris, but perhaps it isn't too much of a stretch:
[Solaris] renders unnecessary any more alien stories. Nothing further can be said on this topic [...] Possibly, it can be said that when one feels the urge for such a thing, one should simply reread Solaris and learn its lessons again. (Kim Stanley Robinson)
I could go on praising this book for quite some time; perhaps by discussing the extreme framing devices used within the book: at one point, the book diverges into a lengthy bibliography of fictional books-within-the-book, each encapsulating a different theory about what the mechanics and/or function of Solaris is, thereby demonstrating that 'Solaris studies', as it is called within the world of the book, has been going on for years with no tangible results, which actually leads to extreme embarrassment and then a deliberate and willful blindness to the 'Solaris problem' on the part of the book's scientific community. But I'll leave it all here before this review gets too long... Highly recommended, and a likely reread in 2023.

Brokeback Mountain (1997) Annie Proulx Brokeback Mountain began as a short story by American author Annie Proulx which appeared in the New Yorker in 1997, although it is now more famous for the 2005 film adaptation directed by Taiwanese filmmaker Ang Lee. Both versions follow two young men who are hired for the summer to look after sheep at a range under the 'Brokeback' mountain in Wyoming. Unexpectedly, however, they form an intense emotional and sexual attachment, yet life intervenes and demands they part ways at the end of the summer. Over the next twenty years, though, as their individual lives play out with marriages, children and jobs, they continue reuniting for brief albeit secret liaisons on camping trips in remote settings. There's no feigned shyness or self-importance in Brokeback Mountain, just a close, compassionate and brutally honest observation of a doomed relationship and a bone-deep feeling for the hardscrabble life in the post-War West. To my mind, very few books have captured so acutely the desolation of a frustrated and repressed passion, as well as the particular flavour of undirected anger that can accompany this kind of yearning. That the original novella does all this in such a beautiful way (and without the crutch of the Wyoming landscape to look at) is a tribute to Proulx's skills as a writer. Indeed, even without the devastating emotional undertones, Proulx's descriptions of the mountains and scree of the West are likely worth the read alone.

Luster (2020) Raven Leilani Edie is a young Black woman living in New York whose life seems to be spiralling out of control. She isn't good at making friends, her career is going nowhere, and she has no close family to speak of either. She is, thus, your typical NYC millennial today, albeit seen through a lens of Blackness that complicates any reductive view of her privilege or minority status. A representative paragraph might communicate the simmering tone:
Before I start work, I browse through some photos of friends who are doing better than me, then an article on a black teenager who was killed on 115th for holding a weapon later identified as a showerhead, then an article on a black woman who was killed on the Grand Concourse for holding a weapon later identified as a cell phone, then I drown myself in the comments section and do some online shopping, by which I mean I put four dresses in my cart as a strictly theoretical exercise and then let the page expire.
She starts a sort-of affair with an older white man who has an affluent lifestyle in nearby New Jersey. Eric, or so he claims, has agreed upon an 'open relationship' with his wife, but Edie is far too inappropriate and disinhibited to respect any boundaries that Eric sets for her, and so Edie soon becomes deeply entangled in Eric's family life. It soon turns out that Eric and his wife have a twelve-year-old adopted daughter, Akila, who is also, wait for it, Black. Akila has been with Eric's family for two years now and they aren't exactly coping well together. They don't even know how to help her to manage her own hair, let alone deal with structural racism. Yet despite how dark the book's general demeanour is, there are faint glimmers of redemption here and there. Realistic almost to the end, Edie might finally realise what's important in her life, but it would be a stretch to say that she achieves it by the final page. Although the book is full of acerbic remarks on almost any topic (Dogs: "We made them needy and physically unfit. They used to be wolves, now they are pugs with asthma."), it is the comments on contemporary race relations that are most critically insightful. Indeed, unsentimental, incisive and funny, Luster had much of what I like in Colson Whitehead's books at times, but I can't remember a book so frantically fast-paced as this since the Booker-prize winning The Sellout by Paul Beatty or Sam Tallent's Running the Light.

12 December 2022

Russ Allbery: Review: The Unbroken

Review: The Unbroken, by C.L. Clark
Series: Magic of the Lost #1
Publisher: Orbit
Copyright: March 2021
ISBN: 0-316-54267-9
Format: Kindle
Pages: 490
The Unbroken is the first book of a projected fantasy trilogy. It is C.L. Clark's first novel. Lieutenant Touraine is one of the Sands, the derogatory name for the Balladairan Colonial Brigade. She and the others of her squad are conscript soldiers, kidnapped by the Balladairan Empire from their colonies as children and beaten into "civilized" behavior by Balladairan training. They fought in the Balladairan war against the Taargens. Now, they've been reassigned to El-Wast, capital city of Qazāl, the foremost of the southern colonies. The place where Touraine was born, from which she was taken at the age of five. Balladaire is not France and Qazāl is not Algeria, but the parallels are obvious and strongly implied by the map and the climates. Touraine and her squad are part of the forces accompanying Princess Luca, the crown princess of the Balladairan Empire, who has been sent to take charge of Qazāl and quell a rebellion. Luca's parents died in the Withering, the latest round of a recurrent plague that haunts Balladaire. She is the rightful heir, but her uncle rules as regent and is reluctant to give her the throne. Qazāl is where she is to prove herself. If she can bring the colony in line, she can prove that she's ready to rule: her birthright and her destiny. The Qazāli are uninterested in being part of Luca's grand plan of personal accomplishment. She steps off her ship into an assassination attempt, foiled by Touraine's sharp eyes and quick reactions, which brings the Sand to the princess's attention. Touraine's reward is to be assigned the execution of the captured rebels, one of whom recognizes her and names her mother before he dies. This sets up the core of the plot: Qazāli rebellion against an oppressive colonial empire, Luca's attempt to use the colony as a political stepping stone, and Touraine caught in between. One of the reasons why I am happy to see increased diversity in SFF authors is that the way we tell stories is shaped by our cultural upbringing. I was taught to tell stories about colonialism and rebellion in a specific ideological shape. It's hard to describe briefly, but the core idea is that being under the rule of someone else is unnatural as well as being an injustice. It's a deviation from the way the world should work, something unexpected that is inherently unstable. Once people unite to overthrow their oppressors, eventual success is inevitable; it's not only right or moral, it's the natural path of history. This is what you get when you try to peel the supremacy part away from white supremacy but leave the unshakable self-confidence and bedrock assumption that the universe cares what we think. We were also taught that rebellion is primarily ideological. One may be motivated by personal injustice, but the correct use of that injustice is to subsume it into concepts such as freedom and democracy. Those concepts are more "real" in some foundational sense, more central to the right functioning of the world, than individual circumstance. When the now-dominant group tells stories of long-ago revolution, there is no personal experience of oppression and survival in which to ground the story; instead, it's linked to anticipatory fear in the reader, to the idea that one's privileges could be taken away by a foreign oppressor and that the counter to this threat is ideological unity. Obviously, not every white fantasy author uses this story shape, but the tendency runs deep because we're taught it young.
You can see it everywhere in fantasy, from Lord of the Rings to Tigana. The Unbroken uses a much different story shape, and I don't think it's a coincidence that the author is Black. Touraine is not sympathetic to the Qazāli. These are not her people and this is not her life. She went through hell in Balladairan schools, but she won a place, however tenuous. Her personal role model is General Cantic, the Balladairan Blood General who was also one of her instructors. Cantic is hard as nails, unforgiving, unbending, and probably a war criminal, but also the embodiment of a military ethic. She is tough but fair with the conscript soldiers. She doesn't put a stop to their harassment by the regular Balladairan troops, but neither does she let it go too far. Cantic has power, she knows how to keep it, and there is a place for Touraine in Cantic's world. And, critically, that place is not just hers: it's one she shares with her squad. Touraine's primary loyalty is not to Balladaire or to Qazāl. It's to the Sands. Her soldiers are neither one thing nor the other, and they disagree vehemently among themselves about what Qazāl and their other colonial homes should be to them, but they learned together, fought together, and died together. That theme is woven throughout The Unbroken: personal bonds, third and fourth loyalties, and practical ethics of survival that complicate and contradict simple dichotomies of oppressor and oppressed. Touraine is repeatedly offered ideological motives that the protagonist in the typical story shape would adopt. And she repeatedly rejects them for personal bonds: trying to keep her people safe, in a world that is not looking out for them. The consequence is that this book tears Touraine apart. She tries to walk a precarious path between Luca, the Qazāli, Cantic, and the Sands, and she falls off that path a lot. Each time I thought I knew where this book was going, there's another reversal, often brutal. I tend to be a happily-ever-after reader who wants the protagonist to get everything they need, so this isn't my normal fare. The amount of hell that Touraine goes through made for difficult reading, worse because much of it is due to her own mistakes or betrayals. But Clark makes those decisions believable given the impossible position Touraine is in and the lack of role models she has for making other choices. She's set up to fail, and the price of small victories is to have no one understand the decisions that she makes, or believe her motives. Luca is the other viewpoint character of the book (and yes, this is also a love affair, which complicates both of their loyalties). She is the heroine of a more typical genre fantasy novel: the outsider princess with a physical disability and a razor-sharp mind, ambitious but fair (at least in her own mind), with a trusted bodyguard advisor who also knew her father and a sincere desire to be kinder and more even-handed in her governance of the colony. All of this is real; Luca is a protagonist, and the reader is not being set up to dislike her. But compared to Touraine's grappling with identity, loyalty, and ethics, Luca is never in any real danger, and her concerns start to feel too calculated and superficial. It's hard to be seriously invested in whether Luca proves herself or gets her throne when people are being slaughtered and abused. This, I think, is the best part of this book.
Clark tells a traditional ideological fantasy of learning to be a good ruler, but she puts it alongside a much deeper and more complex story of multi-faceted oppression. She has the two protagonists fall in love with each other and challenges them to understand each other, and Luca does not come off well in this comparison. Touraine is frustrated, impulsive, physical, and sometimes has catastrophically poor judgment. Luca is analytical and calculating, and in most ways understands the political dynamics far better than Touraine. We know how this story usually goes: Luca sees Touraine's brilliance and lifts her out of the ranks into a role of importance and influence, which Touraine should reward with loyalty. But Touraine's world is more real, more grounded, and more authentic, and both Touraine and the reader know what Luca could offer is contingent and comes with a higher price than Luca understands. (Incidentally, the cover of The Unbroken, designed by Lauren Panepinto with art by Tommy Arnold, is astonishingly good at capturing both Touraine's character and the overall feeling of the book. Here's a larger version.) The writing is good but uneven. Clark loves reversals, and they did keep me reading, but I think there were too many of them. By the end of the book, the escalation of betrayals and setbacks was more exhausting than exciting, and I'd stopped trusting anything good would last. (Admittedly, this is an accurate reflection of how Touraine felt.) Touraine's inner monologue also gets a bit repetitive when she's thrashing in the jaws of an emotional trap. I think some of this is first-novel problems of over-explaining emotional states and character reasoning, but these problems combine to make the book feel a bit over-long. I'm also not in love with the ending. It's perhaps the one place in the book where I am more cynical about the politics than Clark is, although she does lay the groundwork for it. But this book is also full of places small and large where it goes a different direction than most fantasy and is better for it. I think my favorite small moment is Touraine's quiet refusal to defend herself against certain insinuations. This is such a beautiful bit of characterization; she knows she won't be believed anyway, and refuses to demean herself by trying. I'm not sure I can recommend this book unconditionally, since I think you have to be in the mood for it, but it's one of the most thoughtful and nuanced looks at colonialism and rebellion I can remember seeing in fantasy. I found it frustrating in places, but I'm also still thinking about it. If you're looking for a political fantasy with teeth, you could do a lot worse, although expect to come out the other side a bit battered and bruised. Followed by The Faithless, and I have no idea where Clark is going to go with the second book. I suppose I'll have to read and find out. Content note: In addition to a lot of violence, gore, and death, including significant character death, there's also a major plague. If you're not feeling up to reading about panic caused by contagious illness, proceed with caution. Rating: 7 out of 10

6 November 2022

Michael Ablassmeier: virtnbdbackup in unstable/bookworm

Besides several bugfixes, the latest version now supports using higher compression levels and logging to the syslog facility. I have also finished packaging, and official packages are now available.

4 November 2022

Alastair McKinstry: €1.3 billion announced for new Forestry Support

€1.3 billion announced for new Forestry Support. Funds to be delivered through new Forestry Programme. Premiums for planting trees to be increased by between 46% and 66% and extended to 20 years for farmers. #GreensInGovernment The Taoiseach, Micheál Martin TD, Minister of State with responsibility for Forestry, Senator Pippa Hackett, and Minister for Agriculture, Food and the Marine, Charlie McConalogue TD, today announced a proposed investment by the Government of €1.3 billion in Irish forestry. The funding will be for the next national Forestry Programme and represents the largest ever investment by an Irish Government in tree-planting. The programme will now be the subject of state-aid approval by the European Commission. The Taoiseach said: "This commitment by the Government to such a substantial financial package reflects the seriousness with which we view the climate change and biodiversity challenges, which affect all of society. Forestry is at the heart of delivering on our sustainability goals and strong support is needed to encourage engagement from all our stakeholders in reaching our objectives." Minister Hackett said: "I'm delighted to have secured a package of €1.318 billion for forestry. This will support the biggest and best-funded Forestry Programme ever in Ireland. It comes at an appropriate time, given the urgency of taking climate mitigation measures. Planting trees is one of the most effective methods of tackling climate change as well as contributing to improved biodiversity and water quality. One of my main aims is to re-engage farmers in afforestation. I'm delighted therefore to be proposing a new 20-year premium term exclusively for farmers, as well as introducing a new small-scale native woodland scheme which will allow farmers to plant up to 1 hectare of native woodland on farmland and along watercourses outside of the forestry licensing process." Minister McConalogue said: "Today we commit to providing unprecedented incentives to encourage the planting of trees that can provide a valuable addition to farm income and help to meet national climate and biodiversity objectives. This funding guarantees continued payments to those forest owners who planted under the current scheme and who are still in receipt of premiums. It also offers new and improved financial supports to those who undertake planting and sustainable forest management under the new Programme. We intend to increase premiums for planting trees by between 46% and 66% and to extend the premium period from 15 to 20 years for farmers. We are approaching a new and exciting period for forestry in Ireland. The new Forestry Programme will drive a new and brighter future for forestry, for farmers and for our climate." The proposed new Forestry Programme is currently out to public consultation as part of the Strategic Environmental Assessment and Appropriate Assessment process. The Programme is the main implementation mechanism for the new Forest Strategy (2023-2030) which reflects the ambitions contained in the recently published Shared National Vision for Trees, Woods and Forests in Ireland until 2050. The public consultation closes on 29th November, 2022 and any changes which result from this process will be incorporated into the Programme and the Forest Strategy. Minister Hackett commented: "The draft Forestry Programme and Forest Strategy are the product of extensive stakeholder consultation and feedback, and both documents are open to public consultation for the next number of weeks.
"I would strongly encourage all interested parties to engage with the consultation in advance of the Strategy and Programme being finalised." The new Programme is built around the principle of 'right trees in the right places for the right reasons with the right management'. It aims to deliver more diverse forests which will meet multiple societal objectives: economic, social and environmental. Higher grant rates for forest establishment are also proposed, with increases of approximately 20% to reflect rising living costs. The new one hectare native tree area scheme will also make it easier for landowners who wish to plant small areas of trees on their farm. The Taoiseach concluded: "I welcome this milestone and I believe that this funding injection will be an important catalyst in delivering on the ambition outlined in the new Forest Strategy. Our environmental challenges are huge but so is our commitment to overcoming them and this Forestry Programme is key to delivering so many of our priorities." The new Programme will be 100% Exchequer funded and is subject to State Aid approval from the EU Commission. The Department is in contact with the Commission in relation to this approval, which is a rigorous process.

3 November 2022

Alastair McKinstry: New Planning and Environment Court will reform planning appeals pro...

New Planning and Environment Court will reform planning appeals process #GreensInGovernment The Green Party has fulfilled an important Programme for Government commitment following the cabinet's decision to establish a new division of the High Court to specialise in environmental and planning issues. This is a major reform that will allow planning law to operate in a more efficient and environmentally friendly manner. Steven Matthews TD, Green Party Spokesperson for Planning and Local Government, explained the aim of the new court: "The goal is to ensure that our judicial system has the capacity to decide cases as quickly as possible within a growing and increasingly complex body of Irish and EU environmental law. Timely access to justice is a cornerstone of the effective rule of law and an international legal commitment Ireland has entered under the Aarhus Convention. This is extremely pressing given the urgent need to ease the pressure on the housing system and develop the infrastructure we need to transition to a zero-carbon economy." The Green Party is committed to ensuring the new Court will have the resources to fulfil its role. The Government will pass new legislation to allow the number of judges to be increased and provide the necessary exchequer funding to pay for these judges. The new Court will hear all the cases currently taken by the High Court's Commercial Planning and Strategic Infrastructure Development List and other major environmental cases. This is likely to include all major infrastructure cases as well as many cases relating to EU Environmental Law such as Environmental Impact Assessment, Strategic Environmental Assessment, Birds/Habitats, the Water Framework Directive and the Industrial Emissions Directive.

27 October 2022

Michael Ablassmeier: fun with pygame

Next year my son will turn 4. I quit playing computer games a pretty long time ago, but recently i asked myself: what will be the first computer game he's going to play? Why not create a simple game by myself? Living in the countryside, his attention has been drawn to farming machines for quite some time now and that topic never grows old for him, which makes for a perfect game setting. The game logic should be pretty simple: a tiling 2d jump game where you have to make a tractor jump over appearing objects. Different vehicles and backgrounds to choose from, and a set of lives with randomly generated coins which you have to catch to undo earlier failures. Never having done anything related to pygame, the learning curve has been quite good so far :-) The part i spent most time with was searching for free assets and pixel art which i'm able to use. Gimp also made me lose quite some hair while failing to canvas/crop images to the right size so the placements within the different maps matched.. I used pyinstaller to make it somewhat portable (needs to run on windows too) and building the artifacts using github actions was quite a nice experience.. Let's see where this goes next, lots of ideas come to my mind :) https://github.com/abbbi/trktor

25 September 2022

Sergio Talens-Oliag: Kubernetes Static Content Server

This post describes how I've put together a simple static content server for kubernetes clusters using a Pod with a persistent volume and multiple containers: an sftp server to manage contents, a web server to publish them with optional access control and another one to run scripts which need access to the volume filesystem. The sftp server runs using MySecureShell, the web server is nginx and the script runner uses the webhook tool to publish endpoints to call them (the calls will come from other Pods that run backend servers or are executed from Jobs or CronJobs).

History
The system was developed because we had a NodeJS API with endpoints to upload files and store them on S3 compatible services that were later accessed via HTTPS, but the requirements changed and we needed to be able to publish folders instead of individual files using their original names and apply access restrictions using our API. Thinking about our requirements, the use of a regular filesystem to keep the files and folders was a good option, as uploading and serving files is simple. For the upload I decided to use the sftp protocol, mainly because I already had an sftp container image based on mysecureshell prepared; once we settled on that we added sftp support to the API server and configured it to upload the files to our server instead of using S3 buckets. To publish the files we added an nginx container configured to work as a reverse proxy that uses the ngx_http_auth_request_module to validate access to the files (the sub request is configurable; in our deployment we have configured it to call our API to check if the user can access a given URL). Finally we added a third container when we needed to execute some tasks directly on the filesystem (using kubectl exec with the existing containers did not seem a good idea, as that is not supported by CronJob objects, for example). The solution we found, avoiding the NIH Syndrome (i.e. writing our own tool), was to use the webhook tool to provide the endpoints to call the scripts; for now we have three (a usage sketch follows the list):
  • one to get the disc usage of a PATH,
  • one to hardlink all the files that are identical on the filesystem,
  • one to copy files and folders from S3 buckets to our filesystem.
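To give an idea of how these endpoints end up being consumed, the webhook tool publishes each configured hook over plain HTTP (by default on port 9000, under /hooks/<id>), so the callers are ordinary HTTP clients. The following is only a sketch: the hook names, the query parameters and the scs Service name are assumptions for illustration, not the real configuration used in our deployment:
# Hypothetical calls from another Pod or a Job in the same cluster; the hook
# ids (du, hardlink, s3sync) and their parameters are made up for this example.
$ curl -s "http://scs:9000/hooks/du?path=/sftp/data/scs"
$ curl -s "http://scs:9000/hooks/hardlink"
$ curl -s -X POST "http://scs:9000/hooks/s3sync" \
    -H "Content-Type: application/json" \
    -d '{"bucket": "example-bucket", "path": "/sftp/data/scs/imports"}'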

Container definitions

mysecureshell
The mysecureshell container can be used to provide an sftp service with multiple users (although the files are owned by the same UID and GID) using standalone containers (launched with docker or podman) or in an orchestration system like kubernetes, as we are going to do here (a standalone run sketch follows the Dockerfile). The image is generated using the following Dockerfile:
ARG ALPINE_VERSION=3.16.2
FROM alpine:$ALPINE_VERSION as builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN apk update &&\
 apk add --no-cache alpine-sdk git musl-dev &&\
 git clone https://github.com/sto/mysecureshell.git &&\
 cd mysecureshell &&\
 ./configure --prefix=/usr --sysconfdir=/etc --mandir=/usr/share/man\
 --localstatedir=/var --with-shutfile=/var/lib/misc/sftp.shut --with-debug=2 &&\
 make all && make install &&\
 rm -rf /var/cache/apk/*
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
COPY --from=builder /usr/bin/mysecureshell /usr/bin/mysecureshell
COPY --from=builder /usr/bin/sftp-* /usr/bin/
RUN apk update &&\
 apk add --no-cache openssh shadow pwgen &&\
 sed -i -e "s|^.*\(AuthorizedKeysFile\).*$|\1 /etc/ssh/auth_keys/%u|"\
 /etc/ssh/sshd_config &&\
 mkdir /etc/ssh/auth_keys &&\
 cat /dev/null > /etc/motd &&\
 add-shell '/usr/bin/mysecureshell' &&\
 rm -rf /var/cache/apk/*
COPY bin/* /usr/local/bin/
COPY etc/sftp_config /etc/ssh/
COPY entrypoint.sh /
EXPOSE 22
VOLUME /sftp
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]
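Before using it on kubernetes it can be handy to test the image standalone; a minimal sketch, assuming the image has been tagged stodh/mysecureshell and that the files described in the next sections are available in a local ./secrets directory, could be:
# Build the image and run the sftp server on local port 2022, mounting the
# secrets read-only and a local directory as the /sftp volume; SFTP_UID and
# SFTP_GID must be non-zero or the entrypoint aborts.
$ docker build -t stodh/mysecureshell .
$ docker run -d --name mysecureshell -p 2022:22 \
    -e SFTP_UID=1000 -e SFTP_GID=1000 \
    -v "$(pwd)/secrets:/secrets:ro" \
    -v "$(pwd)/data:/sftp" \
    stodh/mysecureshell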
The /etc/sftp_config file is used to configure the mysecureshell server to have all the user homes under /sftp/data, only allow them to see the files under their home directories as if they were at the root of the server, and close idle connections after 5m of inactivity:
etc/sftp_config
# Default mysecureshell configuration
<Default>
   # All users will have access their home directory under /sftp/data
   Home /sftp/data/$USER
   # Log to a file inside /sftp/logs/ (only works when the directory exists)
   LogFile /sftp/logs/mysecureshell.log
   # Force users to stay in their home directory
   StayAtHome true
   # Hide Home PATH, it will be shown as /
   VirtualChroot true
   # Hide real file/directory owner (just change displayed permissions)
   DirFakeUser true
   # Hide real file/directory group (just change displayed permissions)
   DirFakeGroup true
   # We do not want users to keep forever their idle connection
   IdleTimeOut 5m
</Default>
# vim: ts=2:sw=2:et
The entrypoint.sh script is the one responsible for preparing the container for the users included in the /secrets/user_pass.txt file (it creates the users with their HOME directories under /sftp/data and a /bin/false shell, and creates the key files from /secrets/user_keys.txt if available). The script expects a couple of environment variables:
  • SFTP_UID: UID used to run the daemon and for all the files, it has to be different than 0 (all the files managed by this daemon are going to be owned by the same user and group, even if the remote users are different).
  • SFTP_GID: GID used to run the daemon and for all the files, it has to be different than 0.
The script can also use the SSH_PORT and SSH_PARAMS values if present. It also requires the following files (they can be mounted as secrets in kubernetes; a sketch of creating such a secret follows the list):
  • /secrets/host_keys.txt: Text file containing the ssh server keys in mime format; the file is processed using the reformime utility (the one included on busybox) and can be generated using the gen-host-keys script included on the container (it uses ssh-keygen and makemime).
  • /secrets/user_pass.txt: Text file containing lines of the form username:password_in_clear_text (only the users included on this file are available on the sftp server, in fact in our deployment we use only the scs user for everything).
And optionally can use another one:
  • /secrets/user_keys.txt: Text file that contains lines of the form username:public_ssh_ed25519_or_rsa_key; the public keys are installed on the server and can be used to log into the sftp server if the username exists on the user_pass.txt file.
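On kubernetes those files are provided to the Pod as a Secret mounted on /secrets; a minimal sketch of creating it from local copies of the files (the secret name scs-secrets is an assumption) could be:
# The key names become file names under the mount point, so they must match
# the paths the entrypoint expects (host_keys.txt, user_pass.txt, user_keys.txt).
$ kubectl create secret generic scs-secrets \
    --from-file=host_keys.txt=./host_keys.txt \
    --from-file=user_pass.txt=./user_pass.txt \
    --from-file=user_keys.txt=./user_keys.txt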
The contents of the entrypoint.sh script are:
entrypoint.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Expects SFTP_UID & SFTP_GID on the environment and uses the value of the
# SSH_PORT & SSH_PARAMS variables if present
# SSH_PARAMS
SSH_PARAMS="-D -e -p ${SSH_PORT:=22} ${SSH_PARAMS}"
# Fixed values
# DIRECTORIES
HOME_DIR="/sftp/data"
CONF_FILES_DIR="/secrets"
AUTH_KEYS_PATH="/etc/ssh/auth_keys"
# FILES
HOST_KEYS="$CONF_FILES_DIR/host_keys.txt"
USER_KEYS="$CONF_FILES_DIR/user_keys.txt"
USER_PASS="$CONF_FILES_DIR/user_pass.txt"
USER_SHELL_CMD="/usr/bin/mysecureshell"
# TYPES
HOST_KEY_TYPES="dsa ecdsa ed25519 rsa"
# ---------
# FUNCTIONS
# ---------
# Validate HOST_KEYS, USER_PASS, SFTP_UID and SFTP_GID
_check_environment() {
  # Check the ssh server keys ... we don't boot if we don't have them
  if [ ! -f "$HOST_KEYS" ]; then
    cat <<EOF
We need the host keys on the '$HOST_KEYS' file to proceed.
Call the 'gen-host-keys' script to create and export them on a mime file.
EOF
    exit 1
  fi
  # Check that we have users ... if we don't we can't continue
  if [ ! -f "$USER_PASS" ]; then
    cat <<EOF
We need at least the '$USER_PASS' file to provision users.
Call the 'gen-users-tar' script to create an archive that contains public and
private keys for the users, a 'user_keys.txt' file with their public keys and
a 'user_pass.txt' file with random passwords for them (pass the list of
usernames to it).
EOF
    exit 1
  fi
  # Check SFTP_UID
  if [ -z "$SFTP_UID" ]; then
    echo "The 'SFTP_UID' can't be empty, pass a 'GID'."
    exit 1
  fi
  if [ "$SFTP_UID" -eq "0" ]; then
    echo "The 'SFTP_UID' can't be 0, use a different 'UID'"
    exit 1
  fi
  # Check SFTP_GID
  if [ -z "$SFTP_GID" ]; then
    echo "The 'SFTP_GID' can't be empty, pass a 'GID'."
    exit 1
  fi
  if [ "$SFTP_GID" -eq "0" ]; then
    echo "The 'SFTP_GID' can't be 0, use a different 'GID'"
    exit 1
  fi
}
# Adjust ssh host keys
_setup_host_keys() {
  opwd="$(pwd)"
  tmpdir="$(mktemp -d)"
  cd "$tmpdir"
  ret="0"
  reformime <"$HOST_KEYS" || ret="1"
  for kt in $HOST_KEY_TYPES; do
    key="ssh_host_$ kt _key"
    pub="ssh_host_$ kt _key.pub"
    if [ ! -f "$key" ]; then
      echo "Missing '$key' file"
      ret="1"
    fi
    if [ ! -f "$pub" ]; then
      echo "Missing '$pub' file"
      ret="1"
    fi
    if [ "$ret" -ne "0" ]; then
      continue
    fi
    cat "$key" >"/etc/ssh/$key"
    chmod 0600 "/etc/ssh/$key"
    chown root:root "/etc/ssh/$key"
    cat "$pub" >"/etc/ssh/$pub"
    chmod 0600 "/etc/ssh/$pub"
    chown root:root "/etc/ssh/$pub"
  done
  cd "$opwd"
  rm -rf "$tmpdir"
  return "$ret"
 
# Create users
_setup_user_pass() {
  opwd="$(pwd)"
  tmpdir="$(mktemp -d)"
  cd "$tmpdir"
  ret="0"
  [ -d "$HOME_DIR" ]   mkdir "$HOME_DIR"
  # Make sure the data dir can be managed by the sftp user
  chown "$SFTP_UID:$SFTP_GID" "$HOME_DIR"
  # Allow the user (and root) to create directories inside the $HOME_DIR, if
  # we don't allow it the directory creation fails on EFS (AWS)
  chmod 0755 "$HOME_DIR"
  # Create users
  echo "sftp:sftp:$SFTP_UID:$SFTP_GID:::/bin/false" >"newusers.txt"
  sed -n "/^[^#]/   s/:/ /p  " "$USER_PASS"   while read -r _u _p; do
    echo "$_u:$_p:$SFTP_UID:$SFTP_GID::$HOME_DIR/$_u:$USER_SHELL_CMD"
  done >>"newusers.txt"
  newusers --badnames newusers.txt
  # Disable write permission on the directory to forbid remote sftp users to
  # remove their own root dir (they have already done it); we adjust that
  # here to avoid issues with EFS (see before)
  chmod 0555 "$HOME_DIR"
  # Clean up the tmpdir
  cd "$opwd"
  rm -rf "$tmpdir"
  return "$ret"
 
# Adjust user keys
_setup_user_keys() {
  if [ -f "$USER_KEYS" ]; then
    sed -n "/^[^#]/   s/:/ /p  " "$USER_KEYS"   while read -r _u _k; do
      echo "$_k" >>"$AUTH_KEYS_PATH/$_u"
    done
  fi
}
# Main function
exec_sshd() {
  _check_environment
  _setup_host_keys
  _setup_user_pass
  _setup_user_keys
  echo "Running: /usr/sbin/sshd $SSH_PARAMS"
  # shellcheck disable=SC2086
  exec /usr/sbin/sshd -D $SSH_PARAMS
}
# ----
# MAIN
# ----
case "$1" in
"server") exec_sshd ;;
*) exec "$@" ;;
esac
# vim: ts=2:sw=2:et
The container also includes a couple of auxiliary scripts, the first one can be used to generate the host_keys.txt file as follows:
$ docker run --rm stodh/mysecureshell gen-host-keys > host_keys.txt
Where the script is as simple as:
bin/gen-host-keys
#!/bin/sh
set -e
# Generate new host keys
ssh-keygen -A >/dev/null
# Replace hostname
sed -i -e 's/@.*$/@mysecureshell/' /etc/ssh/ssh_host_*_key.pub
# Print in mime format (stdout)
makemime /etc/ssh/ssh_host_*
# vim: ts=2:sw=2:et
There is also a script to generate a .tar file that contains auth data for the list of usernames passed to it (the archive contains a user_pass.txt file with random passwords for the users, public and private ssh keys for them, and a user_keys.txt file that matches the generated keys). To generate a tar file for the user scs we can execute the following:
$ docker run --rm stodh/mysecureshell gen-users-tar scs > /tmp/scs-users.tar
To see the contents and the text inside the user_pass.txt file we can do:
$ tar tvf /tmp/scs-users.tar
-rw-r--r-- root/root        21 2022-09-11 15:55 user_pass.txt
-rw-r--r-- root/root       822 2022-09-11 15:55 user_keys.txt
-rw------- root/root       387 2022-09-11 15:55 id_ed25519-scs
-rw-r--r-- root/root        85 2022-09-11 15:55 id_ed25519-scs.pub
-rw------- root/root      3357 2022-09-11 15:55 id_rsa-scs
-rw------- root/root      3243 2022-09-11 15:55 id_rsa-scs.pem
-rw-r--r-- root/root       729 2022-09-11 15:55 id_rsa-scs.pub
$ tar xfO /tmp/scs-users.tar user_pass.txt
scs:20JertRSX2Eaar4x
The source of the script is:
bin/gen-users-tar
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
USER_KEYS_FILE="user_keys.txt"
USER_PASS_FILE="user_pass.txt"
# ---------
# MAIN CODE
# ---------
# Generate user passwords and keys, exit with an error if no username is received
if [ "$#" -eq "0" ]; then
  exit 1
fi
opwd="$(pwd)"
tmpdir="$(mktemp -d)"
cd "$tmpdir"
for u in "$@"; do
  ssh-keygen -q -a 100 -t ed25519 -f "id_ed25519-$u" -C "$u" -N ""
  ssh-keygen -q -a 100 -b 4096 -t rsa -f "id_rsa-$u" -C "$u" -N ""
  # Legacy RSA private key format
  cp -a "id_rsa-$u" "id_rsa-$u.pem"
  ssh-keygen -q -p -m pem -f "id_rsa-$u.pem" -N "" -P "" >/dev/null
  chmod 0600 "id_rsa-$u.pem"
  echo "$u:$(pwgen -s 16 1)" >>"$USER_PASS_FILE"
  echo "$u:$(cat "id_ed25519-$u.pub")" >>"$USER_KEYS_FILE"
  echo "$u:$(cat "id_rsa-$u.pub")" >>"$USER_KEYS_FILE"
done
tar cf - "$USER_PASS_FILE" "$USER_KEYS_FILE" id_* 2>/dev/null
cd "$opwd"
rm -rf "$tmpdir"
# vim: ts=2:sw=2:et
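Putting the pieces together, the image can be tried locally before moving to Kubernetes; the following is only a sketch, assuming the entrypoint reads its files from /secrets and uses /sftp as the data mount (which is how the Kubernetes objects shown later wire things up), and the local port is arbitrary:
$ mkdir -p secrets data
$ docker run --rm stodh/mysecureshell gen-host-keys > secrets/host_keys.txt
$ docker run --rm stodh/mysecureshell gen-users-tar scs > users.tar
$ tar -C secrets -xf users.tar user_keys.txt user_pass.txt
$ docker run --rm -p 2022:22 \
    -e SFTP_UID=2020 -e SFTP_GID=2020 \
    -v "$(pwd)/secrets:/secrets:ro" \
    -v "$(pwd)/data:/sftp" \
    stodh/mysecureshell:latest server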

nginx-scs
The nginx-scs container is generated using the following Dockerfile:
ARG NGINX_VERSION=1.23.1
FROM nginx:$NGINX_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN rm -f /docker-entrypoint.d/*
COPY docker-entrypoint.d/* /docker-entrypoint.d/
Basically we are removing the existing docker-entrypoint.d scripts from the standard image and adding a new one that configures the web server as we want using a couple of environment variables:
  • AUTH_REQUEST_URI: URL to use for the auth_request, if the variable is not found on the environment auth_request is not used.
  • HTML_ROOT: Base directory of the web server, if not passed the default /usr/share/nginx/html is used.
Note that if we don't pass the variables everything works as if we were using the original nginx image. The contents of the configuration script are:
docker-entrypoint.d/10-update-default-conf.sh
#!/bin/sh
# Replace the default.conf nginx file by our own version.
set -e
if [ -z "$HTML_ROOT" ]; then
  HTML_ROOT="/usr/share/nginx/html"
fi
if [ "$AUTH_REQUEST_URI" ]; then
  cat >/etc/nginx/conf.d/default.conf <<EOF
server {
  listen       80;
  server_name  localhost;
  location / {
    auth_request /.auth;
    root  $HTML_ROOT;
    index index.html index.htm;
  }
  location /.auth {
    internal;
    proxy_pass $AUTH_REQUEST_URI;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI \$request_uri;
  }
  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
EOF
else
  cat >/etc/nginx/conf.d/default.conf <<EOF
server {
  listen       80;
  server_name  localhost;
  location / {
    root  $HTML_ROOT;
    index index.html index.htm;
  }
  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
EOF
fi
# vim: ts=2:sw=2:et
As we will see later the idea is to use the /sftp/data or /sftp/data/scs folder as the root of the web published by this container and create an Ingress object to provide access to it outside of our kubernetes cluster.
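As a quick check outside the cluster, the image can also be run locally overriding only HTML_ROOT; this is just a sketch (the port and the bind mount are illustrative) that serves a local directory without the auth_request:
$ docker run --rm -p 8080:80 \
    -e HTML_ROOT=/html \
    -v "$(pwd)/html:/html:ro" \
    stodh/nginx-scs:latest
$ curl -s http://localhost:8080/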

webhook-scs
The webhook-scs container is generated using the following Dockerfile:
ARG ALPINE_VERSION=3.16.2
ARG GOLANG_VERSION=alpine3.16
FROM golang:$GOLANG_VERSION AS builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
ENV WEBHOOK_VERSION 2.8.0
ENV WEBHOOK_PR 549
ENV S3FS_VERSION v1.91
WORKDIR /go/src/github.com/adnanh/webhook
RUN apk update &&\
 apk add --no-cache -t build-deps curl libc-dev gcc libgcc patch
RUN curl -L --silent -o webhook.tar.gz\
 https://github.com/adnanh/webhook/archive/${WEBHOOK_VERSION}.tar.gz &&\
 tar xzf webhook.tar.gz --strip 1 &&\
 curl -L --silent -o ${WEBHOOK_PR}.patch\
 https://patch-diff.githubusercontent.com/raw/adnanh/webhook/pull/${WEBHOOK_PR}.patch &&\
 patch -p1 < ${WEBHOOK_PR}.patch &&\
 go get -d && \
 go build -o /usr/local/bin/webhook
WORKDIR /src/s3fs-fuse
RUN apk update &&\
 apk add ca-certificates build-base alpine-sdk libcurl automake autoconf\
 libxml2-dev libressl-dev mailcap fuse-dev curl-dev
RUN curl -L --silent -o s3fs.tar.gz\
 https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/$S3FS_VERSION.tar.gz &&\
 tar xzf s3fs.tar.gz --strip 1 &&\
 ./autogen.sh &&\
 ./configure --prefix=/usr/local &&\
 make -j && \
 make install
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
WORKDIR /webhook
RUN apk update &&\
 apk add --no-cache ca-certificates mailcap fuse libxml2 libcurl libgcc\
 libstdc++ rsync util-linux-misc &&\
 rm -rf /var/cache/apk/*
COPY --from=builder /usr/local/bin/webhook /usr/local/bin/webhook
COPY --from=builder /usr/local/bin/s3fs /usr/local/bin/s3fs
COPY entrypoint.sh /
COPY hooks/* ./hooks/
EXPOSE 9000
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]
Again, we use a multi-stage build because in production we wanted to support a feature that is not yet available in the official releases (streaming the command output as the response instead of waiting until the execution ends); this time we build the image applying the patch from this pull request against a released version of the source instead of creating a fork. The entrypoint.sh script is used to generate the webhook configuration file for the existing hooks using environment variables (basically WEBHOOK_WORKDIR and the *_TOKEN variables) and to launch the webhook service:
entrypoint.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
WEBHOOK_BIN="${WEBHOOK_BIN:-/webhook/hooks}"
WEBHOOK_YML="${WEBHOOK_YML:-/webhook/scs.yml}"
WEBHOOK_OPTS="${WEBHOOK_OPTS:--verbose}"
# ---------
# FUNCTIONS
# ---------
print_du_yml() {
  cat <<EOF
- id: du
  execute-command: '$WEBHOOK_BIN/du.sh'
  command-working-directory: '$WORKDIR'
  response-headers:
  - name: 'Content-Type'
    value: 'application/json'
  http-methods: ['GET']
  include-command-output-in-response: true
  include-command-output-in-response-on-error: true
  pass-arguments-to-command:
  - source: 'url'
    name: 'path'
  pass-environment-to-command:
  - source: 'string'
    envname: 'OUTPUT_FORMAT'
    name: 'json'
EOF
}
print_hardlink_yml() {
  cat <<EOF
- id: hardlink
  execute-command: '$WEBHOOK_BIN/hardlink.sh'
  command-working-directory: '$WORKDIR'
  http-methods: ['GET']
  include-command-output-in-response: true
  include-command-output-in-response-on-error: true
EOF
}
print_s3sync_yml() {
  cat <<EOF
- id: s3sync
  execute-command: '$WEBHOOK_BIN/s3sync.sh'
  command-working-directory: '$WORKDIR'
  http-methods: ['POST']
  include-command-output-in-response: true
  include-command-output-in-response-on-error: true
  pass-environment-to-command:
  - source: 'payload'
    envname: 'AWS_KEY'
    name: 'aws.key'
  - source: 'payload'
    envname: 'AWS_SECRET_KEY'
    name: 'aws.secret_key'
  - source: 'payload'
    envname: 'S3_BUCKET'
    name: 's3.bucket'
  - source: 'payload'
    envname: 'S3_REGION'
    name: 's3.region'
  - source: 'payload'
    envname: 'S3_PATH'
    name: 's3.path'
  - source: 'payload'
    envname: 'SCS_PATH'
    name: 'scs.path'
  stream-command-output: true
EOF
}
print_token_yml() {
  if [ "$1" ]; then
    cat << EOF
  trigger-rule:
    match:
      type: 'value'
      value: '$1'
      parameter:
        source: 'header'
        name: 'X-Webhook-Token'
EOF
  fi
}
exec_webhook() {
  # Validate WORKDIR
  if [ -z "$WEBHOOK_WORKDIR" ]; then
    echo "Must define the WEBHOOK_WORKDIR variable!" >&2
    exit 1
  fi
  WORKDIR="$(realpath "$WEBHOOK_WORKDIR" 2>/dev/null)" || true
  if [ ! -d "$WORKDIR" ]; then
    echo "The WEBHOOK_WORKDIR '$WEBHOOK_WORKDIR' is not a directory!" >&2
    exit 1
  fi
  # Get TOKENS: if DU_TOKEN or HARDLINK_TOKEN is defined it is used; if not,
  # the COMMON_TOKEN is used and, if that is not defined either, no token is
  # checked (that is the default)
  DU_TOKEN="${DU_TOKEN:-$COMMON_TOKEN}"
  HARDLINK_TOKEN="${HARDLINK_TOKEN:-$COMMON_TOKEN}"
  S3_TOKEN="${S3_TOKEN:-$COMMON_TOKEN}"
  # Create webhook configuration
  {
    print_du_yml
    print_token_yml "$DU_TOKEN"
    echo ""
    print_hardlink_yml
    print_token_yml "$HARDLINK_TOKEN"
    echo ""
    print_s3sync_yml
    print_token_yml "$S3_TOKEN"
  } >"$WEBHOOK_YML"
  # Run the webhook command
  # shellcheck disable=SC2086
  exec webhook -hooks "$WEBHOOK_YML" $WEBHOOK_OPTS
}
# ----
# MAIN
# ----
case "$1" in
"server") exec_webhook ;;
*) exec "$@" ;;
esac
The entrypoint.sh script generates the configuration file for the webhook server by calling functions that print a YAML section for each hook and, optionally, adds rules that validate access to them by comparing the value of an X-Webhook-Token header against predefined values. The expected token values are taken from environment variables: we can define a token variable for each hook (DU_TOKEN, HARDLINK_TOKEN or S3_TOKEN) and a fallback value (COMMON_TOKEN); if no token variable is defined for a hook no check is done and anybody can call it (see the example request after the following list). The Hook Definition documentation explains the options that can be used for each hook; the ones we have right now do the following:
  • du: runs on the $WORKDIR directory, passes as first argument to the script the value of the path query parameter and sets the variable OUTPUT_FORMAT to the fixed value json (we use that to print the output of the script in JSON format instead of text).
  • hardlink: runs on the $WORKDIR directory and takes no parameters.
  • s3sync: runs on the $WORKDIR directory and sets a lot of environment variables from values read from the JSON encoded payload sent by the caller (all the values must be sent by the caller even if they are assigned an empty value; if any of them is missing the hook fails without calling the script); we also set the stream-command-output value to true to make the script show its output as it is working (we patched the webhook source to be able to use this option).
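If one of the *_TOKEN variables is set, the matching hook has to be called with the corresponding header; as a sketch (the token value is illustrative, scs-svc is the Service defined later in this post):
$ curl -s -H "X-Webhook-Token: SOME_SECRET_VALUE" \
    "http://scs-svc:9000/hooks/du?path=."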

The du hook script
The du hook script code checks if the argument passed is a directory, computes its size using the du command and prints the results in text format or as a JSON dictionary:
hooks/du.sh
#!/bin/sh
set -e
# Script to print disk usage for a PATH inside the scs folder
# ---------
# FUNCTIONS
# ---------
print_error() {
  if [ "$OUTPUT_FORMAT" = "json" ]; then
    echo "{\"error\":\"$*\"}"
  else
    echo "$*" >&2
  fi
  exit 1
}
usage() {
  if [ "$OUTPUT_FORMAT" = "json" ]; then
    echo "{\"error\":\"Pass arguments as '?path=XXX'\"}"
  else
    echo "Usage: $(basename "$0") PATH" >&2
  fi
  exit 1
}
# ----
# MAIN
# ----
if [ "$#" -eq "0" ]   [ -z "$1" ]; then
  usage
fi
if [ "$1" = "." ]; then
  DU_PATH="./"
else
  DU_PATH="$(find . -name "$1" -mindepth 1 -maxdepth 1)"   true
fi
if [ -z "$DU_PATH" ]   [ ! -d "$DU_PATH/." ]; then
  print_error "The provided PATH ('$1') is not a directory"
fi
# Print disk usage in bytes for the given PATH
OUTPUT="$(du -b -s "$DU_PATH")"
if [ "$OUTPUT_FORMAT" = "json" ]; then
  # Format output as {"path":"PATH","bytes":"BYTES"}
  echo "$OUTPUT" |
    sed -e "s%^\(.*\)\t.*/\(.*\)$%{\"path\":\"\2\",\"bytes\":\"\1\"}%" |
    tr -d '\n'
else
  # Print du output as is
  echo "$OUTPUT"
fi
# vim: ts=2:sw=2:et:ai:sts=2
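The script can also be exercised by hand without the webhook when debugging; a hedged example, run from inside the webhook container using the paths used in this post:
$ cd /sftp/data/scs
$ OUTPUT_FORMAT=json /webhook/hooks/du.sh .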

The s3sync hook script
The s3sync hook script uses the s3fs tool to mount a bucket and synchronise data between a folder inside the bucket and a directory on the filesystem using rsync; all the values needed to execute the task are taken from environment variables:
hooks/s3sync.sh
#!/bin/ash
set -euo pipefail
set -o errexit
set -o errtrace
# Functions
finish() {
  ret="$1"
  echo ""
  echo "Script exit code: $ret"
  exit "$ret"
}
# Check variables
if [ -z "$AWS_KEY" ]   [ -z "$AWS_SECRET_KEY" ]   [ -z "$S3_BUCKET" ]  
  [ -z "$S3_PATH" ]   [ -z "$SCS_PATH" ]; then
  [ "$AWS_KEY" ]   echo "Set the AWS_KEY environment variable"
  [ "$AWS_SECRET_KEY" ]   echo "Set the AWS_SECRET_KEY environment variable"
  [ "$S3_BUCKET" ]   echo "Set the S3_BUCKET environment variable"
  [ "$S3_PATH" ]   echo "Set the S3_PATH environment variable"
  [ "$SCS_PATH" ]   echo "Set the SCS_PATH environment variable"
  finish 1
fi
if [ "$S3_REGION" ] && [ "$S3_REGION" != "us-east-1" ]; then
  EP_URL="endpoint=$S3_REGION,url=https://s3.$S3_REGION.amazonaws.com"
else
  EP_URL="endpoint=us-east-1"
fi
# Prepare working directory
WORK_DIR="$(mktemp -p "$HOME" -d)"
MNT_POINT="$WORK_DIR/s3data"
PASSWD_S3FS="$WORK_DIR/.passwd-s3fs"
# Check the mountpoint
if [ ! -d "$MNT_POINT" ]; then
  mkdir -p "$MNT_POINT"
elif mountpoint "$MNT_POINT"; then
  echo "There is already something mounted on '$MNT_POINT', aborting!"
  finish 1
fi
# Create password file
touch "$PASSWD_S3FS"
chmod 0400 "$PASSWD_S3FS"
echo "$AWS_KEY:$AWS_SECRET_KEY" >"$PASSWD_S3FS"
# Mount s3 bucket as a filesystem
s3fs -o dbglevel=info,retries=5 -o "$EP_URL" -o "passwd_file=$PASSWD_S3FS" \
  "$S3_BUCKET" "$MNT_POINT"
echo "Mounted bucket '$S3_BUCKET' on '$MNT_POINT'"
# Remove the password file, just in case
rm -f "$PASSWD_S3FS"
# Check source PATH
ret="0"
SRC_PATH="$MNT_POINT/$S3_PATH"
if [ ! -d "$SRC_PATH" ]; then
  echo "The S3_PATH '$S3_PATH' can't be found!"
  ret=1
fi
# Compute SCS_UID & SCS_GID (by default based on the working directory owner)
SCS_UID="${SCS_UID:=$(stat -c "%u" "." 2>/dev/null)}" || true
SCS_GID="${SCS_GID:=$(stat -c "%g" "." 2>/dev/null)}" || true
# Check destination PATH
DST_PATH="./$SCS_PATH"
if [ "$ret" -eq "0" ] && [ -d "$DST_PATH" ]; then
  mkdir -p "$DST_PATH"   ret="$?"
fi
# Copy using rsync
if [ "$ret" -eq "0" ]; then
  rsync -rlptv --chown="$SCS_UID:$SCS_GID" --delete --stats \
    "$SRC_PATH/" "$DST_PATH/"   ret="$?"
fi
# Unmount the S3 bucket
umount -f "$MNT_POINT"
echo "Called umount for '$MNT_POINT'"
# Remove mount point dir
rmdir "$MNT_POINT"
# Remove WORK_DIR
rmdir "$WORK_DIR"
# We are done
finish "$ret"
# vim: ts=2:sw=2:et:ai:sts=2
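As with the du script, this one can be tested by hand from inside the webhook container (it needs root and /dev/fuse, which is why the container runs in privileged mode later on); a sketch with placeholder credentials and the bucket values used later in this post:
$ cd /sftp/data/scs
$ AWS_KEY="AKIA................" \
  AWS_SECRET_KEY="........................................" \
  S3_BUCKET="blogops-test" S3_REGION="eu-north-1" \
  S3_PATH="test" SCS_PATH="test" \
  /webhook/hooks/s3sync.sh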

Deployment objects
The system is deployed as a StatefulSet with one replica. Our production deployment is done on AWS and, to be able to scale, we use EFS for our PersistentVolume; the idea is that the volume has no size limit, its AccessMode can be set to ReadWriteMany and we can mount it from multiple instances of the Pod without issues, even if they are in different availability zones. For development we use k3d and we are also able to scale the StatefulSet for testing: although we use a ReadWriteOnce PVC, it points to a hostPath that is backed by a folder mounted on all the compute nodes, so in reality Pods on different k3d nodes use the same folder on the host.

secrets.yaml
The secrets file contains the files used by the mysecureshell container; they can be generated using kubernetes pods as follows (we are only creating the scs user):
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
  --image "stodh/mysecureshell:latest" -- gen-host-keys >"./host_keys.txt"
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
  --image "stodh/mysecureshell:latest" -- gen-users-tar scs >"./users.tar"
Once we have the files we can generate the secrets.yaml file as follows:
$ tar xf ./users.tar user_keys.txt user_pass.txt
$ kubectl --dry-run=client -o yaml create secret generic "scs-secrets" \
  --from-file="host_keys.txt=host_keys.txt" \
  --from-file="user_keys.txt=user_keys.txt" \
  --from-file="user_pass.txt=user_pass.txt" > ./secrets.yaml
The resulting secrets.yaml will look like the following file (the base64 would match the content of the files, of course):
secrets.yaml
apiVersion: v1
data:
  host_keys.txt: TWlt...
  user_keys.txt: c2Nz...
  user_pass.txt: c2Nz...
kind: Secret
metadata:
  creationTimestamp: null
  name: scs-secrets

pvc.yaml
The persistent volume claim for a simple deployment (one with only one instance of the statefulSet) can be as simple as this:
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scs-pvc
  labels:
    app.kubernetes.io/name: scs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
In this definition we don't set the storageClassName, so the default one is used.
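To see which one that is we can list the storage classes available on the cluster; the class marked as (default) in the output is the one the claim will use:
$ kubectl get storageclass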

Volumes in our development environment (k3d)
In our development deployment we create the following PersistentVolume, as required by the Local Persistent Volume Static Provisioner (note that the /volumes/scs-pv directory has to be created by hand; in our k3d system we mount the same host directory on the /volumes path of all the nodes and create the scs-pv directory manually before deploying the persistent volume):
k3d-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: scs-pv
  labels:
    app.kubernetes.io/name: scs
spec:
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  claimRef:
    name: scs-pvc
  storageClassName: local-storage
  local:
    path: /volumes/scs-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node.kubernetes.io/instance-type
          operator: In
          values:
          - k3s
And to make sure that everything works as expected we update the PVC definition to add the right storageClassName:
k3d-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scs-pvc
  labels:
    app.kubernetes.io/name: scs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: local-storage
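For reference, the shared host directory mentioned above can be mounted on all the k3d nodes when the cluster is created; a sketch (the cluster name and the host path are illustrative):
$ k3d cluster create scs-demo --volume "$(pwd)/volumes:/volumes@all"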

Volumes in our production environment (aws)
In the production deployment we don't create the PersistentVolume (we are using the aws-efs-csi-driver, which supports Dynamic Provisioning) but we add the storageClassName (we set it to the one mapped to the EFS driver, i.e. efs-sc) and set ReadWriteMany as the accessMode:
efs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scs-pvc
  labels:
    app.kubernetes.io/name: scs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: efs-sc
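For completeness, this is a sketch of the kind of StorageClass the efs-sc name can map to when using the aws-efs-csi-driver with dynamic provisioning (the fileSystemId is a placeholder and the parameters may need adjusting on each cluster):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"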

statefulset.yaml
The definition of the statefulSet is as follows:
statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scs
  labels:
    app.kubernetes.io/name: scs
spec:
  serviceName: scs
  replicas: 1
  selector:
    matchLabels:
      app: scs
  template:
    metadata:
      labels:
        app: scs
    spec:
      containers:
      - name: nginx
        image: stodh/nginx-scs:latest
        ports:
        - containerPort: 80
          name: http
        env:
        - name: AUTH_REQUEST_URI
          value: ""
        - name: HTML_ROOT
          value: /sftp/data
        volumeMounts:
        - mountPath: /sftp
          name: scs-datadir
      - name: mysecureshell
        image: stodh/mysecureshell:latest
        ports:
        - containerPort: 22
          name: ssh
        securityContext:
          capabilities:
            add:
            - IPC_OWNER
        env:
        - name: SFTP_UID
          value: '2020'
        - name: SFTP_GID
          value: '2020'
        volumeMounts:
        - mountPath: /secrets
          name: scs-file-secrets
          readOnly: true
        - mountPath: /sftp
          name: scs-datadir
      - name: webhook
        image: stodh/webhook-scs:latest
        securityContext:
          privileged: true
        ports:
        - containerPort: 9000
          name: webhook-http
        env:
        - name: WEBHOOK_WORKDIR
          value: /sftp/data/scs
        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - mountPath: /sftp
          name: scs-datadir
      volumes:
      - name: devfuse
        hostPath:
          path: /dev/fuse
      - name: scs-file-secrets
        secret:
          secretName: scs-secrets
      - name: scs-datadir
        persistentVolumeClaim:
          claimName: scs-pvc
Notes about the containers:
  • nginx: As this is an example the web server is not using an AUTH_REQUEST_URI and uses the /sftp/data directory as the root of the web (to get to the files uploaded for the scs user we will need to use /scs/ as a prefix on the URLs).
  • mysecureshell: We are adding the IPC_OWNER capability to the container to be able to use some of the sftp-* commands inside it, but they are not really needed, so adding the capability is optional.
  • webhook: We are launching this container in privileged mode to be able to use s3fs-fuse, as it will not work otherwise for now (see this kubernetes issue); if the functionality is not needed the container can be executed with regular privileges. Besides, as we are not enabling public access to this service, we don't define *_TOKEN variables (if required, the values should be read from a Secret object, as sketched after these notes).
Notes about the volumes:
  • the devfuse volume is only needed if we plan to use the s3fs command on the webhook container; if not, we can remove the volume definition and its mounts.
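As noted above, if the webhook had to be exposed the tokens should come from a Secret; a sketch of the extra entry that could be added under the webhook container's env list (the secret name and key are illustrative):
        - name: COMMON_TOKEN
          valueFrom:
            secretKeyRef:
              name: scs-webhook-tokens
              key: common-token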

service.yaml
To be able to access the different services on the statefulSet we publish the relevant ports using the following Service object:
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: scs-svc
  labels:
    app.kubernetes.io/name: scs
spec:
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: webhook-http
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: scs

ingress.yaml
To download the scs files from the outside we can add an ingress object like the following (the definition is for testing using the localhost name):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: scs-ingress
  labels:
    app.kubernetes.io/name: scs
spec:
  ingressClassName: nginx
  rules:
  - host: 'localhost'
    http:
      paths:
      - path: /scs
        pathType: Prefix
        backend:
          service:
            name: scs-svc
            port:
              number: 80

Deployment
To deploy the statefulSet we create a namespace and apply the object definitions shown before:
$ kubectl create namespace scs-demo
namespace/scs-demo created
$ kubectl -n scs-demo apply -f secrets.yaml
secret/scs-secrets created
$ kubectl -n scs-demo apply -f pvc.yaml
persistentvolumeclaim/scs-pvc created
$ kubectl -n scs-demo apply -f statefulset.yaml
statefulset.apps/scs created
$ kubectl -n scs-demo apply -f service.yaml
service/scs-svc created
$ kubectl -n scs-demo apply -f ingress.yaml
ingress.networking.k8s.io/scs-ingress created
Once the objects are deployed we can check that all is working using kubectl:
$ kubectl  -n scs-demo get all,secrets,ingress
NAME        READY   STATUS    RESTARTS   AGE
pod/scs-0   3/3     Running   0          24s
NAME            TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)                  AGE
service/scs-svc ClusterIP  10.43.0.47  <none>       22/TCP,80/TCP,9000/TCP   21s

NAME                   READY   AGE
statefulset.apps/scs   1/1     24s
NAME                         TYPE                                  DATA   AGE
secret/default-token-mwcd7   kubernetes.io/service-account-token   3      53s
secret/scs-secrets           Opaque                                3      39s
NAME                                   CLASS  HOSTS      ADDRESS     PORTS   AGE
ingress.networking.k8s.io/scs-ingress  nginx  localhost  172.21.0.5  80      17s
At this point we are ready to use the system.

Usage examples

File uploads
As previously mentioned, in our system the idea is to use the sftp server from other Pods, but to test it we are going to do a kubectl port-forward and connect to the server using our host client and the password we have generated (it is in the user_pass.txt file, inside the users.tar archive):
$ kubectl -n scs-demo port-forward service/scs-svc 2020:22 &
Forwarding from 127.0.0.1:2020 -> 22
Forwarding from [::1]:2020 -> 22
$ PF_PID=$!
$ sftp -P 2020 scs@127.0.0.1                                                 1
Handling connection for 2020
The authenticity of host '[127.0.0.1]:2020 ([127.0.0.1]:2020)' can't be \
  established.
ED25519 key fingerprint is SHA256:eHNwCnyLcSSuVXXiLKeGraw0FT/4Bb/yjfqTstt+088.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:2020' (ED25519) to the list of known \
  hosts.
scs@127.0.0.1's password: **********
Connected to 127.0.0.1.
sftp> ls -la
drwxr-xr-x    2 sftp     sftp         4096 Sep 25 14:47 .
dr-xr-xr-x    3 sftp     sftp         4096 Sep 25 14:36 ..
sftp> !date -R > /tmp/date.txt                                               2
sftp> put /tmp/date.txt .
Uploading /tmp/date.txt to /date.txt
date.txt                                      100%   32    27.8KB/s   00:00
sftp> ls -l
-rw-r--r--    1 sftp     sftp           32 Sep 25 15:21 date.txt
sftp> ln date.txt date.txt.1                                                 3
sftp> ls -l
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt.1
sftp> put /tmp/date.txt date.txt.2                                           4
Uploading /tmp/date.txt to /date.txt.2
date.txt                                      100%   32    27.8KB/s   00:00
sftp> ls -l                                                                  5
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt.1
-rw-r--r--    1 sftp     sftp           32 Sep 25 15:21 date.txt.2
sftp> exit
$ kill "$PF_PID"
[1]  + terminated  kubectl -n scs-demo port-forward service/scs-svc 2020:22
  1. We connect to the sftp service on the forwarded port with the scs user.
  2. We upload a file we have created on the host to the remote directory.
  3. We do a hard link of the uploaded file.
  4. We put a second copy of the file we created locally.
  5. In the file listing we can see that the first two files have two hard links.

File retrievals
If our ingress is configured right we can download the date.txt file from the URL http://localhost/scs/date.txt:
$ curl -s http://localhost/scs/date.txt
Sun, 25 Sep 2022 17:21:51 +0200

Use of the webhook container
To finish this post we are going to show how we can call the hooks directly, from a CronJob and from a Job.

Direct script call (du)
In our deployment the direct calls are done from other Pods; to simulate that we are going to do a port-forward and call the script with an existing PATH (the root directory) and a bad one:
$ kubectl -n scs-demo port-forward service/scs-svc 9000:9000 >/dev/null &
$ PF_PID=$!
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=.")"
$ echo $JSON
 "path":"","bytes":"4160" 
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=foo")"
$ echo $JSON
 "error":"The provided PATH ('foo') is not a directory" 
$ kill $PF_PID
As we only have files on the base directory we print the disk usage of the . PATH; the output is in JSON format because the webhook configuration exports OUTPUT_FORMAT with the value json.
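The hardlink hook takes no parameters, so it can also be called periodically from a CronJob; the following is only a sketch (the name and schedule are illustrative, the URL uses the scs-svc Service defined before):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hardlink
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hardlink-job
            image: alpine:latest
            command:
            - "wget"
            - "-q"
            - "-O-"
            - "http://scs-svc:9000/hooks/hardlink"
          restartPolicy: Never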

Jobs (s3sync)
The following Job can be used to synchronise the contents of a directory in an S3 bucket with the SCS filesystem:
webhook-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: s3sync
  labels:
    cronjob: 's3sync'
spec:
  template:
    metadata:
      labels:
        cronjob: 's3sync'
    spec:
      containers:
      - name: s3sync-job
        image: alpine:latest
        command: 
        - "wget"
        - "-q"
        - "--header"
        - "Content-Type: application/json"
        - "--post-file"
        - "/secrets/s3sync.json"
        - "-O-"
        - "http://scs-svc:9000/hooks/s3sync"
        volumeMounts:
        - mountPath: /secrets
          name: job-secrets
          readOnly: true
      restartPolicy: Never
      volumes:
      - name: job-secrets
        secret:
          secretName: webhook-job-secrets
The file with parameters for the script must be something like this:
s3sync.json
{
  "aws": {
    "key": "********************",
    "secret_key": "****************************************"
  },
  "s3": {
    "region": "eu-north-1",
    "bucket": "blogops-test",
    "path": "test"
  },
  "scs": {
    "path": "test"
  }
}
Once we have both files we can run the Job as follows:
$ kubectl -n scs-demo create secret generic webhook-job-secrets \            1
  --from-file="s3sync.json=s3sync.json"
secret/webhook-job-secrets created
$ kubectl -n scs-demo apply -f webhook-job.yaml                              2
job.batch/s3sync created
$ kubectl -n scs-demo get pods -l "cronjob=s3sync"                           3
NAME           READY   STATUS      RESTARTS   AGE
s3sync-zx2cj   0/1     Completed   0          12s
$ kubectl -n scs-demo logs s3sync-zx2cj                                      4
Mounted bucket 's3fs-test' on '/root/tmp.jiOjaF/s3data'
sending incremental file list
created directory ./test
./
kyso.png
Number of files: 2 (reg: 1, dir: 1)
Number of created files: 2 (reg: 1, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 15,075 bytes
Total transferred file size: 15,075 bytes
Literal data: 15,075 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.147 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 15,183
Total bytes received: 74
sent 15,183 bytes  received 74 bytes  30,514.00 bytes/sec
total size is 15,075  speedup is 0.99
Called umount for '/root/tmp.jiOjaF/s3data'
Script exit code: 0
$ kubectl -n scs-demo delete -f webhook-job.yaml                             5
job.batch "s3sync" deleted
$ kubectl -n scs-demo delete secrets webhook-job-secrets                     6
secret "webhook-job-secrets" deleted
  1. Here we create the webhook-job-secrets secret that contains the s3sync.json file.
  2. This command runs the job.
  3. Checking the label cronjob=s3sync we get the Pods executed by the job.
  4. Here we print the logs of the completed job.
  5. Once we are finished we remove the Job.
  6. And also the secret.

Final remarks
This post has been longer than I expected, but I believe it can be useful for someone; in any case, next time I'll try to explain something shorter or will split it into multiple entries.

9 September 2022

Jonathan Dowland: memtest

Since I'm writing about my NAS, a month ago I happened to notice an odd kernel message:
[Aug 8 04:04] list_del corruption. prev->next should be ffff90c96e9c2090,
but was ffff90c94e9c2090
A kernel dev friend said "I'm familiar with that code ... you should run memtest86". This seemed like advice it would be foolish to ignore! I installed the memtest86 package, which on Debian stable, is actually the formerly open-source "memtest86" software, last updated in 2014, rather than the currently open-source "memtest86+". However the package (incorrectly, I think) Recommends: memtest86+ so I ended up with both. The package scripts integrate with GRUB, so both were added as boot options. Neither however, would boot on my NAS, which is a UEFI system: after selection from the GRUB prompt, I just had a blank screen. I focussed for a short while on display issues: I wondered if trying to run a 4k monitor over HDMI was too much to expect from a memory tester OS, but my mainboard has a VGA out as well. It has some quirky behaviour for the VGA out: the firmware doesn't use it at all, so output only begins appearing after something boots (GRUB for example). I fiddled about with the HDMI output, VGA output, and trying different RGB cables, to no avail. The issue was (likely) nothing to do with the video out, but rather that the packaged versions of memtest/memtest86+ don't work properly on UEFI systems. What did work, was Passmark Software's non-FOSS memtest86. It drew on HDMI, albeit in a postage stamp sized window. After some time (much less than I expected, some kind of magic modern memory matrix stuff going on I think), I got a clean bill of health:
memtest86(.com) passes
It's quite possible the FOSS versions of memtest (pcmemtest is another) have better support for UEFI in more recent versions than I installed (I just went with what's in Debian stable), and if not, then this is a worthy feature to work on.

1 September 2022

Emmanuel Kasper: OpenShift vs. AWS product mapping

If you know the Amazon Web Services portfolio, and you are interested in OpenShift or the OKD OpenShift community distribution, this is a table of corresponding technologies. OpenShift is Red Hat's Kubernetes distribution: it is basically the upstream Kubernetes delivered with monitoring, logging, CI/CD, underlying OS, and tested upgrade paths not found with a manual kubernetes.io kubeadm install.
AWS | OpenShift | OpenShift upstream project
Cloud Trail | Kubernetes API Server audit log | Kubernetes
Cloud Watch | OpenShift Monitoring | Prometheus
AWS Artifact | Compliance Operator | OpenSCAP
AWS Trusted Advisor | Insights |
AWS Marketplace | OpenShift Operator Hub |
AWS Identity and Access Management (IAM) | Red Hat SSO | Keycloak
AWS Elastic Beanstalk | OpenShift Source2Image (S2I) | Source2Image (S2I)
AWS S3 | ODF Rados Gateway | Rook RGW
AWS Elastic Block Storage | ODF Rados Block Device | Rook RBD
AWS Elastic File System | ODF Ceph FS | Rook CephFS
Amazon Simple Notification Service | OpenShift Streams for Apache Kafka | Apache Kafka
Amazon Guard Duty | API Server audit log review, ACS Runtime detection | Stackrox
Amazon Inspector | Quay.io container scanner, ACS Vulnerability Assessment | Clair, Stackrox
AWS Lambda | OpenShift Serverless* | Knative
AWS Key Management System | could be done with Hashicorp Vault | Vault
AWS WAF | NGINX Ingress Controller Operator with ModSecurity | NGINX ModSecurity
Amazon Elasticache | Redis Enterprise Operator | Redis, memcached as alternative
AWS Relational Database Service | Crunchy Data Operator | PostgreSQL
* OpenShift Serverless requires the application to be packaged as a container, something AWS Lambda does not require.

31 July 2022

Russell Coker: Workstations With ECC RAM

The last new PC I bought was a Dell PowerEdge T110II in 2013. That model had been out for a while and I got it for under $2000. Since then the CPI has gone up by about 20% so it's probably about $2000 in today's money. Currently Dell has a special on the T150 tower server (the latest replacement for the T110II) which has a G6405T CPU that isn't even twice as fast as the i3-3220 (3746 vs 2219) in the T110II according to passmark.com (AKA cpubenchmark.net). The special price is $2600. I can't remember the details of my choices when purchasing the T110II but I recall that CPU speed wasn't a priority and I wanted a cheap reliable server for storage and for light desktop use. So it seems that the current entry model in the Dell T1xx server line is less than twice as fast as it was in 2013 while costing about 25% more! An option is to spend an extra $989 to get a Xeon E-2378 which delivers a reasonable 18,248 in that benchmark. The upside of a T150 is that it uses buffered DDR4 ECC RAM which is pretty cheap nowadays, you can get 32G for about $120.

For systems sold as workstations (as opposed to T1xx servers that make great workstations but aren't described as such) Dell has the Precision line. The Precision 3260 Compact Workstation currently starts at $1740; it has a fast CPU but takes SO-DIMMs and doesn't come with ECC RAM. So to use it as a proper workstation you need to discard the RAM and buy DDR5 unbuffered/unregistered ECC SO-DIMMs, which don't seem to be on sale yet. The Precision 3460 is slightly larger, slightly more expensive, and also takes SO-DIMMs. The Precision 3660 starts at $2550 and takes unbuffered DDR5 ECC RAM which is available and costs half as much as the SO-DIMM equivalent would cost (if you could even buy it), but the general trend in RAM prices is that unbuffered ECC RAM is more expensive than buffered ECC RAM. The upside to Precision workstations is that the range of CPUs available is significantly faster than for the T150.

The HP web site doesn't offer prices on their Z workstations and is generally worse than the Dell web site in most ways. Overall I'm disappointed in the range of workstations available now. As an aside, if anyone knows of any other company selling workstations in Australia that support ECC RAM then please let me know.

26 July 2022

Michael Ablassmeier: Added remote capability to virtnbdbackup

Latest virtnbdbackup version now supports backing up remote libvirt hosts, too. No installation on the hypervisor required anymore:
virtnbdbackup -U qemu+ssh://usr@hypervisor/system -d vm1 -o /backup/vm1
The same applies to restore operations; other enhancements are:
  • New backup mode auto which allows easy backup rotation.
  • Option to freeze only specific filesystems within backed up domain.
  • Remote backup via dedicated network: use --nbd-ip to bind the remote NBD service to a specific interface.
  • If the virtual machine requires additional files, like a specific UEFI/kernel image, these are saved via SFTP from the remote host, too.
  • Restore operation can now adjust domain config accordingly (and redefine it if desired).
Next up: add TLS support for remote NBD connections.

20 July 2022

Enrico Zini: Deconstruction of the DAM hat

Further reading Talk notes Intro
  • I'm not speaking for the whole of DAM
  • Motivation in part is personal frustration, and need to set boundaries and negotiate expectations
Debian Account Managers
  • history
Responsibility for official membership
  • approve account creation
  • manage the New Member Process and nm.debian.org
  • close MIA accounts
  • occasional emergency termination of accounts
  • handle Emeritus
  • with lots of help from FrontDesk and MIA teams (big shoutout)
What DAM is not
  • we are not mediators
  • we are not a community management team
  • a list or IRC moderation team
  • we are not responsible for vision or strategic choices about how people are expected to interact in Debian
  • We shouldn't try and solve things because they need solving
Unexpected responsibilities
  • Over time, the community has grown larger and more complex, in a larger and more complex online environment
  • Enforcing the Diversity Statement and the Code of Conduct
  • Emergency list moderation
    • we have ended up using DAM warnings to compensate for the lack of list moderation, at least twice
  • contributors.debian.org (mostly only because of me, but it would be good to have its own team)
DAM warnings
  • except for rare glaring cases, patterns of behaviour / intentions / taking feedback in, are more relevant than individual incidents
  • we do not set out to fix people. It is enough for us to get people to acknowledge a problem
    • if they can't acknowledge a problem they're probably out
    • once a problem is acknowledged, fixing it could be their implementation detail
    • then again it's not that easy to get a number of troublesome people to acknowledge problems, so we go back to the problem of deciding when enough is enough
DAM warnings?
  • I got to a point where I look at DAM warnings as potential signals that DAM has ended up with the ball that everyone else in Debian dropped.
  • DAM warning means we haven't gotten to a last resort situation yet, meaning that it probably shouldn't be DAM dealing with this at this point
  • Everyone in the project can write to a person "do you realise there's an issue here? Can you do something to stop?", and give them a chance to reflect on issues or ignore them, and build their reputation accordingly.
  • People in Debian should not have to endure, completely powerless, as trolls drag painful list discussions indefinitely until all the trolled people run out of energy and leave. At the same time, people who abuse a list should expect to be suspended or banned from the list, not have their Debian membership put into question (unless it is a recurring pattern of behaviour).
  • The push to grow DAM warnings as a tool, is a sign of the rest of Debian passing on their responsibilities, and DAM picking them up.
  • Then in DAM we end up passing on things, too, because we also don't have the energy to face another intensive megametathread, and as we take actions for things that shouldn't quite be our responsibility, we face a higher level of controversy, and therefore demotivation.
  • Also, as we take actions for things that shouldn't be our responsibility, and work on a higher level of controversy, our legitimacy is undermined (and understandably so)
    • there's a pothole on my street that never gets filled, so at some point I go out and fill it. Then people thank me, people complain I shouldn't have, people complain I didn't fill it right, people appreciate the gesture and invite me to learn how to fix potholes better, people point me out to more potholes, and then complain that potholes don't get fixed properly on the whole street. I end up being the problem, instead of whoever had responsibility of the potholes but wasn't fixing them
  • The Community Team, the Diversity Team, and individual developers, have no energy or entitlement for explaining what a healthy community looks like, and DAM is left with that responsibility in the form of accountability for their actions: to issue, say, a DAM warning for bullying, we are expected to explain what is bullying, and how that kind of behaviour constitutes bullying, in a way that is understandable by the whole project.
  • Since there isn't consensus in the project about what bullying looks like, we end up having to define it in a warning, which again is a responsibility we shouldn't have, and we need to do it because we have an escalated situation at hand, but we can't do it right
House rules Interpreting house rules
  • you can't encode common sense about people behaviour in written rules: no matter how hard you try, people will find ways to cheat that
  • so one can use rules as a guideline, and someone responsible for the bits that can't go into rules.
    • context matters, privilege/oppression matters, patterns matter, history matters
  • example:
    • call a person out for breaking a rule
    • get DARVO in response
    • state that DARVO is not acceptable
    • get concern trolling against marginalised people and accuse them of DARVO if they complain
  • example: assume good intentions vs enabling
  • example: rule lawyering and Figure skating
  • this cannot be solved by GRs: I/we (DAM)/possibly also we (Debian) don't want to do GRs about evaluating people
Governance by bullying
  • How to DoS discussions in Debian
    • example: gender, minority groups, affirmative action, inclusion, anything about the community team itself, anything about the CoC, systemd, usrmerge, dam warnings, expulsions
      • think of a topic. Think about sending a mail to debian-project about it. If you instinctively shiver at the thought, this is probably happening
      • would you send a mail about that to -project / -devel?
      • can you think of other topics?
    • it is an effective way of governance as it excludes topics from public discussion
  • A small number of people abuse all this, intentionally or not, to effectively manipulate decision making in the project.
  • Instead of using the rules of the community to bring forth the issues one cares about, it costs less energy to make it unthinkable or unbearable to have a discussion on issues one doesn't want to progress. What one can't stop constructively, one can oppose destructively.
  • even regularly diverting the discussion away from the original point or concern is enough to derail it without people realising you're doing it
  • This is an effective strategy for a few reckless people to unilaterally direct change, in the current state of Debian, at the cost of the health and the future of the community as a whole.
  • There are now a number of important issues nobody has the energy to discuss, because experience says that energy requirements to bring them to the foreground and deal with the consequences are anticipated to be disproportionate.
  • This is grave, as we're talking about trolling and bullying as malicious power moves to work around the accepted decision making structures of our community.
  • Solving this is out of scope for this talk, but it is urgent nevertheless, and can't be solved by expecting DAM to fix it
How about the Community Team?
  • It is also a small group of people who cannot pick up the responsibility of doing what the community isn't doing for itself
  • I believe we need to recover the Community Team: it's been years that every time they write something in public, they get bullied by the same recurring small group of people (see governance by bullying above)
How about DAM?
  • I was just saying that we are not the emergency catch all
  • When the only enforcement you have is "nuclear escalation", there's nothing you can do until it's too late, and meanwhile lots of people suffer (this was written before Russia invaded Ukraine)
  • Also, when issues happen on public lists, the BTS, or on IRC, some of the perpetrators are also outside of the jurisdiction of DAM, which shows how DAM is not the tool for this
How about the DPL?
  • Talking about emergency catch alls, don't they have enough to do already?
Concentrating responsibility
  • Concentrating all responsibility on social issues on a single point creates a scapegoat: we're blamed for any conduct issue, and we're blamed for any action we take on conduct issues
    • also, when you are a small group you are personally identified with it. Taking action on a person may mean making a new enemy, and becoming a target for harassment, retaliation, or even just the general unwarranted hostility of someone who is left with an axe to grind
  • As long as responsibility is centralised, any action one takes as a response of one micro-aggression (or one micro-aggression too many) is an overreaction. Distributing that responsibility allows a finer granularity of actions to be taken
    • you don't call the police to tell someone they're being annoying at the pub: the people at the pub will tell you you're being annoying, and the police is called if you want to beat them up in response
  • We are also a community where we have no tool to give feedback to posts, so it still looks good to nitpick stupid details with smart-looking trenchant one-liners, or elaborate confrontational put-downs, and one doesn't get the feedback of "that did not help". Compare with discussing https://salsa.debian.org/debian/grow-your-ideas/ which does have this kind of feedback
    • the lack of moderation and enforcement makes the Debian community ideal for easy baiting, concern trolling, dog whistling, and related fun, and people not empowered can be so manipulated to troll those responsible
    • if you're fragile in Debian, people will play cat and mouse with you. It might be social awkwardness, or people taking themselves too serious, but it can easily become bullying, and with no feedback it's hard to tell and course correct
  • Since DAM and DPL are where the ball stops, everyone else in Debian can afford to let the ball drop.
  • More generally, if only one group is responsible, nobody else is
Empowering developers
  • Police alone does not make a community safe: a community makes a community safe.
  • DDs currently have no power to act besides complaining to DAM, or complaining to Community Team that then can only pass complaints on to DAM.
    • you could act directly, but currently nobody has your back if the (micro-)aggression then starts extending to you, too
  • From no power comes no responsibility. And yet, the safety of a community is sustainable only if it is the responsibility of every member of the community.
  • don't wait for DAM as the only group who can do something
  • people should be able to address issues in smaller groups, without escalation at project level
  • but people don't have the tools for that
  • I/we've shouldered this responsibility for far too long because nobody else was doing it, and it's time the whole Debian community gets its act together and picks up this responsibility as they should be. You don't get to not care just because there's a small number of people who is caring for you.
What needs to happen
  • distinguish DAM decisions from decisions that are more about vision and direction, and would require more representation
  • DAM warnings shouldn't belong in DAM
  • who is responsible for interpretation of the CoC?
  • deciding what to do about controversial people shouldn't belong in DAM
  • curation of the community shouldn't belong in DAM
  • can't do this via GRs, it's a mess to do a GR to decide how acceptable is a specific person's behaviour, and a lot of this requires more and more frequent micro-decisions than one'd do via GRs
