Posts about hz.tools will be tagged #hztools. It all started with a small program that used librtlsdr to read in an IQ stream, did some filtering, and played the real-valued audio stream via pulseaudio. Over 4 years, through persistence, lots of questions to too many friends to thank (although I will try), and the eternal patience of my wife hearing about radios nonstop, this has slowly grown into a number of Go repos that can do quite a bit and support a handful of radios.
I've resisted making the repos public not out of embarrassment or a desire to keep secrets, but rather as an attempt to keep myself free of any maintenance obligations to users, so that I could freely break my own API and add or remove API surface as I saw fit. The worst case would be for this project to start feeling like work, and I can easily imagine that happening if I felt frustrated by PRs getting ahead of me, solving problems I didn't yet know about, or fixing bugs I didn't understand the fix for.
As my rate of changes to the most central dependencies has slowed, I've begun to entertain the idea of publishing them. After a bit of back and forth, I've decided it's time to make a number of them public and to start working on them in the open, as I've built up a bit of knowledge in the space and feel confident that the repo doesn't contain overt lies. That's not to say it doesn't contain lies, but those lies are likely hidden and lurking in the dark. Beware.

That being said, it shouldn't be a surprise to say I've not published everything yet, for the same reasons as above. I plan to open repos as the rate of changes slows and I understand the problems the library solves well enough, or if the project dead-ends and I've stopped learning.
The hz.tools/rf library contains the abstract concept of frequency, some very basic helpers to interact with frequencies and frequency ranges (such as frequency range math), and some very basic conversions (to meters, etc.) and parsers (to parse values like 10MHz). This ensures that all the hz.tools libraries have a shared understanding of frequencies, a standard way of representing ranges of frequencies, and the ability to handle the IO boundary with things like CLI arguments, JSON or YAML.
The git repo can be found at
github.com/hztools/go-rf, and is
importable as hz.tools/rf.
// Parse a frequency using hz.tools/rf.MustParseHz, and print it to stdout.
freq := rf.MustParseHz("-10kHz")
fmt.Printf("Frequency: %s\n", freq+rf.MHz)
// Prints: 'Frequency: 990kHz'

// Return the Intersection between two RF ranges, and print
// it to stdout.
r1 := rf.Range{rf.KHz, rf.MHz}
r2 := rf.Range{rf.Hz(10), rf.KHz * 100}
fmt.Printf("Range: %s\n", r1.Intersection(r2))
// Prints: Range: 1000Hz->100kHz
The hz.tools/sdr package tries to follow the io package's idioms so that this library feels as idiomatic as it can, so that Go builtins interact with IQ in a way that's possible to reason about, and to avoid reinventing the wheel by designing new API surface. While some of the API looks like (and is even called) the same thing as a similar function in io, the implementation is usually a lot more naive, and may have unexpected sharp edges such as concurrency issues or performance problems.
The following IQ types are implemented using the sdr.Samples interface. The hz.tools/sdr package contains helpers for conversion between types, and some basic manipulation of IQ streams; a short sketch of working with these types follows the tables below.
IQ Format | hz.tools Name | Underlying Go Type |
---|---|---|
Interleaved uint8 (rtl-sdr) | sdr.SamplesU8 | [][2]uint8 |
Interleaved int8 (hackrf, uhd) | sdr.SamplesI8 | [][2]int8 |
Interleaved int16 (pluto, uhd) | sdr.SamplesI16 | [][2]int16 |
Interleaved float32 (airspy, uhd) | sdr.SamplesC64 | []complex64 |
SDR | Format | RX/TX | State |
---|---|---|---|
rtl | u8 | RX | Good |
HackRF | i8 | RX/TX | Good |
PlutoSDR | i16 | RX/TX | Good |
rtl kerberos | u8 | RX | Old |
uhd | i16/c64/i8 | RX/TX | Good |
airspyhf | c64 | RX | Exp |
Import | What is it? |
---|---|
hz.tools/sdr | Core IQ types, supporting types and implementations that interact with the byte boundary |
hz.tools/sdr/rtl | sdr.Receiver implementation using librtlsdr. |
hz.tools/sdr/rtl/kerberos | Helpers to enable coherent RX using the Kerberos SDR. |
hz.tools/sdr/rtl/e4k | Helpers to interact with the E4000 RTL-SDR dongle. |
hz.tools/sdr/fft | Interfaces for performing an FFT, which are implemented by other packages. |
hz.tools/sdr/rtltcp | sdr.Receiver implementation for rtl_tcp servers. |
hz.tools/sdr/pluto | sdr.Transceiver implementation for the PlutoSDR using libiio. |
hz.tools/sdr/uhd | sdr.Transceiver implementation for UHD radios, specifically the B210 and B200mini. |
hz.tools/sdr/hackrf | sdr.Transceiver implementation for the HackRF using libhackrf. |
hz.tools/sdr/mock | Mock SDR for testing purposes. |
hz.tools/sdr/airspyhf | sdr.Receiver implementation for the AirspyHF+ Discovery with libairspyhf. |
hz.tools/sdr/internal/simd | SIMD helpers for IQ operations, written in Go ASM. This isn't the best to learn from, and it contains pure Go implementations alongside. |
hz.tools/sdr/stream | Common Reader/Writer helpers that operate on IQ streams. |
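To give a feel for the sample types, here's a minimal sketch (not taken from the package docs; it assumes the sdr.Samples interface exposes Format() and Length() accessors) that allocates a complex64 buffer and inspects it through the interface:

```go
package main

import (
	"fmt"

	"hz.tools/sdr"
)

func main() {
	// sdr.SamplesC64 is backed by a plain []complex64 (see the table above),
	// so a buffer can be allocated with make().
	buf := make(sdr.SamplesC64, 1024)

	// Work with the buffer through the sdr.Samples interface.
	// Assumption: Format() and Length() are the accessor names.
	var samples sdr.Samples = buf
	fmt.Printf("format=%v length=%d\n", samples.Format(), samples.Length())
}
```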
The hz.tools/fftw package contains bindings to libfftw3, and implements the hz.tools/sdr/fft.Planner type to transform between the time and frequency domains.
The git repo can be found at
github.com/hztools/go-fftw, and is
importable as hz.tools/fftw.
This is the default throughout most of my codebase, although that default is only expressed at the leaf package: libraries should not hardcode the use of this library, and should instead take an fft.Planner, unless it's used as part of testing. There are a bunch of ways to do an FFT out there; things like clFFT or a pure-Go FFT implementation could be plugged in depending on what's being solved for.
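To make the "take a Planner, don't hardcode fftw" idea concrete, here's a small sketch of the pattern. The Planner signature below is hypothetical (the real one lives in hz.tools/sdr/fft); the point is only the shape of the dependency injection:

```go
package main

import "fmt"

// Planner is a stand-in for hz.tools/sdr/fft.Planner; this signature is
// hypothetical and only illustrates the shape of the pattern.
type Planner func(in, out []complex64) (func() error, error)

// PowerSpectrum is a leaf consumer: it takes whichever Planner the binary
// wires in (fftw.Plan, clFFT, a pure-Go FFT, ...) rather than importing a
// specific FFT implementation itself.
type PowerSpectrum struct {
	run func() error
	out []complex64
}

func NewPowerSpectrum(plan Planner, in []complex64) (*PowerSpectrum, error) {
	out := make([]complex64, len(in))
	run, err := plan(in, out)
	if err != nil {
		return nil, err
	}
	return &PowerSpectrum{run: run, out: out}, nil
}

func main() {
	// A do-nothing "planner" standing in for a real FFT implementation.
	noop := func(in, out []complex64) (func() error, error) {
		return func() error { copy(out, in); return nil }, nil
	}
	ps, err := NewPowerSpectrum(noop, make([]complex64, 8))
	if err != nil {
		panic(err)
	}
	fmt.Println("plan ran, err =", ps.run())
}
```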
The hz.tools/fm and hz.tools/am packages contain demodulators for AM analog radio and FM analog radio. This code is a bit old, so it has a lot of room for cleanup, but it'll do a very basic demodulation of IQ to audio.
The git repos can be found at
github.com/hztools/go-fm and
github.com/hztools/go-am,
and are importable as
hz.tools/fm and
hz.tools/am.
As a bonus, the hz.tools/fm package also contains a modulator, which has been tested on the air and with some of my handheld radios. This code is a bit old, since the hz.tools/fm code is effectively the first IQ processing code I'd ever written, but it still runs and I run it from time to time.
// Basic sketch for playing FM radio using a reader stream from
// an SDR or other IQ stream.
bandwidth := 150 * rf.KHz
reader, err = stream.ConvertReader(reader, sdr.SampleFormatC64)
if err != nil {
    ...
}
demod, err := fm.Demodulate(reader, fm.DemodulatorConfig{
    Deviation:  bandwidth / 2,
    Downsample: 8, // some value here depending on sample rate
    Planner:    fftw.Plan,
})
if err != nil {
    ...
}
speaker, err := pulseaudio.NewWriter(pulseaudio.Config{
    Format:     pulseaudio.SampleFormatFloat32NE,
    Rate:       demod.SampleRate(),
    AppName:    "rf",
    StreamName: "fm",
    Channels:   1,
    SinkName:   "",
})
if err != nil {
    ...
}
buf := make([]float32, 1024*64)
for {
    i, err := demod.Read(buf)
    if err != nil {
        ...
    }
    if i == 0 {
        panic("...")
    }
    if err := speaker.Write(buf[:i]); err != nil {
        ...
    }
}
The hz.tools/rfcap package is the reference implementation of the rfcap spec, and is how I store IQ captures locally and how I send them across a byte boundary.
The git repo can be found at
github.com/hztools/go-rfcap, and is
importable as hz.tools/rfcap.
If you're interested in storing IQ in a way others can use, the better approach is to use SigMF. rfcap exists for cases like using UNIX pipes to move IQ around, through APIs, or when I send IQ data through an OS socket, to ensure the sample format (and other metadata) is communicated with it.

rfcap has a number of limitations; for instance, it cannot express a change in frequency or sample rate during the capture, since the header is fixed at the beginning of the file.
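To make that limitation concrete, here's a purely illustrative sketch (this is not the real rfcap wire format, just the general shape of a fixed-header capture): the metadata is written exactly once up front, so a retune or sample rate change halfway through has nowhere to go.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// captureHeader is illustrative only, not the actual rfcap header layout.
type captureHeader struct {
	SampleRate   uint32 // fixed for the entire capture
	CenterFreqHz uint64 // fixed for the entire capture
}

func main() {
	var buf bytes.Buffer

	// Write the one-and-only header...
	hdr := captureHeader{SampleRate: 2_400_000, CenterFreqHz: 100_000_000}
	if err := binary.Write(&buf, binary.LittleEndian, hdr); err != nil {
		panic(err)
	}

	// ...then nothing but raw IQ samples follow; a change of frequency or
	// sample rate mid-capture cannot be represented in this layout.
	buf.Write(make([]byte, 16)) // stand-in for sample data
	fmt.Println("capture bytes:", buf.Len())
}
```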
After passing the two corresponding certifications, my opinion on cloud operators is that they are very much a step back in the direction of proprietary software. You can rebuild their cloud stack with open source components, but it is also a lot of integration work, similar to using the Linux From Scratch distribution instead of something like Debian. A good middle ground are the OpenShift and OKD Kubernetes distributions, which integrate the most common cloud components but allow an installation on your own hardware or on a cloud provider of your choice.
| AWS | Azure | OpenShift | OpenShift upstream project |
|---|---|---|---|
| CloudTrail | | Kubernetes API Server audit log | Kubernetes |
| CloudWatch | Azure Monitor, Azure Log Analytics | OpenShift Monitoring | Prometheus, Kubernetes Metrics |
| AWS Artifact | | Compliance Operator | OpenSCAP |
| AWS Trusted Advisor | Azure Advisor | Insights | |
| AWS Marketplace | | Red Hat Marketplace | Operator Hub |
| AWS Identity and Access Management (IAM) | Azure Active Directory, Azure AD DS | Red Hat SSO | Keycloak |
| AWS Elastic Beanstalk | Azure App Services | OpenShift Source2Image (S2I) | Source2Image (S2I) |
| AWS S3 | Azure Blob Storage** | ODF Rados Gateway | Rook RGW |
| AWS Elastic Block Storage | Azure Disk Storage | ODF Rados Block Device | Rook RBD |
| AWS Elastic File System | Azure Files | ODF CephFS | Rook CephFS |
| AWS ELB Classic | Azure Load Balancer | MetalLB Operator | MetalLB |
| AWS ELB Application Load Balancer | Azure Application Gateway | OpenShift Router | HAProxy |
| Amazon Simple Notification Service | | OpenShift Streams for Apache Kafka | Apache Kafka |
| Amazon GuardDuty | Microsoft Defender for Cloud | API Server audit log review, ACS Runtime detection | Stackrox |
| Amazon Inspector | Microsoft Defender for Cloud | Quay.io container scanner, ACS Vulnerability Assessment | Clair, Stackrox |
| AWS Lambda | Azure Serverless | OpenShift Serverless* | Knative |
| AWS Key Management System | Azure Key Vault | could be done with HashiCorp Vault | Vault |
| AWS WAF | | NGINX Ingress Controller Operator with ModSecurity | NGINX ModSecurity |
| Amazon ElastiCache | | Redis Enterprise Operator | Redis, memcached as alternative |
| AWS Relational Database Service | Azure SQL | Crunchy Data Operator | PostgreSQL |
| | Azure Arc | OpenShift ACM | Open Cluster Management |
| AWS Scaling Group | Azure Scale Set | OpenShift Autoscaler | OKD Autoscaler |

\* OpenShift Serverless requires the application to be packaged as a container, something AWS Lambda does not require.

\*\* Azure Blob Storage covers the object storage use case of S3, but is itself not S3 compatible.
Publisher: Amazon Original Stories
Copyright: August 2021
ISBN: 1-5420-3272-5
ISBN: 1-5420-3270-9
ISBN: 1-5420-3271-7
ISBN: 1-5420-3273-3
ISBN: 1-5420-3268-7
ISBN: 1-5420-3269-5
Format: Kindle
Pages: 168
The Great War and Modern Memory (1975)
Wartime: Understanding and Behavior in the Second World War (1989)
Paul Fussell
Rather than describe the battles, weapons, geopolitics or big personalities of the two World Wars, Paul Fussell's The Great War and Modern Memory & Wartime are focused instead on how the two wars have been remembered by their everyday participants. Drawing on the memoirs and memories of soldiers and civilians, along with a brief comparison with the actual events that shaped them, Fussell's two books are compassionate, insightful and moving pieces of analysis.
Fussell primarily sets himself against the admixture of nostalgia and trauma that obscures the origins and unimaginable experience of participating in these wars; two wars that were, in his view, a "perceptual and rhetorical scandal from which total recovery is unlikely." He takes particular aim at the dishonesty of hindsight:
For the past fifty years, the Allied war has been sanitised and romanticised almost beyond recognition by the sentimental, the loony patriotic, the ignorant and the bloodthirsty. I have tried to balance the scales. [And] in unbombed America especially, the meaning of the war [seems] inaccessible.

The author does not engage in any of the customary rose-tinted view of war, yet he remains understanding and compassionate towards those who try to locate a reason within what was quite often senseless barbarism. If anything, his despondency and pessimism about the Second World War (the war that Fussell himself fought in) shines through quite acutely, and this is especially the case in what he chooses to quote from others:
"It was common [ ] throughout the [Okinawa] campaign for replacements to get hit before we even knew their names. They came up confused, frightened, and hopeful, got wounded or killed, and went right back to the rear on the route by which they had come, shocked, bleeding, or stiff. They were forlorn figures coming up to the meat grinder and going right back out of it like homeless waifs, unknown and faceless to us, like unread books on a shelf."It would take a rather heartless reader to fail to be sobered by this final simile, and an even colder one to view Fussell's citation of such an emotive anecdote to be manipulative. Still, stories and cruel ironies like this one infuse this often-angry book, but it is not without astute and shrewd analysis as well, especially on the many qualitative differences between the two conflicts that simply cannot be captured by facts and figures alone. For example:
A measure of the psychological distance of the Second [World] War from the First is the rarity, in 1914-1918, of drinking and drunkenness poems.

Indeed so. In fact, what makes Fussell's project so compelling and perhaps even unique is that he uses these non-quantitative measures to try and take stock of what happened. After all, this was a war conducted by humans, not the abstract school of statistics. And what is the value of a list of armaments destroyed by such-and-such a regiment when compared with truly consequential insights into both how the war affected, say, the psychology of postwar literature ("Prolonged trench warfare, whether enacted or remembered, fosters paranoid melodrama, which I take to be a primary mode in modern writing."), the specific words adopted by combatants ("It is a truism of military propaganda that monosyllabic enemies are easier to despise than others") as well as the very grammar of interaction:
The Field Service Post Card [in WW1] has the honour of being the first widespread exemplar of that kind of document which uniquely characterises the modern world: the "Form". [And] as the first widely known example of dehumanised, automated communication, the post card popularised a mode of rhetoric indispensable to the conduct of later wars fought by great faceless conscripted armies.

And this wouldn't be a book review without argument-ending observations that:
Indicative of the German wartime conception [of victory] would be Hitler and Speer's elaborate plans for the ultimate reconstruction of Berlin, which made no provision for a library.

Our myths about the two world wars possess an undisputed power, in part because they contain an essential truth: the atrocities committed by Germany and its allies were not merely extreme or revolting, but their full dimensions (embodied in the Holocaust and the Holodomor) remain essentially inaccessible within our current ideological framework. Yet the two wars are better understood as an abyss in which we were all dragged into the depths of moral depravity, rather than a battle pitched by the forces of light against the forces of darkness. Fussell is one of the few observers that can truly accept and understand this truth and is still able to speak to us cogently on the topic from the vantage point of experience. The Second World War, which looms so large in our contemporary understanding of the modern world (see below), may have been necessary and unavoidable, but Fussell convinces his reader that it was morally complicated "beyond the power of any literary or philosophic analysis to suggest," and that the only way to maintain a naïve belief in the myth that these wars were a Manichaean fight between good and evil is to overlook reality. There are many texts on the two World Wars that can either stir the intellect or move the emotions, but Fussell's two books do both. A uniquely perceptive and intelligent commentary; outstanding.
Longitude (1995)
Dava Sobel

Since Man first decided to sail the oceans, knowing one's location has always been critical. Yet doing so reliably used to be a serious problem: if you didn't know where you were, you were far more likely to die and/or lose your valuable cargo. But whilst finding one's latitude (i.e. your north-south position) had effectively been solved by the beginning of the 17th century, finding one's (east-west) longitude was far from trustworthy in comparison. This book, first published in 1995, is therefore something of an anachronism. As in, we readily use the GPS facilities of our phones today without hesitation, so we find it difficult to imagine a reality in which knowing something fundamental like your own location is essentially unthinkable.

It became clear in the 18th century, though, that in order to accurately determine one's longitude, what you actually needed was an accurate clock. In Longitude, therefore, we read of the remarkable story of John Harrison and his quest to create a timepiece that would not only keep time during a long sea voyage but would survive the rough ocean conditions as well. Self-educated and a carpenter by trade, Harrison made a number of important breakthroughs in keeping accurate time at sea, and Longitude describes his novel breakthroughs in a way that is both engaging and without talking down to the reader. Still, this book covers much more than that, including the development of accurate longitude going hand-in-hand with advancements in cartography as well as in scientific experiments to determine the speed of light: experiments that led to the formulation of quantum mechanics. It also outlines the work being done by Harrison's competitors. 'Competitors' is indeed the correct word here, as Parliament offered a huge prize to whoever could create such a device, and the ramifications of this tremendous financial incentive are an essential part of this story. For the most part, though, Longitude sticks to the story of Harrison and his evolving obsession with creating the perfect timepiece.

Indeed, one reason that Longitude is so resonant with readers is that many of the tropes of the archetypical 'English inventor' are embedded within Harrison himself. That is to say, here is a self-made man pushing against the establishment of the time, with his groundbreaking ideas being underappreciated in his life, or dishonestly purloined by his intellectual inferiors. At the level of allegory, then, I am minded to interpret this portrait of Harrison as a symbolic distillation of postwar Britain: a nation acutely embarrassed by the loss of the Empire that is now repositioning itself as a resourceful but plucky underdog; a country that, with a combination of the brains of boffins and a healthy dose of charisma and PR, can still keep up with the big boys. (It is this same search for postimperial meaning I find in the fiction of John le Carré, and, far more famously, in the James Bond franchise.) All of this is left to the reader, of course, as what makes Longitude singularly compelling is its gentle manner and tone. Indeed, at times it was as if the doyenne of sci-fi Ursula K. Le Guin had a sideline in popular non-fiction. I realise it's a mark of critical distinction to downgrade the importance of popular science in favour of erudite academic texts, but Longitude is ample evidence that so-called 'pop' science need not be patronising or reductive at all.
Closed Chambers: The Rise, Fall, and Future of the Modern Supreme Court (1998)
Edward Lazarus

After the landmark decision by the U.S. Supreme Court in Dobbs v. Jackson Women's Health Organization that ended the Constitutional right to abortion conferred by Roe v Wade, I prioritised a few books in the queue about the judicial branch of the United States. One of these books was Closed Chambers, which attempts to assay, according to its subtitle, "The Rise, Fall and Future of the Modern Supreme Court".

This book is not merely a learned guide to the history and functioning of the Court (although it is completely creditable in this respect); it's actually an 'insider' view of the workings of the institution, as Lazarus was a clerk for Justice Harry Blackmun during the October term of 1988. Lazarus has therefore combined his experience as a clerk and his personal reflections (along with a substantial body of subsequent research) in order to communicate the collapse in comity between the Justices. Part of this book is therefore a pure history of the Court, detailing its important nineteenth-century judgements (such as Dred Scott, which ruled that the Constitution did not consider Blacks to be citizens; and Plessy v. Ferguson, which failed to find protection in the Constitution against racial segregation laws), as well as many twentieth-century cases that touch on the rather technical principle of substantive due process.

Other layers of Lazarus' book are explicitly opinionated, however, and they capture the author's assessment of the Court's actions in the past and present [1998] day. Given the role in which he served at the Court, particular attention is given by Lazarus to the function of its clerks. These are revealed as being far more than the mere amanuenses they were hitherto believed to be. Indeed, the book is potentially unique in its claim that the clerks have played a pivotal role in the deliberations, machinations and eventual rulings of the Court. By implication, then, the clerks have played a crucial role in the internal controversies that surround many of the high-profile Supreme Court decisions: decisions that, to the outsider at least, are presented as disinterested interpretations of the Constitution of the United States. This is of especial importance given that, to Lazarus, "for all the attention we now pay to it, the Court remains shrouded in confusion and misunderstanding."

Throughout his book, Lazarus complicates the commonplace view that the Court is divided into two simple right vs. left political factions, and instead documents an ever-evolving series of loosely held but strongly felt cabals, quid pro quo exchanges, outright equivocation and pure personal prejudices. (The age and concomitant illnesses of the Justices also appears to have a not insignificant effect on the Court's rulings as well.) In other words, Closed Chambers is not a book that will be read in a typical civics class in America, and the only time the book resorts to the customary breathless rhetoric about the US federal government is in its opening chapter:
The Court itself, a Greek-style temple commanding the crest of Capitol Hill, loomed above them in the dim light of the storm. Set atop a broad marble plaza and thirty-six steps, the Court stands in splendid isolation appropriate to its place at the pinnacle of the national judiciary, one of the three independent and "coequal" branches of American government. Once dubbed the Ivory Tower by architecture critics, the Court has a Corinthian colonnade and massive twenty-foot-high bronze doors that guard the single most powerful judicial institution in the Western world. Lights still shone in several offices to the right of the Court's entrance, and [...]

Et cetera, et cetera. But, of course, this encomium to the inherent 'nobility' of the Supreme Court is quickly revealed to be a narrative foil, as Lazarus soon razes this dangerously naïve conception to the ground:
[The] institution is [now] broken into unyielding factions that have largely given up on a meaningful exchange of their respective views or, for that matter, a meaningful explication or defense of their own views. It is of Justices who in many important cases resort to transparently deceitful and hypocritical arguments and factual distortions as they discard judicial philosophy and consistent interpretation in favor of bottom-line results. This is a Court so badly splintered, yet so intent on lawmaking, that shifting 5-4 majorities, or even mere pluralities, rewrite whole swaths of constitutional law on the authority of a single, often idiosyncratic vote. It is also a Court where Justices yield great and excessive power to immature, ideologically driven clerks, who in turn use that power to manipulate their bosses and the institution they ostensibly serve.

Lazarus does not put forward a single, overarching thesis, but in the final chapters, he does suggest a potential future for the Court:
In the short run, the cure for what ails the Court lies solely with the Justices. It is their duty, under the shield of life tenure, to recognize the pathologies affecting their work and to restore the vitality of American constitutionalism. Ultimately, though, the long-term health of the Court depends on our own resolve on whom [we] select to join that institution.

Back in 1998, Lazarus might have had room for this qualified optimism. But from the vantage point of 2022, it appears that the "resolve" of the United States citizenry was not muscular enough to meet his challenge. After all, Lazarus was writing before Bush v. Gore in 2000, which arrogated to the judicial branch the ability to decide a presidential election; the disillusionment of Barack Obama's failure to nominate a replacement for Scalia; and many other missteps in the Court as well. All of which have now been compounded by the Trump administration's appointment of three Republican-friendly justices to the Court, including hypocritically appointing Justice Barrett a mere 38 days before the 2020 election. And, of course, the leaking and ruling in Dobbs v. Jackson, the true extent of which has not yet been felt.

Not a bit of this is Lazarus' fault, of course, but the Court's recent decisions (as well as the liberal hagiographies of 'RBG') must perforce affect one's reading of the concluding chapters. The other slight defect of Closed Chambers is that, whilst it often implies the importance of the federal and state courts within the judiciary, it only briefly positions the Supreme Court's decisions in relation to what was happening in the House, Senate and White House at the time. This seems to be increasingly relevant as time goes on: after all, it seems fairly clear even to this Brit that relying on an activist Supreme Court to enact progressive laws must be interpreted as a failure of the legislative branch to overcome the perennial problems of the filibuster, culture wars and partisan bickering.

Nevertheless, Lazarus' book is in equal parts ambitious, opinionated, scholarly and, dare I admit it, wonderfully gossipy. By juxtaposing history, memoir, and analysis, Closed Chambers combines an exacting evaluation of the Court's decisions with a lively portrait of the intellectual and emotional intensity that has grown within the Supreme Court's pseudo-monastic environment, all while it struggles with the most impactful legal issues of the day. This book is an excellent and well-written achievement that will likely never be repeated, and a must-read for anyone interested in this ever-increasingly important branch of the US government.
Crashed: How a Decade of Financial Crises Changed the World (2018)
Shutdown: How Covid Shook the World's Economy (2021)
Adam Tooze
The economic historian Adam Tooze has often been labelled as an unlikely celebrity, but in the fourteen years since the global financial crisis of 2008, a growing audience has been looking for answers about the various failures of the modern economy. Tooze, a professor of history at New York's Columbia University, has written much that is penetrative and thought-provoking on this topic, and as a result, he has generated something of a cult following amongst economists, historians and the online left.
I actually read two Tooze books this year. The first, Crashed (2018), catalogues the scale of government intervention required to prop up global finance after the 2008 financial crisis, and it characterises the different ways that countries around the world failed to live up to the situation, such as doing far too little, or taking action far too late. The connections between the high-risk subprime loans, credit default swaps and the resulting liquidity crisis in the US in late 2008 are fairly well known today, in part thanks to films such as Adam McKay's 2015 The Big Short and much improved economic literacy in media reportage. But Crashed makes the implicit claim that, whilst the specific and structural origins of the 2008 crisis are worth scrutinising in exacting detail, it is the reaction of states in the months and years after the crash that has been overlooked as a result.
After all, this is a reaction that has not only shaped a new economic order, it has created one that does not fit any conventional idea about the way the world 'ought' to be run. Tooze connects the original American banking crisis to the (multiple) European debt crises, and to a larger crisis of liberalism. Indeed, Tooze somehow manages to cover all these topics and more, weaving in Trump, Brexit and Russia's 2014 annexation of Crimea, as well as the evolving role of China in the post-2008 economic order.
Where Crashed focused on the constellation of consequences that followed the events of 2008, Shutdown is a clear and comprehensive account of the way the world responded to the economic impact of Covid-19. The figures are often jaw-dropping: soon after the disease spread around the world, 95% of the world's economies contracted simultaneously, and at one point, the global economy shrank by approximately 20%. Tooze's keen and sobering analysis of what happened is made all the more remarkable by the fact that it came out whilst the pandemic was still unfolding. In fact, this leads quickly to one of the book's few flaws: by being published so quickly, Shutdown prematurely over-praises China's 'zero Covid' policy, and these remarks will make a reader today squirm in their chair. Still, despite the regularity of these references (after all, mentioning China is very useful when one is directly comparing economic figures in early 2021, for example), these are actually minor blemishes on the book's overall thesis.
That is to say, Shutdown is not merely a retelling of what happened in such-and-such a country during the pandemic; it offers in effect a prediction about what might be coming next. Whilst the economic responses to Covid averted what could easily have been another Great Depression (and thus showed that some lessons from 2008 had been learned), this was only achieved by truly discarding the economic rule book. The by-product of inverting this set of written and unwritten conventions that have governed the world for the past 50 years, this 'Washington consensus' if you will, has yet to be fully felt.
Of course, there are many parallels between these two books by Tooze. Both the liquidity crisis outlined in Crashed and the economic response to Covid in Shutdown exposed the fact that one of the central tenets of the modern economy ie. that financial markets can be trusted to regulate themselves was entirely untrue, and likely was false from the very beginning. And whilst Adam Tooze does not offer a singular piercing insight (conveying a sense of rigorous mastery instead), he may as well be asking whether we're simply going to lurch along from one crisis to the next, relying on the technocrats in power to fix problems when everything blows up again. The answer may very well be yes.
Looking for the Good War: American Amnesia and the Violent Pursuit of Happiness (2021)
Elizabeth D. Samet

Elizabeth D. Samet's Looking for the Good War answers the following question: what would be the result if you asked a professor of English to disentangle the complex mythology we have about WW2 in the context of the recent US exit from Afghanistan? Samet's book acts as a twenty-first-century update of a kind to Paul Fussell's two books (reviewed above), as well as a deeper meditation on the idea that each new war is seen through the lens of the previous one. Indeed, like The Great War and Modern Memory (1975) and Wartime (1989), Samet's book is a perceptive work of demystification, but whilst Fussell seems to have been inspired by his own traumatic war experience, Samet is not only informed by her teaching of West Point military cadets but by the physical and ontological wars that have occurred during her own life as well. A more scholarly and dispassionate text is the result of Samet's relative distance from armed combat, but it doesn't mean Looking for the Good War lacks energy or inspiration. Samet shares John Adams' belief that no political project can entirely shed the innate corruptions of power and ambition, and so it is crucial to analyse and re-analyse the role of WW2 in contemporary American life. She is surely correct that the Second World War has been universally elevated as a special, 'good' war. Even those with exceptionally giddy minds seem to treat WW2 as hallowed:
It is nevertheless telling that one of the few occasions to which Trump responded with any kind of restraint while he was in office was the 75th anniversary of D-Day in 2019.

What is the source of this restraint, and what has nurtured its growth in the eight decades since WW2 began? Samet posits several reasons for this, including the fact that almost all of the media about the Second World War is not only suffused with symbolism and nostalgia but, less obviously, it has been made by people who have no experience of the events that they depict. Take Stephen Ambrose, author of Steven Spielberg's Band of Brothers miniseries: "I was 10 years old when the war ended," Samet quotes Ambrose. "I thought the returning veterans were giants who had saved the world from barbarism. I still think so. I remain a hero worshiper." If Looking for the Good War has a primary thesis, then, it is that childhood hero worship is no basis for a system of government, let alone a crusading foreign policy. There is a straight line (to quote this book's subtitle) from the "American Amnesia" that obscures the reality of war to the "Violent Pursuit of Happiness." Samet's book doesn't merely provide a modern appendix to Fussell's two works, however, as it adds further layers and dimensions he overlooked. For example, Samet provides some excellent insight on the role of Western, gangster and superhero movies, and she is especially good when looking at noir films as a kind of kaleidoscopic response to the Second World War:
Noir is a world ruled by bad decisions but also by bad timing. Chance, which plays such a pivotal role in war, bleeds into this world, too.

Samet rightfully weaves the role of women into the narrative as well. Women in film noir are often celebrated as 'independent' and sassy, correctly reflecting their newly-found independence gained during WW2. But these 'liberated' roles are not exactly a ringing endorsement of this independence: the 'femme fatale' and the 'tart', etc., reflect a kind of conditional freedom permitted to women by a post-War culture which is still wedded to an outmoded honour culture. In effect, far from being novel and subversive, these roles for women actually underwrote the ambient cultural disapproval of women's presence in the workforce. Samet later connects this highly-conditional independence with the liberation of Afghan women, which:
is inarguably one of the more palatable outcomes of our invasion, and the protection of women's rights has been invoked on the right and the left as an argument for staying the course in Afghanistan. How easily consequence is becoming justification. How flattering it will be one day to reimagine it as original objective.

Samet has ensured her book has a predominantly US angle as well, for she ends her book with a chapter on the pseudohistorical Lost Cause of the Civil War. The legacy of the Civil War is still visible in the physical phenomena of Confederate statues, but it also exists in deep-rooted racial injustice that has been shrouded in euphemism and other psychological devices for over 150 years. Samet believes that a key part of what drives the American mythology about the Second World War is the way in which it subconsciously cleanses the horrors of brother-on-brother murder that were seen in the Civil War. This is a book that is not only of interest to historians of the Second World War; it is a work for anyone who wishes to understand almost any American historical event, social issue, politician or movie that has appeared since the end of WW2. That is, for better or worse, everyone on earth.
Mona (2021) Pola Oloixarac Mona is the story of a young woman who has just been nominated for the 'most important literary award in Europe'. Mona sees the nomination as a chance to escape her substance abuse on a Californian campus and so speedily decamps to the small village in the depths of Sweden where the nominees must convene for a week before the overall winner is announced. Mona didn't disappear merely to avoid pharmacological misadventures, though, but also to avoid the growing realisation that she is being treated as something of an anthropological curiosity at her university: a female writer of colour treasured for her flourish of exotic diversity that reflects well upon her department. But Mona is now stuck in the company of her literary competitors who all have now gathered from around the world in order to do what writers do: harbour private resentments, exchange empty flattery, embody the selfsame racialised stereotypes that Mona left the United States to avoid, stab rivals in the back, drink too much, and, of course, go to bed together. But as I read Mona, I slowly started to realise that something else is going on. Why does Mona keep finding traces of violence on her body, the origins of which she cannot or refuses to remember? There is something eerily defensive about her behaviour and sardonic demeanour in general as well. A genre-bending and mind-expanding novel unfolded itself, and, without getting into spoiler territory, Mona concludes with such a surprising ending that, according to Adam Thirlwell:
Perhaps we need to rethink what is meant by a gimmick. If a gimmick is anything that we want to reject as extra or excessive or ill-fitting, then it may be important to ask what inhibitions or arbitrary conventions have made it seem like excess, and to revel in the exorbitant fictional constructions it produces. [...]

Mona is a savage satire of the literary world, but it's also a very disturbing exploration of trauma and violence. The success of the book comes in equal measure from the author's commitment to both ideas, but also from the way the psychological damage component creeps up on you. And, as implied above, the last ten pages are quite literally out of this world.
My Brilliant Friend (2011)
The Story of a New Name (2012)
Those Who Leave and Those Who Stay (2013)
The Story of the Lost Child (2014)
Elena Ferrante
Elena Ferrante's Neapolitan Quartet follows two girls, both brilliant in their own way. Our protagonist-narrator is Elena, a studious girl from the lower rungs of the middle class of Naples who is inspired to be more by her childhood friend, Lila. Lila is, in turn, far more restricted by her poverty and class, but can transcend it at times through her fiery nature, which also brands her as somewhat unique within their inward-looking community. The four books follow the two girls from the perspective of Elena as they grow up together in post-war Italy, where they drift in and out of each other's lives due to the vicissitudes of change and the consequences of choice. All the time this is unfolding, however, the narrative is always slightly charged by the background knowledge revealed on the very first page that Lila will, many years later, disappear from Elena's life.
Whilst the quartet has the formal properties of a bildungsroman, its subject and conception are almost entirely different. In particular, the books are driven far more by character and incident than spectacular adventures in picturesque Italy. In fact, quite the opposite takes place: these are four books where ordinary-seeming occurrences take on an unexpected radiance against a background of poverty, ignorance, violence and other threats, often bringing to mind the films of the Italian neorealism movement. Brilliantly rendered from beginning to end, Ferrante has a seemingly studious eye for interpreting interactions and the psychology of adolescence and friendship. Some utterances (indeed, perhaps even some glances) are dissected at length over multiple pages, something that Vittorio De Sica's classic Bicycle Thieves (1948) could never do.
Potential readers should not take any notice of the saccharine cover illustrations on most editions of the books. The quartet could even win an award for the most misleading artwork, potentially rivalling even Vladimir Nabokov's Lolita. I wouldn't be at all surprised if it is revealed that the drippy illustrations and syrupy blurbs ("a rich, intense and generous-hearted story ...") turn out to be part of a larger metatextual game that Ferrante is playing with her readers. This idiosyncratic view of mine is partially supported by the fact that each of the four books has been given a misleading title, the true ambiguity of which often only becomes clear as each of the four books comes into sharper focus.
Readers of the quartet often fall into debating which is the best of the four. I've heard from more than one reader that one has 'too much Italian politics' and another doesn't have enough 'classic' Lina moments. The first book then possesses the twin advantages of both establishing the environs and finishing with a breathtaking ending that is both satisfying and a cliffhanger as well; but does this make it 'the best'? I prefer to liken the quartet to the different seasons of The Wire (2002-2008) where, personal favourites and preferences aside, although each season is undoubtedly unique, it would take a certain kind of narrow-minded view of art to make the claim that, say, series one of The Wire is 'the best' or that the season that focuses on the Baltimore docks 'is boring'. Not to sound like a neo-Wagnerian, but each of them adds to the final result in its own way. That is to say, both The Wire and the Neapolitan Quartet achieve the rare feat of making the magisterial simultaneously intimate.
Out There: Stories (2022)
Kate Folk

Out There is a riveting collection of disturbing short stories by first-time author Kate Folk. The title story, which first appeared in the New Yorker in early 2020, imagines a near-future setting where a group of uncannily handsome artificial men called 'blots' have arrived on the San Francisco dating scene with the secret mission of sleeping with women, before stealing their personal data from their laptops and phones and then (quite literally) evaporating into thin air. Folk's satirical style is not at all didactic, so it rarely feels like she is making her points in a pedantic manner. But it's clear that the narrator of Out There is recounting her frustration with online dating in a way that will resonate with anyone who's spent time with dating apps or indeed the contemporary hyper-centralised platform-based internet in general. Part social satire, part ghost story and part comic tale, the blurring of the lines between these factors is only one of the things that makes these stories so compelling. But whilst Folk constructs crazy scenarios and intentionally strange worlds, she also manages to populate them with characters that feel real and genuinely sympathetic. Indeed, I challenge you not to feel some empathy for the 'blot' in the companion story Big Sur which concludes the collection, and it complicates any primary-coloured view of the dating world as consisting entirely of predatory men. And all of this is leavened with a few stories that are just plain surreal. I don't know what the deal is with Dating a Somnambulist (available online on Hobart Pulp), but I know that I like it.
Solaris (1961)
Stanislaw Lem

When Kelvin arrives at the planet Solaris to study the strange ocean that covers its surface, instead of finding an entirely physical scientific phenomenon, he soon discovers a previously unconscious memory embodied in the physical manifestation of a long-dead lover. The other scientists on the space station slowly reveal that they are also plagued with their own repressed corporeal memories. Many theories are put forward as to why all this is occurring, including the idea that Solaris is a massive brain that creates these incarnate memories. Yet if that is the case, the planet's purpose in doing so is entirely unknown, forcing the scientists to shift focus and wonder whether they can truly understand the universe without first understanding what lies within their own minds and in their desires. This would be an interesting outline for any good science fiction book, but one of the great strengths of Solaris is not only that it withholds from the reader why the planet is doing anything it does, but also that the book is so forcefully didactic in its dislike of the hubris, destructiveness and colonial thinking that can accompany scientific exploration. In one of its most vitriolic passages, Lem's own anger might be reaching out to the reader:
We are humanitarian and chivalrous; we don't want to enslave other races, we simply want to bequeath them our values and take over their heritage in exchange. We think of ourselves as the Knights of the Holy Contact. This is another lie. We are only seeking Man. We have no need of other worlds. We need mirrors. We don't know what to do with other worlds. A single world, our own, suffices us; but we can't accept it for what it is. We are searching for an ideal image of our own world: we go in quest of a planet, of a civilisation superior to our own, but developed on the basis of a prototype of our primaeval past. At the same time, there is something inside us that we don't like to face up to, from which we try to protect ourselves, but which nevertheless remains, since we don't leave Earth in a state of primal innocence. We arrive here as we are in reality, and when the page is turned, and that reality is revealed to us (that part of our reality that we would prefer to pass over in silence), then we don't like it anymore.

An overwhelming preoccupation with this idea infuses Solaris, and it turns out to be a common theme in a lot of Lem's work of this period, such as in his 1959 'anti-police procedural' The Investigation. Perhaps it is not a dislike of exploration in general or the modern scientific method in particular, but rather a savage critique of the arrogance and self-assuredness that accompanies most forms of scientific positivism, or at least pursuits that cloak themselves under the guise of being a laudatory 'scientific' pursuit:
Man has gone out to explore other worlds and other civilizations without having explored his own labyrinth of dark passages and secret chambers and without finding what lies behind doorways that he himself has sealed.

I doubt I need to cite specific instances of contemporary scientific pursuits that might meet Lem's punishing eye today, and the fact that his critique works both in 2022 and 1961 perhaps tells us more about the human condition than we'd care to know. Another striking thing about Solaris isn't just the specific Star Trek and Stargate SG-1 episodes that I retrospectively realised were purloined from the book, but that almost the entire register of Star Trek: The Next Generation in particular seems to be rehearsed here. That is to say, TNG presents itself as hard and fact-based 'sci-fi' on the surface, but, at its core, there are often existential and sometimes quite enormously emotionally devastating human themes being discussed, such as memory, loss and grief. To take one example from many, the painful memories that the planet Solaris physically materialises in effect ask us to seriously consider what is actually taking place when we 'love' another person: is it merely another 'mirror' of ourselves? (And, if that is the case, is that... bad?) It would be ahistorical to claim that all popular science fiction today can be found rehearsed in Solaris, but perhaps it isn't too much of a stretch:
[Solaris] renders unnecessary any more alien stories. Nothing further can be said on this topic [...] Possibly, it can be said that when one feels the urge for such a thing, one should simply reread Solaris and learn its lessons again. (Kim Stanley Robinson)

I could go on praising this book for quite some time; perhaps by discussing the extreme framing devices used within the book: at one point, the book diverges into a lengthy bibliography of fictional books-within-the-book, each encapsulating a different theory about what the mechanics and/or function of Solaris is, thereby demonstrating that 'Solaris studies', as it is called within the world of the book, has been going on for years with no tangible results, which actually leads to extreme embarrassment and then a deliberate and willful blindness to the 'Solaris problem' on the part of the book's scientific community. But I'll leave it all here before this review gets too long... Highly recommended, and a likely reread in 2023.
Brokeback Mountain (1997)
Annie Proulx

Brokeback Mountain began as a short story by American author Annie Proulx which appeared in the New Yorker in 1997, although it is now more famous for the 2005 film adaptation directed by Taiwanese filmmaker Ang Lee. Both versions follow two young men who are hired for the summer to look after sheep at a range under the 'Brokeback' mountain in Wyoming. Unexpectedly, however, they form an intense emotional and sexual attachment, yet life intervenes and demands they part ways at the end of the summer. Over the next twenty years, though, as their individual lives play out with marriages, children and jobs, they continue reuniting for brief albeit secret liaisons on camping trips in remote settings. There's no feigned shyness or self-importance in Brokeback Mountain, just a close, compassionate and brutally honest observation of a doomed relationship and a bone-deep feeling for the hardscrabble life in the post-War West. To my mind, very few books have captured so acutely the desolation of a frustrated and repressed passion, as well as the particular flavour of undirected anger that can accompany this kind of yearning. That the original novella does all this in such a beautiful way (and without the crutch of the Wyoming landscape to look at) is a tribute to Proulx's skills as a writer. Indeed, even without the devastating emotional undertones, Proulx's descriptions of the mountains and scree of the West are likely worth the read alone.
Luster (2020)
Raven Leilani

Edie is a young Black woman living in New York whose life seems to be spiralling out of control. She isn't good at making friends, her career is going nowhere, and she has no close family to speak of either. She is, thus, your typical NYC millennial today, albeit seen through a lens of Blackness that complicates any reductive view of her privilege or minority status. A representative paragraph might communicate the simmering tone:
Before I start work, I browse through some photos of friends who are doing better than me, then an article on a black teenager who was killed on 115th for holding a weapon later identified as a showerhead, then an article on a black woman who was killed on the Grand Concourse for holding a weapon later identified as a cell phone, then I drown myself in the comments section and do some online shopping, by which I mean I put four dresses in my cart as a strictly theoretical exercise and then let the page expire.

She starts a sort-of affair with an older white man who has an affluent lifestyle in nearby New Jersey. Eric (or so he claims) has agreed upon an 'open relationship' with his wife, but Edie is far too inappropriate and disinhibited to respect any boundaries that Eric sets for her, and so Edie soon becomes deeply entangled in Eric's family life. It soon turns out that Eric and his wife have a twelve-year-old adopted daughter, Akila, who is also (wait for it) Black. Akila has been with Eric's family for two years now and they aren't exactly coping well together. They don't even know how to help her to manage her own hair, let alone deal with structural racism. Yet despite how dark the book's general demeanour is, there are faint glimmers of redemption here and there. Realistic almost to the end, Edie might finally realise what's important in her life, but it would be a stretch to say that she achieves it by the final page. Although the book is full of acerbic remarks on almost any topic (Dogs: "We made them needy and physically unfit. They used to be wolves, now they are pugs with asthma."), it is the comments on contemporary race relations that are most critically insightful. Indeed, unsentimental, incisive and funny, Luster had much of what I like in Colson Whitehead's books at times, but I can't remember a book so frantically fast-paced as this since the Booker-prize winning The Sellout by Paul Beatty or Sam Tallent's Running the Light.
Series: Magic of the Lost #1
Publisher: Orbit
Copyright: March 2021
ISBN: 0-316-54267-9
Format: Kindle
Pages: 490
Originally we had a NodeJS API with endpoints to upload files and store them on S3 compatible services that were later accessed via HTTPS, but the requirements changed and we needed to be able to publish folders instead of individual files using their original names and apply access restrictions using our API.
Thinking about our requirements, the use of a regular filesystem to keep the files and folders was a good option, as uploading and serving files is simple. For the upload I decided to use the sftp protocol, mainly because I already had an sftp container image based on mysecureshell prepared; once we settled on that, we added sftp support to the API server and configured it to upload the files to our server instead of using S3 buckets.
To publish the files we added an nginx container configured to work as a reverse proxy that uses the ngx_http_auth_request_module to validate access to the files (the subrequest is configurable; in our deployment we have configured it to call our API to check if the user can access a given URL).
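As a rough illustration of that auth_request flow (a hand-written sketch, not our actual configuration; the /_auth location name and the API URL are made up), the nginx side looks something like this:

```nginx
# Sketch only: every request for a file triggers a subrequest to the API,
# and the file is served from the shared volume only if the API says OK.
location / {
    auth_request /_auth;
    root         /sftp/data/scs;   # files uploaded through the sftp container
}

location = /_auth {
    internal;
    proxy_pass              http://api-service/auth/check;  # hypothetical endpoint
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
```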
Finally we added a third container when we needed to execute some tasks
directly on the filesystem (using kubectl exec
with the existing containers
did not seem a good idea, as that is not supported by CronJobs
objects, for
example).
The solution we found, avoiding the NIH Syndrome (i.e. writing our own tool), was to use the webhook tool to provide the endpoints to call the scripts; for now we have three, including one that operates on a given PATH and one to hardlink all the files that are identical on the filesystem.

The mysecureshell container can be used to provide an sftp service with multiple users (although the files are owned by the same UID and GID) using standalone containers (launched with docker or podman) or in an orchestration system like kubernetes, as we are going to do here.
The image is generated using the following Dockerfile
:
ARG ALPINE_VERSION=3.16.2
FROM alpine:$ALPINE_VERSION as builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN apk update &&\
apk add --no-cache alpine-sdk git musl-dev &&\
git clone https://github.com/sto/mysecureshell.git &&\
cd mysecureshell &&\
./configure --prefix=/usr --sysconfdir=/etc --mandir=/usr/share/man \
 --localstatedir=/var --with-shutfile=/var/lib/misc/sftp.shut --with-debug=2 &&\
make all && make install &&\
rm -rf /var/cache/apk/*
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
COPY --from=builder /usr/bin/mysecureshell /usr/bin/mysecureshell
COPY --from=builder /usr/bin/sftp-* /usr/bin/
RUN apk update &&\
apk add --no-cache openssh shadow pwgen &&\
sed -i -e "s|^.*\(AuthorizedKeysFile\).*$|\1 /etc/ssh/auth_keys/%u|" \
 /etc/ssh/sshd_config &&\
mkdir /etc/ssh/auth_keys &&\
cat /dev/null > /etc/motd &&\
add-shell '/usr/bin/mysecureshell' &&\
rm -rf /var/cache/apk/*
COPY bin/* /usr/local/bin/
COPY etc/sftp_config /etc/ssh/
COPY entrypoint.sh /
EXPOSE 22
VOLUME /sftp
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]
The /etc/sftp_config file is used to configure the mysecureshell server to have all the user homes under /sftp/data, to only allow them to see the files under their home directories as if they were at the root of the server, and to close idle connections after 5m of inactivity:
# Default mysecureshell configuration
<Default>
# All users will have access to their home directory under /sftp/data
Home /sftp/data/$USER
# Log to a file inside /sftp/logs/ (only works when the directory exists)
LogFile /sftp/logs/mysecureshell.log
# Force users to stay in their home directory
StayAtHome true
# Hide Home PATH, it will be shown as /
VirtualChroot true
# Hide real file/directory owner (just change displayed permissions)
DirFakeUser true
# Hide real file/directory group (just change displayed permissions)
DirFakeGroup true
# We do not want users to keep their idle connections open forever
IdleTimeOut 5m
</Default>
# vim: ts=2:sw=2:et
The entrypoint.sh script is the one responsible for preparing the container for the users included on the /secrets/user_pass.txt file (it creates the users with their HOME directories under /sftp/data and a /bin/false shell and creates the key files from /secrets/user_keys.txt if available).
The script expects a couple of environment variables:

- SFTP_UID: UID used to run the daemon and for all the files; it has to be different than 0 (all the files managed by this daemon are going to be owned by the same user and group, even if the remote users are different).
- SFTP_GID: GID used to run the daemon and for all the files; it has to be different than 0.

It also uses the SSH_PORT and SSH_PARAMS values if present.
It also requires the following files (they can be mounted as secrets in
kubernetes):
- /secrets/host_keys.txt: Text file containing the ssh server keys in mime format; the file is processed using the reformime utility (the one included on busybox) and can be generated using the gen-host-keys script included on the container (it uses ssh-keygen and makemime).
- /secrets/user_pass.txt: Text file containing lines of the form username:password_in_clear_text (only the users included on this file are available on the sftp server; in fact in our deployment we use only the scs user for everything).
- /secrets/user_keys.txt: Text file that contains lines of the form username:public_ssh_ed25519_or_rsa_key; the public keys are installed on the server and can be used to log into the sftp server if the username exists on the user_pass.txt file.

The contents of the entrypoint.sh script are:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Expects SSH_UID & SSH_GID on the environment and uses the value of the
# SSH_PORT & SSH_PARAMS variables if present
# SSH_PARAMS
SSH_PARAMS="-D -e -p $ SSH_PORT:=22 $ SSH_PARAMS "
# Fixed values
# DIRECTORIES
HOME_DIR="/sftp/data"
CONF_FILES_DIR="/secrets"
AUTH_KEYS_PATH="/etc/ssh/auth_keys"
# FILES
HOST_KEYS="$CONF_FILES_DIR/host_keys.txt"
USER_KEYS="$CONF_FILES_DIR/user_keys.txt"
USER_PASS="$CONF_FILES_DIR/user_pass.txt"
USER_SHELL_CMD="/usr/bin/mysecureshell"
# TYPES
HOST_KEY_TYPES="dsa ecdsa ed25519 rsa"
# ---------
# FUNCTIONS
# ---------
# Validate HOST_KEYS, USER_PASS, SFTP_UID and SFTP_GID
_check_environment() {
# Check the ssh server keys ... we don't boot if we don't have them
if [ ! -f "$HOST_KEYS" ]; then
cat <<EOF
We need the host keys on the '$HOST_KEYS' file to proceed.
Call the 'gen-host-keys' script to create and export them on a mime file.
EOF
exit 1
fi
# Check that we have users ... if we don't we can't continue
if [ ! -f "$USER_PASS" ]; then
cat <<EOF
We need at least the '$USER_PASS' file to provision users.
Call the 'gen-users-tar' script to create a tar file to create an archive that
contains public and private keys for users, a 'user_keys.txt' with the public
keys of the users and a 'user_pass.txt' file with random passwords for them
(pass the list of usernames to it).
EOF
exit 1
fi
# Check SFTP_UID
if [ -z "$SFTP_UID" ]; then
echo "The 'SFTP_UID' can't be empty, pass a 'GID'."
exit 1
fi
if [ "$SFTP_UID" -eq "0" ]; then
echo "The 'SFTP_UID' can't be 0, use a different 'UID'"
exit 1
fi
# Check SFTP_GID
if [ -z "$SFTP_GID" ]; then
echo "The 'SFTP_GID' can't be empty, pass a 'GID'."
exit 1
fi
if [ "$SFTP_GID" -eq "0" ]; then
echo "The 'SFTP_GID' can't be 0, use a different 'GID'"
exit 1
fi
}
# Adjust ssh host keys
_setup_host_keys() {
opwd="$(pwd)"
tmpdir="$(mktemp -d)"
cd "$tmpdir"
ret="0"
reformime <"$HOST_KEYS" ret="1"
for kt in $HOST_KEY_TYPES; do
key="ssh_host_$ kt _key"
pub="ssh_host_$ kt _key.pub"
if [ ! -f "$key" ]; then
echo "Missing '$key' file"
ret="1"
fi
if [ ! -f "$pub" ]; then
echo "Missing '$pub' file"
ret="1"
fi
if [ "$ret" -ne "0" ]; then
continue
fi
cat "$key" >"/etc/ssh/$key"
chmod 0600 "/etc/ssh/$key"
chown root:root "/etc/ssh/$key"
cat "$pub" >"/etc/ssh/$pub"
chmod 0600 "/etc/ssh/$pub"
chown root:root "/etc/ssh/$pub"
done
cd "$opwd"
rm -rf "$tmpdir"
return "$ret"
# Create users
_setup_user_pass()
opwd="$(pwd)"
tmpdir="$(mktemp -d)"
cd "$tmpdir"
ret="0"
[ -d "$HOME_DIR" ] mkdir "$HOME_DIR"
# Make sure the data dir can be managed by the sftp user
chown "$SFTP_UID:$SFTP_GID" "$HOME_DIR"
# Allow the user (and root) to create directories inside the $HOME_DIR, if
# we don't allow it the directory creation fails on EFS (AWS)
chmod 0755 "$HOME_DIR"
# Create users
echo "sftp:sftp:$SFTP_UID:$SFTP_GID:::/bin/false" >"newusers.txt"
sed -n "/^[^#]/ s/:/ /p " "$USER_PASS" while read -r _u _p; do
echo "$_u:$_p:$SFTP_UID:$SFTP_GID::$HOME_DIR/$_u:$USER_SHELL_CMD"
done >>"newusers.txt"
newusers --badnames newusers.txt
# Disable write permission on the directory to forbid remote sftp users to
# remove their own root dir (they have already done it); we adjust that
# here to avoid issues with EFS (see before)
chmod 0555 "$HOME_DIR"
# Clean up the tmpdir
cd "$opwd"
rm -rf "$tmpdir"
return "$ret"
# Adjust user keys
_setup_user_keys()
if [ -f "$USER_KEYS" ]; then
sed -n "/^[^#]/ s/:/ /p " "$USER_KEYS" while read -r _u _k; do
echo "$_k" >>"$AUTH_KEYS_PATH/$_u"
done
fi
}
# Main function
exec_sshd() {
_check_environment
_setup_host_keys
_setup_user_pass
_setup_user_keys
echo "Running: /usr/sbin/sshd $SSH_PARAMS"
# shellcheck disable=SC2086
exec /usr/sbin/sshd -D $SSH_PARAMS
}
# ----
# MAIN
# ----
case "$1" in
"server") exec_sshd ;;
*) exec "$@" ;;
esac
# vim: ts=2:sw=2:et
The gen-host-keys script included on the image can be used to generate the host_keys.txt file as follows:
$ docker run --rm stodh/mysecureshell gen-host-keys > host_keys.txt
The code of the gen-host-keys script is as follows:
#!/bin/sh
set -e
# Generate new host keys
ssh-keygen -A >/dev/null
# Replace hostname
sed -i -e 's/@.*$/@mysecureshell/' /etc/ssh/ssh_host_*_key.pub
# Print in mime format (stdout)
makemime /etc/ssh/ssh_host_*
# vim: ts=2:sw=2:et
The gen-users-tar script creates a .tar file that contains auth data for the list of usernames passed to it (the file contains a user_pass.txt file with random passwords for the users, public and private ssh keys for them and the user_keys.txt file that matches the generated keys).
To generate a tar file for the user scs we can execute the following:
$ docker run --rm stodh/mysecureshell gen-users-tar scs > /tmp/scs-users.tar
To check the contents of the tar file and the user_pass.txt file we can do:
$ tar tvf /tmp/scs-users.tar
-rw-r--r-- root/root 21 2022-09-11 15:55 user_pass.txt
-rw-r--r-- root/root 822 2022-09-11 15:55 user_keys.txt
-rw------- root/root 387 2022-09-11 15:55 id_ed25519-scs
-rw-r--r-- root/root 85 2022-09-11 15:55 id_ed25519-scs.pub
-rw------- root/root 3357 2022-09-11 15:55 id_rsa-scs
-rw------- root/root 3243 2022-09-11 15:55 id_rsa-scs.pem
-rw-r--r-- root/root 729 2022-09-11 15:55 id_rsa-scs.pub
$ tar xfO /tmp/scs-users.tar user_pass.txt
scs:20JertRSX2Eaar4x
The code of the gen-users-tar script is the following:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
USER_KEYS_FILE="user_keys.txt"
USER_PASS_FILE="user_pass.txt"
# ---------
# MAIN CODE
# ---------
# Generate user passwords and keys, return 1 if no username is received
if [ "$#" -eq "0" ]; then
return 1
fi
opwd="$(pwd)"
tmpdir="$(mktemp -d)"
cd "$tmpdir"
for u in "$@"; do
ssh-keygen -q -a 100 -t ed25519 -f "id_ed25519-$u" -C "$u" -N ""
ssh-keygen -q -a 100 -b 4096 -t rsa -f "id_rsa-$u" -C "$u" -N ""
# Legacy RSA private key format
cp -a "id_rsa-$u" "id_rsa-$u.pem"
ssh-keygen -q -p -m pem -f "id_rsa-$u.pem" -N "" -P "" >/dev/null
chmod 0600 "id_rsa-$u.pem"
echo "$u:$(pwgen -s 16 1)" >>"$USER_PASS_FILE"
echo "$u:$(cat "id_ed25519-$u.pub")" >>"$USER_KEYS_FILE"
echo "$u:$(cat "id_rsa-$u.pub")" >>"$USER_KEYS_FILE"
done
tar cf - "$USER_PASS_FILE" "$USER_KEYS_FILE" id_* 2>/dev/null
cd "$opwd"
rm -rf "$tmpdir"
# vim: ts=2:sw=2:et
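As a usage sketch (the server address and port are placeholders, not part of the original setup), the generated private key can be extracted from the tar file and used to log into the sftp service:

$ tar xf /tmp/scs-users.tar id_ed25519-scs
$ chmod 0600 id_ed25519-scs
$ sftp -i id_ed25519-scs -P 2022 scs@sftp.example.com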
The nginx-scs container is generated using the following Dockerfile:
ARG NGINX_VERSION=1.23.1
FROM nginx:$NGINX_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN rm -f /docker-entrypoint.d/*
COPY docker-entrypoint.d/* /docker-entrypoint.d/
This image is prepared by removing the docker-entrypoint.d scripts from the standard image and adding a new one that configures the web server as we want using a couple of environment variables:
- AUTH_REQUEST_URI: URL to use for the auth_request; if the variable is not found on the environment auth_request is not used.
- HTML_ROOT: Base directory of the web server; if not passed the default /usr/share/nginx/html of the standard nginx image is used.

The contents of the configuration script are:
#!/bin/sh
# Replace the default.conf nginx file by our own version.
set -e
if [ -z "$HTML_ROOT" ]; then
HTML_ROOT="/usr/share/nginx/html"
fi
if [ "$AUTH_REQUEST_URI" ]; then
cat >/etc/nginx/conf.d/default.conf <<EOF
server {
  listen 80;
  server_name localhost;
  location / {
    auth_request /.auth;
    root $HTML_ROOT;
    index index.html index.htm;
  }
  location /.auth {
    internal;
    proxy_pass $AUTH_REQUEST_URI;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI \$request_uri;
  }
  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
EOF
else
cat >/etc/nginx/conf.d/default.conf <<EOF
server {
  listen 80;
  server_name localhost;
  location / {
    root $HTML_ROOT;
    index index.html index.htm;
  }
  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
EOF
fi
# vim: ts=2:sw=2:et
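As an illustration of how the variables are used, a hedged standalone run of this image (sharing the data volume from the earlier sftp example and leaving AUTH_REQUEST_URI unset, so no access control is applied) could be:

$ docker run -d --name nginx-scs -p 8080:80 \
    -e HTML_ROOT=/sftp/data \
    -v "$(pwd)/data:/sftp:ro" \
    stodh/nginx-scs:latest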
On our deployment we use the /sftp/data or /sftp/data/scs folder as the root of the web published by this container and create an Ingress object to provide access to it outside of our kubernetes cluster.

The webhook-scs container is generated using the following Dockerfile:
ARG ALPINE_VERSION=3.16.2
ARG GOLANG_VERSION=alpine3.16
FROM golang:$GOLANG_VERSION AS builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
ENV WEBHOOK_VERSION 2.8.0
ENV WEBHOOK_PR 549
ENV S3FS_VERSION v1.91
WORKDIR /go/src/github.com/adnanh/webhook
RUN apk update &&\
apk add --no-cache -t build-deps curl libc-dev gcc libgcc patch
RUN curl -L --silent -o webhook.tar.gz\
 https://github.com/adnanh/webhook/archive/${WEBHOOK_VERSION}.tar.gz &&\
 tar xzf webhook.tar.gz --strip 1 &&\
 curl -L --silent -o ${WEBHOOK_PR}.patch\
 https://patch-diff.githubusercontent.com/raw/adnanh/webhook/pull/${WEBHOOK_PR}.patch &&\
 patch -p1 < ${WEBHOOK_PR}.patch &&\
 go get -d &&\
 go build -o /usr/local/bin/webhook
WORKDIR /src/s3fs-fuse
RUN apk update &&\
apk add ca-certificates build-base alpine-sdk libcurl automake autoconf\
libxml2-dev libressl-dev mailcap fuse-dev curl-dev
RUN curl -L --silent -o s3fs.tar.gz\
https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/$S3FS_VERSION.tar.gz &&\
tar xzf s3fs.tar.gz --strip 1 &&\
./autogen.sh &&\
./configure --prefix=/usr/local &&\
make -j && \
make install
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
WORKDIR /webhook
RUN apk update &&\
apk add --no-cache ca-certificates mailcap fuse libxml2 libcurl libgcc\
libstdc++ rsync util-linux-misc &&\
rm -rf /var/cache/apk/*
COPY --from=builder /usr/local/bin/webhook /usr/local/bin/webhook
COPY --from=builder /usr/local/bin/s3fs /usr/local/bin/s3fs
COPY entrypoint.sh /
COPY hooks/* ./hooks/
EXPOSE 9000
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]
As can be seen, to build the webhook binary we apply the PATCH from pull request 549 (the WEBHOOK_PR argument) against a released version of the source instead of creating a fork.
The entrypoint.sh
script is used to generate the webhook
configuration file
for the existing hooks
using environment variables (basically the
WEBHOOK_WORKDIR
and the *_TOKEN
variables) and launch the webhook
service:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
WEBHOOK_BIN="$ WEBHOOK_BIN:-/webhook/hooks "
WEBHOOK_YML="$ WEBHOOK_YML:-/webhook/scs.yml "
WEBHOOK_OPTS="$ WEBHOOK_OPTS:--verbose "
# ---------
# FUNCTIONS
# ---------
print_du_yml() {
cat <<EOF
- id: du
execute-command: '$WEBHOOK_BIN/du.sh'
command-working-directory: '$WORKDIR'
response-headers:
- name: 'Content-Type'
value: 'application/json'
http-methods: ['GET']
include-command-output-in-response: true
include-command-output-in-response-on-error: true
pass-arguments-to-command:
- source: 'url'
name: 'path'
pass-environment-to-command:
- source: 'string'
envname: 'OUTPUT_FORMAT'
name: 'json'
EOF
}
print_hardlink_yml() {
cat <<EOF
- id: hardlink
execute-command: '$WEBHOOK_BIN/hardlink.sh'
command-working-directory: '$WORKDIR'
http-methods: ['GET']
include-command-output-in-response: true
include-command-output-in-response-on-error: true
EOF
}
print_s3sync_yml() {
cat <<EOF
- id: s3sync
execute-command: '$WEBHOOK_BIN/s3sync.sh'
command-working-directory: '$WORKDIR'
http-methods: ['POST']
include-command-output-in-response: true
include-command-output-in-response-on-error: true
pass-environment-to-command:
- source: 'payload'
envname: 'AWS_KEY'
name: 'aws.key'
- source: 'payload'
envname: 'AWS_SECRET_KEY'
name: 'aws.secret_key'
- source: 'payload'
envname: 'S3_BUCKET'
name: 's3.bucket'
- source: 'payload'
envname: 'S3_REGION'
name: 's3.region'
- source: 'payload'
envname: 'S3_PATH'
name: 's3.path'
- source: 'payload'
envname: 'SCS_PATH'
name: 'scs.path'
stream-command-output: true
EOF
}
print_token_yml() {
if [ "$1" ]; then
cat << EOF
trigger-rule:
match:
type: 'value'
value: '$1'
parameter:
source: 'header'
name: 'X-Webhook-Token'
EOF
fi
}
exec_webhook() {
# Validate WORKDIR
if [ -z "$WEBHOOK_WORKDIR" ]; then
echo "Must define the WEBHOOK_WORKDIR variable!" >&2
exit 1
fi
WORKDIR="$(realpath "$WEBHOOK_WORKDIR" 2>/dev/null)" true
if [ ! -d "$WORKDIR" ]; then
echo "The WEBHOOK_WORKDIR '$WEBHOOK_WORKDIR' is not a directory!" >&2
exit 1
fi
# Get TOKENS, if the DU_TOKEN or HARDLINK_TOKEN is defined that is used, if
# not if the COMMON_TOKEN that is used and in other case no token is checked
# (that is the default)
DU_TOKEN="$ DU_TOKEN:-$COMMON_TOKEN "
HARDLINK_TOKEN="$ HARDLINK_TOKEN:-$COMMON_TOKEN "
S3_TOKEN="$ S3_TOKEN:-$COMMON_TOKEN "
# Create webhook configuration
{
  print_du_yml
  print_token_yml "$DU_TOKEN"
  echo ""
  print_hardlink_yml
  print_token_yml "$HARDLINK_TOKEN"
  echo ""
  print_s3sync_yml
  print_token_yml "$S3_TOKEN"
} >"$WEBHOOK_YML"
# Run the webhook command
# shellcheck disable=SC2086
exec webhook -hooks "$WEBHOOK_YML" $WEBHOOK_OPTS
}
# ----
# MAIN
# ----
case "$1" in
"server") exec_webhook ;;
*) exec "$@" ;;
esac
The entrypoint.sh script generates the configuration file for the webhook server by calling functions that print a yaml section for each hook and optionally adds rules to validate access to them, comparing the value of a X-Webhook-Token header against predefined values.
The expected token values are taken from environment variables: we can define a token variable for each hook (DU_TOKEN, HARDLINK_TOKEN or S3_TOKEN) and a fallback value (COMMON_TOKEN); if no token variable is defined for a hook no check is done and everybody can call it.
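For example, assuming the container was started with DU_TOKEN (or COMMON_TOKEN) set to the value secret and that the webhook port (9000) is forwarded to localhost as we do later in the post, a call to the du hook would need to include the header (a sketch, not part of the deployment itself):

$ curl -s -H "X-Webhook-Token: secret" "http://localhost:9000/hooks/du?path=."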
The Hook Definition documentation explains the options you can use for each hook; the ones we have right now do the following:

- du: runs on the $WORKDIR directory, passes as first argument to the script the value of the path query parameter and sets the variable OUTPUT_FORMAT to the fixed value json (we use that to print the output of the script in JSON format instead of text).
- hardlink: runs on the $WORKDIR directory and takes no parameters.
- s3sync: runs on the $WORKDIR directory and sets a lot of environment variables from values read from the JSON encoded payload sent by the caller (all the values must be sent by the caller even if they are assigned an empty value; if they are missing the hook fails without calling the script); we also set the stream-command-output value to true to make the script show its output as it is working (we patched the webhook source to be able to use this option).

The du hook script checks if the argument passed is a directory, computes its size using the du command and prints the results in text format or as a JSON dictionary:
#!/bin/sh
set -e
# Script to print disk usage for a PATH inside the scs folder
# ---------
# FUNCTIONS
# ---------
print_error() {
if [ "$OUTPUT_FORMAT" = "json" ]; then
echo " \"error\":\"$*\" "
else
echo "$*" >&2
fi
exit 1
}
usage() {
if [ "$OUTPUT_FORMAT" = "json" ]; then
echo " \"error\":\"Pass arguments as '?path=XXX\" "
else
echo "Usage: $(basename "$0") PATH" >&2
fi
exit 1
}
# ----
# MAIN
# ----
if [ "$#" -eq "0" ] [ -z "$1" ]; then
usage
fi
if [ "$1" = "." ]; then
DU_PATH="./"
else
DU_PATH="$(find . -name "$1" -mindepth 1 -maxdepth 1)" true
fi
if [ -z "$DU_PATH" ] [ ! -d "$DU_PATH/." ]; then
print_error "The provided PATH ('$1') is not a directory"
fi
# Print disk usage in bytes for the given PATH
OUTPUT="$(du -b -s "$DU_PATH")"
if [ "$OUTPUT_FORMAT" = "json" ]; then
# Format output as {"path":"PATH","bytes":"BYTES"}
echo "$OUTPUT" |
  sed -e "s%^\(.*\)\t.*/\(.*\)$%{\"path\":\"\2\",\"bytes\":\"\1\"}%" |
  tr -d '\n'
else
# Print du output as is
echo "$OUTPUT"
fi
# vim: ts=2:sw=2:et:ai:sts=2
The hardlink hook script is really simple: it just runs the util-linux version of the hardlink command on its working directory:
#!/bin/sh
hardlink --ignore-time --maximize .
The s3sync hook script uses the s3fs tool to mount a bucket and synchronise data between a folder inside the bucket and a directory on the filesystem using rsync; all values needed to execute the task are taken from environment variables:
#!/bin/ash
set -euo pipefail
set -o errexit
set -o errtrace
# Functions
finish() {
ret="$1"
echo ""
echo "Script exit code: $ret"
exit "$ret"
# Check variables
if [ -z "$AWS_KEY" ] [ -z "$AWS_SECRET_KEY" ] [ -z "$S3_BUCKET" ]
[ -z "$S3_PATH" ] [ -z "$SCS_PATH" ]; then
[ "$AWS_KEY" ] echo "Set the AWS_KEY environment variable"
[ "$AWS_SECRET_KEY" ] echo "Set the AWS_SECRET_KEY environment variable"
[ "$S3_BUCKET" ] echo "Set the S3_BUCKET environment variable"
[ "$S3_PATH" ] echo "Set the S3_PATH environment variable"
[ "$SCS_PATH" ] echo "Set the SCS_PATH environment variable"
finish 1
fi
if [ "$S3_REGION" ] && [ "$S3_REGION" != "us-east-1" ]; then
EP_URL="endpoint=$S3_REGION,url=https://s3.$S3_REGION.amazonaws.com"
else
EP_URL="endpoint=us-east-1"
fi
# Prepare working directory
WORK_DIR="$(mktemp -p "$HOME" -d)"
MNT_POINT="$WORK_DIR/s3data"
PASSWD_S3FS="$WORK_DIR/.passwd-s3fs"
# Check the mountpoint
if [ ! -d "$MNT_POINT" ]; then
mkdir -p "$MNT_POINT"
elif mountpoint "$MNT_POINT"; then
echo "There is already something mounted on '$MNT_POINT', aborting!"
finish 1
fi
# Create password file
touch "$PASSWD_S3FS"
chmod 0400 "$PASSWD_S3FS"
echo "$AWS_KEY:$AWS_SECRET_KEY" >"$PASSWD_S3FS"
# Mount s3 bucket as a filesystem
s3fs -o dbglevel=info,retries=5 -o "$EP_URL" -o "passwd_file=$PASSWD_S3FS" \
"$S3_BUCKET" "$MNT_POINT"
echo "Mounted bucket '$S3_BUCKET' on '$MNT_POINT'"
# Remove the password file, just in case
rm -f "$PASSWD_S3FS"
# Check source PATH
ret="0"
SRC_PATH="$MNT_POINT/$S3_PATH"
if [ ! -d "$SRC_PATH" ]; then
echo "The S3_PATH '$S3_PATH' can't be found!"
ret=1
fi
# Compute SCS_UID & SCS_GID (by default based on the working directory owner)
SCS_UID="$ SCS_UID:=$(stat -c "%u" "." 2>/dev/null) " true
SCS_GID="$ SCS_GID:=$(stat -c "%g" "." 2>/dev/null) " true
# Check destination PATH
DST_PATH="./$SCS_PATH"
if [ "$ret" -eq "0" ] && [ -d "$DST_PATH" ]; then
mkdir -p "$DST_PATH" ret="$?"
fi
# Copy using rsync
if [ "$ret" -eq "0" ]; then
rsync -rlptv --chown="$SCS_UID:$SCS_GID" --delete --stats \
"$SRC_PATH/" "$DST_PATH/" ret="$?"
fi
# Unmount the S3 bucket
umount -f "$MNT_POINT"
echo "Called umount for '$MNT_POINT'"
# Remove mount point dir
rmdir "$MNT_POINT"
# Remove WORK_DIR
rmdir "$WORK_DIR"
# We are done
finish "$ret"
# vim: ts=2:sw=2:et:ai:sts=2
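For reference, a hedged sketch of calling this hook by hand (assuming the webhook port 9000 is forwarded to localhost, as we do later when testing, and that a payload file like the s3sync.json shown later is available) could be:

$ curl -s -X POST -H "Content-Type: application/json" \
    --data @s3sync.json http://localhost:9000/hooks/s3sync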
The system is deployed as a StatefulSet with one replica. Our production deployment is done on AWS and to be able to scale we use EFS for our PersistentVolume; the idea is that the volume has no size limit, its AccessMode can be set to ReadWriteMany and we can mount it from multiple instances of the Pod without issues, even if they are in different availability zones.
For development we use k3d and we are also able to scale the StatefulSet for testing because we use a ReadWriteOnce PVC, but it points to a hostPath that is backed by a folder mounted on all the compute nodes, so in reality Pods on different k3d nodes use the same folder on the host.
The host keys and user files needed by the mysecureshell container can be generated using kubernetes pods as follows (we are only creating the scs user):
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
--image "stodh/mysecureshell:latest" -- gen-host-keys >"./host_keys.txt"
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
--image "stodh/mysecureshell:latest" -- gen-users-tar scs >"./users.tar"
Once we have the files we can create the secrets.yaml file as follows:
$ tar xf ./users.tar user_keys.txt user_pass.txt
$ kubectl --dry-run=client -o yaml create secret generic "scs-secrets" \
--from-file="host_keys.txt=host_keys.txt" \
--from-file="user_keys.txt=user_keys.txt" \
--from-file="user_pass.txt=user_pass.txt" > ./secrets.yaml
The generated secrets.yaml will look like the following file (the base64 strings would match the content of the files, of course):
apiVersion: v1
data:
host_keys.txt: TWlt...
user_keys.txt: c2Nz...
user_pass.txt: c2Nz...
kind: Secret
metadata:
creationTimestamp: null
name: scs-secrets
The PersistentVolumeClaim (used by the statefulSet) can be as simple as this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: scs-pvc
labels:
app.kubernetes.io/name: scs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
In this definition we don't set the storageClassName, so the default one is used.
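To see which class will be picked we can list the storage classes available on the cluster; the one marked as (default) is the one used:

$ kubectl get storageclass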
If we want to use a local folder instead we can define a PersistentVolume as required by the Local Persistence Volume Static Provisioner (note that the /volumes/scs-pv directory has to be created by hand; in our k3d system we mount the same host directory on the /volumes path of all the nodes and create the scs-pv directory by hand before deploying the persistent volume):
apiVersion: v1
kind: PersistentVolume
metadata:
name: scs-pv
labels:
app.kubernetes.io/name: scs
spec:
capacity:
storage: 8Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
claimRef:
name: scs-pvc
storageClassName: local-storage
local:
path: /volumes/scs-pv
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/instance-type
operator: In
values:
- k3s
The matching PVC has to use the same storageClassName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: scs-pvc
labels:
app.kubernetes.io/name: scs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
storageClassName: local-storage
For the AWS deployment we don't need to define the PersistentVolume (we are using the aws-efs-csi-driver, which supports Dynamic Provisioning), but we add the storageClassName (we set it to the one mapped to the EFS driver, i.e. efs-sc) and set ReadWriteMany as the accessMode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: scs-pvc
labels:
app.kubernetes.io/name: scs
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Gi
storageClassName: efs-sc
The statefulSet definition is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: scs
labels:
app.kubernetes.io/name: scs
spec:
serviceName: scs
replicas: 1
selector:
matchLabels:
app: scs
template:
metadata:
labels:
app: scs
spec:
containers:
- name: nginx
image: stodh/nginx-scs:latest
ports:
- containerPort: 80
name: http
env:
- name: AUTH_REQUEST_URI
value: ""
- name: HTML_ROOT
value: /sftp/data
volumeMounts:
- mountPath: /sftp
name: scs-datadir
- name: mysecureshell
image: stodh/mysecureshell:latest
ports:
- containerPort: 22
name: ssh
securityContext:
capabilities:
add:
- IPC_OWNER
env:
- name: SFTP_UID
value: '2020'
- name: SFTP_GID
value: '2020'
volumeMounts:
- mountPath: /secrets
name: scs-file-secrets
readOnly: true
- mountPath: /sftp
name: scs-datadir
- name: webhook
image: stodh/webhook-scs:latest
securityContext:
privileged: true
ports:
- containerPort: 9000
name: webhook-http
env:
- name: WEBHOOK_WORKDIR
value: /sftp/data/scs
volumeMounts:
- name: devfuse
mountPath: /dev/fuse
- mountPath: /sftp
name: scs-datadir
volumes:
- name: devfuse
hostPath:
path: /dev/fuse
- name: scs-file-secrets
secret:
secretName: scs-secrets
- name: scs-datadir
persistentVolumeClaim:
claimName: scs-pvc
Some notes about the containers defined on the statefulSet:

- nginx: as this is an example the web server is not using an AUTH_REQUEST_URI and uses the /sftp/data directory as the root of the web (to get to the files uploaded for the scs user we will need to use /scs/ as a prefix on the URLs).
- mysecureshell: we are adding the IPC_OWNER capability to the container to be able to use some of the sftp-* commands inside it, but they are not really needed, so adding the capability is optional.
- webhook: we are launching this container in privileged mode to be able to use s3fs-fuse, as it will not work otherwise for now (see this kubernetes issue); if the functionality is not needed the container can be executed with regular privileges; besides, as we are not enabling public access to this service we don't define *_TOKEN variables (if required the values should be read from a Secret object).
- The devfuse volume is only needed if we plan to use the s3fs command on the webhook container; if not we can remove the volume definition and its mounts.

The containers are published using the following Service object:
apiVersion: v1
kind: Service
metadata:
name: scs-svc
labels:
app.kubernetes.io/name: scs
spec:
ports:
- name: ssh
port: 22
protocol: TCP
targetPort: 22
- name: http
port: 80
protocol: TCP
targetPort: 80
- name: webhook-http
port: 9000
protocol: TCP
targetPort: 9000
selector:
app: scs
To be able to download the scs files from the outside we can add an ingress object like the following (the definition is for testing using the localhost name):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: scs-ingress
labels:
app.kubernetes.io/name: scs
spec:
ingressClassName: nginx
rules:
- host: 'localhost'
http:
paths:
- path: /scs
pathType: Prefix
backend:
service:
name: scs-svc
port:
number: 80
To deploy the statefulSet we create a namespace and apply the object definitions shown before:
$ kubectl create namespace scs-demo
namespace/scs-demo created
$ kubectl -n scs-demo apply -f secrets.yaml
secret/scs-secrets created
$ kubectl -n scs-demo apply -f pvc.yaml
persistentvolumeclaim/scs-pvc created
$ kubectl -n scs-demo apply -f statefulset.yaml
statefulset.apps/scs created
$ kubectl -n scs-demo apply -f service.yaml
service/scs-svc created
$ kubectl -n scs-demo apply -f ingress.yaml
ingress.networking.k8s.io/scs-ingress created
Once the deployment is done we can check the status of the objects using kubectl:
$ kubectl -n scs-demo get all,secrets,ingress
NAME READY STATUS RESTARTS AGE
pod/scs-0 3/3 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/scs-svc ClusterIP 10.43.0.47 <none> 22/TCP,80/TCP,9000/TCP 21s
NAME READY AGE
statefulset.apps/scs 1/1 24s
NAME TYPE DATA AGE
secret/default-token-mwcd7 kubernetes.io/service-account-token 3 53s
secret/scs-secrets Opaque 3 39s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/scs-ingress nginx localhost 172.21.0.5 80 17s
The idea is to use the sftp server from other Pods, but to test the system we are going to do a kubectl port-forward and connect to the server using our host client and the password we have generated (it is on the user_pass.txt file, inside the users.tar archive):
$ kubectl -n scs-demo port-forward service/scs-svc 2020:22 &
Forwarding from 127.0.0.1:2020 -> 22
Forwarding from [::1]:2020 -> 22
$ PF_PID=$!
$ sftp -P 2020 scs@127.0.0.1 1
Handling connection for 2020
The authenticity of host '[127.0.0.1]:2020 ([127.0.0.1]:2020)' can't be \
established.
ED25519 key fingerprint is SHA256:eHNwCnyLcSSuVXXiLKeGraw0FT/4Bb/yjfqTstt+088.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:2020' (ED25519) to the list of known \
hosts.
scs@127.0.0.1's password: **********
Connected to 127.0.0.1.
sftp> ls -la
drwxr-xr-x 2 sftp sftp 4096 Sep 25 14:47 .
dr-xr-xr-x 3 sftp sftp 4096 Sep 25 14:36 ..
sftp> !date -R > /tmp/date.txt 2
sftp> put /tmp/date.txt .
Uploading /tmp/date.txt to /date.txt
date.txt 100% 32 27.8KB/s 00:00
sftp> ls -l
-rw-r--r-- 1 sftp sftp 32 Sep 25 15:21 date.txt
sftp> ln date.txt date.txt.1 3
sftp> ls -l
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt.1
sftp> put /tmp/date.txt date.txt.2 4
Uploading /tmp/date.txt to /date.txt.2
date.txt 100% 32 27.8KB/s 00:00
sftp> ls -l 5
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt.1
-rw-r--r-- 1 sftp sftp 32 Sep 25 15:21 date.txt.2
sftp> exit
$ kill "$PF_PID"
[1] + terminated kubectl -n scs-demo port-forward service/scs-svc 2020:22
On the session above we connect to the sftp service on the forwarded port with the scs user, upload a couple of files and create a hardlink. To check the web server we can now download the date.txt file from the URL http://localhost/scs/date.txt:
$ curl -s http://localhost/scs/date.txt
Sun, 25 Sep 2022 17:21:51 +0200
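As a side note, since we added the IPC_OWNER capability to the mysecureshell container, we can also inspect active sftp sessions from inside it using one of the sftp-* tools shipped with mysecureshell (a sketch; the Pod and container names match the deployment above):

$ kubectl -n scs-demo exec -ti scs-0 -c mysecureshell -- sftp-who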
To end this post we are going to show how we can call the hooks directly, from a CronJob and from a Job.

Direct call (du): in our deployment the direct calls are done from other Pods; to simulate it we
are going to do a port-forward
and call the script with an existing PATH (the
root directory) and a bad one:
$ kubectl -n scs-demo port-forward service/scs-svc 9000:9000 >/dev/null &
$ PF_PID=$!
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=.")"
$ echo $JSON
"path":"","bytes":"4160"
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=foo")"
$ echo $JSON
"error":"The provided PATH ('foo') is not a directory"
$ kill $PF_PID
On the first call we pass the '.' PATH and the output is in json format because we export OUTPUT_FORMAT with the value json on the webhook configuration; the second call fails because the given PATH does not exist.

Cronjob (hardlink): as explained before, the webhook container can be used to run cronjobs; the following one uses an alpine container to call the hardlink script each minute (that setup is for testing, obviously):
apiVersion: batch/v1
kind: CronJob
metadata:
name: hardlink
labels:
cronjob: 'hardlink'
spec:
schedule: "* */1 * * *"
concurrencyPolicy: Replace
jobTemplate:
spec:
template:
metadata:
labels:
cronjob: 'hardlink'
spec:
containers:
- name: hardlink-cronjob
image: alpine:latest
command: ["wget", "-q", "-O-", "http://scs-svc:9000/hooks/hardlink"]
restartPolicy: Never
$ kubectl -n scs-demo apply -f webhook-cronjob.yaml 1
cronjob.batch/hardlink created
$ kubectl -n scs-demo get pods -l "cronjob=hardlink" -w 2
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Pending 0 0s
hardlink-27735351-zvpnb 0/1 ContainerCreating 0 0s
hardlink-27735351-zvpnb 0/1 Completed 0 2s
^C
$ kubectl -n scs-demo logs pod/hardlink-27735351-zvpnb 3
Mode: real
Method: sha256
Files: 3
Linked: 1 files
Compared: 0 xattrs
Compared: 1 files
Saved: 32 B
Duration: 0.000220 seconds
$ sleep 60
$ kubectl -n scs-demo get pods -l "cronjob=hardlink" 4
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Completed 0 83s
hardlink-27735352-br5rn 0/1 Completed 0 23s
$ kubectl -n scs-demo logs pod/hardlink-27735352-br5rn 5
Mode: real
Method: sha256
Files: 3
Linked: 0 files
Compared: 0 xattrs
Compared: 0 files
Saved: 0 B
Duration: 0.000070 seconds
$ kubectl -n scs-demo delete -f webhook-cronjob.yaml 6
cronjob.batch "hardlink" deleted
We apply the cronjob definition and watch the pods with the cronjob=hardlink label, interrupting the watch once we see that the first run has been completed. The logs of that run show that date.txt.2 has been replaced by a hardlink (the summary does not name the file, but it is the only option knowing the contents from the original upload); after waiting a minute a second run completes without linking anything and we finally remove the cronjob.

Job (s3sync): the following job can be used to synchronise the contents of a directory in a S3 bucket with the SCS Filesystem:
apiVersion: batch/v1
kind: Job
metadata:
name: s3sync
labels:
cronjob: 's3sync'
spec:
template:
metadata:
labels:
cronjob: 's3sync'
spec:
containers:
- name: s3sync-job
image: alpine:latest
command:
- "wget"
- "-q"
- "--header"
- "Content-Type: application/json"
- "--post-file"
- "/secrets/s3sync.json"
- "-O-"
- "http://scs-svc:9000/hooks/s3sync"
volumeMounts:
- mountPath: /secrets
name: job-secrets
readOnly: true
restartPolicy: Never
volumes:
- name: job-secrets
secret:
secretName: webhook-job-secrets
"aws":
"key": "********************",
"secret_key": "****************************************"
,
"s3":
"region": "eu-north-1",
"bucket": "blogops-test",
"path": "test"
,
"scs":
"path": "test"
$ kubectl -n scs-demo create secret generic webhook-job-secrets \ 1
--from-file="s3sync.json=s3sync.json"
secret/webhook-job-secrets created
$ kubectl -n scs-demo apply -f webhook-job.yaml 2
job.batch/s3sync created
$ kubectl -n scs-demo get pods -l "cronjob=s3sync" 3
NAME READY STATUS RESTARTS AGE
s3sync-zx2cj 0/1 Completed 0 12s
$ kubectl -n scs-demo logs s3sync-zx2cj 4
Mounted bucket 's3fs-test' on '/root/tmp.jiOjaF/s3data'
sending incremental file list
created directory ./test
./
kyso.png
Number of files: 2 (reg: 1, dir: 1)
Number of created files: 2 (reg: 1, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 15,075 bytes
Total transferred file size: 15,075 bytes
Literal data: 15,075 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.147 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 15,183
Total bytes received: 74
sent 15,183 bytes received 74 bytes 30,514.00 bytes/sec
total size is 15,075 speedup is 0.99
Called umount for '/root/tmp.jiOjaF/s3data'
Script exit code: 0
$ kubectl -n scs-demo delete -f webhook-job.yaml 5
job.batch "s3sync" deleted
$ kubectl -n scs-demo delete secrets webhook-job-secrets 6
secret "webhook-job-secrets" deleted
On the session above we first create the webhook-job-secrets secret that contains the s3sync.json file and apply the job; filtering by the cronjob=s3sync label we get the Pod executed by the job and review its logs, and once done we remove the job and the secret.

[Aug 8 04:04] list_del corruption. prev->next should be ffff90c96e9c2090,
but was ffff90c94e9c2090
A kernel dev friend said "I'm familiar with that code ... you should run memtest86".
This seemed like advice it would be foolish to ignore!
I installed the memtest86
package, which, on Debian stable, is actually the
formerly open-source "memtest86" software, last updated in 2014, rather than
the currently open-source "memtest86+". However the package (incorrectly, I
think) Recommends: memtest86+
so I ended up with both. The package scripts
integrate with GRUB, so both were added as boot options.
Neither, however, would boot on my NAS, which is a UEFI system: after selection
from the GRUB prompt, I just had a blank screen. I focussed for a short while
on display issues: I wondered if trying to run a 4k monitor over HDMI was too
much to expect from a memory tester OS, but my mainboard has a VGA out as well.
It has some quirky behaviour for the VGA out: the firmware doesn't use it at
all, so output only begins appearing after something boots (GRUB for example).
I fiddled about with the HDMI output, VGA output, and trying different RGB
cables, to no avail.
The issue (likely) had nothing to do with the video out, but rather that the packaged versions of memtest86/memtest86+ don't work properly on UEFI systems. What did work was Passmark Software's non-FOSS memtest86. It drew on HDMI, albeit in a postage-stamp-sized window. After some time (much less than I expected, some kind of magic modern memory matrix stuff going on I think), I got a clean bill of health.
It's quite possible the FOSS versions of memtest
(pcmemtest
is another)
have better support for UEFI in more recent versions than I installed (I
just went with what's in Debian stable), and if not, then this is a worthy
feature to work on.
kubeadm install.
AWS | OpenShift | OpenShift upstream project |
---|---|---|
Cloud Trail | Kubernetes API Server audit log | Kubernetes |
Cloud Watch | OpenShift Monitoring | Prometheus |
AWS Artifact | Compliance Operator | OpenSCAP |
AWS Trusted Advisor | Insights | |
AWS Marketplace | OpenShift Operator Hub | |
AWS Identity and Access Management (IAM) | Red Hat SSO | Keycloak |
AWS Elastic Beanstalk | OpenShift Source2Image (S2I) | Source2Image (S2I) |
AWS S3 | ODF Rados Gateway | Rook RGW |
AWS Elastic Block Storage | ODF Rados Block Device | Rook RBD |
AWS Elastic File System | ODF Ceph FS | Rook CephFS |
Amazon Simple Notification Service | OpenShift Streams for Apache Kafka | Apache Kafka |
Amazon Guard Duty | API Server audit log review, ACS Runtime detection | Stackrox |
Amazon Inspector | Quay.io container scanner, ACS Vulnerability Assessment | Clair, Stackrox |
AWS Lambda | OpenShift Serverless* | Knative |
AWS Key Management System | could be done with Hashicorp Vault | Vault |
AWS WAF | NGINX Ingress Controller Operator with ModSecurity | NGINX ModSecurity |
Amazon Elasticache | Redis Enterprise Operator | Redis, memcached as alternative |
AWS Relational Database Service | Crunchy Data Operator | PostgreSQL |
* OpenShift Serverless requires the application to be packaged as a container, something AWS Lambda does not require.
virtnbdbackup -U qemu+ssh://usr@hypervisor/system -d vm1 -o /backup/vm1
It is also possible to use the --nbd-ip option to bind the remote NBD service to a specific interface.
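A hedged example of such an invocation (the address is a placeholder, not from the original post):

virtnbdbackup -U qemu+ssh://usr@hypervisor/system -d vm1 -o /backup/vm1 \
  --nbd-ip 192.168.0.10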