Artifact Space is a (mostly) military science fiction novel, the
first of an expected trilogy. Christian Cameron is a prolific author of
historical fiction under that name, thrillers under the name Gordon Kent,
and historical fantasy under the name Miles Cameron. This is his first
science fiction novel.
Marca Nbaro is descended from one of the great spacefaring mercantile
families, but it's not doing her much good. She is a ward of the
Orphanage, the boarding school for orphaned children of the DHC, generous
in theory and a hellhole in practice. Her dream to serve on one of the
Greatships, the enormous interstellar vessels that form the backbone of
the human trading network, has been blocked by the school authorities, a
consequence of the low-grade war she's been fighting with them throughout
her teenage years. But Marca is not a person to take no for an answer.
Pawning her family crest gets her just enough money to hire a hacker to
doctor her school records, adding the graduation she was denied and
getting her aboard the Greatship Athens as a new Midshipper.
I don't read a lot of military science fiction, but there is one type of
story that I love that military SF is uniquely well-suited to tell. It's
not the combat or the tactics or the often-trite politics. It's the
experience of the military as a system, a collective human endeavor.
One ideal of the military is that people come to it from all sorts of
backgrounds, races, and social classes, and the military incorporates them
all into a system built for a purpose. It doesn't matter who you are or
what you did before: if you follow the rules, do your job, and become part
of a collaboration larger than yourself, you have a place and people to
watch your back whether or not they know you or like you. Obviously, like
any ideal, many militaries don't live up to this, and there are many
stories about those failures. But the story of that ideal, told well, is
a genre I like a great deal and is hard to find elsewhere.
This sort of military story shares some features with found family, and
it's not a coincidence that I also like found family stories. But found
family still assumes that these people love you, or at least like you.
For some protagonists, that's a tricky barrier both to cross and to
believe one has crossed. The (admittedly idealized) military doesn't
assume anyone likes you. It doesn't expect that you or anyone around you
have the right feelings. It just expects you to do your job and work with
other people who are doing their job. The requirements are more concrete,
and thus in a way easier to believe in.
Artifact Space is one of those military science fiction stories. I
was entirely unsurprised to see that the author is a former US Navy career
officer.
The Greatships here are, technically, more of a merchant marine than a
full-blown military. (The author noted in an interview that he based
them on the merchant ships of Venice.) The weapons are used primarily for
defense; the purpose of the Greatships is trade, and every crew member has
a storage allotment in the immense cargo area that they're encouraged to
use. The setting is in the far future, after a partial collapse and
reconstruction of human society, in which humans have spread through
interstellar space, settled habitable planets, and built immense orbital
cities. The Athens is trading between multiple human settlements,
but its true destination is far into the deep black: Tradepoint, where it
can trade with the mysterious alien Starfish for xenoglas, a material that
humans have tried and failed to reproduce and on which much of human
construction now depends.
This is, to warn, one of those stories where the scrappy underdog of noble
birth makes friends with everyone and is far more competent than anyone
expects. The story shape is not going to surprise you, and you have to
have considerable tolerance for it to enjoy this book. Marca is
ridiculously, absurdly central to the plot for a new Middie. Sometimes
this makes sense given her history; other times, she is in the middle of
improbable accidents that felt forced by the author. Cameron doesn't
entirely break normal career progression, but Marca is very special in a
way that you only get to be as the protagonist of a novel.
That said, Cameron does some things with that story shape that I liked.
Marca's hard-won survival skills are not weirdly well-suited for
her new life aboard ship. To the contrary, she has to unlearn a lot of
bad habits and let go of a lot of anxiety. I particularly liked her
relationship with her more-privileged cabin mate, which at first seemed to
only be a contrast between Thea's privilege and Marca's background, but
turned into both of them learning from each other. There's a great mix of
supporting characters, with a wide variety of interactions with Marca and
a solid sense that all of the characters have their own lives and their
own concerns that don't revolve around her.
There is, of course, a plot to go with this. I haven't talked about it
much because I think the summaries of this book are a bit of a spoiler,
but there are several layers of political intrigue, threats to the ship,
an interesting AI, and a good hook in the alien xenoglas trade. Cameron
does a deft job balancing the plot with Marca's training and her
slow-developing sense of place in the ship (and fear about discovery of
her background and hacking). The pacing is excellent, showing all the
skill I'd expect from someone with a thriller background and over forty
prior novels under his belt. Cameron portrays the tedious work of
learning a role on a ship without boring the reader, which is a tricky
balancing act.
I also like the setting: a richly multicultural future that felt like it
included people from all of Earth, not just the white western parts. That
includes a normalized androgyne third gender, which is the sort of thing
you rarely see in military SF. Faster-than-light travel involves typical
physics hand-waving, but the shape of the hand-waving is one I've not seen
before and is a great excuse for copying the well-known property of
oceangoing navies that longer ships can go faster.
(One tech grumble, though: while Cameron does eventually say that this is
a known tactic and Marca didn't come up with anything novel, deploying
spread sensors for greater resolution is sufficiently obvious it should be
standard procedure, and shouldn't have warranted the character reactions
it got.)
I thoroughly enjoyed this. Artifact Space is the best military SF
that I've read in quite a while, at least back to John G. Hemry's
JAG in space novels and probably better than
those. It's going to strike some readers, with justification, as clichéd,
but the clichés are handled so well that I had only minor grumbling at a
few absurd coincidences. Marca is a great character who is easy to care
about. The plot was tense and satisfying, and the feeling of military
structure, tradition, jargon, and ship pride was handled well. I had a
very hard time putting this down and was sad when it ended.
If you're in the mood for that class of "learning how to be part of a
collaborative structure" style of military SF, recommended.
Artifact Space reaches a somewhat satisfying conclusion, but leaves
major plot elements unresolved. Followed by Deep Black, which
doesn't have a release date at the time of this writing.
Rating: 9 out of 10
The Fifth Elephant is the 24th Discworld and fifth Watch novel, and
largely assumes you know who the main characters are. This is not a good
place to start.
The dwarves are electing a new king. The resulting political conflict is
spilling over into the streets of Ankh-Morpork, but that's not the primary
problem. First, the replica Scone of Stone, a dwarven artifact used to
crown the Low King of the Dwarves, is stolen from the Dwarf Bread Museum.
Then, Vimes is dispatched to Überwald, ostensibly to negotiate increased
fat exports with the new dwarven king. And then Angua disappears,
apparently headed towards her childhood home in Überwald, which
immediately prompts Carrot to resign and head after her. The City Watch
is left in the hands of now-promoted Captain Colon.
We see lots of Lady Sybil for the first time since
Guards! Guards!, and there's a
substantial secondary plot with Angua and Carrot and a tertiary plot with
Colon making a complete mess of things back home, but this is mostly a
Vimes novel. As usual, Vetinari is pushing him outside of his comfort
zone, but he's not seriously expecting Vimes to act like an ambassador.
He's expecting Vimes to act like a policeman, even though he's way outside
his jurisdiction. This time, that means untangling a messy three-sided
political situation involving the dwarves, the werewolves, and the
vampires.
There is some Igor dialogue in this book, but
thankfully Pratchett toned it down a lot and it never started to bother
me.
I do enjoy Pratchett throwing Vimes and his suspicious morality at
political problems and watching him go at them sideways. Vimes's
definition of crimes is just broad enough to get him fully invested in a
problem, but too narrow to give him much patience with the diplomatic
maneuvering. It makes him an unpredictable diplomat in a clash of
cultures way that's fun to read about. Cheery and Detritus are great
traveling companions for this, since both of them also unsettle the
dwarves in wildly different ways.
I also have to admit that Pratchett is doing more interesting things with
the Angua and Carrot relationship than I had feared. In previous books, I
was getting tired of their lack of communication and wasn't buying the
justifications for it, but I think I finally understand why the
communication barriers are there. It's not that Angua refuses to talk to
Carrot (although there's still a bit of that going on). It's that
Carrot's attitude towards the world is very strange, and gets
stranger the closer you are to him.
Carrot has always been the character who is too earnest and
straightforward and good for Ankh-Morpork and yet somehow makes it work,
but Pratchett is doing something even more interesting with the concept of
nobility. A sufficiently overwhelming level of heroic ethics becomes
almost alien, so contrary to how people normally think that it can make
conversations baffling. It's not that Carrot is perfect (sometimes he
does very dumb things), it's that his natural behavior follows a set of
ethics that humans like to pretend they follow but actually don't and
never would entirely. His character should be a boring cliche or an
over-the-top parody, and yet he isn't at all.
But Carrot's part is mostly a side plot. Even more than
Jingo, The Fifth Elephant is
establishing Vimes as a force to be reckoned with, even if you take him
outside his familiar city. He is in so many ways the opposite of
Vetinari, and yet he's a tool that Vetinari is extremely good at using.
Colon of course is a total disaster as the head of the Watch, and that's
mostly because Colon should never be more than a sergeant, but it's also
because even when he's taking the same action as Vimes, he's not doing it
for the same reasons or with the same stubborn core of basic morality and
loyalty that's under Vimes's suspicious conservatism.
The characterization in the Watch novels doesn't seem that subtle or deep
at first, but it accumulates over the course of the series in a way that I
think is more effective than any of the other story strands. Vetinari,
Vimes, and Carrot all represent "right," or at least order, in overlapping
stories of right versus wrong, but they do so in radically different ways
and with radically different goals. Each time one of them seems
ascendant, each time one of their approaches seems more clearly correct,
Pratchett throws them at a problem where a different approach is required.
It's a great reading experience.
This was one of the better Discworld novels even though I found the
villains to be a bit tedious and stupid. Recommended.
Followed by The Truth in publication order. The next Watch novel
is Night Watch.
Rating: 8 out of 10
I loaded up this title with buzzwords. The basic idea is that IM systems shouldn't have to use only the Internet. Why not let them be carried across LoRa radios, USB sticks, local WiFi networks, and, yes, the Internet? I'll first discuss how, and then why.
How to set it up
I've talked about most of the pieces here already:
Delta Chat, which is an IM app that uses mail servers (SMTP and IMAP) as transport, and OpenPGP encryption for security.
Yggdrasil, which forms an auto-mesh network over things like ad-hoc wifi. It's not asynchronous itself, but its properties may be used to build an asynchronous email network; email itself can be asynchronous across any carrier. Others such as Tor could also be used.
And various other physical carriers such as LoRa and XBee SX radios.
Email servers. For instance, there are existing instructions for running Postfix or Exim over NNCP. These can be easily adapted to run across something like Filespooler instead. They can be run locally on a laptop or, with a tool such as Termux, on Android.
So, putting this together:
All Delta Chat needs is access to an SMTP and IMAP server. This server could easily reside on localhost.
Existing email servers support transport of email using non-IP transports, including batch transports that can easily store it in files.
These batches can be easily carried by NNCP, Syncthing, Filespooler, etc. Or, if the connectivity is good enough, via traditional networking using Yggdrasil.
Side note: Both NNCP and email servers support various routing arrangements, and can easily use intermediary routing nodes. Syncthing can also mesh. NNCP supports asynchronous multicast, letting your messages opportunistically find the best way to their destination.
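To make the batch-carrying step concrete, here is a minimal sketch using the stock NNCP tools. Assumptions: an NNCP neighbor named "camp" is already configured, and the mail server has written an outgoing batch to a file; the node and file names are illustrative.

$ nncp-file outgoing-mail.batch camp:
# The packet now sits in the local NNCP spool. Move it with whatever
# carrier is handy: nncp-xfer to and from a USB stick, nncp-call or
# nncp-daemon over a network link such as Yggdrasil, and so on.
# On the receiving node, unpack whatever has arrived:
$ nncp-toss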
OK, so why would you do it?
You might be thinking: doesn't asynchronous mean slow? Well, not necessarily. Asynchronous means "reliability is more important than speed"; that is, slow (even to the point of weeks) is acceptable, but not required. NNCP and Syncthing, for instance, can easily deliver within a couple of seconds.
But let's step back a bit. Let's say you're hiking in the wilderness in an area with no connectivity. You get back to your group at a campsite at the end of the day, and you have taken some photos of the forest and sent them to some friends. Some of those friends are at the campsite; when you get within signal range, they get your messages right away. Some of those friends are in another country. So one person from your group drives into town and sits at a coffee shop for a few minutes, connected to its wifi. All the messages from everyone in the group go out, and all the messages from outside the group come in. Then they go back to camp and the devices exchange messages.
Pretty slick, eh?
Note: this article also has a more permanent home on my website, where it may be periodically updated.
From November 2nd to 4th, 2022, the 19th edition of
Latinoware - Latin American Congress of Free Software
and Open Technologies took place in Foz do Iguaçu. After 2 years happening
online due to the COVID-19 pandemic, the event was back in person and we felt
the Debian Brasil community should be there.
Our last time at Latinoware was in 2016.
The Latinoware organization provided the Debian Brazil community with a booth
so that we could have contact with people visiting the open exhibition area and
thus publicize the Debian project. During the 3 days of the event, the booth was
organized by me (Paulo Henrique Santana) as Debian Developer, and by Leonardo
Rodrigues as a Debian contributor. Unfortunately, Daniel Lenharo had an issue
and could not travel to Foz do Iguaçu (we missed you there!).
A huge number of people visited the booth, and the beginners (mainly
students) who didn't know Debian asked what our group was about, and we
explained various concepts such as what Free Software is, what a GNU/Linux
distribution is, and what Debian itself is. We also received people from the
Brazilian Free Software community and from other Latin American countries
who were already using a GNU/Linux distribution and, of course, many people
who were already using Debian. We had some special visitors such as
Jon "maddog" Hall, Debian Developer Emeritus Otávio Salvador, Debian
Developer Eriberto Mota, and Debian Maintainers Guilherme de Paula Segundo
and Paulo Kretcheu.
Photo from left to right: Leonardo, Paulo, Eriberto and Otávio.
Photo from left to right: Paulo, Fabian (Argentina) and Leonardo.
In addition to talking a lot, we distributed Debian stickers that were
produced a few months ago with Debian's sponsorship to be distributed at
DebConf22 (and that were left over), and we sold several Debian t-shirts
produced by the Curitiba Livre community.
We also had 3 talks included in the Latinoware official schedule.
I talked about "how to become a Debian contributor by doing translations"
and "how the SysAdmins of a global company use Debian". And Leonardo talked
about the "advantages of Open Source telephony in companies".
Photo: Paulo giving his talk.
Many thanks to the Latinoware organization for once again welcoming the Debian
community and kindly providing spaces for our participation, and we
congratulate all the people involved in the organization for the success of
this important event for our community. We hope to be present again in 2023.
We also thank Jonathan Carter for approving financial support from Debian for
our participation at Latinoware.
The Road to Gandolfo
I think I had read this book almost 10-12 years back and somehow ended up reading it again. Apparently, the author had published this story under some other pen name earlier. It is possible that I read it under that name and hence forgot all about it. This book/story is full of innuendo, irony, sarcasm, and basically the thrill of life. There are two main characters in the book. The first is General MacKenzie, who has spent almost three to four decades as a spy/counterintelligence expert in the Queen's service. And while he outclasses them all even at the ripe age of 50, he is thrown out under the pretext of conduct unbecoming of an officer.
The other main character is Sam Devereaux. This gentleman is an army lawyer who is counting the days until he completes his tour of duty as a military lawyer and can start his corporate civil law practice with somebody he knows. Much to his dismay, with less than a week left in his tour of duty, he is summoned to try and extradite General MacKenzie, who has been put under house arrest. Apparently, in China there was a sculpture of a great Chinese gentleman in the nude. For reasons unknown, or rather not shared here, the General breaks part of the sculpture. This, of course, enrages the Chinese; they call it a diplomatic accident and put the General under house arrest. Unfortunately for both the General and his captors, he decides to escape/go for it. While he does succeed in entering the American embassy, he finds himself persona non grata and is thrown back outside, where the Chinese recapture him.
This is where the Embassy & the Govt. decide it would be better if somehow the General could be removed from China permanently so he doesn't cause any further diplomatic accidents. In order to do that, Sam's services are bought.
Now, in order to understand the General, Sam learns that he has 4 ex-wives. He promptly goes and meets them to understand why the General behaved as he did. (He apparently also peed on the American flag.) To his surprise, all four ex-wives are still very much in with the General. During the course of interviewing the ladies he is seduced by them, and he also gives names to their chests in order to differentiate between each of them. Later he is seduced by the eldest of the four wives and they spend the evening together.
The next day, Sam meets and is promptly manhandled by the General, and the diplomatic papers are seen by him. After meeting the General and the Chinese counterpart, they quickly agree to extradite him, as they do not know how to keep the General under control. During his house arrest, the General reads one of the "communist rags", as he puts it, and gets the idea to kidnap the Pope, which forms the basis of the story.
Castel Gandolfo is a real place in Italy and is apparently the papal residence where the Pope goes to reside every summer. The book was written in 1976, so in the book the General decides to form a corporation for which he would raise funds in order to carry out the kidnapping. The amount in 1976 was 40 million dollars, which was a big sum; to keep with the times, think of something like 40 billion dollars to get the scale of things.
Now, while a part of me wants to tell the rest of the story, the story isn't really mine to tell. Read The Road to Gandolfo for the rest. While I can't guarantee you much, I can say you might find yourself constantly amused by the antics of the General, Sam, and the General's ex-wives. There are also a few minor characters that you will meet along the way; I hope you discover them and enjoy them immensely, as I have.
One thing I have to say: while I was reading it, I very much got vibes of Not a Penny More, Not a Penny Less by Jeffrey Archer. As shared before, lots of twists and turns; enjoy the ride.
Webforms
Webforms are nothing but a form you fill in on the web or WWW. Webforms were and are a thing from the early 90s to today. I was supposed to register on https://www.swavlambancard.gov.in/ almost a month back but procrastinated until a couple of days ago, and with good reason. I was hoping one of my good friends would help me, but they had their own things going on. So finally, I tried to fill in the form a few days back. It took me almost 30-odd attempts to finally fill it in and be given an enrollment number. Why it took me 30-odd attempts should tell you the reason:
I felt like I was filling in a form from the 1990s rather than today, because:
The form neither knows its state nor saves data during a session. This lesson was learned a long time back by almost all service providers except the Govt. of India. Browsers on both mobile and desktop can save data during a session. If you don't know what I mean by that, go to about:preferences#privacy in Firefox and look at Manage Data. There you will find that most sites do put some data there, along with cookies, arguably to help make your web experience better. Chrome or Chromium has the same thing, perhaps under a different name, but it's the same thing. But that is not all.
None of the fields have any verification. The form is 3 pages long. The verification at the end of the document doesn't tell you what is wrong and what needs to be corrected. Really think on this: I am on a 24" LED monitor, I was filling in the form, and I had to do it at least 20-30 times before it was accepted. And guess what, I have no clue why it was accepted. The same data, the same everything, and after the nth time it was accepted. Now if I am facing such a problem when I have some idea of how technology works, how are people trying to fill in this form on 6" mobiles supposed to do it? And many of them are not at all as clued in to technology as I am.
I could go on outlining many of the issues that I faced, but they are in many ways similar to the problems faced while filling in the NEW Income Tax forms. Of course, the New Income Tax portal is a whole ball game in itself, as it gives new errors every time instead of solving them. Most C.A.s have turned to third-party tools that enable you to upload XML-compliant data to the New Income Tax portal, but this is for businesses and those who can afford it. Again, even that is in a sort of messy state, but that is a whole other tale altogether.
One of the reasons, to my mind, why the forms are designed the way they are is so that people go to specific cybercafes, or get individual people to fill in and upload the form, and those people make more money. I was told to go to a specific cybercafe and meet a certain individual, and he asked for INR 500/- to do the work. While I don't have financial problems, I was more worried about my data going into the wrong hands. But I can see a very steady way to make money without doing much hard work.
Hearing Loss info.
Because I had been to both Kamla Nehru Hospital and Sassoon, and especially the deaf department, I saw many kids with half-formed ears. I asked the doctors, and they shared that this is due to malnutrition. We do know that women during pregnancy need more calories, more of everything, as they are eating for two bodies, not one. And this is large-scale; apparently more than 5 percent of the population has children like this. And this number was from 2014; what it is today nobody knows. I also came to know that at least some people like me became deaf due to Covid. I asked the doctors if they knew of people who had become deaf due to Covid. They basically replied in the negative, as they don't have the resources to monitor this. The Govt. has the idea of a health ID, but just like Aadhaar it has too many serious, sinister implications. Somebody shared with me a long time back that in India systems work in spite of Govt. machinery rather than because of it, meaning that the Government ties itself into several knots and then people have to be creative to try and figure a way out to help people. I found another issue while dealing with them.
Apparently, even though I have 60% hearing loss, I would be given a certificate of 40% hearing loss, and they call it Temporary Progressive Loss. I saw almost all the people who had come, many of them having far more severe deficiencies than me, getting the same or a similar certificate. All of them got Temporary Progressive. Some of the cases were really puzzling. For example, I met another Agarwal who had a severe accident a few months ago, with some kind of paralysis and bone issue. The doctors have given up, but even that gentleman was given Temporary Progressive. From what little I could understand, the idea is that if, over a period, there is a possibility of things becoming better, then it should be given. Another gentleman had dwarfism. Even he was given the same certificate. I think there have been orders from above so that even people having real difficulties are not helped. Another point: if you look at it in a macro sense, it presents a somewhat rosy picture. If someone were to debunk the Govt. data, either from India or abroad, then from the GOI's perspective they have an agenda, even though the people who are suffering are our brothers and sisters. And all of this I can tell only because I can read, write, and articulate. Perhaps many of them may not even have a voice or a platform.
Even to get this temporary progressive disability certificate, there was more than 4 months of running from one place to the other; 4 months of culmination of work. This I can share and tell from my experience; who knows how much else others might have suffered for the same. In my case a review will happen after 5 years; in most other cases they have given only 1 year. Of course, this does justify people's jobs, and perhaps partly it may be due to that. Such are the times; I really regret that I am unable to hear, otherwise I could have fleshed out a lot more of other people's sufferings.
And just so people know/understand: this is happening in the heart of a city whose population easily exceeds 6 million plus and which is supposed to be a progressive city. I do appreciate and understand the difficulties that the doctors are placed under.
Mum's Birthday & Social Engineering.
While I don't want to get into details, mum's birthday was in the last couple of weeks and it had totally escaped me. I have been trying to disassociate myself from her, and at times it's hard, and then you don't remember and somebody makes you remember. So, on one hand guilt, and on the other not knowing what to do. If she were alive I would have bought a piece of cake or something. I didn't feel like it, hence I donated some money to the local aged home. This way at least I hope they have some semblance of peace. All of them are of her similar age group.
The other thing that I began to observe in earnest: fake identities have become the norm. Many people took Elon Musk's portrait while using their own names in the handles, but even then Elon "Free Speech" Musk banned them. So much for free speech. Then I saw quite a few handles that have cute women as their profile picture but are good at social engineering. This started only a couple of weeks back, and I have seen quite a few handles leaving Twitter and joining Mastodon. Also, I have been hearing that many admins of Mastodon pods are unable to get on top of this. Also, a lot of people are complaining that its UI isn't as user-friendly as Twitter's. Do they not realize that Twitter has its own IP, and any competing network can't copy or infringe on their product? Otherwise, they would be sued, like Ford was, and Twitter would potentially win. I am not really going to talk much about it, as this blog post has become quite long and that topic needs its own post to do any sort of justice to it. Till later, people.
If you've done anything in the Kubernetes space in recent years, you've most likely come across the words "Service Mesh". It's backed by a set of mature technologies that provides cross-cutting networking, security, and infrastructure capabilities to be used by workloads running in Kubernetes, in a manner that is transparent to the actual workload. This abstraction enables application developers to not worry about building in otherwise sophisticated capabilities for networking, routing, circuit-breaking, and security, and to simply rely on the services offered by the service mesh.
In this post, I'll be covering Linkerd, which is an alternative to Istio. It went through a significant re-write when it transitioned from the JVM to a Go-based control plane and a Rust-based data plane a few years back, and it is now a part of the CNCF and is backed by Buoyant. It has proven itself widely for use in production workloads and has a healthy community and release cadence. It achieves all this with a sidecar container that communicates with the Linkerd control plane, allowing central management of policy, telemetry, mutual TLS, traffic routing, shaping, retries, load balancing, circuit-breaking, and other cross-cutting concerns before the traffic hits the container. This has made the task of implementing the application services much simpler, as those concerns are managed by the container orchestrator and service mesh. I covered Istio in a prior post a few years back, and much of that content is still applicable here, if you'd like to have a look.
Here are the broad architectural components of Linkerd. The components are separated into the control plane and the data plane. The control plane components live in their own namespace and consist of a controller that the Linkerd CLI interacts with via the Kubernetes API. The destination service is used for service discovery, TLS identity, access-control policy for inter-service communication, and service profile information on routing, retries, and timeouts. The identity service acts as the Certificate Authority, which responds to Certificate Signing Requests (CSRs) from proxies for initialization and for service-to-service encrypted traffic. The proxy injector is an admission webhook that injects the Linkerd proxy sidecar and the init container automatically into a pod when the linkerd.io/inject: enabled annotation is present on the namespace or workload.
On the data plane side are two components. First, the init container, which is responsible for automatically forwarding incoming and outgoing traffic through the Linkerd proxy via iptables rules. Second, the Linkerd proxy, which is a lightweight micro-proxy written in Rust, is the data plane itself.
I will be walking you through the setup of Linkerd (2.12.2 at the time of writing) on a Kubernetes cluster. Let's see what's running on the cluster currently. This assumes you have a cluster running and kubectl is installed and available on the PATH.
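On a fresh cluster, this should show only kube-system components:

$ kubectl get pods -A

Installing the Linkerd CLI
The Linkerd documentation provides a one-line installer for the CLI (as always, inspect a script before piping it to sh):

$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh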
On most systems, this should be sufficient to set up the CLI. You may need to restart your terminal to load the updated paths. If you have a non-standard configuration and linkerd is not found after the installation, add the following to your PATH to be able to find the CLI:
export PATH=$PATH:~/.linkerd2/bin/
At this point, checking the version would give you the following:
$ linkerd version
Client version: stable-2.12.2
Server version: unavailable
Setting up Linkerd Control Plane
Before installing Linkerd on the cluster, run the following step to check the cluster for pre-requisites.
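This is the CLI's built-in pre-flight check; the report below is what it prints on a healthy cluster:

$ linkerd check --pre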
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create non-namespaced resources
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ can create Secrets
√ can read Secrets
√ can read extension-apiserver-authentication configmap
√ no clock skew detected

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

Status check results are √
All the pre-requisites appear to be good right now, and so installation can proceed. The first step of the installation is to set up the Custom Resource Definitions (CRDs) that Linkerd requires. The linkerd CLI only prints the resource YAMLs to standard output and does not create them directly in Kubernetes, so you need to pipe the output to kubectl apply to create the resources in the cluster that you're working with.
$ linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run linkerd install | kubectl apply -f - to install the control plane.
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
Next, install the Linkerd control plane components in the same manner, this time without the --crds switch:
$ linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created
Kubernetes will start spinning up the control plane components, and you should see the following when you list the pods in the linkerd namespace:
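The pod names, hashes, and ages below are illustrative; what matters is that all the control plane pods reach Running:

$ kubectl get pods -n linkerd
NAME                                      READY   STATUS    RESTARTS   AGE
linkerd-destination-5cfbd7468-2pgqv       4/4     Running   0          66s
linkerd-identity-fc9bb697-xvsxz           2/2     Running   0          66s
linkerd-proxy-injector-668455b959-zkkgw   2/2     Running   0          66s

Now run a full health check to validate the installation:

$ linkerd check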
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all pods
√ cluster networks contains all services

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match

linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match

Status check results are √
Everything looks good.
Setting up the Viz Extension
At this point, the required components for the service mesh are set up, but let's also install the viz extension, which provides good visualization capabilities that will come in handy subsequently. Once again, linkerd uses the same pattern for installing the extension.
$ linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
server.policy.linkerd.io/admin created
authorizationpolicy.policy.linkerd.io/admin created
networkauthentication.policy.linkerd.io/kubelet created
server.policy.linkerd.io/proxy-admin created
authorizationpolicy.policy.linkerd.io/proxy-admin created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created
A few seconds later, you should see the following in your pod list:
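Again, names and ages are illustrative; the pods correspond to the viz deployments created above:

$ kubectl get pods -n linkerd-viz
NAME                            READY   STATUS    RESTARTS   AGE
metrics-api-59dd5dd7ff-4k2xp    2/2     Running   0          40s
prometheus-5d4dffbc55-b7h9m     2/2     Running   0          40s
tap-7f8ddcb669-sx2pl            2/2     Running   0          40s
tap-injector-6cd87d7dc9-fq5dn   2/2     Running   0          40s
web-6b6f65dbc7-p9qrw            2/2     Running   0          40s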
The viz components live in the linkerd-viz namespace. You can now check out the viz dashboard:
$ linkerd viz dashboard
Linkerd dashboard available at:
http://localhost:50750
Grafana dashboard available at:
http://localhost:50750/grafana
Opening Linkerd dashboard in the default browser
Opening in existing browser session.
The Meshed column indicates the workloads that are currently integrated with the Linkerd control plane. As you can see, there are no application deployments running right now.
Injecting the Linkerd Data Plane components
There are two ways to integrate Linkerd into the application containers:
1. by manually injecting the Linkerd data plane components
2. by instructing Kubernetes to automatically inject the data plane components
Inject Linkerd data plane manually
Let's try the first option. Below is a simple nginx-app that I will deploy into the cluster:
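A minimal equivalent of that manifest can be created imperatively (the deployment name nginx-app and the stock nginx image are illustrative):

$ kubectl create deployment nginx-app --image=nginx
deployment.apps/nginx-app created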
Back in the viz dashboard, I do see the workload deployed, but it isn't currently communicating with the Linkerd control plane, and so it doesn't show any metrics, and the Meshed count is 0. Looking at the Pod's deployment YAML, I can see that it only includes the nginx container.
Let's directly inject the linkerd data plane into this running container. We do this by retrieving the YAML of the deployment, piping it to the linkerd CLI to inject the necessary components, and then piping the changed resources to kubectl apply.
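Assuming the nginx-app deployment from above, the whole round trip is one pipeline:

$ kubectl get deployment nginx-app -o yaml | linkerd inject - | kubectl apply -f -
deployment "nginx-app" injected
deployment.apps/nginx-app configured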
Back in the viz dashboard, the workload is now integrated into the Linkerd control plane. Looking at the updated Pod definition, we see a number of changes that linkerd has injected to allow the pod to integrate with the control plane. Let's have a look:
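One quick way to see the injected pieces (assuming the nginx-app deployment and the default app label that kubectl create deployment assigns) is to list the pod's init containers and containers:

$ kubectl get pods -l app=nginx-app -o jsonpath='{.items[0].spec.initContainers[*].name}{"\n"}{.items[0].spec.containers[*].name}{"\n"}'
linkerd-init
linkerd-proxy nginx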
At this point, the necessary components are set up for you to explore Linkerd further. You can also try out the jaeger and multicluster extensions, similar to the process of installing and using the viz extension, and try out their capabilities.
Inject Linkerd data plane automatically
In this approach, we shall see how to instruct Kubernetes to automatically inject the Linkerd data plane into workloads at deployment time. We can achieve this by adding the linkerd.io/inject annotation to the deployment descriptor, which causes the proxy injector admission hook to execute and inject the data plane components automatically.
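For example, assuming the nginx-app deployment from earlier, the annotation can be patched into the pod template like this:

$ kubectl patch deployment nginx-app -p '{"spec":{"template":{"metadata":{"annotations":{"linkerd.io/inject":"enabled"}}}}}'
deployment.apps/nginx-app patched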
This annotation can also be specified at the namespace level to affect all the workloads within the namespace. Note that any resources created before the annotation was added to the namespace will require a rollout restart to trigger the injection of the Linkerd components.
Uninstalling Linkerd
Now that we have walked through the installation and setup process of Linkerd, let's also cover how to remove it from the infrastructure and go back to the state prior to its installation. The first step would be to remove extensions, such as viz:
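Both steps follow the same render-and-pipe pattern as the install, this time piped to kubectl delete; first the viz extension, then the control plane itself:

$ linkerd viz uninstall | kubectl delete -f -
$ linkerd uninstall | kubectl delete -f -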
Carpe Jugulum is the 23rd Discworld novel and the 6th witches
novel. I would not recommend reading it before Maskerade, which introduces Agnes.
There are some spoilers for Wyrd
Sisters, Lords and Ladies, and
Maskerade in the setup here and hence in the plot description
below. I don't think they matter that much, but if you're avoiding all
spoilers for earlier books, you may want to skip over this one. (You're
unlikely to want to read it before those books anyway.)
It is time to name the child of the king of Lancre, and in a gesture of
good will and modernization, he has invited his neighbors in Überwald to
attend. Given that those neighbors are vampires, an open invitation was
perhaps not the wisest choice.
Meanwhile, Granny Weatherwax's invitation has gone missing. On the plus
side, that meant she was home to be summoned to the bedside of a pregnant
woman who was kicked by a cow, where she makes the type of hard decision
that Granny has been making throughout the series. On the minus side, the
apparent snub seems to send her into a spiral of anger at the lack of
appreciation.
Points off right from the start for a plot based on a misunderstanding and
a subsequent refusal of people to simply talk to each other. It is partly
engineered, but still, it's a cheap and irritating plot.
This is an odd book.
The vampires (or vampyres, as the Count prefers to spell it) think of themselves
as modern and sophisticated, making a break from the past by attempting to
overcome such traditional problems as burning up in the sunlight and fear
of religious symbols and garlic. The Count has put his family through
rigorous training and desensitization, deciding such traditional
vulnerabilities are outdated things of the past. He has, however, kept
the belief that vampires are at the top of a natural chain of being,
humans are essentially cattle, and vampires naturally should rule and feed
on the population. Lancre is an attractive new food source. Vampires
also have mind control powers, control the weather, and can put their
minds into magpies.
They are, in short, enemies designed for Granny Weatherwax, the witch
expert in headology. A shame that Granny is apparently off sulking.
Nanny and Agnes may have to handle the vampires on their own, with the
help of Magrat.
One of the things that makes this book odd is that it seemed like
Pratchett was setting up some character growth, giving Agnes a chance to
shine, and giving Nanny Ogg a challenge that she didn't want. This sort
of happens, but then nothing much comes of it. Most of the book is the
vampires preening about how powerful they are and easily conquering
Lancre, while everyone else flails ineffectively. Pratchett does pull
together an ending with some nice set pieces, but that ending doesn't
deliver on any of the changes or developments it felt like the story was
setting up.
We do get a lot of Granny, along with an amusingly earnest priest of Om
(lots of references to Small Gods here,
while firmly establishing it as long-ago history). Granny is one of my
favorite Discworld characters, so I don't mind that, but we've seen Granny
solve a lot of problems before. I wanted to see more of Agnes, who is the
interesting new character and whose dynamic with her inner voice feels
like it has a great deal of unrealized potential.
There is a sharp and condensed version of comparative religion from
Granny, which is probably the strongest part of the book and includes one
of those Discworld quotes that has been widely repeated out of context:
"And sin, young man, is when you treat people as things. Including
yourself. That's what sin is."
"It's a lot more complicated than that "
"No. It ain't. When people say things are a lot more complicated
than that, they means they're getting worried that they won t like the
truth. People as things, that's where it starts."
This loses a bit in context because this book is literally about treating
people as things, and thus the observation feels more obvious when it
arrives in this book than when you encounter it on its own, but it's still
a great quote.
Sadly, I found a lot of this book annoying. One of those annoyances is a
pet peeve that others may or may not share: I have very little patience
for dialogue in phonetically-spelled dialect, and there are two
substantial cases of that here. One is a servant named Igor who speaks
with an affected lisp represented by replacing every ess sound with th,
resulting in lots of this:
"No, my Uncle Igor thtill workth for him. Been thtruck by lightning
three hundred timeth and thtill putth in a full night'th work."
I like Igor as a character (he's essentially a refugee from The
Addams Family, which adds a good counterpoint to the malicious and
arrogant evil of the vampires), but my brain stumbles over words like
"thtill" every time. It's not that I can't decipher it; it's that the
deciphering breaks the flow of reading in a way that I found not at all
fun. It bugged me enough that I started skipping his lines if I couldn't
work them out right away.
The other example is the Nac Mac Feegles, who are... well, in the book,
they're Pictsies and a type of fairy, but they're Scottish Smurfs, right
down to only having one female (at least in this book). They're
entertainingly homicidal, but they all talk like this:
"Ach, hins tak yar scaggie, yer dank yowl callyake!"
I'm from the US and bad with accents and even worse with accents
reproduced in weird spellings, and I'm afraid that I found 95% of
everything said by Nac Mac Feegles completely incomprehensible to the
point where I gave up even trying to read it. (I'm now rather worried
about the Tiffany Aching books and am hoping Pratchett toned the dialect
down a lot, because I'm not sure I can deal with more of this.)
But even apart from the dialect, I thought something was off about the
plot structure of this book. There's a lot of focus on characters who
don't seem to contribute much to the plot resolution. I wanted more of
the varied strengths of Lancre coming together, rather than the focus on
Granny. And the vampires are absurdly powerful, unflappable, smarmy, and
contemptuous of everyone, which makes for threatening villains but also
means spending a lot of narrative time with a Discworld version of
Jacob Rees-Mogg. I
feel like there's enough of that in the news already.
Also, while I will avoid saying too much about the plot, I get very
suspicious when older forms of oppression are presented as good
alternatives to modernizing, rationalist spins on exploitation. I see
what Pratchett was trying to do, and there is an interesting point here
about everyone having personal relationships and knowing their roles (a
long-standing theme of the Lancre Discworld stories). But I think the
reason why there is some nostalgia for older autocracy is that we only
hear about it from stories, and the process of storytelling often creates
emotional distance and a patina of adventure and happy outcomes. Maybe
you can make an argument that classic British imperialism is superior to
smug neoliberalism, but both of them are quite bad and I don't want either
of them.
On a similar note, Nanny Ogg's tyranny over her entire extended clan
continues to be played for laughs, but it's rather unappealing and seems
more abusive the more one thinks about it. I realize the witches are not
intended to be wholly good or uncomplicated moral figures, but I want to
like Nanny, and Pratchett seems to be writing her as likable, even though
she has an astonishing lack of respect for all the people she's related
to. One might even say that she treats them like things.
There are some great bits in this book, and I suspect there are many
people who liked it more than I did. I wouldn't be surprised if it was
someone's favorite Discworld novel. But there were enough bits that
didn't work for me that I thought it averaged out to a middle-of-the-road
entry.
Followed by The Fifth Elephant in publication order. This is the
last regular witches novel, but some of the thematic thread is picked up
by The Wee Free Men, the first Tiffany Aching novel.
Rating: 7 out of 10
I hope you had a nice Halloween!
I've collected together some songs that I've enjoyed over the last couple of
years that loosely fit a theme: ambient, instrumental, experimental, industrial,
dark, disconcerting, etc. I've prepared a Spotify playlist of most of
them, but not all. The list is inline below as well, with many (but not all)
tracks linking to Bandcamp, if I could find them there.
This is a bit late, sorry. If anyone listens to something here and has any
feedback I'd love to hear it.
(If you are reading this on an aggregation site, it's possible the embeds won't
work. If so, click through to my main site.)
Spotify playlist: https://open.spotify.com/playlist/3bEvEguRnf9U1RFrNbv5fk?si=9084cbf78c364ac8
The list, with Bandcamp embeds where possible:
LaTeX, the age-old typesetting system, makes me angry. Not because it's bad.
To clarify: not because there's something better. But because there should
be.
When writing a document using LaTeX, if you are prone to procrastination
it can be very difficult to focus on the task at hand, because there are so
many yaks to shave. Here are a few points of advice.
format the document source for legible reading. Yes, it's the input
to the typesetter, and yes, the output of the typesetter needs to be
legible. But it's worth making the input easy to read, too, because
you will spend a great deal of your time reading it in the editor.
avoid rebuilding your rendered document too often. It's slow, it takes you
out of the activity of writing, and it throws up lots of opportunities to
get distracted by some rendering nit that you didn't realise would happen.
Unless you are very good at manoeuvring around long documents, liberally
split them up. I think it's fine to have sections in their own source
files.
Machine-assisted movement around documents is good. If you use (neo)vim,
you can tweak exuberant-ctags to generate more useful tags for LaTeX
documents than what you get OOTB, including jumping to \label definitions
and the BibTeX source of \cite references. See this stackoverflow post.
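For example, a few lines in ~/.ctags along these lines teach exuberant-ctags
to tag \label definitions and BibTeX entries (a sketch; the regexes are
simplistic, and the linked post has more robust ones):
--regex-tex=/\\label\{([^}]*)\}/\1/l,label/
--langdef=bib
--langmap=bib:.bib
--regex-bib=/^@[A-Za-z]+\{([^,]*),/\1/e,entry/
With the .bib files included in the tags run, Ctrl-] on a \cite key can then
jump straight to its BibTeX entry.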
If you use syntax highlighting in your editor, take a long, hard look at
what it's drawing attention to. It's not your text, that's for sure. Is
it worth having it on? Consider turning it off. Or (yak shaving beware!)
tweak it to de-emphasise things, instead of emphasising them. One small
example for (neo)vim, to change tokens recognised as being "todo" to
match the styling used for comments (which is normally de-emphasised):
hi def link texTodo Comment
In a nutshell, I think it's wise to move much document reviewing work back
into the editor rather than the rendered document, at least in the early
stages of a section. And to do that, you need the document to be as legible
as possible in the editor. The important stuff is the text you write, not
the TeX macros you've sprinkled around to format it.
A few tips I benefit from in terms of source formatting:
I stick a line of 78 '%' characters between each section and sub-section.
This helps to visually break them up, and makes them quicker to find when
scrolling past.
I indent as much of the content as I can in each chapter/section/subsection
(however deep I go in sections) to tie it to the section it belongs to
and see at a glance how deep I am in subsections, just like with source
code. The exception is environments that I can't indent due to other tool
limitations: I have code excerpts demarcated by \begin{code}/\end{code}
which are executed by Haskell's GHCi interpreter, and the indentation can
interfere with Haskell's indentation rules.
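As a small illustration of both conventions (the separator line shortened
here to fit):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Evaluation}
    The text sits one level in, tied visually to its section.
    \subsection{Benchmarks}
        Deeper sections indent further, just like source code.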
For large documents (like a thesis), I have little helper "standalone" .tex
files whose purpose is to let me build just one chapter or section at a time.
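A sketch of what such a helper can look like, assuming the chapter lives in
its own file and the preamble is shared (the file names here are examples):
% standalone-ch3.tex: build chapter 3 on its own
\documentclass{book}
\input{preamble}
\begin{document}
\include{chapters/ch3}
\end{document}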
I'm fairly sure I'll settle on a serif font for my final document. But I
have found that a sans-serif font is easier on my eyes on-screen. YMMV.
Of course, you need to review the rendered document too! I like to bounce that
to a tablet with a pen/stylus/pencil and review it in a different environment
to where I write. I then end up with a long list of scrawled notes, and a third
distinct activity, back at the writing desk, is to systematically go through
them and apply some GTD-style thinking to them: can I fix it in a few seconds?
Do it straight away. Delegate it? Unlikely. Defer it? Transfer the review note
into another system of record (such as a LaTeX \todo).
And finally
(Sorry if you have read this already; due to a tag mistake, my draft
copy got published.)
I recently bought a refurbished ThinkPad X260. If you have read my
post about my laptop refreshment ...
This is the 21st Discworld novel and relies on the previous Watch novels
for characterization and cast development. I would not start here.
In the middle of the Circle Sea, the body of water between Ankh-Morpork
and the desert empire of Klatch, a territorial squabble between one
fishing family from Ankh-Morpork and one from Klatch is interrupted by a
weathercock rising dramatically from the sea. When the weathercock is
shortly followed by the city to which it is attached and the island on
which that city is resting, it's justification for more than a fishing
squabble. It's a good reason for a war over new territory.
The start of hostilities is an assassination attempt on a prince of
Klatch. Vimes and the Watch start investigating, but politics outraces
police work. Wars are a matter for the nobility and their armies, not for
normal civilian leadership. Lord Vetinari resigns, leaving the city under
the command of Lord Rust, who is eager for a glorious military victory
against their long-term rivals. The Klatchians seem equally eager to
oblige.
One of the useful properties of a long series is that you build up a cast
of characters you can throw at a plot, and if you can assume the reader
has read enough of the previous books, you don't have to spend a lot of
time on establishing characterization and can get straight to the story.
Pratchett uses that here. You could read this cold, I suppose, because
most of the Watch are obvious enough types that the bits of
characterization they get are enough, but it works best with the nuance
and layers of the previous books. Of course Colon is the most susceptible
to the jingoism that prompts the book's title, and of course Angua's
abilities make her the best detective. The familiar characters let
Pratchett dive right in to the political machinations.
Everyone plays to type here: Vetinari is deftly maneuvering everyone into
place to make the situation work out the way he wants, Vimes is stubborn
and ethical and needs Vetinari to push him in the right direction, and
Carrot is sensible and effortlessly charismatic. Colon and Nobby are, as
usual, comic relief of a sort, spending much of the book with Vetinari
while not understanding what he's up to. But Nobby gets an interesting
bit of characterization in the form of an extended turn as a spy that
starts as cross-dressing and becomes an understated sort of gender
exploration hidden behind humor that's less mocking than one might expect.
Pratchett has been slowly playing more with gender in this series, and
while it's simple and a bit deemphasized, I like it.
I think the best part of this book, thematically, is the contrast between
Carrot's and Vimes's reactions to the war. Carrot is a paragon of a
certain type of ethics in Watch novels, but a war is one of the things
that plays to his weaknesses. Carrot follows rules, and wars have rules
of a type. You can potentially draw Carrot into them. But Vimes, despite
being someone who enforces rules professionally, is deeply suspicious of
them, which makes him harder to fool. Pratchett uses one of the Klatchian
characters to hold a mirror up to Vimes in ways that are minor spoilers,
but that I quite liked.
The argument of jingoism, made by both Lord Rust and by the Klatchian
prince, is that wars are something special, outside the normal rules of
justice. Vimes absolutely refuses this position. As someone from the US, I
found his reaction to Lord Rust's attempted militarization of the Watch to
be one of the best moments of the book.
Not a muscle moved on Rust's face. There was a clink as
Vimes's badge was set neatly on the table.
"I don't have to take this," Vimes said calmly.
"Oh, so you'd rather be a civilian, would you?"
"A watchman is a civilian, you inbred streak of pus!"
Vimes is also willing to think of a war as a possible crime, which may not
be as effective as Vetinari's tricky scheming but which is very
emotionally satisfying.
As with most Pratchett books, the moral underpinnings of the story aren't
that elaborate: people are people despite cultural differences, wars are
bad, and people are too ready to believe the worst of their neighbors.
The story arc is not going to provide great insights into human character
that the reader did not already have. But watching Vimes stubbornly
attempt to do the right thing regardless of the rule book is wholly
satisfying, and watching Vetinari at work is equally, if differently,
enjoyable.
Not the best Discworld novel, but one of the better ones.
Followed by The Last Continent in publication order, and by
The Fifth Elephant thematically.
Rating: 8 out of 10
This post describes how I've put together a simple static content server for
kubernetes clusters using a Pod with a persistent volume and multiple
containers: an sftp server to manage contents, a web server to publish them
with optional access control and another one to run scripts which need access
to the volume filesystem.
The sftp server runs using
MySecureShell, the web
server is nginx and the script runner uses the
webhook tool to publish endpoints to call
them (the calls will come from other Pods that run backend servers or are
executed from Jobs or CronJobs).
History
The system was developed because we had a NodeJS API with endpoints to upload
files and store them on S3 compatible services that were later accessed via
HTTPS, but the requirements changed and we needed to be able to publish folders
instead of individual files using their original names and apply access
restrictions using our API.
Thinking about our requirements the use of a regular filesystem to keep the
files and folders was a good option, as uploading and serving files is simple.
For the upload I decided to use the sftp protocol, mainly because I already
had an sftp container image based on
mysecureshell prepared; once
we settled on that we added sftp support to the API server and configured it
to upload the files to our server instead of using S3 buckets.
To publish the files we added a nginx container configured
to work as a reverse proxy that uses the
ngx_http_auth_request_module
to validate access to the files (the sub request is configurable, in our
deployment we have configured it to call our API to check if the user can
access a given URL).
Finally we added a third container when we needed to execute some tasks
directly on the filesystem (using kubectl exec with the existing containers
did not seem a good idea, as that is not supported by CronJobs objects, for
example).
The solution we found avoiding the NIH Syndrome (i.e. write our own tool) was
to use the webhook tool to provide the
endpoints to call the scripts; for now we have three:
one to get the disc usage of a PATH,
one to hardlink all the files that are identical on the filesystem,
one to copy files and folders from S3 buckets to our filesystem.
Container definitions
mysecureshell
The mysecureshell container can be used to provide an sftp service with
multiple users (although the files are owned by the same UID and GID) using
standalone containers (launched with docker or podman) or in an
orchestration system like kubernetes, as we are going to do here.
The image is generated using the following Dockerfile:
The /etc/sftp_config file is used to
configure
the mysecureshell server to have all the user homes under /sftp/data, only
allow them to see the files under their home directories as if it were at the
root of the server and close idle connections after 5m of inactivity:
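The full file is not reproduced here, but a minimal version of the options
just described would look roughly like this (directive names as in the
MySecureShell documentation; treat it as a sketch):
<Default>
    Home            /sftp/data/$USER
    StayAtHome      true
    VirtualChroot   true
    IdleTimeOut     5m
</Default>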
The entrypoint.sh script is responsible for preparing the container for
the users included on the /secrets/user_pass.txt file (it creates the users
with their HOME directories under /sftp/data and a /bin/false shell, and
creates the key files from /secrets/user_keys.txt if available).
The script expects a couple of environment variables:
SFTP_UID: UID used to run the daemon and for all the files, it has to be
different than 0 (all the files managed by this daemon are going to be
owned by the same user and group, even if the remote users are different).
SFTP_GID: GID used to run the daemon and for all the files, it has to be
different than 0.
And can use the SSH_PORT and SSH_PARAMS values if present.
It also requires the following files (they can be mounted as secrets in
kubernetes):
/secrets/host_keys.txt: Text file containing the ssh server keys in mime
format; the file is processed using the reformime utility (the one included
on busybox) and can be generated using the
gen-host-keys script included on the container (it uses ssh-keygen and
makemime).
/secrets/user_pass.txt: Text file containing lines of the form
username:password_in_clear_text (only the users included on this file are
available on the sftp server, in fact in our deployment we use only the
scs user for everything).
And optionally can use another one:
/secrets/user_keys.txt: Text file that contains lines of the form
username:public_ssh_ed25519_or_rsa_key; the public keys are installed on
the server and can be used to log into the sftp server if the username
exists on the user_pass.txt file.
The contents of the entrypoint.sh script are:
The container also includes a couple of auxiliary scripts; the first one can be
used to generate the host_keys.txt file as follows:
$ docker run --rm stodh/mysecureshell gen-host-keys > host_keys.txt
Where the script is as simple as:
And there is another script to generate a .tar file that contains auth data
for the list of usernames passed to it (the file contains a user_pass.txt
file with random passwords for the users, public and private ssh keys for them
and the user_keys.txt file that matches the generated keys).
To generate a tar file for the user scs we can execute the following:
$ docker run --rm stodh/mysecureshell gen-users-tar scs > /tmp/scs-users.tar
To see the contents and the text inside the user_pass.txt file we can do:
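For example, with GNU tar (assuming user_pass.txt sits at the root of the
archive):
$ tar tvf /tmp/scs-users.tar
$ tar xOf /tmp/scs-users.tar user_pass.txt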
nginx
Basically we are removing the existing docker-entrypoint.d scripts from the
standard image and adding a new one that configures the web server as we want
using a couple of environment variables:
AUTH_REQUEST_URI: URL to use for the auth_request, if the variable is not
found on the environment auth_request is not used.
HTML_ROOT: Base directory of the web server, if not passed the default
/usr/share/nginx/html is used.
Note that if we don't pass the variables everything works as if we were using
the original nginx image.
The contents of the configuration script are:
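The script itself is not reproduced here, but the server block it produces is
essentially the standard auth_request pattern; a hand-written sketch of the
result, with an assumed API endpoint, would be:
server {
    listen 80;
    root /sftp/data;                              # HTML_ROOT
    location / {
        auth_request /_auth;                      # only when AUTH_REQUEST_URI is set
    }
    location = /_auth {
        internal;
        proxy_pass http://api-svc/check-access;   # AUTH_REQUEST_URI (assumed)
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}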
As we will see later the idea is to use the /sftp/data or /sftp/data/scs
folder as the root of the web published by this container and create an
Ingress object to provide access to it outside of our kubernetes cluster.
webhook-scs
The webhook-scs container is generated using the following Dockerfile:
Again, we use a multi-stage build because in production we wanted to support a
functionality that is not already on the official versions (streaming the
command output as a response instead of waiting until the execution ends); this
time we build the image applying the PATCH included on this
pull request against a released
version of the source instead of creating a fork.
The entrypoint.sh script is used to generate the webhook configuration file
for the existing hooks using environment variables (basically the
WEBHOOK_WORKDIR and the *_TOKEN variables) and launch the webhook
service:
The entrypoint.sh script generates the configuration file for the webhook
server calling functions that print a yaml section for each hook and
optionally adds rules to validate access to them comparing the value of a
X-Webhook-Token header against predefined values.
The expected token values are taken from environment variables, we can define
a token variable for each hook (DU_TOKEN, HARDLINK_TOKEN or S3_TOKEN)
and a fallback value (COMMON_TOKEN); if no token variable is defined for a
hook no check is done and everybody can call it.
The Hook
Definition documentation explains the options you can use for each hook, the
ones we have right now do the following:
du: runs on the $WORKDIR directory, passes as first argument to the
script the value of the path query parameter and sets the variable
OUTPUT_FORMAT to the fixed value json (we use that to print the output of
the script in JSON format instead of text); see the sketch after this list.
hardlink: runs on the $WORKDIR directory and takes no parameters.
s3sync: runs on the $WORKDIR directory and sets a lot of environment
variables from values read from the JSON encoded payload sent by the caller
(all the values must be sent by the caller even if they are assigned an empty
value, if they are missing the hook fails without calling the script); we
also set the stream-command-output value to true to make the script show
its output as it is working (we patched the webhook source to be able to
use this option).
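For reference, a generated entry for the du hook (the simplest of the three)
would look roughly like this in webhook's hook definition syntax; the script
path is an assumption, and the trigger-rule block is only emitted when a
token variable is defined:
- id: du
  execute-command: /usr/local/bin/du.sh
  command-working-directory: /sftp/data
  pass-arguments-to-command:
    - source: url
      name: path
  pass-environment-to-command:
    - source: string
      envname: OUTPUT_FORMAT
      name: json
  trigger-rule:
    match:
      type: value
      value: "<value of DU_TOKEN or COMMON_TOKEN>"
      parameter:
        source: header
        name: X-Webhook-Token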
The du hook script
The du hook script checks if the argument passed is a directory,
computes its size using the du command and prints the results in text format
or as a JSON dictionary:
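The original script is longer, but a minimal sketch of that logic (GNU du
assumed) is:
#!/bin/sh
# Print the disk usage of the directory passed as the first argument.
DIR="$1"
if [ ! -d "$DIR" ]; then
  if [ "$OUTPUT_FORMAT" = "json" ]; then
    echo "{\"error\":\"The provided PATH ('$DIR') is not a directory\"}"
  else
    echo "The provided PATH ('$DIR') is not a directory"
  fi
  exit 1
fi
# du -sb: summarise, in bytes (GNU du)
BYTES="$(du -sb "$DIR" | cut -f1)"
if [ "$OUTPUT_FORMAT" = "json" ]; then
  echo "{\"path\":\"$DIR\",\"bytes\":\"$BYTES\"}"
else
  echo "$BYTES $DIR"
fi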
The hardlink hook script
The hardlink hook script is really simple; it just runs the
util-linux version of the
hardlink
command on its working directory:
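Stripped of error handling, the essence of it is:
#!/bin/sh
# Replace identical files under the working directory with hardlinks.
exec hardlink .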
We use that to reduce the size of the stored content; to manage versions of
files and folders we keep each version in a separate directory, and when one or
more files are unchanged between versions this script turns them into hardlinks
to the same file, reducing the space used on disk.
The s3sync hook script
The s3sync hook script uses the s3fs
tool to mount a bucket and synchronise data between a folder inside the bucket
and a directory on the filesystem using rsync; all values needed to execute
the task are taken from environment variables:
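The real script has more error handling, but the core sequence, with assumed
variable names, is:
#!/bin/sh
# Sketch: mount the bucket with s3fs, rsync one folder, unmount.
# All variable names below are assumptions; the real hook defines its own.
set -e
WORK="$(mktemp -d)"
MNT="$WORK/s3data"
mkdir -p "$MNT"
echo "$S3_ACCESS_KEY_ID:$S3_SECRET_ACCESS_KEY" > "$WORK/.passwd-s3fs"
chmod 0400 "$WORK/.passwd-s3fs"
s3fs "$S3_BUCKET" "$MNT" -o passwd_file="$WORK/.passwd-s3fs"
echo "Mounted bucket '$S3_BUCKET' on '$MNT'"
rsync -av --stats "$MNT/$S3_PATH/" "$SCS_PATH/"
umount "$MNT"
echo "Called umount for '$MNT'"
rm -rf "$WORK"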
Deployment objects
The system is deployed as a StatefulSet with one replica.
Our production deployment is done on AWS and to be
able to scale we use EFS for our
PersistentVolume; the idea is that the volume has no size limit, its
AccessMode can be set to ReadWriteMany and we can mount it from multiple
instances of the Pod without issues, even if they are in different availability
zones.
For development we use k3d and we are also able to scale the
StatefulSet for testing because we use a ReadWriteOnce PVC, but it points
to a hostPath that is backed up by a folder that is mounted on all the
compute nodes, so in reality Pods in different k3d nodes use the same folder
on the host.
secrets.yaml
The secrets file contains the files used by the mysecureshell container that
can be generated using kubernetes pods as follows (we are only creating the
scs user):
$kubectl run "mysecureshell"--restart='Never'--quiet--rm--stdin\ --image "stodh/mysecureshell:latest" -- gen-host-keys >"./host_keys.txt"$kubectl run "mysecureshell"--restart='Never'--quiet--rm--stdin\ --image "stodh/mysecureshell:latest" -- gen-users-tar scs >"./users.tar"
Once we have the files we can generate the secrets.yaml file as follows:
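One way to do it, assuming the member names inside users.tar match the ones
described earlier (the Secret name matches the one used in the rest of the
deployment):
$ tar xf users.tar user_pass.txt user_keys.txt
$ kubectl create secret generic scs-secrets --dry-run=client -o yaml \
    --from-file=host_keys.txt=host_keys.txt \
    --from-file=user_pass.txt=user_pass.txt \
    --from-file=user_keys.txt=user_keys.txt > secrets.yaml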
The resulting secrets.yaml will look like the following file (the base64
would match the content of the files, of course):
pvc.yaml
The persistent volume claim for a simple deployment (one with only one instance
of the statefulSet) can be as simple as this (we don't set the
storageClassName, so the default one is used):
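A sketch of such a claim (the name matches the one used later; the size is an
example):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi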
Volumes in our development environment (k3d)
In our development deployment we create the following PersistentVolume as
required by the
Local
Persistence Volume Static Provisioner (note that the /volumes/scs-pv has to
be created by hand; in our k3d system we mount the same host directory on the
/volumes path of all the nodes and create the scs-pv directory by hand
before deploying the persistent volume):
And to make sure that everything works as expected we update the PVC definition
to add the right storageClassName:
Volumes in our production environment (aws)
In the production deployment we don't create the PersistentVolume (we are
using the
aws-efs-csi-driver which
supports Dynamic Provisioning) but we add the storageClassName (we set it
to the one mapped to the EFS driver, i.e. efs-sc) and set ReadWriteMany
as the accessMode:
statefulset.yaml
The definition of the statefulSet is as follows:
Notes about the containers:
nginx: As this is an example the web server is not using an
AUTH_REQUEST_URI and uses the /sftp/data directory as the root of the web
(to get to the files uploaded for the scs user we will need to use /scs/
as a prefix on the URLs).
mysecureshell: We are adding the IPC_OWNER capability to the container to
be able to use some of the sftp-* commands inside it, but they are
not really needed, so adding the capability is optional.
webhook: We are launching this container in privileged mode to be able to
use the s3fs-fuse, as it will not work otherwise for now (see this
kubernetes issue); if
the functionality is not needed the container can be executed with regular
privileges; besides, as we are not enabling public access to this service we
don't define *_TOKEN variables (if required the values should be read from a
Secret object).
Notes about the volumes:
the devfuse volume is only needed if we plan to use the s3fs command on
the webhook container; if not, we can remove the volume definition and its
mounts.
service.yaml
To be able to access the different services on the statefulset we publish the
relevant ports using the following Service object:
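A sketch of that object, publishing the three ports that show up in the get
output below (the selector labels are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: scs-svc
spec:
  selector:
    app: scs
  ports:
    - name: ssh
      port: 22
    - name: http
      port: 80
    - name: webhook
      port: 9000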
ingress.yaml
To download the scs files from the outside we can add an ingress object like
the following (the definition is for testing using the localhost name):
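A sketch of such an object for the nginx ingress class (annotations and TLS
omitted):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: scs-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: scs-svc
                port:
                  number: 80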
Deployment
To deploy the statefulSet we create a namespace and apply the object
definitions shown before:
$ kubectl create namespace scs-demo
namespace/scs-demo created
$ kubectl -n scs-demo apply -f secrets.yaml
secret/scs-secrets created
$ kubectl -n scs-demo apply -f pvc.yaml
persistentvolumeclaim/scs-pvc created
$ kubectl -n scs-demo apply -f statefulset.yaml
statefulset.apps/scs created
$ kubectl -n scs-demo apply -f service.yaml
service/scs-svc created
$ kubectl -n scs-demo apply -f ingress.yaml
ingress.networking.k8s.io/scs-ingress created
Once the objects are deployed we can check that all is working using kubectl:
$ kubectl -n scs-demo get all,secrets,ingress
NAME READY STATUS RESTARTS AGE
pod/scs-0 3/3 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/scs-svc ClusterIP 10.43.0.47 <none> 22/TCP,80/TCP,9000/TCP 21s
NAME READY AGE
statefulset.apps/scs 1/1 24s
NAME TYPE DATA AGE
secret/default-token-mwcd7 kubernetes.io/service-account-token 3 53s
secret/scs-secrets Opaque 3 39s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/scs-ingress nginx localhost 172.21.0.5 80 17s
At this point we are ready to use the system.
Usage examples
File uploads
As previously mentioned, in our system the idea is to use the sftp server from
other Pods, but to test the system we are going to do a kubectl port-forward
and connect to the server using our host client and the password we have
generated (it is on the user_pass.txt file, inside the users.tar archive):
$ kubectl -n scs-demo port-forward service/scs-svc 2020:22 &
Forwarding from 127.0.0.1:2020 -> 22
Forwarding from [::1]:2020 -> 22
$ PF_PID=$!
$ sftp -P 2020 scs@127.0.0.1   (1)
Handling connection for 2020
The authenticity of host '[127.0.0.1]:2020 ([127.0.0.1]:2020)' can't be \
established.
ED25519 key fingerprint is SHA256:eHNwCnyLcSSuVXXiLKeGraw0FT/4Bb/yjfqTstt+088.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:2020' (ED25519) to the list of known \
hosts.
scs@127.0.0.1's password: **********
Connected to 127.0.0.1.
sftp> ls -la
drwxr-xr-x    2 sftp     sftp         4096 Sep 25 14:47 .
dr-xr-xr-x    3 sftp     sftp         4096 Sep 25 14:36 ..
sftp> !date -R > /tmp/date.txt   (2)
sftp> put /tmp/date.txt .
Uploading /tmp/date.txt to /date.txt
date.txt                                      100%   32    27.8KB/s   00:00
sftp> ls -l
-rw-r--r--    1 sftp     sftp           32 Sep 25 15:21 date.txt
sftp> ln date.txt date.txt.1   (3)
sftp> ls -l
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt.1
sftp> put /tmp/date.txt date.txt.2   (4)
Uploading /tmp/date.txt to /date.txt.2
date.txt                                      100%   32    27.8KB/s   00:00
sftp> ls -l   (5)
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt.1
-rw-r--r--    1 sftp     sftp           32 Sep 25 15:21 date.txt.2
sftp> exit
$ kill "$PF_PID"
[1]  + terminated  kubectl -n scs-demo port-forward service/scs-svc 2020:22
(1) We connect to the sftp service on the forwarded port with the scs user.
(2) We put a file we have created on the host into the directory.
(3) We make a hard link of the uploaded file.
(4) We put a second copy of the file we created locally.
(5) On the file list we can see that the first two files have two hardlinks.
File retrievals
If our ingress is configured right we can download the date.txt file from the
URL http://localhost/scs/date.txt:
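For example, with curl:
$ curl -s http://localhost/scs/date.txt
The file should come back with the date string we uploaded earlier.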
Use of the webhook container
To finish this post we are going to show how we can call the hooks directly,
from a CronJob and from a Job.
Direct script call (du)
In our deployment the direct calls are done from other Pods; to simulate that
we are going to do a port-forward and call the script with an existing PATH
(the root directory) and a bad one:
$ kubectl -n scs-demo port-forward service/scs-svc 9000:9000 >/dev/null &
$ PF_PID=$!
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=.")"
$ echo $JSON
{"path":"","bytes":"4160"}
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=foo")"
$ echo $JSON
{"error":"The provided PATH ('foo') is not a directory"}
$ kill $PF_PID
As we only have files on the base directory we print the disk usage of the .
PATH, and the output is in JSON format because we export OUTPUT_FORMAT with
the value json on the webhook configuration.
Cronjobs (hardlink)
As explained before, the webhook container can be used to run cronjobs; the
following one uses an alpine container to call the hardlink script each
minute (that setup is for testing, obviously):
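The manifest is not reproduced here; a sketch that matches the names and
labels used in the session below, calling the hook with wget from alpine,
could be:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hardlink
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            cronjob: hardlink
        spec:
          restartPolicy: Never
          containers:
            - name: hardlink
              image: alpine:latest
              command: ["wget", "-q", "-O-", "http://scs-svc:9000/hooks/hardlink"]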
The following console session shows how we create the object, allow a couple of
executions and remove it (in production we keep it, but running once a day, not
every minute):
$ kubectl -n scs-demo apply -f webhook-cronjob.yaml   (1)
cronjob.batch/hardlink created
$kubectl -n scs-demo get pods -l"cronjob=hardlink"-w2
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Pending 0 0s
hardlink-27735351-zvpnb 0/1 ContainerCreating 0 0s
hardlink-27735351-zvpnb 0/1 Completed 0 2s
^C
$ kubectl -n scs-demo logs pod/hardlink-27735351-zvpnb   (3)
Mode: real
Method: sha256
Files: 3
Linked: 1 files
Compared: 0 xattrs
Compared: 1 files
Saved: 32 B
Duration: 0.000220 seconds
$ sleep 60
$ kubectl -n scs-demo get pods -l "cronjob=hardlink"   (4)
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Completed 0 83s
hardlink-27735352-br5rn 0/1 Completed 0 23s
$ kubectl -n scs-demo logs pod/hardlink-27735352-br5rn   (5)
Mode: real
Method: sha256
Files: 3
Linked: 0 files
Compared: 0 xattrs
Compared: 0 files
Saved: 0 B
Duration: 0.000070 seconds
$ kubectl -n scs-demo delete -f webhook-cronjob.yaml   (6)
cronjob.batch "hardlink" deleted
(1) This command creates the cronjob object.
(2) This checks the pods with our cronjob label; we interrupt it once we see
that the first run has been completed.
(3) With this command we see the output of the execution; as this is the first
execution we see that date.txt.2 has been replaced by a hardlink (the
summary does not name the file, but it is the only option knowing the
contents from the original upload).
(4) After waiting a little bit we check the pods executed again to get the name
of the latest one.
(5) The log now shows that nothing was done.
(6) As this is a demo, we delete the cronjob.
Jobs (s3sync)
The following job can be used to synchronise the contents of a directory in a
S3 bucket with the SCS Filesystem:
The file with parameters for the script must be something like this:
Once we have both files we can run the Job as follows:
$ kubectl -n scs-demo create secret generic webhook-job-secrets \
    --from-file="s3sync.json=s3sync.json"   (1)
secret/webhook-job-secrets created
$ kubectl -n scs-demo apply -f webhook-job.yaml   (2)
job.batch/s3sync created
$kubectl -n scs-demo get pods -l"cronjob=s3sync"3
NAME READY STATUS RESTARTS AGE
s3sync-zx2cj 0/1 Completed 0 12s
$ kubectl -n scs-demo logs s3sync-zx2cj   (4)
Mounted bucket 's3fs-test' on '/root/tmp.jiOjaF/s3data'
sending incremental file list
created directory ./test
./
kyso.png
Number of files: 2 (reg: 1, dir: 1)
Number of created files: 2 (reg: 1, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 15,075 bytes
Total transferred file size: 15,075 bytes
Literal data: 15,075 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.147 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 15,183
Total bytes received: 74
sent 15,183 bytes received 74 bytes 30,514.00 bytes/sec
total size is 15,075 speedup is 0.99
Called umount for '/root/tmp.jiOjaF/s3data'
Script exit code: 0
$ kubectl -n scs-demo delete -f webhook-job.yaml   (5)
job.batch "s3sync" deleted
$ kubectl -n scs-demo delete secrets webhook-job-secrets   (6)
secret "webhook-job-secrets" deleted
(1) Here we create the webhook-job-secrets secret that contains the
s3sync.json file.
(2) This command runs the job.
(3) Checking the label cronjob=s3sync we get the Pods executed by the job.
(4) Here we print the logs of the completed job.
(5) Once we are finished we remove the Job.
(6) And also the secret.
Final remarks
This post has been longer than I expected, but I believe it can be useful for
someone; in any case, next time I'll try to explain something shorter or will
split it into multiple entries.
Fiction
A few days ago somebody asked me a question that is probably put to all fiction readers at some point: why do we like fiction? First of all, reading in itself is often described as food for the soul, because whenever you read anything you don't just read it, you also visualize it. And that visualization is far greater than any attempt in cinema, as there are no budget constraints and it takes no more than a minute to visualize a scenario if the writer is any good. You just close your eyes and in a moment you are transported to a different world. This is also what is known as world building, something fantasy writers are especially gifted in. Also, with the whole idea of parallel universes, there is so much fertile land for imagination that I cannot believe it hasn't been worked to death by now. And you do need a lot of patience to make a world, to make characters, to make characters a bit eccentric one way or the other. And you have to know how to fit it into a three, five, or whatever number of acts you want. And then, of course, there are readers like us who dream and add more color to the story than the author did, as we take his, her, or their story and weave countless stories depending on where we are and who we are.
What people need to understand is that not just readers want escapism; writers too want to escape from the human condition, and they find solace in whatever they write. The well-known example of J.R.R. Tolkien is always there: how he must have felt each day coming home after the war, to somehow find the strength and just dream away, transport himself to a world of hobbits, elves, and other mysterious beings. It surely must have taken away a lot of pain that he would otherwise have felt. There are many others. What also happens now and then is that authors believe in their own intelligence so much that they commit crimes, but that's par for the course.
Dean Koontz, Odd Apocalypse
Currently, I am reading the above title. It is perhaps the first horror title I have read that has so much fun in it. The hero has a sense of wit, humor, and sarcasm so sharp you could cut butter with it. And that is par for the course here, with wordplay happening every second paragraph, and I'm just 100 pages into the 500-page novel.
Now, while I haven't read the whole book and I'm just speculating: what if at the end we realize that the hero all along was, or is, the villain? Sadly, we don't have many such twisted stories, and that too is perhaps because most writers used to write black-and-white rather than grey characters. From all my reading, and even watching web series and whatnot, it is only the Europeans who seem to have a taste for exploring grey characters and giving twists at the end that people cannot anticipate. Even their heroes or heroines are grey characters, and they can really take you for a ride. It is also perhaps how we humans are, neither black nor white but more greyish. Having grey characters also frees the author quite a bit, as she doesn't have to use so-called tropes and can just let the characters lead themselves.
Indian Book publishing Industry
I do know Bengali stories have a lot of grey characters, but sadly most of the good works are still in Bengali and not widely published compared to, say, European or American authors. While there is huge potential in the Indian publishing market for English books, and there is also hunger, getting good and cheap publishers is the issue. Just recently SAGE's publishing division shut down, and this does not augur well for the Indian market. In the past few years, I and other readers have seen some very good publishing houses quit India for one reason or the other. GST has also made the sector more expensive. The only thing that works now, and has for some time, is the seconds and thirds market. For example, today I bought about 15-20 books at INR 125/- each, a kind of belated present for the self. That would be, at the most, 2 USD or 2 euros per book. I bet even a burger costs more than that, but again, India being a price-sensitive market, at these prices the seconds book sells. And these are all my favorite authors: Lee Child, Tom Clancy, Dean Koontz, and so on and so forth. I also saw a lot of fantasy books, but they will have to wait for another day.
Tourism in India for Debconf 23
I had shared a while back that I would write a bit about tourism, as Debconf, the annual Debian conference, will happen in India next year around this time. I was supposed to write it in the FAQ but couldn't find a place or a corner where I could. There are actually two things that people need to be aware of. The first, which people need to be very aware of, is food poisoning, or Delhi Belly. This is a far too common sight that I have witnessed, especially with westerners, when they come to visit India. I am somewhat shocked that it hasn't been covered in the FAQ, but then perhaps we cannot cover all the bases therein. I did find this interesting article and would recommend the suggestions given in it wholeheartedly. I would also suggest that people coming to India carry water-purification tablets with them if they decide to stay on and explore India.
Now, the thing with tourism is that one can have as much of it as one wants. One of the unique ways I have found some westerners having the time of their life is buying an Indian rickshaw, or tuk-tuk, and traveling with it. A few years ago, when I was more adventurous-spirited, I was able to meet a few of them. There is also the Race with Rickshaws that happens in Rajasthan, where you get to see about 10-odd cities in and around Rajasthan state and the vibrancy of the North. If somebody really wants to explore India, then I would suggest getting down to Goa, specifically South Goa, meeting the hippie crowd, and getting one of the hippie guidebooks to India. Most people forget that the hippies came to India in the 1960s and many of them just never left. Tap water in Pune is OK, and I have seen and experienced the same in Himachal, Garhwal, and Uttarakhand, although it has been a few years since I have been to those places. The North-East is a place I have yet to venture into.
India does have a lot of beauty, but most people are not clean-conscious, so if you go to common tourist destinations you will find a lot of garbage. Most cities in India give you the option of homestays, and some even offer food, so if you are on a budget and want to experience life with an Indian family, that is something you could look into. That way you can see, and share about, India with different eyes.
There is casteism, racism, and all that. Generally speaking, you will see it wielded a lot more in your face in North India than in South India, where it exists but is far more subtle. As for food, what has been shared in the India BOF doesn't even scratch the surface. If you stay with an Indian family, there is probably a much better chance of exploring the variety of food that India has to offer. From the western perspective, we tend to overcook stuff and make food with masalas, but that's the way most people like it. People who have had hot sauces or whatnot will probably find India much easier to adjust to, as tastes might be similar to some extent.
If you want to socialize with young people, discos are an option, but meetup.com is also a good place. You can share your passions, and many people have taken to it with gusto. We have also been hosting Comic Cons in India, but I haven't had the opportunity to attend them so far. India has a rich oral culture reaching back a few thousand years, but many of those who practice it reside in villages rather than in cities. And while there have been attempts in the past to record them, most have come to naught, as money runs out and there is no commercial viability to such projects; but that is probably for another day.
In the end, what I have shared is barely a drop in the ocean that is India. Come, have fun, explore, enjoy and invigorate yourself and others.
I am traveling to Europe, specifically to Ireland, for 6 days for a
work meeting.
I thought I could use my phone there. So I looked at my phone provider's
services in Europe, and found the "Fido roaming" services:
https://www.fido.ca/mobility/roaming
The fees, at the time of writing, are fifteen (15!) dollars PER DAY
to get access to my regular phone service (not unlimited!!).
If I do not use that "roaming" service, the fees are:
2$/min
0.75$/text
10$/20MB
That is absolutely outrageous. Any random phone plan in Europe will be
cheaper than this, by at least one order of magnitude. Just to take
any example:
https://www.tescomobile.ie/sim-only-plans.aspx
Those fine folks offer a one-time, prepaid plan for €15 for 28 days
which includes:
unlimited data
1000 minutes
500 text messages
12GB data elsewhere in Europe
I think it's absolutely scandalous that telecommunications providers
in Canada can charge so much money, especially since the most
prohibitive fees (the "non-prepaid" plans) are automatically charged
if I happen to forget to remove my sim card or put my phone in
"airplane mode".
As advised, I have called customer service at Fido for advice on how
to handle this situation. They have confirmed those are the only plans
available for travelers and could not accommodate me otherwise. I have
notified them I was in the process of filing this complaint.
I believe that Canada has become the technological dunce of the world,
and I blame the CRTC for its lack of regulation in that matter. You
should not allow those companies to grow into such a cartel that they
can do such price-fixing as they wish.
I haven't investigated Fido's competitors, but I will bet at least one
of my hats that they do not offer better service.
I attach a screenshot of the Fido page showing those
outrageous fees.
I have no illusions about this having any effect. I thought of
filing such a complaint after the Rogers outage as well, but
felt I had less of a standing there because I wasn't affected that
much (e.g. I didn't have a life-threatening situation myself).
This, however, was ridiculous and frustrating enough to trigger this
outrage. We'll see how it goes...
"We will respond to you within 10 working days."
Response from CRTC
They did respond within 10 days. Here is the full response:
Dear Antoine Beaupré:
Thank you for contacting us about your mobile telephone international roaming service plan rates concern with Fido Solutions Inc. (Fido).
In Canada, mobile telephone service is offered on a competitive basis. Therefore, the Canadian Radio-television and Telecommunications Commission (CRTC) is not involved in Fido's terms of service (including international roaming service plan rates), billing and marketing practices, quality of service issues and customer relations.
If you haven't already done so, we encourage you to escalate your concern to a manager if you believe the answer you have received from Fido's customer service is not satisfactory.
Based on the information that you have provided, this may also appear to be a Competition Bureau matter. The Competition Bureau is responsible for administering and enforcing the Competition Act, and deals with issues such as false or misleading representations, deceptive marketing practices and collusion. You can reach the Competition Bureau by calling 1-800-348-5358 (toll-free), by TTY (for deaf and hard of hearing people) by calling 1-866-694-8389 (toll-free). For more contact information, please visit http://www.competitionbureau.gc.ca/eic/site/cb-bc.nsf/eng/00157.html
When consumers are not satisfied with the service they are offered, we encourage them to compare the products and services of other providers in their area and look for a company that can better match their needs. The following tool helps to show choices of providers in your area: https://crtc.gc.ca/eng/comm/fourprov.htm
Thank you for sharing your concern with us.
In other words, complain with Fido, or change providers. Don't
complain to us, we don't manage the telcos, they self-regulate.
Great job, CRTC. This is going great. This is exactly why we're one of
the most expensive countries on the planet for cell phone service.
Live chat with Fido
Interestingly, the day after I received that response from the CRTC,
I received this email from Fido, while traveling:
Date: Tue, 13 Sep 2022 10:10:00 -0400
From: Fido DONOTREPLY@fido.ca
To: REDACTED
Subject: Fido roaming notice email
Roaming Welcome Confirmation
Fido
Date: 13 September 2022
Account number: [redacted]
Hello
Antoine Beaupré!
We are writing to let you know that at least one user registered on your account recently connected to a roaming network.
Below you will find the roaming welcome text message sent to the user (or users), which contained the applicable
roaming rates.
Roaming welcome text message
Recipient: REDACTED
Date and time: 2022-09-13 / 10:10:00
Hi, this is Fido: Welcome to your destination! You are enrolled in Fido Nomade, so use your data, talk and text like you do at
home. Since March 1, 2022 the rate at this destination is $15/day (+ taxes), valid every day until 11:59 p.m. ET, no matter which
time zone you are in. Have a good trip! Questions? See fido.ca/m/itinerance or dial +15149333436 (toll-free).
Need help?
PLAN A TRIP WITH Fido Nomade
Discover our roaming options and stay in touch abroad without worrying about your
bill.
Details
Manage your account
Manage your Fido products and services at home or on the road with My
Account.
Details
This email is generated automatically; please do not reply. This email (and any attachments) is confidential. If you are not the intended recipient,
please delete this message and destroy all copies.
FIDO SOLUTIONS 800, RUE DE LA GAUCHETIÈRE OUEST
BUREAU 4000 MONTRÉAL (QUÉBEC) H5A 1K3
Fido
I found that message, which arrived in French, utterly confusing (and
yes, I can read French). Basically, it says that some user (presumably
me!) connected to the network while roaming. I had just disabled
airplane mode on my phone to debug a Syncthing bug but had not enabled
roaming.
So this message seemed to say that I would be charged 15$ (per DAY!)
for roaming from now on. Confused, I tried their live chat to try to
clarify things, worried I would get charged even more for calling tech
support on *611. This is a transcript of the chat:
F: Hi! What are we doing today?
Type in your question or choose from the options below:
* Track my Equipment Order
* View Bill Online
* Payment Options
* iPhone 14 Pre-Order
A: i received a message about roaming while abroad but i did not
enable roaming on my phone, will i be charged anyways?
F: I think I know what you're asking for.
Select the topic that best matches your request or try rephrasing your question.
A: no
F: Thank you, this will help us to improve! Would you like to chat with a specialist?
Chat with a specialist
I'll get a specialist to help you with this.
It appears that you're not signed in. Your session may have timed out.
To save time and identify your account details, please sign in to My Account.
Sign in
I'm not able to sign in
Have any questions specific to your Fido account? To service you faster, please identify yourself by completing the form below.
A: Personal info
Form submitted
F: Thank you! I'll connect you with the next available specialist.
Your chat is being transferred to a Live Chat agent. Thanks for your patience.
We are here to assist you and we kindly ask that our team members be treated with respect and dignity. Please note that abuse directed towards any Consumer Care Specialist will not be tolerated and will result in the termination of your conversation with us.
All of our agents are with other customers at the moment. Your chat is in a priority sequence and someone will be with you as soon as possible. Thanks!
Thanks for continuing to hold. An agent will be with you as soon as possible.
Thank you for your continued patience. We're getting more Live Chat requests than usual so it's taking longer to answer. Your chat is still in a priority sequence and will be answered as soon as an agent becomes available.
Thank you so much for your patience; we're sorry for the wait. Your chat is still in a priority sequence and will be answered as soon as possible.
Hi, I'm [REDACTED] from Fido in [REDACTED]. May I have your name please?
A: hi i am antoine, nice to meet you
sorry to use the live chat, but it's not clear to me i can safely
use my phone to call support, because i am in ireland and i'm
worried i'll get charged for the call
F: Thank You Antoine , I see you waited to speak with me today, thank you for your patience. Apart from having to wait, how are you today?
A: i am good thank you
[... delay ...]
A: should i restate my question?
F: Yes please what is the concern you have?
A: i have received an email from fido saying i someone used my phone for roaming
it's in french (which is fine), but that's the gist of it
i am traveling to ireland for a week
i do not want to use fido's services here... i have set the phon eto airplane mode for most of my time here
F: The SMS just says what will be the charges if you used any services.
A: but today i have mistakenly turned that off and did not turn on roaming
well it's not a SMS, it's an email
F: Yes take out the sim and keep it safe.Turun off or On for roaming you cant do it as it is part of plan.
A: wat
F: if you used any service you will be charged if you not used any service you will not be charged.
A: you are saying i need to physically take the SIM out of the phone?
i guess i will have a fun conversation with your management once i return from this trip
not that i can do that now, given that, you know, i nee dto take the sim out of this phone
fun times
F: Yes that is better as most of the customer end up using some kind of service and get charged for roaming.
A: well that is completely outrageous
roaming is off on the phone
i shouldn't get charged for roaming, since roaming is off on the phone
i also don't get why i cannot be clearly told whether i will be charged or not
the message i have received says i will be charged if i use the service
and you seem to say i could accidentally do that easily
can you tell me if i have indeed used service sthat will incur an extra charge?
are incoming text messages free?
F: I understand but it is on you if you used some data SMS or voice mail you can get charged as you used some services.And we cant check anything for now you have to wait for next bill.
and incoming SMS are free rest all service comes under roaming.
That is the reason I suggested take out the sim from phone and keep it safe or always keep the phone or airplane mode.
A: okay
can you confirm whether or not i can call fido by voice for support?
i mean for free
F: So use your Fido sim and call on +1-514-925-4590 on this number it will be free from out side Canada from Fido sim.
A: that is quite counter-intuitive, but i guess i will trust you on that
thank you, i think that will be all
F: Perfect, Again, my name is [REDACTED] and it s been my pleasure to help you today. Thank you for being a part of the Fido family and have a great day!
A: you too
So, in other words:
they can't tell me if I've actually been roaming
they can't tell me how much it's going to cost me
I should remove the SIM card from my phone (!?) or turn on
airplane mode, but the former is safer
I can call Fido support, but not on the usual *611, and
instead on that long-distance-looking phone number, and yes, that
means turning off airplane mode and putting the SIM card in, which
contradicts step 3
Also notice how the phone number from the live chat
(+1-514-925-4590) is different from the one provided in the email
(15149333436). So who knows what would have happened had I called the
latter. The former is mentioned on their contact page.
I guess the next step is to call Fido over the phone and talk to a
manager, which is what the CRTC told me to do in the first place...
I ended up talking with a manager (another 1h phone call) and they
confirmed there is no other package available at Fido for this. At
best they can provide me with a credit to refund me if I use the
roaming by accident, but that's it. The manager also confirmed
that I cannot know if I have actually used any data before reading the
bill, which is issued on the 15th of every month, but only
available... three days later, at which point I'll be back home
anyway.
Fantastic.
OK, you're probably thinking: John, you talk a lot about things like Gopher and personal radios, and now you want to talk about building a reliable network out of USB drives?
Well, yes. In fact, I've already done it.
What is sneakernet?
Normally, sneakernet is a sort of tongue-in-cheek reference to using disconnected storage to transport data or messages. By disconnected storage I mean anything like CD-ROMs, hard drives, SD cards, USB drives, and so forth. There are times when loading up 12TB on a device and driving it across town is just faster and easier than using the Internet for the same. And, sometimes you need to get data to places that have no Internet at all.
Another reason for sneakernet is security. For instance, if your backup system is online, and your systems being backed up are online, then it could become possible for an attacker to destroy both your primary copy of data and your backups. Or, you might use a dedicated computer with no network connection to do GnuPG (GPG) signing.
What about reliable sneakernet, then?
TCP is often considered a reliable protocol. That means that the sending side is generally able to tell if its message was properly received. As with most reliable protocols, we have these components:
After transmitting a piece of data, the sender retains it.
After receiving a piece of data, the receiver sends an acknowledgment (ACK) back to the sender.
Upon receiving the acknowledgment, the sender removes its buffered copy of the data.
If no acknowledgment is received at the sender, it retransmits the data, in case it gets lost in transit.
It reorders any packets that arrive out of order, so that the recipient's data stream is ordered correctly.
Now, a lot of the things I just mentioned for sneakernet are legendarily unreliable. USB drives fail, CD-ROMs get scratched, hard drives get banged up. Think about putting these things in a bicycle bag or airline luggage. Some of them are going to fail.
You might think, well, I'll just copy files to a USB drive instead of moving them, and once I get them onto the destination machine, I'll delete them from the source. Congratulations! You are a human retransmit algorithm! We should be able to automate this!
And we can.
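Done by hand, one round of that looks something like this; the file name and mount point are just illustrations:

# "Transmit": copy rather than move, so the source retains its copy.
cp important.tar /media/usb/
# A crude integrity check before unplugging the drive:
cmp important.tar /media/usb/important.tar
# ...carry the drive to the destination machine and copy the file off...
# Only once you know it arrived intact do you "ACK" by deleting the
# retained copy from the source machine:
rm important.tar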
Enter NNCP
NNCP is one of those things that almost defies explanation. It is a toolkit for building asynchronous networks. It can use as a carrier: a pipe, TCP network connection, a mounted filesystem (specifically intended for cases like this), and much more. It also supports multi-hop asynchronous routing and asynchronous meshing, but these are beyond the scope of this particular article.
NNCP's transports that involve live communication between two hops already had all the hallmarks of being reliable; there was a positive ACK and retransmit. As of version 8.7.0, NNCP's ACKs themselves can also be asynchronous, meaning that every NNCP transport can now be reliable.
Yes, that's right. Your ACKs can flow over tapes and USB drives if you want them to.
I use this for archiving and backups.
If you aren't already familiar with NNCP, you might take a look at my NNCP page. I also have a lot of blog posts about NNCP.
Those pages describe the basics of NNCP: the packet (the unit of transmission in NNCP, which can be tiny or many TB), the end-to-end encryption, and so forth. The new command we will now be interested in is nncp-ack.
The Basic Idea
Here are the basic steps to processing this stuff with NNCP:
1. First, we use nncp-xfer -rx to process incoming packets from the USB (or other media) device. This moves them into the NNCP inbound queue, deleting them from the media device, and verifies the packet integrity.
2. We use nncp-ack -node $NODE to create ACK packets responding to the packets we just loaded into the rx queue. It writes a list of generated ACKs onto fd 4, which we save off for later use.
3. We run nncp-toss -seen to process the incoming queue. The use of -seen causes NNCP to remember the hashes of packets seen before, so a duplicate of an already-seen packet will not be processed twice. This command also processes incoming ACKs for packets we've sent out previously; if they pass verification, the relevant packets are removed from the local machine's tx queue.
4. Now, we use nncp-xfer -keep -tx -mkdir -node $NODE to send outgoing packets to a given node by writing them to a given directory on the media device. -keep causes them to remain in the outgoing queue.
5. Finally, we use the list of generated ACK packets saved off in step 2 above. That list is passed to nncp-rm -node $NODE -pkt < $FILE to remove those specific packets from the outbound queue. The reason is that there will never be an ACK of an ACK packet (that would create an infinite loop), so if we don't delete them in this manner, they would hang around forever.
You can see these steps follow the same basic outline on upstream's nncp-ack page.
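In condensed form, one round for a single remote node looks roughly like this; the node name node2, the list file, and the mount point are assumptions, as is having the nncp binaries on the PATH:

nncp-xfer -rx /media/usb                           # step 1: ingest incoming packets
nncp-ack -node node2 4>> /tmp/acks.node2           # step 2: generate ACKs; the list lands on fd 4
nncp-toss -seen                                    # step 3: process the queue, skipping duplicates
nncp-xfer -keep -tx -mkdir -node node2 /media/usb  # step 4: write outgoing packets, keeping them queued
nncp-rm -node node2 -pkt < /tmp/acks.node2         # step 5: drop the ACK packets from the tx queue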
One thing to keep in mind: if anything else is running nncp-toss, there is a chance of a race condition between steps 1 and 2 (if nncp-toss gets to a packet first, no ACK will be generated for it). This would sort itself out eventually, presumably, as the sender would retransmit and the retransmission would be ACKed later.
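If you want to rule that race out entirely, one option is to serialize everything that tosses. A minimal sketch, assuming any cron job that runs nncp-toss agrees to use the same lock file:

# Run the whole sync under an exclusive lock so no other nncp-toss
# can sneak in between steps 1 and 2.
flock /var/lock/nncp-sync.lock /usr/local/bin/nncp-sync.sh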
Further ideas
NNCP guarantees the integrity of packets, but not ordering between packets; if you need that, you might look into my Filespooler program. It is designed to work with NNCP and can provide ordered processing.
An example script
Here is a script you might try for this sort of thing. It may have more logic than you need; really, you just need the steps above, but hopefully it is clear.
#!/bin/bash

set -eo pipefail

MEDIABASE="/media/$USER"

# The local node name
NODENAME="$(hostname)"

# All nodes.  NODENAME should be in this list.
ALLNODES="node1 node2 node3"

RUNNNCP=""
# If you need to sudo, use something like RUNNNCP="sudo -Hu nncp"

NNCPPATH="/usr/local/nncp/bin"

ACKPATH="$(mktemp -d)"

# Process incoming packets.
#
# Parameters: $1 - the path to scan.  Must contain a directory
# named "nncp".
procrxpath () {
    while [ -n "$1" ]; do
        BASEPATH="$1/nncp"
        shift
        if ! [ -d "$BASEPATH" ]; then
            echo "$BASEPATH doesn't exist; skipping"
            continue
        fi

        echo " *** Incoming: processing $BASEPATH"
        TMPDIR="$(mktemp -d)"

        # This rsync and the one below can help with
        # certain permission issues from weird foreign
        # media.  You could just eliminate it and
        # always use $BASEPATH instead of $TMPDIR below.
        rsync -rt "$BASEPATH/" "$TMPDIR/"

        # You may need these next two lines if using sudo as above.
        # chgrp -R nncp "$TMPDIR"
        # chmod -R g+rwX "$TMPDIR"

        echo "    Running nncp-xfer -rx"
        $RUNNNCP $NNCPPATH/nncp-xfer -progress -rx "$TMPDIR"

        for NODE in $ALLNODES; do
            if [ "$NODE" != "$NODENAME" ]; then
                echo "    Running nncp-ack for $NODE"

                # Now, we generate ACK packets for each node we will
                # process.  nncp-ack writes a list of the created
                # ACK packets to fd 4.  We'll use them later.
                # If using sudo, add -C 5 after $RUNNNCP.
                $RUNNNCP $NNCPPATH/nncp-ack -progress -node "$NODE" \
                    4>> "$ACKPATH/$NODE"
            fi
        done

        rsync --delete -rt "$TMPDIR/" "$BASEPATH/"
        rm -fr "$TMPDIR"
    done
}

proctxpath () {
    while [ -n "$1" ]; do
        BASEPATH="$1/nncp"
        shift
        if ! [ -d "$BASEPATH" ]; then
            echo "$BASEPATH doesn't exist; skipping"
            continue
        fi

        echo " *** Outgoing: processing $BASEPATH"
        TMPDIR="$(mktemp -d)"
        rsync -rt "$BASEPATH/" "$TMPDIR/"
        # You may need these two lines if using sudo:
        # chgrp -R nncp "$TMPDIR"
        # chmod -R g+rwX "$TMPDIR"

        for DESTHOST in $ALLNODES; do
            if [ "$DESTHOST" = "$NODENAME" ]; then
                continue
            fi

            # Copy outgoing packets to this node, but keep them in the
            # outgoing queue with -keep.
            $RUNNNCP $NNCPPATH/nncp-xfer -keep -tx -mkdir -node "$DESTHOST" -progress "$TMPDIR"

            # Here is the key: that list of ACK packets we made above;
            # now we delete them.  There will never be an ACK for an ACK,
            # so they'd keep sending forever if we didn't do this.
            if [ -f "$ACKPATH/$DESTHOST" ]; then
                echo "nncp-rm for node $DESTHOST"
                $RUNNNCP $NNCPPATH/nncp-rm -debug -node "$DESTHOST" -pkt < "$ACKPATH/$DESTHOST"
            fi
        done

        rsync --delete -rt "$TMPDIR/" "$BASEPATH/"
        rm -rf "$TMPDIR"

        # We only want to write stuff once.
        return 0
    done
}

procrxpath "$MEDIABASE"/*

echo " *** Initial tossing..."

# We make sure to use -seen to rule out duplicates.
$RUNNNCP $NNCPPATH/nncp-toss -progress -seen

proctxpath "$MEDIABASE"/*

echo "You can unmount devices now."
echo "Done."
After I wrote about hledger, I got some good feedback, both from a friend
in person and on Twitter.
My in-person friend asked, frankly, whether I really try to manage money
like this, tracking every single expense. This affirms my suspicion that
many people don't, and that it perhaps isn't essential to do so.
Combined with the details below, 3/4 of the way through my experiment with
using hledger, I'm not convinced that it has been a good idea.
I'm quoting my Twitter feedback here in order to respond. The context is
handling when I have used the "wrong" card to pay for something: a card
affiliated with my family expenses for something personal, or vice versa. With
double-entry book-keeping, and one pair of postings, the destination
account can record either the expense category or the money owed, but not both:
When you accidentally use the family CC for personal expenses, credit the
account "family:liabilities:creditcard:jon" instead of
"family:liabilities:creditcard". That'll allow you to track w/ 2 postings.
This is an interesting idea: create a sub-account underneath the credit card,
and I would have a separate balance representing the money I owed. Before:
$ hledger bal -t
                  -3  family:liabilities:creditcard
                   3  jon:expenses:coffee
After:
$ hledger bal -t
                  -3  family:liabilities:creditcard
                  -3    jon
                   3  jon:expenses:coffee
Great. However, what process would clear the balance on that sub-account? In
practice, I don't make a separate, explicit payment to the credit card from
my personal accounts. It's paid off in full by direct debit from the family
shared account. Instead, such dues are accumulated and settled with one-off
bank transfers, now and then.
Since the sub-account is still part of the credit card hierarchy, I can't
just use a set of virtual postings to consolidate that value with other
liabilities, or cover it. Any transaction in there which did not correspond
to a real transaction on the credit card would make the balance drift away
from the real-world credit statements. The only way I could see this working
would be if the direct debit that settles the credit card was artificially
split to clear the sub-account, and then the amount owed would be lost.
https://twitter.com/pranesh/status/1516819846431789058:
A "receivable" account would function like the "dues" accounts I described
in hledger (except "receivable" is an established account type in
double-entry book-keeping). Here I think Pranesh is proposing using these
two accounts in addition to the others on a posting. E.g.
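Something like this, with illustrative account names:

2022-04-01 coffee
    family:liabilities:creditcard    -3
    jon:expenses:coffee               3
    family:receivable:jon             3
    jon:liabilities:family           -3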
This balances, and we end up with two other accounts, which are tracking the
exact same thing. I only owe 3, but if you didn't know that the accounts were
"views" onto the same thing, you could mistakenly think I owed 6.
I can't see the advantage of this over just using a virtual, unbalanced posting.
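For comparison, the virtual version needs only one extra posting; in hledger, the parentheses mark it as virtual and unbalanced (the account name is again illustrative):

2022-04-01 coffee
    family:liabilities:creditcard    -3
    jon:expenses:coffee               3
    (family:dues:jon)                 3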
Dues, Liabilities
I'd invented accounts called "dues" to track moneys owed. The more correct
term for this in accounting parlance would be "accounts receivable", as in
one of the examples above. I could instead be tracking moneys I owe; that is
a classic liability. Liabilities have negative balances.
jon:liabilities:family    -3
This means I owe the family 3.
Liability accounts like that are essentially "dues" accounts with the sign
flipped. A positive balance in a liability account is a counter-intuitive way
of describing moneys owed to me, rather than by me. And, reviewing a lot of
the coding I did this year, I've got myself hopelessly confused with the
signs, and made lots of errors.
Crucially, double-entry has not protected me from making those mistakes. Of
course, I'm circumventing it by using unbalanced virtual postings in many
cases (although I was not consistent about where I did this), but even if I
used a pair of accounts, as in the last example above, I could still screw
it up.
FTP master
This month I accepted 420 and rejected 44 packages. The overall number of packages that got accepted was 422.
I am sad to write the following lines, but unfortunately there are people who would rather take advantage of others than do proper maintenance of their packages.
So, in order to find time slots for as many packages in NEW as possible, I no longer write a debian/copyright for others. I know it is a boring task to collect the copyright information, but our policy still requires it. Of course nobody is perfect, and one or another license or copyright holder can certainly be overlooked. Luckily most contributors maintain their debian/copyright very thoroughly, with terrific results.
On the other hand, some contributors upload only some crap and demand that I list exactly what is missing. I am no longer willing to do this. I am going to stop processing after I find a few missing things, and reject the package. When I repeatedly see uploads containing only fixes for the things I pointed out, I will process such a package only after all others in NEW are done.
Debian LTS
This was my ninety-seventh month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
This month my overall workload was 35.75h. Unfortunately Stretch LTS has moved to Stretch ELTS, and Buster LTS was not yet opened in July, so I think this is the first month in which I did not work all of my assigned hours.
Besides things on security-master, I only worked 20h, on moving the LTS documentation to its new destination. At the moment the documentation is spread over several locations; as searching across all of them is not possible, it shall be collected in one place.
Debian ELTS
This month was the forty-eighth ELTS month.
During my allocated time I uploaded:
[ELA-643-1] for ncurses (5.9+20140913-1+deb8u4, 6.0+20161126-1+deb9u3)
[ELA-655-1] for libhttp-daemon-perl (6.01-1+deb8u1, 6.01-1+deb9u1)
I also started to work on mod-wsgi; my patch has already been approved by the maintainer. Now I am waiting for the security team to decide whether it will be uploaded as a DSA or via PU.
Last but not least I did some days of frontdesk duties.
Other stuff
This month I uploaded new upstream versions or improved packaging of: