(Quoted from my other blog, since a new OS might be interesting for many and this is published in separate planets.) ALP - The Adaptable Linux Platform - is a new operating system from SUSE for running containerized and virtualized workloads. It is in an early prototype phase, but the development is done completely in the open, so it's easy to jump in and try it.

For this try-out, I used the latest encrypted build as of writing, 22.1, from the ALP images. I imported it into virt-manager as a Generic Linux 2022 image, using UEFI instead of BIOS, added a TPM device (which I'm interested in otherwise) and referred to an Ignition JSON file in the XML config in virt-manager.

The Ignition part is pretty much fully thanks to Paolo Stivanin, who studied its secrets before me. But here it goes - and this is required for password login in Cockpit to work, in addition to SSH key based login to the VM from the host - first, create a config.ign file:
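The config block itself did not survive the formatting here, but a minimal config.ign along these lines should work (the hash and key values are placeholders; the structure follows the generic Ignition 3.x spec rather than anything ALP-specific):

{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      {
        "name": "root",
        "passwordHash": "$6$...",
        "sshAuthorizedKeys": [
          "ssh-ed25519 AAAA... user@host"
        ]
      }
    ]
  }
}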
where the password SHA512 hash can be obtained using openssl passwd -6, and the ssh key is your public SSH key.

That file is put to e.g. /tmp and referred to in virt-manager's XML like follows:
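The XML snippet is missing from this copy; one way that should work is passing the file through QEMU's fw_cfg, which assumes the libvirt QEMU XML namespace is enabled on the domain element (the config.ign path is the one used above):

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-fw_cfg'/>
    <qemu:arg value='name=opt/com.coreos/config,file=/tmp/config.ign'/>
  </qemu:commandline>
</domain>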
Now we can boot up the VM and ssh in - or you could log in directly too, but it's easier to copy-paste commands when using ssh.

Inside the VM, we can follow the ALP documentation to install and start Cockpit:
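The exact commands did not survive here; reconstructed from the ALP documentation, it is roughly the transactional-update pattern used on SUSE's immutable systems (the package or pattern name may differ between ALP builds):

# transactional-update pkg install cockpit
# reboot
# systemctl enable --now cockpit.socket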
Check the VM's IP address with ip a, and open IP:9090 in your host's browser. Log in with root / your password and you shall get the front page, and many other pages where you can manage your ALP deployment via the browser.

All in all, ALP is in its early phases, but I'm really happy there's up-to-date documentation provided so people can start experimenting with it whenever they want. The images from the linked directory should be fairly good, and test automation with openQA has been started as well. You can try out the other example workloads that are available just as well.
From November 2nd to 4th, 2022, the 19th edition of Latinoware - the Latin American Congress of Free Software and Open Technologies - took place in Foz do Iguaçu. After 2 years of running online due to the COVID-19 pandemic, the event was back in person, and we felt that the Debian Brasil community should be there. Our last participation in Latinoware was in 2016.

The Latinoware organization provided the Debian Brasil community with a booth so that we could be in contact with the people visiting the open exhibition area and promote the Debian project.

During the 3 days of the event, the booth was staffed by me (Paulo Henrique Santana) as a Debian Developer and by Leonardo Rodrigues as a Debian contributor. Unfortunately, Daniel Lenharo had a last-minute setback and could not come to Foz do Iguaçu (we missed you there!).

Several people visited the booth, and the more novice ones (mainly students) who did not know Debian asked what our group was about; we explained various concepts such as what Free Software is, what a GNU/Linux distribution is, and Debian itself. We also welcomed people from the Brazilian Free Software community and from other Latin American countries who already used a GNU/Linux distribution and, of course, many people who already used Debian. We had some special visits, including Jon "maddog" Hall, Debian Developer Emeritus Otávio Salvador, Debian Developer Eriberto Mota, and Debian Maintainers Guilherme de Paula Segundo and Paulo Kretcheu.

Photo, from left to right: Leonardo, Paulo, Eriberto and Otávio.

Photo, from left to right: Paulo, Fabian (from Argentina) and Leonardo.

Besides lots of conversation, we handed out Debian stickers that had been produced a few months earlier with Debian's sponsorship to be distributed at DebConf22 (and which were left over), and we sold several Debian t-shirts produced by the Curitiba Livre community.

We also had 3 talks included in the official Latinoware schedule. I gave the talks "How to become a Debian contributor by doing translations" and "How the SysAdmins of a global company use Debian". Leonardo gave the talk "Advantages of Open Source telephony in companies".

Photo: Paulo giving his talk.

We thank the Latinoware organization for welcoming the Debian community once again and for kindly providing the space for our participation, and we congratulate everyone involved in the organization on the success of this important event for our community. We hope to be present again in 2023.

We also thank Jonathan Carter for approving Debian's financial support for our participation in Latinoware.
If you've done anything in the Kubernetes space in recent years, you've most likely come across the words "service mesh". It's backed by a set of mature technologies that provides cross-cutting networking, security and infrastructure capabilities to workloads running in Kubernetes, in a manner that is transparent to the actual workload. This abstraction enables application developers to avoid building in otherwise sophisticated capabilities for networking, routing, circuit-breaking and security, and to simply rely on the services offered by the service mesh.

In this post, I'll be covering Linkerd, which is an alternative to Istio. It went through a significant rewrite when it transitioned from the JVM to a Go-based control plane and a Rust-based data plane a few years back, and it is now a part of the CNCF and is backed by Buoyant. It has proven itself widely for use in production workloads and has a healthy community and release cadence.

It achieves this with a sidecar container that communicates with a Linkerd control plane, which allows central management of policy, telemetry, mutual TLS, traffic routing, shaping, retries, load balancing, circuit-breaking and other cross-cutting concerns before the traffic hits the container. This makes implementing the application services much simpler, as these concerns are managed by the container orchestrator and the service mesh. I covered Istio in a prior post a few years back, and much of that content is still applicable here, if you'd like to have a look.

Here are the broad architectural components of Linkerd: the components are separated into the control plane and the data plane.

The control plane components live in their own namespace and consist of a controller that the Linkerd CLI interacts with via the Kubernetes API. The destination service is used for service discovery, TLS identity, policy on access control for inter-service communication, and service profile information on routing, retries and timeouts. The identity service acts as the Certificate Authority, which responds to Certificate Signing Requests (CSRs) from proxies for initialization and for service-to-service encrypted traffic. The proxy injector is an admission webhook that automatically injects the Linkerd proxy sidecar and the init container into a pod when the linkerd.io/inject: enabled annotation is present on the namespace or workload.

On the data plane side are two components. First, the init container, which is responsible for automatically forwarding incoming and outgoing traffic through the Linkerd proxy via iptables rules. Second, the Linkerd proxy, a lightweight micro-proxy written in Rust, which is the data plane itself.

I will be walking you through the setup of Linkerd (2.12.2 at the time of writing) on a Kubernetes cluster. Let's see what's running on the cluster currently. This assumes you have a cluster running and kubectl installed and available on the PATH.
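The exact commands were lost in the formatting here; per the official Linkerd docs, checking the baseline state and installing the CLI looks like this (the installer script places the CLI under ~/.linkerd2/bin):

$ kubectl get pods -A
$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh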
On most systems, this should be sufficient to set up the CLI. You may need to restart your terminal to load the updated paths. If you have a non-standard configuration and linkerd is not found after the installation, add the following to your PATH to be able to find the CLI:
export PATH=$PATH:~/.linkerd2/bin/
At this point, checking the version would give you the following:
$ linkerd version
Client version: stable-2.12.2
Server version: unavailable
Setting up the Linkerd Control Plane

Before installing Linkerd on the cluster, run the following step to check the cluster for prerequisites:
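That check is the CLI's pre-flight command (this is what produces the output below):

$ linkerd check --pre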
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create non-namespaced resources
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ can create Secrets
√ can read Secrets
√ can read extension-apiserver-authentication configmap
√ no clock skew detected

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

Status check results are √
All the prerequisites appear to be good right now, so installation can proceed.

The first step of the installation is to set up the Custom Resource Definitions (CRDs) that Linkerd requires. The linkerd CLI only prints the resource YAMLs to standard output and does not create them directly in Kubernetes, so you need to pipe the output to kubectl apply to create the resources in the cluster that you're working with.
$ linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run linkerd install | kubectl apply -f - to install the control plane.
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
Next, install the Linkerd control plane components in the same manner, this time without the --crds switch:
$ linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created
Kubernetes will start spinning up the control plane components; you should see them when you list the pods, and you can then run a full health check, which produces the output below:
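A reconstruction of those commands (the original snippet was lost in formatting):

$ kubectl get pods -n linkerd
$ linkerd check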
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all pods
√ cluster networks contains all services

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match

linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match

Status check results are √
Everything looks good.

Setting up the Viz Extension

At this point, the required components for the service mesh are set up, but let's also install the viz extension, which provides good visualization capabilities that will come in handy later. Once again, linkerd uses the same pattern for installing the extension.
$ linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
server.policy.linkerd.io/admin created
authorizationpolicy.policy.linkerd.io/admin created
networkauthentication.policy.linkerd.io/kubelet created
server.policy.linkerd.io/proxy-admin created
authorizationpolicy.policy.linkerd.io/proxy-admin created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created
A few seconds later, you should see the following in your pod list:
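That is, listing the pods in the extension's own namespace (reconstructed, since the original listing did not survive):

$ kubectl get pods -n linkerd-viz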
The viz components live in the linkerd-viz namespace. You can now check out the viz dashboard:
$ linkerd viz dashboard
Linkerd dashboard available at:
http://localhost:50750
Grafana dashboard available at:
http://localhost:50750/grafana
Opening Linkerd dashboard in the default browser
Opening in existing browser session.
The Meshed column indicates the workloads that are currently integrated with the Linkerd control plane. As you can see, there are no application deployments running right now.

Injecting the Linkerd Data Plane components

There are two ways to integrate Linkerd with the application containers:

1. by manually injecting the Linkerd data plane components
2. by instructing Kubernetes to automatically inject the data plane components

Inject Linkerd data plane manually

Let's try the first option. Below is a simple nginx app that I will deploy into the cluster:
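The original manifest did not survive the formatting here, so below is a minimal stand-in deployment of the same shape (the name, labels and image tag are my choices):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.23
        ports:
        - containerPort: 80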
Back in the viz dashboard, I do see the workload deployed, but it isn't currently communicating with the Linkerd control plane, so it doesn't show any metrics, and the Meshed count is 0. Looking at the Pod's deployment YAML, I can see that it only includes the nginx container.
Let's directly inject the linkerd data plane into this running container. We do this by retrieving the YAML of the deployment, piping it to the linkerd CLI to inject the necessary components, and then piping the changed resources to kubectl apply.
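With the stand-in deployment above, that round-trip is the documented one-liner:

$ kubectl get deployment nginx-app -o yaml | linkerd inject - | kubectl apply -f -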
Back in the viz dashboard, the workload is now integrated into the Linkerd control plane. Looking at the updated Pod definition, we see a number of changes that linkerd has injected to let the workload integrate with the control plane. Let's have a look:
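The injected Pod spec was lost in this copy; abridged, the shape of what the injector adds looks like this (image tags are illustrative):

spec:
  initContainers:
  - name: linkerd-init        # rewrites iptables so traffic flows through the proxy
    image: cr.l5d.io/linkerd/proxy-init:...
  containers:
  - name: linkerd-proxy       # the Rust micro-proxy, i.e. the data plane
    image: cr.l5d.io/linkerd/proxy:stable-2.12.2
  - name: nginx               # the original application container
    image: nginx:1.23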
At this point, the necessary components are set up for you to explore Linkerd further. You can also try out the jaeger and multicluster extensions, similar to the process of installing and using the viz extension, and try out their capabilities.

Inject Linkerd data plane automatically

In this approach, we shall see how to instruct Kubernetes to automatically inject the Linkerd data plane into workloads at deployment time. We can achieve this by adding the linkerd.io/inject annotation to the deployment descriptor, which causes the proxy injector admission hook to execute and inject linkerd data plane components automatically at the time of deployment, as sketched below.
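A sketch of the annotation on a deployment's pod template (abridged to the relevant part):

spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled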
This annotation can also be specified at the namespace level to affect all the workloads within the namespace. Note that any resources created before the annotation was added to the namespace will require a rollout restart to trigger the injection of the Linkerd components.

Uninstalling Linkerd

Now that we have walked through the installation and setup process of Linkerd, let's also cover how to remove it from the infrastructure and go back to the state prior to its installation. The first step would be to remove extensions, such as viz.
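Removal mirrors installation: the CLI renders the resources and you pipe them to kubectl delete. Per the official docs:

$ linkerd viz uninstall | kubectl delete -f -
$ linkerd uninstall | kubectl delete -f -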
Two weeks ago, I had the chance to go see Cory Doctorow at my local independent
bookstore, in Montréal. He was there to present his latest essay, co-written
with Rebecca Giblin [1]. Titled Chokepoint Capitalism: How Big Tech and
Big Content Captured Creative Labor Markets and How We'll Win Them Back, it
focuses on the impact of monopolies and monopsonies (more on this later) on
creative workers.
The book is divided in two main parts:
Part one, "Culture has been captured" (chapters 1 to 11), is a series of
case studies that focus on different examples of market failure. The specific
sectors analysed are the book market, the news media, the music industry,
Hollywood, the mobile apps industry and the online video platforms.
Part two, "Braking anticompetitive flywheels" (chapters 12 to 19), looks at
different solutions to try to fix these failures.
Although Doctorow is known for his strong political stances, I have to say I'm
quite surprised by the quality of the research Giblin and he did for this book.
They both show a pretty advanced understanding of the market dynamics they look
at, and even though most of the solutions they propose aren't new or
groundbreaking, they manage to be convincing and clear.
That is to say, you certainly don't need to be an economist to understand or
enjoy this book :)
As I have mentioned before, the book heavily criticises monopolies, but also
monopsonies: a market structure that has only one buyer (instead of one
seller). I find this quite interesting, as whereas people are often familiar
with the concept of monopolies, monopsonies are frequently overlooked.
The classic example of a monopsony is a labor market with a single employer:
there is a multitude of workers trying to sell their labor power, but in the
end, working conditions are dictated by the sole employer, who gets to decide
who has a job and who hasn't. Mining towns are good real-world examples of
monopsonies.
In the book, the authors argue most of the contemporary work produced by
creative workers (especially musicians and writers) is sold to monopsonies and
oligopsonies, like Amazon [2] or major music labels. This creates a
situation where the consumers are less directly affected by the lack of
competition in the market (they often get better prices), but where creators
have an increasingly hard time making ends meet. Not only this, but natural
monopsonies [3] are relatively rare, making the case for breaking the
existing ones even stronger.
Apart from the evident need to actually start applying (the quite good)
antitrust laws in the USA, some of the other solutions put forward are:
Transparency Rights: giving creative workers a way to audit the companies
that sell their work and make sure they are paid what they are due.
Collective Action
Time Limits on Copyright Contracts: making sure creators that sell their
copyrights to publishers or labels can get them back after a reasonable
period of time.
Radical Interoperability: forcing tech giants to make their walled-gardens
interoperable.
Minimum Wages for Creative Work: enforcing minimum legal rates for workers
in certain domains, like what is already done for screenplays in the US by
members of the Writers Guild of America.
Collective Ownership
Overall, I found this book quite enjoyable and well written. Since I am not a
creative worker myself and don't experience first-hand the hardships presented
in the book, it was the occasion for me to delve more deeply into this topic.
Chances are I'll reuse some of the exposés in my classes too.
[1] Professor at the Melbourne Law School and Director of the
Intellectual Property Research Institute of Australia, amongst other things.
More on her here.
[2] Amazon owns more than 50% of the US physical book retail market and
has an even higher market share for ebooks and audiobooks (via Audible). Not
only this, but with the decline of the physical book market, audiobooks are
an increasingly important source of revenue for authors.
[3] Natural monopolies happen when it does not make economic sense for
multiple enterprises to compete in a market. Critical infrastructures, like
water supply or electricity, make for good examples of natural monopolies. It
simply wouldn't be efficient to have 10 separate electrical cables connecting
your house to 10 separate electric grids. In my opinion, such monopolies are
acceptable (and even desirable), as long as they are collectively owned,
either by the State or by local entities (municipalities, non-profits, etc.).
So this week I recycled a talk I'd given in the past, about how even using
extremely simple parsers allows a lot of useful static-analysis to be done,
for specific niche use-cases.
This included examples of scanning comments above classes to ensure they
referred to the appropriate object, ensuring that specific function
calls always included a specific (optional) parameter, etc.
Nothing too complex, but I figured I'd give a new example this time, and
I remembered I'd recently written a bunch of functions for an interpreter
which I'd ordered quite deliberately.
Assume you're writing a BASIC interpreter: you need to implement a bunch of
built-in maths functions such as SIN, COS, TAN, then some string-related
functions LEFT$, RIGHT$, MID$, etc.
When it comes to ordering there are a couple of approaches:
Stick them all in one package:
builtins/builtins.go
Create a package and group them:
builtins/maths.go
builtins/string.go
.. etc
Personal preference probably dictates the choice you make, but either way I think it would be rational and obvious that you'd put the functions in alphabetical order:
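For illustration, a sketch of what one such grouped file might look like (a hypothetical builtins/maths.go; the actual interpreter's builtin signatures will differ):

// builtins/maths.go
package builtins

import "math"

// Functions are deliberately kept in alphabetical order.
func ABS(x float64) float64 { return math.Abs(x) }
func COS(x float64) float64 { return math.Cos(x) }
func SIN(x float64) float64 { return math.Sin(x) }
func TAN(x float64) float64 { return math.Tan(x) }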
I did that myself, and I wrote a perl-script to just parse the file using a simple regexp "^func\s+([^(]+)\(" but then I figured this was a good time to write a real static-analysis tool.
The golang environment is full of trivial little linters for various purposes, and the standard "go vet .." driver makes it easy to invoke them. Realizing that mine was going to be driven in the same way, it was obvious I'd write something called "alphaVet".
So anyway, half written for a talk, half-written because of the name:
Golang linter that reports failures if functions aren't in alphabetical order
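The real alphaVet source lives in the repository; purely as a flavour of how such a checker hangs off the go/analysis framework, here is a minimal sketch (structure and messages are illustrative, not the actual alphaVet code):

package main

import (
	"go/ast"

	"golang.org/x/tools/go/analysis"
	"golang.org/x/tools/go/analysis/singlechecker"
)

var analyzer = &analysis.Analyzer{
	Name: "alphavet",
	Doc:  "reports top-level functions that are not in alphabetical order",
	Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	for _, file := range pass.Files {
		prev := ""
		for _, decl := range file.Decls {
			fn, ok := decl.(*ast.FuncDecl)
			if !ok || fn.Recv != nil {
				continue // only plain functions, not methods
			}
			if prev != "" && fn.Name.Name < prev {
				pass.Reportf(fn.Pos(), "function %s is out of order (follows %s)",
					fn.Name.Name, prev)
			}
			prev = fn.Name.Name
		}
	}
	return nil, nil
}

func main() { singlechecker.Main(analyzer) }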
I'm trying to replace my old OpenPGP key with a new one. The old key wasn't compromised or lost or anything
bad. It is still valid, but I plan to get rid of it soon. It was created in 2013.
The new key's fingerprint is: AA66280D4EF0BFCC6BFC2104DA5ECB231C8F04C4
I plan to use the new key for things like encrypted emails, uploads to the Debian archive, and more. Also,
the new key includes an identity with a newer personal email address I plan to use soon: arturo.bg@arturo.bg
The new key has been uploaded to some public keyservers.
If you would like to sign the new key, please follow the steps in the Debian wiki.
If you are curious about what that long code block contains, check this https://cirw.in/gpg-decoder/
For the record, the old key fingerprint is: DD9861AB23DC3333892E07A968E713981D1515F8
Cheers!
Our local Debian user group gathered on Sunday October 30th to chat, work on
Debian and do other, non-Debian related hacking :) This time around, we met at
EfficiOS's offices. As you can see from the following
picture, it's a great place and the view they have is pretty awesome. Many
thanks for hosting us!
This was our 4th meeting this year and once again, attendance was great: 10
people showed up to work on various things.
Following our bi-monthly schedule, our next meeting should be in December, but
I'm not sure it'll happen. December can be a busy month here and I will have to
poke our mailing list to see if people have the spoons for an event.
This time around, I was able to get a rough log of the Debian work people did:
pollo:
Shantaram
I know I have been quite behind in reviews of books, but then that's life. The first one is actually not so much a shocker as a pleasant surprise. So, a bit of background before I share the news. If you have been living under a rock: about 10-12 years ago a book called Shantaram was released. While the book is said to have been released in 2003/4, I got it in my hands around 2008/09 or somewhere around there. The book is like a good meal, a buffet. To share the synopsis: Lin, a 20-something Australian guy, gets involved with a girl, she encourages him to get into heroin, and he becomes a heroin user. And drugs, especially hard drugs, need constant replenishment; it is a chemical thing. So, to fund those cravings, he starts to steal, rising to robbing a bank, and while getting away he shoots a cop, who dies. Whether he surrenders or is caught is unclear, but he is tortured in jail. So one day he escapes from prison, lands up at the home of somebody who owes him a favor, gets some money, gets a fake passport and lands up in Mumbai/Bombay, as it was then known. This is from where the actual story starts: how a 6-foot-something Australian guy, relying on his street smartness and know-how, is transformed from Lin into Shantaram. Now what I have shared is perhaps just 5% of the synopsis; as I said, the real story starts here.
Now the good news: last week 4 episodes of Shantaram were screened by Apple TV. Interestingly, I have seen quite a number of people turning up to buy or get this book and also sharing it on Goodreads. There seem to have been some differences from the book to the TV series. I'm relying on 10-12 year old memories, but IIRC Khaderbhai, one of the main characters who sort of takes Lin/Shantaram under his wing, is an Indian. In the series, he is western, or at least looks western/Middle Eastern to me. Also, they have tried to reproduce 1980s Mumbai/Bombay, but I don't know how accurate they were. My impression of that city, from a couple of visits at that point in time, was that there were still more tongas (horse-drawn carriages), the occasional two-wheeler and not many three-wheelers. It was also one of the more turbulent times, as a lot of agitation for worker rights was happening around then, along with a lot of industrial action. Later that led to a lot of manufacturing closing down in Bombay, and the city became more commercial. It would be interesting to know whether they shot it in actual India or just built a set somewhere in Australia, where it might well have been shot. The chawl of the book needs a bit of arid land, and Australia has lots of it.
It is also interesting because this was a project that had a who's who interested in it for a long time, but somehow none of them was able to bring it to fruition; the project also seems to have a largely Australian cast, as well as second-generation Indians who grew up in Australia. To name names: Amitabh Bacchan, Johnny Depp and Russel Crowe each wanted to make it into a feature film. In retrospect, it is good it was not made into a movie; otherwise they would have had to cut a lot of material, and that perhaps wouldn't have been sufficient. Making it into a web series made sure they could have multiple seasons if people like it. There is a lot between now and 12 episodes, too much to even guess where it will leave you. So, if you have not read the book and have some holidays coming up, I can recommend it. The writing, IIRC, is easy and just flows. There is a bit of action, but much more nuance in the book, while the web series is naturally more about action. There is also quite a bit of philosophy between him and Khaderbhai, and while the series touches upon it, it doesn't do it justice; but then again, it is being made commercially.
Read the book, see the series and share your thoughts on what you think. It is possible that the series might go up or down, but I am sharing from where I see it now; I may do another review at the end of the season, depending on where they leave it and my impressions.
Update: A slight update from the last blog post. It seems Rishi Sunak will be made PM of the UK. With Hunt as chancellor and Rishi Sunak as PM, Austerity 2.0 seems complete. There have been numerous articles which share how austerity gives rise to fascism and vice-versa. History gives a lot of lessons about this. In Germany, when the economy was not good, it was all blamed on the Jews for a number of years. This was the reason for the rise of Hitler, and while it grew only a bit at first, propaganda by him and his loyalists did the rest. And we know about, and have read about, the Holocaust. Today quite a few Germans deny it or deny parts of it, but that's how misinformation spreads. Also, Hitler is now looked at more as an aberration rather than something to do with the German soul. I am not going to say more, as there is still lots to share, and that perhaps requires its own blog post to do it justice.
The Pyramid by Henning Mankell
I had actually wanted to review this book first, but then the bomb called Shantaram appeared and I had to post it above. I had read two or three books before it, but most of them were about multiple beheadings and serial killers. Enough to put anybody into depression. I do not know if modern crime fiction needs to show crime and desperation to such a level. This is why I, like many others, loved and continue to love Sherlock Holmes: most of the stories were not about gross violence but rather a homage to the art of deduction, which seems to be pretty much missing in modern crime thrillers in favor of grotesque stuff.
Like a breath of fresh air after those, I read/am reading The Pyramid by Henning Mankell. The book is about a character created by Monsieur Henning Mankell named Kurt Wallander. I am aware of the TV series called Wallander but haven't yet seen it. The book starts with Wallander as a beat cop of around age 20, on his first case. He is ambitious, wants to become a detective, and has a narrow escape with death. I won't go much into it, as it basically gives you an idea of the character, how he thinks and what he does. He is more intuitive by nature and somewhat of a loner. Probably most detectives IRL are; dunno, I have no clue. At least in the literary world it makes sense; in the real world there would be much irony for sure. This is speculation on my part, who knows.
Back to the book, though. The book has 5 stories, a sort of prequel one could say, though that is also not entirely true. The first case starts in 1969, when he is just a beat cop. It is a kind of prequel and a kind of anthology, covering from his first case to the 1990s, where he is sort of ending his career.
Before I start sharing about the stories in the book: I found the foreword also quite interesting. It asks questions about the interplay of the role of the welfare state and Swedish democracy. Incidentally, I did watch a couple of videos about the sort of mixed political representation that happens in Sweden. It uses what is known as proportional representation. Ironically, Sweden made a turn to the far right this election season. The book was originally in Swedish, and the stories were translated into English by Ebba Segerberg and Laurie Thompson.
While all the stories are interesting, I will share the last three, or at least pose their intriguing questions. Of course, to answer them you would need to read the book.
So, the last three stories are the ones I found most intriguing.
The first one is titled "Man on the Beach". A gentleman goes to one of the beaches, a sort of lonely beach, hails a taxi, and while returning suddenly dies. The taxi driver, showing good presence of mind, takes him to hospital, where the gentleman is declared dead on arrival. Unlike in India, he doesn't run away but goes to the cafeteria and waits there for the cops to arrive and take his statement. Now the man is in his early 40s and looks to be fit. Upon searching his pockets, he is found to be relatively well-off, and later it turns out he owns a couple of shops. So here are the questions:
What was the man doing on the beach? In summer that beach is somewhat popular, but at other times not so much, so what was he doing there?
How did he die? Was it a simple heart attack or something more? If he had been drugged or something, then when and how?
These and more questions can be answered by reading the story "Man on the Beach".
2. "The Death of a Photographer". Kurt lives in a small town where almost all the residents have been served one way or the other by the town photographer. The man was polite and had worked for something like 40-odd years before he is killed/murdered, late at night. So here come the questions:
a. The shop doesn't even stock any cameras, and his cash box still has cash. Further investigation reveals it is about his average takings for the day. So if it's not for cash, then what is the motive?
b. The body was discovered by his cleaning lady, who has worked for him for almost 20 years, 3 days a week, and has her own set of keys to come and clean the office. Did she give the keys to someone? If yes, why?
c. Even after investigation, there is no scandal about the man: no other woman, nor any vices like gambling that could rack up loans. Nobody seems to really know him, and yet everyone took him for granted until he died. The whole thing appears to be quite strange. Again, the answers lie in the book.
3. "The Pyramid". Kurt is sleeping one night when the telephone rings. The scene starts with a Piper Cherokee, a single-piston aircraft, flying low and dropping something somewhere, or getting somebody from, the coast of Sweden. It turns and, after a while, crashes. Kurt is called in to investigate. It turns out the plane was supposed to have been destroyed. In the crash, both the pilot and the passenger are in pieces, so only dental records can prove who they are. The same day, or a day or two later: two seemingly ordinary, somewhat elderly women, spinsters by all accounts, live above the shop where they sell buttons and all kinds of sewing needs for the town. They seem middle-class. Later, their charred bodies are found :(. So here come the questions:
a. Did the plane drop something, or pick something or somebody up? The Cherokee is a small plane, so it could have landed at some airfield, or, if a place was somehow marked, something could be dropped or picked up without actually landing.
b. The firefighters suspect arson, started at multiple places with the use of petrol. The question is: why would somebody want to do that? The sisters don't seem to be wealthy, and practically everybody has bought stuff from them. They weren't popular, but they weren't unpopular either.
c. Are the two crimes connected or unconnected? If connected, then how?
d. The most important question: why is the story titled "The Pyramid"? Why does the author use the name Pyramid; does he mean the same, or the original thing? He could have named it "Triangle". Again, answers to all of the above can be found in the book.
One thing I also became very aware of while reading the book is that it is difficult to understand people's behavior and what they do, and this is without even any criminality involved. Let's say, for example, that I die in some mysterious circumstances; the chances of the police reconstructing my actions in my last days would be limited, and that is with me having hearing loss. This probably has more to do with how our minds are wired. And most people I know are much more privacy conscious/aware than I am.
Japan's Hikikomori
Japan has been a curious country. It was more or less a colonizer and somewhat of a feared power till it dragged the U.S. unnecessarily into World War 2. The result of the two atom bombs and the restitution meant that Japan had to build itself up again from the ground. It is also in a seismically unstable place, with frequent earthquakes, although the buildings are hardened/balanced to make sure that vibrations don't tear them apart. I saw a Natgeo documentary years ago that explains all that. Apart from that, Japan was helped by the Americans, and there was good kinship between them till the 1980s, when it signed the Plaza Accord, which enhanced asset price bubbles that eventually burst. Something from which they are smarting even today. Japan has a constitutional monarchy; a history lesson on why it exists even today can be found here. The asset price bubbles of the 1980s, more than 50 percent of the population on zero-hour contracts, and the rest tending to suffer from overwork: there is a term, Karoshi, that explains it all. An Indian pig-pen would be two, two and a half times larger than a typical Japanese home. Most Japanese live in micro-apartments called "konbachiku". All of the above stresses mean that lately many young Japanese people have become Hikikomori. Bloomberg featured a story about this a couple of years back. I came to know about it because many Indians are given the idea of Japan being a successful country without knowing the ills and issues it faces. Even there, most women get the short end of the stick, i.e. even if they manage to find jobs, it tends to be back-breaking menial work. The employment statistics of Japan's internal ministry tell their own story.
If you look at the data above, it seems that between 2002 and 2019 the share of zero-hour contracts increased while regular work decreased. This also means that those at the bottom of the ladder can no longer afford a home. There is a video called Lost in Manboo that went viral a few years ago. It is a perfect storm of factors. Add to that the Fukushima nuclear incident, about which I shared a few years ago. The workers get blamed, but all design decisions are taken by the management, as has been shown in numerous movies, documentaries etc. Interestingly, and somewhat ironically, the line workers knew the correct things to do and the correct decisions to take, unlike the management. The shut-ins story is almost a decade or two old. It is a similar story in South Korea, though not as depressive as in Japan. It is a somewhat depressive story, but it needed to be shared. The stories shared in the Bloomberg article make your heart ache.
Backpacks
In and around 2015, I bought a Targus backpack, very similar to the Targus TSB194US-70 Motor 16-inch Backpack. That bag has given me a lot of comfort over the years, but it has now become frayed: the zip sometimes works and sometimes doesn't. Unlike in those days, there are a bunch of companies now operating in India. There are eight different options that I came to know about: Aircase, Harrisons Sirius, HP Odyssey, Mokobara, Artic Hunter, Dell Pro Hybrid, Dell Roller Backpack and lastly the Decathlon Quechua Hiking backpack 32L NH Escape 500. Of all the above, two backpacks seem the best. The first one is the Harrisons Sirius; with 45L capacity, I don't think I would need another bag at all. The runner-up is the Decathlon Quechua Hiking Backpack 32L. One of the better things in all the bags is that they have hidden pockets for easily taking the passport in and out while being anti-theft. I do not have to stress how stressful it is to take out the passport and put it back in; almost all the vendors have made sure that it is not a stress point anymore. The good thing about the Quechua is that they are giving a 10-year warranty; the point to be asked is whether the warranty covers the zip. Zips are the first thing that goes bad in bags, and that is actually what happened to my current bag. Decathlon has a store in Wakad, Pune, while I have reached out to the gentleman in charge of Harrisons India to see if they have a reseller in Pune. So hopefully, in the next week or so, I should have a backpack that isn't spilling things all over the place, whichever one I'm able to figure out.
I first installed Ubuntu when Ubuntu 6.06 LTS Dapper Drake was released. I was brand new to Linux. This was Ubuntu's first LTS release; the very first release of Ubuntu had been only a year and a half before. I was impressed by how usable and useful the system was. It soon became my primary home operating system and I wanted to help make it better.
On October 15, 2009, I was helping test the release candidates ISOs for the Ubuntu 9.10 release. Specifically, I tested Edubuntu. Edubuntu has since been discontinued but at the time it was an official Ubuntu flavor preloaded with lots of education apps. One of those education apps was Moodle, an e-learning platform.
When testing Moodle, I found that a default installation would make Moodle impossible to use locally. I figured out how to fix this issue. This was really exciting: I finally found an Ubuntu bug I knew how to fix. I filed the bug report.
This was very late in the Ubuntu 9.10 release process, and Ubuntu was in the Final Freeze state. In Final Freeze, every upload to packages included in the default install needs to be individually approved by a member of the Ubuntu Release Team. Also, I didn't have upload rights to Ubuntu. Jordan Mantha (LaserJock), an Edubuntu maintainer, sponsored my bug fix upload.
I also forwarded my patch to Debian.
While trying to figure out what wasn't working with Moodle, I stumbled across a packaging bug. Edubuntu provided a choice of MySQL or PostgreSQL for the system default database. MySQL was the default, but if PostgreSQL were chosen instead, Moodle wouldn't work. I figured out how to fix this bug too, a week later. Jordan sponsored this upload, and Steve Langasek from the Release Team approved it, so it was also fixed before 9.10 was released.
Although the first bug was new to 9.10 because of a behavior change in a low-level dependency, this PostgreSQL bug existed in stable Ubuntu releases. Therefore, I prepared Stable Release Updates for Ubuntu 9.04 and Ubuntu 8.04 LTS.
Afterwards
Six months later, I was able to attend my first Ubuntu Developer Summit. I was living in Bahrain (in the Middle East) at the time, and a trip to Belgium seemed easier to me than if I had been living in the United States, where I usually live. This was the Ubuntu Developer Summit where planning for Ubuntu 10.10 took place. I like to believe that I helped with the naming, since I added Maverick to the wiki page where people contribute suggestions.
I did not apply for financial sponsorship to attend, and I stayed in a budget hotel on the other side of Brussels. The event venue was on the outskirts of Brussels, so there wasn't a direct bus or metro line to get there. I rented a car. I didn't yet have a smartphone, and I had a LOT of trouble navigating to and from the site every day. I learned then that it's best to stay close to the conference site, since a lot of the event actually happens in the unstructured time in the evenings. Fortunately, I managed to arrive in time for Mark Shuttleworth's keynote, where the Unity desktop was first announced. It was released in Ubuntu 10.10 in the Ubuntu Netbook Remix and became the default for Ubuntu Desktop in Ubuntu 11.04.
Ubuntu's switch to Unity provided me with a huge opportunity. In April 2011, GNOME 3.0 was released. I wanted to try it, but it wasn't yet packaged in Ubuntu or Debian. It was suggested that I could help work on packaging the major new version in a PPA. The PPA was convenient because it was easier for me to get permission to upload there than to upload directly to Ubuntu. My contributions there then enabled me to get upload rights to the Ubuntu Desktop packages later that year.
At a later Ubuntu Developer Summit, it was suggested that I start an official Ubuntu flavor for GNOME. So along with Tim Lunn (darkxst), I co-founded Ubuntu GNOME. Years later, Canonical stopped actively developing Unity; instead, Ubuntu GNOME was merged into Ubuntu Desktop.
Along the way, I became an Ubuntu Core Developer and a Debian Developer. And in January 2022, I joined Canonical on the Desktop Team. This all still feels amazing to me. It took me a long time to be comfortable calling myself a developer!
Conclusion
My first Ubuntu bugfix was 13 years ago this week. Because Ubuntu historically uses alphabetical adjective-animal release names, 13 years means that we have rolled around to the letter K again! Later today, we begin release candidate ISO testing for Ubuntu 22.10 "Kinetic Kudu".
I encourage you to help us test the release candidates and report bugs that you find. If you figure out how to fix a bug, we still sponsor bug fixes. If you are an Ubuntu contributor, I highly encourage you to attend an Ubuntu Summit if you can. The first Ubuntu Summit in years will be in 3 weeks in Prague, but the intent is for the Ubuntu Summits to be recurring events again.
The purpose of this post is to demonstrate a first approach to the analysis of multiwavelength kinetic data, like those obtained using a stopped-flow instrument. To practice, we will use data that were acquired during the stopped-flow practicals of the MetBio summer school of the FrenchBIC. During the practicals, the students monitored the reaction of myoglobin (in its Fe(III) state) with azide, which yields a fast and strong change in the absorbance spectrum of the protein, monitored using a diode array. The data is publicly available on zenodo.
Aims of this tutorial
The purpose of this tutorial is to teach you to use the free software QSoas to run a simple, multiwavelength exponential fit on the data, and to look at the results. This is not a kinetics lecture, so it will not go in depth into the use of the exponential fit and its meaning.
Getting started: loading the file
First, make sure you have a working version of QSoas; you can download it (for free) there. Then download the data files from zenodo. We will work only on the data file Azide-1.25mm_001.dat, but of course the purpose of this tutorial is to enable you to work on all of them. The data files contain the time evolution of the absorbance for all wavelengths, in a matrix format in which each row corresponds to a time point and each column to a wavelength.
Start QSoas, and launch the command:
QSoas> load /comments='"'
Then, choose the Azide-1.25mm_001.dat data file. This should bring up a horizontal red line at the bottom of the data display, with X values between about 0 and 2.5. If you zoom on the red line with the mouse wheel, you'll realize it is data. The /comments='"' part is very important since it allows the extraction of the wavelength from the data. We will look at what it means another day. At this stage, you can look at the loaded data using the command:
QSoas> edit
You should have a window looking like this:
The rows each correspond to a data point displayed in the window below. The first column corresponds to the X values, the second to the Y values, and all the other ones are extra Y columns (they are not displayed by default). What is especially interesting is the first row, which contains a nan as the X value and what is obviously the wavelength for each of the Y values. To tell QSoas that it should take this line as the wavelength (which will be the perpendicular coordinate, the coordinate of the other direction of the matrix), first close the edit window and run:
QSoas> set-perp /from-row=0
Splitting and fitting
Now, we have a single dataset containing a lot of Y columns. We want to fit all of them simultaneously with a (mono) exponential fit. For that, we first need to split the big matrix into a series of X,Y datasets (because fitting only works on the first Y). This is possible by running:
QSoas> expand /style=red-to-blue /flags=kinetics
Your screen should now look like this:
You're looking at the kinetics at all wavelengths at the same time (this may take some time to display on your computer, it is after all a rather large number of data points). The /style=red-to-blue is not strictly necessary, but it gives the red to blue color gradient which makes things easier to look at (and cooler !). The /flags=kinetics is there to attach a label (a flag) to the newly created datasets so we can easily manipulate all of them at the same time. Then it's time to fit, with the following command:
QSoas> mfit-exponential-decay flagged:kinetics
This should bring up a new window. After resizing it, you should have something that looks like this:
The bottom of the fit window is taken by the parameters, each with two checkboxes on the right to set them fixed (i.e. not determined by the fitting mechanism) and/or global (i.e. with a single value for all the datasets, here all the wavelengths). The top shows the current dataset along with the corresponding fit (in green), and, below, the residuals. You can change the dataset by clicking on the horizontal arrows or using Ctrl+PgUp or Ctrl+PgDown (keep holding it to scan fast). See the Z = 728.15 showing that QSoas has recognized that the currently displayed dataset corresponds to the wavelength 728.15. The equation fitted to the data is: $$y(x) = A_\infty + A_1 \exp\left(-(x - x_0)/\tau_1\right)$$
In this case, while the \(A_1\) and \(A_\infty\) parameters clearly depend on the wavelength, the time constant of evolution should be independent of wavelength (the process happens at a certain rate regardless of the wavelength we're analyzing), so that the \(\tau_1\) parameter should be common for all the datasets/wavelengths. Just click on the global checkbox at the right of the tau_1 parameter, make sure it is checked, and hit the Fit button...
The fit should not take long (less than a minute), and then you end up with the results of the fits: all the parameters. The best way to look at the non global parameters like \(A_1\) and \(A_\infty\) is to use the Show Parameters item from the Parameters menu. Using it and clicking on A_inf too should give you a display like this one:
The A_inf parameter corresponds to the spectrum at infinite time (of the azide-bound heme), while the A_1 parameter corresponds to the difference spectrum between the initial (azide-free) and final (azide-bound) states.
Now that the fit is finished, you can save the parameters (to reload them in a later fit) using the Parameters/Save menu item, or export them in a form more suitable for plotting using Parameters/Export (although QSoas can also display parameters saved with Save). This concludes this first approach to fitting the data. What you can do next is:
look at the dependence of the tau_1 parameter as a function of the azide concentration;
try fitting more than one exponential, using for instance:
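(a sketch of such a command; I am assuming the /exponentials option of the exponential fit commands here, so check the QSoas documentation for the exact spelling)
QSoas> mfit-exponential-decay /exponentials=2 flagged:kinetics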
How to read the code above
All the lines starting with QSoas> in the code areas above are meant to be typed into the QSoas command line (at the bottom of the window), and run by pressing enter at the end. You must remove the QSoas> bit. The other lines (when applicable) show you the response of QSoas, in the terminal just above the command line. You may want to play with the QSoas tutorial to learn more about how to interact with QSoas.
About QSoas
QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. Current version is 3.1. You can freely (and at no cost) download its source code or precompiled versions for MacOS and Windows there. Alternatively, you can clone from the GitHub repository.
Contact: find my email address there, or contact me on LinkedIn.
LaTeX, the age-old typesetting system, makes me angry. Not because it's bad.
To clarify: not because there's something better, but because there should
be.
When writing a document using LaTeX, if you are prone to procrastination
it can be very difficult to focus on the task at hand, because there are so
many yaks to shave. Here are a few points of advice.
format the document source for legible reading. Yes, it's the input
to the typesetter, and yes, the output of the typesetter needs to be
legible. But it's worth making the input easy to read, too, because the
source is what you actually spend your time looking at while writing.
avoid rebuilding your rendered document too often. It's slow, it takes you
out of the activity of writing, and it throws up lots of opportunities to
get distracted by some rendering nit that you didn't realise would happen.
Unless you are very good at manoeuvring around long documents, liberally
split them up. I think it's fine to have sections in their own source
files.
Machine-assisted moving around documents is good. If you use (neo)vim,
you can tweak exuberant-ctags to generate more useful tags for LaTeX
documents than what you get OOTB, including jumping to \label definitions
and the BibTeX source of \cite keys. See this stackoverflow post.
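As a hedged illustration of the kind of tweak meant here (these exact lines are mine, not the linked post's), something like this in ~/.ctags makes \label definitions and BibTeX entries show up as tags:
--regex-tex=/\\label\{([^}]*)\}/\1/l,label/
--langdef=bib
--langmap=bib:.bib
--regex-bib=/^@[a-zA-Z]+\{([^,]*),/\1/e,entry/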
If you use syntax highlighting in your editor, take a long, hard look at
what it's drawing attention to. It's not your text, that's for sure. Is
it worth having it on? Consider turning it off. Or (yak shaving beware!)
tweak it to de-emphasise things, instead of emphasising them. One small
example for (neo)vim, to change tokens recognised as being "todo" to
match the styling used for comments (which is normally de-emphasised):
hi def link texTodo Comment
In a nutshell, I think it's wise to move much document reviewing work back
into the editor rather than the rendered document, at least in the early
stages of a section. And to do that, you need the document to be as legible
as possible in the editor. The important stuff is the text you write, not
the TeX macros you've sprinkled around to format it.
A few tips I benefit from in terms of source formatting:
I stick a line of 78 '%' characters between each section and sub-section.
This helps to visually break them up, and makes them quicker to find when
scrolling past.
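For example (the separator is trimmed here, and the section name is arbitrary):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results}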
I indent as much of the content as I can in each chapter/section/subsection
(however deep I go in sections) to tie them to the section they belong to
and see at a glance how deep I am in subsections, just like with source
code. The exception is environments that I can't indent due to other tool
limitations: I have code excerpts demarcated by \begin{code}/\end{code}
which are executed by Haskell's GHCi interpreter, and the indentation can
interfere with Haskell's indentation rules.
For large documents (like a thesis), I have little helper "standalone" .tex
files whose purpose is to let me build just one chapter or section at a time.
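A minimal sketch of such a helper, assuming the thesis keeps its preamble in a shared file and one chapter per source file (the file names are made up):
% chapter3-standalone.tex: build just chapter 3
\documentclass{report}
\input{preamble}          % the same preamble the full thesis uses
\begin{document}
\input{chapters/chapter3}
\end{document}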
I'm fairly sure I'll settle on a serif font for my final document. But I
have found that a sans-serif font is easier on my eyes on-screen. YMMV.
Of course, you need to review the rendered document too! I like to bounce that
to a tablet with a pen/stylus/pencil and review it in a different environment
to where I write. I then end up with a long list of scrawled notes, and a third
distinct activity, back at the writing desk, is to systematically go through
them and apply some GTD-style thinking to them: can I fix it in a few seconds?
Do it straight away. Delegate it? Unlikely. Defer it? Transfer the review note
into another system of record (such as a LaTeX \todo).
This entry explains how I have configured a linux bridge, dnsmasq and
iptables to be able to run and communicate different virtualization systems
and containers on laptops running Debian GNU/Linux.
I've used different variations of this setup for a long time with
VirtualBox and KVM for
the Virtual Machines and Linux-VServer,
OpenVZ, LXC and lately
Docker or Podman for the
Containers.
Required packages
I'm running Debian Sid with systemd and network-manager to configure the
WiFi and Ethernet interfaces, but for the bridge I use bridge-utils with
ifupdown (as I said, this setup is old; I guess ifupdown2 and ifupdown-ng
will work too).
To start and stop the DNS and DHCP services and add NAT rules when the
bridge is brought up or down I execute a script that uses:
ip from iproute2 to get the network information,
dnsmasq to provide the DNS and DHCP services (currently only the
dnsmasq-base package is needed and it is recommended by network-manager,
so it is probably installed),
iptables to configure NAT (for now docker kind of forces me to keep using
iptables, but at some point I'd like to move to nftables).
To make sure you have everything installed you can run the following command:
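(presumably something like the following, based on the packages mentioned above)
$ sudo apt install bridge-utils ifupdown dnsmasq-base iproute2 iptables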
Warning: To use a separate file with ifupdown make sure that /etc/network/interfaces
contains the line:
source /etc/network/interfaces.d/*
or add its contents to /etc/network/interfaces directly, if you prefer.
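A minimal sketch of the bridge definition itself, as a /etc/network/interfaces.d/vmbr0 file (the vmbridge path matches the reload call shown later; the up/down arguments are assumptions):
auto vmbr0
iface vmbr0 inet static
    address 10.0.4.1/24
    bridge_ports none
    bridge_maxwait 0
    up /usr/local/sbin/vmbridge vmbr0 up nat
    down /usr/local/sbin/vmbridge vmbr0 down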
This configuration creates a bridge with the address 10.0.4.1 and
assumes that the machines connected to it will use the 10.0.4.0/24 network;
you can change the network address if you want, as long as you use a private
range and it does not collide with networks used in your Virtual Machines all
should be OK.
The vmbridge script is used to start the dnsmasq server and setup the NAT
rules when the interface is brought up and remove the firewall rules and stop
the dnsmasq server when it is brought down.
The vmbridge script
The vmbridge script launches an instance of dnsmasq that binds to the
bridge interface (vmbr0 in our case) and is used as DNS and DHCP server.
The DNS server reads the /etc/hosts file to publish local DNS names and
forwards all the other requests to the dnsmasq server launched by
NetworkManager, which is listening on the loopback interface.
As that server already does caching, we disable it for ours, with the
added advantage that, if we change networks, new requests go to the new
resolvers, because the DNS server handled by NetworkManager gets restarted and
flushes its cache (this is useful if we connect to a new network that has
internal DNS servers configured to do split DNS for internal services;
with this model all requests get the internal address as soon as the DNS
server is queried again).
The DHCP server is configured to provide IPs to unknown hosts for a sub range
of the addresses on the bridge network and use fixed IPs if the /etc/ethers
file has a MAC with a matching hostname on the /etc/hosts file.
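Putting that together, the dnsmasq invocation inside the script is presumably something along these lines (a sketch, not the actual script; the DHCP range is made up):
dnsmasq --bind-interfaces --interface=vmbr0 --except-interface=lo \
    --no-resolv --server=127.0.0.1 --cache-size=0 \
    --read-ethers --dhcp-range=10.0.4.100,10.0.4.200,12h \
    --pid-file=/run/dnsmasq-vmbr0.pid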
To make things work with old DHCP clients the script also adds checksums to
the DHCP packets using iptables (when the interface is not linked to a
physical device the kernel does not add checksums, but we can fix it adding a
rule on the mangle table).
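The rule in question is presumably the standard CHECKSUM fill one (a sketch):
iptables -t mangle -A POSTROUTING -o vmbr0 -p udp --dport 68 \
    -j CHECKSUM --checksum-fill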
If we want external connectivity we can pass the nat argument and then the
script creates a MASQUERADE rule for the bridge network and enables IP
forwarding.
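In iptables terms that amounts to something like this (again a sketch of what the script presumably does):
sysctl -q -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.4.0/24 ! -o vmbr0 -j MASQUERADE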
The script source code is the following:
NetworkManager Configuration
The default /etc/NetworkManager/NetworkManager.conf file has the following
contents:
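(quoting the Debian default from memory, so treat this as approximate)
[main]
plugins=ifupdown,keyfile

[ifupdown]
managed=false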
Which means that it will leave interfaces managed by ifupdown alone and, by
default, will send the connection DNS configuration to systemd-resolved if it
is installed.
As we want to use dnsmasq for DNS resolution, but we don't want
NetworkManager to modify our /etc/resolv.conf, we are going to add the
following file (/etc/NetworkManager/conf.d/dnsmasq.conf) to our system:
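(a reconstruction based on the description; dns= and rc-manager= are real NetworkManager options)
[main]
dns=dnsmasq
rc-manager=unmanaged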
and restart the NetworkManager service:
$ sudo systemctl restart NetworkManager.service
From now on NetworkManager will start a dnsmasq service listening on
127.0.0.1:53 that queries the servers provided by the networks we connect to,
but it will not touch our /etc/resolv.conf file.
Configuring systemd-resolved
If we start using our own name server but our system has systemd-resolved
installed, we will no longer need or use the DNS stub; programs using it will
use our dnsmasq server directly now, but we keep running systemd-resolved
for the host programs that use its native API or access it through
/etc/nsswitch.conf (when libnss-resolve is installed).
To disable the stub we add a /etc/systemd/resolved.conf.d/disable-stub.conf
file to our machine with the following content:
# Disable the DNS Stub Listener, we use our own dnsmasq
[Resolve]
DNSStubListener=no
and restart systemd-resolved to make sure that the stub is stopped:
$ sudo systemctl restart systemd-resolved.service
Adjusting /etc/resolv.conf
First we remove the existing /etc/resolv.conf file (it does not matter if it
is a link or a regular file) and then create a new one that contains at least
the following line (we can add a search line if it is useful for us):
nameserver 10.0.4.1
From now on we will be using the dnsmasq server launched when we bring up
vmbr0 in several roles:
as our main DNS server from the host (if we use the standard
/etc/nsswitch.conf and libnss-resolve is installed it is queried first,
but systemd-resolved uses it as a forwarder by default if needed),
as the DNS server of the Virtual Machines or containers that use DHCP for
network configuration and attach their virtual interfaces to our bridge,
as the DNS server of docker containers that get the DNS information from
/etc/resolv.conf (if we have entries that use loopback addresses, the
containers that don't use the host network tend to fail, as those addresses
inside the running containers are not linked to the loopback device of the
host).
Testing
After all the configuration files and scripts are in place we just need to
bring up the bridge interface and check that everything works:
$ # Bring interface up
$ sudo ifup vmbr0
$ # Check that it is available
$ ip a ls dev vmbr0
4: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
group default qlen 1000
link/ether 0a:b8:ef:b8:07:6c brd ff:ff:ff:ff:ff:ff
inet 10.0.4.1/24 brd 10.0.4.255 scope global vmbr0
valid_lft forever preferred_lft forever
$ # View the listening ports used by our dnsmasq servers
$ sudo ss -tulpan | grep dnsmasq
udp UNCONN 0 0 127.0.0.1:53 0.0.0.0:* users:(("dnsmasq",pid=1733930,fd=4))
udp UNCONN 0 0 10.0.4.1:53 0.0.0.0:* users:(("dnsmasq",pid=1705267,fd=6))
udp UNCONN 0 0 0.0.0.0%vmbr0:67 0.0.0.0:* users:(("dnsmasq",pid=1705267,fd=4))
tcp LISTEN 0 32 10.0.4.1:53 0.0.0.0:* users:(("dnsmasq",pid=1705267,fd=7))
tcp LISTEN 0 32 127.0.0.1:53 0.0.0.0:* users:(("dnsmasq",pid=1733930,fd=5))
$ # Verify that the DNS server works on the vmbr0 address
$ host www.debian.org 10.0.4.1
Using domain server:
Name: 10.0.4.1
Address: 10.0.4.1#53
Aliases:
www.debian.org has address 130.89.148.77
www.debian.org has IPv6 address 2001:67c:2564:a119::77
Managing running systems
If we want to update DNS entries and/or MAC addresses we can edit the
/etc/hosts and /etc/ethers files and reload the dnsmasq configuration
using the vmbridge script:
$ sudo /usr/local/sbin/vmbridge vmbr0 reload
That call sends a signal to the running dnsmasq server and it reloads the
files; after that we can refresh the DHCP addresses from the client machines or
start using the new DNS names immediately.
I've begun writing up my PhD and, not for the first time, I'm pondering
issues of how best to represent things. Specifically, rewrite rules.
Here's one way of representing an example rewrite rule:
streamFilter g . streamFilter f = streamFilter (g . f)
This is a fairly succinct representation. It's sort-of Haskell, but not
quite. It's an equation. The left-hand side is a pattern: it's intended
to describe not one expression but a family of expressions that match.
The lower case individual letters g and f are free variables:
labelled placeholders within the pattern that can be referred to on the
right hand side. I haven't stipulated what defines a free variable and
what is a more concrete part of the pattern. It's kind-of Haskell, and
I've used the well-known operator . to represent the two stream operators
(streamFilters) being connected together. (In practice, when we get
to the system where rules are actually applied, the connecting operator
is not going to be . at all, so this is also an approximation).
One thing I don't like about . here, despite its commonness, is having
to read right-to-left. I adopted the habit of using the lesser-known >>>
in a lot of my work (which is defined as (>>>) = flip (.)), which reads
left-to-right. And then I have the reverse problem: people aren't familiar
with >>>, and, just like ., it's a stand-in anyway.
Towards the beginning of my PhD, I spent some time inventing rewrite rules
to operate on pairs of operators taken from a defined, known set. I began
representing the rules much as in the example above. Later on, I wanted to
encode them as real Haskell, in order to check them more thoroughly. The
above rule, I first encoded like this
filterFilterPre = streamFilter g . streamFilter f
filterFilterPost = streamFilter (g . f)
prop_filterFilter s = filterFilterPre s == filterFilterPost s
This is real code: the operators were already implemented in
StrIoT, and the final expression defined a
property for QuickCheck.
However, it's still not quite a rewrite rule. The left-hand side, which should
be a pattern, is really a concrete expression. The names f and g are
masquerading as free variables but are really concretely defined in a preamble
I wrote to support running QuickCheck against these things: usually simple
stuff like g = odd, etc.
Eventually, I had to figure out how to really implement rewrite rules in
StrIoT. There were a few approaches I could take. One would be to express
the rules in some abstract form like the first example (but with properly
defined semantics) and write a parser for them: I really wanted to avoid
doing that.
As a happy accident, the solution I landed on was enabled by the semantics of
algebraic-graphs, a Graph library we
adopted to support representing a stream-processing topology. I wrote more
about that in data-types for representing stream-processing
programs.
I was able to implement rewrite rules as ordinary Haskell functions. The
left-hand side of the rewrite rule maps to the left-hand side (pattern) part of
a function definition. The function body implements the right-hand side. The
system that applies the rules attempts to apply each rewrite rule to every
sub-graph of a stream-processing program. The rewrite functions therefore need
to signal whether or not they're applicable at runtime. For that reason, the
return type is wrapped in Maybe, and we provide a catch-all pattern for every
rule which simply returns Nothing. The right-hand side implementation can be
pretty thorny. On the left-hand side, the stream operator connector we've
finally ended up with is Connect from algebraic-graphs.
Here's filter fuse,
taken from the full
ruleset:
filterFuse :: RewriteRule
filterFuse (Connect (Vertex a@(StreamVertex i (Filter sel1) (p:_) ty _ s1))
                    (Vertex b@(StreamVertex _ (Filter sel2) (q:_) _ _ s2))) =
    let c = a { operator    = Filter (sel1 * sel2)
              , parameters  = [[| (\p q x -> p x && q x) $(p) $(q) |]]
              , serviceTime = sumTimes s1 sel1 s2
              }
    in Just (removeEdge c c . mergeVertices (`elem` [a,b]) c)
filterFuse _ = Nothing
That's perhaps the simplest rule in the set. (See e.g.
hoistOp
for one of the worst!)
The question that remains to me, is, which representation, or representations,
to use in the thesis? I'm currently planning to skip the abstract example I
started with and start with the concrete Haskell encoding using QuickCheck.
I'm not sure if it seems weird to have two completely separate implementations
of the rules, but the simpler QuickCheck-checked rules are much closer to the
"core essence" of the rules than the final implementation in StrIoT. And the
derivation of the rules comes earlier in the timeline than the design work that
led to the final StrIoT implementation. The middle option is still compromised,
however, by having concrete expressions pretending to be patterns. So I'm not
sure.
(Sorry if you have read this already; due to a tag mistake, my draft
copy got published.)
I recently bought a refurbished ThinkPad X260. If you have read my post
Laptop refreshment
This post describes how I ve put together a simple static content server for
kubernetes clusters using a Pod with a persistent volume and multiple
containers: an sftp server to manage contents, a web server to publish them
with optional access control and another one to run scripts which need access
to the volume filesystem.
The sftp server runs using
MySecureShell, the web
server is nginx and the script runner uses the
webhook tool to publish endpoints to call
them (the calls will come from other Pods that run backend servers or are
executed from Jobs or CronJobs).
History
The system was developed because we had a NodeJS API with endpoints to upload
files and store them on S3 compatible services that were later accessed via
HTTPS, but the requirements changed and we needed to be able to publish folders
instead of individual files using their original names and apply access
restrictions using our API.
Thinking about our requirements the use of a regular filesystem to keep the
files and folders was a good option, as uploading and serving files is simple.
For the upload I decided to use the sftp protocol, mainly because I already
had an sftp container image based on
mysecureshell prepared; once
we settled on that we added sftp support to the API server and configured it
to upload the files to our server instead of using S3 buckets.
To publish the files we added a nginx container configured
to work as a reverse proxy that uses the
ngx_http_auth_request_module
to validate access to the files (the sub request is configurable, in our
deployment we have configured it to call our API to check if the user can
access a given URL).
Finally we added a third container when we needed to execute some tasks
directly on the filesystem (using kubectl exec with the existing containers
did not seem a good idea, as that is not supported by CronJobs objects, for
example).
The solution we found avoiding the NIH Syndrome (i.e. write our own tool) was
to use the webhook tool to provide the
endpoints to call the scripts; for now we have three:
one to get the disc usage of a PATH,
one to hardlink all the files that are identical on the filesystem,
one to copy files and folders from S3 buckets to our filesystem.
Container definitions
mysecureshell
The mysecureshell container can be used to provide an sftp service with
multiple users (although the files are owned by the same UID and GID) using
standalone containers (launched with docker or podman) or in an
orchestration system like kubernetes, as we are going to do here.
The image is generated using the following Dockerfile:
The /etc/sftp_config file is used to
configure
the mysecureshell server to have all the user homes under /sftp/data, only
allow them to see the files under their home directories as if it were at the
root of the server and close idle connections after 5m of inactivity:
The entrypoint.sh script is the one responsible for preparing the container for
the users included on the /secrets/user_pass.txt file (it creates the users
with their HOME directories under /sftp/data and a /bin/false shell, and
creates the key files from /secrets/user_keys.txt if available).
The script expects a couple of environment variables:
SFTP_UID: UID used to run the daemon and for all the files, it has to be
different than 0 (all the files managed by this daemon are going to be
owned by the same user and group, even if the remote users are different).
SFTP_GID: GID used to run the daemon and for all the files, it has to be
different than 0.
And can use the SSH_PORT and SSH_PARAMS values if present.
It also requires the following files (they can be mounted as secrets in
kubernetes):
/secrets/host_keys.txt: Text file containing the ssh server keys in MIME
format; the file is processed using the reformime utility (the one included
on busybox) and can be generated using the
gen-host-keys script included on the container (it uses ssh-keygen and
makemime).
/secrets/user_pass.txt: Text file containing lines of the form
username:password_in_clear_text (only the users included on this file are
available on the sftp server, in fact in our deployment we use only the
scs user for everything).
And optionally can use another one:
/secrets/user_keys.txt: Text file that contains lines of the form
username:public_ssh_ed25519_or_rsa_key; the public keys are installed on
the server and can be used to log into the sftp server if the username
exists on the user_pass.txt file.
The contents of the entrypoint.sh script are:
The container also includes a couple of auxiliary scripts, the first one can be
used to generate the host_keys.txt file as follows:
$ docker run --rm stodh/mysecureshell gen-host-keys > host_keys.txt
Where the script is as simple as:
And there is another script to generate a .tar file that contains auth data
for the list of usernames passed to it (the file contains a user_pass.txt
file with random passwords for the users, public and private ssh keys for them
and the user_keys.txt file that matches the generated keys).
To generate a tar file for the user scs we can execute the following:
$ docker run --rm stodh/mysecureshell gen-users-tar scs > /tmp/scs-users.tar
To see the contents and the text inside the user_pass.txt file we can do:
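For example (assuming user_pass.txt sits at the root of the archive; tar -O extracts to stdout):
$ tar tvf /tmp/scs-users.tar
$ tar xOf /tmp/scs-users.tar user_pass.txt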
nginx
Basically we are removing the existing docker-entrypoint.d scripts from the
standard image and adding a new one that configures the web server as we want
using a couple of environment variables:
AUTH_REQUEST_URI: URL to use for the auth_request, if the variable is not
found on the environment auth_request is not used.
HTML_ROOT: Base directory of the web server, if not passed the default
/usr/share/nginx/html is used.
Note that if we don't pass the variables everything works as if we were using
the original nginx image.
The contents of the configuration script are:
As we will see later the idea is to use the /sftp/data or /sftp/data/scs
folder as the root of the web published by this container and create an
Ingress object to provide access to it outside of our kubernetes cluster.
webhook-scs
The webhook-scs container is generated using the following Dockerfile:
Again, we use a multi-stage build because in production we wanted to support a
feature that is not yet in the official versions (streaming the
command output as a response instead of waiting until the execution ends); this
time we build the image applying the patch included in this
pull request against a released
version of the source instead of creating a fork.
The entrypoint.sh script is used to generate the webhook configuration file
for the existing hooks using environment variables (basically the
WEBHOOK_WORKDIR and the *_TOKEN variables) and launch the webhook
service:
The entrypoint.sh script generates the configuration file for the webhook
server calling functions that print a yaml section for each hook and
optionally adds rules to validate access to them comparing the value of a
X-Webhook-Token header against predefined values.
The expected token values are taken from environment variables, we can define
a token variable for each hook (DU_TOKEN, HARDLINK_TOKEN or S3_TOKEN)
and a fallback value (COMMON_TOKEN); if no token variable is defined for a
hook no check is done and everybody can call it.
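So a protected call from inside the cluster would look something like this (a sketch; the token value comes from wherever the deployment stores it):
$ curl -s -H "X-Webhook-Token: $DU_TOKEN" \
    "http://scs-svc:9000/hooks/du?path=."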
The Hook
Definition documentation explains the options you can use for each hook, the
ones we have right now do the following:
du: runs on the $WORKDIR directory, passes as first argument to the
script the value of the path query parameter and sets the variable
OUTPUT_FORMAT to the fixed value json (we use that to print the output of
the script in JSON format instead of text).
hardlink: runs on the $WORKDIR directory and takes no parameters.
s3sync: runs on the $WORKDIR directory and sets a lot of environment
variables from values read from the JSON encoded payload sent by the caller
(all the values must be sent by the caller even if they are assigned an empty
value, if they are missing the hook fails without calling the script); we
also set the stream-command-output value to true to make the script show
its output as it is working (we patched the webhook source to be able to
use this option).
The du hook script
The du hook script checks if the argument passed is a directory,
computes its size using the du command and prints the results in text format
or as a JSON dictionary:
The hardlink hook script
The hardlink hook script is really simple: it just runs the
util-linux version of the
hardlink
command on its working directory:
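Conceptually it boils down to something like this (a sketch):
#!/bin/sh
# hardlink hook sketch: deduplicate identical files below the working directory
exec hardlink .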
We use that to reduce the size of the stored content; to manage versions of
files and folders we keep each version in a separate directory, and when one or
more files are unchanged this script makes them hardlinks to the same file,
reducing the space used on disk.
The s3sync hook script
The s3sync hook script uses the s3fs
tool to mount a bucket and synchronise data between a folder inside the bucket
and a directory on the filesystem using rsync; all values needed to execute
the task are taken from environment variables:
Deployment objects
The system is deployed as a StatefulSet with one replica.
Our production deployment is done on AWS and to be
able to scale we use EFS for our
PersistentVolume; the idea is that the volume has no size limit, its
AccessMode can be set to ReadWriteMany and we can mount it from multiple
instances of the Pod without issues, even if they are in different availability
zones.
For development we use k3d and we are also able to scale the
StatefulSet for testing because we use a ReadWriteOnce PVC, but it points
to a hostPath that is backed by a folder mounted on all the
compute nodes, so in reality Pods on different k3d nodes use the same folder
on the host.
secrets.yaml
The secrets file contains the files used by the mysecureshell container that
can be generated using kubernetes pods as follows (we are only creating the
scs user):
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
    --image "stodh/mysecureshell:latest" -- gen-host-keys > "./host_keys.txt"
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
    --image "stodh/mysecureshell:latest" -- gen-users-tar scs > "./users.tar"
Once we have the files we can generate the secrets.yaml file as follows:
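One way of doing it (a sketch; the exact file names inside users.tar are assumptions):
$ tar xf users.tar user_pass.txt user_keys.txt
$ kubectl create secret generic scs-secrets --dry-run=client -o yaml \
    --from-file=host_keys.txt --from-file=user_pass.txt \
    --from-file=user_keys.txt > secrets.yaml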
The resulting secrets.yaml will look like the following file (the base64
would match the content of the files, of course):
pvc.yaml
The persistent volume claim for a simple deployment (one with only one instance
of the statefulSet) can be as simple as this:
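A sketch of such a claim (the name matches the objects created later; the requested size is arbitrary):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi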
In this definition we don't set the storageClassName, so the default one is used.
Volumes in our development environment (k3d)
In our development deployment we create the following PersistentVolume as
required by the
Local
Persistence Volume Static Provisioner (note that the /volumes/scs-pv directory
has to be created by hand; in our k3d system we mount the same host directory
on the /volumes path of all the nodes and create the scs-pv directory by hand
before deploying the persistent volume):
And to make sure that everything works as expected we update the PVC definition
to add the right storageClassName:
Volumes in our production environment (aws)
In the production deployment we don't create the PersistentVolume (we are
using the
aws-efs-csi-driver which
supports Dynamic Provisioning) but we add the storageClassName (we set it
to the one mapped to the EFS driver, i.e. efs-sc) and set ReadWriteMany
as the accessMode:
statefulset.yaml
The definition of the statefulSet is as follows:
Notes about the containers:
nginx: As this is an example the web server is not using an
AUTH_REQUEST_URI and uses the /sftp/data directory as the root of the web
(to get to the files uploaded for the scs user we will need to use /scs/
as a prefix on the URLs).
mysecureshell: We are adding the IPC_OWNER capability to the container to
be able to use some of the sftp-* commands inside it, but they are
not really needed, so adding the capability is optional.
webhook: We are launching this container in privileged mode to be able to
use the s3fs-fuse, as it will not work otherwise for now (see this
kubernetes issue); if
the functionality is not needed the container can be executed with regular
privileges; besides, as we are not enabling public access to this service we
don't define *_TOKEN variables (if required the values should be read from a
Secret object).
Notes about the volumes:
the devfuse volume is only needed if we plan to use the s3fs command on
the webhook container, if not we can remove the volume definition and its
mounts.
service.yaml
To be able to access the different services on the statefulset we publish the
relevant ports using the following Service object:
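A sketch consistent with the ports seen in the kubectl output below (the selector label is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: scs-svc
spec:
  selector:
    app: scs
  ports:
    - name: ssh
      port: 22
    - name: http
      port: 80
    - name: webhook
      port: 9000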
ingress.yaml
To download the scs files from the outside we can add an ingress object like
the following (the definition is for testing using the localhost name):
Deployment
To deploy the statefulSet we create a namespace and apply the object
definitions shown before:
$ kubectl create namespace scs-demo
namespace/scs-demo created
$ kubectl -n scs-demo apply -f secrets.yaml
secret/scs-secrets created
$ kubectl -n scs-demo apply -f pvc.yaml
persistentvolumeclaim/scs-pvc created
$ kubectl -n scs-demo apply -f statefulset.yaml
statefulset.apps/scs created
$ kubectl -n scs-demo apply -f service.yaml
service/scs-svc created
$ kubectl -n scs-demo apply -f ingress.yaml
ingress.networking.k8s.io/scs-ingress created
Once the objects are deployed we can check that all is working using kubectl:
$ kubectl -n scs-demo get all,secrets,ingress
NAME READY STATUS RESTARTS AGE
pod/scs-0 3/3 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/scs-svc ClusterIP 10.43.0.47 <none> 22/TCP,80/TCP,9000/TCP 21s
NAME READY AGE
statefulset.apps/scs 1/1 24s
NAME TYPE DATA AGE
secret/default-token-mwcd7 kubernetes.io/service-account-token 3 53s
secret/scs-secrets Opaque 3 39s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/scs-ingress nginx localhost 172.21.0.5 80 17s
At this point we are ready to use the system.
Usage examples
File uploads
As previously mentioned, in our system the idea is to use the sftp server from
other Pods, but to test the system we are going to do a kubectl port-forward
and connect to the server using our host client and the password we have
generated (it is on the user_pass.txt file, inside the users.tar archive):
$ kubectl -n scs-demo port-forward service/scs-svc 2020:22 &
Forwarding from 127.0.0.1:2020 -> 22
Forwarding from [::1]:2020 -> 22
$ PF_PID=$!
$ sftp -P 2020 scs@127.0.0.1    1
Handling connection for 2020
The authenticity of host '[127.0.0.1]:2020 ([127.0.0.1]:2020)' can't be \
established.
ED25519 key fingerprint is SHA256:eHNwCnyLcSSuVXXiLKeGraw0FT/4Bb/yjfqTstt+088.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:2020' (ED25519) to the list of known \
hosts.
scs@127.0.0.1's password: **********
Connected to 127.0.0.1.
sftp> ls -la
drwxr-xr-x    2 sftp     sftp         4096 Sep 25 14:47 .
dr-xr-xr-x    3 sftp     sftp         4096 Sep 25 14:36 ..
sftp> !date -R > /tmp/date.txt    2
sftp> put /tmp/date.txt .
Uploading /tmp/date.txt to /date.txt
date.txt                      100%   32    27.8KB/s   00:00
sftp> ls -l
-rw-r--r--    1 sftp     sftp           32 Sep 25 15:21 date.txt
sftp> ln date.txt date.txt.1    3
sftp> ls -l
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt.1
sftp> put /tmp/date.txt date.txt.2    4
Uploading /tmp/date.txt to /date.txt.2
date.txt                      100%   32    27.8KB/s   00:00
sftp> ls -l    5
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt.1
-rw-r--r--    1 sftp     sftp           32 Sep 25 15:21 date.txt.2
sftp> exit
$ kill "$PF_PID"
[1] + terminated  kubectl -n scs-demo port-forward service/scs-svc 2020:22
We connect to the sftp service on the forwarded port with the scs user.
We put a file we have created on the host on the remote directory.
We create a hard link of the uploaded file.
We put a second copy of the file we created locally.
On the file list we can see that the first two files have two hardlinks.
File retrievals
If our ingress is configured right we can download the date.txt file from the
URL http://localhost/scs/date.txt:
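For example:
$ curl -s http://localhost/scs/date.txt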
Use of the webhook container
To finish this post we are going to show how we can call the hooks directly,
from a CronJob and from a Job.
Direct script call (du)
In our deployment the direct calls are done from other Pods; to simulate it we
are going to do a port-forward and call the script with an existing PATH (the
root directory) and a bad one:
$ kubectl -n scs-demo port-forward service/scs-svc 9000:9000 > /dev/null &
$ PF_PID=$!
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=.")"
$ echo $JSON
{"path":"","bytes":"4160"}
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=foo")"
$ echo $JSON
{"error":"The provided PATH ('foo') is not a directory"}
$ kill $PF_PID
As we only have files in the base directory we print the disk usage of the .
PATH, and the output is in JSON format because we export OUTPUT_FORMAT with
the value json in the webhook configuration.
Cronjobs (hardlink)
As explained before, the webhook container can be used to run cronjobs; the
following one uses an alpine container to call the hardlink script each
minute (that setup is for testing, obviously):
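A sketch of such a CronJob (the name, label and schedule match the session below; calling the hook with busybox wget is my assumption):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hardlink
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            cronjob: hardlink
        spec:
          restartPolicy: Never
          containers:
          - name: hardlink
            image: alpine:latest
            # busybox wget is enough to call the hook endpoint
            command: ["wget", "-q", "-O-", "http://scs-svc:9000/hooks/hardlink"]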
The following console session shows how we create the object, allow a couple of
executions and remove it (in production we keep it, but running once a day, not
every minute):
$ kubectl -n scs-demo apply -f webhook-cronjob.yaml    1
cronjob.batch/hardlink created
$ kubectl -n scs-demo get pods -l "cronjob=hardlink" -w    2
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Pending 0 0s
hardlink-27735351-zvpnb 0/1 ContainerCreating 0 0s
hardlink-27735351-zvpnb 0/1 Completed 0 2s
^C
$ kubectl -n scs-demo logs pod/hardlink-27735351-zvpnb    3
Mode: real
Method: sha256
Files: 3
Linked: 1 files
Compared: 0 xattrs
Compared: 1 files
Saved: 32 B
Duration: 0.000220 seconds
$ sleep 60
$ kubectl -n scs-demo get pods -l "cronjob=hardlink"    4
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Completed 0 83s
hardlink-27735352-br5rn 0/1 Completed 0 23s
$ kubectl -n scs-demo logs pod/hardlink-27735352-br5rn    5
Mode: real
Method: sha256
Files: 3
Linked: 0 files
Compared: 0 xattrs
Compared: 0 files
Saved: 0 B
Duration: 0.000070 seconds
$ kubectl -n scs-demo delete -f webhook-cronjob.yaml    6
cronjob.batch "hardlink" deleted
This command creates the cronjob object.
This checks the pods with our cronjob label; we interrupt it once we see
that the first run has completed.
With this command we see the output of the execution; as this is the first
execution we see that date.txt.2 has been replaced by a hardlink (the
summary does not name the file, but it is the only candidate, given the
contents of the original uploads).
After waiting a little bit we check the pods executed again to get the name
of the latest one.
The log now shows that nothing was done.
As this is a demo, we delete the cronjob.
Jobs (s3sync)
The following job can be used to synchronise the contents of a directory in an
S3 bucket with the SCS filesystem:
The file with parameters for the script must be something like this:
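A sketch of that file (the key names are my guesses; per the hook description every key must be present, even if empty):
{
  "AWS_KEY": "<aws-access-key-id>",
  "AWS_SECRET_KEY": "<aws-secret-access-key>",
  "S3_BUCKET": "s3fs-test",
  "S3_REGION": "",
  "S3_PATH": "test",
  "SCS_PATH": "test"
}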
Once we have both files we can run the Job as follows:
$ kubectl -n scs-demo create secret generic webhook-job-secrets \    1
    --from-file="s3sync.json=s3sync.json"
secret/webhook-job-secrets created
$ kubectl -n scs-demo apply -f webhook-job.yaml    2
job.batch/s3sync created
$ kubectl -n scs-demo get pods -l "cronjob=s3sync"    3
NAME READY STATUS RESTARTS AGE
s3sync-zx2cj 0/1 Completed 0 12s
$ kubectl -n scs-demo logs s3sync-zx2cj    4
Mounted bucket 's3fs-test' on '/root/tmp.jiOjaF/s3data'
sending incremental file list
created directory ./test
./
kyso.png
Number of files: 2 (reg: 1, dir: 1)
Number of created files: 2 (reg: 1, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 15,075 bytes
Total transferred file size: 15,075 bytes
Literal data: 15,075 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.147 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 15,183
Total bytes received: 74
sent 15,183 bytes received 74 bytes 30,514.00 bytes/sec
total size is 15,075 speedup is 0.99
Called umount for '/root/tmp.jiOjaF/s3data'
Script exit code: 0
$ kubectl -n scs-demo delete -f webhook-job.yaml    5
job.batch "s3sync" deleted
$ kubectl -n scs-demo delete secrets webhook-job-secrets    6
secret "webhook-job-secrets" deleted
Here we create the webhook-job-secrets secret that contains the
s3sync.json file.
This command runs the job.
Checking the label cronjob=s3sync we get the Pods executed by the job.
Here we print the logs of the completed job.
Once we are finished we remove the Job.
And also the secret.
Final remarks
This post has been longer than I expected, but I believe it can be useful for
someone; in any case, next time I'll try to explain something shorter or will
split it into multiple entries.
Rama II
This would be more of a short post about the current book I am reading. People who have seen Arrival would probably feel at home. People who have seen Avatar would also be familiar with the theme or concept I am sharing. Now before I go into detail, it seems that Arthur C. Clarke wanted to use a powerful god or mythological character for the name, and that is somehow how the RAMA series started.
Now the first book in the series explores an extraterrestrial spaceship that Earth people see and connect with. The spaceship is going somewhere and is doing an Earth flyby, so humans don't have much time to explore it, and it is difficult to figure out how the spaceship works. The spaceship is around 40 km long. They don't meet any living Ramans, mostly automated systems and something called biots.
As I'm still reading it, I can't really say what happens next. Although in Rama, or Rama I, the powers that be want to destroy it, in the end they don't. Whether they could have destroyed it or not is a whole other argument. What people need to realize is that the book is a giant What If scenario.
Aliens
If there were any intelligent life in the Universe, I don't think they would take the pain of visiting Earth. And the reasons are far more mundane than anything else. Look at how we treat each other. One of the largest democracies on Earth, the U.S., has been so divided. While the progressives have made some good policies, the Republicans are into political stunts; consider the political stunt of sending refugees to Martha's Vineyard. The ex-president also made a statement that he can declassify anything just by thinking about it. Now understand this: a refugee is a legal migrant whose papers will be looked into by the American Govt., and until the time his/her/their application is approved or declined they can work, have a house, or do whatever to support themselves. There is a huge difference between having refugee status and being an undocumented migrant. And it isn't as if the Republicans don't know this; they did it because they thought they would be able to get away with it.
Both of the above episodes don't throw us in a good light. If we treat others like the above, how can we expect to be treated? And refugees always have a hard time, not just in the U.S.; the UK, you name it. The UK just some months ago announced a controversial deal where they will send refugees to Rwanda while their refugee application is accepted or denied; most of them would be denied.
The Indian Government is more of the same. A friend, a casual acquaintance, Nishant Shah, shared the same issues as I had shared a few weeks back, even though he's an NRI. So, it seems we are incapable of helping ourselves as well as helping others. On top of it, we have the temerity to use the word alien for them.
Now, just for a moment, imagine you are an intelligent life form. An intelligent life form that could coax energy from the stars: why would you come to Earth, where the people at large have already destroyed more than half of the atmosphere and are still arguing about it with the other half? On top of it, we see a list of authoritarian figures like Putin and Xi Jinping whose whole idea is to hold on to power for as long as they can, damn the consequences. Mr. Modi is no different; he is the dumbest of the lot, and that's saying something. Most of the projects made by him are in disarray, Pune Metro in my city being an example. And this is when Pune was the first applicant to apply for a Metro. Just like the UK, India too has tanked the economy under his guidance. Every time they come closer to target dates, the targets are pushed far into the future; for example, now they have said 2040 for a good economy. And just like in other countries, he has some following even though he has a record of failure in every sector: the economy, education, defense, the list is endless. There isn't a single accomplishment by him other than screwing with other religions. Most of my countrymen also don't really care or bother to see how the economy grows and how exports play a crucial part, otherwise they would be more alert. Also, just like the UK, India too gave tax cuts to the wealthy; most people don't understand how economies function and the PM doesn't care. The media too is subservient, and because nobody asks the questions, nobody seems to be accountable :(
Religion
There is another aspect that has also come to the fore: just like in medieval times, I see a great fervor for religion happening here, especially since the pandemic, and people are much more insecure than ever before. Before, I used to think that insecurity and religious appeal only happen in the uneducated, and I was wrong. I have friends who are highly educated and yet still are blinded by religion. In many such cases or situations, I find their faith to be a sham. If you have faith, then there shouldn't be any room for doubt or insecurity. And if you are not in doubt or insecure, you won't need to talk about your religion. The difference between the two is like a person who has satiated his/her/their thirst and hunger: that person would be in a relaxed mode, while the other person would continue to create drama, as there is no peace in their heart.
Another fact is that none of the major religions, whether it is Christianity, Islam, Buddhism or even Hinduism, has allowed for the existence of extraterrestrials. We have already labeled them as aliens even before meeting them, just from our imagination. And more often than not, we end up killing them. There are and have been scores of movies that have explored the idea: Independence Day, Aliens, Arrival, the list goes on and on. And because our religions have never thought about the idea of ETs and how they will affect us, if ETs do come, all the religions and religious practices would panic and die. That is possibly why even the 1947 Roswell Incident has been covered up.
If the above was not enough, the bombing of Hiroshima and Nagasaki by the Americans will always be a black mark against humanity. From the alien perspective, if you look at the technology that they have vis-a-vis what we have, they will probably think of us as spoilt babies, and they wouldn't be wrong. Spoilt babies with nuclear weapons are not exactly a healthy mix.
Earth
To add to our fragile ego, we haven't even left Earth, even though we have made sure we exploit it as much as we can. We even made the anthropocentric or homocentric view that makes man the apex animal, and to top it off we have this weird idea that extraterrestrials come here, or will invade, for water. A species that knows how to get energy out of stars but cannot make a little H2O? The idea belies logic, and again it has been done to death. Why we as humans are so insecure, even though we have been given so much, I fail to understand. I have shared the Kardashev Scale on this blog numerous times.
The above are some of the reasons why Arthur C. Clarke's works are so controversial, and this is when I haven't even read the whole book. It forces us to ask questions that we normally would never think about. And I have to repeat that when these books were published for the first time, they were new ideas. All the movies, from Stanley Kubrick's 2001: A Space Odyssey to Aliens, Arrival, and Avatar, somewhere or other reference some aspect of this work. It is highly possible that I may read and re-read the book a couple of times before beginning the next one. There is also quite a bit of human drama, but then that is to be expected. I have to admit I did have some nice dreams after reading just the first few pages, imagining being given the opportunity to experience an extraterrestrial spaceship that is beyond our wildest dreams. While the Governments may try to cover it up, the experience of the ones who get to see that spacecraft would be unimaginable. And if they were able to share the pictures or a livestream, it would be nothing short of amazing.
For those who want more, there is a lot going on with the new James Webb Telescope. I am sure it will give rise to more questions than answers.
I today released version 0.06.4
of my WAP WML browser wApua and also uploaded that release to Debian Unstable.
It's a bugfix release and the first upstream release since 2017.
It fixes the recognition of WAP WML pages with more recent DTD
location URLs ending in .dtd instead of .xml
(and some other small differences). No idea when these URLs changed,
but I assume they have been changed to look more like the URLs of
other DTDs. The old URLs of the DTD still work, but more recent WAP
pages (yes, they do exist :-) seem to use the new DTD URLs, so there
was a need to recognise them instead of throwing an annoying warning.
Thanks to Lian Begett for the bug report!
Culture
Just before I start, I would like to point out that this post may be, or probably is, NSFW. Then again, what is SFW (Safe For Work) and what is NSFW? So much depends on culture and the perception of culture from wherever we are or wherever we take birth. But still, to be on the safe side, I have marked it as NSFW. Now there have been a few statements and ideas that gave me pause. This will be a sort of chaotic blog post, as I am in such a phase today.
For example, while I do not know which culture or country this comes from, somebody shared that in some cultures one can say May your poop be easy with a straight face. I dunno which culture this is, but if somebody said that to me I would just die from laughing, or maybe poop there itself. I can understand if it is a constipated person, but a whole culture? Unless their DNA is really screwed, I don't think so, but then what do I know? I do know that we shit when we have extreme reactions of either joy or fear. And IIRC, this comes from the mammal response to dangerous situations, and we kept it as humans evolved. I would really be interested to know which culture that is. I did come to know that the Japanese wish that you may not experience hard work, or something to that effect, while ironically they themselves are becoming extinct due to hard work and not enough relaxation; toxic workplaces are common in Japan according to social scientists and population experts.
Another term that I couldn't figure out is The Florida Man Strikes Again, which is usually used when somebody does something stupid or weird. While it is exclusively used in the American context, I am curious to know how it came about. Does Florida really have such people, or is it an exaggeration? I have also heard the term What happens in Vegas, stays in Vegas; I think it is also called Sin City, although why just Vegas is beyond me.
Omron-8712 Blood pressure machine
I felt so stupid. I found another e-commerce site called Wellness Forever. They had the blood pressure machine I wanted, an Omron-8712. I bought it online and they delivered it within half an hour. Amazon took six days and in the end didn't deliver it at all.
I tried taking measurements with it yesterday. I have yet to figure out what it all means, but I did get measurements of 109 SYS, 88 DIA and a pulse of 72. As far as the pulse is concerned, I guess that is normal; the others I just don't know. If only I had known this a couple of months ago. I was able to register the product as well as download and use the Omron Connect app. For roughly INR 2.5k you have a sort of health monitoring system. It isn't a Star Trek tricorder in any shape or form, but it will have to do while the tricorder gets invented. And while we are on the subject, let's not forget Elizabeth Holmes and the scam called Theranos. It really is something to see how Elizabeth Holmes modeled so much of herself on Steve Jobs, mimicking how he left college/education halfway. A part of me is sad that Theranos is not real. Joe Scott shared some perspectives on the same just a few days ago. The idea in itself is pretty seductive, to say the least, and that is the reason the scam went on for more than a decade, and it would perhaps have gone on longer if some people hadn't gotten the truth out.
I do see something like that potentially coming, as A.I. takes a bigger role in automating testing. Half a decade to a decade from now, who knows if there will be an algorithm that is able to do what is needed? If such a product were to come to the marketplace at a decent price, it would revolutionize medicine, especially in countries like India, South Africa, and all sorts of remote places, especially with all sorts of off-grid technologies coming and maturing in the marketplace. Before I forget, there is a game called Cell on Android that shares the evolution of life on Earth. It also gives credence to the idea that life has come to Earth 6 times and has been destroyed multiple times by asteroids. It is in the idle game format, so you can see the humble beginnings from the primordial soup through various kinds of cells and bacteria to, finally, a mammal. This is where I am, with a long way to go.
Indian Bureaucracy
One of the few things that the British gave to India is the bureaucracy, and the bureaucracy tests us in myriad ways. It will be a full 2 months on 5th September, and I haven't yet got a death certificate. And I need it for a sundry number of things. The same goes for a disability certificate. What is and was interesting is my trip to the local big hospital called Sassoon Hospital. My mum had shared incidents that occurred in the 1950s when she and the family had come to Pune. According to her, when she was alive, while Sassoon was the place to be, it was big and chaotic and you never knew where you were going. That was in 1950; I had the same experience in 2022. The adage the more things change, the more they remain the same seems to hold true for Sassoon Hospital.
Btw, those of you who think the Devil exists: he is totally a fallacy. There is a popular myth that the devil comes to deal with you when somebody close to you passes. I was waiting desperately for him when mum passed. Any deal that he/she/they would have offered me I would have gladly taken, but all my waiting was for nothing. While I believe evil exists, it is manifested by humans and nobody else. The whole idea and story of the devil is just to control young children and nothing beyond that.
Debconf 2023, friends, JPEGOptim, and EVs
Quite a number of friends went to Albania this year, as India won the right to host Debconf for the year 2023. While I did lurk on the Debconf orga IRC channel, I'm not sure how helpful I would be currently. One piece of news that warmed my heart is that some people will be coming to India to check the site way before, to make sure things go smoothly. Nothing like having more eyes (in this case bodies) to throw at a problem, and hopefully it will be sorted. While I have not been working for the last couple of years, one of the things that I had to do, and have been doing, is moving a lot of stuff online. This is in part due to the Government's own intention of having everything on the cloud. One of the things I have probably shared more than enough times is that the storage most of these sites give is like the 1990s. I tried jpegoptim and while it works, it degrades the quality of the image quite a bit. The whole thing seems backward, especially as newer and newer smartphones are capturing more data per picture (megapixel resolution), case in point the Samsung Galaxy A04 that is being introduced. But this is not only about newer phones; even my earlier phone, a Samsung J-5/500 which I bought in 2016, took images at 5 MB. So it is not a new issue but a continuous one. And almost all Govt. sites have the upper band fixed at 1 MB. But this is not limited to Govt. sites alone; most sites in India are somewhat frozen in the 1990s. And it isn't as if resources for designing web pages using HTML5, CSS3, Javascript, Python, or Java aren't available. If worse comes to worst, one can even use AMP to make his, her or their point. But this is if they want to do stuff. I will be sharing a few photos with commentary; there are still places where I can put photos apart from social media.
Friends
Last week, on Saturday, suddenly all the friends decided to show up. I have no clue one way or the other why, but am glad they showed up.
I will have to be a bit rapid about what I am sharing below, so here goes nothing.
1. The first picture shows Mahendra, Akshat, me, and Sagar Sukhose (Mangesh's friend). The picture was taken by Mangesh Diwate. We talked about quite a few things that could be done in Debian. One of the things I shared was bringing more stuff from BSD to Debian; I am sure there's still quite a lot of security software that would be advantageous to have in Debian. The best person to talk to or to guide this would undoubtedly be Paul Wise, or, as he is affectionately called, Pabs. He is one of the shy ones and yet knows so much about how things work. The one and only time I met him was in 2016. The other thing that we talked about is porting Debian to one of the phones. This has been done in the past, by a Puneite some 4-5 years back. While I don't recollect the gentleman's name, I remember that the porting was done on a Motorola phone as that was the easiest to do. He had tried some other mobile but that didn't work. Making Debian available on a phone is hard work. Just to have an idea, I went to the xda-developers forum and found out that while the M51 has been added, my specific phone model is not there: a Samsung Galaxy M52G Android (samsung; SM-M526B; lahaina; arm64-v8a) v12. You look at the chat and you understand how difficult the process might be. One of the other ideas that Akshat pitched was Debian Astro; this is something that is close to the heart of many, including me. I also proposed to have some kind of web app or something where we can find and share the various astronomy and related projects done by various agencies. While there is a NASA app, nothing comes close to JSR, and that site just shares stuff, no speculation. There are so many projects undertaken or being done by the EU, JAXA, and ISRO, and even middle-east countries are trying, but other than people who follow some of the developments, we hear almost nothing. Even the Chinese have made some long strides, but most people know nothing about them. And it's sad that those developments are not being known, shared, or even speculated about as much as, say, NASA or SpaceX. How we go about it and how we get people to contribute or ask questions around it would be interesting.
2. The second picture was shared by Akshat. Akshat was sharing how in Albania people are moving around on these electric scooters. I dunno if that is the right word for them or not. I had heard from a couple of friends who had gone to Vietnam a few years ago how most people in Vietnam had modified their scooters, and there were snaking lines of electric wires charging scooters. I have no clue whether they were closer to a Vespa or something like the above. In India, the Govt. is in partnership with the oil, gas, and coal mafia, just as it was in Australia (the new Govt. in Australia is making changes); the same thing is here. With the humongous profits that the oil sector provides the petro-states and others, corruption is bound to happen. We talk, and that's the extent of things.
3. The third picture is from a nearby area called F.C. Road, or Fergusson College Road. The area has come up quite sharply (commercially) in the last few years. Apparently, Mr. Kushal is making a real-life replica of Wall Street which would be given to commercial tenants. Right now the real estate market is tight in India; we will see how things pan out in the next few years.
4. Number four is an image of a Ganesh idol near my house. There is a 10-day festival of the elephant god that people celebrate every year. For the last couple of years, because of the pandemic, people were unable to celebrate the festival the way it is meant to be celebrated. This time some people are going overboard while others are cautious, and rightfully so.
5. Last and not least, one of the things that people do at this celebration is wear new clothes, so I shared a photo of a gentleman who had bought and was wearing new clothes. Most countries around the world are similar; Latin America is very similar to India in many ways, perhaps Gunnar can share, especially about religious activities. The elephant god is known for his penchant for sweets, and that can be seen from his rounded stomach; that is also how he is celebrated. He is known to make problems disappear, or that is supposed to be his thing. We do have something like 4 billion gods, so each one has to be given some work or quality to justify the same.