Debian Brasil: About Debian Brasil at Latinoware 2022







$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-59697b644f-7fsln 1/1 Running 2 (119m ago) 7d
kube-system calico-node-6ptsh 1/1 Running 2 (119m ago) 7d
kube-system calico-node-7x5j8 1/1 Running 2 (119m ago) 7d
kube-system calico-node-qlnf6 1/1 Running 2 (119m ago) 7d
kube-system coredns-565d847f94-79jlw 1/1 Running 2 (119m ago) 7d
kube-system coredns-565d847f94-fqwn4 1/1 Running 2 (119m ago) 7d
kube-system etcd-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-apiserver-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-controller-manager-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-4n9b7 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-k4rzv 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-lz2dd 1/1 Running 2 (119m ago) 7d
kube-system kube-scheduler-k8s-master 1/1 Running 2 (119m ago) 7d
The first step would be to set up the Linkerd CLI:

$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh

On most systems, this should be sufficient to set up the CLI. You may need to restart your terminal to load the updated PATH. If you have a non-standard configuration and linkerd is not found after the installation, add the following to your PATH to be able to find the CLI:
export PATH=$PATH:~/.linkerd2/bin/

At this point, checking the version would give you the following:
$ linkerd version
Client version: stable-2.12.2
Server version: unavailable
Setting up Linkerd Control Plane

Before installing Linkerd on the cluster, run the following step to check the cluster for prerequisites:

$ linkerd check --pre
Linkerd core checks
===================
kubernetes-api
--------------
can initialize the client
can query the Kubernetes API
kubernetes-version
------------------
is running the minimum Kubernetes API version
is running the minimum kubectl version
pre-kubernetes-setup
--------------------
control plane namespace does not already exist
can create non-namespaced resources
can create ServiceAccounts
can create Services
can create Deployments
can create CronJobs
can create ConfigMaps
can create Secrets
can read Secrets
can read extension-apiserver-authentication configmap
no clock skew detected
linkerd-version
---------------
can determine the latest version
cli is up-to-date
Status check results are

All the prerequisites appear to be good right now, so installation can proceed.

The first step of the installation is to set up the Custom Resource Definitions (CRDs) that Linkerd requires. The linkerd CLI only prints the resource YAML to standard output and does not create the resources directly in Kubernetes, so you need to pipe the output to kubectl apply to create them in the cluster you're working with.
$ linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run linkerd install | kubectl apply -f - to install the control plane.
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
Next, install the Linkerd control plane components in the same manner, this time without the --crds switch:

$ linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created
Kubernetes will start spinning up the control plane components, and you should see the following when you list the pods:

$ kubectl get pods -A
...
linkerd linkerd-destination-67b9cc8749-xqcbx 4/4 Running 0 69s
linkerd linkerd-identity-59b46789cc-ntfcx 2/2 Running 0 69s
linkerd linkerd-proxy-injector-7fc85556bf-vnvw6 1/2 Running 0 69s
The components are running in the new linkerd namespace. To verify the setup, run a check:

$ linkerd check
Linkerd core checks
===================
kubernetes-api
--------------
can initialize the client
can query the Kubernetes API
kubernetes-version
------------------
is running the minimum Kubernetes API version
is running the minimum kubectl version
linkerd-existence
-----------------
'linkerd-config' config map exists
heartbeat ServiceAccount exist
control plane replica sets are ready
no unschedulable pods
control plane pods are ready
cluster networks contains all pods
cluster networks contains all services
linkerd-config
--------------
control plane Namespace exists
control plane ClusterRoles exist
control plane ClusterRoleBindings exist
control plane ServiceAccounts exist
control plane CustomResourceDefinitions exist
control plane MutatingWebhookConfigurations exist
control plane ValidatingWebhookConfigurations exist
proxy-init container runs as root user if docker container runtime is used
linkerd-identity
----------------
certificate config is valid
trust anchors are using supported crypto algorithm
trust anchors are within their validity period
trust anchors are valid for at least 60 days
issuer cert is using supported crypto algorithm
issuer cert is within its validity period
issuer cert is valid for at least 60 days
issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
proxy-injector webhook has valid cert
proxy-injector cert is valid for at least 60 days
sp-validator webhook has valid cert
sp-validator cert is valid for at least 60 days
policy-validator webhook has valid cert
policy-validator cert is valid for at least 60 days
linkerd-version
---------------
can determine the latest version
cli is up-to-date
control-plane-version
---------------------
can retrieve the control plane version
control plane is up-to-date
control plane and cli versions match
linkerd-control-plane-proxy
---------------------------
control plane proxies are healthy
control plane proxies are up-to-date
control plane proxies and cli versions match
Status check results are

Everything looks good.

Setting up the Viz Extension

At this point, the required components for the service mesh are set up, but let's also install the viz extension, which provides visualization capabilities that will come in handy later. Once again, linkerd uses the same pattern for installing the extension.
$ linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
server.policy.linkerd.io/admin created
authorizationpolicy.policy.linkerd.io/admin created
networkauthentication.policy.linkerd.io/kubelet created
server.policy.linkerd.io/proxy-admin created
authorizationpolicy.policy.linkerd.io/proxy-admin created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created
A few seconds later, you should see the following in your pod list:

$ kubectl get pods -A
...
linkerd-viz prometheus-b5865f776-w5ssf 1/2 Running 0 35s
linkerd-viz tap-64f5c8597b-rqgbk 2/2 Running 0 35s
linkerd-viz tap-injector-7c75cfff4c-wl9mx 2/2 Running 0 34s
linkerd-viz web-8c444745-jhzr5 2/2 Running 0 34s
The viz components live in the linkerd-viz namespace. You can now check out the viz dashboard:

$ linkerd viz dashboard
Linkerd dashboard available at:
http://localhost:50750
Grafana dashboard available at:
http://localhost:50750/grafana
Opening Linkerd dashboard in the default browser
Opening in existing browser session.
Next, let's deploy a sample nginx workload to the cluster:

$ cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
$ kubectl apply -f deploy.yaml

Back in the viz dashboard, I do see the workload deployed, but it isn't currently communicating with the Linkerd control plane, so it doesn't show any metrics and the Meshed count is 0.
The pod currently runs only the nginx container:

$ kubectl get pod nginx-deployment-cd55c47f5-cgxw2 -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: aee0295dda906f7935ce5c150ae30360005f5330e98c75a550b7cc0d1532f529
cni.projectcalico.org/podIP: 172.16.36.89/32
cni.projectcalico.org/podIPs: 172.16.36.89/32
creationTimestamp: "2022-11-05T19:35:12Z"
generateName: nginx-deployment-cd55c47f5-
labels:
app: nginx
pod-template-hash: cd55c47f5
name: nginx-deployment-cd55c47f5-cgxw2
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-deployment-cd55c47f5
uid: b604f5c4-f662-4333-aaa0-bd1a2b8b08c6
resourceVersion: "22979"
uid: 8fe30214-491b-4753-9fb2-485b6341376c
spec:
containers:
- image: nginx:latest
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2bt6z
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: k8s-node1
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-2bt6z
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:12Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:16Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:16Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:13Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://f088f200315b44cbeed16499aba9b2d1396f9f81645e53b032d4bfa44166128a
image: docker.io/library/nginx:latest
imageID: docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
lastState:
name: nginx
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:35:15Z"
hostIP: 192.168.2.216
phase: Running
podIP: 172.16.36.89
podIPs:
- ip: 172.16.36.89
qosClass: BestEffort
startTime: "2022-11-05T19:35:12Z"
Let's directly inject the Linkerd data plane into this running workload. We do this by retrieving the YAML of the deployment, piping it to the linkerd CLI to inject the necessary components, and then piping the changed resources to kubectl apply:

$ kubectl get deploy nginx-deployment -o yaml | linkerd inject - | kubectl apply -f -
deployment "nginx-deployment" injected
deployment.apps/nginx-deployment configured

Back in the viz dashboard, the workload is now integrated into the Linkerd control plane.
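If you prefer the command line to the dashboard, the viz extension can also report mesh status there. A minimal sketch, assuming the same namespace and workload as above (output will vary):

$ linkerd viz stat deployments -n default

The MESHED column should now show that all replicas of nginx-deployment are meshed.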
The pod now includes the Linkerd init and proxy containers:

$ kubectl get pod nginx-deployment-858bdd545b-55jpf -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: 1ec3d345f859be8ead0374a7e880bcfdb9ba74a121b220a6fccbd342ac4b7ea8
cni.projectcalico.org/podIP: 172.16.36.90/32
cni.projectcalico.org/podIPs: 172.16.36.90/32
linkerd.io/created-by: linkerd/proxy-injector stable-2.12.2
linkerd.io/inject: enabled
linkerd.io/proxy-version: stable-2.12.2
linkerd.io/trust-root-sha256: 354fe6f49331e8e03d8fb07808e00a3e145d2661181cbfec7777b41051dc8e22
viz.linkerd.io/tap-enabled: "true"
creationTimestamp: "2022-11-05T19:44:15Z"
generateName: nginx-deployment-858bdd545b-
labels:
app: nginx
linkerd.io/control-plane-ns: linkerd
linkerd.io/proxy-deployment: nginx-deployment
linkerd.io/workload-ns: default
pod-template-hash: 858bdd545b
name: nginx-deployment-858bdd545b-55jpf
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-deployment-858bdd545b
uid: 2e618972-aa10-4e35-a7dd-084853673a80
resourceVersion: "23820"
uid: 62f1857a-b701-4a19-8996-b5b605ff8488
spec:
containers:
- env:
- name: _pod_name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: _pod_ns
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: _pod_nodeName
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: LINKERD2_PROXY_LOG
value: warn,linkerd=info
- name: LINKERD2_PROXY_LOG_FORMAT
value: plain
- name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
value: linkerd-dst-headless.linkerd.svc.cluster.local.:8086
- name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
- name: LINKERD2_PROXY_POLICY_SVC_ADDR
value: linkerd-policy.linkerd.svc.cluster.local.:8090
- name: LINKERD2_PROXY_POLICY_WORKLOAD
value: $(_pod_ns):$(_pod_name)
- name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
value: all-unauthenticated
- name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
- name: LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT
value: 100ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT
value: 1000ms
- name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
value: 0.0.0.0:4190
- name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
value: 0.0.0.0:4191
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
value: 127.0.0.1:4140
- name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
value: 0.0.0.0:4143
- name: LINKERD2_PROXY_INBOUND_IPS
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIPs
- name: LINKERD2_PROXY_INBOUND_PORTS
value: "80"
- name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
value: svc.cluster.local.
- name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
value: 25,587,3306,4444,5432,6379,9300,11211
- name: LINKERD2_PROXY_DESTINATION_CONTEXT
value:
"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)"
- name: _pod_sa
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
- name: _l5d_ns
value: linkerd
- name: _l5d_trustdomain
value: cluster.local
- name: LINKERD2_PROXY_IDENTITY_DIR
value: /var/run/linkerd/identity/end-entity
- name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
value:
-----BEGIN CERTIFICATE-----
MIIBiDCCAS6gAwIBAgIBATAKBggqhkjOPQQDAjAcMRowGAYDVQQDExFpZGVudGl0
eS5saW5rZXJkLjAeFw0yMjExMDUxOTIxMDlaFw0yMzExMDUxOTIxMjlaMBwxGjAY
BgNVBAMTEWlkZW50aXR5LmxpbmtlcmQuMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcD
QgAE8AgxbWWa1qgEgN3ykFAOJ3sw9nSugUk1N5Qfvo6jXX/8/TZUW0ddko/N71+H
EcKc72kK0tlclj8jDi3pzJ4C0KNhMF8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdJQQW
MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBThSr0yAj5joW7pj/NZPYcfIIepbzAKBggqhkjOPQQDAgNIADBFAiAomg0TVn6N
UxhOyzZdg848lAvH0Io9Ra/Ef2hxZGN0LgIhAIKjrsgDUqZA8XHiiciYYicxFnKr
Tw5yj9gBhVAgYCaB
-----END CERTIFICATE-----
- name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
value: /var/run/secrets/tokens/linkerd-identity-token
- name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
value: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
- name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_IDENTITY_SVC_NAME
value: linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_DESTINATION_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_POLICY_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_TAP_SVC_NAME
value: tap.linkerd-viz.serviceaccount.identity.linkerd.cluster.local
image: cr.l5d.io/linkerd/proxy:stable-2.12.2
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /usr/lib/linkerd/linkerd-await
- --timeout=2m
livenessProbe:
failureThreshold: 3
httpGet:
path: /live
port: 4191
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: linkerd-proxy
ports:
- containerPort: 4143
name: linkerd-proxy
protocol: TCP
- containerPort: 4191
name: linkerd-admin
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 4191
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 2102
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /var/run/linkerd/identity/end-entity
name: linkerd-identity-end-entity
- mountPath: /var/run/secrets/tokens
name: linkerd-identity-token
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
- image: nginx:latest
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
initContainers:
- args:
- --incoming-proxy-port
- "4143"
- --outgoing-proxy-port
- "4140"
- --proxy-uid
- "2102"
- --inbound-ports-to-ignore
- 4190,4191,4567,4568
- --outbound-ports-to-ignore
- 4567,4568
image: cr.l5d.io/linkerd/proxy-init:v2.0.0
imagePullPolicy: IfNotPresent
name: linkerd-init
resources:
limits:
cpu: 100m
memory: 20Mi
requests:
cpu: 100m
memory: 20Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65534
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /run
name: linkerd-proxy-init-xtables-lock
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
nodeName: k8s-node1
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-9zpnn
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
- emptyDir:
name: linkerd-proxy-init-xtables-lock
- emptyDir:
medium: Memory
name: linkerd-identity-end-entity
- name: linkerd-identity-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: identity.l5d.io
expirationSeconds: 86400
path: linkerd-identity-token
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:16Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:19Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:19Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:15Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://62028867c48aaa726df48249a27c52cd8820cd33e8e5695ad0d322540924754e
image: cr.l5d.io/linkerd/proxy:stable-2.12.2
imageID: cr.l5d.io/linkerd/proxy@sha256:787db5055b2a46a3c4318ef3b632461261f81254c8e47bf4b9b8dab2c42575e4
lastState:
name: linkerd-proxy
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:44:16Z"
- containerID: containerd://8f8ce663c19360a7b6868ace68a4a5119f0b18cd57ffebcc2d19331274038381
image: docker.io/library/nginx:latest
imageID: docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
lastState:
name: nginx
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:44:19Z"
hostIP: 192.168.2.216
initContainerStatuses:
- containerID: containerd://c0417ea9c8418ab296bf86077e81c5d8be06fe9b87390c138d1c5d7b73cc577c
image: cr.l5d.io/linkerd/proxy-init:v2.0.0
imageID: cr.l5d.io/linkerd/proxy-init@sha256:7d5e66b9e176b1ebbdd7f40b6385d1885e82c80a06f4c6af868247bb1dffe262
lastState:
name: linkerd-init
ready: true
restartCount: 0
state:
terminated:
containerID: containerd://c0417ea9c8418ab296bf86077e81c5d8be06fe9b87390c138d1c5d7b73cc577c
exitCode: 0
finishedAt: "2022-11-05T19:44:16Z"
reason: Completed
startedAt: "2022-11-05T19:44:15Z"
phase: Running
podIP: 172.16.36.90
podIPs:
- ip: 172.16.36.90
qosClass: Burstable
startTime: "2022-11-05T19:44:15Z"
At this point, the necessary components are set up for you to explore Linkerd further. You can also try out the jaeger and multicluster extensions, which are installed and used in much the same way as the viz extension.

Inject Linkerd data plane automatically

In this approach, we will see how to instruct Kubernetes to automatically inject the Linkerd data plane into workloads at deployment time. We achieve this by adding the linkerd.io/inject annotation to the deployment descriptor, which causes the proxy injector admission hook to inject the Linkerd data plane components automatically at deployment time:

$ cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2
template:
metadata:
labels:
app: nginx
annotations:
linkerd.io/inject: enabled
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80

This annotation can also be specified at the namespace level to affect all the workloads within the namespace. Note that any resources created before the annotation was added to the namespace will require a rollout restart to trigger the injection of the Linkerd components.
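As a sketch of the namespace-level approach just described (namespace and deployment names are only examples), you could annotate the namespace and then restart existing workloads so the proxy injector picks them up:

$ kubectl annotate namespace default linkerd.io/inject=enabled
$ kubectl rollout restart deployment nginx-deployment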
Uninstalling Linkerd

Now that we have walked through the installation and setup of Linkerd, let's also cover how to remove it from the infrastructure and go back to the state prior to its installation.

The first step is to remove extensions, such as viz:

$ linkerd viz uninstall | kubectl delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-metrics-api" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-prometheus" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-admin" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-api" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-check" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-tap-injector" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-metrics-api" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-prometheus" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-auth-delegator" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-admin" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-api" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-check" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-tap-injector" deleted
role.rbac.authorization.k8s.io "web" deleted
rolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-auth-reader" deleted
rolebinding.rbac.authorization.k8s.io "web" deleted
apiservice.apiregistration.k8s.io "v1alpha1.tap.linkerd.io" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-tap-injector-webhook-config" deleted
namespace "linkerd-viz" deleted
authorizationpolicy.policy.linkerd.io "admin" deleted
authorizationpolicy.policy.linkerd.io "metrics-api" deleted
authorizationpolicy.policy.linkerd.io "proxy-admin" deleted
authorizationpolicy.policy.linkerd.io "tap" deleted
authorizationpolicy.policy.linkerd.io "tap-injector" deleted
server.policy.linkerd.io "admin" deleted
server.policy.linkerd.io "metrics-api" deleted
server.policy.linkerd.io "proxy-admin" deleted
server.policy.linkerd.io "tap-api" deleted
server.policy.linkerd.io "tap-injector-webhook" deleted
In order to uninstall the control plane, you first need to uninject the Linkerd data plane components from any existing running pods:

$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 10m
$ kubectl get deployment nginx-deployment -o yaml | linkerd uninject - | kubectl apply -f -
deployment "nginx-deployment" uninjected
deployment.apps/nginx-deployment configured

Now you can delete the control plane:
$ linkerd uninstall | kubectl delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-destination" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-identity" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-proxy-injector" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-policy" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-destination-policy" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-destination" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-identity" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-proxy-injector" deleted
role.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
rolebinding.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
customresourcedefinition.apiextensions.k8s.io "authorizationpolicies.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "httproutes.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "meshtlsauthentications.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "networkauthentications.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "serverauthorizations.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "servers.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "serviceprofiles.linkerd.io" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-proxy-injector-webhook-config" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-policy-validator-webhook-config" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-sp-validator-webhook-config" deleted
namespace "linkerd" deleted
At this point we're back to the original state:

$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-deployment-cd55c47f5-99xf2 1/1 Running 0 82s
default nginx-deployment-cd55c47f5-tt58t 1/1 Running 0 86s
kube-system calico-kube-controllers-59697b644f-7fsln 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-6ptsh 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-7x5j8 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-qlnf6 1/1 Running 2 (3h39m ago) 7d1h
kube-system coredns-565d847f94-79jlw 1/1 Running 2 (3h39m ago) 7d2h
kube-system coredns-565d847f94-fqwn4 1/1 Running 2 (3h39m ago) 7d2h
kube-system etcd-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-apiserver-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-controller-manager-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-4n9b7 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-k4rzv 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-lz2dd 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-scheduler-k8s-master 1/1 Running 2 (3h39m ago) 7d2h

I hope you find this useful to get you started on your journey with Linkerd. Head on over to the docs for more information, guides, and best practices.
Series: Discworld #23
Publisher: Harper
Copyright: 1998
Printing: May 2014
ISBN: 0-06-228014-7
Format: Mass market
Pages: 409
"And sin, young man, is when you treat people as things. Including yourself. That's what sin is." "It's a lot more complicated than that " "No. It ain't. When people say things are a lot more complicated than that, they means they're getting worried that they won t like the truth. People as things, that's where it starts."This loses a bit in context because this book is literally about treating people as things, and thus the observation feels more obvious when it arrives in this book than when you encounter it on its own, but it's still a great quote. Sadly, I found a lot of this book annoying. One of those annoyances is a pet peeve that others may or may not share: I have very little patience for dialogue in phonetically-spelled dialect, and there are two substantial cases of that here. One is a servant named Igor who speaks with an affected lisp represented by replacing every ess sound with th, resulting in lots of this:
"No, my Uncle Igor thtill workth for him. Been thtruck by lightning three hundred timeth and thtill putth in a full night'th work."I like Igor as a character (he's essentially a refugee from The Addams Family, which adds a good counterpoint to the malicious and arrogant evil of the vampires), but my brain stumbles over words like "thtill" every time. It's not that I can't decipher it; it's that the deciphering breaks the flow of reading in a way that I found not at all fun. It bugged me enough that I started skipping his lines if I couldn't work them out right away. The other example is the Nac Mac Feegles, who are... well, in the book, they're Pictsies and a type of fairy, but they're Scottish Smurfs, right down to only having one female (at least in this book). They're entertainingly homicidal, but they all talk like this:
"Ach, hins tak yar scaggie, yer dank yowl callyake!"I'm from the US and bad with accents and even worse with accents reproduced in weird spellings, and I'm afraid that I found 95% of everything said by Nac Mac Feegles completely incomprehensible to the point where I gave up even trying to read it. (I'm now rather worried about the Tiffany Aching books and am hoping Pratchett toned the dialect down a lot, because I'm not sure I can deal with more of this.) But even apart from the dialect, I thought something was off about the plot structure of this book. There's a lot of focus on characters who don't seem to contribute much to the plot resolution. I wanted more of the varied strengths of Lancre coming together, rather than the focus on Granny. And the vampires are absurdly powerful, unflappable, smarmy, and contemptuous of everyone, which makes for threatening villains but also means spending a lot of narrative time with a Discworld version of Jacob Rees-Mogg. I feel like there's enough of that in the news already. Also, while I will avoid saying too much about the plot, I get very suspicious when older forms of oppression are presented as good alternatives to modernizing, rationalist spins on exploitation. I see what Pratchett was trying to do, and there is an interesting point here about everyone having personal relationships and knowing their roles (a long-standing theme of the Lancre Discworld stories). But I think the reason why there is some nostalgia for older autocracy is that we only hear about it from stories, and the process of storytelling often creates emotional distance and a patina of adventure and happy outcomes. Maybe you can make an argument that classic British imperialism is superior to smug neoliberalism, but both of them are quite bad and I don't want either of them. On a similar note, Nanny Ogg's tyranny over her entire extended clan continues to be played for laughs, but it's rather unappealing and seems more abusive the more one thinks about it. I realize the witches are not intended to be wholly good or uncomplicated moral figures, but I want to like Nanny, and Pratchett seems to be writing her as likable, even though she has an astonishing lack of respect for all the people she's related to. One might even say that she treats them like things. There are some great bits in this book, and I suspect there are many people who liked it more than I did. I wouldn't be surprised if it was someone's favorite Discworld novel. But there were enough bits that didn't work for me that I thought it averaged out to a middle-of-the-road entry. Followed by The Fifth Elephant in publication order. This is the last regular witches novel, but some of the thematic thread is picked up by The Wee Free Men, the first Tiffany Aching novel. Rating: 7 out of 10
exuberant-ctags to generate more useful tags for LaTeX documents than what you get OOTB, including jumping to \label s and the BibTeX source of \cite s. See this stackoverflow post.

hi def link texTodo Comment

\begin{code} / \end{code} blocks, which are executed by Haskell's GHCi interpreter, and the indentation can interfere with Haskell's indentation rules.

.tex files whose purpose is to let me build just one chapter or section at a time.

\\todo ).

And finally
Series: Discworld #21
Publisher: Harper
Copyright: 1997
Printing: May 2014
ISBN: 0-06-228020-1
Format: Mass market
Pages: 455
Not a muscle moved on Rust's face. There was a clink as Vimes's badge was set neatly on the table.

"I don't have to take this," Vimes said calmly.

"Oh, so you'd rather be a civilian, would you?"

"A watchman is a civilian, you inbred streak of pus!"

Vimes is also willing to think of a war as a possible crime, which may not be as effective as Vetinari's tricky scheming but which is very emotionally satisfying.

As with most Pratchett books, the moral underpinnings of the story aren't that elaborate: people are people despite cultural differences, wars are bad, and people are too ready to believe the worst of their neighbors. The story arc is not going to provide great insights into human character that the reader did not already have. But watching Vimes stubbornly attempt to do the right thing regardless of the rule book is wholly satisfying, and watching Vetinari at work is equally, if differently, enjoyable. Not the best Discworld novel, but one of the better ones.

Followed by The Last Continent in publication order, and by The Fifth Elephant thematically.

Rating: 8 out of 10
Jobs or CronJobs).

NodeJS API with endpoints to upload files and store them on S3 compatible services that were later accessed via HTTPS, but the requirements changed and we needed to be able to publish folders instead of individual files using their original names and apply access restrictions using our API.
Thinking about our requirements the use of a regular filesystem to keep the
files and folders was a good option, as uploading and serving files is simple.
For the upload I decided to use the sftp protocol, mainly because I already
had an sftp container image based on
mysecureshell prepared; once
we settled on that we added sftp support to the API server and configured it
to upload the files to our server instead of using S3 buckets.
To publish the files we added a nginx container configured
to work as a reverse proxy that uses the
ngx_http_auth_request_module
to validate access to the files (the sub request is configurable, in our
deployment we have configured it to call our API to check if the user can
access a given URL).
Finally we added a third container when we needed to execute some tasks
directly on the filesystem (using kubectl exec
with the existing containers
did not seem a good idea, as that is not supported by CronJobs
objects, for
example).
The solution we found avoiding the NIH Syndrome (i.e. write our own tool) was to use the webhook tool to provide the endpoints to call the scripts; for now we have three:

- du: get the disk usage of a PATH,
- hardlink: hardlink all the files that are identical on the filesystem,
- s3sync: synchronise the contents of a folder with an S3 bucket.

The mysecureshell container can be used to provide an sftp service with multiple users (although the files are owned by the same UID and GID) using standalone containers (launched with docker or podman) or in an orchestration system like kubernetes, as we are going to do here.

The image is generated using the following Dockerfile:
ARG ALPINE_VERSION=3.16.2
FROM alpine:$ALPINE_VERSION as builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN apk update &&\
apk add --no-cache alpine-sdk git musl-dev &&\
git clone https://github.com/sto/mysecureshell.git &&\
cd mysecureshell &&\
./configure --prefix=/usr --sysconfdir=/etc --mandir=/usr/share/man\
--localstatedir=/var --with-shutfile=/var/lib/misc/sftp.shut --with-debug=2 &&\
make all && make install &&\
rm -rf /var/cache/apk/*
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
COPY --from=builder /usr/bin/mysecureshell /usr/bin/mysecureshell
COPY --from=builder /usr/bin/sftp-* /usr/bin/
RUN apk update &&\
apk add --no-cache openssh shadow pwgen &&\
sed -i -e "s ^.*\(AuthorizedKeysFile\).*$ \1 /etc/ssh/auth_keys/%u "\
/etc/ssh/sshd_config &&\
mkdir /etc/ssh/auth_keys &&\
cat /dev/null > /etc/motd &&\
add-shell '/usr/bin/mysecureshell' &&\
rm -rf /var/cache/apk/*
COPY bin/* /usr/local/bin/
COPY etc/sftp_config /etc/ssh/
COPY entrypoint.sh /
EXPOSE 22
VOLUME /sftp
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]
The /etc/sftp_config file is used to configure the mysecureshell server to have all the user homes under /sftp/data, only allow them to see the files under their home directories as if it were at the root of the server, and close idle connections after 5m of inactivity:
# Default mysecureshell configuration
<Default>
# All users will have access their home directory under /sftp/data
Home /sftp/data/$USER
# Log to a file inside /sftp/logs/ (only works when the directory exists)
LogFile /sftp/logs/mysecureshell.log
# Force users to stay in their home directory
StayAtHome true
# Hide Home PATH, it will be shown as /
VirtualChroot true
# Hide real file/directory owner (just change displayed permissions)
DirFakeUser true
# Hide real file/directory group (just change displayed permissions)
DirFakeGroup true
# We do not want users to keep forever their idle connection
IdleTimeOut 5m
</Default>
# vim: ts=2:sw=2:et
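With StayAtHome and VirtualChroot enabled, a provisioned user sees their home directory as the root of the server when connecting with a regular sftp client. As an illustration (host and port are assumptions; the scs user comes from the user_pass.txt example further below):

$ sftp -P 2022 scs@sftp.example.com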
The entrypoint.sh script is the one responsible for preparing the container for the users included in the /secrets/user_pass.txt file (it creates the users with their HOME directories under /sftp/data and a /bin/false shell, and creates the key files from /secrets/user_keys.txt if available).

The script expects a couple of environment variables:

- SFTP_UID: UID used to run the daemon and for all the files; it has to be different than 0 (all the files managed by this daemon are going to be owned by the same user and group, even if the remote users are different).
- SFTP_GID: GID used to run the daemon and for all the files; it has to be different than 0.

It also uses the SSH_PORT and SSH_PARAMS values if present.

It also requires the following files (they can be mounted as secrets in kubernetes):

- /secrets/host_keys.txt: Text file containing the ssh server keys in mime format; the file is processed using the reformime utility (the one included on busybox) and can be generated using the gen-host-keys script included on the container (it uses ssh-keygen and makemime).
- /secrets/user_pass.txt: Text file containing lines of the form username:password_in_clear_text (only the users included on this file are available on the sftp server; in fact, in our deployment we use only the scs user for everything).
- /secrets/user_keys.txt: Text file that contains lines of the form username:public_ssh_ed25519_or_rsa_key; the public keys are installed on the server and can be used to log into the sftp server if the username exists on the user_pass.txt file.

The contents of the entrypoint.sh script are:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Expects SSH_UID & SSH_GID on the environment and uses the value of the
# SSH_PORT & SSH_PARAMS variables if present
# SSH_PARAMS
SSH_PARAMS="-D -e -p $ SSH_PORT:=22 $ SSH_PARAMS "
# Fixed values
# DIRECTORIES
HOME_DIR="/sftp/data"
CONF_FILES_DIR="/secrets"
AUTH_KEYS_PATH="/etc/ssh/auth_keys"
# FILES
HOST_KEYS="$CONF_FILES_DIR/host_keys.txt"
USER_KEYS="$CONF_FILES_DIR/user_keys.txt"
USER_PASS="$CONF_FILES_DIR/user_pass.txt"
USER_SHELL_CMD="/usr/bin/mysecureshell"
# TYPES
HOST_KEY_TYPES="dsa ecdsa ed25519 rsa"
# ---------
# FUNCTIONS
# ---------
# Validate HOST_KEYS, USER_PASS, SFTP_UID and SFTP_GID
_check_environment() {
# Check the ssh server keys ... we don't boot if we don't have them
if [ ! -f "$HOST_KEYS" ]; then
cat <<EOF
We need the host keys on the '$HOST_KEYS' file to proceed.
Call the 'gen-host-keys' script to create and export them on a mime file.
EOF
exit 1
fi
# Check that we have users ... if we don't we can't continue
if [ ! -f "$USER_PASS" ]; then
cat <<EOF
We need at least the '$USER_PASS' file to provision users.
Call the 'gen-users-tar' script to create a tar file to create an archive that
contains public and private keys for users, a 'user_keys.txt' with the public
keys of the users and a 'user_pass.txt' file with random passwords for them
(pass the list of usernames to it).
EOF
exit 1
fi
# Check SFTP_UID
if [ -z "$SFTP_UID" ]; then
echo "The 'SFTP_UID' can't be empty, pass a 'GID'."
exit 1
fi
if [ "$SFTP_UID" -eq "0" ]; then
echo "The 'SFTP_UID' can't be 0, use a different 'UID'"
exit 1
fi
# Check SFTP_GID
if [ -z "$SFTP_GID" ]; then
echo "The 'SFTP_GID' can't be empty, pass a 'GID'."
exit 1
fi
if [ "$SFTP_GID" -eq "0" ]; then
echo "The 'SFTP_GID' can't be 0, use a different 'GID'"
exit 1
fi
}
# Adjust ssh host keys
_setup_host_keys() {
opwd="$(pwd)"
tmpdir="$(mktemp -d)"
cd "$tmpdir"
ret="0"
reformime <"$HOST_KEYS" ret="1"
for kt in $HOST_KEY_TYPES; do
key="ssh_host_$ kt _key"
pub="ssh_host_$ kt _key.pub"
if [ ! -f "$key" ]; then
echo "Missing '$key' file"
ret="1"
fi
if [ ! -f "$pub" ]; then
echo "Missing '$pub' file"
ret="1"
fi
if [ "$ret" -ne "0" ]; then
continue
fi
cat "$key" >"/etc/ssh/$key"
chmod 0600 "/etc/ssh/$key"
chown root:root "/etc/ssh/$key"
cat "$pub" >"/etc/ssh/$pub"
chmod 0600 "/etc/ssh/$pub"
chown root:root "/etc/ssh/$pub"
done
cd "$opwd"
rm -rf "$tmpdir"
return "$ret"
# Create users
_setup_user_pass() {
opwd="$(pwd)"
tmpdir="$(mktemp -d)"
cd "$tmpdir"
ret="0"
[ -d "$HOME_DIR" ] mkdir "$HOME_DIR"
# Make sure the data dir can be managed by the sftp user
chown "$SFTP_UID:$SFTP_GID" "$HOME_DIR"
# Allow the user (and root) to create directories inside the $HOME_DIR, if
# we don't allow it the directory creation fails on EFS (AWS)
chmod 0755 "$HOME_DIR"
# Create users
echo "sftp:sftp:$SFTP_UID:$SFTP_GID:::/bin/false" >"newusers.txt"
sed -n "/^[^#]/ s/:/ /p " "$USER_PASS" while read -r _u _p; do
echo "$_u:$_p:$SFTP_UID:$SFTP_GID::$HOME_DIR/$_u:$USER_SHELL_CMD"
done >>"newusers.txt"
newusers --badnames newusers.txt
# Disable write permission on the directory to forbid remote sftp users to
# remove their own root dir (they have already done it); we adjust that
# here to avoid issues with EFS (see before)
chmod 0555 "$HOME_DIR"
# Clean up the tmpdir
cd "$opwd"
rm -rf "$tmpdir"
return "$ret"
# Adjust user keys
_setup_user_keys() {
if [ -f "$USER_KEYS" ]; then
sed -n "/^[^#]/ s/:/ /p " "$USER_KEYS" while read -r _u _k; do
echo "$_k" >>"$AUTH_KEYS_PATH/$_u"
done
fi
}
# Main function
exec_sshd() {
_check_environment
_setup_host_keys
_setup_user_pass
_setup_user_keys
echo "Running: /usr/sbin/sshd $SSH_PARAMS"
# shellcheck disable=SC2086
exec /usr/sbin/sshd -D $SSH_PARAMS
}
# ----
# MAIN
# ----
case "$1" in
"server") exec_sshd ;;
*) exec "$@" ;;
esac
# vim: ts=2:sw=2:et
We can generate the host_keys.txt file as follows:
$ docker run --rm stodh/mysecureshell gen-host-keys > host_keys.txt
The gen-host-keys script itself is quite simple:

#!/bin/sh
set -e
# Generate new host keys
ssh-keygen -A >/dev/null
# Replace hostname
sed -i -e 's/@.*$/@mysecureshell/' /etc/ssh/ssh_host_*_key.pub
# Print in mime format (stdout)
makemime /etc/ssh/ssh_host_*
# vim: ts=2:sw=2:et
The gen-users-tar script generates a .tar file that contains auth data for the list of usernames passed to it (the file contains a user_pass.txt file with random passwords for the users, public and private ssh keys for them and the user_keys.txt file that matches the generated keys).

To generate a tar file for the user scs we can execute the following:
$ docker run --rm stodh/mysecureshell gen-users-tar scs > /tmp/scs-users.tar
To check the contents of the tar archive and the generated user_pass.txt file we can do:
$ tar tvf /tmp/scs-users.tar
-rw-r--r-- root/root 21 2022-09-11 15:55 user_pass.txt
-rw-r--r-- root/root 822 2022-09-11 15:55 user_keys.txt
-rw------- root/root 387 2022-09-11 15:55 id_ed25519-scs
-rw-r--r-- root/root 85 2022-09-11 15:55 id_ed25519-scs.pub
-rw------- root/root 3357 2022-09-11 15:55 id_rsa-scs
-rw------- root/root 3243 2022-09-11 15:55 id_rsa-scs.pem
-rw-r--r-- root/root 729 2022-09-11 15:55 id_rsa-scs.pub
$ tar xfO /tmp/scs-users.tar user_pass.txt
scs:20JertRSX2Eaar4x
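As mentioned earlier, these files can be mounted as secrets in kubernetes. A hypothetical sketch (the secret name is an assumption):

$ kubectl create secret generic scs-secrets \
    --from-file=host_keys.txt=host_keys.txt \
    --from-file=user_pass.txt=user_pass.txt \
    --from-file=user_keys.txt=user_keys.txt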
The code of the gen-users-tar script is the following:

#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
USER_KEYS_FILE="user_keys.txt"
USER_PASS_FILE="user_pass.txt"
# ---------
# MAIN CODE
# ---------
# Generate user passwords and keys, return 1 if no username is received
if [ "$#" -eq "0" ]; then
return 1
fi
opwd="$(pwd)"
tmpdir="$(mktemp -d)"
cd "$tmpdir"
for u in "$@"; do
ssh-keygen -q -a 100 -t ed25519 -f "id_ed25519-$u" -C "$u" -N ""
ssh-keygen -q -a 100 -b 4096 -t rsa -f "id_rsa-$u" -C "$u" -N ""
# Legacy RSA private key format
cp -a "id_rsa-$u" "id_rsa-$u.pem"
ssh-keygen -q -p -m pem -f "id_rsa-$u.pem" -N "" -P "" >/dev/null
chmod 0600 "id_rsa-$u.pem"
echo "$u:$(pwgen -s 16 1)" >>"$USER_PASS_FILE"
echo "$u:$(cat "id_ed25519-$u.pub")" >>"$USER_KEYS_FILE"
echo "$u:$(cat "id_rsa-$u.pub")" >>"$USER_KEYS_FILE"
done
tar cf - "$USER_PASS_FILE" "$USER_KEYS_FILE" id_* 2>/dev/null
cd "$opwd"
rm -rf "$tmpdir"
# vim: ts=2:sw=2:et
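Before moving on to the nginx container, a quick standalone test of the sftp image could look like this (UID/GID, port and local paths are assumptions; the secrets directory is expected to contain the host_keys.txt, user_pass.txt and user_keys.txt files generated above):

$ docker run -d --name mysecureshell -p 2022:22 \
    -e SFTP_UID=1000 -e SFTP_GID=1000 \
    -v "$(pwd)/secrets:/secrets:ro" \
    -v "$(pwd)/sftp:/sftp" \
    stodh/mysecureshell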
The nginx-scs container is generated using the following Dockerfile:
ARG NGINX_VERSION=1.23.1
FROM nginx:$NGINX_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN rm -f /docker-entrypoint.d/*
COPY docker-entrypoint.d/* /docker-entrypoint.d/
We are removing the existing docker-entrypoint.d scripts from the standard image and adding a new one that configures the web server as we want using a couple of environment variables:

- AUTH_REQUEST_URI: URL to use for the auth_request; if the variable is not found on the environment auth_request is not used.
- HTML_ROOT: Base directory of the web server; if not passed the default /usr/share/nginx/html is used.

If they are not set, the server works like the standard nginx image.

The contents of the configuration script are:
#!/bin/sh
# Replace the default.conf nginx file by our own version.
set -e
if [ -z "$HTML_ROOT" ]; then
  HTML_ROOT="/usr/share/nginx/html"
fi
if [ "$AUTH_REQUEST_URI" ]; then
  cat >/etc/nginx/conf.d/default.conf <<EOF
server {
  listen 80;
  server_name localhost;
  location / {
    auth_request /.auth;
    root $HTML_ROOT;
    index index.html index.htm;
  }
  location /.auth {
    internal;
    proxy_pass $AUTH_REQUEST_URI;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI \$request_uri;
  }
  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
EOF
else
  cat >/etc/nginx/conf.d/default.conf <<EOF
server {
  listen 80;
  server_name localhost;
  location / {
    root $HTML_ROOT;
    index index.html index.htm;
  }
  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
EOF
fi
# vim: ts=2:sw=2:et
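To try the image outside the cluster we can run it with docker; a sketch under assumptions (image tag, port, paths and the auth endpoint are all examples, and AUTH_REQUEST_URI can be omitted to serve the files without the auth subrequest):

$ docker run -d --name nginx-scs -p 8080:80 \
    -e HTML_ROOT=/sftp/data/scs \
    -e AUTH_REQUEST_URI=http://scs-api.example.local/auth \
    -v "$(pwd)/sftp:/sftp:ro" \
    nginx-scs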
We use the /sftp/data or /sftp/data/scs folder as the root of the web published by this container and create an Ingress object to provide access to it outside of our kubernetes cluster.

The webhook-scs container is generated using the following Dockerfile:
ARG ALPINE_VERSION=3.16.2
ARG GOLANG_VERSION=alpine3.16
FROM golang:$GOLANG_VERSION AS builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
ENV WEBHOOK_VERSION 2.8.0
ENV WEBHOOK_PR 549
ENV S3FS_VERSION v1.91
WORKDIR /go/src/github.com/adnanh/webhook
RUN apk update &&\
apk add --no-cache -t build-deps curl libc-dev gcc libgcc patch
RUN curl -L --silent -o webhook.tar.gz\
https://github.com/adnanh/webhook/archive/${WEBHOOK_VERSION}.tar.gz &&\
tar xzf webhook.tar.gz --strip 1 &&\
curl -L --silent -o ${WEBHOOK_PR}.patch\
https://patch-diff.githubusercontent.com/raw/adnanh/webhook/pull/${WEBHOOK_PR}.patch &&\
patch -p1 < ${WEBHOOK_PR}.patch &&\
go get -d && \
go build -o /usr/local/bin/webhook
WORKDIR /src/s3fs-fuse
RUN apk update &&\
apk add ca-certificates build-base alpine-sdk libcurl automake autoconf\
libxml2-dev libressl-dev mailcap fuse-dev curl-dev
RUN curl -L --silent -o s3fs.tar.gz\
https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/$S3FS_VERSION.tar.gz &&\
tar xzf s3fs.tar.gz --strip 1 &&\
./autogen.sh &&\
./configure --prefix=/usr/local &&\
make -j && \
make install
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
WORKDIR /webhook
RUN apk update &&\
apk add --no-cache ca-certificates mailcap fuse libxml2 libcurl libgcc\
libstdc++ rsync util-linux-misc &&\
rm -rf /var/cache/apk/*
COPY --from=builder /usr/local/bin/webhook /usr/local/bin/webhook
COPY --from=builder /usr/local/bin/s3fs /usr/local/bin/s3fs
COPY entrypoint.sh /
COPY hooks/* ./hooks/
EXPOSE 9000
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]
We apply the PATCH included on this pull request against a released version of the source instead of creating a fork.
The entrypoint.sh
script is used to generate the webhook
configuration file
for the existing hooks
using environment variables (basically the
WEBHOOK_WORKDIR
and the *_TOKEN
variables) and launch the webhook
service:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
WEBHOOK_BIN="$ WEBHOOK_BIN:-/webhook/hooks "
WEBHOOK_YML="$ WEBHOOK_YML:-/webhook/scs.yml "
WEBHOOK_OPTS="$ WEBHOOK_OPTS:--verbose "
# ---------
# FUNCTIONS
# ---------
print_du_yml() {
cat <<EOF
- id: du
execute-command: '$WEBHOOK_BIN/du.sh'
command-working-directory: '$WORKDIR'
response-headers:
- name: 'Content-Type'
value: 'application/json'
http-methods: ['GET']
include-command-output-in-response: true
include-command-output-in-response-on-error: true
pass-arguments-to-command:
- source: 'url'
name: 'path'
pass-environment-to-command:
- source: 'string'
envname: 'OUTPUT_FORMAT'
name: 'json'
EOF
}
print_hardlink_yml() {
cat <<EOF
- id: hardlink
execute-command: '$WEBHOOK_BIN/hardlink.sh'
command-working-directory: '$WORKDIR'
http-methods: ['GET']
include-command-output-in-response: true
include-command-output-in-response-on-error: true
EOF
}
print_s3sync_yml() {
cat <<EOF
- id: s3sync
execute-command: '$WEBHOOK_BIN/s3sync.sh'
command-working-directory: '$WORKDIR'
http-methods: ['POST']
include-command-output-in-response: true
include-command-output-in-response-on-error: true
pass-environment-to-command:
- source: 'payload'
envname: 'AWS_KEY'
name: 'aws.key'
- source: 'payload'
envname: 'AWS_SECRET_KEY'
name: 'aws.secret_key'
- source: 'payload'
envname: 'S3_BUCKET'
name: 's3.bucket'
- source: 'payload'
envname: 'S3_REGION'
name: 's3.region'
- source: 'payload'
envname: 'S3_PATH'
name: 's3.path'
- source: 'payload'
envname: 'SCS_PATH'
name: 'scs.path'
stream-command-output: true
EOF
}
print_token_yml() {
if [ "$1" ]; then
cat << EOF
trigger-rule:
match:
type: 'value'
value: '$1'
parameter:
source: 'header'
name: 'X-Webhook-Token'
EOF
fi
}
exec_webhook() {
# Validate WORKDIR
if [ -z "$WEBHOOK_WORKDIR" ]; then
echo "Must define the WEBHOOK_WORKDIR variable!" >&2
exit 1
fi
WORKDIR="$(realpath "$WEBHOOK_WORKDIR" 2>/dev/null)" true
if [ ! -d "$WORKDIR" ]; then
echo "The WEBHOOK_WORKDIR '$WEBHOOK_WORKDIR' is not a directory!" >&2
exit 1
fi
# Get TOKENS, if the DU_TOKEN or HARDLINK_TOKEN is defined that is used, if
# not if the COMMON_TOKEN that is used and in other case no token is checked
# (that is the default)
DU_TOKEN="$ DU_TOKEN:-$COMMON_TOKEN "
HARDLINK_TOKEN="$ HARDLINK_TOKEN:-$COMMON_TOKEN "
S3_TOKEN="$ S3_TOKEN:-$COMMON_TOKEN "
# Create webhook configuration
{
print_du_yml
print_token_yml "$DU_TOKEN"
echo ""
print_hardlink_yml
print_token_yml "$HARDLINK_TOKEN"
echo ""
print_s3sync_yml
print_token_yml "$S3_TOKEN"
} >"$WEBHOOK_YML"
# Run the webhook command
# shellcheck disable=SC2086
exec webhook -hooks "$WEBHOOK_YML" $WEBHOOK_OPTS
}
# ----
# MAIN
# ----
case "$1" in
"server") exec_webhook ;;
*) exec "$@" ;;
esac
The entrypoint.sh
script generates the configuration file for the webhook
server calling functions that print a yaml
section for each hook
and
optionally adds rules to validate access to them comparing the value of a
X-Webhook-Token
header against predefined values.
The expected token values are taken from environment variables; we can define
a token variable for each hook
(DU_TOKEN
, HARDLINK_TOKEN
or S3_TOKEN
)
and a fallback value (COMMON_TOKEN
); if no token variable is defined for a
hook
no check is done and everybody can call it.
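For example, assuming we had exported COMMON_TOKEN with the hypothetical value supersecret and that the service is reachable on localhost:9000 (e.g. through a port-forward), a caller would have to send that value in the X-Webhook-Token header; a minimal sketch with curl:

$ curl -s -H "X-Webhook-Token: supersecret" \
  "http://localhost:9000/hooks/du?path=."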
The Hook Definition documentation explains the options you can use for each
hook; the ones we have right now do the following:

- du: runs on the $WORKDIR directory, passes as first argument to the script
  the value of the path query parameter and sets the variable OUTPUT_FORMAT to
  the fixed value json (we use that to print the output of the script in JSON
  format instead of text).
- hardlink: runs on the $WORKDIR directory and takes no parameters.
- s3sync: runs on the $WORKDIR directory and sets a lot of environment
  variables from values read from the JSON encoded payload sent by the caller
  (all the values must be sent by the caller even if they are assigned an empty
  value; if they are missing the hook fails without calling the script); we
  also set the stream-command-output value to true to make the script show its
  output as it is working (we patched the webhook source to be able to use this
  option).

du hook script

The du hook script checks if the argument passed is a directory,
computes its size using the du command and prints the results in text format
or as a JSON dictionary:
#!/bin/sh
set -e
# Script to print disk usage for a PATH inside the scs folder
# ---------
# FUNCTIONS
# ---------
print_error() {
if [ "$OUTPUT_FORMAT" = "json" ]; then
echo " \"error\":\"$*\" "
else
echo "$*" >&2
fi
exit 1
}
usage() {
if [ "$OUTPUT_FORMAT" = "json" ]; then
echo " \"error\":\"Pass arguments as '?path=XXX\" "
else
echo "Usage: $(basename "$0") PATH" >&2
fi
exit 1
}
# ----
# MAIN
# ----
if [ "$#" -eq "0" ] [ -z "$1" ]; then
usage
fi
if [ "$1" = "." ]; then
DU_PATH="./"
else
DU_PATH="$(find . -name "$1" -mindepth 1 -maxdepth 1)" || true
fi
if [ -z "$DU_PATH" ] [ ! -d "$DU_PATH/." ]; then
print_error "The provided PATH ('$1') is not a directory"
fi
# Print disk usage in bytes for the given PATH
OUTPUT="$(du -b -s "$DU_PATH")"
if [ "$OUTPUT_FORMAT" = "json" ]; then
# Format output as {"path":"PATH","bytes":"BYTES"}
echo "$OUTPUT" |
  sed -e "s%^\(.*\)\t.*/\(.*\)$%{\"path\":\"\2\",\"bytes\":\"\1\"}%" |
  tr -d '\n'
else
# Print du output as is
echo "$OUTPUT"
fi
# vim: ts=2:sw=2:et:ai:sts=2
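As an illustration of what the sed expression at the end does (the directory name and size below are made up), the transformation from du output to JSON looks like this:

$ du -b -s ./docs
4160    ./docs
$ du -b -s ./docs | sed -e "s%^\(.*\)\t.*/\(.*\)$%{\"path\":\"\2\",\"bytes\":\"\1\"}%" | tr -d '\n'
{"path":"docs","bytes":"4160"}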
hardlink hook script

The hardlink hook script is really simple; it just runs the
util-linux version of the
hardlink
command on its working directory:
#!/bin/sh
hardlink --ignore-time --maximize .
s3sync hook script

The s3sync hook script uses the s3fs
tool to mount a bucket and synchronise data between a folder inside the bucket
and a directory on the filesystem using rsync
; all values needed to execute
the task are taken from environment variables:
#!/bin/ash
set -euo pipefail
set -o errexit
set -o errtrace
# Functions
finish() {
ret="$1"
echo ""
echo "Script exit code: $ret"
exit "$ret"
# Check variables
if [ -z "$AWS_KEY" ] [ -z "$AWS_SECRET_KEY" ] [ -z "$S3_BUCKET" ]
[ -z "$S3_PATH" ] [ -z "$SCS_PATH" ]; then
[ "$AWS_KEY" ] echo "Set the AWS_KEY environment variable"
[ "$AWS_SECRET_KEY" ] echo "Set the AWS_SECRET_KEY environment variable"
[ "$S3_BUCKET" ] echo "Set the S3_BUCKET environment variable"
[ "$S3_PATH" ] echo "Set the S3_PATH environment variable"
[ "$SCS_PATH" ] echo "Set the SCS_PATH environment variable"
finish 1
fi
if [ "$S3_REGION" ] && [ "$S3_REGION" != "us-east-1" ]; then
EP_URL="endpoint=$S3_REGION,url=https://s3.$S3_REGION.amazonaws.com"
else
EP_URL="endpoint=us-east-1"
fi
# Prepare working directory
WORK_DIR="$(mktemp -p "$HOME" -d)"
MNT_POINT="$WORK_DIR/s3data"
PASSWD_S3FS="$WORK_DIR/.passwd-s3fs"
# Check the mountpoint
if [ ! -d "$MNT_POINT" ]; then
mkdir -p "$MNT_POINT"
elif mountpoint "$MNT_POINT"; then
echo "There is already something mounted on '$MNT_POINT', aborting!"
finish 1
fi
# Create password file
touch "$PASSWD_S3FS"
chmod 0400 "$PASSWD_S3FS"
echo "$AWS_KEY:$AWS_SECRET_KEY" >"$PASSWD_S3FS"
# Mount s3 bucket as a filesystem
s3fs -o dbglevel=info,retries=5 -o "$EP_URL" -o "passwd_file=$PASSWD_S3FS" \
"$S3_BUCKET" "$MNT_POINT"
echo "Mounted bucket '$S3_BUCKET' on '$MNT_POINT'"
# Remove the password file, just in case
rm -f "$PASSWD_S3FS"
# Check source PATH
ret="0"
SRC_PATH="$MNT_POINT/$S3_PATH"
if [ ! -d "$SRC_PATH" ]; then
echo "The S3_PATH '$S3_PATH' can't be found!"
ret=1
fi
# Compute SCS_UID & SCS_GID (by default based on the working directory owner)
SCS_UID="${SCS_UID:=$(stat -c "%u" "." 2>/dev/null)}" || true
SCS_GID="${SCS_GID:=$(stat -c "%g" "." 2>/dev/null)}" || true
# Check destination PATH
DST_PATH="./$SCS_PATH"
if [ "$ret" -eq "0" ] && [ -d "$DST_PATH" ]; then
mkdir -p "$DST_PATH" ret="$?"
fi
# Copy using rsync
if [ "$ret" -eq "0" ]; then
rsync -rlptv --chown="$SCS_UID:$SCS_GID" --delete --stats \
  "$SRC_PATH/" "$DST_PATH/" || ret="$?"
fi
# Unmount the S3 bucket
umount -f "$MNT_POINT"
echo "Called umount for '$MNT_POINT'"
# Remove mount point dir
rmdir "$MNT_POINT"
# Remove WORK_DIR
rmdir "$WORK_DIR"
# We are done
finish "$ret"
# vim: ts=2:sw=2:et:ai:sts=2
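For reference, the JSON payload the s3sync hook expects maps directly to those environment variables (aws.key, aws.secret_key, s3.bucket, s3.region, s3.path and scs.path); a hedged sketch of a direct call with curl and placeholder values, assuming the service is reachable on localhost:9000:

$ curl -s -X POST -H "Content-Type: application/json" \
  -d '{"aws": {"key": "AKIA...", "secret_key": "..."},
       "s3": {"bucket": "example-bucket", "region": "us-east-1", "path": "test"},
       "scs": {"path": "test"}}' \
  "http://localhost:9000/hooks/s3sync"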
The Pod is deployed using a StatefulSet with one replica.
Our production deployment is done on AWS and to be
able to scale we use EFS for our PersistentVolume
; the idea is that the volume has no size limit, its
AccessMode
can be set to ReadWriteMany
and we can mount it from multiple
instances of the Pod without issues, even if they are in different availability
zones.
For development we use k3d and we are also able to scale the
StatefulSet
for testing because we use a ReadWriteOnce
PVC, but it points
to a hostPath
that is backed up by a folder that is mounted on all the
compute nodes, so in reality Pods in different k3d
nodes use the same folder
on the host.
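As a sketch (the cluster name and host path are made up), a k3d cluster with a host folder mounted on all nodes can be created like this:

$ k3d cluster create scs-demo --agents 2 \
  --volume "/data/k3d-volumes:/volumes@all"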
The secret files needed by the mysecureshell container
can be generated using kubernetes pods as follows (we are only creating the
scs user):
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
--image "stodh/mysecureshell:latest" -- gen-host-keys >"./host_keys.txt"
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
--image "stodh/mysecureshell:latest" -- gen-users-tar scs >"./users.tar"
Once we have the files we can generate the secrets.yaml file as follows:
$ tar xf ./users.tar user_keys.txt user_pass.txt
$ kubectl --dry-run=client -o yaml create secret generic "scs-secrets" \
--from-file="host_keys.txt=host_keys.txt" \
--from-file="user_keys.txt=user_keys.txt" \
--from-file="user_pass.txt=user_pass.txt" > ./secrets.yaml
The secrets.yaml will look like the following file (the base64 values
would match the content of the files, of course):
apiVersion: v1
data:
host_keys.txt: TWlt...
user_keys.txt: c2Nz...
user_pass.txt: c2Nz...
kind: Secret
metadata:
creationTimestamp: null
  name: scs-secrets
The PersistentVolumeClaim (used by the statefulSet) can be as simple as this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: scs-pvc
labels:
app.kubernetes.io/name: scs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
Note that we don't include a storageClassName in order to use the default one.
Alternatively we can create a PersistentVolume as
required by the
Local
Persistence Volume Static Provisioner (note that the /volumes/scs-pv
directory has to
be created by hand; in our k3d
system we mount the same host directory on the
/volumes
path of all the nodes and create the scs-pv
directory by hand
before deploying the persistent volume):
apiVersion: v1
kind: PersistentVolume
metadata:
name: scs-pv
labels:
app.kubernetes.io/name: scs
spec:
capacity:
storage: 8Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
claimRef:
name: scs-pvc
storageClassName: local-storage
local:
path: /volumes/scs-pv
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/instance-type
operator: In
values:
- k3s
And the corresponding PersistentVolumeClaim referencing that storageClassName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: scs-pvc
labels:
app.kubernetes.io/name: scs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
storageClassName: local-storage
On EFS we don't need to create the PersistentVolume ourselves (we are
using the
aws-efs-csi-driver, which
supports Dynamic Provisioning), but we add the storageClassName
(we set it
to the one mapped to the EFS
driver, i.e. efs-sc
) and set ReadWriteMany
as the accessMode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: scs-pvc
labels:
app.kubernetes.io/name: scs
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Gi
storageClassName: efs-sc
The definition of the statefulSet is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: scs
labels:
app.kubernetes.io/name: scs
spec:
serviceName: scs
replicas: 1
selector:
matchLabels:
app: scs
template:
metadata:
labels:
app: scs
spec:
containers:
- name: nginx
image: stodh/nginx-scs:latest
ports:
- containerPort: 80
name: http
env:
- name: AUTH_REQUEST_URI
value: ""
- name: HTML_ROOT
value: /sftp/data
volumeMounts:
- mountPath: /sftp
name: scs-datadir
- name: mysecureshell
image: stodh/mysecureshell:latest
ports:
- containerPort: 22
name: ssh
securityContext:
capabilities:
add:
- IPC_OWNER
env:
- name: SFTP_UID
value: '2020'
- name: SFTP_GID
value: '2020'
volumeMounts:
- mountPath: /secrets
name: scs-file-secrets
readOnly: true
- mountPath: /sftp
name: scs-datadir
- name: webhook
image: stodh/webhook-scs:latest
securityContext:
privileged: true
ports:
- containerPort: 9000
name: webhook-http
env:
- name: WEBHOOK_WORKDIR
value: /sftp/data/scs
volumeMounts:
- name: devfuse
mountPath: /dev/fuse
- mountPath: /sftp
name: scs-datadir
volumes:
- name: devfuse
hostPath:
path: /dev/fuse
- name: scs-file-secrets
secret:
secretName: scs-secrets
- name: scs-datadir
persistentVolumeClaim:
claimName: scs-pvc
Some notes about the containers:

- nginx: As this is an example the web server is not using an
  AUTH_REQUEST_URI and uses the /sftp/data directory as the root of the web
  (to get to the files uploaded for the scs user we will need to use /scs/
  as a prefix on the URLs).
- mysecureshell: We are adding the IPC_OWNER capability to the container to
  be able to use some of the sftp-* commands inside it, but they are
  not really needed, so adding the capability is optional.
- webhook: We are launching this container in privileged mode to be able to
  use the s3fs-fuse, as it will not work otherwise for now (see this
  kubernetes issue); if
  the functionality is not needed the container can be executed with regular
  privileges; besides, as we are not enabling public access to this service we
  don't define *_TOKEN variables (if required the values should be read from a
  Secret object, as sketched below).
- The devfuse volume is only needed if we plan to use the s3fs command on
  the webhook container; if not we can remove the volume definition and its
  mounts.
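A hedged sketch of how those token variables could be read from a Secret (the secret name and key shown here are made up, not part of the deployment above), to be added under the webhook container:

env:
- name: COMMON_TOKEN
  valueFrom:
    secretKeyRef:
      name: scs-webhook-tokens
      key: common_token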
To be able to connect to the containers we define the following Service object:
apiVersion: v1
kind: Service
metadata:
name: scs-svc
labels:
app.kubernetes.io/name: scs
spec:
ports:
- name: ssh
port: 22
protocol: TCP
targetPort: 22
- name: http
port: 80
protocol: TCP
targetPort: 80
- name: webhook-http
port: 9000
protocol: TCP
targetPort: 9000
selector:
app: scs
To be able to access the scs files from the outside we can add an ingress
object like the following (the definition is for testing using the localhost
name):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: scs-ingress
labels:
app.kubernetes.io/name: scs
spec:
ingressClassName: nginx
rules:
- host: 'localhost'
http:
paths:
- path: /scs
pathType: Prefix
backend:
service:
name: scs-svc
port:
number: 80
To deploy the statefulSet we create a namespace and apply the object
definitions shown before:
$ kubectl create namespace scs-demo
namespace/scs-demo created
$ kubectl -n scs-demo apply -f secrets.yaml
secret/scs-secrets created
$ kubectl -n scs-demo apply -f pvc.yaml
persistentvolumeclaim/scs-pvc created
$ kubectl -n scs-demo apply -f statefulset.yaml
statefulset.apps/scs created
$ kubectl -n scs-demo apply -f service.yaml
service/scs-svc created
$ kubectl -n scs-demo apply -f ingress.yaml
ingress.networking.k8s.io/scs-ingress created
Once the objects are deployed we can check that everything is working using kubectl:
$ kubectl -n scs-demo get all,secrets,ingress
NAME READY STATUS RESTARTS AGE
pod/scs-0 3/3 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/scs-svc ClusterIP 10.43.0.47 <none> 22/TCP,80/TCP,9000/TCP 21s
NAME READY AGE
statefulset.apps/scs 1/1 24s
NAME TYPE DATA AGE
secret/default-token-mwcd7 kubernetes.io/service-account-token 3 53s
secret/scs-secrets Opaque 3 39s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/scs-ingress nginx localhost 172.21.0.5 80 17s
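If the volume supports it (ReadWriteMany on EFS, or the shared hostPath trick on k3d mentioned earlier), the StatefulSet can be scaled with a plain kubectl call, for example:

$ kubectl -n scs-demo scale statefulset scs --replicas=2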
The sftp server is meant to be used from
other Pods, but to test the system we are going to do a kubectl port-forward
and connect to the server using our host client and the password we have
generated (it is on the user_pass.txt
file, inside the users.tar
archive):
$ kubectl -n scs-demo port-forward service/scs-svc 2020:22 &
Forwarding from 127.0.0.1:2020 -> 22
Forwarding from [::1]:2020 -> 22
$ PF_PID=$!
$ sftp -P 2020 scs@127.0.0.1 1
Handling connection for 2020
The authenticity of host '[127.0.0.1]:2020 ([127.0.0.1]:2020)' can't be \
established.
ED25519 key fingerprint is SHA256:eHNwCnyLcSSuVXXiLKeGraw0FT/4Bb/yjfqTstt+088.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:2020' (ED25519) to the list of known \
hosts.
scs@127.0.0.1's password: **********
Connected to 127.0.0.1.
sftp> ls -la
drwxr-xr-x 2 sftp sftp 4096 Sep 25 14:47 .
dr-xr-xr-x 3 sftp sftp 4096 Sep 25 14:36 ..
sftp> !date -R > /tmp/date.txt 2
sftp> put /tmp/date.txt .
Uploading /tmp/date.txt to /date.txt
date.txt 100% 32 27.8KB/s 00:00
sftp> ls -l
-rw-r--r-- 1 sftp sftp 32 Sep 25 15:21 date.txt
sftp> ln date.txt date.txt.1 3
sftp> ls -l
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt.1
sftp> put /tmp/date.txt date.txt.2 4
Uploading /tmp/date.txt to /date.txt.2
date.txt 100% 32 27.8KB/s 00:00
sftp> ls -l 5
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt.1
-rw-r--r-- 1 sftp sftp 32 Sep 25 15:21 date.txt.2
sftp> exit
$ kill "$PF_PID"
[1] + terminated kubectl -n scs-demo port-forward service/scs-svc 2020:22
In the session above we connect to the sftp service on the forwarded port with
the scs user, upload a date.txt file, create a hard link of it and upload a
second copy of the file.

Now we can get the date.txt file from the
URL http://localhost/scs/date.txt:
$ curl -s http://localhost/scs/date.txt
Sun, 25 Sep 2022 17:21:51 +0200
Now we are going to test the hooks:
directly,
from a CronJob
and from a Job.

Direct call (du)

In our deployment the direct calls are done from other Pods; to simulate it we
are going to do a port-forward
and call the script with an existing PATH (the
root directory) and a bad one:
$ kubectl -n scs-demo port-forward service/scs-svc 9000:9000 >/dev/null &
$ PF_PID=$!
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=.")"
$ echo $JSON
"path":"","bytes":"4160"
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=foo")"
$ echo $JSON
"error":"The provided PATH ('foo') is not a directory"
$ kill $PF_PID
We call the du hook with the . PATH (and then with a non existing one); the
output is in json format because we export OUTPUT_FORMAT with
the value json on the webhook configuration.
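Since the response is JSON it can be consumed directly by other tools; for example, assuming jq is installed and the port-forward is still active:

$ curl -s "http://localhost:9000/hooks/du?path=." | jq -r '.bytes'
4160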
Cronjob call (hardlink)

As explained before, the webhook
container can be used to run cronjobs; the
following one uses an alpine
container to call the hardlink
script each
minute (that setup is for testing, obviously):
apiVersion: batch/v1
kind: CronJob
metadata:
name: hardlink
labels:
cronjob: 'hardlink'
spec:
schedule: "* */1 * * *"
concurrencyPolicy: Replace
jobTemplate:
spec:
template:
metadata:
labels:
cronjob: 'hardlink'
spec:
containers:
- name: hardlink-cronjob
image: alpine:latest
command: ["wget", "-q", "-O-", "http://scs-svc:9000/hooks/hardlink"]
restartPolicy: Never
$ kubectl -n scs-demo apply -f webhook-cronjob.yaml 1
cronjob.batch/hardlink created
$ kubectl -n scs-demo get pods -l "cronjob=hardlink" -w 2
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Pending 0 0s
hardlink-27735351-zvpnb 0/1 ContainerCreating 0 0s
hardlink-27735351-zvpnb 0/1 Completed 0 2s
^C
$ kubectl -n scs-demo logs pod/hardlink-27735351-zvpnb 3
Mode: real
Method: sha256
Files: 3
Linked: 1 files
Compared: 0 xattrs
Compared: 1 files
Saved: 32 B
Duration: 0.000220 seconds
$ sleep 60
$ kubectl -n scs-demo get pods -l "cronjob=hardlink" 4
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Completed 0 83s
hardlink-27735352-br5rn 0/1 Completed 0 23s
$ kubectl -n scs-demo logs pod/hardlink-27735352-br5rn 5
Mode: real
Method: sha256
Files: 3
Linked: 0 files
Compared: 0 xattrs
Compared: 0 files
Saved: 0 B
Duration: 0.000070 seconds
$ kubectl -n scs-demo delete -f webhook-cronjob.yaml 6
cronjob.batch "hardlink" deleted
In the previous session we watch the Pods with the cronjob label and interrupt
the watch once we see that the first run has been completed; its log shows
that date.txt.2 has been replaced by a hardlink (the summary does not name the
file, but it is the only option knowing the contents from the original
upload).

Job call (s3sync)

The following job can be used to synchronise the contents of a directory in a
S3 bucket with the SCS Filesystem:
apiVersion: batch/v1
kind: Job
metadata:
name: s3sync
labels:
cronjob: 's3sync'
spec:
template:
metadata:
labels:
cronjob: 's3sync'
spec:
containers:
- name: s3sync-job
image: alpine:latest
command:
- "wget"
- "-q"
- "--header"
- "Content-Type: application/json"
- "--post-file"
- "/secrets/s3sync.json"
- "-O-"
- "http://scs-svc:9000/hooks/s3sync"
volumeMounts:
- mountPath: /secrets
name: job-secrets
readOnly: true
restartPolicy: Never
volumes:
- name: job-secrets
secret:
secretName: webhook-job-secrets
"aws":
"key": "********************",
"secret_key": "****************************************"
,
"s3":
"region": "eu-north-1",
"bucket": "blogops-test",
"path": "test"
,
"scs":
"path": "test"
$ kubectl -n scs-demo create secret generic webhook-job-secrets \ 1
--from-file="s3sync.json=s3sync.json"
secret/webhook-job-secrets created
$ kubectl -n scs-demo apply -f webhook-job.yaml 2
job.batch/s3sync created
$ kubectl -n scs-demo get pods -l "cronjob=s3sync" 3
NAME READY STATUS RESTARTS AGE
s3sync-zx2cj 0/1 Completed 0 12s
$ kubectl -n scs-demo logs s3sync-zx2cj 4
Mounted bucket 's3fs-test' on '/root/tmp.jiOjaF/s3data'
sending incremental file list
created directory ./test
./
kyso.png
Number of files: 2 (reg: 1, dir: 1)
Number of created files: 2 (reg: 1, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 15,075 bytes
Total transferred file size: 15,075 bytes
Literal data: 15,075 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.147 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 15,183
Total bytes received: 74
sent 15,183 bytes received 74 bytes 30,514.00 bytes/sec
total size is 15,075 speedup is 0.99
Called umount for '/root/tmp.jiOjaF/s3data'
Script exit code: 0
$ kubectl -n scs-demo delete -f webhook-job.yaml 5
job.batch "s3sync" deleted
$ kubectl -n scs-demo delete secrets webhook-job-secrets 6
secret "webhook-job-secrets" deleted
In the process above we create the webhook-job-secrets secret that contains the
s3sync.json file and, using the cronjob=s3sync label, we get the Pods executed
by the job.

I have just filed a complaint with the CRTC about my phone provider's outrageous fees. This is a copy of the complaint.

I am traveling to Europe, specifically to Ireland, for 6 days for a work meeting. I thought I could use my phone there. So I looked at my phone provider's services in Europe, and found the "Fido roaming" services: https://www.fido.ca/mobility/roaming The fees, at the time of writing, are fifteen (15!) dollars PER DAY to get access to my regular phone service (not unlimited!!). If I do not use that "roaming" service, the fees are:
I have no illusions about this having any effect. I thought of filing such a complaint after the Rogers outage as well, but felt I had less of a standing there because I wasn't affected that much (e.g. I didn't have a life-threatening situation myself). This, however, was ridiculous and frustrating enough to trigger this outrage. We'll see how it goes...

"We will respond to you within 10 working days."
Dear Antoine Beaupr : Thank you for contacting us about your mobile telephone international roaming service plan rates concern with Fido Solutions Inc. (Fido). In Canada, mobile telephone service is offered on a competitive basis. Therefore, the Canadian Radio-television and Telecommunications Commission (CRTC) is not involved in Fido's terms of service (including international roaming service plan rates), billing and marketing practices, quality of service issues and customer relations. If you haven't already done so, we encourage you to escalate your concern to a manager if you believe the answer you have received from Fido's customer service is not satisfactory. Based on the information that you have provided, this may also appear to be a Competition Bureau matter. The Competition Bureau is responsible for administering and enforcing the Competition Act, and deals with issues such as false or misleading representations, deceptive marketing practices and collusion. You can reach the Competition Bureau by calling 1-800-348-5358 (toll-free), by TTY (for deaf and hard of hearing people) by calling 1-866-694-8389 (toll-free). For more contact information, please visit http://www.competitionbureau.gc.ca/eic/site/cb-bc.nsf/eng/00157.html When consumers are not satisfied with the service they are offered, we encourage them to compare the products and services of other providers in their area and look for a company that can better match their needs. The following tool helps to show choices of providers in your area: https://crtc.gc.ca/eng/comm/fourprov.htm Thank you for sharing your concern with us.In other words, complain with Fido, or change providers. Don't complain to us, we don't manage the telcos, they self-regulate. Great job, CRTC. This is going great. This is exactly why we're one of the most expensive countries on the planet for cell phone service.
This is the email I received from Fido (originally in French; translated):

Date: Tue, 13 Sep 2022 10:10:00 -0400
From: Fido DONOTREPLY@fido.ca
To: REDACTED
Subject: Courriel d'avis d'itinérance / Fido Roaming Welcome Confirmation

Fido
Date: September 13, 2022
Account number: [redacted]

Hello Antoine Beaupré! We are writing to let you know that at least one user registered on your account recently connected to a network while roaming. You will find below the roaming welcome text message sent to the user (or users), which contained the applicable roaming rates.

Roaming welcome text message
Recipient: REDACTED
Date and time: 2022-09-13 / 10:10:00
Hi, this is Fido: Welcome to your destination! You are enrolled in Fido Roam, so use your data, talk and text just like you do at home. Since March 1, 2022 the rate at this destination is $15/day (+ taxes), valid every day until 11:59 p.m. ET, no matter which time zone you are in. Have a good trip! Questions? Visit fido.ca/m/itinerance or dial +15149333436 (toll-free).

Need help?
- PLAN A TRIP WITH Fido Roam: Discover our roaming options and stay in touch abroad without worrying about your bill.
- Manage your account: Manage your Fido products and services at home or on the road with My Account.

This email is generated automatically; please do not reply. This email (and any attachment) is confidential. If you are not the recipient, please delete this message and destroy all copies. FIDO SOLUTIONS, 800 RUE DE LA GAUCHETIÈRE OUEST, BUREAU 4000, MONTRÉAL (QUÉBEC) H5A 1K3

I found that message utterly confusing (and yes, I can read French). Basically, it says that some user (presumably me!) connected to the network with roaming. I had just disabled airplane mode on my phone to debug a Syncthing bug but had not enabled roaming. So this message seemed to say that I would be charged 15$ (per DAY!) for roaming from now on. Confused, I tried their live chat to try to clarify things, worried I would get charged even more for calling tech support on Fido's *611 number. This is a transcript of the chat:
F: Hi! What are we doing today? Type in your question or choose from the options below: * Track my Equipment Order * View Bill Online * Payment Options * iPhone 14 Pre-Order A: i received a message about roaming while abroad but i did not enable roaming on my phone, will i be charged anyways? F: I think I know what you re asking for. Select the topic that best matches your request or try rephrasing your question. A: no F: Thank you, this will help us to improve! Would you like to chat with a specialist? Chat with a specialist I'll get a specialist to help you with this. It appears that you're not signed in. Your session may have timed out. To save time and identify your account details, please sign in to My Account.[... delay ...]Have any questions specific to your Fido account? To service you faster, please identify yourself by completing the form below. A: Personal info Form submitted F: Thank you! I'll connect you with the next available specialist. Your chat is being transferred to a Live Chat agent. Thanks for your patience. We are here to assist you and we kindly ask that our team members be treated with respect and dignity. Please note that abuse directed towards any Consumer Care Specialist will not be tolerated and will result in the termination of your conversation with us. All of our agents are with other customers at the moment. Your chat is in a priority sequence and someone will be with you as soon as possible. Thanks! Thanks for continuing to hold. An agent will be with you as soon as possible. Thank you for your continued patience. We re getting more Live Chat requests than usual so it s taking longer to answer. Your chat is still in a priority sequence and will be answered as soon as an agent becomes available. Thank you so much for your patience we're sorry for the wait. Your chat is still in a priority sequence and will be answered as soon as possible. Hi, I'm [REDACTED] from Fido in [REDACTED]. May I have your name please? A: hi i am antoine, nice to meet you sorry to use the live chat, but it's not clear to me i can safely use my phone to call support, because i am in ireland and i'm worried i'll get charged for the call F: Thank You Antoine , I see you waited to speak with me today, thank you for your patience.Apart from having to wait, how are you today? A: i am good thank you
- Sign in
- I'm not able to sign in
A: should i restate my question? F: Yes please what is the concern you have? A: i have received an email from fido saying i someone used my phone for roaming it's in french (which is fine), but that's the gist of it i am traveling to ireland for a week i do not want to use fido's services here... i have set the phon eto airplane mode for most of my time here F: The SMS just says what will be the charges if you used any services. A: but today i have mistakenly turned that off and did not turn on roaming well it's not a SMS, it's an email F: Yes take out the sim and keep it safe.Turun off or On for roaming you cant do it as it is part of plan. A: wat F: if you used any service you will be charged if you not used any service you will not be charged. A: you are saying i need to physically take the SIM out of the phone? i guess i will have a fun conversation with your management once i return from this trip not that i can do that now, given that, you know, i nee dto take the sim out of this phone fun times F: Yes that is better as most of the customer end up using some kind of service and get charged for roaming. A: well that is completely outrageous roaming is off on the phone i shouldn't get charged for roaming, since roaming is off on the phone i also don't get why i cannot be clearly told whether i will be charged or not the message i have received says i will be charged if i use the service and you seem to say i could accidentally do that easily can you tell me if i have indeed used service sthat will incur an extra charge? are incoming text messages free? F: I understand but it is on you if you used some data SMS or voice mail you can get charged as you used some services.And we cant check anything for now you have to wait for next bill. and incoming SMS are free rest all service comes under roaming. That is the reason I suggested take out the sim from phone and keep it safe or always keep the phone or airplane mode. A: okay can you confirm whether or not i can call fido by voice for support? i mean for free F: So use your Fido sim and call on +1-514-925-4590 on this number it will be free from out side Canada from Fido sim. A: that is quite counter-intuitive, but i guess i will trust you on that thank you, i think that will be all F: Perfect, Again, my name is [REDACTED] and it s been my pleasure to help you today. Thank you for being a part of the Fido family and have a great day! A: you tooSo, in other words:
- I should not call support at *611, and instead call them on that
  long-distance-looking phone number, and yes, that
  means turning off airplane mode and putting the SIM card in, which
  contradicts step 3.
- The number given in the chat (+1-514-925-4590) is different than the one
  provided in the email (15149333436). So who knows what would have happened
  if I would have called the latter. The former is mentioned in their contact
  page.
I guess the next step is to call Fido over the phone and talk to a
manager, which is what the CRTC told me to do in the first place...
I ended up talking with a manager (another 1h phone call) and they
confirmed there is no other package available at Fido for this. At
best they can provide me with a credit if I mistakenly use the roaming
by accident to refund me, but that's it. The manager also confirmed
that I cannot know if I have actually used any data before reading the
bill, which is issued on the 15th of every month, but only
available... three days later, at which point I'll be back home
anyways.
Fantastic.
For each mounted media device, the script below runs, in order:

1. nncp-xfer -rx to process incoming packets from the USB (or other media) device. This moves them into the NNCP inbound queue, deleting them from the media device, and verifies the packet integrity.
2. nncp-ack -node $NODE to create ACK packets responding to the packets we just loaded into the rx queue. It writes a list of generated ACKs onto fd 4, which we save off for later use.
3. nncp-toss -seen to process the incoming queue. The use of -seen causes NNCP to remember the hashes of packets seen before, so a duplicate of an already-seen packet will not be processed twice. This command also processes incoming ACKs for packets we've sent out previously; if they pass verification, the relevant packets are removed from the local machine's tx queue.
4. nncp-xfer -keep -tx -mkdir -node $NODE to send outgoing packets to a given node by writing them to a given directory on the media device. -keep causes them to remain in the outgoing queue.
5. nncp-rm -node $NODE -pkt < $FILE to remove those specific packets from the outbound queue. The reason is that there will never be an ACK of an ACK packet (that would create an infinite loop), so if we don't delete them in this manner, they would hang around forever.

Note that if something else runs nncp-toss concurrently, there is a chance of a race condition between steps 1 and 2 (if nncp-toss gets to it first, it might not get an ack generated). This would sort itself out eventually, presumably, as the sender would retransmit and it would be ACKed later.
#!/bin/bash
set -eo pipefail
MEDIABASE="/media/$USER"
# The local node name
NODENAME="$(hostname)"
# All nodes. NODENAME should be in this list.
ALLNODES="node1 node2 node3"
RUNNNCP=""
# If you need to sudo, use something like RUNNNCP="sudo -Hu nncp"
NNCPPATH="/usr/local/nncp/bin"
ACKPATH="$(mktemp -d)"
# Process incoming packets.
#
# Parameters: $1 - the path to scan. Must contain a directory
# named "nncp".
procrxpath () {
while [ -n "$1" ]; do
BASEPATH="$1/nncp"
shift
if ! [ -d "$BASEPATH" ]; then
echo "$BASEPATH doesn't exist; skipping"
continue
fi
echo " *** Incoming: processing $BASEPATH"
TMPDIR="$(mktemp -d)"
# This rsync and the one below can help with
# certain permission issues from weird foreign
# media. You could just eliminate it and
# always use $BASEPATH instead of $TMPDIR below.
rsync -rt "$BASEPATH/" "$TMPDIR/"
# You may need these next two lines if using sudo as above.
# chgrp -R nncp "$TMPDIR"
# chmod -R g+rwX "$TMPDIR"
echo " Running nncp-xfer -rx"
$RUNNNCP $NNCPPATH/nncp-xfer -progress -rx "$TMPDIR"
for NODE in $ALLNODES; do
if [ "$NODE" != "$NODENAME" ]; then
echo " Running nncp-ack for $NODE"
# Now, we generate ACK packets for each node we will
# process. nncp-ack writes a list of the created
# ACK packets to fd 4. We'll use them later.
# If using sudo, add -C 5 after $RUNNNCP.
$RUNNNCP $NNCPPATH/nncp-ack -progress -node "$NODE" \
4>> "$ACKPATH/$NODE"
fi
done
rsync --delete -rt "$TMPDIR/" "$BASEPATH/"
rm -fr "$TMPDIR"
done
}
proctxpath () {
while [ -n "$1" ]; do
BASEPATH="$1/nncp"
shift
if ! [ -d "$BASEPATH" ]; then
echo "$BASEPATH doesn't exist; skipping"
continue
fi
echo " *** Outgoing: processing $BASEPATH"
TMPDIR="$(mktemp -d)"
rsync -rt "$BASEPATH/" "$TMPDIR/"
# You may need these two lines if using sudo:
# chgrp -R nncp "$TMPDIR"
# chmod -R g+rwX "$TMPDIR"
for DESTHOST in $ALLNODES; do
if [ "$DESTHOST" = "$NODENAME" ]; then
continue
fi
# Copy outgoing packets to this node, but keep them in the outgoing
# queue with -keep.
$RUNNNCP $NNCPPATH/nncp-xfer -keep -tx -mkdir -node "$DESTHOST" -progress "$TMPDIR"
# Here is the key: that list of ACK packets we made above - now we delete them.
# There will never be an ACK for an ACK, so they'd keep sending forever
# if we didn't do this.
if [ -f "$ACKPATH/$DESTHOST" ]; then
echo "nncp-rm for node $DESTHOST"
$RUNNNCP $NNCPPATH/nncp-rm -debug -node "$DESTHOST" -pkt < "$ACKPATH/$DESTHOST"
fi
done
rsync --delete -rt "$TMPDIR/" "$BASEPATH/"
rm -rf "$TMPDIR"
# We only want to write stuff once.
return 0
done
}
procrxpath "$MEDIABASE"/*
echo " *** Initial tossing..."
# We make sure to use -seen to rule out duplicates.
$RUNNNCP $NNCPPATH/nncp-toss -progress -seen
proctxpath "$MEDIABASE"/*
echo "You can unmount devices now."
echo "Done."
After using this scheme for a while in hledger, I'm not convinced that it has been a good idea.
I'm quoting my Twitter feedback here in order to respond. The context is
handling when I have used the "wrong" card to pay for something: a card
affiliated with my family expenses for something personal, or vice versa. With
double-entry book-keeping, and one pair of transactions, the destination
account can either record the expense category:
2022-08-20 coffee
family:liabilities:creditcard -3
jon:expenses:coffee 3
or the fact it was paid for on the wrong card
2022-08-20 coffee
family:liabilities:creditcard -3
family:liabilities:jon 3 ; jon owes family
but not easily both.
https://twitter.com/pranesh/status/1516819846431789058:
When you accidentally use the family CV for personal expenses, credit the account "family:liabilities:creditcard:jon" instead of "family:liabilities:creditcard". That'll allow you to track w/ 2 postings.

This is an interesting idea: create a sub-account underneath the credit card, and I would have a separate balance representing the money I owed. Before:
$ hledger bal -t
-3 family:liabilities:creditcard
3 jon:expenses:coffee
The proposed transaction:
2022-08-20 coffee
family:liabilities:creditcard:jon -3
jon:expenses:coffee 3
Corresponding balances
$ hledger bal -t
-3 family:liabilities:creditcard
-3 jon
3 jon:expenses:coffee
Great. However, what process would clear the balance on that sub-account? In
practice, I don't make a separate, explicit payment to the credit card from
my personal accounts. It's paid off in full by direct debit from the family
shared account. In practice, such dues are accumulated and settled with one
off bank transfers, now and then.
Since the sub-account is still part of the credit card hierarchy, I can't
just use a set of virtual postings to consolidate that value with other
liabilities, or cover it. Any transaction in there which did not correspond
to a real transaction on the credit card would make the balance drift away
from the real-world credit statements. The only way I could see this working
would be if the direct debit that settles the credit card was artificially
split to clear the sub-account, and then the amount owed would be lost.
https://twitter.com/pranesh/status/1516819846431789058:
Else, add:
family:assets:receivable:jon $3
jon:liabilities:family:cc $-3

A "receivable" account would function like the "dues" accounts I described in hledger (except "receivable" is an established account type in double-entry book-keeping). Here I think Pranesh is proposing using these two accounts in addition to the others on a posting. E.g.:
2022-08-20 coffee
    family:liabilities:creditcard   -3
    jon:expenses:coffee              3
    family:assets:receivable:jon     3
    jon:liabilities:family          -3
This balances, and we end up with two other accounts, which are tracking the
exact same thing. I only owe 3, but if you didn't know that the accounts were
"views" onto the same thing, you could mistakenly think I owed 6.
I can't see the advantage of this over just using a virtual, unbalanced posting.
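For the record, the kind of unbalanced virtual posting I mean looks something like this (the account name is just an example):

2022-08-20 coffee
    family:liabilities:creditcard   -3
    jon:expenses:coffee              3
    (family:dues:jon)                3   ; unbalanced virtual posting, ignored by --real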
Dues, Liabilities
I'd invented accounts called "dues" to track moneys owed. The more correct
term for this in accounting parlance would be "accounts receivable", as in
one of the examples above. I could instead be tracking moneys due; this is
a classic liability. Liabilities have negative balances.
jon:liabilities:family -3
This means, I owe the family 3.
Liability accounts like that are identical to "dues" accounts. A positive
balance in a Liability is a counter-intuitive way of describing moneys owed
to me, rather than by me. And, reviewing a lot of the coding I did this year,
I've got myself hopelessly confused with the signs, and made lots of errors.
Crucially, double-entry has not protected me from making those mistakes:
of course, I'm circumventing it by using unbalanced virtual postings in many
cases (although I was not consistent in where I did this), but even if I used
a pair of accounts as in the last example above, I could still screw it up.
There is also a /home partition, shared by both partition sets. It contains the games, user files, and anything that the user wants to install there.
Although the user can trivially become root, make the root filesystem read-write and install or change anything (the pacman
package manager is available), this is not recommended because any changes to the root filesystem will be lost when the OS is updated (updates replace the whole root image).
The recovery image is a file named steamdeck-recovery-4.img (the number may vary).
Note that the recovery image is already SteamOS (just not the most up-to-date version). If you simply want to have a quick look you can play a bit with it and skip the installation step. In this case I recommend that you extend the image before using it, for example with truncate -s 64G steamdeck-recovery-4.img
or, better, create a qcow2 overlay file and leave the original raw image unmodified: qemu-img create -f qcow2 -F raw -b steamdeck-recovery-4.img steamdeck-recovery-extended.qcow2 64G
But here we want to perform the actual installation, so we need a destination image. Let s create one:
$ qemu-img create -f qcow2 steamos.qcow2 64G
Installing SteamOS
Now that we have all files we can start the virtual machine:
$ qemu-system-x86_64 -enable-kvm -smp cores=4 -m 8G \
   -device usb-ehci -device usb-tablet \
   -device intel-hda -device hda-duplex \
   -device VGA,xres=1280,yres=800 \
   -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/OVMF.fd \
   -drive if=virtio,file=steamdeck-recovery-4.img,driver=raw \
   -device nvme,drive=drive0,serial=badbeef \
   -drive if=none,id=drive0,file=steamos.qcow2

Note that we're emulating an NVMe drive for steamos.qcow2 because that's what the installer script expects. This is not strictly necessary but it makes things a bit easier. If you don't want to do that you'll have to edit ~/tools/repair_device.sh and change DISK and DISK_SUFFIX.
$ sudo steamos-chroot --disk /dev/nvme0n1 --partset A
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit
$ sudo steamos-chroot --disk /dev/nvme0n1 --partset B
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit
After this we can shut down the virtual machine. Our new SteamOS drive is ready to be used. We can discard the recovery image now if we want.
Booting SteamOS and first steps
To boot SteamOS we can use a QEMU line similar to the one used during the installation. This time we re not emulating an NVMe drive because it s no longer necessary.
$ cp /usr/share/OVMF/OVMF_VARS.fd .
$ qemu-system-x86_64 -enable-kvm -smp cores=4 -m 8G \
   -device usb-ehci -device usb-tablet \
   -device intel-hda -device hda-duplex \
   -device VGA,xres=1280,yres=800 \
   -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/OVMF.fd \
   -drive if=pflash,format=raw,file=OVMF_VARS.fd \
   -drive if=virtio,file=steamos.qcow2 \
   -device virtio-net-pci,netdev=net0 \
   -netdev user,id=net0,hostfwd=tcp::2222-:22

(the last two lines redirect tcp port 2222 to port 22 of the guest to be able to SSH into the VM. If you don't want to do that you can omit them)

If everything went fine, you should see KDE Plasma again, this time with a desktop icon to launch Steam and another one to Return to Gaming Mode (which we should not use because it won't work). See the screenshot that opens this post. Congratulations, you're running SteamOS now. Here are some things that you probably want to do:
- Set a password for the deck user: run passwd on a terminal.
- Enable and/or start the SSH server: sudo systemctl enable sshd and/or sudo systemctl start sshd.
- Log into the VM with ssh -p 2222 deck@localhost.

To update the OS to the latest version make sure the VM has enough memory (hence the -m 8G above). The OS update might fail if you use less. Then:

- Switch to the update branch you want with sudo steamos-select-branch beta (or main, if you want the bleeding edge).
- Check the currently installed version in /etc/os-release (see the BUILD_ID variable).
- Check for an available update with steamos-update check.
- Download and install it with steamos-update.
- If the update fails, run steamos-update again. This works around a bug in the update process. Recent images fix this and this workaround is not necessary with them.
As we did with the recovery image, before rebooting we should ensure that the new update boots into the Plasma session, otherwise it won't work:
$ sudo steamos-chroot --partset other
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit
After this we can restart the system.
If everything went fine we should be running the latest SteamOS release. Enjoy!
Reporting bugs
SteamOS is under active development. If you find problems or want to request improvements please go to the SteamOS community tracker.
Edit 06 Jul 2022: Small fixes, mention how to install the OS without using NVMe.
To receive the logs on my server, I created /etc/rsyslog.d/router.conf with the following contents:
module(load="imtcp")
input(type="imtcp" port="514")

if $fromhost-ip == '192.168.1.1' then {
    if $syslogseverity <= 5 then {
        action(type="omfile" file="/var/log/router.log")
    }
    stop
}
This is using the latest rsyslog configuration method: a handy scripting
language called
RainerScript.
Severity level 5
maps to "notice" which consists of unusual non-error conditions, and
192.168.1.1
is of course the IP address of the router on the LAN side.
With this, I'm directing all router log messages to a separate file,
filtering out anything less important than severity 5.
In order for rsyslog to pick up this new configuration file, I restarted it:
systemctl restart rsyslog.service
and checked that it was running correctly (e.g. no syntax errors in the new
config file) using:
systemctl status rsyslog.service
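To confirm that the TCP input is actually listening on port 514, a quick check on the server (assuming the ss tool from iproute2 is installed):

ss -tlnp | grep ':514'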
Since I added a new log file, I also set up log rotation for it by putting
the following in /etc/logrotate.d/router
:
/var/log/router.log {
    rotate 4
    weekly
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}
In addition, since I use
logcheck to monitor my server
logs and email me errors, I had to add /var/log/router.log
to
/etc/logcheck/logcheck.logfiles
.
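That last step is just a matter of appending one line to the file, for example:

echo "/var/log/router.log" | sudo tee -a /etc/logcheck/logcheck.logfiles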
Finally I opened the rsyslog port to the router in my server's firewall by
adding the following to /etc/network/iptables.up.rules
:
# Allow logs from the router
-A INPUT -s 192.168.1.1 -p tcp --dport 514 -j ACCEPT
and ran iptables-apply
.
With all of this in place, it was time to get the router to send messages.
On the router, I put the following in /etc/syslog-ng.d/remote.conf:
destination d_loghost {
    network("192.168.1.200" time-zone("America/Vancouver"));
};

source dns {
    file("/var/log/resolver");
};

log {
    source(src);
    source(net);
    source(kernel);
    source(dns);
    destination(d_loghost);
};
Setting the timezone to the same as my server was needed because the router
messages were otherwise sent with UTC timestamps.
To ensure that the destination host always gets the same IP address
(192.168.1.200
), I went to the advanced DHCP configuration
page and added a
static lease for the server's MAC address so that it always gets assigned
192.168.1.200
. If that wasn't already the server's IP address, you'll have
to restart it for this to take effect.
Finally, I restarted the syslog-ng daemon on the router to pick up the new
config file:
/etc/init.d/syslog-ng restart
To test the setup, I opened three terminal windows:

1. tail -f /var/log/syslog on the server
2. tail -f /var/log/router.log on the server
3. tail -f /var/log/messages on the router

Router messages should not show up in the first window thanks to the stop
command in
/etc/rsyslog.d/router.conf.
To force a log message to be emitted by the router, simply ssh into it and
issue the following command:
logger Test
It should show up in the second and third windows immediately if you've got
everything set up correctly.