Search Results: "wdg"

31 December 2024

Russell Coker: Links December 2024

Interesting video about the hack of Andrew Tate's "The Real World" site [1]. Informative video about Nick Fuentes covering the racism, anti-semitism, misogyny, and how he is clearly in denial about being gay [2]. It ends with his arrest. Hopefully the first of many arrests. This is what conservatives support. Insightful article covering the history of bus-mastering attacks on computer security and ending with pwning via CF cards [3]. Interesting lecture at the seL4 symposium about attestation of a running Linux kernel [4]. I'm not a fan of most attestation systems, but using a separate isolated seL4 process to monitor a Linux VM offers some real benefits. Interesting seL4 symposium lecture about CPU drivers and the fact that a modern SoC is a distributed computing environment with lots of untrusted firmware [5]. I like the way he slipped and called it "unworthy firmware" instead of "untrustworthy firmware", I think I'll copy that.

5 November 2022

Anuradha Weeraman: Getting started with Linkerd

If you've done anything in the Kubernetes space in recent years, you've most likely come across the words "Service Mesh". It's backed by a set of mature technologies that provide cross-cutting networking, security and infrastructure capabilities to workloads running in Kubernetes, in a manner that is transparent to the actual workload. This abstraction enables application developers to not worry about building in otherwise sophisticated capabilities for networking, routing, circuit-breaking and security, and simply rely on the services offered by the service mesh.
In this post, I'll be covering Linkerd, which is an alternative to Istio. It went through a significant rewrite when it transitioned from the JVM to a Go-based control plane and a Rust-based data plane a few years back, and it is now a part of the CNCF and backed by Buoyant. It has proven itself widely in production workloads and has a healthy community and release cadence.
It achieves this with a sidecar container that communicates with the Linkerd control plane, which allows central management of policy, telemetry, mutual TLS, traffic routing, shaping, retries, load balancing, circuit-breaking and other cross-cutting concerns before the traffic hits the application container. This makes implementing the application services much simpler, as these concerns are managed by the container orchestrator and the service mesh. I covered Istio in a prior post a few years back, and much of that content is still applicable here, if you'd like to have a look.
Here are the broad architectural components of Linkerd:
The components are separated into the control plane and the data plane.
The control plane components live in their own namespace and consist of a controller that the Linkerd CLI interacts with via the Kubernetes API. The destination service is used for service discovery, TLS identity, access-control policy for inter-service communication, and service profile information for routing, retries and timeouts. The identity service acts as the Certificate Authority, responding to Certificate Signing Requests (CSRs) from proxies at initialization and issuing certificates for service-to-service encrypted traffic. The proxy injector is an admission webhook that automatically injects the Linkerd proxy sidecar and the init container into a pod when the linkerd.io/inject: enabled annotation is present on the namespace or workload.
On the data plane side are two components. First, the init container, which is responsible for forwarding incoming and outgoing traffic through the Linkerd proxy via iptables rules. Second, the Linkerd proxy itself, a lightweight micro-proxy written in Rust, which is the data plane.
I will be walking you through the setup of Linkerd (2.12.2 at the time of writing) on a Kubernetes cluster.
Let's see what's running on the cluster currently. This assumes you have a cluster running and kubectl installed and available on the PATH.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-59697b644f-7fsln 1/1 Running 2 (119m ago) 7d
kube-system calico-node-6ptsh 1/1 Running 2 (119m ago) 7d
kube-system calico-node-7x5j8 1/1 Running 2 (119m ago) 7d
kube-system calico-node-qlnf6 1/1 Running 2 (119m ago) 7d
kube-system coredns-565d847f94-79jlw 1/1 Running 2 (119m ago) 7d
kube-system coredns-565d847f94-fqwn4 1/1 Running 2 (119m ago) 7d
kube-system etcd-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-apiserver-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-controller-manager-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-4n9b7 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-k4rzv 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-lz2dd 1/1 Running 2 (119m ago) 7d
kube-system kube-scheduler-k8s-master 1/1 Running 2 (119m ago) 7d
The first step is to set up the Linkerd CLI:
$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
On most systems, this should be sufficient to set up the CLI. You may need to restart your terminal to load the updated paths. If you have a non-standard configuration and linkerd is not found after the installation, add the following to your PATH to be able to find the CLI:
export PATH=$PATH:~/.linkerd2/bin/
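To make this persist across shell sessions, you can also append it to your shell profile; a minimal sketch, assuming a bash setup (adjust the file for your shell of choice):
$ echo 'export PATH=$PATH:$HOME/.linkerd2/bin' >> ~/.bashrc
$ source ~/.bashrc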
At this point, checking the version would give you the following:
$ linkerd version
Client version: stable-2.12.2
Server version: unavailable
Setting up Linkerd Control Plane
Before installing Linkerd on the cluster, run the following step to check the cluster for prerequisites:
$ linkerd check --pre
Linkerd core checks
===================
kubernetes-api
--------------
can initialize the client
can query the Kubernetes API
kubernetes-version
------------------
is running the minimum Kubernetes API version
is running the minimum kubectl version
pre-kubernetes-setup
--------------------
control plane namespace does not already exist
can create non-namespaced resources
can create ServiceAccounts
can create Services
can create Deployments
can create CronJobs
can create ConfigMaps
can create Secrets
can read Secrets
can read extension-apiserver-authentication configmap
no clock skew detected
linkerd-version
---------------
can determine the latest version
cli is up-to-date
Status check results are √
All the prerequisites appear to be in good shape, so installation can proceed.
The first step of the installation is to set up the Custom Resource Definitions (CRDs) that Linkerd requires. The linkerd CLI only prints the resource YAML to standard output and does not create the resources directly in Kubernetes, so you need to pipe the output to kubectl apply to create them in the cluster you're working with.
$ linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run linkerd install | kubectl apply -f - to install the control plane.
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
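If you'd like to confirm that the CRDs were registered before moving on, a quick listing is enough; an illustrative check, nothing Linkerd-specific about it:
$ kubectl get crds | grep linkerd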
Next, install the Linkerd control plane components in the same manner, this time without the --crds switch:
$ linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created
Kubernetes will start spinning up the control plane components, and you should see the following when you list the pods:
$ kubectl get pods -A
...
linkerd linkerd-destination-67b9cc8749-xqcbx 4/4 Running 0 69s
linkerd linkerd-identity-59b46789cc-ntfcx 2/2 Running 0 69s
linkerd linkerd-proxy-injector-7fc85556bf-vnvw6 1/2 Running 0 69s
The components are running in the new linkerd namespace. To verify the setup, run a check:
$ linkerd check
Linkerd core checks
===================
kubernetes-api
--------------
can initialize the client
can query the Kubernetes API
kubernetes-version
------------------
is running the minimum Kubernetes API version
is running the minimum kubectl version
linkerd-existence
-----------------
'linkerd-config' config map exists
heartbeat ServiceAccount exist
control plane replica sets are ready
no unschedulable pods
control plane pods are ready
cluster networks contains all pods
cluster networks contains all services
linkerd-config
--------------
control plane Namespace exists
control plane ClusterRoles exist
control plane ClusterRoleBindings exist
control plane ServiceAccounts exist
control plane CustomResourceDefinitions exist
control plane MutatingWebhookConfigurations exist
control plane ValidatingWebhookConfigurations exist
proxy-init container runs as root user if docker container runtime is used
linkerd-identity
----------------
certificate config is valid
trust anchors are using supported crypto algorithm
trust anchors are within their validity period
trust anchors are valid for at least 60 days
issuer cert is using supported crypto algorithm
issuer cert is within its validity period
issuer cert is valid for at least 60 days
issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
proxy-injector webhook has valid cert
proxy-injector cert is valid for at least 60 days
sp-validator webhook has valid cert
sp-validator cert is valid for at least 60 days
policy-validator webhook has valid cert
policy-validator cert is valid for at least 60 days
linkerd-version
---------------
can determine the latest version
cli is up-to-date
control-plane-version
---------------------
can retrieve the control plane version
control plane is up-to-date
control plane and cli versions match
linkerd-control-plane-proxy
---------------------------
control plane proxies are healthy
control plane proxies are up-to-date
control plane proxies and cli versions match
Status check results are √
Everything looks good.
Setting up the Viz Extension
At this point, the required components for the service mesh are set up, but let's also install the viz extension, which provides visualization capabilities that will come in handy later. Once again, linkerd uses the same pattern for installing the extension.
$ linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
server.policy.linkerd.io/admin created
authorizationpolicy.policy.linkerd.io/admin created
networkauthentication.policy.linkerd.io/kubelet created
server.policy.linkerd.io/proxy-admin created
authorizationpolicy.policy.linkerd.io/proxy-admin created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created
A few seconds later, you should see the following in your pod list:
$ kubectl get pods -A
...
linkerd-viz prometheus-b5865f776-w5ssf 1/2 Running 0 35s
linkerd-viz tap-64f5c8597b-rqgbk 2/2 Running 0 35s
linkerd-viz tap-injector-7c75cfff4c-wl9mx 2/2 Running 0 34s
linkerd-viz web-8c444745-jhzr5 2/2 Running 0 34s
The viz components live in the linkerd-viz namespace. You can now check out the viz dashboard:
$ linkerd viz dashboard
Linkerd dashboard available at:
http://localhost:50750
Grafana dashboard available at:
http://localhost:50750/grafana
Opening Linkerd dashboard in the default browser
Opening in existing browser session.
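If you'd rather not keep the linkerd CLI session open to serve the dashboard, a plain kubectl port-forward to the viz web service also works; a sketch, assuming the default linkerd-viz namespace and the web service listening on port 8084:
$ kubectl -n linkerd-viz port-forward svc/web 8084:8084
The dashboard should then be reachable at http://localhost:8084.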
The Meshed column indicates the workloads that are currently integrated with the Linkerd control plane. As you can see, there are no application deployments running right now.
Injecting the Linkerd Data Plane components
There are two ways to integrate Linkerd with the application containers:
1. by manually injecting the Linkerd data plane components
2. by instructing Kubernetes to automatically inject the data plane components
Inject Linkerd data plane manually
Let's try the first option. Below is a simple nginx app that I will deploy into the cluster:
$ cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
$ kubectl apply -f deploy.yaml
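If you list the pods at this point, each nginx pod should show only a single container (READY 1/1), since nothing has been injected yet; an illustrative check using the deployment's label:
$ kubectl get pods -l app=nginx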
Back in the viz dashboard, I do see the workload deployed, but it isn't currently communicating with the Linkerd control plane, so it doesn't show any metrics, and the Meshed count is 0:
Looking at the Pod's YAML, I can see that it only includes the nginx container:
$ kubectl get pod nginx-deployment-cd55c47f5-cgxw2 -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: aee0295dda906f7935ce5c150ae30360005f5330e98c75a550b7cc0d1532f529
cni.projectcalico.org/podIP: 172.16.36.89/32
cni.projectcalico.org/podIPs: 172.16.36.89/32
creationTimestamp: "2022-11-05T19:35:12Z"
generateName: nginx-deployment-cd55c47f5-
labels:
app: nginx
pod-template-hash: cd55c47f5
name: nginx-deployment-cd55c47f5-cgxw2
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-deployment-cd55c47f5
uid: b604f5c4-f662-4333-aaa0-bd1a2b8b08c6
resourceVersion: "22979"
uid: 8fe30214-491b-4753-9fb2-485b6341376c
spec:
containers:
- image: nginx:latest
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2bt6z
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: k8s-node1
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-2bt6z
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:12Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:16Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:16Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:13Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://f088f200315b44cbeed16499aba9b2d1396f9f81645e53b032d4bfa44166128a
image: docker.io/library/nginx:latest
imageID: docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
lastState:
name: nginx
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:35:15Z"
hostIP: 192.168.2.216
phase: Running
podIP: 172.16.36.89
podIPs:
- ip: 172.16.36.89
qosClass: BestEffort
startTime: "2022-11-05T19:35:12Z"
Let's directly inject the Linkerd data plane into this running workload. We do this by retrieving the YAML of the deployment, piping it to the linkerd CLI to inject the necessary components, and then piping the changed resources to kubectl apply.
$ kubectl get deploy nginx-deployment -o yaml | linkerd inject - | kubectl apply -f -
deployment "nginx-deployment" injected
deployment.apps/nginx-deployment configured
Back in the viz dashboard, the workload is now integrated into the Linkerd control plane.
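You can confirm this from the command line as well: listing the pods with the same label selector as before should now show two containers per pod (READY 2/2), and the viz extension's stat subcommand reports meshed pod counts and success rates; illustrative invocations, assuming the default namespace:
$ kubectl get pods -l app=nginx
$ linkerd viz stat deployments -n default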
Looking at the updated Pod definition, we see a number of changes that Linkerd has injected to allow it to integrate with the control plane. Let's have a look:
$ kubectl get pod nginx-deployment-858bdd545b-55jpf -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: 1ec3d345f859be8ead0374a7e880bcfdb9ba74a121b220a6fccbd342ac4b7ea8
cni.projectcalico.org/podIP: 172.16.36.90/32
cni.projectcalico.org/podIPs: 172.16.36.90/32
linkerd.io/created-by: linkerd/proxy-injector stable-2.12.2
linkerd.io/inject: enabled
linkerd.io/proxy-version: stable-2.12.2
linkerd.io/trust-root-sha256: 354fe6f49331e8e03d8fb07808e00a3e145d2661181cbfec7777b41051dc8e22
viz.linkerd.io/tap-enabled: "true"
creationTimestamp: "2022-11-05T19:44:15Z"
generateName: nginx-deployment-858bdd545b-
labels:
app: nginx
linkerd.io/control-plane-ns: linkerd
linkerd.io/proxy-deployment: nginx-deployment
linkerd.io/workload-ns: default
pod-template-hash: 858bdd545b
name: nginx-deployment-858bdd545b-55jpf
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-deployment-858bdd545b
uid: 2e618972-aa10-4e35-a7dd-084853673a80
resourceVersion: "23820"
uid: 62f1857a-b701-4a19-8996-b5b605ff8488
spec:
containers:
- env:
- name: _pod_name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: _pod_ns
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: _pod_nodeName
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: LINKERD2_PROXY_LOG
value: warn,linkerd=info
- name: LINKERD2_PROXY_LOG_FORMAT
value: plain
- name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
value: linkerd-dst-headless.linkerd.svc.cluster.local.:8086
- name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
- name: LINKERD2_PROXY_POLICY_SVC_ADDR
value: linkerd-policy.linkerd.svc.cluster.local.:8090
- name: LINKERD2_PROXY_POLICY_WORKLOAD
value: $(_pod_ns):$(_pod_name)
- name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
value: all-unauthenticated
- name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
- name: LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT
value: 100ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT
value: 1000ms
- name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
value: 0.0.0.0:4190
- name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
value: 0.0.0.0:4191
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
value: 127.0.0.1:4140
- name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
value: 0.0.0.0:4143
- name: LINKERD2_PROXY_INBOUND_IPS
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIPs
- name: LINKERD2_PROXY_INBOUND_PORTS
value: "80"
- name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
value: svc.cluster.local.
- name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
value: 25,587,3306,4444,5432,6379,9300,11211
- name: LINKERD2_PROXY_DESTINATION_CONTEXT
value: |
{"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)"}
- name: _pod_sa
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
- name: _l5d_ns
value: linkerd
- name: _l5d_trustdomain
value: cluster.local
- name: LINKERD2_PROXY_IDENTITY_DIR
value: /var/run/linkerd/identity/end-entity
- name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
value:
-----BEGIN CERTIFICATE-----
MIIBiDCCAS6gAwIBAgIBATAKBggqhkjOPQQDAjAcMRowGAYDVQQDExFpZGVudGl0
eS5saW5rZXJkLjAeFw0yMjExMDUxOTIxMDlaFw0yMzExMDUxOTIxMjlaMBwxGjAY
BgNVBAMTEWlkZW50aXR5LmxpbmtlcmQuMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcD
QgAE8AgxbWWa1qgEgN3ykFAOJ3sw9nSugUk1N5Qfvo6jXX/8/TZUW0ddko/N71+H
EcKc72kK0tlclj8jDi3pzJ4C0KNhMF8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdJQQW
MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBThSr0yAj5joW7pj/NZPYcfIIepbzAKBggqhkjOPQQDAgNIADBFAiAomg0TVn6N
UxhOyzZdg848lAvH0Io9Ra/Ef2hxZGN0LgIhAIKjrsgDUqZA8XHiiciYYicxFnKr
Tw5yj9gBhVAgYCaB
-----END CERTIFICATE-----
- name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
value: /var/run/secrets/tokens/linkerd-identity-token
- name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
value: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
- name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_IDENTITY_SVC_NAME
value: linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_DESTINATION_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_POLICY_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_TAP_SVC_NAME
value: tap.linkerd-viz.serviceaccount.identity.linkerd.cluster.local
image: cr.l5d.io/linkerd/proxy:stable-2.12.2
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /usr/lib/linkerd/linkerd-await
- --timeout=2m
livenessProbe:
failureThreshold: 3
httpGet:
path: /live
port: 4191
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: linkerd-proxy
ports:
- containerPort: 4143
name: linkerd-proxy
protocol: TCP
- containerPort: 4191
name: linkerd-admin
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 4191
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 2102
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /var/run/linkerd/identity/end-entity
name: linkerd-identity-end-entity
- mountPath: /var/run/secrets/tokens
name: linkerd-identity-token
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
- image: nginx:latest
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
initContainers:
- args:
- --incoming-proxy-port
- "4143"
- --outgoing-proxy-port
- "4140"
- --proxy-uid
- "2102"
- --inbound-ports-to-ignore
- 4190,4191,4567,4568
- --outbound-ports-to-ignore
- 4567,4568
image: cr.l5d.io/linkerd/proxy-init:v2.0.0
imagePullPolicy: IfNotPresent
name: linkerd-init
resources:
limits:
cpu: 100m
memory: 20Mi
requests:
cpu: 100m
memory: 20Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65534
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /run
name: linkerd-proxy-init-xtables-lock
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
nodeName: k8s-node1
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-9zpnn
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
- emptyDir:
name: linkerd-proxy-init-xtables-lock
- emptyDir:
medium: Memory
name: linkerd-identity-end-entity
- name: linkerd-identity-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: identity.l5d.io
expirationSeconds: 86400
path: linkerd-identity-token
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:16Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:19Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:19Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:15Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://62028867c48aaa726df48249a27c52cd8820cd33e8e5695ad0d322540924754e
image: cr.l5d.io/linkerd/proxy:stable-2.12.2
imageID: cr.l5d.io/linkerd/proxy@sha256:787db5055b2a46a3c4318ef3b632461261f81254c8e47bf4b9b8dab2c42575e4
lastState:
name: linkerd-proxy
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:44:16Z"
- containerID: containerd://8f8ce663c19360a7b6868ace68a4a5119f0b18cd57ffebcc2d19331274038381
image: docker.io/library/nginx:latest
imageID: docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
lastState:
name: nginx
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:44:19Z"
hostIP: 192.168.2.216
initContainerStatuses:
- containerID: containerd://c0417ea9c8418ab296bf86077e81c5d8be06fe9b87390c138d1c5d7b73cc577c
image: cr.l5d.io/linkerd/proxy-init:v2.0.0
imageID: cr.l5d.io/linkerd/proxy-init@sha256:7d5e66b9e176b1ebbdd7f40b6385d1885e82c80a06f4c6af868247bb1dffe262
lastState:
name: linkerd-init
ready: true
restartCount: 0
state:
terminated:
containerID: containerd://c0417ea9c8418ab296bf86077e81c5d8be06fe9b87390c138d1c5d7b73cc577c
exitCode: 0
finishedAt: "2022-11-05T19:44:16Z"
reason: Completed
startedAt: "2022-11-05T19:44:15Z"
phase: Running
podIP: 172.16.36.90
podIPs:
- ip: 172.16.36.90
qosClass: Burstable
startTime: "2022-11-05T19:44:15Z"
At this point, the necessary components are set up for you to explore Linkerd further. You can also try out the jaeger and multicluster extensions, which follow the same installation process as the viz extension, and explore their capabilities.
Inject Linkerd data plane automatically
In this approach, we shall see how to instruct Kubernetes to automatically inject the Linkerd data plane into workloads at deployment time. We achieve this by adding the linkerd.io/inject annotation to the deployment descriptor, which causes the proxy injector admission webhook to execute and inject the Linkerd data plane components automatically at deployment time.
$ cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        linkerd.io/inject: enabled
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
This annotation can also be specified at the namespace level to affect all the workloads within the namespace. Note that any resources created before the annotation was added to the namespace will require a rollout restart to trigger the injection of the Linkerd components.
Uninstalling Linkerd
Now that we have walked through the installation and setup process of Linkerd, let's also cover how to remove it from the infrastructure and go back to the state prior to its installation. The first step is to remove the extensions, such as viz.
$ linkerd viz uninstall | kubectl delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-metrics-api" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-prometheus" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-admin" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-api" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-check" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-tap-injector" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-metrics-api" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-prometheus" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-auth-delegator" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-admin" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-api" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-check" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-tap-injector" deleted
role.rbac.authorization.k8s.io "web" deleted
rolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-auth-reader" deleted
rolebinding.rbac.authorization.k8s.io "web" deleted
apiservice.apiregistration.k8s.io "v1alpha1.tap.linkerd.io" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-tap-injector-webhook-config" deleted
namespace "linkerd-viz" deleted
authorizationpolicy.policy.linkerd.io "admin" deleted
authorizationpolicy.policy.linkerd.io "metrics-api" deleted
authorizationpolicy.policy.linkerd.io "proxy-admin" deleted
authorizationpolicy.policy.linkerd.io "tap" deleted
authorizationpolicy.policy.linkerd.io "tap-injector" deleted
server.policy.linkerd.io "admin" deleted
server.policy.linkerd.io "metrics-api" deleted
server.policy.linkerd.io "proxy-admin" deleted
server.policy.linkerd.io "tap-api" deleted
server.policy.linkerd.io "tap-injector-webhook" deleted
In order to uninstall the control plane, you first need to uninject the Linkerd data plane components from any existing running workloads:
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 10m
$ kubectl get deployment nginx-deployment -o yaml | linkerd uninject - | kubectl apply -f -
deployment "nginx-deployment" uninjected
deployment.apps/nginx-deployment configured
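Listing the nginx pods again should show them back to a single container each (READY 1/1), confirming that the proxy has been removed; an illustrative check:
$ kubectl get pods -l app=nginx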
Now you can delete the control plane.
$ linkerd uninstall | kubectl delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-destination" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-identity" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-proxy-injector" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-policy" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-destination-policy" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-destination" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-identity" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-proxy-injector" deleted
role.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
rolebinding.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
customresourcedefinition.apiextensions.k8s.io "authorizationpolicies.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "httproutes.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "meshtlsauthentications.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "networkauthentications.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "serverauthorizations.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "servers.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "serviceprofiles.linkerd.io" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-proxy-injector-webhook-config" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-policy-validator-webhook-config" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-sp-validator-webhook-config" deleted
namespace "linkerd" deleted
At this point we're back to the original state:
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-deployment-cd55c47f5-99xf2 1/1 Running 0 82s
default nginx-deployment-cd55c47f5-tt58t 1/1 Running 0 86s
kube-system calico-kube-controllers-59697b644f-7fsln 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-6ptsh 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-7x5j8 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-qlnf6 1/1 Running 2 (3h39m ago) 7d1h
kube-system coredns-565d847f94-79jlw 1/1 Running 2 (3h39m ago) 7d2h
kube-system coredns-565d847f94-fqwn4 1/1 Running 2 (3h39m ago) 7d2h
kube-system etcd-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-apiserver-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-controller-manager-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-4n9b7 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-k4rzv 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-lz2dd 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-scheduler-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
I hope you find this useful to get you started on your journey with Linkerd. Head on over to the docs for more information, guides and best practices.

12 October 2014

Iustin Pop: Day trip on the Olympic Peninsula

Day trip on the Olympic Peninsula TL;DR: drove many kilometres on very nice roads, took lots of pictures, saw sunshine and fog and clouds, an angry ocean and a calm one, a quiet lake and lots and lots of trees: a very well spent day. Pictures at http://photos.k1024.org/Daytrips/Olympic-Peninsula-2014/. Sometimes I travel to the US on business, and as such I've been a few times in the Seattle area. Until this summer, when I had my last trip there, I was content to spend any extra days (weekend or such) just visiting Seattle itself, or shopping (I can spend hours in the REI store!), or working on my laptop in the hotel. This summer though, I thought - I should do something a bit different. Not too much, but still - no sense in wasting both days of the weekend. So I thought maybe driving to Mount Rainier, or something like that. On the Wednesday of my first week in Kirkland, as I was preparing my drive to the mountain, I made the mistake of scrolling the map westwards, and I saw for the first time the Olympic Peninsula; furthermore, I was zoomed in enough that I saw there was a small road right up to the north-west corner. Intrigued, I zoomed further and learned about Cape Flattery ( the northwestern-most point of the contiguous United States! ), so after spending a bit time reading about it, I was determined to go there. Easier said than done - from Kirkland, it's a 4h 40m drive (according to Google Maps), so it would be a full day on the road. I was thinking of maybe spending the night somewhere on the peninsula then, in order to actually explore the area a bit, but from Wednesday to Saturday it was a too short notice - all hotels that seemed OK-ish were fully booked. I spent some time trying to find something, even not directly on my way, but I failed to find any room. What I did manage to do though, is to learn a bit about the area, and to realise that there's a nice loop around the whole peninsula - the 104 from Kirkland up to where it meets the 101N on the eastern side, then take the 101 all the way to Port Angeles, Lake Crescent, near Lake Pleasant, then south toward Forks, crossing the Hoh river, down to Ruby Beach, down along the coast, crossing the Queets River, east toward Lake Quinault, south toward Aberdeen, then east towards Olympia and back out of the wilderness, into the highway network and back to Kirkland. This looked like an awesome road trip, but it is as long as it sounds - around 8 hours (continuous) drive, though skipping Cape Flattery. Well, I said to myself, something to keep in mind for a future trip to this area, with a night in between. I was still planning to go just to Cape Flattery and back, without realising at that point that this trip was actually longer (as you drive on smaller, lower-speed roads). Preparing my route, I read about the queues at the Edmonds-Kingston ferry, so I was planning to wake up early on the weekend, go to Cape Flattery, and go right back (maybe stop by Lake Crescent). Saturday comes, I - of course - sleep longer than my trip schedule said, and start the day in a somewhat cloudy weather, driving north from my hotel on Simonds Road, which was quite nicer than the usual East-West or North-South roads in this area. The weather was becoming nicer, however as I was nearing the ferry terminal and the traffic was getting denser, I started suspecting that I'll spend a quite a bit of time waiting to board the ferry. And unfortunately so it was (photo altered to hide some personal information): Waiting for the ferry. 
The weather at least was nice, so I tried to enjoy it and simply observe the crowd - people were looking forward to a weekend relaxing, so nobody seemed annoyed by the wait. After almost half an hour, time to get on the ferry - my first time on a ferry in US, yay! But it was quite the same as in Europe, just that the ship was much larger. Once I secured the car, I went up deck, and was very surprised to be treated with some excellent views: Harbour view Looking towards the sun   and away from it The crossing was not very short, but it seemed so, because of the view, the sun, the water and the wind. Soon we were nearing the other shore; also, see how well panorama software deals with waves :P! Near the other shore And I was finally on the "real" part of the trip. The road was quite interesting. Taking the 104 North, crossing the "Hood Canal Floating Bridge" (my, what a boring name), then finally joining the 101 North. The environment was quite varied, from bare plains and hills, to wooded areas, to quite dense forests, then into inhabited areas - quite a long stretch of human presence, from the Sequim Bay to Port Angeles. Port Angeles surprised me: it had nice views of the ocean, and an interesting port (a few big ships), but it was much smaller than I expected. The 101 crosses it, and in less than 10 minutes or so it was already over. I expected something nicer, based on the name, but Anyway, onwards! Soon I was at a crossroads and had to decide: I could either follow the 101, crossing the Elwha River and then to Lake Crescent, then go north on the 113/112, or go right off 101 onto 112, and follow it until close to my goal. I took the 112, because on the map it looked "nicer", and closer to the shore. Well, the road itself was nice, but quite narrow and twisty here and there, and there was some annoying traffic, so I didn't enjoy this segment very much. At least it had the very interesting property (to me) that whenever I got closer to the ocean, the sun suddenly disappeared, and I was finding myself in the fog: Foggy road So my plan to drive nicely along the coast failed. At one point, there was even heavy smoke (not fog!), and I wondered for a moment how safe was to drive out there in the wilderness (there were other cars though, so I was not alone). Only quite a bit later, close to Neah Bay, did I finally see the ocean: I saw a small parking spot, stopped, and crossing a small line of trees I found myself in a small cove? bay? In any case, I had the impression I stepped out of the daily life in the city and out into the far far wilderness: Dead trees on the beach Trees growing on a rock Small panorama of the cove There was a couple, sitting on chairs, just enjoying the view. I felt very much intruding, behaving like I did as a tourist: running in, taking pictures, etc., so I tried at least to be quiet . I then quickly moved on, since I still had some road ahead of me. Soon I entered Neah Bay, and was surprised to see once more blue, and even more blue. I'm a sucker for blue, whether sky blue or sea blue , so I took a few more pictures (watch out for the evil fog in the second one): View towards Neah Bay port Sea view from Neah Bay Well, the town had some event, and there were lots of people, so I just drove on, now on the last stretch towards the cape. 
The road here was also very interesting, yet another environment - I was driving on Cape Flattery Road, which cuts across the tip of the peninsula (quite narrow here) along the Waatch River and through its flooding plains (at least this is how it looked to me). Then it finally starts going up through the dense forest, until it reaches the parking lot, and from there, one goes on foot towards the cape. It's a very easy and nice walk (not a hike), and the sun was shining very nicely through the trees: Sunny forest Sun shinning down Wooden path But as I reached the peak of the walk, and started descending towards the coast, I was surprised, yet again, by fog: Ugly fog again! I realised that probably this means the cape is fully in fog, so I won't have any chance to enjoy the view. Boy, was I wrong! There are three viewpoints on the cape, and at each one I was just "wow" and "aah" at the view. Even thought it was not a sunny summer view, and there was no blue in sight, the combination between the fog (which was hiding the horizon and even the closer islands), the angry ocean which was throwing wave after wave at the shore, making a loud noise, and the fact that even this seemingly inhospitable area was just teeming with life, was both unexpected and awesome. I took here waay to many pictures, here are just a couple inlined: First view at the cape Birds 'enjoying' the weather Foggy shore I spent around half an hour here, just enjoying the rawness of nature. It was so amazing to see life encroaching on each bit of land, even though it was not what I would consider a nice place. Ah, how we see everything through our own eyes! The walk back was through fog again, and at one point it switched over back to sunny. Driving back on the same road was quite different, knowing what lies at its end. On this side, the road had some parking spots, so I managed to stop and take a picture - even though this area was much less wild, it still has that outdoors flavour, at least for me: Waatch River Back in Neah Bay, I stopped to eat. I had a place in mind from TripAdvisor, and indeed - I was able to get a custom order pizza at "Linda's Woodfired Kitchen". Quite good, and I ate without hurry, looking at the people walking outside, as they were coming back from the fair or event that was taking place. While eating, a somewhat disturbing thought was going through my mind. It was still early, around two to half past two, so if I went straight back to Kirkland I would be early at the hotel. But it was also early enough that I could - in theory at least - still do the "big round-trip". I was still rummaging the thought as I left On the drive back I passed once more near Sekiu, Washington, which is a very small place but the map tells me it even has an airport! Fun, and the view was quite nice (a bit of blue before the sea is swallowed by the fog): Sekiu view After passing Sekiu and Clallam Bay, the 112 curves inland and goes on a bit until you are at the crossroads: to the left the 112 continues, back the same way I came; to the right, it's the 113, going south until it meets the 101. I looked left - remembering the not-so-nice road back, I looked south - where a very appealing, early afternoon sun was beckoning - so I said, let's take the long way home! It's just a short stretch on the 113, and then you're on the 101. The 101 is a very nice road, wide enough, and it goes through very very nice areas. 
Here, west to south-west of the Olympic Mountains, it's a very different atmosphere from the 112/101 that I drove on in the morning; much warmer colours, a bit different tree types (I think), and more flat. I soon passed through Forks, which is one of the places I looked at when searching for hotels. I did so without any knowledge of the town itself (its wikipedia page is quite drab), so imagine my surprise when a month later I learned from a colleague that this is actually a very important place for vampire-book fans. Oh my, and I didn't even stop! This town also had some event, so I just drove on, enjoying the (mostly empty) road. My next planned waypoint was Ruby Beach, and I was looking forward to relaxing a bit under the warm sun - the drive was excellent, weather perfect, so I was watching the distance countdown on my Garmin. At two miles out, the "Near waypoint Ruby Beach" message appeared, and two seconds later the sun went out. What the I was hoping this is something temporary, but as I slowly drove the remaining mile I couldn't believe my eyes that I was, yet again, finding myself in the fog I park the car, thinking that asking for a refund would at least allow me to feel better - but it was I who planned the trip! So I resigned myself, thinking that possibly this beach is another special location that is always in the fog. However, getting near the beach it was clear that it was not so - some people were still in their bathing suits, just getting dressed, so it seems I was just unlucky with regards to timing. However, I the beach itself was nice, even in the fog (I later saw online sunny pictures, and it is quite beautiful), the the lush trees reach almost to the shore, and the way the rocks are sitting on the beach: A lonely dinghy Driftwood  and human construction People on the beach Since the weather was not that nice, I took a few more pictures, then headed back and started driving again. I was soo happy that the weather didn't clear at the 2 mile mark (it was not just Ruby Beach!), but alas - it cleared as soon as the 101 turns left and leaves the shore, as it crosses the Queets river. Driving towards my next planned stop was again a nice drive in the afternoon sun, so I think it simply was not a sunny day on the Pacific shore. Maybe seas and oceans have something to do with fog and clouds ! In Switzerland, I'm very happy when I see fog, since it's a somewhat rare event (and seeing mountains disappearing in the fog is nice, since it gives the impression of a wider space). After this day, I was a bit fed up with fog for a while Along the 101 one reaches Lake Quinault, which seemed pretty nice on the map, and driving a bit along the lake - a local symbol, the "World's largest spruce tree". I don't know what a spruce tree is, but I like trees, so I was planning to go there, weather allowing. And the weather did cooperate, except that the tree was not so imposing as I thought! In any case, I was glad to stretch my legs a bit: Path to largest spruce tree Largest spruce tree, far view Largest spruce tree, closer view Very short path back to the road However, the most interesting thing here in Quinault was not this tree, but rather - the quiet little town and the view on the lake, in the late afternoon sun: Quinault Quinault Lake view The entire town was very very quiet, and the sun shining down on the lake gave an even stronger sense of tranquillity. No wind, not many noises that tell of human presence, just a few, and an overall sense of peace. 
It was quite the opposite of the Cape Flattery and a very nice way to end the trip. Well, almost end - I still had a bit of driving ahead. Starting from Quinault, driving back and entering the 101, driving down to Aberdeen: Afternoon ride then turning east towards Olympia, and back onto the highways. As to Aberdeen and Olympia, I just drove through, so I couldn't make any impression of them. The old harbour and the rusted things in Aberdeen were a bit interesting, but the day was late so I didn't stop. And since the day shouldn't end without any surprises, during the last profile change between walking and driving in Quinault, my GPS decided to reset its active maps list and I ended up with all maps activated. This usually is not a problem, at least if you follow a pre-calculated route, but I did trigger recalculation as I restarted my driving, so the Montana was trying to decide on which map to route me - between the Garmin North America map and the Open StreeMap one, the result was that it never understood which road I was on. It always said "Drive to I5", even though I was on I5. Anyway, thanks to road signs, and no thanks to "just this evening ramp closures", I was able to arrive safely at my hotel. Overall, a very successful, if long trip: around 725 kilometres, 10h:30m moving, 13h:30m total: Track picture There were many individual good parts, but the overall think about this road trip was that I was able to experience lots of different environments of the peninsula on the same day, and that overall it's a very very nice area. The downside was that I was in a rush, without being able to actually stop and enjoy the locations I visited. And there's still so much to see! A two nights trip sound just about right, with some long hikes in the rain forest, and afternoons spent on a lake somewhere. Another not so optimal part was that I only had my "travel" camera (a Nikon 1 series camera, with a small sensor), which was a bit overwhelmed here and there by the situation. It was fortunate that the light was more or less good, but looking back at the pictures, how I wish that I had my "serious" DSLR So, that means I have two reasons to go back! Not too soon though, since Mount Rainier is also a good location to visit . If the pictures didn't bore you yet, the entire gallery is on my smugmug site. In any case, thanks for reading!

27 November 2013

Richard Hartmann: pdiffs

Can we stop pretending that defaulting to pdiffs was ever a good idea, now?
# aptitude update
[...pain...]
Get:431 http://ftp5.gwdg.de testing/main 2013-11-27-1437.17.pdiff [46 B]
Fetched 2.445 kB in 9min 15s (4.401 B/s)
# aptitude -o Acquire::Pdiffs=false
[...joy...]
Get:17 http://ftp5.gwdg.de testing/non-free Translation-en [69,4 kB]
Fetched 616 kB in 7s (85,7 kB/s)
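If you want this to stick rather than passing -o on every invocation, the same knob can live in an apt configuration snippet; a minimal sketch, and the file name here is just an example:
# echo 'Acquire::PDiffs "false";' > /etc/apt/apt.conf.d/99-no-pdiffs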

12 July 2012

Richard Hartmann: Do what I mean

And I still get into arguments about why I prefer apt-get over aptitude... Sometimes, stupid is good.
# apt-get update
[...get Debian sid package list...]
# aptitude install libdbd-csv-perl
The following NEW packages will be installed:
  libdbd-csv-perl libparams-util-perl{a} libsql-statement-perl{a} libtext-csv-xs-perl{a}
The following packages will be upgraded:
  libdbi-perl perl perl-base perl-modules
4 packages upgraded, 4 newly installed, 0 to remove and 1459 not upgraded.
Need to get 9,845 kB of archives. After unpacking 1,069 kB will be used.
The following packages have unmet dependencies:
  libperl5.14: Depends: perl-base (= 5.14.2-7) but 5.14.2-12 is to be installed.
The following actions will resolve these dependencies:
      Remove the following packages:
1)      elinks
2)      irssi
3)      irssi-scripts
4)      libperl5.14
5)      perlmagick
6)      vim-gtk
7)      xchat
      Leave the following dependencies unresolved:
8)      elinks-data recommends elinks (= 0.12~pre5-7)
9)      vim-gui-common recommends vim-gnome | vim-gtk | vim-athena
10)     xchat-common recommends xchat
11)     inkscape recommends perlmagick
Accept this solution? [Y/n/q/?] ^C
# apt-get install libdbd-csv-perl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libdbi-perl libparams-util-perl libperl5.14 libsql-statement-perl libtext-csv-xs-perl perl perl-base perl-modules
The following NEW packages will be installed:
  libdbd-csv-perl libparams-util-perl libsql-statement-perl libtext-csv-xs-perl
The following packages will be upgraded:
  libdbi-perl libperl5.14 perl perl-base perl-modules
5 upgraded, 4 newly installed, 0 to remove and 1458 not upgraded.
Need to get 10.6 MB of archives.
After this operation, 1,081 kB of additional disk space will be used.
Do you want to continue [Y/n]?
Get:1 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main perl i386 5.14.2-12 [3,701 kB]
Get:2 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main libperl5.14 i386 5.14.2-12 [731 kB]
Get:3 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main perl-base i386 5.14.2-12 [1,494 kB]
Get:4 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main perl-modules all 5.14.2-12 [3,438 kB]
Get:5 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main libtext-csv-xs-perl i386 0.90-1 [73.1 kB]
Get:6 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main libdbi-perl i386 1.622-1 [897 kB]
Get:7 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main libparams-util-perl i386 1.07-1 [25.4 kB]
Get:8 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main libsql-statement-perl all 1.33-1 [179 kB]
Get:9 http://ftp5.gwdg.de/pub/linux/debian/debian/ unstable/main libdbd-csv-perl all 0.3500-1 [36.4 kB]
Fetched 10.6 MB in 10s (980 kB/s)
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Reading changelogs... Done
(Reading database ... 278465 files and directories currently installed.)
Preparing to replace perl 5.14.2-7 (using .../perl_5.14.2-12_i386.deb) ...
Unpacking replacement perl ...
Preparing to replace libperl5.14 5.14.2-7 (using .../libperl5.14_5.14.2-12_i386.deb) ...
Unpacking replacement libperl5.14 ...
Preparing to replace perl-base 5.14.2-7 (using .../perl-base_5.14.2-12_i386.deb) ...
Unpacking replacement perl-base ...
Processing triggers for man-db ...
Setting up perl-base (5.14.2-12) ...
(Reading database ... 278465 files and directories currently installed.)
Preparing to replace perl-modules 5.14.2-7 (using .../perl-modules_5.14.2-12_all.deb) ...
Unpacking replacement perl-modules ...
Selecting previously unselected package libtext-csv-xs-perl.
Unpacking libtext-csv-xs-perl (from .../libtext-csv-xs-perl_0.90-1_i386.deb) ...
Preparing to replace libdbi-perl 1.617-1 (using .../libdbi-perl_1.622-1_i386.deb) ...
Unpacking replacement libdbi-perl ...
Selecting previously unselected package libparams-util-perl.
Unpacking libparams-util-perl (from .../libparams-util-perl_1.07-1_i386.deb) ...
Selecting previously unselected package libsql-statement-perl.
Unpacking libsql-statement-perl (from .../libsql-statement-perl_1.33-1_all.deb) ...
Selecting previously unselected package libdbd-csv-perl.
Unpacking libdbd-csv-perl (from .../libdbd-csv-perl_0.3500-1_all.deb) ...
Processing triggers for man-db ...
Setting up libperl5.14 (5.14.2-12) ...
Setting up perl-modules (5.14.2-12) ...
Setting up perl (5.14.2-12) ...
Setting up libtext-csv-xs-perl (0.90-1) ...
Setting up libdbi-perl (1.622-1) ...
Setting up libparams-util-perl (1.07-1) ...
Setting up libsql-statement-perl (1.33-1) ...
Setting up libdbd-csv-perl (0.3500-1) ...
#

And don't get me started about waiting for aptitude *upgrade to resolve its dependency graph...
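(If you want to compare what the two resolvers would do without touching the system, both tools have a simulation mode; a minimal sketch, reusing the package from above, output omitted:)
# dry run: show apt-get's plan without installing anything
apt-get -s install libdbd-csv-perl
# the equivalent dry run with aptitude, for comparison
aptitude -s install libdbd-csv-perl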

21 September 2008

Wouter Verhelst: SSL "telnet"

A common way to debug a network server is to use 'telnet' or 'nc' to connect to the server and issue some commands in the protocol to verify whether everything is working correctly. That obviously only works for ASCII protocols (as opposed to binary protocols), and only if you're not using any encryption. But that doesn't mean you can't test an encrypted protocol in a similar way, thanks to openssl's s_client:
wouter@country:~$ openssl s_client -host samba.grep.be -port 443
CONNECTED(00000003)
depth=0 /C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
verify error:num=18:self signed certificate
verify return:1
depth=0 /C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
verify return:1
---
Certificate chain
 0 s:/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
   i:/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDXDCCAsWgAwIBAgIJAITRhiXp+37JMA0GCSqGSIb3DQEBBQUAMH0xCzAJBgNV
BAYTAkJFMRAwDgYDVQQIEwdBbnR3ZXJwMREwDwYDVQQHEwhNZWNoZWxlbjEUMBIG
A1UEChMLTml4U3lzIEJWQkExFDASBgNVBAMTC3N2bi5ncmVwLmJlMR0wGwYJKoZI
hvcNAQkBFg53b3V0ZXJAZ3JlcC5iZTAeFw0wNTA1MjEwOTMwMDFaFw0xNTA1MTkw
OTMwMDFaMH0xCzAJBgNVBAYTAkJFMRAwDgYDVQQIEwdBbnR3ZXJwMREwDwYDVQQH
EwhNZWNoZWxlbjEUMBIGA1UEChMLTml4U3lzIEJWQkExFDASBgNVBAMTC3N2bi5n
cmVwLmJlMR0wGwYJKoZIhvcNAQkBFg53b3V0ZXJAZ3JlcC5iZTCBnzANBgkqhkiG
9w0BAQEFAAOBjQAwgYkCgYEAsGTECq0VXyw09Zcg/OBijP1LALMh9InyU0Ebe2HH
NEQ605mfyjAENG8rKxrjOQyZzD25K5Oh56/F+clMNtKAfs6OuA2NygD1/y4w7Gcq
1kXhsM1MOIOBdtXAFi9s9i5ZATAgmDRIzuKZ6c2YJxJfyVbU+Pthr6L1SFftEdfb
L7MCAwEAAaOB4zCB4DAdBgNVHQ4EFgQUtUK7aapBDaCoSFRWTf1wRauCmdowgbAG
A1UdIwSBqDCBpYAUtUK7aapBDaCoSFRWTf1wRauCmdqhgYGkfzB9MQswCQYDVQQG
EwJCRTEQMA4GA1UECBMHQW50d2VycDERMA8GA1UEBxMITWVjaGVsZW4xFDASBgNV
BAoTC05peFN5cyBCVkJBMRQwEgYDVQQDEwtzdm4uZ3JlcC5iZTEdMBsGCSqGSIb3
DQEJARYOd291dGVyQGdyZXAuYmWCCQCE0YYl6ft+yTAMBgNVHRMEBTADAQH/MA0G
CSqGSIb3DQEBBQUAA4GBADGkLc+CWWbfpBpY2+Pmknsz01CK8P5qCX3XBt4OtZLZ
NYKdrqleYq7r7H8PHJbTTiGOv9L56B84QPGwAzGxw/GzblrqR67iIo8e5reGbvXl
s1TFqKyvoXy9LJoGecMwjznAEulw9cYcFz+VuV5xnYPyJMLWk4Bo9WCVKGuAqVdw
-----END CERTIFICATE-----
subject=/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
issuer=/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
---
No client certificate CA names sent
---
SSL handshake has read 1428 bytes and written 316 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 1024 bit
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: 65E69139622D06B9D284AEDFBFC1969FE14E826FAD01FB45E51F1020B4CEA42C
    Session-ID-ctx: 
    Master-Key: 606553D558AF15491FEF6FD1A523E16D2E40A8A005A358DF9A756A21FC05DFAF2C9985ABE109DCD29DD5D77BE6BC5C4F
    Key-Arg   : None
    Start Time: 1222001082
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
HEAD / HTTP/1.1
Host: svn.grep.be
User-Agent: openssl s_client
Connection: close
HTTP/1.1 404 Not Found
Date: Sun, 21 Sep 2008 12:44:55 GMT
Server: Apache/2.2.3 (Debian) mod_auth_kerb/5.3 DAV/2 SVN/1.4.2 PHP/5.2.0-8+etch11 mod_ssl/2.2.3 OpenSSL/0.9.8c
Connection: close
Content-Type: text/html; charset=iso-8859-1
closed
wouter@country:~$ 
As you can see, we connect to an HTTPS server, get to see what the server's certificate looks like, issue some commands, and the server responds properly. It also works for (some) protocols that work in a STARTTLS kind of way:
wouter@country:~$ openssl s_client -host samba.grep.be -port 587 -starttls smtp
CONNECTED(00000003)
depth=0 /C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
verify error:num=18:self signed certificate
verify return:1
depth=0 /C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
verify return:1
---
Certificate chain
 0 s:/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
   i:/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDBDCCAm2gAwIBAgIJAK53w+1YhWocMA0GCSqGSIb3DQEBBQUAMGAxCzAJBgNV
BAYTAkJFMRAwDgYDVQQIEwdBbnR3ZXJwMREwDwYDVQQHEwhNZWNoZWxlbjEUMBIG
A1UEChMLTml4U3lzIEJWQkExFjAUBgNVBAMTDXNhbWJhLmdyZXAuYmUwHhcNMDgw
OTIwMTYyMjI3WhcNMDkwOTIwMTYyMjI3WjBgMQswCQYDVQQGEwJCRTEQMA4GA1UE
CBMHQW50d2VycDERMA8GA1UEBxMITWVjaGVsZW4xFDASBgNVBAoTC05peFN5cyBC
VkJBMRYwFAYDVQQDEw1zYW1iYS5ncmVwLmJlMIGfMA0GCSqGSIb3DQEBAQUAA4GN
ADCBiQKBgQCee+Ibci3atTgoJqUU7cK13oD/E1IV2lKcvdviJBtr4rd1aRWfxcvD
PS00jRXGJ9AAM+EO2iuZv0Z5NFQkcF3Yia0yj6hvjQvlev1OWxaWuvWhRRLV/013
JL8cIrKYrlHqgHow60cgUt7kfSxq9kjkMTWLsGdqlE+Q7eelMN94tQIDAQABo4HF
MIHCMB0GA1UdDgQWBBT9N54b/zoiUNl2GnWYbDf6YeixgTCBkgYDVR0jBIGKMIGH
gBT9N54b/zoiUNl2GnWYbDf6YeixgaFkpGIwYDELMAkGA1UEBhMCQkUxEDAOBgNV
BAgTB0FudHdlcnAxETAPBgNVBAcTCE1lY2hlbGVuMRQwEgYDVQQKEwtOaXhTeXMg
QlZCQTEWMBQGA1UEAxMNc2FtYmEuZ3JlcC5iZYIJAK53w+1YhWocMAwGA1UdEwQF
MAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAAnMdbAgLRJ3xWOBlqNjLDzGWAEzOJUHo
5R9ljMFPwt1WdjRy7L96ETdc0AquQsW31AJsDJDf+Ls4zka+++DrVWk4kCOC0FOO
40ar0WUfdOtuusdIFLDfHJgbzp0mBu125VBZ651Db99IX+0BuJLdtb8fz2LOOe8b
eN7obSZTguM=
-----END CERTIFICATE-----
subject=/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
issuer=/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
---
No client certificate CA names sent
---
SSL handshake has read 1707 bytes and written 351 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 1024 bit
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: 6D28368494A3879054143C7C6B926C9BDCDBA20F1E099BF4BA7E76FCF357FD55
    Session-ID-ctx: 
    Master-Key: B246EA50357EAA6C335B50B67AE8CE41635EBCA6EFF7EFCE082225C4EFF5CFBB2E50C07D8320E0EFCBFABDCDF8A9A851
    Key-Arg   : None
    Start Time: 1222000892
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
250 HELP
quit
221 samba.grep.be closing connection
closed
wouter@country:~$ 
OpenSSL here connects to the server, issues a proper EHLO command, does STARTTLS, and then gives me the same data as it did for the HTTPS connection. Isn't that nice?
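A small follow-up trick, assuming you only care about the certificate details rather than the whole session dump: s_client prints the server certificate in PEM form, so it can be piped straight into openssl x509 (the echo just closes the connection immediately):
# print only the subject, issuer and validity dates of the server certificate
echo | openssl s_client -connect samba.grep.be:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates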

19 March 2006

Clint Adams: This report is flawed, but it sure is fun

91D63469DFdnusinow1243
63DEB0EC31eloy
55A965818Fvela1243
4658510B5Amyon2143
399B7C328Dluk31-2
391880283Canibal2134
370FE53DD9opal4213
322B0920C0lool1342
29788A3F4Cjoeyh
270F932C9Cdoko
258768B1D2sjoerd
23F1BCDB73aurel3213-2
19E02FEF11jordens1243
18AB963370schizo1243
186E74A7D1jdassen(Ks)1243
1868FD549Ftbm3142
186783ED5Efpeters1--2
1791B0D3B7edd-213
16E07F1CF9rousseau321-
16248AEB73rene1243
158E635A5Erafl
14C0143D2Dbubulle4123
13D87C6781krooger(P)4213
13A436AD25jfs(P)
133D08B612msp
131E880A84fjp4213
130F7A8D01nobse
12F1968D1Bdecklin1234
12E7075A54mhatta
12D75F8533joss1342
12BF24424Csrivasta1342
12B8C1FA69sto
127F961564kobold
122A30D729pere4213
1216D970C6eric12--
115E0577F2mpitt
11307D56EDnoel3241
112BE16D01moray1342
10BC7D020Aformorer-1--
10A7D91602apollock4213
10A51A4FDDgcs
10917A225Ejordi
104B729625pvaneynd3123
10497A176Dloic
962F1A57Fpa3aba
954FD2A58glandium1342
94A5D72FErafael
913FEFC40fenio-1--
90AFC7476rra1243
890267086duck31-2
886A118E6ch321-
8801EA932joey1243
87F4E0E11waldi-123
8514B3E7Cflorian21--
841954920fs12--
82A385C57mckinstry21-3
825BFB848rleigh1243
7BC70A6FFpape1---
7B70E403Bari1243
78E2D213Ajochen(Ks)
785FEC17Fkilian
784FB46D6lwall1342
7800969EFsmimram-1--
779CC6586haas
75BFA90ECkohda
752B7487Esesse2341
729499F61sho1342
71E161AFBbarbier12--
6FC05DA69wildfire(P)
6EEB6B4C2avdyk-12-
6EDF008C5blade1243
6E25F2102mejo1342
6D1C41882adeodato(Ks)3142
6D0B433DFross12-3
6B0EBC777piman1233
69D309C3Brobert4213
6882A6C4Bkov
66BBA3C84zugschlus4213
65662C734mvo
6554FB4C6petere-1-2
637155778stratus
62D9ACC8Elars1243
62809E61Ajosem
62252FA1Afrank2143
61CF2D62Amicah
610FA4CD1cjwatson2143
5EE6DC66Ajaldhar2143
5EA59038Esgran4123
5E1EE3FB1md4312
5E0B8B2DEjaybonci
5C9A5B54Esesse(Ps,Gs) 2341
5C4CF8EC3twerner
5C2FEE5CDacid213-
5C09FD35Atille
5C03C56DFrfrancoise---1
5B7CDA2DCxam213-
5A20EBC50cavok4214
5808D0FD0don1342
5797EBFABenrico1243
55230514Asjackman
549A5F855otavio-123
53DC29B41pdm
529982E5Avorlon1243
52763483Bmkoch213-
521DB31C5smr2143
51BF8DE0Fstigge312-
512CADFA5csmall3214
50A0AC927lamont
4F2CF01A8bdale
4F095E5E4mnencia
4E9F2C747frankie
4E9ABFCD2devin2143
4E81E55C1dancer2143
4E38E7ACFhmh(Gs)1243
4E298966Djrv(P)
4DF5CE2B4huggie12-3
4DD982A75speedblue
4C671257Ddamog-1-2
4C4A3823Ekmr4213
4C0B10A5Bdexter
4C02440B8js1342
4BE9F70EAtb1342
4B7D2F063varenet-213
4A3F9E30Eschultmc1243
4A3D7B9BClawrencc2143
4A1EE761Cmadcoder21--
49DE1EEB1he3142
49D928C9Bguillem1---
49B726B71racke
490788E11jsogo2143
4864826C3gotom4321
47244970Bkroeckx2143
45B48FFAEmarga2143
454E672DEisaac1243
44B3A135Cerich1243
44597A593agmartin4213
43FCC2A90amaya1243
43F3E6426agx-1-2
43EF23CD6sanvila1342
432C9C8BDwerner(K)
4204DDF1Baquette
400D8CD16tolimar12--
3FEC23FB2bap34-1
3F972BE03tmancill4213
3F801A743nduboc1---
3EBEDB32Bchrsmrtn4123
3EA291785taggart2314
3E4D47EC1tv(P)
3E19F188Etroyh1244
3DF6807BEsrk4213
3D2A913A1psg(P)
3D097A261chrisb
3C6CEA0C9adconrad1243
3C20DF273ondrej
3B5444815ballombe1342
3B1DF9A57cate2143
3AFA44BDDweasel(Ps,Gs) 1342
3AA6541EEbrlink1442
3A824B93Fasac3144
3A71C1E00turbo
3A2D7D292seb128
39ED101BFmbanck3132
3969457F0joostvb2143
389BF7E2Bkobras1--2
386946D69mooch12-3
374886B63nathans
36F222F1Fedelhard
36D67F790foka
360B6B958geiger
3607559E6mako
35C33C1B8dirson
35921B5D8ajmitch
34C1A5BE5sjq
3431B38BApxt312-
33E7B4B73lmamane2143
327572C47ucko1342
320021490schepler1342
31DEB8EAEgoedson
31BF2305Akrala(Gs)3142
319A42D19dannf21-4
3174FEE35wookey3124
3124B26F3mfurr21-3
30A327652tschmidt312-
3090DD8D5ingo3123
30813569Fjeroen1141
30644FAB7bas1332
30123F2F2gareuselesinge1243
300530C24bam1234
2FD6645ABrmurray-1-2
2F95C2F6Dchrism(P)
2F9138496graham(Gs)3142
2F5D65169jblache1332
2F28CD102absurd
2F2597E04samu
2F0B27113patrick
2EFA6B9D5hamish(P)3142
2EE0A35C7risko4213
2E91CD250daigo
2D688E0A7qjb-21-
2D4BE1450prudhomm
2D2A6B810joussen
2CFD42F26dilinger
2CEE44978dburrows1243
2CD4C0D9Dskx4213
2BFB880A3zeevon
2BD8B050Droland3214
2B74952A9alee
2B4D6DE13paul
2B345BDD3neilm1243
2B28C5995bod4213
2B0FA4F49schoepf
2B0DDAF42awoodland
2A8061F32osamu4213
2A21AD4F9tviehmann1342
299E81DA0kaplan
2964199E2fabbe3142
28DBFEC2Fpelle
28B8D7663ametzler1342
28B143975martignlo
288C7C1F793sam2134
283E5110Fovek
2817A996Atfheen
2807CAC25abi4123
2798DD95Cpiefel
278D621B4uwe-1--
26FF0ABF2rcw2143
26E8169D2hertzog3124
26C0084FCchrisvdb
26B79D401filippo-1--
267756F5Dfrn2341
25E2EB5B4nveber123-
25C6153ADbroonie1243
25B713DF0djpig1243
250ECFB98ccontavalli(Gs)
250064181paulvt
24F71955Adajobe21-3
24E2ECA5Ajmm4213
2496A1827srittau
23E8DCCC0maxx1342
23D97C149mstone(P)2143
22DB65596dz321-
229F19BD1meskes
21F41B907marillat1---
21EB2DE66boll
21557BC10kraai1342
2144843F5lolando1243
210656584voc
20D7CA701steinm
205410E97horms
1FC992520tpo-14-
1FB0DFE9Bgildor
1FAEEB4A9neil1342
1F7E8BC63cedric21--
1F2C423BCzack1332
1F0199162kreckel4214
1ECA94FA8ishikawa2143
1EAAC62DFcyb---1
1EA2D2C41malattia-312
1E77AC835bcwhite(P)
1E66C9BB0tach
1E145F334mquinson2143
1E0BA04C1treinen321-
1DFE80FB2tali
1DE054F69azekulic(P)
1DC814B09jfs
1CB467E27kalfa
1C9132DDByoush-21-
1C87FFC2Fstevenk-1--
1C2CE8099knok321-
1BED37FD2henning(Ks)1342
1BA0A7EB5treacy(P)
1B7D86E0Fcmb4213
1B62849B3smarenka2143
1B3C281F4alain2143
1B25A5CF1omote
1ABA0E8B2sasa
1AB474598baruch2143
1AB2A91F5troup1--2
1A827CEDEafayolle(Gs)
1A6C805B9zorglub2134
1A674A359maehara
1A57D8BF7drew2143
1A269D927sharky
1A1696D2Blfousse1232
19BF42B07zinoviev--12
19057B5D3vanicat2143
18E950E00mechanix
18BB527AFgwolf1132
18A1D9A1Fjgoerzen
18807529Bultrotter2134
1872EB4E5rcardenes
185EE3E0Eangdraug12-3
1835EB2FFbossekr
180C83E8Eigloo1243
17B8357E5andreas212-
17B80220Dsjr(Gs)1342
17796A60Bsfllaw1342
175CB1AD2toni1---
1746C51F4klindsay
172D03CB1kmuto4231
171473F66ttroxell13-4
16E76D81Dseanius1243
16C63746Dhector
16C5F196Bmalex4213
16A9F3C38rkrishnan
168021CE4ron---1
166F24521pyro-123
1631B4819anfra
162EEAD8Bfalk1342
161326D40jamessan13-4
1609CD2C0berin--1-
15D8CDA7Bguus1243
15D8C12EArganesan
15D64F870zobel
159EF5DBCbs
157F045DCcamm
1564EE4B6hazelsct
15623FC45moronito4213
1551BE447torsten
154AD21B5warmenhoven
153BBA490sjg
1532005DAseamus
150973B91pjb2143
14F83C751kmccarty12-3
14DB97694khkim
14CD6E3D2wjl4213
14A8854E6weinholt1243
14950EAA6ajkessel
14298C761robertc(Ks)
142955682kamop
13FD29468bengen-213
13FD25C84roktas3142
13B047084madhack
139CCF0C7tagoh3142
139A8CCE2eugen31-2
138015E7Ethb1234
136B861C1bab2143
133FC40A4mennucc13214
12C0FCD1Awdg4312
12B05B73Arjs
1258D8781grisu31-2
1206C5AFDchewie-1-1
1200D1596joy2143
11C74E0B7alfs
119D03486francois4123
118EA3457rvr
1176015EDevo
116BD77C6alfie
112AA1DB8jh
1128287E8daf
109FC015Cgodisch
106468DEBfog--12
105792F34rla-21-
1028AF63Cforcer3142
1004DA6B4bg66
0.zufus-1--
0.zoso-123
0.ykomatsu-123
0.xtifr1243
0.xavier-312
0.wouter2143
0.will-132
0.warp1342
0.voss1342
0.vlm2314
0.vleeuwen4312
0.vince2134
0.ukai4123
0.tytso-12-
0.tjrc14213
0.tats-1-2
0.tao1--2
0.stone2134
0.stevegr1243
0.smig-1-2
0.siggi1-44
0.shaul4213
0.sharpone1243
0.sfrost1342
0.seb-21-
0.salve4213
0.ruoso1243
0.rover--12
0.rmayr-213
0.riku4123
0.rdonald12-3
0.radu-1--
0.pzn112-
0.pronovic1243
0.profeta321-
0.portnoy12-3
0.porridge1342
0.pmhahn4123
0.pmachard1--2
0.pkern3124
0.pik1--2
0.phil4213
0.pfrauenf4213
0.pfaffben2143
0.p21243
0.ossk1243
0.oohara1234
0.ohura-213
0.nwp1342
0.noshiro4312
0.noodles2134
0.nomeata2143
0.noahm3124
0.nils3132
0.nico-213
0.ms3124
0.mpalmer2143
0.moth3241
0.mlang2134
0.mjr1342
0.mjg591342
0.merker2--1
0.mbuck2143
0.mbrubeck1243
0.madduck4123
0.mace-1-2
0.luther1243
0.luigi4213
0.lss-112
0.lightsey1--2
0.ley-1-2
0.ldrolez--1-
0.lange4124
0.kirk1342
0.killer1243
0.kelbert-214
0.juanma2134
0.jtarrio1342
0.jonas4312
0.joerg1342
0.jmintha-21-
0.jimmy1243
0.jerome21--
0.jaqque1342
0.jaq4123
0.jamuraa4123
0.iwj1243
0.ivan2341
0.hsteoh3142
0.hilliard4123
0.helen1243
0.hecker3142
0.hartmans1342
0.guterm312-
0.gniibe4213
0.glaweh4213
0.gemorin4213
0.gaudenz3142
0.fw2134
0.fmw12-3
0.evan1--2
0.ender4213
0.elonen4123
0.eevans13-4
0.ean-1--
0.dwhedon4213
0.duncf2133
0.ds1342
0.dparsons1342
0.dlehn1243
0.dfrey-123
0.deek1--2
0.davidw4132
0.davidc1342
0.dave4113
0.daenzer1243
0.cupis1---
0.cts-213
0.cph4312
0.cmc2143
0.clebars2143
0.chaton-21-
0.cgb-12-
0.calvin-1-2
0.branden1342
0.brad4213
0.bnelson1342
0.blarson1342
0.benj3132
0.bayle-213
0.baran1342
0.az2134
0.awm3124
0.atterer4132
0.andressh1---
0.amu1--2
0.akumria-312
0.ajt1144
0.ajk1342
0.agi2143
0.adric2143
0.adejong1243
0.adamm12--
0.aba1143