Search Results: "wjl"

28 May 2021

Jonathan McDowell: Trying to understand Kubernetes networking

I previously built a single node Kubernetes cluster as a test environment to learn more about it. The first thing I want to try to understand is its networking. In particular the IP addresses that are listed are all 10.* and my host's network is a 192.168/24. I understand each pod gets its own virtual ethernet interface and associated IP address, and these are generally private within the cluster (and firewalled out other than for exposed services). What does that actually look like?
$ ip route
default via 192.168.53.1 dev enx00e04c6851de
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.0.0/24 dev weave proto kernel scope link src 192.168.0.1
192.168.53.0/24 dev enx00e04c6851de proto kernel scope link src 192.168.53.147
Huh. No sign of any way to get to 10.107.66.138 (the IP my echoserver from the previous post is available on directly from the host). What about network interfaces? (under the cut because it's lengthy)
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enx00e04c6851de: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:e0:4c:68:51:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.53.147/24 brd 192.168.53.255 scope global dynamic enx00e04c6851de
       valid_lft 41571sec preferred_lft 41571sec
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 74:d8:3e:70:3b:18 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:18:04:9e:08 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d2:5a:fd:c1:56:23 brd ff:ff:ff:ff:ff:ff
7: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 12:82:8f:ed:c7:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global weave
       valid_lft forever preferred_lft forever
9: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
    link/ether b6:49:88:d6:6d:84 brd ff:ff:ff:ff:ff:ff
10: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 6e:6c:03:1d:e5:0e brd ff:ff:ff:ff:ff:ff
11: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether 9a:af:c5:0a:b3:fd brd ff:ff:ff:ff:ff:ff
13: vethwepl534c0a6@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 1e:ac:f1:85:61:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
15: vethwepl9ffd6b6@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 56:ca:71:2a:ab:39 brd ff:ff:ff:ff:ff:ff link-netnsid 1
17: vethwepl62b369d@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether e2:a0:bb:ee:fc:73 brd ff:ff:ff:ff:ff:ff link-netnsid 2
23: vethwepl6669168@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether f2:e7:e6:95:e0:61 brd ff:ff:ff:ff:ff:ff link-netnsid 3
That looks like a collection of virtual ethernet devices that are being managed by the weave networking plugin, and presumably partnered inside each pod. They're bridged to the weave interface (the master weave bit). Still no clues about the 10.* range. What about ARP?
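(A quick aside before looking at ARP: the bridging relationship can be confirmed directly. This is a sketch using the interface names from the listing above and assumes the iproute2 tools; ip link show master weave lists the interfaces enslaved to the weave bridge, and the -d flag shows that weave itself is a bridge device.)
$ ip link show master weave
$ ip -d link show weave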
ip neigh
192.168.53.1 dev enx00e04c6851de lladdr e4:8d:8c:35:98:d5 DELAY
192.168.0.4 dev datapath lladdr da:22:06:96:50:cb STALE
192.168.0.2 dev weave lladdr 66:eb:ce:16:3c:62 REACHABLE
192.168.53.136 dev enx00e04c6851de lladdr 00:e0:4c:39:f2:54 REACHABLE
192.168.0.6 dev weave lladdr 56:a9:f0:d2:9e:f3 STALE
192.168.0.3 dev datapath lladdr f2:42:c9:c3:08:71 STALE
192.168.0.3 dev weave lladdr f2:42:c9:c3:08:71 REACHABLE
192.168.0.2 dev datapath lladdr 66:eb:ce:16:3c:62 STALE
192.168.0.6 dev datapath lladdr 56:a9:f0:d2:9e:f3 STALE
192.168.0.4 dev weave lladdr da:22:06:96:50:cb STALE
192.168.0.5 dev datapath lladdr fe:6f:1b:14:56:5a STALE
192.168.0.5 dev weave lladdr fe:6f:1b:14:56:5a REACHABLE
Nope. That just looks like addresses on the weave managed bridge. Alright. What about firewalling?
nft list ruleset
table ip nat  
	chain DOCKER  
		iifname "docker0" counter packets 0 bytes 0 return
	 
	chain POSTROUTING  
		type nat hook postrouting priority srcnat; policy accept;
		 counter packets 531750 bytes 31913539 jump KUBE-POSTROUTING
		oifname != "docker0" ip saddr 172.17.0.0/16 counter packets 1 bytes 84 masquerade 
		counter packets 525600 bytes 31544134 jump WEAVE
	 
	chain PREROUTING  
		type nat hook prerouting priority dstnat; policy accept;
		 counter packets 180 bytes 12525 jump KUBE-SERVICES
		fib daddr type local counter packets 23 bytes 1380 jump DOCKER
	 
	chain OUTPUT  
		type nat hook output priority -100; policy accept;
		 counter packets 527005 bytes 31628455 jump KUBE-SERVICES
		ip daddr != 127.0.0.0/8 fib daddr type local counter packets 285425 bytes 17125524 jump DOCKER
	 
	chain KUBE-MARK-DROP  
		counter packets 0 bytes 0 meta mark set mark or 0x8000 
	 
	chain KUBE-MARK-MASQ  
		counter packets 0 bytes 0 meta mark set mark or 0x4000 
	 
	chain KUBE-POSTROUTING  
		mark and 0x4000 != 0x4000 counter packets 4622 bytes 277720 return
		counter packets 0 bytes 0 meta mark set mark xor 0x4000 
		 counter packets 0 bytes 0 masquerade 
	 
	chain KUBE-KUBELET-CANARY  
	 
	chain INPUT  
		type nat hook input priority 100; policy accept;
	 
	chain KUBE-PROXY-CANARY  
	 
	chain KUBE-SERVICES  
		meta l4proto tcp ip daddr 10.96.0.10  tcp dport 9153 counter packets 0 bytes 0 jump KUBE-SVC-JD5MR3NA4I4DYORP
		meta l4proto tcp ip daddr 10.107.66.138  tcp dport 8080 counter packets 1 bytes 60 jump KUBE-SVC-666FUMINWJLRRQPD
		meta l4proto tcp ip daddr 10.111.16.129  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-EZYNCFY2F7N6OQA2
		meta l4proto tcp ip daddr 10.96.9.41  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-EDNDUDH2C75GIR6O
		meta l4proto tcp ip daddr 192.168.53.147  tcp dport 443 counter packets 0 bytes 0 jump KUBE-XLB-EDNDUDH2C75GIR6O
		meta l4proto tcp ip daddr 10.96.9.41  tcp dport 80 counter packets 0 bytes 0 jump KUBE-SVC-CG5I4G2RS3ZVWGLK
		meta l4proto tcp ip daddr 192.168.53.147  tcp dport 80 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
		meta l4proto tcp ip daddr 10.96.0.1  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-NPX46M4PTMTKRN6Y
		meta l4proto udp ip daddr 10.96.0.10  udp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-TCOU7JCQXEZGVUNU
		meta l4proto tcp ip daddr 10.96.0.10  tcp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-ERIFXISQEP7F7OF4
		 fib daddr type local counter packets 3312 bytes 198720 jump KUBE-NODEPORTS
	 
	chain KUBE-NODEPORTS  
		meta l4proto tcp  tcp dport 31529 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 31529 counter packets 0 bytes 0 jump KUBE-SVC-666FUMINWJLRRQPD
		meta l4proto tcp ip saddr 127.0.0.0/8  tcp dport 30894 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 30894 counter packets 0 bytes 0 jump KUBE-XLB-EDNDUDH2C75GIR6O
		meta l4proto tcp ip saddr 127.0.0.0/8  tcp dport 32740 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 32740 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
	 
	chain KUBE-SVC-NPX46M4PTMTKRN6Y  
		 counter packets 0 bytes 0 jump KUBE-SEP-Y6PHKONXBG3JINP2
	 
	chain KUBE-SEP-Y6PHKONXBG3JINP2  
		ip saddr 192.168.53.147  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.53.147:6443
	 
	chain WEAVE  
		# match-set weaver-no-masq-local dst  counter packets 135966 bytes 8160820 return
		ip saddr 192.168.0.0/24 ip daddr 224.0.0.0/4 counter packets 0 bytes 0 return
		ip saddr != 192.168.0.0/24 ip daddr 192.168.0.0/24 counter packets 0 bytes 0 masquerade 
		ip saddr 192.168.0.0/24 ip daddr != 192.168.0.0/24 counter packets 33 bytes 2941 masquerade 
	 
	chain WEAVE-CANARY  
	 
	chain KUBE-SVC-JD5MR3NA4I4DYORP  
		  counter packets 0 bytes 0 jump KUBE-SEP-6JI23ZDEH4VLR5EN
		 counter packets 0 bytes 0 jump KUBE-SEP-FATPLMAF37ZNQP5P
	 
	chain KUBE-SEP-6JI23ZDEH4VLR5EN  
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.2:9153
	 
	chain KUBE-SVC-TCOU7JCQXEZGVUNU  
		  counter packets 0 bytes 0 jump KUBE-SEP-JTN4UBVS7OG5RONX
		 counter packets 0 bytes 0 jump KUBE-SEP-4TCKAEJ6POVEFPVW
	 
	chain KUBE-SEP-JTN4UBVS7OG5RONX  
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto udp   counter packets 0 bytes 0 dnat to 192.168.0.2:53
	 
	chain KUBE-SVC-ERIFXISQEP7F7OF4  
		  counter packets 0 bytes 0 jump KUBE-SEP-UPZX2EM3TRFH2ASL
		 counter packets 0 bytes 0 jump KUBE-SEP-KPHYKKPVMB473Z76
	 
	chain KUBE-SEP-UPZX2EM3TRFH2ASL  
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.2:53
	 
	chain KUBE-SEP-4TCKAEJ6POVEFPVW  
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto udp   counter packets 0 bytes 0 dnat to 192.168.0.3:53
	 
	chain KUBE-SEP-KPHYKKPVMB473Z76  
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.3:53
	 
	chain KUBE-SEP-FATPLMAF37ZNQP5P  
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.3:9153
	 
	chain KUBE-SVC-666FUMINWJLRRQPD  
		 counter packets 1 bytes 60 jump KUBE-SEP-LYLDBZYLHY4MT3AQ
	 
	chain KUBE-SEP-LYLDBZYLHY4MT3AQ  
		ip saddr 192.168.0.4  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 1 bytes 60 dnat to 192.168.0.4:8080
	 
	chain KUBE-XLB-EDNDUDH2C75GIR6O  
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-SVC-EDNDUDH2C75GIR6O
		 counter packets 0 bytes 0 jump KUBE-SEP-BLQHCYCSXY3NRKLC
	 
	chain KUBE-XLB-CG5I4G2RS3ZVWGLK  
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-SVC-CG5I4G2RS3ZVWGLK
		 counter packets 0 bytes 0 jump KUBE-SEP-5XVRKWM672JGTWXH
	 
	chain KUBE-SVC-EDNDUDH2C75GIR6O  
		 counter packets 0 bytes 0 jump KUBE-SEP-BLQHCYCSXY3NRKLC
	 
	chain KUBE-SEP-BLQHCYCSXY3NRKLC  
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:443
	 
	chain KUBE-SVC-CG5I4G2RS3ZVWGLK  
		 counter packets 0 bytes 0 jump KUBE-SEP-5XVRKWM672JGTWXH
	 
	chain KUBE-SEP-5XVRKWM672JGTWXH  
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:80
	 
	chain KUBE-SVC-EZYNCFY2F7N6OQA2  
		 counter packets 0 bytes 0 jump KUBE-SEP-JYW326XAJ4KK7QPG
	 
	chain KUBE-SEP-JYW326XAJ4KK7QPG  
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:8443
	 
 
table ip filter  
	chain DOCKER  
	 
	chain DOCKER-ISOLATION-STAGE-1  
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
		counter packets 0 bytes 0 return
	 
	chain DOCKER-ISOLATION-STAGE-2  
		oifname "docker0" counter packets 0 bytes 0 drop
		counter packets 0 bytes 0 return
	 
	chain FORWARD  
		type filter hook forward priority filter; policy drop;
		iifname "weave"  counter packets 213 bytes 54014 jump WEAVE-NPC-EGRESS
		oifname "weave"  counter packets 150 bytes 30038 jump WEAVE-NPC
		oifname "weave" ct state new counter packets 0 bytes 0 log group 86 
		oifname "weave" counter packets 0 bytes 0 drop
		iifname "weave" oifname != "weave" counter packets 33 bytes 2941 accept
		oifname "weave" ct state related,established counter packets 0 bytes 0 accept
		 counter packets 0 bytes 0 jump KUBE-FORWARD
		ct state new  counter packets 0 bytes 0 jump KUBE-SERVICES
		ct state new  counter packets 0 bytes 0 jump KUBE-EXTERNAL-SERVICES
		counter packets 0 bytes 0 jump DOCKER-USER
		counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-1
		oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
		oifname "docker0" counter packets 0 bytes 0 jump DOCKER
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
		iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
	 
	chain DOCKER-USER  
		counter packets 0 bytes 0 return
	 
	chain KUBE-FIREWALL  
		 mark and 0x8000 == 0x8000 counter packets 0 bytes 0 drop
		ip saddr != 127.0.0.0/8 ip daddr 127.0.0.0/8  ct status dnat counter packets 0 bytes 0 drop
	 
	chain OUTPUT  
		type filter hook output priority filter; policy accept;
		ct state new  counter packets 527014 bytes 31628984 jump KUBE-SERVICES
		counter packets 36324809 bytes 6021214027 jump KUBE-FIREWALL
		meta l4proto != esp  mark and 0x20000 == 0x20000 counter packets 0 bytes 0 drop
	 
	chain INPUT  
		type filter hook input priority filter; policy accept;
		 counter packets 35869492 bytes 5971008896 jump KUBE-NODEPORTS
		ct state new  counter packets 390938 bytes 23457377 jump KUBE-EXTERNAL-SERVICES
		counter packets 36249774 bytes 6030068622 jump KUBE-FIREWALL
		meta l4proto tcp ip daddr 127.0.0.1 tcp dport 6784 fib saddr type != local ct state != related,established  counter packets 0 bytes 0 drop
		iifname "weave" counter packets 907273 bytes 88697229 jump WEAVE-NPC-EGRESS
		counter packets 34809601 bytes 5818213726 jump WEAVE-IPSEC-IN
	 
	chain KUBE-KUBELET-CANARY  
	 
	chain KUBE-PROXY-CANARY  
	 
	chain KUBE-EXTERNAL-SERVICES  
	 
	chain KUBE-NODEPORTS  
		meta l4proto tcp  tcp dport 32196 counter packets 0 bytes 0 accept
		meta l4proto tcp  tcp dport 32196 counter packets 0 bytes 0 accept
	 
	chain KUBE-SERVICES  
	 
	chain KUBE-FORWARD  
		ct state invalid counter packets 0 bytes 0 drop
		 mark and 0x4000 == 0x4000 counter packets 0 bytes 0 accept
		 ct state related,established counter packets 0 bytes 0 accept
		 ct state related,established counter packets 0 bytes 0 accept
	 
	chain WEAVE-NPC-INGRESS  
	 
	chain WEAVE-NPC-DEFAULT  
		# match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst  counter packets 14 bytes 840 accept
		# match-set weave-P.B !ZhkAr5q=XZ?3 tMBA+0 dst  counter packets 0 bytes 0 accept
		# match-set weave-Rzff h:=]JaaJl/G;(XJpGjZ[ dst  counter packets 0 bytes 0 accept
		# match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst  counter packets 0 bytes 0 accept
		# match-set weave-iLgO^ o=U/*%KE[@=W:l~ 9T dst  counter packets 9 bytes 540 accept
	 
	chain WEAVE-NPC  
		ct state related,established counter packets 124 bytes 28478 accept
		ip daddr 224.0.0.0/4 counter packets 0 bytes 0 accept
		# PHYSDEV match --physdev-out vethwe-bridge --physdev-is-bridged counter packets 3 bytes 180 accept
		ct state new counter packets 23 bytes 1380 jump WEAVE-NPC-DEFAULT
		ct state new counter packets 0 bytes 0 jump WEAVE-NPC-INGRESS
	 
	chain WEAVE-NPC-EGRESS-ACCEPT  
		counter packets 48 bytes 3769 meta mark set mark or 0x40000 
	 
	chain WEAVE-NPC-EGRESS-CUSTOM  
	 
	chain WEAVE-NPC-EGRESS-DEFAULT  
		# match-set weave-s_+ChJId4Uy_$ G;WdH ~TK)I src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-s_+ChJId4Uy_$ G;WdH ~TK)I src  counter packets 0 bytes 0 return
		# match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src  counter packets 31 bytes 2749 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src  counter packets 31 bytes 2749 return
		# match-set weave-41s)5vQ^o/xWGz6a20N:~?# E src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-41s)5vQ^o/xWGz6a20N:~?# E src  counter packets 0 bytes 0 return
		# match-set weave-sui%__gZ kX~oZgI_Ttqp=Dp src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-sui%__gZ kX~oZgI_Ttqp=Dp src  counter packets 0 bytes 0 return
		# match-set weave-nmMUaDKV*YkQcP5s?Q[R54Ep3 src  counter packets 17 bytes 1020 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-nmMUaDKV*YkQcP5s?Q[R54Ep3 src  counter packets 17 bytes 1020 return
	 
	chain WEAVE-NPC-EGRESS  
		ct state related,established counter packets 907425 bytes 88746642 accept
		# PHYSDEV match --physdev-in vethwe-bridge --physdev-is-bridged counter packets 0 bytes 0 return
		fib daddr type local counter packets 11 bytes 640 return
		ip daddr 224.0.0.0/4 counter packets 0 bytes 0 return
		ct state new counter packets 50 bytes 3961 jump WEAVE-NPC-EGRESS-DEFAULT
		ct state new mark and 0x40000 != 0x40000 counter packets 2 bytes 192 jump WEAVE-NPC-EGRESS-CUSTOM
	 
	chain WEAVE-IPSEC-IN  
	 
	chain WEAVE-CANARY  
	 
 
table ip mangle  
	chain KUBE-KUBELET-CANARY  
	 
	chain PREROUTING  
		type filter hook prerouting priority mangle; policy accept;
	 
	chain INPUT  
		type filter hook input priority mangle; policy accept;
		counter packets 35716863 bytes 5906910315 jump WEAVE-IPSEC-IN
	 
	chain FORWARD  
		type filter hook forward priority mangle; policy accept;
	 
	chain OUTPUT  
		type route hook output priority mangle; policy accept;
		counter packets 35804064 bytes 5938944956 jump WEAVE-IPSEC-OUT
	 
	chain POSTROUTING  
		type filter hook postrouting priority mangle; policy accept;
	 
	chain KUBE-PROXY-CANARY  
	 
	chain WEAVE-IPSEC-IN  
	 
	chain WEAVE-IPSEC-IN-MARK  
		counter packets 0 bytes 0 meta mark set mark or 0x20000
	 
	chain WEAVE-IPSEC-OUT  
	 
	chain WEAVE-IPSEC-OUT-MARK  
		counter packets 0 bytes 0 meta mark set mark or 0x20000
	 
	chain WEAVE-CANARY  
	 
 
Wow. That's a lot of nftables entries, but it explains what's going on. We have a nat entry for:
meta l4proto tcp ip daddr 10.107.66.138 tcp dport 8080 counter packets 1 bytes 60 jump KUBE-SVC-666FUMINWJLRRQPD
which ends up going to KUBE-SEP-LYLDBZYLHY4MT3AQ and:
meta l4proto tcp counter packets 1 bytes 60 dnat to 192.168.0.4:8080
So packets headed for our echoserver are eventually ending up in a container that has a local IP address of 192.168.0.4. Which we can see in our routing table via the weave interface. Mystery explained. We can see the ingress for the externally visible HTTP service as well:
meta l4proto tcp ip daddr 192.168.53.147 tcp dport 80 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
which ends up redirected to:
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.5:80
So from that we'd expect the IP inside the echoserver pod to be 192.168.0.4 and the IP address inside our nginx ingress pod to be 192.168.0.5. Let's look:
root@udon:/# docker ps | grep echoserver
7cbb177bee18   k8s.gcr.io/echoserver                 "/usr/local/bin/run. "   3 days ago   Up 3 days             k8s_echoserver_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
root@udon:/# docker exec -it 7cbb177bee18 /bin/bash
root@hello-node-59bffcc9fd-8hkgb:/# awk '/32 host/ { print f } { f=$2 }' <<< "$(</proc/net/fib_trie)" | sort -u
127.0.0.1
192.168.0.4
It's a slightly awkward method of determining the local IP addresses due to the stripped down nature of the container, but it clearly shows the expected 192.168.0.4 address. I've touched here upon the ability to actually enter a container and have a poke around its running environment by using docker directly. Next step is to use that to investigate what containers have actually been spun up and what they're doing. I'll also revisit networking when I get to the point of building a multi-node cluster, to examine how the bridging between different hosts is done.
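For completeness, the same addresses can be cross-checked from the Kubernetes side rather than through docker. A sketch, assuming kubectl is pointed at this cluster:
$ kubectl get pods -A -o wide    # pod IPs, which land in the weave 192.168.0.0/24 range
$ kubectl get services -A        # ClusterIPs, the 10.* addresses such as 10.107.66.138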

12 September 2016

Steve Kemp: If your code accepts URIs as input..

There are many online sites that accept reading input from remote locations. For example a site might try to extract all the text from a webpage, or show you the HTTP-headers a given server sends back in response to a request. If you run such a site you must make sure you validate the scheme you're given - also remembering to do that if you're sent any HTTP-redirects.
Really the issue here is a confusion between URL & URI.
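If the remote fetch happens to be done by shelling out to curl, one way to enforce this is to whitelist the allowed protocols for both the initial request and any redirects it is sent, so that file:// and friends are refused rather than followed. A sketch, where $url stands for the user-supplied location:
$ curl -L --proto '=http,https' --proto-redir '=http,https' "$url"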
The only time I ever communicated with Aaron Swartz was unfortunately after his death, because I didn't make the connection. I randomly stumbled upon the html2text software he put together, which had an online demo containing a form for entering a location. I tried the obvious input:
file:///etc/passwd
The software was vulnerable, read the file, and showed it to me. The site gives errors on all inputs now, so it cannot be used to demonstrate the problem, but on Friday I saw another site on Hacker News with the very same input-issue, and it reminded me that there's a very real class of security problems here. The site in question was http://fuckyeahmarkdown.com/ and allows you to enter a URL to convert to markdown - I found this via the hacker news submission. The following link shows the contents of /etc/hosts, and demonstrates the problem: http://fuckyeahmarkdown.example.com/go/?u=file:///etc/hosts&read=1&preview=1&showframe=0&submit=go The output looked like this:
..
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
127.0.0.1 stage
127.0.0.1 files
127.0.0.1 brettt..
..
In the actual output of '/etc/passwd' all newlines had been stripped. (Which I now recognize as being an artifact of the markdown processing.) UPDATE: The problem is fixed now.

5 October 2013

Rémi Vanicat: Key-transition

A recent discussion on debian-project reminded me I have to do this:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA256
Hello,
I am transitioning GPG keys from an old 1024-bit DSA key to a new
4096-bit RSA key.  The old key will continue to be valid for some
time, but I prefer all new correspondance to be encrypted in the new
key, and will be making all signatures going forward with the new key.
This transition document is signed with both keys to validate the
transition.
If you have signed my old key, I would appreciate signatures on my new
key as well, provided that your signing policy permits that without
reauthenticating me.
The old key, which I am transitional away from, is:
   pub   1024D/9057B5D3 2002-02-07
         Key fingerprint = 7AA1 9755 336C 6D0B 8757  E393 B0E1 98D7 9057 B5D3
The new key, to which I am transitioning, is:
   pub   4096R/31ED8AEF 2009-05-08
         Key fingerprint = DE8F 92CD 16FA 1E5B A16E  E95E D265 C085 31ED 8AEF
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver keys.gnupg.net --recv-key D265C08531ED8AEF
If you have already validated my old key, you can then validate that
the new key is signed by my old key:
  gpg --check-sigs D265C08531ED8AEF
If you then want to sign my new key, a simple and safe way to do that
is by using caff (shipped in Debian as part of the "signing-party"
package) as follows:
  caff D265C08531ED8AEF
Please contact me via e-mail at <vanicat@debian.org> if you have any
questions about this document or this transition.
  Remi vanicat
  vanicat@debian.org
  remi.vanicat@gmail.com
  remi.vanicat@ens-lyon.org
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)
iD8DBQFST8iPsOGY15BXtdMRAggfAJ4z5wEpUy8Bcicv9KTGOjsUAZF2xACfYKv9
GWXh8iT1N2Qqjhwtpvx9B3aJAhUDBQFST8iP0mXAhTHtiu8BCPldEADYM9e/22yu
En8lcZ5UUI/eQ5jFgZaxTZ90ShS0vPD/Mgn6xyKoeigA0Bk4ltTjDXA7vEWXLjOK
gbGv3SvffPJIsR1WJmhYtVyNquJXXjyElEBsbxvCJ/awYdU9BFXPqtLlLVCObvSc
bE9xlJhoLdk3bDqSSTO3nqoDa0qgPnJvwKNYIMBrNavGyIW3QT0BRUCKyBssqh+u
P4x8VhpHiR2Ee4LHfRVeJk+5ncvSXYluAohOXka5AnV2GgFQoVYfFqxn2Gh3BMWC
sqf/NUPnFOCSRw++oNP3mBv3jn/jZuo8BcVOECKL+/dO6/3otgO6a/5tUspfnAJA
m/UxBdc2vs7LkZ0wUipIHg10x4154f+hZfx4WuCJ05X0dqcKeh4eJ0zFBvxMyh+A
o2TfifT9WJlyb+Hah/w1MFAXI8cAj5RvwdVgTzcodXpggtpBpdLDvv3G1KYFm/TG
Zev480N6bGrBb3JKgUtAMuTls8+FngYtYg9YKBiajEDM3MVC+H4MiOzVNKV++y/n
YW3z59Oc04ZMi9hV+uR3kwq8D7aUJmc0QFeOGmq7W9LOjvVO+lTf87l3jh2ahxx/
FgiinSZr1YzE+9OtNj8CTsmAmApIxsTJUCR6h554z+lyrTwc0pdeUwzSWqV84k7G
V6HBmTiw9IGs22+W15pRzq/mCVYdrYT7zQ==
=c5fJ
-----END PGP SIGNATURE-----
Here is the link to the .txt version for easier checking of signature.
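If you want to check such a statement yourself, a minimal sketch (the filename is hypothetical; the key IDs are the ones quoted above):
$ gpg --keyserver keys.gnupg.net --recv-key B0E198D79057B5D3 D265C08531ED8AEF
$ gpg --verify key-transition.txt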

19 July 2011

John Goerzen: Trains and Birthdays

This weekend, the boys got what they've been waiting for: another trip on Amtrak. For the last week or so, Jacob has had a morning ritual. He'll look at the calendar, figure out what today is, figure out when we'll get on the train, and then figure out how many days it will be. As the number gets lower, the excitement gets higher, of course. Friday morning, we woke the boys at 2:30AM to get to the train station. The only Amtrak trains through our area are middle of the night departures. Jacob is normally hard to wake up, but when I tell him that I'm waking him up to go to the train station, he wakes up faster than I've ever seen him before. It takes about 3 seconds for him to process that in his groggy state, and then he sits up straight, throws off the covers, and is instantly ready to go. Both boys were excited even in the station waiting room. Oliver has been on an Amtrak train before, but it's been awhile and he probably doesn't remember it. He constantly talked about it, jabbering as much as his vocabulary lets him. Once we were on the train, Terah and I would have liked to get some more sleep. The boys, on the other hand, were now wide awake, and didn't fall back asleep until about 5:30. That means Terah and I didn't, either. Here's what it usually looks like:

When it came time for breakfast, we went to the dining car as usual. Jacob had already been telling us for days what he would eat for breakfast on the train: "I always have French toast on the train, dad." And so he did. A few minutes later, I heard the waitress telling other people they were out of French toast, so I was glad I didn't have to disappoint Jacob over that! But it was Oliver that really came alive on this trip. I don't think I've ever seen him so excited. It was almost constant. He happily talked about anything he could see or touch. But he also listened to other conversations, and would frequently pick out one word from a sentence and try to say it. He does that at home too, but not nearly so often as on the train. Both boys wanted to go "exploring" a lot, my word for taking them for a walk in the train and seeing what they might find. We walked up and down the train several times, Jacob excited over opening the doors between the cars, and Oliver excited just to be there. Oliver's excitement kept him from sleeping well. He did eventually get a short nap, but that wasn't quite enough to avert a couple of tantrums later in the day.

The reason for the trip was the 80th birthday party for Terah's grandpa. And it so happened that Oliver's 2nd birthday would be over the same time. So, Saturday morning, we all went for breakfast at Das Dutchman Essenhaus, one of the favorite local restaurants in northern Indiana. After that, it was over to a relative's place for some birthday festivities. The children got mini cakes to decorate as rail cars. There was frosting and all sorts of toppings. Great fun was had by all, and it was wise that this activity took place in a garage rather than indoors. Yes, that is a marshmallow stuck to Jacob's nose. Then in the evening, it was off to another relative's place for some more family time. Jacob had a great time all day, and was in high spirits. He asked me to sit by him at dinner, and started one of the longest conversations I've had with him in some time. We just talked about the things that happened in the day, but it was nice when I told him, "I like sitting by you, Jacob," and he said, "Dad, me too!" The big highlight for the day happened in the evening.
We were gathered around a fire with a guitar to do some singing. Jacob was happily perched in his lawn chair, but got very excited when he saw some lightweight airplanes flying overhead. These kept flying at some distance, and he kept pointing them out to us. But that wasn't even the most exciting part. That came when the fireflies came out. Jacob ran around, catching them in his hands, and excitedly showing them to whatever person happened to be closest. He was laughing with joy for such a long time. At one point, someone asked him if the bugs were tickling his hands. He said, evidently just realizing it, "Oh yes, they ARE tickling my hands!" He was one very happy boy. Sunday I had to leave to get back home, while Terah and the boys will return a couple of days later. Although I do sort of look forward to a train trip where I can relax without having to manage two young boys, I do miss them already and will be happy to have everyone back home in a few days. (This post was written during the trip and posted a week later after arriving home)

20 February 2009

Theodore Ts'o: Aligning filesystems to an SSD s erase block size

I recently purchased a new toy, an Intel X25-M SSD, and when I was setting it up initially, I decided I wanted to make sure the file system was aligned on an erase block boundary. This is generally considered to be a Very Good Thing to do for most SSDs available today, although there's some question about how important this really is for Intel SSDs; more on that in a moment. It turns out this is much more difficult than you might first think: most of Linux's storage stack is not set up well to worry about alignment of partitions and logical volumes. This is surprising, because it's useful for many things other than just SSDs. This kind of alignment is important if you are using any kind of hardware or software RAID, for example, especially RAID 5, because if writes are done on stripe boundaries, it can avoid a read-modify-write overhead. In addition, the hard drive industry is planning on moving to 4096 byte sectors instead of the way-too-small 512 byte sectors at some point in the future.

Linux's default partition geometry of 255 heads and 63 sectors/track means that there are 16065 (512 byte) sectors per cylinder. The initial round of 4k sector disks will emulate 512 byte disks, but if the partitions are not 4k aligned, then the disk will end up doing a read/modify/write on two internal 4k sectors for each singleton 4k file system write, and that would be unfortunate. Vista has already started working around this problem, since it uses a default partitioning geometry of 240 heads and 63 sectors/track. This results in a cylinder boundary which is divisible by 8, and so the partitions (with the exception of the first, which is still misaligned unless you play some additional tricks) are 4k aligned. So this is one place where Vista is ahead of Linux. Unfortunately the default 255 heads and 63 sectors is hard coded in many places in the kernel, in the SCSI stack, and in various partitioning programs; so fixing this will require changes in many places.

However, with SSDs (remember SSDs? This is a blog post about SSDs) you need to align partitions on at least 128k boundaries for maximum efficiency. The best way to do this that I've found is to use 224 (32*7) heads and 56 (8*7) sectors/track. This results in 12544 (or 256*49) sectors/cylinder, so that each cylinder is 49*128k (the arithmetic is checked in the sketch further down). You can do this by starting fdisk with the following options when first partitioning the SSD:
# fdisk -H 224 -S 56 /dev/sdb
The first partition will only be aligned on a 4k boundary, since in order to be compatible with MS-DOS, the first partition starts on track 1 instead of track 0, but I didn't worry too much about that since I tend to use the first partition for /boot, which tends not to get modified much. You can go into expert mode with fdisk and force the partition to begin on a 128k alignment, but many Linux partition tools will complain about potential compatibility problems (which are obsolete warnings, since the systems that would have had trouble booting with such a layout haven't been made in about ten years), but I didn't need to do that, so I didn't worry about it. So I created a 1 gigabyte /boot partition as /dev/sdb1, and allocated the rest of the SSD for use by LVM as /dev/sdb2. And that's where I ran into my next problem. LVM likes to allocate 192k for its header information, and 192k is not a multiple of 128k.
So if you are creating file systems as logical volumes, and you want those volumes to be properly aligned, you have to tell LVM that it should reserve slightly more space for its meta-data, so that the physical extents that it allocates for its logical volumes are properly aligned. Unfortunately, the way this is done is slightly baroque:
# pvcreate --metadatasize 250k /dev/sdb2
Physical volume /dev/sdb2 successfully created
Why 250k and not 256k? I can't tell you; sometimes the LVM tools aren't terribly intuitive. However, you can test to make sure that physical extents start at the proper offset by using:
# pvs /dev/sdb2 -o+pe_start
PV VG Fmt Attr PSize PFree 1st PE
/dev/sdb2 lvm2 73.52G 73.52G 256.00K
If you use a metadata size of 256k, the first PE will be at 320k instead of 256k. There really ought to be a --pe-align option to pvcreate, which would be far more user-friendly, but we have to work with the tools that we have. Maybe in the next version of the LVM support tools. Once you do this, we're almost done. The last thing to do is to create the file system. As it turns out, if you are using ext4, there is a way to tell the file system that it should try to align files so they match up with the RAID stripe width. (These techniques can be used for RAID disks as well.) If your SSD has a 128k erase block size, and you are creating the file system with the default 4k block size, you just have to specify a stripe width when you create the file system, like so:
# mke2fs -t ext4 -E stripe-width=32,resize=500G /dev/ssd/root
(The resize=500G limits the number of blocks reserved for resizing this file system so that the guaranteed size to which the file system can be grown via online resize is 500G. The default is 1000 times the initial file system size, which is often far too big to be reasonable. Realistically, the file system I am creating is going to be used for a desktop device, and I don't foresee needing to resize it beyond 500G, so this saves about 50 megabytes or so. Not a huge deal, but "waste not, want not", as the saying goes.) With e2fsprogs 1.41.4, the journal will be 128k aligned, as will the start of the file system, and with the stripe-width specified, the ext4 allocator will try to align block writes to the stripe width where that makes sense. So this is as good as it gets without kernel changes to make the block and inode allocators more SSD aware, something which I hope to have a chance to look at.
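As a quick sanity check on the numbers above, the arithmetic can be spelled out in the shell, and the stripe width that mke2fs recorded can be read back out of the superblock. A sketch, using the device names from this post:
$ echo $(( 224 * 56 ))                        # sectors per cylinder
12544
$ echo $(( 224 * 56 * 512 / (128 * 1024) ))   # 128k erase blocks per cylinder
49
$ echo $(( 128 * 1024 / 4096 ))               # 4k blocks per erase block, hence stripe-width=32
32
$ dumpe2fs -h /dev/ssd/root | grep -i 'stripe width'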

All of this being said, it's time to revisit the question: is all of this needed for a "smart", better by design, next-generation SSD such as Intel's? Aligning your file system on an erase block boundary is critical on first generation SSDs, but the Intel X25-M is supposed to have smarter algorithms that allow it to reduce the effect of write-amplification. The details are a little bit vague, but presumably there is a mapping table which maps sectors (at some internal sector size; we don't know for sure whether it's 512 bytes or some larger size) to individual erase blocks. If the file system sends a series of 4k writes for file system blocks 10, 12, 13, 32, 33, 34, 35, 64, 65, 66, 67, 96, 97, 98, 99, followed by a barrier operation, a traditional SSD might do read/modify/write on four 128k erase blocks: one covering the blocks 0-31, another for the blocks 32-63, and so on. However, the Intel SSD will simply write a single 128k block that indicates where the latest versions of blocks 10, 12, 13, 32, 33, 34, 35, 64, 65, 66, 67, 96, 97, 98, and 99 can be found. This technique tends to work very well. However, over time, the table will get terribly fragmented, and depending on whether the internal block sector size is 512 or 4k (or something in between), there could be a situation where all but one or two of the internal sectors on the disks have been mapped away to other erase blocks, leading to fragmentation of the erase blocks.

This is not just a theoretical problem; there are reports from the field that this happens relatively easily. For example, see Allyn Malventano's Long-term performance analysis of Intel Mainstream SSDs and Marc Prieur's report from BeHardware.com, which includes an official response from Intel regarding this phenomenon. Laurent Gilson posted on the Linux-Thinkpad mailing list that when he tried using the X25-M to store commit journals for a database, after writing 170% of the capacity of an Intel SSD the small writes caused the write performance to go through the floor. More troubling, Allyn Malventano indicated that if the drive is abused for too long with a mixture of small and large writes, it can get into a state where the performance degradation is permanent, and even a series of large writes apparently does not restore the drive's function; only an ATA SECURITY ERASE command to completely reset the mapping table seems to help.

So, what can be done to prevent this? Allyn's review speculates that aligning writes to erase block boundaries can help. I'm not 100% sure this is true, but without detailed knowledge of what is going on under the covers in Intel's SSD, we won't know for sure. It certainly can't hurt, though, and there is a distinct possibility that the internal sector size is larger than 512 bytes, which means the default partitioning scheme of 255 heads/63 sectors is probably not a good idea. (Even Vista has moved to a 240/63 scheme, which gives you 8k alignment of partitions; I prefer 224/56 partitioning, since the days when BIOSes used C/H/S I/O are long gone.) The Ext3 and Ext4 file systems tend to defer meta-data writes by pinning them until a transaction commit; this definitely helps, and ext4 allows you to configure an erase block boundary, which should also be helpful. Enabling laptop mode will discourage writing to the disk except in large blocks, which probably helps significantly as well.
And avoiding fsync() in applications will also be helpful, since a cache flush operation will force the SSD to write to an erase block even if it isn't completely filled. Beyond that, clearly some experimentation will be needed. My current thinking is to use a standard file system aging workload, and then perform an I/O benchmark to see if there has been any performance degradation. I can then vary various file system tuning parameters and algorithms, and confirm whether or not a heavy fsync workload makes the performance worse.

In the long term, hopefully Intel will release a firmware update which adds support for ATA TRIM/DISCARD commands, which will allow the file system to inform the SSD that various blocks have been deleted and no longer need to be preserved by the SSD. I suspect this will be a big help: if the SSD knows that certain sectors no longer need to be preserved, it can avoid copying them when trying to defragment the SSD. Given how expensive the X25-M SSDs are, I hope that there will be a firmware update to support this, and that Intel won't leave its early adopters high and dry by only offering this functionality in newer models of the SSD. If they were to do that, it would leave many of these early adopters, especially your humble writer (who paid for his SSD out of his own pocket), quite grumpy indeed. Hopefully, though, it won't come to that.

Update: I've since penned a follow-up post, Should Filesystems Be Optimized for SSD's?
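One more practical check worth noting: whether a partition actually starts on an erase block boundary can be read straight out of sysfs, since the start attribute is expressed in 512-byte sectors and a 128k erase block is 256 of them. A sketch, using the /dev/sdb2 partition from above:
$ cat /sys/block/sdb/sdb2/start
$ echo $(( $(cat /sys/block/sdb/sdb2/start) % 256 ))   # 0 means the start is 128k-aligned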


15 December 2008

Miriam Ruiz: How to win online debates

From How to win online debates, by Adam Graham: You won't really influence the direction of what people think on a given issue. The prize is the satisfaction that you won the debate. You beat that guy. You showed him who was right! Congratz! As the author of the article says, quoting the film War Games: "A strange game. The only winning move is not to play."

23 December 2006

Clint Adams: 91 I stepped in this game

Wesley J. Landaker for DPL.

19 March 2006

Clint Adams: This report is flawed, but it sure is fun

91D63469DFdnusinow1243
63DEB0EC31eloy
55A965818Fvela1243
4658510B5Amyon2143
399B7C328Dluk31-2
391880283Canibal2134
370FE53DD9opal4213
322B0920C0lool1342
29788A3F4Cjoeyh
270F932C9Cdoko
258768B1D2sjoerd
23F1BCDB73aurel3213-2
19E02FEF11jordens1243
18AB963370schizo1243
186E74A7D1jdassen(Ks)1243
1868FD549Ftbm3142
186783ED5Efpeters1--2
1791B0D3B7edd-213
16E07F1CF9rousseau321-
16248AEB73rene1243
158E635A5Erafl
14C0143D2Dbubulle4123
13D87C6781krooger(P)4213
13A436AD25jfs(P)
133D08B612msp
131E880A84fjp4213
130F7A8D01nobse
12F1968D1Bdecklin1234
12E7075A54mhatta
12D75F8533joss1342
12BF24424Csrivasta1342
12B8C1FA69sto
127F961564kobold
122A30D729pere4213
1216D970C6eric12--
115E0577F2mpitt
11307D56EDnoel3241
112BE16D01moray1342
10BC7D020Aformorer-1--
10A7D91602apollock4213
10A51A4FDDgcs
10917A225Ejordi
104B729625pvaneynd3123
10497A176Dloic
962F1A57Fpa3aba
954FD2A58glandium1342
94A5D72FErafael
913FEFC40fenio-1--
90AFC7476rra1243
890267086duck31-2
886A118E6ch321-
8801EA932joey1243
87F4E0E11waldi-123
8514B3E7Cflorian21--
841954920fs12--
82A385C57mckinstry21-3
825BFB848rleigh1243
7BC70A6FFpape1---
7B70E403Bari1243
78E2D213Ajochen(Ks)
785FEC17Fkilian
784FB46D6lwall1342
7800969EFsmimram-1--
779CC6586haas
75BFA90ECkohda
752B7487Esesse2341
729499F61sho1342
71E161AFBbarbier12--
6FC05DA69wildfire(P)
6EEB6B4C2avdyk-12-
6EDF008C5blade1243
6E25F2102mejo1342
6D1C41882adeodato(Ks)3142
6D0B433DFross12-3
6B0EBC777piman1233
69D309C3Brobert4213
6882A6C4Bkov
66BBA3C84zugschlus4213
65662C734mvo
6554FB4C6petere-1-2
637155778stratus
62D9ACC8Elars1243
62809E61Ajosem
62252FA1Afrank2143
61CF2D62Amicah
610FA4CD1cjwatson2143
5EE6DC66Ajaldhar2143
5EA59038Esgran4123
5E1EE3FB1md4312
5E0B8B2DEjaybonci
5C9A5B54Esesse(Ps,Gs) 2341
5C4CF8EC3twerner
5C2FEE5CDacid213-
5C09FD35Atille
5C03C56DFrfrancoise---1
5B7CDA2DCxam213-
5A20EBC50cavok4214
5808D0FD0don1342
5797EBFABenrico1243
55230514Asjackman
549A5F855otavio-123
53DC29B41pdm
529982E5Avorlon1243
52763483Bmkoch213-
521DB31C5smr2143
51BF8DE0Fstigge312-
512CADFA5csmall3214
50A0AC927lamont
4F2CF01A8bdale
4F095E5E4mnencia
4E9F2C747frankie
4E9ABFCD2devin2143
4E81E55C1dancer2143
4E38E7ACFhmh(Gs)1243
4E298966Djrv(P)
4DF5CE2B4huggie12-3
4DD982A75speedblue
4C671257Ddamog-1-2
4C4A3823Ekmr4213
4C0B10A5Bdexter
4C02440B8js1342
4BE9F70EAtb1342
4B7D2F063varenet-213
4A3F9E30Eschultmc1243
4A3D7B9BClawrencc2143
4A1EE761Cmadcoder21--
49DE1EEB1he3142
49D928C9Bguillem1---
49B726B71racke
490788E11jsogo2143
4864826C3gotom4321
47244970Bkroeckx2143
45B48FFAEmarga2143
454E672DEisaac1243
44B3A135Cerich1243
44597A593agmartin4213
43FCC2A90amaya1243
43F3E6426agx-1-2
43EF23CD6sanvila1342
432C9C8BDwerner(K)
4204DDF1Baquette
400D8CD16tolimar12--
3FEC23FB2bap34-1
3F972BE03tmancill4213
3F801A743nduboc1---
3EBEDB32Bchrsmrtn4123
3EA291785taggart2314
3E4D47EC1tv(P)
3E19F188Etroyh1244
3DF6807BEsrk4213
3D2A913A1psg(P)
3D097A261chrisb
3C6CEA0C9adconrad1243
3C20DF273ondrej
3B5444815ballombe1342
3B1DF9A57cate2143
3AFA44BDDweasel(Ps,Gs) 1342
3AA6541EEbrlink1442
3A824B93Fasac3144
3A71C1E00turbo
3A2D7D292seb128
39ED101BFmbanck3132
3969457F0joostvb2143
389BF7E2Bkobras1--2
386946D69mooch12-3
374886B63nathans
36F222F1Fedelhard
36D67F790foka
360B6B958geiger
3607559E6mako
35C33C1B8dirson
35921B5D8ajmitch
34C1A5BE5sjq
3431B38BApxt312-
33E7B4B73lmamane2143
327572C47ucko1342
320021490schepler1342
31DEB8EAEgoedson
31BF2305Akrala(Gs)3142
319A42D19dannf21-4
3174FEE35wookey3124
3124B26F3mfurr21-3
30A327652tschmidt312-
3090DD8D5ingo3123
30813569Fjeroen1141
30644FAB7bas1332
30123F2F2gareuselesinge1243
300530C24bam1234
2FD6645ABrmurray-1-2
2F95C2F6Dchrism(P)
2F9138496graham(Gs)3142
2F5D65169jblache1332
2F28CD102absurd
2F2597E04samu
2F0B27113patrick
2EFA6B9D5hamish(P)3142
2EE0A35C7risko4213
2E91CD250daigo
2D688E0A7qjb-21-
2D4BE1450prudhomm
2D2A6B810joussen
2CFD42F26dilinger
2CEE44978dburrows1243
2CD4C0D9Dskx4213
2BFB880A3zeevon
2BD8B050Droland3214
2B74952A9alee
2B4D6DE13paul
2B345BDD3neilm1243
2B28C5995bod4213
2B0FA4F49schoepf
2B0DDAF42awoodland
2A8061F32osamu4213
2A21AD4F9tviehmann1342
299E81DA0kaplan
2964199E2fabbe3142
28DBFEC2Fpelle
28B8D7663ametzler1342
28B143975martignlo
288C7C1F793sam2134
283E5110Fovek
2817A996Atfheen
2807CAC25abi4123
2798DD95Cpiefel
278D621B4uwe-1--
26FF0ABF2rcw2143
26E8169D2hertzog3124
26C0084FCchrisvdb
26B79D401filippo-1--
267756F5Dfrn2341
25E2EB5B4nveber123-
25C6153ADbroonie1243
25B713DF0djpig1243
250ECFB98ccontavalli(Gs)
250064181paulvt
24F71955Adajobe21-3
24E2ECA5Ajmm4213
2496A1827srittau
23E8DCCC0maxx1342
23D97C149mstone(P)2143
22DB65596dz321-
229F19BD1meskes
21F41B907marillat1---
21EB2DE66boll
21557BC10kraai1342
2144843F5lolando1243
210656584voc
20D7CA701steinm
205410E97horms
1FC992520tpo-14-
1FB0DFE9Bgildor
1FAEEB4A9neil1342
1F7E8BC63cedric21--
1F2C423BCzack1332
1F0199162kreckel4214
1ECA94FA8ishikawa2143
1EAAC62DFcyb---1
1EA2D2C41malattia-312
1E77AC835bcwhite(P)
1E66C9BB0tach
1E145F334mquinson2143
1E0BA04C1treinen321-
1DFE80FB2tali
1DE054F69azekulic(P)
1DC814B09jfs
1CB467E27kalfa
1C9132DDByoush-21-
1C87FFC2Fstevenk-1--
1C2CE8099knok321-
1BED37FD2henning(Ks)1342
1BA0A7EB5treacy(P)
1B7D86E0Fcmb4213
1B62849B3smarenka2143
1B3C281F4alain2143
1B25A5CF1omote
1ABA0E8B2sasa
1AB474598baruch2143
1AB2A91F5troup1--2
1A827CEDEafayolle(Gs)
1A6C805B9zorglub2134
1A674A359maehara
1A57D8BF7drew2143
1A269D927sharky
1A1696D2Blfousse1232
19BF42B07zinoviev--12
19057B5D3vanicat2143
18E950E00mechanix
18BB527AFgwolf1132
18A1D9A1Fjgoerzen
18807529Bultrotter2134
1872EB4E5rcardenes
185EE3E0Eangdraug12-3
1835EB2FFbossekr
180C83E8Eigloo1243
17B8357E5andreas212-
17B80220Dsjr(Gs)1342
17796A60Bsfllaw1342
175CB1AD2toni1---
1746C51F4klindsay
172D03CB1kmuto4231
171473F66ttroxell13-4
16E76D81Dseanius1243
16C63746Dhector
16C5F196Bmalex4213
16A9F3C38rkrishnan
168021CE4ron---1
166F24521pyro-123
1631B4819anfra
162EEAD8Bfalk1342
161326D40jamessan13-4
1609CD2C0berin--1-
15D8CDA7Bguus1243
15D8C12EArganesan
15D64F870zobel
159EF5DBCbs
157F045DCcamm
1564EE4B6hazelsct
15623FC45moronito4213
1551BE447torsten
154AD21B5warmenhoven
153BBA490sjg
1532005DAseamus
150973B91pjb2143
14F83C751kmccarty12-3
14DB97694khkim
14CD6E3D2wjl4213
14A8854E6weinholt1243
14950EAA6ajkessel
14298C761robertc(Ks)
142955682kamop
13FD29468bengen-213
13FD25C84roktas3142
13B047084madhack
139CCF0C7tagoh3142
139A8CCE2eugen31-2
138015E7Ethb1234
136B861C1bab2143
133FC40A4mennucc13214
12C0FCD1Awdg4312
12B05B73Arjs
1258D8781grisu31-2
1206C5AFDchewie-1-1
1200D1596joy2143
11C74E0B7alfs
119D03486francois4123
118EA3457rvr
1176015EDevo
116BD77C6alfie
112AA1DB8jh
1128287E8daf
109FC015Cgodisch
106468DEBfog--12
105792F34rla-21-
1028AF63Cforcer3142
1004DA6B4bg66
0.zufus-1--
0.zoso-123
0.ykomatsu-123
0.xtifr1243
0.xavier-312
0.wouter2143
0.will-132
0.warp1342
0.voss1342
0.vlm2314
0.vleeuwen4312
0.vince2134
0.ukai4123
0.tytso-12-
0.tjrc14213
0.tats-1-2
0.tao1--2
0.stone2134
0.stevegr1243
0.smig-1-2
0.siggi1-44
0.shaul4213
0.sharpone1243
0.sfrost1342
0.seb-21-
0.salve4213
0.ruoso1243
0.rover--12
0.rmayr-213
0.riku4123
0.rdonald12-3
0.radu-1--
0.pzn112-
0.pronovic1243
0.profeta321-
0.portnoy12-3
0.porridge1342
0.pmhahn4123
0.pmachard1--2
0.pkern3124
0.pik1--2
0.phil4213
0.pfrauenf4213
0.pfaffben2143
0.p21243
0.ossk1243
0.oohara1234
0.ohura-213
0.nwp1342
0.noshiro4312
0.noodles2134
0.nomeata2143
0.noahm3124
0.nils3132
0.nico-213
0.ms3124
0.mpalmer2143
0.moth3241
0.mlang2134
0.mjr1342
0.mjg591342
0.merker2--1
0.mbuck2143
0.mbrubeck1243
0.madduck4123
0.mace-1-2
0.luther1243
0.luigi4213
0.lss-112
0.lightsey1--2
0.ley-1-2
0.ldrolez--1-
0.lange4124
0.kirk1342
0.killer1243
0.kelbert-214
0.juanma2134
0.jtarrio1342
0.jonas4312
0.joerg1342
0.jmintha-21-
0.jimmy1243
0.jerome21--
0.jaqque1342
0.jaq4123
0.jamuraa4123
0.iwj1243
0.ivan2341
0.hsteoh3142
0.hilliard4123
0.helen1243
0.hecker3142
0.hartmans1342
0.guterm312-
0.gniibe4213
0.glaweh4213
0.gemorin4213
0.gaudenz3142
0.fw2134
0.fmw12-3
0.evan1--2
0.ender4213
0.elonen4123
0.eevans13-4
0.ean-1--
0.dwhedon4213
0.duncf2133
0.ds1342
0.dparsons1342
0.dlehn1243
0.dfrey-123
0.deek1--2
0.davidw4132
0.davidc1342
0.dave4113
0.daenzer1243
0.cupis1---
0.cts-213
0.cph4312
0.cmc2143
0.clebars2143
0.chaton-21-
0.cgb-12-
0.calvin-1-2
0.branden1342
0.brad4213
0.bnelson1342
0.blarson1342
0.benj3132
0.bayle-213
0.baran1342
0.az2134
0.awm3124
0.atterer4132
0.andressh1---
0.amu1--2
0.akumria-312
0.ajt1144
0.ajk1342
0.agi2143
0.adric2143
0.adejong1243
0.adamm12--
0.aba1143