Search Results: "segre"

21 June 2017

Vincent Bernat: IPv4 route lookup on Linux

TL;DR: With its implementation of IPv4 routing tables using LPC-tries, Linux offers good lookup performance (50 ns for a full view) and low memory usage (64 MiB for a full view).
During the lifetime of an IPv4 datagram inside the Linux kernel, one important step is the route lookup for the destination address through the fib_lookup() function. From essential information about the datagram (source and destination IP addresses, interfaces, firewall mark, …), this function should quickly provide a decision: deliver locally, forward to a supplied next hop, or drop the datagram. Since 2.6.39, Linux stores routes in a compressed prefix tree (commit 3630b7c050d9). In the past, a route cache was maintained, but it was removed [1] in Linux 3.6.

Route lookup in a trie Looking up a route in a routing table means finding the most specific prefix matching the requested destination. Let's assume the following routing table:
$ ip route show scope global table 100
default via 203.0.113.5 dev out2
192.0.2.0/25
        nexthop via 203.0.113.7  dev out3 weight 1
        nexthop via 203.0.113.9  dev out4 weight 1
192.0.2.47 via 203.0.113.3 dev out1
192.0.2.48 via 203.0.113.3 dev out1
192.0.2.49 via 203.0.113.3 dev out1
192.0.2.50 via 203.0.113.3 dev out1
Here are some examples of lookups and the associated results:
Destination IP Next hop
192.0.2.49 203.0.113.3 via out1
192.0.2.50 203.0.113.3 via out1
192.0.2.51 203.0.113.7 via out3 or 203.0.113.9 via out4 (ECMP)
192.0.2.200 203.0.113.5 via out2
A common structure for route lookup is the trie, a tree structure where the prefix of each node is an extension of its parent's prefix.

Lookup with a simple trie The following trie encodes the previous routing table:
[Figure: simple routing trie]
For each node, the prefix is known by its path from the root node and the prefix length is the current depth. A lookup in such a trie is quite simple: at each step, fetch the nth bit of the IP address, where n is the current depth. If it is 0, continue with the first child; otherwise, continue with the second. If a child is missing, backtrack until a routing entry is found. For example, when looking up 192.0.2.50, we find the result in the corresponding leaf (at depth 32). However, for 192.0.2.51, we reach 192.0.2.50/31 but there is no second child, so we backtrack until the 192.0.2.0/25 routing entry. Adding and removing routes is quite easy. From a performance point of view, the lookup is done in constant time relative to the number of routes (since the maximum depth is capped at 32). Quagga is an example of routing software still using this simple approach.
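To make the algorithm concrete, here is a minimal sketch of such a lookup in C. The node and route types are hypothetical (not the kernel structures), and instead of backtracking after a miss it simply remembers the most specific entry seen while descending, which is equivalent (this is also what Linux does, as noted in footnote 3).

#include <stdint.h>
#include <stddef.h>

struct route;                       /* opaque routing entry */

struct node {
    struct node *child[2];          /* child[bit] for the next input bit */
    const struct route *route;      /* entry attached at this prefix, if any */
};

static const struct route *trie_lookup(const struct node *root, uint32_t dst)
{
    const struct route *best = NULL;
    const struct node *n = root;
    int depth = 0;

    while (n != NULL) {
        if (n->route)
            best = n->route;        /* most specific match so far */
        if (depth == 32)
            break;                  /* all 32 bits consumed */
        n = n->child[(dst >> (31 - depth)) & 1];
        depth++;
    }
    return best;                    /* NULL when no prefix matches at all */
}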

Lookup with a path-compressed trie In the previous example, most nodes only have one child, which leads to many unneeded bitwise comparisons and wastes memory on many nodes. To overcome this problem, we can use path compression: each node with only one child is removed (except if it also contains a routing entry). Each remaining node gets a new property telling how many input bits should be skipped. Such a trie is also known as a Patricia trie or a radix tree. Here is the path-compressed version of the previous trie:
[Figure: Patricia trie]
Since some bits have been ignored, on a match a final check is executed to ensure all bits from the found entry match the input IP address. If not, we must act as if the entry wasn't found (and backtrack to find a matching prefix). The following figure shows two IP addresses matching the same leaf:
[Figure: lookup in a Patricia trie]
The reduction of the average depth of the tree compensates for the need to handle these false positives. Insertion and deletion of a routing entry remain easy enough. Many routing systems use Patricia trees.
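The extra verification step can be captured in a few lines; this is a hedged sketch with hypothetical values, not the kernel code. If the check fails, the lookup falls back to the closest less-specific candidate collected on the way down.

#include <stdbool.h>
#include <stdint.h>

/* Does the stored prefix (prefix/plen) cover the destination address?
 * Needed because the skipped bits were never compared during the descent. */
static bool prefix_covers(uint32_t prefix, unsigned int plen, uint32_t dst)
{
    uint32_t mask = plen ? ~UINT32_C(0) << (32 - plen) : 0;

    return (dst & mask) == (prefix & mask);
}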

Lookup with a level-compressed trie In addition to path compression, level compression [2] detects parts of the trie that are densely populated and replaces them with a single node and an associated vector of 2^k children. Such a node handles k input bits instead of just one. For example, here is a level-compressed version of our previous trie:
[Figure: level-compressed trie]
Such a trie is called an LC-trie or LPC-trie and offers higher lookup performance than a radix tree. A heuristic is used to decide how many bits a node should handle. On Linux, if the ratio of non-empty children to all children would be above 50% when the node handles an additional bit, the node gets this additional bit. On the other hand, if the current ratio is below 25%, the node loses the responsibility of one bit. These values are not tunable. Insertion and deletion become more complex, but lookup times are improved.
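As a rough illustration of these thresholds (deliberately simplified; the real logic lives in net/ipv4/fib_trie.c and takes more factors into account, such as full children), the decision could be sketched like this:

/* Simplified sketch of the resize heuristic: a node currently handling
 * `bits` bits has (1 << bits) child slots. The thresholds mirror the
 * 50%/25% ratios described in the text, not the exact kernel computation. */
enum resize_action { NODE_GROW, NODE_SHRINK, NODE_KEEP };

static enum resize_action resize_decision(unsigned int bits,
                                          unsigned long nonempty_if_grown,
                                          unsigned long nonempty)
{
    unsigned long slots = 1UL << bits;

    /* grow: more than 50% of the doubled child vector would be non-empty */
    if (nonempty_if_grown * 2 > (slots << 1))
        return NODE_GROW;

    /* shrink: fewer than 25% of the current slots are non-empty */
    if (nonempty * 4 < slots)
        return NODE_SHRINK;

    return NODE_KEEP;
}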

Implementation in Linux The implementation for IPv4 in Linux has existed since 2.6.13 (commit 19baf839ff4a) and has been enabled by default since 2.6.39 (commit 3630b7c050d9). Here is the representation of our example routing table in memory [3]:
[Figure: memory representation of a trie]
There are several structures involved. The trie can be retrieved through /proc/net/fib_trie:
$ cat /proc/net/fib_trie
Id 100:
  +-- 0.0.0.0/0 2 0 2
     |-- 0.0.0.0
        /0 universe UNICAST
     +-- 192.0.2.0/26 2 0 1
        |-- 192.0.2.0
           /25 universe UNICAST
        |-- 192.0.2.47
           /32 universe UNICAST
        +-- 192.0.2.48/30 2 0 1
           |-- 192.0.2.48
              /32 universe UNICAST
           |-- 192.0.2.49
              /32 universe UNICAST
           |-- 192.0.2.50
              /32 universe UNICAST
[...]
For internal nodes, the numbers after the prefix are:
  1. the number of bits handled by the node,
  2. the number of full children (they only handle one bit),
  3. the number of empty children.
Moreover, if the kernel is compiled with CONFIG_IP_FIB_TRIE_STATS, some interesting statistics are available in /proc/net/fib_triestat [4]:
$ cat /proc/net/fib_triestat
Basic info: size of leaf: 48 bytes, size of tnode: 40 bytes.
Id 100:
        Aver depth:     2.33
        Max depth:      3
        Leaves:         6
        Prefixes:       6
        Internal nodes: 3
          2: 3
        Pointers: 12
Null ptrs: 4
Total size: 1  kB
[...]
When a routing table is very dense, a node can handle many bits. For example, a densely populated routing table with 1 million entries packed in a /12 can have one internal node handling 20 bits. In this case, route lookup is essentially reduced to a lookup in a vector. The following graph shows the number of internal nodes used relative to the number of routes for different scenarios (routes extracted from an Internet full view, /32 routes spread over 4 different subnets with various densities). When routes are densely packed, the number of internal nodes is quite limited.
[Figure: internal nodes and null pointers]

Performance So how fast is a route lookup? The maximum depth stays low (about 6 for a full view), so a lookup should be quite fast. With the help of a small kernel module, we can accurately benchmark [5] the fib_lookup() function:
[Figure: maximum depth and lookup time]
The lookup time is loosely tied to the maximum depth. When the routing table is densely populated, the maximum depth is low and lookup times are fast. When forwarding at 10 Gbps, the time budget for a packet is about 50 ns. Since this is also the time needed for the route lookup alone in some cases, we wouldn't be able to forward at line rate with only one core. Nonetheless, the results are pretty good and they are expected to scale linearly with the number of cores. The measurements are done with a Linux kernel 4.11 from Debian unstable. I have gathered performance metrics across kernel versions in "Performance progression of IPv4 route lookup on Linux". Another interesting figure is the time it takes to insert all those routes into the kernel. Linux is also quite efficient in this area, since you can insert 2 million routes in less than 10 seconds:
[Figure: insertion time]

Memory usage Memory usage is available directly in /proc/net/fib_triestat. The statistic provided doesn't account for the fib_info structures, but you should only have a handful of them (one for each possible next hop). As you can see on the graph below, memory use is linear with the number of routes inserted, whatever the shape of the routes:
[Figure: memory usage]
The results are quite good: with only 256 MiB, about 2 million routes can be stored!

Routing rules Unless configured without CONFIG_IP_MULTIPLE_TABLES, Linux supports several routing tables and has a system of configurable rules to select the table to use. These rules can be configured with ip rule. By default, there are three of them:
$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
Linux will first look for a match in the local table. If it doesn't find one, it will look in the main table and, as a last resort, in the default table.

Builtin tables The local table contains routes for local delivery:
$ ip route show table local
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.117.0 dev eno1 proto kernel scope link src 192.168.117.55
local 192.168.117.55 dev eno1 proto kernel scope host src 192.168.117.55
broadcast 192.168.117.63 dev eno1 proto kernel scope link src 192.168.117.55
This table is populated automatically by the kernel when addresses are configured. Let's look at the last three lines. When the IP address 192.168.117.55 was configured on the eno1 interface, the kernel automatically added the appropriate routes:
  • a route for 192.168.117.55 for local unicast delivery to the IP address,
  • a route for 192.168.117.63 for broadcast delivery to the broadcast address,
  • a route for 192.168.117.0 for broadcast delivery to the network address.
When 127.0.0.1 was configured on the loopback interface, the same kind of routes were added to the local table. However, a loopback address receives special treatment and the kernel also adds the whole subnet to the local table. As a result, you can ping any IP in 127.0.0.0/8:
$ ping -c1 127.42.42.42
PING 127.42.42.42 (127.42.42.42) 56(84) bytes of data.
64 bytes from 127.42.42.42: icmp_seq=1 ttl=64 time=0.039 ms
--- 127.42.42.42 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
The main table usually contains all the other routes:
$ ip route show table main
default via 192.168.117.1 dev eno1 proto static metric 100
192.168.117.0/26 dev eno1 proto kernel scope link src 192.168.117.55 metric 100
The default route has been configured by a DHCP daemon. The connected route (scope link) has been automatically added by the kernel (proto kernel) when configuring an IP address on the eno1 interface. The default table is empty and has little use. It has been kept around since the current incarnation of advanced routing was introduced in Linux 2.1.68, after a first attempt using "classes" in Linux 2.1.15 [6].

Performance Since Linux 4.1 (commit 0ddcf43d5d4a), when the set of rules is left unmodified, the main and local tables are merged and the lookup is done with this single table (and with the default table if it is not empty). Moreover, since Linux 3.0 (commit f4530fa574df), without specific rules, there is no performance hit when enabling support for multiple routing tables. However, as soon as you add new rules, some CPU cycles will be spent for each datagram to evaluate them. Here are a couple of graphs demonstrating the impact of routing rules on lookup times:
[Figure: routing rules impact on performance]
For some reason, the relation is linear when the number of rules is between 1 and 100, but the slope increases noticeably past this threshold. The second graph highlights the negative impact of the first rule (about 30 ns). A common use of rules is to create virtual routers: interfaces are segregated into domains and when a datagram enters through an interface from domain A, it should use routing table A:
# ip rule add iif vlan457 table 10
# ip rule add iif vlan457 blackhole
# ip rule add iif vlan458 table 20
# ip rule add iif vlan458 blackhole
The blackhole rules may be removed if you are sure there is a default route in each routing table. For example, we can add a blackhole default route with a high metric so that it does not override a regular default route:
# ip route add blackhole default metric 9999 table 10
# ip route add blackhole default metric 9999 table 20
# ip rule add iif vlan457 table 10
# ip rule add iif vlan458 table 20
To reduce the impact on performance when many interface-specific rules are used, interfaces can be attached to VRF instances and a single rule can be used to select the appropriate table:
# ip link add vrf-A type vrf table 10
# ip link set dev vrf-A up
# ip link add vrf-B type vrf table 20
# ip link set dev vrf-B up
# ip link set dev vlan457 master vrf-A
# ip link set dev vlan458 master vrf-B
# ip rule show
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32766:  from all lookup main
32767:  from all lookup default
The special l3mdev-table rule was automatically added when configuring the first VRF interface. This rule will select the routing table associated with the VRF owning the input (or output) interface. VRF was introduced in Linux 4.3 (commit 193125dbd8eb), its performance was greatly enhanced in Linux 4.8 (commit 7889681f4a6c), and the special routing rule was also introduced in Linux 4.8 (commit 96c63fa7393d, commit 1aa6c4f6b8cd). You can find more details about it in the kernel documentation.

Conclusion The takeaways from this article are:
  • route lookup times hardly increase with the number of routes,
  • densely packed /32 routes lead to amazingly fast route lookups,
  • memory use is low (128 MiB per million routes),
  • no optimization is done on routing rules.

  1. The routing cache was subject to reasonably easy-to-launch denial-of-service attacks. It was also believed to be inefficient for high-volume sites like Google, but I have first-hand experience that this was not the case for moderately high-volume sites.
  2. "IP-address lookup using LC-tries", IEEE Journal on Selected Areas in Communications, 17(6):1083-1092, June 1999.
  3. For internal nodes, the key_vector structure is embedded into a tnode structure. This structure contains information rarely used during lookup, notably the reference to the parent that is usually not needed for backtracking as Linux keeps the nearest candidate in a variable.
  4. One leaf can contain several routes (struct fib_alias is a list). The number of prefixes can therefore be greater than the number of leaves. The system also keeps statistics about the distribution of the internal nodes relative to the number of bits they handle. In our example, all three internal nodes handle 2 bits.
  5. The measurements are done in a virtual machine with one vCPU. The host is an Intel Core i5-4670K running at 3.7 GHz during the experiment (CPU governor was set to performance). The benchmark is single-threaded. It runs a warm-up phase, then executes about 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC.
  6. Fun fact: the documentation of this first attempt at more flexible routing is still available in today's kernel tree and explains the usage of the "default class".

13 April 2017

Antoine Beaupré: New approaches to network fast paths

With the speed of network hardware now reaching 100 Gbps and distributed denial-of-service (DDoS) attacks going into the Tbps range, Linux kernel developers are scrambling to optimize key network paths in the kernel to keep up. Many efforts are actually geared toward getting traffic out of the costly Linux TCP stack. We have already covered the XDP (eXpress Data Path) patch set, but two new ideas surfaced during the Netconf and Netdev conferences held in Toronto and Montreal in early April 2017. One is a patch set called af_packet, which aims at extracting raw packets from the kernel as fast as possible; the other is the idea of implementing in-kernel layer-7 proxying. There are also user-space network stacks like Netmap, DPDK, or Snabb (which we previously covered). This article aims to clarify what all those components do and to provide a short status update for the tools we have already covered. We will focus on in-kernel solutions for now; user-space tools have a fundamental limitation: if they need to re-inject packets onto the network, they must again pay the expensive cost of crossing the kernel barrier, which effectively bounds their performance. We will start from the lowest part of the stack, the af_packet patch set, and work our way up to layer-7 in-kernel proxying.

af_packet v4 John Fastabend presented a new version of a patch set that was first published in January regarding the af_packet protocol family, which is currently used by tcpdump to extract packets from network interfaces. The goal of this change is to allow zero-copy transfers between user-space applications and the NIC (network interface card) transmit and receive ring buffers. Such optimizations are useful for telecommunications companies, which may use it for deep packet inspection or running exotic protocols in user space. Another use case is running a high-performance intrusion detection system that needs to watch large traffic streams in realtime to catch certain types of attacks. Fastabend presented his work during the Netdev network-performance workshop, but also brought the patch set up for discussion during Netconf. There, he said he could achieve line-rate extraction (and injection) of packets, with packet rates as high as 30Mpps. This performance gain is possible because user-space pages are directly DMA-mapped to the NIC, which is also a security concern. The other downside of this approach is that a complete pair of ring buffers needs to be dedicated for this purpose; whereas before packets were copied to user space, now they are memory-mapped, so the user-space side needs to process those packets quickly otherwise they are simply dropped. Furthermore, it's an "all or nothing" approach; while NIC-level classifiers could be used to steer part of the traffic to a specific queue, once traffic hits that queue, it is only accessible through the af_packet interface and not the rest of the regular stack. If done correctly, however, this could actually improve the way user-space stacks access those packets, providing projects like DPDK a safer way to share pages with the NIC, because it is well defined and kernel-controlled. According to Jesper Dangaard Brouer (during review of this article):
This proposal will be a safer way to share raw packet data between user space and kernel space than what DPDK is doing, [by providing] a cleaner separation as we keep driver code in the kernel where it belongs.
During the Netdev network-performance workshop, Fastabend asked if there was a better data structure to use for such a purpose. The goal here is to provide a consistent interface to user space regardless of the driver or hardware used to extract packets from the wire. af_packet currently defines its own packet format that abstracts away the NIC-specific details, but there are other possible formats. For example, someone in the audience proposed the virtio packet format. Alexei Starovoitov rejected this idea because af_packet is a kernel-specific facility while virtio has its own separate specification with its own requirements. The next step for af_packet is the posting of the new "v4" patch set, although Miller warned that this wouldn't get merged until proper XDP support lands in the Intel drivers. The concern, of course, is that the kernel would have multiple incomplete bypass solutions available at once. Hopefully, Fastabend will present the (by then) merged patch set at the next Netdev conference in November.

XDP updates Higher up in the networking stack sits XDP. The af_packet feature differs from XDP in that it does not perform any sort of analysis or mangling of packets; its objective is purely to get the data into and out of the kernel as fast as possible, completely bypassing the regular kernel networking stack. XDP also sits before the networking stack except that, according to Brouer, it is "focused on cooperating with the existing network stack infrastructure, and on use-cases where the packet doesn't necessarily need to leave kernel space (like routing and bridging, or skipping complex code-paths)." XDP has evolved quite a bit since we last covered it in LWN. It seems that most of the controversy surrounding the introduction of XDP in the Linux kernel has died down in public discussions, under the leadership of David Miller, who heralded XDP as the right solution for a long-term architecture in the kernel. He presented XDP as a fast, flexible, and safe solution. Indeed, one of the controversies surrounding XDP was the question of the inherent security challenges with introducing user-provided programs directly into the Linux kernel to mangle packets at such a low level. Miller argued that whatever protections are expected for user-space programs also apply to XDP programs, comparing the virtual memory protections to the eBPF (extended BPF) verifier applied to XDP programs. Those programs are actually eBPF that have an interesting set of restrictions:
  • they have a limited size
  • they cannot jump backward (and thus cannot loop), so they execute in predictable time
  • they do only static allocation, so they are also limited in memory
XDP is not a one-size-fits-all solution: netfilter, the TC traffic shaper, and other normal Linux utilities still have their place. There is, however, a clear use case for a solution like XDP in the kernel. For example, Facebook and Cloudflare have both started testing XDP and, in Facebook's case, deploying XDP in production. Martin Kafai Lau, from Facebook, presented the tool set the company is using to construct a DDoS-resilience solution and a level-4 load balancer (L4LB), which got a ten-times performance improvement over the previous IPVS-based solution. Facebook rolled out its own user-space solution called "Droplet" to detect hostile traffic and deploy blocking rules in the form of eBPF programs loaded in XDP. Lau demonstrated the way Facebook deploys a three-part chained eBPF program: the first part allows debugging and dumping of packets, the second is Droplet itself, which drops undesirable traffic, and the last segment is the load balancer, which mangles the packets to tweak their destination according to internal rules. Droplet can drop DDoS attacks at line rate while keeping the architecture flexible, which were two key design requirements. Gilberto Bertin, from Cloudflare, presented a similar approach: Cloudflare has a tool that processes sFlow data generated from iptables in order to generate cBPF (classic BPF) mitigation rules that are then deployed on edge routers. Those rules are created with a tool called bpfgen, part of Cloudflare's BSD-licensed bpftools suite. For example, it could create a cBPF bytecode blob that would match DNS queries to any example.com domain with something like:
    bpfgen dns *.example.com
Originally, Cloudflare would deploy those rules to plain iptables firewalls with the xt_bpf module, but this led to performance issues. It then deployed a proprietary user-space solution based on Solarflare hardware, but this has the performance limitations of user-space applications: getting packets back onto the wire involves the cost of re-injecting packets back into the kernel. This is why Cloudflare is experimenting with XDP, which was partly developed in response to the company's problems, to deploy those BPF programs. A concern that Bertin identified was the lack of visibility into dropped packets. Cloudflare currently samples some of the dropped traffic to analyze attacks; this is not currently possible with XDP unless you pass the packets down the stack, which is expensive. Miller agreed that the lack of monitoring for XDP programs is a large issue that needs to be resolved, and suggested creating a way to mark packets for extraction to allow analysis. Cloudflare is currently in a testing phase with XDP and it is unclear if its whole XDP tool chain will be publicly available. While those two companies are starting to use XDP as-is, there is more work needed to complete the XDP project. As mentioned above and in our previous coverage, massive statistics extraction is still limited in the Linux kernel and introspection is difficult. Furthermore, while the existing actions (XDP_DROP and XDP_TX, see the documentation for more information) are well implemented and used, another action may be introduced, called XDP_REDIRECT, which would allow redirecting packets to different network interfaces. Such an action could also be used to accelerate bridges as packets could be "switched" based on the MAC address table. XDP also requires network driver support, which is currently limited. For example, the Intel drivers still do not support XDP, although that should come pretty soon. Miller, in his Netdev keynote, focused on XDP and presented it as the standard solution that is safe, fast, and usable. He identified the next steps of XDP development to be the addition of debugging mechanisms, better sampling tools for statistics and analysis, and user-space consistency. Miller foresees a future for XDP similar to the popularization of the Arduino chips: a simple set of tools that anyone, not just developers, can use. He gave the example of an Arduino tutorial that he followed where he could just look up a part number and get easy-to-use instructions on how to program it. Similar components should be available for XDP. For this purpose, the conference saw the creation of a new mailing list called xdp-newbies where people can learn how to create XDP build environments and how to write XDP programs.
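To give a feel for what such a program looks like, here is a minimal, hedged example in C using the XDP_DROP action mentioned above: it discards every packet on the interface it is attached to. It would typically be compiled with clang -target bpf and attached with iproute2 or a libbpf-based loader; section-name conventions vary between loaders.

#include <linux/bpf.h>

#ifndef SEC
#define SEC(name) __attribute__((section(name), used))
#endif

/* Drop everything: the simplest possible "mitigation" and a common first
 * test when bringing up an XDP build environment. */
SEC("xdp")
int xdp_drop_all(struct xdp_md *ctx)
{
    return XDP_DROP;
}

char _license[] SEC("license") = "GPL";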

In-kernel layer-7 proxying The third approach that struck me as innovative is the idea of doing layer-7 (application) proxying directly in the kernel. This comes from the idea that, traditionally, we build firewalls to segregate traffic and apply controls, but as most services move to HTTP, those policies become ineffective. Thomas Graf presented this idea during Netconf using a Star Wars allegory: what if the Death Star were a server with an API? You would have endpoints like /dock or /comms that would allow you to dock a ship or communicate with the Death Star. Those API endpoints should obviously be public, but then there is this /exhaust-port endpoint that should never be publicly available. In order for a firewall to protect such a system, it must be able to inspect traffic at a higher level than the traditional address-port pairs. Graf presented a design where the kernel would create an in-kernel socket that would negotiate TCP connections on behalf of user space and then be able to apply arbitrary eBPF rules in the kernel.
[Figure: Graf's design of in-kernel proxying]
In this scenario, instead of doing the traditional transfer from Netfilter's TPROXY to user space, the kernel directly decapsulates the HTTP traffic and passes it to BPF rules that can make decisions without doing expensive context switches or memory copies in the case of simply wanting to refuse traffic (e.g. issue an HTTP 403 error). This, of course, requires the inclusion of kTLS to process HTTPS connections. HTTP/2 support may also prove problematic, as it multiplexes connections and is harder to decapsulate. This design was described as a "pure pre-accept() hook". Starovoitov also compared the design to the kernel connection multiplexer (KCM). Tom Herbert, KCM's author, agreed that it could be extended to support this, but it would require some extensions in user space to provide an interface between regular socket-based applications and the KCM layer. In any case, if the application does TLS (and lots of them do), kTLS gets tricky because it breaks the end-to-end nature of TLS, in effect becoming a man in the middle between the client and the application. Eric Dumazet argued that HA-Proxy already does things like this: it uses splice() to avoid copying too much data around, but it still does a context switch to hand over processing to user space, something that could be fixed in the general case. Another similar project that was presented at Netdev is the Tempesta firewall and reverse-proxy. The speaker, Alex Krizhanovsky, explained that the Tempesta developers took one person-month to port the mbed TLS stack to the Linux kernel to allow an in-kernel TLS handshake. Tempesta also implements rate limiting, cookies, and JavaScript challenges to mitigate DDoS attacks. The argument behind the project is that "it's easier to move TLS to the kernel than it is to move the TCP/IP stack to user space". Graf explained that he is familiar with Krizhanovsky's work and he is hoping to collaborate. In effect, the design Graf is working on would serve as a foundation for Krizhanovsky's in-kernel HTTP server (kHTTP). In a private email, Graf explained that:
The main differences in the implementation are currently that we foresee to use BPF for protocol parsing to avoid having to implement every single application protocol natively in the kernel. Tempesta likely sees this less of an issue as they are probably only targeting HTTP/1.1 and HTTP/2 and to some [extent] JavaScript.
Neither project is really ready for production yet. There didn't seem to be any significant pushback from key network developers against the idea, which surprised some people, so it is likely we will see more and more layer-7 intelligence move into the kernel sooner rather than later.

Conclusion All of this work aims at replacing a rag-tag bunch of proprietary solutions that recently came up to bypass the Linux kernel TCP/IP stack and improve performance for firewalls, proxies, and other key edge network elements. The idea is that, unless the kernel improves its performance, or at least provides a way to bypass its more complex code paths, people will work around it. With this set of solutions in place, engineers will now be able to use standard APIs to hook high-performance systems into the Linux kernel.
The author would like to thank the Netdev and Netconf organizers for travel assistance, Thomas Graf for a review of the in-kernel proxying section of this article, and Jesper Dangaard Brouer for review of the af_packet and XDP sections. Note: this article first appeared in the Linux Weekly News.

3 October 2016

Kees Cook: security things in Linux v4.7

Previously: v4.6. Onward to security things I found interesting in Linux v4.7: KASLR text base offset for MIPS Matt Redfearn added text base address KASLR to MIPS, similar to what's available on x86 and arm64. As done with x86, MIPS attempts to gather entropy from various build-time, run-time, and CPU locations in an effort to find reasonable sources during early-boot. MIPS doesn't yet have anything as strong as x86's RDRAND (though most have an instruction counter like x86's RDTSC), but it does have the benefit of being able to use Device Tree (i.e. the /chosen/kaslr-seed property) like arm64 does. By my understanding, even without Device Tree, MIPS KASLR entropy should be as strong as pre-RDRAND x86 entropy, which is more than sufficient for what is, similar to x86, not a huge KASLR range anyway: default 8 bits (a span of 16MB with 64KB alignment), though CONFIG_RANDOMIZE_BASE_MAX_OFFSET can be tuned to the device's memory, giving a maximum of 11 bits on 32-bit, and 15 bits on EVA or 64-bit. SLAB freelist ASLR Thomas Garnier added CONFIG_SLAB_FREELIST_RANDOM to make slab allocation layouts less deterministic with a per-boot randomized freelist order. This raises the bar for successful kernel slab attacks. Attackers will need to either find additional bugs to help leak slab layout information or will need to perform more complex grooming during an attack. Thomas wrote a post describing the feature in more detail here: Randomizing the Linux kernel heap freelists. (SLAB is done in v4.7, and SLUB in v4.8.) eBPF JIT constant blinding Daniel Borkmann implemented constant blinding in the eBPF JIT subsystem. With strong kernel memory protections (CONFIG_DEBUG_RODATA) in place, and with the segregation of user-space memory execution from kernel (i.e. SMEP, PXN, CONFIG_CPU_SW_DOMAIN_PAN), having a place where user-space can inject content into an executable area of kernel memory becomes very high-value to an attacker. The eBPF JIT was exactly such a thing: the use of BPF constants could result in the JIT producing instruction flows that could include attacker-controlled instructions (e.g. by directing execution into the middle of an instruction with a constant that would be interpreted as a native instruction). The eBPF JIT already uses a number of other defensive tricks (e.g. random starting position), but this added randomized blinding to any BPF constants, which makes building a malicious execution path in the eBPF JIT memory much more difficult (and helps block attempts at JIT spraying to bypass other protections). Elena Reshetova updated a 2012 proof-of-concept attack to succeed against modern kernels to help provide a working example of what needed fixing in the JIT. This serves as a thorough regression test for the protection. The cBPF JITs that exist in ARM, MIPS, PowerPC, and Sparc still need to be updated to eBPF, but when they do, they'll gain all these protections immediately. Bottom line is that if you enable the (disabled-by-default) bpf_jit_enable sysctl, be sure to set the bpf_jit_harden sysctl to 2 (to perform blinding even for root). fix brk ASLR weakness on arm64 compat There have been a few ASLR fixes recently (e.g. ET_DYN, x86 32-bit unlimited stack), and while reviewing some suggested fixes to arm64 brk ASLR code from Jon Medhurst, I noticed that arm64's brk ASLR entropy was slightly too low (less than 1 bit) for 64-bit and noticeably lower (by 2 bits) for 32-bit compat processes when compared to native 32-bit arm. I simplified the code by using literals for the entropy.
Maybe we can add a sysctl some day to control brk ASLR entropy like was done for mmap ASLR entropy. LoadPin LSM LSM stacking is well-defined since v4.2, so I finally upstreamed a small LSM that implements a protection I wrote for Chrome OS several years back. On systems with a static root of trust that extends to the filesystem level (e.g. Chrome OS's coreboot+depthcharge boot firmware chaining to dm-verity, or a system booting from read-only media), it's redundant to sign kernel modules (you've already got the modules on read-only media: they can't change). The kernel just needs to know they're all coming from the correct location. (And this solves loading known-good firmware too, since there is no convention for signed firmware in the kernel yet.) LoadPin requires that all modules, firmware, etc. come from the same mount (and assumes that the first loaded file defines which mount is "correct", hence "load pinning"). That's it for v4.7. Prepare yourself for v4.8 next!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

26 September 2016

Kees Cook: security things in Linux v4.3

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn't a lot of time to talk about them all, I figured I'd make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn't everything security-related or generally of interest, but they're the things I thought needed to be pointed out. If there's something security-related you think I should cover from v4.3, please mention it in the comments. I'm sure I haven't caught everything. :) A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren't unique to the project nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the kernel self-protection ideal anyway; some things are just really interesting userspace-facing features. So, to that end, here are things I found interesting in v4.3: CONFIG_CPU_SW_DOMAIN_PAN Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow. This raises the bar for attackers since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink. While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I'm delighted to see a form of this land in upstream finally. To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.
Ambient Capabilities Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex least privilege execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon not running as root and retaining the needed capabilities in children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier. For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn't enough, there is a self-test available in tools/testing/selftests/capabilities/ too. PowerPC and Tile support for seccomp filter Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly. That's it for v4.3. If I missed stuff you found interesting, please let me know! I'm going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend.
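As a hedged illustration of the Ambient Capabilities mechanism mentioned above: the helper path and the chosen capability below are made up for the example, and the capability must already be present in the process's permitted and inheritable sets for the raise to succeed.

#include <sys/prctl.h>
#include <linux/capability.h>   /* CAP_NET_BIND_SERVICE */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Raise the capability into the ambient set so it survives execve()
     * without any filesystem capability bits on the helper binary. */
    if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE,
              CAP_NET_BIND_SERVICE, 0, 0) == -1) {
        perror("PR_CAP_AMBIENT_RAISE");
        return 1;
    }

    /* "./network-helper" is a hypothetical path used only for illustration. */
    execlp("./network-helper", "network-helper", (char *)NULL);
    perror("execlp");
    return 1;
}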

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

20 September 2016

Vincent Sanders: If I see an ending, I can work backward.

Now, while I am sure Arthur Miller was referring to writing a play when he said those words, they have an oddly appropriate resonance for my topic.

In the early nineties Lou Montulli applied the idea of magic cookies to HTTP to make the web stateful. I imagine he had no idea of the issues he was going to introduce for the future. Like most web technology, it was a solution to an immediate problem which it has never been possible to subsequently improve.

[Image: chocolate chip cookies are much tastier than HTTP cookies]
The HTTP cookie is simply a way for a website to identify a connecting browser session so that state can be kept between retrieving pages. Due to shortcomings in the design of cookies and implementation details in browsers, this has led to a selection of unwanted side effects. The specific issue that I am talking about here is the supercookie, where the "super" prefix has similar connotations to when it is applied to the word villain.

Whenever the browser requests a resource (web page, image, etc.) the server may return a cookie along with the resource, which your browser remembers. The cookie has a domain name associated with it, and when your browser requests additional resources, if the cookie domain matches the requested resource's domain name, the cookie is sent along with the request.

As an example, the first time you visit a page on www.example.foo.invalid you might receive a cookie with the domain example.foo.invalid, so the next time you visit a page on www.example.foo.invalid your browser will send the cookie along. Indeed it will also send it along for any page on another.example.foo.invalid.

A supercookie is simply one where, instead of being limited to one sub-domain (example.foo.invalid), the cookie is set for a top-level domain (foo.invalid), so visiting any such domain (I used the invalid name in my examples but one could substitute com or co.uk) makes your web browser give out the cookie. Hackers would love to be able to set up such cookies and potentially control and hijack many sites at a time.

This problem was noted early on and browsers were not allowed to set cookie domains with fewer than two parts, so example.invalid or example.com were allowed but invalid or com on their own were not. This works fine for top-level domains like .com, .org and .mil, but not for countries where the domain registrar has rules about second levels, like the uk domain (uk domains must have a second level like .co.uk).

[Image: NetSurf cookie manager showing a supercookie]
There is no way to generate the correct set of top-level domains with an algorithm, so a database is required; it is called the Public Suffix List (PSL). This database is a simple text-formatted list with wildcard and inversion syntax and is, at the time of writing, around 180Kb of text including comments, which compresses down to 60Kb or so with deflate.

A few years ago, with ICANN allowing the great expansion of top-level domains, the existing NetSurf supercookie handling was found to be wanting and I decided to implement a solution using the PSL. At that point in time the database was only 100Kb of source or 40Kb compressed.

I started by looking at the limited existing libraries. In fact only the regdom library was adequate, but it used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf heap usage significantly (we still have users on 8Mb systems). Because of this, and the need to run a PHP script to generate the pre-processed input, it was decided the library was not suitable.

Lacking other choices, I came up with my own implementation, which used a Perl script to construct a tree of domains from the PSL in a static array, with the label strings in a separate table. At the time my implementation added 70Kb of read-only data, which I thought reasonable and which allowed for direct lookup of answers from the database.

This solution still required a pre-processing step to generate the C source code, but Perl is much more readily available, is a language already used by our tooling, and we could always simply ship the generated file. As long as the generated file was updated at release time, as we already do for our fallback SSL certificate root set, this would be acceptable.

[Image: Wireshark session showing NetSurf sending a co.uk supercookie to bbc.co.uk]
I put the solution into NetSurf, was pleased no-one seemed to notice, and moved on to other issues. Recently, while fixing a completely unrelated issue in the display of session cookies in the management interface, I realised I had some test supercookies present in the display. After the initial "that's odd" I realised with horror there might be a deeper issue.

It quickly became evident the PSL generation was broken and had been for a long time. Even worse, somewhere along the line the "redundant" empty generated source file had been removed, and the ancient fallback code path was all that had been used.

This issue had escalated somewhat from a trivial display problem. I took a moment to assess the situation a bit more broadly and came to the conclusion that there were a number of interconnected causes, centered around the lack of automated testing, which could be solved by extracting the PSL handling into a "support" library.

NetSurf has several of these support libraries which can be used separately from the main browser project but are principally oriented towards it. These libraries are shipped and built in releases alongside the main browser codebase and mainly serve to make the API more obvious and modular. In this case my main aim was to have the functionality segregated into a separate module which could be tested, updated and monitored directly by our CI system, meaning the embarrassing failure I had found can never occur again.

Before creating my own library I did consider libpsl, a library which had been created since I wrote my original implementation. Initially I was very interested in using it, given it managed a data representation within a mere 32Kb.

Unfortunately the library integrates a great deal of IDN and punycode handling which was not required in this use case. NetSurf already has to handle IDN and punycode translations and uses punycode-encoded domain names internally, only translating to Unicode representations for display, so duplicating this functionality using other libraries would require a great deal of resource beyond the raw data representation.

I put the library together based on the existing code-generator Perl program and integrated the test set that comes along with the PSL. I was a little alarmed to discover that the PSL had almost doubled in size since the implementation was originally written, and the library's trivial test program was now weighing in at a hefty 120Kb.

This stemmed from two main causes:
  1. there were now many more domain label strings to be stored
  2. there were now many, many more nodes in the tree.
To address the first cause, the length of each domain label string was moved into the unused padding space within each tree node, removing a byte from each domain label and saving 6Kb. Next it occurred to me that, while building the domain label string table, if the label to be added already existed as a substring within the table it could be elided.

The domain labels were sorted from longest to shortest and added in order, searching for substring matches as the table was built; this saved another 6Kb. I am sure there are ways to reduce this further that I have missed (if you see them let me know!) but a 25% saving (47Kb to 35Kb) was a good start.
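As a rough sketch of that construction step (hypothetical helper code, not the actual Perl generator): labels are sorted longest first and a label is only appended if it does not already occur as a substring of the table built so far. Since entries are later referenced by offset and length, labels need no separators and may even straddle the boundary between two previously added labels.

#include <stdlib.h>
#include <string.h>

/* Sort helper: longest label first. */
static int by_length_desc(const void *a, const void *b)
{
    size_t la = strlen(*(char *const *)a);
    size_t lb = strlen(*(char *const *)b);

    return (la < lb) - (la > lb);
}

/* Build a label table of at most `cap` bytes; offsets into the returned
 * buffer become the label indices stored in the tree nodes. */
static char *build_label_table(char **labels, size_t count, size_t cap)
{
    char *table = calloc(1, cap);

    if (table == NULL)
        return NULL;

    qsort(labels, count, sizeof(*labels), by_length_desc);
    for (size_t i = 0; i < count; i++) {
        if (strstr(table, labels[i]) != NULL)
            continue;                       /* already present: elide it */
        if (strlen(table) + strlen(labels[i]) + 1 > cap)
            break;                          /* toy version: just give up */
        strcat(table, labels[i]);
    }
    return table;
}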

The second cause was a little harder to address. The structure representing nodes in the tree that I started with looked reasonable at first glance.

struct pnode {
    uint16_t label_index;      /* index into string table of label */
    uint16_t label_length;     /* length of label */
    uint16_t child_node_index; /* index of first child node */
    uint16_t child_node_count; /* number of child nodes */
};

I examined the generated table and observed that the majority of nodes were leaf nodes (they had no children), which makes sense given the type of data being represented. By allowing two types of node, one for labels and a second for the child node information, the node size would be halved in most cases, requiring only a modest change to the tree traversal code.

The only issue with this was needing a way to indicate that a node has child information. It was realised that domain labels have a maximum length of 63 characters, meaning their length can be represented in six bits, so a uint16_t was excessive. The space was split into two uint8_t parts: one for the length and one for a flag indicating that a child data node follows.

union pnode {
    struct {
        uint16_t index;        /* index into string table of label */
        uint8_t  length;       /* length of label */
        uint8_t  has_children; /* the next table entry is a child node */
    } label;
    struct {
        uint16_t node_index;   /* index of first child node */
        uint16_t node_count;   /* number of child nodes */
    } child;
};

static const union pnode pnodes[8580] = {
    /* root entry */
    { .label = { 0, 0, 1 } }, { .child = { 2, 1553 } },
    /* entries 2 to 1794 */
    { .label = { 37, 2, 1 } }, { .child = { 1795, 6 } },

    ...

    /* entries 8577 to 8578 */
    { .label = { 31820, 6, 1 } }, { .child = { 8579, 1 } },
    /* entry 8579 */
    { .label = { 0, 1, 0 } },
};

This change reduced the node array size from 63Kb to 33Kb, almost a 50% saving. I considered using bitfields to try to pack the label length and has_children flag into a single byte, but such packing would not reduce the length of a node below 32 bits because it is unioned with the child structure.

Using the spare uint8_t derived from bitfield packing to store an additional label node across three other nodes was also considered, but it added a great deal of complexity to node lookup and table construction for a saving of around 4Kb, so it was not incorporated.
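For illustration, a lookup over such a table might walk one level as in the hedged sketch below. It assumes the layout suggested by the generated data above (a node's children stored consecutively, with a label entry whose has_children flag is set immediately followed by its child entry), reuses the union pnode and pnodes definitions shown earlier, and relies on a hypothetical label_matches() helper that compares a string-table entry against a candidate domain label.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper, provided elsewhere: compares the label stored at
 * (index, length) in the string table against a candidate domain label. */
bool label_matches(uint16_t index, uint8_t length,
                   const char *label, size_t len);

/* Find a matching child of the node described by `child_info`. */
static const union pnode *find_child(const union pnode *child_info,
                                     const char *label, size_t len)
{
    uint32_t idx = child_info->child.node_index;

    for (uint16_t i = 0; i < child_info->child.node_count; i++) {
        const union pnode *n = &pnodes[idx];

        if (label_matches(n->label.index, n->label.length, label, len))
            return n;
        /* skip over this child's own child entry, if it has one */
        idx += n->label.has_children ? 2 : 1;
    }
    return NULL;
}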

With the changes incorporated, the test program was a much more acceptable 75Kb, reasonably close to the size of the compressed source but with the benefit of direct lookup. Integrating the library's single API call into NetSurf was straightforward and resulted in correct operation when tested.

This episode just reminded me of the dangers of code that can fail silently. It exposed our users to a security problem that we thought had been addressed almost six years ago and squandered the limited resources of the project. Hopefully it is a lesson we will not have to learn again any time soon. If there is a positive to take away, it is that the new implementation is more space efficient, automatically built and, importantly, tested.

18 August 2016

Zlatan Todorić: DebConf16 - new age in Debian community gathering

DebConf16 Finally got some time to write this blog post. DebConf for me is always something special, a family gathering of a weird combination of geeks (or is weird a default geek state?). To be honest, I finally can compare Debian as a hacker conference to other so-called hacker conferences. With that hat on, I can say that Debian is by far the most organized and highest quality conference. Maybe I am biased, but I don't care too much about that. I simply love Debian and that is no secret. So let's dive into my view on DebConf16, which was held in Cape Town, South Africa. Cape Town This was the first time we had the conference on the African continent (and I now see for the first time a DebConf bid for Asia, which leaves only Australia and the beautiful Pacific islands to start a bid). Cape Town by itself is pretty much a Europe-like city. That was kinda a bummer for me on the first day, especially as we were hosted at the University of Cape Town (which is quite a beautiful uni) and the surrounding neighborhood was very European. Almost right after the first day I was fine because I started exploring the huge city. Cape Town is really huge, it has by stats ~4mil people, and unofficially it has ~6mil. Certainly a lot to explore and I hope one day to be back there (I actually hope as soon as possible). The good, bad and ugly I will start with the bad and the ugly as I want to finish on good notes. Racism down there is still HUGE. You don't have signs on the road saying that, but there is clearly separation between white and black people. The houses near the uni all had fences on walls (most of them even electrical ones with sharp blades on them) with bars on windows. That just brings tensions and certainly doesn't improve anything. To be honest, if someone wants to break in they still can do so easily, so the fences are maybe meant to intimidate but they actually only bring tension (my personal view). Also many houses have a sign of Armed Force Response (something along those lines) where, in case someone starts breaking in, armed forces would come to protect the home. Also, compared to the workforce, white people appear to hold most of the profit/big business positions and fields, while black people are street workers, bar workers etc. On the street you can feel from time to time the tension between people. Going out to bars also showed the separation - they were either almost exclusively white or exclusively black. A very sad state to see. Sharing love and mixing is something that pushes us forward and here I saw clear blockades for such things. The bad part of Cape Town, and this is not only special to Cape Town but to almost all major cities, is that small crime is widespread. Pickpocketing here is something you must pay attention to. To me, personally, nothing happened, but I heard a lot of stories from my friends on whom such activities were attempted (although I am not sure whether the criminals succeeded). Enough of the bad, as my blog post will not change this and it is a topic for debate and active involvement which I unfortunately can't do at this moment. THE GOOD! There are so many great local people I met! As I mentioned, I want to visit that city again and again and again. If you don't fear those bad things, this city has great local cuisine, a lot of great people, an awesome art soul and they dance with heart (I guess when you live in rough times, you try to use free time at your best).
There were differences between white and black bars/clubs - the white ones were almost like standard European ones, a lot of drinking and not much dancing, and the black ones had a lot of dancing and not much drinking (maybe the economical power has something to do with it but I certainly felt more love in the black bars). Cape Town has an awesome mountain, Table Mountain. I went hiking with my friends, and I must say (again to myself) - do the damn hiking as much as possible. After every hike I feel so inspired that I start thinking I hate myself for not doing it more often! The view from Table Mountain is just majestic (you can even see the Cape of Good Hope). The WOW moments just fire up in you. Now let's move on to DebConf itself. As always, organization was at quite a high level. I loved the badge design, it had a map and a nice amount of information on it. The place we stayed at was kinda not that good, but if you take into account that those are old student dorms (and we were all in a female student dorm :D ) it is pretty fancy by its own account. Talks were near, which is always good. The general layout of talks and the front desk position was perfect in my opinion. All in one place basically. Wine and Cheese this year was a kinda funny story because of the cheese restrictions, but the Cheese cabal managed to pull things off. It was actually very well organized. I met some new people during the party/ceremony, which always makes me grow as a person. The cultural mix at DebConf is just fantastic. Not only do you learn a lot about Debian and hacking on it, but the sheer cultural diversity makes this small con such a vibrant place and home to a lot. The Debian Dinner happened in the Aquarium, where I had a nice dinner and chat with my old friends. The Aquarium by itself is a place you can visit to see a lot of strange creatures that live on this third rock from the Sun. Speaking of old friends - I love that Apollo rejoined us again (after missing DebConf15), seeing Joel again (and he finally visited Banja Luka as an aftermath!), mbiebl, ah, moray, Milan, santiago and tons of others. Of course we always miss a few, such as zack and vorlon this year (but they had pretty okay-ish reasons I would say). Speaking of new friends, I made a few local friends, which makes me happy, and at least one Indian/Hindu friend. Why did I mention this separately - well, we had an accident during the Group Photo (btw, where is our Lithuanian, nowadays Germany-based, photographer?!) where 3 laptops of our GSoC students were stolen :( . I was lucky enough to, on behalf of Purism, donate a Librem11 prototype to one of them, who ended up being the Indian friend. She is working on real-time communications, which is also of interest to Purism for our future projects. Regarding the Debian Day Trip, Joel and I opted out and went on our own adventure through Cape Town in pursuit of meeting and talking to local people and finding out interesting things, which proved to be a great decision. We found out about their first-Thursday-of-the-month festival and about the Mama Africa restaurant. That restaurant is going into my special memories (me playing drums with a local band must always be a special memory, right?!). Huh, to be honest writing about DebConf would probably need a book by itself and I always try to keep my posts as short as possible, so I will try to stop here (maybe I will write a few more bits about it in the future, but it's unlikely). Now the notes. Although I saw the racial segregation, I also saw the hope. These things need time.
I come from a country torn apart by nationalism and religious hate, so I understand that this issue is hard and runs deep on many levels. While the tensions are high, I see people trying to talk about it and trying to find solutions, and I feel it is slowly transforming into an open society, where we will realize that there is only one race on this planet and it is called the HUMAN RACE. We are all earthlings, and the sooner we realize that, the sooner we will be on a path to really build society up instead of the fake things that are enslaving our minds. In the end I just want to say thank you, DebConf; thank you, Debian. Everyone could learn from this community as a model (one which can still be improved!) for future societies.

25 January 2016

Antoine Beaupré: Internet in Cuba

A lot has been written about the Internet in Cuba over the years. I have read a few articles, from the New York Times' happy support for Google's invasion of Cuba to RSF's dramatic and fairly outdated report about censorship in Cuba. Having written before about Internet censorship in Tunisia, I was curious to see if I could get a feel for what it is like over there, now that a new Castro is in power and the Obama administration has started restoring diplomatic ties with Cuba. Even with those political changes, which may signify the end of an embargo the Cuban government has called genocidal, it is surprisingly difficult to get fresh information about the current state of affairs. This article aims to fill that gap by clarifying how the internet works in Cuba, what kind of censorship mechanisms are in place and how to work around them. It also digs more technically into the network architecture and performance. It is published in the hope of giving both Cubans and the rest of the world a better understanding of the Cuban network and, if possible, showing Cubans ways to access the internet more cheaply or without censorship.

"Censorship" and workarounds Unfortunately, I have been connected to the internet only through the the Varadero airport and the WiFi of a "full included" resort near Jibacoa. I have to come to assume that this network is likely to be on a segregated, uncensored internet while the rest of the country suffers the wrath of the Internet censorship in Cuba I have seen documented elsewhere. Through my research, I couldn't find any sort of direct censorship. The Netalyzr tool couldn't find anything significantly wrong with the connection, other than the obvious performance problems related both to the overloaded uplinks of the Cuban internet. I ran an incomplete OONI probe as well, and it seems there was no obvious censorship detected there as well, at least according to folks in the helpful #ooni IRC channel. Tor also works fine, and could be a great way to avoid the global surveillance system described later in this article. Nevertheless, it still remains to be seen how the internet is censored in the "real" Cuban internet, outside of the tourist designated areas - hopefully future visitors or locals can expand on this using the tools mentioned above, using the regular internet. Usual care should be taken when using any workaround tools, mentioned in this post or not, as different regimes over the world have accused, detained, tortured and killed sometimes for the mere fact of using or distributing circumvention tools. For example, a Russian developer was arrested and detained in 2001 by United States' FBI for exposing vulnerabilities in the Adobe e-books copy protection mechanisms. Similarly, people distributing Tor and other tools have been arrested during the period prior to the revolution in Tunisia.

The Cuban captive portal There is, however, a more pernicious and yet very obvious censorship mechanism at work in Cuba: to get access to the internet, you have to go through what seems to be a state-wide captive portal, which I have seen both at the hotel and the airport. It is presumably deployed at all internet access points. To get credentials for that portal, you need a username and password, which you get by buying a Nauta card. Those cards cost 2 CUC and get you an hour of basically unlimited internet access. That may not seem like a lot for a rich northern hotel party-goer, but for Cubans it's a lot of money, given that the average monthly salary is around 20 CUC. The system is also pretty annoying to use, because it means you do not get continuous network access: every hour, you need to input a new card, which will obviously make streaming movies and other online activities annoying. It also makes hosting servers basically impossible. So while Cuba does not have a "great firewall" like China or Iran, there is definitely a big restriction on going online in Cuba. Indeed, it seems to be how the government ensures that Cubans do not foment too much dissent online: keep the internet slow and inaccessible, and you won't get too many Arab spring / blogger revolutions.

Bypassing the Cuban captive portal The good news is that it is perfectly possible for Cubans (or at least for a tourist like me with resources outside of the country) to bypass the captive portal. Like many poorly implemented portals, this one allows DNS traffic to pass through, which makes it possible to access the global network for free by using a tool like iodine, which tunnels IP traffic over DNS requests. Of course, the bandwidth and reliability of the connection you get through such a tunnel are pretty bad. I have regularly seen 80% packet loss and over two minutes of latency:
--- 10.0.0.1 ping statistics ---
163 packets transmitted, 31 received, 80% packet loss, time 162391ms
rtt min/avg/max/mdev = 133.700/2669.535/64188.027/11257.336 ms, pipe 65
Still, it allowed me to log in to my home server over SSH, using Mosh to work around the reliability issues. Every once in a while, Mosh would get stuck and keep trying to send packets to probe the server, which would clog the connection even more, so I regularly had to restart the whole stack using these commands:
killall iodine # stop DNS tunnel
nmcli n off # turn off wifi to change MAC address
macchanger -A wlan0 # change MAC address
nmcli n on # turn wifi back on
sleep 3 # wait for wifi to settle
iodine-client-start # restart DNS tunnel
The Koumbit Wiki has good instructions on how to set up a DNS tunnel. I am wondering if such a public service could be of use to Cubans, although I am not sure how it could be deployed only for Cubans, and what kind of traffic it could support... The fact is that iodine does require a server to operate, and that server must be run outside of the censored perimeter, something that Cubans may not be able to afford in the first place. Another possible way to save money with the captive portal would be to write something that automates connecting to and disconnecting from the portal. You would feed that program a list of credentials and it would connect to the portal only on demand, and disconnect as soon as no traffic goes through. There are details on the implementation of the captive portal below that may help future endeavours in that field; a rough sketch of such a tool follows.
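To make the idea concrete, here is a minimal Python sketch of such a tool, using the login and logout URLs documented below. It is an untested outline, not a working client: the form field names are assumptions, the portal markup is not parsed, and a real implementation would need to watch traffic counters to decide when to disconnect.

#!/usr/bin/env python3
# Hypothetical sketch of a Nauta captive portal automation tool.
# The form field names below are assumptions, not the portal's real API.
import time
import requests

LOGIN_URL = "https://www.portal-wifi-temas.nauta.cu/index.php?r=ac/login"
LOGOUT_URL = "http://192.168.134.138/logout.user?cmd=logout"

def login(username, password):
    """Post card credentials to the portal (self-signed cert, so verify=False)."""
    session = requests.Session()
    session.post(LOGIN_URL,
                 data={"username": username, "password": password},  # assumed field names
                 verify=False, timeout=30)
    return session

def logout(session):
    """Release the session so the card's remaining time is not wasted."""
    session.get(LOGOUT_URL, verify=False, timeout=30)

if __name__ == "__main__":
    # One card per line: "UID password"
    with open("nauta-cards.txt") as fh:
        cards = [line.split() for line in fh if line.strip()]
    for uid, password in cards:
        session = login(uid, password)
        print("logged in with card %s; press Ctrl-C to log out" % uid)
        try:
            while True:
                time.sleep(60)  # a real tool would monitor traffic here instead
        except KeyboardInterrupt:
            logout(session)
            break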

Private information revealed to the captive portal It should be mentioned, however, that the captive portal holds a significant amount of information about its clients, which is a direct threat to the online privacy of Cuban internet users. Of course, the unique identifiers issued with the Nauta cards can be correlated with your identity right from the start: for example, I had to give my room number to get a Nauta card issued. The central portal also knows which access point you are connected to. For example, the central portal knows I was connected to Wifi_Memories_Jibacoa which, for anyone who cares to research, will give them a location of about 20 square meters where I was when connected (there is only one access point in the whole hotel). Finally, the central portal also knows my MAC address, a unique identifier for the computer I am using, which also reveals which brand of computer I am using (Mac, Lenovo, etc). While this address can be changed, very few people know that, let alone how. This led me to question whether I would be allowed back into Cuba (or even allowed out!) after publishing this blog post, as it is obvious that I can be easily identified based on the time this article was published, my name and other details. Hopefully the Cuban government will either not notice or not care, but this can be a tricky situation, obviously. I have heard that Cuban prisons are not the best hangout place in Cuba, to say the least...

Network configuration assessment This section is more technical and delves more deeply in the Cuban internet to analyze the quality and topology of the network, along with hints as to which hardware and providers are being used to support the Cuban government.

Line quality The internet is actually not so bad in the hotel. Again, this may be because of the very fact that I am in that hotel, and I get privileged access to the new fiber line to Venezuela, the ALBA-1 link. The line speed I get is around 1 Mbps, according to speedtest, which selected a server from LIME in George Town, Cayman Islands:
[1034]anarcat@angela:cuba$ speedtest
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Empresa de Telecomunicaciones de Cuba (152.206.92.146)...
Selecting best server based on latency...
Hosted by LIME (George Town) [391.78 km]: 317.546 ms
Testing download speed........................................
Download: 1.01 Mbits/s
Testing upload speed..................................................
Upload: 1.00 Mbits/s
Latency to the rest of the world is of course slow:
--- koumbit.org ping statistics ---
122 packets transmitted, 120 received, 1,64% packet loss, time 18731,6ms
rtt min/avg/max/sdev = 127,457/156,097/725,211/94,688 ms
--- google.com ping statistics ---
122 packets transmitted, 121 received, 0,82% packet loss, time 19371,4ms
rtt min/avg/max/sdev = 132,517/160,095/724,971/93,273 ms
--- redcta.org.ar ping statistics ---
122 packets transmitted, 120 received, 1,64% packet loss, time 40748,6ms
rtt min/avg/max/sdev = 303,035/339,572/965,092/97,503 ms
--- ccc.de ping statistics ---
122 packets transmitted, 72 received, 40,98% packet loss, time 19560,2ms
rtt min/avg/max/sdev = 244,266/271,670/594,104/61,933 ms
Interestingly, Koumbit is actually the closest host in the above test. It could be that Canadian hosts are less affected by bandwidth problems compared to US hosts because of the embargo.

Network topology The various traceroutes show a fairly odd network topology, but that is typical of what I would describe as "colonized internet users", who sit behind layers and layers of NAT and obscure routing that keep them from the real internet. Just as large corporations implement NAT on a large scale, Cuba seems to have layers and layers of private RFC 1918 IPv4 space. A typical traceroute starts with:
traceroute to koumbit.net (199.58.80.33), 30 hops max, 60 byte packets
 1  10.156.41.1 (10.156.41.1)  9.724 ms  9.472 ms  9.405 ms
 2  192.168.134.137 (192.168.134.137)  16.089 ms  15.612 ms  15.509 ms
 3  172.31.252.113 (172.31.252.113)  15.350 ms  15.805 ms  15.358 ms
 4  pos6-0-0-agu-cr-1.mpls.enet.cu (172.31.253.197)  15.286 ms  14.832 ms  14.405 ms
 5  172.31.252.29 (172.31.252.29)  13.734 ms  13.685 ms  14.485 ms
 6  200.0.16.130 (200.0.16.130)  14.428 ms  11.393 ms  10.977 ms
 7  200.0.16.74 (200.0.16.74)  10.738 ms  10.019 ms  10.326 ms
 8  ix-11-3-1-0.tcore1.TNK-Toronto.as6453.net (64.86.33.45)  108.577 ms  108.449 ms
Let's take this apart line by line:
 1  10.156.41.1 (10.156.41.1)  9.724 ms  9.472 ms  9.405 ms
This is my local gateway, probably the hotel's wifi router.
 2  192.168.134.137 (192.168.134.137)  16.089 ms  15.612 ms  15.509 ms
This is likely not very far from the local gateway, probably still in Cuba. It is one bit away from the captive portal IP address (see below), so it is very likely related to the captive portal implementation.
 3  172.31.252.113 (172.31.252.113)  15.350 ms  15.805 ms  15.358 ms
 4  pos6-0-0-agu-cr-1.mpls.enet.cu (172.31.253.197)  15.286 ms  14.832 ms  14.405 ms
 5  172.31.252.29 (172.31.252.29)  13.734 ms  13.685 ms  14.485 ms
All of those are within RFC 1918 space. Interestingly, the Cuban DNS servers resolve one of those private IPs to a name within Cuban space, on line #4. That line is interesting because it reveals the potential use of MPLS.
 6  200.0.16.130 (200.0.16.130)  14.428 ms  11.393 ms  10.977 ms
 7  200.0.16.74 (200.0.16.74)  10.738 ms  10.019 ms  10.326 ms
Those two lines are the only ones that actually reveal that the route belongs in Cuba at all. Both IPs are in a tiny (/24, or 256 IP addresses) network allocated to ETECSA, the state telco in Cuba:
inetnum:     200.0.16/24
status:      allocated
aut-num:     N/A
owner:       EMPRESA DE TELECOMUNICACIONES DE CUBA S.A. (IXP CUBA)
ownerid:     CU-CUBA-LACNIC
responsible: Rafael López Guerra
address:     Ave. Independencia y 19 Mayo, s/n,
address:     10600 - La Habana - CH
country:     CU
phone:       +53 7 574242 []
owner-c:     JOQ
tech-c:      JOQ
abuse-c:     JEM52
inetrev:     200.0.16/24
nserver:     NS1.NAP.ETECSA.NET
nsstat:      20160123 AA
nslastaa:    20160123
nserver:     NS2.NAP.ETECSA.NET
nsstat:      20160123 AA
nslastaa:    20160123
created:     20030512
changed:     20140610
Then the last hop:
 8  ix-11-3-1-0.tcore1.TNK-Toronto.as6453.net (64.86.33.45)  108.577 ms  108.449 ms  108.257 ms
...interestingly, lands directly in Toronto, in this case going later to Koumbit, but this is the first hop that varies according to the destination; hops 1-7 are a common trunk for all external communications. It is also interesting that this hop adds a good 90 milliseconds of latency, showing that a significant distance and amount of equipment is crossed. Yet only a single hop is shown, hiding the intermediate step of the Venezuelan link or any other links for that matter. Something obscure is going on there... Also interesting to note is the traceroute to the redirection host, which is only one hop away:
traceroute to 192.168.134.138 (192.168.134.138), 30 hops max, 60 byte packets
 1  192.168.134.138 (192.168.134.138)  6.027 ms  5.698 ms  5.596 ms
Even though it is not the gateway:
$ ip route
default via 10.156.41.1 dev wlan0  proto static  metric 1024
10.156.41.0/24 dev wlan0  proto kernel  scope link  src 10.156.41.4
169.254.0.0/16 dev wlan0  scope link  metric 1000
This implies very close coordination between the different access points and the captive portal system. Finally, note that there seem to be only three peers to the Cuban internet: Teleglobe, formerly Canadian, now owned by the Indian Tata group, and Telefónica, the Spanish telco that colonized most of Latin America's internet, all the way down to Argentina. This is confirmed by my traceroutes, which show traffic to Koumbit going through Tata and traffic to Google going through Telefónica.

Captive portal implementation The captive portal is https://www.portal-wifi-temas.nauta.cu/ (not accessible outside of Cuba) and uses a self-signed certificate. The domain name resolves to 190.6.81.230 in the hotel. Accessing http://1.1.1.1/ gives you a status page which allows you to disconnect from the portal. It actually redirects you to https://192.168.134.138/logout.user. That is also a self-signed, but different, certificate. That certificate actually reveals the involvement of Gemtek, which is a "world-leading provider of Wireless Broadband solutions, offering a wide range of solutions from residential to business". It is somewhat unclear whether the involvement of Gemtek here is deliberate or a misconfiguration on the part of Cuban officials, especially since the certificate is self-signed and was issued in 2002. It could, however, be a trace of the supposed involvement of China in the development of Cuba's networking systems, although Gemtek is based in Taiwan, not in mainland China. That IP, in turn, redirects you to the same portal but to a page that shows you the statistics:
https://www.portal-wifi-temas.nauta.cu/?mac=0024D1717D18&script=logout.user&remain_time=00%3A55%3A52&session_time=00%3A04%3A08&username=151003576287&clientip=10.156.41.21&nasid=Wifi_Memories_Jibacoa&r=ac%2Fpopup
Notice how you see the MAC address of the machine in the URL (randomized, this is not my MAC address), along with the remaining time, session time, client IP and the WiFi access point ESSID. There may be some potential for defrauding the session time there; I haven't tested it directly. Hitting Actualizar redirects you back to the IP address, which redirects you to the right URL on the portal. The "real" logout is at:
http://192.168.134.138/logout.user?cmd=logout
The login is performed against https://www.portal-wifi-temas.nauta.cu/index.php?r=ac/login with a referer of:
https://www.portal-wifi-temas.nauta.cu/?&nasid=Wifi_Memories_Jibacoa&nasip=192.168.134.138&clientip=10.156.41.21&mac=EC:55:F9:C5:F2:55&ourl=http%3a%2f%2fgoogle.ca%2f&sslport=443&lang=en-US%2cen%3bq%3d0.8&lanip=10.156.41.1
Again, notice the information revealed to the central portal.
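For reference, all of that session state lives in the query string of the status URL quoted above, so it is trivial to decode programmatically. A small Python example, using the (randomized) URL shown earlier:

from urllib.parse import urlparse, parse_qs

status_url = ("https://www.portal-wifi-temas.nauta.cu/?mac=0024D1717D18"
              "&script=logout.user&remain_time=00%3A55%3A52"
              "&session_time=00%3A04%3A08&username=151003576287"
              "&clientip=10.156.41.21&nasid=Wifi_Memories_Jibacoa&r=ac%2Fpopup")

# parse_qs returns lists of values; keep the first value of each parameter
params = {key: values[0] for key, values in parse_qs(urlparse(status_url).query).items()}
print(params["remain_time"])  # 00:55:52
print(params["nasid"])        # Wifi_Memories_Jibacoa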

Equipment and providers I ran Nmap probes against both the captive portal and the redirection host, in the hope of finding out how they were built and whether they could reveal the source of the equipment used. The complete nmap probes are available in nmap, but it seems that the captive portal is running on some embedded device. It is confusing because the probe for the captive portal responds as if it were the gateway, which blurs even more the distinction between the hotel's gateway and the captive portal. This raises the distinct possibility that all access points are actually captive portals that authenticate against another central server. The nmap traces do show three distinct hosts however:
  • the captive portal (www.portal-wifi-temas.nauta.cu, 190.6.81.230)
  • some redirection host (192.168.134.138)
  • the hotel's gateway (10.156.41.1)
They do have distinct signatures, so the above may just be me misinterpreting traceroute and nmap results. Your comments may help in clarifying the above. Still, the three devices show up as running Linux, in the last two cases versions between 2.4.21 and 2.4.31. Finding out exactly which Linux distribution they run is far more challenging, and it may just be some custom distribution. Indeed, the web server shows up as G4200.GSI.2.22.0155 and the SSH server is running OpenSSH 3.0.2p1, which is basically prehistoric (2002!) and corroborates the idea that this is some Gemtek embedded device. The fact that those devices are running 14-year-old software should be a concern to the people responsible for those networks. There is, for example, a remote root vulnerability that affects that specific version of OpenSSH, among many other vulnerabilities.
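If you want to double-check the advertised SSH version without running a full Nmap scan, a simple banner grab is enough, since SSH servers announce their version string before any authentication. A minimal Python sketch, using the redirection host's address observed above:

import socket

HOST, PORT = "192.168.134.138", 22

# SSH servers send their version banner immediately after the TCP handshake.
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    banner = sock.recv(256).decode("ascii", errors="replace").strip()

print(banner)  # expected here: something like "SSH-2.0-OpenSSH_3.0.2p1"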

A note on Nauta card's security Finally, one can note that it is probably trivial to guess card UIDs. All the cards I have here start with the prefix 15100, with the following digits being 3576 or 4595, presumably depending on the "batch" that was sent to different hotels; these seem to be batches of 1000 cards. You can also correlate the UID with the date at which the card was issued. For example, 15100357XXX cards are all valid until 19/03/2017, and 151004595XXX cards are all valid until 23/03/2017. Here's the list of UIDs I have seen:
151004595313
151004595974
151003576287
151003576105
151003576097
The passwords, on the other hand, do seem fairly random (although my sample size is small). Interestingly, those passwords are also 12 digits long, which is about as strong as a seven-letter password (mixed uppercase and lowercase). If there are no rate-limiting provisions on that captive portal, it could be possible to guess those passwords, since you have free rein to access those routers. Depending on the performance of the routers, you could get lucky and find a working password for free...
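To back up that comparison: a 12-digit numeric password has 10^12 possible values, while a 7-character mixed-case alphabetic password has 52^7, and both work out to roughly 2^40. A quick check in Python:

import math

digits_12 = 10 ** 12   # 12-digit numeric password
letters_7 = 52 ** 7    # 7-letter mixed-case password

print("12 digits: 2^%.1f" % math.log2(digits_12))  # ~2^39.9
print("7 letters: 2^%.1f" % math.log2(letters_7))  # ~2^39.9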

Conclusion Clearly, Internet access in Cuba needs to be modernized. We can clearly see that Cuba is years behind the rest of the Americas, if only through the percentage of the population with internet access, or the download speeds. The existence of a centralized captive portal also enables a huge surveillance potential that should be a concern for any Cuban, or for that matter, anyone wishing to live in a free society. The answer, however, lies not in the liberalization of commerce and opening the doors to US companies and their own systems of surveillance. It should be possible, and even desirable, for Cubans to establish their own neutral network, a proposal I have made in the past even for here in Québec. This network could be used and improved by Cubans themselves, prioritizing local communities that would establish their own infrastructure according to their own needs. I have been impressed by this article about the El Paquete system: it shows great innovation and initiative from Cubans, who are known for engaging with technology in a creative way. This should be leveraged by letting Cubans do what they want with their networks, not telling them what to do. The best the Googles of this world can do to help Cuba is not to colonize Cuba's technological landscape but to clean up their own and make their own tools more easily accessible and shareable offline. That is something companies can do right now, something I detailed in a previous article.

30 March 2015

Dimitri John Ledkov: Boiling frog, or when did we lose it with /etc?

$ sudo find /etc -type f | wc -l
2794

Stateless When was the last time you looked at /etc and thought, "I honestly know what every single file in here is"? Or had the thought, "each file in here is a configuration change that I made"? Or, for example, are you confident that your system will continue to function correctly if any of those files and directories are removed?

Traditionally most *NIX utilities are simple enough that they do not require any configuration files whatsoever. Most, however, have command line arguments and environment variables to manipulate their behavior. Some of the more complex utilities have configuration files under /etc, sometimes "layered" with configuration from the user's home directory (~/). These are generally widely accepted. However, they do not segregate upstream / distribution / site administrator / local administrator / user configuration changes. Most update mechanisms have created various ways to deal with merging and maintaining the correct state of those. For example, both dpkg & RPM (%config) have elaborate strategies, policies and ways to deal with them. However, even today, they still cause problems: prompting the user about whitespace changes in config files, not preserving user changes, or failing to migrate them.

I can't find the exact date, but it has now been something like 12 years since the XDG Base Directory specification was drafted. It came from Desktop Environment requirements, but one thing it achieves is segregation between upstream / distro / admin / user induced changes. When applications started to implement the Base Directory specification, I started to feel empowered. Upstream ships sensible configs in /usr, distribution integrators ship their overlay tweaks packaged in /usr, my site admin applies further requirements in /etc, and as a user I am free to improve or break everything with configs in ~/. One of the best things about this setup: no upgrade prompts, and it is easy to revert each layer of those configs (or at least to audit where the settings are coming from).

However, the uptake of the XDG Base Directory spec is slow to non-existent among the core components of any OS today. And at the same time, /etc has grown into a dumping ground for pretty much everything under the sun:
  • Symlink farms - E.g. /etc/rc*.d/*, /etc/systemd/system/*.wants/*, /etc/ssl/certs/*
  • Cache files - E.g. /etc/ld.so.cache
  • Empty (and mandatory) directories
  • Empty (and mandatory) "configuration" files. - E.g. whitespace & comments only
Let's be brutally honest and say that none of the above belongs in /etc. /etc must be for end-user configuration only, made by the end user alone and nobody else (or e.g. an automation tool driven by the end-user, like puppet).

Documentation of the available configuration options and the syntax to specify them in config files should be shipped... in the documentation, e.g. man pages, /usr/share/doc, and so on, and not as system-wide "example" config files. Absence of files in /etc must not be treated as fatal, but as the norm, since most users use default settings (especially for the most obscure options). Lastly, compiled-in defaults should be used where possible, or configuration should be layered from multiple locations (e.g. /usr, /etc, ~/ where appropriate); a sketch of such a layered lookup follows.
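As an illustration of what that layering can look like in practice, here is a minimal Python sketch for a hypothetical tool called "foo". It starts from compiled-in defaults and merges simple key=value files from /usr, /etc and the user's home directory in turn; any missing file is silently skipped rather than treated as an error.

import os

def load_config(name):
    """Layered lookup: built-in defaults, then vendor (/usr), admin (/etc), user (~/)."""
    merged = {"option": "built-in default"}          # compiled-in defaults
    layers = [
        "/usr/share/%s/%s.conf" % (name, name),      # shipped by upstream / distribution
        "/etc/%s/%s.conf" % (name, name),            # local / site administrator
        os.path.expanduser("~/.config/%s/%s.conf" % (name, name)),  # end user
    ]
    for path in layers:
        if not os.path.exists(path):
            continue                                 # absence is the norm, not fatal
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                key, _, value = line.partition("=")
                merged[key.strip()] = value.strip()
    return merged

print(load_config("foo"))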

The above observations are not novel, and are shared by most developers and users in the wider open source ecosystem. There are many projects and concepts that deal with this problem by using automation (e.g. puppet, chef), by migrating to new layouts (e.g. implementing / supporting the XDG base dir spec), by using "app bundles" (e.g. mobile apps, docker), or by fully enumerating/abstracting everything in a generic manner (e.g. NixOS). Whilst fixing the issue at hand, these solutions do increase the dependency on files in /etc being available. In other words, we have grown a de-facto user-space API we must not break, because modifications to the well-known files in /etc are expected to take effect by both users and many administration tools.

Since August last year, I have been with the Open Source Technology Center at Intel, working on the Clear Linux* Project for Intel Architecture. One of the goals we have set out is to achieve stateless operation - that is, to have an empty /etc by default, reserved for user modification alone, yet continuing to support all legacy / well-known configuration paths. The premise is that all software can be patched, with auto-detection, built-in defaults or support for layered configuration, to achieve this. I hope that this work will interest everyone and will be widely adopted.

Whilst the effort to convert everything is still ongoing, I want to discuss a few examples from the core components of any system.

Shadow The login(1) command, whilst having a built-in default for every single option, exits with status 1 if it cannot stat(2) the login.defs(5) file.

The passwd(1) command will write out the salted/hashed password in the passwd(5) file, rather than in shadow(5), if it cannot stat the shadow(5) file. There is similar behavior with gshadow. I found it very ironic that the upstream project "shadow" does not use shadow(5) by default.

Similarly, the stock files manipulated by the passwd/useradd/groupadd utilities are not created if they are missing.

Some settings in login.defs(5) are not applicable when compiled with PAM support, yet they are present in the default shipped login.defs(5) file.

Patches to resolve the above issues are undergoing review on the upstream mailing list.
DBus In XML-based configuration, 'includedir' elements are required to exist on disk; that is, an empty directory must be present if it is referenced. If these directories are non-existent, the configuration fails to load and the system or session bus is not started.

Similarly, upstream is in general agreement with the stateless concept, and patches to move all of the dbus default configuration from /etc to /usr are being reviewed for inclusion in the bug tracker. I hope this change will make it into the 1.10 stable release.

GNU Lib C Today, we live in a dual-stack IPv4 and IPv6 world, where even localhost has multiple IP addresses. As a slightly ageist time reference: the first VCS I ever used was git. Thus, when I read the below, I get very confused:
$ cat /etc/host.conf
# The "order" line is only used by old versions of the C library.
order hosts,bind
multi on
Why not simply do this:
--- a/resolv/res_hconf.c
+++ b/resolv/res_hconf.c
@@ -309,6 +309,8 @@ do_init (void)
   if (hconf_name == NULL)
     hconf_name = _PATH_HOSTCONF;
 
+  arg_bool (ENV_MULTI, 1, "on", HCONF_FLAG_MULTI);
+
   fp = fopen (hconf_name, "rce");
   if (fp)

There are still many other packages that need fixes similar to the above. Stay tuned for further stateless observations about Glibc, OpenSSH, systemd and other well-known packages.

In the meantime, you can try out the https://clearlinux.org/ images, which already implement the above and more. If you want to chat about it more, comment on G+, find me on IRC - xnox @ irc.freenode.net #clearlinux - and join our mailing list to kick off the conversation, if you are interested in making the world more stateless.

ps.
I am a professional Linux Distribution developer, currently employed by Intel, however the postings on this site are my own and don't necessarily represent Intel's or any other past/present/future employer positions, strategies, or opinions.

* Other names and brands may be claimed as the property of others


11 August 2014

Juliana Louback: JSCommunicator - Setup and Architecture

Preface During Google Summer of Code 2014, I got to work on the Debian WebRTC portal under the mentorship of Daniel Pocock. Now, I had every intention of meticulously documenting my progress at each step of development in this blog, but I was a bit late in getting the blog up and running. I'll now be publishing a series of posts to recap all I've done during GSoC. Better late than never. Intro JSCommunicator is a SIP communication tool developed in HTML and JavaScript. The code was designed to make integrating JSCommunicator with a website or web app as simple as possible. It's quite easy, really. However, I do think a more detailed explanation of how to set things up and a guide to the JSCommunicator architecture could be of use, particularly for those wanting to modify the code in any way. Setup To begin, please fork the JSCommunicator repo. If you are new to git, feel free to follow the steps in the section Setup and Clone in this post. If you read the README file (which you always should), you'll see that JSCommunicator needs a SIP proxy that supports SIP over WebSockets transport. Some options are: I didn't find a tutorial for Kamailio setup, but I did find one for repro setup. And as a bonus, here you have a great tutorial on how to set up and configure your SIP proxy AND your TURN server. In your project's root directory, you'll see a file named config-sample.js. Make a copy of that file named config.js. The config-sample.js file has comments that are very instructive. In sum, the only things you have to modify are the turn_servers property and the websocket property. In my case, debrtc.org was the domain registered for testing my project, so my config file has:
turn_servers: [
    { server: "turn:debrtc.org" }
],
Note that unlike the sample, I didn't set a username and password, so SIP credentials will be used. Now fill in the websocket property; here we use sip5060.net.
websocket: {
    servers: 'wss://ws.sip5060.net',
    connection_recovery_min_interval: 2,
    connection_recovery_max_interval: 30,
},
I'm pretty sure you can set the connection_recovery properties to whatever you like. Everything else is optional. If you set the user property, specifically display_name and uri, that will be used to fill in the Login form and takes preference over any "Remember me" data. If you also set sip_auth_password, JSCommunicator will automatically log in. All the other properties are for other optional functionalities and are well explained. You'll need some third-party JavaScript that is not included in the JSCommunicator git repo, namely jQuery version 1.4 or higher and ArbiterJS version 1.0. Download jQuery here and ArbiterJS here and place the .js files in your project's root directory. Do make sure that you are including the correct filename in your html file. For example, in phone-dev.shtml, a file named jquery.js is included. The file you downloaded will likely have version numbers in its file name, so rename the downloaded file or change the content of src in your includes. This is uber trivial, but I've made the mistake several times. You'll also need JsSIP.js, which can be downloaded here. The same naming care must be taken as for jQuery and ArbiterJS. The recently added Instant Messaging component and some of the new features need jQuery-UI - so far version 1.11.* is known to work. From the downloaded .zip all you need is the jquery-ui-...js file and the jquery-ui-...css file, also to be placed in the project's root directory. If you'll be using the internationalization support you'll also need jquery.i18n.properties.js. To try out JSCommunicator, deploy the website locally by copying your project directory to the apache document root directory (provided you are using apache, which is a good idea). You'll likely have to restart your server before this works. Now, the demo .shtml pages only have a header with all the necessary includes, then a Server Side Include for the body of the page, with all the JSCommunicator html. The html content is in the file jscommunicator.inc. You can enable SSI on apache, OR you can simply copy and paste the content of jscommunicator.inc into phone-dev.shtml. Start up apache, open a browser and navigate to localhost/jscommunicator/phone-dev.shtml and you should see: jscommunicatorRaw Actually, pending updates to JSCommunicator, you should see a brand new UI! But all the core stuff will be the same. Architecture Disclaimer: I'm explaining my view of the JSCommunicator architecture, which currently may not be 100% correct. But so far it's been enough for me to make my modifications and additions to the code, so it could be of use. Once I get a JSCommunicator Founding Father's stamp of approval, I'll be sure to confirm the accuracy. Now, to explain how the JSCommunicator code interacts, the use of each code item is described, ignoring all the html and css, which will vary according to how you choose to use JSCommunicator. I'm also not going to explain jQuery, which is a dependency but not specific to WebRTC. The core JSCommunicator code is the following: Each of these files will be presented in what I hope is an intuitive order.
The next three files could be considered "utils": These two are where the magic happens: Now for some extras: Third-party code: Here's a diagram of sorts to help you visualize how the code interacts: jscommArch In sum: 1 - JSCommUI.js handles what is displayed in the UI and feeds data to JSCommManager.js; 2 - JSCommManager.js actually does stuff, feeding data to be displayed to JSCommUI.js; 3 - JSCommManager.js calls functions from the three "utils", parseuri.js, webrtc-check.js and jssip-helpher.js, the last of which organizes the data from config.js; 4 - JSCommManager.js initializes a SIP User Agent based on the implementation in Arbiter.js. When making any changes to JSCommunicator, you will likely only be working with JSCommUI.js and JSCommManager.js.

26 June 2014

Russ Allbery: Review: A People's History of the Supreme Court

Review: A People's History of the Supreme Court, by Peter Irons
Publisher: Penguin
Copyright: 1999, 2006
Printing: 2006
ISBN: 0-14-303738-2
Format: Trade paperback
Pages: 553
I first encountered Peter Irons via his Teaching Company course, The History of the Supreme Court. I listened to that and enjoyed it, and would recommend it as an excellent overview. When I later ran across this book, I was excited: I usually prefer books to lectures on topics that can benefit from greater depth, and A People's History of the United States is one of my favorite history books. A book that talked about the Supreme Court from a similar bottom-up approach was appealing. Unfortunately, I think the title oversells this book. It is a history of the Supreme Court, and, as with Zinn's book, it carries its bias openly (if not as forcefully or compellingly as Zinn). It is a legal history concerned with individual rights, and Irons openly expresses his opinions of the merits of certain decisions. But it's not the full-throated cry for justice and for the voice of the general population that Zinn's book was, nor did I think it entirely delivered on the introductory promise to look at the people and situations underlying the case. It does provide more background and biographical sketch of the litigants in famous cases than some other histories, but those sketches are generally brief and focused on the background and relevant facts for the case. This is, in short, a history of the Supreme Court, with some attention paid to the people involved and open comfort with an authorial viewpoint, but without the eye-opening and convention-defying freshness of Zinn's refusal to point the camera at the usual suspects. It also largely duplicates Irons's course for the Teaching Company. That's only really a problem for those, like me, who listened to that course already and came to this book looking for additional material. If you haven't listened to the course, this is a reasonable medium in which to provide the same content, although you do miss the audio recordings of actual Supreme Court argument in some of the most famous cases. But after having listened or read both, I think I prefer the course. It felt a bit more focused; the book is not padded, but the additional material is also not horribly memorable. That said, as a history, I found this a solid book. Irons opens with a surprisingly detailed look at the constitutional convention and the debates over the wording of the US Constitution, focusing on those sections that will become the heart of later controversy. Some of them were known to be controversial at the time and were discussed and argued at great length, such as anything related to slavery and the role of a bill of rights. But others, including many sections at the heart of modern controversies, were barely considered. Despite being a bit off-topic for the book, I found this section very interesting and am now wanting to seek out a good, full history of the convention. Irons's history from there is primarily chronological, although he does shift the order slightly to group some major cases into themes. He doesn't quite provide the biographies of each Supreme Court justice that he discussed in the introduction, but he comes close, along with discussion of the surrounding politics of the nomination and the political climate in which they were sitting. There's a bit of an overview of the types of cases each court saw, although not as much as I would have liked. Most of the history is, as one might expect, focused on more detailed histories of the major cases. Here, I was sometimes left satisfied and sometimes left a bit annoyed. 
The discussion of the Japanese internment cases is excellent, as you might expect given Irons's personal role in getting them overturned. The discussion of the segregation cases is also excellent; in general, I found Section V the strongest part of this book. Irons also does a good job throughout in showing how clearly political the court was and has always been, and the degree to which many major decisions were pre-decided by politics rather than reasoned legal judgment. Where I think he fails, however, is that much of this book has little sense of narrative arc. This is admittedly difficult in a history of a body that takes on widely varying and almost random cases, but there are some clear narratives that run through judicial thinking, and I don't think Irons does a good enough job telling those stories for the readers. For example, I remembered the evolution of interpretation of the 14th Amendment from freedom of contract to the keystone of application of the Bill of Rights to the states as a compelling and coherent story from his course, and here it felt scattered and less clear. In general, I liked the earlier sections of this book better than the later, with Section V on the Warren court as the best and last strong section. Beyond that point in the history, it felt like Irons was running out of steam, and it was harder and harder to see an underlying structure to the book. He would describe a few cases, in a factual but somewhat dry manner, make a few comments on the merits of the decision that felt more superficial than earlier sections of the book, and then move on to another case that might be largely unrelated. Recent judicial history is fascinating, but I don't think this is the book for that. Irons is much stronger in the 19th and first half of the 20th centuries; beyond that, I prefer Jeffrey Toobin's The Nine, which also has far deeper and more interesting biographies of recent justices. This is not a bad history of the Supreme Court. If you're looking for one, I can recommend it. But if you're flexible about format, I recommend Irons's course more. I think it's less dry, better presented, and flows better, and I don't feel like you'll miss much in the transformation of this book into an 18 hour course. It's also remarkable to hear the actual voices of the lawyers and justices in some of the landmark cases of the middle of the 20th century. But if you specifically want a book, this does the job. Do also read The Nine, though. It's a good complement to Irons's straight history, showing much more of the inner workings, political maneuverings, and day-to-day struggles of the justices. Rating: 7 out of 10

20 April 2014

Russell Coker: Sociological Images 2012

In 2011 I wrote a post that was inspired by the Sociological Images blog [1]. After some delay, here I've written another one. I plan to continue documenting such things. Playground (photo: gender-segregated playground in 1918) In 2011 I photographed a plaque at Flagstaff Gardens in Melbourne. It shows a picture of the playground in 1918 with segregated boys' and girls' sections. It's interesting that the only difference between the two sections is that the boys have horizontal bars and a trapeze. Do they still have gender-segregated playgrounds anywhere in Australia? If so, what is the difference between the sections? Aborigines (photos: angry-face icons over Aborigines; Aborigines described as thieves) The Android game Paradise Island [2] has a feature where you are supposed to stop Aborigines from stealing. It plays on the old racist stereotypes about Aborigines, which are used to hide the historical record that it's always been white people stealing from the people they colonise. There is also another picture showing the grass skirts. Nowadays the vast majority of Aborigines don't wear such clothing; the only time they do is when doing some sort of historical presentation for tourists. I took those pictures in 2012, but apparently the game hasn't changed much since then. Lemonade (photo: lemonade-flavored fizzy drink) Is lemonade a drink or a flavour? Most people at the party where I took the above photo regard lemonade as a drink and found the phrase "Lemonade Flavoured Soft Drink" strange when it was pointed out to them. Incidentally, the drink on the right tastes a bit like the US version of lemonade (which is quite different from the Australian version). For US readers, the convention in Australia is that lemonade has no flavor of lemons. Not Sweet (photo: maybe gender-queer people on bikes) In 2012 an apple cider company ran a huge advertising campaign featuring people who might be gender queer; above is a picture of a bus stop poster, and there were also TV ads. The adverts gave no information at all about what the drink might taste like apart from "not being as sweet as you think". So it's basically an advertising campaign with no substance other than a joke about people who don't conform to gender norms. It should also be noted that some women naturally grow beards and have religious reasons for not shaving [3]. Episode 2 of the TV documentary series Am I Normal has an interesting interview with a woman with a beard. Revolution (photo: communist-revolution-themed Schweppes drinks) A violent political revolution is usually a bad thing, and using such revolutions to advertise sugar drinks seems like a bad idea. But it is particularly interesting to note the different attitudes to such things in various countries. In 2012 Schweppes in Australia ran a marketing campaign based on imagery related to a Communist revolution (the above photo was taken at Southern Cross station in Melbourne); I presume that Schweppes in the US didn't run that campaign. I wonder whether global media will stop such things; presumably that campaign has the potential to do more harm in the US than good in Australia. Racist Penis Size Joke at Southbank (photo: racist advert in Southbank paper) The above advert was in a free newspaper at Southbank in 2012. Mini Movers thought that this advert was a good idea, and so did the management of Southbank, who approved the advert for their paper. Australia is so racist that people don't even realise they are being racist.

7 May 2013

Jo Shields: Windows 8: Blood from a Stone

Ordinarily, I'm a big believer that it is important to keep up to date with what every piece of software which competes with yours is doing, to remain educated on the latest concepts. Sometimes, there are concepts that get added which are definitely worth ripping off. We've ripped off plenty of the better design choices from Windows or Mac OS over the years, for use in the Free Desktop. So, what about Windows 8, the hip new OS on everyone's lips? Well, here's the thing: I've been using it on and off for a few months now for running legacy apps, and I can't for the life of me find anything worth stealing. Let's take the key change: Windows 8 has apps built with a new design paradigm which definitely isn't called Metro. Metro apps don't really have windows in the traditional sense; they're more modeled on full-screen apps from smartphones or tablets than on Windows 1.0 -> 7. Which is fine, really, if you're running Windows 8 on a tablet or touchscreen device. But what if you're not? What about the normal PC user? As Microsoft themselves ask:

How do you multitask with #windows8? The answer to that is, well, you sorta don't. Metro apps can exist in three states: fullscreen, almost fullscreen, or vertical stripe. You're allowed to have two apps at most at the same time: one mostly full screen, and one vertical stripe. So what happens if you try to *use* that? Let's take a fairly common thing I do: watch a video and play Minesweeper. In this example, the video player is the current replacement for Windows Media Player, and ships by default. The Minesweeper game isn't installed by default, but is the only Minesweeper game in the Windows 8 app store which is gratis and by Microsoft Game Studios. Here's option A:

Unusable Windows 8 option 1 And for contrast, here's option B:

Unusable Windows 8 option 2 Which of these does a better job of letting me play Minesweeper and watch a video at the same time? Oh, here's option C, dumping Microsoft's own software, and using a third-party video player and third-party Minesweeper implementation:

Windows 7 flavour It's magical: almost as if picking my own window sizes makes the experience better. So, as you can see above, the old OS is still hiding there, in the form of a Windows 8 app called Desktop. Oh, sorry, didn't I say? Metro apps and non-Metro apps are segregated. You can run both (the Desktop app can also be almost-fullscreen or a vertical strip), but they get their own lists of apps when multitasking. Compare the list on the left with the list at the bottom:

Needs moar task lists And it's even more fun for apps like Internet Explorer, which can be started in both modes (and you often need both modes). Oh, and notice how the Ribbon interface from Office 2007 has invaded Explorer, filling the view with large buttons to do things you never want to do under normal circumstances. So, that's a short primer on why Windows 8 is terrible. Is there really nothing here worth stealing? Actually, yes, there is! After much research, I have discovered Windows 8's shining jewel:

win8multitasking-004 The new Task Manager is lovely. I want it on my Linux systems. But that's it.

1 January 2012

Gregor Herrmann: RC bugs 2011/52

a rather lazy bug-squashing week, but with a little cheating I managed to get at least 7 RC bugs:

24 June 2011

Raphaël Hertzog: People behind Debian: Sam Hartman, Kerberos package maintainer

Sam Hartman has been a Debian developer since 2000. He has never taken any sort of official role within Debian (besides package maintainer, that is), yet I know him for his very thoughtful contributions to discussions both on mailing lists and IRL during DebConf. Until I met him at DebConf, I didn't know that he was blind, and my first reaction was to be impressed, because it must take tremendous effort to read the volume of information that Debian generates on mailing lists. In truth, he's at ease with his computer much like I am, although he uses it in a completely different way. Read on to learn more; my questions are in bold, the rest is by Sam. Raphael: Who are you? Sam: I'm Sam Hartman. I'm a 35-year-old software engineer. I am a principal consultant and co-owner of a small consulting company called Painless Security. I started using Debian in the mid-1990s, around the time of the bo release. I ended up deciding to join the project as a developer in 2000. Raphael: You're blind and yet you're using your computer as effectively as I am. Can you explain to us how you set up your computer? Sam: I gave a talk at DebConf9 on how my computer is set up; you can watch the video for full details. My main laptop runs Debian. I use the gnome-orca package as my primary screen reader. It speaks the Gnome desktop. It does a relatively good job of speaking Iceweasel/Firefox and LibreOffice. While it does speak gnome-terminal, it's not really good enough at speaking terminal programs for me to be comfortable using it. So, I run Emacs with the Emacspeak package. Within that, I run the Emacs terminal emulator, and within that, I tend to run Screen. For added fun, I often run additional instances of Emacs within the inner screens. Raphael: Are there important problems in Debian in terms of accessibility for blind people? Sam: KDE documentation talks a lot about accessibility, but at least for blind users, the code completely fails to deliver. That means there are a lot of good packages a blind user cannot use. The non-free Adobe Flash player has some accessibility, but it could be a lot better. The free alternatives have none. The free PDF readers have basically no accessibility. You can use pdftotext, but you cannot actually read a PDF in a graphical application. It's way too easy for a misbehaving program to lock up the entire accessibility infrastructure. Gnash is a big culprit here: if my Iceweasel starts Gnash, there's a good chance that either Iceweasel or the entire desktop will appear to hang from an accessibility standpoint. Other programs, including gksu, tend to fail in this way. Some of the more dynamic website features, like pop-up menus or selection lists, are really difficult to find and click without causing them to disappear. This gets better and worse over time as the accessibility support in Iceweasel changes and as websites change. Raphael: What's your biggest achievement within Debian? Sam: I decided to become a Debian developer because I thought that in 2000, Debian's support for using enterprise security and infrastructure software was lacking. Back then, any software that included crypto functionality was segregated into a special non-us archive. Some of the software was missing; I started by packaging MIT Kerberos for Debian. Other software had security or enterprise features disabled in the packages. At the height of working on this, I was maintaining krb5, some SASL modules, PAM, some PAM modules, OpenAFS, and a version of SSH with Kerberos support.
I was also involved in the effort to resolve the legal issues so that we could move security software into the main archive. I think this work has been a huge success. In fact, it's been such a success that other people are now doing most of the work. I still maintain the krb5 package. When I started, I felt like I was pushing against the flow, trying to get people to add patches, sometimes even having to fork a package. However, now I can maintain just one component, and there are enough others who share my original goal that the work continues.
These days, I'm working on something that's an evolution of this enterprise security work. I'm packaging Project Moonshot for Debian and Ubuntu. Project Moonshot is a response to the wide variety of identity management systems such as OpenID, OAuth, SAML, and the like. Moonshot works to create an identity management approach that works well both for the web and for client-server and other applications. The project is a lot of fun, but the role of Debian in the project is also interesting. One of the things we want to show with Moonshot is how our technology can be effective when it is integrated throughout a platform. To really show the potential, it needs to work with as many applications in a given platform as possible. The open and collaborative nature of the Debian community makes it possible to introduce a new technology and have that technology evaluated based on its merits. We don't need huge business relationships or marketing deals to integrate our technology: we need some combination of doing the work ourselves and showing others the benefits of working with us. For someone trying to do innovative work, the Debian model is powerful.
Raphael: If you could spend all your time on Debian, what would you work on?
Sam: I'd really love to work with the embedded Debian folks. The vision of a single source base that could be the stack for devices from small embedded devices all the way up to high-end servers is very appealing. Doing that with Debian involves a number of challenges. However, with the right people working on meeting these challenges full-time, I think we could offer something promising. I'd also love to have the time to contribute to project infrastructure: working on the release team, helping ftpmaster, that sort of thing. However, I don't. I'm just happy that so much of my consulting practice involves working on open-source software.
Raphael: What's the biggest problem of Debian?
Sam: There's something really not right about how we transition libraries from unstable to testing. Every time I get involved in a library transition, I'm shocked at how complicated it is and how disruptive it is, both to testing and unstable. We need to look at technology and processes to break up the dependency snarls. For example, we don't have good archive tools for keeping old versions of libraries around to ease transitions. If you haven't thought about this issue, you'll probably say that I'm being overly picky and that this can't be the major problem for Debian. However, if you think about how much this impacts our ability to introduce things into unstable around the time of the freeze, or about how much it slows the release process, you may begin to appreciate how big of an issue this is.
Raphael: Is there someone in Debian that you admire for their contributions?
Sam: There are a number of people who have been role models for me over the years.
Anthony Towns really helped me understand a lot of what drives free-software projects and what needs to be true for positive motivation. Joey Hess showed us all that sometimes, social problems do have technical solutions. If the tools are so good that doing the right thing is far easier than any other course of action, quality improves.
Thank you to Sam for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Twitter and Facebook.


7 June 2011

Matthew Garrett: A use for EFI

Anyone who's been following anything I've written lately may be under the impression that I dislike EFI. They'd be entirely correct. It's an awful thing and I've lost far too much of my life to it. It complicates the process of booting for no real benefit to the OS. The only real advantage we've seen so far is that we can configure boot devices in a vaguely vendor-neutral manner without having to care about BIOS drive numbers. Woo.
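As a rough illustration (not from the original post), that vendor-neutral boot configuration is the sort of thing the efibootmgr utility shows and edits from userspace; the entries below are purely made up:
$ efibootmgr
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0000,0001
Boot0000* debian
Boot0001* Hard Drive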

But there is something else EFI gives us. We finally have more than 256 bytes of nvram available to us as standard. Enough nvram, in fact, for us to reasonably store crash output. Progress!
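As a sketch of what that nvram looks like from the Linux side, assuming the efivars driver is loaded (variable names here are only illustrative, and the exact sysfs layout has shifted between kernel versions), each variable shows up as its own directory of attribute files:
$ ls /sys/firmware/efi/vars/
Boot0000-8be4df61-93ca-11d2-aa0d-00e098032b8c
BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c
Lang-8be4df61-93ca-11d2-aa0d-00e098032b8c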

This isn't a novel concept. The UEFI spec provides for a specially segregated area of nvram for hardware error reports. This is lovely but not overly helpful for us, because they're supposed to be in a well-defined format that doesn't leave much scope for "I found a null pointer where I would really have preferred there not be one" followed by a pile of text, especially if the firmware's supposed to do something with it. Also, the record format has lots of metadata that I really don't care about. Apple have also been using EFI for this, creating a special variable that stores the crash data and letting them get away with just telling the user to turn their computer off and then turn it back on again.

EFI's not the only way this could be done, either. ACPI specifies something called the ERST, or Error Record Serialization Table. The OS can stick errors in here and then they can be retrieved later. Excellent! Except that, for the moment, ERST is usually only present on high-end servers. But when ERST support was added to Linux, a generic interface called pstore went in as well.

Pstore's very simple. It's a virtual filesystem that has platform-specific plugins. The platform driver (such as ERST) registers with pstore and the ERST errors then get exposed as files in pstore. Deleting the files removes the records. pstore also registers with kmsg_dump, so when an oops happens the kernel output gets dumped back into a series of records. I'd been playing with pstore but really wanted something a little more convenient than an 8-socket server to test it with, so I ended up writing a pstore backend that uses EFI variables. And now whenever I crash the kernel, pstore gives me a backtrace without me having to take photographs of the screen. Progress.
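In practice the dance looks roughly like this; the mount point and record names vary by backend and kernel version, so treat these paths as illustrative rather than gospel:
$ mount -t pstore pstore /sys/fs/pstore
$ ls /sys/fs/pstore
dmesg-efi-1  dmesg-efi-2
$ cat /sys/fs/pstore/dmesg-efi-1    # console output captured around the oops
$ rm /sys/fs/pstore/dmesg-efi-1     # deleting the file removes the stored record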

Patches are here. I should probably apologise to Seiji Aguchi, who was working on the same problem and posted a preliminary patch for some feedback last month. I replied to the thread without ever reading the patch and then promptly forgot about it, leading to me writing it all from scratch last week. Oops.

(There's an easter egg in the patchset. First person to find it doesn't win a prize. Sorry.)


28 March 2010

Clint Adams: DPL Campaign Questions 007

Bernd Zeimetz:
From the other candidates I'd like to know their opinion and plans (if there are any) about license/copyright requirements in Debian.
I believe that intellectual property laws are wrong and should be abolished, so I am not particularly in favor of giving copyrights any more respect than necessary. However, I do not think that Debian is the place to undermine copyright, and I think that we should attempt to maintain standards which not only protect people from legal liability, but are also in the best interests of our developers and users. I do not know what level of nitpicking best serves our developers and users. A GR seems like an interesting way of determining at least part of that.
Anthony Towns:
What's your estimate of the current number of Debian users?
I do not have one. While I do acknowledge that the number of users does affect us in terms of potential contributor base, and scaling issues (including bug reporting), I do not think that the exact number is important at all to what we do. Also I would hope that whichever DPL is elected chooses to focus on practical matters rather than academic exercises.
Kurt Roeckx:
I think that one of the issues we have is that there is a lot of work to be done by some teams; some of them even regularly mail that they need more members, but they seem to have a hard time keeping the numbers up, burning the other team members out. What are your ideas to make sure those teams keep running?
Note that I do not think it appropriate either for core teams to choose their own members or to reserve the privilege of demanding that a new person be able to work with them in a certain way. An ideal team member should be able to work with anyone. If the existing delegates are unable to work with new people, perhaps they are not ideal team members.
Charles Plessy:
During the discussions that started after the GR, I suggested that the GR proposer should have more control over the options put to the vote. In particular, it would be useful if he could refuse an option that would unbalance the voting system. That would make him responsible for the success of the GR: discarding a popular option means taking the risk that the whole GR is refused and the option is accepted as a separate GR, which is the kind of public failure that I expect people will want to avoid.
I do not think GR proposers should have any special control over general resolutions beyond what is Constitutionally granted at present. In fact, I would prefer if people did not try to propose multiple options or strategically craft ballots for any purpose. To me, the only honest method is to propose the outcome one genuinely wants to win, and let our strange and special brand of parliamentary procedure take care of the rest.
As for the supermajority issue, I am not a huge fan of political word games. Debian distributes non-free software. We have a non-free section within our normal infrastructure, and while it is segregated and does not receive quite the same love that main does, it is still very much part of what we work on and distribute from our official infrastructure. So there is no amount of word lawyering, cognitive dissonance, or NM indoctrination that can make me believe that it is an honest statement to say that non-free is not part of Debian.
Likewise I do not believe in redefining success or the meanings of Foundation Documents. I do not think that we should allow ourselves to pass any kind of magical resolution that contravenes the DFSG or Social Contract without actually modifying them. That seems utterly absurd to me, and I have seen what has become of the United States government.

19 January 2009

Gunnar Wolf: About the recent events and possible outcomes in Israel and Palestine

Several friends, from different groups and backgrounds and with different points of view regarding the current war in Israel (and regarding the Arab-Israeli conflict in general), have asked me for an explanation of what is happening there, what is (my view of) the real conflict, its causes... and any possible answers. Even though I am quite far from being an authority, I do want to write something about it. Be prepared, as this post is quite long.
And yet, after writing frankly a lot more than what I expected... it is by far not enough. I still have much more to add, but I have to say "stop" at some point. So, here you have it: my points of view, as well as some explanation of why we are standing where we currently are.
I am writing this based just on my personal experience and, of course, my personal point of view. Furthermore, I wrote a good part of this text while riding a bus, with no network access, so I am offering very few references - In any case, it will allow me to make much more progress. References always take as much time as the text itself! About me Why am I writing this? Why do people ask for my opinion? I must start by explaining who I am, so the rest of this makes sense. I am a Mexican Jew. That means I was born into a Jewish (albeit secular) family, and grew up in an environment with general Jewish culture. My direct family (say, my parents and brother, and to a lesser degree my closest cousins) are not at all religious; I'd even venture to say most of us are complete atheists. Yet, besides the cultural belonging (which is a mixture of an Eastern European culture with lots of Yiddish words and dishes and general humor), my family has a strong national identification - In other words, I grew up in a fully Zionist environment, which traces back to Poland.
My grandmother was a member of Hashomer Hatzair in Poland, from the early years of its existence in the late 1910s and early 1920s. What is it? To make it short, a Zionist, Socialist, Kibbutzian youth movement. It has many similarities with (and somewhat stems indirectly from) the Scouts many of you will be familiar with, but -obviously- has a way lengthier agenda. And, yes, nowadays I feel it is somewhat out of touch with the current state of the world - It was founded in 1913, and has only slightly adjusted its principles since then.
I will talk more about Hashomer later on.
My grandmother arrived in Mexico in the late 1920s, for family and economic reasons, but still dreamt about living in Israel for a long time. As they grew up, first my uncle joined Hashomer in Mexico in the late 1940s, when it still pursued very much Soviet-style ideals for Israel (one of the core points that changed during the 1950s); both my father and my mother joined in the late 1950s (in fact, that's where they met). My cousins and myself were very active in the 1980s and 1990s.
The Iszaevich family has something very unusual, I'd say, engraved in our genes: we live our ideologies very deeply. That's the only explanation I can find for the way we have all led our lives. I won't go into the other family members' details, but just into mine: I was fully convinced of all we taught to our younger members and what we discussed among ourselves. I am among the very few people who really learnt Hebrew at my school, and that was only because I really cared - I know some people who, after 12 years of pseudo-learning, could maybe utter a few phrases.
After finishing high school I went with my Hashomer group, together with people my age from other similar-minded Mexican Zionist youth groups, to live and learn for a year in Israel, on what is usually known as shnat hajshar - A year to get ready. To get ready for what? But of course, to come back and give back to the younger groups of the movement, and finally to go back to Israel and settle there definitively. I came back to Mexico, and after one year I did the only thing that was logical: I became an Israeli and went to live in a kibutz - Zikim, a beautiful place just between Ashkelon and Gaza, very near the sea shore. It was a beautiful period in my life, and I really enjoyed it - But it didn't last for very long - I quickly grew to hate the Israeli society at large. A society full of rage, of hatred. Not just between Arabs and Israelis, as many would think from the outside, but between the religious and the secular. And between immigrants and locals. And between leftists and rightists (in several dimensions, as it is one of the countries where you can most easily be an economic leftist while being a political rightist - a complex society it is). A society full of disrespect and intolerance. A very hypocritical society. And one of the things that most shocked me: I wanted to live in a society proud of its existence, that's what I had learnt Israel was - But Israelis aspire and dream of being anything else (and mostly US-Americans) in a way that made me sick, more than anything I had previously seen in Mexico.
I loved the life at the Kibutz, and I loved being a farmer, doing hard work every day and literally getting the fruit of it. However, I cannot live isolated in a 300-person universe - At least once a week you have to go to the city if you don't want to go insane. And I could not stand the sick Israeli society.
Anyway, to make things short: after six months as an Israeli, I came back to Mexico. I went through a long period of finding myself, as I could no longer uphold my Hashomer ideology (if I am not going to live by it, how can I continue teaching it?). Many people do anyway, but for the first months, when my Hashomer work was basically all of my life, I felt really uncomfortable. So, in short, I severed all of my relations with the Jewish community, and even denied my Jewishness for a long time, until I found a (I think) better balance.
Today I am at peace. But anyway, I wanted to talk about myself in the critical period that marked me in this regard: the 1990s. Enough, let's get down to business. Zionism 1870-1920 Many people argue that the Jews invaded the Palestinians' homeland - And yes, nowadays I cannot counter this. But we cannot judge what happened then based on what we see now.
Modern Zionism started around the 1870s. Jewish history is full of pogroms and persecution, in different countries all over Europe - And a group of young people decided the only solution was to build a place to call their own. And yes, this was full of idealism and, in no small part, the foolishness of youth. For several centuries and up to 1920, all of the current Middle East consisted of provinces of the Ottoman Empire. Before Zionism began, the population of today's Israel was very sparse, not more than 400,000 people - up to 5% of them Jewish, around 10% Christian Arabs, and the remainder, Muslim Arabs. Of course, the low population was mostly because the country was mostly hostile - A desert in the South, mostly swamps in the North, and some minor towns mostly in the hills. It was a mostly forgotten province, away from the central Turkish rule, and there had been no friction with the local population. It was a nice place to go, thinking -again- as a foolish youngster, or as an idealist. They said, let's take a folk without a land to a land without a folk.
The number of accomplishing Zionists (that is, people who actually went to Israel instead of just talking about doing so) was not high during the first decades. But towards the 1890s, it gained critical mass. The political Zionist movement was born, with Theodor Herzl as a leader, and they started pushing -politically- for different governments to concede a territory to the Jews. They went to the Turks, and didn't really get much of a response. They went to other colonial powers of the time, and got unfeasible promises with no real backing (i.e. Birobidjan in Eastern Russia, Uganda in Africa)... And, as tends to happen in strongly ideologized movements, the movement started splitting into different groups with slightly different positions.
And no, I am not talking about the People's Front for the Liberation of Judea and the Judean People's Liberation Front, but I very well could be. Sometimes the differences are as ridiculous as that. But hey, I take it that most of my readers are acquainted with the Free Software issues, right? Think BSD vs. GPL vs. Open Source. To give a broad idea - In Mexico City alone, with a very small Jewish community (~30,000 people), in the mid-1990s we had over 10 such movements, with 50-150 youngsters attending each. And even though some were local, and even though some very important ideologies are not represented in this country, they are all different enough to exist as separate entities. British Mandate The Ottoman empire was divided after the First World War. Its possessions in Europe became independent states, while in Asia (and Africa, if you take the semi-independent Egypt and part of Sudan into account) they were taken over as colonies or protectorates by France and the UK - Evidently, attempting to secure a long-term dominion over the area. In some aspects, they failed. In some aspects, they were (and still are) tremendously successful. The division of peoples by setting arbitrary boundaries has led to countries sustainable only by force and harsh rule (such as Iraq, Lebanon and Syria), or doomed to poverty (and thus submission) due to lack of natural resources (such as Jordan). Focusing on Israel/Palestine, the UK entered the area by making mutually incompatible promises to Arabs and Jews - i.e. the Hussein-McMahon letters and the Balfour Declaration, both ambiguous enough to lead to... Well, today. The UK rule was disastrous for the region, both in giving (and taking away) power from all sorts of puppet regimes, and in swiftly going away as soon as things started looking too complicated. Yes, typical colonialism.
So, while up to the 1920s there was no real animosity between Arabs and Jews (as, for example, the Faisal-Weizmann agreement shows), during the following decades the seeds of hatred started growing, on both sides.
Many people still quote the 1947 partition plan as the direct antecedent of Israel's real existence - But that was not the first partition plan that existed. Ten years earlier, the Peel Commission suggested a similar partition involving forceful population transfer. And many people see the separation of Transjordan (now Jordan) from Palestine in 1922 as a first partition. It is understandable that both partition plans (much more the 1947 one than the 1937 one) were accepted by Zionists: going from having nothing to having something (and against all odds in the environment they lived in) is acceptable.
From the Zionist side, two main groups rejected the partitions: the right-wing and religious groups, insisting that the whole of the Mandate should become Israel, and the left-wing groups, which advocated for a single, bi-national state with equal rights for all of its population. Go over and read this last link, as it was quite interesting (to me at least) to see how this solution has kept existing and is regarded by (relatively) many people, and it has many interesting links. 1948 onwards: Where did the refugees come from? The November 1947 partition didn't exactly translate into a planned, smooth Israeli independence - It led to six months of revolts (basically, a civil war). By mid-May, the UK government and troops had abandoned the territory, and one day later Israel declared its independence. And, of course, all neighbouring countries (and Iraq) sent their troops to invade Israel. The war lasted for over six months (the cease-fire was signed in January 1949). There was an intense Arab campaign claiming the armies would enter Israel and devastate it, leaving no stone in place, and telling the Arab population to temporarily leave the Jewish-destined areas. The war, they said, would not take more than a couple of months, and they would be able to go back home.
Only that... when the war ended, the results were far from what the Arab governments had expected. Not only did Israel continue to exist, it conquered important territories.
Of course, the Israelis were not innocent of said exodus: during the 1947-1948 civil war, and the independence war, some of the existing so-called self-defense forces (some of them were really defensive, while some were quite aggressive, even terrorist) attacked Arab villages in strategic or predominantly Jewish areas to prompt them to leave - yes, what we today call ethnic cleansing.
I had (in my head) the number of 650,000 Arabs (out of a total of slightly over a million) fleeing to neighbouring countries. Wikipedia states that it is somewhere between 367,000 and 950,000. A similar number of Jews were expelled from Arab countries, many of whom arrived to settle in Israel (and many others went elsewhere - For instance, a good part of the Mexican Jewish community is of Syrian origin - Many of them fled in those years). Israel didn't accept back the (relatively few) Arabs who requested to resettle, as they were seen as a hostile population - but neither did the countries that "temporarily" accepted the Palestinians accept them as citizens. The Palestinian refugee camps today, mainly in Lebanon, Syria, and the occupied territories held by Israel, have terrible living conditions, and their population -despite living there for over 60 years- has no civil rights at all. Note that I'm omitting Jordan here, although it has several camps as well, as their situation is way better.
The Arab population that didn't leave did receive full Israeli citizenship. No, their living standards are not on a par with the average Israeli's. The country and the society do have a noticeable degree of racism and segregation. But the situation is nowhere near as terrible as it is in the camps.
The areas which were originally to become Palestinian and were not conquered by Israel -that is, the current-day West Bank and Gaza- became Jordanian and Egyptian territory, respectively. While Jordan did fully extend its sovereignty over the West Bank, Egypt didn't - Gaza has been occupied military territory since 1949. Gaza, among the most densely populated areas in the world, has had its inhabitants under military rule ever since. When Israel returned the Sinai after signing the peace treaty with President Sadat, Egypt didn't accept Gaza back - And that's where today's greatest problem was born.
Now, Israel conquered those territories in 1967, along with the very sparsely populated Sinai and Golan. For the first ten years, the territories were basically only administered (yes, under military rule). In 1977, with the first right-wing Israeli government, an extensive settlement policy began (and led partly to today's seemingly unsolvable situation). In 1980-1982, Israel withdrew from the Sinai. In 1981, Israel claimed full sovereignty over the whole of Jerusalem and the Golan. In 1993, the "Oslo Agreement" was signed between Israel and the PLO, and it seemed we were heading towards a bright future. I lived in Israel between 1994 and 1996 - Yes, the most hope-filled period in the country's life.
Since 1996, I have tried to keep up to date with the country's evolution. All in all, even if I won't live there again, it is a country I learnt to love, a society I have long studied (even if in the end I did include many references, I wrote most of this text just off the top of my head, with the data I remember - so it might have several big errata). And yes, I keep the political stand I had 12 years ago: the only solution is dialogue, to treat the current enemies -and not only their governments- with respect, recognizing their dignity and right to life, to self-determination. Only then will we change the status quo. How not to fight hatred Since 1993, the dream of peaceful coexistence seems to have faded. What we saw during the past three weeks, along with what we saw in Lebanon in 2006, is plainly a gross mistake if the goal is to achieve a good, lasting peace.
Try to imagine what life in Gaza could be like, even in the total absence of Israeli attacks. Just to set some numbers first: the Gaza strip hosts almost 1.5 million people on 360 km². When I lived in Israel, I was constantly surprised at how small a country it is - Israel tops out at 550 km north to south and 150 km east to west (it is amazing: almost wherever you stand, except in the middle of the Negev, you can see the country's borders. Yes, I recognize as the border the so-called Green Line); over half of the territory is a desert, and it hosts seven million inhabitants. And it is hard to imagine how that country can be economically viable.
I do not find it feasible to imagine Gaza and the West Bank forming a single country, and not only because they are separated by ~40 km, but because they are so sociologically different. People in the West Bank, yes, live oppressed under military rule and subject to a much more constant, more visible apartheid-like state (as the territory is truly sprinkled with Jewish outposts, which many Israelis refuse to recognize as their own, but which still have incredibly higher living standards). The best land has been taken away from them, yes, but they have some space between cities to do some farming, to communicate, to... Breathe. Besides, West Bank inhabitants -even those in refugee camps- have much better living standards than anybody in Gaza. It is still an overpopulated area, but not nearly as much as Gaza.
The Gazan population has been driven towards extremism. And yes, there was a civil war between Palestinian factions, as the world views of the two populations are completely different - However hostile a Jenin inhabitant can be towards Israel, he does not lead the life -if it can so be called- that you see at Khan Yunis.
But back to Gaza... What Israel is doing (and not only during this military operation -or rather, terror campaign-) is wrong, from any rational point of view. Israel wants Hamas to become weaker? Then don't drive the population into supporting them!
Palestinians started out giving over 50% of their support to Fatah (ex-PLO), and under 20% to Hamas. That was less than 15 years ago. However, Fatah has shown itself to be corrupt and inefficient at building infrastructure and improving living conditions, and ineffective at negotiating a permanent agreement which secures dignity and sustainability for their people. Hamas stands as a religious, righteous organization. There are no serious corruption charges against any Hamas leaders. Hamas is clearly still at war - They haven't signed any peace agreement so far, and their stated #1 goal is to build a Muslim State in the whole of Israel. And the Hamas movement -like Hizbollah in Lebanon- has built quite a bit of infrastructure in the areas they control - Mainly housing. Yes, housing where they mix in their own offices, many will accuse, getting human shields for free. But still, they are benefactors to a dehumanized, pauperized population.
I find it obvious that, if living conditions reached even a basic level in the region, support for Hamas would decrease. Even more so, of course, if they improved thanks to Israeli support. Israel controls this territory, so it is responsible for the well-being of its population, like it or not. And Gaza is simply too small and too low on resources to survive by itself.
I was a bit surprised to find mention -although very brief- of a three-state solution - And yes, this is close to what I would expect as a viable outcome. It is clear that Israel will not ever grant full citizenship to the Palestinians, as they would -euphemistically speaking- challenge the Jewish character of the State. Dropping the euphemism, Israel relies on apartheid in order not to become an Arab-majority country. I might have some numbers wrong, but AFAIK there are ~7 million Israelis, 20% of whom are Arab citizens (which means 1.4 million Arabs and probably 5 million Jews, with many other minor denominations making up the difference), and ~4 million Palestinians live in the territories. Today, the country is already predominantly Arab, or at least is close to being so. So, Israel should permanently, formally disengage from all of the occupied territories. And in order to ensure the violence stops, it should start a comprehensive, unconditional, long-term aid program. Start by giving them the autonomy to regain their sea, as the Gaza port has long been closed. Allow the airport to operate again. Instead of bombing tunnels on the Egypt-Gaza border, allow Gaza to trade with Egypt - Perhaps even to integrate territorially, if the conditions are met. Treat their people with respect, and help them ease the terrible situation they have lived in for so many decades - and then, undoubtedly, terror will stop.
The West Bank? Possibly it could become a Palestinian state by itself. Possibly, it could integrate back into Jordan. That would be up to Jordanians and Palestinians to decide. Of course, Jordan already has a large segment of its population defining itself as Palestinian, which counts both for and against. So, I won't venture into this supposition.
But anyway - Back to what prompted me to write this text -yes, a very long, maybe even too long, text; I hope somebody even takes the time to read it!- which is to explain my point of view on the current situation, and why.
Today, a cease-fire was announced, after 23 days of murder and destruction. I sadly do not hold very high hopes for it to be lasting, much less for it to be enough to lead to what they call a de-escalation of the conflict. The most I can currently do is to voice my opinion, and hope that mine is just one more voice pointing to a sane solution, to a permanent, dignifying way out for all people involved. Every people has the right to survival and to safety. We cannot deny this to any others. And certainly, we cannot expect anybody not to fight for their right to live.

5 November 2008

Decklin Foster: Hello America, I missed you

As we walked back toward Mass Ave to catch the bus last night, fresh from the euphoria of my friend Josef's election-night party, I thought about how fortunate we are to be here, now. We cheered, we screamed, we sang the national anthem at the top of our lungs, popped champagne, and danced. I'm on some stranger's facebook somewhere, at the edge of a crowd of people smiling for what felt like the first time in years. I knew I'd remember, but I wanted to write something down. I'd been following along on Twitter all night, but I didn't know what to say when the good news finally came. I thought about our friends in California, where Prop 8 looked like it was going to win. Despite how far we've come, it's likely three states are about to write discrimination into their laws or constitutions rather than out of them. So I picked up my phone and just typed in, "charged." Everyone was excited, yes. I haven't felt electrified by the possibility of change like that in a very long time. But we also haven't automatically won change by changing our leaders. We have won a responsibility to make that change happen. "Yes we did", we joked to each other, and I emailed to my parents in the morning. But no one actually meant, "yes we can win an election". That's just the starting line. Now, we are charged with making America better. We are charged with protecting the liberties of everyone. I wanted to remember that too. That's why it matters that we won. Eight years from now, this could be a nation where the whole idea of "banning gay marriage" sounds as antiquated and offensive as segregated schools or not letting women vote. We have the ability to change our culture, nothing more. Mandates don't do that; people do. And yes, we can.

8 July 2008

David Moreno Garza: Updates

Next.