Many voices arise now and then against the risks linked to the use of GitHub by Free Software projects. Yet the infatuation for the collaborative forge of the Octocat, the Californian start-up, doesn't seem to fade away.
In recent years, GitHub and its services have taken an important place in software engineering: they are seen as easy to use, efficient for daily work, and offering interesting features for enterprise collaborative workflows or within a Free Software project. What are the arguments against using its services, and are they valid? We will list them first, then examine their validity.
The GitHub platform belongs to a single entity, GitHub Inc., a US company which manages it alone. So a single company under US legislation manages access to most of the source code of Free Software applications, which may be a problem for the groups using it when some source code is no longer available, for political or technical reasons.
This centralization leads to another problem: having reached critical mass, it becomes more and more difficult not to have a GitHub account. People who don't use GitHub, by choice or not, are becoming a silent minority. It is now fashionable to use GitHub, and not doing so is seen as "out of date". The same phenomenon is a classic, and even the norm, for proprietary social networks (Facebook, Twitter, Instagram).
When you interact with GitHub, you are using proprietary software, with no access to its source code and which may not work the way you think it does. This is a problem at different levels: first ideologically, but foremost in practice. In GitHub's case, we send it code we can control outside of its interface. We also send it personal information (profile, GitHub interactions). And above all, GitHub forces any project hosted on the US platform to use a crucial proprietary tool: its bug tracking system.
Working with the GitHub interface seems easy and intuitive to most. Lots of companies now use it as a source repository, and many developers leaving a company find the same GitHub working environment in the next one. This pervasive presence of GitHub in Free Software development environments is part of the uniformization of developers' working spaces.
As said above, GitHub is nowadays the main repository of Free Software source code. As such it is a favorite target for cyberattacks. DDoS attacks hit it in March and August 2015. On December 15, 2015, an outage led to the inaccessibility of 5% of the repositories. The same occurred on November 15. And these are only the incidents reported by GitHub itself. One can imagine that the mean outage rate of the platform is underestimated.
Today many dependency management tools, such as npm for JavaScript, Bundler for Ruby or even pip for Python, can fetch an application's source code directly from GitHub. With Free Software projects getting more and more interlinked and codependent, if one component is down, the whole development process stops.
One of the best examples is the npmgate. Any company could legally demand that GitHub take down some source code from its repository, which could create a chain reaction and block the development of many Free Software projects, as the Node.js community suffered from the decisions of npm, Inc., the company managing npm.
GitHub didn't appear out of the blue. In its time, its predecessor, SourceForge, was also extremely popular.
Heavily centralized and based on strong interaction with the community, SourceForge is now seen as an aging SaaS (Software as a Service) and sees most of its users fleeing to GitHub, which creates lots of hurdles for those who stayed. The GIMP project suffered from spam and terrible advertising, which led to the departure of the VLC project, then from installers corrupted with adware served instead of the official GIMP installer for Windows. And finally, the GIMP project's SourceForge account was hijacked by the SourceForge team itself!
These are very recent examples of what a commercial entity can do when it is under pressure from its stakeholders. It is vital to really understand what it means to trust them with data and exchange centralization, where it can have tremendous repercussions on the day-to-day life and the habits of the Free Software and open source community.
Mostly based on ideology, this point deals with the definition every member of the community gives to Free Software and open source. It mostly comes down to one thing: is it viral or not? Or GPL vs. MIT/BSD.
Those on the side of viral Free Software will have trouble using proprietary software, as to them it shouldn't even exist. It must be assimilated, to quote Star Trek, since it is a connected black box endangering privacy, corrupting our uses for profit and restraining our freedom to use what we own as we please, etc.
Those on the side of complete freedom have no qualms using proprietary software, as its very existence is a consequence of freedom without restriction. They even accept that code they developed may become part of proprietary software, which is quite a common occurrence. This part of the Free Software community has no qualms using GitHub, which is well within its ideological parameters. Just take a look at the Janson amphitheater during FOSDEM and count how many Apple laptops running macOS are around.
Even without ideological considerations, and just focusing on GitHub's infrastructure, the bug tracking system is a major problem by itself.
Bug reports build the memory of Free Software projects. They are the entry point for new contributors, the place to find bug reports, requests for new features, etc. The project history can't be limited to the code alone. It's very common to land on bug reports when you copy and paste an error message into a search engine. Their historical importance is precious not only for the project itself, but also for its present and future users.
GitHub gives the ability to extract bug reports through its API. What would happen if GitHub went down or if the platform no longer supported this feature? In my opinion, not that many projects have ever thought of this outcome. How would they move all the data generated on GitHub into a new bug tracking system? One old example is Astrid, a TODO-list application bought by Yahoo a few years ago. Very popular, it grew fast until it was closed overnight, with only a few weeks for its users to extract their data. And it was only a to-do list. The same situation with GitHub would be tremendously difficult to manage for several projects, if they even had the ability to deal with it. The code would still be available and could live somewhere else, but the project memory would be lost. A project like Debian has today more than 800,000 bug reports, which are a treasure trove of data about problems solved, feature requests and where the development stands on each of them. The developers of the CPython project have anticipated the problem and decided not to use GitHub's bug tracking system.
Another thing we could lose if GitHub suddenly disappeared: all the work currently embodied in the pull requests (aka PRs). This GitHub feature gives the ability to clone a project's GitHub repository, modify it to fit your needs, then offer your modifications back to the original repository. The original repository's owner will then review said modifications and, if he or she agrees with them, merge them into the original repository. As such, it's one of the main advantages of GitHub, since it can be done easily through its graphical interface.
However, reviewing all the PRs can take quite a long time, and most successful projects have several ongoing PRs. And these PRs and/or the proprietary bug tracking system are commonly used as a platform for comments and discussion between developers.
The code itself is not lost if GitHub goes down (except in one specific situation described below), but the peer-review work materialized in the PRs and in the bug tracking system is lost. Let's remember that the PR mechanism lets you clone and modify projects and then generate PRs directly from the proprietary web interface without downloading a single line of code to your computer. In this particular case, if GitHub goes down, all the code and the work in progress are lost. Some also use GitHub as a bookmarking place: they follow their favorite projects' activity through the Watch feature. This technology-watch style of data collection would also be lost if GitHub went down.
The Free Software community is walking a tightrope between the normalization needed for easier interoperability between its products and the attraction of novelty, driven by a strong need to differentiate from what already exists.
GitHub popularized the use of Git, a great tool now used in various sectors far away from its original programming field. Step by step, Git has become so prominent that it's almost impossible to even think of another source control manager, even though awesome alternative solutions, unfortunately not as popular, exist, such as Mercurial.
A new Free Software project is now a Git repository on GitHub with a README.md added as a quick description. All the other solutions are ostracized: few if any potential contributors would even notice such projects. It seems very difficult now to ask potential contributors to learn a new source control manager AND a new forge for every project they want to contribute to, which was a basic requirement a few years ago. It's quite sad because GitHub, by offering a uniform experience to its users, cuts them off from a whole realm of possibilities. Maybe GitHub is one of the best web-based version control services, but being the dominant one doesn't leave room for a new competitor to grow, and it lets GitHub initiate development newcomers into a narrow set of features, totally unrelated to the strength of the Git tool itself.
The fight against centralization is a central part of the Free Software ideology, as centralization strengthens the power of those who manage it and who, through it, control those who are managed by it. The allergy to uniformization, born in reaction to the main software companies and their wish to impose a closed, commercial software world, was for a long time the main fuel for the thirst for innovation and the development of intelligent alternatives. As we said above, part of the Free Software community was built as a reaction to proprietary software and the threat it represents. The other part, without hoping for its disappearance, still chose a development model opposite to that of proprietary software, at least in the beginning; nowadays there are more and more bridges between the two.
The GitHub effect is a morbid one because of its consequences: at the very least centralization, uniformization and the use of proprietary software as a bug tracking system. But a few years ago the Dear GitHub buzz showed one more side effect, one I had never thought about: laziness. For those who don't know what it is about, this letter is a complaint from several spokespersons of several Free Software projects which demands that the GitHub team finally implement, after years of polite asking, new features. Since when do Free Software projects facing a roadblock ask for clemency instead of building the path they need themselves? When Torvalds was involved in the BitKeeper problem and the Linux kernel development team could no longer use its revision control software, he developed Git. Not being able to use a tool, or missing features, is the main motivation to seek alternative solutions and, as such, is at the heart of the Free Software movement. Every Free Software community member able to code should have this reflex. You don't like what GitHub offers? Switch to GitLab. You don't like GitLab? Improve it or build your own solution.
Let's be crystal clear: I've never said that every blocked Free Software developer should code his or her own alternative. We all have our own priorities, and some of us even like their beauty sleep, including me. But seeing that this open letter to GitHub gathered 1,340 signatures, among them some spokespersons for major Free Software projects, showed me that the need, the willpower and the strength to code a replacement are there. Maybe said replacement will be born from this letter; it would be the best outcome of this buzz.
In the end, GitHub usage is just another example of the massification of Internet usage. Just as Internet users flock to massively centralized social networks like Facebook or Twitter, developers are following the same path with GitHub. Even if a large fraction of developers realize the threat linked to this centralized and proprietary organization, the whole community is following this centralization and uniformization trend. GitHub's service is useful, free or reasonably priced (depending on the features you need), easy to use and up most of the time. Why would we try something else? Maybe because others are using us while we are savoring the convenience? The Free Software community seems quite sleepy to me.
About Me: Carl Chenet, Free Software Indie Hacker, founder of the French-speaking, Hacker News-like Journal du hacker. Follow me on social networks.
Documentation/security/
None of the Documentation/security/ tree had been converted yet. I took the opportunity to take a few passes at formatting the existing documentation and, at Jon Corbet's recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they'll get some extra attention.
CONFIG_REFCOUNT_FULL
Since the refcount_t API was introduced in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an unchecked refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity's PAX_REFCOUNT).
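To illustrate what the checked variant buys you, here is a small userspace toy (not the kernel's refcount_t API; all names are made up for the example) showing how a saturating, checked reference count avoids the wrap-to-zero that makes plain counter overflows exploitable:

#include <stdio.h>
#include <limits.h>

struct toy_refcount { unsigned int refs; };

static void toy_refcount_inc(struct toy_refcount *r)
{
    if (r->refs == UINT_MAX) {          /* saturated: stay pinned, warn */
        fprintf(stderr, "refcount saturated, object will be leaked\n");
        return;
    }
    r->refs++;
}

static int toy_refcount_dec_and_test(struct toy_refcount *r)
{
    if (r->refs == 0 || r->refs == UINT_MAX)
        return 0;                       /* underflow or saturated: never free */
    return --r->refs == 0;
}

int main(void)
{
    struct toy_refcount r = { .refs = UINT_MAX - 1 };

    toy_refcount_inc(&r);               /* reaches saturation */
    toy_refcount_inc(&r);               /* a plain unsigned counter would wrap here */
    printf("refs = %u, free now? %s\n", r.refs,
           toy_refcount_dec_and_test(&r) ? "yes" : "no");
    return 0;
}

The design trade-off discussed above is exactly this: the checked path leaks a saturated object rather than freeing it while references still exist, at the cost of a few extra instructions per operation.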
CONFIG_FORTIFY_SOURCE
FORTIFY_SOURCE provides compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. destination size is known at compile-time, but source size is only known at run-time), run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them.
One interesting note about this protection is that it only examines the size of the whole object (via __builtin_object_size(..., 0)). If you have a string within a structure, CONFIG_FORTIFY_SOURCE as currently implemented will only make sure that you can't copy beyond the structure (so you can still overflow the string within the structure). The next step in enhancing this protection is to switch from 0 (above) to 1, which will use the closest surrounding subobject (e.g. the string). However, there are a lot of cases where the kernel intentionally copies across multiple structure fields, which means more fixes are needed before this higher level can be enabled.
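As a rough userspace sketch of the same mechanism (not the kernel's implementation), the following shows how a wrapper can use __builtin_object_size() to catch an oversized copy, and why mode 0 only protects the enclosing object. The wrapper and struct names are invented for the example; build with optimization (e.g. gcc -O2) so the builtin is resolved.

#include <stdio.h>
#include <string.h>

struct msg {
    char tag[8];
    int  value;
};

/* Pass along what the compiler knows about the destination (mode 0: whole object). */
#define checked_memcpy(dst, src, len) \
    fortified_memcpy((dst), (src), (len), __builtin_object_size((dst), 0))

static void *fortified_memcpy(void *dst, const void *src, size_t len,
                              size_t dst_size)
{
    /* dst_size is (size_t)-1 when the compiler could not determine it
     * (e.g. without optimization); the check is simply skipped then. */
    if (dst_size != (size_t)-1 && len > dst_size) {
        fprintf(stderr, "detected buffer overflow: %zu > %zu\n", len, dst_size);
        return dst;               /* a real implementation would abort here */
    }
    return memcpy(dst, src, len);
}

int main(void)
{
    struct msg m = { "", 0 };
    const char input[16] = "0123456789abcde";

    /* 16 bytes into a 12-byte object: caught by the mode-0 check. */
    checked_memcpy(m.tag, input, sizeof(input));

    /* Copying, say, 10 bytes would overflow 'tag' into 'value' yet stay
     * inside the struct, so mode 0 would NOT catch it -- exactly the
     * mode-0 vs mode-1 distinction discussed above. */
    printf("tag=%.8s value=%d\n", m.tag, m.value);
    return 0;
}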
NULL-prefixed stack canary
The stack canary is now prefixed with a zero byte, which defeats overflows that go through string functions (e.g. strcpy), since they will either stop an overflowing read at the NULL byte, or be unable to write a NULL byte, thereby always triggering the canary check. This does reduce the entropy from 64 bits to 56 bits for overflow cases where NULL bytes can be written (e.g. memcpy), but the trade-off is worth it. (Besides, x86_64's canary was 32-bits until recently.)
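A tiny userspace illustration of why the leading NUL byte helps (purely illustrative, not the kernel code; the canary value is an arbitrary stand-in):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    /* Low byte zero: on a little-endian machine this is the first byte of
     * the canary in memory, leaving 56 bits of entropy in the rest. */
    uint64_t canary = 0xa1b2c3d4e5f60700ULL;

    char as_string[sizeof(canary) + 1];
    memcpy(as_string, &canary, sizeof(canary));
    as_string[sizeof(canary)] = '\0';

    /* As far as the str* family is concerned, the canary is an empty string:
     * an over-read that prints a string stops right here, and a strcpy()
     * of attacker-controlled data can never reproduce these 8 bytes. */
    printf("canary seen as a C string has length %zu\n", strlen(as_string));
    return 0;
}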
IPC refactoring and the randstruct plugin
In support of structure layout randomization, the internal layout of the kernel's IPC structures was reorganized. Structures can be manually annotated for randomization with __randomize_layout; v4.14 will also have the automatic mode enabled, which randomizes all structures that contain only function pointers.
A large number of fixes to support randstruct have been landing from v4.10 through v4.13, most of which were already identified and fixed by grsecurity, but many were novel, either in newly added drivers, as whitelisted cross-structure casts, refactorings (like the IPC one noted above), or in a corner case on ARM found during upstream testing.
lower ELF_ET_DYN_BASE
The default ELF_ET_DYN_BASE (the lowest possible random position of a PIE executable in memory) was already so high in the memory layout (specifically, 2/3rds of the way through the address space) that it sat uncomfortably close to the stack, one of the issues highlighted by the Stack Clash set of vulnerabilities. Fixing this required teaching the ELF loader how to load interpreters as shared objects in the mmap region instead of as a PIE executable (to avoid potentially colliding with the binary it was loading). As a result, the PIE default could be moved down to ET_EXEC (0x400000) on 32-bit, entirely avoiding that subset of Stack Clash attacks. 64-bit could be moved to just above the 32-bit address space (0x100000000), leaving the entire 32-bit region open for VMs to do 32-bit addressing, but late in the cycle it was discovered that Address Sanitizer couldn't handle it moving. With most of the Stack Clash risk only applicable to 32-bit, fixing 64-bit has been deferred until there is a way to teach Address Sanitizer how to load itself as a shared object instead of as a PIE binary.
early device randomness
2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Each node of this radix tree is a struct fib6_node. When a node has the RTN_RTINFO flag set, it embeds a pointer to a struct rt6_info containing information about the next-hop.
The fib6_lookup_1()
function walks the radix tree in two
steps:
static struct fib6_node *fib6_lookup_1(struct fib6_node *root,
                                       struct in6_addr  *addr)
{
    struct fib6_node *fn;
    __be32 dir;

    /* Step 1: locate potential candidate */
    fn = root;
    for (;;) {
        struct fib6_node *next;
        dir = addr_bit_set(addr, fn->fn_bit);
        next = dir ? fn->right : fn->left;
        if (next) {
            fn = next;
            continue;
        }
        break;
    }

    /* Step 2: check prefix and backtrack if needed */
    while (fn) {
        if (fn->fn_flags & RTN_RTINFO) {
            struct rt6key *key;
            key = &fn->leaf->rt6i_dst;
            if (ipv6_prefix_equal(&key->addr, addr, key->plen)) {
                if (fn->fn_flags & RTN_RTINFO)
                    return fn;
            }
        }

        if (fn->fn_flags & RTN_ROOT)
            break;
        fn = fn->parent;
    }

    return NULL;
}
After looking up 2001:db8:1::1 and 2001:db8:3::1, we would get those two cache entries:
$ ip -6 route show cache
2001:db8:1::1 dev r2-r1 metric 0
    cache
2001:db8:3::1 via 2001:db8:2::2 dev r2-r3 metric 0
    cache
Cache entries are expired by the ip6_dst_gc() function, controlled by the following parameters:
$ sysctl -a | grep -F net.ipv6.route
net.ipv6.route.gc_elasticity = 9
net.ipv6.route.gc_interval = 30
net.ipv6.route.gc_min_interval = 0
net.ipv6.route.gc_min_interval_ms = 500
net.ipv6.route.gc_thresh = 1024
net.ipv6.route.gc_timeout = 60
net.ipv6.route.max_size = 4096
net.ipv6.route.mtu_expires = 600
gc_elasticity
) after a 500 ms pause.
Starting from Linux 4.2 (commit 45e4fd26683c), only a PMTU exception would
create a cache entry. A router doesn t have to handle those exceptions, so only
hosts would get cache entries. And they should be pretty rare. Martin KaFai
Lau explains:
Out of all IPv6 RTF_CACHE routes that are created, the percentage that has a different MTU is very small. In one of our end-user facing proxy server, only 1k out of 80k RTF_CACHE routes have a smaller MTU. For our DC traffic, there is no MTU exception.
Here is what a cache entry with a PMTU exception looks like:
$ ip -6 route show cache
2001:db8:1::50 via 2001:db8:1::13 dev out6 metric 0
    cache expires 573sec mtu 1400 pref medium
The benchmark directly calls the ip6_route_output_flags() function and correlates the results with the radix tree size:
Getting meaningful results is challenging due to the size of the address
space. None of the scenarios have a fallback route and we only measure time for
successful hits4. For the full view scenario, only the range from
2400::/16
to 2a06::/16
is scanned (it contains more than half of the
routes). For the /128 scenario, the whole /108 subnet is scanned. For the /48
scenario, the range from the first /48 to the last one is scanned. For each
range, 5000 addresses are picked semi-randomly. This operation is repeated until
we get 5000 hits or until 1 million tests have been executed.
The relation between the maximum depth and the lookup time is incomplete and I can't explain the performance difference between the different densities of the /48 scenario.
We can extract two important performance points:
While IPv4's fib_lookup() and IPv6's ip6_route_output_flags() functions both have a fixed cost implied by the evaluation of routing rules, IPv4 has several optimizations when the rules are left unmodified. Those optimizations are removed on the first modification. If we cancel those optimizations, the lookup time for IPv4 is impacted by about 30 ns. This still leaves a 100 ns difference with IPv6 to be explained.
Let's compare how time is spent in each lookup function. Here is a CPU flamegraph for IPv4's fib_lookup():
Only 50% of the time is spent in the actual route lookup. The remaining time is
spent evaluating the routing rules (about 30 ns). This ratio is dependent on the
number of routes we inserted (only 1000 in this example). It should be noted the
fib_table_lookup()
function is executed twice: once with the local routing
table and once with the main routing table.
The equivalent flamegraph for IPv6's ip6_route_output_flags() is depicted below:
Here is an approximate breakdown on the time spent:
The kernels are built with the CONFIG_SMP, CONFIG_IPV6, CONFIG_IPV6_MULTIPLE_TABLES and CONFIG_IPV6_SUBTREES options enabled. Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.
There are three notable performance changes:
struct rt6_info (commit 887c95cc1da5). This should have led to a performance increase. The small regression may be due to cache-related issues.
The nodes (struct fib6_node) and the routing information (struct rt6_info) are allocated with the slab allocator. It is therefore possible to extract the information from /proc/slabinfo when the kernel is booted with the slab_nomerge flag:
# sed -ne 2p -e '/^ip6_dst/p' -e '/^fib6_nodes/p' /proc/slabinfo | cut -f1 -d:
  name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
fib6_nodes         76101    76104     64   63    1
ip6_dst_cache      40090    40090    384   10    1
The number of struct rt6_info objects matches the number of routes, while the number of nodes is roughly twice the number of routes. With the figures above, a full view costs roughly 40,090 × 384 bytes ≈ 14.7 MiB for the routing information plus 76,104 × 64 bytes ≈ 4.6 MiB for the nodes, about 20 MiB in total. The memory usage is therefore quite predictable and reasonable, as even a small single-board computer can support several full views (20 MiB for each):
The LPC-trie used for IPv4 is more efficient: while 512 MiB of memory is needed for IPv6 to store 1 million routes, only 128 MiB are needed for IPv4. The difference is mainly due to the size of struct rt6_info (336 bytes) compared to the size of IPv4's struct fib_alias
(48 bytes): IPv4 puts most information
about next-hops in struct fib_info
structures that are shared with many
entries.
The CONFIG_IPV6_MULTIPLE_TABLES option incurs a fixed penalty of 100 ns per lookup.
An example of a source-specific route (which relies on CONFIG_IPV6_SUBTREES):
ip -6 route add 2001:db8:1::/64 \
     from 2001:db8:3::/64 \
     via fe80::1 \
     dev eth0
The kernels are built with the CONFIG_SMP and CONFIG_IP_MULTIPLE_TABLES options enabled (however, no IP rules are used). Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.
The measurements are done in a virtual machine with one vCPU. The host is an Intel Core i5-4670K and the CPU governor was set to "performance". The benchmark is single-threaded. Implemented as a kernel module, it calls fib_lookup() with various destinations in 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC (and converted to nanoseconds by assuming a constant clock).
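The module itself isn't reproduced here, but the measurement approach can be sketched in userspace as follows; the lookup is replaced by a stub and the TSC frequency is an assumed constant, so treat this only as an illustration of the methodology:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <x86intrin.h>          /* __rdtsc() */

#define ITERATIONS 100000

static volatile uint32_t sink;

/* Stand-in for the code under test (fib_lookup() in the real benchmark). */
static void lookup_stub(uint32_t addr)
{
    sink = addr * 2654435761u;  /* arbitrary cheap work */
}

static int cmp_u64(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    static uint64_t samples[ITERATIONS];

    for (int i = 0; i < ITERATIONS; i++) {
        uint64_t t0 = __rdtsc();
        lookup_stub(0xc0000200u + i);   /* vary the destination */
        samples[i] = __rdtsc() - t0;
    }

    /* Keep the median: it is far less sensitive to interrupts and other
     * outliers than the mean. */
    qsort(samples, ITERATIONS, sizeof(samples[0]), cmp_u64);

    double ghz = 3.4;               /* illustrative constant TSC frequency */
    printf("median: %llu cycles (~%.1f ns at %.1f GHz)\n",
           (unsigned long long)samples[ITERATIONS / 2],
           samples[ITERATIONS / 2] / ghz, ghz);
    return 0;
}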
The following kernel versions bring a notable performance improvement:
The CONFIG_IP_MULTIPLE_TABLES option doesn't impact performance unless some IP rules are configured. This version also removes the route cache (commit 5e9965c15ba8). However, this has no effect on the benchmark as it directly calls fib_lookup(), which doesn't involve the cache.
The CONFIG_SMP option is enabled to use the hierarchical RCU and activate more of the same code paths as actual routers. However, progress on parallelism goes unnoticed.
The route lookup is handled by the fib_lookup() function. From essential information about the datagram (source and destination IP addresses, interfaces, firewall mark, …), this function should quickly provide a decision. Some possible options are:

- local delivery (RTN_LOCAL),
- forwarding to a supplied next hop (RTN_UNICAST),
- silent discard (RTN_BLACKHOLE).

Consider the following example routing table:

$ ip route show scope global table 100
default via 203.0.113.5 dev out2
192.0.2.0/25
        nexthop via 203.0.113.7 dev out3 weight 1
        nexthop via 203.0.113.9 dev out4 weight 1
192.0.2.47 via 203.0.113.3 dev out1
192.0.2.48 via 203.0.113.3 dev out1
192.0.2.49 via 203.0.113.3 dev out1
192.0.2.50 via 203.0.113.3 dev out1
Destination IP | Next hop
---|---
192.0.2.49 | 203.0.113.3 via out1
192.0.2.50 | 203.0.113.3 via out1
192.0.2.51 | 203.0.113.7 via out3 or 203.0.113.9 via out4 (ECMP)
192.0.2.200 | 203.0.113.5 via out2
When looking up 192.0.2.50, we will find the result in the corresponding leaf (at depth 32). However, for 192.0.2.51, we will reach 192.0.2.50/31 but there is no second child. Therefore, we backtrack until the 192.0.2.0/25 routing entry.
Adding and removing routes is quite easy. From a performance point of
view, the lookup is done in constant time relative to the number of
routes (due to maximum depth being capped to 32).
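A minimal sketch of this lookup-with-backtracking over a plain binary trie (not Linux's compressed trie or its data structures; the helpers below are invented for the example) might look like this:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct node {
    struct node *child[2];
    const char  *nexthop;   /* non-NULL when a route terminates here */
};

static struct node *node_new(void)
{
    return calloc(1, sizeof(struct node));
}

/* Insert prefix/plen with its next hop, one bit per level. */
static void insert(struct node *root, uint32_t prefix, int plen,
                   const char *nexthop)
{
    struct node *n = root;
    for (int i = 0; i < plen; i++) {
        int bit = (prefix >> (31 - i)) & 1;
        if (!n->child[bit])
            n->child[bit] = node_new();
        n = n->child[bit];
    }
    n->nexthop = nexthop;
}

/* Longest-prefix match: walk as deep as possible, keep the last route seen,
 * which plays the role of the backtracking step. */
static const char *lookup(const struct node *root, uint32_t addr)
{
    const char *best = NULL;
    const struct node *n = root;
    for (int i = 0; n && i < 32; i++) {
        if (n->nexthop)
            best = n->nexthop;          /* nearest candidate so far */
        n = n->child[(addr >> (31 - i)) & 1];
    }
    if (n && n->nexthop)
        best = n->nexthop;
    return best;
}

int main(void)
{
    struct node *root = node_new();
    insert(root, 0xC0000200, 25, "via 203.0.113.7 or 203.0.113.9");  /* 192.0.2.0/25  */
    insert(root, 0xC0000232, 32, "203.0.113.3 via out1");            /* 192.0.2.50/32 */

    /* 192.0.2.51 walks toward the /32 leaves, finds no match at depth 32,
     * and falls back to the /25 entry remembered on the way down. */
    printf("192.0.2.51 -> %s\n", lookup(root, 0xC0000233));
    printf("192.0.2.50 -> %s\n", lookup(root, 0xC0000232));
    return 0;
}

Linux's actual implementation compresses this structure (the LPC-trie described below) so that many bits are consumed per node, but the candidate-then-backtrack logic is the same.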
Quagga is an example of routing software still using
this simple approach.
Several structures are involved:

- struct fib_table represents a routing table,
- struct trie represents a complete trie,
- struct key_vector represents either an internal node (when bits is not zero) or a leaf,
- struct fib_info contains the characteristics shared by several routes (like a next-hop gateway and an output interface),
- struct fib_alias is the glue between the leaves and the fib_info structures.

The trie can be examined through /proc/net/fib_trie:
$ cat /proc/net/fib_trie
Id 100:
  +-- 0.0.0.0/0 2 0 2
     |-- 0.0.0.0
        /0 universe UNICAST
     +-- 192.0.2.0/26 2 0 1
        |-- 192.0.2.0
           /25 universe UNICAST
        |-- 192.0.2.47
           /32 universe UNICAST
        +-- 192.0.2.48/30 2 0 1
           |-- 192.0.2.48
              /32 universe UNICAST
           |-- 192.0.2.49
              /32 universe UNICAST
           |-- 192.0.2.50
              /32 universe UNICAST
[...]
When the kernel is compiled with CONFIG_IP_FIB_TRIE_STATS, some interesting statistics are available in /proc/net/fib_triestat:
$ cat /proc/net/fib_triestat
Basic info: size of leaf: 48 bytes, size of tnode: 40 bytes.
Id 100:
        Aver depth:     2.33
        Max depth:      3
        Leaves:         6
        Prefixes:       6
        Internal nodes: 3
          2: 3
        Pointers: 12
Null ptrs: 4
Total size: 1  kB
[...]
Lookup times are measured by calling the fib_lookup() function directly:
The lookup time is loosely tied to the maximum depth. When the routing table is densely populated, the maximum depth is low and the lookup times are fast.
When forwarding at 10 Gbps, the time budget for a packet would be about 50 ns (a minimum-sized 64-byte frame takes roughly 51 ns to transmit at that rate). Since this is also the time needed for the route lookup alone in some cases, we wouldn't be able to forward at line rate with only one core. Nonetheless, the results are pretty good and they are expected to scale linearly with the number of cores.
The measurements are done with a Linux kernel 4.11 from Debian unstable. I have gathered performance metrics across kernel versions in "Performance progression of IPv4 route lookup on Linux".
Another interesting figure is the time it takes to insert all those
routes into the kernel. Linux is also quite efficient in this area
since you can insert 2 million routes in less than 10 seconds:
Memory usage can be read from /proc/net/fib_triestat. The statistic provided doesn't account for the fib_info structures, but you should only have a handful of them (one for each possible next-hop). As you can see on the graph below, the memory use is linear with the number of routes inserted, whatever the shape of the routes is.
The results are quite good. With only 256 MiB, about 2 million routes
can be stored!
With the CONFIG_IP_MULTIPLE_TABLES option, Linux supports several routing tables and has a system of configurable rules to select the table to use. These rules can be configured with ip rule. By default, there are three of them:

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
The kernel will first do a lookup in the local table. If it doesn't find a match, it will look up in the main table and, as a last resort, the default table.
The local table contains routes for local delivery:

$ ip route show table local
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.117.0 dev eno1 proto kernel scope link src 192.168.117.55
local 192.168.117.55 dev eno1 proto kernel scope host src 192.168.117.55
broadcast 192.168.117.63 dev eno1 proto kernel scope link src 192.168.117.55
When the IP address 192.168.117.55 was configured on the eno1 interface, the kernel automatically added the appropriate routes:

- 192.168.117.55 for local unicast delivery to the IP address,
- 192.168.117.255 for broadcast delivery to the broadcast address,
- 192.168.117.0 for broadcast delivery to the network address.

When 127.0.0.1 was configured on the loopback interface, the same kind of routes were added to the local table. However, a loopback address receives a special treatment and the kernel also adds the whole subnet to the local table. As a result, you can ping any IP in 127.0.0.0/8:
$ ping -c1 127.42.42.42
PING 127.42.42.42 (127.42.42.42) 56(84) bytes of data.
64 bytes from 127.42.42.42: icmp_seq=1 ttl=64 time=0.039 ms

--- 127.42.42.42 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
The main table usually contains all the other routes:

$ ip route show table main
default via 192.168.117.1 dev eno1 proto static metric 100
192.168.117.0/26 dev eno1 proto kernel scope link src 192.168.117.55 metric 100
The default
route has been configured by some DHCP daemon. The
connected route (scope link
) has been automatically added by the
kernel (proto kernel
) when configuring an IP address on the eno1
interface.
The default
table is empty and has little use. It has been kept since the current incarnation of advanced routing was introduced in Linux 2.1.68, after a first attempt using classes in Linux 2.1.15.
When the rules are left unmodified, the main and local tables are merged and the lookup is done with this single table (and the default table if not empty). Moreover, since Linux 3.0 (commit f4530fa574df), without specific rules, there is no performance hit when enabling the support for multiple routing tables. However, as soon as you add new rules, some CPU cycles will be spent for each datagram to evaluate them. Here are a couple of graphs demonstrating the impact of routing rules on lookup times:
For some reason, the relation is linear when the number of rules is
between 1 and 100 but the slope increases noticeably past this
threshold. The second graph highlights the negative impact of the
first rule (about 30 ns).
A common use of rules is to create virtual routers: interfaces
are segregated into domains and when a datagram enters through an
interface from domain A, it should use routing table A:
# ip rule add iif vlan457 table 10
# ip rule add iif vlan457 blackhole
# ip rule add iif vlan458 table 20
# ip rule add iif vlan458 blackhole

# ip route add blackhole default metric 9999 table 10
# ip route add blackhole default metric 9999 table 20
# ip rule add iif vlan457 table 10
# ip rule add iif vlan458 table 20

# ip link add vrf-A type vrf table 10
# ip link set dev vrf-A up
# ip link add vrf-B type vrf table 20
# ip link set dev vrf-B up
# ip link set dev vlan457 master vrf-A
# ip link set dev vlan458 master vrf-B
# ip rule show
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32766:  from all lookup main
32767:  from all lookup default
The l3mdev-table
rule was automatically added when
configuring the first VRF interface. This rule will select the routing
table associated to the VRF owning the input (or output) interface.
VRF was introduced in Linux 4.3 (commit 193125dbd8eb), the
performance was greatly enhanced in Linux 4.8
(commit 7889681f4a6c) and the special routing rule was also
introduced in Linux 4.8
(commit 96c63fa7393d, commit 1aa6c4f6b8cd). You can find more
details about it in the kernel documentation.
The key_vector
structure is embedded
into a tnode
structure. This structure contains information
rarely used during lookup, notably the reference to the parent
that is usually not needed for backtracking as Linux keeps the
nearest candidate in a variable.
struct fib_alias
is a
list). The number of prefixes can therefore be greater than the
number of leaves. The system also keeps statistics about the
distribution of the internal nodes relative to the number of bits
they handle. In our example, all three internal nodes are
handling 2 bits.
The timestamp itself is requested from the service with a command like this:

openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
      --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr

To verify the timestamp, you first need to download the public key of the trusted timestamp service, for example using this command:

wget -O ca-cert.txt \
  https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt

Note, the public key should be stored alongside the timestamps in the archive to make sure it is also available 100 years from now. It is probably a good idea to standardise how and where to store such public keys, to make it easier to find for those trying to verify documents 100 or 1000 years from now. :) The verification itself is a simple openssl command:

openssl ts -verify -data $inputfile -in $sha256.tsr \
  -CAfile ca-cert.txt -text

Is there any reason this approach would not work? Is it somehow against the Noark 5 specification?
~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446

 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
~/src//noark5-tester$

You can see here how the fonds (arkiv) and serie (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.

The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in lack of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those that want to test the current code base. The API tester is implemented in Python.
Four keynote speakers will anchor the event. Kade Crockford, director of the Technology for Liberty program of the American Civil Liberties Union of Massachusetts, will kick things off on Saturday morning by sharing how technologists can enlist in the growing fight for civil liberties. On Saturday night, Free Software Foundation president Richard Stallman will present the Free Software Awards and discuss pressing threats and important opportunities for software freedom. Day two will begin with Cory Doctorow, science fiction author and special consultant to the Electronic Frontier Foundation, revealing how to eradicate all Digital Restrictions Management (DRM) in a decade. The conference will draw to a close with Sumana Harihareswara, leader, speaker, and advocate for free software and communities, giving a talk entitled "Lessons, Myths, and Lenses: What I Wish I'd Known in 1998."

That's not all. We'll hear about the GNU philosophy from Marianne Corvellec of the French free software organization April, Joey Hess will touch on encryption with a talk about backing up your GPG keys, and Denver Gingerich will update us on a crucial free software need: the mobile phone. Others will look at ways to grow the free software movement: through cross-pollination with other activist movements, removal of barriers to free software use and contribution, and new ideas for free software as paid work.

-- Here's a sneak peek at LibrePlanet 2017: Register today!

I'll be giving some variant of the keysafe talk from Linux.Conf.Au. By the way, videos of my keysafe and propellor talks at Linux.Conf.Au are now available, see the talks page.
Before the 2.6 series, there was a stable branch (2.4) where only relatively minor and safe changes were merged, and an unstable branch (2.5), where bigger changes and cleanups were allowed. Both of these branches had been maintained by the same set of people, led by Torvalds. This meant that users would always have a well-tested 2.4 version with the latest security and bug fixes to use, though they would have to wait for the features which went into the 2.5 branch. The downside of this was that the stable kernel ended up so far behind that it no longer supported recent hardware and lacked needed features. In the late 2.5 kernel series, some maintainers elected to try backporting their changes to the stable kernel series, which resulted in bugs being introduced into the 2.4 kernel series. The 2.5 branch was then eventually declared stable and renamed to 2.6. But instead of opening an unstable 2.7 branch, the kernel developers decided to continue putting major changes into the 2.6 branch, which would then be released at a pace faster than 2.4.x but slower than 2.5.x. This had the desirable effect of making new features more quickly available and getting more testing of the new code, which was added in smaller batches and was easier to test.

Then, in the Ruby community. In 2007, Ruby 1.8.6 was the stable version of Ruby. Ruby 1.9.0 was released on 2007-12-26, without being declared stable, as a snapshot from Ruby's trunk branch, and most of the development's attention moved to 1.9.x. On 2009-01-31, Ruby 1.9.1 was the first release of the 1.9 branch to be declared stable. But at the same time, the disruptive changes introduced in Ruby 1.9 made users stay with Ruby 1.8, as many libraries (gems) remained incompatible with Ruby 1.9.x. Debian provided packages for both branches of Ruby in Squeeze (2011) but only changed the default to 1.9 in 2012 (in a stable release with Wheezy, 2013).

Finally, in the Python community. Similarly to what happened with Ruby 1.9, Python 3.0 was released in December 2008. Releases from the 3.x branch have been shipped in Debian Squeeze (3.1), Wheezy (3.2), Jessie (3.4). But the python command still points to 2.7 (I don't think that there are plans to make it point to 3.x, making python 3.x essentially a different language), and there are talks about really getting rid of Python 2.7 in Buster (Stretch+1, Jessie+2).

In retrospect, and looking at what those projects have been doing in recent years, it is probably a better idea to break early, break often, and fix a constant stream of breakages, on a regular basis, even if that means temporarily exposing breakage to users, and spending more time seeking strategies to limit the damage caused by introducing breakage. What also changed since the time those branches were introduced is the increased popularity of automated testing and continuous integration, which makes it easier to measure breakage caused by disruptive changes. Distributions are in a good position to help here, by being able to provide early feedback to upstream projects about potentially disruptive changes. And distributions also have good motivations to help here, because it is usually not a great solution to ship two incompatible branches of the same project.

(I wonder if there are other occurrences of the same pattern?)

Update: There's a discussion about this post on HN.
Functions marked with __latent_entropy have their branches and loops adjusted to mix random values (selected at build time) into a global entropy-gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc., this provides some additional uncertainty to the kernel's entropy pool. Since the entropy actually gathered is hard to measure, no entropy is "credited", but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy.
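A toy userspace illustration of the idea (nothing like the real GCC plugin; the constants and function are arbitrary stand-ins for the build-time values and instrumented code):

#include <stdio.h>
#include <stdint.h>

static uint64_t latent_entropy;     /* global entropy-gathering variable */

static void probe_devices(int found_disk, int nr_cpus)
{
    latent_entropy ^= 0x9e3779b97f4a7c15ULL;        /* function-entry mix */

    if (found_disk)                                  /* per-branch mix */
        latent_entropy += 0xbf58476d1ce4e5b9ULL;
    else
        latent_entropy += 0x94d049bb133111ebULL;

    for (int i = 0; i < nr_cpus; i++)                /* per-iteration mix */
        latent_entropy ^= (latent_entropy << 7) + 0xd6e8feb86659fd93ULL + i;
}

int main(void)
{
    probe_devices(1, 4);
    /* The result would never be credited as entropy; it would only be
     * mixed into the existing pool. */
    printf("latent_entropy = %016llx\n", (unsigned long long)latent_entropy);
    return 0;
}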
vmapped kernel stack and thread_info relocation on x86
Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process's stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloc'ed memory. Andy Lutomirski did a bunch of work to move to vmapped kernel stacks via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write.
Related to this, the kernel was storing thread_info
(which contained sensitive values like addr_limit
) at the bottom of the kernel stack, which was an easy target for attackers to hit. Between a combination of explicitly moving targets out of thread_info
, removing needless fields, and entirely moving thread_info
off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK
for x86.
CONFIG_DEBUG_RODATA mandatory on arm64
As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there's no reason to make the protection optional.
random_page()
cleanup
Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening for things like the recent consolidation on arch_mmap_rnd()
for ET_DYN
and during the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int()
(or similar) in and around arch_mmap_rnd()
(which is used for mmap
(and therefore shared library) and PIE ASLR), as well as in randomize_stack_top()
(which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range()
entirely and replacing it with the saner random_page()
, making the per-architecture arch_randomize_brk()
(responsible for brk
ASLR) much easier to understand.
That's it for now! Let me know if there are other fun things to call attention to in v4.9.
2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
(as can be seen in the uname -a output). It is unclear why those patches are necessary, since the ARMv7 Armada 385 CPU has been supported in Linux since at least 4.2-rc1, but it is common for OpenWRT ports to ship patches to the kernel, either to backport missing functionality or to perform some optimization.
There has been some pressure from backers to petition Turris to
"speedup the process of upstreaming Omnia support to OpenWrt". It
could be that the team is too busy with delivering the devices already
ordered to complete that process at this point. The software is
available on the CZ-NIC GitHub repository and the actual Linux
patches can be found here and here. CZ.NIC also operates a
private GitLab instance where more software is available. There is
technically no reason why you wouldn't be able to run your own
distribution on the Omnia router: OpenWRT development snapshots should
be able to run on the Omnia hardware and some people have
installed Debian on Omnia. It may require some customization
(e.g. the kernel) to make sure the Omnia hardware is correctly
supported. Most people seem to prefer to run TurrisOS because of
the extra features.
The hardware itself is also free and open for the most part. There is
a binary blob needed for the 5GHz wireless card, which seems to be the
only proprietary component on the board. The schematics of the device
are available through the Omnia wiki, but oddly not in the GitHub
repository like the rest of the software.
Note: this article first appeared in the Linux Weekly News.
unzip angler-nrd90u-factory-7c9b6a2b.zip
cd angler-nrd90u/
unzip image-angler-nrd90u.zip

fastboot flash bootloader bootloader-angler-angler-03.58.img
fastboot reboot-bootloader
sleep 5
fastboot flash radio radio-angler-angler-03.72.img
fastboot reboot-bootloader
sleep 5
fastboot erase system
fastboot flash system system.img
fastboot erase boot
fastboot flash boot boot.img
fastboot erase cache
fastboot flash cache cache.img
fastboot erase vendor
fastboot flash vendor vendor.img
fastboot erase recovery
fastboot flash recovery recovery.img
fastboot reboot

fastboot erase recovery
fastboot flash recovery twrp-3.0.2-2-angler.img
fastboot reboot-bootloader
32-bit number of index entries.

What? The index/staging area can't handle more than ~4.3 billion files? There I was, writing Rust code to write out the index.
try!(out.write_u32::<NetworkOrder>(self.entries.len()));
(For people familiar with the byteorder crate and wondering what NetworkOrder is, I have a use byteorder::BigEndian as NetworkOrder
)
And the Rust compiler rightfully barfed:
error: mismatched types:
 expected `u32`,
    found `usize` [E0308]
And there I was, wondering: mmmm should I just add as u32
and silently truncate or hey what does git do?
And it turns out, git uses an unsigned int
to track the number of entries in the first place, so there is no truncation happening.
Then I thought: but what happens when cache_nr reaches the max?
Well, it turns out there's only one obvious place where the field is incremented.
What? Holy coffin nails, Batman! No overflow check?
Wait a second, look 3 lines above that:
ALLOC_GROW(istate->cache, istate->cache_nr + 1, istate->cache_alloc);
Yeah, obviously, if you're incrementing cache_nr, you already have that many entries in memory. So, how big would that array be?
struct cache_entry **cache;

So it's an array of pointers; assuming 64-bit pointers, that's ~34.3 GB. But all those cache_nr entries are in memory too. How big is a cache entry?
struct cache_entry {
	struct hashmap_entry ent;
	struct stat_data ce_stat_data;
	unsigned int ce_mode;
	unsigned int ce_flags;
	unsigned int ce_namelen;
	unsigned int index;	/* for link extension */
	unsigned char sha1[20];
	char name[FLEX_ARRAY]; /* more */
};

So, 4 ints, 20 bytes, and as many bytes as necessary to hold a path. And two inline structs. How big are they?

struct hashmap_entry {
	struct hashmap_entry *next;
	unsigned int hash;
};

struct stat_data {
	struct cache_time sd_ctime;
	struct cache_time sd_mtime;
	unsigned int sd_dev;
	unsigned int sd_ino;
	unsigned int sd_uid;
	unsigned int sd_gid;
	unsigned int sd_size;
};

Woohoo, nested structs.

struct cache_time {
	uint32_t sec;
	uint32_t nsec;
};

So all in all, we're looking at 1 + 2 + 2 + 5 + 4 32-bit integers, 1 64-bit pointer, 2 32-bit paddings, 20 bytes of sha1, for a total of 92 bytes, not counting the variable size for file paths. The average path length in mozilla-central, which only has slightly over 140 thousand of them, is 59 (including the terminal NUL character). Let's conservatively assume our crazy repository would have the same average, making the average cache entry 151 bytes. But memory allocators usually allocate more than requested. In this particular case, with the default allocator on GNU/Linux, it's 156 (weirdly enough, it's 152 on my machine). 156 times 4.3 billion is about 670 GB. Plus the 34.3 GB from the array of pointers: 704.3 GB. Of RAM. Not counting the memory allocator overhead of handling that. Or all the other things git might have in memory as well (which apparently involves a hashmap, too, but I won't look at that, I promise). I think one would have run out of memory before hitting that integer overflow.

Interestingly, looking at Documentation/technical/index-format.txt again, the on-disk format appears smaller, with 62 bytes per file instead of 92, so the corresponding index file would be smaller. (And in version 4, paths are prefix-compressed, so paths would be smaller too.)

But having an index that large supposes those files are checked out. So let's say I have an empty ext4 file system as large as possible (which I'm told is 2^60 bytes (1.15 billion gigabytes)). Creating a small empty ext4 tells me at least 10 inodes are allocated by default. I seem to remember there's at least one reserved for the journal, there's the top-level directory, and there's lost+found; there apparently are more. Obviously, on that very large file system, we'd have a git repository. git init
with an empty template creates 9 files and directories, so that's 19 more inodes taken. But git init doesn't create an index, and doesn't have any objects. We'd thus have at least one file for our hundreds-of-gigabytes index, and at least 2 who-knows-how-big files for the objects (a pack and its index). How many inodes does that leave us with?

The Linux kernel source tells us the number of inodes in an ext4 file system is stored in a 32-bit integer.

So all in all, if we had an empty very large file system, we'd only be able to store, at best, 2^32 − 22 files… and we wouldn't even be able to get cache_nr to overflow.
while following the rules. Because the index can keep files that have been removed, it is actually possible to fill the index without filling the file system. After hours (days? months? years? decades?*) of running
seq 0 4294967296 | while read i; do touch $i; git update-index --add $i; rm $i; done
One should be able to reach the integer overflow. But that'd still require hundreds of gigabytes of disk space and even more RAM. A batched variant is faster:

seq 0 100000 4294967296 | while read i; do j=$(seq $i $(($i + 99999))); touch $j; git update-index --add $j; rm $j; done

At the rate the first million files were added, still assuming a constant rate, it would take about a month on my machine. Considering reading/writing a list of a million files is a thousand times faster than reading a list of a billion files, assuming linear increase, we're still talking about decades, and plentiful RAM. Fun fact: after leaving it run for 5 times as long as it had run for the first million files, it hasn't even done half more…
One could generate the necessary hundreds-of-gigabytes index manually, that wouldn't be too hard, and assuming it could be done at about 1 GB/s on a good machine with a good SSD, we'd be able to craft a close-to-explosion index within a few minutes. But we'd still lack the RAM to load it.
So, here is the open question: should I report that integer overflow?
Wow, that was some serious procrastination.
Edit: Epilogue: Actually, oops, there is a separate integer overflow on the reading side that can trigger a buffer overflow, that doesn't actually require a large index, just a crafted header, demonstrating that yes, not all integer overflows are equal.
              total        used        free      shared  buff/cache   available
Mem:            15G        3.7G        641M        222M         11G         11G
Swap:           15G        194M         15G

I've used the -h option for human-readable output here for the sake of brevity and because I hate typing long lists of long numbers. People who have good memories (or old computers) may notice there is a missing -/+ buffers/cache line. This was intentionally removed in mid-2014 because, as the memory management of Linux got more and more complicated, these lines became less relevant. They used to help with the "not used" used-memory problem mentioned in the introduction, but progress caught up with it.

To explain what free is showing, you need to understand some of the underlying statistics that it works with. This isn't a lesson on how Linux manages its memory (the honest short answer is, I don't fully know) but just enough, hopefully, to understand what free is doing. Let's start with the two simple columns first: total and free.

Total Memory

This is what memory you have available to Linux. It is almost, but not quite, the amount of memory you put into a physical host or the amount of memory you allocate for a virtual one. Some memory you just can't have, either due to early reservations or devices shadowing the memory area. Unless you start mucking around with those settings or the virtual host, this number stays the same.

Free Memory

Memory that nobody at all is using. They haven't reserved it, haven't stashed it away for future use or even just, you know, actually used it. People often obsess about this statistic but it's probably the most useless one to use for anything directly. I have even considered removing this column, or replacing it with available (see later what that is) because of the confusion this statistic causes. The reason for its uselessness is that Linux has memory management that allocates memory it doesn't use. This decrements the free counter but it is not truly "used". If your application needs that memory, it can be given back. A very important statistic to know for running a system is how much memory have I got left before I either run out or start to seriously swap stuff to swap drives. Despite its name, this statistic will not tell you that and will probably mislead you. My advice is, unless you really understand the Linux memory statistics, ignore this one.

Who's Using What

Now we come to the components that are using (if that is the right word) the memory within a system.

Shared Memory

Shared memory is often thought of only in the context of processes (and it makes working out how much memory a process uses tricky, but that's another story), but the kernel has this as well. The shared column lists this, which is a direct report from the Shmem field in the meminfo file.

Slabs

For things used a lot within the kernel, it is inefficient to keep going to get small bits of memory here and there all the time. The kernel has this concept of slabs where it creates small caches for objects or in-kernel data structures that slabinfo(5) states "[such as] buffer heads, inodes and dentries". So basically kernel stuff for the kernel to do kernelly things with.

Slab memory comes in two flavours: reclaimable and unreclaimable. This is important because unreclaimable cannot be handed back if your system starts to run out of memory. Funnily enough, not all reclaimable is, well, reclaimable. A good estimate is you'll only get 50% back; top and free ignore this inconvenient truth and assume it can be 100%. All of the reclaimable slab memory is considered part of the Cached statistic. Unreclaimable is memory that is part of Used.

Page Cache and Cached

Page caches are used to read and write to storage, such as a disk drive. These are the things that get written out when you use sync and make the second read of the same file much faster. An interesting quirk is that tmpfs is part of the page cache, so the Cached column may increase if you have a few of these. The Cached column may seem like it should only have Page Cache, but the Reclaimable part of the Slab is added to this value. Some older versions of some programs will have none or all of the Slab counted in Cached; both of these versions are incorrect. Cached makes up part of the buff/cache column with the standard options for free, or has a column to itself with the wide option.

Buffers

The second component of the buff/cache column (or a separate one with the wide option) is kernel buffers. These are the low-level I/O buffers inside the kernel. Generally they are small compared to the other components and can basically be ignored or just considered part of Cached, which is the default for free.

Used

Unlike most of the previous statistics, which are either directly pulled out of the meminfo file or involve some simple addition, the Used column is calculated and completely dependent on the other values. As such it is not telling the whole story here, but it is a reasonably OK estimate of used memory. The Used component is what you have left of your Total memory once you have removed:
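As a rough sketch of where these numbers come from, the following reads /proc/meminfo and recomputes approximations of free's columns (removing free memory, buffers and cache from the total to get used). The exact formula used by procps has changed over time, so treat the arithmetic as an approximation rather than the tool's implementation:

#include <stdio.h>
#include <string.h>

/* Return the value (in kB) of a /proc/meminfo field, or 0 if not found. */
static long get_kb(const char *key)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    long value = 0;

    if (!f)
        return 0;
    while (fgets(line, sizeof(line), f)) {
        size_t n = strlen(key);
        if (strncmp(line, key, n) == 0 && line[n] == ':') {
            sscanf(line + n + 1, "%ld", &value);
            break;
        }
    }
    fclose(f);
    return value;
}

int main(void)
{
    long total     = get_kb("MemTotal");
    long free_     = get_kb("MemFree");
    long buffers   = get_kb("Buffers");
    long cached    = get_kb("Cached") + get_kb("SReclaimable");
    long shared    = get_kb("Shmem");
    long available = get_kb("MemAvailable");

    /* used = what remains of total once free and the caches are removed */
    long used = total - free_ - buffers - cached;

    printf("total %ldk used %ldk free %ldk shared %ldk buff/cache %ldk available %ldk\n",
           total, used, free_, shared, buffers + cached, available);
    return 0;
}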