Search Results: "cyb"

13 April 2017

Antoine Beaupré: New approaches to network fast paths

With the speed of network hardware now reaching 100 Gbps and distributed denial-of-service (DDoS) attacks going in the Tbps range, Linux kernel developers are scrambling to optimize key network paths in the kernel to keep up. Many efforts are actually geared toward getting traffic out of the costly Linux TCP stack. We have already covered the XDP (eXpress Data Path) patch set, but two new ideas surfaced during the Netconf and Netdev conferences held in Toronto and Montreal in early April 2017. One is a patch set called af_packet, which aims at extracting raw packets from the kernel as fast as possible; the other is the idea of implementing in-kernel layer-7 proxying. There are also user-space network stacks like Netmap, DPDK, or Snabb (which we previously covered). This article aims to clarify what all those components do and to provide a short status update for the tools we have already covered. We will focus on in-kernel solutions for now. Indeed, user-space tools have a fundamental limitation: if they need to re-inject packets onto the network, they must again pay the expensive cost of crossing the kernel barrier. We will start from the lowest part of the stack, the af_packet patch set, and work our way up to layer 7 and in-kernel proxying.

af_packet v4 John Fastabend presented a new version of a patch set that was first published in January regarding the af_packet protocol family, which is currently used by tcpdump to extract packets from network interfaces. The goal of this change is to allow zero-copy transfers between user-space applications and the NIC (network interface card) transmit and receive ring buffers. Such optimizations are useful for telecommunications companies, which may use it for deep packet inspection or running exotic protocols in user space. Another use case is running a high-performance intrusion detection system that needs to watch large traffic streams in real time to catch certain types of attacks. Fastabend presented his work during the Netdev network-performance workshop, but also brought the patch set up for discussion during Netconf. There, he said he could achieve line-rate extraction (and injection) of packets, with packet rates as high as 30Mpps. This performance gain is possible because user-space pages are directly DMA-mapped to the NIC, which is also a security concern. The other downside of this approach is that a complete pair of ring buffers needs to be dedicated for this purpose; whereas before packets were copied to user space, now they are memory-mapped, so the user-space side needs to process those packets quickly; otherwise they are simply dropped. Furthermore, it's an "all or nothing" approach; while NIC-level classifiers could be used to steer part of the traffic to a specific queue, once traffic hits that queue, it is only accessible through the af_packet interface and not the rest of the regular stack. If done correctly, however, this could actually improve the way user-space stacks access those packets, providing projects like DPDK a safer way to share pages with the NIC, because it is well defined and kernel-controlled. According to Jesper Dangaard Brouer (during review of this article):
This proposal will be a safer way to share raw packet data between user space and kernel space than what DPDK is doing, [by providing] a cleaner separation as we keep driver code in the kernel where it belongs.
During the Netdev network-performance workshop, Fastabend asked if there was a better data structure to use for such a purpose. The goal here is to provide a consistent interface to user space regardless of the driver or hardware used to extract packets from the wire. af_packet currently defines its own packet format that abstracts away the NIC-specific details, but there are other possible formats. For example, someone in the audience proposed the virtio packet format. Alexei Starovoitov rejected this idea because af_packet is a kernel-specific facility while virtio has its own separate specification with its own requirements. The next step for af_packet is the posting of the new "v4" patch set, although David Miller warned that this wouldn't get merged until proper XDP support lands in the Intel drivers. The concern, of course, is that the kernel would have multiple incomplete bypass solutions available at once. Hopefully, Fastabend will present the (by then) merged patch set at the next Netdev conference in November.
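For readers unfamiliar with the existing interface, here is a minimal sketch (not from the article) of how a user-space program maps the current af_packet receive ring using TPACKET_V3; the kernel still copies every packet into this shared buffer, which is precisely the per-packet copy the v4 work wants to eliminate. The sizing constants are arbitrary and error handling is omitted.
/* rx_ring.c -- sketch of the existing af_packet mmap interface (TPACKET_V3).
 * The kernel copies packets into the shared ring; the proposed v4 would
 * DMA-map user pages directly instead. Error handling omitted for brevity. */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <sys/mman.h>
#include <sys/socket.h>

int main(void)
{
    /* raw socket seeing all protocols, as tcpdump would open */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    int version = TPACKET_V3;
    setsockopt(fd, SOL_PACKET, PACKET_VERSION, &version, sizeof(version));

    struct tpacket_req3 req = {
        .tp_block_size = 1 << 22,           /* arbitrary: 64 blocks of 4 MiB */
        .tp_block_nr = 64,
        .tp_frame_size = 1 << 11,
        .tp_frame_nr = 64 * ((1 << 22) / (1 << 11)),
        .tp_retire_blk_tov = 60,            /* retire a block after 60 ms */
    };
    setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

    /* the ring is shared with the kernel: no per-packet syscall, but each
     * packet is still copied off the NIC into these pages */
    void *ring = mmap(NULL, (size_t)req.tp_block_size * req.tp_block_nr,
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    (void)ring;  /* a real consumer would walk the blocks and poll() for more */
    return 0;
}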

XDP updates Higher up in the networking stack sits XDP. The af_packet feature differs from XDP in that it does not perform any sort of analysis or mangling of packets; its objective is purely to get the data into and out of the kernel as fast as possible, completely bypassing the regular kernel networking stack. XDP also sits before the networking stack except that, according to Brouer, it is "focused on cooperating with the existing network stack infrastructure, and on use-cases where the packet doesn't necessarily need to leave kernel space (like routing and bridging, or skipping complex code-paths)." XDP has evolved quite a bit since we last covered it in LWN. It seems that most of the controversy surrounding the introduction of XDP in the Linux kernel has died down in public discussions, under the leadership of David Miller, who heralded XDP as the right solution for a long-term architecture in the kernel. He presented XDP as a fast, flexible, and safe solution. Indeed, one of the controversies surrounding XDP was the question of the inherent security challenges of introducing user-provided programs directly into the Linux kernel to mangle packets at such a low level. Miller argued that whatever protections are expected for user-space programs also apply to XDP programs, comparing the virtual memory protections to the eBPF (extended BPF) verifier applied to XDP programs. Those programs are actually eBPF programs that come with an interesting set of restrictions (a minimal example is sketched after this list):
  • they have a limited size
  • they cannot jump backward (and thus cannot loop), so they execute in predictable time
  • they do only static allocation, so they are also limited in memory
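To make these restrictions concrete, here is a minimal sketch (not from the article) of an XDP program that drops all UDP packets and passes everything else up the stack; note the explicit bounds checks the verifier demands and the absence of loops or dynamic allocation. The SEC() macro, the byte-order helper, and the build/attach commands in the leading comment are assumptions about a typical clang/iproute2 toolchain.
/* xdp_drop_udp.c -- minimal XDP sketch: drop UDP, pass everything else.
 * Build (assumed toolchain): clang -O2 -target bpf -c xdp_drop_udp.c -o xdp_drop_udp.o
 * Attach (assuming an XDP-capable driver and recent iproute2):
 *   ip link set dev eth0 xdp obj xdp_drop_udp.o */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>

#define SEC(NAME) __attribute__((section(NAME), used))
/* assumes a little-endian host; newer kernel trees provide bpf_endian.h */
#define bpf_htons(x) __builtin_bswap16(x)

SEC("prog")
int xdp_drop_udp(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* the verifier rejects any packet access not preceded by a bounds check */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    /* drop undesirable traffic before the stack ever sees it */
    if (ip->protocol == IPPROTO_UDP)
        return XDP_DROP;
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";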
XDP is not a one-size-fits-all solution: netfilter, the TC traffic shaper, and other normal Linux utilities still have their place. There is, however, a clear use case for a solution like XDP in the kernel. For example, Facebook and Cloudflare have both started testing XDP and, in Facebook's case, deploying XDP in production. Martin Kafai Lau, from Facebook, presented the tool set the company is using to construct a DDoS-resilience solution and a level-4 load balancer (L4LB), which got a ten-times performance improvement over the previous IPVS-based solution. Facebook rolled out its own user-space solution called "Droplet" to detect hostile traffic and deploy blocking rules in the form of eBPF programs loaded in XDP. Lau demonstrated the way Facebook deploys a three-part chained eBPF program: the first part allows debugging and dumping of packets, the second is Droplet itself, which drops undesirable traffic, and the last segment is the load balancer, which mangles the packets to tweak their destination according to internal rules. Droplet can drop DDoS attacks at line rate while keeping the architecture flexible, which were two key design requirements.
Gilberto Bertin, from Cloudflare, presented a similar approach: Cloudflare has a tool that processes sFlow data generated from iptables in order to generate cBPF (classic BPF) mitigation rules that are then deployed on edge routers. Those rules are created with a tool called bpfgen, part of Cloudflare's BSD-licensed bpftools suite. For example, it could create a cBPF bytecode blob that would match DNS queries to any example.com domain with something like:
    bpfgen dns *.example.com
Originally, Cloudflare would deploy those rules to plain iptables firewalls with the xt_bpf module, but this led to performance issues. It then deployed a proprietary user-space solution based on Solarflare hardware, but this has the performance limitations of user-space applications: getting packets back onto the wire involves the cost of re-injecting them into the kernel. This is why Cloudflare is experimenting with XDP, which was partly developed in response to the company's problems, to deploy those BPF programs. A concern that Bertin identified was the lack of visibility into dropped packets. Cloudflare currently samples some of the dropped traffic to analyze attacks; this is not currently possible with XDP unless you pass the packets down the stack, which is expensive. Miller agreed that the lack of monitoring for XDP programs is a large issue that needs to be resolved, and suggested creating a way to mark packets for extraction to allow analysis. Cloudflare is currently in a testing phase with XDP and it is unclear if its whole XDP tool chain will be publicly available.
While those two companies are starting to use XDP as-is, there is more work needed to complete the XDP project. As mentioned above and in our previous coverage, massive statistics extraction is still limited in the Linux kernel and introspection is difficult. Furthermore, while the existing actions (XDP_DROP and XDP_TX, see the documentation for more information) are well implemented and used, another action may be introduced, called XDP_REDIRECT, which would allow redirecting packets to different network interfaces. Such an action could also be used to accelerate bridges as packets could be "switched" based on the MAC address table. XDP also requires network driver support, which is currently limited. For example, the Intel drivers still do not support XDP, although that should come pretty soon.
Miller, in his Netdev keynote, focused on XDP and presented it as the standard solution that is safe, fast, and usable. He identified the next steps of XDP development to be the addition of debugging mechanisms, better sampling tools for statistics and analysis, and user-space consistency. Miller foresees a future for XDP similar to the popularization of the Arduino chips: a simple set of tools that anyone, not just developers, can use. He gave the example of an Arduino tutorial that he followed where he could just look up a part number and get easy-to-use instructions on how to program it. Similar components should be available for XDP. For this purpose, the conference saw the creation of a new mailing list called xdp-newbies where people can learn how to create XDP build environments and how to write XDP programs.

In-kernel layer-7 proxying The third approach that struck me as innovative is the idea of doing layer-7 (application) proxying directly in the kernel. This comes from the idea that, traditionally, we build firewalls to segregate traffic and apply controls, but as most services move to HTTP, those policies become ineffective. Thomas Graf presented this idea during Netconf using a Star Wars allegory: what if the Death Star were a server with an API? You would have endpoints like /dock or /comms that would allow you to dock a ship or communicate with the Death Star. Those API endpoints should obviously be public, but then there is this /exhaust-port endpoint that should never be publicly available. In order for a firewall to protect such a system, it must be able to inspect traffic at a higher level than the traditional address-port pairs. Graf presented a design where the kernel would create an in-kernel socket that would negotiate TCP connections on behalf of user space and then be able to apply arbitrary eBPF rules in the kernel.
[Figure: Graf's design of in-kernel proxying]
In this scenario, instead of doing the traditional transfer from Netfilter's TPROXY to user space, the kernel directly decapsulates the HTTP traffic and passes it to BPF rules that can make decisions without doing expensive context switches or memory copies in the case of simply wanting to refuse traffic (e.g. issue an HTTP 403 error). This, of course, requires the inclusion of kTLS to process HTTPS connections. HTTP/2 support may also prove problematic, as it multiplexes connections and is harder to decapsulate. This design was described as a "pure pre-accept() hook". Starovoitov also compared the design to the kernel connection multiplexer (KCM). Tom Herbert, KCM's author, agreed that it could be extended to support this, but would require some extensions in user space to provide an interface between regular socket-based applications and the KCM layer. In any case, if the application does TLS (and lots of them do), kTLS gets tricky because it breaks the end-to-end nature of TLS, in effect becoming a man in the middle between the client and the application. Eric Dumazet argued that HA-Proxy already does things like this: it uses splice() to avoid copying too much data around, but it still does a context switch to hand over processing to user space, something that could be fixed in the general case.
Another similar project that was presented at Netdev is the Tempesta firewall and reverse-proxy. The speaker, Alex Krizhanovsky, explained that the Tempesta developers took one person-month to port the mbed TLS stack to the Linux kernel to allow an in-kernel TLS handshake. Tempesta also implements rate limiting, cookies, and JavaScript challenges to mitigate DDoS attacks. The argument behind the project is that "it's easier to move TLS to the kernel than it is to move the TCP/IP stack to user space". Graf explained that he is familiar with Krizhanovsky's work and he is hoping to collaborate. In effect, the design Graf is working on would serve as a foundation for Krizhanovsky's in-kernel HTTP server (kHTTP). In a private email, Graf explained that:
The main differences in the implementation are currently that we foresee to use BPF for protocol parsing to avoid having to implement every single application protocol natively in the kernel. Tempesta likely sees this less of an issue as they are probably only targeting HTTP/1.1 and HTTP/2 and to some [extent] JavaScript.
Neither project is really ready for production yet. There didn't seem to be any significant pushback from key network developers against the idea, which surprised some people, so it is likely we will see more and more layer-7 intelligence move into the kernel sooner rather than later.

Conclusion All of this work aims at replacing a rag-tag bunch of proprietary solutions that recently came up to bypass the Linux kernel TCP/IP stack and improve performance for firewalls, proxies, and other key edge network elements. The idea is that, unless the kernel improves its performance, or at least provides a way to bypass its more complex code paths, people will work around it. With this set of solutions in place, engineers will now be able to use standard APIs to hook high-performance systems into the Linux kernel.
The author would like to thank the Netdev and Netconf organizers for travel assistance, Thomas Graf for a review of the in-kernel proxying section of this article, and Jesper Dangaard Brouer for review of the af_packet and XDP sections. Note: this article first appeared in the Linux Weekly News.

24 March 2017

Gunnar Wolf: Dear lazyweb: How would you visualize..?

Dear lazyweb, I am trying to find a good way to present the categorization of several cases I studied, with a fitting graph. I am rating several vulnerabilities / failures according to James Cebula et al.'s paper, A Taxonomy of Operational Cyber Security Risks; this is a somewhat deep taxonomy, with 57 end items organized in a three-level-deep hierarchy. Copying a table from the cited paper: My categorization is binary: I care only whether a case falls within a given category or not. My first stab at this was to represent each case using a star or radar graph. As an example: As you can see, to a "bare" star graph I added a background color for each top-level category (blue for actions of people, green for systems and technology failures, red for failed internal processes, and gray for external events), and printed out only the labels for the second-level categories; for an accurate reading of the graphs, you have to refer to the table and count bars. And, yes, according to the Engineering Statistics Handbook:
Star plots are helpful for small-to-moderate-sized multivariate data sets. Their primary weakness is that their effectiveness is limited to data sets with less than a few hundred points. After that, they tend to be overwhelming.
I strongly agree with the above statement, and saying that "a few hundred points" can be understood is even an overstatement: 50 points are already too much. Now, trying to increase usability for this graph, I came across the Sunburst diagram. One of the proponents of this diagram, John Stasko, has written quite a bit about it. Now... How to create my beautiful Sunburst diagram? That's a tougher one. Even though the page I linked to in the (great!) Data visualization catalogue presents even some free-as-in-software tools to do this... They are JavaScript projects that will render their beautiful plots (even including an animation)... in the browser. I need them for a static (i.e. to be printed) document. Yes, I can screenshot and all, but I want them to be automatically generated, so I can review and regenerate them all automatically. Oh, I could just write JSON and use SaaS sites such as Aculocity to do the heavy lifting, but if you know me, you will understand why I don't want to. So... I set out to find a Gunnar-approved way to display the information I need. Now, as the Protovis documentation says, an icicle is simply a sunburst transformed from polar to Cartesian coordinates... But I came to a similar conclusion: the tools I found are not what I need. OK, but an icicle graph seems much simpler to produce. I fired up my Emacs and started writing using Ruby, RMagick and RVG... then I decided to try a different way. This is my result so far: So... What do you think? Does this look right to you? Clearer than the previous one? Worse? Do you have any idea how I could make this better? Oh... You want to tell me there is something odd about it? Well, yes, of course! I still need to tweak it quite a bit. Would you believe me if I told you this is not really a left-to-right icicle graph, but rather a strangely formatted Graphviz non-directed graph using the dot formatter? I can assure you you don't want to look at my Graphviz sources... But in case you insist... Take them and laugh. Or cry. Of course, this file comes from a hand-crafted template, but it has some autogenerated bits to it. I still have to tweak it quite a bit to correct several of its usability shortcomings, but at least it looks somewhat like what I want to achieve. Anyway, I started out by asking a "dear lazyweb" question. So, here it goes: Do you think I'm using the right visualization for my data? Do you have any better suggestions, either of a graph or of a graph-generating tool? Thanks! [update] Thanks for the first pointer, Lazyweb! I found a beautiful solution; we will see if it is what I need or not (it is too space-greedy to be readable... but I will check it out more thoroughly). Writing it as a mindmap using TikZ directly from within LaTeX, I get the following result, which lays out much better than anything I can spew out by myself:

24 December 2016

Russ Allbery: Review: Warchild

Review: Warchild, by Karin Lowachee
Series: Warchild #1
Publisher: Warner Aspect
Copyright: April 2002
ISBN: 0-446-61077-1
Format: Mass market
Pages: 451
In a future world of deep space stations and starship trade routes, Jos Musey grew up on a merchant ship with a loving family and typical childhood companions. But at the age of eight, his ship was taken by pirates and he was taken as a slave. That might have been the end of his story, but after a year of captivity he manages to escape during an alien attack on a distant frontier station. Jos then learns more than he ever expected to learn about the ongoing deep space war between the human military and the aliens and their human sympathizers. From both sides. Warchild feels so much like a collection of 1980s SF tropes that I'm a bit surprised it was published in 2002. Some of those have been part of SF well before the 1980s: the coming-of-age story of a child in space, deep-space combat and merchant fleets, pirates, and sketchy stations. But when one adds the Japanese-inspired philosophy and combat training, with a bit of Karate Kid feel, plus the (oddly bolted on) cyberpunk "burndiving," this book feels deeply embedded in a specific generation of SF storytelling. That's not necessarily a drawback. I like some of those tropes. The martial arts training coupled with careful and patient psychology worked very well for me. It may be a bit stereotyped, but Lowachee is careful never to present it as Asian; it's an alien philosophy and environment, and although it happens to wear its influences on its sleeves, it makes no attempt to tie that to any particular human culture. And the philosophy and, more to the point, the approach Niko takes with Jos is exactly what Jos needs. That section of the book (the second) was by far my favorite. I wish the whole book had been like that. Unfortunately, it's not. The first part is a deeply uncomfortable account of Jos's capture and enslavement (with bonus implied pedophilia). It's thankfully the shortest section of the book, but it's an endless parade of horrors that I didn't enjoy reading. Lowachee took the stylistic choice of writing it in the second person, which is a literary trick that rarely works for me and didn't work here. I'm sure the goal is to make it feel more immediate, but I didn't need this scene to be more immediate, and second person always reads as awkward and forced. If an author writes characters well, I will identify with them, but if I feel like I'm being forced to identify with them, I just start getting irritated. The third part of the book goes in yet another direction: military SF, complete with hazing, camaraderie, esprit de corps, and bloody combat, with an uncomfortable undertone of constant stress due to Jos's complex and dangerous position. I wanted this to be much shorter and wanted the book to return to the part that I really liked. Unfortunately, that's not to be; the tone of this section is the tone for the rest of the book. To be fair, it's better than I expected it to be, and Jos's recovery and coming-of-age continues in more subtle and more satisfying ways than at first it seemed like it would. But Lowachee complicates and largely breaks a recovery that I was hoping would proceed down a more peaceful path, and replaces a beautiful and interesting (if a bit stereotyped) environment with bog-standard military SF. If you like that sort of thing, there's a lot of that thing here, but I've read a lot of books with that setting and far fewer about an Asian-inspired martial alien philosophy. I think Warchild has a bit too much stuff going on and not enough recovery space.
The cyberpunk angle probably gets developed more in later books of the series (the next book is Burndive, which is the name for cyberpunk hacking in this book), but it felt bolted on here. Jos's story has multiple false starts and complications, and Lowachee keeps pulling the rug out from under him again and again until both he and the reader go a bit numb. The ending mostly works, but it's a brutal resolution to the complex psychological situation Lowachee sets up. This book reminds me a bit of C.J. Cherryh in that the characters seem constantly stressed beyond their ability to cope. I wanted something a bit kinder and softer. Despite that, the psychology and the brief moments of understanding and light are compelling enough that I'm still tempted to read on in this series. The subsequent books follow other characters; maybe they'll be a bit less nasty to their protagonists. Followed by Burndive. Rating: 6 out of 10

7 December 2016

Jonas Meurer: On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration

On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration On November 4, I was made aware of a security vulnerability in the integration of cryptsetup into the initramfs. The vulnerability was discovered by security researchers Hector Marco and Ismael Ripoll of the CyberSecurity UPV Research Group and got CVE-2016-4484 assigned. In this post I'll try to reflect a bit on what the vulnerability is about and on how it was disclosed.

What CVE-2016-4484 is all about Basically, the vulnerability is about two separate but related issues:

1. Initramfs rescue shell considered harmful The main topic that Hector Marco and Ismael Ripoll address in their publication is that Debian exits into a rescue shell in case of failure during initramfs, and that this can be triggered by entering a wrong password ~93 times in a row. Indeed, the Debian initramfs implementation as provided by initramfs-tools exits into a rescue shell (usually a busybox shell) after a defined number of failed attempts to make the root filesystem available. The loop in question is in local_device_setup() in the local initramfs script. In general, this behaviour is considered a feature: if the root device hasn't shown up after 30 rounds, the rescue shell is spawned to provide the local user/admin a way to debug and fix things herself. Hector Marco and Ismael Ripoll argue that in special environments, e.g. on public computers with password-protected BIOS/UEFI and bootloader, this opens an attack vector and needs to be regarded as a security vulnerability:
It is common to assume that once the attacker has physical access to the computer, the game is over. The attackers can do whatever they want. And although this was true 30 years ago, today it is not. There are many "levels" of physical access. [...] In order to protect the computer in these scenarios: the BIOS/UEFI has one or two passwords to protect the booting or the configuration menu; the GRUB also has the possibility to use multiple passwords to protect unauthorized operations. And in the case of an encrypted system, the initrd shall block the maximum number of password trials and prevent the access to the computer in that case.
While Hector and Ismael have a valid point in that the rescue shell might open an additional attack vector in special setups, this is not true for the vast majority of Debian systems out there: in most cases a local attacker can alter the boot order, replace or add boot devices, modify boot options in the (GNU GRUB) bootloader menu, or modify/replace arbitrary hardware parts. The scenario required to make the initramfs rescue shell an additional attack vector is indeed very special: locked-down hardware and a password-protected BIOS and bootloader, but still local keyboard (or serial console) access, are required at the very least. Hector and Ismael argue that the default should be changed for enhanced security:
[...] But then Linux is used in more hostile environments, this helpful (but naive) recovery services shall not be the default option.
For the reasons explained above, I tend to disagree with Hector's and Ismael's opinion here. And after discussing this topic with several people I find my opinion reconfirmed: the Debian Security Team disputes the security impact of the issue and others agree. But leaving the disputable opinion on a sane default aside, I don't think that the cryptsetup package is the right place to change the default, if at all. If you want added security from a locked-down initramfs (i.e. no rescue shell spawned), then at least the bootloader (GNU GRUB) needs to be locked down by default as well. To make it clear: if one wants to lock down the boot process, the bootloader and the initramfs should be locked down together. And the right place to do this would be the configurable behaviour of grub-mkconfig. Here, one can set a password for GRUB and the boot parameter 'panic=1', which disables the spawning of a rescue shell in initramfs. But as mentioned, I don't agree that these would be sane defaults. The vast majority of Debian systems out there don't gain any security from a locked-down bootloader and initramfs, and the benefit of a rescue shell for debugging purposes clearly outweighs the minor security impact in my opinion. For the few setups which require the added security of a locked-down bootloader and initramfs, we already have the relevant options documented in the Securing Debian Manual. After discussing the topic with the initramfs-tools maintainers today, Guilhem and I (the cryptsetup maintainers) finally decided not to change any defaults and to just add a 'sleep 60' after the maximum allowed number of attempts has been reached.
2. tries=n option ignored, local brute-force slightly cheaper Apart from the issue of a rescue shell being spawned, Hector and Ismael also discovered a programming bug in the cryptsetup initramfs integration. This bug in the cryptroot initramfs local-top script allowed endless retries of passphrase input, ignoring the tries=n option of crypttab (and the default of 3). As a result, theoretically unlimited attempts to unlock encrypted disks were possible when processed during the initramfs stage. The attack vector here was that local brute-force attacks become a bit cheaper: instead of having to reboot after the maximum number of tries is reached, one can simply keep trying passwords. Even though efficient brute-force attacks are mitigated by the PBKDF2 implementation in cryptsetup (a cost sketch follows the list below), this clearly is a real bug. The reason for the bug was twofold:
  • First, the condition in setup_mapping() responsible for making the function fail when the maximum number of allowed attempts is reached was never met:
    setup_mapping()
    {
        [...]
        # Try to get a satisfactory password $crypttries times
        count=0
        while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
            export CRYPTTAB_TRIED="$count"
            count=$(( $count + 1 ))
            [...]
        done
        if [ $crypttries -gt 0 ] && [ $count -gt $crypttries ]; then
            message "cryptsetup: maximum number of tries exceeded for $crypttarget"
            return 1
        fi
        [...]
    }
    As one can see, the while loop exits as soon as $count reaches $crypttries, so the second condition $count -gt $crypttries is never met. This can easily be fixed by decreasing $count by one in case of a successful unlock attempt, along with changing the second condition to $count -ge $crypttries:
    setup_mapping()
    {
        [...]
        while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
            [...]
            # decrease $count by 1, apparently last try was successful.
            count=$(( $count - 1 ))
            [...]
        done
        if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then
            [...]
        fi
        [...]
    }
    
    Christian Lamparter already spotted this bug back in October 2011 and provided an (incomplete) patch, but back then I even managed to merge the patch in an improper way, making it even more useless: the patch by Christian forgot to decrease $count by one in case of a successful unlock attempt, resulting in warnings about maximum tries exceeded even for successful attempts in some circumstances. But instead of adding the decrease myself and keeping the (almost correct) condition $count -eq $crypttries for detection of exceeded maximum tries, I changed the condition back to the wrong original $count -gt $crypttries that again was never met. Apparently I didn't test the fix properly back then. I definitely should do better in the future!
  • Second, back in December 2013, I added a cryptroot initramfs local-block script, as suggested by Goswin von Brederlow, in order to fix bug #678692. The purpose of the cryptroot initramfs local-block script is to invoke the cryptroot initramfs local-top script again and again in a loop. This is required to support complex block-device stacks. In fact, the numberless options of stacked block devices are one of the biggest and most inglorious reasons that the cryptsetup initramfs integration scripts became so complex over the years. After all, we need to support setups like a root filesystem on top of LVM with two separate encrypted PVs, or a root filesystem on top of LVM on top of dm-crypt on top of MD RAID. The problem with the local-block script is that exiting the setup_mapping() function merely triggers a new invocation of the very same function. The researchers who discovered the bug suggested a simple and good solution: when maximum attempts are detected (by the second condition from above), the script sleeps for 60 seconds. This mitigates the brute-force options for local attackers: even rebooting after the maximum number of attempts would be faster.
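To illustrate why PBKDF2 makes each unlock attempt expensive, here is a minimal sketch (not cryptsetup's actual code) using OpenSSL's PKCS5_PBKDF2_HMAC; the salt and iteration count are hypothetical stand-ins, whereas LUKS stores a real salt per keyslot and benchmarks the iteration count so that a single guess takes on the order of a second.
/* pbkdf2_cost.c -- sketch of the per-guess cost imposed by PBKDF2.
 * Not cryptsetup's code; salt and iteration count are hypothetical.
 * Build: cc pbkdf2_cost.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <openssl/evp.h>

int main(void)
{
    const char *guess = "not-the-passphrase";
    unsigned char salt[32] = {0};   /* hypothetical; LUKS stores one per keyslot */
    unsigned char key[32];
    int iterations = 500000;        /* hypothetical; LUKS benchmarks this value */

    clock_t t0 = clock();
    PKCS5_PBKDF2_HMAC(guess, strlen(guess), salt, sizeof(salt),
                      iterations, EVP_sha256(), sizeof(key), key);
    clock_t t1 = clock();

    /* every wrong passphrase pays this full derivation cost again */
    printf("one guess took %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}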

About disclosure, wording and clickbaiting I'm happy that Hector and Ismael brought up the topic and made their argument about the security impact of an initramfs rescue shell, even though I have to admit that I was rather astonished that they got a CVE assigned. Nevertheless I'm very happy that they informed the Security Teams of Debian and Ubuntu prior to publishing their findings, which put me in the loop in turn. Also, Hector and Ismael were open and responsive when it came to discussing their proposed fixes. But unfortunately the way they advertised their finding was not very helpful. They announced a talk about this topic at DeepSec 2016 in Vienna with the headline Abusing LUKS to Hack the System. Honestly, this headline is misleading, if not wrong, in several ways:
  • First, the whole issue is not about LUKS, nor is it about cryptsetup itself. It's about Debian's integration of cryptsetup into the initramfs, which is a completely different story.
  • Second, the term hack the system suggests that an exploit to break into the system is revealed. This is not true. The device encryption is not endangered at all.
  • Third, as shown above, very special prerequisites need to be met in order for the mere existence of a LUKS-encrypted device to become the relevant factor in being able to spawn a rescue shell during initramfs.
Unfortunately, the way this issue was published led to even worse articles in the tech news press. Headlines like Major security hole found in Cryptsetup script for LUKS disk encryption or Linux Flaw allows Root Shell During Boot-Up for LUKS Disk-Encrypted Systems suggest that a major security vulnerability was revealed and that it compromised the protection that cryptsetup and LUKS offer. If these articles did anything at all, it was causing damage to the cryptsetup project, which is not affected by the whole issue at all. After the cat was out of the bag, Marco and Ismael agreed that the way the news picked up the issue was suboptimal, but I cannot fight the feeling that the over-exaggeration was partly intended and that clickbaiting is taking place here. That's a bit sad.

3 December 2016

Vincent Bernat: Build-time dependency patching for Android

This post shows how to patch an external dependency for an Android project at build-time with Gradle. This leverages the Transform API and Javassist, a Java bytecode manipulation tool.
buildscript {
    dependencies {
        classpath 'com.android.tools.build:gradle:2.2.+'
        classpath 'com.android.tools.build:transform-api:1.5.+'
        classpath 'org.javassist:javassist:3.21.+'
        classpath 'commons-io:commons-io:2.4'
    }
}
Disclaimer: I am not a seasoned Android programmer, so take this with a grain of salt.

Context This section adds some context to the example. Feel free to skip it. Dashkiosk is an application to manage dashboards on many displays. It provides an Android application you can install on one of those cheap Android sticks. Under the hood, the application is an embedded webview backed by the Crosswalk Project web runtime, which brings an up-to-date web engine, even for older versions of Android [1]. Recently, a security vulnerability has been spotted in how invalid certificates were handled. When a certificate cannot be verified, the webview defers the decision to the host application by calling the onReceivedSslError() method:
Notify the host application that an SSL error occurred while loading a resource. The host application must call either callback.onReceiveValue(true) or callback.onReceiveValue(false). Note that the decision may be retained for use in response to future SSL errors. The default behavior is to pop up a dialog.
The default behavior is specific to the Crosswalk webview: the Android built-in one just cancels the load. Unfortunately, the fix applied by Crosswalk is different and, as a side effect, the onReceivedSslError() method is not invoked anymore [2]. Dashkiosk comes with an option to ignore TLS errors [3]. The mentioned security fix breaks this feature. The following example will demonstrate how to patch Crosswalk to recover the previous behavior [4].

Simple method replacement Let's replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:
// In SslUtil class
public static boolean shouldDenyRequest(int error) {
    return false;
}

Transform registration The Gradle Transform API enables the manipulation of compiled class files before they are converted to DEX files. To declare a transform and register it, include the following code in your build.gradle:
import com.android.build.api.transform.Context
import com.android.build.api.transform.QualifiedContent
import com.android.build.api.transform.Transform
import com.android.build.api.transform.TransformException
import com.android.build.api.transform.TransformInput
import com.android.build.api.transform.TransformOutputProvider
import org.gradle.api.logging.Logger

class PatchXWalkTransform extends Transform {
    Logger logger = null;

    public PatchXWalkTransform(Logger logger) {
        this.logger = logger
    }

    @Override
    String getName() {
        return "PatchXWalk"
    }

    @Override
    Set<QualifiedContent.ContentType> getInputTypes() {
        return Collections.singleton(QualifiedContent.DefaultContentType.CLASSES)
    }

    @Override
    Set<QualifiedContent.Scope> getScopes() {
        return Collections.singleton(QualifiedContent.Scope.EXTERNAL_LIBRARIES)
    }

    @Override
    boolean isIncremental() {
        return true
    }

    @Override
    void transform(Context context,
                   Collection<TransformInput> inputs,
                   Collection<TransformInput> referencedInputs,
                   TransformOutputProvider outputProvider,
                   boolean isIncremental) throws IOException, TransformException, InterruptedException {
        // We should do something here
    }
}

// Register the transform
android.registerTransform(new PatchXWalkTransform(logger))
The getInputTypes() method should return the set of types of data consumed by the transform. In our case, we want to transform classes. Another possibility is to transform resources. The getScopes() method should return a set of scopes for the transform. In our case, we are only interested in the external libraries. It's also possible to transform our own classes. The isIncremental() method returns true because we support incremental builds. The transform() method is expected to take all the provided inputs and copy them (with or without modifications) to the location supplied by the output provider. We didn't implement this method yet. This causes the removal of all external dependencies from the application.

Noop transform To keep all external dependencies unmodified, we must copy them:
@Override
void transform(Context context,
               Collection<TransformInput> inputs,
               Collection<TransformInput> referencedInputs,
               TransformOutputProvider outputProvider,
               boolean isIncremental) throws IOException, TransformException, InterruptedException {
    inputs.each {
        it.jarInputs.each {
            def jarName = it.name
            def src = it.getFile()
            def dest = outputProvider.getContentLocation(jarName,
                                                         it.contentTypes, it.scopes,
                                                         Format.JAR);
            def status = it.getStatus()
            if (status == Status.REMOVED) { // ❶
                logger.info("Remove ${src}")
                FileUtils.delete(dest)
            } else if (!isIncremental || status != Status.NOTCHANGED) { // ❷
                logger.info("Copy ${src}")
                FileUtils.copyFile(src, dest)
            }
        }
    }
}
We also need two additional imports:
import com.android.build.api.transform.Status
import org.apache.commons.io.FileUtils
Since we are handling external dependencies, we only have to manage JAR files. Therefore, we only iterate on jarInputs and not on directoryInputs. There are two cases when handling an incremental build: either the file has been removed (❶) or it has been modified (❷). In all other cases, we can safely assume the file is already correctly copied.

JAR patching When the external dependency is the Crosswalk JAR file, we also need to modify it. Here is the first part of the code (replacing ❷):
if ("$ src " ==~ ".*/org.xwalk/xwalk_core.*/classes.jar")  
    def pool = new ClassPool()
    pool.insertClassPath("$ src ")
    def ctc = pool.get('org.xwalk.core.internal.SslUtil') //  
    def ctm = ctc.getDeclaredMethod('shouldDenyRequest')
    ctc.removeMethod(ctm) //  
    ctc.addMethod(CtNewMethod.make("""
public static boolean shouldDenyRequest(int error)  
    return false;
 
""", ctc)) //  
    def sslUtilBytecode = ctc.toBytecode() //  
    // Write back the JAR file
    //  
  else  
    logger.info("Copy $ src ")
    FileUtils.copyFile(src, dest)
 
We also need the following additional imports to use Javassist:
import javassist.ClassPath
import javassist.ClassPool
import javassist.CtNewMethod
Once we have located the JAR file we want to modify, we add it to our classpath and retrieve the class we are interested in (❶). We locate the appropriate method and delete it (❷). Then, we add our custom method using the same name (❸). The whole operation is done in memory. We retrieve the bytecode of the modified class in ❹. The remaining step is to rebuild the JAR file:
def input = new JarFile(src)
def output = new JarOutputStream(new FileOutputStream(dest))
// ❶
input.entries().each {
    if (!it.getName().equals("org/xwalk/core/internal/SslUtil.class")) {
        def s = input.getInputStream(it)
        output.putNextEntry(new JarEntry(it.getName()))
        IOUtils.copy(s, output)
        s.close()
    }
}
// ❷
output.putNextEntry(new JarEntry("org/xwalk/core/internal/SslUtil.class"))
output.write(sslUtilBytecode)
output.close()
We need the following additional imports:
import java.util.jar.JarEntry
import java.util.jar.JarFile
import java.util.jar.JarOutputStream
import org.apache.commons.io.IOUtils
There are two steps. In ❶, all classes are copied to the new JAR, except the SslUtil class. In ❷, the modified bytecode for SslUtil is added to the JAR. That's all! You can view the complete example on GitHub.

More complex method replacement In the above example, the new method doesn't use any external dependency. Let's suppose we also want to replace the sslErrorFromNetErrorCode() method from the same class with the following one:
import org.chromium.net.NetError;
import android.net.http.SslCertificate;
import android.net.http.SslError;

// In SslUtil class
public static SslError sslErrorFromNetErrorCode(int error,
                                                SslCertificate cert,
                                                String url) {
    switch(error) {
        case NetError.ERR_CERT_COMMON_NAME_INVALID:
            return new SslError(SslError.SSL_IDMISMATCH, cert, url);
        case NetError.ERR_CERT_DATE_INVALID:
            return new SslError(SslError.SSL_DATE_INVALID, cert, url);
        case NetError.ERR_CERT_AUTHORITY_INVALID:
            return new SslError(SslError.SSL_UNTRUSTED, cert, url);
        default:
            break;
    }
    return new SslError(SslError.SSL_INVALID, cert, url);
}
The major difference with the previous example is that we need to import some additional classes.

Android SDK import The classes from the Android SDK are not part of the external dependencies. They need to be imported separately. The full path of the JAR file is:
androidJar = "$ android.getSdkDirectory().getAbsolutePath() /platforms/" +
             "$ android.getCompileSdkVersion() /android.jar"
We need to load it before adding the new method into SslUtil class:
def pool = new ClassPool()
pool.insertClassPath(androidJar)
pool.insertClassPath("${src}")
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
// …

External dependency import We must also import org.chromium.net.NetError and therefore, we need to put the appropriate JAR in our classpath. The easiest way is to iterate through all the external dependencies and add them to the classpath.
def pool = new ClassPool()
pool.insertClassPath(androidJar)
inputs.each {
    it.jarInputs.each {
        def jarName = it.name
        def src = it.getFile()
        def status = it.getStatus()
        if (status != Status.REMOVED) {
            pool.insertClassPath("${src}")
        }
    }
}
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
pool.importPackage('org.chromium.net.NetError');
ctc.addMethod(CtNewMethod.make("…"))  // method body elided, as shown above
// Then, rebuild the JAR...
Happy hacking!

  1. Before Android 4.4, the webview was severely outdated. Starting from Android 5, the webview is shipped as a separate component with updates. Embedding Crosswalk is still convenient as you know exactly which version you can rely on.
  2. I hope to have this fixed in later versions.
  3. This may seem harmful and you are right. However, if you have an internal CA, it is currently not possible to provide your own trust store to a webview. Moreover, the system trust store is not used either. You also may want to use TLS for authentication only with client certificates, a feature supported by Dashkiosk.
  4. Crosswalk being an open-source project, an alternative would have been to patch the Crosswalk source code and recompile it. However, Crosswalk embeds Chromium and recompiling the whole thing consumes a lot of resources.

20 August 2016

Francois Marier: Replacing a failed RAID drive

Translation of the original English article https://feeding.cloud.geek.nz/posts/replacing-a-failed-raid-drive/. Here is the procedure I followed to replace a failed RAID drive on a Debian machine.

Replace the drive After noticing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:
smartctl -a /dev/sdb
With this information in hand, I shut down the computer, removed the faulty drive and put a new blank drive in its place.

Initialize the new drive After booting with the new blank drive in, I copied the partition table using parted. First, I examined the partition table on the healthy drive:
$ parted /dev/sda
unit s
print
and created a new partition table on the replacement drive:
$ parted /dev/sdb
unit s
mktable gpt
Then I used the mkpart command for my 4 partitions and gave them all the same size as the matching partitions on /dev/sda. Finally, I used the commands toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) on all the RAID partitions, before checking with the print command that the two partition tables were now identical.

Resync/recreate the RAID arrays To sync the data from the healthy drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following commands on my RAID1 partitions:
mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4
and kept an eye on the sync status with:
watch -n 2 cat /proc/mdstat
To speed up the process, I used the following trick:
blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max
Then I recreated my RAID0 swap partition as follows:
mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1
Because the swap partition is brand new (it is not possible to restore a RAID0 partition, it has to be re-created from scratch), I had to do two things:
  • replace the swap UUID in /etc/fstab with the UUID given by the mkswap command (or by using the blkid command and taking the UUID for /dev/md1)
  • replace the UUID of /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by the mdadm --detail --scan command

Making sure the machine can boot with the replacement drive To be certain that the machine can boot from either of the two drives, I reinstalled the grub boot loader onto the new drive:
grub-install /dev/sdb
before rebooting with both drives connected. This confirmed that my configuration works correctly. Then I booted without the /dev/sda drive to make sure that everything would be fine should that drive decide to die and leave me with only the new one (/dev/sdb). This test obviously breaks the synchronization between the two drives, so I had to reboot with both drives connected and then re-add /dev/sda to all the RAID1 arrays:
mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4
Once all that was done, I rebooted once more with both drives to confirm that everything works:
cat /proc/mdstat
and then ran a full SMART test on the new drive:
smartctl -t long /dev/sdb

31 July 2016

Hideki Yamane: another apt proxy tool: "go-apt-cacher" and "go-apt-mirror"

Recently I attended the Tokyo Debian meeting at Cybozu, Inc. in Nihonbashi, Tokyo.

People from Cybozu introduced their products, "go-apt-cacher" and "go-apt-mirror".

They said that apt-cacher-ng and apt-mirror have some problems and that their products solve them. They have put them into their production environment (with thousands of Ubuntu servers) and they work well, so people who use apt proxy tools may be interested in them (ping Vasudev Kamath :-).

If this sounds interesting to you, please give a comment via Twitter (@ymmt2005) or at their GitHub repo (or help to package them and get them into the official repo :-).


23 July 2016

Francois Marier: Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:
smartctl -a /dev/sdb
Armed with this information, I shut down the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive After booting with the new blank drive in, I copied the partition table using parted. First, I took a look at what the partition table looks like on the good drive:
$ parted /dev/sda
unit s
print
and created a new empty one on the replacement drive:
$ parted /dev/sdb
unit s
mktable gpt
then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda. Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.

Resync/recreate the RAID arrays To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:
mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4
and kept an eye on the status of this sync using:
watch -n 2 cat /proc/mdstat
In order to speed up the sync, I used the following trick:
blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max
Then, I recreated my RAID0 swap partition like this:
mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1
Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:
  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan

Ensuring that I can boot with the replacement drive In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:
grub-install /dev/sdb
before rebooting with both drives to first make sure that my new config works. Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb). This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:
mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4
Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:
cat /proc/mdstat
Then I ran a full SMART test over the new replacement drive:
smartctl -t long /dev/sdb
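Once the long test completes (smartctl prints an estimated duration when the test starts), the self-test log and the overall health verdict can be checked with:
smartctl -l selftest /dev/sdb
smartctl -H /dev/sdb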

20 July 2016

Daniel Pocock: How many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo. If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between.

Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes.
As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.
Can you escape your mobile phone while on vacation? Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service. Can you and your family members say the same thing? What can be done?
  • Opt-out of mobile phone authentication schemes.
  • Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security as they get to know you.
See my previous blogs on SMS messaging, security and two-factor authentication, including my earlier SMS logins: an illusion of security.

22 June 2016

Andrew Cater: Why I must use Free Software - and why I tell others to do so

My work colleagues know me well as a Free/Libre software zealot, constantly pointing out to them how people should behave, how FLOSS software trumps commercial software and how this is the only way forward - and this for the last 20-odd years. It's a strain to argue this repeatedly: at various times, I have been asked to set out more clearly why I use FLOSS, what the advantages are, and why and how to contribute to FLOSS software.

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here
...
In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish."
[John Perry Barlow - Declaration of the independence of cyberspace 1996 https://www.eff.org/cyberspace-independence]

That's some of it right there: I was seduced by a modem and the opportunities it gave. I've lived in this world since 1994, come to appreciate it and never really had the occasion to regret it.

I'm involved in the Debian community - which is very much a "do-ocracy" - and I've lived with Debian GNU/Linux since 1995 and not had much cause to regret that either, though I do regret that force of circumstance has meant that I can't contribute as much as I'd like. Pretty much every machine I touch ends up running Debian, one way or the other - or would, if I had my way.
Digging through my emails on the various mailing lists since then: some are deeply technical, though fewer these days; some are Debian politics; most are trying to help people with problems, reporting successes or, occasionally, thanks and social chit-chat. Most people in the project have never met me - though that's not unusual in an organisation with a thousand developers spread worldwide - and so the occasional chance to talk to people in real life is invaluable.

The crucial thing is that there is common purpose and common intelligence - however crazy mailing list flame wars can get sometimes - and committed, caring people. Some of us may be crazy zealots, some picky and argumentative - Debian is what we have in common, pretty much.

It doesn't depend on physical ability. Espy (Joel Klecker) was one of our best and brightest until his death at age 21: almost nobody knew he was dying until after his death. My own physical limitations are pretty much irrelevant provided I can type.

It does depend on collaboration and the strange, dysfunctional family that is our community and the wider FLOSS community, in which we share and in which some of us have multiple identities, working with different projects.
This is going to end up too long for Planet Debian - I'll end this post here and then continue with some points on how to contribute and why employers should let their employees work on FLOSS.




1 June 2016

Keerthana Krishnan: Dealing with the Light Blue Screen Of Doom

Hi! With my third consecutive post of the day, I'd like to speak about the login error usually seen on Gnome desktops, aka the "Light Blue Screen of Doom". As far as I know, this bug is associated with Gnome and is usually caused by a faulty installation of Gnome files; it tormented me endlessly last week when I tried to install Debian 8 (Jessie). Unfortunately, I never really fixed it myself. What started off as a problem with Gnome quickly snowballed into something else that wiped my routing table, making sure I couldn't even update the corrupted files properly. Although I came across a lot of bug reports in my quest to solve it, I never really found a solution.
Reinstallation seems to be the only option left here, but there are a few checks you can make first:
  1. Call up a terminal. Press Ctrl+Alt+F2 and you should get a command line.
  2. Log in as root. This is extreme troubleshooting; you need root.
  3. Check and see if startx works. (Yay! Problem solved if it does!)
  4. See whether the basic Gnome packages are installed: run apt-get install with basic Gnome packages like gnome-desktop-environment, gdm, gnome, etc. to check. If they are missing, try to install them and then run the startx command.
  5. If you can't install them, or if they are already installed, try updating the system:

    apt-get update
    apt-get upgrade
    apt-get dist-upgrade   # if OS is at newest version
  6. If updating doesn't work and gives "Failed to Fetch" errors for all the source links, try pinging the network to make sure you have a connection:

    ping google.com
    ping 8.8.8.8
  7. If you do have a connection, verify the sources:

    nano /etc/apt/sources.list
    deb http://ftp.us.debian.org/debian jessie main contrib non-free
    #deb-src http://ftp.us.debian.org/debian jessie main contrib non-free
    deb http://security.debian.org/ jessie/updates main contrib non-free
    #deb-src http://security.debian.org/ jessie/updates main contrib non-free
    deb http://ftp.us.debian.org/debian testing main contrib non-free
    #deb-src http://ftp.us.debian.org/debian testing main contrib non-free
    deb http://security.debian.org/ testing/updates main contrib non-free
    #deb-src http://security.debian.org/ testing/updates main contrib non-free
  8. If you have no connection, check your routing table with the command netstat -nr. If the routing table is completely empty, then I've got some bad news for you: reinstallation is your new friend (see the sketch just below for what a healthy table looks like). Try and see if you can copy your files using the command line; cp and ls are basically your new best friends.
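For reference, here is roughly what a healthy routing table looks like, and one way to try restoring a missing default route by hand (a sketch: the 192.168.1.1 gateway and eth0 interface are placeholders for your own network):
netstat -nr
# Destination   Gateway        Genmask        Flags  Iface
# 0.0.0.0       192.168.1.1    0.0.0.0        UG     eth0
# 192.168.1.0   0.0.0.0        255.255.255.0  U      eth0
route add default gw 192.168.1.1 eth0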
Here's a guide to help change your OS. Hopefully, everything works out and you're back to working in no time!

25 April 2016

Antoine Beaupr : My free software activities, April 2016

Debian Long Term Support (LTS) This is my 5th month working on Debian LTS, started by Raphael Hertzog at Freexian. This is my largest month so far, in which I worked on completing the Xen and NSS package updates from last month, but also spent a significant amount of time working on phpMyAdmin and libidn.

Updates to NSS and Xen completed This basically consisted of following up on the reviews from other security people. I continued building on Brian's work and tested the package on a test server at Koumbit, which seems to have survived the upgrade well. The details are in this post to the debian-lts mailing list. As for NSS, the package was mostly complete, but I forgot to ship the tests for some reason, so I added them back. I also wrote the release team to see if it was possible to update NSS to the same version in all suites. Unfortunately, this issue is still pending, but I still hope we can find better ways of managing that package in the long term.

IDN and phpMyAdmin Most of my time this month was spent working on IDN and phpMyAdmin. Unfortunately, it turns out that someone else had worked on the libidn package. This is partly my fault: I forgot to check in the dsa-needed.txt file for assignment before working on the package. But considering how in flux the workflow currently is with the switch between the main security team and the LTS team for the wheezy maintenance, I don't feel too bad. Still, I prepared a package which was a bit more painful than it should have been because of GNUlib. I didn't even know about GNUlib before, oddly enough, and after that experience, I feel that it should just not exist at all anymore. I have filed a bug to remove that dependency at the very least, but I do not clearly see how such a tool is necessary on Debian at this point in time. But phpMyAdmin, no one had worked on that. And I understand why: even though it's a very popular package, there were quite a few outstanding issues (8!) in Wheezy, with 10-15 patches to be ported. Plus, it's ... well, PHP. And old PHP at that, with parts of it using modern autoloader classes, others mixing HTML and PHP content, require (and not require_once), and all sorts of nasty things you still find in the PHP universe. I nevertheless managed to produce a new Debian package for wheezy and test it on Koumbit's test servers. Hopefully, that can land in Wheezy soon.

Long term software support I am a little worried that, in both Jessie and Wheezy, we are sitting in between stable releases of phpMyAdmin, something that is a recurring issue for a lot of packages in Debian stable or LTS. Sometimes, it just happens that the version that is in Debian testing when it is released as stable is not a supported release upstream. It's the case for phpMyAdmin in Jessie (4.3, whereas 4.0 and 4.4 are supported) and Wheezy (3.4, although it's unclear how long that was supported upstream). But even if the next Debian stable (Stretch) were to pick a stable release upstream, there is actually no phpMyAdmin release that has a support window as long as Debian stable (roughly 3 years), let alone as long as Debian LTS (5 years). This is a similar problem with NSS: upstream is simply not supporting their product in the long term, or at least not in the terms we are used to in the Debian community (i.e. only security fixes or regressions).

This is, in my opinion, a real concern for the reliability and sustainability of the computing infrastructure we are creating. While some developers are of the opinion that software older than 18 months is too old, we ship hardware and software into space, and we have Solaris, which is supported for 16 years! Now that is a serious commitment and something we can rely on. 18 months is really, really, a tiny short time in the history of our civilization. I know computer programmers and engineers like to think of themselves in elitist terms, that they are changing the world every other year. But the truth is that things have not changed much in the last 4 decades that computing has existed, in terms of either security or functionality. Two quotes from my little quotes collection come to mind here:
Software gets slower faster than hardware gets faster. - Wirth's law
The future is already here - it's just not very evenly distributed. - William Gibson
Because of course, the response to my claims that computing is not really advancing is "but look! we have supercomputers in our pockets now!" Yes, of course, we do have those fancy phones and they are super-connected, but they are a threat to our fundamental rights and freedom. And those glittering advances always pale in comparison to what could be done if we wouldn't be wasting our time rewriting the same software over and over again on different, often proprietary, platforms, simply because of the megalomaniac fantasies of egoistic programmers. It would be great to build something that lasts, for a while. Software that does not need to be updated every 16 months. You'd think that something as basic as a screensaver release could survive what is basically the range of human infancy. Yet it seems we like to run mindlessly in the minefield of software development, one generation following the other without questioning the underlying assumption of infinite growth that permeates our dying civilization. I have talked about this before of course, but working with the LTS project just unnerves me so bad that I just needed to let out another rant. (For the record, I really have a lot of respect for JWZ and all the work he has done in the free software world. I frequently refer to his "no-bullshit" backup guide and use Xscreensaver daily. But I do think he was wrong in this case: asking Debian to remove Xscreensaver is just too much. The response from the maintainer was exemplary of how to handle such issues in the future. I restarted Xscreensaver after the stable update, and the message is gone, and things are still all fine. Thanks JWZ and Tormod for keeping things rolling.)

Other free software work With that in mind, I obviously didn't stop doing other computing work this month. In fact, I did a lot of work to try to generally fix the internet, that endless and not-quite-gratifying hobby so many of us are destroying our bodies to accomplish.

Build tools I have experimented a bit more with different build tools. I got worried because at some point cowbuilder got orphaned and I figured I could refresh the way I build packages for LTS. I looked into sbuild, but that ended up not bringing much improvements over my current cowbuilder setup (which I really need to document better). I was asked by the new maintainer to open a bug report to make such setups easier by guessing the basepath better, so we'll see how that goes. I did enjoy the simplicity of gitpkg and discovered cowpoke which made it about 2 times faster to build packages because I could use another server to build larger packages. I also found that gitpkg doesn't use -n by default when calling gzip, which makes it harder to reproduce tarballs when they are themselves built reproducibly, which is the case for Github tarballs (really nice of them). So I filed bug #820842 about that. It would be pretty awesome if such buildds would be available for Debian Developers to do their daily tasks. It could simply be a machine that would spin up a chroot with cowbuilder or it could even be a full, temporary VM although that would take way more resources than a simple VM with a cowbuilder setup. In the meantime, I should probably look at whalebuilder as an alternative to cowbuilder. It is a tool that supports building packages within a Docker chroot, which means that packages are built from a clean environment like pbuilder, and using COW optimisations but also without privileges or network access, which is a huge plus especially when you build untrusted packages.
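The gzip issue is easy to demonstrate (a minimal sketch, not taken from the bug report): when gzip reads from stdin it stamps the current time into the output header, so two otherwise identical compressions differ, while -n omits the name and timestamp and makes the output reproducible:
tar -cf foo.tar somedir/
gzip -c < foo.tar > a.gz ; sleep 1 ; gzip -c < foo.tar > b.gz
md5sum a.gz b.gz      # differ: each header embeds the compression time
gzip -nc < foo.tar > c.gz ; gzip -nc < foo.tar > d.gz
md5sum c.gz d.gz      # identical: -n drops the name and timestamp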

Ikiwiki admonitions I have done some work to implement MoinMoin-like admonitions in Ikiwiki, something I am quite happy about since it's a feature I really missed from MoinMoin. Admonitions bring a really nice way to outline certain blocks with varying severity levels and distinct styles. For example:
Admonitions are great!
This was done with a macro, but basically, since Markdown allows more or less arbitrary HTML, this can also be done with the <div> tag. I like that we don't have a new weird markup here. Yet I still liked the sub-parser feature of MoinMoin, something that can be implemented in Ikiwiki, but it's a little more clunky. Normally, you'd probably do this with the inline macro and subpages, but it's certainly less intuitive than directly inlined content.
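For example, something like this in a Markdown page renders as an outlined block (a sketch: the "warning" class name is just an example and needs a matching rule in the wiki's local CSS):
<div class="warning">
Warning: admonitions are great!
</div>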

Ereader I got a new e-reader! I hesitated between the Kobo Aura H2O and the Kobo Glo HD, which were the ones available at Best Buy, so I bought both and figured I would return one of them. That was a painful decision! In the end, both machines are pretty nice:
  • Aura H2O
    • Pros:
      • waterproof
      • larger screen (makes it easier to see web pages and better for the eyes)
      • naturally drawn to it
    • Cons:
      • heavier
      • larger (fits in outside pocket though)
      • port cover finicky
      • more expensive (180$) - prices may go down in future
  • Glo HD
    • Pros:
      • smaller (fits in inside pocket of both coats)
      • better resolution in theory (can't notice in practice)
      • cheaper (100$)
      • may be less common in the future (larger models more common? just a guess)
    • Cons:
      • no SD card
      • smaller screen
      • power button in the middle
... but in the end I settled on the Glo, mostly for the price. Heck, I saved around 100$, so for that amount, I could have gotten two machines so that if one breaks I would still have the other. I ordered a cover for it on Deal Extreme about two weeks ago, and it still hasn't arrived; I suspect it's going to take a few more months to get here, by which point I may have changed e-reader again. Note that both e-readers needed an update to calibre, so I started working on a calibre backport (#818309) which I will complete soon.

So anyways, I looked into better ways of transferring articles from the web to the e-reader, something which I do quite a bit to avoid spending too much time on the computer. Since the bookmark manager I use (Bookie) is pretty much dead, I started looking at other alternatives. And partly inspired by Framasoft's choice of Wallabag for their bookmarking service (Framabag), I started to look into that software, especially since my friend who currently runs the Bookie instance is thinking of switching to Wallabag as well. It seems the simplest way to browse articles remotely through a standard protocol is by implementing OPDS support in Wallabag. OPDS is a standard developed in part by the Internet Archive and it allows for browsing book collections and downloading them. Articles and bookmarks would then be presented as individual books accessible from any OPDS-compatible device.

Unfortunately, the Kobo e-readers don't support OPDS out of the box: you need to set up an OPDS-compatible reader like Koreader. And that I found nearly impossible to do: I was able to set up KSM (the "start menu", not to be confused with that KSM), but not Koreader in KSM. Besides, I do not want a separate OS running on the tablet: I just want to start Koreader every once in a while. KSM just starts another system when you reboot the e-reader, which is not really convenient on the Kobo. Basically, I just want to add Koreader as a tile in the home screen on the e-reader. I found the documentation on that topic to be sparse and hard to follow; it is often dispersed across multiple forum threads and involves uploading random binaries, often proprietary, to the e-reader. It had been a long time since I was asked to "register my software" frenetically, and I hadn't missed that one bit. So I decided to stay away from this until that solution and its documentation mature a bit.

Streaming re-established I have worked a bit on my home radio stream. I simply turned the Liquidsoap stream back online, and did a few tweaks to the documentation that I had built back then. All that experimenting led me to do two NMUs. One was for gmpc-plugins to fix a FTBFS (bug #807735) and to properly kill the shout streamer when completing the playback (bug #820908). The other was to fix the ezstream manpage (bug #573928), a patch that had been sitting there for 5 years!

This was to try to find an easy way to stream random audio (say, from a microphone) to the Icecast server, something which is surprisingly difficult considering how basic that functionality is. I was surprised to see that Darkice just completely fails to start (bug #821040) and I had to fall back to the simpler ices2 software to stream the audio.

I am still having issues with Liquidsoap: it's really unstable! As a server, I would expect it to keep running for days if not years. Unfortunately, there's always something that makes it crash. I had an assertion failure (bug #821112) and I keep seeing it crash after 2-3 days fairly reliably, a bug I reported 3 years ago that is still not fixed (bug #727307). Switching the stream back to Vorbis (because I was having problems with the command-line MP3 players, and ogg123 is much more lightweight) created another set of problems too, this time with the phone: it seems that Android cannot stream Vorbis at all, something that is even worse in Cyanogenmod... I also had to tweak my MPD config to make the Android client able to load the larger playlists (see dmix buffer is full).
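For the curious, a minimal Liquidsoap script for this kind of stream looks roughly like this (a sketch modelled on the Liquidsoap quickstart, not my actual configuration; the host, password, mount point and music path are placeholders):
# play a local music folder, with a safe fallback if the source fails
radio = mksafe(playlist("/srv/music"))
# send it to a local Icecast server as Ogg/Vorbis
output.icecast(%vorbis, host = "localhost", port = 8000,
  password = "hackme", mount = "radio.ogg", radio)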

Android apps So I have also done quite a bit of work again on my phone. I finally was told how to access Termux from adb shell, which is pretty cool because now I can start a screen session on my phone and then, when I'm tired of tapping to type, just reconnect to it when I plug in a USB cable on my laptop. I sent a pull request to fix the documentation regarding that.

I also tried to see how my SMS and Signal situation could be improved. Right now, I have two different apps to do SMS on my phone: I use both Signal and the VoIP.ms SMS client, because I do not have a contract or SIM card in my phone. Both work well independently, but it's somewhat annoying to have to switch between the two. (In fact, I feel that Signal itself has an issue with how it depends on the network to send encrypted messages: I often have to "ping" people in clear text (and therefore in the other app) so that they connect to their data plan to fetch my "secure" Signal messages...)

Anyways, I figured it would be nice if at least there were an SMS fallback in Signal that would allow me to send regular text messages from Signal through VoIP.ms. That was dismissed really fast. Moxie even closed the conversation completely, something I had never experienced before, and which doesn't feel exactly pleasant. The VoIP.ms side was of course not impressed and basically shrugged it off because Signal was not receptive. I also tried to clear up the Libresignal confusion: there are 3 different projects named "Libresignal", and I am not sure I figured out the right thing, even though the issue is now closed and there's a FAQ that is supposed to make all that clear. Nevertheless, I opened two distinct feature requests to try to steer the conversation in a more productive direction: GCM-less calls and GCM support. But I'm not sure either will go anywhere.

In the meantime, I am using the official Signal client, which I downloaded using gplaycli and which I keep up to date with Aptoide. Even though the reliability of that site is definitely questionable, it seems that once you have a trusted version, it is safe to upgrade, regardless of the source, because there is a trust path between you and the developer. I also filed a few issues with varying levels of success/response from the community:

Random background radiation And then there's of course the monthly background noise of all the projects I happened to not only stumble on, but also file bugs or patches against:

16 April 2016

Scott Kitterman: Future of secure systems in the US

As a rule, I avoid writing publicly on political topics, but I'm making an exception. In case you haven't been following it, the senior Republican and the senior Democrat on the Senate Intelligence Committee recently announced a legislative proposal misleadingly called the Compliance with Court Orders Act of 2016. The full text of the draft can be found here. It would effectively ban devices and software in the United States that the manufacturer cannot retrieve data from. Here is a good analysis of the breadth of the proposal and a good analysis of the bill itself. While complying with court orders might sound great in theory, in practice this means these devices and software will be insecure by design. While that's probably reasonably obvious to most normal readers here, don't just take my word for it, take Bruce Schneier's.

In my opinion, policy makers (and it's not just in the United States) are suffering from a perception gap about security and how technically hard it is to get right. It seems to me that they are convinced that technologists could just do security right while still allowing some level of extraordinary access for law enforcement if they only wanted to. We've tried this before and the story never seems to end well. This isn't a complaint from wide-eyed radicals that such extraordinary access is morally wrong or inappropriate. It's hard-core technologists saying it can't be done. I don't know how to get the message across. Here's President Obama, in my opinion, completely missing the point when he equates a desire for security with fetishizing our phones above every other value. Here are some very smart people trying very hard to be reasonable about some mythical middle ground. As Riana Pfefferkorn's analysis that I linked in the first paragraph discusses, this middle ground doesn't exist and all the arm waving in the world by policy makers won't create it.

Coincidentally, this same week, the White House announced a new Commission on Enhancing National Cybersecurity. Cybersecurity is certainly something we could use more of; unfortunately Congress seems to be heading off in the opposite direction and no one from the executive branch has spoken out against it. Security and privacy are important to many people. Given the personal and financial importance of data stored in computers (traditional or mobile), users don't want criminals to get a hold of it. Companies know this, which is why both Apple iOS and Google Android now encrypt their local file systems by default. If a bill anything like what's been proposed becomes law, users that care about security are going to go elsewhere. That may end up being non-US companies' products, or US companies may shift operations to localities more friendly to secure design. Either way, the US tech sector loses. A more accurate title would have been the Technology Jobs Off-Shoring Act of 2016.

EDIT: Fixed a typo.

31 March 2016

Chris Lamb: Free software activities in March 2016

Here is my monthly update covering a large part of what I have been doing in the free software world (previously):
Debian
  • Presented Reproducible Builds - fulfilling the original promise of free software at FOSSASIA '16.
  • Uploaded libfiu (0.94-4), adding a patch from Logan Rose to fix a FTBFS with ld --as-needed.
My work in the Reproducible Builds project was also covered in more depth in Lunar's weekly reports (#44, #45, #46, #47).
LTS

This month I have been paid to work 7 hours on Debian Long Term Support (LTS). The LTS team will take over support from the Security Team on April 26, 2016; in the meantime, I did the following:
  • Archived the squeeze distribution (via the FTPteam).
  • Assisted in preparing updates for python-django.
  • Helped end-users migrate to wheezy now that squeeze LTS has reached end-of-life.


FTP Team

As a Debian FTP assistant I ACCEPTed 143 packages: acme-tiny, berkshelf-api, circlator, cloud-utils, corsix-th, cronic, diaspora-installer, dub, dumb-init, firehol, firetools, flask-bcrypt, flask-oldsessions, flycheck, ganeti, geany-plugins, git-build-recipe, git-phab, gnome-shell-extension-caffeine, gnome-shell-extension-mediaplayer, golang-github-cheggaaa-pb, golang-github-coreos-ioprogress, golang-github-cyberdelia-go-metrics-graphite, golang-github-cznic-ql, golang-github-elazarl-goproxy, golang-github-hashicorp-hil, golang-github-mitchellh-go-wordwrap, golang-github-mvdan-xurls, golang-github-paulrosania-go-charset, golang-github-xeipuuv-gojsonreference, golang-github-xeipuuv-gojsonschema, grilo-plugins, gtk3-nocsd, herisvm, identity4c, lemonldap-ng, libisal, libmath-gsl-perl, libmemcached-libmemcached-perl, libplack-middleware-logany-perl, libplack-middleware-logwarn-perl, libpng1.6, libqmi, librdf-generator-http-perl, libtime-moment-perl, libvirt-php, libxml-compile-soap-perl, libxml-compile-wsdl11-perl, linux, linux-tools, mdk-doc, mesa, mpdecimal, msi-keyboard, nauty, node-addressparser, node-ansi-regex, node-argparse, node-array-find-index, node-base62, node-co, node-component-consoler, node-crypto-cacerts, node-decamelize, node-delve, node-for-in, node-function-bind, node-generator-supported, node-invert-kv, node-json-localizer, node-normalize-git-url, node-nth-check, node-obj-util, node-read-file, node-require-dir, node-require-main-filename, node-seq, node-starttls, node-through, node-uid-number, node-uri-path, node-url-join, node-xmlhttprequest-ssl, ocrmypdf, octave-netcdf, open-infrastructure-container-tools, osmose-emulator, pdal, pep8, pg-backup-ctl, php-guzzle, printrun, pydocstyle, pysynphot, python-antlr3, python-biom-format, python-brainstorm, python-django-adminsortable, python-feather-format, python-gevent, python-lxc, python-mongoengine, python-nameparser, python-pdal, python-pefile, python-phabricator, python-pika-pool, python-pynlpl, python-qtawesome, python-requests-unixsocket, python-saharaclient, python-stringtemplate3, r-cran-adegraphics, r-cran-assertthat, r-cran-bold, r-cran-curl, r-cran-data.table, r-cran-htmltools, r-cran-httr, r-cran-lazyeval, r-cran-mcmc, r-cran-openssl, r-cran-pbdzmq, r-cran-rncl, r-cran-uuid, rawtran, reel, ruby-certificate-authority, ruby-rspec-pending-for, ruby-ruby-engine, ruby-ruby-version, scribus-ng, specutils, symfony, tandem-mass, tdb, thrift, udfclient, vala, why3, wmaker, xdg-app & xiccd.

7 March 2016

Hideki Yamane: Debian meeting in Tokyo (2016 March)

On 5th March, we held a Debian meeting in Tokyo. Cybozu, Inc. kindly provided their office for the meeting - thanks to the folks at Cybozu.



This time, three talks were given:

    • Kentaro is the upstream author of "groonga" (an open-source fulltext search engine and column store) and its package maintainer in Debian and Fedora. His talk was about his experience using a Debian "porterbox" as a non-DD.
  • "Porting Debian to tilegx" by @wskoka
    • About his experience porting Debian to tilegx, a multicore processor family by Tilera. He is not a porting expert ("I'm a sales person", he said :-), but through trial and error he now has apt working on that machine.
  • "Introduction to Debian Ports" by John Paul Adrian Glaubitz
    • Adrian comes from Germany(!) and gave a talk about debian-ports.


During the break we had some discussion and GPG keysigning, and enjoyed coffee and sweets provided by Cybozu - thanks!

And thanks to all the participants!

7 December 2015

Francois Marier: Tweaking Cookies For Privacy in Firefox

Cookies are an important part of the Web since they are the primary mechanism that websites use to maintain user sessions. Unfortunately, they are also abused by surveillance marketing companies to follow you around the Web. Here are a few things you can do in Firefox to protect your privacy.

Cookie Expiry Cookies are sent from the website to your browser via a Set-Cookie HTTP header on the response. It looks like this:
HTTP/1.1 200 OK
Date: Mon, 07 Dec 2015 16:55:43 GMT
Server: Apache
Set-Cookie: SESSIONID=65576c6c64206e6f2c657920756f632061726b636465742065686320646f2165
Content-Length: 2036
Content-Type: text/html;charset=UTF-8
When your browser sees this, it saves that cookie for the given hostname and keeps it until you close the browser. Should a site want to persist their cookie for longer, they can add an Expires attribute:
Set-Cookie: SESSIONID=65576c...; expires=Tue, 06-Dec-2016 22:38:26 GMT
in which case the browser will retain the cookie until the server-provided expiry date (which could be in a few years). Of course, that's if you don't instruct your browser to do things differently.

Third-Party Cookies So far, we've only looked at first-party cookies: the ones set by the website you visit and which are typically used to synchronize your login state with the server. There is however another kind: third-party cookies. These ones are set by the third-party resources that a page loads. For example, if a page loads JavaScript from a third-party ad network, you can be pretty confident that they will set their own cookie in order to build a profile on you and serve you "better and more relevant ads".

Controlling Third-Party Cookies If you'd like to opt out of these, you have a couple of options. The first one is to turn off third-party cookies entirely by going back into the Privacy preferences and selecting "Never" next to the "Accept third-party cookies" setting (network.cookie.cookieBehavior = 1). Unfortunately, turning off third-party cookies entirely tends to break a number of sites which rely on this functionality (for example as part of their login process). A more forgiving option is to accept third-party cookies only for sites which you have actually visited directly. For example, if you visit Facebook and log in, you will get a cookie from them. Then when you visit other sites which include Facebook widgets they will not recognize you unless you allow cookies to be sent in a third-party context. To do that, choose the "From visited" option (network.cookie.cookieBehavior = 3). In addition to this setting, you can also choose to make all third-party cookies automatically expire when you close Firefox by setting the network.cookie.thirdparty.sessionOnly option to true in about:config.

Other Ways to Limit Third-Party Cookies Another way to limit undesirable third-party cookies is to tell the browser to avoid connecting to trackers in the first place. This functionality is now built into Private Browsing mode and enabled by default. To enable it outside of Private Browsing too, simply go into about:config and set privacy.trackingprotection.enabled to true. You could also install the EFF's Privacy Badger add-on which uses heuristics to detect and block trackers, unlike Firefox tracking protection which uses a blocklist of known trackers.

My Recommended Settings On my work computer I currently use the following:
network.cookie.cookieBehavior = 3
network.cookie.lifetimePolicy = 3
network.cookie.lifetime.days = 5
network.cookie.thirdparty.sessionOnly = true
privacy.trackingprotection.enabled = true
which allows me to stay logged into most sites for the whole week (no matter how often I restart Firefox Nightly) while limiting tracking and other undesirable cookies as much as possible.
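If you would rather keep these in a file than flip them one by one in about:config, the same preferences can go into a user.js in the Firefox profile directory (a sketch; the values are exactly the ones listed above):
// user.js in the Firefox profile directory
user_pref("network.cookie.cookieBehavior", 3);
user_pref("network.cookie.lifetimePolicy", 3);
user_pref("network.cookie.lifetime.days", 5);
user_pref("network.cookie.thirdparty.sessionOnly", true);
user_pref("privacy.trackingprotection.enabled", true);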

5 November 2015

Gunnar Wolf: WTF @omarfayad Should *all* computer activity be illegal now? #LeyFayad

Last week, Senator Omar Fayad presented one of the prime examples of a poorly drafted law that, if enacted, would make basically any use of a computer illegal. And yes, even if he states this is merely a draft, it has so many factual and conceptual errors that there is no way to trust sanity can be regained at any point. Oh, and before I continue with this rant: if the topic interests you, I suggest you read the 10 key points about Ley Fayad, the worst Internet initiative in history, published by r3d.mx. [update] An English equivalent of the work at r3d, at revolution-news.com: #LeyFayad: The Worst Bill in Internet History

The full text (in Spanish, of course) of the law initiative is available at the Senate webpage; the law would be called Ley Federal para Prevenir y Sancionar los Delitos Informáticos (Federal law to prevent and punish informatic felonies). A bad name to start with, as there are many laws already in that contested area. I started reading with the preamble (Exposición de motivos), which already shows bad signs of imprecise drafting and is plagued with factual errors - for example, asserting that the real danger stems from the Web migrating to Web 2.0 (as if this migration, and not any previous one, were the problem), or stating that (quoting+translating a full paragraph):
Activities such as electronic commerce, digital journalism, advertising, and the opinions, messages or other elements written on social networks can lead to losses of patrimony, reputation, honor or professional activity for people.
He continues by stating that only 16% of countries have some kind of cybersecurity strategy (and, of course, Mexico doesn't). That... well, is very hard to believe, as Mexico has two separate police groups devoted to cybersecurity, and laws regulating everything from electronic signatures to commerce, identity, privacy, and use and abuse - a long list. Of course, as most law proposals go, it quickly decays into a dry, boring document... And I must admit I didn't fully read it, but picked here and there. I won't copy in full the note I mentioned at the beginning from r3d.mx, but will continue with some strange points, such as:
Article 16
Every person that, without the corresponding authorization or exceeding the authorization conferred, accesses, intercepts, interferes with or uses an information system, will be punished by one to eight years of prison and fined 800 to 1000 days of minimum wage
So, yes, borrowing your computer without getting explicit permission, or playing around with the options in kiosks, or tons of whatever we curious people do with systems we encounter, are grounds for jail. (And yes, fines in this country are expressed in "days of minimum wage", which currently goes at ~MX$70, or ~US$4, per day.) But it gets funkier quickly:
Article 17
Whoever fraudulently destroys, disables, damages or in any way alters the working of an informatic system or any of its components, will be punished by five to fifteen years of jail and fined up to a thousand minimum wage days. The same punishment will be given to whoever, without authorization, destroys, damages, modifies, divulges, transfers or disables information contained in any Informatic System or any of its components. The punishment will be ten to twenty years in prison and a fine of up to a thousand days of minimum wage if the effects here mentioned are achieved by the creation, introduction or fraudulent transmission, by any means, of an informatic weapon or malicious code
This law is meant to protect against cyberfelonies, if such a thing exists. However, here we are putting at risk people even for accidental equipment destructions. I dropped your portable hard disk with my elbow off the table? Accuse me of acting fraudulently, and I'm up for a serious jail time. And yes, laws are meant to be interpreted... And I don't want to be at the receiving end of this one! In this last article, Fayad mentions informatic weapons, which are defined in the preamble as any informatic program, informatic system, or in general, any device or material created or designed with the purpose of committing an informatic crime. So the very next article makes me, as it should make all of my fellow students and researchers, very uneasy:
Article 18
Whoever uses informatic weapons or malicious code will be imprisoned for two to six years, and fined 200 to 500 days of minimum wage.
Article 19
Whoever builds, distributes, or trades in informatic weapons or malicious code will be punished by three to seven years of prison and 200 to 500 days of minimum wage.
If we need to analyze malware for our classes (or for paid work, or as a hobby), we clearly fall under article 18. If we write something that can be classified as malware (without even releasing it, as an academic exercise only!), we are covered by article 19. If I give my students code that's known to be malicious (which could be as inoffensive as linking to a well-known Web comic), I'm also covered by article 19. I'll jump all the way to article 31 (reproduced only partially):
Article 31
Whoever, by any means, creates, captures, records, copies, alters, duplicates, clones or deletes the information contained in a credit or debit card (...) will be punished by 8 to 14 years of prison and 300 to 500 days of minimum wage. (...)
This clearly disincentivizes any form of e-commerce. When I try to buy anything online, I have to capture+copy my (rightfully owned) credit card data. The service provider has to copy, process and then delete said information. Any e-transaction is punished by jail! Well... But thinking about this again, maybe I shouldn't be so worried about the malware distribution issue in my classes. There are clearer and more forceful articles. Say...
Article 35
Whoever convenes, organizes, is part of, or executes a cybernetic attack, will be punished by 20 to 30 years of prison and fined 100 to 1000 days of minimum wage
Of course we have convened, organized, been part of and executed cybernetic attacks at the computer security lab at ESIME. Why would there be such a lab otherwise? Then, there are clear indications that the Senator didn't understand the topic his team was working on:
Article 37
Whoever manipulates the digital seals used by command of the public authority will be punished with 240 days of community work
Now... What is a digital seal? It's not a physical seal that keeps the doors of a business found at fault closed, but something that just proves a document is legitimate and pristine. How can I manipulate them? Of course, if the seals are MD5-based, I can easily forge them (and if SHA1-based, it seems they will soon be broken enough to no longer be considered trustworthy)... But that's about it! And there is more, lots more. I'm swamped with work, and have to get back to it. But the following chapters have a lot of potential for finding holes. PS - And yes, the only use I make of Twitter is via the headlines in my blog ;-) [update] Ley Fayad is dead, yay! \o/ The senator withdrew the proposal.

4 November 2015

Craig Sanders: Part-time sysadmin work in Melbourne?

I'm looking for a part-time Systems Administration role in Melbourne, either in a senior capacity or happy to assist an existing sysadmin or development team. I'm mostly recovered from a long illness and want to get back to work, on a part-time basis (up to 3 days per week). Preferably in the City or Inner North near public transport. I can commute further if there is scope for telecommuting once I know your systems and people, and trust has been established. If you have a suitable position available or know of someone who does, please contact me by email. Why hire me? Pros:
  • I have over 20 years experience working with linux and unix
  • Over 30 years in IT, tech support, sysadmin type roles
  • I have excellent problem-solving skills
  • I have excellent English language communication skills
  • I can only work part time, so you get a senior sysadmin at a discount part-time price, and I'm able to get things done both quickly and correctly.
  • My programming strengths are in systems administration and automation.
  • I'm a real-life cyborg (at least part-time, on dialysis)
Cons:
  • My programming weaknesses are in applications development.
  • I can't travel at all. I have to dialyse every 2nd night.
  • I can't work late (except via telecommute)
  • I can't drink alcohol, not even beer.
  • I am on the transplant waiting list. At some time in the next few years I might get a phone call from the hospital and have to drop everything with 1 or 2 hours notice and be out of action for a few weeks.
Full CV available on request. I'm in the top few percent on the ServerFault and Unix & Linux StackExchange sites; if you want a preview of my problem-solving and technical communication skills, see my profile at: http://unix.stackexchange.com/users/7696/cas?tab=profile CV Summary: Systems Administrator and Programmer with extensive exposure to a wide variety of hardware and software systems. Excellent fault-diagnosis, problem-solving and system design skills. Strong technical/user support background. Ability to communicate technical concepts clearly to non-IT people. Significant IT Management and supervisory experience. Particular Skills
  • Unix, Linux
  • Internet-based Services and Security
  • Systems & Network Administration
  • Virtualisation: Openstack, Libvirt, KVM, Xen, VMware
  • HPC Cluster: Slurm, Torque, OpenMPI, pdsh
  • Perl, Python, shell, awk, sed, etc. scripting for systems automation
  • Data extraction/conversion, processing, and reporting (incl. CSV, XML, many others)
  • Web server administration, incl. Apache
  • Web development: HTML, CSS, Javascript, Perl, PHP, CGI scripting etc.
  • Postgresql, Mysql, Microsoft SQL Server, and Oracle
  • DNS: bind8/bind9, NLnet's nsd & unbound
  • SMTP: postfix, sendmail, exim, qmail
  • High Level Technical Support
  • Database design & development
  • Mentoring and training of colleagues

23 June 2015

Russell Coker: One Android Phone Per Child

I was asked for advice on whether children should have access to smart phones; it's an issue that many people are discussing and seems worthy of a blog post.

Claimed Problems with Smart Phones The first thing that I think people should read is this XKCD post with quotes about the demise of letter writing from 99+ years ago [1]. Given the lack of evidence cited by people who oppose phone use, I think we should consider to what extent the current concerns about smart phone use are just reactions to changes in society. I've done some web searching for reasons that people give for opposing smart phone use by kids and addressed the issues below.

Some people claim that children shouldn't get a phone when they are so young that it will just be a toy. That's interesting given the dramatic increase in the amount of money spent on toys for children in recent times. It's particularly interesting when parents buy game consoles for their children but refuse mobile phone toys (I know someone who did this). I think this is more of a social issue regarding what is a suitable toy than any real objection to phones used as toys. Obviously the educational potential of a mobile phone is much greater than that of a game console.

It's often claimed that kids should spend their time reading books instead of using phones. When visiting libraries I've observed kids using phones to store lists of books that they want to read, which seems to discredit that theory. Also some libraries have Android and iOS apps for searching their catalogs. There are a variety of apps for reading eBooks, some of which have access to many free books, but I don't expect many people to read novels on a phone.

Cyber-bullying is the subject of a lot of anxiety in the media. At least with cyber-bullying there's an electronic trail: anyone who suspects that their child is being cyber-bullied can check that, while old-fashioned bullying is more difficult to track down. Also while cyber-bullying can happen faster on smart phones, the victim can also be harassed on a PC. I don't think that waiting to use a PC and learn what nasty things people are saying about you is going to be much better than getting an instant notification on a smart phone. It seems to me that the main disadvantage of smart phones in regard to cyber-bullying is that it's easier for a child to participate in bullying if they have such a device. As most parents don't seem concerned that their child might be a bully (unfortunately many parents think it's a good thing) this doesn't seem like a logical objection.

Fear of missing out (FOMO) is claimed to be a problem: apparently if a child has a phone then they will want to take it to bed with them, and that would be a bad thing. But parents could have a policy about when phones may be used and insist that a phone not be taken into the bedroom. If it's impossible for a child to own a phone without taking it to bed then the parents are probably dealing with other problems. I'm not convinced that a phone in bed is necessarily a bad thing anyway: a phone can be used as an alarm clock and instant-message notifications can be turned off at night. When I was young I used to wait until my parents were asleep before getting out of bed to use my PC, so if smart phones were available when I was young it wouldn't have changed my night-time computer use.

Some people complain that kids might use phones to play games too much or talk to their friends too much. What do people expect kids to do?
In recent times the fear of abduction has led to children playing outside a lot less; it used to be that 6yos would play with other kids in their street and 9yos would be allowed to walk to the local park, while now people aren't allowing 14yo kids to walk to the nearest park alone. Playing games and socialising with other kids has to be done over the Internet because kids aren't often allowed out of the house. Play and socialising are important learning experiences that have to happen online if they can't happen offline.

Apps can be expensive. But it's optional to sign up for a credit card with the Google Play store and the range of free apps is really good. Also the default configuration of the app store is to require a password entry before every purchase. Finally it is possible to give kids pre-paid credit cards and let them pay for their own stuff; such pre-paid cards are sold at Australian post offices and I'm sure that most first-world countries have similar facilities.

Electronic communication is claimed to be somehow different from and lesser than old-fashioned communication. I presume that people made the same claims about the telephone when it first became popular. The only real difference between email and posted letters is that email tends to be shorter because the reply time is smaller: you can reply to any questions in the same day, not wait a week for a response, so it makes sense to expect questions rather than covering all possibilities in the first email. If it's a good thing to have longer forms of communication then a smart phone with a big screen would be a better option than a "feature phone", and if face-to-face communication is preferred then a smart phone with video-call access would be the way to go (better even than old-fashioned telephony).

Real Problems with Smart Phones The majority opinion among everyone who matters (parents, teachers, and police) seems to be that crime at school isn't important. Many crimes that would result in jail sentences if committed by adults receive either no punishment or something trivial (such as lunchtime detention) if committed by school kids. Introducing items that are both intrinsically valuable and which have personal value due to the data storage into a typical school environment is probably going to increase the amount of crime. The best options to deal with this problem are to prevent kids from taking phones to school or to home-school kids. Fixing the crime problem at typical schools isn't a viable option.

Bills can potentially be unexpectedly large due to kids' inability to restrain their usage and telcos deliberately making their plans tricky to profit from excess usage fees. The solution is to only use pre-paid plans; fortunately many companies offer good deals for pre-paid use. In Australia Aldi sells pre-paid credit in $15 increments that lasts a year [2]. So it's possible to pay $15 per year for a child's phone use, have them use Wifi for data access and pay from their own money if they make excessive calls. For older kids who need data access when they aren't at home or near their parents, there are other pre-paid phone companies that offer good deals; I've previously compared prices of telcos in Australia, and some of those telcos should do [3].

It's expensive to buy phones. The solution to this is to not buy new phones for kids: give them an old phone that was used by an older relative or buy an old phone on ebay. Also let kids petition wealthy relatives for a phone as a birthday present.
If grandparents want to buy the latest smart phone for a 7yo then there's no reason to stop them IMHO (this isn't a hypothetical situation).

Kids can be irresponsible and lose or break their phone. But the way kids learn to act responsibly is by practice. If they break a good phone and get a lesser phone as a replacement, or have to keep using a broken phone, then it's a learning experience. A friend's son head-butted his phone and cracked the screen; he used it for 6 months after that, and I think he learned from the experience. I think that kids should learn to be responsible with a phone several years before they are allowed to get a learner's permit to drive a car on public roads, which means that they should have their own phone when they are 12.

I've seen an article about a school finding that tablets didn't work as well as laptops, which was touted as news. Laptops or desktop PCs obviously work best for typing. Tablets are for situations where a laptop isn't convenient and where the usage involves mostly reading/watching; I've seen school kids using tablets on excursions, which seems like a good use of them. Phones are even less suited to writing than tablets. This isn't a problem for phone use, you just need to use the right device for each task.

Phones vs Tablets

Some people think that a tablet is somehow different from a phone. I've just read an article by a parent who proudly described their policy of buying feature phones for their children and tablets for them to do homework etc. Really a phone is just a smaller tablet; once you have decided to buy a tablet, the choice to buy a smart phone is just about whether you want a smaller version of what you have already got. The iPad doesn't appear to be able to make phone calls (but it supports many different VOIP and video-conferencing apps) so that could technically be described as a difference. AFAIK all Android tablets that support 3G networking also support making and receiving phone calls if you have a SIM installed. It is awkward to use a tablet to make phone calls, but most usage of a modern phone is as an ultra-portable computer, not as a telephone.

The phone vs tablet issue doesn't seem to be about the capabilities of the device. It's about how portable the device should be and the image of the device. I think that if a tablet is good then a more portable computing device can only be better (at least when you need greater portability). Recently I've been carrying a 10" tablet around a lot for work; sometimes a tablet will do for emergency work when a phone is too small and a laptop is too heavy. Even though tablets are thin and light it's still inconvenient to carry one, and the issue of size and weight is a greater problem for kids. 7" tablets are a lot smaller and lighter, but that's getting close to a 5" phone.

Benefits of Smart Phones

Using a smart phone is good for teaching children dexterity. It can also be used for teaching art in situations where more traditional art forms such as finger painting aren't possible (I have met a professional artist who has used a Samsung Galaxy Note phone for creating artwork).

There is a huge range of educational apps for smart phones. The Wikireader (that I reviewed 4 years ago) [4] has obvious educational benefits. But a phone with Internet access (either 3G or Wifi) gives Wikipedia access including all pictures and is a better fit for most pockets. There are lots of educational web sites, and random web sites can also be used for education (Googling the answer to random questions).
When it comes to preparing kids for the real world or the work environment, people often claim that kids need to use Microsoft software because most companies do (regardless of the fact that most companies will be using radically different versions of MS software by the time current school kids graduate from university). In my typical work environment I'm expected to be able to find the answer to all sorts of random work-related questions at any time, and I think that many careers have similar expectations. Being able to quickly look things up on a phone is a real work skill, and a skill that's going to last a lot longer than knowing today's version of MS-Office.

There are a variety of apps for tracking phones, and there are non-creepy ways of using such apps for monitoring kids. Also with two-way monitoring kids will know when their parents are about to collect them from an event and can stay inside until their parents are in the area. This, combined with the phone/SMS functionality that is available on feature phones, provides some benefits for child safety.

iOS vs Android

Rumour has it that iOS is better than Android for kids diagnosed with Low Functioning Autism. There are apparently apps that help non-verbal kids communicate with icons, and apps for arranging schedules for kids who have difficulty with changes to plans. I don't know anyone who has an LFA child so I haven't had any reason to investigate such things. Anyone can visit an Apple store and a Samsung Experience store, as they have phones and tablets you can use to test out the apps (at least the ones with free versions). As an aside, the money the Australian government provides to assist Autistic children can be used to purchase a phone or tablet if a registered therapist signs a document declaring that it has a therapeutic benefit.

I think that Android devices are generally better for educational purposes than iOS devices because Android is a less restrictive platform. On an Android device you can install apps downloaded from a web site or from a 3rd-party app download service. Even if you stick to the Google Play store there's a wider range of apps to choose from because Google is apparently less restrictive.

Android devices usually allow installation of a replacement OS. The Nexus devices are always unlocked and have a wide range of alternate OS images, and the other commonly used devices can usually have an alternate OS installed. This allows kids who have the interest and technical skill to extensively customise their device and learn all about its operation. iOS devices are designed to be sealed against the user. Admittedly there probably aren't many kids with the skill and desire to replace the OS on their phone, but I think it's good to have the option.

Android phones come in a range of sizes with a range of features, while Apple only makes a few devices at any time and there are usually only a couple of different phones on sale. iPhones are also a lot smaller than most Android phones; according to my previous estimates of hand size, the iPhone 5 would be a good tablet for a 3yo or good for side-grasp phone use for a 10yo [5]. The main benefits of a phone are for things other than making phone calls, so generally the biggest phone that will fit in a pocket is the best choice. The tiny iPhones don't seem very suitable. Also buying one of each is a viable option.

Conclusion

I think that mobile phone ownership is good for almost all kids, even from a very young age (there are many reports of kids learning to use phones and tablets before they learn to read).
There are no real down-sides that I can find. I think that Android devices are generally a better option than iOS devices, but in the case of special-needs kids there may be advantages to iOS.
