Search Results: "p2"

17 November 2017

Renata Scheibler: Hello, world!

[Photo: Renata, a white woman shown in profile, touching her chin with her left hand's fingertips.]

About me

Hello, world! For those who are meeting me for the first time, I am a 31-year-old History teacher from Porto Alegre, Brazil. Some people might know me from the Python community, because I have been leading PyLadies Porto Alegre and helping organize Django Girls workshops in my state since 2016. If you don't, that's okay. Either way, it's nice to have you here. Ever since I learned about Rails Girls Summer of Code, during the International Free Software Forum - FISL 16, I have been wanting to get into a tech internship program. Google Summer of Code made it onto my radar as well, but I didn't really feel like I knew enough to try and get into those programs... until I found Outreachy. From their site:
Outreachy is an organization that provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.
There were many good projects to choose from on this round, a lot of them with few requirements - and most with requirements that I believed I could fulfill, such as some knowledge of HTML, CSS, Python or Django. I ought to say that I am not an expert in any of those. And, since you're reading this, I'm going to be completely honest. Coding is hard. Coding is hard to learn; it takes a lot of studying and a lot of practice. Even though I have been messing around with computers pretty much since I was a kid, because I was a girl lucky enough to have a father who owned a computer store, I hadn't begun learning how to program until mid-2015 - and I am still learning. I think I became such an autodidact because I had to (and, of course, because I was given the conditions to be, such as having spare time to study when I wasn't at school). I had to get any and all information from my surroundings and turn it into knowledge that I could use to achieve my goals.

In a time when I could only get new computer games through a CD-ROM and the computer I was allowed to use didn't have a CD-ROM drive, I had to try and learn how to open a computer cabinet and connect/disconnect hardware properly, so I could use my brother's CD-ROM drive on the computer I was allowed to use and install the games without anyone noticing. When, back in 1998, I couldn't connect to the internet because the computer I was allowed to use didn't have a modem, I had to learn about networks to figure out how to do it from my brother's computer on the LAN (local network). I would go to the community public library and read books and any tech magazines I could get my hands on (libraries didn't usually have computers to be used by the public back then). It was about 2002 when I learned how to create HTML sites by studying the source code of pages I had saved to read offline in one of the very, very few times I was allowed to access the internet and browse the web. Of course, the site I created back then never saw the light of day, because I didn't really have internet access at home.

So, how come it is that only now, 14 years later, I am trying to get into tech? Because when I finished high school in 2003, I was still a minor and my family didn't allow me to go to Vocational School and take an IT course. (Never mind that my own oldest brother had graduated in IT and had been working in the field for almost a decade.) I ended up going to study... teacher training in History as an undergrad course. A lot has happened since then. I took the exam to become a public school teacher, and more than two years passed without my being called to work. I spent 3 years in odd jobs that paid barely enough to pay rent (and, sometimes, not even that). Since IT is the new thing and all jobs are in IT, finally, finally it seemed okay for me to take that Vocational School training in a public school - and so I did. I gotta say, I thought that while I studied, I would be able to get some sort of job or internship to help with my learning. After all, I had seen it easily happening with people I met before getting into the course. And by "people", of course, I mean white men. For me, it took a whole year of searching, trying and interviewing to get an internship related to the field - tech support in a school computer lab, running GNU/Linux. And, in that very same week, I was hired as a public school teacher. There is a lot more... actually, there is so much more to this story, but I think I have told enough for now.
Enough to know where I came from and who I am, as of now. I hope you stick around. I am bound to write here every two weeks, so I guess I will see you then! o/

07 November 2017

Don Armstrong: Autorandr: automatically adjust screen layout

Like many laptop users, I often plug my laptop into different monitor setups (multiple monitors at my desk, projector when presenting, etc.). Running xrandr commands or clicking through interfaces gets tedious, and writing scripts isn't much better. Recently, I ran across autorandr, which detects attached monitors using EDID (and other settings), saves xrandr configurations, and restores them. It can also run arbitrary scripts when a particular configuration is loaded. I've packaged it, and it is currently waiting in NEW. If you can't wait, the deb is here and the git repo is here. To use it, simply install the package, and create your initial configuration (in my case, undocked):
 autorandr --save undocked
then, dock your laptop (or plug in your external monitor(s)), change the configuration using xrandr (or whatever you use), and save your new configuration (in my case, workstation):
autorandr --save workstation
repeat for any additional configurations you have (or as you find new configurations). Autorandr has udev, systemd, and pm-utils hooks, and autorandr --change should be run any time that new displays appear. You can also run autorandr --change or autorandr --load workstation manually if you need to. You can also add your own ~/.config/autorandr/$PROFILE/postswitch script to run after a configuration is loaded. Since I run i3, my workstation configuration looks like this:
 #!/bin/bash
 xrandr --dpi 92
 xrandr --output DP2-2 --primary
 i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
 i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
 i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'
which fixes the dpi appropriately, sets the primary screen (possibly not needed?), and moves the i3 workspaces about. You can also arrange for configurations to never be run by adding a block hook in the profile directory. Check it out if you change your monitor configuration regularly!

31 October 2017

Daniel Leidert: Troubleshoot

I have no idea what these errors mean. $searchengine and manual pages didn't reveal anything. That's the first one. It occurs during boot time. Might be a bug somewhere, recently introduced in Debian Sid.

kernel: [ ... ] cgroup: cgroup2: unknown option "nsdelegate"
And that's the second one. It simply occurred. No real issue with NFS though.

kernel: [ ... ] nfs: RPC call returned error 22
kernel: [ ... ] NFS: state manager: check lease failed on NFSv4 server XXX.XXX.XXX.XXX with error 5
Any explanation is appreciated.

25 October 2017

Petter Reinholdtsen: Locating IMDB IDs of movies in the Internet Archive using Wikidata

Recently, I needed to automatically check the copyright status of a set of The Internet Movie Database (IMDB) entries, to figure out which of the movies they refer to can be freely distributed on the Internet. This proved to be harder than it sounds. IMDB for sure lists movies without any copyright protection, where the copyright protection has expired or where the movie is licensed using a permissive license like one from Creative Commons. These are mixed with copyright protected movies, and there seems to be no way to separate these classes of movies using the information in IMDB. First I tried to look up entries manually in IMDB, Wikipedia and The Internet Archive, to get a feel for how to do this. It is hard to know for sure using these sources, but it should be possible to be reasonably confident a movie is "out of copyright" with a few hours of work per movie. As I needed to check almost 20,000 entries, this approach was not sustainable. I simply can not work around the clock for about 6 years to check this data set. I asked the people behind The Internet Archive if they could introduce a new metadata field in their metadata XML for the IMDB ID, but was told that they leave it completely to the uploaders to update the metadata. Some of the metadata entries had IMDB links in the description, but I found no way to download all metadata files in bulk to locate those, and put that approach aside. In the process I noticed several Wikipedia articles about movies had links to both IMDB and The Internet Archive, and it occurred to me that I could use the Wikipedia RDF data set to locate entries with both, to at least get a lower bound on the number of movies on The Internet Archive with an IMDB ID. This is useful based on the assumption that movies distributed by The Internet Archive can be legally distributed on the Internet. With some help from the RDF community (thank you DanC), I was able to come up with this query to pass to the SPARQL interface on Wikidata:
SELECT ?work ?imdb ?ia ?when ?label
WHERE
{
  ?work wdt:P31/wdt:P279* wd:Q11424.
  ?work wdt:P345 ?imdb.
  ?work wdt:P724 ?ia.
  OPTIONAL {
        ?work wdt:P577 ?when.
        ?work rdfs:label ?label.
        FILTER(LANG(?label) = "en").
  }
}
If I understand the query right, for every film entry anywhere in Wikipedia, it will return the IMDB ID and The Internet Archive ID, and when the movie was released and its English title, if either or both of the latter two are available. At the moment the result set contains 2338 entries. Of course, it depends on volunteers including both correct IMDB and The Internet Archive IDs in the Wikipedia articles for the movie. It should be noted that the result will include duplicates if the movie has entries in several languages. There are some bogus entries, either because The Internet Archive ID contains a typo or because the movie is not available from The Internet Archive. I did not verify the IMDB IDs, as I am unsure how to do that automatically. I wrote a small Python script to extract the data set from Wikidata and check if the XML metadata for the movie is available from The Internet Archive (a rough sketch of such a script appears at the end of this post), and after around 1.5 hours it produced a list of 2097 free movies and their IMDB IDs. In total, 171 entries in Wikidata lack the referred Internet Archive entry. I assume the 70 "disappearing" entries (ie 2338-2097-171) are duplicate entries. This is not too bad, given that The Internet Archive reports containing 5331 feature films at the moment, but it also means more than 3000 movies are missing on Wikipedia or are missing the pair of references on Wikipedia. I was curious about the distribution by release year, and made a little graph to show how the amount of free movies is spread over the years:

[Graph: number of free movies by release year]

I expect the relative distribution of the remaining 3000 movies to be similar. If you want to help, and want to ensure Wikipedia can be used to cross-reference The Internet Archive and The Internet Movie Database, please make sure entries like this are listed under the "External links" heading on the Wikipedia article for the movie:
* {{Internet Archive film|id=FightingLady}}
* {{IMDb title|id=0036823|title=The Fighting Lady}}
Please verify the links on the final page, to make sure you did not introduce a typo. Here is the complete list, if you want to correct the 171 identified Wikipedia entries with broken links to The Internet Archive: Q1140317, Q458656, Q458656, Q470560, Q743340, Q822580, Q480696, Q128761, Q1307059, Q1335091, Q1537166, Q1438334, Q1479751, Q1497200, Q1498122, Q865973, Q834269, Q841781, Q841781, Q1548193, Q499031, Q1564769, Q1585239, Q1585569, Q1624236, Q4796595, Q4853469, Q4873046, Q915016, Q4660396, Q4677708, Q4738449, Q4756096, Q4766785, Q880357, Q882066, Q882066, Q204191, Q204191, Q1194170, Q940014, Q946863, Q172837, Q573077, Q1219005, Q1219599, Q1643798, Q1656352, Q1659549, Q1660007, Q1698154, Q1737980, Q1877284, Q1199354, Q1199354, Q1199451, Q1211871, Q1212179, Q1238382, Q4906454, Q320219, Q1148649, Q645094, Q5050350, Q5166548, Q2677926, Q2698139, Q2707305, Q2740725, Q2024780, Q2117418, Q2138984, Q1127992, Q1058087, Q1070484, Q1080080, Q1090813, Q1251918, Q1254110, Q1257070, Q1257079, Q1197410, Q1198423, Q706951, Q723239, Q2079261, Q1171364, Q617858, Q5166611, Q5166611, Q324513, Q374172, Q7533269, Q970386, Q976849, Q7458614, Q5347416, Q5460005, Q5463392, Q3038555, Q5288458, Q2346516, Q5183645, Q5185497, Q5216127, Q5223127, Q5261159, Q1300759, Q5521241, Q7733434, Q7736264, Q7737032, Q7882671, Q7719427, Q7719444, Q7722575, Q2629763, Q2640346, Q2649671, Q7703851, Q7747041, Q6544949, Q6672759, Q2445896, Q12124891, Q3127044, Q2511262, Q2517672, Q2543165, Q426628, Q426628, Q12126890, Q13359969, Q13359969, Q2294295, Q2294295, Q2559509, Q2559912, Q7760469, Q6703974, Q4744, Q7766962, Q7768516, Q7769205, Q7769988, Q2946945, Q3212086, Q3212086, Q18218448, Q18218448, Q18218448, Q6909175, Q7405709, Q7416149, Q7239952, Q7317332, Q7783674, Q7783704, Q7857590, Q3372526, Q3372642, Q3372816, Q3372909, Q7959649, Q7977485, Q7992684, Q3817966, Q3821852, Q3420907, Q3429733, Q774474
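A rough sketch of the kind of script mentioned above could look like this. It is only an illustration, not the exact script used for the numbers in this post: the query is the one shown earlier, and the Internet Archive check assumes the usual https://archive.org/download/&lt;item&gt;/&lt;item&gt;_meta.xml metadata layout.

#!/usr/bin/env python3
# Sketch: fetch film entries with both an IMDB ID and an Internet Archive ID
# from Wikidata, then report which Internet Archive items actually exist.
import json
import urllib.error
import urllib.parse
import urllib.request

SPARQL = """
SELECT ?work ?imdb ?ia ?when ?label
WHERE
{
  ?work wdt:P31/wdt:P279* wd:Q11424.
  ?work wdt:P345 ?imdb.
  ?work wdt:P724 ?ia.
  OPTIONAL {
        ?work wdt:P577 ?when.
        ?work rdfs:label ?label.
        FILTER(LANG(?label) = "en").
  }
}
"""

def wikidata_query(query):
    # Ask the public Wikidata SPARQL endpoint for JSON results.
    url = "https://query.wikidata.org/sparql?" + urllib.parse.urlencode(
        {"query": query, "format": "json"})
    with urllib.request.urlopen(url) as f:
        return json.load(f)["results"]["bindings"]

def archive_item_exists(identifier):
    # The Internet Archive exposes per-item metadata as XML under
    # /download/<id>/<id>_meta.xml; a 404 means the item is missing.
    url = "https://archive.org/download/%s/%s_meta.xml" % (identifier, identifier)
    try:
        with urllib.request.urlopen(url):
            return True
    except urllib.error.HTTPError:
        return False

if __name__ == "__main__":
    seen = set()
    for row in wikidata_query(SPARQL):
        imdb = row["imdb"]["value"]
        ia = row["ia"]["value"]
        if (imdb, ia) in seen:  # skip duplicate rows (e.g. multiple labels)
            continue
        seen.add((imdb, ia))
        if archive_item_exists(ia):
            print(imdb, ia)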

10 October 2017

Yves-Alexis Perez: OpenPGP smartcard transition (part 1)

A long time ago, I switched my GnuPG setup to a smartcard based one. I kept using the same master key, but: I've been working with that setup for a few years now and it is working perfectly fine. The signature counter on the OpenPGP basic card is a bit north of 5000 which is large but not that huge, all considered (and not counting authentication and decryption key usage).

One very nice feature of using a smartcard is that my laptop (or other machines I work on) never manipulates the private key directly but only sends requests to the card, which is a really huge improvement, in my opinion. But it's also not the perfect solution for me: the OpenPGP card uses a proprietary platform from ZeitControl, named BasicCard. We have very little information on the smartcard, besides the fact that Werner Koch trusts ZeitControl not to mess up. One caveat for me is that the card does not use a certified secure microcontroller like you would find in the smartcard chips used in debit cards or electronic IDs. That means it's not really been audited by a competent hardware lab, and thus can't be considered secure against physical attacks. The card OS software and the application implementing the OpenPGP specification are not public and have not been audited either, to the best of my knowledge.

At one point I was interested in the Yubikey Neo, especially since the architecture Yubico used was common: a (supposedly) certified platform (secure microcontroller, card OS) and a GlobalPlatform / JavaCard virtual machine. The applet used in the Yubikey Neo is open-source, too, so you could take a look at it and identify any issue.

Unfortunately, Yubico transitioned to a less common and more proprietary infrastructure for the Yubikey 4: it's no longer JavaCard based, and they don't provide the applet source anymore. This was not really seen as a good move by a lot of people, including Konstantin Ryabitsev (kernel.org administrator). Also, it wasn't possible even for the Yubikey Neo to actually build the applet yourself and inject it on the card: when the Yubikey leaves the facility, the applet is already installed and the smartcard is locked (for obvious security reasons). I've tried asking about getting a naked/empty Yubikey with developer keys to load the applet myself, but it was apparently not possible or would have required signing an NDA with NXP (the chip maker), which is not really possible as an individual (not that I really want to anyway).

In the meantime, a coworker actually wrote an OpenPGP JavaCard applet, with the intention of supporting the latest version of the OpenPGP specification, and especially elliptic curve cryptography. The applet is called SmartPGP and has been released on the ANSSI GitHub repository. I investigated a bit, and found a smartcard with the right specifications: certified (in France or Germany), and supporting JavaCard 3.0.4 (required for ECC). The card can do RSA2048 (unfortunately not RSA4096) and EC with NIST (secp256r1, secp384r1, secp521r1) and Brainpool (P256, P384, P512) curves.

I've ordered some cards, and when they arrived, started playing. I've built the SmartPGP applet and pushed it to a smartcard, then generated some keys and tried with GnuPG. I'm right now in the process of migrating to a new smartcard based on that setup, which seems to work just fine after a few days.

Part two of this series will describe how to build the applet and inject it into the smartcard. The process is already documented here and there, but there are a few things not to forget, like how to lock the card after provisioning, so I guess having the complete process somewhere might be useful in case some people want to reproduce it.

09 October 2017

Antonio Terceiro: pristine-tar updates

Introduction

pristine-tar is a tool that is present in the workflow of a lot of Debian people. I adopted it last year after it was orphaned by its creator Joey Hess. A little after that, Tomasz Buchert joined me and we are now a functional two-person team. pristine-tar's goal is to import the content of a pristine upstream tarball into a VCS repository, and to be able to later reconstruct that exact same tarball, bit by bit, based on the contents in the VCS, so we don't have to store a full copy of that tarball. This is done by storing a binary delta file which can be used to reconstruct the original tarball from a tarball produced with the contents of the VCS. Ultimately, we want to make sure that the tarball that is uploaded to Debian is exactly the same as the one that has been downloaded from upstream, without having to keep a full copy of it around if all of its contents are already extracted in the VCS anyway.

The current state of the art, and perspectives for the future

pristine-tar solves a wicked problem, because our ability to reconstruct the original tarball is affected by changes in the behavior of tar and of all of the compression tools (gzip, bzip2, xz), and by what exact options were used when creating the original tarballs. Because of this, pristine-tar currently has a few embedded copies of old versions of compressors to be able to reconstruct tarballs produced by them, and also relies on an ever-evolving patch to tar that has been carried in Debian for a while. So basically, keeping pristine-tar working is a game of Whac-A-Mole. Joey provided a good summary of the situation when he orphaned pristine-tar. Going forward, we may need to rely on other ways of ensuring the integrity of upstream source code. That could take the form of signed git tags, signed uncompressed tarballs (so that the compression doesn't matter), or maybe even a different system for storing actual tarballs. Debian bug #871806 contains an interesting discussion on this topic.

Recent improvements

Even if keeping pristine-tar useful in the long term will be hard, too much of Debian's work currently relies on it, so we can't just abandon it. Instead, we keep figuring out ways to improve. And I have good news: pristine-tar has recently received updates that improve the situation quite a bit. In order to understand how much better we are getting at it, I created a visualization of the regression test suite results. With the help of data from there, let's look at the improvements made since pristine-tar 1.38, which was the version included in stretch.

pristine-tar 1.39: xdelta3 by default

This was the first release made after the stretch release, and it made xdelta3 the default delta generator for newly-imported tarballs. Existing tarballs with deltas produced by xdelta are still supported; this only affects new imports. The support for having multiple delta generators was written by Tomasz, and had already been there since 1.35, but we decided to only flip the switch after xdelta3 support was available in a stable release.

pristine-tar 1.40: improved compression heuristics

pristine-tar uses a few heuristics to produce the smallest delta possible, and this includes trying different compression options. In this release Tomasz included a contribution by Lennart Sorensen to also try the --gnu option, which greatly improved the support for rsyncable gzip-compressed files.
We can see an example of the type of improvement we got in the regression test suite data for delta sizes for faad2_2.6.1.orig.tar.gz: in 1.40, the delta produced from the test tarball faad2_2.6.1.orig.tar.gz went down from 800KB, almost the same size as the tarball itself, to 6.8KB.

pristine-tar 1.41: support for signatures

This release saw the addition of support for storage and retrieval of upstream signatures, contributed by Chris Lamb.

pristine-tar 1.42: optionally recompressing tarballs

I had this idea and wanted to try it out: most of our problems reproducing tarballs come from tarballs produced with old compressors, or from changes in compressor behavior, or from uncommon compression options being used. What if we could just recompress the tarballs before importing them? Yes, this kind of breaks the "pristine" bit of the whole business, but on the other hand, 1) the contents of the tarball are not affected, and 2) even if the initial tarball is not bit by bit the same as the upstream release, at least future uploads of that same upstream version with Debian revisions can be regenerated just fine. In some cases, as with the test tarball util-linux_2.30.1.orig.tar.xz, recompressing is what makes reproducing the tarball (and thus importing it with pristine-tar) possible at all: util-linux_2.30.1.orig.tar.xz can only be imported after being recompressed. In other cases, if the current heuristics can't produce a reasonably small delta, recompressing makes a huge difference. It's the case for mumble_1.1.8.orig.tar.gz: with recompression, the delta produced from mumble_1.1.8.orig.tar.gz goes from 1.2MB, or 99% of the size of the original tarball, to 14.6KB, 1% of the size of the original tarball. Recompressing is not enabled by default, and can be enabled by passing the --recompress option. If you are using pristine-tar via a wrapper tool like gbp-buildpackage, you can use the $PRISTINE_TAR environment variable to set options that will affect any pristine-tar invocations. Also, even if you enable recompression, pristine-tar will only try it if the delta generation fails completely, or if the delta produced from the original tarball is too large. You can control what "too large" means by using the --recompress-threshold-bytes and --recompress-threshold-percent options. See the pristine-tar(1) manual page for details.

08 October 2017

Michael Stapelberg: Debian stretch on the Raspberry Pi 3 (update)

I previously wrote about my Debian stretch preview image for the Raspberry Pi 3. Now, I'm publishing an updated version, containing the following changes: A couple of issues remain, notably the lack of WiFi and bluetooth support (see wiki:RaspberryPi3 for details). Any help with fixing these issues is very welcome! As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2017-10-08/. To install the image, insert the SD card into your computer (I'm assuming it's available as /dev/sdb) and copy the image onto it:
$ wget https://people.debian.org/~stapelberg/raspberrypi3/2017-10-08/2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ bunzip2 2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ sudo dd if=2017-10-08-raspberry-pi-3-buster-PREVIEW.img of=/dev/sdb bs=5M
If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:
$ ssh root@rpi3
# Password is 'raspberry'

13 September 2017

Vincent Bernat: Route-based IPsec VPN on Linux with strongSwan

A common way to establish an IPsec tunnel on Linux is to use an IKE daemon, like the one from the strongSwan project, with a minimal configuration[1]:
conn V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64
  authby      = psk
  auto        = route
The same configuration can be used on both sides. Each side will figure out if it is "left" or "right". The IPsec site-to-site tunnel endpoints are 2001:db8:1::1 and 2001:db8:2::1. The protected subnets are 2001:db8:a1::/64 and 2001:db8:a2::/64. As a result, strongSwan configures the following policies in the kernel:
$ ip xfrm policy
src 2001:db8:a1::/64 dst 2001:db8:a2::/64
        dir out priority 399999 ptype main
        tmpl src 2001:db8:1::1 dst 2001:db8:2::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir fwd priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir in priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
[...]
This kind of IPsec tunnel is a policy-based VPN: encapsulation and decapsulation are governed by these policies. Each of them contains the following elements: a direction (out, in or fwd[2]), a selector (source and destination subnets), a priority, and a template telling the kernel which tunnel endpoints, protocol (ESP) and mode (tunnel) to use. When a matching policy is found, the kernel will look for a corresponding security association (using reqid and the endpoint source and destination addresses):
$ ip xfrm state
src 2001:db8:1::1 dst 2001:db8:2::1
        proto esp spi 0xc1890b6e reqid 4 mode tunnel
        replay-window 0 flag af-unspec
        auth-trunc hmac(sha256) 0x5b68[ ]8ba2904 128
        enc cbc(aes) 0x8e0e377ad8fd91e8553648340ff0fa06
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[...]
If no security association is found, the packet is put on hold and the IKE daemon is asked to negotiate an appropriate one. Otherwise, the packet is encapsulated. The receiving end identifies the appropriate security association using the SPI in the header. Two security associations are needed to establish a bidirectional tunnel:
$ tcpdump -pni eth0 -c2 -s0 esp
13:07:30.871150 IP6 2001:db8:1::1 > 2001:db8:2::1: ESP(spi=0xc1890b6e,seq=0x222)
13:07:30.872297 IP6 2001:db8:2::1 > 2001:db8:1::1: ESP(spi=0xcf2426b6,seq=0x204)
All IPsec implementations are compatible with policy-based VPNs. However, some configurations are difficult to implement. For example, consider the following proposition for redundant site-to-site VPNs:

[Figure: redundant VPNs between 3 sites]

A possible configuration between V1-1 and V2-1 could be:
conn V1-1-to-V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64,2001:db8:a6::cc:1/128,2001:db8:a6::cc:5/128
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64,2001:db8:a6::/64,2001:db8:a8::/64
  authby      = psk
  keyexchange = ikev2
  auto        = route
Each time a subnet is modified on one site, the configurations need to be updated on all sites. Moreover, overlapping subnets (2001:db8:a6::/64 on one side and 2001:db8:a6::cc:1/128 on the other) can also be problematic. The alternative is to use route-based VPNs: any packet traversing a pseudo-interface will be encapsulated using a security policy bound to the interface. This brings two features:
  1. Routing daemons can be used to distribute routes to be protected by the VPN. This decreases the administrative burden when many subnets are present on each side.
  2. Encapsulation and decapsulation can be executed in a different routing instance or namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).

Route-based VPN on Juniper

Before looking at how to achieve that on Linux, let's have a look at the way it works with a JunOS-based platform (like a Juniper vSRX). This platform has a long-standing history of supporting route-based VPNs (a feature already present in the Netscreen ISG platform). Let's assume we want to configure the IPsec VPN from V3-2 to V1-1. First, we need to configure the tunnel interface and bind it to the private routing instance containing only internal routes (with IPv4, they would have been RFC 1918 routes):
interfaces {
    st0 {
        unit 1 {
            family inet6 {
                address 2001:db8:ff::7/127;
            }
        }
    }
}
routing-instances {
    private {
        instance-type virtual-router;
        interface st0.1;
    }
}
The second step is to configure the VPN:
security {
    /* Phase 1 configuration */
    ike {
        proposal IKE-P1 {
            authentication-method pre-shared-keys;
            dh-group group20;
            encryption-algorithm aes-256-gcm;
        }
        policy IKE-V1-1 {
            mode main;
            proposals IKE-P1;
            pre-shared-key ascii-text "d8bdRxaY22oH1j89Z2nATeYyrXfP9ga6xC5mi0RG1uc";
        }
        gateway GW-V1-1 {
            ike-policy IKE-V1-1;
            address 2001:db8:1::1;
            external-interface lo0.1;
            general-ikeid;
            version v2-only;
        }
    }
    /* Phase 2 configuration */
    ipsec {
        proposal ESP-P2 {
            protocol esp;
            encryption-algorithm aes-256-gcm;
        }
        policy IPSEC-V1-1 {
            perfect-forward-secrecy keys group20;
            proposals ESP-P2;
        }
        vpn VPN-V1-1 {
            bind-interface st0.1;
            df-bit copy;
            ike {
                gateway GW-V1-1;
                ipsec-policy IPSEC-V1-1;
            }
            establish-tunnels on-traffic;
        }
    }
}
We get a route-based VPN because we bind the st0.1 interface to the VPN-V1-1 VPN. Once the VPN is up, any packet entering st0.1 will be encapsulated and sent to the 2001:db8:1::1 endpoint. The last step is to configure BGP in the private routing instance to exchange routes with the remote site:
routing-instances {
    private {
        routing-options {
            router-id 1.0.3.2;
            maximum-paths 16;
        }
        protocols {
            bgp {
                preference 140;
                log-updown;
                group v4-VPN {
                    type external;
                    local-as 65003;
                    hold-time 6;
                    neighbor 2001:db8:ff::6 peer-as 65001;
                    multipath;
                    export [ NEXT-HOP-SELF OUR-ROUTES NOTHING ];
                }
            }
        }
    }
}
The export filter OUR-ROUTES needs to select the routes to be advertised to the other peers. For example:
policy-options {
    policy-statement OUR-ROUTES {
        term 10 {
            from {
                protocol ospf3;
                route-type internal;
            }
            then {
                metric 0;
                accept;
            }
        }
    }
}
The configuration needs to be repeated for the other peers. The complete version is available on GitHub. Once the BGP sessions are up, we start learning routes from the other sites. For example, here is the route for 2001:db8:a1::/64:
> show route 2001:db8:a1::/64 protocol bgp table private.inet6.0 best-path
private.inet6.0: 15 destinations, 19 routes (15 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
2001:db8:a1::/64   *[BGP/140] 01:12:32, localpref 100, from 2001:db8:ff::6
                      AS path: 65001 I, validation-state: unverified
                      to 2001:db8:ff::6 via st0.1
                    > to 2001:db8:ff::14 via st0.2
It was learnt both from V1-1 (through st0.1) and V1-2 (through st0.2). The route is part of the private routing instance but encapsulated packets are sent/received in the public routing instance. No route-leaking is needed for this configuration. The VPN cannot be used as a gateway from internal hosts to external hosts (or vice-versa). This could also have been done with JunOS security policies (stateful firewall rules), but doing the separation with routing instances also ensures routes from different domains are not mixed, and a simple policy misconfiguration won't lead to a disaster.

Route-based VPN on Linux

Starting from Linux 3.15, a similar configuration is possible with the help of a virtual tunnel interface[3]. First, we create the private namespace:
# ip netns add private
# ip netns exec private sysctl -qw net.ipv6.conf.all.forwarding=1
Any private interface needs to be moved to this namespace (no IP is configured as we can use IPv6 link-local addresses):
# ip link set netns private dev eth1
# ip link set netns private dev eth2
# ip netns exec private ip link set up dev eth1
# ip netns exec private ip link set up dev eth2
Then, we create vti6, a tunnel interface (similar to st0.1 in the JunOS example):
# ip tunnel add vti6 \
   mode vti6 \
   local 2001:db8:1::1 \
   remote 2001:db8:3::2 \
   key 6
# ip link set netns private dev vti6
# ip netns exec private ip addr add 2001:db8:ff::6/127 dev vti6
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_policy=1
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_xfrm=1
# ip netns exec private ip link set vti6 mtu 1500
# ip netns exec private ip link set vti6 up
The tunnel interface is created in the initial namespace and moved to the private one. It will remember its original namespace, where it will process encapsulated packets. Any packet entering the interface will temporarily get a firewall mark of 6 that will be used only to match the appropriate IPsec policy[4] below. The kernel sets a low MTU on the interface to handle any possible combination of ciphers and protocols. We set it to 1500 and let PMTUD do its work. We can then configure strongSwan[5]:
conn V3-2
  left        = 2001:db8:1::1
  leftsubnet  = ::/0
  right       = 2001:db8:3::2
  rightsubnet = ::/0
  authby      = psk
  mark        = 6
  auto        = route
  keyexchange = ikev2
  keyingtries = %forever
  ike         = aes256gcm16-prfsha384-ecp384!
  esp         = aes256gcm16-prfsha384-ecp384!
  mobike      = no
The IKE daemon configures the following policies in the kernel:
$ ip xfrm policy
src ::/0 dst ::/0
        dir out priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:1::1 dst 2001:db8:3::2
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir fwd priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir in priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
[...]
Those policies are used for any source or destination as long as the firewall mark is equal to 6, which matches the mark configured for the tunnel interface. The last step is to configure BGP to exchange routes. We can use BIRD for this:
router id 1.0.1.1;
protocol device {
   scan time 10;
}
protocol kernel {
   persist;
   learn;
   import all;
   export all;
   merge paths yes;
}
protocol bgp IBGP_V3_2 {
   local 2001:db8:ff::6 as 65001;
   neighbor 2001:db8:ff::7 as 65003;
   import all;
   export where ifname ~ "eth*";
   preference 160;
   hold time 6;
}
Once BIRD is started in the private namespace, we can check routes are learned correctly:
$ ip netns exec private ip -6 route show 2001:db8:a3::/64
2001:db8:a3::/64 proto bird metric 1024
        nexthop via 2001:db8:ff::5  dev vti5 weight 1
        nexthop via 2001:db8:ff::7  dev vti6 weight 1
The above route was learnt from both V3-1 (through vti5) and V3-2 (through vti6). Like for the JunOS version, there is no route-leaking between the private namespace and the initial one. The VPN cannot be used as a gateway between the two namespaces, only for encapsulation. This also prevents a misconfiguration (for example, the IKE daemon not running) from allowing packets to leave the private network. As a bonus, unencrypted traffic can be observed with tcpdump on the tunnel interface:
$ ip netns exec private tcpdump -pni vti6 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti6, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:51:15.258708 IP6 2001:db8:a1::1 > 2001:db8:a3::1: ICMP6, echo request, seq 69
20:51:15.260874 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 69
You can find all the configuration files for this example on GitHub. The documentation of strongSwan also features a page about route-based VPNs.

  1. Everything in this post should work with Libreswan.
  2. fwd is for incoming packets on non-local addresses. It only makes sense in transport mode and is a Linux-only particularity.
  3. Virtual tunnel interfaces (VTI) were introduced in Linux 3.6 (for IPv4) and Linux 3.12 (for IPv6). Appropriate namespace support was added in 3.15. KLIPS, an alternative out-of-tree stack available since Linux 2.2, also features tunnel interfaces.
  4. The mark is set right before doing a policy lookup and restored after that. Consequently, it doesn't affect other possible uses (filtering, routing). However, as Netfilter can also set a mark, one should be careful about conflicts.
  5. The ciphers used here are the strongest ones currently possible while keeping compatibility with JunOS. The documentation for strongSwan contains a complete list of supported algorithms as well as security recommendations to choose them.

10 September 2017

intrigeri: Can you reproduce this Tails ISO image?

Thanks to a Mozilla Open Source Software award, we have been working on making the Tails ISO images build reproducibly. We have made huge progress: for a few months now, ISO images built by Tails core developers and our CI system have always been identical. But we're not done yet and we need your help! Our first call for testing build reproducibility in August uncovered a number of remaining issues. We think that we have fixed them all since, and we now want to find out what other problems may prevent you from building our ISO image reproducibly. Please try to build an ISO image today, and tell us whether it matches ours!

Build an ISO

These instructions have been tested on Debian Stretch and testing/sid. If you're using another distribution, you may need to adjust them. If you get stuck at some point in the process, see our more detailed build documentation and don't hesitate to contact us.

Setup the build environment

You need a system that supports KVM, 1 GiB of free memory, and about 20 GiB of disk space.
  1. Install the build dependencies:
    sudo apt install \
        git \
        rake \
        libvirt-daemon-system \
        dnsmasq-base \
        ebtables \
        qemu-system-x86 \
        qemu-utils \
        vagrant \
        vagrant-libvirt \
        vmdebootstrap && \
    sudo systemctl restart libvirtd
    
  2. Ensure your user is in the relevant groups:
    for group in kvm libvirt libvirt-qemu ; do
       sudo adduser "$(whoami)" "$group"
    done
    
  3. Logout and log back in to apply the new group memberships.
Build Tails 3.2~alpha2

This should produce a Tails ISO image:
git clone https://git-tails.immerda.ch/tails && \
cd tails && \
git checkout 3.2-alpha2 && \
git submodule update --init && \
rake build
Send us feedback!

No matter how your build attempt turned out, we are interested in your feedback.

Gather system information

To gather the information we need about your system, run the following commands in the terminal where you've run rake build:
sudo apt install apt-show-versions && \
(
  for f in /etc/issue /proc/cpuinfo
  do
    echo "--- File: $ f  ---"
    cat "$ f "
    echo
  done
  for c in free locale env 'uname -a' '/usr/sbin/libvirtd --version' \
            'qemu-system-x86_64 --version' 'vagrant --version'
  do
    echo "--- Command: $ c  ---"
    eval "$ c "
    echo
  done
  echo '--- APT package versions ---'
  apt-show-versions qemu:amd64 linux-image-amd64:amd64 vagrant \
                    libvirt0:amd64
) | bzip2 > system-info.txt.bz2
Then check that the generated file doesn't contain any sensitive information you do not want to leak:
bzless system-info.txt.bz2
Next, please follow the instructions below that match your situation!

If the build failed

Sorry about that. Please help us fix it by opening a ticket:

If the build succeeded

Compute the SHA-512 checksum of the resulting ISO image:
sha512sum tails-amd64-3.2~alpha2.iso
Compare your checksum with ours:
9b4e9e7ee7b2ab6a3fb959d4e4a2db346ae322f9db5409be4d5460156fa1101c23d834a1886c0ce6bef2ed6fe378a7e76f03394c7f651cc4c9a44ba608dda0bc
If the checksums match: success, congrats for reproducing Tails 3.2~alpha2! Please send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 successful" and system-info.txt.bz2 attached. Thanks in advance! Then you can stop reading here. Else, if the checksums differ: too bad, but really it's good news as the whole point of the exercise is precisely to identify such problems :) Now you are in a great position to help improve the reproducibility of Tails ISO images by following these instructions:
  1. Install diffoscope version 83 or higher and all the packages it recommends. For example, if you're using Debian Stretch:
    sudo apt remove diffoscope && \
    echo 'deb http://ftp.debian.org/debian stretch-backports main' \
        | sudo tee /etc/apt/sources.list.d/stretch-backports.list && \
    sudo apt update && \
    sudo apt -o APT::Install-Recommends="true" \
             install diffoscope/stretch-backports
    
  2. Download the official Tails 3.2~alpha2 ISO image.
  3. Compare the official Tails 3.2~alpha2 ISO image with yours:
    diffoscope \
           --text diffoscope.txt \
           --html diffoscope.html \
           --max-report-size 262144000 \
           --max-diff-block-lines 10000 \
           --max-diff-input-lines 10000000 \
           path/to/official/tails-amd64-3.2~alpha2.iso \
           path/to/your/own/tails-amd64-3.2~alpha2.iso
    bzip2 diffoscope.{txt,html}
    
  4. Send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 failed", attaching:
    • system-info.txt.bz2;
    • the smallest file among diffoscope.txt.bz2 and diffoscope.html.bz2, except if they are larger than 100 KiB, in which case better upload the file somewhere (e.g. share.riseup.net) and share the link in your email.
Thanks a lot!

Credits

Thanks to Ulrike & anonym who authored a draft on which this blog post is based.

22 August 2017

Daniel Silverstone: Building a USB descriptor table set

In order to proceed further on our USB/STM32 odyssey, we need to start to build a USB descriptor set for our first prototype piece of code. For this piece, we're going to put together a USB device which is vendor-specific class-wise and has a single configuration with an interface with a single endpoint, none of which we're actually going to implement yet. What we're after is just to get the information presented to the computer so that lsusb can see it. To get these built, let's refer to information we discovered and recorded in a previous post about how descriptors go together.

Device descriptor

Remembering that values which are > 1 byte in length are always stored little-endian, we can construct our device descriptor as:
Our device descriptor
Field Name Value Bytes
bLength 18 0x12
bDescriptorType DEVICE 0x01
bcdUSB USB 2.0 0x00 0x02
bDeviceClass 0 0x00
bDeviceSubClass 0 0x00
bDeviceProtocol 0 0x00
bMaxPacketSize 64 0x40
idVendor TEST 0xff 0xff
idProduct TEST 0xff 0xff
bcdDevice 0.0.1 0x01 0x00
iManufacturer 1 0x01
iProduct 2 0x02
iSerialNumber 3 0x03
bNumConfigurations 1 0x01
We're using the vendor ID and product id 0xffff because at this point we don't have any useful values for this (it costs $5,000 to register a vendor ID). This gives us a final byte array of:
0x12 0x01 0x00 0x02 0x00 0x00 0x00 0x40 (Early descriptor) 0xff 0xff 0xff 0xff 0x01 0x00 0x01 0x02 0x03 0x01 (and the rest)
We're reserving string ids 1, 2, and 3, for the manufacturer string, product name string, and serial number string respectively. I'm deliberately including them all so that we can see it all come out later in lsusb. If you feed the above hex sequence into a USB descriptor decoder then you can check my working.
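As a quick cross-check, a few lines of Python can pack the same fields little-endian and confirm they match the byte string above. This is purely an illustrative sketch; the values are just the ones from the table.

import struct

# Standard USB device descriptor layout: multi-byte fields are little-endian.
dev_desc = struct.pack(
    "<BBHBBBBHHHBBBB",
    18,      # bLength
    0x01,    # bDescriptorType (DEVICE)
    0x0200,  # bcdUSB (USB 2.0)
    0x00,    # bDeviceClass
    0x00,    # bDeviceSubClass
    0x00,    # bDeviceProtocol
    64,      # bMaxPacketSize
    0xffff,  # idVendor (test value)
    0xffff,  # idProduct (test value)
    0x0001,  # bcdDevice (0.0.1)
    1,       # iManufacturer
    2,       # iProduct
    3,       # iSerialNumber
    1,       # bNumConfigurations
)

expected = bytes([0x12, 0x01, 0x00, 0x02, 0x00, 0x00, 0x00, 0x40,
                  0xff, 0xff, 0xff, 0xff, 0x01, 0x00, 0x01, 0x02,
                  0x03, 0x01])
assert dev_desc == expected and len(dev_desc) == 18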

Endpoint Descriptor

We want a single configuration, which covers our one interface, with one endpoint in it. Let's start with the endpoint...
Our bulk IN endpoint
Field Name Value Bytes
bLength 7 0x07
bDescriptorType ENDPOINT 0x05
bEndpointAddress EP2IN 0x82
bmAttributes BULK 0x02
wMaxPacketSize 64 0x40 0x00
bInterval IGNORED 0x00
We're giving a single bulk IN endpoint, since that's the simplest thing to describe at this time. This endpoint will never be ready and so nothing will ever be read into the host. All that gives us:
0x07 0x05 0x82 0x02 0x40 0x00 0x00

Interface Descriptor

The interface descriptor prefaces the endpoint set, and thanks to our simple single endpoint, and no plans for alternate interfaces, we can construct the interface simply as:
Our single simple interface
Field Name Value Bytes
bLength 9 0x09
bDescriptorType INTERFACE 0x04
bInterfaceNumber 1 0x01
bAlternateSetting 1 0x01
bNumEndpoints 1 0x01
bInterfaceClass 0 0x00
bInterfaceSubClass 0 0x00
bInterfaceProtocol 0 0x00
iInterface 5 0x05
All that gives us:
0x09 0x04 0x01 0x01 0x01 0x00 0x00 0x00 0x05

Configuration descriptor

Finally we can put it all together and get the configuration descriptor...
Our sole configuration, encapsulating the interface and endpoint above
Field Name Value Bytes
bLength 9 0x09
bDescriptorType CONFIG 0x02
wTotalLength 9+9+7 0x19 0x00
bNumInterfaces 1 0x01
bConfigurationValue 1 0x01
iConfiguration 4 0x04
bmAttributes Bus powered, no wake 0x80
bMaxPower 500mA 0xfa
The wTotalLength field is interesting. It contains the configuration length, the interface length, and the endpoint length, hence 9 plus 9 plus 7 is 25. This gives:
0x09 0x02 0x19 0x00 0x01 0x01 0x04 0x80 0xfa
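Likewise, a tiny sketch (again just illustrative Python, not part of the firmware) can verify that wTotalLength really covers the configuration, interface and endpoint descriptors we just built:

config = bytes([0x09, 0x02, 0x19, 0x00, 0x01, 0x01, 0x04, 0x80, 0xfa])
interface = bytes([0x09, 0x04, 0x01, 0x01, 0x01, 0x00, 0x00, 0x00, 0x05])
endpoint = bytes([0x07, 0x05, 0x82, 0x02, 0x40, 0x00, 0x00])

full = config + interface + endpoint
# wTotalLength (bytes 2-3, little-endian) must cover all three descriptors.
w_total_length = full[2] | (full[3] << 8)
assert w_total_length == len(full) == 9 + 9 + 7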

String descriptors

We allowed ourselves a total of five strings; they were iManufacturer, iProduct, iSerial (from the device descriptor), iConfiguration (from the configuration descriptor), and iInterface (from the interface descriptor) respectively. Our string descriptors will therefore be:
String descriptor zero, en_GB only
Field Name Value Bytes
bLength 4 0x04
bDescriptorType STRING 0x03
wLangID[0] en_GB 0x09 0x08
0x04 0x03 0x09 0x08
...and...
String descriptor one, iManufacturer
Field Name Value Bytes
bLength 38 0x26
bDescriptorType STRING 0x03
bString "Rusty Manufacturer" ...
0x26 0x03 0x52 0x00 0x75 0x00 0x73 0x00 0x74 0x00 0x79 0x00 0x20 0x00 0x4d 0x00 0x61 0x00 0x6e 0x00 0x75 0x00 0x66 0x00 0x61 0x00 0x63 0x00 0x74 0x00 0x75 0x00 0x72 0x00 0x65 0x00 0x72 0x00
(You get the idea, there's no point me breaking down the rest of the string descriptors here, suffice it to say that the other strings are appropriate for the values they represent - namely product, serial, configuration, and interface.)
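If you would rather generate the remaining strings than hand-encode them, a small helper along these lines (a sketch, not code from this series) builds a string descriptor from any text: a length byte, the type 0x03, then the UTF-16LE code units, which is exactly the interleaved 0x00 pattern visible above.

def string_descriptor(text):
    # bLength counts the two header bytes plus two bytes per UTF-16 code unit.
    encoded = text.encode("utf-16-le")
    return bytes([2 + len(encoded), 0x03]) + encoded

desc = string_descriptor("Rusty Manufacturer")
assert desc[0] == 0x26 and len(desc) == 38   # matches the table above
print(" ".join("0x%02x" % b for b in desc))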

Putting it all together

Given all the above, we have a device descriptor which is standalone, then a configuration descriptor which encompasses the interface and endpoint descriptors too. Finally we have a string descriptor table with six entries: the first is the language sets available, and the rest are our strings. In total we have:
    // Device descriptor
    const DEV_DESC: [u8; 18] = [
        0x12, 0x01, 0x00, 0x02, 0x00, 0x00, 0x00, 0x40,
        0xff, 0xff, 0xff, 0xff, 0x01, 0x00, 0x01, 0x02,
        0x03, 0x01
    ];
    // Configuration descriptor
    const CONF_DESC: [u8; 25] = [
        0x09, 0x02, 0x19, 0x00, 0x01, 0x01, 0x04, 0x80, 0xfa,
        0x09, 0x04, 0x01, 0x01, 0x01, 0x00, 0x00, 0x00, 0x05,
        0x07, 0x05, 0x82, 0x02, 0x40, 0x00, 0x00
    ];
    // String Descriptor zero
    const STR_DESC_0: [u8; 4] = [0x04, 0x03, 0x09, 0x08];
    // String Descriptor 1, "Rusty Manufacturer"
    const STR_DESC_1: [u8; 38] = [
        0x26, 0x03, 0x52, 0x00, 0x75, 0x00, 0x73, 0x00,
        0x74, 0x00, 0x79, 0x00, 0x20, 0x00, 0x4d, 0x00,
        0x61, 0x00, 0x6e, 0x00, 0x75, 0x00, 0x66, 0x00,
        0x61, 0x00, 0x63, 0x00, 0x74, 0x00, 0x75, 0x00,
        0x72, 0x00, 0x65, 0x00, 0x72, 0x00
    ];
    // String Descriptor 2, "Rusty Product"
    const STR_DESC_2: [u8; 28] = [
        0x1c, 0x03, 0x52, 0x00, 0x75, 0x00, 0x73, 0x00,
        0x74, 0x00, 0x79, 0x00, 0x20, 0x00, 0x50, 0x00,
        0x72, 0x00, 0x6f, 0x00, 0x64, 0x00, 0x75, 0x00,
        0x63, 0x00, 0x74, 0x00
    ];
    // String Descriptor 3, "123ABC"
    const STR_DESC_3: [u8; 14] = [
        0x0e, 0x03, 0x31, 0x00, 0x32, 0x00, 0x33, 0x00,
        0x41, 0x00, 0x42, 0x00, 0x43, 0x00
    ];
    // String Descriptor 4, "Rusty Configuration"
    const STR_DESC_4: [u8; 40] = [
        0x28, 0x03, 0x52, 0x00, 0x75, 0x00, 0x73, 0x00,
        0x74, 0x00, 0x79, 0x00, 0x20, 0x00, 0x43, 0x00,
        0x6f, 0x00, 0x6e, 0x00, 0x66, 0x00, 0x69, 0x00,
        0x67, 0x00, 0x75, 0x00, 0x72, 0x00, 0x61, 0x00,
        0x74, 0x00, 0x69, 0x00, 0x6f, 0x00, 0x6e, 0x00
    ];
    // String Descriptor 5, "Rusty Interface"
    const STR_DESC_5: [u8; 32] = [
        0x20, 0x03, 0x52, 0x00, 0x75, 0x00, 0x73, 0x00,
        0x74, 0x00, 0x79, 0x00, 0x20, 0x00, 0x49, 0x00,
        0x6e, 0x00, 0x74, 0x00, 0x65, 0x00, 0x72, 0x00,
        0x66, 0x00, 0x61, 0x00, 0x63, 0x00, 0x65, 0x00
    ];
With the above, we're a step closer to our first prototype which will hopefully be enumerable. Next time we'll look at beginning our prototype low level USB device stack mock-up.

13 August 2017

Mike Gabriel: @DebConf17: Work for Debian and FLOSS I got done during DebCamp and DebConf... and Beyond...

People I Met and will Remember
Topics I have worked on
Talks and BoFs
Packages Uploaded to Debian unstable
Packages Uploaded to Debian NEW
I also looked into lightdm-webkit2-greeter, but upstream is in the middle of a transition from Gtk3 to Qt5, so this has been suspended for now.
Packages Uploaded to oldstable-/stable-proposed-updates or -security
Other Package related Stuff
Thanks to Everyone Making This Event Possible
A big thanks to everyone who made it possible for me to attend this event!!!

01 August 2017

Reproducible builds folks: Reproducible Builds: Weekly report #118

Here's what happened in the Reproducible Builds effort between Sunday July 23 and Saturday July 29 2017:
Toolchain development and fixes
Packages reviewed and fixed, and bugs filed
Reviews of unreproducible packages
4 package reviews have been added, 2 have been updated and 24 have been removed in this week, adding to our knowledge about identified issues.
Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported by:
diffoscope development
Misc.
This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

31 July 2017

Chris Lamb: Free software activities in July 2017

Here is my monthly update covering what I have been doing in the free software world during July 2017 (previous month): I also blogged about my recent lintian hacking and installation-birthday package.
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. (I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.) This month I:
  • Assisted Mattia with a draft of an extensive status update to the debian-devel-announce mailing list. There were interesting follow-up discussions on Hacker News and Reddit.
  • Submitted the following patches to fix reproducibility-related toolchain issues within Debian:
  • I also submitted 5 patches to fix specific reproducibility issues in autopep8, castle-game-engine, grep, libcdio & tinymux.
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#114 #115, #116 & #117)

I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • comparators.xml:
    • Fix EPUB "missing file" tests; they ship a META-INF/container.xml file. [ ]
    • Misc style fixups. [ ]
  • APK files can also be identified as "DOS/MBR boot sector". (#868486)
  • comparators.sqlite: Simplify file detection by rewriting manual recognizes call with a Sqlite3Database.RE_FILE_TYPE definition. [ ]
  • comparators.directory:
    • Revert the removal of a try-except. (#868534)
    • Tidy module. [ ]

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Add missing File::Temp imports in the JAR and PNG handlers. This appears to have been exposed by lazily-loading handlers in #867982. (#868077)

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Avoid a race condition between check-and-creation of Buildinfo instances. [ ]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" emails to the debian-devel-announce mailing list.
Patches contributed
  • obs-studio: Remove annoying "click wrapper" on first startup. (#867756)
  • vim: Syntax highlighting for debian/copyright files. (#869965)
  • moin: Incorrect timezone offset applied due to "84600" typo. (#868463)
  • ssss: Add a simple autopkgtest. (#869645)
  • dch: Please bump $latest_bpo_dist to current stable release. (#867662)
  • python-kaitaistruct: Remove Markdown and homepage references from package long descriptions. (#869265)
  • album-data: Correct invalid Vcs-Git URI. (#869822)
  • pytest-sourceorder: Update Homepage field. (#869125)
I also made a very large number of contributions to the Lintian static analysis tool. To avoid duplication here, I have outlined them in a separate post.

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1014-1 for libclamunrar, a library to add unrar support to the Clam anti-virus software to fix an arbitrary code execution vulnerability.
  • Issued DLA 1015-1 for the libgcrypt11 crypto library to fix a "sliding windows" information leak.
  • Issued DLA 1016-1 for radare2 (a reverse-engineering framework) to prevent a remote denial-of-service attack.
  • Issued DLA 1017-1 to fix a heap-based buffer over-read in the mpg123 audio library.
  • Issued DLA 1018-1 for the sqlite3 database engine to prevent a vulnerability that could be exploited via a specially-crafted database file.
  • Issued DLA 1019-1 to patch a cross-site scripting (XSS) exploit in phpldapadmin, a web-based interface for administering LDAP servers.
  • Issued DLA 1024-1 to prevent an information leak in nginx via a specially-crafted HTTP range.
  • Issued DLA 1028-1 for apache2 to prevent the leakage of potentially confidential information via providing Authorization Digest headers.
  • Issued DLA 1033-1 for the memcached in-memory object caching server to prevent a remote denial-of-service attack.

Uploads
  • redis:
    • 4:4.0.0-1 Upload new major upstream release to unstable.
    • 4:4.0.0-2 Make /usr/bin/redis-server in the primary package a symlink to /usr/bin/redis-check-rdb in the redis-tools package to prevent duplicate debug symbols that result in a package file collision. (#868551)
    • 4:4.0.0-3 Add -latomic to LDFLAGS to avoid a FTBFS on the mips & mipsel architectures.
    • 4:4.0.1-1 New upstream version. Install 00-RELEASENOTES as the upstream changelog.
    • 4:4.0.1-2 Skip non-deterministic tests that rely on timing. (#857855)
  • python-django:
    • 1:1.11.3-1 New upstream bugfix release. Check DEB_BUILD_PROFILES consistently, not DEB_BUILD_OPTIONS.
  • bfs:
    • 1.0.2-2 & 1.0.2-3 Use help2man to generate a manpage.
    • 1.0.2-4 Set hardening=+all for bindnow, etc.
    • 1.0.2-5 & 1.0.2-6 Don't use upstream's release target as it overrides our CFLAGS & install RELEASES.md as the upstream changelog.
    • 1.1-1 New upstream release.
  • libfiu:
    • 0.95-4 Apply patch from Steve Langasek to fix autopkgtests. (#869709)
  • python-daiquiri:
    • 1.0.1-1 Initial upload. (ITP)
    • 1.1.0-1 New upstream release.
    • 1.1.0-2 Tidy package long description.
    • 1.2.1-1 New upstream release.

I also reviewed and sponsored the uploads of gtts-token 1.1.1-1 and nlopt 2.4.2+dfsg-3.

Debian bugs filed
  • ITP: python-daiquiri Python library to easily setup basic logging functionality. (#867322)
  • twittering-mode: Correct incorrect time formatting due to "84600" typo. (#868479)

18 June 2017

Simon Josefsson: OpenPGP smartcard under GNOME on Debian 9.0 Stretch

I installed Debian 9.0 Stretch on my Lenovo X201 laptop today. Installation went smoothly, as usual. GnuPG/SSH with an OpenPGP smartcard (I use a YubiKey NEO) does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 Jessie earlier, and I thought I'd do a similar blog post for Debian 9.0 Stretch. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn't) so there is some progress. May I hope that Debian 10.0 Buster gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report). After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.
jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 
This fails because scdaemon is not installed. Isn't a smartcard common enough that this should be installed by default on a GNOME Desktop Debian installation? Anyway, install it as follows.
root@latte:~# apt-get install scdaemon
Then try again.
jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$ 
I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I often recall that I want pcscd installed, since I work with smartcards in general.
root@latte:~# apt-get install pcscd
Now gpg --card-status works!
jas@latte:~$ gpg --card-status
Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson 
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$ 
Using the key will not work though.
jas@latte:~$ echo foo | gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$ 
This is because the public key and the secret key stub are not available.
jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 
You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.
jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$ 
Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.
root@latte:~# apt-get install dirmngr
Below I proceed to trust the clouds to find my key.
jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 
Now the public key and the secret key stub are available locally.
jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]
jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]
jas@latte:~$ 
I am now able to sign data with the smartcard, yay!
jas@latte:~$ echo foo | gpg -a --sign
-----BEGIN PGP MESSAGE-----
owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$ 
Encrypting to myself will not work smoothly though.
jas@latte:~$ echo foo | gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson 
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.
Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting
jas@latte:~$ 
The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.
jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Secret key is available.
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> quit
jas@latte:~$ echo foo | gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----
hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$ 
So everything is fine, isn't it? Alas, not quite.
jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ 
Tracking this down, I now realize that GNOME's keyring is used for SSH but GnuPG's gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.
jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 
Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient, though: you also need gpg-agent to start with enable-ssh-support. The simplest way to achieve that is to add a line to ~/.gnupg/gpg-agent.conf as follows. When you log in, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.
jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 
jas@latte:~$ 
Log out from GNOME and log in again. Now you should see ssh-add -L working.
jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$ 
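A quick additional check (a sketch, assuming the standard stretch socket locations rather than verified output): after the re-login, SSH_AUTH_SOCK should point at gpg-agent's SSH socket rather than at the GNOME keyring one.
jas@latte:~$ echo $SSH_AUTH_SOCK
/run/user/1000/gnupg/S.gpg-agent.ssh
jas@latte:~$ 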
Topics for further discussion or research include 1) whether scdaemon, dirmngr and/or pcscd should be pre-installed on Debian desktop systems; 2) whether gpg --card-status should attempt to import the public key and secret key stub automatically; 3) why GNOME keyring is used by default for SSH rather than gpg-agent; 4) whether GNOME keyring should support smartcards, or if it is better to always use gpg-agent for GnuPG/SSH; and 5) whether something could or should be done to automatically infer the trust setting for a secret key. Enjoy!

14 June 2017

Sjoerd Simons: Debian armhf VM on arm64 server

At Collabora one of the many things we do is build Debian derivatives/overlays for customers on a variety of architectures including 32 bit and 64 bit ARM systems. And just as Debian does, our OBS system builds on native systems rather than emulators.

Luckily, with the advent of ARM server systems some years ago, building natively for those systems has been a lot less painful than it used to be. For 32 bit ARM we've been relying on Calxeda blade servers, however Calxeda unfortunately tanked ages ago and the hardware is starting to show its age (though luckily Debian Stretch does support it properly, so at least the software is still fresh). On the 64 bit ARM side, we're running on Gigabyte MP30-AR1 based servers which can run 32 bit ARM code (as opposed to e.g. ThunderX based servers which can only run 64 bit code). As such, running armhf VMs on them to act as build slaves seems a good choice, but setting that up is a bit more involved than it might appear.

The first pitfall is that there is no standard bootloader or boot firmware available in Debian to boot on the "virt" machine emulated by qemu (I didn't want to use an emulation of a real machine). That also means there is nothing to pick the kernel inside the guest at boot, nor something which can e.g. have the guest network boot, which means direct kernel booting needs to be used.

The second pitfall was that the current Debian Stretch armhf kernel isn't built with support for the generic PCI host controller which the qemu virtual machine exposes, which means no storage and no network show up in the guest. Hopefully that will get solved soonish (Debian bug 864726) and can be in a Stretch update; until then a custom kernel package built with the patch attached to the bug report is required, but I won't go into that any further in this post.

So, on the happy assumption that we have a kernel that works, the challenge left is to nicely manage direct kernel loading. Or more specifically, how to ensure the host boots the kernel the guest has installed via the standard apt tools, without having to copy kernels around between guest and host, which essentially comes down to exposing /boot from the guest to the host. The solution we picked is to use qemu's 9pfs support to share a folder from the host and use that as /boot of the guest. For the 9p folder the "mapped" security mode seems needed, as the "none" mode seems to get confused by dpkg (Debian bug 864718).

As we're using libvirt as our virtual machine manager, the remainder of how to glue it all together will be mostly specific to that. First step is to install the system, mostly as normal. One can directly boot into the vmlinuz and initrd.gz provided by the normal Stretch armhf netboot installer (downloaded into e.g. /tmp). The setup overall is straightforward with a few small tweaks. Apart from those tweaks the resulting example command is similar to the one that can be found in the virt-install man page:
virt-install --name armhf-vm --arch armv7l --memory 512 \
  --disk /srv/armhf-vm.img,bus=virtio \
  --filesystem /srv/armhf-vm-boot,virtio-boot,mode=mapped \
  --boot=kernel=/tmp/vmlinuz,initrd=/tmp/initrd.gz,kernel_args="console=ttyAMA0 root=/dev/vda1"

Run through the install as you normally would. Towards the end the installer will likely complain it can't figure out how to install a bootloader, which is fine. Just before ending the install/reboot, switch to the shell and copy /boot/vmlinuz and /boot/initrd.img from the target system to the host in some fashion (e.g. chroot into /target and use scp from the installed system). This is required as the installer doesn't support 9p, but to boot the system an initramfs will be needed with the modules needed to mount the root fs, which is provided by the installed initramfs :). Once that's all moved around, the installer can be finished. Next, booting the installed system. For that, adjust the libvirt config (e.g. using virsh edit and tuning the xml) to use the kernel and initramfs copied from the installer rather than the installer ones. Spool up the VM again and it should happily boot into a freshly installed Debian system. To finalize on the guest side, /boot should be moved onto the shared 9pfs; the fstab entry for the new /boot should look something like:
virtio-boot /boot  9p trans=virtio,version=9p2000.L,x-systemd.automount 0 0
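Moving the existing /boot content over to the freshly mounted share can then be done inside the guest roughly as follows (a sketch under the assumptions above, not part of the original setup; the key point is that cp -a keeps the vmlinuz/initrd.img symlinks intact):
mv /boot /boot.orig
mkdir /boot
mount /boot                 # uses the 9p fstab entry above
cp -a /boot.orig/. /boot/   # -a preserves the vmlinuz/initrd.img symlinks
rm -rf /boot.orig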

With that setup, shuffling the files in /boot around to the new filesystem (as sketched above) is all that's left and the guest is done (make sure vmlinuz/initrd.img stay symlinks). Kernel upgrades will work as normal and be visible to the host. Now on the host side there is one extra hoop to jump through: as the guest uses the 9p mapped security model, symlinks in the guest will be normal files on the host containing the symlink target. To resolve that one, we've used libvirt's qemu hook support to set up a proper symlink before the guest is started. Below is the script we ended up using as an example (/etc/libvirt/hooks/qemu):
vm=$1
action=$2
bootdir=/srv/${vm}-boot
if [ ${action} != "prepare" ] ; then
  exit 0
fi
if [ ! -d ${bootdir} ] ; then
  exit 0
fi
ln -sf $(basename $(cat ${bootdir}/vmlinuz)) ${bootdir}/virtio-vmlinuz
ln -sf $(basename $(cat ${bootdir}/initrd.img)) ${bootdir}/virtio-initrd.img
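One small caveat worth adding (an assumption based on libvirt's hook documentation rather than on the original write-up): the hook is only invoked if the file is executable, and libvirtd needs to be restarted once after the hook script is first created before it gets picked up.
chmod +x /etc/libvirt/hooks/qemu
systemctl restart libvirtd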

With that in place, we can simply point the libvirt definition to use /srv/${vm}-boot/virtio-{vmlinuz,initrd.img} as the kernel/initramfs for the machine and it'll automatically get the latest kernel/initramfs as installed by the guest when the VM is started. Just one final rough edge remains: when doing a reboot from the VM, libvirt leaves qemu to handle that rather than restarting qemu. This unfortunately means a reboot won't pick up a new kernel if any; for now we've solved this by configuring libvirt to stop the VM on reboot instead. As we typically only reboot VMs on kernel (security) upgrades, while a bit tedious, this avoids rebooting with an older kernel/initramfs than intended.

31 May 2017

Chris Lamb: Free software activities in May 2017

Here is my monthly update covering what I have been doing in the free software world (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. (I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.) This month I:
I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Don't fail when run under perversely-recursive input files. (#780761).

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Move from verbose_print to nonquiet_print so we print when normalising a file. This is so we can start to target the removal of strip-nondeterminism itself.
  • Only print log messages by default if the file was actually modified. (#863033)
  • Update package long descriptions to clarify that the tool itself is a temporary workaround. (#862029)


Debian My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce list. However, I:
  • Represented Debian at the OSCAL 2017 in Tirana, Albania.
  • Attended the Reproducible Builds hackathon in Hamburg, Germany. (Report)
  • Finally, I attended Debian SunCamp 2017 in Lloret de Mar in Catalonia, Spain.

Patches contributed
  • xarchiver: Adding files to .tar.xz deletes existing content. (#862593)
  • screen-message: Please invert the default colours. (#862056)
  • fontconfig: fc-cache returns with exit code 0 on 256 errors. (#863427)
  • quadrapassel: Segfaults when unpausing a paused finished game. (#863106)
  • camping: Broken symlink. (#861040)
  • dns-root-data: Does not build if /bin/sh is Bash. (#862252)
  • dh-python: bit.ly link doesn't work anymore. (#863074)

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, adding links to upstream patches, etc.
  • Issued DLA 930-1 fixing a remote application crash vulnerability in libxstream-java, a Java library to serialize objects to XML and back again.
  • Issued DLA 935-1 correcting a local denial of service vulnerability in lxterminal, the terminal emulator for the LXDE desktop environment.
  • Issued DLA 940-1 to remedy an issue in sane-backends which allowed remote attackers to obtain sensitive memory information via a crafted SANE_NET_CONTROL_OPTION packet.
  • Issued DLA 943-1 for the deluge bittorrent client to fix a directory traversal attack vulnerability in the web user interface.
  • Issued DLA 949-1 fixing an integer signedness error in the miniupnpc UPnP client that could allow remote attackers to cause a denial of service attack.
  • Issued DLA 959-1 for the libical calendaring library. A use-after-free vulnerability could allow remote attackers to cause a denial of service and possibly read heap memory via a specially crafted .ICS file.

Uploads
  • redis (3:3.2.9-1) New upstream release.
  • python-django:
    • 1:1.11.1-1 New upstream minor release.
    • 1:1.11.1-2 & 1:1.11.1-3 Add missing Build-Depends on libgdal-dev due to new GIS tests.
  • docbook-to-man:
    • 1:2.0.0-36 Adopt package. Apply a patch to prevent undefined behaviour caused by a memcpy(3) parameter overlap. (#842635, #858389)
    • 1:2.0.0-37 Install manpages using debian/docbook-to-man.manpages over manual calls.
  • installation-birthday Initial upload and misc. subsequent fixes.
  • bfs:
    • 1.0-3 Fix FTBFS on hurd-i386. (#861569)
    • 1.0.1-1 New upstream release & correct debian/watch file.

I also made the following non-maintainer uploads (NMUs):
  • ca-certificates (20161130+nmu1) Remove StartCom and WoSign certificates as they are now untrusted by the major browser vendors. (#858539)
  • sane-backends (1.0.25-4.1) Correct missing error handler in (generated) prerm script. (#862334)
  • seqan2 (2.3.1+dfsg-3.1) Fix broken /usr/bin/splazers symlink on 32-bit architectures. (#863669)
  • jackeq (0.5.9-2.1) Fix a segmentation fault caused by passing a truncated pointer instead of a GtkType. (#863416)
  • kluppe (0.6.20-1.1) Fix segmentation fault at startup. (#863421)
  • coyim (0.3.7-2.1) Skip tests that require internet access to avoid FTBFS. (#863414)
  • pavuk (0.9.35-6.1) Fix segmentation fault when opening "Limitations" window. (#863492)
  • porg (2:0.10-1.1) Fix broken LD_PRELOAD path. (#863495)
  • timemachine (0.3.3-2.1) Fix two segmentation faults caused by truncated pointers. (#863420)

Debian bugs filed
  • acct: Docs incorrectly installed to "accounting.html" directory. (#862180)
  • git-hub: Does not work with 2FA-enabled accounts. (#863265)
  • libwibble: Homepage and Vcs-Darcs fields are outdated. (#861673)



I additionally filed 2 bugs for packages that access the internet during build against flower and r-bioc-gviz.


I also filed 6 FTBFS bugs against cronutils, isoquery, libgnupg-interface-perl, maven-plugin-tools, node-dateformat, password-store & simple-tpm-pk11.

FTP Team

As a Debian FTP assistant I ACCEPTed 105 packages: boinc-app-eah-brp, debug-me, e-mem, etcd, fdroidcl, firejail, gcc-6-cross-ports, gcc-7-cross-ports, gcc-defaults, gl2ps, gnome-software, gnupg2, golang-github-dlclark-regexp2, golang-github-dop251-goja, golang-github-nebulouslabs-fastrand, golang-github-pkg-profile, haskell-call-stack, haskell-foundation, haskell-nanospec, haskell-parallel-tree-search, haskell-posix-pty, haskell-protobuf, htmlmin, iannix, libarchive-cpio-perl, libexternalsortinginjava-java, libgetdata, libpll, libtgvoip, mariadb-10.3, maven-resolver, mysql-transitional, network-manager, node-async-each, node-aws-sign2, node-bcrypt-pbkdf, node-browserify-rsa, node-builtin-status-codes, node-caseless, node-chokidar, node-concat-with-sourcemaps, node-console-control-strings, node-create-ecdh, node-create-hash, node-create-hmac, node-cryptiles, node-dot, node-ecc-jsbn, node-elliptic, node-evp-bytestokey, node-extsprintf, node-getpass, node-gulp-coffee, node-har-schema, node-har-validator, node-hawk, node-jsprim, node-memory-fs, node-pbkdf2, node-performance-now, node-set-immediate-shim, node-sinon-chai, node-source-list-map, node-stream-array, node-string-decoder, node-stringstream, node-verror, node-vinyl-sourcemaps-apply, node-vm-browserify, node-webpack-sources, node-wide-align, odil, onionshare, opensvc, otb, perl, petsc4py, pglogical, postgresql-10, psortb, purl, pymodbus, pymssql, python-decouple, python-django-rules, python-glob2, python-ncclient, python-parse-type, python-prctl, python-sparse, quoin-clojure, quorum, r-bioc-genomeinfodbdata, radlib, reprounzip, rustc, sbt-test-interface, slepc4py, slick-greeter, sparse, te923con, trabucco, traildb, typescript-types & writegood-mode. I additionally filed 6 RC bugs against packages that had incomplete debian/copyright files against: libgetdata, odil, opensvc, python-ncclient, radlib and reprounzip.

Lars Wirzenius: Using a Yubikey 4 for ensafening one's encryption

Introduction
I've written before about using a U2F key with PAM. This post continues the theme and explains how to use a smartcard with GnuPG for storing OpenPGP private keys. Specifically, a Yubikey 4 card, because that's what I have, but any good GnuPG compatible card should work. The Yubikey is both a GnuPG compatible smart card, and a U2F card. The Yubikey 4 can handle keys up to 4096 bits. Older Yubikeys can only handle keys up to 2048 bits. The reason to do this is to make it harder for an attacker to steal your encryption keys. I will assume you don't already have an OpenPGP key, or are willing to generate a new one. I will also assume you run Debian stretch; some of the desktop environment setup details may differ between Debian versions or between Linux distributions. You will need:
Terminology
Some terminology:
Outline
The process outline is:
  1. Create a new, signing-only master key with GnuPG.
  2. Create three "subkeys", one each for encryption, signing, and authentication. These subkeys are what everyone else uses.
  3. Export copies of the master key pair and the subkey pairs and put them in a safe place.
  4. Put the subkeys on the Yubikey.
  5. GnuPG will automatically use the keys from the card. You have to have the card plugged into a USB port for things to work. If someone steals your laptop, they won't get the private subkeys. Even if they steal your Yubikey, they won't get them (the smartcard is physically designed to prevent that), and can't even use them (because there's PIN codes or passphrases and getting them wrong several times locks up the smartcard).
  6. Use gpg-agent as your SSH agent, and the authentication-only subkey on the Yubikey is used as your ssh key.
Configure GnuPG
The process in more detail:
Create new keys
$ gpg --full-generate-key
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Tue 29 May 2018 06:43:54 PM EEST
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: Lars Wirzenius
Email address: liw@liw.fi
Comment: test key
You selected this USER-ID:
"Lars Wirzenius (test key) <liw@liw.fi>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 25FB738D6EE435F7 marked as ultimately trusted
gpg: directory '/home/liw/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/home/liw/.gnupg/openpgp-revocs.d/A734C10BF2DF39D19DC0F6C025FB738D6EE435F7.rev'
public and secret key created and signed.

Note that this key cannot be used for encryption. You may want to use
the command "--edit-key" to generate a subkey for this purpose.
pub rsa4096 2017-05-29 [SC] [expires: 2018-05-29]
A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
uid Lars Wirzenius (test key) <liw@liw.fi>
  • Note that I set a 1-year expiration for the key. The expiration can be extended at any time (if you have the master secret key), but unless you do, the key won't accidentally live longer than the chosen time.
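  • As an aside (a sketch, not part of the original walkthrough): extending the expiry later is done from the key edit menu, roughly like this; select the subkeys with key 1 and so on first if their expiry should be extended as well.
$ gpg --edit-key A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
gpg> expire
...answer the validity prompt, e.g. 1y...
gpg> save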
  • Review the key:
$ gpg --list-secret-keys
/home/liw/.gnupg/pubring.kbx
----------------------------
sec rsa4096 2017-05-29 [SC] [expires: 2018-05-29]
A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
uid [ultimate] Lars Wirzenius (test key) <liw@liw.fi>
  • You now have the signing-only master key. You should now create three subkeys (keyid is the key identifier shown in the key listing, A734C10BF2DF39D19DC0F6C025FB738D6EE435F7 above). Use the --expert option to be able to add an authentication-only subkey.
$ gpg --edit-key --expert A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

sec rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
(7) DSA (set your own capabilities)
(8) RSA (set your own capabilities)
(10) ECC (sign only)
(11) ECC (set your own capabilities)
(12) ECC (encrypt only)
(13) Existing key
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Tue 29 May 2018 06:44:52 PM EEST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

sec rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
(7) DSA (set your own capabilities)
(8) RSA (set your own capabilities)
(10) ECC (sign only)
(11) ECC (set your own capabilities)
(12) ECC (encrypt only)
(13) Existing key
Your selection? 6
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Tue 29 May 2018 06:45:22 PM EEST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

sec rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
(7) DSA (set your own capabilities)
(8) RSA (set your own capabilities)
(10) ECC (sign only)
(11) ECC (set your own capabilities)
(12) ECC (encrypt only)
(13) Existing key
Your selection? 8

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Sign Encrypt

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? a

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Sign Encrypt Authenticate

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? s

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Encrypt Authenticate

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? e

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Authenticate

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? q
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Tue 29 May 2018 06:45:56 PM EEST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

sec rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> save
Export secret keys to files, make a backup
  • You now have a master key and three subkeys. They are hidden in the ~/.gnupg directory. It is time to "export" the secret keys out from there.
$ gpg --export-secret-key --armor keyid > master.key
$ gpg --export-secret-subkeys --armor keyid > subkeys.key
  • You should keep these files safe. You don't want to lose them, and you don't want anyone else to get access to them. I recommend you take two USB memory sticks, format them using full-disk encryption, and copy the exported files to both of them. Then keep them somewhere safe. There's ways of making this part more sophisticated, but that's for another time.
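  • As an illustration of one way to do that (a sketch, not from the original text; /dev/sdX is a placeholder, so double-check with lsblk which device really is the stick before formatting anything):
# cryptsetup luksFormat /dev/sdX
# cryptsetup luksOpen /dev/sdX keybackup
# mkfs.ext4 /dev/mapper/keybackup
# mount /dev/mapper/keybackup /mnt
# cp master.key subkeys.key /mnt/
# umount /mnt
# cryptsetup luksClose keybackup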
  • The next step involves some hoop-jumping. What we want is to have the master secret key NOT on your machine, so we tell GnuPG to remove it. We exported it above, so we won't lose it. However, deleting the master secret key also removes the secret subkeys. But we can import those without importing the master secret key.
$ gpg --delete-secret-key keyid
$ gpg --import subkeys.key
  • Now verify that you have the secret subkeys, but not the master key. There should be one line starting with sec# (note the hash mark, which indicates the key isn't available), and three lines starting with ssb (no hash mark).
$ gpg -K
/home/liw/.gnupg/pubring.kbx
----------------------------
sec# rsa4096 2017-05-29 [SC] [expires: 2018-05-29]
A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
uid [ultimate] Lars Wirzenius (test key) <liw@liw.fi>
ssb rsa4096 2017-05-29 [S] [expires: 2018-05-29]
ssb rsa4096 2017-05-29 [E] [expires: 2018-05-29]
ssb rsa4096 2017-05-29 [A] [expires: 2018-05-29]
Install subkeys on a Yubikey
  • Now insert the Yubikey in a USB slot. We can start transferring the secret subkeys to the Yubikey. If you want, you can set your name and other information, and change PIN codes. There's several types of PIN codes: normal use, unblocking a locked card, and a third PIN code for admin operations. Changing the PIN codes is a good idea, otherwise everyone will just try the default of 123456 (admin 12345678). However, I'm skipping that in the interest of brevity.
$ gpg --card-edit
...
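  • For reference, the PIN changes skipped above happen inside that same gpg --card-edit session, roughly like this (a sketch; the passwd menu offers entries for changing the user PIN, unblocking it, and changing the Admin PIN):
gpg/card> admin
Admin commands are allowed
gpg/card> passwd
...pick 1 to change the PIN, 3 to change the Admin PIN, then Q to leave the menu...
gpg/card> quit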
  • Actually move the subkeys to the card. Note that this does a move, not a copy, and the subkeys will be removed from your ~/.gnupg (check with gpg -K).
$ gpg --edit-key liw
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 1

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb* rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> keytocard
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb* rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 1

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 2

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb* rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> keytocard
Please select where to store the key:
(2) Encryption key
Your selection? 2

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb* rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 2

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 3

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb* rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> keytocard
Please select where to store the key:
(3) Authentication key
Your selection? 3

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb* rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> save
  • If you want to use several Yubikeys, or have a spare one just in case, repeat the previous four steps (starting from importing subkeys back into ~/.gnupg).
  • You're now done, as far as GnuPG use is concerned. Any time you need to sign, encrypt, or decrypt something, GnuPG will look for your subkeys on the Yubikey, and will tell you to insert it in a USB port if it can't find the key.
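  • A quick way to confirm the card really is being used (a trivial sketch, not from the original text): ask GnuPG to sign something and watch for the smartcard PIN prompt.
$ echo test | gpg --armor --sign > /dev/null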
Use subkey on Yubikey as your SSH key
  • To actually use the authentication-only subkey on the Yubikey for ssh, you need to configure your system to use gpg-agent as the SSH agent. Add the following line to .gnupg/gpg-agent.conf:
     enable-ssh-support
    
  • On a Debian stretch system with GNOME, edit /etc/xdg/autostart/gnome-keyring-ssh.desktop to have the following line, to prevent the GNOME ssh agent from starting up:
     Hidden=true
    
  • Edit /etc/X11/Xsession.options and remove or comment out the line that says use-ssh-agent. This stops a system-started ssh-agent from being started when the desktop starts.
  • Create the file ~/.config/autostart/gpg-agent.desktop with the following content:
     [Desktop Entry]
     Type=Application
     Name=gpg-agent
     Comment=gpg-agent
     Exec=/usr/bin/gpg-agent --daemon
     OnlyShowIn=GNOME;Unity;MATE;
     X-GNOME-Autostart-Phase=PreDisplayServer
     X-GNOME-AutoRestart=false
     X-GNOME-Autostart-Notify=true
     X-GNOME-Bugzilla-Bugzilla=GNOME
     X-GNOME-Bugzilla-Product=gnome-keyring
     X-GNOME-Bugzilla-Component=general
     X-GNOME-Bugzilla-Version=3.20.0
    
  • To test, log out and back in again, then run the following in a terminal:
$ ssh-add -l
The output should contain a line that looks like this:
    4096 SHA256:PDCzyQPpd9tiWsELM8LwaLBsMDMm42J8/eEfezNgnVc cardno:000604626953 (RSA)
  • You need to export the authentication-only subkey in the SSH key format. You need this for adding to .ssh/authorized_keys, if nothing else.
$ gpg --export-ssh-key keyid > ssh.pub
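  • For example (an illustration with placeholder names only), to allow logging in to a host with the card-backed key, append the exported line to the remote authorized_keys:
$ cat ssh.pub | ssh user@host 'cat >> ~/.ssh/authorized_keys'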
  • Happy hacking.
See also
See also the following links. I've used them to learn enough to write the above.
Edited to fix:
  • Output of gpg -K after removing secret master key.

29 May 2017

Enrico Zini: Egg-walking with qemu-nbd and kpartx

I wanted to retrieve a file from a VirtualBox VDI image for this blog post. I followed these instructions and ended up here:
Once having used nbd0, only rebooting the system makes it possible to mount another image ... a little bit unpractical.
What happened was this:
# modprobe nbd  # NOO! Don't *EVER* do that!
# qemu-nbd -c /dev/nbd0 file.vdi
# kpartx -a /dev/nbd0
# mount /dev/nbd0p1 /mnt  # EHI! Where's /dev/nbd0p1 ??
# qemu-nbd -d /dev/nbd0
# rmmod nbd
rmmod: ERROR: Module nbd is in use
# kpartx -d /dev/nbd0
read error, sector 0
llseek error
llseek error
llseek error
# rmmod nbd
rmmod: ERROR: Module nbd is in use
# WHAT THE 
It turns out it's really modprobe nbd max_part=16, otherwise max_part defaults to, uhm, zero? really? and kpartx cannot create device mappings because there are not enough (as in, not even a single one) partition devices available. At this point, however, kpartx did create some mappings connected to, uhm, probably Ancient Beings from beyond spacetime, and because of those the device is in use and cannot be removed, and unmapping doesn't work either because the Ancient Beings from beyond spacetime are keeping the device busy by feeding on it. I energized the pentacle and tried a desperate ritual of banishment:
# # Reconnect nbd0 to the vdi file to Restore the Balance
# qemu-nbd --verbose -c /dev/nbd0 file.vdi
# # This works now
# kpartx -vd /dev/nbd0
del devmap : nbd0p5
del devmap : nbd0p2
del devmap : nbd0p1
# # This too, the Ancient Beings lie asleep yet again
# modprobe nbd -r
At this point I managed to get my file, almost:
# modprobe nbd max_part=16
# qemu-nbd --verbose -c /dev/nbd0 file.vdi
NBD device /dev/nbd0 is now connected to file.vdi
# kpartx -va /dev/nbd0
add map nbd0p1 (254:12): 0 60260352 linear 43:0 2048
add map nbd0p2 (254:13): 0 2 linear 43:0 60264446
add map nbd0p5 (254:14): 0 2648064 linear 43:0 60264448
# mount /dev/nbd0p1 /mnt
mount: /dev/nbd0p1 is already mounted or /mnt busy
# # WHAT NOW?!
# lsblk
NAME                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
 
nbd0                                        43:0    0    30G  0 disk
├─nbd0p1                                    43:1    0  28.8G  0 part
├─nbd0p2                                    43:2    0     1K  0 part
├─nbd0p5                                    43:5    0   1.3G  0 part
├─nbd0p1                                   254:12   0  28.8G  0 part
├─nbd0p2                                   254:13   0     1K  0 part
└─nbd0p5                                   254:14   0   1.3G  0 part
# # WHAAAT?!!
# kpartx -vd /dev/nbd0
del devmap : nbd0p5
del devmap : nbd0p2
del devmap : nbd0p1
# lsblk
NAME                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
 
nbd0                                        43:0    0    30G  0 disk
├─nbd0p1                                    43:1    0  28.8G  0 part
├─nbd0p2                                    43:2    0     1K  0 part
└─nbd0p5                                    43:5    0   1.3G  0 part
# mount /dev/nbd0p1 /mnt
# # I got my file, my preciouss file!
# umount /mnt
# kpartx -vd /dev/nbd0
# qemu-nbd -d /dev/nbd0
# rmmod nbd
# # sit in a corner hugging my precious file and sobbing quietly
As can be seen from the multiple exclamation marks, those Ancient Beings from beyond spacetime did manage to have a bite on my sanity after all.
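Distilled from the above (a summary sketch rather than a verbatim transcript; it assumes the same file.vdi and a free nbd0, and with max_part set kpartx turned out not to be needed at all for this image), the sequence that behaves itself is:
# modprobe nbd max_part=16
# qemu-nbd -c /dev/nbd0 file.vdi
# mount /dev/nbd0p1 /mnt
# (copy out whatever you need from /mnt)
# umount /mnt
# qemu-nbd -d /dev/nbd0
# rmmod nbd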

02 May 2017

Markus Koschany: My Free Software Activities in April 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.
Debian Games
  • I released the final version 2 of debian-games for Stretch, a collection of metapackages that make it easier to install one's favorite games. No surprises here, but two games were removed from Debian and some others didn't make it into Stretch, so an update of the debian/control file was necessary.
  • I could close three RC bugs in ufoai (#860680) and slashem (#860393), thanks to the analysis and help from Bernhard Übelacker, and one in tecnoballz (#861268).
  • I reviewed and sponsored gnome-games-app and retro-gtk for Jeremy Bicha. The Gnome application is basically a game browser and a unified interface to access various games, engines and emulators.
  • I adopted torcs, a racing car simulator, updated the package to the latest upstream release and completely overhauled the packaging. This is still work in progress but I think I can upload the new version in May.
Debian Java
  • I prepared security updates for bouncycastle, logback, activemq, tomcat 7 and tomcat 8 which were later accepted for Stretch and Jessie.
  • New upstream releases this month: hsqldb and robocode.
  • We had to deal with a weird bug in libnb-platform18-java, or rather in libjna-jni (#858876). We are still not sure why the JNA system library was not found by Netbeans but we could find a way to resolve it. I took the opportunity to fix another annoying bug in Netbeans and added the missing dependencies for some symlinked system JAR files.
  • Last but not least I made the test results in jnr-posix non-fatal (#860691) until we receive more information from upstream why two tests fail on i386. Since the package is arch:all and completes all tests on amd64 successfully, the RC bug was later marked as stretch ignore by the release team.
Debian LTS
This was my fourteenth month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:
  • From 10 April until 16 April I was in charge of our LTS frontdesk. I triaged security issues in wavpack, binutils, tiff, tiff3, imagemagick, wireshark, elfutils, libosip2, feh, freetype, web2py and libplist.
  • DLA-888-1. Issued a security update for logback fixing 1 CVE.
  • DLA-893-1. Issued a security update for bouncycastle fixing 1 CVE.
  • DLA-924-1. Issued a security update for tomcat7 fixing 2 CVE.
  • DLA-900-1. Issued a security update for freetype. Later it was determined that freetype in Wheezy was not affected and the change was reverted.
  • DLA-899-1. Issued a security update for feh fixing 1 CVE.
  • DLA-902-1. Issued a security update for imagemagick fixing 2 CVE.
  • DLA-911-1. Issued a security update for tiff fixing 11 CVE.
  • DLA-912-1. Issued a security update for tiff3 fixing 8 CVE.
  • DLA-913-1. Issued a security update for activemq fixing 1 CVE.
  • DLA-929-1. Issued a security update for libpodofo fixing 7 CVE.
Misc
  • I packaged a new major version of Osmo, a personal organizer and calendar application. It has been completely converted to GTK3 and WebKitGtk2 now.
Thanks for reading and see you next time.

01 May 2017

Thorsten Alteholz: My Debian Activities in April 2017

FTP assistant
This month I marked 72 packages for accept and sent one email to a maintainer asking questions. The number of rejections went down to 15. I would call that a good level again.
Debian LTS
This was my thirty-fourth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. As others reduced their workload for this month, my overall workload has been 23.75h. During that time I did uploads of
    [DLA 897-1] qbittorrent security update for two CVEs
    [DLA 898-1] libosip2 security update for four CVEs
    [DLA 901-1] radare2 security update for one CVE
    [DLA 914-1] minicom security update for one CVE
    [DLA 915-1] botan1.10 security update for one CVE
    [DLA 920-1] jasper security update for two CVEs
In addition I had one week of frontdesk duties. I also started to work on icu and bind9. The patches for icu applied fine, but the corresponding test did not work and stopped somewhere in the middle!? I am open to any suggestions why this could happen.
Other stuff
This has been a busy LTS month, so I only created node-tunein and adopted smstools as DOPOM.
