Search Results: "sesse"

11 March 2016

Steinar H. Gunderson: Agon and the Candidates tournament

The situation where Agon (the designated organizer of the Chess World Championship, and also the Candidates tournament, the prequalifier to said WC) is trying to claim exclusive rights to the broadcasting of the moves (not just the video) is turning bizarre. First of all, they have readily acknowledged they have no basis in copyright to do so; chess moves, once played, are facts and cannot be limited. They try to jump through some hoops with a New York-specific doctrine about "hot news" (even though the Candidates, unlike the World Championship, is played in Moscow), but their main weapon seems to be that they will simply throw out of the hall anyone who tries to report on the moves, and then try to give the moves only to those who promise not to pass them on. This leads to the previously unheard-of situation where you need to register and accept their terms just to get to watch the games in your browser. You have to wonder what they will be doing about the World Championship, which is broadcast unencrypted on Norwegian television (previous editions also with no geoblock).

Needless to say, this wasn't practically possible to hold together. All the big sites (like Chessdom, ChessBomb and Chess24) had coverage as if nothing had happened. Move sourcing is a bit of a murky business where nobody really wants to say where they get the moves from (although it's pretty clear that for many tournaments, the tournament organizers will simply come to one or more of the big players with a URL they can poll at will, containing the games in the standard PGN format), and this was no exception. ChessBomb took the unusual step of asking their viewers to download Tor and crowdsource the moves, while Chessdom and Chess24 appeared to do no such thing. In fact, unlike Chessdom and ChessBomb, Chess24 didn't seem to say a thing about the controversy, possibly because they now found themselves on the other side of the fence from Norway Chess 2015, where they themselves had exclusive rights to the PGN in a similar controversy, although it would seem from a tweet that they were perfectly okay with people just re-broadcasting from their site if they paid for a (quite expensive) premium membership, and didn't come up with any similar legal acrobatics to try to scare other sites. However, their ToS were less clear on the issue, and they didn't respond to requests for clarification at the time, so I guess all of this just continues to rest on some sort of gentleman's agreement among the bigger players. (ChessBomb also provides PGNs for premium members for the tournaments they serve, but they expressly prohibit rebroadcast. They claim that for the tournaments they host, which is a small minority, they provide free PGNs for all.)

Agon, predictably, sent out angry letters in which they threatened to sue the sites in question, although it's not at all clear to me what exactly they would sue for. Nobody seemed to care, except one entity: TWIC, which normally has live PGNs from most tournaments, announced they would not be broadcasting from the Candidates tournament. This isn't that unexpected, as TWIC (which is pretty much a one-man project anyway) is mainly about archival, where they publish weekly dumps of all top-level games played that week. This didn't affect a lot of sites, though, as TWIC's live PGNs are often not what you'd want to base a top-caliber site on (they usually lack clock information, and moves are often delayed by half a minute or so).
I run a hobby chess relay/analysis site myself (mainly focusing on the games of Magnus Carlsen), though, so I've used TWIC a fair bit in the past, and if I were to cover the Candidates tournament (I don't plan to do so, given Agon's behavior, although I plan to cover the World Championship itself), I might have been hit by this.

So, that was the background. The strange part started when worldchess.com, Agon's broadcasting site, promptly went down during the first round of the Candidates tournament today. Agon blamed a DDoS, which I'm sure is true, but it's unclear exactly how strong the DDoS was, and whether they did anything at all to deal with it other than simply waiting it out. But this led to the crazy situation where the self-declared monopolist was the only big player not broadcasting the tournament in some form. And now, in a truly bizarre move, World Chess is publishing a detailed rebuttal of Agon's arguments, explaining how this is bad for chess, not legally sound, and also morally wrong. Yes, you read that right; Agon's broadcast site is carrying an op-ed saying Agon is wrong. You at least have to give them credit for not trying to censor their columnist when he says something they don't agree with.

Oh, and if you want those PGNs? I will, at least for the time being, be pushing them out live on http://pgn.sesse.net/. I have not entered into any agreement with Agon, and they're hosted in Norway, far from any New York-specific doctrines. So feel free to relay from them, although I would of course be happy to know if you do.
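To make the "URL you can poll" setup concrete, here's a minimal Python sketch of how a relay site might consume such a live PGN feed. The exact filename under http://pgn.sesse.net/ is my own guess for illustration, not a documented endpoint, and a real relay would of course want rate limiting and proper error handling:

# Minimal polling loop for a live PGN feed; sketch only.
# The URL below is a hypothetical example, not a documented endpoint.
import time
import urllib.request

FEED_URL = "http://pgn.sesse.net/candidates2016.pgn"

def poll_pgn(url, interval=10):
    """Fetch the PGN file every `interval` seconds and yield it when it changes."""
    last = None
    while True:
        with urllib.request.urlopen(url) as resp:
            data = resp.read().decode("utf-8", errors="replace")
        if data != last:
            last = data
            yield data
        time.sleep(interval)

if __name__ == "__main__":
    for pgn in poll_pgn(FEED_URL):
        print("feed updated, now %d bytes" % len(pgn))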

2 March 2016

Steinar H. Gunderson: Nageru FOSDEM talk video

I got tired of waiting for the video of my FOSDEM talk about Nageru, my live video mixer, to come out, so I made an edit myself. (This wouldn't have been possible without getting access to the raw video from the team, of course.) Of course, with maximally bad timing, the video team published their own (auto-)edit of mine and a lot of other videos on the very same day (and also updated their counts to a whopping 151 fully or partially lost videos out of a bit over 500!), so now there are two competing ones. Of course, since I have only one and not 500 videos to care about, I could afford to give it a bit more love; in particular, I spliced in digital versions of the original slides where appropriate, adjusted the audio levels a bit where there are audience questions, added manual transcriptions and so on, so I think it ended up quite a bit better. Nageru itself is also coming along pretty nicely, with a 1.1.0 release that supports DeckLink PCI cards (not just the USB ones) and also the NVIDIA proprietary driver, for much increased oomph. It's naturally a bit quieter now than it was, though; conferences always tend to generate a flurry of interest, and then you get back to other interests afterwards. You can find the talk on YouTube; I'll also be happy to provide a full-quality master of my edit if FOSDEM or anyone else wants it. Enjoy :-)

25 February 2016

Steinar H. Gunderson: Frankenmachine

My desktop machine, from the back.

My video testing machine has now seemingly accumulated Intel, NVIDIA and AMD graphics cards. Fascinatingly enough, Debian actually has no problem having all three vendors' graphics drivers installed at once. I can't run more than one at the same time, though; somehow X servers are still bound to this concept of vtys (so you can only run one), and the NVIDIA/AMD drivers crash if you try to run them at the same time. You could almost certainly dedicate each card to a VM (PCI-Express passthrough) and run it that way, but just being able to switch is fine. Now, about that fan noise…

23 February 2016

Steinar H. Gunderson: Multithreaded OpenGL driver quality

Multithreaded OpenGL is tricky, both for application programmers and drivers. Based on some recent experience with developing an application that would like to run on all three major desktop GPU vendors, allow me to present my survey with sample size 1. I'll let you draw your own conclusions: Curiously, somewhat similar to an Intel/Mesa bug I reported back in July, which has had no response yet. Update: The NVIDIA driver is now up to exposing three bugs in my code. One of them even came with a textual error message when running with a debug context (in apitrace).

31 January 2016

Steinar H. Gunderson: Back from FOSDEM

Back safely from FOSDEM; just wanted to write down a few things while it's still fresh.

FOSDEM continues to be huge. There are just so many people, and it overflows everywhere into ULB; even the hallways during the talks are packed! I don't have a good solution for this, but I wish I did. Perhaps some rooms could be used as overflow rooms, i.e., with a video link/stream to them, so that more people can get to watch the talks in the most popular rooms.

The talks were of variable quality. I went to some that were great and some that were less than great, and it's really hard to know beforehand from the title/abstract alone; FOSDEM is really a place that goes for breadth. But the main attraction keeps being bumping into people in the hallways; I met a lot of people I knew (and some that I didn't know), which was the main thing for me.

My own talk about Nageru, my live video mixer, went reasonably well; the room wasn't packed (about 75% full) and the live demo had to be run with only one camera (partly because the SDI camera I was supposed to borrow couldn't get to the conference due to unfortunate circumstances, and partly because I had left a command in the demo script to run with only one anyway), but I got a lot of good questions from the audience. The room was rather crummy, though; with no audio amplification, it was really hard to hear in the back (at least in the talks I visited myself in the same room), and half of the projector screen was essentially unreadable due to other people's heads being in the way. The slides (with speaker notes) are out on the home page, and there will be a recording as soon as FOSDEM publishes it. All in all, I'm happy I went; presenting for an unknown audience is always a thrill, especially with the schedule being so tight. Keeps you on your toes.

Lastly, I want to give a shout-out to the FOSDEM networking team (supported by Cisco, as I understand it). The wireless was near-spotless; I had an issue reaching the Internet for the first five minutes I was at the conference, and then there were ~30 seconds where my laptop chose (or was directed towards) a far-away AP; apart from that, it was super-responsive everywhere, including locations far from any auditorium. Doing this with 7000 heavy users is impressive. And NAT64 as the primary ESSID is bold =)

PS: Uber, can you please increase the surge pricing during FOSDEM next year? It's insane to have zero cars available for half an hour, and then only 1.6x surge at most.

29 January 2016

Steinar H. Gunderson: En route to FOSDEM

FOSDEM is almost here! And in an hour or so, I'm leaving for the airport. My talk tomorrow is about Nageru, my live video mixer. HDMI/SDI signals in, stream that doesn't look like crap out. Or, citing the abstract for the talk:
Nageru is an M/E (mixer/effects) video mixer capable of high-quality output on modest hardware. We'll go through the fundamental goals of the project, what we can learn from the outside world, performance challenges in mixing 720p60 video on an ultraportable laptop, and how all of this translates into a design and implementation that differs significantly from existing choices in the free software world.
Saturday 17:00, Open Media devroom (H.2214). Feel free to come and ask difficult questions. :-) (I've heard there's supposed to be a live stream, but there's zero public information on details yet. And while you can still ask difficult questions while watching the stream, it's unlikely that I'll hear them.)

25 January 2016

Steinar H. Gunderson: Chess endgame tablebases

A very short post: This link contains an interesting exposition of the 50-move rule in chess, and what it means for various endings. You can probably stop halfway, though; most of it is only of interest to people deeply into endgame theory. Personally, I think DTZ50, as used by the Syzygy tablebases, is the best tradeoff for computer chess. It always produces the correct result (never throws away a win as a draw, or a draw as a loss), but the actual mates are of suboptimal length and look very strange (e.g., it will happily give away most of its pieces and then play very tricky endgames to mate). Then again, if you ever want to swindle against a non-optimal opponent (i.e., try to make the position as hard as possible to play, to possibly convert e.g. a loss to a draw), you've opened up an entirely new can of worms. :-) Update: Eric P Smith has written a different explanation of the DTM50 metric that's probably easier to understand, although it contains less new information than the other one.
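If you want to play with these metrics yourself, the Syzygy tables are easy to probe from a script. A minimal sketch using the python-chess library (an assumption on my part; the post itself doesn't mention it), reporting both the WDL value and the DTZ counter for a position, given that you've downloaded the table files somewhere:

# Sketch of probing Syzygy tables with python-chess (assumed installed);
# ./syzygy/ is a hypothetical directory holding the downloaded table files.
import chess
import chess.syzygy

board = chess.Board("4k3/8/8/8/8/8/8/4K2Q w - - 0 1")  # KQ vs K, White to move

with chess.syzygy.open_tablebase("./syzygy") as tables:
    wdl = tables.probe_wdl(board)  # 2 = win, 0 = draw, -2 = loss (50-move aware)
    dtz = tables.probe_dtz(board)  # plies to the next zeroing move under DTZ50
    print("WDL:", wdl, "DTZ:", dtz)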

6 January 2016

Steinar H. Gunderson: IPv6 non-alternatives: DJB's article, 13 years later

With the world passing 10% IPv6 penetration over the weekend, we see the same old debates coming up again; people claiming IPv6 will never happen (despite several years now of exponential growth!), and that if they had only designed it differently, it would have been all over by now. In particular, people like to point to a 2002-03 article by D. J. Bernstein, complete with rants about how Google would never set up useless IPv6 addresses (and then they did just that in 2007; I was involved). It's difficult to understand exactly what the article proposes, since it's heavy on calling people idiots and light on actual implementation details (as opposed to when DJB has gotten involved in other fields; e.g., thanks to him we now have elliptic curve crypto that doesn't suck, even if the reference implementation was sort of a pain to build), but I will try to go through it nevertheless and show how I cannot find any way it would work well in practice.

One thing first, though: Sorry, guys, the ship has sailed. Whatever genius solution DJB may have thought up that I'm missing, and whatever IPv6's shortcomings (they're certainly there), IPv6 is what we have. By now, you cannot expect anything else to arise and take over the momentum; we will either live with IPv6 or die with IPv4.

So, let's see what DJB says. As far as I can see, his primary call is for a version of IPv6 where the address space is an extension of the IPv4 space. For the sake of discussion, let's call that IPv4+, although it would share a number of properties with IPv6. In particular, his proposal requires changing the OS and other software on every single end host out there, just as IPv6 does; he readily admits that and outlines how it's done in rough terms (change all structs, change all configuration files, change all databases, change all OS APIs, etc.). From what I can see, he also readily admits that IPv4 and IPv4+ hosts cannot talk to each other, or more clearly, that we cannot start using the extended address space before almost everybody has IPv4+-capable software. (E.g., quote: "Once these software upgrades have been done on practically every Internet computer, we'll have reached the magic moment: people can start relying on public IPv6 addresses as replacements for public IPv4 addresses.")

So, exactly how does the IPv4 address space fit into the IPv4+ address space? The article doesn't really say anything about this, but I can imagine only two strategies: build the IPv4+ space around the IPv4 space (i.e., the IPv4 space occupies a little corner of the IPv4+ space, similar to how v4-mapped addresses are used within software but not on the wire today, to let applications do unified treatment of IPv4 addresses as a sort of special IPv6 address), or build it as a hierarchical extension.

Let's look at the former first; one IPv4 address gives you one IPv4+ address. Somehow this seems to give you all the disadvantages of IPv4 and all the disadvantages of IPv6. The ISP is not supposed to give you any more IPv4+ addresses (or at least DJB doesn't want to have to contact his ISP about more, and says that automatic address distribution does not change his argument), so if you have one, you're stuck with one. So you still need NAT. (DJB talks about "proxies", but I guess that the way things evolved, this either actually means NAT, or it refers to the practice of using application-level proxies such as Squid or SOCKS proxies to reach the Internet, which really isn't commonplace anymore, so I'll assume for the sake of discussion it means NAT.)
However, we already do NAT. The IPv4 crunch happened despite ubiquitous NAT everywhere; we're actually pretty empty. So we will need to hand out IPv4+ addresses at the very least to new deployments, and also probably reconfigure every site that wants to expand and is out of IPv4 addresses. ("Site" here could mean any organizational unit, such as if your neighborhood gets too many new subscribers for your ISP's local addressing scheme to have enough addresses for you.)

A much more difficult problem is that we now need to route these addresses on the wire. Ironically, the least clear part of DJB's plan is step 1, saying "we will extend the format of IP packets to allow 16-byte addresses"; how exactly will this happen? For this scheme, I can only assume some sort of IPv4 option that says "the stuff in the dstaddr field is just the start and doesn't make sense as an IPv4 address on its own; here are the remaining 12 bytes to complete the IPv4+ address". But now your routers need to understand that format, so you cannot get away with only upgrading the end hosts; you also need to upgrade every single router out there. (Note that many of these do routing in hardware, so you can't just upgrade the software and call it a day.) And until that's done, you're exactly in the same situation as with IPv4/IPv6 today; it's incompatible.

I do believe this option is what DJB talks about. However, I fail to see exactly how it is much better than the IPv6 we got ourselves into; you still need to upgrade all software on the planet and all routers on the planet. The benefit is supposedly that a company or user that doesn't care can just keep doing nothing, but they do need to care, since they need to upgrade 100% of their stuff to understand IPv4+ before we can even start deploying it alongside IPv4 (in contrast with IPv6, where we now have lots of experience in running production networks). The single benefit is that they won't have to renumber until they need to grow, at which point they would have to anyway.

However, let me also discuss the other possible interpretation, namely that of the IPv4+ address space being an extension of IPv4, i.e., if you have 1.2.3.4 in IPv4, you have 1.2.3.4.x.x.x.x or similar in IPv4+. (DJB's article mentions 128-bit addresses and not 64-bit, though; we'll get to that in a moment.) People keep bringing this up, too; it's occasionally been called BangIP (probably jokingly, as in this April Fool's joke) due to the similarity with how explicit mail routing worked before SMTP became commonplace. I'll use that name, even though others have been proposed.

The main advantage of BangIP is that you can keep your Internet core routing infrastructure. One way or the other, the core routers will keep seeing IPv4 addresses and IPv4 packets; you need no new peering arrangements, etc. The exact details are unclear, though; I've seen people suggest GRE tunneling, ignoring the problems it has going through NAT, and I've seen suggestions of IPv4 options for source/destination addresses, ignoring that something as innocuous as setting the ECN bits has been known to break middleboxes left and right. But let's assume you can pull that off, because your middlebox will almost certainly need to be the point that decapsulates BangIP anyway and converts it to IPv4 on the inside, presumably with a 10.0.0.0/8 address space so that your internal routing can keep using IPv4 without an IPv4+ forklift upgrade.
(Note that you now lose the supposed security benefit of NAT, by the way, although you could probably encrypt the address.) Of course, your hosts will still need to support IPv4+, and you will need some way of communicating that you are on the inside of the BangIP boundary. And you will need to know what the inside is, so that when you communicate on this side, you'll send IPv4 and not IPv4+. (For a home network with no routing, you could probably even just do IPv4+ on the inside, although I can imagine complications.)

But as I wrote above, experience has shown us that 32 extra bits isn't enough. One layer of NAT isn't doing it; we need two. You could imagine the inter-block routability of BangIP helping a fair bit here (e.g., a company with too many machines for 10.0.0.0/8 could probably easily get more external IPv4 addresses, each yielding another 10.0.0.0/8-sized block), but ultimately, the problem is that you chop the Internet into two distinct halves that work very differently. My ISP will probably want to use BangIP for itself, meaning I'm on the outside of the core; how many of those extra bits will they allocate for me? Any at all? Having multiple levels of bang sounds like pain; effectively we're creating a variable-length address. Does anyone ever want that? From experience, when we create protocols with variable-length addresses, people just tend to use the maximum length anyway, so why not design it with 128 bits to begin with? (The original IP protocol proposals actually had variable-length addresses, by the way.)

So we can create our 32/96 BangIP, where the first 32 bits are for the existing public Internet, and then every IPv4 address gives you 2^96 addresses to play with. (In a sense, it reminds me of 6to4, which never worked very well and is now thankfully dead.) However, this makes the inside/outside-core problem even worse. I now need two very different wire protocols coexisting on the Internet: IPv4+ (which looks like regular IPv4 to the core) for the core, and a sort of IPv4+-for-the-outside (similar to IPv6) outside it. If I build a company network, I need to make sure all of my routers are IPv4+-for-the-outside and talk that, while if I build the Internet core, I need to make sure all of my connections are IPv4, since I have no guarantee that I will be routable on the Internet otherwise.

Furthermore, I have a fixed prefix that I cannot really get out of, defined by my IPv4 address(es). This is called hierarchical routing, and the IPv6 world gave it up relatively early despite it sounding like a great idea at first, because it makes multihoming a complete pain: If I have an address 1.2.3.4 from ISP A and 5.6.7.8 from ISP B, which one do I use as the first 32 bits of my IPv4+ network if I want it routable on the public Internet? You could argue that the solution for me is to get an IPv4 PI netblock (supposedly a /24, since we're not changing the Internet core), but we're already out of those, which is why we started this thing to begin with. Furthermore, if the IPv4/IPv4+ boundary is above my immediate connection to the Internet (say, ISP A doesn't have an IPv4 address, just IPv4+), I'm pretty hosed; I cannot announce an IPv4 netblock in BGP. The fact that the Internet runs on largely the same protocol everywhere is a very nice thing; in contrast, what is described here really would be a mess!

So, well. I honestly don't think it's as easy as just doing an extension instead of an alternative when it comes to the address space.
We'll just need to deal with the pain and realize that upgrading the equipment and software is the larger part of the job anyway, and we'll need to do that no matter what solution we go with. Congrats on reaching 10%! Now get to work on the remaining 90%.
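As a small aside, both embeddings discussed above already have rough analogues in IPv6-land: v4-mapped ::ffff:a.b.c.d addresses for the corner-of-the-space variant, and the 6to4 2002::/16 scheme (the one I called thankfully dead) for the hierarchical 32/96 variant. A quick Python sketch, purely to illustrate the bit layouts and not anything from DJB's article:

# Illustration only: embedding an IPv4 address into a 128-bit space,
# either as a v4-mapped address or as a 6to4-style /48 prefix.
import ipaddress

v4 = ipaddress.IPv4Address("1.2.3.4")

# v4-mapped: the IPv4 address lives in a little corner of the IPv6 space.
mapped = ipaddress.IPv6Address("::ffff:" + str(v4))
print(mapped)  # same 32 bits, now viewed as an IPv6 address

# 6to4-style hierarchical embedding: the 32 IPv4 bits become bits 16..47 of a
# 2002::/16 prefix, handing the holder of one IPv4 address a whole /48.
prefix = ipaddress.IPv6Network(((0x2002 << 112) | (int(v4) << 80), 48))
print(prefix)  # 2002:102:304::/48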

28 December 2015

Steinar H. Gunderson: The difference between logs and no logs

A USB3 device of mine stopped working one day. In Windows (its native environment; Linux is not officially supported), there would be a "device plugged in" sound and then nothing. Nothing in the event log, no indication there was ever a device of any kind in Device Manager. In Linux, after 20 seconds or so, this would come up:
[   71.831659] usb usb2-port1: Cannot enable. Maybe the USB cable is bad?
I bought a new and shorter cable. The card started working again.

27 December 2015

Steinar H. Gunderson: Going to FOSDEM 2016

I've ordered my tickets and my hotel room, so it's clear; I'm going to FOSDEM 2016! I'll be giving a talk in the Open Media devroom (H.2214), Saturday 17:00, about my new project, Nageru. (Actually, it's sort of a launch as well, since the source code isn't out yet.) So, what is Nageru, you might ask? Well, it has to do with video. And it's made for 2016, not 1996, so it uses your GPU via Movit, also released at FOSDEM two years ago. For the rest, come see my talk :-)

17 December 2015

Steinar H. Gunderson: sRGB weirdness: Doing the right thing causes a worse result

A while back, I wrote about how you should always do image calculations in linear light, not gamma space; since then, Tom Forsythe has come out with a much better metaphor than I could cough up myself, namely that you should look at sRGB values as compressed values, not integers. So naturally, when I needed a deinterlacing filter for Movit, my GPU filter library, I wanted to do it in linear light. (In fact, all pixel processing in Movit is in linear light, except in the cases where it's 100% equivalent to do it on the gamma-encoded values and the conversion can be skipped for speed.)

After some deliberation, I made an implementation of Martin Weston's three-field deinterlacing filter, known in ffmpeg as w3fdif. I won't discuss deinterlacing in detail here since it's really hard, but I'll note that w3fdif works by applying two filters: low-frequency components are estimated from the current field, and high-frequency components from the previous and next fields. (This makes intuitive sense; you cannot get the HF information from the current field since you don't have the lines you need for that, but you can hope it hasn't changed too much.) Aha! A filter. Brilliant, that's exactly where linear light means the most, too.

But when implementing it, I found that it sometimes looked weird -- and ffmpeg's implementation (which works directly on the sRGB values, which we already established is wrong) didn't. After lots of tweaking back and forth, I decided to set up a synthetic test to settle this once and for all; I took a static test picture (eliminating everything related to video capture, codecs, frame rates, etc.) and compared to ffmpeg. Of course, deinterlacing is all about movement, but this would do to try to nail things down. So after lots of fruitless debugging, I tried a last-ditch experiment: What if I turned off the gamma conversions? This gave me a huge surprise; indeed, it looked better!

I'll provide some upscaled versions; left is the original image, middle is deinterlaced in sRGB space and right is deinterlaced in linear light. If that's not dramatic enough for you (trust me, you'll notice it when it's animated as you flicker through the two different fields), here's an even more high-contrast example (same ordering).

I guess it's obvious in retrospect what happens; the HF filter picks up residue from its outer edges, and even if the coefficient is just 0.031 (well, times two; it adds that value from both the previous and next field), 3% of the photons of a fully lit pixel (which is what you get when working in linear light) is actually quite a bit, whereas a 3% gray is only pixel value 8 or so, which is barely visible.

So what am I to make of this? I'm honestly not sure. Maybe it's somehow related to the fact that these filter values were chosen in 1988, when they were relatively unlikely to be doing this in linear light (although if they did it with analog circuitry, perhaps they could?), and it was tweaked to look good despite doing the wrong thing. Or maybe I need to change my approach here entirely. It always sucks when your fundamental assumptions are challenged, but I think it shows once again that if you notice something funny in your output, you really ought to investigate, because you never know how deep the rabbit hole goes. :-/
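If you want to check the 3% arithmetic yourself, the standard sRGB transfer function is enough; this little Python sketch (the textbook formulas, not Movit code) reproduces the numbers in the paragraph above:

# Standard sRGB transfer functions (textbook formulas, not Movit's code).
def srgb_encode(linear):
    """Linear light in [0,1] -> gamma-encoded sRGB value in [0,1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(srgb):
    """Gamma-encoded sRGB value in [0,1] -> linear light in [0,1]."""
    if srgb <= 0.04045:
        return srgb / 12.92
    return ((srgb + 0.055) / 1.055) ** 2.4

# 3% of the photons of a fully lit pixel, shown as an 8-bit sRGB value:
print(round(srgb_encode(0.03) * 255))  # about 48: clearly visible gray

# 3% taken directly of the gamma-encoded value instead:
print(round(0.03 * 255))               # about 8: barely visible
print(srgb_decode(8 / 255))            # about 0.0024 in linear light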

10 November 2015

Steinar H. Gunderson: HTTPS-enabling gitweb

If you have an HTTPS-enabling proxy in front of your gitweb, so that gitweb tries to emit <base href="http://..."> (because it doesn't know that the user is actually using HTTPS), here's the Apache configuration variable to tell it otherwise:
SetEnv HTTPS ON
So now git.sesse.net works with HTTPS via Let's Encrypt, without the CSS being broken. Woo. (Well, the clone URL still says http. So, halfway there, at least.)

Steinar H. Gunderson: Launch

We launched Offline Maps!

7 November 2015

Stig Sandbeck Mathisen: Let's Encrypt with Hitch and Varnish

Let's Encrypt is now in beta. Here's how to use the automated CA with Hitch TLS proxy, and Varnish HTTP accelerator as a backend for Hitch.
Let's Encrypt

Let's Encrypt is a free, automated and open Certificate Authority (CA), run for the public's benefit.

Varnish

Varnish is an HTTP accelerator. I use it as a web server, and it serves content from various backends. Varnish is used on a lot of high-traffic web servers, and supports simple and complex web site configurations.

Hitch

Hitch is a TLS proxy, by Varnish Software. It terminates TLS connections, and forwards them to a backend, unencrypted. The TLS proxy is simple to configure, and handles many thousands of requests per second on commodity hardware. Hitch was originally called stud and was written by Jamie Turner at Bump.com.

Varnish plugin for Let's Encrypt

Let's Encrypt Varnish Plugin is needed for the Let's Encrypt client to authenticate against their service, for getting the certificates. You can skip this if you have the option of temporarily disabling your web service, when registering and renewing your certificates.

Hitch TLS proxy

Hitch must be configured to listen on the https port (TCP/443), with a list of strong ciphers. To get Forward Secrecy, we need ciphers with EECDH or EDH. To use these, we need to generate a set of DH Parameters. In /etc/hitch/hitch.conf:
      
# Listening
frontend = "[*]:443"
ciphers  = "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
      
    
Varnish as backend for Hitch

Hitch will forward traffic to Varnish on localhost:6086, using the PROXY protocol. Here's a snippet for /etc/hitch/hitch.conf to do this:
      
# Send traffic to the varnish backend
backend        = "[::1]:6086"
write-proxy-v2 = on
      
    
For varnish, I've added a separate port for the "PROXY" protocol, by adding an extra "-a" argument for port 6086. The -a :80 handles the existing http traffic, while the -a '[::1]:6086,PROXY' adds a listener on localhost, which will be used by hitch.
      
/usr/sbin/varnishd -j unix,user=vcache -a :80 -a '[::1]:6086,PROXY' ...
      
    
Let's Encrypt

install letsencrypt

First, install the letsencrypt client. The beta program sends you a mail with instructions. When it is generally available, I suspect it'll be something like:
      
apt install letsencrypt
      
    
install varnish plugin

Then, get the Let's Encrypt Varnish plugin. The plugin will extend the letsencrypt software to be able to rewrite the VCL and reload Varnish to satisfy the authentication challenge of the Let's Encrypt server.
      
cd /usr/local/src
git clone http://git.sesse.net/letsencrypt-varnish-plugin
. ~/.local/share/letsencrypt/bin/activate
pip install -e /usr/local/src/letsencrypt-varnish-plugin
      
    
get certificate

Get certificates. Let's Encrypt is still in beta, so I had to add a few more command line arguments than the documentation specifies:
      
~/.local/share/letsencrypt/bin/letsencrypt --agree-dev-preview \
    --server https://acme-v01.api.letsencrypt.org/directory \
    -a letsencrypt-varnish-plugin:varnish -d example.org certonly
      
    
Adding certificates to Hitch

Hitch requires one PEM file per domain we serve. Each .pem file contains the private key, the signed certificate and any required intermediates. Create a file with DH parameters:
      
openssl dhparam -out /etc/hitch/dhparam.pem 2048
      
    
This file may take a long while to generate. Combine the key, the signed certificate, and the dhparam file into something hitch can use:
      
cat \
    /etc/letsencrypt/live/example.org/privkey.pem \
    /etc/letsencrypt/live/example.org/fullchain.pem \
    /etc/hitch/dhparam.pem \
    > /etc/hitch/example.org.pem
chmod 0600 /etc/hitch/example.org.pem
      
    
and configure hitch to use the file, by adding it to /etc/hitch/hitch.conf:
      
# List of PEM files, each with key, certificates and dhparams
pem-file = "/etc/hitch/example.org.pem"
pem-file = "/etc/hitch/www.example.org.pem"
pem-file = "/etc/hitch/example.com.pem"
pem-file = "/etc/hitch/www.example.com.pem"
      
    

5 November 2015

Steinar H. Gunderson: Let's Encrypt Varnish plugin

I made a Varnish authentication plugin for the Let's Encrypt client. I dislike the huge amount of magic and layers of, well, stuff in the client, but the project is hugely important for the web, and I doubt there will be another ACME client anytime soon, so I can just as well get on the bandwagon. :-) It's really ugly. But it works for me.

1 November 2015

Steinar H. Gunderson: YUV color primaries

Attention: If these two videos don't both look identical (save for rounding errors) to each other and to this slide, your player has a broken understanding of YUV color primaries, and will render lots of perfectly normal video subtly off in color, one way or the other. They are remuxed in MP4 instead of MPEG-TS here, for easier testing in browsers etc.: first, second. Chrome passes with perfect marks; Iceweasel segfaults on both (GStreamer's quality, or lack thereof, continues to amaze me). MPlayer and VLC both get one of them wrong (although VLC gets it more right if you use its screenshot function to save a PNG to disk, so check what's actually on the screen); ffmpeg with PNG output gets it right, but ffplay doesn't. Edit to add: The point is the stable picture, not the flickering in the first few frames, of course. The video was encoded quite hastily.
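The usual culprit here is which set of luma coefficients the player assumes when converting YCbCr to RGB, Rec. 601 or Rec. 709. As a rough sketch of why that shifts all the colors (this uses the textbook full-range conversion; the test videos may well use limited range and other parameters not shown here):

# Sketch: YCbCr -> RGB depends on the luma coefficients (Kr, Kb) defined by
# each standard, so assuming the wrong standard shifts all the colors.
# Textbook full-range conversion; real video is usually limited range.
def ycbcr_to_rgb(y, cb, cr, kr, kb):
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * cr
    b = y + 2.0 * (1.0 - kb) * cb
    g = y - (2.0 * kr * (1.0 - kr) / kg) * cr - (2.0 * kb * (1.0 - kb) / kg) * cb
    return r, g, b

sample = (0.5, 0.1, 0.1)  # an arbitrary YCbCr value
print(ycbcr_to_rgb(*sample, kr=0.299, kb=0.114))    # interpreted as Rec. 601
print(ycbcr_to_rgb(*sample, kr=0.2126, kb=0.0722))  # interpreted as Rec. 709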

19 September 2015

Russ Allbery: Review: Magic's Pawn

Review: Magic's Pawn, by Mercedes Lackey
Series: Last Herald Mage #1
Publisher: DAW
Copyright: June 1989
ISBN: 0-88677-352-0
Format: Mass market
Pages: 349
Continuing my (very slow) re-read of Mercedes Lackey's Valdemar series. I've read all these books before (this series several times, in fact), but long before I started writing reviews. Vanyel is a legendary hero by the time of Talia (the hero of the first Valdemar series, starting with Arrows of the Queen). Talia was reading stories about him at the start of her story: the mage Herald who defended Valdemar against its enemies in the distant past. This is the first book of the trilogy that tells his story, starting (as Lackey's books often do) from rather inauspicious beginnings. Vanyel is the son of a border lord, and about as poorly suited for it as possible. He's small, pretty, entirely uninterested in the hammer-and-tongs sword fighting the arms master wants to teach him, and also gay, not that he has any idea that's even something that exists and has a name. His only ally was his sister, who is now off serving in Valdemar's military. Most of his life is spent hiding, feeling utterly lost and out of place, and wishing he could be a Bard. This is a Valdemar story, so if you're familiar with the series, you know what's coming next. What might come as a surprise is that "next" is quite some time into the story. Vanyel does not get rescued from his situation by a Companion. Instead, his father decides to exile him to the capital under the care of his aunt, who he's only met once and who didn't think highly of him. He expects to be even more miserable, and shuts emotionally down in anticipation. This is one of those books that I remembered as being better than it actually was, and one of the reasons why I enjoyed it less than I expected is that far too much of this book is devoted to describing Vanyel's mental state. This usually involves various elaborate emo analogies (which can be a failing of this series in general), and it's quite hard to maintain sympathy for Vanyel. Lackey gets us to feel for him at the start of the book, when he's really in a bad situation. But once he gets to the capital, he's his own worst enemy, and it's hard to avoid the desire to shake him. I also didn't remember just how much I disliked Tylendel. I'm sure it will come as a surprise to no one that Vanyel eventually meets someone who draws him out of his shell and gives him a reason to want to live. Unfortunately, that person, despite a positive surface impression, is self-obsessed, unstable, and not above manipulating Vanyel into actions that are so obviously catastrophic that it makes one want to yell at the book. I disliked this part of the book so much that even a Companion's choosing was overshadowed. The book does get a bit better after that truly awful middle, but it never hits the emotional stride that other Valdemar books hit. Lackey does introduce the Tayledras, which will be hugely important in later books in this series (and who are some of my favorite characters), but there too I prefer their later appearances. Vanyel's inability to see a good thing when it hits him in the face, punctuated by being occasionally cruel to the people who try to help him, makes it quite hard to enjoy his slow path to becoming a better person. I remember really liking this trilogy, so I think it gets better. But I also remembered liking Vanyel's claiming, and it didn't do much for me on a re-read. I can only recommend this one as a necessary preface to the later books in the trilogy, and expect to need a high tolerance for constant woe-is-me despair. Followed by Magic's Promise. Rating: 5 out of 10

17 September 2015

Julien Danjou: My interview in le Journal du Hacker

A few days ago, the French equivalent of Hacker News, called "Le Journal du Hacker", interviewed me about my work on OpenStack, my job at Red Hat and my self-published book The Hacker's Guide to Python. I've spent some time translating it into English so you can read it if you don't understand French! I hope you'll enjoy it.
Hi Julien, and thanks for participating in this interview for the Journal du Hacker. For our readers who don't know you, can you introduce yourself briefly?
You're welcome! My name is Julien, I'm 31 years old, and I live in Paris. I have now been developing free software for around fifteen years. I have had the pleasure of working (among other things) on Debian, Emacs and awesome these last years, and more recently on OpenStack. For a few months now, I have been working at Red Hat as a Principal Software Engineer on OpenStack. I am in charge of doing upstream development for that cloud-computing platform, mainly around the Ceilometer, Aodh and Gnocchi projects.
Being a system architect myself, I have been following your work on OpenStack for a while. It's uncommon to have the point of view of someone as involved as you are. Can you give us a summary of the state of the project, and then detail your activities in this project?
The OpenStack project has grown and changed a lot since I started 4 years ago. It started as a few projects providing the basics, like Nova (compute), Swift (object storage), Cinder (volume), Keystone (identity) or even Neutron (network), which are the basis of a cloud-computing platform, and eventually became composed of a lot more projects. For a while, the inclusion of projects was subject to a strict review by the technical committee. But for a few months now, the rules have been relaxed, and we see a lot more projects connected to cloud computing joining us.

As far as I'm concerned, I started the Ceilometer project in 2012 with a few other people, devoted to handling the metrics of OpenStack platforms. Our goal is to be able to collect all the metrics and record them for later analysis. We also have a module providing the ability to trigger actions on threshold crossings (alarms). The project grew in a monolithic way, and linearly in its number of contributors, during the first two years. I was the PTL (Project Technical Leader) for a year. This leader position demands a lot of time for bureaucratic things and people management, so I decided to give up my spot in order to be able to spend more time solving the technical challenges that Ceilometer offered.

I started the Gnocchi project in 2014. The first stable version (1.0.0) was released a few months ago. It's a timeseries database offering a REST API and a strong ability to scale. It was a necessary development to solve the problems tied to the large number of metrics created by a cloud-computing platform, where tens of thousands of virtual machines have to be metered as often as possible. This project works as a standalone deployment or with the rest of OpenStack.

More recently, I started Aodh, the result of moving out the code and features of Ceilometer related to threshold action triggering (alarming). That's the logical continuation of what we started with Gnocchi. It means Ceilometer is being split into independent modules that can work together, with or without OpenStack. It seems to me that the features provided by Ceilometer, Aodh and Gnocchi can also be interesting for operators running more classical infrastructures. That's why I've pushed the projects in that direction, and also towards a more service-oriented architecture (SOA).
I'd like to stop for a moment on Ceilometer. I think this solution was much awaited, especially by the cloud-computing providers using OpenStack for billing the resources sold to their customers. I remember reading a blog post where you were talking about the rushed construction of this component, and about features that were not supposed to be there. Nowadays, with Gnocchi and Aodh, what is the quality of the Ceilometer component and of the software it relies on?
Indeed, one of the first use cases for Ceilometer was tied to the ability to get metrics to feed a billing tool. That goal has now been reached, since we have billing tools for OpenStack using Ceilometer, such as CloudKitty. However, other use cases appeared rapidly, such as the ability to trigger alarms. This feature was necessary, for example, to implement the auto-scaling that Heat needed. At the time, for technical and political reasons, it was not possible to implement this feature in a new project, and the functionality ended up in Ceilometer, since it was using the metrics collected and stored by Ceilometer itself. Though, like I said, this feature is now in its own project, Aodh.

The alarm feature has been used in production for a few cycles now, and the Aodh project brings new features to the table. It can trigger threshold actions and is one of the few solutions able to work at high scale, with several thousands of alarms. It's impossible to make Nagios run with millions of instances to fetch metrics and trigger alarms. Ceilometer and Aodh can do that easily on a few tens of nodes, automatically.

On the other side, Ceilometer has long been painted as slow and complicated to use, because its metrics storage system used MongoDB by default. Clearly, the data model picked was not optimal for what the users were doing with the data. That's why I started Gnocchi last year, which is designed precisely for this use case. It allows constant access time to metrics (O(1) complexity) and fast access to the resource data via an index.

Today, with three projects, each with its own well-defined feature set and able to work together (Ceilometer, Aodh and Gnocchi), we have finally erased the biggest problems and defects of the initial project.
To end with OpenStack, one last question. You've been a Python developer for a long time and you're a fervent user of software testing and test-driven development. Several of your blog posts point out how important these are. Can you tell us more about the usage of tests in OpenStack, and about the test prerequisites for contributing to OpenStack?
I don't know any project that is as tested on every layer as OpenStack is. At the start of the project, test coverage was vague, made of a few unit tests. For each release, a bunch of new features were provided, and you had to keep your fingers crossed that they worked. That's already almost unacceptable. But the bigger issue was that there were also a lot of regressions, and things that had been working stopped working. It was often corner cases that developers forgot about that broke.

Then the project decided to change its policy and started to refuse all patches, whether new features or bug fixes, that did not come with a minimal set of unit tests proving the patch worked. Quickly, regressions became history, and the number of bugs was greatly reduced month after month. Then came the functional tests, with the Tempest project, which runs a test battery on a complete OpenStack deployment.

OpenStack now possesses a complete test infrastructure, with operators hired full-time to maintain it. The developers have to write the tests, and the operators maintain an architecture based on Gerrit, Zuul and Jenkins, which runs the test battery of each project for each patch sent. Indeed, for each version of a patch sent, a full OpenStack is deployed into a virtual machine, and a battery of thousands of unit and functional tests is run to check that no regressions are introduced.

To contribute to OpenStack, you need to know how to write a unit test; the policy on functional tests is laxer. The tools used are standard Python tools: unittest for the framework and tox to set up a virtual environment (venv) and run them. It's also possible to use DevStack to deploy an OpenStack platform on a virtual machine and run the functional tests. However, since the project infrastructure also does that when a patch is submitted, it's not mandatory to do that yourself locally.
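To give a concrete flavor of the kind of unit test described here, a minimal example in the standard unittest style might look like the sketch below; the function under test is made up purely for illustration, not taken from any actual OpenStack project, and tox would typically run such tests inside a per-environment venv.

# Illustrative only: a made-up function and its unit test in unittest style.
import unittest

def scale_sample(value, unit):
    """Normalize a metric sample to base units (toy function under test)."""
    factors = {"ns": 1e-9, "ms": 1e-3, "s": 1.0}
    return value * factors[unit]

class TestScaleSample(unittest.TestCase):
    def test_milliseconds(self):
        self.assertAlmostEqual(scale_sample(250, "ms"), 0.25)

    def test_unknown_unit_raises(self):
        with self.assertRaises(KeyError):
            scale_sample(1, "fortnights")

if __name__ == "__main__":
    unittest.main()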
The tools and tests you write for OpenStack are written in Python, a language which is very popular today. You seem to like it more than you have to, since you wrote a book about it, The Hacker's Guide to Python, that I really enjoyed. Can you explain what brought you to Python, the main strong points you attribute to this language (quickly) and how you went from developer to author?
I stumbled upon Python by chance, around 2005. I don't remember how I heard about it, but I bought a first book to discover it and started toying with the language. At that time, I didn't find any project to contribute to or to start. My first project in Python was rebuildd for Debian in 2007, a bit later. I like Python for its simplicity, its rather clean object orientation, how easy it is to deploy, and its rich open source ecosystem. Once you get the basics, it's very easy to evolve and to use it for anything, because the ecosystem makes it easy to find libraries to solve any kind of problem.

I became an author by chance, writing blog posts from time to time about Python. I finally realized that after a few years of studying Python internals (CPython), I had learned a lot of things. While writing a post about the differences between method types in Python, which is still one of the most-read posts on my blog, I realized that a lot of things that seemed obvious to me were not obvious to other developers. I wrote that initial post after thousands of hours spent doing code reviews on OpenStack. I therefore decided to note down all the developers' pain points and to write a book about them: a compilation of what years of experience taught me, and taught the other developers I decided to interview in the book.
I've been very interested in the publication of your book, for the subject itself, but also for the process you chose. You self-published the book, which seems very relevant nowadays. Was that the plan from the start? Did you look for a publisher? Can you tell us more about that?
I was lucky to find out about other self-published authors, such as Nathan Barry, who even wrote a book on that subject, called Authority. That's what convinced me it was possible and gave me hints for the project. I started to write in August 2013, and I ran the first interviews with other developers at that time. I started by writing the table of contents and then filled the pages with what I knew and what I wanted to share. I managed to finish the book around January 2014. The proof-reading took more time than I expected, so the book was only released in March 2014. I wrote a complete report on that on my blog, where I explain the full process in detail, from writing to launching.

I did not look for publishers, though some approached me. The idea of self-publishing really convinced me, so I decided to go on my own, and I have no regrets. It's true that you have to wear two hats at the same time and handle a lot more things, but with a minimal audience and some help from the Internet, anything's possible! I've been contacted by two publishers since then, a Chinese one and a Korean one. I gave them the rights to translate and publish the book in their countries, so you can buy the Chinese and Korean versions of the first edition out there. Seeing how successful it was, I decided to launch a second edition in May 2015, and it's likely that a third edition will be released in 2016.
Nowadays, you work for Red Hat, a company that represents the success of using Free Software as a commercial business model. This company fascinates a lot of people in our community. What can you say about your employer from your point of view?
It has only been a year since I joined Red Hat (when they bought eNovance), so my experience is quite recent. Still, Red Hat is really a special company on every level. It's hard to see from the outside how open it is, and how it works. It's really close to, and really looks like, an open source project. For more details, you should read The Open Organization, a book written by Jim Whitehurst (CEO of Red Hat), which he just published. It describes perfectly how Red Hat works. To summarize, meritocracy and the lack of organization into silos are what make Red Hat a strong organization and one of the most innovative companies. In the end, I'm lucky enough to be autonomous in the projects I work on with my team around OpenStack, and I can spend 100% of my time working upstream and enhancing the Python ecosystem.

15 September 2015

Russ Allbery: Review: Let's Pretend This Never Happened

Review: Let's Pretend This Never Happened, by Jenny Lawson
Publisher: Berkley
Copyright: 2012, 2013
Printing: March 2013
ISBN: 0-425-26101-8
Format: Trade paperback
Pages: 366
Let's Pretend This Never Happened, subtitled (A Mostly True Memoir), is the closest that I've ever found to the book form of a stand-up comedy routine. Lawson grew up in rural Texas with a taxidermist father, frequent contact with animals in various forms of distress, an undiagnosed anxiety disorder, and a talent for creatively freaking out about things that never occur to anyone else. But, more importantly, she has a talent for putting down on paper the random thoughts that go through her head so the rest of us can read them. Not to mention excellent comic timing and absolute mastery of the strangely relevant digression. It's always tricky to review comedy. I think tastes differ more wildly in this genre than any other. Things some people find hilarious others will find offensive or just boring. That may be particularly true of Lawson, who, similar to some of the best stand-up comics, specializes in taking off filters and saying all sorts of offensive things that people might think but not say. This kind of comedy is a knife's edge, since it can easily turn into punching down. Lawson avoids this (rather well, in my opinion) by making herself the punch line of most of the jokes. A pretty typical paragraph of the book, so that you know the sort of thing that you're in for:
The following is a series of actual events pulled from my journal that led me to believe that our home was possessed by demons and/or built over an Indian burial ground. (Also, please note that the first part of this chapter actually happens just before the previous chapter, and the last part of it happens just after it. This could be viewed as "clunky and awkward," but I prefer to think of it as "intellectually challenging and chronologically surreal. Like if Memento was a book. About dead dogs and vaginas and puppets made out of squirrel corpses." You can feel free to use that quote if you're reviewing this chapter, or if you're a student and your teacher asks you, "What was the author trying to say here?" That was it. That's what I was trying to say. That and "Use condoms if you're going to have sex, for God's sake. There are a lot of skanks out there." That's not really covered in this book, but it's still good advice.)
That has a little bit of everything this book had for me: Lawson's somewhat surreal worries, the extended digression, a rhythm that's quite compelling once you start reading it, random uncomfortable topics, and the occasional miss that I don't find funny (the last few sentences). It's all mixed together in a slightly breathless rush of narrative momentum. For more samples, Lawson's writing started as a blog and she's still actively blogging, so you can get a good advance sample by reading some of The Bloggess. Her tone there matches the book closely. What makes this book more than only comedy is that Lawson is very open about her struggles with mental illness (anxiety and depression). A lot of the humor comes from "this is the ridiculous nonsense that my brain throws out on a regular basis" and inviting you to laugh along with her, but the undertone is use of humor as a coping mechanism to deal with anxiety spirals. And alongside that coping mechanism is an open-hearted message of "you are not the only person to have completely irrational reactions to the world; please laugh along with mine and feel better about yours." Due to that, the best comparison I can make to another book I've read is to Allie Brosh's Hyperbole and a Half. Brosh is more serious in places, more analytical, and a bit better at generalizing to experiences the reader can identify with. (And, of course, more graphical.) Lawson is more madcap, a bit more manic, and focused on absurd situations that don't normally happen to people. I loved this book from beginning to end, and it had me laughing out loud in multiple places. Despite being a collection of disconnected stories, it has a rhythm and flow that kept me reading. Some books of this kind are best read in small segments with a break between, but I devoured Let's Pretend This Never Happened in large chunks (and had to be careful about reading it in public and laughing too loudly). Check first whether the sense of humor works for you, but if it does, highly recommended. Rating: 9 out of 10

13 September 2015

Russ Allbery: Review: My Real Children

Review: My Real Children, by Jo Walton
Publisher: Tor
Copyright: May 2014
ISBN: 0-7653-3265-5
Format: Hardcover
Pages: 317
Patricia, the one form of her name that she's never used, is what the staff in the nursing home call her. "VC" is the frequent annotation on her daily chart: very confused. She's old, and forgetful, and trying very hard not to be her mother, and she remembers two lives. Not only remembers but lives: the nursing home isn't the same from day to day. They keep misplacing the lift, and the staff keep changing. But the memories are the strongest. She has two lives, two sets of children, all of whom she recognizes. When someone from one life visits, the other feels like a dream, but both of them persist. Two lives that divided at the most significant decision of her life. My Real Children is one of those books that fits in the broader speculative fiction genre as alternate history, but also has many of the virtues of mimetic fiction. It's the story of one woman's life, told, except for the opening frame, chronologically from start to finish. At the start, it's simple biography, telling Patty's story from childhood, through World War II, until the moment of her fateful decision in 1949. From there the narrative splits, and she lives two lives. And the world splits as well: in one, Kennedy was assassinated by a bomb; in another, the Cuban Missile Crisis escalated into something much worse. Much is still unchanged, including the general lives of people Patty knew well in one life and not in another, but the worlds keep diverging from each other and from ours. The exact nature of the divergence is not, however, a significant part of the book. This is not the type of alternate history that is obsessed with the point of change, or even the implications for the world. Rather, this novel is about one woman, one decision, and two very different paths through life. Different friends, different challenges, different careers... different hopes and dreams, and different children. But always the same person, the same sense of practicality, the same ethics. It's beautifully told, striking just the right balance of showing important moments in detail and passing quickly, but smoothly, over some years. And, as with Among Others, Walton strikes the perfect emotional balance, leaving some reactions understated, letting the reader bring their own reactions to the book, and filling it with small bits of telling, factual detail rather than internal monologues. This is also one of the most heart-wrenching books I've ever read. My Real Children is a whole life, and not an easy one, complete with the heartaches and betrayals, the good and bad decisions, care for elderly relatives, dementia, tragedy, and even abuse. And moments of shining joy or profound satisfaction, but made all the more poignant because they're fleeting. The reader knows, in a sense, how the story ends, which casts a shadow over the rest of the book. But, even without that shadow, life can be hard, or cruel, alongside the wonder and joy. And it ends, and the ending is not neat and clean and uplifting, although one can choose how one meets it. This is the story of a life, all of it, and how choices can divide that life, and how they can cast a long shadow over how one later defines oneself. This book hurts. A lot. Enough that I want to warn you in advance I was in tears through the last third of the book. But it hurts because it's so very real and deep, not via a blow-by-blow description of daily actions, but by striding unafraid into the deep complications of choice and consequence. It's also a profoundly feminist book, in a quiet and descriptive way. 
It shared the Tiptree award for 2015 and fully deserved it. Both of the protagonist's lives are struggles against expectations, around gender, sexuality, and the role of women in society. Those struggles take very different forms, but they share some deep similarities. Walton shows the impact from the bottom up: the effects on specific lives, specific people, specific situations. It's brilliantly done. One can't help but think of The Female Man given the telling of parallel lives, but this is a case study, whereas Russ's novel was a battle cry. It gets under your skin instead of in your face. (Use of subtlety on this topic is easier now, forty years of occasional progress past when Russ wrote her classic.) This is a much harder book to read than Among Others. It's less forgiving, less reassuring, less willing to provide a happy ending. Lives rarely have clean, happy endings; one has to construct them, by choosing the material to build the ending on. But it's a brilliant, profound book, in that way that stories about people in all their complexity can be. Be cautious about your mood when you read this, particularly if you've had the experience of caring for a dying relative, but it's a book I will remember for a very long time. Rating: 9 out of 10
