Search Results: "sesse"

16 October 2016

Steinar H. Gunderson: backup.sh opensourced

It's been said that backup is a bit like flossing; everybody knows you should do it, but nobody does it. If you want to start flossing, an immediate question is what kind of dental floss to get and conversely, for backup, which backup software do you want to rely on? I had some criteria: I looked at basically everything that existed in Debian and then some, and all of them failed. But Samfundet had its own script that's basically just a simple wrapper around tar and ssh, which has worked for 15+ years without a hitch (including several restores), so why not use it? All the authors agreed to a GPLv2+ licensing, so now it's time for backup.sh to meet the world. It does about the simplest thing you can imagine: ssh to the server and use GNU tar to tar down every filesystem that has the dump bit set in fstab. Every 30 days, it does a full backup; otherwise, it does an incremental backup using GNU tar's incremental mode (which makes sure you will also get information about file deletes). It doesn't do inter-file diffs (so if you have huge files that change only a little bit every day, you'll get blowup), and you can't do single-file restores without basically scanning through all the files; tar isn't random-access. So it doesn't do much fancy, but it works, and it sends you a nice little email every day so you can know your backup went well. (There's also a less frequently used mode where the backed-up server encrypts the backup using GnuPG, so you don't even need to trust the backup server.) It really takes fifteen minutes to set up, so now there's no excuse. :-) Oh, and the only good dental floss is this one. :-)
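The core policy described above (full dump every 30 days, GNU tar's listed-incremental mode in between) is simple enough to sketch in a few lines. This is only an illustration of the idea, not code from backup.sh itself; all the names and paths here are made up:

```python
import datetime

FULL_INTERVAL_DAYS = 30  # backup.sh-style policy: a full backup every 30 days

def backup_type(last_full, today):
    """Decide between a full and an incremental backup."""
    return "full" if (today - last_full).days >= FULL_INTERVAL_DAYS else "incremental"

def tar_command(host, filesystem, snapshot_file):
    """ssh to the server and run GNU tar in listed-incremental mode.
    The snapshot file is what lets tar report file deletions, too."""
    return ["ssh", host, "tar", "--create", "--one-file-system",
            "--listed-incremental=" + snapshot_file, filesystem]

print(backup_type(datetime.date(2016, 9, 1), datetime.date(2016, 10, 16)))   # full
print(backup_type(datetime.date(2016, 10, 10), datetime.date(2016, 10, 16))) # incremental
```

Note the trade-off the post mentions: because this is plain tar streams, restores of a single file mean scanning through the archive; there's no random access.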

2 October 2016

Steinar H. Gunderson: SNMP MIB setup

If you just install the snmp package out of the box, you won't get the MIBs, so it's pretty much useless for anything vendor-specific without some setup. I'm sure this is documented somewhere, but I have to figure it out afresh every single time, so this time I'm writing it down; I can't possibly be the only one getting confused. First, install snmp-mibs-downloader from non-free. You'll need to work around bug #839574 to get the Cisco MIBs right:
# cp /usr/share/doc/snmp-mibs-downloader/examples/cisco.conf /etc/snmp-mibs-downloader/
# gzip -cd /usr/share/doc/snmp-mibs-downloader/examples/ciscolist.gz > /etc/snmp-mibs-downloader/ciscolist
Now you can download the Cisco MIBs:
# download-mibs cisco
However, this only downloads them; you will need to modify snmp.conf to actually use them. Comment out the line that says mibs : , and then add:
mibdirs +/var/lib/snmp/mibs/cisco/
Voila! Now you can use snmpwalk with e.g. -m AIRESPACE-WIRELESS-MIB to get the full range of Cisco WLC objects (and the first time you do so as root or the Debian-snmp user, the MIBs will be indexed in /var/lib/snmp/mib_indexes/.)

25 September 2016

Steinar H. Gunderson: Nageru @ Fyrrom

When Samfundet wanted to make their own Boiler Room spinoff (called Fyrrom, more or less a direct translation), it was a great opportunity to try out the new multitrack code in Nageru. After all, what can go wrong with a pretty much untested and unfinished git branch, right? So we cobbled together a bunch of random equipment from here and there, hooked it up to Nageru, and together with some great work from the people actually pulling together the event, this was the result. Lots of fun. And yes, some bugs were discovered; of course, field testing without follow-up patches is meaningless (that would either mean you're not actually taking your test experience into account, or that your testing gave no actionable feedback and thus was useless), so they will be fixed in due time for the 1.4.0 release. Edit: Fixed a screenshot link.

16 September 2016

Steinar H. Gunderson: BBR opensourced

This is pretty big stuff for anyone who cares about TCP. Huge congrats to the team at Google.

4 September 2016

Steinar H. Gunderson: Multitrack audio in Nageru 1.4.0

Even though the Chess Olympiad takes some attention right now, development on Nageru (my live video mixer) has continued steadily since the 1.3.0 release. I wanted to take a little time to talk about the upcoming 1.4.0 release, and why things are as they are; writing things down often makes them a bit clearer. Every major release of Nageru has had a specific primary focus: 1.0.0 was about just getting everything to work, 1.1.0 was about NVIDIA support for more oomph, 1.2.0 was about stabilization and polish (and added support for Blackmagic's PCI cards as a nice little bonus), and 1.3.0 was about x264 integration. For 1.4.0, I wanted to work on multitrack audio and mixing. Audio has always been a clear focus for Nageru, and for good reason; video is 90% about audio, and it's sorely neglected in most amateur productions (not to mention that processing tools are nearly non-existent in most free or cheap software video mixers). Right from the get-go, it's had a chain with proper leveling, compressors and, most importantly, visual monitoring, so that you know when things are not as they should be. However, it was also written with the assumption that there would be a single audio input source (one of the cameras), and that's going to change. Single-input is less of a showstopper than one would think at first; you can work around it by buying a mixer, plugging everything into that and then feeding that signal into the computer. However, there are a few downsides: If you want camera audio, you'll need to pull more cable from each camera (or have SDI de-embedders). Your mixer is likely to require an extra power supply, and that means yet more cable (any decent USB video card is powered over USB, so why shouldn't your audio be?). You'll need to buy and transport yet another device. And so on. (If you already have a PA mixer, of course you can use it, but just reusing the PA mix as a stream mix rarely gives the best results, and mixing on an aux bus gives very little flexibility.)
So for 1.4.0, I wanted to get essentially the processing equivalent of a mid-range mixer. But even though my education is in DSP, my experience with mixers is rather limited, so I did the only reasonable thing and went over to a friend who's also an (award-winning) audio engineer. (It turns out that everything on a mixer is the way it is for a pretty good reason, tuned through 50+ years of collective audio experience. If you just try to make up something on your own without understanding what's going on, you have a 0.001% chance of stumbling upon some genius new way of doing things by accident, and a significantly larger chance than that of messing things up.) After some back and forth, we figured out a reasonable set of basic tools that would be useful in the right hands, and not too confusing for a beginner. So let's have a look at the new controls you get: Nageru expanded audio control view. There's one set of these controls for each bus. (This is the expanded view; there's also a compact view that has only the meters and the fader, which is what you'll typically want to use during the run itself; the expanded view is for setup and tweaking.) A bus in Nageru is a pair of channels (left/right), sourced from a video capture or ALSA card. The channel mapping is flexible; my USB sound card has 18 channels, for instance, and you can use that to make several buses. Each bus has a name (here I named it, very creatively, "Main", but in a real setting you might want something like "Blue microphone" or "Speaker PC"), which is just for convenience; it doesn't mean much. The most important parts of the mix are given the most screen real estate, so even though the way through the signal chain is left-to-right, top-to-bottom, I'll go over it in the opposite direction. By far the most important part is the audio level, so the fader naturally is very prominent. (Note that the scale is nonlinear; you want more resolution in the most important area.)
Changing a fader with the mouse or keyboard is possible, and probably most people will be doing that, but Nageru will also support USB faders. These usually speak MIDI, for historical reasons, and there are some UI challenges when they're all so different, but you can get really small ones if you want to get that tactile feel without blowing up your budget or getting a bigger backpack. Then there's the meter to the left of that. Nageru already has R128 level meters in the mastering section (not shown here, but generally unchanged from 1.3.0), and those are kept as-is, but for each bus, you don't want to know loudness; you want to know recording levels, so you want a peak meter, not a loudness meter. In particular, you don't want the bus to send clipped data to the master (which would happen if you set it too high); Nageru can handle this situation pretty well (unlike most digital mixers, it mixes in full 32-bit floating-point so there's no internal clipping, and there's a limiter on the master by default), but it's still not a good place to be in, so you can see that being marked in red in this example. The meter doubles as an input peak check during setup; if you turn off all the effects and set the fader to neutral, you can see if the input hits peak or not, and then adjust it down. (Also, you can see here that I only have audio in the left channel; I'd better check my connections, or perhaps just use mono, by setting the right channel on the bus mapping to the same input as the left one.) The compressor (now moved from the mastering section to each bus) should be well-known for those using 1.3.0, but in this view, it also has a reduction meter, so that you can see whether it kicks in or not. 
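The peak metering described above is conceptually tiny. A minimal sketch (not Nageru's actual implementation), assuming 32-bit float samples where |x| = 1.0 is digital full scale:

```python
import math

def peak_dbfs(samples):
    """Digital peak level of a block of float samples, in dBFS.
    0 dBFS corresponds to full scale (|x| = 1.0); anything above that
    would clip once converted to integer PCM."""
    peak = max((abs(s) for s in samples), default=0.0)
    if peak == 0.0:
        return float("-inf")
    return 20.0 * math.log10(peak)

# A quiet 440 Hz tone at 48 kHz, and a "hot" one above full scale:
quiet = [0.1 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]
hot   = [1.3 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]

print(round(peak_dbfs(quiet)))  # -20 (dBFS)
print(peak_dbfs(hot) > 0.0)     # True: would clip on the way to the master
```

This is exactly why a per-bus peak meter differs from the R128 loudness meters in the mastering section: it tracks the instantaneous maximum, not perceived loudness.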
Most casual users will want to just leave the gain staging and compressor settings alone, but a skilled audio engineer will know how to adjust these to each speaker's antics (some speak at a pretty even volume and thus can get a bit of headroom, while some are much more variable and need tighter settings). Finally (or, well, first), there's the EQ section. The lo-cut is again well-known from 1.3.0 (the cutoff frequency is the same across all buses), but there's now also a simple three-band EQ per bus. Simply ask the speaker to talk normally for a bit, and tweak the controls until it sounds good. People have different voices and different ways of holding the microphone, and if you have a reasonable ear, you can use the EQ to your advantage to make them sound a little more even on the stream. Either that, or just put it in neutral, and the entire EQ code will be bypassed. The code is making pretty good progress; all the DSP stuff is done (save for some optimizations I want to do in zita-resampler, now that the discussion has started flowing again), and in theory, one could use it already as-is. However, there's a fair amount of gnarly support code that still needs to be written: In particular, I need to do some refactoring to support ALSA hotplug (you don't want your entire stream to go down just because a USB sound card dropped out for a split second), and similarly some serialization for saving/loading bus mappings. It's not exactly rocket science, but all the code still needs to be written, and there are a number of corner cases to think of. If you want to peek, the code is in the multichannel_audio branch, but beware; I rebase/squash it pretty frequently, so if you pull from it, expect frequent git breakage.
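As an aside, the simplest possible version of the lo-cut mentioned above is just a one-pole high-pass filter. This is purely an illustrative sketch of the concept (Nageru's actual lo-cut is certainly steeper than a single pole):

```python
import math

def lo_cut(samples, cutoff_hz, sample_rate=48000):
    """One-pole high-pass ('lo-cut') filter: removes DC and rumble below
    the cutoff while passing higher frequencies nearly untouched."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, y, prev_x = [], 0.0, 0.0
    for x in samples:
        y = alpha * (y + x - prev_x)  # standard RC high-pass recurrence
        prev_x = x
        out.append(y)
    return out

# A constant (0 Hz) input is rumble by definition: it should die out.
dc = [1.0] * 48000
print(abs(lo_cut(dc, cutoff_hz=120)[-1]) < 1e-3)  # True
```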

17 August 2016

Gunnar Wolf: Talking about the Debian keyring in Investigaciones Nucleares, UNAM

For the readers of my blog that happen to be in Mexico City, I was invited to give a talk at Instituto de Ciencias Nucleares, Ciudad Universitaria, UNAM.

I will be at Auditorio Marcos Moshinsky, on August 26, starting at 13:00. Auditorio Marcos Moshinsky is where we met for the early (~1996-1997) Mexico Linux User Group meetings. And... Wow. I'm amazed to realize it's been twenty years since I arrived there, young and innocent, the newest member of what looked like a sect obsessed with world domination and a penguin fetish.

14 August 2016

Steinar H. Gunderson: Linear interpolation, source alignment, and Debian's embedding policy

At some point back when the dinosaurs roamed the Earth and I was in high school, I borrowed my first digital signal processing book from a friend. I later went on to an engineering education and a master's thesis about DSP, but the very basics of DSP never cease to fascinate me. Today, I wanted to write something about one of them and how it affects audio processing in Nageru (and finally, how Debian's policies put me in a bit of a bind on this issue). DSP texts tend to obscure profound truths with large amounts of maths, so I'll try to present a somewhat less general result that doesn't require going into the mathematical details. That rule is: Adding a signal to weighted, delayed copies of itself is a filtering operation. (It's simple, but ignoring it will have sinister effects, as we'll see later.) Let's see exactly what that means with a motivating example. Let's say that I have a signal where I want to get rid of (or rather, reduce) high frequencies. The simplest way I can think of is to add every neighboring sample; that is, set y[n] = x[n] + x[n-1]. For each sample, we add the previous sample, i.e., the signal as it was one sample ago. (We ignore what happens at the edges; the common convention is to assume signals extend out to infinity with zeros.) What effect will this have? We could figure it out with some trigonometry, but let's just demonstrate it by plotting instead: We assume a 48 kHz sample rate (which means that our one-sample delay is 20.83 µs) and a 22 kHz note (definitely treble!), and plot the signal together with its one-sample-delayed copy (the x axis is sample number): [plot: filtered 22 kHz signal] As you can see, the resulting signal is a new signal of the same frequency (which is always true; linear filtering can never create new frequencies, just boost or dampen existing ones), but with much lower amplitude. The signal and the delayed version of it end up mostly cancelling each other out.
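The effect is easy to verify numerically even without plots; a small pure-Python sketch comparing amplitudes before and after the add-the-previous-sample filter at 48 kHz:

```python
import math

RATE = 48000

def sine(freq_hz, n_samples):
    return [math.sin(2 * math.pi * freq_hz * n / RATE) for n in range(n_samples)]

def one_sample_filter(x):
    """y[n] = x[n] + x[n-1]: add the signal to a one-sample-delayed copy."""
    return [x[n] + (x[n - 1] if n > 0 else 0.0) for n in range(len(x))]

def amplitude(x):
    return max(abs(s) for s in x)

treble = sine(22000, RATE)  # input amplitude 1.0
bass = sine(50, RATE)       # input amplitude 1.0

print(round(amplitude(one_sample_filter(treble)), 2))  # 0.26: heavily dampened
print(round(amplitude(one_sample_filter(bass)), 2))    # 2.0: boosted
```

The 22 kHz tone, nearly at the Nyquist limit, almost cancels against its delayed copy, while the 50 Hz tone is essentially doubled, exactly as the plots show.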
Also note that the signal has changed phase; the resulting signal has been a bit delayed compared to the original. Now let's look at a 50 Hz signal (turn on your bass face). We need to zoom out a bit to see full 50 Hz cycles: [plot: filtered 50 Hz signal] The original signal and the delayed one overlap almost exactly! For a lower frequency, the one-sample delay means almost nothing (since the waveform is varying so slowly), and thus, in this case, the resulting signal is amplified, not dampened. (The signal has changed phase here, too, actually exactly as much in terms of real time, but we don't really see it, because we've zoomed out.) Real signals are not pure sines, but they can be seen as sums of many sines (another fundamental DSP result), and since filtering is a linear operation, it affects those sines independently. In other words, we now have a very simple filter that will amplify low frequencies and dampen high frequencies (and delay the entire signal a little bit). We can do this for all frequencies from 0 to 24000 Hz; let's ask Octave to do it for us: [plot: frequency response of the simple filter] (Of course, in a real filter, we'd probably multiply the result by 0.5 to leave the bass untouched instead of boosting it, but it doesn't really change anything. A real filter would have a lot more coefficients, though, and they wouldn't all be the same!) Let's now turn to a problem that will at first seem different: Combining audio from multiple different time sources. For instance, when mixing video, you could have input from two different cameras or sound cards and would want to combine them (say, a source playing music and then some audience sound from a camera). However, unless you are lucky enough to have a professional-grade setup where everything runs off the same clock (and separate clock source cables run to every device), they won't be in sync; sample clocks are good, but they are not perfect, and they have e.g. some temperature variance.
Say we have really good clocks and they only differ by 0.01%; this means that after an hour of streaming, we have 360 ms of delay, completely ruining lip sync! This means we'll need to resample at least one of the sources to match the other; that is, play one of them faster or slower than it came in originally. There are two problems here: How do you determine how much to resample the signals, and how do we resample them? The former is a difficult problem in its own right; just about every algorithm not backed by solid control theory is doomed to fail in one way or another, and when they fail, it's extremely annoying to listen to. Nageru follows a 2012 paper by Fons Adriaensen; GStreamer does, well, something else. It fails pretty badly in a number of cases; see e.g. this 2015 master's thesis that tries to patch it up. However, let's ignore this part of the problem for now and focus on the resampling. So let's look at the case where we've determined we have a signal and need to play it 0.01% faster (or slower); in a real situation, this number would vary a bit (clocks are not even consistently wrong). This means that at some point, we want to output sample number 3000, and that corresponds to input sample number 3000.3, i.e., we need to figure out what's between two input samples. As with so many other things, there's a way to do this that's simple, obvious and wrong, namely linear interpolation. The basis of linear interpolation is to look at the two neighboring samples and weigh them according to the position we want. If we need sample 3000.3, we calculate y = 0.7·x[3000] + 0.3·x[3001] (don't switch the two coefficients!), or, if we want to save one multiplication and get better numerical behavior, we can use the equivalent y = x[3000] + 0.3·(x[3001] - x[3000]). And if we need sample 5000.5, we take y = 0.5·x[5000] + 0.5·x[5001]. And after a while, we'll be back on integer samples; output sample 10001 corresponds to x[10000] exactly.
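For illustration, the linear-interpolation resampler described above (the simple, obvious, wrong method) fits in a few lines; a sketch:

```python
def resample_linear(x, ratio):
    """Resample x by the given ratio (>1.0 = play faster) using linear
    interpolation: output sample k is taken at input position k * ratio,
    weighing the two neighboring input samples by proximity. Simple and
    obvious, but it acts as a time-varying low-pass filter on the signal."""
    out = []
    pos = 0.0
    while pos + 1 < len(x):
        i = int(pos)
        frac = pos - i
        out.append(x[i] + frac * (x[i + 1] - x[i]))  # y = x[i] + frac*(x[i+1]-x[i])
        pos += ratio
    return out

# The clock-drift arithmetic from the text: two clocks off by 0.01%
# accumulate 3600 s * 0.0001 = 0.36 s (360 ms) of skew per hour.
print(round(3600 * 0.0001, 2))  # 0.36

# On a straight ramp, linear interpolation happens to be exact:
ramp = [float(n) for n in range(10)]
print(resample_linear(ramp, 1.5))  # [0.0, 1.5, 3.0, 4.5, 6.0, 7.5]
```

The ramp hides the problem; on real audio, the fractional positions make the effective filter coefficients vary from sample to sample, which is exactly the time-varying filtering discussed next.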
By now, I guess it should be obvious what's going on: We're creating a filter! Linear interpolation will inevitably result in high frequencies being dampened; and even worse, we are creating a time-varying filter, which means that the amount of dampening will vary over time. This manifests itself as a kind of high-frequency "flutter", where the amount of flutter depends on the relative resampling frequencies. There's also cubic resampling (which can mean any of several different algorithms), but it only reduces the problem, it doesn't really solve it. The proper way of interpolating depends a lot on exactly what you want (e.g., whether you intend to change the rate quickly or not); this paper lays out a bunch of them, and was the paper that originally made me understand why linear interpolation is so bad. Nageru outsources this problem to zita-resampler, again by Fons Adriaensen; it yields extremely high-quality resampling under controlled delay, through a relatively common technique known as polyphase filters. Unfortunately, doing this kind of calculation takes CPU. Not a lot of CPU, but Nageru runs in rather CPU-challenged environments (ultraportable laptops where the GPU wants most of the TDP, and the CPU has to go down to the lowest frequency), and it is moving in a direction where it needs to resample many more channels (more on that later), so every bit of CPU helps. So I coded up an SSE optimization of the inner loop for a particular common case (stereo signals) and sent it in for upstream inclusion. (It made the code 2.7 times as fast without any structural changes or reduced precision, which is pretty much what you can expect from SSE.) Unfortunately, after a productive discussion, upstream suddenly went silent. I tried pinging, pinging again, and after half a year pinging again, but to no avail. I filed the patch in Debian's BTS, but the maintainer is understandably reluctant to carry a delta against upstream.
I also can't embed a copy; Debian policy would dictate that I build against the system's zita-resampler. I could work around it by rewriting zita-resampler until it looks nothing like the original, which might be a good idea anyway if I wanted to squeeze out the last drops of speed; there are AVX optimizations to be had in addition to SSE, and the structure as-is isn't ideal for SSE optimizations (although some of the changes I have in mind would have to be offset against increased L1 cache footprint, so careful benchmarking would be needed). But in a sense, that feels like just working around a policy that's there for good reason. So like I said, I'm in a bit of a bind. Maybe I should just buy a faster laptop. Oh, and how does GStreamer solve this? Well, it doesn't use linear interpolation. It does something even worse: it uses nearest neighbor. Gah. Update: I was asked to clarify that this is about the audio resampling done by the GStreamer audio sink to sync signals, not about the audioresample element, which solves a related but different problem (static sample rate conversion). The audioresample element supports a number of different resampling methods; I haven't evaluated them.

27 July 2016

Steinar H. Gunderson: Nageru in Debian

Uploading to ftp-master (via ftp to ftp.upload.debian.org):
Uploading nageru_1.3.3-1.dsc: done.
Uploading nageru_1.3.3.orig.tar.gz: done.
Uploading nageru_1.3.3-1.debian.tar.xz: done.
Uploading nageru-dbgsym_1.3.3-1_amd64.deb: done.
Uploading nageru_1.3.3-1_amd64.deb: done.
Uploading nageru_1.3.3-1_amd64.changes: done.
So now it's in the NEW queue, along with its dependency bmusb. Let's see if I made any fatal mistakes in release preparation :-) Edit: Whoa, that was fast; ACCEPTED into unstable.

22 July 2016

Norbert Preining: Yukio Mishima: The Temple of the Golden Pavilion

A masterpiece of modern Japanese literature: Yukio Mishima's (三島由紀夫) The Temple of the Golden Pavilion (金閣寺), the fictional story about the very real arson attack that destroyed the Golden Pavilion in 1950.
A somewhat different treatise on beauty and ugliness!
How shall I put it? Beauty, yes, beauty is like a decayed tooth. It rubs against one's tongue, it hangs there, hurting one, insisting on its own existence, finally [...] the tooth extracted. Then, as one looks at the small, dirty, brown, blood-stained tooth lying in one's hand, one's thoughts are likely to be as follows: Is this it?
Mizoguchi, a stutterer, is from a young age taken by a near-mystical image of the Golden Pavilion, influenced by his father, who considers it the most beautiful object in the world. After his father's death he moves to Kyoto and becomes an acolyte in the temple. He develops a friendship with Kashiwagi, who uses his clubfeet to make women feel sorry for him and make them fall in love with his clubfeet, as he puts it. Kashiwagi also puts Mizoguchi onto the first tracks of amorous experiences, but Mizoguchi invariably turns out to be impotent, not due to his stuttering, but due to the image of the Golden Pavilion appearing at the essential moment and destroying every chance.
Yes, this was really the coast of the Sea of Japan! Here was the source of all my unhappiness, of all my gloomy thoughts, the origin of all my ugliness and all my strength. It was a wild sea.
Mizoguchi becomes more and more mentally troubled over his relation with the head monk, neglects his studies, and after a stark reprimand he escapes to the north coast, from where he is brought back to the temple by the police. He decides to burn down the Golden Pavilion, which has taken more and more command of his thinking and doing. He carries out the deed with the aim of burning himself in the top floor, but escapes at the last second and retreats into the hills to watch the spectacle.
Closely based on the true story of the arsonist of the Golden Pavilion, whom Mishima even visited in prison, the book is a treatise on beauty and ugliness.
At his trial he [the real arsonist] said: I hate myself, my evil, ugly, stammering self. Yet he also said that he did not in any way regret having burned down the Kinkakuji.
Nancy Wilson Ross in the preface
Mishima is a master at showing these two extremes by contrasting the refined qualities of Japanese culture (flower arrangement, playing the shakuhachi) with immediate outbursts of contrasting behavior: cold and brutal recklessness. Take for example the scene where Kashiwagi is arranging flowers, stolen by Mizoguchi from the temple grounds, while Mizoguchi is playing the flute. They also discuss koans and various interpretations. Enter the Ikebana teacher, and mistress of Kashiwagi. She congratulates Kashiwagi on his excellent arrangement, which he answers coldly by quitting their relationship, both as teacher and as mistress, telling her in a formal style not to see him again. She, still ceremonially kneeling, suddenly destroys the flower arrangement, only to be beaten and thrown out by Kashiwagi. And the beauty and harmony have turned to ugliness and hate in seconds. Beauty and ugliness: two sides of the same coin, or inherently the same, because it is only a matter of point of view. Mishima ingeniously plays with this duality, and leads us through the slow and painful development of Mizoguchi to the bitter end, which finally gives him freedom, freedom from the force of beauty. Sometimes, seeing how our society is obsessed with beauty, I cannot get rid of the feeling that there are far more Mizoguchis at heart out there.

20 July 2016

Steinar H. Gunderson: Solskogen 2016 videos

I just published the videos from Solskogen 2016 on YouTube; you can find them all in this playlist. They are basically exactly what was being sent out on the live stream, frame for frame, except that the audio for the live shader compos has been remastered, and of course a lot of dead time has been cut out (the stream was running over several days, but most of the time showed only the information loop from the bigscreen). YouTube doesn't really support the variable 50/60 Hz frame rate we've been using, as far as I can tell, but mostly it seems to do some 60 Hz upconversion, which is okay enough, because the rest of your setup most likely isn't free-framerate anyway. Solskogen is interesting in that we're trying to do a high-quality stream with essentially zero money allocated to it; where something like DebConf can use €2500 for renting and transporting equipment (granted, for two or three rooms and not our single stream), we're largely dependent on personal equipment as well as borrowing things here and there. (I think we borrowed stuff from more or less ten distinct places.) Furthermore, we're nowhere near the situation of "two cameras, a laptop, perhaps a few microphones"; not only do you expect to run full 1080p60 to the bigscreen and switch between that and information slides for each production, but an Amiga 500 doesn't really have an HDMI port, and a Commodore 64 delivers an infamously broken 50.12 Hz signal that you really need to deal with carefully if you want it to not look like crap. These two factors together lead to a rather eclectic setup; here, visualized beautifully from my ASCII art by ditaa: [diagram: Solskogen 2016 A/V setup] Of course, for me, the really interesting part here is near the end of the chain, with Nageru, my live video mixer, doing the stream mixing and encoding. (There's also Cubemap, the video reflector, but honestly, I never worry about that anymore.
Serving 150 simultaneous clients is just not something to write home about anymore; the only adjustment I would want to make would probably be some WebSockets support to be able to deal with iOS without having to use a secondary HLS stream.) Of course, to make things even more complicated, the live shader compo needs two different inputs (the two coders' laptops) live on the bigscreen, which was done with two video capture cards, text chroma-keyed on top from Chroma, and OBS, because the guy controlling the bigscreen has different preferences from me. I would take his screen in as a dirty feed and then put my own stuff around it, like this: [screenshot: Solskogen 2016 shader compo] (Unfortunately, I forgot to take a screenshot of Nageru itself during this run.) Solskogen was the first time I'd really used Nageru in production, and despite super-extensive testing, there's always something that can go wrong. And indeed there was: First of all, we discovered that the local Internet line had been reduced from 30/10 to 5/0.5 Mbit/s (which is, frankly, unusable for streaming video), and after we'd halfway fixed that (we got it to 25/4 or so by prodding the ISP, of which we could reserve about 2 Mbit/s for video; demoscene content is really hard to encode, so I'd prefer a lot more), Nageru started crashing. They weren't even crashes I understood anything of. Generally it seemed like the NVIDIA drivers were returning GL_OUT_OF_MEMORY on things like creating mipmaps; it's logical that they'd be allocating memory, but we had 6 GB of GPU memory and 16 GB of CPU memory, and lots of it was free. (The PC we used for encoding was much, much faster than what you need to run Nageru smoothly, so we had plenty of CPU power left to run x264 in, although you can of course always want more.) It seemed to be mostly related to zoom transitions, so I generally avoided those and ran that night's compos in a more static fashion.
It wasn't until later that night (or morning, if you will) that I actually understood the bug (through the godsend of the NVX_gpu_memory_info extension, which gave me enough information about the GPU memory state that I understood I wasn't leaking GPU memory at all); I had set Nageru to lock all of its memory used in RAM, so that it would never ever get swapped out and lose frames for that reason. I had set the limit for lockable RAM based on my test setup, with 4 GB of RAM, but this setup had much more RAM, a 1080p60 input (which uses more RAM, of course) and a second camera, all of which I hadn't been able to test before, since I simply didn't have the hardware available. So I wasn't hitting the available RAM, but I was hitting the amount of RAM that Linux was willing to lock into memory for me, and at that point, it'd rather return errors on memory allocations (including the allocations the driver needed to make for its texture memory backings) than to violate the never swap contract. Once I fixed this (by simply increasing the amount of lockable memory in limits.conf), everything was rock-stable, just like it should be, and I could turn my attention to the actual production. Often during compos, I don't really need the mixing power of Nageru (it just shows a single input, albeit scaled using high-quality Lanczos3 scaling on the GPU to get it down from 1080p60 to 720p60), but since entries come in using different sound levels (I wanted the stream to conform to EBU R128, which it generally did) and different platforms expect different audio work (e.g., you wouldn't put a compressor on an MP3 track that was already mastered, but we did that on e.g. SID tracks since they have nearly zero ability to control the overall volume), there was a fair bit of manual audio tweaking during some of the compos. That, and of course, the live 50/60 Hz switches were a lot of fun: If an Amiga entry was coming up, we'd 1. fade to a camera, 2. 
fade in an overlay saying we were switching to 50 Hz so have patience, 3. set the camera as master clock (because the bigscreen's clock is going to go away soon), 4. change the scaler from 60 Hz to 50 Hz (takes two clicks and a bit of waiting), 5. change the scaler input in Nageru from 1080p60 to 1080p50, 6. steps 3,2,1 in reverse. Next time, I'll try to make that slightly smoother, especially as the lack of audio during the switch (it comes in on the bigscreen SDI feed) tended to confuse viewers. So, well, that was a lot of fun, and it certainly validated that you can do a pretty complicated real-life stream with Nageru. I have a long list of small tweaks I want to make, though; nothing beats actual experience when it comes to improving processes. :-)

13 July 2016

Steinar H. Gunderson: Cubemap 1.3.0 released

I just released version 1.3.0 of Cubemap, my high-performance video reflector. For a change, both new features are from (indirect) user requests; someone wanted support for raw TS inputs, and it was easy enough to add. And then I heard a rumor that people had found Cubemap useless because it was logging so much. Namely, if you have a stream that's down, Cubemap will connect to it every 200 ms and log two lines for every failed connection attempt. Now, why people discard software over ~50 MB/day of logs (more like 50 kB/day after compression) on a broken setup (if you have a stream that's not working, why not just remove it from the config file and reload?) instead of just asking the author is beyond me, but hey, eventually it reached my ears, and after a grand half hour of programming, there's now rate limiting of logging for failed connection attempts. :-) The new version hasn't hit Debian unstable yet, but I'm sure it will very soon.
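Cubemap itself is C++, but the gist of such rate limiting is small enough to sketch in Python (the class name, the numbers and the injected clock are my own illustration, not Cubemap's):

```python
import time

class RateLimitedLogger:
    """Print at most `burst` messages per `period` seconds, counting the rest."""
    def __init__(self, period=60.0, burst=10, now=time.monotonic):
        self.period, self.burst, self.now = period, burst, now
        self.window_start = now()
        self.sent = 0
        self.suppressed = 0

    def log(self, msg):
        t = self.now()
        if t - self.window_start >= self.period:
            # New window: report what we dropped, then reset the counters.
            if self.suppressed:
                print("(suppressed %d similar messages)" % self.suppressed)
            self.window_start, self.sent, self.suppressed = t, 0, 0
        if self.sent < self.burst:
            self.sent += 1
            print(msg)
        else:
            self.suppressed += 1
```

With a stream reconnecting every 200 ms, this caps the damage at a handful of lines per minute instead of hundreds.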

12 July 2016

Steinar H. Gunderson: Cisco WLC SNMP password reset

If you have a Cisco wireless controller whose admin password you don't know, and you don't have the right serial cable, you can still reset it over SNMP if you forgot to disable the default read/write community:
snmpset -Os -v 2c -c private 192.168.1.1 1.3.6.1.4.1.14179.2.5.5.1.3.5.97.100.109.105.110 s foobarbaz
Thought you'd like to know. :-P (There are other SNMP-based variants out there that rely on the CISCO-CONFIG-COPY-MIB, but older versions of the WLC software don't support it.)
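The long OID above is less magic than it looks: after the fixed base, the tail encodes the username as its length followed by the ASCII codes of its characters (5.97.100.109.105.110 spells "admin"). A small Python helper (my own illustration, not any Cisco tooling) makes it easy to target other usernames:

```python
# The base OID is copied verbatim from the snmpset command above; the tail
# encodes the username as <length>.<ASCII code>.<ASCII code>...
BASE_OID = "1.3.6.1.4.1.14179.2.5.5.1.3"

def wlc_password_oid(username):
    """Build the OID that addresses a given local user's password."""
    tail = [len(username)] + [ord(c) for c in username]
    return BASE_OID + "." + ".".join(str(n) for n in tail)

print(wlc_password_oid("admin"))
# -> 1.3.6.1.4.1.14179.2.5.5.1.3.5.97.100.109.105.110
```

Pass the result to snmpset exactly as in the command above.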

26 June 2016

Steinar H. Gunderson: Nageru 1.3.0 released

I've just released version 1.3.0 of Nageru, my live software video mixer. Things have been a bit quiet on the Nageru front recently, for two reasons: First, I've been busy with moving (from Switzerland to Norway) and the associated job change (from Google to MySQL/Oracle). Things are going well, but these kinds of changes tend to take, well, time and energy. Second, the highlight of Nageru 1.3.0 is encoding of H.264 streams meant for end users (using x264), not just the Quick Sync Video streams from earlier versions, which work more as a near-lossless intermediate format meant for transcoding to something else later. As with most things video, hitting such features really hard (I've been doing literally weeks of continuous stream testing) tends to expose weaknesses in upstream software. In particular, I wanted x264 speed control, where the quality is tuned up and down live as the content dictates. This is mainly because the content I want to stream this summer (demoscene competitions) varies from the very simple to the downright ridiculously complex (as you can see, YouTube basically just gives up and creates gray blocks). If you have only one static quality setting, you have the choice between something that looks like crap for everything, and something that drops frames like crazy (or, if your encoding software isn't up to the job, e.g. using ffmpeg(1) directly, simply falls behind so that all your clients' streams stop) when the tricky stuff comes. There was an unofficial patch for speed control, but it was buggy, not suited to today's hardware, and not kept at all up to date with modern x264 versions. So to get speed control, I had to rework that patch pretty heavily (including making it work in Nageru directly instead of requiring a patched x264), and then it exposed a bug in x264 proper that would cause corruption when changing between some presets, and I couldn't release 1.3.0 before that fix had at least hit git. 
Similarly, debugging this exposed an issue with how I did streaming with ffmpeg and the MP4 mux (which you need to be able to stream H.264 directly to HTML5 <video> without any funny and latency-inducing segmenting business); to know where keyframes started, I needed to flush the mux before each one, but this messes up interleaving, and if frames were ever dropped right in front of a keyframe (which they would be on the most difficult content, even at speed control's fastest presets!), the duration field of the frame would be wrong, causing the timestamps to be wrong and even yielding pts < dts in some cases. (VLC has to deal with flushing in exactly the same way, and thus would have exactly the same issue, although VLC generally doesn't transcode variable-framerate content so well to begin with, so the heuristics would be more likely to work. Incidentally, I wrote the VLC code for this flushing back in the day, to be able to stream WebM for some DebConf.) I cannot take credit for the ffmpeg/libav fixes (that was all done by Martin Storsjö), but again, Nageru had to wait for the new API they introduce (which simply signals to the application when a keyframe is about to begin, removing the need for flushing) to get into git mainline. Hopefully, both fixes will get into releases soon-ish and from there on make their way into stretch. Apart from that, there's a bunch of fixes as always. I'm still occasionally (about once every two weeks of streaming or so) hitting what I believe is a bug in NVIDIA's proprietary OpenGL drivers, but it's nearly impossible to debug without some serious help from them, and they haven't been responding to my inquiries. Every two weeks means that you could be hitting it in a weekend's worth of streaming, so it would be nice to get it fixed, but it also means it's really, really hard to make a reproducible test case. :-) But the fact that this is currently the worst stability bug (and that you can work around it by using e.g. 
Intel's drivers) also shows that Nageru is pretty stable these days.
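To give an idea of what speed control does, here is a toy sketch (my simplification, not the actual x264 speedcontrol patch, which smooths its measurements and uses per-preset timing tables): watch how far the encoder is falling behind and walk up or down x264's preset ladder accordingly.

```python
# x264's standard preset ladder, fastest first.
PRESETS = ["ultrafast", "superfast", "veryfast", "faster", "fast",
           "medium", "slow", "slower", "veryslow"]

class SpeedControl:
    """Toy speed control: pick an x264 preset from the encoder's backlog.

    This sketch just reacts to how many frames are waiting to be encoded;
    the real patch works from measured per-frame encode times.
    """
    def __init__(self, target_queue=1, max_queue=8):
        self.level = len(PRESETS) - 1  # start at the slowest/best preset
        self.target_queue = target_queue
        self.max_queue = max_queue

    def preset_for(self, queued_frames):
        if queued_frames > self.max_queue and self.level > 0:
            self.level -= 1  # falling behind: trade quality for speed
        elif queued_frames <= self.target_queue and self.level < len(PRESETS) - 1:
            self.level += 1  # keeping up: trade speed for quality
        return PRESETS[self.level]
```

The point is that the encoder always produces frames in real time; only the quality floats.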

16 May 2016

Steinar H. Gunderson: stretch on ODROID XU4

I recently acquired an ODROID XU4. Despite being 32-bit, it's currently at the upper end of cheap SoC-based devboards; it's based on the Exynos 5422 (the SoC in the Samsung Galaxy S5), which means a 2 GHz quad-core Cortex-A15 (plus four slower Cortex-A7 cores, in a big.LITTLE configuration), 2 GB RAM, USB 3.0, gigabit Ethernet, a Mali-T628 GPU and eMMC/SD storage. (My one gripe about the hardware is that you can't put on the case lid while still having access to the serial console.) Now, since I didn't want it for HTPC duty or something similar (I wanted a server/router I could carry with me), I didn't care much about the included Ubuntu derivative with all sorts of Samsung modifications, so instead I went to see if I could run Debian on it. (Spoiler alert: You can't just download debian-installer and run it.) It turns out lots of people make Debian images, but they're still filled with custom stuff here and there. In recent times, people have put in heroic efforts to make unified ARM kernels; servers et al. can now enumerate hardware using ACPI, while SoCs (such as the XU4) have a device tree file (loaded by the bootloader) containing a functional description of what hardware exists and how it's hooked up. And lo and behold, the 4.5.0 armmp kernel from stretch boots and mostly works! Well, except that there's no HDMI output. :-) There are two goals I'd like to achieve with this exercise: First, it's usually much easier to upgrade things if they are close to mainline. (I wanted support for sch_fq, for instance, which isn't in 3.10, and the vendor kernel is 3.10.) Second, anything that doesn't work in Debian is suddenly exposed pretty harshly, and can have bugs filed and fixed, which benefits not only XU4 users (if nothing else, because the custom distros have to carry less delta), but usually also other boards, as most issues are of a somewhat more generic nature. 
Yet this ideal seems to puzzle some of the more seasoned people in the ODROID user groups; I guess sometimes it's nice to come in as a naïve new user. :-) So far, I've filed bugs or feature requests against the kernel (#823552, #824435), U-Boot (#824356), grub (#823955, #824399), and login (#824391), and yes, that includes one for the aforementioned lack of HDMI output. Some of them are already fixed; with some luck, maybe the XU4 can be added next to the other Exynos5 board in the compatibility list for the armmp kernels at some point. :-) You can get the image at http://storage.sesse.net/debian-xu4/. Be sure to read the README and the linked ODROID forum post.

2 May 2016

Russ Allbery: Review: The Girl with the Dragon Tattoo

Review: The Girl with the Dragon Tattoo, by Stieg Larsson
Translator: Reg Keeland
Series: Millennium #1
Publisher: Vintage Crime
Copyright: 2005, 2008
Printing: June 2009
ISBN: 0-307-47347-3
Format: Mass market
Pages: 644
As The Girl with the Dragon Tattoo opens, Mikael Blomkvist is losing a criminal libel suit in Swedish court. His magazine, Millennium, published his hard-hitting piece of investigative journalism that purported to reveal sketchy arms deals and financial crimes by Hans-Erik Wennerström, a major Swedish businessman. But the underlying evidence didn't hold up, and Blomkvist could offer no real defense at trial. The result is a short prison stint for him (postponed several months into this book) and serious economic danger for Millennium. Lisbeth Salander is a (very) freelance investigator for Milton Security. Her specialty is research and background checks: remarkably thorough, dispassionate, and comprehensive. She's almost impossible to talk to, tending to meet nearly all questions with stony silence, but Dragan Armansky, the CEO of Milton Security, has taken her partly under his wing. She, and Milton Security, were hired by a lawyer named Dirch Frode to do a comprehensive background check on Mikael Blomkvist, which she and Dragan present near the start of the book. The reason, as the reader discovers in a few more chapters, is that Frode's employer wants to offer Blomkvist a very strange job. Over forty years ago, Harriet Vanger, scion of one of Sweden's richest industrial families, disappeared. Her uncle, Henrik Vanger, has been obsessed with her disappearance ever since, but in forty years of investigation has never been able to discover what happened to her. There are some possibilities for how her body could have been transported off the island the Vangers (mostly) lived, and live, on, but motive and suspects are still complete unknowns. Vanger wants Blomkvist to try his hand under the cover of writing a book about the Vanger family. Payment is generous, but even more compelling is Henrik Vanger's offer to give Blomkvist documented, defensible evidence against Wennerström at the end of the year. 
The Girl with the Dragon Tattoo (the original Swedish title is Män som hatar kvinnor, "Men who hate women") is the first of three mystery novels written at the very end of Stieg Larsson's life, all published posthumously. They made quite a splash when they were published: won multiple awards, sold millions of copies, and have resulted in four movies to date. I've had a copy of the book sitting around for a while and finally picked it up when in the mood for something a bit different. A major disclaimer up front: I read very little crime and mystery fiction. Every genre has its own conventions and patterns, and regular genre readers often look for different things than people new to that genre. My review is from a somewhat outside and inexperienced perspective, which may not be useful for regular genre readers. I'm also a US reader, reading the book in translation. It appears to be a very good translation, but it was also quite obvious to me that The Girl with the Dragon Tattoo was written from a slightly different set of cultural assumptions than I brought to the book. This is one of the merits of reading books from other cultures in translation. It can be eye-opening, and can carry some of the same thrill as science fiction or fantasy, to hit the parts of the book that question your assumptions. But it can also be hard to tell whether some striking aspect of a book is due to a genre convention I wasn't familiar with, a Swedish cultural assumption that I don't share, or just the personal style of the author. A few things do leap out as cultural differences. Blomkvist has to spend a few months in prison in the middle of this book, and that entire experience is completely foreign to an American understanding of what prison is like. The degradation, violence, and awfulness that are synonymous with prison for an American are almost entirely absent. 
He even enjoys the experience as quiet time to focus on writing a history of the Vangers (Blomkvist early on decides to take his cover story seriously, since he doubts he'll make any inroads into the mystery of Harriet's disappearance but can at least get a book out of it). It's a minor element in the book, glossed over in a few pages, but it's certainly eye-opening for how minimum security prison could be structured in a civilized country. Similarly, as an American reader, I was struck by how hard Larsson has to work to ruin Salander's life. Although much of the book is written from Blomkvist's perspective (in tight third person), Lisbeth Salander is the titular girl with the dragon tattoo and becomes more and more involved in the story as it develops. The story Larsson wanted to tell requires that she be in a very precarious position legally and socially. In the US, this would be relatively easy, particularly for someone who acts like Salander does. In Sweden, Larsson has to go to monumental efforts to find ways for Salander to credibly fall through Sweden's comprehensive social safety net, and still mostly relies on Salander's complete refusal to assist or comply with any form of authority or support. I've read a lot about differences in policies around social support between the US and Scandinavian countries, but I've rarely read something that drove the point home more clearly than the amount of work a novelist has to go to in order to mess up their protagonist's life in Sweden. The actual plot is slow-moving and as much about the psychology of the characters as it is about the mystery. The reader gets inside the thoughts of the characters occasionally, but Larsson shows far more than tells and leaves it to the reader to draw more general conclusions. 
Blomkvist's relationship with his long-time partner and Millennium co-founder is an excellent example: so much is left unstated that I would have expected other books to lay down in black and white, and the characters seem surprisingly comfortable with ambiguity. (Some of this may be my genre unfamiliarity; SFF tends to be straightforward to a fault, and more literary fiction is more willing to embrace ambiguous relationships.) While the mystery of Harriet's disappearance forms the backbone of the story, rather more pages are spent on Blomkvist navigating the emotional waters of the near-collapse of his career and business, his principles around investigation and journalism, and the murky waters of the Vangers' deeply dysfunctional family. Harriet's disappearance is something of a locked room mystery. The day she disappeared, a huge crash closed the only bridge from the island to the mainland, both limiting suspects and raising significant questions about why her body was never found on the island. It's also forty years into the past, so Blomkvist has to rely on Henrik Vanger's obsessive archives, old photographs, and old police reports. I found the way it unfolded to be quite satisfying: there are just enough clues to let Blomkvist credibly untangle things with some hard work and research, but they're obscure enough to make it plausible that previous investigators missed them. Through most of this novel, I wasn't sure what I thought of it. I have a personal interest in Blomkvist's journalistic focus (wrongdoing by rich financiers), but I had trouble warming to Blomkvist himself. He's a very passive, inward character, who spends a lot of the early book reacting to things that are happening to him. Salander is more dynamic and honestly more likable, but she's also deeply messed up, self-destructive, and does some viciously awful things in this book. 
And the first half of the book is very slow: lots of long conversations, lots of character introduction, and lots of Blomkvist wandering somewhat aimlessly. It's only when Larsson gets the two protagonists together that I thought the book started to click. Salander sees Blomkvist's merits more clearly than the reader can, I think. I also need to give a substantial warning: The Girl with the Dragon Tattoo is a very violent novel, and a lot of that violence is sexual. By mid-book, Blomkvist realizes that Harriet's disappearance is somehow linked with a serial killer whose trademark is horrific, sexualized symbolism drawn from Leviticus. There is a lot of rape here, including revenge rape by a protagonist. If that sort of thing is likely to bother you, you may want to steer way clear. That said, despite the slow pace, the nauseating subject matter, the occasionally very questionable ethics of protagonists, and a twist of the knife at the very end of the novel that I thought was gratuitously nasty on Larsson's part and wasn't the conclusion I wanted, I found myself enjoying this. It has a different pace and a different flavor than what I normally read, the characters are deep and complex enough to play off each other in satisfying ways, and Salander is oddly compelling to read about. Given the length, it's a substantial investment of time, but I don't regret reading it, and I'm quite tempted to read the sequel. I'm not sure this is the sort of book I can recommend (or not recommend) given my lack of familiarity with the genre, but I think US readers might get an additional layer of enjoyment out of seeing how different of a slant the Swedish setting puts on some of the stock elements of a crime novel. Followed by The Girl Who Played with Fire. Rating: 7 out of 10

26 April 2016

Matthias Klumpp: Why are AppStream metainfo files XML data?

This is a question raised quite often, most recently in a blog post by Thomas, so I thought it would be a good idea to give a slightly longer explanation (and also to create an article to link to). There are basically three reasons for using XML as the default format for metainfo files: 1. XML is easily forward/backward compatible, while YAML is not. This is a matter of extending the AppStream metainfo files with new entries, or adapting existing entries to new needs. Take this example XML line for defining an icon for an application:
<icon type="cached">foobar.png</icon>
and now the equivalent YAML:
Icons:
  cached: foobar.png
Now consider we want to add a width and height property to the icons, because we started to allow more than one icon size. Easy for the XML:
<icon type="cached" width="128" height="128">foobar.png</icon>
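That claim is easy to check mechanically; here is a quick Python sketch (standard library only; the 64-pixel fallback is just an example value of mine) playing the role of both an old and a new parser reading that line:

```python
import xml.etree.ElementTree as ET

new_style = '<icon type="cached" width="128" height="128">foobar.png</icon>'
icon = ET.fromstring(new_style)

# An "old" parser only knows about type and the text content; the extra
# attributes are silently ignored, so nothing breaks.
assert icon.get("type") == "cached"
assert icon.text == "foobar.png"

# A "new" parser picks up the size attributes when present, with a
# fallback for documents written before they existed.
width = int(icon.get("width", "64"))
height = int(icon.get("height", "64"))
assert (width, height) == (128, 128)
```
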
This line of XML can be read correctly by both old parsers, which will just see the icon as before without reading the size information, and new parsers, which can make use of the additional information if they want. The change is both forward and backward compatible. This looks different with the YAML file. The foobar.png is a string type, and parsers will expect a string as the value for the cached key, while we would need a dictionary there to include the additional width/height information:
Icons:
  cached:
    name: foobar.png
    width: 128
    height: 128
The change shown above will break existing parsers though. Of course, we could add a cached2 key, but that would require people to write two entries, to keep compatibility with older parsers:
Icons:
  cached: foobar.png
  cached2:
    name: foobar.png
    width: 128
    height: 128
Less than ideal. While there are ways to break compatibility in XML documents too, as well as ways to design YAML documents so as to minimize the risk of breaking compatibility later, keeping the format future-proof is far easier with XML than with YAML (and sometimes simply not possible with YAML documents). This makes XML a good choice for this use case, since we cannot easily do transitions with thousands of independent upstream projects, and need to care about backwards compatibility. 2. Translating YAML is not much fun. A property of AppStream metainfo files is that they can be easily translated into multiple languages. For that, tools like intltool and itstool exist to aid with translating XML using Gettext files. This can be done at project build time, keeping a clean, minimal XML file, or beforehand, storing the translated strings directly in the XML document. Generally, YAML files can be translated too. Take the following example (shamelessly copied from Dolphin):
<summary>File Manager</summary>
<summary xml:lang="bs">Upravitelj datoteka</summary>
<summary xml:lang="cs">Správce souborů</summary>
<summary xml:lang="da">Filhåndtering</summary>
This would become something like this in YAML:
Summary:
  C: File Manager
  bs: Upravitelj datoteka
  cs: Správce souborů
  da: Filhåndtering
Looks manageable, right? Now, AppStream also covers long descriptions, where individual paragraphs can be translated by the translators. In XML, that looks like this:
<description>
  <p>Dolphin is a lightweight file manager. It has been designed with ease of use and simplicity in mind, while still allowing flexibility and customisation. This means that you can do your file management exactly the way you want to do it.</p>
  <p xml:lang="de">Dolphin ist ein schlankes Programm zur Dateiverwaltung. Es wurde mit dem Ziel entwickelt, einfach in der Anwendung, dabei aber auch flexibel und anpassungsfähig zu sein. Sie können daher Ihre Dateiverwaltungsaufgaben genau nach Ihren Bedürfnissen ausführen.</p>
  <p>Features:</p>
  <p xml:lang="de">Funktionen:</p>
  <p xml:lang="es">Características:</p>
  <ul>
  <li>Navigation (or breadcrumb) bar for URLs, allowing you to quickly navigate through the hierarchy of files and folders.</li>
  <li xml:lang="de">Navigationsleiste für Adressen (auch editierbar), mit der Sie schnell durch die Hierarchie der Dateien und Ordner navigieren können.</li>
  <li xml:lang="es">barra de navegación (o de ruta completa) para URL que permite navegar rápidamente a través de la jerarquía de archivos y carpetas.</li>
  <li>Supports several different kinds of view styles and properties and allows you to configure the view exactly how you want it.</li>
  ....
  </ul>
</description>
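As a consumer-side aside (this is my own sketch, not libappstream's actual API), picking the right translation out of such an XML document is just an attribute match with a fallback to the untranslated element:

```python
import xml.etree.ElementTree as ET

# xml:lang expands to this fully-qualified attribute name in ElementTree.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

doc = ET.fromstring("""
<component>
  <summary>File Manager</summary>
  <summary xml:lang="da">Filhåndtering</summary>
</component>
""")

def localized(root, tag, lang):
    """Return the text for `lang`, falling back to the untranslated element."""
    fallback = None
    for el in root.findall(tag):
        el_lang = el.get(XML_LANG)
        if el_lang == lang:
            return el.text
        if el_lang is None:
            fallback = el.text
    return fallback

print(localized(doc, "summary", "da"))  # Danish translation
print(localized(doc, "summary", "de"))  # no German entry, so the untranslated one
```
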
Now, how would you represent this in YAML? Since we need to preserve the paragraph and enumeration markup somehow, and creating a large chain of YAML dictionaries is not really a sane option, the remaining choices are all compromises. In both cases, we would lose the ability to translate individual paragraphs, which also means that as soon as the developer changes the original text in YAML, translators would need to translate the whole bunch again, which is inconvenient. On top of that, there are no tools I am aware of to translate YAML properly, so we would need to write those too. 3. Allowing XML and YAML makes a confusing story and adds complexity. While adding YAML as a format would not be too hard, given that we already support it for DEP-11 distro metadata (Debian uses this), it would make the business of creating metainfo files more confusing. At the moment, we have a clear story: Write the XML, store it in /usr/share/metainfo, use standard tools to translate the translatable entries. Adding YAML to the mix adds an additional choice that needs to be supported for eternity and also has the problems mentioned above. I wanted to add YAML as a format for AppStream, and we discussed this at the hackfest as well, but in the end I think it isn't worth the pain of supporting it for upstream projects (remember, someone needs to maintain the parsers and specification too, and keep XML and YAML in sync and updated). Don't get me wrong, I love YAML, but for translated metadata which needs a guarantee on format stability it is not the ideal choice. So yeah, XML isn't fun to write by hand. But for this case, XML is a good choice.

Steinar H. Gunderson: Full stack

As I'm nearing the point where Nageru, my live video mixer, can directly produce a stream that is actually suitable for streaming straight to clients (without a transcoding layer in the chain), it struck me the other day how much of the chain I've actually had to touch: In my test setup, the signal comes into a Blackmagic Intensity Shuttle. At some point, I found what I believe is a bug in the card's firmware; I couldn't fix it, but a workaround was applied in the Linux kernel. (I also have some of their PCI cards, in which I haven't found any bugs, but I have found bugs in their drivers.) From there, it goes into bmusb, a driver I wrote myself. bmusb uses libusb-1.0 to drive the USB card from userspace, but for performance and stability reasons, I patched libusb to use the new usbfs zerocopy support in the Linux kernel. (The patch is still pending review.) Said zerocopy support wasn't written by me, but I did the work to clean it up and push it upstream (it's in the 4.6-rc* series). Once safely through bmusb, it goes of course into Nageru, which I wrote myself. Nageru uses Movit for pixel processing, which I also wrote myself. Movit in turn uses OpenGL; I've found bugs in all three major driver implementations, and fixed a Nageru-related one in Mesa (and in the process of debugging that, found bugs in apitrace, a most useful OpenGL debugger). Sound goes through zita-resampler to stretch it ever so gently (in case audio and video clocks are out of sync), which I didn't write, but patched to get SSE support (patch pending upstream). So now Nageru chews a bit on it, and then encodes the video using x264 (that's the new part in 1.3.0; of course, you need a fast CPU to do that, as opposed to using Quick Sync). I didn't write x264, but I had to redo parts of the speedcontrol patch (not part of upstream; awaiting review semi-upstream) because of bugs and outdated timings, and I also found a bug in x264 proper (fixed by upstream, pending inclusion). 
Muxing is done through ffmpeg, where I actually found multiple bugs in the muxer (some of which are still pending fixes). Once the stream is safely encoded and hopefully reasonably standards-conforming (that took me quite a while), it goes to Cubemap, which I wrote, for reflection to clients. For low-bitrate clients, it takes a detour through VLC to get re-encoded at a lower bitrate on a faster machine; I've found multiple bugs in VLC's streaming support in the past (and also fixed some of them, plus written the code that interacts with Cubemap). From there it goes to any of several clients, usually a browser. I didn't write any browsers (thank goodness!), but I wrote the client-side JavaScript that picks the closest relay, and the code for sending the stream to a Chromecast. I also found a bug in Chrome for Android (will be fixed in version 50 or 51, although the fix was just about turning on something that was already in the works), and one in Firefox for Linux (fixed by patching GStreamer's MP4 demuxer, although they've since switched away from that to something less crappy). IE/Edge also broke at some point, but unfortunately I don't have a way to report bugs to Microsoft. There's also at least one VLC bug involved on the client side (it starts decoding frames too late if they come with certain irregular timestamps, which causes them to be dropped), but I want to verify that it still persists after the muxer is fixed before I dig deep into it. Moral of the story: If anyone wants to write a multimedia application and says "I'll just use <framework, language or library XYZ>, and I'll get everything for free; I just need to click things together!", they simply don't know what they're talking about and are in for a rude awakening. Multimedia is hard, an amazing amount of things can go wrong, complex systems have subtle bugs, and there is no silver bullet.

6 April 2016

Steinar H. Gunderson: Nageru 1.2.0 released

I've just released version 1.2.0 of Nageru, my live video mixer. The main new feature is support for Blackmagic's PCI (and Thunderbolt) series of cards through their driver (in addition to the preexisting support for their USB3 cards, through my own free one), but the release is really much more than that. In particular, 1.2.0 has a lot of those small tweaks that take it just to the point where it starts feeling like software I can use and trust myself. Of course, there are still tons of rough edges (and probably also bad bugs I don't know about), but in a sense, it's the first real 1.x release. There's not one single thing I can point to; it's more the sum. To that end, I will be using it at Solskogen this summer to run what's most likely the world's first variable-framerate demoparty stream, with the stream nominally in 720p60 but dropping to 720p50 during the oldschool compos to avoid icky conversions on the way, given that almost all oldschool machines are PAL. (Of course, your player needs to handle it properly to get perfect 50 Hz playback, too :-) Most likely through G-SYNC or similar, unless you actually have a CRT you can set to 50 Hz.) For more details about exactly what's new, see the NEWS file, or simply the git commit log.

31 March 2016

Antoine Beaupré: My free software activities, March 2016

Debian Long Term Support (LTS) This is my 4th month working on Debian LTS, started by Raphaël Hertzog at Freexian. I spent half of the month away on vacation, so little work was done, especially since I tried to tackle rather large uploads like NSS and Xen. I also worked the frontdesk shift last week.

Frontdesk That work mainly consisted of figuring out how to best help the security team with the last uploads to the Wheezy release. For those who don't know, Debian 7 Wheezy, or "oldstable", is going to be unsupported by the security team starting at the end of April, and Debian 6 Squeeze (the previous LTS) is now unsupported. The PGP signatures on the archived release have started yielding expiration errors, which can be ignored but are really a strong reminder that it is time to upgrade. So the LTS team is now working towards backporting a few security issues from squeeze to wheezy, and this is what I focused on during triage work. I have identified the following high-priority packages I will work on after I complete my work on the Xen and NSS packages (detailed below):
  • libidn
  • icu
  • phpmyadmin
  • tomcat6
  • optipng
  • srtp
  • dhcpcd
  • python-tornado

Updates to NSS and Xen I have spent a lot of time testing and building packages for NSS and Xen. To be fair, Brian May did most of the work on the Xen packages; I merely did some work to test the packages on Koumbit's infrastructure, something I will continue doing in the next month. For NSS, wheezy and jessie are in this weird state where patches were provided to the security team all the way back in November yet were never tested. Since then, yet more issues have come up, and I worked hard to review and port patches for those new security issues to wheezy. I'll follow up on both packages in the following month.

Other free software work

Android TL;DR: there's an even longer version of this with the step-by-step procedures, which I will update as time goes on, in my wiki. I somehow inherited an Android phone recently, on loan from a friend because the phone broke one too many times and she got a new one from her provider. This phone is an HTC One S "Ville", which is fairly old, but good enough to play with and to give me a mobile computing platform to listen to podcasts, play music, access maps and create GPS traces. I was previously doing this with my N900, but that device is really showing its age: very little development is happening on it, the SDK is closed source and the device itself is fairly big compared to the "Ville". Plus, the SIM card actually works on the Ville so, even though I do not have an actual contract with a cell phone provider (too expensive, too invasive of my privacy), I can still make emergency phone calls (911)! Plus, since there is good wifi on the machine, I can use it to connect to the phone system with the built-in SIP client, send text messages through SMS (thanks to VoIP.ms SMS support) or Jabber. I have also played around with LibreSignal, the free software replacement for Signal, which uses proprietary Google services. Yes, the VoIP.ms SMS app also uses GCM, but hopefully that can be fixed. (While I was writing this, another Debian Developer wrote a good review of Signal, so I am happy to skip that step. Go read that.)
See also my apps list for a more complete list of the apps I have installed on the phone. I welcome recommendations on cool free software apps I should use!
I have replaced the stock firmware on the phone with CyanogenMod 12.1, which was a fairly painful experience, partly because of the difficult ambiance on the #cyanogenmod channel on Freenode, where I had extreme experiences: a brave soul helped me through the first flashing process for around 2 hours, nicely holding my hand at every step. At other times, I have seen flames and obtuse comments from users who were vulgar, brutal and obnoxious, if not sometimes downright homophobic and sexist. It is clearly a community that needs to fix its attitude. I have documented everything I could in detail in this wiki page, in case others want to resuscitate their old phones, but also because I ended up reinstalling the freaking phone about 4 times and was getting tired of forgetting how to do it every time. I am somewhat fascinated by Android: here is the Linux-based device that should save us all from the proprietary Apple nightmare of fenced-in gardens and censorship. Yet public Android downloads are hidden behind license agreements, even though the code itself is free, which has led fellow Debian developers to work on making libre rebuilds of Android to work around this insanity. But worse: all phones are basically proprietary devices down to the core. You need custom firmware to be loaded on the machine for it to boot at all, from the bootloader all the way down to the GSM baseband and wifi drivers. It is a minefield of closed source software, and trying to run free software on there is a bit of a delusion, especially since the baseband has so much power over the phone. Still, I think it is really interesting to run free software on those machines, and help people who are stuck with cell phones get familiar with software freedom.
It seems especially important to me to make Android developers aware of software freedom, considering how many apps are available at no cost yet cannot be contributed to significantly because the source code is not published at all, or is published only on the Google Store instead of the more open and public F-Droid repository, which publishes only free software. So I did contribute. This month, I am happy to announce that I contributed to the following free software projects on Android. I have also reviewed the literature surrounding Telegram, a popular messaging app rival to Signal and WhatsApp. Oddly enough, my contributions to Wikipedia on that subject were promptly reverted, which made me bring up the subject on the page's Talk page. This led to an interesting response from the article's main editors, which at least added the statement that "its security features have been contested by security researchers and cryptography experts". Considering the history of Telegram, I would keep people away from it and direct them to use Signal instead, even though Signal has similar metadata issues, mostly because of Telegram's lack of response to the security issues outlined by fellow security researchers. Both systems suffer from a lack of federation as well, which is a shame in this era of increasing centralization. I am not sure I will put much more work into developing for Android for now. It seems like a fairly hostile platform to work on, and unless I have specific pain points I want to fix, it feels so much better to work on my own stuff in Debian. Which brings me to my usual plethora of free software projects I got stuck in this month.

IRC projects

irssi-plugin-otr had a bunch of idiotic bugs lying around, and I had a patch in the Debian package that I hadn't submitted upstream; the package needed a rebuild because the irssi version changed, which is a major annoyance. The version in sid is now a snapshot, because upstream needs to make a new release, but at least it should fix things for my friends running unstable and testing. Hopefully those silly uploads won't be necessary in the future. That's for the client side. On the server side, I have worked on updating the Atheme-services package to the latest version, which actually failed because the upstream libmowgli is missing release tags, which means the Debian package for it is not up to date either. Still, it is nice to have a somewhat newer version: even though it is not the latest, some bugs were fixed. I have also looked at making atheme reproducible, but was surprised at the hostility of the upstream. In the end, it looks like they are still interested in patches, but they will be a little harder to deploy than for Charybdis, so this could take some time. Hopefully I will find time in the coming weeks to test the new atheme services daemon on the IRC network I operate.

Syncthing

I have also re-discovered Syncthing, a file synchronization program. Amazingly, I was having trouble transferring a single file between two phones. I couldn't use Bluetooth (not sure why), the "Wifi sharing" app was available only on one phone (and is proprietary, with a 4MB file limit), and everything else requires an account, the cloud, or cabling. So: just head to F-Droid, install Syncthing, flash a few QR codes around and voilà: files are copied over! Pretty amazing: the files were actually copied over the local network, using IPv6 link-local addresses, encryption and the DHT. Which is a real geeky way of saying it's completely secure and fast. Now, I found a few usability issues, so much so that I wrote a whole usability story for the developers, who were really appreciative of my review. Some of the issues were already fixed, others were pretty minor. Syncthing has a great community, and it seems like a great project I encourage everyone to get familiar with.

Battery stats

The battery-status project I mentioned previously has been merged with the battery-stats project (yes, the names are almost the same, which is confusing), so I had to do some work to fix my Python graph script, which was accepted upstream and will officially be part of Debian from now on, which is cool. The previous package was unofficial. I have also noticed that my battery has significantly less capacity than when I wrote the script. Whereas it was basically full back then, it seems it has now lost almost 15% of its capacity in about 6 months. According to the calculations of the script:
this battery will reach end of life (5%) in 935 days, 19:07:58.336480, on 2018-10-23 12:06:07.270290
Which is, to be fair, a good life: it will still work, more or less, for about three more years.
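The estimate is just a straight-line extrapolation of capacity over time. A minimal sketch of the idea in Python (hypothetical numbers and function name, not the actual battery-stats code):

```python
from datetime import datetime, timedelta

def predict_end_of_life(samples, threshold=5.0):
    """Linearly extrapolate battery capacity (percent of design
    capacity) down to a threshold, from (datetime, percent) samples.
    Returns the projected date, or None if the battery is not degrading."""
    (t0, c0), (t1, c1) = samples[0], samples[-1]
    rate = (c1 - c0) / (t1 - t0).total_seconds()  # percent per second
    if rate >= 0:
        return None  # capacity stable or improving; no prediction
    seconds_left = (threshold - c1) / rate
    return t1 + timedelta(seconds=seconds_left)

# Hypothetical readings: full six months ago, ~15% lost since.
samples = [
    (datetime(2015, 10, 1), 100.0),
    (datetime(2016, 4, 1), 85.0),
]
print(predict_end_of_life(samples))
```

With those made-up numbers the projection lands somewhere in late 2018, which is roughly the shape of the output quoted above.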

Playlist, git-annex and MPD in Python

On top of my previously mentioned photos-import script, I have worked on two more small programs. One is called get-playlist and is an extension to git-annex to easily copy to the local git-annex repository all files present in a given M3U playlist. This is useful for me because my phone cannot possibly fit my whole MP3 collection, and I use playlists in GMPC to tag certain files, particularly the Favorites list, which is populated by the "star" button in the UI. I had a lot of fun writing this script. I started using elpy as an IDE in Emacs. (Notice how Emacs got a new webpage, which is a huge improvement over the old one, which had been basically unchanged since the original version, now almost 20 years old, and probably written by RMS himself.) I wonder how I managed to stay away from Elpy for so long, as it glues together key components of Emacs in an elegant and functional bundle:
  • Company: the "vim-like" completion mode I had been waiting for forever
  • Jedi: context-sensitive autocompletion for Python
  • Flymake: to outline style and syntax errors (unfortunately not the more modern Flycheck)
  • inline documentation...
In short, it's amazing and makes everything so much easier to work with that I wrote another script. The first program wouldn't work very well because some songs in the playlists had been moved, so I made another program, this time to repair playlists that refer to missing files. The script is simply called fix-playlists and can operate transparently on multiple playlists. It has a bunch of heuristics to find files and uses an MPD server as a directory to search in. It can edit files in place or just act as a filter.
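The core of such a repair pass can be sketched as matching basenames against a listing of the music library (for instance the output of MPD's listall command). This `fix_playlist` helper is a hypothetical simplification, not the actual fix-playlists code:

```python
import os

def fix_playlist(lines, library):
    """Repair playlist entries pointing at missing files by matching
    basenames against a library listing (e.g. from MPD's listall).
    Entries that can't be matched are passed through unchanged."""
    by_name = {os.path.basename(path): path for path in library}
    fixed = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            fixed.append(line)          # comments and M3U directives
        elif line in library:
            fixed.append(line)          # entry still valid
        else:
            # heuristic: a file with the same basename moved elsewhere
            fixed.append(by_name.get(os.path.basename(line), line))
    return fixed
```

Used as a filter, the input playlist lines go in and corrected lines come out, which also makes the in-place mode a trivial wrapper.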

Useful snippets

Writing so many scripts, so often, I figured I needed to stop wasting time always writing the same boilerplate stuff at the top of every file, so I started publishing Yasnippet-compatible file snippets in my snippets repository. For example, this report is based on the humble lts snippet. I also have a base license snippet which slaps the AGPLv3 license on top of a Python file. But the most interesting snippet, for me, is this simple script snippet, which is basic scaffolding for a command-line script that includes argument processing, logging and filtering of files, something I was always copy-pasting around.
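For the curious, a Yasnippet file is just a plain-text template with a short comment header. A hypothetical license-style snippet might look like this (illustrative only, not the actual snippet from the repository):

```
# -*- mode: snippet -*-
# name: agpl3-header
# key: license
# --
# Copyright (C) `(format-time-string "%Y")`  ${1:Author}
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
$0
```

Typing the key and expanding it drops the boilerplate in place, with the author name left as a tab-stop to fill in.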

Other projects

And finally, a list of interesting issues, in no particular order:
  • there's a new Bootstrap 4 theme for Ikiwiki. It was delivering content over CDNs, which is bad for privacy, so I filed an issue, which was almost immediately fixed by the author!
  • I found out about BitHub, a tool to make payments with Bitcoins for patches submitted on OWS projects. It wasn't clear to me where to find the demo, but when I did, I quickly filed a PR to fix the documentation. Given the number of open PRs there and the lack of activity, I wonder if the model is working at all...
  • a fellow Debian Developer shared his photos workflow, which was interesting to me because I have my own peculiar script, which I never mentioned here, but which I ended up sharing with the author

Steinar H. Gunderson: Signal

Signal is a pretty amazing app; it manages to combine great security with great simplicity. (It literally takes two minutes, even for an unskilled user, to set it up.) I looked at the Wikipedia article, and the list of properties the protocol provides is impressive; I hardly had any idea you would even want all of these. But I've tried to decode what they actually mean. (There are more guarantees and features for group chat.) Again, it's really impressive. Modern cryptography at its finest. My only two concerns are that it's too bound to telephone numbers (you can't have the same account on two devices, for instance; it very closely mimics the SMS/MMS/POTS model in that regard), and that it's too clumsy to verify public keys for the IM part. It can show them as hex or do a two-way QR code scan, but there's no NFC support, and there's no way to read e.g. a series of plaintext words instead of the fingerprint. (There's no web of trust, but that's probably actually for the better.) I hear WhatsApp is currently integrating the Signal protocol (or might be done already; it's a bit unclear), but for now, my bet is on Signal. Install it now and frustrate the NSA. And get free SMS/MMS to other Signal users (who are growing in surprising numbers) while you're at it. :-)
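The plaintext-words idea wished for above is not hard to imagine; the PGP word list, for instance, maps each fingerprint byte to a pronounceable word so two people can compare keys over the phone. A toy sketch of the concept (hypothetical word list and function, nothing to do with Signal's actual code):

```python
import hashlib

# Toy word list for illustration; a real scheme like the PGP word list
# uses 256 distinct words per byte position so the mapping is lossless.
WORDS = ["alpha", "bravo", "charlie", "delta",
         "echo", "foxtrot", "golf", "hotel"]

def fingerprint_words(pubkey: bytes, n: int = 6) -> list:
    """Render a key fingerprint as a few short words for reading aloud."""
    digest = hashlib.sha256(pubkey).digest()
    return [WORDS[b % len(WORDS)] for b in digest[:n]]

print(" ".join(fingerprint_words(b"example public key")))
```

The point is that a deterministic key-to-words mapping lets both parties read the same short phrase aloud, which is far less error-prone than comparing hex digits.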
