
20 November 2017

Jonathan Carter: New powerline goodies in Debian

About powerline

Powerline does some font substitutions that allow additional theming for terminal applications such as tmux, vim, zsh, bash and more. The powerline font has been packaged in Debian for a while now, and I've packaged two powerline themes for vim and zsh. They're currently only in testing, but once my current todo list on packages looks better, I'll upload them to stretch-backports.

For vim, vim-airline

vim-airline is different from previous vim powerline plugins in that it doesn't depend on perl or python; it's implemented purely in vim config files.

Demo

Here's a gif from the upstream site; they also demo various themes there, which you can get in Debian by installing the vim-airline-themes package.

Vim Airline demo gif

How to enable

Install the vim-airline package, and add the following to your .vimrc file:
" Vim Airline theme
let g:airline_theme='powerlineish'
let g:airline_powerline_fonts = 1
set laststatus=2
The vim-airline-themes package contains additional themes that can be set in the snippet above.

For zsh, powerlevel9k

Demo

Here's a gif from upstream that walks through some of its features. You can configure it to display all kinds of system metrics, as well as information about VCS status in your current directory.

Powerline demo gif

Powerlevel9k has lots of options and features. If you're interested in it, you should probably take a look at its readme file on GitHub for all the details.

How to enable

Install the zsh-theme-powerlevel9k package and add the following to your .zshrc file:
source /usr/share/powerlevel9k/powerlevel9k.zsh-theme
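The system-metric and VCS segments mentioned above can be picked in .zshrc before sourcing the theme. As a hedged illustration (the segment names below come from the powerlevel9k readme, not from this post):

# Hypothetical customisation: choose prompt segments before loading the theme
POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(dir vcs)
POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=(status load ram)
source /usr/share/powerlevel9k/powerlevel9k.zsh-theme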

18 November 2017

Russ Allbery: Free software log (October 2017)

I've not been taking the time to write these promptly in part because I've not found the time to do much free software work. Thankfully, November will at least contain some work-sponsored work (on a package that isn't widely used yet, but maybe we can make it appealing enough). Anyway, that's for next month. For October, the only thing I have to report is refreshing the signing key for my personal Debian repository (generating a new key for the new release) and finally updating the distributions to move stretch to stable, jessie to oldstable, and create the new testing distribution (buster). If for some strange reason you're using my personal repositories (there probably isn't much reason just at the moment), be sure to upgrade eyrie-keyring, since I'm going to switch signing over to the new key shortly.

17 November 2017

Jonathan Carter: I am now a Debian Developer

It finally happened

On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you're paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (the current draft is around 20 times longer than this entire post). So I decided I'd rather do a proper bio page one day and just do a super short version for now, so that someone might end up actually reading it.

How it started

In 1999... no wait, I can't start there; as much as I want to, this is a short post, so... In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me, since everything in Debian was really old and its boot-floppies installer program kept crashing on my very vanilla computers.

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work later turned into a full-time job there. This was a big deal for me, because I didn't want to support Windows ever again, and I didn't ever think it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just didn't get it and needed some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called freshmeat that listed new releases of upstream software, and then I would download and compile it myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could find package state and maintainer scripts and fix them to get my system going again. Thomas told me that anyone could become a Debian Developer and maintain packages in Debian, said that I should check it out, and joked that maybe I could eventually snap up "highvoltage@debian.org". I just laughed, because back then you might as well have told me that I could run for president of the United States; it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)

Ubuntu and beyond

Ubuntu 4.10 default desktop. Image from DistroWatch.

One day, Thomas told me that Mark was planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just "warty" written on it and said that I should install it on a server so that we could try it out. It was great: it used the new debian-installer and installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system was going to be called Ubuntu and that the desktop edition had naked people on it. I wasn't sure what he meant and was kind of dumbfounded, so I just laughed and said something like "Uh, ok". At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline "Linux for Human Beings". Fun fact: one of my first jobs at the foundation was to register the ubuntu.com domain name. Unfortunately I found it was already owned by a domain squatter, and it was eventually handled by legal. Closer to Ubuntu's first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation, and they were around for a few days getting some sun. Thomas kept saying "Go talk to them! Go talk to them!", but I felt so intimidated by them that I couldn't even bring myself to walk up and say hello. In the interest of keeping this short, I'm leaving out a lot of history, but later on I read through the Debian packaging policy, really started getting into packaging, and discovered Daniel Holbach's packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I'd like to do a similar video series that might help a new generation of packagers. I've also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would attend one, and that another 5 years after that I'd end up on the DebConf Committee, having also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It's been a long journey for me, and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to https://mentors.debian.net and if I have some spare time I'll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!

Raphaël Hertzog: Freexian s report about Debian Long Term Support, October 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 197 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly to 183 hours per month. With the increasing number of security issues to deal with, and with the number of open issues not really going down, I decided to bump the funding target to what amounts to 1.5 full-time positions. The security tracker currently lists 50 packages with a known CVE and the dla-needed.txt file 36 (we're a bit behind on CVE triaging, apparently).

Thanks to our sponsors

New sponsors are in bold.


Craig Small: Short Delay with WordPress 4.9

You may have heard that WordPress 4.9 is out. While this seems like a good improvement over 4.8, it has a new editor that uses CodeMirror. So what's the problem? Well, inside CodeMirror is jshint, and this has that idiotic "no evil" license. I think this was added in by WordPress, not CodeMirror itself. So basically WordPress 4.9 has a file, or actually a tiny part of a file, that is non-free. I'll now have to delay the update of WordPress to hack that piece out, which probably means removing the javascript linter. Not ideal, but that's the way things go.

Michal Čihař: Running Bitcoin node and ElectrumX server

I've been tempted to run my own ElectrumX server for quite some time. My first attempt was to run it on a Turris Omnia router; however, that turned out to be impossible due to the memory requirements both Bitcoind and ElectrumX have. This time I've dedicated a host to it, and it runs fine:

Electrum connecting to btc.cihar.com

The server runs Debian sid (it would probably be doable on stretch as well, but I didn't try much) and the setup was pretty simple. First we need to install some things - the Bitcoin daemon and ElectrumX dependencies:
# Bitcoin daemon, not available in stretch
apt install bitcoind
# We will checkout ElectrumX from git
apt install git
# ElectrumX deps
apt install python3-aiohttp
# Build environment for ElectrumX deps
apt install build-essential python3-pip libleveldb-dev
# ElectrumX deps not packaged in Debian
pip3 install plyvel pylru
# Download ElectrumX sources
su - electrumx -c 'git clone https://github.com/kyuupichan/electrumx.git'
Create users which will run the services:
adduser bitcoind
adduser electrumx
Now it's time to prepare configuration for the services. For Bitcoin it's quite simple - we need to configure the RPC interface and enable the transaction index in /home/bitcoind/.bitcoin/bitcoin.conf:
txindex=1
listen=1
rpcuser=bitcoin
rpcpassword=somerandompassword
The ElectrumX configuration is quite simple as well and it's pretty well documented. I've decided to place it in /etc/electrumx.conf:
COIN=BitcoinSegwit
DB_DIRECTORY=/home/electrumx/.electrumx
DAEMON_URL=http://bitcoin:somerandompassword@localhost:8332/
TCP_PORT=50001
SSL_PORT=50002
HOST=::
DONATION_ADDRESS=3KPccmPtejpMczeog7dcFdqX4oTebYZ3tF
SSL_CERTFILE=/etc/letsencrypt/live/btc.cihar.com/fullchain.pem
SSL_KEYFILE=/etc/letsencrypt/live/btc.cihar.com/privkey.pem
REPORT_HOST=btc.cihar.com
BANNER_FILE=banner
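The SSL_CERTFILE and SSL_KEYFILE entries above point at Let's Encrypt paths. Obtaining the certificate isn't covered in this post; as a hedged aside, with the certbot package it could look something like this (assuming port 80 on the host is free):

# Hypothetical: obtain the certificate referenced in the configuration above
apt install certbot
certbot certonly --standalone -d btc.cihar.com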
I've decided to control both services using systemd, so it's a matter of creating pretty simple units for that. The Bitcoin one closely matches the one I've used on Turris Omnia, and the ElectrumX one closely matches the unit they ship, with some minor changes. The systemd unit for ElectrumX goes in /etc/systemd/system/electrumx.service:
[Unit]
Description=Electrumx
After=bitcoind.service
[Service]
EnvironmentFile=/etc/electrumx.conf
ExecStart=/home/electrumx/electrumx/electrumx_server.py
User=electrumx
LimitNOFILE=8192
TimeoutStopSec=30min
[Install]
WantedBy=multi-user.target
And finally, the systemd unit for the Bitcoin daemon in /etc/systemd/system/bitcoind.service:
[Unit]
Description=Bitcoind
After=network.target
[Service]
ExecStart=/usr/bin/bitcoind
User=bitcoind
TimeoutStopSec=30min
Restart=on-failure
RestartSec=30
[Install]
WantedBy=multi-user.target
Now everything should be configured and it's time to start up the services:
# Enable services so that they start on boot 
systemctl enable electrumx.service bitcoind.service
# Start services
systemctl start electrumx.service bitcoind.service
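To keep an eye on the services while they come up, the standard tools work; as a hedged aside, bitcoin-cli ships with the daemon and reads the same bitcoin.conf when run as the bitcoind user:

# Check sync progress: "blocks" approaches "headers" as the chain downloads
su - bitcoind -c 'bitcoin-cli getblockchaininfo'
# Follow the daemon logs
journalctl -u bitcoind -f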
Now you have a few days of waiting while Bitcoin fetches the whole blockchain and ElectrumX indexes it. If you happen to have another Bitcoin node running (or had one running in the past), you can speed up the process by copying blocks from that system (located in ~/.bitcoin/blocks/). Only get blocks from sources you trust absolutely, as they might change your view of history; see the Bitcoin wiki for more information on the topic. There is also a magnet link in the ElectrumX docs to download the ElectrumX database, which speeds up that part; it should be safe to download even from an untrusted source. The last thing I'd like to mention is resource usage. You should have at least 4 GB of memory to run this; 8 GB is really preferred (the two services together consume around 4 GB). On disk space, Bitcoin currently consumes 170 GB and ElectrumX 25 GB. Ideally all this should be running on an SSD. You can, however, offload some of the files to slower storage, as old blocks are rarely accessed, and this can save some space on your faster disk. The following script will move around 50 GB of blockchain data to /mnt/btc/blocks (use it only when the Bitcoin daemon is not running):
#!/bin/sh
set -e
DEST=/mnt/btc/blocks
cd ~/.bitcoin/blocks/
find . -type f \( -name 'blk00[0123]*.dat' -o -name 'rev00[0123]*.dat' \) | sed 's@^\./@@' | while read name ; do
        mv "$name" "$DEST/$name"
        ln -s "$DEST/$name" "$name"
done
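To run it as the user owning the data (the path below is hypothetical; adjust it to wherever you saved the script):

su - bitcoind -c 'sh /home/bitcoind/move-blocks.sh'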
Anyway if you would like to use this server, configure btc.cihar.com in your Electrum client. If you find this howto useful, you can send some Satoshis to 3KPccmPtejpMczeog7dcFdqX4oTebYZ3tF.


Norbert Preining: ScalaFX: Problems with Tables abound

Doing a lot with all kinds of tables in ScalaFX, I stumbled upon a bug in ScalaFX that, with the help of the bug report, I was able to circumvent. It is a subtle bug where types are mixed between scalafx.SOMETHING and the corresponding javafx.SOMETHING.
In one of the answers it is stated that:
The issue is with implicit conversion from TableColumn not being located by Scala. I am not clear why this is happening (maybe a Scala bug).
But the provided work-around at least made it work. That is, until today, when I stumbled onto (probably) just another instance of this bug, one where the same work-around does not help. I am using TreeTableViews and try to replace the children of the root by filtering out one element. The code I use is of course very different, but here is a reduced and fully contained example, based on the original bug report and adapted to use a TreeTableView:
import scalafx.Includes._
import scalafx.scene.control.TreeTableColumn._
import scalafx.scene.control.TreeItem._
import scalafx.application.JFXApp.PrimaryStage
import scalafx.application.JFXApp
import scalafx.scene.Scene
import scalafx.scene.layout._
import scalafx.scene.control._
import scalafx.scene.control.TreeTableView
import scalafx.scene.control.Button
import scalafx.scene.paint.Color
import scalafx.beans.property.{ObjectProperty, StringProperty}
import scalafx.collections.ObservableBuffer

// TableTester.scala
object TableTester extends JFXApp {
  val characters = ObservableBuffer[Person](
    new Person("Peggy", "Sue", "123", Color.Violet),
    new Person("Rocky", "Raccoon", "456", Color.GreenYellow),
    new Person("Bungalow ", "Bill", "789", Color.DarkSalmon)
  )
  val table = new TreeTableView[Person](
    new TreeItem[Person](new Person("", "", "", Color.Red)) {
      expanded = true
      children = characters.map(new TreeItem[Person](_))
    }) {
    columns ++= List(
      new TreeTableColumn[Person, String] {
        text = "First Name"
        cellValueFactory = {
          _.value.value.value.firstName
        }
        prefWidth = 180
      },
      new TreeTableColumn[Person, String]() {
        text = "Last Name"
        cellValueFactory = {
          _.value.value.value.lastName
        }
        prefWidth = 180
      }
    )
  }
  stage = new PrimaryStage {
    title = "Simple Table View"
    scene = new Scene {
      content = new VBox() {
        children = List(
          new Button("Test it") {
            onAction = p => {
              val foo: ObservableBuffer[TreeItem[Person]] = table.root.value.children.map(p => {
                val bar: TreeItem[Person] = p
                p
              })
              table.root.value.children = foo
            }
          },
          table)
      }
    }
  }
}

// Person.scala
class Person(firstName_ : String, lastName_ : String, phone_ : String, favoriteColor_ : Color = Color.Blue) {
  val firstName = new StringProperty(this, "firstName", firstName_)
  val lastName = new StringProperty(this, "lastName", lastName_)
  val phone = new StringProperty(this, "phone", phone_)
  val favoriteColor = new ObjectProperty(this, "favoriteColor", favoriteColor_)
  firstName.onChange((x, _, _) => System.out.println(x.value))
}
With this code what one gets on compilation with the latest Scala and ScalaFX is:
[error]  found   : scalafx.collections.ObservableBuffer[javafx.scene.control.TreeItem[Person]]
[error]  required: scalafx.collections.ObservableBuffer[scalafx.scene.control.TreeItem[Person]]
[error]               val foo: ObservableBuffer[TreeItem[Person]] = table.root.value.children.map(p =>  
[error]                                                                                          ^
[error] one error found
And in this case, adding import statements didn't help; what a pity. Unfortunately this bug has been open since 2014 with a help-wanted tag and nothing is going on. I guess I have to try to dive into the source code of ScalaFX...

15 November 2017

Steinar H. Gunderson: Introducing Narabu, part 6: Performance

Narabu is a new intraframe video codec. You probably want to read part 1, part 2, part 3, part 4 and part 5 first. Like I wrote in part 5, there basically isn't a big splashy ending where everything is resolved here; you're basically getting some graphs with some open questions and some interesting observations.

First of all, though, I'll need to make a correction: in the last part, I wrote that encoding takes 1.2 ms for 720p luma-only on my GTX 950, which isn't correct; I remembered the wrong number. The right number is 2.3 ms, which I guess explains even more why I don't think it's acceptable at the current stage. (I'm also pretty sure it's possible to rearchitect the encoder so that it's much better, but I am moving on to other video-related things for the time being.)

I encoded a picture straight off my DSLR (luma-only) at various resolutions, keeping the aspect. Then I decoded it a bunch of times on my GTX 950 (low-end last-generation NVIDIA) and on my HD 4400 (ultraportable Haswell laptop) and measured the times. They're normalized to megapixels per second decoded; remember that doubling the width (x axis) means quadrupling the pixels. Here it is:

Narabu decoding performance graph

I'm not going to comment much beyond two observations. Encoding only contains the GTX 950 because I didn't finish the work to get that single int64 divide off:

Narabu encoding performance graph

This is interesting. I have few explanations; probably more benchmarking and profiling would be needed to make sense of any of it. In fact, it's so strange that I would suspect a bug, but it does indeed seem to create a valid bitstream that is decoded by the decoder. Do note, however, that seemingly even at the smallest resolutions, there's a 1.7 ms base cost (you can't see it in the picture, but you'd see it in an unnormalized graph). I don't have a very good explanation for this either (even though there are some costs that are dependent on the alphabet size instead of the number of pixels), but figuring it out would probably be a great start for getting the performance up.

So that concludes the series, on a cliffhanger. :-) Even though it's not in a situation where you can just take it and put it into something useful, I hope it was an interesting introduction to the GPU! And in the meantime, I've released version 1.6.3 of Nageru, my live video mixer (also heavily GPU-based), with various small adjustments and bug fixes found before and during Trøndisk. And Movit is getting compute shaders for that extra speed boost, although parts of it are bending my head. Exciting times in GPU land :-)

12 November 2017

Russ Allbery: Review: Night Moves

Review: Night Moves, by Pat Green
Publisher: Aquarius
Copyright: 2014
ISBN: 0-9909741-1-1
Format: Kindle
Pages: 159
In the fall of 2012, Pat Green was a preacher of a failing church, out of a job, divorced for six months, and feeling like a failure at every part of his life. He was living in a relative's house and desperately needed work, and his father had been a taxi driver, so he got a job as a 6pm to 6am taxi driver in his home town of Joliet, Illinois. That job fundamentally changed his understanding of the people who live in the night, how their lives work, and what it means to try to help them. This is nonfiction: a collection of short anecdotes about life as a cab driver and the people who have gotten a ride in Green's cab. They're mostly five or six pages long, just a short story or window into someone's life. I ran across Pat Green's writing by following a sidebar link from a post on Patheos (probably from Love, Joy, Feminism, although I no longer remember). Green has an ongoing blog on Patheos about raising his transgender son (who appears in this collection as a lesbian daughter; he wasn't out yet as transgender when this was published), which is both a good sample of his writing and occasionally has excerpts from this book. Green's previous writing experience, as mentioned at several points in this collection, was newspaper columns in the local paper. It shows: these essays have the succinct, focused, and bite-sized property of a good newspaper article (or blog post). The writing is a little rough, particularly the remembered dialogue that occasionally falls into the awkward valley between dramatic, constructed fictional dialogue and realistic, in-the-moment speech. But the stories are honest and heartfelt and have the self-reflective genuineness of good preaching paired with a solid sense of narrative. Green tries to observe and report first, both the other person and his own reactions, and only then to draw more general conclusions. This book is also very hard to read. It's not a sugar-coated view of people who live in the night of a city, nor is it constructed to produce happy endings. The people who Green primarily writes about are poor, or alone, or struggling. The story that got me to buy this book, about taking a teenage girl to a secret liaison that turned out to be secret because her liaison was another girl, is heartwarming but also one of the most optimistic stories here. A lot of people die or just disappear after being regular riders for some time. A lot of people are desperate and don't have any realistic way out. Some people, quite memorably, think they have a way out, and that way out closes on them. The subtitle of this book is "An Ex-Preacher's Journey to Hell in a Taxi" and (if you followed the link above) you'll see that Green is writing in the Patheos nonreligious section. The other theme of this collection is the church and its effect on the lives of people who are trying to make a life on the outskirts of society. That effect is either complete obliviousness or an active attempt to make their lives even worse. Green lays out the optimism that he felt early in the job, the hope that he could help someone the way a pastor would, guide her to resources, and how it went horribly wrong when those resources turned out not to be interested in helping her at all. And those stories repeat, and repeat.
It's a book that makes it very clear that the actual practice of Christianity in the United States is not about helping poor or marginalized people, but there are certainly plenty of Christian resources for judging, hurting people, closing doors, and forcing abused people back into abusive situations, all in the name of God. I do hope some Christians read this and wince very hard. (And lest the progressive Christians get too smug, one of the stories says almost as brutal things about liberal ministries as the stories of conservative ones.) I came away feeling even more convinced by the merits of charities that just give money directly to poor people. No paternalism, no assuming that rich people know what they need, no well-meaning intermediary organizations with endless rules, just resources delivered directly to the people who most need resources. Ideally done by the government and called universal basic income. Short of constructing a functional government that builds working public infrastructure, and as a supplement even if one has such a government (since infrastructure can't provide everything), it feels like the most moral choice. Individual people may still stay mired in awful situations, but at least that isn't compounded by other people taking their autonomy away and dictating life to them in complete ignorance. This is a fairly short and inexpensive book. I found it very much worth reading, and may end up following Green's blog as well. There are moments of joy and moments of human connection, and the details of the day-to-day worries and work style of a taxi driver (in this case, one who drives a company car) are pretty interesting. (Green does skip over some parts for various reasons, such as a lot of the routine fares and most of the stories of violence, but does mention what he's skipping over.) But it's also a brutal book, because so many people are hurting and there isn't much Green can do about it except bear witness and respect them as people in a way that religion doesn't. Recommended, but brace yourself. Rating: 8 out of 10

06 November 2017

Jonathan Dowland: Coil

Peter Christopherson and Jhonn Balance, from Santa Sangre (https://santasangremagazine.wordpress.com/2014/11/16/the-angelic-conversation-in-remembrance-of-coil/)
A friend asked me to suggest five tracks by Coil that gave an introduction to their work. Trying to summarize Coil in 5 tracks is tough. I think it's probably impossible to fairly summarize Coil with any subset of their music, for two reasons. Firstly, their music was the output of their work, but I don't think it is really the whole of the work itself. There's a real mystique around them. They were deeply interested in arcana, old magic, Aleister Crowley, scatology; they were both openly and happily gay, and their work sometimes explored their experiences in various related underground scenes and sub-cultures; they lost friends to HIV/AIDS and that had a profound impact on them. They had a big influence on some people who discovered them who were exploring their own sexualities at the time and might have felt excluded from mainstream society. They frequently explored drugs, meditation, occultism and other ways to try to expand and open their minds. They were also fiercely anti-commercial: their stuff was released in limited quantities across a multitude of different music labels, often under different names, and often paired with odd physical objects, runes, vials of blood, etc. Later fascinations included paganism and moon worship. I read somewhere that they literally cursed one of their albums. Secondly, part of their "signature" was the lack of any consistency in their work, or to put it another way, their style varied enormously over time. I'm also not necessarily well-versed in all their stuff; I'm part way along this journey myself... but these are tracks which stand out, at least from the subset I've listened to. Both original/core members of Coil have passed away and the legal status of their catalogue is in a state of limbo. Some of these songs are available on currently-in-print releases, but all such releases are under dispute by some associate or other.

1. Heaven's Blade. Like (probably) a lot of Coil songs, this one exists in multiple forms, with some dispute about which are canonical, which are officially sanctioned, etc. The video linked above actually contains 5 different versions, but I've linked to a time offset for the 4th: "Heaven's Blade (Backwards)". This version was the last to come to light, with the recent release of "Backwards", an album originally prepared in the 90s at Trent Reznor's Nothing Studios in New Orleans but not finished or released. The circumstances around its present-day release, as well as who did what to it and what manipulation may have been performed to the audio a long time after the two core members had passed, is a current topic in fan circles. Despite that, this is my preferred version. You can choose to investigate the others, or not, at your own discretion.

2. how to destroy angels (ritual music for the accumulation of male sexual energy) A few years ago, "guidopaparazzi", a user at the Echoing the Sound music message board attempted to listen to every Coil release ever made and document the process. He didn't do it chronologically, leaving the EPs until near the end, which is when he tackled this one (which was the first release by Coil, and was the inspiration behind the naming of Trent Reznor's one-time side project "How To Destroy Angels"). Guido seemed to think this was some kind of elaborate joke. Personally I think it's a serious piece and there's something to it but this just goes to show, different people can take things in entirely different ways. Here's Guido's review, and you can find the rest of his reviews linked from that one if you wish. https://archive.org/details/Coil-HowToDestroyAngels1984

3. Red Birds Will Fly Out Of The East And Destroy Paris In A Night Both "Musick To Play In The Dark" volumes (one and two) are generally regarded as amongst the most accessible entry points to the Coil discography. This is my choice of cut from volume 1. For some reason this reminds me a little of some of the background music from the game "Unreal Tournament". I haven't played that in at least 15 years. I should go back and see if I can figure out why it does. The whole EP is worth a listen, especially at night. https://archive.org/details/CoilMusickToPlayInTheDarkVol1/Coil+-+Musick+To+Play+In+The+Dark+Vol+1+-+2+Red+Birds+Will+Fly+Out+Of+The+East+And+Destroy+Paris+In+A+Night.flac

4. Things Happen It's tricky to pick a track from either "Love's Secret Domain" or "Horse Rotorvator"; there are other choices which I think are better known and loved than this one but it's one that haunted me after I first heard it for one reason or another, so here it is.

5. The Anal Staircase Track 1 from Horse Rotorvator. What the heck is a Horse Rotorvator anyway? I think it was supposed to have been a lucid nightmare experienced by the vocalist Jhonn Balance. So here they wrote a song about anal sex. No messing about, no allusion particularly, but why should there be? https://archive.org/details/CoilHorseRotorvator2001Remaster/Coil+-+Horse+Rotorvator+%5B2001+remaster%5D+-+01+The+Anal+Staircase.flac

Bonus 6th: 7-Methoxy-B-Carboline (Telepathine) From the drone album "Time Machines", which has just been re-issued by DIAS records, who describe it as "authorized". Each track is titled by the specific combination of compounds that inspired its composition, supposedly. Or, perhaps it's a "recommended dosing" for listening along. https://archive.org/details/TimeMachines-TimeMachines

Post-script: If those piqued your interest, there are some decent words and a list of album suggestions in this Vinyl Factory article. Finally, if you can track them down, Stuart Maconie had two radio shows about Coil on his "Freak Zone" programme. The main show discusses the release of "Backwards", including an interview with collaborator Danny Hyde, who was the main person behind the recent re-issue. The shorter show is entitled "John Doran uncoils Coil": guest John Doran from The Quietus discusses the group and their history, interspersed with Coil tracks and tracks from their contemporaries. Interestingly, they chose a completely different set of 5 tracks to mine.

Andreas Bombe: Reviving GHDL in Debian

It has been a few years since Debian last had a working VHDL simulator in the archive. Its competitor Verilog has been covered by the iverilog and verilator simulator packages, but GHDL was the only option for VHDL in Debian, and it became broken and orphaned and was eventually removed. I have just submitted an ITP to make my work on it official. A lot has changed since the last Debian upload of GHDL. Upstream development is quite active, and it has gained free reimplementations of the standard library definitions (the lack of which frustrated at least two attempts at adoption of the Debian package). It has also gained additional backends: in addition to GCC it can now use LLVM as well as its own custom mcode (x86 only) code generator. The mcode backend should provide faster compilation at the expense of sophisticated optimization, so it might be preferable to the other two for small projects. My intention is to provide all three backends in separate packages, which would also offer easier backend troubleshooting: a user experiencing problems can simply install another package to try a different backend. The problem with that idea is that GHDL is not designed for that kind of parallel installation: the backend is chosen at build configure time, and that configuration is built and installed. Parallel installation will probably need some development work, but if that turns out to be too much I could always have the packages conflict initially. Given all these changes I am redoing the Debianization from the ground up, maybe taking bits and pieces from the old packaging where suitable. Right now I'm building the different backends to compare and see what files are backend-specific and what can go into a common package.

05 November 2017

Russ Allbery: Review: Sweep in Peace

Review: Sweep in Peace, by Ilona Andrews
Series: Innkeeper Chronicles #2
Publisher: NYLA
Copyright: 2015
ISBN: 1-943772-32-0
Format: Kindle
Pages: 302
This is the sequel to Clean Sweep. You could pick up the background as you go along, but the character relationships benefit from reading the series in order. Dina's inn is doing a bit better, but it still desperately needs guests. That means she's not really in a position to say no when an Arbitrator shows up at her door and asks her to host a peace summit. Lucky for the Arbitrator, since every other inn on Earth did say no. Nexus has been the site of a viciously bloody conflict between the vampires, the Hope-Crushing Horde, and the Merchants of Baha-char for years. All sides have despaired of finding any form of peace. The vampires and the Horde have both deeply entrenched themselves in a cycle of revenge. The Merchants have the most strategic position and an apparently unstoppable warrior. The situation is hopeless; by far the most likely outcome will be open warfare inside the inn, which would destroy its rating and probably Dina's future as an innkeeper. Dina will need all of her power and caution just to stop that; peace seems beyond any possibility, but thankfully isn't her problem. Maybe the Arbitrator can work some miracle if she can just keep everyone alive. And well fed. Which is another problem. She has enough emergency money for the food, but somehow cook for forty people from four different species while keeping them all from killing each other? Not a chance. She's going to have to hire someone somehow, someone good, even though she can't afford to pay. Sweep in Peace takes this series farther out of urban fantasy territory and farther into science fiction, and also ups the stakes (and the quality of the plot) a notch. We get three moderately interesting alien species with only slight trappings of fantasy, a wonderful alien chef who seems destined to become a regular in the series, and a legitimately tricky political situation. The politics and motives aren't going to win any awards for deep and subtle characterization, but that isn't what the book is going for. It's trying to throw enough challenges at Dina to let her best characteristics shine, and it does that rather well. The inn continues to be wonderful, although I hope it becomes more of a character in its own right as the series continues. Dina's reshaping of it for guests, and her skill at figuring out the rooms her guests would enjoy, is my favorite part of these books. She cares about making rooms match the personality of her guests, and I love books that give the character a profession that matters to them even if it's unrelated to the plot. I do wish Andrews would find a few other ways for Dina to use her powers for combat beyond tentacles and burying people in floors, but that's mostly a quibble. You should still not expect great literature. I guessed the big plot twist several chapters before it happened, and the resolution is, well, not how these sorts of political situations resolve in the real world. But there is not a stupid love affair, there are several interesting characters, and one of the recurring characters gets pretty solid and somewhat unusual characterization. And despite taking the plot in a more serious direction, Sweep in Peace retains its generally lighthearted tone and firm conviction in Dina's ability to handle just about anything. Also, the chef is wonderful. One note: Partway into the book, I started getting that "oh, this is a crossover" feeling (well-honed by years of reading comic books). 
As near as I can tell from a bit of research, Andrews pulled in some of their characters from the Edge series. This was a bit awkward, in the "who are these people and why do they seem to have more backstory than any of the other supporting characters" cross-over sort of way, but the characters that were pulled in were rather intriguing. I might have to go read the Edge books now. Anyway, if you liked Clean Sweep, this is better in pretty much every way. Recommended. Followed by One Fell Sweep. Rating: 8 out of 10

03 November 2017

Rogério Brito: Comparison of JDK installation of various Linux distributions

Today I spent some time in the morning seeing how one would install the JDK on Linux distributions. This is to create a little comparative tutorial to teach introductory Java. Installing the JDK is, thanks to the OpenJDK developers in Debian and Ubuntu (Matthias Klose and helpers), a very easy task. You simply type something like:
apt-get install openjdk-8-jdk
Since for a student it is better to have everything for experiments, I install the full version, not only the -headless version. Given my familiarity with Debian/Ubuntu, I didn't have to think about the way of installing it, of course. But as this is a tutorial meant to be as general as I can make it, I tried also to include instructions on how to install Java on other distributions. The first two that came to my mind were openSUSE and Fedora. Both use the RPM package format for their "native" packages (in the same sense that Debian uses DEB packages for "native" packages). But they use different higher-level tools to install such packages: Fedora uses a tool called dnf, while openSUSE uses zypper. To try these distributions, I got their netinstall ISOs and used qemu/kvm to install on a virtual machine. I used the following to install/run the virtual machines (the example below is, of course, for openSUSE):
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 -net nic,model=e1000 -net user -drive index=0,media=disk,cache=unsafe,file=suse.qcow2 -cdrom openSUSE-Leap-42.3-NET-x86_64.iso
The names of the packages also change from one distribution to another. On Fedora, I had to use:
dnf install java-1.8.0-openjdk-devel
On openSUSE, I had to use:
zypper install java-1_8_0-openjdk-devel
Note that one distribution uses dots in the names of the packages while the other uses underscores. One interesting thing that I noticed with dnf was that, when I used it, it automatically refreshed the package lists from the network, something which I had forgotten to do, and it was a pleasant surprise. I don't know about zypper, but I guess that it probably had fresh indices when the installation finished. Both installations were effortless once I knew the names of the packages to install. Oh, BTW, in my 5-minute exploration of these distributions, I noticed that if you don't want the JDK but only the JRE, then you omit the -devel suffix. It makes sense when you think about it, for consistency with other packages, but Debian's conventions also make sense (JRE with -jre suffix, JDK with -jdk suffix). I failed miserably to use Fedora's prebaked, vanilla cloud image, as I couldn't log in on this image, so I decided to just install the whole OS on a fresh virtual machine. I don't have instructions on how to install on Gentoo or Arch, though. I now see how hard it is to cover instructions/provide software for as many distributions as you wish, given the multitude of package managers, conventions, etc.
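Whichever distribution you use, a quick way to confirm that the JDK is installed and on the PATH (these are standard commands on any OpenJDK install):

java -version    # shows the runtime (JRE) version
javac -version   # shows the compiler version, i.e. the full JDK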

02 November 2017

Steinar H. Gunderson: Introducing Narabu, part 5: Encoding

Narabu is a new intraframe video codec. You probably want to read part 1, part 2, part 3 and part 4 first. At this point, we've basically caught up with where I am, so things are less set in stone. However, let's look at what qualitatively differentiates encoding from decoding; unlike in interframe video codecs (where you need to do motion vector search and stuff), encoding and decoding are very much mirror images of each other, so we can intuitively expect them to be relatively similar in performance. The procedure is DCT, quantization, entropy coding, and that's it.

One important difference is in the entropy coding. Since our rANS encoding is non-adaptive (a choice made largely for simplicity, but also because our streams are so short), it works by first signaling a distribution and then encoding each coefficient using that distribution. However, we don't know that distribution until we've DCT-ed all blocks in the picture, so we can't just DCT each block and entropy code the coefficients on the fly. There are a few ways to deal with this; as you can see, tons of possible strategies here. For simplicity, I've ended up with the former, although this could very well be changed at some point. There are some interesting subproblems, though.

First of all, we need to decide the data type of this temporary array. The DCT tends to concentrate energy into fewer coefficients (which is a great property for compression!), so even after quantization, some of them will get quite large. This means we cannot store them in an 8-bit texture; however, even the bigger ones are very rarely bigger than 10 bits (including a sign bit), so using 16-bit textures wastes precious memory bandwidth. I ended up slicing the coefficients by horizontal index and then pairing them up (so that we generate pairs 0+7, 1+6, 2+5 and 3+4 for the first line of the 8x8 DCT block, 8+15, 9+14, 10+13 and 11+12 for the next line, etc.). This allows us to pack two coefficients into a 16-bit texture, for an average of 8 bits per coefficient, which is what we want. It makes for some slightly fiddly clamping and bit-packing since we are packing signed values, but nothing really bad.

Second, and perhaps surprisingly enough, counting efficiently is nontrivial. We want a histogram over which coefficients are used most often, i.e., for each coefficient, we want something like ++counts[dist][coeff] (recall we have four distinct distributions). However, since we're in a massively parallel algorithm, this add needs to be atomic, and since values like e.g. 0 are super-common, all of our GPU cores will end up fighting over the cache line containing counts[dist][0]. This is not fast. Think 10 ms/frame not fast. Local memory to the rescue again; all modern GPUs have fast atomic adds to local memory (basically integrating adders into the cache, as I understand it, although I might have misunderstood here). This means we just make a suitably large local group, build up our sub-histogram in local memory and then add all nonzero buckets (atomically) to the global histogram. This improved performance dramatically, to the point where it was well below 0.1 ms/frame.

However, our histogram is still a bit raw; it sums to 1280x720 = 921,600 values, but we want an approximation that sums to exactly 4096 (12 bits), with some additional constraints (like no zero frequencies for coefficients that actually occur). Charles Bloom has an exposition of a nearly optimal algorithm, although it took me a while to understand it.
The basic idea is: make a good approximation by multiplying each frequency by 4096/921600 (rounding intelligently). This will give you something that very nearly sums to 4096, either above or below, e.g. 4101. For each step you're above or below the target (five in this case), find the best single coefficient to adjust (most entropy gain, or least loss); Bloom is using a heap, but on the GPU, each core is slow but we have many of them, so it's better just to try all 256 possibilities in parallel and have a simple voting scheme through local memory to find the best one. (A small serial sketch of this normalization step follows at the end of this post.) And then finally, we want a cumulative distribution function, which is simple to get through a parallel prefix sum on the 256 elements.

At that point, we can take our DCT coefficients and the finished rANS distribution, and write the data! We'll have to leave some headroom for the streams (I've allowed 1 kB for each, which should be ample except for adversarial data, and we'll probably solve that just by truncating the writes and accepting the corruption), but we'll compact them when we write to disk.

Of course, the Achilles heel here is performance. Where decoding 720p (luma only) on my GTX 950 took 0.4 ms or so, encoding is 1.2 ms or so, which is just too slow. Remember that 4:2:2 is twice that, and we want multiple streams, so 2.4 ms per frame is eating too much. I don't really know why it's so slow; the DCT isn't bad, the histogram counting is fast, it's just the rANS shader that's slow for some reason I don't understand, and also haven't had the time to really dive deeply into. Of course, a faster GPU would be faster, but I don't think I can reasonably demand that people get a 1080 just to encode a few video streams.

Due to this, I haven't really worked out the last few kinks. In particular, I haven't implemented DC coefficient prediction (it needs to be done before tallying up the histograms, so it can be a bit tricky to do efficiently, although perhaps local memory will help again to send data between neighboring DCT blocks). And I also haven't properly done bounds checking in the encoder or decoder, but it should hopefully be simple as long as we're willing to accept that evil input decodes into garbage instead of flagging errors explicitly. It also depends on a GLSL extension that my Haswell laptop doesn't have in order to get 64-bit divides when precalculating the rANS tables; I've got some code to simulate 64-bit divides using 32-bit ones, but it doesn't work yet.

The code as it currently stands is in this Git repository; you can consider it licensed under GPLv3. It's really not very user-friendly at this point, though, and rather rough around the edges. Next time, we'll wrap up with some performance numbers. Unless I magically get more spare time in the meantime and/or some epiphany about how to make the encoder faster. :-)
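As referenced above, here is a minimal serial sketch in C of that histogram-normalization step. This is my own illustration of the idea, not Narabu's actual shader (which evaluates the 256 candidates in parallel and votes through local memory); the function names and the exact cost formula are mine:

/* normalize.c: hypothetical serial sketch of the histogram normalization.
   counts[] is the raw histogram; freq[] receives frequencies summing to 4096. */
#include <math.h>

#define NSYM   256
#define TARGET 4096

void normalize_histogram(const unsigned counts[NSYM], unsigned freq[NSYM])
{
    unsigned long total = 0, sum = 0;
    for (int i = 0; i < NSYM; i++) total += counts[i];

    /* Step 1: scale and round, but never drop a used symbol to zero. */
    for (int i = 0; i < NSYM; i++) {
        freq[i] = (unsigned)lround((double)counts[i] * TARGET / total);
        if (counts[i] > 0 && freq[i] == 0) freq[i] = 1;
        sum += freq[i];
    }

    /* Step 2: we are now a few steps above or below 4096; for each step,
       adjust the single symbol that loses the least entropy (or gains the
       most). The GPU version tries all 256 candidates in parallel and votes;
       here we just scan. */
    while (sum != TARGET) {
        int dir = (sum > TARGET) ? -1 : 1;
        int best = -1;
        double best_cost = HUGE_VAL;
        for (int i = 0; i < NSYM; i++) {
            if (freq[i] == 0 || (int)freq[i] + dir < 1) continue;
            /* change in total code length: count * log2(old/new) */
            double cost = counts[i] *
                (log2((double)freq[i]) - log2((double)(freq[i] + dir)));
            if (cost < best_cost) { best_cost = cost; best = i; }
        }
        if (best < 0) break;  /* cannot happen with a sane histogram */
        freq[best] += dir;
        sum += dir;
    }
}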

Antoine Beaupré: October 2017 report: LTS, feed2exec beta, pandoc filters, git mediawiki

Debian Long Term Support (LTS) This is my monthly Debian LTS report. This time I worked on the famous KRACK attack, git-annex, golang and the continuous stream of GraphicsMagick security issues.

WPA & KRACK update I spent most of my time this month on the Linux WPA code, to backport it to the old (~2012) wpa_supplicant release. I first published a patchset based on the patches shipped after the embargo for the oldstable/jessie release. After feedback from the list, I also built packages for i386 and ARM. I have also reviewed the WPA protocol to make sure I understood the implications of the changes required to backport the patches. For example, I removed the patches touching the WNM sleep mode code as that was introduced only in the 2.0 release. Chunks of code regarding state tracking were also not backported as they are part of the state tracking code introduced later, in 3ff3323. Finally, I still have concerns about the nonce setup in patch #5. In the last chunk, you'll notice peer->tk is reset, to_set to negotiate a new TK. The other approach I considered was to backport 1380fcbd9f ("TDLS: Do not modify RNonce for an TPK M1 frame with same INonce") but I figured I would play it safe and not introduce further variations. I should note that I share Matthew Green's observations regarding the opacity of the protocol. Normally, network protocols are freely available and security researchers like me can easily review them. In this case, I would have needed to read the opaque 802.11i-2004 pdf which is behind a TOS wall at the IEEE. I ended up reading up on the IEEE_802.11i-2004 Wikipedia article which gives a simpler view of the protocol. But it's a real problem to see such critical protocols developed behind closed doors like this. At Guido's suggestion, I sent the final patch upstream explaining the concerns I had with the patch. I have not, at the time of writing, received any response from upstream about this, unfortunately. I uploaded the fixed packages as DLA 1150-1 on October 31st.

Git-annex

The next big chunk on my list was completing the work on git-annex (CVE-2017-12976) that I started in August. It turns out doing the backport was simpler than I expected, even with my rusty Haskell experience. Type-checking really helps in doing the right thing, especially considering how Joey Hess implemented the fix: by introducing a new type. So I backported the patch from upstream and notified the security team that the jessie and stretch updates would be similarly easy. I shipped the backport to LTS as DLA-1144-1. I also shared the updated packages for jessie (which required a similar backport) and stretch (which didn't), and Sebastien Delafond published those as DSA 4010-1.

Graphicsmagick

Up next was yet another security vulnerability in the GraphicsMagick stack. This involved the usual deep dive into intricate and sometimes just unreasonable C code to try and fit a round tree in a square sinkhole. I'm always unsure about those patches, but the test suite passes, smoke tests show the vulnerability as fixed, and that's pretty much as good as it gets. The announcement (DLA 1154-1) turned out to be a little special, because I had previously noticed that the penultimate announcement (DLA 1130-1) was never sent out. So I made a merged announcement to cover both, instead of re-sending the original 3 weeks late, which might have been confusing for our users.

Triage & misc We always do a bit of triage even when not on frontdesk duty, so I: I also did smaller bits of work on: The latter reminded me of the concerns I have about the long-term maintainability of the golang ecosystem: because everything is statically linked, an update to a core library (say the SMTP library as in CVE-2017-15042, thankfully not affecting LTS) requires a full rebuild of all packages including the library in all distributions. So what would be a simple update in a shared library system could mean an explosion of work on statically linked infrastructures. This is a lot of work which can definitely be error-prone: as I've seen in other updates, some packages (for example the Ruby interpreter) just bit-rot on their own and eventually fail to build from source. We would also have to investigate all packages to see which one include the library, something which we are not well equipped for at this point. Wheezy was the first release shipping golang packages but at least it's shipping only one... Stretch has shipped with two golang versions (1.7 and 1.8) which will make maintenance ever harder in the long term.
We build our computers the way we build our cities--over time, without a plan, on top of ruins. - Ellen Ullman

Other free software work

This month again, I was busy doing some serious yak shaving operations all over the internet, on top of publishing two of my largest LWN articles to date (2017-10-16-strategies-offline-pgp-key-storage and 2017-10-26-comparison-cryptographic-keycards).

feed2exec beta

Since I announced this new project last month, I have released it as a beta and it entered Debian. I also wrote useful plugins like the wayback plugin, which saves pages to the Wayback Machine for eternal archival. The archive plugin can similarly save pages to the local filesystem. I also added bash completion, expanded unit tests and documentation, fixed default file paths and a bunch of bugs, and refactored the code. Finally, I started using two external Python libraries instead of rolling my own code: pyxdg and requests-file, the latter of which I packaged in Debian (and fixed a bug in their test suite). The program is working pretty well for me. The only thing I feel is really missing now is a retry/fail mechanism. Right now, it's a little brittle: any network hiccup will yield an error email, which is readable to me but could be confusing to a new user. Strangely enough, I am particularly having trouble with (local!) DNS resolution that I need to look into, but that is probably unrelated to the software itself. Thankfully, the user can disable those with --loglevel=ERROR to silence WARNINGs. Furthermore, some plugins still have some rough edges. For example, the Transmission integration would probably work better as a distinct plugin instead of a simple exec call, because when it adds new torrents, the output is totally cryptic. That plugin could also leverage more feed parameters to save different files in different locations depending on the feed titles, something that would be hard to do safely with the exec plugin now. I am keeping a steady flow of releases. I wish there was a way to see how effective I am at reaching out with this project, but unfortunately GitLab doesn't provide usage statistics... And I have received only a few comments on IRC about the project, so maybe I need to reach out more, like it says in the fine manual. It always feels strange to have to promote your project like it's some new bubbly soap... Next steps for the project are a final review of the API and releasing a production-ready 1.0.0. I am also thinking of making a small screencast to show the basic capabilities of the software, maybe with asciinema's upcoming audio support?

Pandoc filters

As I mentioned earlier, I dove into Haskell programming again when working on the git-annex security update. But I also have a small Haskell program of my own: a Pandoc filter that I use to convert the HTML articles I publish on LWN.net into an Ikiwiki-compatible markdown version. It turns out the script was still missing a bunch of stuff: image sizes, proper table formatting, etc. I also worked hard on automating more bits of the publishing workflow by extracting the time from the article, which allowed me to simply extract the full article into an almost-final copy just by specifying the article ID. The only thing left is to add tags, and the article is complete. In the process, I learned about new weird Haskell constructs. Take this code, for example:
-- remove needless blockquote wrapper around some tables
--
-- haskell newbie tips:
--
-- @ is the "at-pattern", allows us to define both a name for the
-- construct and inspect the contents at once
--
-- {} is the "empty record pattern": it basically means "match the
-- constructor but ignore the args"
cleanBlock (BlockQuote t@[Table {}]) = t
Here the idea is to remove <blockquote> elements needlessly wrapping a <table>. I can't specify the Table type on its own, because then I couldn't address the table as a whole, only its parts. I could reconstruct the whole table bits by bits, but it wasn't as clean. The other pattern was how to, at last, address multiple string elements, which was difficult because Pandoc treats spaces specially:
cleanBlock (Plain (Strong (Str "Notifications":Space:Str "for":Space:Str "all":Space:Str "responses":_):_)) = []
The last bit that drove me crazy was the date parsing:
-- the "GAByline" div has a date, use it to generate the ikiwiki dates
--
-- this is distinct from cleanBlock because we do not want to have to
-- deal with time there: it is only here we need it, and we need to
-- pass it in here because we do not want to mess with IO (time is I/O
-- in haskell) all across the function hierarchy
cleanDates :: ZonedTime -> Block -> [Block]
-- this mouthful is just the way the data comes in from
-- LWN/Pandoc. there could be a cleaner way to represent this,
-- possibly with a record, but this is complicated and obscure enough.
cleanDates time (Div (_, [cls], _)
                 [Para [Str month, Space, Str day, Space, Str year], Para _])
    cls == "GAByline" = ikiwikiRawInline (ikiwikiMetaField "date"
                                           (iso8601Format (parseTimeOrError True defaultTimeLocale "%Y-%B-%e,"
                                                           (year ++ "-" ++ month ++ "-" ++ day) :: ZonedTime)))
                        ++ ikiwikiRawInline (ikiwikiMetaField "updated"
                                             (iso8601Format time))
                        ++ [Para []]
-- other elements just pass through
cleanDates time x = [x]
Now that seems just dirty, but it was even worse before. One thing I find difficult in adapting to coding in Haskell is that you need to take up the habit of writing smaller functions. The language is really not well adapted to long discourse: it's more about getting small things connected together. Other languages (e.g. Python) discourage this because there's some overhead in calling functions (10 nanoseconds in my tests, but still), whereas functions are a fundamental and important construct in Haskell and are much more heavily optimized. So I constantly need to remind myself to split things up early, otherwise I can't do anything in Haskell. Other languages are more lenient, which does mean my code can be more dirty, but I feel I get things done faster that way. The oddity of Haskell makes it frustrating to work with. It's like doing construction work where you're not allowed to get the floor dirty. When I build stuff, I don't mind things being dirty: I can clean up afterwards. This is especially critical when you don't actually know how to make things clean in the first place, as Haskell will simply not let you do that at all.

And obviously, I fought with Monads, or, more specifically, "I/O" or IO in this case. It turns out that getting the current time is IO in Haskell: indeed, it's not a "pure" function that will always return the same thing. But this means that I would have had to change the signature of all the functions that touched time to include IO. I eventually moved the time initialization up into main so that I had only one IO function, and passed that timestamp down as a simple argument. That way I could keep the rest of the code clean, which seems to be an acceptable pattern. I would of course be happy to get feedback from my Haskell readers (if any) on how to improve that code. I am always eager to learn.
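For what it's worth, here is a minimal, self-contained sketch of that pattern, not the actual filter code: stamp is a hypothetical stand-in for cleanDates, and the wiring assumes the pandoc-types and time packages are available.
import Data.Time (ZonedTime, getZonedTime)
import Text.Pandoc.Definition (Block)
import Text.Pandoc.JSON (toJSONFilter)

-- hypothetical stand-in for cleanDates: a pure function that takes
-- the timestamp as an ordinary argument instead of doing IO itself
stamp :: ZonedTime -> Block -> [Block]
stamp _time b = [b]

main :: IO ()
main = do
  time <- getZonedTime        -- the single IO action needed for time
  toJSONFilter (stamp time)   -- partial application yields Block -> [Block]
Since toJSONFilter handles the walk over the document, only the one getZonedTime call lives in IO; everything below it stays pure.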

Git remote MediaWiki Few people know that there is a MediaWiki remote for Git which allows you to mirror a MediaWiki site as a Git repository. As a disaster recovery mechanism, I have been keeping such a historical backup of the Amateur radio wiki for a while now. This originally started as a homegrown Python script that also converted the contents to Markdown. My theory then was to see if we could switch from MediaWiki to Ikiwiki, but it took so long to implement that I never completed the work.

When someone had the weird idea of renaming a page to some impossibly long name on the wiki, my script broke. I tried to look at fixing it and then remembered I also had a mirror running using the Git remote. It turns out it also broke on the same issue, and that got me looking into the remote again. I got lost in a zillion issues, including fixing that specific one, but I especially looked at the possibility of fetching all namespaces, because I realized that the remote fetches only a part of the wiki by default. That drove me to submit namespace support as a patch to the git mailing list. Finally, the discussion came back to how to actually maintain that contrib: in git core or outside? It looks like I'll be doing some maintenance on that project outside of git, as I was granted access to the GitHub organisation...

Galore Yak Shaving Then there's the usual hodgepodge of fixes and random things I did over the month.
There is no [web extension] only XUL! - Inside joke

31 October 2017

Paul Wise: FLOSS Activities October 2017

Changes

Issues

Review

Administration
  • Debian: respond to mail debug request, redirect hardware access seeker to guest account, redirect hardware donors to porters, redirect interview seeker to DPL, reboot system with dead service
  • Debian mentors: security updates, reboot
  • Debian wiki: upgrade search db format, remove incorrect bans, whitelist email addresses, disable accounts with bouncing email, update email for accounts with bouncing email
  • Debian website: remove need for a website rebuild
  • Openmoko: restart web server, set web server process limits, install monitoring tool

Sponsors The talloc/cmocka uploads and the remmina issue were sponsored by my employer. All other work was done on a volunteer basis.

Chris Lamb: Free software activities in October 2017

Here is my monthly update covering what I have been doing in the free software world in October 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:


I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Improve names in output of "internal" binwalk members. (#877525).
  • Don't crash on malformed md5sums files. (#877473).
  • Omit misleading "any of" prefix when only complaining about a single module on import. [...]
  • Adjust tests as ps2ascii now varies its output on timezone. [...]

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Clojure considers a .class file to be stale if it shares the same timestamp as the .clj file. We thus adjust the timestamps of the .clj to always be younger. (#877418).
  • Print a message in --verbose mode if no canonical time was specified. [...]

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Always show SHA-256 checksums, regardless of the browser viewport size. [...]
  • Add an API endpoint to fetch specific .buildinfo files for a certain package/version/architecture. [...]


Debian My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.
Patches contributed
  • devscripts: Please print the actual arguments debuild makes to Lintian. (#880124)
  • hw-detect: Drop reference to floppy disks; it's almost 2018. (#880122)
  • debci:
    • Use deb.debian.org over http.debian.net. (#879654)
    • Document how to use an alternative mirror. (#879655)

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Followed up on a large number of upstream "pings" that have been left dormant.
  • Issued DLA 1121-1 to fix an out-of-bounds read vulnerability in curl where a malicious FTP server could abuse this to prevent clients from interacting with it.
  • Issued DLA 1123-1 for the "Go" programming language where an attacker could generate a MIME request such that the server ran out of file descriptors.
  • Issued DLA 1126-1 for the libxfont font selection and rasterisation library, correcting two vulnerabilities, both involving the library being tricked into reading invalid/random memory.
  • Issued DLA 1134-1 for sdl-image1.2, an image loading library. A maliciously-crafted .xcf file could cause a stack-based buffer overflow resulting in potential code execution.

Uploads
  • python-django:
    • 2.0~beta1-1 New upstream 2.x release.
    • 1.11.6-1 New upstream bugfix release.
  • gunicorn (19.6.0-10+deb9u1) Prepared a release for stable to avoid a runtime dependency on a compiler. (#877722)
  • redis:
    • 4:4.0.2-3:
      • Drop the Debian-specific /etc/redis/redis-server.pre-up.d (etc.) hooks and remove them if unchanged.
      • Include systemd redis-server@.service and redis-sentinel@.service template files to easily run multiple Redis instances. (#877702)
      • Patch redis.conf and sentinel.conf with quilt instead of maintaining our own versions under debian/.
    • 4:4.0.2-4:
      • Add input validity checking to cluster config slot numbers to fix CVE-2017-15047. (#878076)
      • Drop debian/bin/generate-parts now we aren't calling it.
      • Correct Bash-ism in NEWS file.
    • 4:4.0.2-5: Replace the existing patch for CVE-2017-15047 with an upstream-blessed version that covers another case.
  • redisearch (0.21.3-5) Initial release.
  • docbook2man (2.0.0-40) Correct spelling mistakes in binaries and other misc packaging tidying.
  • python-redis (2.10.6-1) New upstream release.
  • bfs (1.1.3-1) New upstream release.

FTP Team

As a Debian FTP assistant I ACCEPTed 103 packages: amcheck, argagg, binutils, blockui, bro-pkg, chkservice, citus, django-axes, docker-containerd, doctest, dtkwidget, duktape, feed2exec, fontforge, fonttools, gcc-8, gcc-8-cross, generator-scripting-language, gitgraph.js, haskell-uri-encode, hoel, iniparser, its, jquery-areyousure, kodi, libcatmandu-mods-perl, libcatmandu-template-perl, libcatmandu-xml-perl, libcatmandu-xsd-perl, libcode-tidyall-plugin-sortlines-naturally-perl, libgdamm5.0, libinfinity, libmods-record-perl, libreoffice-dictionaries, libset-intervaltree-perl, libsodium, linux, linux-grsec, ltsp-manager, lxqt-themes, mailman3-core, measurement-kit, mini-buildd, musescore, node-babel, node-babel-eslint, node-babel-loader, node-babel-plugin-add-module-exports, node-babel-plugin-transform-define, node-gulp-newer, node-regenerate-unicode-properties, node-regexpu-core, node-regjsparser, node-unicode-data, node-unicode-loose-match, openjdk-9, orafce, pgaudit, pgsql-ogr-fdw, pk4, postgresql-mysql-fdw, powa-archivist, python-azure-devtools, python-colormap, python-darkslide, python-dotenv, python-karborclient, python-logfury, python-lupa, python-marshmallow, python-murano-pkg-check, python-octaviaclient, python-pathspec, python-pgpy, python-pydub, python-randomize, python-sabyenc, python-searchlightclient, python-stestr, python-subunit2sql, python-twitter, python-utils, python-wsgilog, r-cran-bindr, r-cran-desc, r-cran-hms, r-cran-readstata13, r-cran-rprojroot, r-cran-wikidatar, r-cran-wikipedir, r-cran-wikitaxa, repmgr, requests-file, resteasy3.0, sdl-kitchensink, stardicter, systemd-el, thunderbird, tomcat8.0, uwsgi-plugin-luajit, uwsgi-plugin-mongo, uwsgi-plugin-php & uwsgi-plugin-v8. I additionally filed 3 RC bugs against packages that had incomplete debian/copyright files: fonttools, generator-scripting-language & libsodium.

Norbert Preining: Debian/TeX Live 2017.20171031-1

Halloween is here, time to upload a new set of scary packages of TeX Live. About a month has passed, so there is the usual big stream of updates. There was actually an intermediate release to get out some urgent fixes, but I never reported the news here, so here are the accumulated changes and updates. My favorite this time is wallcalendar, a great class to design all kinds of calendars; it looks really well done. I will immediately start putting one together. On the font side there is the new addition coelacanth. To quote from the README: Coelacanth is inspired by the classic Centaur type design of Bruce Rogers, described by some as the most beautiful typeface ever designed. It aims to be a professional quality type family for general book typesetting. And indeed it is beautiful! Another noteworthy addition is the Spark font that allows creating sparklines in the running text with LaTeX. Enjoy.

New packages: algobox, amscls-doc, beilstein, bib2gls, coelacanth, crossreftools, dejavu-otf, dijkstra, ducksay, dynkin-diagrams, eqnnumwarn, fetchcls, fixjfm, glossaries-finnish, hagenberg-thesis, hecthese, ifxptex, isopt, istgame, ku-template, limecv, mensa-tex, musicography, na-position, notestex, outlining, pdfreview, spark-otf, spark-otf-fonts, theatre, unitn-bimrep, upzhkinsoku, wallcalendar, xltabular.

Updated packages: acmart, amsmath, animate, arabluatex, arara, babel, babel-french, bangorexam, baskervillef, beebe, biblatex-philosophy, biblatex-source-division, bibletext, bidi, bxjaprnind, bxjscls, bxpapersize, bytefield, classicthesis, cochineal, complexity, cooking-units, curves, datetime2-german, dccpaper, doclicense, docsurvey, eledmac, epstopdf, eqparbox, esami, etoc, fbb, fei, fithesis, fmtcount, fnspe, fonts-tlwg, fontspec, genealogytree, glossaries, glossaries-extra, hecthese, hepthesis, hvfloat, ifplatform, ifptex, inconsolata, jfmutil, jsclasses, ketcindy, knowledge, koma-script, l3build, l3experimental, l3kernel, l3packages, langsci, latex2man, latexbug, lato, leadsheets, libertinust1math, listofitems, luatexja, luatexko, luatodonotes, lwarp, markdown, mcf2graph, media9, newtx, novel, numspell, ocgx2, overpic, philokalia, phonenumbers, platex, poemscol, pst-exa, pst-geometrictools, pst-ovl, pst-plot, pst-pulley, pst-tools, pst-vehicle, pst2pdf, pstool, pstricks, pstricks-add, pxchfon, pxjahyper, quran, randomlist, rec-thy, reledmac, robustindex, scratch, skrapport, spectralsequences, tcolorbox, tetex, tex4ht, texcount, texdoc, tikzducks, tikzsymbols, toptesi, translation-biblatex-de, unicode-math, updmap-map, uplatex, widetable, xcharter, xepersian, xetexko, xetexref, xsim, zhlipsum.

29 October 2017

Russ Allbery: Review: Why We Sleep

Review: Why We Sleep, by Matthew Walker
Publisher: Scribner
Copyright: October 2017
ISBN: 1-5011-4433-2
Format: Kindle
Pages: 341
The world is full of theories, and corresponding books, about things that will make you healthier or prevent disease. Nearly all of them are scams, either intentional or created through the placebo effect and the human tendency to see patterns that don't exist. The rare ones that aren't have a certain pattern: they're grounded in our best understanding of biology, align with what our body wants to do anyway, have been thoroughly studied using proper testing methodology, and don't make money for powerful corporations. I'm fairly sure this is one of those rare ones that isn't a scam. And, if so, it's rather important and worth your attention.

Matthew Walker is a professor of neuroscience and psychology at the University of California at Berkeley, where he's the founder of the Center for Human Sleep Science. He's not a doctor; he started medical training, but (as he says in the book) found himself more attracted to questions than answers. He's a professional academic researcher who has been studying sleep for decades. This book is a combination of a summary of the current state of knowledge in academic sleep research and a plea: get more sleep, because we're literally killing ourselves with the lack of it.

Walker opens the book with a discussion of the mechanisms of sleep: how we biologically fall asleep and why, how this has changed over time, and how it changes with age. Along with that, he defines sleep: the REM and NREM sleep cycle that you may have already heard of, how it manifests itself in most people, and where dreams fit in. The second part then discusses what happens when you sleep, with a focus on what goes wrong when you don't. (Spoiler: A lot. Study after study, all cited and footnoted, has found connections between sleep and just about every aspect of mental and physical health.) The third part does the same for dreams, fitting them into the picture along with a scientific discussion of just what's going on during dreams. The fourth and final part tackles the problem: why don't we get enough sleep, and what can we do about it?

I will warn in advance that this book will make you paranoid about your sleeping patterns. Walker has the missionary zeal of an academic who has sunk his teeth into something really important that society needs to take into account, and he will try to drown you in data, analysis, analogies, and sheer earnestness until you believe him. He wants you to get at least seven, and preferably eight, hours of sleep a night. Every night, with as little variation as you can manage. Everyone, even if you think you're someone who doesn't need as much sleep (you're probably not). There's a ton of science here, a great popularization of a whole field of research, but this is also a book that's trying to get you to do something.

Normally, that sort of book raises my shields. I'm not much of a believer in any book of the general genre of "most people are doing this basic part of life wrong, and should do it my way instead." But the hallmarks of good science are here: very widespread medical consensus, no corporate interest or obvious path to profit, and lots of studies (footnoted here, with some discussion of methodology although not the statistical details, which will require looking up the underlying studies, and careful caveats where studies indicate correlation but may not find causes).
And Walker makes the very telling point early in the book that nearly every form of life on the planet sleeps in one way or another (defined as a daily recurring period of time during which it doesn't respond to outside stimulus), which is a strong indicator of universal necessity. Given the vulnerability and loss of useful hours that come with sleep, one would expect some species to find an evolutionary path away from it if it were dispensable. But except for extremely short-lived species, we've never found a living creature that didn't sleep. Walker's argument for duration is also backed up by repeated studies on human capability before and after various quantities of sleep, and on studies of the sleep phases in various parts of the night. Study after study used six hours as the cutoff point and showed substantial deterioration in physical and mental capabilities even after only one night of short sleeping. (Reducing sleep to four hours is nearly catastrophic.) And, more worrisomely, that degradation is still measurable after "catching up" on sleep on subsequent nights. Sleeping in on weekends doesn't appear to fully compensate for the damage done by short-sleeping during the week. When Walker gets into the biological reasons for sleep, one starts to understand why it's so important. I think the part I found the most fascinating was the detailed analysis of what the brain is doing while you sleep. It's not inactive at all, even outside of REM sleep. Walker and other sleep researchers have done intriguing experiments showing how different parts of the sleep cycle transfer memories from short to long term storage, transfer physical skills into subconscious parts of the brain, discard short term memories that the conscious brain has tagged as being unwanted, and free up space for new knowledge acquisition. REM sleep appears to attempt to connect otherwise unrelated memories and bits of knowledge, inverting how association normally works in the brain, thus providing some concrete explanation for sleep's role in creativity. And (this research is fairly new), deep NREM sleep causes temporary physical changes in the brain that appear to be involved in flushing metabolic waste products away, including the plaque involved in Alzheimer's. The last part of the book is probably the most concretely useful: what can one practically do to get more sleep? There is quite a lot that's proven effective, but Walker starts with something else: sleeping pills. Here, you can almost see the lines drawn by a lawyer around what Walker should say. He stresses that he's not a medical doctor while laying out study after study that all point in the same direction: sleeping pills are a highly dangerous medical fraud that will shorten your lifespan for negligible benefit in helping you fall asleep, while limiting your brain's ability to enter true sleep. They're sedation, sedation is not sleep, and the four billion dollar sleeping pill market is literally making everything worse. The good news is there is an effective treatment for insomnia that works for many people; the better news is that it's completely free (although Walker does suggest some degree of medical supervision for serious insomnia so that some parts of it can be tailored to you). He walks through CBT-I (cognitive behavior therapy for insomnia), which is now the medically recommended primary treatment for insomnia, and takes apart the pieces to show how they line up with the results of sleep research studies. 
Alongside that are recommendations for improving sleep for people who don't have clinical insomnia but who aren't regularly getting the recommended amount of sleep. There are a lot of interesting bits here (and he of course talks about blue LED light and its relationship to melatonin cycles), but I think the most interesting for me was that you have to lower your core body temperature by a couple of degrees (Fahrenheit) to enter sleep. The temperature of your sleeping environment is therefore doubly important: temperature changes are one of the signals your body uses to regulate circadian rhythms (cold being a signal of night), and a colder sleeping area helps you lower your core body temperature so that you can fall asleep. (The average person does best with a sleeping room temperature of 65F, 18C.) There's even more in here: I haven't touched on Walker's attack on the US tendency to push high school start times earlier and earlier in the day (particularly devastating for teenagers, whose circadian rhythms move two hours later in the day than adults before slowly returning to an adult cycle). Or the serious problems of waking to an alarm clock, and the important benefits of the sleep that comes at the end of a full night's cycle. Or the benefits of dreams in dealing with trauma and some theories for how PTSD may interfere with that process. Or the effect of sleep on the immune system.

Walker's writing style throughout Why We Sleep is engaging and clear, although sometimes too earnest. He really wants the reader to believe him and to get more sleep, and sometimes that leaks around the edges. One can also see the effort he's putting into not reading too much into research studies, but if there's a flaw in the science here, it's that I think Walker takes a few tentative conclusions a bit too far. (I'm sure these studies have the standard research problem of being frequently done on readily-available grad students rather than representative samples of the population, although the universality of sleep works in science's favor here.) Some of the recitations of research studies can get rather dry, and I once again discovered how boring I find most discussion of dreams, but for a first book written by an academic, this is quite readable.

This is one of those books that I want everyone to read mostly so that they can get the information in it, not as much for the enjoyment of reading the book itself. I've been paying closer attention to my own sleep patterns for the last few years, and my personal experience lines up neatly with the book in both techniques to get better sleep and the benefits of that sleep. I'd already reached the point where I was cringing when people talked about regularly going on four or five hours of sleep; this is an entire book full of researched reasons to not do that. (Walker points out that both Reagan and Thatcher, who bragged about not requiring much sleep, developed Alzheimer's, and calls out Trump for making the same brag.) The whole book may not be of interest to everyone, but I think everyone should at least understand why the World Health Organization recommends eight hours a night and labels shift work a probable carcinogen. And, as Walker points out, we should be teaching some of this stuff in school health classes alongside nutrition and sex education.
Alas, Walker can't provide much advice on what I think is the largest robber of sleep: the constant time pressure of modern life, in which an uninterrupted nine hour sleep opportunity feels like an unaffordable luxury.

Rating: 9 out of 10

28 October 2017

Steinar H. Gunderson: Introducing Narabu, part 4: Decoding

Narabu is a new intraframe video codec. You probably want to read part 1, part 2 and part 3 first.

So we're at the stage where the structure is in place. How do we decode? Once we have the structure, it's actually fairly straightforward. First of all, we need to figure out where each slice starts and ends. This is done on the CPU, but it's mostly just setting up pointers, so it's super-cheap. It doesn't see any pixels at all, just lengths and some probability distributions (those are decoded on the CPU, but they're only a few hundred values and no FP math is involved). Then, we set up local groups of size 64 (8x8) with 512 floats of shared memory. Each group will be used for decoding one slice (320 blocks), all coefficients. Each thread starts off doing rANS decoding (this involves a lookup into a small LUT, and of course some arithmetic) and dequantization of 8 blocks (for its coefficient); this means we now have 512 coefficients, so we add a barrier, do 64 horizontal IDCTs (one 8-point IDCT per thread), add a new barrier, do 64 vertical IDCTs, and then finally write out the data. We can then do the next 8 coefficients the same way (we kept the rANS decoding state), and so on.

Note how the parallelism changes during the decoding; it's a bit counterintuitive at first, and barriers are not for free, but it avoids saving the coefficients to global memory (which is costly). First, the parallelism is over coefficients, then over horizontal DCT blocks, then over vertical DCT blocks, and then we start all over again. In CPU multithreading, this would be very tricky, and probably not worth it at all, but on the GPU, it gives us tons of parallelism.

One problem is that the rANS work is going to be unbalanced within each warp. There's a lot more data (and thus more calculation and loading of compressed data from memory) in the lower-frequency coefficients, which means the other threads in the warp are wasting time doing nothing while those threads do more work. I tried various schemes to balance it better (like making even larger thread groups to get e.g. more coefficient 0s that could work together, or reordering the threads' coefficients in a zigzag), but seemingly it didn't help any. Again, the lack of a profiler here is hampering effective performance investigations.

In any case, the performance is reasonably good (my GTX 950 does 1280x720 luma-only in a little under 0.4 ms, which equates to ~1400 fps for full 4:2:2). As a side note, GoPro open-sourced their CineForm HD implementation the other day, and I guess it only goes to show that these kinds of things really belong on the GPU; they claim similar performance numbers to what I get on my low-end NVIDIA GPU (923.6 fps for 1080p 4:2:2, which would be roughly 1800 fps on 720p60), but that's on a 4GHz 8-core Broadwell-E, which is basically taking the most expensive enthusiast desktop CPU you can get your hands on and then overclocking it. Kudos to GoPro for freeing it, though (even under a very useful license), even though FFmpeg already had a working reverse-engineered implementation. :-)

Next time, we'll look at the encoder, which is a bit more complicated. After that, we'll do some performance tests and probably wrap up the series. Stay tuned.
