Search Results: "xam"

23 November 2017

Sean Whitton: Using Propellor to provision your Debian development laptop

sbuild is a tool used by those maintaining packages in Debian, and derived distributions such as Ubuntu. When used correctly, it can catch a lot of categories of bugs before packages are uploaded. It does this by building the package in a clean environment, and then running the package through the Lintian, piuparts, adequate and autopkgtest tools. However, configuring sbuild so that it makes use of all of these tools is cumbersome. In response to this complexity, I wrote a module for the Propellor configuration management system to prepare a system such that a user can just go ahead and run the sbuild(1) command. This module is useful on one's development laptop: if you need to reinstall your OS, you don't have to look up the instructions for setting up sbuild again. But it's also useful on throwaway build boxes. I can instruct propellor to provision a new virtual machine to build packages with sbuild, and all the different tools mentioned above will be connected together for me.

I just uploaded Propellor version 5.1.0 to Debian unstable. The new version overhauls the API and internals of the Sbuild module to take better advantage of Propellor's design. I won't get into those details in this post. What I'd like to do is demonstrate how you can set up sbuild on your own machines, using Propellor.

Getting started with Propellor: apt-get install propellor, and then run propellor --init. You'll be offered two setups, options A and B. I suggest starting with option B. If you never use Propellor for anything other than provisioning sbuild, you can stick with option B. If this tutorial makes you want to check out more features of Propellor, you might consider switching to option A and importing your old configuration. Open ~/.propellor/config.hs. You will see something like this:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ mybox
        ]
-- An example host.
mybox :: Host
mybox = host "mybox.example.com" $ props
        & osDebian Unstable X86_64
        & Apt.stdSourcesList
        & Apt.unattendedUpgrades
        & Apt.installed ["etckeeper"]
        & Apt.installed ["ssh"]
        & User.hasSomePassword (User "root")
        & File.dirExists "/var/www"
        & Cron.runPropellor (Cron.Times "30 * * * *")
You'll want to customise this so that it reflects your computer. My laptop is called iris, so I might replace the above with this:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ iris
        ]
-- My laptop.
iris :: Host
iris = host "iris.silentflame.com" $ props
        & osDebian Testing X86_64
The lines beginning with & are the properties of the host iris. Here, I've removed all properties except the osDebian property, which informs propellor that iris runs Debian testing and has the amd64 architecture. The effect of this is that Propellor will not try to change anything about iris. In this tutorial, we are not going to let Propellor configure anything about iris other than setting up sbuild. (The osDebian property is a pure info property, which means that it tells Propellor information about the host to which other properties might refer, but it doesn't itself change anything about iris.)

Telling Propellor to configure sbuild: first, add to the import lines at the top of config.hs the lines:
import qualified Propellor.Property.Sbuild as Sbuild
import qualified Propellor.Property.Schroot as Schroot
to enable use of the Sbuild module. Here is the full config for iris, which I'll go through line-by-line:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ iris
        ]
-- My laptop.
iris :: Host
iris = host "iris.silentflame.com" $ props
        & osDebian Testing X86_64
        & Apt.useLocalCacher
        & sidSchrootBuilt
        & Sbuild.usableBy (User "spwhitton")
        & Schroot.overlaysInTmpfs
        & Cron.runPropellor (Cron.Times "30 * * * *")
  where
        sidSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian Unstable X86_64
                & Sbuild.update `period` Daily
                & Sbuild.useHostProxy iris
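For reference, the import block at the top of config.hs has to cover every module these properties use. A sketch of what that might look like, following the Propellor.Property.* convention used above (depending on your Propellor version, the schedule combinators period and Daily may need their own import as well):

import Propellor
import qualified Propellor.Property.Apt as Apt
import qualified Propellor.Property.Cron as Cron
import qualified Propellor.Property.Sbuild as Sbuild
import qualified Propellor.Property.Schroot as Schroot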
Running Propellor: to configure your laptop, run propellor iris.silentflame.com. In this configuration, you don't need to worry about whether the hostname iris.silentflame.com actually resolves to your laptop. However, it must be possible to ssh root@localhost. This should be enough that spwhitton can:
$ sbuild -A --run-lintian --run-autopkgtest --run-piuparts foo.dsc
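As an aside, Propellor can also manage the ssh access it relies on. Here is a sketch of extra properties for iris, reusing property names that appear elsewhere on this page (it assumes import qualified Propellor.Property.Ssh as Ssh and import qualified Propellor.Property.User as User at the top of config.hs):

        & Ssh.installed
        & Ssh.permitRootLogin (RootLogin True)
        & User.hasSomePassword (User "root")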
Further configuration: it is easy to add new schroots; for example, for building backports:
        ...
        & stretchSchrootBuilt
        ...
  where
        ...
        stretchSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian (Stable "stretch") X86_64
                & Sbuild.update `period` Daily
                & Sbuild.useHostProxy iris
You can also add additional properties to configure your chroot. Perhaps on your LAN you need sbuild to install packages via https, and you already have an apt cacher available. You can replace the apt-cacher-ng configuration like this:
  where
        sidSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian Unstable X86_64
                & Sbuild.update `period` Daily
                & Apt.mirror "https://foo.mirror/debian/"
                & Apt.installed ["apt-transport-https"]
Thanks: thanks to Propellor's author, Joey Hess, for help navigating Propellor's type system while performing the overhaul included in version 5.1.0, and for a conversation at DebConf17 which enabled this work by clearing up some misconceptions of mine.

Jonathan Dowland: Concreate and Red Hat JBoss OpenShift image sources

Last year I wrote about some tools for working with Docker images. Since then, we've deprecated the dogen tool for our own images and have built a successor called Concreate. Concreate takes a container image definition described in a YAML document and generates an output image. To do so, it generates an intermediate Dockerfile, along with the scripts and artefacts you reference in the YAML file, and by default invokes docker build on the result. However, it can use other builders, such as the OpenShift Build Service, which is what we use for our production images. Concreate can also manage running tests against the image. As with the Container Testing Framework that I mentioned last time, these tests are defined using the Behave system. In related news, we have published the sources for all of our images. You can now go and read the image.yaml file for EAP 7 on OpenShift to see an example of what a real image using Concreate looks like.

20 November 2017

Reproducible builds folks: Reproducible Builds: Weekly report #133

Here's what happened in the Reproducible Builds effort between Sunday November 5 and Saturday November 11 2017:

Upcoming events: on November 17th Chris Lamb will present at the Open Compliance Summit, Yokohama, Japan, on how reproducible builds ensure the long-term sustainability of technology infrastructure. We plan to hold an assembly at 34C3 - hope to see you there!

LEDE CI tests: thanks to the work of lynxis, Mattia and h01ger, we're now testing all LEDE packages in our setup. This is our first result for the ar71xx target: "502 (100.0%) out of 502 built images and 4932 (94.8%) out of 5200 built packages were reproducible in our test setup." - see below for details of how this was achieved.

Bootstrapping and Diverse Double Compilation: as a follow-up to a discussion on bootstrapping compilers we had at the Berlin summit, Bernhard and Ximin worked on a proof of concept for Diverse Double Compilation of tinycc (aka tcc). Ximin Luo did a successful diverse-double compilation of tinycc git HEAD using gcc-7.2.0, clang-4.0.1, icc-18.0.0 and pgcc-17.10-0 (pgcc needs to triple-compile it). More variations are planned for the future, with the eventual aim to reproduce the same binaries cross-distro, and to extend it to test GCC itself.

Packages reviewed and fixed, and bugs filed Patches filed upstream: Patches filed in Debian: Patches filed in OpenSUSE: Reviews of unreproducible packages 73 package reviews have been added, 88 have been updated and 40 have been removed in this week, adding to our knowledge about identified issues. 4 issue types have been updated: Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: diffoscope development Mattia Rizzolo uploaded version 88~bpo9+1 to stretch-backports. reprotest development reproducible-website development theunreproduciblepackage development tests.reproducible-builds.org in detail Misc. This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Dirk Eddelbuettel: RcppEigen 0.3.3.3.1

A maintenance release 0.3.3.3.1 of RcppEigen is now on CRAN (and will get to Debian soon). It brings Eigen 3.3.* to R. The impetus was a request from CRAN to change the call to Rcpp::Rcpp.plugin.maker() to only use :: as the function has in fact been exported and accessible for a pretty long time. So now the usage pattern catches up. Otherwise, Haiku-OS is now supported and a minor Travis tweak was made. The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.3.3.1 (2017-11-19)
  • Compilation under Haiku-OS is now supported (Yu Gong in #45).
  • The Rcpp.plugin.maker helper function is called via :: as it is in fact exported (yet we had old code using :::).
  • A spurious argument was removed from an example call.
  • Travis CI now uses https to fetch the test runner script.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 November 2017

Joey Hess: custom ARM disk image generation with propellor

Following up on propelling disk images, Propellor can now build custom ARM disk images for a variety of different ARM boards. The disk image build can run on a powerful laptop or server, so it's super fast and easy compared with manually installing Debian on an ARM board. Here's a simple propellor config for an Olimex LIME board, with ssh access and a root password:
lime :: Host
lime = host "lime.example.com" $ props
    & osDebian Unstable ARMHF
    & Machine.olimex_A10_OLinuXino_LIME
    & hasPartition (partition EXT4 `mountedAt` "/" `setSize` MegaBytes 8192)
    & hasPassword (User "root")
    & Ssh.installed
    & Ssh.permitRootLogin (RootLogin True)
To make a disk image for that board, I only have to add this property to my laptop:
& imageBuiltFor lime
    (RawDiskImage "/srv/lime.img")
    (Debootstrapped mempty)
Propellor knows what kernel to install and how to make the image bootable for a bunch of ARM boards, including the Olimex LIME, the SheevaPlug, Banana Pi, and CubieTruck. To build the disk image targeting ARM, propellor uses qemu. So it's helpful that, after the first build, propellor incrementally updates disk images, quite quickly and efficiently. Once the board has the image installed, you can run propellor on it to further maintain it, and if there's a hardware problem, you can quickly replace it with an updated image.
computer tower that I will be maintaining with propellor
It's fairly simple to teach propellor about other ARM boards, so it should be quite easy to have propellor know about all the ARM boards supported by Debian (and other distros). Here's how I taught it about the Olimex LIME:
olimex_A10_OLinuXino_LIME :: Property (HasInfo + DebianLike)
olimex_A10_OLinuXino_LIME = FlashKernel.installed "Olimex A10-OLinuXino-LIME"
    `requires` sunxi "A10-OLinuXino-Lime"
    `requires` armmp
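A property for a similar board would follow the same pattern. For example, a hypothetical sketch for the CubieTruck, mirroring the helpers used in the LIME example above (the actual definition shipped with Propellor may differ):

cubietech_Cubietruck :: Property (HasInfo + DebianLike)
cubietech_Cubietruck = FlashKernel.installed "Cubietech Cubietruck"
    `requires` sunxi "Cubietruck"
    `requires` armmp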
My home server is a CubieTruck which serves as a wireless access point, solar panel data collector, and git-annex autobuilder. It's deployed from a disk image built by propellor, using this config. I've been involved with building disk images for ARM boards for a long time -- it was part of my job for five years -- and this is the first time I've been entirely happy with the process.

Louis-Philippe Véronneau: DebConf Videoteam sprint report - day 0

First day of the videoteam autumn sprint! Well, I say first day, but in reality it's more day 0. Even though most of us have arrived in Cambridge already, we are still missing a few people. Last year we decided to sprint in Paris because most of our video gear is stored there. This year, we instead chose to sprint a few days before the Cambridge Mini-DebConf to help record the conference afterwards. Since some of us arrived very late and the ones who did arrive early are still mostly jet-lagged (that includes me), I'll use this post to introduce the space we'll be working from this week and our general plan for the sprint.

House Party: after some deliberation, we decided to rent a house for a week in Cambridge: finding a work space to accommodate us and all our gear proved difficult, and we decided mixing accommodation and work would be a good idea. I've only been here for a few hours, but I have to say I'm pretty impressed by the airbnb we got. Last time I checked (it seems every time I do, some new room magically appears), I counted 5 bedrooms, 6 beds, 5 toilets and 3 shower rooms. Heck, there's even a solarium and a training room with weights and a punching bag on the first floor. Having a whole house to ourselves also means we have access to a functional kitchen. I'd really like to cook at least a few meals during the week. There's also a cat! Picture of a black cat I took from Wikipedia. It was too dark outside to use mine. It's not the house's cat per se, but it's been hanging out around the house for most of the day and makes cute faces trying to convince us to let it come inside. Nice try cat. Nice try. Here are some glamorous professional photos of what the place looks like on a perfect summer day, just for the kick of it: the view from the garden, the kitchen, one of the multiple bedrooms. Of course, reality has trouble matching all the post-processing filters.

Plan for the week: now on a more serious note; apart from enjoying the beautiful city of Cambridge, here's what the team plans to do this week:

tumbleweed: Stefano wants to continue refactoring our ansible setup. A lot of things have been added in the last year, but some of them are hacks we should remove and implement correctly.

highvoltage: Jonathan won't be able to come to Cambridge, but plans to work remotely, mainly on our desktop/xfce session implementation. Another pile of hacks waiting to be cleaned!

ivodd: Ivo has been working a lot on the pre-ansible part of our installation and plans to continue working on that. At the moment, creating an installation USB key is pretty complicated and he wants to make that simpler.

olasd: Nicolas completely reimplemented our streaming setup for DC17 and wants to continue working on that. More specifically, he wants to write scripts to automatically set up and tear down - via API calls - the distributed streaming network we now use. Finding a way to push TLS certificates to those mirrors, adding a live stream viewer on video.debconf.org and adding a viewer to our archive are also things he wants to look at.

pollo: For my part, I plan to catch up with all the commits in our ansible repository I missed since last year's sprint and work on documentation. It would be very nice if we could have a static website describing our work so that others (at mini-debconfs, for example) could replicate it easily. If I have time, I'll also try to document all the ansible roles we have written. Stay tuned for more daily reports!

18 November 2017

Petter Reinholdtsen: Legal to share more than 3000 movies listed on IMDB?

A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and to try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and identified a few more. The code used to extract information from various data sources is available in a git repository, currently available from github. So far I have identified 3186 unique IMDB title IDs. To gain better understanding of the structure of the data set, I created a histogram of the year associated with each movie (typically the release year). It is interesting to notice where the peaks and dips in the graph are located. I wonder why they are placed there. I suspect World War II caused the dip around 1940, but what caused the peak around 2010?

I've so far identified ten sources for IMDB title IDs for movies in the public domain or with a free license. This is the statistics reported when running 'make stats' in the git repository:

  249 entries (    6 unique) with and   288 without IMDB title ID in free-movies-archive-org-butter.json
 2301 entries (  540 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  830 entries (   29 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
 2109 entries (  377 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  291 entries (  122 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  144 entries (  135 unique) with and     0 without IMDB title ID in free-movies-manual.json
  350 entries (    1 unique) with and   801 without IMDB title ID in free-movies-publicdomainmovies.json
    4 entries (    0 unique) with and   124 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (  119 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
    8 entries (    8 unique) with and   196 without IMDB title ID in free-movies-vodo.json
 3186 unique IMDB title IDs in total
The entries without IMDB title ID are candidates to increase the data set, but might equally well be duplicates of entries already listed with IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID. I've seen examples of all these situations when peeking at the entries without IMDB title ID. Based on these data sources, the lower bound for movies listed in IMDB that are legal to distribute on the Internet is between 3186 and 4713. It would help improve the accuracy of this measurement if the various sources added IMDB title IDs to their metadata. I have tried to reach the people behind the various sources to ask if they are interested in doing this, without any replies so far. Perhaps you can help me get in touch with the people behind VODO, Public Domain Torrents, Public Domain Movies and Public Domain Review to try to convince them to add more metadata to their movie entries? Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes a link to both IMDB and The Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as Wikidata is updated. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

17 November 2017

Norbert Preining: ScalaFX: Problems with Tables abound

Doing a lot with all kinds of tables in ScalaFX, I stumbled upon a bug in ScalaFX that, with the help of the bug report, I was able to circumvent. It is a subtle bug where types are mixed between scalafx.SOMETHING and the corresponding javafx.SOMETHING.
In one of the answers it is stated that:
The issue is with implicit conversion from TableColumn not being located by Scala. I am not clear why this is happening (maybe a Scala bug).
But the provided work-around at least made it work. Until today, when I stumbled onto what is probably just another instance of this bug, but where the same work-around does not help. I am using TreeTableViews and try to replace the children of the root by filtering out one element. The code I use is of course very different, but here is a reduced and fully contained example, based on the original bug report and adapted to use a TreeTableView:
import scalafx.Includes._
import scalafx.scene.control.TreeTableColumn._
import scalafx.scene.control.TreeItem._
import scalafx.application.JFXApp.PrimaryStage
import scalafx.application.JFXApp
import scalafx.scene.Scene
import scalafx.scene.layout._
import scalafx.scene.control._
import scalafx.scene.control.TreeTableView
import scalafx.scene.control.Button
import scalafx.scene.paint.Color
import scalafx.beans.property.{ObjectProperty, StringProperty}
import scalafx.collections.ObservableBuffer
// TableTester.scala
object TableTester extends JFXApp {
  val characters = ObservableBuffer[Person](
    new Person("Peggy", "Sue", "123", Color.Violet),
    new Person("Rocky", "Raccoon", "456", Color.GreenYellow),
    new Person("Bungalow ", "Bill", "789", Color.DarkSalmon)
  )
  val table = new TreeTableView[Person](
    new TreeItem[Person](new Person("", "", "", Color.Red)) {
      expanded = true
      children = characters.map(new TreeItem[Person](_))
    }) {
    columns ++= List(
      new TreeTableColumn[Person, String] {
        text = "First Name"
        cellValueFactory = {
          _.value.value.value.firstName
        }
        prefWidth = 180
      },
      new TreeTableColumn[Person, String]() {
        text = "Last Name"
        cellValueFactory = {
          _.value.value.value.lastName
        }
        prefWidth = 180
      }
    )
  }
  stage = new PrimaryStage {
    title = "Simple Table View"
    scene = new Scene {
      content = new VBox() {
        children = List(
          new Button("Test it") {
            onAction = p => {
              val foo: ObservableBuffer[TreeItem[Person]] = table.root.value.children.map(p => {
                val bar: TreeItem[Person] = p
                p
              })
              table.root.value.children = foo
            }
          },
          table)
      }
    }
  }
}
// Person.scala
class Person(firstName_ : String, lastName_ : String, phone_ : String, favoriteColor_ : Color = Color.Blue) {
  val firstName = new StringProperty(this, "firstName", firstName_)
  val lastName = new StringProperty(this, "lastName", lastName_)
  val phone = new StringProperty(this, "phone", phone_)
  val favoriteColor = new ObjectProperty(this, "favoriteColor", favoriteColor_)
  firstName.onChange((x, _, _) => System.out.println(x.value))
}
With this code what one gets on compilation with the latest Scala and ScalaFX is:
[error]  found   : scalafx.collections.ObservableBuffer[javafx.scene.control.TreeItem[Person]]
[error]  required: scalafx.collections.ObservableBuffer[scalafx.scene.control.TreeItem[Person]]
[error]               val foo: ObservableBuffer[TreeItem[Person]] = table.root.value.children.map(p =>  
[error]                                                                                          ^
[error] one error found
And in this case, adding import statements didn't help, what a pity. Unfortunately this bug has been open since 2014 with a helpwanted tag and nothing is going on. I guess I have to try to dive into the source code of ScalaFX...

Renata Scheibler: Hello, world!

Renata's picture, a white woman in profile, touching her chin with her left hand fingertips.
About me: Hello, world! For those who are meeting me for the first time, I am a 31-year-old History teacher from Porto Alegre, Brazil. Some people might know me from the Python community, because I have been leading PyLadies Porto Alegre and helping organize Django Girls workshops in my state since 2016. If you don't, that's okay. Either way, it's nice to have you here. Ever since I learned about Rails Girls Summer of Code, during the International Free Software Forum - FISL 16, I have been wanting to get into a tech internship program. Google Summer of Code made it onto my radar as well, but I didn't really feel like I knew enough to try and get into those programs... until I found Outreachy. From their site:
Outreachy is an organization that provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.
There were many good projects to choose from on this round, a lot of them with few requirements - and most with requirements that I believed I could fulfill, such as some knowledge of HTML, CSS, Python or Django. I ought to say that I am not an expert in any of those. And, since you're reading this, I'm going to be completely honest. Coding is hard. Coding is hard to learn; it takes a lot of studying and a lot of practice. Even though I have been messing around with computers pretty much since I was a kid, because I was a girl lucky enough to have a father who owned a computer store, I hadn't begun learning how to program until mid-2015 - and I am still learning. I think I became such an autodidact because I had to (and, of course, because I was given the conditions to be, such as having spare time to study when I wasn't at school). I had to get any and all information from my surroundings and turn it into knowledge that I could use to achieve my goals. In a time when I could only get new computer games through a CD-ROM and the computer I was allowed to use didn't have a CD-ROM drive, I had to try and learn how to open a computer cabinet and connect/disconnect hardware properly, so I could use my brother's CD-ROM drive on the computer I was allowed to use and install the games without anyone noticing. When, back in 1998, I couldn't connect to the internet because the computer I was allowed to use didn't have a modem, I had to learn about networks to figure out how to do it from my brother's computer on the LAN (local network). I would go to the community public library and read books and any tech magazines I could get my hands on (libraries didn't usually have computers to be used by the public back then). It was about 2002 when I learned how to create HTML sites by studying the source code of pages I had saved to read offline, in one of the very, very few times I was allowed to access the internet and browse the web. Of course, the site I created back then never saw the light of day, because I didn't really have internet access at home.

So, how come it is that only now, 14 years later, I am trying to get into tech? Because when I finished high school in 2003, I was still a minor and my family didn't allow me to go to Vocational School and take an IT course. (Never mind that my own oldest brother had graduated in IT and had been working in the field for almost a decade.) I ended up going to study... teacher training in History as an undergrad course. A lot has happened since then. I took the exam to become a public school teacher, and more than two years passed without my being called to work. I spent 3 years in odd jobs that paid barely enough to pay rent (and, sometimes, not even that). Since IT is the new thing and all jobs are in IT, finally, finally it seemed okay for me to take that Vocational School training in a public school - and so I did. I gotta say, I thought that while I studied, I would be able to get some sort of job or internship to help with my learning. After all, I had seen it easily happening with people I met before getting into the course. And by "people", of course, I mean white men. For me, it took a whole year of searching, trying and interviewing to get an internship related to the field - tech support in a school computer lab, running GNU/Linux. And, in that very same week, I was hired as a public school teacher. There is a lot more... actually, there is so much more to this story, but I think I have told enough for now.
Enough to know where I came from and who I am, as of now. I hope you stick around. I am bound to write here every two weeks, so I guess I will see you then! o/

14 November 2017

Jonathan Dowland: WadC 2.2

Bird Cage, a WadC-generated map for Heretic
I have recently released version 2.2 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps. The biggest change in this version is a reworking of the preferences system (to use the Java Preferences API), the wadcli command-line interface respecting preferences and a new preferences UI dialog (adapted from Quake Injector). There are two new example maps: A Labyrinth demonstration contributed by "Yoruk", and a Heretic map Bird Cage by yours truly. These are both now amongst the largest examples in the collection, although laby.wl was generated by a higher-level program. For more information see the release notes and the reference, or check out the new gallery of examples or skip straight to downloads. I have no plans to work on WadC further (but never say never, I suppose.)

12 November 2017

Lars Wirzenius: Unit and integration testing: an analogy with cars

A unit is a part of your program you can test in isolation. You write unit tests to test all aspects of it that you care about. If all your unit tests pass, you should know that your unit works well. Integration tests are for testing that when your various well-tested, high quality units are combined, integrated, they work together. Integration tests test the integration, not the individual units. You could think of building a car. Your units are the ball bearings, axles, wheels, brakes, etc. Your unit tests for the ball bearings might test, for example, that they can handle a billion rotations, at various temperatures, etc. Your integration test would assume the ball bearings work, and should instead test that the ball bearings are installed in the right way so that the car, as a whole, can run for kilometers, accelerate and brake every kilometer, use only so much fuel, produce only so much pollution, and not kill passengers in case of a crash.
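To make the analogy concrete, here is a minimal sketch of the same distinction in code, using Haskell's hspec library (my choice for illustration - the post itself names no language or tools, and the "car" model is deliberately toy-sized):

import Test.Hspec

-- a unit: a toy model of a ball bearing's endurance
bearingSurvives :: Int -> Bool
bearingSurvives rotations = rotations <= 1000000000

-- the integration point: bolt the (assumed working) units together
assemble :: [String] -> String
assemble = unwords

main :: IO ()
main = hspec $ do
  describe "ball bearing (unit test)" $
    it "handles a billion rotations" $
      bearingSurvives 1000000000 `shouldBe` True
  describe "car (integration test)" $
    it "has its parts connected in the right order" $
      assemble ["bearings", "axles", "wheels", "brakes"]
        `shouldBe` "bearings axles wheels brakes"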

Sven Hoexter: Offering a Simtec Entropy Key

Since I started to lean a bit towards the concept of minimalism I've got rid of stuff, including all stationary computers. So for now I'm left with just my laptop, and that's something where I do not want to attach a USB entropy key permanently. That's why I have a spare Simtec Entropy Key that I no longer use, and am willing to sell. In case someone is interested, I'm willing to give it away for 20EUR + shipping. If you can convince me it'll be of use for the Debian project (end up on a DSA managed machine for example) I'm willing to give it away for less. If you're located in Cologne, Copenhagen or Barcelona we might be able, depending on the timing, to do a personal handover (with or without keysigning). Otherwise I guess shipping is mainly interesting for someone also located in Europe. You can use sven at stormbind dot net or hoexter at debian dot org to contact me, and use GPG key 0xA6DC24D9DA2493D1.

10 November 2017

Wouter Verhelst: SReview 0.1

This morning I uploaded version 0.1 of SReview, my video review and transcoding system, to Debian experimental. There's still some work to be done before it'll be perfectly easy to use by anyone, but I do think I've reached the point where it has basic usability. Quick HOWTO for how to use it: There's still some bits of the above list that I want to make easier to do, and there's still some things that shouldn't be strictly necessary, but all in all, I think SReview has now reached a certain level of maturity that means I felt confident doing its first upload to Debian. Did you try it out? Let me know what you think!

Thadeu Lima de Souza Cascardo: Software Freedom Strategy with Community Projects

It's been some time since I last wrote. Life and work have been busy. At the same time, the world has been busy, and though I would love to write a larger post, I will try to be short here. I would love to touch on the Librem 5 and postmarketOS. In fact, I already have, in a podcast in Portuguese, Papo Livre. Maybe I'll touch a little on the latter. Some of the inspiration for this post include: All of those led me to understand how software freedom is under attack, in particular how copyleft is under attack. And, as I said during FISL, though many might say that "Open Source has won", end users' software freedom has not. Lots of companies have co-opted "free software" but give no software freedom to their users. They seem friends with free software, and they are. Because they want software to be free. But freedom should not be a value for software itself; it needs to be a value for people - not only companies or people who are labeled software developers, but all people. That's why I want to stop talking about free software, and talk more about software freedom. Because I believe the latter is clearer about what we are talking about. I don't mind that we use whatever label, as long as we establish its meaning during conversations, and set the tone to distinguish them. The thing is: free software does not software freedom make. Not by itself. As Bradley Kuhn puts it: it's not magic pixie dust. Those who have known me for years might remember me as a person who studied free software licenses and how I valued copyleft, the GPL specifically, and how I concerned myself with topics like license compatibility and other licensing matters. Others might remember me as a person who valued upstreaming code a lot - not carrying changes to openly developed software that you had not made an effort to put upstream. I can't say I was wrong on either account. I still believe in those things. I still believe in the importance of copyleft and the GPL. I still value sharing your code in the commons by going upstream. But I was certainly wrong in valuing them too much, or in not giving as much or even more value to the distribution efforts that get software freedom to users. And it took me a while to see how many people also saw the GPL as a tool to get code upstream. You see that a lot in Linus' discourse about the GPL. And that is on the minds of a lot of people, who I have seen argue that copyleft is not necessary for companies to contribute code back. But that's the problem. The point is not about getting code upstream, but about assuring people have the freedom to run a modified version of the software they received on their computers. It turns out that many of the companies who had contributed code upstream have not delivered that freedom to their end-users, who had received a modified version of that same software, which is not free. Bradley Kuhn also alerts us that many companies have been replacing copyleft software with non-copyleft software. And I completely agree with him that we should be writing more copyleft software that we hold copyright for, so we can enforce it. But looking at what has been happening recently in the Linux community about enforcement, even though I still believe in enforcement as a strategy, I think we need much more than that. And one of those strategies is delivering more free software that users may be able to install on their own computers. It's building those replacements for software that people have been using for any reason.
Be it the OS they get when they buy a device, or the application they use for communication. It's not like the community is not doing it; it's just that we need to acknowledge that this is a necessary strategy to guarantee software freedom. That distribution of software that users may easily install on their computers is as much or even more valuable than developing software closer to the hacker/developer community. That making downstream changes to free software in the effort of getting it to users is worth it. That maintaining that software stable and secure for users is a very important task. I may be biased when talking about that, as I have been shifting between upstream and downstream work in recent years. But maybe that's what I needed to realize that upstreaming does not necessarily guarantee that users will get software freedom. I believe we need to talk more about that. I have seen many people dear to me disregard that difference between the freedom of the user and the freedom of software. There is much more to say about that, and detail to go into on some of those points, and I think we need to debate more. I am subscribed to the libreplanet-discuss mailing list. Come join us in discussing software freedom there, if you want to comment on anything I brought up here. As I promised I would, I would like to mention postmarketOS, which is an option users now have to get some software freedom on some mobile devices. It's an effort I wanted to build myself, and I applaud the community that has developed around it and has been moving forward so quickly. And it's a good example of a balance between upstream and downstream code that gets to deliver a better level of software freedom to users than the vendor ever would. I wanted to write about many of the topics I brought up today, but postponed that for some time. I was motivated by recent events in the community, and I am really disappointed at some of the free software players and some of the events that happened in the last few years. That got me thinking about how we need to speak out about those issues, so people know how we feel. So here it is: I am disappointed at how the Linux Foundation handled the situation of Software Freedom Conservancy taking a case against VMWare; I am disappointed at how the Software Freedom Law Center handled a trademark issue against the Software Freedom Conservancy; and I really appreciate all the work the Software Freedom Conservancy has been doing. I have supported them for the last two years, and I urge you to become a supporter too.

07 November 2017

Reproducible builds folks: Reproducible Builds: Weekly report #132

Here's what happened in the Reproducible Builds effort between Sunday October 29 and Saturday November 4 2017: Past events Upcoming events Reproducible work in other projects Packages reviewed and fixed, and bugs filed Reviews of unreproducible packages 7 package reviews have been added, 43 have been updated and 47 have been removed in this week, adding to our knowledge about identified issues. Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: Documentation updates diffoscope development Version 88 was uploaded to unstable by Mattia Rizzolo. It included contributions (already covered by posts of the previous weeks) from: strip-nondeterminism development Version 0.040-1 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks, as well as new ones from:
Version 0.5.2-2 was uploaded to unstable by Holger Levsen. It included contributions already covered by posts of the previous weeks, as well as new ones from: reprotest development buildinfo.debian.net development tests.reproducible-builds.org Misc. This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

05 November 2017

Thorsten Alteholz: My Debian Activities in October 2017

FTP assistant: again, this month almost the same numbers as last month appeared in the statistics. I accepted 214 packages and rejected 22 uploads. The overall number of packages that got accepted this month was only 339.

Debian LTS: this was my fortieth month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 20.75h. During that time I did LTS uploads of: I also took care of radare2 twice and marked all four CVEs as not-affected for Wheezy and Jessie. As nobody else wanted to address the issues in wireshark yet, I have now started to work on this package. Last but not least I did one week of frontdesk duties.

Other stuff: during October I took care of some bugs and in one go uploaded new upstream versions of hoel and duktape (this had to be done twice, as I introduced a new bug with the first upload :-(). I only fixed bugs in glewlwyd and smstools. This month I also sponsored an upload of printrun. After about ten years of living without any power outage, some construction worker decided to cut a cable near my place. Unfortunately one of my computers used for recording TV shows did not boot after the cable had been repaired and I had to switch some timers to other boxes. All in all this was too much stress, and I purchased some UPSes from APC. As apcupsd was orphaned, I took the opportunity to adopt it as DOPOM for this month. My license pasting project now contains 31 license templates for your debian/copyright. The list of available texts can be obtained with:

curl http://licapi.debian.net/template

The license text itself is available under the given links, for example with

curl http://licapi.debian.net/template/Apache-2

03 November 2017

Rogério Brito: Comparison of JDK installation of various Linux distributions

Today I spent some time in the morning seeing how one would install the JDK on Linux distributions. This is to create a little comparative tutorial to teach introductory Java. Installing the JDK is, thanks to the OpenJDK developers in Debian and Ubuntu (Matthias Klose and helpers), a very easy task. You simply type something like:
apt-get install openjdk-8-jdk
Since for a student it is better to have everything for experiments, I install the full version, not only the -headless version. Given my familiarity with Debian/Ubuntu, I didn't have to think about the way of installing it, of course. But as this is a tutorial meant to be as general as I can make it, I also tried to include instructions on how to install Java on other distributions. The first two that came to my mind were openSUSE and Fedora. Both use the RPM package format for their "native" packages (in the same sense that Debian uses DEB packages for "native" packages). But they use different higher-level tools to install such packages: Fedora uses a tool called dnf, while openSUSE uses zypper. To try these distributions, I got their netinstall ISOs and used qemu/kvm to install on a virtual machine. I used the following to install/run the virtual machines (the example below is, of course, for openSUSE):
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 -net nic,model=e1000 -net user -drive index=0,media=disk,cache=unsafe,file=suse.qcow2 -cdrom openSUSE-Leap-42.3-NET-x86_64.iso
The names of the packages also change from one distribution to another. On Fedora, I had to use:
dnf install java-1.8.0-openjdk-devel
On openSUSE, I had to use:
zypper install java-1_8_0-openjdk-devel
Note that one distribution uses dots in the names of the packages while the other uses underscores. One interesting thing that I noticed with dnf was that, when I used it, it automatically refreshed the package lists from the network, something which I forgot, and it was a pleasant surprise. I don't know about zypper, but I guess that it probably had fresh indices when the installation finished. Both installations were effortless after I knew the names of the packages to install. Oh, BTW, in my 5-minute exploration with these distributions, I noticed that if you don't want the JDK, but only the JRE, then you omit the -devel suffix. It makes sense when you think about it, for consistency with other packages, but Debian's conventions also make sense (JRE with -jre suffix, JDK with -jdk suffix). I failed miserably to use Fedora's prebaked, vanilla cloud image, as I couldn't log in to this image, and I decided to just install the whole OS on a fresh virtual machine. I don't have instructions on how to install on Gentoo or on Arch, though. I now see how hard it is to cover instructions/provide software for as many distributions as you wish, given the multitude of package managers, conventions etc.

02 November 2017

Antoine Beaupré: October 2017 report: LTS, feed2exec beta, pandoc filters, git mediawiki

Debian Long Term Support (LTS): this is my monthly Debian LTS report. This time I worked on the famous KRACK attack, git-annex, golang and the continuous stream of GraphicsMagick security issues.

WPA & KRACK update: I spent most of my time this month on the Linux WPA code, to backport it to the old (~2012) wpa_supplicant release. I first published a patchset based on the patches shipped after the embargo for the oldstable/jessie release. After feedback from the list, I also built packages for i386 and ARM. I have also reviewed the WPA protocol to make sure I understood the implications of the changes required to backport the patches. For example, I removed the patches touching the WNM sleep mode code as that was introduced only in the 2.0 release. Chunks of code regarding state tracking were also not backported as they are part of the state tracking code introduced later, in 3ff3323. Finally, I still have concerns about the nonce setup in patch #5. In the last chunk, you'll notice peer->tk is reset so that a new TK gets negotiated. The other approach I considered was to backport 1380fcbd9f ("TDLS: Do not modify RNonce for an TPK M1 frame with same INonce") but I figured I would play it safe and not introduce further variations. I should note that I share Matthew Green's observations regarding the opacity of the protocol. Normally, network protocols are freely available and security researchers like me can easily review them. In this case, I would have needed to read the opaque 802.11i-2004 pdf which is behind a TOS wall at the IEEE. I ended up reading up on the IEEE_802.11i-2004 Wikipedia article which gives a simpler view of the protocol. But it's a real problem to see such critical protocols developed behind closed doors like this. At Guido's suggestion, I sent the final patch upstream explaining the concerns I had with the patch. I have not, at the time of writing, received any response from upstream about this, unfortunately. I uploaded the fixed packages as DLA 1150-1 on October 31st.

Git-annex: the next big chunk on my list was completing the work on git-annex (CVE-2017-12976) that I started in August. It turns out doing the backport was simpler than I expected, even with my rusty experience with Haskell. Type-checking really helps in doing the right thing, especially considering how Joey Hess implemented the fix: by introducing a new type. So I backported the patch from upstream and notified the security team that the jessie and stretch updates would be similarly easy. I shipped the backport to LTS as DLA-1144-1. I also shared the updated packages for jessie (which required a similar backport) and stretch (which didn't), and Sebastien Delafond published those as DSA 4010-1.

Graphicsmagick: up next was yet another security vulnerability in the Graphicsmagick stack. This involved the usual deep dive into intricate and sometimes just unreasonable C code to try and fit a round tree in a square sinkhole. I'm always unsure about those patches, but the test suite passes, smoke tests show the vulnerability as fixed, and that's pretty much as good as it gets. The announcement (DLA 1154-1) turned out to be a little special because I had previously noticed that the penultimate announcement (DLA 1130-1) was never sent out. So I made a merged announcement to cover both instead of re-sending the original 3 weeks late, which may have been confusing for our users.

Triage & misc: we always do a bit of triage even when not on frontdesk duty, so I: I also did smaller bits of work on: The latter reminded me of the concerns I have about the long-term maintainability of the golang ecosystem: because everything is statically linked, an update to a core library (say the SMTP library as in CVE-2017-15042, thankfully not affecting LTS) requires a full rebuild of all packages including the library in all distributions. So what would be a simple update in a shared library system could mean an explosion of work on statically linked infrastructures. This is a lot of work which can definitely be error-prone: as I've seen in other updates, some packages (for example the Ruby interpreter) just bit-rot on their own and eventually fail to build from source. We would also have to investigate all packages to see which ones include the library, something which we are not well equipped for at this point. Wheezy was the first release shipping golang packages but at least it's shipping only one... Stretch has shipped with two golang versions (1.7 and 1.8) which will make maintenance ever harder in the long term.
We build our computers the way we build our cities--over time, without a plan, on top of ruins. - Ellen Ullman

Other free software work: this month again, I was busy doing some serious yak shaving operations all over the internet, on top of publishing two of my largest LWN articles to date (2017-10-16-strategies-offline-pgp-key-storage and 2017-10-26-comparison-cryptographic-keycards).

feed2exec beta: since I announced this new project last month, I have released it as a beta and it has entered Debian. I have also written useful plugins like the wayback plugin that saves pages on the Wayback Machine for eternal archival. The archive plugin can also similarly save pages to the local filesystem. I also added bash completion, expanded unit tests and documentation, fixed default file paths and a bunch of bugs, and refactored the code. Finally, I also started using two external Python libraries instead of rolling my own code: the pyxdg and requests-file libraries, the latter of which I packaged in Debian (and fixed a bug in their test suite). The program is working pretty well for me. The only thing I feel is really missing now is a retry/fail mechanism. Right now, it's a little brittle: any network hiccup will yield an error email, which is readable to me but could be confusing to a new user. Strangely enough, I am particularly having trouble with (local!) DNS resolution, which I need to look into, but that is probably unrelated to the software itself. Thankfully, the user can disable those with --loglevel=ERROR to silence WARNINGs. Furthermore, some plugins still have some rough edges. For example, the Transmission integration would probably work better as a distinct plugin instead of a simple exec call, because when it adds new torrents, the output is totally cryptic. That plugin could also leverage more feed parameters to save different files in different locations depending on the feed titles, something that would be hard to do safely with the exec plugin now. I am keeping a steady flow of releases. I wish there was a way to see how effective I am at reaching out with this project, but unfortunately GitLab doesn't provide usage statistics... And I have received only a few comments on IRC about the project, so maybe I need to reach out more, like it says in the fine manual. It always feels strange to have to promote your project like it's some new bubbly soap... Next steps for the project are a final review of the API and releasing a production-ready 1.0.0. I am also thinking of making a small screencast to show the basic capabilities of the software, maybe with asciinema's upcoming audio support?

Pandoc filters: as I mentioned earlier, I dove into Haskell programming again when working on the git-annex security update. But I also have a small Haskell program of my own - a Pandoc filter that I use to convert the HTML articles I publish on LWN.net into an Ikiwiki-compatible markdown version. It turns out the script was still missing a bunch of stuff: image sizes, proper table formatting, etc. I also worked hard on automating more bits of the publishing workflow by extracting the time from the article, which allowed me to simply extract the full article into an almost final copy just by specifying the article ID. The only thing left is to add tags, and the article is complete. In the process, I learned about new weird Haskell constructs. Take this code, for example:
-- remove needless blockquote wrapper around some tables
--
-- haskell newbie tips:
--
-- @ is the "at-pattern", allows us to define both a name for the
-- construct and inspect the contents as once
--
--   is the "empty record pattern": it basically means "match the
-- arguments but ignore the args"
cleanBlock (BlockQuote t@[Table  ]) = t
Here the idea is to remove <blockquote> elements needlessly wrapping a <table>. I can't specify the Table type on its own, because then I couldn't address the table as a whole, only its parts. I could reconstruct the whole table bits by bits, but it wasn't as clean. The other pattern was how to, at last, address multiple string elements, which was difficult because Pandoc treats spaces specially:
cleanBlock (Plain (Strong (Str "Notifications":Space:Str "for":Space:Str "all":Space:Str "responses":_):_)) = []
The last bit that drove me crazy was the date parsing:
-- the "GAByline" div has a date, use it to generate the ikiwiki dates
--
-- this is distinct from cleanBlock because we do not want to have to
-- deal with time there: it is only here we need it, and we need to
-- pass it in here because we do not want to mess with IO (time is I/O
-- in haskell) all across the function hierarchy
cleanDates :: ZonedTime -> Block -> [Block]
-- this mouthful is just the way the data comes in from
-- LWN/Pandoc. there could be a cleaner way to represent this,
-- possibly with a record, but this is complicated and obscure enough.
cleanDates time (Div (_, [cls], _)
                 [Para [Str month, Space, Str day, Space, Str year], Para _])
    cls == "GAByline" = ikiwikiRawInline (ikiwikiMetaField "date"
                                           (iso8601Format (parseTimeOrError True defaultTimeLocale "%Y-%B-%e,"
                                                           (year ++ "-" ++ month ++ "-" ++ day) :: ZonedTime)))
                        ++ ikiwikiRawInline (ikiwikiMetaField "updated"
                                             (iso8601Format time))
                        ++ [Para []]
-- other elements just pass through
cleanDates time x = [x]
Now that seems just dirty, but it was even worse before. One thing I find difficult in adapting to coding in Haskell is that you need to get into the habit of writing smaller functions. The language is really not well adapted to long discourse: it's more about getting small things connected together. Other languages (e.g. Python) discourage this because there's some overhead in calling functions (10 nanoseconds in my tests, but still), whereas functions are a fundamental and important construction in Haskell that are much more heavily optimized. So I constantly need to remind myself to split things up early, otherwise I can't do anything in Haskell. Other languages are more lenient, which does mean my code can be more dirty, but I feel I get things done faster that way. The oddity of Haskell makes it frustrating to work with. It's like doing construction work but you're not allowed to get the floor dirty. When I build stuff, I don't mind things being dirty: I can clean up afterwards. This is especially critical when you don't actually know how to make things clean in the first place, as Haskell will simply not let you do that at all. And obviously, I fought with Monads, or, more specifically, "I/O" or IO in this case. Turns out that getting the current time is IO in Haskell: indeed, it's not a "pure" function that will always return the same thing. But this means that I would have had to change the signature of all the functions that touched time to include IO. I eventually moved the time initialization up into main so that I had only one IO function and moved that timestamp downwards as a simple argument. That way I could keep the rest of the code clean, which seems to be an acceptable pattern. I would of course be happy to get feedback from my Haskell readers (if any) to see how to improve that code. I am always eager to learn.
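For illustration, that pattern - doing the one IO action in main and then passing the timestamp down as a plain argument - might look like this sketch (assuming the filter is wired up with pandoc's toJSONFilter; cleanDates is the function shown above):

import Data.Time.LocalTime (getZonedTime)
import Text.Pandoc.JSON (toJSONFilter)

main :: IO ()
main = do
  -- the only I/O: fetch the current time once, up front
  time <- getZonedTime
  -- everything else stays pure: the timestamp travels down
  -- the function hierarchy as an ordinary argument
  toJSONFilter (cleanDates time)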

Git remote MediaWiki: few people know that there is a MediaWiki remote for Git which allows you to mirror a MediaWiki site as a Git repository. As a disaster recovery mechanism, I have been keeping such a historical backup of the Amateur radio wiki for a while now. This originally started as a homegrown Python script to also convert the contents to Markdown. My theory then was to see if we could switch from MediaWiki to Ikiwiki, but it took so long to implement that I never completed the work. When someone had the weird idea of renaming a page to some impossibly long name on the wiki, my script broke. I tried to look at fixing it and then remembered I also had a mirror running using the Git remote. It turns out it also broke on the same issue, and that got me looking into the remote again. I got lost in a zillion issues, including fixing that specific issue, but I especially looked at the possibility of fetching all namespaces, because I realized that the remote fetches only a part of the wiki by default. And that drove me to submit namespace support as a patch to the git mailing list. Finally, the discussion came back to how to actually maintain that contrib: in git core or outside? It looks like I'll be doing some maintenance of that project outside of git, as I was granted access to the GitHub organisation...

Galore Yak Shaving

Then there's the usual hodgepodge of fixes and random things I did over the month.
There is no [web extension] only XUL! - Inside joke

01 November 2017

Jonathan Dowland: In defence of "Thought for the Day"

BBC Radio 4's long-running "Thought for the Day" has been in the news this week, as several presenters for the Today Programme have criticised the segment in an interview with Radio Times magazine. One facet of the criticism was whether the BBC should be broadcasting three minutes of religious content daily when more than half of the population are atheist. I'm an atheist, and in my day-to-day life I have almost zero interaction with people of faith, certainly none where faith is a topic of conversation. However, when I was an undergrad at Durham, I was a member of St John's College, which has a Christian/Anglican/Evangelical heritage, and I met a lot of religious friends during my time there. What I find a little disturbing about the lack of faithful people in my day-to-day life, compared to then, is how it shines a light on how disjoint our society is. This has become even more apparent with the advent of the "filter bubble" and how irreconcilable factions are around topics like Brexit, Trump, etc. For these reasons I appreciate Thought for the Day and hearing voices from communities that I normally have little to do with. I can agree with the complaints about the lack of diversity, and I particularly enjoy hearing Thoughts from Muslims, Sikhs, Hindus, Buddhists, and such. Another criticism levelled against the segment was that it can be "preachy". I haven't found that myself: I get the impression that most of the monologues are carefully constructed to be as un-preachy as possible, and I can usually appreciate the ethical content of the talks without having to buy into the faith aspect. Interestingly, the current principal of St John's College, David Wilkinson, has written his own response to the interview for the Radio Times.

31 October 2017

Chris Lamb: Free software activities in October 2017

Here is my monthly update covering what I have been doing in the free software world in October 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:


I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Improve names in output of "internal" binwalk members. (#877525).
  • Don't crash on malformed md5sums files. (#877473).
  • Omit misleading "any of" prefix when only complaining about a single module on import. [...]
  • Adjust tests as ps2ascii now varies its output on timezone. [...]

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Clojure considers a .class file to be stale if it shares the same timestamp as the corresponding .clj source. We thus adjust the timestamps of the .clj to always be slightly older, as illustrated in the sketch after this list. (#877418).
  • Print a message in --verbose mode if no canonical time was specified. [...]
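To make the Clojure item concrete, here is a minimal sketch of the idea in Haskell (strip-nondeterminism itself is written in Perl, and the one-second offset here is an assumption for illustration):
import Data.List (isSuffixOf)

-- When normalizing archive members to one canonical timestamp, push the
-- .clj sources one second earlier so the compiled .class files remain
-- strictly newer and Clojure does not consider them stale.
normalizedTimestamp :: Integer -> FilePath -> Integer
normalizedTimestamp canonical name
  | ".clj" `isSuffixOf` name = canonical - 1
  | otherwise                = canonical

main :: IO ()
main = mapM_ (print . normalizedTimestamp 1509408000)
             ["core.clj", "core__init.class"]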

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Always show SHA-256 checksums, regardless of the browser viewport size. [...]
  • Add an API endpoint to fetch specific .buildinfo files for a certain package/version/architecture. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.
Patches contributed
  • devscripts: Please print the actual arguments debuild passes to Lintian. (#880124)
  • hw-detect: Drop reference to floppy disks; it's almost 2018. (#880122)
  • debci:
    • Use deb.debian.org over http.debian.net. (#879654)
    • Document how to use an alternative mirror. (#879655)

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Followed up on a large number of upstream "pings" that have been left dormant.
  • Issued DLA 1121-1 to fix an out-of-bounds read vulnerability in curl where a malicious FTP server could abuse this to prevent clients from interacting with it.
  • Issued DLA 1123-1 for the "Go" programming language where an attacker could generate a MIME request such that the server ran out of file descriptors.
  • Issued DLA 1126-1 for the libxfont font selection and rasterisation library, correcting two vulnerabilities, both involving the library being tricked into reading invalid/random memory.
  • Issued DLA 1134-1 for sdl-image1.2, an image loading library. A maliciously-crafted .xcf file could cause a stack-based buffer overflow resulting in potential code execution.

Uploads
  • python-django:
    • 2.0~beta1-1 New upstream 2.x release.
    • 1.11.6-1 New upstream bugfix release.
  • gunicorn (19.6.0-10+deb9u1) Prepared a release for stable to avoid a runtime dependency on a compiler. (#877722)
  • redis:
    • 4:4.0.2-3:
      • Drop the Debian-specific /etc/redis/redis-server.pre-up.d (etc.) hooks and remove them if unchanged.
      • Include systemd redis-server@.service and redis-sentinel@.service template files to easily run multiple Redis instances. (#877702)
      • Patch redis.conf and sentinel.conf with quilt instead of maintaining our own versions under debian/.
    • 4:4.0.2-4:
      • Add input validity checking to cluster config slot numbers to fix CVE-2017-15047. (#878076)
      • Drop debian/bin/generate-parts now that we aren't calling it.
      • Correct Bash-ism in NEWS file.
    • 4:4.0.2-5: Replace the existing patch for CVE-2017-15047 with an upstream-blessed version that covers another case.
  • redisearch (0.21.3-5) Initial release.
  • docbook2man (2.0.0-40) Correct spelling mistakes in the binaries, plus other miscellaneous packaging tidying.
  • python-redis (2.10.6-1) New upstream release.
  • bfs (1.1.3-1) New upstream release.

FTP Team

As a Debian FTP assistant I ACCEPTed 103 packages: amcheck, argagg, binutils, blockui, bro-pkg, chkservice, citus, django-axes, docker-containerd, doctest, dtkwidget, duktape, feed2exec, fontforge, fonttools, gcc-8, gcc-8-cross, generator-scripting-language, gitgraph.js, haskell-uri-encode, hoel, iniparser, its, jquery-areyousure, kodi, libcatmandu-mods-perl, libcatmandu-template-perl, libcatmandu-xml-perl, libcatmandu-xsd-perl, libcode-tidyall-plugin-sortlines-naturally-perl, libgdamm5.0, libinfinity, libmods-record-perl, libreoffice-dictionaries, libset-intervaltree-perl, libsodium, linux, linux-grsec, ltsp-manager, lxqt-themes, mailman3-core, measurement-kit, mini-buildd, musescore, node-babel, node-babel-eslint, node-babel-loader, node-babel-plugin-add-module-exports, node-babel-plugin-transform-define, node-gulp-newer, node-regenerate-unicode-properties, node-regexpu-core, node-regjsparser, node-unicode-data, node-unicode-loose-match, openjdk-9, orafce, pgaudit, pgsql-ogr-fdw, pk4, postgresql-mysql-fdw, powa-archivist, python-azure-devtools, python-colormap, python-darkslide, python-dotenv, python-karborclient, python-logfury, python-lupa, python-marshmallow, python-murano-pkg-check, python-octaviaclient, python-pathspec, python-pgpy, python-pydub, python-randomize, python-sabyenc, python-searchlightclient, python-stestr, python-subunit2sql, python-twitter, python-utils, python-wsgilog, r-cran-bindr, r-cran-desc, r-cran-hms, r-cran-readstata13, r-cran-rprojroot, r-cran-wikidatar, r-cran-wikipedir, r-cran-wikitaxa, repmgr, requests-file, resteasy3.0, sdl-kitchensink, stardicter, systemd-el, thunderbird, tomcat8.0, uwsgi-plugin-luajit, uwsgi-plugin-mongo, uwsgi-plugin-php & uwsgi-plugin-v8. I additionally filed 3 RC bugs against packages that had incomplete debian/copyright files: fonttools, generator-scripting-language & libsodium.
