Search Results: "mario"

27 March 2016

Lunar: Reproducible builds: week 48 in Stretch cycle

What happened in the reproducible builds effort between March 20th and March 26th:

Toolchain fixes
  • Sebastian Ramacher uploaded breathe/4.2.0-1 which makes its output deterministic. Original patch by Chris Lamb, merged upstream.
  • Rafael Laboissiere uploaded octave/4.0.1-1 which allows packages to be built in place and avoid unreproducible builds due to temporary build directories appearing in the .oct files.
  • Daniel Kahn Gillmor worked on removing the build path from build symbols by submitting a patch adding -fdebug-prefix-map to clang to match GCC, another patch against gcc-5 to backport the removal of -fdebug-prefix-map from DW_AT_producer, and finally by proposing the addition of a normalizedebugpath feature to the reproducible feature set of dpkg-buildflags that would use -fdebug-prefix-map to replace the current directory with ".".
  • As a successful result of lobbying at LibrePlanet 2016, the --clamp-mtime option will be featured in the next Tar release. This option is likely to be used by dpkg-deb to implement deterministic mtimes for packaged files.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: augeas, gmtkbabel, ktikz, octave-control, octave-general, octave-image, octave-ltfat, octave-miscellaneous, octave-mpi, octave-nurbs, octave-octcdf, octave-sockets, octave-strings, openlayers, python-structlog, signond.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Patches submitted which have not made their way to the archive yet:
  • #818742 on milkytracker by Reiner Herrmann: sorts the list of source files.
  • #818752 on tcl8.4 by Reiner Herrmann: sort source files using C locale.
  • #818753 on tk8.6 by Reiner Herrmann: sort source files using C locale.
  • #818754 on tk8.5 by Reiner Herrmann: sort source files using C locale.
  • #818755 on tk8.4 by Reiner Herrmann: sort source files using C locale.
  • #818952 on marionnet by ceridwen: dummy out build date and uname to make build reproducible.
  • #819334 on avahi by Reiner Herrmann: ship upstream changelog instead of the one generated by gettextize (although duplicate of #804141 by Santiago Vila).

tests.reproducible-builds.org

i386 build nodes have been set up by converting 2 of the 4 amd64 nodes to i386. (h01ger)

Package reviews

92 reviews have been removed, 66 added and 31 updated in the previous week. New issues: timestamps_generated_by_xbean_spring, timestamps_generated_by_mangosdk_spiprocessor. Chris Lamb filed 7 FTBFS bugs.

Misc.

On March 20th, Chris Lamb gave a talk at FOSSASIA 2016 in Singapore. The very same day, but a few timezones apart, h01ger gave a presentation at LibrePlanet 2016 in Cambridge, Massachusetts. Seven GSoC/Outreachy applications were made by potential interns to work on various aspects of the reproducible builds effort. On top of interacting with several applicants, prospective mentors gathered to review the applications. Huge thanks to Linda Naeun Lee for the new hackergotchi visible on Planet Debian.

21 February 2016

Mario Lang: Generating C++ from a DTD with Jinja2 and lxml

I recently stumbled across an XML format specified in a DTD that I wanted to work with from within C++. The XML format is document-centric, which, in my limited experience, is a bit of a pain with existing data binding compilers. So to learn something new, and to keep control over the generated code, I started to investigate what it would take to write my own little custom data binding compiler.
Writing a program that writes a program

It turns out that there are two very helpful libraries in Python which can really make your life a lot easier: lxml, whose etree.DTD class can parse and introspect a DTD, and Jinja2, a powerful templating engine.

To keep my life simple, I am focusing on generating accessors for XML attributes only for now. I leave it up to the library client to figure out how to deal with child elements.
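As a quick illustration of the lxml side, lxml.etree.DTD exposes the element and attribute declarations of a parsed DTD; a minimal sketch (the tiny inline DTD here is made up purely for demonstration):

```python
from io import StringIO
from lxml.etree import DTD

# A made-up, minimal DTD just to show the introspection API.
dtd = DTD(StringIO("""
<!ELEMENT note (#PCDATA)>
<!ATTLIST note
  id ID #REQUIRED
  value CDATA #IMPLIED
  name (natural|flat|sharp) #REQUIRED>
"""))

for elem in dtd.iterelements():
    print(elem.name)  # note
    for attr in elem.iterattributes():
        # attr.type is e.g. 'id', 'cdata' or 'enumeration';
        # attr.default is e.g. 'required' or 'implied'.
        print(attr.name, attr.type, attr.default, list(attr.values()))
```

These are exactly the properties (elem.name, attr.type, attr.default, attr.values()) the templates below rely on.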
A highly simplified DOM

Inspired by the hybrid example from libstudxml, we define a simple base class that can store raw XML elements.
class element {
public:
  using attributes_type = std::map<xml::qname, std::string>;
  using elements_type = std::vector<std::shared_ptr<element>>;

  element(const xml::qname& name) : tag_name_(name) {}
  virtual ~element() = default;

  xml::qname const& tag_name() const { return tag_name_; }
  attributes_type const& attributes() const { return attributes_; }
  attributes_type&       attributes()       { return attributes_; }
  std::string const& text() const { return text_; }
  void text(std::string const& text) { text_ = text; }
  elements_type const& elements() const { return elements_; }
  elements_type&       elements()       { return elements_; }

  element(xml::parser&, bool start_end = true);
  void serialize(xml::serializer&, bool start_end = true) const;

  template<typename T> static std::shared_ptr<element> create(xml::parser& p) {
    return std::make_shared<T>(p, false);
  }

private:
  xml::qname tag_name_;
  attributes_type attributes_;
  std::string text_;           // Simple content only.
  elements_type elements_;     // Complex content only.
};
For each element name in the DTD, we're going to define a class that inherits from the element class, implementing special methods to make attribute access easier. The element(xml::parser&) constructor is going to create the corresponding class whenever it sees a certain element name. This calls for some sort of factory:
class factory {
public:
  static std::shared_ptr<element> make(xml::parser& p);
protected:
  struct element_info {
    xml::content content_type;
    std::shared_ptr<element> (*construct)(xml::parser&);
  };
  using map_type = std::map<xml::qname, element_info>;
  static map_type *get_map() {
    if (!map) map = new map_type;
    return map;
  }
private:
  static map_type *map;
};

template<typename T>
struct register_element : factory {
  register_element(xml::qname const& name, xml::content const& content) {
    get_map()->insert({name, element_info{content, &element::create<T>}});
  }
};
std::shared_ptr<element> factory::make(xml::parser& p) {
  auto name = p.qname();
  auto iter = get_map()->find(name);
  if (iter == get_map()->end()) {
    // No subclass found, so store plain data so we do not lose it on roundtrip.
    return std::make_shared<element>(p, false);
  }
  auto const& element = iter->second;
  p.content(element.content_type);
  return element.construct(p);
}
The header template

Now that we have our required infrastructure, we can finally start writing Jinja2 templates to generate classes for all elements in our DTD:
{%- for elem in dtd.iterelements() %}
  {%- if elem.name in forwards_for %}
    {%- for forward in forwards_for[elem.name] %}
class {{forward}};
    {%- endfor %}
  {%- endif %}
class {{elem.name}} : public dom::element {
  static register_element<{{elem.name}}> factory_registration;
public:
  {{elem.name}}(xml::parser& p, bool start_end = true) : dom::element(p, start_end) {
  }
  {%- for attr in elem.iterattributes() %}
    {%- if attr is required_string_attribute %}
  std::string {{attr.name}}() const;
  void {{attr.name}}(std::string const&);
    {%- elif attr is implied_string_attribute %}
  optional<std::string> {{attr.name}}() const;
  void {{attr.name}}(optional<std::string>);
    {# more branches to go here #}
    {%- endif %}
  {%- endfor %}
};
{%- endfor %}
required_string_attribute and implied_string_attribute are so-called Jinja2 tests. They are a nice way to isolate predicates such that the Jinja2 templates can stay relatively free of complicated expressions:
templates.tests['required_string_attribute'] = lambda a: \
  a.type in ['id', 'cdata', 'idref'] and a.default == 'required'
templates.tests['implied_string_attribute'] = lambda a: \
  a.type in ['id', 'cdata', 'idref'] and a.default == 'implied'
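Because these tests are plain Python callables, they can be checked without rendering any template. A minimal sketch, using a hypothetical FakeAttr namedtuple to stand in for lxml's DTD attribute objects:

```python
from collections import namedtuple

# Hypothetical stand-in for lxml's DTD attribute objects, which expose
# .type ('id', 'cdata', 'idref', ...) and .default ('required', 'implied', ...).
FakeAttr = namedtuple('FakeAttr', ['name', 'type', 'default'])

required_string_attribute = lambda a: \
    a.type in ['id', 'cdata', 'idref'] and a.default == 'required'
implied_string_attribute = lambda a: \
    a.type in ['id', 'cdata', 'idref'] and a.default == 'implied'

print(required_string_attribute(FakeAttr('id', 'id', 'required')))             # True
print(implied_string_attribute(FakeAttr('value', 'cdata', 'implied')))         # True
print(required_string_attribute(FakeAttr('name', 'enumeration', 'required')))  # False
```

An enumeration attribute falls through both tests, which is why the template needs the extra branches sketched below.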
That is nice, but we have only seen C++ header declarations so far. Let's have a look into the implementation of some of our attribute accessors.
Enum conversion

One interesting aspect of DTD-based code generation is that attributes can have enumerations specified. Assume that we have some extra data structure in Python which helps us to define a nice name for each individual enumeration attribute. Then, a part of the Jinja2 template to generate the implementation for an enumeration attribute looks like:
    {%- elif attr is known_enumeration_attribute %}
      {%- set enum = enumerations[tuple(attr.values())]['name'] %}
      {%- if attr.default == 'required' %}
{{enum}} {{elem.name}}::{{attr.name}}() const {
  auto iter = attributes().find(qname{"{{attr.name}}"});
  if (iter != attributes().end()) {
        {%- for value in attr.values() %}
    {% if not loop.first %}else {% else %}     {% endif -%}
    if (iter->second == "{{value}}") return {{enum}}::{{value|mangle}};
        {%- endfor %}
    throw illegal_enumeration{};
  }
  throw missing_attribute{};
}

void {{elem.name}}::{{attr.name}}({{enum}} value) {
  static qname const attr{"{{attr.name}}"};
  switch (value) {
        {%- for value in attr.values() %}
  case {{enum}}::{{value|mangle}}:
    attributes()[attr] = "{{value}}";
    break;
        {%- endfor %}
  default:
    throw illegal_enumeration{};
  }
}
      {%- elif attr.default == 'implied' %}
{# similar implementation using boost::optional #}
      {%- endif %}
    {%- endif %}
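The enumerations helper referenced above is that extra Python data structure. Its exact contents are not shown in the post; a hypothetical shape, keyed by the tuple of allowed attribute values from the DTD, could look like this (the concrete keys and enum names are invented for illustration):

```python
# Hypothetical example of the extra data structure the template assumes:
# each tuple of enumeration values from the DTD maps to a record holding
# the C++ enum name to generate for those values.
enumerations = {
    ('up', 'down'): {'name': 'up_down'},
    ('full', 'half', 'caesura'): {'name': 'full_half_caesura'},
}

values = ['up', 'down']                     # e.g. attr.values() from lxml
print(enumerations[tuple(values)]['name'])  # up_down
```

Keying by the value tuple lets several attributes that share the same value set reuse a single generated enum.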
Putting it all together

The header for the library is generated like this:
from jinja2 import DictLoader, Environment
from lxml.etree import DTD

LIBRARY_HEADER = """
{# Our template code #}
"""

bmml = DTD('bmml.dtd')

templates = Environment(loader=DictLoader(globals()))
templates.filters['mangle'] = lambda ident: \
  {'8th_or_128th': 'eighth_or_128th',
   '256th': 'twohundredfiftysixth',
   'continue': 'continue_'
  }.get(ident, ident)

def template(name):
  return templates.get_template(name)

def hpp():
  print(template('LIBRARY_HEADER').render(
    {'dtd': bmml,
     'enumerations': enumerations,
     'forwards_for': {'ornament': ['ornament_type'],
                      'score': ['score_data', 'score_header']}
    }))
With all of this in place, we can have a look at a small use case for our library.
Printing document content

I haven't really explained anything about the document format we're working with until now. Braille Music Markup Language is an XML-based plain text markup language. Its purpose is to enhance plain braille music scores with usually hard-to-calculate meta information. Almost all element text content is supposed to be printed as-is to reconstruct the original plain text. So we could at least define one very basic operation in our library: printing the plain text content of an element.

I found an XML stylesheet that is supposed to convert BMML documents to HTML. This stylesheet apparently has a bug, insofar as it forgets to treat the rest_data element in the same way as it already treats the note_data element. Note to self: I wish I had done a code review before the EU project that developed BMML was finished. It looks like resurrecting maintenance is one of the things I might be able to look into at a meeting in Pisa in the first three days of March this year.

If we keep this in mind, we can easily reimplement what the stylesheet does in idiomatic C++:
template<typename T>
typename std::enable_if<std::is_base_of<element, T>::value, std::ostream&>::type
operator<<(std::ostream &out, std::shared_ptr<T> elem) {
  if (!std::dynamic_pointer_cast<note_data>(elem) &&
      !std::dynamic_pointer_cast<rest_data>(elem) &&
      !std::dynamic_pointer_cast<score_header>(elem))
  {
    auto const& text = elem->text();
    if (text.empty()) for (auto child : *elem) out << child; else out << text;
  }
  return out;
}
The use of std::enable_if is necessary here so that operator<< is defined for the element class and all of its subclasses. Without the std::enable_if magic, client code would be forced to manually make sure it is passing std::shared_ptr<element> each time it wants to use operator<< on any of our specially defined subclasses. Now we can easily print BMML documents and get their actual plain text representation.
#include <fstream>
#include <iostream>

#include <xml/parser>
#include <xml/serializer>

#include "bmml.hxx"

using namespace std;
using namespace xml;

int main (int argc, char *argv[]) {
  if (argc < 2) {
    cerr << "usage: " << argv[0] << " [<filename.bmml>...]" << endl;
    return EXIT_FAILURE;
  }
  try {
    for (int i = 1; i < argc; ++i) {
      ifstream ifs{argv[i]};
      if (ifs.good()) {
        parser p{ifs, argv[i]};
        p.next_expect(parser::start_element, "score", content::complex);
        cout << make_shared<bmml::score>(p, false) << endl;
        p.next_expect(parser::end_element, "score");
      } else {
        cerr << "Unable to open '" << argv[i] << "'." << endl;
        return EXIT_FAILURE;
      }
    }
  } catch (xml::exception const& e) {
    cerr << e.what() << endl;
    return EXIT_FAILURE;
  }
}
That's it for now. The full source for the actual library which inspired this post can be found on GitHub in my bmmlcxx project. If you have any comments or questions, send me mail. If you like bmmlcxx, don't forget to star it :-).

1 February 2016

Russ Allbery: Review: Oathblood

Review: Oathblood, by Mercedes Lackey
Series: Vows and Honor #3
Publisher: DAW
Copyright: April 1998
ISBN: 0-88677-773-9
Format: Mass market
Pages: 394
I have this story collection listed as the third book in the Vows and Honor series, but as mentioned in the review of The Oathbound, it's more complicated than that. This book has the first Tarma and Kethry story, which is not found in The Oathbound, and two of the better stories from that volume. This is probably the place to start for the series; you're not missing that much from the rest of that book. However, the last three stories ("Wings of Fire," "Spring Plowing at Forst Reach," and "Oathblood") have significant spoilers for Oathbreakers. Therefore, if you care about both avoiding spoilers and reading this series, my recommended reading order is to ignore The Oathbound entirely, read Oathblood up to but not including "Wings of Fire," read Oathbreakers, and then come back here for the last two stories.

"Sword-sworn": This is the very first Tarma and Kethry story and hence where this series actually begins. As Lackey notes in her introduction, it's a pretty stock "rape and revenge" story, which is not something I particularly enjoy. Marion Zimmer Bradley liked it well enough to accept it anyway, and I can sort of see why: the dynamic between the two characters sparkles in a few places, and the Shin'a'in world-building isn't bad. The plot, though, is very predictable and not very notable. There isn't much here that you'd be surprised by if you'd read references to these events in later stories. And there's no explanation of a few things one might be curious about, such as where Need came from. (6)

"Turnabout": This is one of the two stories also found in The Oathbound. Merchants are plagued by bandits who manage to see through ruses and always catch their guards by surprise (with a particularly nasty bit of rape and murder in one case; Tarma and Kethry stories have quite a lot of that). That's enough to get the duo to take the job of luring out the bandits and dealing with them, using a nice bit of magical disguise. This story is also a song on one of the Vows and Honor albums from Firebird (which I also have). It was one of my favorites of Lackey's songs, so I want to like the story (and used to like it a great deal). Unfortunately, the very nasty bit of revenge that the supposed heroes take at the end of the story completely destroyed my enjoyment of it on re-reading. It's essentially a glorification of prison rape, which is a trope that I no longer have any patience for. (4)

"The Making of a Legend": In order to explain the differences between the song based on "Turnabout" and the actual story, Lackey invented a bard, Leslac, who loves writing songs about Tarma and Kethry and regularly gets the details wrong, mostly by advertising them as moral crusaders for women instead of mercenaries who want to get paid, much to their deep annoyance. This is his debut in an actual story, featuring an incident that's delightfully contrary to Leslac's expectations. It's a slight story, but I thought it was fun. (6)

"Keys": Another story from The Oathbound, this is a locked-room mystery with a bit of magical sleuthing. Kethry attempts to prove that a woman did not murder her husband while Tarma serves as her champion in a (rather broken) version of trial by combat. I think the version here is better than the edited version in The Oathbound, and it's a fairly enjoyable bit of sleuthing. (7)

"A Woman's Weapon": I would call this the typical Tarma and Kethry story (except that, for a change, it's missing the rape): they stumble across some sort of serious injustice and put things to right with some hard thinking and a bit of poetic justice. In this case, it's a tannery that's poisoning the land, and a master tanner who can't put a stop to his rival. Competent although not particularly memorable. (6)

"The Talisman": A rather depressing little story about a mage who wants shortcuts and a magic talisman that isn't what it appears to be. Not one of my favorites, in part because it has some common Tarma and Kethry problems: unnecessary death, a feeling that the world is very dangerous and that mistakes are fatal, and narrative presentation of the people who die from their stupidity as deserving it. I couldn't shake the feeling that there was probably some better way of resolving this if people had just communicated a bit better. (5)

"A Tale of Heroes": Back to the rape, unfortunately, plus a bit of very convenient match-making that I found extremely dubious. For all that Lackey's introduction paints this as a story of empowering people to follow their own paths, the chambermaid of this story didn't seem to have many more choices in her life after meeting Tarma and Kethry than before, even if her physical situation was better. I did like the touch of Tarma and Kethry not being the heroes and victors in the significant magical problem they stumble across, though, and it's a warm-hearted story if you ignore the effects of trauma as much as the story ignores them. (6)

"Friendly Fire": An amusing short story about the power of bad luck and Murphy's Law. It hit one of my pet peeves at one point, where Lackey tries to distort the words of someone with a cold and just makes the dialogue irritating to read, but otherwise a lot of fun. (7)

"Wings of Fire": I love the Hawkbrothers, so it's always fun when they show up. The villain of this piece is way over the top and leaves much to be desired, but the guest-starring Hawkbrother mostly makes up for it. Once again, Tarma and Kethry get out of a tight spot by thinking harder instead of by having more power, although the villain makes that rather easy via overconfidence. Once again, though, the poetic justice that Lackey's protagonists enjoy leaves a bad taste in my mouth, although it's not quite as bad here as some other stories. (6)

"Spring Plowing at Forst Reach": On one level, this is a rather prosaic story about training horses (based on Lackey's experience and reading, so a bit better than typical fantasy horse stories). But it's set at Forst Reach, Vanyel's home, some years after Vanyel's time. I like those people and their gruff approach to life, and it meshes well with Tarma and Kethry's approach. If you enjoy the two showing off their skills and wowing people with new ideas, you'll have fun with this. (7)

"Oathblood": As you might guess from the matching title, this novella is the heart of the book and about a quarter of its length. We get to see Kethry's kids, see more of their life in their second (post-Oathbreakers) career, and then get a rather good adventure story of resourceful and thoughtful youngsters, with a nice touch of immature but deeply-meant loyalty. I didn't enjoy it as much as I would have without one of the tactics the kids use to get out of trouble, but my dislike for reading about other people's bowel troubles is partly a personal quirk. This is a pretty typical Lackey story of resourcefulness and courage; if you like this series in general, you'll probably enjoy this one. (7)

Rating: 7 out of 10

13 December 2015

Robert Edmonds: Works with Debian: Intel SSD 750, AMD FirePro W4100, Dell P2715Q

I recently installed new hardware in my primary computer running Debian unstable. The disk used for the / and /home filesystems was replaced with an Intel SSD 750 series NVM Express card. The graphics card was replaced by an AMD FirePro W4100 card, and two Dell P2715Q monitors were installed. Intel SSD 750 series NVM Express card This is an 800 GB SSD on a PCI-Express x4 card (model number SSDPEDMW800G4X1) using the relatively new NVM Express interface, which appears as a /dev/nvme* device. The stretch alpha 4 Debian installer was able to detect and install onto this device, but grub-installer 1.127 on the installer media was unable to install the boot loader. This was due to a bug recently fixed in 1.128:
grub-installer (1.128) unstable; urgency=high
  * Fix buggy /dev/nvme matching in the case statement to determine
    disc_offered_devfs (Closes: #799119). Thanks, Mario Limonciello!
 -- Cyril Brulebois <kibi@debian.org>  Thu, 03 Dec 2015 00:26:42 +0100
I was able to download and install the updated .udeb by hand in the installer environment and complete the installation. This card was installed on a Supermicro X10SAE motherboard, and the UEFI BIOS was able to boot Debian directly from the NVMe card, although I updated to the latest available BIOS firmware prior to the installation. It appears in lspci like this:
02:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)
(prog-if 02 [NVM Express])
    Subsystem: Intel Corporation SSD 750 Series [Add-in Card]
    Flags: bus master, fast devsel, latency 0
    Memory at f7d10000 (64-bit, non-prefetchable) [size=16K]
    Expansion ROM at f7d00000 [disabled] [size=64K]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
    Capabilities: [60] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [150] Virtual Channel
    Capabilities: [180] Power Budgeting <?>
    Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
    Capabilities: [270] Device Serial Number 55-cd-2e-41-4c-90-a8-97
    Capabilities: [2a0] #19
    Kernel driver in use: nvme
The card itself appears very large in marketing photos, but this is a visual trick: the photographs are taken with the low-profile PCI bracket installed, rather than the standard height PCI bracket which it ships installed with. smartmontools fails to read SMART data from the drive, although it is still able to retrieve basic device information, including the temperature:
root@chase 0 :~# smartctl -d scsi -a /dev/nvme0n1
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.3.0-trunk-amd64] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Vendor:               NVMe
Product:              INTEL SSDPEDMW80
Revision:             0135
Compliance:           SPC-4
User Capacity:        800,166,076,416 bytes [800 GB]
Logical block size:   512 bytes
Rotation Rate:        Solid State Device
Logical Unit id:      8086INTEL SSDPEDMW800G4                     1000CVCQ531500K2800EGN  
Serial number:        CVCQ531500K2800EGN
Device type:          disk
Local Time is:        Sun Dec 13 01:48:37 2015 EST
SMART support is:     Unavailable - device lacks SMART capability.
=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     31 C
Drive Trip Temperature:        85 C
Error Counter logging not supported
[GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']
Device does not support Self Test logging
root@chase 4 :~# 
Simple tests with cat /dev/nvme0n1 >/dev/null and iotop show that the card can read data at about 1 GB/sec, about twice as fast as the SATA-based SSD that it replaced. apt/dpkg now run about as fast on the NVMe SSD as they do on a tmpfs. Hopefully this device doesn't at some point require updated firmware, like some infamous SSDs have. AMD FirePro W4100 graphics card This is a graphics card capable of driving multiple DisplayPort displays at "4K" resolution and at a 60 Hz refresh rate. It has four Mini DisplayPort connectors, although I only use two of them. It was difficult to find a sensible graphics card. Most discrete graphics cards appear to be marketed towards video gamers who apparently must seek out bulky cards that occupy multiple PCI slots and have excessive cooling devices. (To take a random example, the ASUS STRIX R9 390X has three fans and brags about its "Mega Heatpipes".) AMD markets a separate line of "FirePro" graphics cards intended for professionals rather than gamers, although they appear to be based on the same GPUs as their "Radeon" video cards. The AMD FirePro W4100 is a normal half-height PCI-E card that fits into a single PCI slot and has a relatively small cooler with a single fan. It doesn't even require an auxiliary power connection and is about the same dimensions as older video cards that I've successfully used with Debian. It was difficult to determine whether the W4100 card was actually supported by an open source driver before buying it. The word "FirePro" appears nowhere on the webpage for the X.org Radeon driver, but I was able to find a "CAPE VERDE" listed as an engineering name which appears to match the "Cape Verde" code name for the FirePro W4100 given on Wikipedia's List of AMD graphics processing units. This explains the "verde" string that appears in the firmware filenames requested by the kernel (available only in the non-free/firmware-amd-graphics package):
[drm] initializing kernel modesetting (VERDE 0x1002:0x682C 0x1002:0x2B1E).
[drm] Loading verde Microcode
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_pfp.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_me.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_ce.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_rlc.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_mc.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_smc.bin
The card appears in lspci like this:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde GL [FirePro W4100]
(prog-if 00 [VGA controller])
    Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Device 2b1e
    Flags: bus master, fast devsel, latency 0, IRQ 55
    Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Memory at f7e00000 (64-bit, non-prefetchable) [size=256K]
    I/O ports at e000 [size=256]
    Expansion ROM at f7e40000 [disabled] [size=128K]
    Capabilities: [48] Vendor Specific Information: Len=08 <?>
    Capabilities: [50] Power Management version 3
    Capabilities: [58] Express Legacy Endpoint, MSI 00
    Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
    Capabilities: [150] Advanced Error Reporting
    Capabilities: [200] #15
    Capabilities: [270] #19
    Kernel driver in use: radeon
The W4100 appears to work just fine, except for a few bizarre error messages that are printed to the kernel log when the displays are woken from power saving mode:
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
There don't appear to be any ill effects from these error messages, though. I have the following package versions installed:
 / Name                          Version             Description
+++-=============================-===================-================================================
ii  firmware-amd-graphics         20151207-1          Binary firmware for AMD/ATI graphics chips
ii  linux-image-4.3.0-trunk-amd64 4.3-1~exp2          Linux 4.3 for 64-bit PCs
ii  xserver-xorg-video-radeon     1:7.6.1-1           X.Org X server -- AMD/ATI Radeon display driver
The Supermicro X10SAE motherboard has two PCI-E 3.0 slots, but they're listed as functioning in either "16/NA" or "8/8" mode, which apparently means that putting anything in the second slot (like the Intel 750 SSD, which uses an x4 link) causes the video card to run at a smaller x8 link width. This can be verified by looking at the widths reported in the "LnkCap" and "LnkSta" lines in the lspci -vv output:
root@chase 0 :~# lspci -vv -s 01:00.0 | egrep '(LnkCap|LnkSta):'
        LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
        LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
root@chase 0 :~# 
I did not notice any visible artifacts or performance degradation because of the smaller link width. The sensors utility from the lm-sensors package is capable of reporting the temperature of the GPU:
root@chase 0 :~# sensors radeon-pci-0100
radeon-pci-0100
Adapter: PCI adapter
temp1:        +55.0 C  (crit = +120.0 C, hyst = +90.0 C)
root@chase 0 :~# 
Dell P2715Q monitors Two new 27" Dell monitors with a native resolution of 3840x2160 were attached to the new graphics card. They replaced two ten-year-old Dell 2001FP monitors with a native resolution of 1600x1200 that had experienced burn-in, providing 4.32 times as many pixels. (TV and monitor manufacturers now shamelessly refer to the 3840x2160 resolution as "4K" resolution even though neither dimension reaches 4000 pixels.) There was very little to set up beyond plugging the DisplayPort inputs on these monitors into the DisplayPort outputs on the graphics card. Most of the setup involved reconfiguring software to work with the very high resolution. X.org, for tl;dr CLOSED NOTABUG reasons, doesn't set the DPI correctly. These monitors have ~163 DPI resolution, so I added -dpi 168 to /etc/X11/xdm/Xservers. (168 is an even 1.75x multiple of 96.) Software like Google Chrome and xfce4-terminal rendered fonts and graphical elements at the right size, but other software like notion, pidgin, and virt-manager did not fully understand the high DPI. E.g., pidgin renders fonts at the correct size, but icons are too small. The default X cursor was also too small. To fix this, I installed the dmz-cursor-theme package, ran update-alternatives --config x-cursor-theme and selected /usr/share/icons/DMZ-Black/cursor.theme as the cursor theme. Overall, these displays are much brighter and more readable than the ones they replaced.
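For reference, the -dpi 168 override described above is just an extra flag on the X server command line in /etc/X11/xdm/Xservers. A typical line would look something like the following (the display number, VT, and X server path here are assumptions and may differ per system):

```
:0 local /usr/bin/X :0 vt7 -dpi 168
```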

4 November 2015

Mario Lang: Blind through the night

"Graz nightlife as seen from the perspective of a blind couple" has just been uploaded to YouTube. This is a collection of many short clips I made while going to tekno and DnB parties. In case you don't know already, my girlfriend and I are both legally blind.

20 October 2015

Russ Allbery: Review: The Oathbound

Review: The Oathbound, by Mercedes Lackey
Series: Vows and Honor #1
Publisher: DAW
Copyright: July 1988
ISBN: 0-88677-414-4
Format: Mass market
Pages: 302
This book warrants a bit of explanation. Before Arrows of the Queen, before Valdemar (at least in terms of publication dates), came Tarma and Kethry short stories. I don't know if they were always intended to be set in the same world as Valdemar; if not, they were quickly included. But they came from another part of the world and a slightly different sub-genre. While the first two Valdemar trilogies were largely coming-of-age fantasy, Tarma and Kethry are itinerant sword-and-sorcery adventures featuring two women with a soul bond: the conventionally attractive, aristocratic mage Kethry, and the celibate, goddess-sworn swordswoman Tarma. Their first story was published, appropriately, in Marion Zimmer Bradley's Swords and Sorceress III. This is the first book about Tarma and Kethry. It's a fix-up novel: shorter stories, bridged and re-edited, and glued together with some additional material. And it does not contain the first Tarma and Kethry story. As mentioned in my earlier Valdemar reviews, this is a re-read, but it's been something like twenty years since I previously read the whole Valdemar corpus (as it was at the time; I'll probably re-read everything I have on hand, but it's grown considerably, and I may not chase down the rest of it). One of the things I'd forgotten is how oddly, from a novel reader's perspective, the Tarma and Kethry stories were collected. Knowing what I know now about publishing, I assume Swords and Sorceress III was still in print at the time The Oathbound was published, or the rights weren't available for some other reason, so their first story had to be omitted. Whatever the reason, The Oathbound starts with a jarring gap that's no less irritating in this re-read than it was originally. Also as is becoming typical for this series, I remembered a lot more world-building and character development than is actually present in at least this first book. 
In this case, I strongly suspect most of that characterization is in Oathbreakers, which I remember as being more of a coherent single story and less of a fix-up of puzzle and adventure stories with scant time for character growth. I'll be able to test my memory shortly. What we do get is Kethry's reconciliation of her past, a brief look at the Shin'a'in and the depth of Tarma and Kethry's mutual oath (unfortunately told more than shown), the introduction of Warrl (again, a relationship that will grow a great deal more depth later), and then some typical sword and sorcery episodes: a locked room mystery, a caravan guard adventure about which I'll have more to say later, and two rather unpleasant encounters with a demon. The material is bridged enough that it has a vague novel-like shape, but the bones of the underlying short stories are pretty obvious. One can tell this isn't really a novel even without the tell of a narrative recap in later chapters of events that you'd just read earlier in the same book. What we also get is rather a lot of rape, and one episode of seriously unpleasant "justice." A drawback of early Lackey is that her villains are pure evil. My not entirely trustworthy memory tells me that this moderates over time, but early stories tend to feature villains completely devoid of redeeming qualities. In this book alone one gets to choose between the rapist pedophile, the rapist lord, the rapist bandit, and the rapist demon who had been doing extensive research in Jack Chalker novels. You'll notice a theme. Most of the rape happens off camera, but I was still thoroughly sick of it by the end of the book. This was already a cliched motivation tactic when these stories were written. Worse, as with the end of Arrow's Flight, the protagonists don't seem to be above a bit of "turnabout is fair play." When you're dealing with rape as a primary plot motivation, that goes about as badly as you might expect. 
The final episode here involves a confrontation that Tarma and Kethry brought entirely on themselves through some rather despicable actions, and from which they should have taken a lesson about why civilized societies have criminal justice systems. Unfortunately, despite an ethical priest who is mostly played for mild amusement, no one in the book seems to have drawn that rather obvious conclusion. This, too, I recall as getting better as the series goes along and Lackey matures as a writer, but that only helps marginally with the early books. Some time after the publication of The Oathbound and Oathbreakers, something (presumably the rights situation) changed. Oathblood was published in 1998 and includes not only the first Tarma and Kethry story but also several of the short stories that make up this book, in (I assume) something closer to their original form. That makes The Oathbound somewhat pointless and entirely skippable. I re-read it first because that's how I first approached the series many years ago, and (to be honest) because I'd forgotten how much was reprinted in Oathblood. I'd advise a new reader to skip it entirely, start with the short stories in Oathblood, and then read Oathbreakers before reading the final novella. You'd miss the demon stories, but that's probably for the best. I'm complaining a lot about this book, but that's partly from familiarity. If you can stomach the rape and one stunningly unethical protagonist decision, the stories that make it up are solid and enjoyable, and the dynamic between Tarma and Kethry is always a lot of fun (and gets even better when Warrl is added to the mix). I think my favorite was the locked room mystery. It's significantly spoiled by knowing the ending, and it has little deeper significance, but it's a classic sort of unembellished, unapologetic sword-and-sorcery tale that's hard to come by in books. But since it too is reprinted (in a better form) in Oathblood, there's no point in reading it here.
Followed by Oathbreakers. Rating: 6 out of 10

14 October 2015

Mario Lang: Accidentals in Haskell

I've had quite some fun recently (re)learning Haskell. My learning project is to implement braille music notation parsing in Haskell. Given that I've already implemented most of this stuff in C++, it gives me a great opportunity to rethink my algorithms. Not everything I've had to implement until now was actually pretty. I spent yesterday evening implementing accidentals handling, which turned out to be quite a mess. However, I wanted to share my definition of the circle of fifths, because I find it rather concise.
The problem: Given a key signature (often expressed as the number of sharp or flat accidentals), tell which pitch classes are actually raised/lowered. While reading through music notation software, I have seen several implementations of this basic concept. However, I have never seen one which was so concise.
module Accidental where
import           Data.Map (Map)
import qualified Data.Map as Map (fromList)
import qualified Haskore.Basic.Pitch as Pitch
fifths n | n >  0    = let [a,b,c,d,e,f,g] = fifths (n-1)
                       in  [d,e,f,g+1,a,b,c]
         | n <  0    = let [a,b,c,d,e,f,g] = fifths (n+1)
                       in  [e,f,g,a,b,c,d-1]
         | otherwise = replicate 7 0
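As a quick sanity check, here is a standalone sketch of the same fifths logic (the Haskore import is dropped so it runs on its own; the mapping of indices to note names in the comments is the convention used below, where index 0..6 stands for C, D, E, F, G, A, B):

```haskell
-- Standalone sketch of the circle-of-fifths logic above.
-- Index i of the result corresponds to the diatonic steps C D E F G A B;
-- a positive entry means that step is raised (sharp), negative lowered (flat).
fifths :: Int -> [Int]
fifths n | n >  0    = let [a,b,c,d,e,f,g] = fifths (n-1)
                       in  [d,e,f,g+1,a,b,c]
         | n <  0    = let [a,b,c,d,e,f,g] = fifths (n+1)
                       in  [e,f,g,a,b,c,d-1]
         | otherwise = replicate 7 0

main :: IO ()
main = do
  print (fifths 2)    -- two sharps (D major): C and F are raised
  print (fifths (-1)) -- one flat (F major): B is lowered
```

For two sharps this yields [1,0,0,1,0,0,0] (F# and C#), and for one flat [0,0,0,0,0,0,-1] (Bb), matching the usual key signatures.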
Given this, we can easily define a Map of pitches to currently active accidentals/alterations. List comprehension to the rescue!
accidentals :: Int -> Map Pitch.T Pitch.Relative
accidentals k = Map.fromList [ ((o, c), a)
                             | o <- [0..maxOctave]
                             , (c, a) <- zip diatonicSteps $ fifths k
                             , a /= 0
                             ] where
  maxOctave = 9
  diatonicSteps = [Pitch.C, Pitch.D, Pitch.E, Pitch.F, Pitch.G,
                   Pitch.A, Pitch.B]
The full source code for the haskore-braille (WIP) package can be found on GitHub. If you have any comments regarding the implementation, please drop me a mail.

5 May 2015

Miriam Ruiz: SuperTuxKart 0.9: The other side of the story

I approached the SuperTuxKart community fearing some backlash due to last week's discussion about their release 0.9, to find instead a nice, friendly and welcoming community. I have already had some very nice talks with them since then, and they have patiently explained to me the sequence of events that led to the situation that I mentioned and that, for the sake of fairness, I consider that I have to share here too. You can read the log of the first conversation I had with them (the log has been edited and cleared up for clarity and readability). I seriously recommend reading it, it's an honest, friendly conversation, and it's first-hand. For those who don't already know the game:

All this story seems to start with the complaint of a six-year-old girl, a close relative of one of the developers and an STK user, who explained that she always felt that Mario Kart was better because there was a princess in it. I'm not particularly happy with princesses as role models for girls, but one thing I have always said is that we have to listen to kids and take their opinions into account, and I know that if I had such a request from one of the kids closer to me, I probably would have fulfilled it too. In any case, Free Software projects based on volunteer work are essentially a do-ocracy and it is assumed that whoever does the work gets to decide about it.

So that is how Princess Sara was added to the game. While developing it, I was assured that they took extra care that her proportions were somehow realistic, and not as distorted as we're used to seeing in Barbie or many Disney films. Sara is inspired by an OpenGameArt wizard and is not supposed to be a weak damsel in distress, but in fact a powerful character in the game's universe.

Sara is not the only playable female character. There are a few others: Suzanne (a monkey, Blender's mascot), Xue (XFCE's mouse) and Amanda (a panda, the mascot of Window Maker). Sara happens to be the only playable human character, male or female. While it has been argued that by adding that character, a player might have the impression that the rest of the characters would be male by default, I have been told that the intention is exactly the opposite, and that the fact that the only playable human character in the game is female should make it more attractive to girls. To some, at least. Here are some images of Sara:

So the fact is that they have invested a lot of time in developing Sara's model. I'm not an artist myself, so I don't know first-hand how much time and effort it takes to make such a model, but in any case it seems quite a lot. When they designed the beach track Gran Paradiso, they wanted to add people to the beach. That track is, in fact, inspired by a real place: Princess Juliana Airport. Time was running out and they wanted to publish a version with what they already had, so they used Sara's model in a bikini on the beach, with the intention of adding more people, male and female, later. The overall view of the beach would be:

This is how that track shows when the players are driving in it:

Now, about the poster for version 0.9: it is supposed to be inspired by the previous poster for version 0.8.1, only this time themed around Carnival (which is, in fact, a celebration in which sexualization of both genders is a core part). I know that there are accusations of cultural appropriation, but I couldn't know, as my white privilege probably shields me from seeing that. Up to now, no one has said anything about that, only Gunnar explaining his point of view as a non-native Mexican: "While the poster does not strike me as the most cautious possible, I do not see it as culturally offensive. It does not attempt to set a scene portraying what the cultures were really like; the portrait it paints is similar to so many fantasy recreations." In my opinion, even when the model is done in good taste, with no super-big breasts and no unrealistic waist, it's still depicting a girl without much clothing as the main element of the scene, with an attire, a posture and an attitude that clearly resemble Carnival and thus inevitably convey a message of sexualization. Even though I can't deny that it's a cute poster, it's one I wouldn't be happy to see, for example, in a school, if someone wanted to promote the game there. The author of the poster, anyway, tells me that he had a totally different intention when doing it, and he wanted to depict a powerful princess, in the center of SuperTuxKart's universe, celebrating the new engine.

About the panties showing every now and then, I've been told that it's something so hard to see that in fact you would really have to open the model itself to view them. I'm not saying that I like them, though; I think it would have been better if Sara had had short pants under the skirt, if she was going to drive the snowmobile in a dress, but I'm not sure if that's something important enough to condemn the game. The girl mentioned at the beginning of this post seems to have found the animation funny, started laughing, and said that Sara is very silly, and that was all. It's probably something more silly than naughty, I guess. Even though, as I said, it's something I don't like too much. I don't have to agree with the STK developers on everything, I guess.

There's one thing I would like to highlight about my conversations with the developers of SuperTuxKart, though: I like them. They seem to be as concerned about the wellbeing of kids as I am, they have their own ethical norms of what's acceptable and what's not, and they want to do something to be proud of. Sometimes many of these conflicts arise from a lack of trust. When I first saw the screenshots with the girl in a bikini and the panties showing, I was honestly concerned about the direction the project was taking. After having talked with the developers, I am calmer about it, because they seem to have their hearts in the right place; they care, they are motivated and they work hard. I don't know if a princess would be my first choice for a main female character, but at least their intention seems to be to give some girls a sensible role model in the game with whom they can identify.

1 May 2015

Miriam Ruiz: Sexualized depiction of women in SuperTuxKart 0.9

It has been recently discussed in the Debian-Women and Debian-Games mailing lists, but for all of you who don't read those mailing lists and might have kids or use free games with kids in the classroom, or stuff like that, I thought it might be good to talk about it here. SuperTuxKart is a free 3D kart racing game, similar to Mario Kart, with a focus on having fun over realism. The characters in the game are the mascots of free and open source projects, except for Nolok, who does not represent a particular open source project, but was created by the SuperTux Game Team as the enemy of Tux. On April 21, 2015, version 0.9 (not yet in Debian) was released, which used the Antarctica graphics engine (a derivative of Irrlicht) and enabled better graphics appearance and features such as dynamic lighting, ambient occlusion, depth of field, and global illumination. Along with this new engine comes a poster in which a sexualized white woman wears an outfit that can be described as a mix of Native American clothes from different nations and a halo of feathers, as well as many models of her in a bikini swimsuit all along the game, even in the hall of the airport. They say an image is worth more than a thousand words, don't they?

14 April 2015

Mario Lang: Bjarne Stroustrup talking about organisations that can raise expectations

At time index 22:35 in this video, Bjarne Stroustrup explains what he thinks is very special about organisations like Cambridge or Bell Labs. When I heard him explain this, I couldn't help but think of Debian. This is exactly how I felt (and actually still do) when I joined Debian as a Developer in 2002. This is, amongst other things, what makes Debian very special to me. If you don't want to watch the video, here is the excerpt I am talking about:
One of the things that Cambridge could do, and later Bell Labs could do, is somehow raise people's expectations of themselves. Raise the level that is considered acceptable. You walk in and you see what people are doing, you see how people are doing, you see how apparently easily they do it, and you see how nice they are while doing it, and you realize, I better sharpen up my game. This is something where you have to, you just have to get better. Because, what is acceptable has changed. And some organisations can do that, and well, most can't, to that extent. And I am very very lucky to be in a couple places that actually can increase your level of ambition, in some sense, level of what is a good standard.

9 April 2015

Mario Lang: A C++ sample collection

I am one of those people that best learns from looking at examples. No matter if I am trying to learn a programming pattern/idiom, or a completely new library or framework. Documentation is good (if it is good!) for diving into the details, but to get me started, I always want to look at a self contained example so that I can get a picture of the thing in my head. So I was very excited when a few days ago, CppSamples was announced on the ISO C++ Blog. While it is a very young site, it already contains some very useful gems. It is maintained over at GitHub, so it is also rather easy to suggest new additions, or improve the existing examples by submitting a pull request. Give it a try, it is really quite nice. In my book, the best resource I have found so far in 2015. BTW, Debian has a standard location for finding examples provided by a package. It is /usr/share/doc/<package>/examples/. I consider that very useful.

7 April 2015

Mario Lang: I am sorry, but this looks insane

I am a console user. I really just started to use X11 again about two weeks ago, to occasionally test a Qt application I am developing. I am not using Firefox or anything similar; all my daily work happens in shells and inside of emacs, in a console, not in X11. BRLTTY runs all the time, translating the screen content to something that my braille display can understand, sent out via USB. So the most important programs to me are really emacs and brltty. This is my desktop, which has been up for 179 days:
PID   USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1227 message+  20   0    7140   2860    672 S   0,0  0,1 153:33.10 dbus-daemon
21457 root      20   0   44456   1116    788 S   0,0  0,1 146:42.47 packagekitd
    1 root      20   0   24348   2808   1328 S   0,0  0,1 109:16.99 systemd
 7897 mlang     20   0  585776 121656   4188 S   0,0  6,0 105:22.40 emacs
13332 root      20   0   10744    432    220 S   0,0  0,0  91:55.96 ssh
19581 root      20   0    4924   1632   1076 S   0,0  0,1  53:33.56 systemd
19596 root      20   0   20312   9764   9660 S   0,0  0,5  48:10.76 systemd-journal
10172 root      20   0   85308   2472   1672 S   0,0  0,1  20:30.18 NetworkManager
   29 root      20   0       0      0      0 S   0,0  0,0  18:40.24 kswapd0
13334 root      20   0  120564   5748    304 S   0,0  0,3  16:20.89 sshfs
    7 root      20   0       0      0      0 S   0,0  0,0  15:21.15 rcu_sched
14245 root      20   0    7620    316    152 S   0,0  0,0  15:08.64 ssh
  438 root      20   0       0      0      0 S   0,0  0,0  12:14.80 jbd2/dm-1-8
11952 root      10 -10   42968   2028   1420 S   0,0  0,1  10:36.20 brltty
I am sorry, but this doesn't look right, not at all. I am not even beginning to talk about dbus-daemon and systemd. Why the HECK does packagekitd (which I definitely don't use actively) use up more than two hours of plain CPU time? What did it do, talk to the NSA via an asymmetric cipher, or what?! I play music via sshfs, sometimes FLAC files. That barely consumed more CPU time than brltty, which is probably the most active daemon on my system, erm, it should be. I don't want to chime into any flamewars. I have accepted that we have systemd. But this does not look right! I remember, back in the good old days, emacs and brltty were my top CPU users.

23 March 2015

Mario Lang: Why is Qt5 not displaying Braille?

While evaluating the cross-platform accessibility of Qt5, I stumbled across this deficiency:
#include <QApplication>
#include <QTextEdit>
int main(int argc, char **argv)
{
  QApplication app(argc, argv);
  QTextEdit textEdit;
  textEdit.setText(u8"\u28FF"); // U+28FF: BRAILLE PATTERN DOTS-12345678
  textEdit.show();
  return app.exec();
}
(Compile with -std=c++11.) On my system, this "application" does not always show the correct glyph. Sometimes it renders a white square with a black border, i.e., the symbol for an unknown glyph. However, if I invoke the same executable several times, it sometimes renders the glyph correctly. In other words: the glyph-choosing mechanism is apparently non-deterministic! UPDATE: Sune Vuorela figured out that I need to set QT_HARFBUZZ=old in the environment for this bug to go away. Apparently, harfbuzz-ng from Qt 5.3 is buggy.

18 March 2015

Mario Lang: Call for Help: BMC -- Braille Music Compiler

Since 2009, I have been pursuing a personal programming project. As I am not a professional programmer, I have spent quite a lot of that time exploring options. I have thrown out about three or four prototype implementations already. My latest implementation seems to contain enough accumulated wisdom to be actually useful. I am far from finished, but the path I am walking now seems relatively sound. So, what is this project about? I have set myself a rather ambitious goal: I am trying to implement a two-way bridge between visual music notation and braille music code. It is called BMC (Braille Music Compiler). My problem: I am, as some of you might remember, 100% blind. So I am trying to write a translator between something I will never see directly and its counterpart representation in a tactile encoding I had to learn from scratch to be able to work on this project. Braille music code is probably the most cryptic thing I have ever tried to learn. It is basically a method to represent a 2-dimensional structure like staff notation as a stream of characters encoded in 6-dot braille. As the goal above states, I am ultimately trying to implement a converter that works both ways. One of my prototypes already implemented reading digital staff notation (MusicXML) and transcribing it to braille. However, to be able to actually understand all the concepts involved, I ended up starting from the other end of the spectrum with my new implementation: parsing braille music code and emitting digital staff notation (LilyPond and MusicXML). This is a rather unique feature: while there is commercial (and very expensive) software out there to convert MusicXML to braille music code, there is, as far as I know, no system that allows one to input un-annotated braille music code and have it automatically converted to sighted music notation.
So the current state of things is that we are able to read certain braille music code formats, and output either reformatted (to a new line width) braille music code, LilyPond, or MusicXML. The ultimate goal is to also implement a MusicXML reader, and convert the data to something that can be output as braille music code. While the initial description might not sound very hard, there are a lot of complications arising from how braille music code works, which make this quite a programming challenge. For one, braille music note and rest values are ambiguous. A braille music note or rest that looks like a whole can mean a whole or a 16th. A braille music note or rest that looks like a half can mean a half or a 32nd. And so on. So each braille music code value can have two meanings. The actual value can be calculated with a recursive algorithm that I have worked out from scratch over the years. The original implementation was inspired by Samuel Thibault (thanks!) and has since evolved into something that does what we need, while trying to do it very fast. Most input documents can be processed in almost no time; however, time signatures with a value > 1 (such as 12/8) tend to make the number of possible choices explode quite heavily. I have found so far one piece from J.S. Bach (BWV 988, Variation 3) which takes about 1.5s on my 3GHz AMD (and the code is already using several CPU cores). Additionally, braille music code supports a form of "micro"-repetition not present in visual staff notation, which effectively allows certain musical patterns to be compressed when represented in braille. Another algorithmically interesting part of BMC that I have started to tackle just recently is the line-breaking problem. Braille music code has some peculiar rules when it comes to breaking a measure of musical material into several lines. I ended up adapting Donald E. Knuth's algorithm from "Breaking Paragraphs into Lines" for fixed-width text.
In other words, I am ignoring the stretch/shrink factors, while making use of different penalty values to find the optimal way to break a paragraph of braille music code into several lines. One thing that I have learnt from my previous prototype (which was apparently useful enough to already acquire some users) is that it is not enough to just transcribe one format to another. I ultimately want to store meta information about the braille that is presented to the user, such that I can implement interactive querying and editing features. Braille music code is complicated, and one of the original motivations to work on software to deal with it was to ease the learning curve. A user of BMC should be able to ask the system for a description of the character at a certain position. The user interface (not implemented yet) should allow playing a certain note interactively, playing the measure under the cursor, or playing the whole document, and if possible, have the cursor scroll along during playback. These features are not implemented in BMC yet, but they were implemented in the previous prototype, and their usefulness is apparent. Also, when viewing a MusicXML document as braille music code, certain non-structural changes like adding/removing fingering annotations should be possible while preserving unhandled features of the original MusicXML document. This, too, was implemented in the previous prototype, and is a goal for BMC.
I need your help. The reason why I am explaining all of this here is that I need your help for this project to succeed. Helping the blind to more easily work with traditional music notation is a worthwhile goal to pursue. There is no free system around that really tries to adhere to the braille music code standard and aims to cover conversion both ways. I have reached a level of conformance that surpasses every implementation of the same problem that I have seen so far on the net. However, the primary audience of this software is going to be using Windows. We desperately need a port to that OS, and a user interface resembling Notepad with a lot fewer menu entries. We also need a GTK interface that does the same thing on Linux. wxWidgets is unfortunately out of the question, since it does not provide the same level of accessibility on all the platforms it supports. Ideally, we'd also have a Cocoa interface for OS X. I am afraid there is no platform-independent GUI framework that offers the same level of accessibility on all supported platforms. And since much of our audience is going to rely on working accessibility, it looks like we need to implement three user interfaces to achieve this goal :-(. I also desperately need code reviews and inspiration from fellow programmers. BMC is a C++11 project making heavy use of Boost. If you are into one of these things, please give it a whirl, and emit pull requests, no matter how small they are. While I have learnt a lot in the last years, I am sure there are many places that could use some fresh winds of thought from people that are not me. I am suffering from what I call "the lone coder syndrome". I also need (technical) writers to help me complete the pieces of documentation that are already lying around. I have started to write a braille music tutorial based on the underlying capabilities of BMC.
In other words, the tutorial includes examples which are typeset in braille and staff notation, using LilyPond as a rendering engine. However, something like a user manual is missing, basically because the user interface is missing. BMC is currently "just" a command-line tool (good enough for me) that transcribes input files to STDOUT. This is very good for testing the backend, which is all that has mattered to me in the last years. However, BMC has now reached a stage where its functionality is likely useful enough to be exposed to users. While I try to improve things as steadily as I can, I realize that I really need to put out this call for help to make any useful progress in a foreseeable time. If you think it is a worthwhile goal to help the blind to more easily work with music notation, and also to enable communication between blind and sighted musicians in both directions, please take the time and consider how you could help this project advance. My email address can be found on my GitHub page. Oh, and while you are over at GitHub, make sure to star BMC if you think it is a nice project. It would be nice if we could produce an end-user-oriented release before the end of this year.

22 December 2014

Michael Prokop: Ten years of Grml

On 22nd of October 2004 an event called OS04 took place in the Seifenfabrik in Graz/Austria, and it marked the first official release of the Grml project. Grml was initially started by myself in 2003; I registered the domain on September 16, 2003 (so technically it would be 11 years already :)). It started with a boot disk, first created by hand and then based on yard. On 4th of October 2004 we had a first presentation of grml 0.09 Codename Bughunter at Kunstlabor in Graz. I managed to talk a good friend and fellow student, Martin Hecher, into joining me. Soon after, Michael Gebetsroither and Andreas Gredler joined, and throughout the upcoming years further team members (Nico Golde, Daniel K. Gebhart, Mario Lang, Gerfried Fuchs, Matthias Kopfermann, Wolfgang Scheicher, Julius Plenz, Tobias Klauser, Marcel Wichern, Alexander Wirt, Timo Boettcher, Ulrich Dangel, Frank Terbeck, Alexander Steinböck, Christian Hofstaedtler) and contributors (Hermann Thomas, Andreas Krennmair, Sven Guckes, Jogi Hofmüller, Moritz Augsburger, and others) joined our efforts. Back in those days most efforts went into hardware detection, loading and setting up the according drivers and configurations, packaging software, and fighting bugs with lots of reboots (working on our custom /linuxrc for the initrd wasn't always fun). Throughout the years virtualization became more broadly available, which is especially great for most of the testing you need to do when working on your own (meta) distribution. Once upon a time udev became available and solved most of the hardware detection issues for us. Nowadays X.org doesn't even need an xorg.conf file anymore (at least by default). We have to acknowledge that Linux grew up quite a bit over the years (and I'm wondering how we'll look back at the systemd discussions in a few years).
By having Debian Developers within the team we managed to push quite a bit of our work back to Debian (the distribution Grml was and still is based on), years before the Debian Derivatives initiative appeared. We never stopped contributing to Debian, though, and we still benefit from the Debian Derivatives initiative, like sharing issues and ideas at DebConf meetings. On 28th of May 2009 I myself became an official Debian Developer. Over the years we moved from private self-hosted infrastructure to company-sponsored systems, and migrated from Subversion (brr) to Mercurial (2006) to Git (2008). Our Zsh-related work became widely known as grml-zshrc. jenkins.grml.org managed to become a continuous integration/deployment/delivery home, e.g. for the dpkg, fai, initramfs-tools, screen and zsh Debian packages. The underlying software for creating Debian packages in a CI/CD way became its own project, known as jenkins-debian-glue, in August 2011. In 2006 I started grml-debootstrap, which grew into a reliable method for installing plain Debian (nowadays even supporting installation as a VM, and one of my customers does tens of deployments per day with grml-debootstrap in a fully automated fashion). So one of the biggest achievements of Grml, from my point of view, is that it managed to grow several active and successful sub-projects under its umbrella. Nowadays the Grml team consists of 3 Debian Developers: Alexander Wirt (formorer), Evgeni Golov (Zhenech) and myself. We couldn't talk Frank Terbeck (ft) into becoming a DM/DD (yet?), but he's an active part of our Grml team nonetheless and does a terrific job with maintaining grml-zshrc, as well as helping out in Debian's Zsh packaging (and being a Zsh upstream committer at the same time makes all of that even better :)). My personal conclusion for 10 years of Grml? Back in the days when I was a student, Grml was my main personal pet project and hobby.
Grml grew into an open source project which wasn't known just in Graz/Austria, but especially throughout the German system administration scene. Since 2008 I'm working self-employed and mainly on open source stuff, so I'm kind of living a dream which I didn't even have when I started with Grml in 2003. Nowadays, with running my own business and having my own family, it's getting harder for me to still consider it a hobby; instead it's more integrated and part of my business, which I personally consider both good and bad at the same time (for various reasons). Thanks so much to all of you who were (and possibly still are) part of the Grml journey! Let's hope for another 10 successful years! Thanks to Max Amanshauser and Christian Hofstaedtler for reading drafts of this.

18 December 2014

Mario Lang: deluXbreed #2 is out!

The third installment of my crossbreed digital mix podcast is out! This time, I am featuring Harder & Louder and tracks from Behind the Machine and the recently released Remixes.
  1. Apolloud - Nagazaki
  2. Apolloud - Hiroshima
  3. SA+AN - Darksiders
  4. Im Colapsed - Cleaning 8
  5. Micromakine & Switch Technique - Ascension
  6. Micromakine - Cyberman (Dither Remix)
  7. Micromakine - So Good! (Synapse Remix)
How was DarkCast born and how is it done? I have always loved 175 BPM music. It is an old thing that is not going away soon :-). I recently found that there is a quite active culture going on, at least on Bandcamp. But single tracks are just that; not really fun to listen to, in my opinion. This sort of music needs to be mixed to be fun. In the past, I used to have most tracks I like/love on vinyl, so I did some real-world vinyl mixing myself. But these days, most fun music is only easily available digitally. Some people still do vinyl releases, but they are actually rare. So for my personal enjoyment, I started to digitally mix tracks I really love, such that I can listen to them without "interruption". And since I have been an iOS user for three years now, using the podcast format to get stuff onto my devices was quite a natural choice. I use SoX and a very small shell script to create these mixes. Here is a pseudo-template:
# Each "|sox ... -p" input is a nested SoX invocation fed in via a pipe;
# "speed" adjusts tempo, "delay" positions a track within the mix, and
# the outer --combine mix-power mixes all inputs together.
sox --combine mix-power \
  "|sox \"|sox 1.flac -p\" \"|sox 3.flac -p speed 0.987 delay 2:28.31 2:28.31\" -p" \
  "|sox \"|sox 2.flac -p delay 2:34.1 2:34.1\" -p" \
  mix.flac
As you can imagine, it is quite a bit of fiddling to get these scripts to do what you want. But it is a non-graphical method to get things done. If you know of a better tool to get the same job done, possibly with a bit of real-time control, without having to resort to a damn GUI, let me know.

14 December 2014

Mario Lang: Data-binding MusicXML

My long-term free software project (Braille Music Compiler) has just produced some offspring! xsdcxx-musicxml is now available on GitHub. I used CodeSynthesis XSD to generate a rather complete object model for MusicXML 3.0 documents. Some of the classes needed a bit of manual adjustment to make the client API really nice and tidy. During the process, I have learnt (as is almost always the case when programming) quite a lot. I have to say, once you get the hang of it, CodeSynthesis XSD is really a very powerful tool. I definitely prefer having these 100k lines of code auto-generated from an XML Schema, instead of having to implement small parts of it by hand. If you are into MusicXML for any reason, and you like C++, give this library a whirl. At least to me, it is what I was always looking for: rather type-safe, with a quite self-explanatory API. For added ease of integration, xsdcxx-musicxml is sub-project friendly. In other words, if your project uses CMake and Git, adding xsdcxx-musicxml as a subproject is as easy as using git submodule add and putting add_subdirectory(xsdcxx-musicxml) into your CMakeLists.txt. Finally, if you want to see how this library can be put to use: the MusicXML export functionality of BMC is all in one C++ source file: musicxml.cpp.
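The submodule integration described above might look like the following in a consumer project. This is a sketch under assumptions: the target name `myapp`, the submodule checkout path, and the exported library target name are all hypothetical; check the xsdcxx-musicxml repository for the actual target:

```cmake
# After running: git submodule add <xsdcxx-musicxml repository URL> xsdcxx-musicxml
add_subdirectory(xsdcxx-musicxml)

add_executable(myapp main.cpp)                # hypothetical consumer target
target_link_libraries(myapp xsdcxx-musicxml)  # library target name assumed
```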

24 October 2014

Stefano Zacchiroli: Italy puts Free Software first in public sector

Debian participation in Italy's CAD68 committee. (The initial policy change discussed in this document is a couple of years old now, but it took about the same time to be fully implemented, and AFAIK the role Debian played in it has not been documented yet.) In October 2012 the Italian government, led at the time by Mario Monti, did something rather innovative, at least for a country that is not usually ahead of its time in the area of information technology legislation. They decided to change the main law (the "CAD", for Codice dell'Amministrazione Digitale) that regulates the acquisition of software at all levels of the public administration (PA), giving an explicit preference to the acquisition of Free Software. The new formulation of article 68 of the CAD first lists some macro criteria (e.g., TCO, adherence to open standards, security support, etc.) that public administrations in Italy shall use as ranking criteria in software-related calls for tenders. Then, and this is the most important part, the article affirms that the acquisition of proprietary software solutions is permitted only if it is impossible to choose Free Software solutions instead, or, alternatively, software solutions that have already been acquired (and paid for) by the PA in the past, reusing preexisting software. The combined effect of these two provisions is that all new software acquisitions by PAs in Italy will be Free Software, unless it is justified in writing (in a way that can be challenged before a judge) that it was impossible to do otherwise. Isn't it great? It is, except that such a law is not necessarily easy to adhere to in practice, especially for small public administrations (e.g., municipalities of a few hundred people, not uncommon in Italy) which might have very little clue about software in general, and even less so about Free Software.
This is why the government also tasked the relevant Italian agency to provide guidelines on how to choose software in a way that conforms with the new formulation of article 68. The agency decided to form a committee to work on the guidelines (because you always need a committee, right? :-) ). To my surprise, the call for participation in the committee explicitly listed representatives of Free Software communities among the privileged software stakeholders they wanted on the committee; kudos to the agency for that. (The Italian wording of the call was: "Costituirà titolo di preferenza rivestire un ruolo di [...] referenti di community del software a codice sorgente aperto", i.e., serving as a representative of an open-source software community would count as a preferential qualification.) Therefore, after various prods by fellow European Free Software activists who were aware of the ongoing change in legislation, I applied to be a volunteer CAD68 committee member, got selected, and ended up working over a period of about 6 months (March-September 2013) to help the agency write the new software acquisition guidelines. Logistically, it hasn't been entirely trivial, as the default meeting place was in Rome, I live in Paris, and the agency didn't really have a travel budget for committee members. That's why I sought sponsorship from Debian, offering to represent Debian's views within the committee; Lucas kindly agreed to my request. So what did I do on behalf of Debian as a committee member during those months? Most of my job has been some sort of consulting on how community-driven Free Software projects like Debian work, on how the software they produce can be relied upon and contributed to, and more generally on how the PA can productively interact with such projects. In particular, I've been happy to work on the related work section of the guidelines, ensuring they point to relevant documents such as the French government guidelines on how to adopt Free Software (AKA circulaire Ayrault).
I've also drafted the guidelines section on Free Software directories, ensuring that important resources such as the FSF's Free Software Directory are listed as starting points for PAs that are looking for software solutions for specific needs. Another part of my job has been ensuring that the guidelines do not end up betraying the principle of Free Software preference that is embodied in article 68. A majority of committee members came from a Free Software background, so that might not seem a difficult goal to accomplish. But it is important to notice that: (a) the final editor of the guidelines is the agency itself, not the committee, so having a "pro-Free Software" majority within the committee doesn't mean much per se; and (b) lobbying from the "pro-proprietary software" camp did happen, as is entirely natural in these cases. In this respect I'm happy with the result: I do believe that the software selection process recommended by the guidelines, finally published in January 2014, upholds the Free Software preference principle of article 68. I credit both the agency and the non-ambiguity of the law (on this specific point) for that result. All in all, this has been a positive experience for me. It has reaffirmed my belief that Debian is a respected, non-partisan political actor in the wider software/ICT ecosystem. This experience has also given me a chance to be part of country-level policy-making, which has been very instructive on how and why good ideas might take a while to come into effect and influence citizens' lives. Speaking of which, I'm now looking forward to the first alleged violations of article 68 in Italy, and to how they will be dealt with. Abundant popcorn will certainly be needed. Links & press: If you want to know more about this topic, I've collected below links to resources that have documented, in various languages, the publication of the CAD68 guidelines.
