Search Results: "Matthias Klumpp"

11 January 2024

Matthias Klumpp: Wayland really breaks things… Just for now?

This post is in part a response to an aspect of Nate's post "Does Wayland really break everything?", but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months [1].

Some facts

Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland [2]; this is a fact at this point (and has been for a while), and sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with it at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great: it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in the future, if one is found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense! The shift towards people seeing Linux more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland, is also an amazing development and I love to see this: a holistic approach is something I always wanted! Furthermore, I think Wayland removes a lot of functionality that shouldn't exist in a modern compositor, and that's a good thing too! Some of X11's features and design decisions had clear drawbacks that we shouldn't replicate. I highly recommend reading Nate's blog post; it's very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.

But!

Of course there was a "but" coming. I think that while developing Wayland-as-an-ecosystem we are now entrenched in narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash, and people from different desktops with different design philosophies debate the merits of those over and over again, never reaching any conclusion (just as you will never get an answer out of humans on whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop's design philosophies, others prefer the smallest and leanest protocol possible, other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches. But this also creates problems: By switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but it is just work that has to be done. It becomes frustrating though if Wayland provides toolkits with absolutely no way to reach their goal in any reasonable way. For Nate's Photoshop analogy: Of course Linux does not break Photoshop, it is Adobe's responsibility to port it. But what if Linux was missing a crucial syscall that Photoshop needed for proper functionality and Adobe couldn't port it without that? In that case it becomes much less clear who is to blame for Photoshop not being available. A lot of Wayland protocol work is focused on the environment and design, while applications and the work to port them are often considered less. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers porting apps to Linux and having involvement with toolkits or Wayland is pretty much nonexistent. So they have less of a voice.

A quick detour through the neuroscience research lab

I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blog post). In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need for people to switch to a real Linux system entirely if there is the occasional software requiring it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed novel data acquisition software that runs on Linux only and uses the abilities of the platform to their fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. The vendor-customer relationship in science is usually pretty good, and vendors do usually want to help out. The same goes for open source projects, especially if you offer to do the Linux porting work for them. But overall, the ease of use and availability of required applications and their usability rule supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.
KDE/Linux usage at a control station for a particle accelerator at Adlershof Technology Park, Germany, for reference (by 25 years of KDE) [3]

Back to the point

The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu: they all do not matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of "easiest": in many cases this is RHEL due to Red Hat support contracts being available, in many other cases it is Ubuntu due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived as a bit easier to onboard Windows users with (among other benefits). Ultimately, it comes down to applications and 3rd-party support though. Here's a dirty secret: in many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with, and will carefully calculate the merits of in advance, is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all. So if they learn that porting to Linux not only means added testing and support, but also means choosing between the legacy X11 display server that allows for 1:1 porting from Windows, or the new Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen. Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things, or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt's documentation you will often find texts like "works everywhere except on Wayland compositors or mobile" [4]. Many missing bits or altered behaviors are just papercuts, but those add up. And if users will have a worse experience, this will translate to more support work, or to people not wanting to use the software on the respective platform.

What's missing?

Window positioning

SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns windows exactly in the way they want and expects the layout to be stored and to be loaded upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements as windows from a single-window application to move them around freely is another frequently used paradigm, and immensely useful with these complex apps. It is important to note that this is not a legacy design, but in many cases an intentional choice: these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop). Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way. I assumed for sure these features would be implemented at some point, but when it became clear that this would not happen, I created the ext-placement protocol, which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and have now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile though, we can not port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.

Window position restoration

Similarly, a protocol to save & restore window positions was already proposed in 2018, six years ago now, but it has still not been agreed upon, and may not even help multi-window apps in its current form. The absence of this protocol means that applications can not restore their former window positions, and the user has to move them to their previous places again and again. Meanwhile, toolkits can not adopt these protocols, so applications can not use them and can not be ported to Wayland without introducing papercuts.

Window icons

Similarly, individual windows can not set their own icons, and not-installed applications can not have an icon at all, because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it's not the end of the world if every window has the same icon, but it's one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows, like LibrePCB, are affected, so much so that they rather run their app through Xwayland for now. I decided to address this after I was working on data analysis of image data in a Python virtualenv, where my code and the Python libraries used created lots of windows, all with the default yellow "W" icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications can not use it yet.

Limited window abilities requiring specialized protocols

Firefox has a picture-in-picture feature, allowing it to pop out media from a media player as a separate floating window so the user can watch the media while doing other things. On X11 this is easily realized, but on Wayland the restrictions posed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized use case, but it is also not merged yet. So this feature does not work as well on Wayland.

Automated GUI testing / accessibility / automation

Automation of GUI tasks is a powerful feature, and so is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we're not fully there yet.

Wayland is frustrating for (some) application authors

As you can see, there are valid applications and valid use cases that can not yet be ported to Wayland with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author's perspective, Wayland does break things quite significantly, because things that worked before can no longer work and Wayland (the whole stack) does not provide any avenue to achieve the same result. Wayland does break screen sharing, global hotkeys, gaming latency (via "no tearing"), etc., however for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except "use Xwayland and be on emulation as a second-class citizen forever", it just results in very frustrated application developers. For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.

"Triangles are proven to be the best shape! If you are drawing circles you are creating bad art!"

Wayland, via its protocol limitations, forces a certain way to build application UX, often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out: Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes, in the case of the multi-window UI paradigm, impossible to achieve to the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.

The porting dilemma

I have probably spent way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren't the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details, even though they may be programmers as well; the real goal is not to fiddle with the display server, but to get to a scientific result somehow. Another issue is portability layers like Wine, which need to run Windows applications as-is on Wayland. Apparently Wine's Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.

A way out?

So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question: Do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications as possible ported over to Wayland, even though we might sacrifice some protocol purity? I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors. If the answer for your environment turns out to be "Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability", then your desktop/compositor should just immediately NACK protocols that add something like this and you simply shouldn't engage in the discussion, as you reject the very premise of the new protocol: that it has any merit to exist and is needed in the first place. In this case contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are basically asking them now to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment. If the answer turns out to be "We do want some portability", the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren't. We can't blindly copy all X11 behavior; some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11's mistakes, and I am certain that we can implement protocols which provide the required functionality in a way that is a nice compromise, allowing applications a path forward into the Wayland future while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing the ACK requirements for the ext namespace is also a good proposed administrative change, as it allows some compositors to add features they want to support to the shared repository more easily, while not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support fewer things. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits); a minimal sketch of such a check follows at the end of this section. You may now say that a lot of apps are ported already, so surely this issue can not be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them. To end on a positive note: When it came to porting concrete apps over to Wayland, the only real showstoppers so far [5] were the missing window-positioning and window-position-restore features. I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support; the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and they can ignore it more easily. What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp! I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in the future. And the only way to get something good done is through contribution and friendly discussion.
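As promised above, here is a minimal protocol-detection sketch using libwayland-client: a client can check which protocols a compositor offers by listening for globals on the wl_registry. The "xdg_toplevel_icon_manager_v1" interface string is my assumption for the toplevel-icon protocol; check the actual protocol XML for the real name.

#include <stdio.h>
#include <string.h>
#include <wayland-client.h>

static int have_icon_protocol = 0;

static void
registry_global (void *data, struct wl_registry *registry,
                 uint32_t name, const char *interface, uint32_t version)
{
    /* the interface name is an assumption; verify against the protocol definition */
    if (strcmp (interface, "xdg_toplevel_icon_manager_v1") == 0)
        have_icon_protocol = 1;
}

static void
registry_global_remove (void *data, struct wl_registry *registry, uint32_t name)
{
}

static const struct wl_registry_listener registry_listener = {
    .global = registry_global,
    .global_remove = registry_global_remove,
};

int main (void)
{
    struct wl_display *display = wl_display_connect (NULL);
    if (display == NULL)
        return 1;

    struct wl_registry *registry = wl_display_get_registry (display);
    wl_registry_add_listener (registry, &registry_listener, NULL);
    /* block until the compositor has announced all of its globals */
    wl_display_roundtrip (display);

    printf ("toplevel-icon protocol %savailable\n", have_icon_protocol ? "" : "not ");
    wl_display_disconnect (display);
    return 0;
}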

Footnotes
  1. Apologies for the clickbait-y title; it comes with the subject.
  2. When I talk about Wayland I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified.
  3. I would have picked a picture from our lab, but that would have needed permission first
  4. Qt has awesome platform issues pages, like for macOS and Linux/X11, which help with porting efforts, but Qt doesn't even list Linux/Wayland as a supported platform. There is some information though, like window geometry peculiarities, which isn't particularly helpful when porting (but is still essential to know).
  5. Besides issues with Nvidia hardware: CUDA for simulations and machine learning is pretty much everywhere, so Nvidia cards are common, which still causes trouble on Wayland. It is improving though.

11 November 2023

Matthias Klumpp: AppStream 1.0 released!

Today, 12 years after the meeting where AppStream was first discussed and 11 years after I released a prototype implementation, I am excited to announce AppStream 1.0! Check it out on GitHub, or get the release tarball, or read the documentation or release notes!

Some nostalgic memories

I was not at the original AppStream meeting, since in 2011 I was extremely busy with finals preparations and ball organization in high school, but I still vividly remember sitting at school in the students' lounge during a break and trying to catch the really choppy live stream from the meeting on my borrowed laptop (a futile exercise; I watched parts of the blurry recording later). I was extremely passionate about getting software deployment to work better on Linux and improving the overall user experience, and spent many hours on the PackageKit IRC channel discussing things with many amazing people like Richard Hughes, Daniel Nicoletti, Sebastian Heinlein and others. At the time I was writing a software deployment tool called Listaller; this was before Linux containers were a thing, and building it was very tough due to technical and personal limitations (I had just learned C!). Then in university, when I intended to recreate this tool, but "for real and better" this time, as a new project called Limba, I needed a way to provide metadata for it, and AppStream fit right in! Meanwhile, Richard Hughes was tackling the UI side of things while creating GNOME Software and needed a solution as well. So I implemented a prototype, and together we pretty much reshaped the early specification from the original meeting into what would become modern AppStream. Back then I saw AppStream as a necessary side project for my actual project, and didn't even consider myself the maintainer of it for quite a while (I hadn't been at the meeting after all). All those years ago I had no idea that ultimately I was developing AppStream not for Limba, but for a new thing that would show up later with an even more modern design, called Flatpak. I also had no idea how incredibly complex AppStream would become, how many features it would have, how much more maintenance work it would be, and also not how ubiquitous it would become. The modern Linux desktop uses AppStream everywhere now: it is supported by all major distributions, used by Flatpak for metadata, used for firmware metadata via Richard's fwupd/LVFS, runs on every Steam Deck, can be found in cars and possibly many places I do not know of yet.

What is new in 1.0?

API breaks

The most important thing that's new with the 1.0 release is a bunch of incompatible changes. For the shared libraries, all deprecated API elements have been removed and a bunch of other changes have been made to improve the overall API, and especially to make it more binding-friendly. That doesn't mean that the API is completely new and nothing looks like before though; when possible, the previous API design was kept, and some changes that would have been too disruptive have not been made. Regardless of that, you will have to port your AppStream-using applications. For some larger ones I have already submitted patches to build with both AppStream versions, the 0.16.x stable series as well as 1.0+. For the XML specification, some older compatibility for XML that had no or very few users has been removed as well. This affects for example release elements that reference downloadable data without an artifact block, which has not been supported for a while. For all of these, I checked to remove only things that had close to no users and that were a significant maintenance burden. So as a rule of thumb: if your XML validated with no warnings with the 0.16.x branch of AppStream, it will still be 100% valid with the 1.0 release. Another notable change is that the generated output of AppStream 1.0 will always be 1.0 compliant; you can not make it generate data for versions below that (this greatly reduced the maintenance cost of the project).
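For applications that need to build against both series for a transition period, a version guard can keep a single codebase working. The sketch below is illustrative only: I am assuming an AS_CHECK_VERSION macro in the style of GLib's version checks and using the developer-name accessors as the example; please verify the exact symbol names against the libappstream documentation for your version.

#include <appstream.h>

/* Sketch: fetch a developer name across the 0.16.x and 1.0 API series.
 * The symbol names below are assumptions based on the release notes. */
static const gchar *
get_developer_name (AsComponent *cpt)
{
#if AS_CHECK_VERSION(1, 0, 0)
    AsDeveloper *developer = as_component_get_developer (cpt);
    return (developer != NULL) ? as_developer_get_name (developer) : NULL;
#else
    return as_component_get_developer_name (cpt);
#endif
}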

Developer element

For a long time, you could set the developer name using the top-level developer_name tag. With AppStream 1.0, this has changed a bit. There is now a developer tag with a name child (that can be translated unless the translate="no" attribute is set on it). This allows for future extensibility, and also allows setting a machine-readable id attribute in the developer element. This permits software centers to group software by developer more easily, without having to use heuristics. If we decide to extend the developer information per-app in the future, this is also now possible. Do not worry though: the developer_name tag is still read, so there is no high pressure to update. The old 0.16.x stable series also has this feature backported, so it can be available everywhere. Check out the developer tag specification for more details.
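For illustration, a MetaInfo snippet using the new element might look like the following; the id value is made up, so consult the developer tag specification for the exact semantics:

<developer id="org.example">
  <name translate="no">Example Software Ltd.</name>
</developer>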

Scale factor for screenshots

Screenshot images can now have a scale attribute, to indicate an (integer) scaling factor to apply. This feature was a breaking change and therefore we could not have it for the longest time, but it is now available. Please wait a bit for AppStream 1.0 to become more widely deployed though, as using it with older AppStream versions may lead to issues in some cases. Check out the screenshots tag specification for more details.
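As a hedged example, assuming the attribute sits on the image tag as described in the specification (the URL is a placeholder), HiDPI screenshots could be provided like this:

<screenshots>
  <screenshot type="default">
    <image type="source" scale="2">https://example.org/screenshot-hidpi.png</image>
  </screenshot>
</screenshots>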

Screenshot environments

It is now possible to indicate the environment a screenshot was recorded in (GNOME, GNOME Dark, KDE Plasma, Windows, etc.) via an environment attribute on the respective screenshot tag. This was also a breaking change, so use it carefully for now! If projects want to, they can use this feature to supply dedicated screenshots depending on the environment the application page is displayed in. Check out the screenshots tag specification for more details.
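A sketch of what this could look like; treat the environment token values below as placeholders and check the specification for the exact set of valid names:

<screenshots>
  <screenshot type="default" environment="plasma">
    <image type="source">https://example.org/screenshot-plasma.png</image>
  </screenshot>
  <screenshot environment="gnome:dark">
    <image type="source">https://example.org/screenshot-gnome-dark.png</image>
  </screenshot>
</screenshots>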

References tag

This is a feature more important for the scientific community and scientific applications. Using the references tag, you can associate the AppStream component with a DOI (Digital Object Identifier) or provide a link to a CFF file to provide citation information. It also allows linking to other scientific registries. Check out the references tag specification for more details.
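A minimal sketch of such a block; the child tag name and the DOI value are assumptions on my part, so check the references tag specification for the authoritative syntax:

<references>
  <citation type="doi">10.1000/182</citation>
</references>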

Release tags

Releases can have tags now, just like components. This is generally not a feature that I expect to be used much, but in certain instances it can become useful with a cooperating software center, for example to tag certain releases as long-term supported versions.
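A hedged sketch, mirroring the component-level tags syntax shown in an earlier post (the namespace and tag value are made up):

<releases>
  <release type="stable" version="2.4.0" date="2024-01-10">
    <tags>
      <tag namespace="example">lts</tag>
    </tags>
  </release>
</releases>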

Multi-platform support

Thanks to the interest and work of many volunteers, AppStream (mostly) runs on FreeBSD now, a NetBSD port exists, support for macOS was written and a Windows port is on its way! Thank you to everyone working on this!

Better compatibility checks

For a long time I thought that the AppStream library should just be a thin layer above the XML and that software centers should implement a lot of the actual logic themselves. This has not been the case for a while, but there were still a lot of complex AppStream features that were hard for software centers to implement, and where it makes sense to have one implementation that projects can just use. The validation of component relations is one such thing. This was implemented in 0.16.x as well, but 1.0 vastly improves upon the compatibility checks, so you can now just run as_component_check_relations and retrieve a detailed list of whether the current component will run well on the system. Besides better API for software developers, the appstreamcli utility also has much improved support for relation checks, and I wrote about these changes in a previous post. Check it out! With these changes, I hope this feature will be used much more, and beyond just drivers and firmware.

So much more!

The changelog for the 1.0 release is huge, and there are many papercuts resolved and changes made that I did not talk about here, like us using gi-docgen (instead of gtkdoc) now for nice API documentation, or the many improvements that went into better binding support, or better search, or just plain bugfixes.

Outlook

I expect the transition to 1.0 to take a bit of time. AppStream has not broken its API for many, many years (since 2016), so a bunch of places need to be touched even if the changes themselves are minor in many cases. In hindsight, I should have also released 1.0 much sooner and it should not have become such a mega-release, but that was mainly due to time constraints. So, what's in it for the future? Contrary to what I thought, AppStream does not really seem to ever be done and feature-complete; there is always something to improve, and people come up with new use cases all the time. So, expect more of the same in the future: bugfixes, validator improvements, documentation improvements, better tools and the occasional new feature. Onwards to 1.0.1!

10 October 2023

Matthias Klumpp: How to indicate device compatibility for your app in MetaInfo data

At the moment I am hard at work putting together the final bits for the AppStream 1.0 release (hopefully to be released this month). The new release comes with many new features, an improved developer API and the removal of most deprecated things (so it carefully breaks compatibility with very old data and the previous C API). One of the tasks for the upcoming 1.0 release was #481, asking about a formal way to distinguish Linux phone applications from desktop applications. AppStream infamously does not support any is-for-phone label for software components; instead, the decision whether something is compatible with a device is based on the device's capabilities and the component's requirements. This allows truly adaptive applications to describe their requirements correctly, and does not lock us into form factors going into the future, as there are many, and the feature range between a phone, a tablet and a tiny laptop is quite fluid. Of course the "match to current device capabilities" check does not work if you are a website ranking phone compatibility. It also does not really work if you are a developer and want to know which devices your component/application will actually be considered compatible with. One goal for AppStream 1.0 is to have its library provide more complete building blocks to software centers. Instead of just a "here's the data, interpret it according to the specification" API, libappstream now interprets the specification for the application and provides API to handle most common operations, like checking device compatibility. For developers, AppStream also now implements a few virtual "chassis configurations", to roughly gauge which configurations a component may be compatible with. To test the new code, I ran it against the large Debian and Flatpak repositories to check which applications are considered compatible with what chassis/device type already. The result was fairly disastrous, with many applications not specifying compatibility correctly (many do, but it's by far not the norm!). Which brings me to the actual topic of this blog post: very few seem to really know how to mark an application compatible with certain screen sizes and inputs! This is most certainly a matter of incomplete guides and good templates, so maybe this post can help with that a bit:

The ultimate cheat-sheet to mark your app chassis-type compatible

As a quick reminder, compatibility is indicated using AppStream's relations system: A requires relation indicates that the software will not run at all, or will run terribly, if the requirement is not met. If the requirement is not met, it should not be installable on a system. A recommends relation means that it would be advantageous to have the recommended items, but they are not essential to run the application (it may run with a degraded experience without the recommended things though). And a supports relation means a given interface/device/control/etc. is supported by this application, but the application may work completely fine without it.

I have a desktop-only application

A desktop-only application is characterized by needing a larger screen to fit the application, and by requiring a physical keyboard and accurate mouse input. This type is assumed by default if no capabilities are set for an application, but it's better to be explicit. This is the metadata you need:
<component type="desktop-application">
  <id>org.example.desktopapp</id>
  <name>DesktopApp</name>
  [...]
  <requires>
    <display_length>768</display_length>
    <control>keyboard</control>
    <control>pointing</control>
  </requires>
  [...]
</component>
With this requires relation, you require a small-desktop-sized screen (at least 768 device-independent pixels (dp) on its smallest edge) and require a keyboard and mouse to be present / connectable. Of course, if your application needs more minimum space, adjust the requirement accordingly. Note that if the requirement is not met, your application may not be offered for installation.
Note: Device-independent / logical pixels

One logical pixel (= device-independent pixel) roughly corresponds to the visual angle of one pixel on a device with a pixel density of 96 dpi (for historical X11 reasons) and a distance from the observer of about 52 cm, making the physical pixel about 0.26 mm in size. When using logical pixels as a unit, they might not always map to exact physical lengths, as their exact size is defined by the device providing the display. They do however accurately depict the maximum amount of pixels that can be drawn in the depicted direction on the device's display space. AppStream always uses logical pixels when measuring lengths in pixels.

I have an application that works on mobile and on desktop / an adaptive app

Adaptive applications have fewer hard requirements, but a wide range of support for controls and screen sizes. For example, they support touch input, unlike desktop apps. An example MetaInfo snippet for these kinds of apps may look like this:
<component type="desktop-application">
  <id>org.example.adaptive_app</id>
  <name>AdaptiveApp</name>
  [...]
  <requires>
    <display_length>360</display_length>
  </requires>
  <supports>
    <control>keyboard</control>
    <control>pointing</control>
    <control>touch</control>
  </supports>
  [...]
</component>
Unlike the pure desktop application, this adaptive application requires a much smaller minimum display edge length, and also supports touch input, in addition to keyboard and mouse/touchpad precision input.

I have a pure phone/tablet app

Making an application a pure phone application is tricky: we need to mark it as compatible with phones only, while not completely preventing its installation on non-phone devices (even though its UI is horrible there, you may want to test the app, and software centers may allow its installation when requested explicitly even if they don't show it by default). This is how to achieve that result:
<component type="desktop-application">
  <id>org.example.phoneapp</id>
  <name>PhoneApp</name>
  [...]
  <requires>
    <display_length>360</display_length>
  </requires>
  <recommends>
    <display_length compare="lt">1280</display_length>
    <control>touch</control>
  </recommends>
  [...]
</component>
We require a phone-sized minimum display edge size (adjust it to a value that is right for your app!), but then also recommend that the screen have a smaller edge size than a larger tablet/laptop, while also recommending touch input and not listing any support for keyboard and mouse. Please note that this blog post is of course not a comprehensive guide, so if you want to dive deeper into what you can do with requires/recommends/suggests/supports, you may want to have a look at the relations tags described in the AppStream specification.

Validation

It is still easy to make mistakes with the system requirements metadata, which is why AppStream 1.0 will provide more commands to check MetaInfo files for system compatibility. Current pre-1.0 AppStream versions already have an is-satisfied command to check if the application is compatible with the currently running operating system:
:~$ appstreamcli is-satisfied ./org.example.adaptive_app.metainfo.xml
Relation check for: */*/*/org.example.adaptive_app/*
Requirements:
   Unable to check display size: Can not read information without GUI toolkit access.
Recommendations:
   No recommended items are set for this software.
Supported:
   Physical keyboard found.
   Pointing device (e.g. a mouse or touchpad) found.
   This software supports touch input.
In addition to this command, AppStream 1.0 will introduce a new one as well: check-syscompat. This command will check the component against libappstream's mock system configurations that define a "most common" (whatever that is at the time) configuration for a respective chassis type. If you pass the --details flag, you can even get an explanation of why the component was considered or not considered for a specific chassis type:
:~$ appstreamcli check-syscompat --details ./org.example.phoneapp.metainfo.xml
Chassis compatibility check for: */*/*/org.example.phoneapp/*
Desktop:
   Incompatible
   recommends: This software recommends a display with its shortest edge
   being < 1280 px in size, but the display of this device has 1280 px.
   recommends: This software recommends a touch input device.
Laptop:
   Incompatible
   recommends: This software recommends a display with its shortest edge
   being < 1280 px in size, but the display of this device has 1280 px.
   recommends: This software recommends a touch input device.
Server:
   Incompatible
   requires: This software needs a display for graphical content.
   recommends: This software needs a display for graphical content.
   recommends: This software recommends a touch input device.
Tablet:
   Compatible (100%)
Handset:
   Compatible (100%)
I hope this is helpful for people. Happy metadata writing!

6 December 2021

Matthias Klumpp: New things in AppStream 0.15

On the road to AppStream 1.0, a lot of items from the long todo list have been done so far; only one major feature is remaining, external release descriptions, which is a tricky one to implement and specify. For AppStream 1.0 it needs to be present or be rejected though, as it would be a major change in how release data is handled in AppStream. Besides 1.0 preparation work, the recent 0.15 release and the releases before it come with their very own large set of changes that are worth a look and may be interesting for your application to support. But first, a change that affects the implementation and not the XML format:

1. Completely rewritten caching code

Keeping all AppStream data in memory is expensive, especially if the data is huge (as on Debian and Ubuntu with their large repositories generated from desktop-entry files as well) and if processes using AppStream are long-running. The latter is more and more the case: not only does GNOME Software run in the background, KDE uses AppStream in KRunner, and Phosh will use it too for reading form-factor information. Therefore, AppStream via libappstream provides an on-disk cache that is memory-mapped, so data is only consuming RAM if we are actually doing anything with it. Previously, AppStream used an LMDB-based cache in the background, with indices for fulltext search and other common search operations. This was a very fast solution, but it also came with limitations: LMDB's maximum key size of 511 bytes became a problem quite often, adjusting the maximum database size (since it has to be set at opening time) was annoyingly tricky, and building dedicated indices for each search operation was very inflexible. In addition to that, the caching code was changed multiple times in the past to allow system-wide metadata to be cached per-user, as some distributions didn't (want to) build a system-wide cache and therefore ran into performance issues when XML was parsed repeatedly for generation of a temporary cache. On top of all that, the cache was designed around the concept of "one cache for data from all sources", which meant that we had to rebuild it entirely if just a small aspect changed, like a MetaInfo file being added to /usr/share/metainfo, which was very inefficient. To shorten a long story, the old caching code was rewritten with the new concepts of caches not necessarily being system-wide and caches existing for more fine-grained groups of files in mind. The new caching code uses Richard Hughes' excellent libxmlb internally for memory-mapped data storage. Unlike LMDB, libxmlb knows about the XML document model, so queries can be much more powerful and we do not need to build indices manually. The library is also already used by GNOME Software and fwupd for parsing of (refined) AppStream metadata, so it works quite well for that use case. As a result, search queries via libappstream are now a bit slower (it very much depends on the query, roughly 20% on average), but can be much more powerful. The caching code is a lot more robust, which should speed up startup time of applications. And in addition to all of that, the AsPool class has gained a flag to allow it to monitor AppStream source data for changes and refresh the cache fully automatically and transparently in the background. All software written against the previous version of the libappstream library should continue to work with the new caching code, but to make use of some of the new features, software using it may need adjustments.
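As a sketch of how an application might opt into the automatic cache refresh mentioned above: the flag name AS_POOL_FLAG_MONITOR is my assumption here, so verify it against the AsPool documentation of your libappstream version.

#include <appstream.h>

int main (void)
{
    g_autoptr(GError) error = NULL;
    g_autoptr(AsPool) pool = as_pool_new ();

    /* assumed flag name: enables monitoring of metadata locations */
    as_pool_add_flags (pool, AS_POOL_FLAG_MONITOR);

    if (!as_pool_load (pool, NULL, &error)) {
        g_printerr ("Failed to load metadata pool: %s\n", error->message);
        return 1;
    }

    /* from here on, the cache is refreshed transparently in the
     * background whenever AppStream source data changes */
    return 0;
}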
A lot of methods have also been deprecated now.

2. Experimental compose support

Compiling MetaInfo and other metadata into AppStream collection metadata, extracting icons, language information, refining data and caching media is an involved process. The appstream-generator tool does this very well for data from Linux distribution sources, but the tool is also pretty heavyweight, with lots of knobs to adjust, an underlying database and a complex algorithm for icon extraction. Embedding it into other tools via anything else but its command-line API is also not easy (due to D's GC initialization, and because it was never written with that feature in mind). Sometimes a simpler tool is all you need, so the libappstream-compose library as well as appstreamcli compose are being developed at the moment. The library contains building blocks for developing a tool like appstream-generator, while the CLI tool allows simply extracting metadata from any directory tree, which can be used by e.g. Flatpak. For this to work well, a lot of appstream-generator's D code is being translated into plain C, so the implementation stays identical but the language changes. Ultimately, the generator tool will use libappstream-compose for any general data refinement, and only implement the things necessary to extract data from the archives of distributions. New applications (e.g. for new bundling systems and other purposes) can then use the same building blocks to implement new data generators similar to appstream-generator with ease, sharing much of the code that would be identical between implementations anyway.

3. Supporting user input controls

Want to advertise that your application supports touch input? Keyboard input? Has support for graphics tablets? Gamepads? Sure, nothing is easier than that with the new control relation item and supports relation kind (since 0.12.11 / 0.15.0, details):
<supports>
  <control>pointing</control>
  <control>keyboard</control>
  <control>touch</control>
  <control>tablet</control>
</supports>
4. Defining minimum display size requirements

Some applications are unusable below a certain window size, so you do not want to display them in a software center that is running on a device with a small screen, like a phone. In order to encode this information in a flexible way, AppStream now contains a display_length relation item to require or recommend a minimum (or maximum) display size that the described GUI application can work with. For example:
<requires>
  <display_length compare="ge">360</display_length>
</requires>
This will make the application require a display length greater than or equal to 360 logical pixels. A logical pixel (also device-independent pixel) is the amount of pixels that the application can draw in one direction. Since screens, especially phone screens but also screens on a desktop, can be rotated, the display_length value will be checked against the longest edge of a display by default (by explicitly specifying the shorter edge, this can be changed). This feature is available since 0.13.0, details. See also Tobias Bernard's blog entry on this topic.

5. Tags

This is a feature that was originally requested for the LVFS/fwupd, but one of the great things about AppStream is that we can take very project-specific ideas and generalize them so that something comes out of them that is useful for many. The new tags tag allows people to tag components with an arbitrary namespaced string. This can be useful for project-internal organization of applications, as well as to convey certain additional properties to a software center, e.g. an application could mark itself as "featured" in a specific software center only. Metadata generators may also add their own tags to components to improve organization. AppStream gives no recommendations as to how these tags are to be interpreted except for them being a strictly optional feature. So any meaning is something clients and metadata authors need to negotiate. It therefore is a more specialized use case of the already existing custom tag, and I expect it to be primarily useful within larger organizations that produce a lot of software components that need sorting. For example:
<tags>
  <tag namespace="lvfs">vendor-2021q1</tag>
  <tag namespace="plasma">featured</tag>
</tags>
This feature is available since 0.15.0, details.

6. MetaInfo Creator changes

The MetaInfo Creator (source) tool is a very simple web application that provides you with a form to fill out and will then generate MetaInfo XML to add to your project after you have answered all of its questions. It is an easy way for developers to add the required metadata without having to read the specification or any guides at all. Recently, I added support for the new control and display_length tags, resolved a few minor issues and also added a button to instantly copy the generated output to the clipboard so people can paste it into their project. If you want to create a new MetaInfo file, this tool is the best way to do it! The creator tool will also not transfer any data out of your web browser; it is strictly a client-side application. And that is about it for the most notable changes in AppStream land! Of course there is a lot more: additional tags for the LVFS and content rating have been added, lots of bugs have been squashed, the documentation has been refined a lot and the library has gained a lot of new API to make building software centers easier. Still, there is a lot to do and quite a few open feature requests too. Onwards to 1.0!

16 March 2020

Matthias Klumpp: Maintain release info easily in MetaInfo/Appdata files

This article isn't about anything "new", unlike the previous ones on AppStream; it rather exists to shine the spotlight on a feature I feel is underutilized. From conversations it appears that the reason simply is that people don't know that it exists, and of course that's a pretty bad reason not to make your life easier.

Mini-Disclaimer: I'll be talking about appstreamcli, part of AppStream, in this blog post exclusively. The appstream-util tool from the appstream-glib project has similar functionality; check out its help text and look for appdata-to-news if you are interested in using it instead.

What is this about?

AppStream permits software to add release information to their MetaInfo files to describe current and upcoming releases, which has a number of advantages. The disadvantage of all this is that humans have to maintain the release information. Also, people need to write XML for this. Of course, once humans are involved with any technology, things get a lot more complicated. That doesn't mean we can't make things easier for people to use though. Did you know that you don't actually have to edit the XML in order to update your release information? To make creating and maintaining release information as easy as possible, the appstreamcli utility has a few helpers built in. And the best thing is that appstreamcli, being part of AppStream, is available pretty ubiquitously on Linux distributions.

Update release information from NEWS data

The NEWS file is a not very well defined text file that lists user-visible changes worth mentioning for each version. This maps pretty well to what AppStream release information should contain, so let's generate that from a NEWS file! Since the NEWS format is not defined, but we need to parse it somehow, the amount of things appstreamcli can parse is very limited. We support a format in this style:
Version 0.2.0
~~~~~~~~~~~~~~
Released: 2020-03-14
Notes:
 * Important thing 1
 * Important thing 2
Features:
 * New/changed feature 1
 * New/changed feature 2 (Author Name)
 * ...
Bugfixes:
 * Bugfix 1
 * Bugfix 2
 * ...
Version 0.1.0
~~~~~~~~~~~~~~
Released: 2020-01-10
Features:
 * ...
When parsing a file like this, appstreamcli will tolerate a lot of errors/imperfections and account for quite a few style and string variations. You will need to check whether this format works for you. You can see it in use in appstream itself, and in libxmlb for a slightly different style. So, how do you convert this? We first create our NEWS file, e.g. with this content:
Version 0.2.0
~~~~~~~~~~~~~~
Released: 2020-03-14
Bugfixes:
 * The CPU no longer overheats when you hold down spacebar
Version 0.1.0
~~~~~~~~~~~~~~
Released: 2020-01-10
Features:
 * Now plays a "zap" sound on every character input
For the MetaInfo file, we of course generate one using the MetaInfo Creator. Then we can run the following command to get a preview of the generated file: appstreamcli news-to-metainfo ./NEWS ./org.example.myapp.metainfo.xml - Note the single dash at the end: this is the explicit way of telling appstreamcli to print something to stdout. This is what the result looks like:
<?xml version="1.0" encoding="utf-8"?>
<component type="desktop-application">
  [...]
  <releases>
    <release type="stable" version="0.2.0" date="2020-03-14T00:00:00Z">
      <description>
        <p>This release fixes the following bug:</p>
        <ul>
          <li>The CPU no longer overheats when you hold down spacebar</li>
        </ul>
      </description>
    </release>
    <release type="stable" version="0.1.0" date="2020-01-10T00:00:00Z">
      <description>
        <p>This release adds the following features:</p>
        <ul>
          <li>Now plays a "zap" sound on every character input</li>
        </ul>
      </description>
    </release>
  </releases>
</component>
Neat! If we want to save this to a file instead, we just exchange the dash for a filename. And maybe we don't want to add all releases of the past decade to the final XML? No problem, just pass the --limit flag as well: appstreamcli news-to-metainfo --limit=6 ./NEWS ./org.example.myapp.metainfo.tmpl.xml ./result/org.example.myapp.metainfo.xml. That's nice on its own, but we really don't want to do this by hand. The best way to ensure the MetaInfo file is updated is to simply run this command at build time to generate the final MetaInfo file. For the Meson build system you can achieve this with a code snippet like the one below (for CMake this shouldn't be an issue either; you could even make a nice macro for it there):
ascli_exe = find_program('appstreamcli')
metainfo_with_relinfo = custom_target('gen-metainfo-rel',
    input : ['./NEWS', 'org.example.myapp.metainfo.xml'],
    output : ['org.example.myapp.metainfo.xml'],
    command : [ascli_exe, 'news-to-metainfo', '--limit=6', '@INPUT0@', '@INPUT1@', '@OUTPUT@']
)
In order to also translate releases, you will need to add this to your .pot file generation workflow, so (x)gettext can run on the MetaInfo file with translations merged in.

Release information from YAML files

Since parsing an unstructured, somewhat human-readable file is hard without baking an AI into appstreamcli, there is also a second option available: generate the XML from a YAML file. YAML is easy to write for humans, but can also be parsed by machines. The YAML structure used here is specific to AppStream, but somewhat maps to the NEWS file contents as well as MetaInfo file data. That makes it more versatile, but in order to use it, you will need to opt into using YAML for writing news entries. If that's okay for you to consider, read on! A YAML release file has this structure:
---
Version: 0.2.0
Date: 2020-03-14
Type: development
Description:
- The CPU no longer overheats when you hold down spacebar
- Fixed bugs ABC and DEF
---
Version: 0.1.0
Date: 2020-01-10
Description: |
  This is our first release!
  Now plays a "zap" sound on every character input
As you can see, the release date has to be an ISO 8601 string, just as it is assumed for NEWS files. Unlike in NEWS files, releases can be defined as either stable or development, depending on whether they are a stable or a development release, by specifying a Type field. If no Type field is present, stable is implicitly assumed. Each release has a description, which can either be free-form multi-paragraph text or a list of entries. Converting the YAML example from above is as easy as using the exact same command that was used before for plain NEWS files: appstreamcli news-to-metainfo --limit=6 ./NEWS.yml ./org.example.myapp.metainfo.tmpl.xml ./result/org.example.myapp.metainfo.xml. If appstreamcli fails to autodetect the format, you can help it by specifying it explicitly via the --format=yaml flag. This command would produce the following result:
<?xml version="1.0" encoding="utf-8"?>
<component type="console-application">
  [...]
  <releases>
    <release type="development" version="0.2.0" date="2020-03-14T00:00:00Z">
      <description>
        <ul>
          <li>The CPU no longer overheats when you hold down spacebar</li>
          <li>Fixed bugs ABC and DEF</li>
        </ul>
      </description>
    </release>
    <release type="stable" version="0.1.0" date="2020-01-10T00:00:00Z">
      <description>
        <p>This is our first release!</p>
        <p>Now plays a "zap" sound on every character input</p>
      </description>
    </release>
  </releases>
</component>
Note that the 0.2.0 release is now marked as a development release, a thing which was not possible in the plain-text NEWS file before.

Going the other way

Maybe you like writing XML, or have some other tool that generates the MetaInfo XML, or you have received your release information from some other source and want to convert it into text. AppStream also has a tool for that! Using appstreamcli metainfo-to-news <metainfo-file> <news-file> you can convert a MetaInfo file that has release entries into a text representation. If you don't want appstreamcli to autodetect the right format, you can specify it via the --format=<text|yaml> switch.

Future considerations

The release handling is still not something I am entirely happy with. For example, the release information has to be written and translated at release time of the application. For some projects, this workflow isn't practical. That's why issue #240 exists in AppStream, which basically requests an option to have release notes split out to a separate, remote location (and also translations, but that's unlikely to happen). Having remote release information is something that will highly likely happen in some way, but implementing this will be a quite disruptive, if not breaking, change. That is why I am holding this change back for the AppStream 1.0 release. In the meanwhile, besides improving the XML form of release information, I also hope to support a few more NEWS text styles if they can be autodetected. The format of the systemd project may be a good candidate. The YAML release-notes format variant will also receive a few enhancements, e.g. for specifying a release URL. For all of these things, I very much welcome pull requests or issue reports. I can implement and maintain the things I use myself best, so if I don't use something or don't know about a feature many people want, I won't suddenly implement it or start to add features at random because they "may be useful". That would be a recipe for disaster. This is why, for these features in particular, contributions from people who are using them in their own projects or who want their new use case represented are very welcome.

7 March 2020

Matthias Klumpp: Introducing the MetaInfo Creator

This year's FOSDEM conference was a lot of fun. One of the things I always enjoy most about this particular conference (besides having some of the outstanding food you can get in Brussels and meeting with friends from the free software world) is the ability to meet a large range of new people who I wouldn't usually have interacted with, or getting people from different communities together who otherwise would not meet in person, as each bigger project has its own conference (for example, the number of VideoLAN people is much lower at GUADEC and Akademy compared to FOSDEM). It's also really neat to have GNOME and KDE developers within reach at the same place, as I care about both desktops a lot.

An unexpected issue

This blog post however is not about that. It's about what I learned when talking to people there about AppStream, and the outcome of that. Especially when talking to application authors, but also to people who deal with larger software repositories, it became apparent that many app authors don't really want to deal with the extra effort of writing metadata at all. This was a bit of a surprise to me, as I thought that there would be a strong interest for application authors to make their apps look as good as possible in software catalogs. A bit less surprising was the fact that people apparently don't enjoy reading a large specification, reading a long-ish intro guide with lots of dos and don'ts, or basically reading any longer text at all before being able to create an AppStream MetaInfo/AppData file describing their software. Another common problem seems to be that people don't immediately know what a reverse-DNS ID (e.g. org.example.myapp) is, the format AppStream uses for uniquely identifying each software component. So naturally, people either have to read about it again (bah, reading!) or make something up, which occasionally is wrong and not the actual component ID their software component should have.

The MetaInfo Creator

It was actually suggested to me twice that what people really would like to have is a simple tool to put together a MetaInfo file for their software: basically a simple form with a few questions which produces the final file. I always considered this a nice-to-have, but not essential, feature, but now I was convinced that it actually has a priority attached to it. So, instead of jumping into my favourite editor and writing a bunch of C code to create this "make MetaInfo file" form as part of appstreamcli, this time I decided to try what the cool kids are doing and make a web application that runs in your browser and creates all metadata there. So, behold the MetaInfo Creator! If you click this link, you will end up at an Angular-based web application that will let you generate MetaInfo/AppData files for a few component types simply by answering a set of questions. The intent was to make this tool as easy to use as possible for someone who basically doesn't know anything about AppStream at all. Therefore, the tool will:

For the Meson feature, the tool simply can not generate a "use this and be done" script, as each Meson snippet needs to be adjusted for the individual project. So this option is disabled by default, but when enabled, a few simple Meson snippets will be produced which can be easily adjusted to the project they should be part of. The tool currently does not generate any release information for a MetaInfo file at all; this may be added in future.
The initial goal was to have people create any MetaInfo file in the first place; having projects also ship release details would be the icing on the cake. I hope people find this project useful and use it to create better MetaInfo files, so distribution repositories and Flatpak repos look better in software centers. Also, since MetaInfo files can be used to create an inventory of software and to install missing stuff as needed, having more of them will help to build smarter software managers, create smaller OS base installations and easily introspect what software bundles are made of. I welcome contributions to the MetaInfo Creator! You can find its source code on GitHub. This is my first web application ever, the first time I wrote TypeScript and the first time I used Angular, so I'd bet a veteran developer more familiar with these tools will cringe at what I produced. So, scratch that itch and submit a PR! Also, if you want to create a form for a new component type, please submit a patch as well.

C developer's experience notes for Angular, TypeScript, NodeJS

This section is just to ramble a bit about random things I found interesting as a developer who mostly works with C/C++ and Python and stepped into the web-application developer's world for the first time. For a project like this, I would usually have gone with my default way of developing something for the web: creating a Flask-based application in Python. I really love Python and Flask, but of course using them would have meant that all processing would have had to be done on the server. On the one hand I could have used libappstream that way to create the XML, format it and validate it, but on the other hand I would have had to host the Python app on my own server, find a place at Purism/Debian/GNOME/KDE or get it housed at Freedesktop somehow (which would have taken a while to arrange), and I really wanted to have a permanent location for this application immediately. Additionally, I didn't want people to send the details of new, unpublished software to my server.

TypeScript

I must say that I really like TypeScript as a language compared to JavaScript. It is not really revolutionary (I looked into Dart and other ways to compile $stuff to JavaScript first), but it removes just enough JavaScript weirdness to be pleasant to use. At the same time, since TS is a superset of JS, JavaScript code is valid TypeScript code, so you can integrate with existing JS code easily. Picking TS up took me much less than an hour, and most of its features you learn organically when working on a project. The optional type safety is a blessing and actually helped me a few times to find an issue. It being so close to JS is both a strength and a weakness: on the one hand you have all the JS oddities in the language (implicit type conversion is really weird sometimes) and have to basically refrain from using them or count on the linter to spot them, but on the other hand you can immediately use the massive amount of JavaScript code available on the web.

Angular

The Angular web framework took a few hours to pick up; there are a lot of concepts to understand. But ultimately, it's manageable and pretty nice to use. When working at the system level, a lot of complexity is in understanding how the CPU is processing data, managing memory and using the low-level APIs the operating system provides.
With the web application stuff, a lot of the complexity for me was in learning about all the moving parts the system is comprised of, what their names are, what they do, and what works with which. And that is not a flat learning curve at all. As a C developer, you need to know how the computer works to be efficient; as a web developer, you need to know a bunch of different tools really well to be productive. One thing I am still a bit puzzled about is the amount of duplicated HTML templates my project has. I haven't found a way to reuse template blocks in multiple components with Angular, like I would with Jinja2. The documentation suggests this feature does not exist, but maybe I simply can't find it, or there is a completely different way to achieve the same result.

NPM Ecosystem

The MetaInfo Creator application ultimately doesn't do much. But according to GitHub, it has 985 (!!!) dependencies in NPM/NodeJS. And that is the bare minimum! I only added one dependency to it myself. I feel really uneasy about this, as I prefer the Python approach of having a rich standard library instead of billions of small modules scattered across the web. If there is a bug in one of the standard library functions, I can submit a patch to Python, where some core developer is there to review it. In NodeJS, I imagine fixing some module is much harder. That being said, using npm is actually pretty nice: there is a module available for most things, and adding a new dependency is easy. NPM will also manage all the details of your dependency chain, GitHub will warn about security issues in modules you depend on, etc. So, from a usability perspective, there isn't much to complain about (unlike with Python, where creating or using a module ends up as a "fight the system" event way too often, and the question "which random file do I need to create now to achieve what I want?" always exists. Fortunately, Poetry made this a bit more pleasant for me recently).

So, tl;dr for this section: the web application development excursion was actually a lot of fun, and I may make more of those in future, now that I have learned more about how to write web applications. Ultimately though, I enjoy the lower-level software development and backend development a bit more.

Summary

Check out the MetaInfo Creator and its source code, if you want to create MetaInfo files for a GUI application, console application, addon or service component quickly.

24 October 2017

Reproducible builds folks: Reproducible Builds: Weekly report #130

Here's what happened in the Reproducible Builds effort between Sunday October 15 and Saturday October 21 2017: Past events Upcoming events New York University sessions A three-week session will be held at New York University to work on reproducibility issues in conjunction with the reproducible builds community. Students from the Application Security course will spend two weeks working on the reproducible builds effort. Packages reviewed and fixed, and bugs filed The following reproducible builds-related NMUs were accepted: Patches sent upstream: Reviews of unreproducible packages 41 package reviews have been added, 119 have been updated and 54 have been removed this week, adding to our knowledge about identified issues. 2 issue types were removed as they were fixed: Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: diffoscope development strip-nondeterminism development Version 0.039-1 was uploaded to unstable by Chris Lamb. It included contributions already covered by posts of the previous weeks, including: reprotest development tests.reproducible-builds.org Website updates Misc. This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Santiago Torres & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

28 September 2017

Matthias Klumpp: Adding fonts to software centers

Last year, the AppStream specification gained proper support for adding metadata for fonts, after Richard Hughes had done some work on it years ago. We weren't happy with how fonts were handled at that time, so we searched for better solutions, which is why this took a bit longer to be done. Last year, I implemented the final support for fonts in both appstream-generator (the metadata extractor used by Debian and a few others) and the AppStream specification. This blogpost was sitting on my todo list as a draft for a long time, and I only now managed to finish it, so sorry for announcing this so late. Fonts have already been available via AppStream for a year, and this post just sums up the status quo and some neat tricks if you want to write metainfo files for fonts. If you are following AppStream (or the Debian fonts list), you know everything already.

Both Richard and I first tried to extract all the metadata needed to display fonts in a proper way to the users from the font files directly. This turned out to be very difficult, since font metadata is often wrong or incomplete, and certain desirable bits of metadata (like a longer description) are missing entirely. After messing around with different ways to solve this for days (after all, by extracting the data from font files directly we would have had hundreds of fonts immediately available in software centers), I came to the same conclusion as Richard: the best and easiest solution here is to mandate the availability of metainfo files per font.

Which brings me to the second issue: what is a font? A person who knows about fonts will understand one font as one font face, e.g. "Lato Regular Italic" or "Lato Bold". A user however will see the font family as one font, e.g. just "Lato" instead of all the font faces separated out. Since AppStream data is used primarily by software centers, we want something that is easy for users to understand. Hence, an AppStream font component really describes a font family or collection of fonts, instead of individual font faces. We do also want AppStream data to be useful for system components looking for a specific font, which is why font components will advertise the individual font face names they contain via a <provides/> tag. Naming fonts and making them identifiable is a whole other issue; I used a document from Adobe on font naming issues as a rough guideline while working on this.

How to write a good metainfo file for a font is best shown with an example. Lato is a good-looking font family that we want displayed in a software center. So, we write a metainfo file for it and place it in
/usr/share/metainfo/com.latofonts.Lato.metainfo.xml
for the AppStream metadata generator to pick up:
<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <id>com.latofonts.Lato</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>OFL-1.1</project_license>
  <name>Lato</name>
  <summary>A sanserif typeface family</summary>
  <description>
    <p>
      Lato is a sanserif typeface family designed in the Summer 2010 by Warsaw-based designer
      Łukasz Dziedzic ("Lato" means "Summer" in Polish). In December 2010 the Lato family
      was published under the open-source Open Font License by his foundry tyPoland, with
      support from Google.
    </p>
  </description>
  <url type="homepage">http://www.latofonts.com/</url>
  <provides>
    <font>Lato Regular</font>
    <font>Lato Black Italic</font>
    <font>Lato Black</font>
    <font>Lato Bold Italic</font>
    <font>Lato Bold</font>
    <font>Lato Hairline Italic</font>
    ...
  </provides>
</component>
When the file is processed, we know that we need to look for fonts in the package it is contained in. So, the appstream-generator will load all the fonts in the package and render example texts for them as an image, so we can show users a preview of the font. It will also use heuristics to render an icon for the respective font component using its regular typeface. Of course that is not ideal: what if there are multiple font faces in a package? What if the heuristics fail to detect the right font face to display? This behavior can be influenced by adding <font/> tags to a <provides/> tag in the metainfo file. The font-provides tags should contain the full names of the font faces you want to associate with this font component. If the font file does not define a full name, the family and style are used instead. That way, someone writing the metainfo file can control which fonts belong to the described component. The metadata generator will also pick the first mentioned font name in the <provides/> list as the one to render the example icon for. It will also sort the example text images in the same order as the fonts are listed in the provides-tag. The example lines of text are written in a language matching the font, using Pango.

But what about symbolic fonts? Or fonts where every heuristic fails? At the moment, we see ugly tofu characters or boxes instead of an actual, useful representation of the font. This brings me to an unofficial extension to font metainfo files which, as far as I know, only appstream-generator supports at the moment. I am not happy enough with this solution to add it to the real specification, but it serves as a good method to fix up the edge cases where we can not render good example images for fonts. appstream-generator supports the FontIconText and FontSampleText custom AppStream properties, which allow metainfo file authors to override the default texts and autodetected values. FontIconText will override the characters used to render the icon, while FontSampleText can be a line of text used to render the example images. This is especially useful for symbolic fonts, where the heuristics usually fail and we do not know which glyphs would be representative of a font. For example, a font with mathematical symbols might want to add the following to its metainfo file:
<custom>
  <value key="FontIconText"> </value>
  <value key="FontSampleText">       ...         </value>
</custom>
Any Unicode glyphs are allowed, but asgen will put some length restrictions on the texts. So, in summary:

4 April 2017

Matthias Klumpp: On Tanglu

It's time for a long-overdue blogpost about the status of Tanglu. Tanglu is a Debian derivative, started in early 2013 when the systemd debate at Debian was still hot. It was formed by a few people who wanted to create a Debian derivative for workstations with a time-based release schedule, using and showcasing new technologies (which include systemd, but also bundling systems and other things), built in the open by a community using infrastructure similar to Debian's. Tanglu is designed explicitly to complement Debian, not to compete with it on all devices.

Tanglu has achieved a lot of great things. We were the first Debian derivative to adopt systemd, and with the help of our contributors we could kill a few nasty issues affecting it and Debian before it ended up becoming the default in Debian Jessie. We also started to use the Calamares installer relatively early, bringing a modern installation experience in addition to the traditional debian-installer. We performed the usrmerge early, uncovering a few more issues which were fed back into Debian to be resolved (while workarounds were added to Tanglu). We also briefly explored switching from initramfs-tools to Dracut, but this release goal was dropped due to issues (though it might be revived later). A lot of other, less impactful changes happened as well, borrowing a lot of useful ideas and code from Ubuntu (kudos to them!). On the infrastructure side, we set up the Debian Archive Kit (dak), managing to find a couple of issues (mostly hardcoded assumptions about Debian) and reporting them back to make using dak easier for distributions which aren't Debian. We explored using fedmsg for our infrastructure, went through a long and painful iteration of build systems (buildbot -> Jenkins -> Debile) before finally ending up with Debile, and added a set of our own custom tools to collect archive QA information and present it to our developers in an easy-to-digest way. Except for wanna-build, Tanglu is hosting an almost-complete clone of basic Debian archive management tools.

During the past year however, the project's progress slowed down significantly. For this, mostly I am to blame. One of the biggest challenges for a young project is to attract new developers and members and keep them engaged. A lot of the people coming to Tanglu who were interested in contributing were unfortunately not packagers, and sometimes not developers at all, and we didn't have the manpower to individually mentor these people and teach them the necessary skills. People asking for tasks were usually asked where their interests were and what they would like to do, in order to give them a useful task. This sounds great in principle, but in practice it is actually not very helpful. A curated list of junior jobs is a much better starting point. We also invested almost zero time in making our project known and creating the necessary buzz and excitement that is actually needed to sustain a project like this. Doing more in the "advertisement" domain and "help newcomers" area is a high-priority issue in the Tanglu bugtracker, which to this day is still open. Doing good alone isn't enough; talking about it is of crucial importance, and that is something I knew about but didn't realize the impact of for quite a while. As strange as it sounds, investing in the tech alone isn't enough; community building is of equal importance.
Regardless of that, Tanglu has members working on the project, but way too few to manage a project of this magnitude (getting package transitions migrated alone is a large task requiring quite some time, while at the same time being incredibly boring :P). A lot of our current developers can only invest small amounts of time into the project because they have a lot of other projects as well.

The other reason why Tanglu has problems is that too much is centralized on myself. That is a problem I have wanted to rectify for a long time, but as soon as a task wasn't done in Tanglu because no people were available to do it, I completed it myself. This essentially increased the project's dependency on me as a single person, giving it a really low bus factor. It not only centralizes power in one person (which actually isn't a problem as long as that person is available enough to perform tasks when asked), it also centralizes the knowledge of how to run services and how to do things. And if you want to give up power, people first need the knowledge of how to perform the specific task (which they will never gain if there's always that one guy doing it). I still haven't found a great way to solve this; it's a problem that essentially kills itself as soon as the project is big enough, but until then the only way to counter it slightly is to write lots of documentation.

Last year I had way less time to work on Tanglu than the project deserves. I also started to work for Purism on their PureOS Debian derivative (which is heavily influenced by some of the choices we made for Tanglu, but with a different focus; that's probably something for another blogpost). A lot of the stuff I do for Purism duplicates the work I do on Tanglu, and also takes away time I have for the project. Additionally, I need to invest a lot more time into other projects such as AppStream, and a lot of random other stuff that just needs continuous maintenance and discussion (especially AppStream eats up a lot of time since it became really popular in a lot of places). There is also my MSc thesis in neuroscience that requires attention (and is actually in focus most of the time). All in all, I can't split myself, and KDE's cloning machine remains broken, so I can't even use that ;-). In terms of projects, there is also a personal hard limit on how much stuff I can handle, and exceeding it long-term is not very healthy, as in these cases I try to satisfy all projects and in the end do not focus enough on any of them, which leaves me with a lot of half-baked stuff (which helps nobody, and most importantly makes me lose the fun, energy and interest to work on it).

Good news everyone! (sort of)

So, this sounded overly negative; where does this leave Tanglu? Fact is, I can not commit the crazy amounts of time to it that I did in 2013. But I love the project, and I actually do have some time I can put into it. My work at Purism has an overlap with Tanglu, so Tanglu can actually benefit from the software I develop for them, maybe creating a synergy effect between PureOS and Tanglu. Tanglu is also important to me as a testing environment for future ideas (be it in infrastructure or in the "make bundling nice!" department). So, what actually is the way forward? First, maybe I will have the chance to find a few people willing to work on tasks in Tanglu. It's a fun project, and I learned a lot while working on it.
Tanglu also possesses some unique properties few other Debian derivatives have, like being built completely from source (allowing us things like swapping out core components or compiling with more hardening flags, switching to newer KDE Plasma and GNOME faster, etc.). Second, if we do not have enough manpower, I think converting Tanglu into a rolling-release distribution might be the only viable way to keep the project running. A rolling-release scheme creates much less effort for us than making releases (especially time-based ones!). That way, users will have a constantly updated and secure Tanglu system, with machines doing most of the background work. If it turns out that absolutely nothing works and we can't attract new people to help with Tanglu, it would mean that there generally isn't much interest from the developer or user side in a project like this, so shutting it down or scaling it down dramatically would be the only option. But I do not think that this is the case, and I believe that having Tanglu around is important. I also have some interesting plans for it which will be fun to implement for testing.

The only thing that had to stop is leaving our users in the dark about what is happening. Sorry for the long post, but there are some subjects which are worth writing more than 140 characters about. If you are interested in contributing to Tanglu, get in touch with us! We have an IRC channel #tanglu-devel on Freenode (go there for quicker responses!), forums and mailing lists. It looks like I will be at DebConf this year as well, so you can also catch me there; I might even talk about PureOS/Tanglu infrastructure at the conference.

22 August 2016

Zlatan Todori : When you wake up with a feeling

I woke up at 5am. Somehow made myself go back to sleep again. Woke up at 6am. Such is the life of jet-lag. Or I am just getting too old for it. But the truth wouldn't be complete with only those assertions. I woke up inspired and tired at the same time. Tired because I am doing very time-consuming things. Also, at the same time, very emotional things. AND at the exact same time, things that inspire me. On paper, I am the technical leader of Purism. In reality, I have insanely good relations with my CEO for such a short time. So good that for months I was not only leading the technical shift, but I also took over operations (getting orders and delivering them, while working with our assembly line to automate most of the tasks in this field). I was also playing first line of technical support (forums, IRC and email). Actually, I was pretty much the only line of support for a few months. I was doing some website changes: changing some wording, updating a bunch of plugins and making sure it all works, resolving (hopefully) Tor and Cloudflare issues for it, the annoying caching system for the forums, stopping forum spam and so on. I worked on better messaging for Purism public relations. I taught my team to use keys for signing and encryption. I interviewed (and read all mails from) people who were interested in working for or helping Purism. In the process of doing all that, I maybe wasn't the speediest person for all our users' needs, but I hope they understand and forgive me. I was doing all that while researching and developing tablets (which ended up not being the most successful campaign, but we now do have them as a product). I was doing all that while seeing (and resolving) our kernel builds failing. Worked on pushing touchpad patches upstream (not so good yet, but we are still working on it, and they ended up being upstreamed). While seeing repos being down because of our host. Repos being down because of broken sync with Debian. Repos being down because of our key mis-management. Metadata not working well. PureBrowser getting broken all the time. Tor browser out of date. No real ISO updates. Wrong sources.list entries and so on. And the hardest part of the work was that I was doing all this with a very limited scope and even more limited resources.

So what kept me going, what is pushing me forward and what am I doing? One philosophy: Free software. Let me not explain it as a technical debt. Let me explain it as a social movement. In an age where people are "bombed" by media, by all-time lying politicians (who use fear of non-existent threats/terror as a model to control the population), in an age where proprietary corporations are selling your freedom so you can gain temporary convenience, the term Free software is like Giordano Bruno in the age of Inquisitions. Free software does not only preserve your freedom regarding software source usage; it preserves your freedom to think, and to think out of the box, without being punished for that. It preserves the freedom to live: to choose what to do and when, without having a negative impact on your life or other people's lives. The freedom to be transparent and to share. Because not only do ideas grow with sharing, but we, as human beings, grow as we share. The freedom to say "NO". NO. I somehow learnt, and personally think, that the freedom to say NO is the most important freedom in our lives. No, I will not obey some artificially created master who thinks they can plan and choose my life decisions.
No, I will not negotiate my freedom for your convenience (such freedom is not real anyway, and it is a matter of time before you are blown away by such an illusion). No, I will not accept your credit, because it has STRINGS attached to it which you either don't present or you blur in a mountain of superficial wording. No, I will not implant a chip inside me for the sake of your research or my convenience. No, I will not have an account on the social media where the majority of people are. No, I will not have a pacemaker which is a black box with proprietary (buggy) software that harvests my data without me being able to look at it. Yin-Yang. Yes, I want to collaborate on making the world a better place for us all. I don't agree with most people, but that doesn't make them my enemies (although the media would like us to feel and think like that). I will try to preserve everyone's freedom as much as I can. Yes, I will share with my community and friends. Yes, I want to learn from those better than I am. Yes, I want to have awesome mentors. Yes, I will try to be an awesome mentor. Yes, I choose to care and not to ignore facts and actions done by me and other people. Yes, I have the right to be imperfect and to make mistakes, as long as I acknowledge and work on them. Bugfixing ourselves as humans is the most important task in our lives. As in software, it is very time-consuming, but also as in software, it is an improvement and an incredible satisfaction to see a better version of yourself, getting more and more features (even if that sometimes means actually getting rid of other/bad features).

This all blends into my work at Purism. I spend a lot of time thinking about projects, development and the future. I must do that in order not to make grave mistakes. Failing hardware and software is not a grave mistake. Serious, but not grave. Grave is if we betray ourselves and our community in the pursuit of Freedom. We are trying to unify many things: we want to give you security, privacy and FREEDOM with convenience. So I am pushing myself out of comfort zones and also out of conventional, and sometimes even my standard, ways of thinking. I have seen that the non-existent infrastructure for PureOS is hurting us a lot, but I needed to cope with it until the time when I would be able to say: not anymore, we are starting to build our own infrastructure. I was coping with Cloudflare being assholes to Tor users, but now we are also shifting away from them. I came to a team where people didn't properly understand what we are building and why. I came to a very small and not that efficient team. Now, we have employed a dedicated and hard-working person for operations (Goran) whom I trust. We have a dedicated support person (Mladen) who tries hard to work with people. A very creative visual mastermind (Francois). We have a capable Debian Developer (Matthias Klumpp) working on the new PureOS infra. We have capable and dedicated sysadmins (Theo and Stelio), which we didn't even have in the past. We are trying to LEVEL UP Free software and unify it into a convenient solution, an effort led by Joey Hess. We have a hard-working PureOS developer (Hema) who is coping with the current non-existent PureOS infra. We have a GNOME Board of Directors person (Jeff) who is trying to light up our image in the world (working with James to try to bring some light into our shadows caused by infinite supply chain delays). We have created an Advisory Board for Freedom, Privacy and Security whose members I don't want to name yet, as we are preparing to announce that soon (and trust me, we have good people in here).
But the most important thing here is not that they are all capable or cool people. It is the core value in all of them: they care about Freedom, and I trust them on their paths. Trust is always important, but at Purism it is essential for our work. I built the workflow without time management (everyone spends their time every single day as they see fit, as long as the work gets done). And we don't create insanely short deadlines just because everyone else thinks they are important (and rarely is something more important than our time freedom). So the trust is built out of knowledge, and the knowledge I have about them and their work exists because we freely share, with no strings attached. Because of them, and other good people from our community, I have the energy to sacrifice my entire time for Purism. It is not black and white: the CEO and I don't always agree, some members of my team don't always agree with me or I with them, some people in the community are very rude, impolite and don't respect our work. But even with disagreement, everyone at Purism finds agreement in the end (we use facts in our judgments), and all the people who just try to disturb my and my team's work aren't as efficient as all the lovely words of the people who believe in us, who send us words of support and who share ideas and their thoughts with us. There is no greater satisfaction for me than reading a personal mail giving us kudos for the work, showing an understanding of the underlying amount of work and issues.

While we are limited in resources, we had the occasional outcry from the community wanting to help us. Now I want to help them to help me (you see the freedom of sharing here?). PureOS now has a wiki. It will be a community wiki which is endorsed by Purism as a company. Yes, you read that right: Purism considers its community part of the company (you don't need to get a paycheck to be a Purism member). That is why I call upon contributors (technical, but mostly non-technical too) to help us make the PureOS wiki the best resource on the net for our needs. Write tutorials for others, gather and put info on the wiki, create an ideas page and vote on the entries so we can see what the community wants, chat with us so we all understand what, why and how we are working on things. Make it as transparent as possible. Everyone interested, please get in touch with our teams by either poking us online (IRC, social accounts) or via email (our personal ones, or [hr, pr, feedback]@puri.sm).

To finish this writing (as it is 8am here and I still want to rest a bit, because I will have meetings for 6 hours straight today), I wanted to share some personal insight into a few things from my point of view. I wanted to say that despite all the troubles and the people who tried to make our time even harder (and it is already hard from all the limitations which naturally come with our kind of work today), we still create products, we still ship them, we still improve step by step, we still hire and we are still building. Keeping all that together and making progress is for me a milestone greater than just creating a technical product. I just hope we will continue and improve our pace so we can start progressing towards my personal great goal: to integrate and cooperate with most of the FLOSS ecosystem. P.S. Yes, I also (finally!) became an official Debian Developer; I still didn't have the time to sit down and properly think and cry (as every good man does) about it.

17 June 2016

Matthias Klumpp: A few words about the future of the Limba project

I have wanted to write this blogpost since April, and even announced it in two previous posts, but never got to actually write it until now. And with the recent events in Snappy and Flatpak land, I can not defer this post any longer (unless I want to answer the same questions over and over on IRC ^^). As you know, I have been developing the Limba 3rd-party software installer since 2014 (see this LWN article explaining the project better than I could), which is a spiritual successor to the Listaller project, in development since roughly 2008. Limba got some competition from Flatpak and Snappy, so it's natural to ask what the project's next steps will be.

Meeting with the competition

At the last FOSDEM and at the GNOME Software sprint this year in April, I met with Alexander Larsson and we discussed the rather unfortunate situation we got into, with Flatpak and Limba being in competition. Both Alex and I have been experimenting with 3rd-party app distribution for quite some time, with me working on Listaller and him working on Glick and Glick2. All these projects never went anywhere. Around the time when I started Limba, fixing design mistakes made with Listaller, Alex started a new attempt at software distribution, this time with sandboxing added to the mix and a new OSTree-based design of the software-distribution mechanism. It wasn't at all clear back then that XdgApp, later renamed to Flatpak, would get huge backing by GNOME and later Red Hat, becoming a very promising candidate for a truly cross-distro software distribution system. The main difference between Limba and Flatpak is that Limba allows modular runtimes, with things like the toolkit, helper libraries and programs being separate modules which can be updated independently. Flatpak, on the other hand, allows just one static runtime and requires everything that is not in the runtime already to be bundled with the actual application. So, while a Limba bundle might depend on multiple other individual bundles, Flatpak bundles only have one fixed dependency on a runtime. Getting a compromise between those two concepts is not possible, and since the modular vs. static approaches in Limba and Flatpak were fundamental, conscious design decisions, merging the projects was not possible either. Alex and I had very productive discussions, and except for the modularity issue, we were pretty much on the same page in every other aspect regarding the sandboxing and app-distribution matters.

Sometimes stepping out of the way is the best way to achieve progress

So, what to do now? Obviously, I can continue to push Limba forward, but given all the other projects I maintain, this seems to be a waste of resources (Limba eats a lot of my spare time). Now with Flatpak and Snappy being available, I am basically competing with Canonical and Red Hat, who can make much more progress much faster than I can as a single developer. Also, Flatpak's bigger base of contributors compared to Limba is a clear sign of which project the community favors more. Furthermore, I started the Listaller and Limba projects to scratch an itch. When I was new to Linux, it was very annoying to me to see some applications only being made available in compiled form for one distribution, and sometimes for one that I didn't use. Getting software was incredibly hard for me as a newbie, and using the package manager was also unusual back then (no software center apps existed, only package lists).
If you wanted to update one app, you usually needed to update your whole distribution, sometimes even to a development version or rolling-release channel, sacrificing stability. So, if this issue now gets solved by someone else in a good way, there is no point in pushing my solution hard. I developed a tool to solve a problem, and it looks like another tool will fix that issue now before mine does, which is fine, because this longstanding problem will finally be solved. And that's the thing I actually care most about. I still think Limba is the superior solution for app distribution, but it is also the most complex one and requires additional work by upstream projects to use it properly. Which is something most projects don't want, and that's completely fine. And that being said: I think Flatpak is a great project. Alex has much experience in this matter, and the design of Flatpak is sound. It solves many issues 3rd-party app development faces in a pretty elegant way, and I like it very much for that. Also the focus on sandboxing is great, although that part will need more time to become really useful. (Aside from that, working with Alexander is a pleasure, and he really cares about making Flatpak a truly cross-distributional, vendor-independent project.)

Moving forward

So, what I will do now is not stop Limba development completely, but keep it going as a research project. Maybe one can use Limba bundles to create Flatpak packages more easily. We also discussed having Flatpak launch applications installed by Limba, which would allow both systems to coexist and benefit from each other. Since Limba (unlike Listaller) was also explicitly designed for web applications, and therefore has a slightly wider scope than Flatpak, this could make sense. In any case though, I will invest much less time in the Limba project. This is good news for all the people out there using the Tanglu Linux distribution, AppStream-metadata-consuming services, PackageKit on Debian, etc.; those will receive more attention. An integral part of Limba is a web service called LimbaHub, which accepts new bundles, does QA on them and publishes them in a public repository. I will likely rewrite it to be a service using Flatpak bundles, maybe even supporting Flatpak bundles and Limba bundles side by side (and, if useful, maybe also supporting AppImageKit and Snappy). But this project is still on the drawing board. Let's see.

P.S.: If you come to DebConf in Cape Town, make sure not to miss my talks about AppStream and bundling.

20 May 2016

Zlatan Todori : 4 months of work turned into GNOME, Debian testing based tablet

Huh, where do I start? I started working for a great CEO and a great company known as Purism. What is so great about it? First of all, the CEO (Todd Weaver) is incredibly passionate about Free software. Yes, you read that correctly: Free software. Not the Open Source definition, but the Free software definition. I want to repeat this like a mantra. At Purism we try to integrate high-end hardware with Free software. Not only that, we want our hardware to be as Free as possible. No, we want to make it entirely Free, but at the moment we don't achieve that. So instead of going the way of using older hardware (as Ministry of Freedom does, and kudos to them for making such an option available), we sacrifice this bit for the momentum we hope to gain, because that brings growth, and growth puts us in a much better position when we sit at the negotiation table with hardware producers. If negotiations fail, with growth we will still have enough chances to heavily invest in things such as OpenRISC or freeing cellular modules. We want to provide, in future, an entirely Free hardware-and-software device that has an integrated security and privacy focus while being easy to use and as convenient as any mainstream OS. And we choose to currently sacrifice a few things to stay in the loop. Surely that can't be the only thing, and it isn't. Our current hardware runs entirely on Free software. You can install Debian main on it and everything will work out of the box. I know, because I did this, and I enjoy my Debian more than ever. We also have a margin-share program where we donate part of the profit to Free software projects. We are also discussing a lot a new business model in which our community will get a lot of influence (stay tuned for this). Besides all this, our OS (called PureOS; yes, a bit of a misfortune that we took the name of a dormant distribution) was Trisquel-based but is now based on Debian testing. The current PureOS 2.0 comes with Cinnamon as the default DE, but we are already baking PureOS 3.0, which is going to come with GNOME Shell as default. Why is this important? Well, around 12 hours ago we launched a tablet campaign on Indiegogo which comes with GNOME Shell and PureOS by default. Not one, but two tablets actually (although we heavily focus on the 11" one). This is the product of my four months of dedicated work at Purism. I must give kudos to all the Purism members who pushed their parts in preparation for this campaign. It was a hell of a ride.

I have also approached (of course!) Debian about the creation of OEM installation ISOs for our Librem products. This way, with every sold Librem that ships with Debian preinstalled, Debian will get a donation. It is our way to show gratitude to Debian for all the work our community does (yes, I am still an extremely proud Debian dude and I will stay like that!). Oh yes, I am the chief technology person at Purism, and besides all the goals we have, I also plan (dream) about Purism being the company with the highest number of Debian Developers. In that regard I am very proud to say that Matthias Klumpp became part of Purism. Hopefully we will soon extend the Debian population at Purism. Of course, I think it is fairly well known that I am easy to approach, so if anyone has any questions (as I didn't want this post to be too long), feel free to contact me. Also, in the Free software spirit, we welcome any community engagement, suggestions and/or feedback.

6 May 2016

Matthias Klumpp: Adventures in D programming

I recently wrote a bigger project in the D programming language, the appstream-generator (asgen). Since I rarely leave the C/C++/Python realm, and came to like many aspects of D, I thought blogging about my experience could be useful for people considering to use D. Disclaimer: I am not an expert on programming language design, and this is not universally valid criticism of D, just my personal opinion from building one project with it.

Why choose D in the first place?

The previous AppStream generator was written in Python, which wasn't ideal for the task for multiple reasons, most notably multiprocessing and LMDB not working well together (and, in general, multiprocessing being terrible to work with) and the need to reimplement some already existing C code in Python again. So, I wanted a compiled language which would work well together with the existing C code in libappstream. Using C was an option, but my least favourite one (writing this in C would have been much more cumbersome). I looked at Go and Rust and wrote some small programs performing basic operations that I needed for asgen, to get a feeling for the languages. Interfacing C code with Go was relatively hard: since libappstream is a GObject-based C library, I expected to be able to auto-generate Go bindings from the GIR, but there were only a few outdated projects available which did that. Rust, on the other hand, required the most time to learn, and since I only briefly looked into it, I still can't write Rust code without having the coding reference open. I started to implement the same examples in D just for fun, as I didn't plan to use D (I was aiming at Go back then), but the language looked interesting. The D language had the huge advantage of being very familiar to me as a C/C++ programmer, while also having a rich standard library, which included great stuff like std.concurrency.Generator, std.parallelism, etc. Translating Python code into D was incredibly easy; additionally, an actively maintained gir-d-generator exists (I created a small fork anyway, to be able to directly link against the libappstream library instead of dynamically loading it).

What is great about D?

This list is just a huge braindump of things I had on my mind at the time of writing.

Interfacing with C

There are multiple things which make D awesome; for example, interfacing with C code (and, to a limited degree, with C++ code) is really easy. Also, working with functions from C in D feels natural. Take these C functions imported into D:
extern(C):
nothrow:

// an opaque C struct and a pointer alias for it
struct _mystruct {}
alias mystruct_p = _mystruct*;

// the C API: create, load a file into, and free the structure
mystruct_p mystruct_create ();
void mystruct_load_file (mystruct_p my, const(char) *filename);
void mystruct_free (mystruct_p my);
You can call them from D code in two ways:
auto test = mystruct_create ();
// treating "test" as function parameter
mystruct_load_file (test, "/tmp/example");
// treating the function as member of "test"
test.mystruct_load_file ("/tmp/example");
test.mystruct_free ();
This allows writing logically sane code, in case the C functions can really be considered member functions of the struct they are acting on. This property of the language is a general concept, so a function which takes a string as its first parameter can also be called like a member function of string. Writing D bindings for existing C code is also really simple, and can even be automated using tools like dstep. Since D can also easily export C functions, calling D code from C is possible too.

Getting rid of C++ cruft

There are many things which are bad in C++, some of which are inherited from C. D kills pretty much all of the stuff I found annoying. Some cool stuff from D is now in C++ as well, which makes this point a bit less strong, but it's still valid. E.g. getting rid of the #include preprocessor dance by using symbolic import statements makes sense, and there have IMHO been huge improvements over C++ when it comes to metaprogramming.

Incredibly powerful metaprogramming

Getting into detail about that would take way too long, but the metaprogramming abilities of D must be mentioned. You can do pretty much anything at compile time, for example compiling regular expressions to make them run faster at runtime, or mixing in additional code from string constants. The template system is also very well thought out, and never caused me headaches as much as C++ sometimes manages to do.

Built-in unit-test support

Unit testing with D is really easy: you just add one or more unittest blocks to your code, in which you write your tests. When running the tests, the D compiler will collect the unittest blocks and build a test application out of them. The unittest scope is useful because you can keep the actual code and the tests close together, and it encourages writing tests and keeping them up to date. Additionally, D has built-in support for contract programming, which helps to further reduce bugs by validating input/output.

Safe D

While D gives you the whole power of a low-level system programming language, it also allows you to write safer code and have the compiler check for that, while still being able to use unsafe functions when needed. Unfortunately, @safe is not the default for functions though.

Separate operators for addition and concatenation

D exclusively uses the + operator for addition, while the ~ operator is used for concatenation. This is likely a personal quirk, but I love very much that this distinction exists. It's nice for things like addition of two vectors vs. concatenation of vectors, and makes the whole language much more precise in its meaning.

Optional garbage collector

D has an optional garbage collector. Developing in D without the GC is currently a bit cumbersome, but these issues are being addressed. If you can live with a GC though, having it active makes programming much easier.

Built-in documentation generator

This is almost a given for most new languages, but still something I want to mention: Ddoc is a standard tool to generate code documentation for D code, with a defined syntax for describing function parameters, classes, etc. It will even take the contents of a unittest scope to generate automatic examples for the usage of a function, which is pretty cool.

Scope blocks

The scope statement allows one to execute a bit of code when the function exits, when it failed or when it was successful. This is incredibly useful when working with C code, where a free statement needs to be issued when the function is exited, or some arbitrary cleanup needs to be performed on error.
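To illustrate, here is a minimal sketch of scope guards wrapping the hypothetical C API from the earlier example (the mystruct functions are the made-up declarations from above, not a real library):

import std.stdio : writeln;

void processFile (string path)
{
    auto my = mystruct_create ();
    // runs whenever the function exits, on any path (return, exception, fall-through)
    scope (exit) my.mystruct_free ();
    // runs only if we leave the function due to an error
    scope (failure) writeln ("failed to process ", path);

    my.mystruct_load_file ("/tmp/example");
    // ... work with "my"; no manual cleanup calls needed on each return path
}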
Yes, we do have smart pointers in C++, and with some GCC/Clang extensions a similar feature in C too. But the scopes concept in D is much more powerful. See Scope Guard Statement for details.

Built-in syntax for parallel programming

Working with threads is so much more fun in D compared to C! I recommend taking a look at the parallelism chapter of the Programming in D book.

Pure functions

D allows marking functions as purely functional, which allows the compiler to do optimizations on them, e.g. cache their return value. See pure-functions.

D is fast!

D matches the speed of C++ in almost all occasions, so you won't lose performance when writing D code. That is, unless you have the GC run often in a threaded environment.

Very active and friendly community

The D community is very active and friendly; so far I have only had good experiences, and I basically came into the community asking some tough questions regarding distro integration and ABI stability of D. The D community is very enthusiastic about pushing D, and especially the metaprogramming features of D, to its limits, and consists of very knowledgeable people. Most discussion happens at the forums/newsgroups at forum.dlang.org.

What is bad about D?

Half-proprietary reference compiler

This is probably the biggest issue. Not because the proprietary compiler is bad per se, but because of the implications this has for the D ecosystem. For the reference D compiler, Digital Mars D (DMD), only the frontend is distributed under a free license (Boost), while the backend is proprietary. The FLOSS frontend is what the free compilers, LLVM D Compiler (LDC) and GNU D Compiler (GDC), are based on. But since DMD is the reference compiler, most features land there first, and the Phobos standard library and druntime are tuned to work with DMD first. Since major Linux distributions can't ship DMD, and the free compilers GDC and LDC lag behind DMD in terms of language, runtime and standard-library compatibility, this creates a split world of code that compiles with LDC, GDC or DMD, but never with all D compilers, due to it relying on features not yet in e.g. GDC's Phobos. Especially for Linux distributions, there is no way to say "use this compiler to get the best and latest D compatibility". Additionally, if people can't simply apt install latest-d, they are less likely to try the language. This is probably mainly an issue on Linux, but since Linux is the place where web applications are usually written and people are likely to try out new languages, it's really bad that the proprietary reference compiler is hurting D adoption in that way. That being said, I want to make clear that DMD is a great compiler, which is very fast and builds efficient code. I only criticise the fact that it is the language's reference compiler.

UPDATE: To clarify the half-proprietary nature of the compiler, let me quote the D FAQ:
The front end for the dmd D compiler is open source. The back end for dmd is licensed from Symantec, and is not compatible with open-source licenses such as the GPL. Nonetheless, the complete source comes with the compiler, and all development takes place publically on github. Compilers using the DMD front end and the GCC and LLVM open source backends are also available. The runtime library is completely open source using the Boost License 1.0. The gdc and ldc D compilers are completely open sourced.
Phobos (standard library) is deprecating features too quickly

This basically goes hand in hand with the compiler issue mentioned above. Each D compiler ships its own version of Phobos, which it was tested against. For GDC, which I used to compile my code due to LDC having bugs at that time, this means that it is shipping with a very outdated copy of Phobos. Due to the rapid evolution of Phobos, this meant that the documentation of Phobos and the actual code I was working with were not always in sync, leading to many frustrating experiences. Furthermore, Phobos sometimes removes deprecated bits about a year after they have been deprecated. Together with the older-Phobos situation, you might find yourself in a place where a feature was dropped, but the cool replacement is not yet available. Or you are unable to import some 3rd-party code because it uses some deprecated-and-removed feature internally. Or you are unable to use other code, because it was developed with a D compiler shipping a newer Phobos. This is really annoying, and probably the biggest source of unhappiness I had while working with D; especially the documentation not matching the actual code is a bad experience for someone new to the language.

Incomplete free compilers with varying degrees of maturity

LDC and GDC have bugs, and for someone new to the language it's not clear which one to choose. Both LDC and GDC have their own issues at times, but they are rapidly getting better, and I only encountered some actual compiler bugs in LDC (GDC worked fine, but with an incredibly out-of-date Phobos). All issues have been fixed in the meantime, but this was a frustrating experience. Some clear advice or explanation of which of the free compilers to prefer when you are new to D would be neat. For GDC in particular, being developed outside of the main GCC project is likely a problem, because distributors need to manually add it to their GCC packaging, instead of having it readily available. I assume this is due to the DRuntime/Phobos not being subjected to the FSF CLA, but I can't actually say anything substantial about this issue. Debian adds GDC to its GCC packaging, but e.g. Fedora does not do that.

No ABI compatibility

D has a defined ABI; too bad that, in reality, the compilers are not interoperable. A binary compiled with GDC can't call a library compiled with LDC or DMD. GDC actually doesn't even support building shared libraries yet. For distributions, this is quite terrible, because it means that there must be one default D compiler, without any exception, and that users also need to use that specific compiler to link against distribution-provided D libraries. The different runtimes per compiler complicate the problem further.

The D package manager, dub, does not yet play well with distro packaging

This is an issue that is important to me, since I want my software to be easily packageable by Linux distributions. The issues causing packaging to be hard are reported as dub issue #838 and issue #839, with quite positive feedback so far, so this might soon be solved.

The GC is sometimes an issue

The garbage collector in D is quite dated (according to their own docs) and is currently being reworked. While working with asgen, which is a program creating a large number of interconnected data structures in a threaded environment, I realized that the GC significantly slows down the application when threads are used (it also seems to use the UNIX signals SIGUSR1 and SIGUSR2 to stop/resume threads, which I still find odd).
Also, the GC performed poorly under memory pressure, which got asgen killed by the OOM killer on some more memory-constrained machines. Triggering a manual collection run after a large batch of these interconnected data structures was no longer needed solved the problem for most systems, but it would of course have been better not to need to give the GC any hints. The stop-the-world behavior isn't a problem for asgen, but it might be for other applications. These issues are being worked on, with a GSoC project laying the foundation for further GC improvements.

version is a reserved word

Okay, that is admittedly a very tiny nitpick, but when developing an app which works with packages and versions, it's slightly annoying. The version keyword is used for conditional compilation, and needing to abbreviate it to ver in all parts of the code sucks a little (e.g. the Package interface can't have a property version, but has ver instead; see the sketch below).

The ecosystem is not (yet) mature

In general it can be said that the D ecosystem, while existing for almost nine years, is not yet that mature. There are various quirks you have to deal with when working with D code on Linux. It's never anything major; usually you can easily solve these issues and move on, but it's annoying to have these papercuts. This is not something which can be resolved by D itself; this point will solve itself as more people start to use D and D support in Linux distributions gets more polished.

Conclusion

I like working with D, and I consider it to be a great language; the quirks in its toolchain are not bad enough to prevent writing great things with it. At the moment, unless I am writing a shared library or something which uses a lot of existing C++ code, I would prefer D for the task. If a garbage collector is a problem (e.g. for some real-time applications, or when the target architecture can't run a GC), I would not recommend using D; Rust seems to be the much better choice then. In any case, D's flat learning curve (for C/C++ people), paired with the smart choices taken in language design, the powerful metaprogramming, the rich standard library and the helpful community, makes it great to try out and to develop software for scenarios where you would otherwise choose C++ or Java. Quite honestly, I think D could be a great language for tasks where you would usually choose Python, Java or C++, and I am seriously considering replacing quite some Python code with D code. For very low-level stuff, C is IMHO still the better choice. As always, choosing the right programming language is only 50% technical aspects and 50% personal taste.

UPDATE: To get some idea of D, check out the D tour on the new website, tour.dlang.org.
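To illustrate the version nitpick, here is a minimal D sketch; the Package interface and its ver property mirror the names mentioned above, everything else is made up for the example:

interface Package
{
    // @property string version(); // does not compile: "version" is a keyword
    @property string ver();        // the abbreviation the code has to use instead
}

// The keyword itself is reserved for conditional compilation:
version (linux)
{
    // code in this block is only compiled for Linux targets
}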

26 April 2016

Matthias Klumpp: Why are AppStream metainfo files XML data?

This is a question raised quite often, most recently in a blogpost by Thomas, so I thought it would be a good idea to give a slightly longer explanation (and also to create an article to link to). There are basically three reasons for using XML as the default format for metainfo files:

1. XML is easily forward/backward compatible, while YAML is not

This is a matter of extending the AppStream metainfo files with new entries, or adapting existing entries to new needs. Take this example XML line for defining an icon for an application:
<icon type="cached">foobar.png</icon>
and now the equivalent YAML:
Icons:
  cached: foobar.png
Now consider that we want to add width and height properties to the icons, because we started to allow more than one icon size. Easy in XML:
<icon type="cached" width="128" height="128">foobar.png</icon>
This line of XML can be read correctly by both old parsers, which will just see the icon as before without reading the size information, and new parsers, which can make use of the additional information if they want. The change is both forward and backward compatible. This looks different with the YAML file. The foobar.png is a string type, and parsers will expect a string as the value for the cached key, while we would need a dictionary there to include the additional width/height information:
Icons:
  cached:
    name: foobar.png
    width: 128
    height: 128
The change shown above will break existing parsers though. Of course, we could add a cached2 key, but that would require people to write two entries, to keep compatibility with older parsers:
Icons:
  cached: foobar.png
  cached2:
    name: foobar.png
    width: 128
    height: 128
Less than ideal. While there are ways to break compatibility in XML documents too, as well as ways to design YAML documents which minimize the risk of breaking compatibility later, keeping the format future-proof is far easier with XML compared to YAML (and sometimes simply not possible with YAML documents). This makes XML a good choice for this use case, since we cannot easily coordinate transitions across thousands of independent upstream projects and need to care about backwards compatibility.

2. Translating YAML is not much fun

A property of AppStream metainfo files is that they can easily be translated into multiple languages. For that, tools like intltool and itstool exist to aid with translating XML using Gettext files. This can be done at project build time, keeping a clean, minimal XML file, or beforehand, storing the translated strings directly in the XML document. Generally, YAML files can be translated too. Take the following example (shamelessly copied from Dolphin):
<summary>File Manager</summary>
<summary xml:lang="bs">Upravitelj datoteka</summary>
<summary xml:lang="cs">Spr vce soubor </summary>
<summary xml:lang="da">Filh ndtering</summary>
This would become something like this in YAML:
Summary:
  C: File Manager
  bs: Upravitelj datoteka
  cs: Správce souborů
  da: Filhåndtering
Looks manageable, right? But AppStream also covers long descriptions, where individual paragraphs can be translated by the translators. In XML, that looks like this:
<description>
  <p>Dolphin is a lightweight file manager. It has been designed with ease of use and simplicity in mind, while still allowing flexibility and customisation. This means that you can do your file management exactly the way you want to do it.</p>
  <p xml:lang="de">Dolphin ist ein schlankes Programm zur Dateiverwaltung. Es wurde mit dem Ziel entwickelt, einfach in der Anwendung, dabei aber auch flexibel und anpassungsf hig zu sein. Sie k nnen daher Ihre Dateiverwaltungsaufgaben genau nach Ihren Bed rfnissen ausf hren.</p>
  <p>Features:</p>
  <p xml:lang="de">Funktionen:</p>
  <p xml:lang="es">Caracter sticas:</p>
  <ul>
  <li>Navigation (or breadcrumb) bar for URLs, allowing you to quickly navigate through the hierarchy of files and folders.</li>
  <li xml:lang="de">Navigationsleiste f r Adressen (auch editierbar), mit der Sie schnell durch die Hierarchie der Dateien und Ordner navigieren k nnen.</li>
  <li xml:lang="es">barra de navegaci n (o de ruta completa) para URL que permite navegar r pidamente a trav s de la jerarqu a de archivos y carpetas.</li>
  <li>Supports several different kinds of view styles and properties and allows you to configure the view exactly how you want it.</li>
  ....
  </ul>
</description>
Now, how would you represent this in YAML? Since we need to preserve the paragraph and enumeration markup somehow, and creating a large chain of YAML dictionaries is not really a sane option, the only choices would be to either keep the whole description as one big string with embedded markup, or to drop the markup entirely. In both cases, we would lose the ability to translate individual paragraphs, which also means that as soon as the developer changes the original text in YAML, translators would need to re-translate the whole bunch, which is inconvenient. On top of that, there are no tools to translate YAML properly that I am aware of, so we would need to write those too.

3. Allowing XML and YAML makes a confusing story and adds complexity

While adding YAML as a format would not be too hard, given that we already support it for DEP-11 distro metadata (Debian uses this), it would make the business of creating metainfo files more confusing. Right now, we have a clear story: write the XML, store it in /usr/share/metainfo, use standard tools to translate the translatable entries. Adding YAML to the mix adds an additional choice that needs to be supported for eternity, and it has the problems mentioned above. I wanted to add YAML as a format for AppStream, and we discussed this at the hackfest as well, but in the end I think it isn't worth the pain of supporting it for upstream projects (remember, someone needs to maintain the parsers and specification too, and keep XML and YAML in sync and updated). Don't get me wrong, I love YAML, but for translated metadata which needs a guarantee on format stability it is not the ideal choice.

So yeah, XML isn't fun to write by hand. But for this case, XML is a good choice.

Matthias Klumpp: A GNOME Software Hackfest report

Two weeks ago was the GNOME Software hackfest in London, and I've been there! I only now found the time to blog about it, but better late than never.

Arriving in London and finding the Red Hat offices

After being stuck in trains for the weekend, but fortunately arriving at the airport in time, I finally made it to London with quite some delay, due to the slow bus transfer from Stansted Airport. After finding the hotel, the next task was to find food and a place which accepted my credit card, which was surprisingly hard; in defence of London I must say, though, that it was a Sunday, 7 p.m., and that my card is somewhat special (in Canada it managed to crash some card readers, so they needed a hard reset). While searching for food, I also stumbled upon the Red Hat offices where the hackfest was starting the next day. My hotel, the office and Tower Bridge were really close to each other, which was awesome! The last time I was in London was 2008, and only for a day, so being that close to the city center was great. The hackfest didn't leave any time to see much of the city, but being close to the center, one could hardly avoid the London experience.

Cool people working on great stuff

That's basically the summary of the hackfest. It was awesome to meet Richard Hughes again; we hadn't seen each other in person since 2011, even though we work on lots of stuff together. This was especially important since we managed to sort out quite some disagreements; Richard even almost managed to make me give in to adding <kudos/> to the AppStream spec, something which I was pretty much against supporting (it didn't make it yet, but I am no longer against the idea of having it; the remaining issues are solvable). Meeting Iain Lane again (after FOSDEM) was also very nice, as was seeing other people I had only worked with over IRC or bug reports (e.g. William, Kalev, ...). Lots of new people were there too, like the guys from Endless, who build their low-budget computer for developing/emerging countries on top of GNOME and Linux technologies. It's pretty cool stuff they do, you should check out their website! (They also build their distribution on top of Debian, which is even more awesome, and something I didn't know before; because many Endless people I had met before were associated with GNOME or Fedora, I kind of implicitly assumed the system was based on Fedora.) The incarnation of GNOME Software used by Endless looks pretty different from what the normal GNOME user sees, since it's adjusted for a different audience and input method. But it looks great, and is a good example of how versatile GNOME Software already is! And for upstream GNOME, we've seen some pretty great mockups done by Endless too; I hope those will make it into production somehow.
Ironically, a "snapstore" was close to the office ;-)

Ironically, a snapstore was close to the office ;-)

XdgApp and sandboxing of apps were also big topics, aside from Ubuntu and Endless integration. Fortunately, Alexander Larsson was there to answer all the sandboxing and XdgApp questions. I used the time to follow up on a conversation with Alexander we had started at FOSDEM this year, about the Limba vs. XdgApp bundling issue. While we are in line on the sandboxing approach, the way software is distributed is implemented differently in Limba and XdgApp, and it is bad to have too many bundling systems around (it doesn't make for a good story if we can't just tell developers: "ship in this bundling format, and it will be supported everywhere"). Talking with Alex about this was very nice, and I think there is a way out of the too-many-solutions dilemma, at least for Limba and XdgApp; I will blog about that separately soon. On the Ubuntu side, a lot of bugs and issues were squashed and changes upstreamed to GNOME, and people were generally doing their best to reduce Richard's bus factor on the project a little. I mainly worked on AppStream issues, finishing up the last pieces of appstream-generator and running it against some sample package sets (and later that week against the whole Debian archive). I also started to implement support for showing AppStream issues in the Debian PTS (this work is not finished yet), and managed to solve a few bugs in the old DEP-11 generator and prepare another release for Ubuntu. We also enjoyed some good Japanese food, and some incredibly great, but also suddenly very expensive, Indian food (but that's a different story). The most important thing for me, though, was to get together with people actually using AppStream metadata in software centers and also in more specialized places. This yielded some useful findings, e.g. that localized screenshots are not something weird, but actually a wanted feature of Endless for their curated app store. So localized screenshots will be part of the next AppStream spec. Also, there seems to be a general need to ship curation information for software centers somehow (which apps are featured? how are they styled? special banners for some featured apps, "app of the day" features, etc.). This problem hasn't been solved yet, since it's highly implementation-specific, and AppStream should stay distro-agnostic. But it is something we might be able to address in a generic way sooner or later (I need to talk to people at KDE and Elementary about it).

In summary

It was a great event! Going to conferences and hackfests always makes me feel like it moves projects leaps ahead, even if you do little coding. Sorting out issues together with people you see in person (rather than communicating with them via text messages or video chat) is IMHO always the most productive way to move forward (yeah, unless you do this every week, but I think you get my point). For me, being the only (and youngest ^^) developer at the hackfest who was not employed by any company in the FLOSS business, the hackfest was also motivating to continue investing spare time into these projects. So, the only thing left to do is a huge shout-out of THANK YOU to the Ubuntu Community Fund, and therefore the Ubuntu community, for sponsoring me! You rock! Also huge thanks to Canonical for organizing the sponsorship really quickly, so I didn't get into trouble with paying for my flights.
Laney and attente on the Millennium Bridge, after we walked the distance between Red Hat and Canonical's offices.

To worried KDE people: No, I didn't leave the blue side; I just generally work on cross-desktop stuff, and would like all desktops to work as well as possible.

16 April 2016

Matthias Klumpp: Introducing AppStream-Generator

Since mid-2015 we have been using the dep11-generator in Debian to build AppStream metadata about available software components in the distribution.

Getting rid of dep11-generator

Unfortunately, the old Python-based dep11-generator was hitting some hard limits pretty soon. For example, using multiprocessing with Python was a pain, since it resulted in some very hard-to-track bugs. Also, the multiprocessing approach (as opposed to multithreading) made it impossible to use the underlying LMDB database properly (it was basically closed and reopened in each forked-off process, since pickling the Python LMDB object caused some really funny bugs, which usually manifested themselves in the application hanging forever without any information on what was going on). In addition, the Python-based generator forced me to maintain two implementations of the AppStream YAML spec, one in C and one in Python, which consumes quite some time. There were also some other issues in the implementation (e.g. no unit tests), which made me think about rewriting the generator.

Adventures in Go / Rust / D

Since I didn't want to write this new piece of software in C (or rather, writing it in C was my last option), I explored Go and Rust for this purpose, and also did a small prototype in the D programming language when I was starting to feel really adventurous. And while I never intended to write the new generator in D (I was pretty fixated on Go), this is what happened. The strong points of D for this particular project were its close relation to C (and the ease of using existing C code), its super-flat learning curve for someone who knows and likes C and C++, and its pretty powerful implementations of the concurrent and parallel programming paradigms. That being said, not all is great in D, and there are some pretty dark spots too, mainly when it comes to the standard library and compilers. I will dive into my experiences with D in a separate blogpost.

What to expect from appstream-generator?

So, what can the new appstream-generator do for you? Basically the same as the old dep11-generator: it will extract metadata from a distribution's package archive, download and resize screenshots, search for icons and size them properly, and generate reports in JSON and HTML of the found metadata and issues.

LibAppStream-based parsing, generation of YAML or XML, multi-distro support

As opposed to the old generator, the new generator utilizes the metadata parsers and writers of libappstream. This allows it to return the extracted metadata as AppStream YAML (for Debian) or XML (everyone else). It is also written in a distribution-agnostic way, so if someone wants to use it in a distribution other than Debian, this is possible now. It just requires a very small distribution-specific backend to be written; all the details of the metadata extraction are abstracted away (just two interfaces need to be implemented, see the sketch at the end of this post). While I do not expect anyone except Debian to use this in the near future (most distros have found a solution to generate metadata already), the frontend/backend split is a much cleaner design than what was available in the previous code. It also allows the code to be unit-tested properly, without providing a Debian archive in the testsuite.

Feature flags, optipng, ...

The new generator also allows enabling and disabling certain sets of features in a standardized way. E.g. Ubuntu uses a language-pack system for translations, which Debian doesn't use.
Features like this can be implemented as separate, disableable modules in the generator. We use this at the moment to e.g. allow descriptions from packages to be used as AppStream descriptions, or to run optipng on the generated PNG images and icons.

No more Contents file dependency

Another issue the old generator had was that it used the Contents file from the Debian archive to find matching icons for an application. We could never be sure whether the entries in the Contents file actually matched the contents of the package we were currently dealing with. What made things worse is that at Ubuntu, the archive software only updates the Contents file daily (while the generator might run multiple times a day), which has led to software being ignored in the metadata because its icons could not yet be found. Even on Debian, with its quickly-updated Contents file, we could immediately see the effects of an out-of-date Contents file when updating it failed once. In the new generator, we read the contents of each package ourselves and store them in an LMDB database, bypassing the Contents file and removing the whole class of problems resulting from missing or wrong contents data.

It can't all be good, right?

True, there are also some known issues the new generator has:

Large amounts of RAM required: The better speed of the new generator comes at the cost of holding more stuff in RAM. Much more. When initially processing data for 5 architectures on Debian, the amount of required RAM might lie above 4 GB, with the OOM killer sometimes being quicker than the garbage collector. That being said, on subsequent runs the amount of required memory is much lower. Still, this is something I am working on improving.

What are symbolic links? To be faster, the appstream-generator reads the md5sums file in .deb packages instead of extracting the payload archive and reading its contents. Since the md5sums file does not list symbolic links, symlinks basically don't exist for the new generator. This is a problem for software symlinking icons or even .desktop files around, as e.g. LibreOffice does. I am still investigating how widespread the use of symlinks for icons and .desktop files is, but it looks like fixing the packages (making them move the files rather than symlink them) might be the better approach than investing additional computing power to find symlinks, or even switching back to parsing the Contents file. Input on this is welcome!

Deploying asgen

I finished the last pieces of the appstream-generator (together with doing lots of other cool things and talking to great people) at the GNOME Software hackfest in London last week (detailed blogposts about things that happened there will follow; many thanks once again to the Ubuntu community for sponsoring my attendance!). Since today, the new generator is running on the Debian infrastructure. If bigger issues are found, we can still roll back to the old code. I decided to deploy this sooner rather than later, so we can get some good testing done before the Stretch release. Please report any issues you may find!
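To give an idea of the frontend/backend split mentioned above, here is a minimal D sketch; the interface names and methods are hypothetical and do not reflect asgen's actual API:

// Hypothetical sketch of the two distribution-specific interfaces a backend
// could implement; the generator core would only ever talk to these types.
interface Package
{
    @property string name();
    @property string ver();                   // "version" is reserved in D
    @property string arch();
    @property string[] contents();            // file list of the package
    const(ubyte)[] getFileData(string path);  // read one file from the payload
}

interface PackageIndex
{
    // Return all packages for a given suite/section/architecture.
    Package[] packagesFor(string suite, string section, string arch);
}

A Debian backend would implement these on top of the archive's index files and .deb payloads, while another distribution could back them with its own archive format.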

20 December 2015

Petter Reinholdtsen: Using appstream with isenkram to install hardware related packages in Debian

Around three years ago, I created the isenkram system to get a more practical solution in Debian for handling hardware-related packages. A GUI system in the isenkram package will present a pop-up dialog when some hardware dongle supported by relevant packages in Debian is inserted into the machine. The same lookup mechanism to detect packages is available as command line tools in the isenkram-cli package. In addition to mapping hardware, it will also map kernel firmware files to packages and make it easy to install needed firmware packages automatically. The key for this system to work is a good way to map hardware to packages, in other words, to allow packages to announce what hardware they will work with. I started by providing data files in the isenkram source, and added code to download the latest version of these data files at run time, to ensure every user had the most up-to-date mapping available. I also added support for storing the mapping in the Packages file in the apt repositories, but did not push this approach because, while I was trying to figure out how to best store hardware/package mappings, the appstream system was announced. I got in touch and suggested adding the hardware mapping to that data set, to be able to use appstream as a data source, and this was accepted, at least for the Debian version of appstream. A few days ago, using appstream in Debian for this became possible, and today I uploaded a new version 0.20 of isenkram adding support for appstream as a data source for mapping hardware to packages. The only package so far using appstream to announce its hardware support is my pymissile package. I got help from Matthias Klumpp with figuring out how to add the required metadata in pymissile. I added a file debian/pymissile.metainfo.xml with this content:
<?xml version="1.0" encoding="UTF-8"?>
<component>
  <id>pymissile</id>
  <metadata_license>MIT</metadata_license>
  <name>pymissile</name>
  <summary>Control original Striker USB Missile Launcher</summary>
  <description>
    <p>
      Pymissile provides a curses interface to control an original
      Marks and Spencer / Striker USB Missile Launcher, as well as a
      motion control script to allow a webcamera to control the
      launcher.
    </p>
  </description>
  <provides>
    <modalias>usb:v1130p0202d*</modalias>
  </provides>
</component>
The key for isenkram is the component/provides/modalias value, which is a glob-style match rule for hardware-specific strings (modalias strings) provided by the Linux kernel. In this case, it will match all USB devices with vendor code 1130 and product code 0202. Note, it is important that the licenses of all the metadata files are compatible, to have permission to aggregate them into archive-wide appstream files. Matthias suggested using the MIT or BSD licenses for these files. A challenge is figuring out a good id for the data, as it is supposed to be globally unique and shared across distributions (in other words, it is best to coordinate with upstream what to use). But it can be changed later, so we went with the package name, as upstream for this project is dormant. To get the metadata file installed in the correct location for the mirror update scripts to pick it up and include its content in the appstream data source, the file must be installed in the binary package under /usr/share/appdata/. I did this by adding the following line to debian/pymissile.install:
debian/pymissile.metainfo.xml usr/share/appdata
With that in place, the command line tool isenkram-lookup will automatically list all packages useful on the current computer, and the GUI pop-up handler will propose to install any not-yet-installed matching package when a hardware dongle is inserted into the machine in question. Details of the modalias field in appstream are available from the DEP-11 proposal. To locate the modalias values of all hardware present in a machine, try running this command on the command line:
cat $(find /sys/devices/ | grep modalias)
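To illustrate how such a glob rule matches a concrete modalias, here is a minimal D sketch (the modalias string below is a made-up example, and this is not isenkram's actual implementation):

import std.path : globMatch;
import std.stdio : writeln;

void main()
{
    // Made-up modalias in the kernel's USB format: vendor 1130, product 0202.
    string modalias = "usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00";

    // The match rule from the metainfo file above.
    string rule = "usb:v1130p0202d*";

    // globMatch performs shell-style glob matching, which is the
    // semantics modalias match rules use.
    writeln(globMatch(modalias, rule)); // prints: true
}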
To learn more about the isenkram system, please check out my blog posts tagged isenkram.

14 December 2015

Matthias Klumpp: AppStream/DEP-11 fully supported in Debian now!

Back in 2011, when the AppStream meeting in Nürnberg had just happened, I published the DEP-11 (Debian Extension Project 11) draft together with Michael Vogt and Julian Andres Klode, as an approach to implement AppStream in Debian. Back then, the FTPMasters team rejected the suggestion to use the official XML specification, and so the DEP-11 specification was adapted to be based on YAML instead of XML. This wasn't much of a big deal, since the initial design of DEP-11 was to be a superset of the AppStream specification, so it wasn't meant to be exactly like AppStream anyway. AppStream back then was only designed for applications (as in "stuff that provides a .desktop file"), but with DEP-11 we aimed for much more: DEP-11 should also describe fonts, drivers, pkg-config files and other metadata, so in the end one would be able to ask the package manager meaningful questions like "is the firmware of device X installed?" or request actions such as "please install me the GIMP", making it unnecessary to know package names at all, and making packages a mere implementation detail.

Then GNOME-Software happened and demanded all these features. Back then I was already the de-facto maintainer of the AppStream upstream project, but didn't feel like being the maintainer yet, so I only curated the existing specification without extending it much. The big push forward GNOME-Software created changed that dramatically, and with me taking control of the specification and documenting it properly, the very essence of DEP-11 became AppStream (that was around the AppStream 0.6 release). So today, DEP-11 is mainly a YAML-based version of the AppStream XML specification. Both formats are fully implemented; GLib and Qt libraries exist to access the metadata, and AppStream is used by the software centers of GNOME, KDE and Elementary.

Today there are two things for me to celebrate: First of all, there is the release of AppStream 0.9 (which already happened last Saturday), bringing some nice improvements to the API for developers and some micro-optimizations to speed up Xapian database queries. Yay!

The second thing is full DEP-11 support in Debian! This means that you don't need to copy metadata around manually or install extra packages: all you need to do is install the appstream package, everything else is done for you, and the data is kept up to date automatically. This is made possible by APT 1.1 (thanks to the whole APT team!), some dedicated support for it in AppStream directly, the work of our Sysadmin team at Debian, which set up the infrastructure to build the metadata automatically, as well as our FTPMasters team, where Joerg helped with the final steps of getting the metadata into the archive. That the AppStream data is now in the archive doesn't mean we live in a perfect utopia yet; there are still issues to be handled, but all the major work is done now, and we can gradually improve the data generator and tools and squash the remaining bugs. And another item from the good-news department: it's highly likely that Ubuntu will follow Debian in AppStream/DEP-11 support with the upcoming Xenial release!

But how can I make use of the new metadata? Just install the appstream package, and everything is done for you! Another easy way is to install GNOME-Software, which makes use of the new metadata already. (KDE Discover in Debian does not enable AppStream support yet; this will likely come later.) If you prefer to use the command line, you can now use commands like
sudo appstreamcli install org.kde.kate.desktop
This will simply install the Kate text editor. Who wants some statistics? At the moment, the Debian Sid/Unstable suite contains 1714 valid software components. It could be even more if the errors generated during metadata extraction were resolved. For that, the metadata generator has a nice statistics page, showing the number of each hint type in the suite, as well as the development over time of the available software components in Debian and the hint-type counts (the plot feature was added only recently, so we are still a bit low on data). For packagers and interested upstreams, the data extractor creates detailed reports for each package, explaining why data was not included and how to fix the issue (in case something is unclear, please file a bug report and/or get in contact with me).

In summary

Thanks to everyone who helped to make this happen! For me, this project means a lot; while writing this blog post I realized that I have basically been working on it for almost 5 years (!) now (and the idea is even older). Seeing it grow into such a huge success in other distributions was a joy, but now Debian can join the game with first-class AppStream support as well, which makes me even happier. After all, Debian is the distribution I feel most at home in. There is still lots of work to do (and a few bugs are already known), but the hardest part of the journey is done; let's walk into a bright future with AppStream!

11 December 2015

Matthias Klumpp: What s up in Tanglu?

It is time again for another Tanglu blogpost (to be honest, this article is pretty much overdue). I want to shine a spotlight on the work that's being done in Tanglu, be it ongoing or past work for the new release.

Why is the new release taking longer than usual?

As you might have noticed, a new Tanglu release (Tanglu 4 "Dasyatis") would usually be released this month. We decided a while ago, however, to defer the release and are now aiming for February / March 2016. The reason for this change in schedule is the GCC 5 transition (and, more importantly, the huge amount of follow-up transitions), as well as some other major infrastructure tasks which we would like to complete and give a good amount of testing before releasing. Also, some issues with our build system, and generally less build power than in previous releases, are a problem (at least the Debile build-system issues could be worked around or solved). The hard disk crash in the forum and bugtracking server also delayed the start of the Tanglu 4 development process a lot. In all of these tasks, manpower is of course the main problem.

Software Tasks

General infrastructure tasks

Improvements on Synchrotron

Synchrotron, the software which synchronizes the Tanglu archive with the Debian archive, received a few tweaks to make it more robust and to detect the installability of packages more reliably. We also run it more often now.

Rapidumo improvements

Rapidumo is a software written to complement dak (the Debian Archive Kit) in managing the Tanglu archive. It performs automatic QA tasks on the archive and provides a collection of tools to perform various tasks (like triggering package rebuilds). For Tanglu 4, we now sometimes drop broken packages from the archive, to speed up transitions and to remove packages which became uninstallable more quickly. Packages removed from the release suite still have a chance to enter it again, but they need to go through Tanglu's staging area first. The removal process is currently semi-automatic, to avoid unnecessary removals and breakage. Rapidumo could benefit from some improvements, and an interactive web interface (as opposed to static HTML pages) would be nice. Some early work on this exists, but is not completed (and has a very low priority at the moment).

DEP-11 integration

There will be a bigger announcement on AppStream and DEP-11 in the next days, so I'll keep this terse: Tanglu will go for full AppStream integration with the next release, which means no more packaged metadata, but data placed directly in the archive. Work on this has started in Tanglu, but I needed to get back to the drawing board with it, to incorporate some new ideas for using synergies with Debian in generating the DEP-11 metadata.

Phabricator

Phabricator has been integrated well into our infrastructure, but there are still some pending tasks. E.g. we need subprojects in Phabricator, and a more powerful Conduit interface. Those are upstream bugs on Phabricator, and they are actively being worked on. As soon as the missing features are available in Phabricator, we will also pursue the integration of Tanglu bug information with the Debian DistroTracker, which was discussed at DebConf this summer.

UEFI support

UEFI support is a tricky beast. Full UEFI support is a release goal for Tanglu 4 (so we won't release without it). At the moment, our live CDs start on pure EFI systems, but there are several reported issues with the Calamares live-installer as well as with the Debian-Installer, which fails to install GRUB correctly.
Work on resolving these problems is ongoing.

Major LiveCD rework

Tanglu 4 will ship with improved live CDs, which e.g. allow selecting the preferred locale early in the bootloader. To manage our live sessions, we switched from live-boot to casper, the same tool which is also used in Ubuntu. Casper fixed a few issues we had, and brought some new ones, but overall using it was a good choice, and work on making top-notch live CDs is progressing well.

KDE Plasma

Integration of the latest Plasma release is progressing, but its speed has slowed down since fewer people are working on it. If you want to help Tanglu's KDE Workspace integration, please help! For the upcoming release, the Plasma 5 packages which we created for the previous Tanglu 3 release, based on a collaboration with Kubuntu, have been merged with their Debian counterparts. This action fortunately was possible without major problems. Now Tanglu is (mostly) using the same Plasma 5 packages as Debian again (lots of kudos go to the Kubuntu and Debian Qt/KDE packagers!).

GNOME

The same re-merge with Debian has been done on Tanglu's GNOME flavor (Tanglu 3 also shipped with a more recent GNOME release than Debian). So Tanglu 4 GNOME and Debian Testing GNOME are at (almost) the same level. Unfortunately, the GNOME team is heavily understaffed: a GNOME team member left at the beginning of the year for personal reasons, and the team now only has one semi-active member and me.

Other

An fvwm-nightshade spin is being worked on :-). Apart from that, there are no teams maintaining other flavors in Tanglu.

Security & Stable maintenance

Thanks to several awesome people, the current Tanglu stable release (Tanglu 3 "Chromodoris") receives regular updates, even for many packages which are not in the fully supported set.

Tangluverse Tasks

Tasks (not) done in the global Tanglu universe.

HTTPS for community sites

Thanks to our participation in the Let's Encrypt closed beta program, Tanglu websites like the user forums and the bugtracker have been fully encrypted for a while now, which should make submitting data to these sites much more secure. So far we didn't encounter any issues, which means that we will likely aim to get HTTPS encryption enabled on every Tanglu website.

Tanglu.org website

The Tanglu main website didn't receive much love at all. It could use a facelift and, more importantly, updated content about Tanglu. So far nobody has volunteered to update the website, so this task is still open. It is, however, a high-priority task to increase our public visibility as a project, so updating the website is definitely something which will be done soon.

Promotion

It doesn't hurt to think about how to sell Tanglu as a brand: What is Tanglu? What are our motivation and our goals? What is our slogan? How can we communicate all of this to new users, without making them read long explanations? (E.g. the Fedora Project has this nicely covered on their main website, and their slogan "Freedom, Friends, Features, First" immediately communicates the project's key values.) Those are issues engineers don't often think about, yet it is important to present Tanglu in a way that is easy to grasp for people hearing about it for the first time. So far, nobody is working on this task specifically, although it regularly comes up on IRC.

Sponsoring & Governance

Tanglu is by design not backed by any corporation. However, we still need to get our servers paid for. Currently, Tanglu is supported by three sponsors, which basically provide the majority of our infrastructure.
Also, developers provide additional machines to get Tanglu running and/or packages built. Still, this dependency on very few sponsors paying the majority of the costs is very bad, since it makes Tanglu vulnerable should one sponsor decide to end their sponsorship. We have no financial backing that would let us e.g. continue to pay for an important server in case its sponsor decides to drop sponsorship. Also, since Tanglu is not a legal entity, accepting donations is hard for us at the moment, since a private person would need to accept the money on behalf of Tanglu. So we can't easily make donating to Tanglu possible (at least not in the way we want donations to be handled). These issues are currently being worked on; there are a couple of possible solutions on the table. I will write about the details as soon as it makes sense to go public with them.

In general, I am very happy with the Tanglu community: the developer community is a pleasure to work with, and interactions with our users on the forums or via the bugtracker are highly productive and friendly. Although the development speed has slowed down a bit, Tanglu is still an active and awesome project, and I am looking forward to the cool stuff we will do in the future!

11 September 2015

Matthias Klumpp: Deploying Limba packages: Ideas & current status

The Limba project does not only have the goal of allowing developers to deploy their applications directly on multiple Linux distributions while reducing duplication of shared resources; it should also make it easy for developers to build software for Limba. Limba is worth nothing without good tooling that makes it fun to use. That's why I am working on that too, and I want to share some ideas of how things could work in the future, and which services I would like to have running. I will also show what is working today already (and that's quite something!). This time I look at things from a developer's perspective (since the last posts on Limba were more end-user centric). If you read on, you will also find a nice video of the developer workflow.

1. Creating metadata and building the software

To make building Limba packages as simple as possible, Limba reuses already existing metadata, like AppStream metadata, to find information about the software you want to create your package for. To ensure upstreams can build their software in a clean environment, Limba makes using one as simple as possible: the limba-build CLI tool quickly creates a clean chroot environment by using an environment created by debootstrap (or a comparable tool suitable for the Linux distribution), and then uses OverlayFS to have all changes made to the environment during the build process land in a separate directory. To define build instructions, limba-build uses the same YAML format TravisCI uses for continuous integration. So there is a chance this data is already present (if not, it's trivial to write). In case upstream projects don't want to use these tools, e.g. because they have well-working CI already, all commands needed to build a Limba package can be called individually as well (ideally, building a Limba package is just one call to lipkgen). I am currently planning "DeveloperIPK" packages containing the resources needed to develop against another Limba package. With that in place and integrated with the automatic build-environment creation, upstream developers can be sure the application they just built is built against the right libraries as present in the package they depend on. The build tool could even fetch the build dependencies automatically from a central repository.

2. Uploading the software to a repository

While everyone can set up their own Limba repository, and the limba-build repo command will help with that, there are lots of benefits in having a central place where upstream developers can upload their software. I am currently developing a service like that, called "LimbaHub". LimbaHub will contain different repositories which distributors can make available to their users by default, e.g. there will be one with only free software, and one for proprietary software. It will also later allow upstreams to create private repositories, e.g. for beta releases.

3. Security in LimbaHub

Every Limba package is signed with the key of its creator anyway, so in order to get a package into LimbaHub, one needs to get their OpenPGP key accepted by the service first. Additionally, the Hub service works with a per-package permission system. This means I can e.g. allow the Mozilla release team members to upload a package with the component ID org.mozilla.firefox.desktop, or even allow those users to own the whole org.mozilla.* namespace (a small sketch of this idea follows at the end of this post). This should prevent people from hijacking other people's uploads, accidentally or on purpose.
4. QA with LimbaHub

LimbaHub should also act as a guardian over ABI stability and the general quality of the software. We could, for example, warn upstreams that they broke ABI without declaring it in the package information, or even reject the package then. We could validate .desktop files and AppStream metadata, or even check whether a package was built using hardening flags. This should help developers improve their software, as well as users, who benefit from that effort. In case something really bad gets submitted to LimbaHub, we always have the ability to remove the package from the repositories as a last resort (which might trigger Limba to issue a warning for the user that they won't receive updates anymore).

What works

Limba, LimbaHub and the tools around them are developing nicely; so far no big issues have been encountered. That's why I made a video showing how Limba and LimbaHub work together right now. Still, there is a lot of room for improvement: Limba has not yet received enough testing, and LimbaHub is merely a proof of concept at the moment. Also, lots of high-priority features are not yet implemented.

LimbaHub and Limba need help!

At the moment I am developing LimbaHub and Limba alone, with only occasional contributions from others (which are amazing and highly welcome!). So, if you like Python and Flask and want to help developing LimbaHub, please contact me; the LimbaHub software could benefit from a more experienced Python web developer than me (and maybe having a designer look over the frontend later makes sense as well). If you are not afraid of C and GLib, and like to chase bugs or play with building Limba packages, consider helping Limba development :-)
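To illustrate the per-package permission idea from section 3, here is a minimal D sketch; the function and the data layout are hypothetical and not LimbaHub's actual implementation:

import std.algorithm.searching : endsWith, startsWith;
import std.stdio : writeln;

// Hypothetical check: an uploader owns either exact component IDs or a
// whole namespace such as "org.mozilla.*".
bool mayUpload(const string[] owned, string componentId)
{
    foreach (entry; owned)
    {
        if (entry.endsWith(".*"))
        {
            // Namespace grant: drop the "*" but keep the trailing dot, so
            // "org.mozilla.*" covers "org.mozilla.firefox.desktop".
            if (componentId.startsWith(entry[0 .. $ - 1]))
                return true;
        }
        else if (componentId == entry)
        {
            return true;
        }
    }
    return false;
}

void main()
{
    auto mozillaTeam = ["org.mozilla.*"];
    writeln(mayUpload(mozillaTeam, "org.mozilla.firefox.desktop")); // true
    writeln(mayUpload(mozillaTeam, "org.gnome.gedit.desktop"));     // false
}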
