A short status update of what happened on my side last month. I spent
quite a bit of time reviewing new code (thanks!) as well as doing
maintenance to keep things going, but we also have some improvements:
Phosh
Add support for progress indicators and counts to lockscreen launcher entries (merge request),
with a demo using Phosh-EV's charge status as an example (merge request)
Tightening the feedback loop
One thing we notice every so often is that although Phosh's source code is publicly available and upcoming changes are open for review, the feedback loop between changes being made to the development branch and users noticing the change can still be quite long.
This can be problematic as we ideally want to catch a regression or broken use case triggered by a change on the development branch (aka main) before the general availability of a new version.
As in 2022, I took another look back at what changed in Phosh in 2023 and, instead of just updating my notes, why not share it here. In short: while collecting these bits I was really impressed by the progress we made:
Some numbers
At this year's Phosh Community Get Together at FrOSCon we discussed whether we should lengthen the Phosh release cycle a bit, but we kept the one-release-per-month schedule to get improvements out to users quickly.
I wanted to look back at what changed in phosh in 2022 and figured I
could share it with you. I'll be focusing on things very close to the
mobile shell; for a broader overview see Evangelos's upcoming FOSDEM
talk.
Some numbers
We're usually aiming for a phosh release at the end of each month. In
2022 we did 10 releases like that: 7 major releases (bumping the
middle version number) and three betas. We skipped the April and
November releases. We also did one bug fix release out of line
(bumping the last bit of the version number). I hope we can keep that
cadence in 2023 as it allows us to get changes to users in a timely
fashion (thus closing usability gaps as early as possible) as well as
giving distributions a way to plan ahead. Ideally we'd not skip any
release but sometimes real life just interferes.
Those releases contain code contributions from about 20 different
people and translations from about 30 translators. These numbers
are roughly the same as in 2021, which is great. Thanks everyone!
In phosh's git repository we had a bit over 730 non-merge commits
(roughly 2 per day), which is about 10% less than in 2021. Looking
closer this is easily compensated by commits to phoc (which needed
quite some work for the gestures) and phosh-mobile-settings which
didn't exist in 2021.
User visible features
Most notable new features are likely the swipe gestures for top and
bottom bar, the possibility to use the quick settings on the lock
screen as well as the style refresh driven by Sam Hewitt that
e.g. touched the modal dialogs (but also sliders, dialpads, etc):
We also added the possibility to have custom widgets via loadable
plugins on the lock screen so the user can decide which information
should be available. We currently ship plugins to show
- information on upcoming calendar events
- emergency contact information
- PDFs like bus or train tickets
- the current month (as a hello-world-like plugin to get started)
These are maintained within phosh's source tree although out-of-tree
plugins should be doable too.
There's a settings application (the above mentioned
phosh-mobile-settings) to enable these. It also allows those plugins
to have individual preferences:
Speaking of configurability: scale-to-fit settings (to work around
applications that don't fit the screen) and haptic/LED feedback are
now configurable without resorting to the command line:
We can also have device-specific settings, which helps to temporarily
accumulate special workarounds without affecting other phones.
Other user-visible features include the ability to shuffle the digits
on the lockscreen's keypad, a VPN quick setting, improved screenshot
support and automatic high-contrast theme switching when in bright
sunlight (based on ambient light sensor readings), as shown here.
As mentioned above, Evangelos will talk at FOSDEM 2023 about the
broader ecosystem improvements including GNOME, GTK, wlroots, phoc,
feedbackd, ModemManager, mmsd, NetworkManager and many others without
which phosh wouldn't be possible.
What else
As I wanted a T-shirt for Debconf 2022 in Prizren, I created a logo
heavily inspired by those cute tiger images you often see in Southeast
Asia. Based on that I also made a first batch of stickers, mostly
distributed at FrOSCon 2022:
That's it for 2022. If you want to get involved in phosh testing, development
or documentation then just drop by in the matrix room.
If you've done anything in the Kubernetes space in recent years, you've most likely come across the words "Service Mesh". It's backed by a set of mature technologies that provide cross-cutting networking, security and infrastructure capabilities to workloads running in Kubernetes, in a manner that is transparent to the actual workload. This abstraction lets application developers avoid building otherwise sophisticated capabilities for networking, routing, circuit-breaking and security, and simply rely on the services offered by the service mesh.
In this post I'll be covering Linkerd, which is an alternative to Istio. It went through a significant re-write a few years back when it transitioned from the JVM to a Go-based control plane and a Rust-based data plane; it is now part of the CNCF and is backed by Buoyant. It has proven itself widely in production workloads and has a healthy community and release cadence.
It achieves this with a sidecar container that communicates with the Linkerd control plane, allowing central management of policy, telemetry, mutual TLS, traffic routing, shaping, retries, load balancing, circuit-breaking and other cross-cutting concerns before the traffic hits the application container. This makes implementing application services much simpler, as those concerns are handled by the container orchestrator and the service mesh. I covered Istio in a prior post a few years back, and much of that content is still applicable here if you'd like to have a look.
Here are the broad architectural components of Linkerd:
The components are separated into the control plane and the data plane. The control plane components live in their own namespace and consist of a controller that the Linkerd CLI interacts with via the Kubernetes API. The destination service is used for service discovery, TLS identity, access-control policy for inter-service communication, and service profile information on routing, retries and timeouts.
The identity service acts as the Certificate Authority, responding to Certificate Signing Requests (CSRs) from proxies during initialization and for service-to-service encrypted traffic. The proxy injector is an admission webhook that automatically injects the Linkerd proxy sidecar and the init container into a pod when the linkerd.io/inject: enabled annotation is present on the namespace or workload.
On the data plane side are two components. First, the init container, which is responsible for transparently forwarding incoming and outgoing traffic through the Linkerd proxy via iptables rules. Second, the Linkerd proxy, a lightweight micro-proxy written in Rust, which is the data plane itself.
I will be walking you through the setup of Linkerd (2.12.2 at the time of writing) on a Kubernetes cluster. Let's see what's running on the cluster currently. This assumes you have a cluster running and kubectl is installed and available on the PATH.
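The CLI installation command itself appears to have been lost in this extract; Linkerd's documented approach is a one-line installer script, which is what the following PATH remark refers to:

```shell
# Download and run the official Linkerd CLI installer
# (installs the latest stable release into ~/.linkerd2/bin)
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
```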
On most systems, this should be sufficient to set up the CLI. You may need to restart your terminal to load the updated paths. If you have a non-standard configuration and linkerd is not found after the installation, add the following to your PATH to be able to find the CLI:
export PATH=$PATH:~/.linkerd2/bin/
At this point, checking the version would give you the following:
$ linkerd version
Client version: stable-2.12.2
Server version: unavailable
Setting up the Linkerd Control Plane
Before installing Linkerd on the cluster, run the following step to check the cluster for prerequisites:
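The check output that follows is produced by Linkerd's pre-flight command (the command line itself didn't survive extraction):

```shell
# Verify the cluster satisfies Linkerd's requirements before installing
linkerd check --pre
```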
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create non-namespaced resources
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ can create Secrets
√ can read Secrets
√ can read extension-apiserver-authentication configmap
√ no clock skew detected

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

Status check results are √
All the prerequisites appear to be good right now, so installation can proceed.
The first step of the installation is to set up the Custom Resource Definitions (CRDs) that Linkerd requires. The linkerd CLI only prints the resource YAMLs to standard output and does not create them directly in Kubernetes, so you need to pipe the output to kubectl apply to create the resources in the cluster you're working with.
$ linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run linkerd install | kubectl apply -f - to install the control plane.
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
Next, install the Linkerd control plane components in the same manner, this time without the --crds switch:
$ linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created
Kubernetes will start spinning up the control plane components and you should see the following when you list the pods:
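The pod listing snippet isn't reproduced in this extract; the commands below show how to list the pods and run the health check whose output follows:

```shell
# List the Linkerd control plane pods
kubectl get pods -n linkerd
# Run the full health check against the installed control plane
linkerd check
```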
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all pods
√ cluster networks contains all services

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match

linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match

Status check results are √
Everything looks good.
Setting up the Viz Extension
At this point, the required components for the service mesh are set up, but let's also install the viz extension, which provides good visualization capabilities that will come in handy subsequently. Once again, linkerd uses the same pattern for installing the extension.
$ linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
server.policy.linkerd.io/admin created
authorizationpolicy.policy.linkerd.io/admin created
networkauthentication.policy.linkerd.io/kubelet created
server.policy.linkerd.io/proxy-admin created
authorizationpolicy.policy.linkerd.io/proxy-admin created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created
A few seconds later, you should see the following in your pod list:
The viz components live in the linkerd-viz namespace. You can now check out the viz dashboard:
$ linkerd viz dashboard
Linkerd dashboard available at:
http://localhost:50750
Grafana dashboard available at:
http://localhost:50750/grafana
Opening Linkerd dashboard in the default browser
Opening in existing browser session.
The Meshed column indicates the workloads that are currently integrated with the Linkerd control plane. As you can see, there are no application deployments running right now.
Injecting the Linkerd Data Plane components
There are two ways to integrate Linkerd with the application containers:
1. by manually injecting the Linkerd data plane components
2. by instructing Kubernetes to automatically inject the data plane components
Inject the Linkerd data plane manually
Let's try the first option. Below is a simple nginx-app that I will deploy into the cluster:
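The manifest itself isn't reproduced in this extract; a minimal sketch of such a deployment (the name nginx-app and the label values are assumptions) might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
```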
Back in the viz dashboard, I do see the workload deployed, but it isn't currently communicating with the Linkerd control plane, so it doesn't show any metrics and the Meshed count is 0. Looking at the Pod's deployment YAML, I can see that it only includes the nginx container:
Let's directly inject the Linkerd data plane into this running container. We do this by retrieving the YAML of the deployment, piping it to the linkerd CLI to inject the necessary components, and then piping the changed resources to kubectl apply.
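A sketch of that pipeline, assuming the deployment is named nginx-app:

```shell
# Pull the live deployment YAML, inject the Linkerd proxy, and re-apply
kubectl get deployment nginx-app -o yaml | linkerd inject - | kubectl apply -f -
```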
Back in the viz dashboard, the workload is now integrated with the Linkerd control plane. Looking at the updated Pod definition, we see a number of changes that linkerd has injected to integrate it with the control plane. Let's have a look:
At this point, the necessary components are set up for you to explore Linkerd further. You can also try out the jaeger and multicluster extensions, which install in a manner similar to the viz extension, and explore their capabilities.
Inject the Linkerd data plane automatically
In this approach, we shall see how to instruct Kubernetes to automatically inject the Linkerd data plane into workloads at deployment time. We can achieve this by adding the linkerd.io/inject annotation to the deployment descriptor, which causes the proxy injector admission webhook to execute and inject the Linkerd data plane components automatically at deployment time.
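As a sketch, the annotation sits in the pod template's metadata (deployment name assumed; the fragment only shows the relevant fields):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  template:
    metadata:
      annotations:
        # Tells the proxy injector webhook to add the Linkerd sidecar
        linkerd.io/inject: enabled
```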
This annotation can also be specified at the namespace level to affect all the workloads within the namespace. Note that any resources created before the annotation was added to the namespace will require a rollout restart to trigger the injection of the Linkerd components.
Uninstalling Linkerd
Now that we have walked through the installation and setup of Linkerd, let's also cover how to remove it and return the infrastructure to its state prior to installation. The first step is to remove extensions, such as viz.
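Following the same render-and-pipe pattern as the install, removal reverses the order: extensions first, then the control plane:

```shell
# Remove the viz extension resources
linkerd viz uninstall | kubectl delete -f -
# Remove the Linkerd control plane
linkerd uninstall | kubectl delete -f -
```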
Since people are sometimes slightly surprised that you can go on a
multi-week trip with a smartphone running only free software, I
wanted to share some impressions from my recent trip to Prizren/Kosovo
to attend Debconf 22 using a Librem 5. It's a mix of things that
happened and bits that got improved to hopefully make things more fun
to use. And, yes, there won't be any big surprises like being stranded
without the ability to do phone calls in this read because there
weren't and there shouldn't be.
After two online versions Debconf 22 (the annual Debian Conference)
took place in Prizren / Kosovo this year and I sure wanted to go.
Looking for options I settled for a train trip to Vienna, to meet
there with friends and continue the trip via bus to Zagreb,
then switching to a final 11h direct bus to Prizren.
When preparing for the trip and making sure my Librem 5 phone had all the
needed documents I noticed that there would be quite some PDFs to show
until I arrived in Kosovo: train ticket, bus ticket, hotel reservation,
and so on. While that works by unlocking the phone, opening
the file browser, navigating to the folder with the PDFs and showing
them via evince, this looked like a lot of steps to repeat. Can't we have
that information on the phone shell's lockscreen?
This was a good opportunity to see if the upcoming plugin
infrastructure for the lock screen (initially meant to allow for a
plugin to show upcoming events) was flexible enough, so I used some
leisure time on the train to poke at this and just before I reached
Vienna I was able to use it for the first time. As it was the
very last check of that ticket, it also was a bit of cheating since
I didn't present the ticket on the phone itself but from phosh (the
phone's graphical shell) running on my laptop - but still.
This was possible since phosh is written in GTK and so I could
just leverage evince's EvView. Unfortunately the hotel check-in
didn't want to see any documents.
For the next day I moved the code over to the Librem 5 and (being a
bit nervous as the queue to get on the bus was quite long) could
happily check into the Flixbus by presenting the barcode to the
barcode reader via the Librem 5's lockscreen.
When switching to the bus to Prizren I didn't get to use that feature
again as we bought the tickets at a counter, but we got a nice krem
banana after entering the bus (they're not filled with jelly, but
krem - a real Kosovo must-eat!).
Although it was a rather long trip we had frequent breaks and I'd
certainly take the same route again. Here's a photo of Prizren
taken on the Librem 5 without any additional postprocessing:
What about seeing the conference schedule on the phone? Confy (a
conference schedule viewer using GTK and libhandy) to the rescue:
Since Debian's confy maintainer was around too, confy saw a bunch of
improvements during the conference.
For getting around, Puremaps (an application to display maps and
show routing instructions) was very helpful; here it is geolocating me
in Prizren via GPS:
Puremaps currently isn't packaged in Debian but there's work
ongoing to fix that (I used the flatpak for the moment).
We got ourselves sim cards for the local phone network. For some
reason mine wouldn't work (other sim cards from the same operator
worked in my phone but this one just wouldn't). So we went to the sim card
shop and the guy there was perfectly able to operate the Librem 5
without further explanation (including making calls, sending USSD codes
to query balance, ...).
The sim card problem turned out to be a problem on the operator side
and after a couple of days they got it working.
We had nice, sunny weather about all the time. That made me switch
between high contrast mode (to read things in bright sunlight) and
normal mode (e.g. in conference rooms) on the phone quite
often. Thankfully we have an ambient light sensor in the phone so we
can make that automatic.
See here for a video.
Jathan kicked off a DebianOnMobile sprint during the conference
where we were able to improve several aspects of mobile support in
Debian and on Friday I had the chance to give a talk about the state
of Debian on smartphones. pdf-presenter-console is a great
tool for this as it can display the current slide together with
additional notes. I needed some hacks to make it fit the phone screen
but hopefully we figure out a way to have this by default.
I had two great weeks in Prizren. Many thanks to the organizers of
Debconf 22 - I really enjoyed the conference.
phosh is a graphical shell for mobile, touch-based devices like
smartphones. It's the default graphical shell on Purism's
Librem 5 (and that's where
it came to life) but projects like postmarketOS,
Mobian and
Debian have picked it up,
putting it into use on other devices as well and contributing patches.
This post is meant as a short overview how things are tied together so
further posts can provide more details.
A PHone SHell
As a mobile shell, phosh provides the interface components commonly
found on mobile devices to
- launch applications
- switch between running applications and close them
- lock and unlock the screen
- display status information (e.g. network connectivity, battery level)
- provide quick access to things like torch or Bluetooth
- show notifications
It uses the
GObject object system and
GTK to build up the user
interface components. Mobile-specific patterns are brought in via
libhandy.
Since phosh is meant to blend into GNOME as seamlessly as possible it
uses the common interfaces present there via
D-Bus like
org.gnome.Screensaver or org.gnome.keyring.SystemPrompter and
retrieves user configuration like keybindings via
GSettings from preexisting
schema.
The components of a running graphical session roughly look like this:
The blue boxes are the very same found on GNOME desktop sessions
while the white ones are currently only found on
phones.
feedbackd is explained
quickly: It's used for providing haptic or visual user feedback and
makes your phone rumble and blink when applications (or the shell)
want to notify the user about certain events like incoming phone calls
or new messages. What about phoc and squeekboard?
phoc and squeekboard
Although some stacks combine the graphical shell with the display server
(the component responsible for drawing applications and handling user
input) this isn't the case for phosh. phosh relies on a
Wayland
compositor to be present for that. Keeping shell and compositor apart
has some advantages like being able to restart the shell without
affecting other applications but also adds the need for some
additional communication between compositor and shell. This additional
communication is implemented via Wayland protocols. The Wayland
compositor used with phosh is called
phoc for PHone
Compositor.
One of these additional protocols is wlr-layer-shell.
It allows the shell to reserve space on the screen that is not used
by other applications and allows it to draw things like the top and
bottom bar or lock screen. Other protocols used by phosh (and hence implemented by phoc) are
wlr-output-management
to get information on and control properties of monitors or
wlr-foreign-toplevel-management
to get information about other windows on the display. The latter
is used to allow switching between running applications.
However, these (and other) Wayland protocols are not implemented in
phoc from scratch. phoc leverages the
wlroots library for that. The library
also handles many other compositor parts like interacting with the
video and input hardware.
The details of how phoc actually puts things up on the screen deserve
a separate post. For the moment it's sufficient to note that phosh
requires a Wayland compositor like phoc.
We've not talked about entering text without a physical keyboard yet -
phosh itself does not handle that either.
squeekboard is the on
screen keyboard for text (and emoji) input. It again uses Wayland
protocols to talk to the Wayland compositor and it's (like phosh) a
component that wants exclusive access to some areas of the screen
(where the keyboard is drawn) and hence leverages the layer-shell
protocol. Very roughly speaking it turns touch input in that area into
text and sends that back to the compositor that then passes it back to
the application that currently gets the text input. squeekboard's main
author dcz has some more details
here.
The session
So how does the graphical session in the picture above come into existence?
As this is meant to be close to a regular GNOME session it's done via
gnome-session that is
invoked somewhat like:
phoc -E 'gnome-session --session=phosh'
So the compositor phoc is started up and launches gnome-session, which
then looks at phosh.session for the session's components. These are
phosh, squeekboard and gnome-settings-daemon.
These then either connect to already running services via D-Bus
(e.g. NetworkManager, ModemManager, ...) or spawn them via D-Bus
activation when required (e.g. feedbackd).
Calling conventions
So when talking about phosh it's good to keep several things apart:
phosh - the graphical shell
phoc - the compositor
squeekboard - the on screen keyboard
phosh.session - the session that ties these and GNOME together
On top of that people sometimes refer to 'Phosh' as the software
collection consisting of the above plus more components from GNOME
(Settings, Contacts, Clocks, Weather, Evince, ...) and components that
currently aren't part of GNOME but adapt to small screen sizes, use
the same technologies and are needed to make a phone fun to use
e.g. Geary for email, Calls for making phone calls and Chats for SMS
handling.
Since just overloading the term Phosh is confusing, GNOME/Phosh
Mobile Environment or Phosh Mobile Environment have been used
to describe the above collection of software. I've contacted GNOME on how to name
this properly, to not infringe on the GNOME trademark but also give proper credit
and hopefully be able to move things upstream that can live upstream.
That's it for a start. phosh's development documentation can be
browsed here but is
also available in the source code.
Besides the projects mentioned above credits go to
Purism for allowing me and others to work on the
above and other parts related to moving Free Software on mobile Linux
forward.
After some time in the experimental distribution I uploaded
git-buildpackage 0.9.0 to sid a couple of weeks ago and we're now
at 0.9.2 as of today. This brought in two new commands:
gbp export-orig to regenerate tarballs based on the current
version in debian/changelog. This was always possible by using
gbp buildpackage and ignoring the build result (e.g. gbp
buildpackage --git-builder=/bin/true) but having a separate
command is much more straightforward.
gbp push to push everything related to the current version in
debian/changelog: debian-tag, debian-branch, upstream-branch,
upstream-tag and the pristine-tar branch. This could already be achieved
by a posttag hook but having it separate is again more straightforward
and reduces the number of knobs one has to tweak.
We moved to better supported tools:
Switch to Python3 from Python2
Switch from epydoc to pydoctor
Finally switch from Docbook SGML to Docbook XML (we ultimately want to switch
to Sphinx at one point but this will be much simpler now).
pk4 now invokes gbp import-dsc on package import.
There were lots of improvements all over the place like gbp pq now
importing the patch queue on switch (if it's not already there) and gbp
import-dsc and import-orig not creating pointless master branches
if debian-branch != 'master'. And after being broken in the early
0.9.x cycle gbp buildpackage --git-overlay ... should be much better
supported now that we have proper tests.
All in all 26 bugs fixed. Thanks to everybody who contributed bug
reports and fixes.
Sent a patch for pristine-tar to allow storage of detached upstream signatures. (#871809)
Worked more on Lintian, a static analysis tool for Debian packages, reporting on various errors, omissions and quality-assurance issues to the maintainer (previous changes):
Address a number of issues in the copyright-year-in-future tag including preventing false positives in port numbers, email addresses, ISO standard numbers and street addresses (#869788), as well as "meta" or testing statements (#873323). In addition, report all violating years in a line and expand the testsuite.
Updated travis.debian.net (my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform for testing):
Move away from deb.debian.org; Travis appears to be using an HTTP proxy that strips SRV records. (commit)
Highlight that double quotes are required for TRAVIS_DEBIAN_EXTRA_REPOSITORY. (commit)
Merged a pull request in django-slack, my library to easily post messages to the Slack group-messaging utility, where instantiation of a SlackException was failing. (#71)
Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.
The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.
This month I:
Presented a status update at Debconf17 in Montréal, Canada alongside Holger Levsen, Maria Glukhova, Steven Chamberlain, Vagrant Cascadian, Valerie Young and Ximin Luo.
I worked on the following issues upstream:
glib2.0: Please make the output of gio-querymodules reproducible. (...)
Tidy diffoscope.progress and the XML comparator (commit, commit)
disorderfs
disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.
4:4.0.1-5 Drop even more tests with timing issues.
4:4.0.1-6 Don't install completions to /usr/share/bash-completion/completions/debian/bash_completion/.
4:4.0.1-7 Don't let sentinel integration tests fail the build as they use too many timers to be meaningful. (#872075)
python-gflags 1.5.1-3: If SOURCE_DATE_EPOCH is set, either use that as a source of current dates or the UTC version of the file's modification time (#836004); don't call update-alternatives --remove in postrm; update debian/watch/Homepage & refresh/tidy the packaging.
bfs 1.1.1-1: New upstream release; tidy autopkgtest & patches, organising the latter with Pq-Topic.
Debian LTS
April marked the 24th month I contributed to Debian LTS under the
Freexian umbrella. I had 8 hours allocated plus 4 hours left from March
which I used by:
releasing DLA-881-1 for ejabberd. The actual package was
prepared by Philipp Huebner fixing two CVEs
preparing and releasing DLA-896-1 for icedove. This update involved
the debranding of Icedove back to Thunderbird fixing 17 CVEs
preparing and releasing DLA-895-1 of openoffice.org-dictionaries
so the provided dictionaries stay installable with the new
thunderbird package
preparing and releasing DLA-903-1 of hunspell-en-us so the
provided dictionary stays installable with the new thunderbird
package
preparing and releasing DLA-904-1 of uzbek-wordlist so the
provided dictionaries stay installable with the new thunderbird package
handling the communication with credativ regarding XSA-212
triaging of several QEMU/KVM CVEs
backporting large amounts of the cirrus_vga driver to Wheezy's
qemu-kvm to fix 3 cirrus_vga related CVEs. The DLA is not released
yet since I'm awaiting some more feedback about the
test packages. Give them a try!
Looking into the 9pfs related CVEs in qemu-kvm. Work will be resumed
in May.
Other Debian stuff
organized the 10th installment of the Debian Groupware Meeting.
A more detailed report on this is pending.
uploaded osinfo-db 0.20170225-2 to unstable which builds now
reproducibly (thanks Chris Lamb) and has support added for the
Stretch RC3 installer
uploaded libvirt 1.2.9-9+deb8u4 to jessie which now works with newer
QEMU (thanks Hilko Bengen)
uploaded libvirt 3.0.0-4 to unstable unbreaking it for architectures
that don't support probing CPU definitions in QEMU (like mips) and
unbreaking the use of qemu-bridge-helper with apparmor so gnome-boxes
works apparmored now too
uploaded python-vobject 0.9.4.1-1 to experimental. The package was
prepared by Jelmer Vernooĳ. I made some minor cleanups and added an
autopkgtest.
uploaded hunspell-en-us, uzbek-wordlist,
openoffice.org-dictionaries to jessie-security to not conflict with
the new thunderbird package (see above)
sponsored the upload of icedove 1:45.8.0-3~deb8u1 to jessie-security.
sponsored the upload of python-selenium 2.53.2+dfsg1-2 to experimental
git-buildpackage
Released versions 0.8.14 and 0.8.15. Notable changes besides bug fixes:
gbp buildpackage will now default to --merge-mode=replace for
3.0 (quilt) packages to avoid merges where no merge is necessary.
gbp buildpackage --git-export=WC now implies --git-ignore-new --git-ignore-branch
to make it simpler to use
gbp buildpackage now has a "sloppy" mode to create an upstream tarball that uses the debian
branch as base. This can help to test builds from a patched tree.
The main reason was to give people a way to not have to care about 3.0
(quilt) intricacies when getting started with packaging.
gbp clone now supports vcsgit: and github: pseudo URLs:
$ gbp clone vcsgit:libvirt
gbp:info: Cloning from 'https://anonscm.debian.org/git/pkg-libvirt/libvirt.git'
$ gbp clone github:agx/libvirt-debian
gbp:info: Cloning from 'https://github.com/agx/libvirt-debian.git'
Debian LTS
November marked the 21st month I contributed to Debian LTS
under the Freexian umbrella. I had 8 hours allocated which I used
for:
the first half of a LTS front desk week
updating icedove to 45.6.0, resulting in DLA-782-1 fixing 8 CVEs
releasing DLA-783-1 for XEN, the actual update was provided by
credativ
testing the bind9 update prepared by Thorsten Alteholz
fixing 8 CVEs in imagemagick resulting in DLA-807-1.
work on recent qemu CVEs
Other Debian stuff
Usual bunch of libvirt and related uploads
Uploaded git-buildpackage 0.8.10 to 0.8.12.1 to experimental and
unstable fixing (among other things) a long standing bug when using
multiple tarballs with filters and pristine-tar as well as making
generated orig tarballs reproducible so one gets identical tarballs even
without pristine-tar.
Ran a gbp import-dsc of unstable and filed bugs for cases where
pristine-tar would not import the package. Started to look into
git-apply errors.
Some other Free Software activities
libplanfahr: switched the example to python3 and made it parse
arguments without date as "today":
Debian LTS
November marked the nineteenth month I contributed to Debian LTS
under the Freexian umbrella. I had 7 hours allocated which I used
completely by:
Being at LTS frontdesk twice (at the beginning and end of November)
triaging about 30 CVEs.
Preparing and releasing DLA-698-1 for QEMU fixing 9 CVEs
Putting out DLA-699-1 for xen; the actual xen update was
prepared by Bastian Blank
Other Debian stuff
Usual bunch of libvirt and related uploads (osinfo-db-tools, libvirt-python, libosinfo)
looking into current xen issues and handling the communication with
credativ
investigating QEMU CVE-2016-7466 in Wheezy and Jessie
backporting patches for qemu-kvm to fix 9 CVEs resulting in DLA-689-1
starting with LTS frontdesk (more on that next month)
Other Debian stuff
Carsten and myself had the chance to talk at the
Kopano conference about Debian and the state of Kopano in
Debian (slides)
Uploaded kopanocore to unstable, currently waiting in NEW
Several Libvirt and Libvirt (2.3.0, 2.4.0~rc*) related uploads
(libvirt 2.3.0, libvirt-python, ruby-libvirt 0.7.0)
Uploaded libosinfo 1.0.0 to experimental. This version has the
osinfo database split out into its own source package
(osinfo-db, waiting in new) so the operating system and
hypervisor information is updateable during a stable release without
having to update the library itself
Made specinfra handle libvirt LXC containers like plain LXC containers
Fixed an error in calypso that made us potentially remove the
wrong entries, added Travis CI support and started looking into
python-vobject's unicode breakage
finishing my work on bringing rails into shape security wise
resulting in DLA-641-1 for ruby-activesupport-3.2 and
DLA-642-1 for ruby-activerecord-3.2.
Made several improvements to foreman_ansible_inventory (an
ansible dynamic inventory querying Foreman): fixing an endless loop
when Foreman would miscalculate the number of hosts to process,
flake8 cleanliness and some work on python3 support
Debian LTS
August marked the sixteenth month I contributed to Debian LTS
under the Freexian umbrella. I spent 9 hours (of allocated 8)
mostly on Rails related CVEs which resulted in DLA-603-1 and
DLA-604-1 fixing 6 CVEs and marking others as not affecting the
packages. The hardest part was proper testing since the split
packages in Wheezy don't allow to run the upstream test suite as is.
There's still CVE-2016-0753 which I need to check if it affects
activerecord or activesupport.
Additionally I had one relatively quiet week of LTS frontdesk work
triaging 10 CVEs.
Other Debian stuff
I uploaded git-buildpackage 0.8.2 to experimental and 0.8.3 to
unstable. The latter brings all the enhancements and bugfixes since
Debconf 16 to sid and testing.
Gathering from some recent discussions it seems not to be that well
known that Foreman (a lifecycle tool for your virtual machines)
integrates not only with Puppet but also with
ansible. This is a list of tools I find useful in this regard:
The ansible-module-foreman ansible module allows you to set up all
kinds of resources like images, compute resources, hostgroups,
subnets and domains within Foreman itself via ansible using Foreman's
REST API. E.g. creating a hostgroup looks like:
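A minimal sketch of such a task (the foreman_hostgroup module name, its parameters and the credentials shown are assumptions from memory; check the ansible-module-foreman README for the exact interface):

```yaml
# Sketch only: module and parameter names are assumptions,
# verify against your checkout of ansible-module-foreman.
- name: Create a hostgroup in Foreman
  foreman_hostgroup:
    name: webservers
    state: present
    foreman_host: foreman.example.com   # hypothetical Foreman instance
    foreman_user: admin
    foreman_pass: changeme
```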
The foreman_ansible plugin for Foreman allows you to collect
reports and facts from ansible-provisioned hosts. This requires an
additional hook in your ansible config like:
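As a rough sketch of that hook setup (the plugin path, section and option names below are assumptions; see the foreman_ansible documentation for the real ones), the Foreman report callback is enabled via ansible.cfg along these lines:

```ini
; Sketch: paths and option names are assumptions, not verified
[defaults]
callback_plugins = /usr/lib/ansible/callback_plugins

[foreman]
url = https://foreman.example.com
verify_certs = 0
```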
The hook will report back to Foreman after a playbook has finished.
There are several options for creating hosts in Foreman via
the ansible API. I'm currently using ansible_foreman_module
tailored for image based installs. This looks in a playbook like:
- name: Build 10 hosts
  foremanhost:
    name: "{{ item }}"
    hostgroup: "a/host/group"
    compute_resource: "hopefully_not_esx"
    subnet: "webservernet"
    environment: "{{ env | default(omit) }}"
    ipv4addr: "{{ from_ipam | default(omit) }}"
    # Additional params to tag on the host
    params:
      app: varnish
      tier: web
      color: green
    api_user: "{{ foreman_user }}"
    api_password: "{{ foreman_pw }}"
    api_url: "{{ foreman_url }}"
  with_sequence: start=1 end=10 format="newhost%02d"
The foreman_ansible_inventory is a dynamic inventory script
for ansible that fetches all your hosts and groups via the Foreman
REST APIs. It automatically groups hosts in ansible from Foreman's
hostgroups, environments, organizations and locations and allows you
to build additional groups based on any available host parameter
(and combinations thereof). So using the above example and this
configuration:
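A configuration along these lines, i.e. a group_patterns entry in the inventory script's ini file matching the host parameters used above (the file name and section are assumptions, see the foreman_ansible_inventory README):

```ini
; foreman.ini for foreman_ansible_inventory (names are assumptions)
[ansible]
group_patterns = ["{app}-{tier}", "{color}"]
```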
it would build the additional ansible groups varnish-web and green
and put the above hosts into them.
This way you can easily select the hosts for
e.g. blue/green deployments. You don't have to pass the
parameters during host creation; if you have parameters on
e.g. domains or hostgroups, these are available for grouping via
group_patterns too.
If you're grouping your hosts via the above inventory script and you
use lots of parameters, then having these displayed in the detail page
can be useful. You can use the foreman_params_tab plugin for that.
There's also support for triggering ansible runs from within Foreman
itself but I've not used that so far.