Search Results: "kyle"

9 November 2025

Stefano Rivera: Debian Video Team Sprint: November 2025

This week, some of the DebConf Video Team met in Herefordshire (UK) for a sprint. We didn't have a sprint in 2024, and one was sorely needed. At the sprint we made good progress towards using Voctomix 2 more reliably, and made plans for our future hardware needs.

Voctomix 2

DebConf 25 was the first event at which the team used Voctomix version 2. When we tested it during DebCamp 25 (the week before DebConf), it seemed to work reliably. But during the event, we hit repeated audio dropout issues that affected about 18 of our recordings (and live streams). We had attempted to use Voctomix 2 at DebConf 24, and quickly rolled back to version 1 on day 1 of the conference when we hit similar issues. We thought these issues would be resolved for DebConf 25 by using more powerful (newer) mixing machines.

Trying to get to the bottom of these issues was the main focus of the sprint. Nicolas brought two of Debian's cameras and the Framework laptop that we'd used at the conference, so we could reproduce the problem. It didn't take long to reproduce; in fact, we spent most of the week trying any configuration changes we could think of to avoid it. The issue we've been seeing feels like a GStreamer bug, rather than something Voctomix is doing incorrectly. If anything, configuration changes only avoid hitting it. Finally, on the last night of the sprint, we managed to run Voctomix all night without the problem appearing. But that isn't enough to feel confident that the issue is avoided; more testing will be required.

Detecting audio breakage

Kyle worked on a way to report audio quality in our Prometheus exporter, so we can automatically detect this kind of audio breakage. This was implemented in helios, our audio level monitor, and led to some related code refactoring.

Framework Laptops

Historically, the video team has relied on borrowed and rented computer hardware at conferences for our (software) video mixing, streaming, storage, and encoding.
Many years ago, we'd even typically have a local Debian mirror and upload queue on site. Our video mixing machines had to be desktop-sized computers with two Blackmagic DeckLink Mini Recorder PCIe cards installed in them, to capture video from our cameras. Now that we reliably have more Internet bandwidth than we really need at our conference venues, we can rely on off-site cloud servers; we only need the video capture and mixing machines on site.

Blackmagic also makes UltraStudio Recorder Thunderbolt capture boxes that we can use with a laptop. The project bought a couple of these and a Framework 13 AMD laptop to test at DebConf 25. We used it in production at DebConf, in the "Petit Amphi" room, where it seemed to work fairly well, although it was very picky about Thunderbolt cable and port combinations, refusing to even boot while some of them were connected. Since then, Framework firmware updates have fixed these issues, and in our testing at the sprint it worked almost perfectly. (One of the capture boxes got into a broken state, and had to be unplugged and re-connected to fix it.) We think these are the best option for the future, and plan to ask the project to buy some more of them.

HDCP

Apple Silicon devices seem to like to HDCP-encrypt their HDMI output whenever possible. This causes our HDMI capture hardware to display an "Encrypted" error, rather than any useful image. Chris experimented with a few different devices to strip HDCP from HDMI video; at least two of them worked.

Spring Cleaning

Kyle dug through the open issues in our Salsa repositories and cleaned some of them up.

DebConf 25 Video Encoding

The core video team at DebConf 25 was a little under-staffed, and significantly overlapped with core conference organization, which took priority. That, combined with the Voctomix 2 audio dropout issues we'd hit, meant that there was quite a bit of work left to do to get the conference videos properly encoded and released.
We found that the encodings had been done at the wrong resolution, which forced a re-encode of all videos. In the process, we reviewed videos for audio issues and made a list of the ones that need more work. We ran out of time, so this work isn't done yet.

DebConf 26 Preparation

Kyle reviewed floorplans and photographs of the proposed DebConf 26 talk venues, and built up a list of A/V kit that we'll need to hire.

Carl's Video Box

Carl uses much of the same stack as the video team for many other events in the US. He has been experimenting with a Dell 7212 tablet in an all-in-one laser-cut box. Carl demonstrated this box, which could be perfect for small MiniDebConfs, at the sprint. Using Voctomix 2 on the box requires some work, because it doesn't use Blackmagic cards for video capture.

The Box: Front
The Box: Back

gst-fallbacksrc

Carl's box's needs led us to look at gst-fallbacksrc. This should let Voctomix 2 survive cameras (or network sources) going away for a moment. Matthias Geiger packaged it for us, and it's now in Debian NEW. Thanks!

voctomix-outcasts

Carl cut a release of voctomix-outcasts and Stefano uploaded it to unstable.

Ansible Configuration

The video team's stack is deployed with Ansible, and almost everything we do involves work on this stack. Carl upstreamed some of his features to us, and we updated our Voctomix 2 configuration to take advantage of our experiments at the sprint.

Miscellaneous Voctomix contributions

We fixed a couple of minor bugs in Voctomix.

More Nageru experimentation

In 2023, we tried to configure Nageru (another live video mixer) for the video team's needs. Like Voctomix, it needs some configuration and scaffolding to adapt it to your needs; practically, this means writing a "theme" in Lua that controls the mixer. The team still has a preference for Voctomix (as we're all very familiar with it), but would like to have Nageru available as an option when we need it.
We fixed some minor issues in our theme, enough to get it running again on the Framework laptop. Much more work is needed to make it a really usable option.

Thank you

Thanks to the Debian project for funding the costs of the sprint, and to Chris Boot's extended family for providing us with a no-cost sprint venue. Thanks to c3voc for developing and maintaining Voctomix, and for helping us debug issues in it. Thank you to everyone in the video team who attended or helped out remotely, and to the employers who let us work on Debian on company time. We'll likely need to keep working on our stack remotely in the lead-up to DebConf 26, and/or have another sprint before then.

Breakfast Coffee Hacklab Trains!

22 September 2025

Vincent Bernat: Akvorado release 2.0

Akvorado 2.0 was released today! Akvorado collects network flows with IPFIX and sFlow. It enriches flows and stores them in a ClickHouse database. Users can browse the data through a web console. This release introduces an important architectural change and other smaller improvements. Let's dive in!
$ git diff --shortstat v1.11.5
 493 files changed, 25015 insertions(+), 21135 deletions(-)

New outlet service

The major change in Akvorado 2.0 is splitting the inlet service into two parts: the inlet and the outlet. Previously, the inlet handled all flow processing: receiving, decoding, and enrichment. Flows were then sent to Kafka for storage in ClickHouse:
Akvorado flow processing before the change: flows are received and processed by the inlet, sent to Kafka and stored in ClickHouse
Akvorado flow processing before the introduction of the outlet service
Network flows reach the inlet service using UDP, an unreliable protocol. The inlet must process them fast enough to avoid losing packets. To handle a high number of flows, the inlet spawns several sets of workers to receive flows, fetch metadata, and assemble enriched flows for Kafka. Many configuration options existed for scaling, which increased complexity for users. The code needed to avoid blocking at any cost, making the processing pipeline complex and sometimes unreliable, particularly the BMP receiver.1 Adding new features became difficult without making the problem worse.2 In Akvorado 2.0, the inlet receives flows and pushes them to Kafka without decoding them. The new outlet service handles the remaining tasks:
Akvorado flow processing after the change: flows are received by the inlet, sent to Kafka, processed by the outlet and inserted in ClickHouse
Akvorado flow processing after the introduction of the outlet service
This change goes beyond a simple split:3 the outlet now reads flows from Kafka and pushes them to ClickHouse, two tasks that Akvorado did not handle before. Flows are heavily batched to increase efficiency and reduce the load on ClickHouse, using ch-go, a low-level Go client for ClickHouse. When batches are too small, asynchronous inserts are used (e20645). The number of outlet workers scales dynamically (e5a625) based on the target batch size and latency (50,000 flows and 5 seconds by default). This new architecture also allows us to simplify and optimize the code. The outlet fetches metadata synchronously (e20645). The BMP component becomes simpler by removing cooperative multitasking (3b9486). Reusing the same RawFlow object to decode protobuf-encoded flows from Kafka reduces pressure on the garbage collector (8b580f). The effect on Akvorado's overall performance was somewhat uncertain, but a user reported 35% lower CPU usage after migrating from the previous version, plus resolution of the long-standing BMP component issue.

Other changes

This new version includes many miscellaneous changes, such as completion for source and destination ports (f92d2e) and automatic restart of the orchestrator service when its configuration changes (0f72ff), avoiding a common pitfall for newcomers. Let's focus on some key areas for this release: observability, documentation, CI, Docker, Go, and JavaScript.

Observability

Akvorado exposes metrics to provide visibility into the processing pipeline and help troubleshoot issues. These are available through Prometheus HTTP metrics endpoints, such as /api/v0/inlet/metrics. With the introduction of the outlet, many metrics moved. Some were also renamed (4c0b15) to match Prometheus best practices. Kafka consumer lag was added as a new metric (e3a778). If you do not have your own observability stack, the Docker Compose setup shipped with Akvorado provides one. You can enable it by activating the profiles introduced for this purpose (529a8f). The prometheus profile ships Prometheus to store metrics and Alloy to collect them (2b3c46, f81299, and 8eb7cd). Redis and Kafka metrics are collected through the exporter bundled with Alloy (560113). Other metrics are exposed using Prometheus metrics endpoints and are automatically fetched by Alloy with the help of some Docker labels, similar to what is done to configure Traefik. cAdvisor was also added (83d855) to provide some container-related metrics. The loki profile ships Loki to store logs (45c684). While Alloy can collect and ship logs to Loki, its parsing abilities are limited: I could not find a way to preserve all the metadata associated with the structured logs produced by many applications, including Akvorado. Vector replaces Alloy (95e201) and features a domain-specific language, VRL, to transform logs. Annoyingly, Vector currently cannot retrieve Docker logs emitted before it was started. Finally, the grafana profile ships Grafana, but the shipped dashboards are broken; fixing them is planned for a future version.

Documentation

The Docker Compose setup provided by Akvorado makes it easy to get the web interface up and running quickly. However, Akvorado requires a few mandatory steps to be functional. It ships with comprehensive documentation, including a chapter about troubleshooting problems. I hoped this documentation would reduce the support burden, but it is difficult to know whether it works: happy users rarely report their success, while some users open discussions asking for help without reading much of the documentation. In this release, the documentation was significantly improved.
$ git diff --shortstat v1.11.5 -- console/data/docs
 10 files changed, 1873 insertions(+), 1203 deletions(-)
The documentation was updated (fc1028) to match Akvorado's new architecture. The troubleshooting section was rewritten (17a272). Instructions on how to improve ClickHouse performance when upgrading from versions earlier than 1.10.0 were added (5f1e9a). An LLM proofread the entire content (06e3f3). Developer-focused documentation was also improved (548bbb, e41bae, and 871fc5). From a usability perspective, table-of-contents sections are now collapsible (c142e5). Admonitions help draw user attention to important points (8ac894).
Admonition in Akvorado documentation to ask a user not to open an issue or start a discussion before reading the documentation
Example of use of admonitions in Akvorado's documentation

Continuous integration

This release includes efforts to speed up continuous integration on GitHub. Coverage and race tests run in parallel (6af216 and fa9e48). The Docker image builds during the tests but gets tagged only after they succeed (8b0dce).
GitHub workflow for CI with many jobs, some of them running in parallel, some not
GitHub workflow to test and build Akvorado
End-to-end tests (883e19) ensure the shipped Docker Compose setup works as expected. Hurl runs tests on various HTTP endpoints, particularly to verify metrics (42679b and 169fa9). For example:
## Test inlet has received NetFlow flows
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_flow_input_udp_packets_total{job="akvorado-inlet",listener=":2055"})
HTTP 200
[Captures]
inlet_receivedflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_receivedflows" > 10
## Test inlet has sent them to Kafka
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_kafka_sent_messages_total{job="akvorado-inlet"})
HTTP 200
[Captures]
inlet_sentflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_sentflows" >= {{inlet_receivedflows}}

Docker

Akvorado ships with a comprehensive Docker Compose setup to help users get started quickly. It ensures a consistent deployment, eliminating many configuration-related issues. It also serves as living documentation of the complete architecture. This release brings some small enhancements around Docker. Previously, many Docker images were pulled from the Bitnami Containers library. However, VMware acquired Bitnami in 2019 and Broadcom acquired VMware in 2023. As a result, Bitnami images were deprecated in less than a month. This was not really a surprise.4 Previous versions of Akvorado had already started moving away from them. In this release, the Apache project's Kafka image replaces the Bitnami one (1eb382). Thanks to the switch to KRaft mode, Zookeeper is no longer needed (0a2ea1, 8a49ca, and f65d20). Akvorado's Docker images were previously compiled with Nix. However, building AArch64 images on x86-64 is slow because it relies on QEMU userland emulation. The updated Dockerfile uses multi-stage and multi-platform builds: one stage builds the JavaScript part on the host platform, one stage builds the Go part, cross-compiled on the host platform, and the final stage assembles the image on top of a slim distroless image (268e95 and d526ca).
# This is a simplified version
FROM --platform=$BUILDPLATFORM node:20-alpine AS build-js
RUN apk add --no-cache make
WORKDIR /build
COPY console/frontend console/frontend
COPY Makefile .
RUN make console/data/frontend
FROM --platform=$BUILDPLATFORM golang:alpine AS build-go
RUN apk add --no-cache make curl zip
WORKDIR /build
COPY . .
COPY --from=build-js /build/console/data/frontend console/data/frontend
RUN go mod download
RUN make all-indep
ARG TARGETOS TARGETARCH TARGETVARIANT VERSION
RUN make
FROM gcr.io/distroless/static:latest
COPY --from=build-go /build/bin/akvorado /usr/local/bin/akvorado
ENTRYPOINT [ "/usr/local/bin/akvorado" ]
When building for multiple platforms with --platform linux/amd64,linux/arm64,linux/arm/v7, the build steps until the highlighted line execute only once for all platforms. This significantly speeds up the build. Akvorado now ships Docker images for these platforms: linux/amd64, linux/amd64/v3, linux/arm64, and linux/arm/v7. When requesting ghcr.io/akvorado/akvorado, Docker selects the best image for the current CPU. On x86-64, there are two choices. If your CPU is recent enough, Docker downloads linux/amd64/v3. This version contains additional optimizations and should run faster than the linux/amd64 version. It would be interesting to ship an image for linux/arm64/v8.2, but Docker does not support the same mechanism for AArch64 yet (792808).

Go

This release includes many changes related to Go that are not visible to users.

Toolchain

In the past, Akvorado supported the two latest Go versions, preventing immediate use of the latest enhancements. The goal was to allow users of stable distributions to compile Akvorado with the Go version shipped with their distribution. However, this became frustrating when interesting features, like go tool, were released. Akvorado 2.0 requires Go 1.25 (77306d) but can be compiled with older toolchains by automatically downloading a newer one (94fb1c).5 Users can still override GOTOOLCHAIN to revert this decision. The recommended toolchain is updated weekly through CI to ensure we get the latest minor release (5b11ec). This change also simplifies updates to newer versions: only go.mod needs updating. Thanks to this change, Akvorado now uses wg.Go() (77306d) and I have started converting some unit tests to the new testing/synctest package (bd787e, 7016d8, and 159085).
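Concretely, this pinning only takes two directives in go.mod; with the default GOTOOLCHAIN=auto, an older installed Go fetches the requested toolchain automatically, while GOTOOLCHAIN=local disables the download. The module path and version numbers below are illustrative, not Akvorado's actual ones:

```
module example.com/akvorado-like

go 1.25.0

toolchain go1.25.1
```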

Testing

When testing equality, I use a helper function Diff() to display the differences when it fails:
got := input.Keys()
expected := []int{1, 2, 3}
if diff := helpers.Diff(got, expected); diff != "" {
	t.Fatalf("Keys() (-got, +want):\n%s", diff)
}
This function uses kylelemons/godebug. This package is no longer maintained and has some shortcomings: for example, by default, it does not compare struct private fields, which may cause unexpectedly successful tests. I replaced it with google/go-cmp, which is stricter and has better output (e2f1df).

Another package for Kafka

Another change is the switch from Sarama to franz-go to interact with Kafka (756e4a and 2d26c5). The main motivation for this change is to get a better concurrency model. Sarama heavily relies on channels and it is difficult to understand the lifecycle of an object handed to this package. franz-go uses a more modern approach with callbacks6 that is both more performant and easier to understand. It also ships with a package to spawn fake Kafka broker clusters, which is more convenient than the mocking functions provided by Sarama.

Improved routing table for BMP

To store its routing table, the BMP component used kentik/patricia, an implementation of a Patricia tree focused on reducing garbage-collection pressure. gaissmai/bart is a more recent alternative using an adaptation of Donald Knuth's ART algorithm that promises better performance and delivers it: 90% faster lookups and 27% faster insertions (92ee2e and fdb65c). Unlike kentik/patricia, gaissmai/bart does not help with efficiently storing values attached to each prefix. I adapted kentik/patricia's approach to store route lists for each prefix: store a 32-bit index for each prefix, and use it to build a 64-bit index for looking up routes in a map. This leverages Go's efficient map structure. gaissmai/bart also supports a lockless version of the routing table, but adopting it is not simple because we would need to extend this to the map storing the routes and to the interning mechanism. I also attempted to use Go's new unique package to replace the intern package included in Akvorado, but performance was worse.7
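The 32-bit-index scheme described above can be sketched as follows. All names here are hypothetical; this is an illustration of the indexing idea, not Akvorado's actual code:

```go
package main

import "fmt"

// routeStore keeps route data outside the routing tree: the tree only
// stores a 32-bit prefix index, and (prefixIndex << 32 | routeNumber)
// forms a 64-bit key into a single Go map holding the routes.
type routeStore struct {
	nextPrefix uint32
	routes     map[uint64]string // route data, keyed by combined index
	counts     map[uint32]uint32 // number of routes per prefix index
}

func newRouteStore() *routeStore {
	return &routeStore{
		routes: map[uint64]string{},
		counts: map[uint32]uint32{},
	}
}

// newPrefix allocates the 32-bit index the tree would store for a prefix.
func (s *routeStore) newPrefix() uint32 {
	s.nextPrefix++
	return s.nextPrefix
}

func (s *routeStore) addRoute(prefix uint32, data string) {
	n := s.counts[prefix]
	s.routes[uint64(prefix)<<32|uint64(n)] = data
	s.counts[prefix] = n + 1
}

func (s *routeStore) routesFor(prefix uint32) []string {
	var out []string
	for n := uint32(0); n < s.counts[prefix]; n++ {
		out = append(out, s.routes[uint64(prefix)<<32|uint64(n)])
	}
	return out
}

func main() {
	s := newRouteStore()
	p := s.newPrefix()
	s.addRoute(p, "via peer A")
	s.addRoute(p, "via peer B")
	fmt.Println(s.routesFor(p)) // [via peer A via peer B]
}
```

The tree stays small (one uint32 per prefix), while all route data lives in one map, which Go's runtime handles efficiently.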

Miscellaneous

Previous versions of Akvorado used a custom Protobuf encoder for performance and flexibility. With the introduction of the outlet service, Akvorado only needs a simple static schema, so this code was removed. However, it is still possible to enhance performance with planetscale/vtprotobuf (e49a74 and 8b580f). Moreover, the dependency on protoc, a C++ program, was somewhat annoying. Therefore, Akvorado now uses buf, written in Go, to convert a Protobuf schema into Go code (f4c879). Another small optimization, reducing the size of the Akvorado binary by 10 MB, was to compress the static assets embedded in Akvorado into a ZIP file. These include the ASN database, as well as the SVG images for the documentation. A small layer of code makes this change transparent (b1d638 and e69b91).
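The "small layer of code" idea, reading assets out of a ZIP archive so callers never notice the compression, can be sketched with the standard library's archive/zip package. File names and contents below are made up for illustration; this is not Akvorado's actual code:

```go
package main

import (
	"archive/zip"
	"bytes"
	"fmt"
	"io"
)

// buildZip stands in for the compressed archive embedded in the binary.
func buildZip(files map[string]string) ([]byte, error) {
	var buf bytes.Buffer
	w := zip.NewWriter(&buf)
	for name, content := range files {
		f, err := w.Create(name)
		if err != nil {
			return nil, err
		}
		if _, err := f.Write([]byte(content)); err != nil {
			return nil, err
		}
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// readAsset is the thin layer making the compression transparent:
// callers ask for a name and get the decompressed bytes back.
func readAsset(archive []byte, name string) (string, error) {
	r, err := zip.NewReader(bytes.NewReader(archive), int64(len(archive)))
	if err != nil {
		return "", err
	}
	rc, err := r.Open(name)
	if err != nil {
		return "", err
	}
	defer rc.Close()
	data, err := io.ReadAll(rc)
	return string(data), err
}

func main() {
	archive, _ := buildZip(map[string]string{"asn.csv": "15169,Google"})
	content, _ := readAsset(archive, "asn.csv")
	fmt.Println(content) // 15169,Google
}
```

In a real binary the archive bytes would come from go:embed, and zip.Reader's fs.FS implementation lets it slot in wherever an embed.FS was used before.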

JavaScript

Recently, two large supply-chain attacks hit the JavaScript ecosystem: one affecting the popular packages chalk and debug and another impacting the popular package @ctrl/tinycolor. These attacks also exist in other ecosystems, but JavaScript is a prime target due to heavy use of small third-party dependencies. The previous version of Akvorado relied on 653 dependencies. npm-run-all was removed (3424e8, 132 dependencies). patch-package was removed (625805 and e85ff0, 69 dependencies) by moving missing TypeScript definitions to env.d.ts. eslint was replaced with oxlint, a linter written in Rust (97fd8c, 125 dependencies, including the plugins). I switched from npm to Pnpm, an alternative package manager (fce383). Pnpm does not run install scripts by default8 and prevents installing packages that are too recent. It is also significantly faster.9 Node.js does not ship Pnpm but it ships Corepack, which allows us to use Pnpm without installing it. Pnpm can also list licenses used by each dependency, removing the need for license-compliance (a35ca8, 42 dependencies). For additional speed improvements, beyond switching to Pnpm and Oxlint, Vite was replaced with its faster Rolldown version (463827). After these changes, Akvorado only pulls 225 dependencies.
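Corepack picks the package-manager version from a single field in package.json, so no separate Pnpm installation is needed. The version below is illustrative, not the one Akvorado pins:

```json
{
  "packageManager": "pnpm@9.12.0"
}
```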

Next steps

I would like to land three features in the next version of Akvorado:
  • Add Grafana dashboards to complete the observability stack. See issue #1906 for details.
  • Integrate OVH's Grafana plugin by providing a stable API for such integrations. Akvorado's web console would still be useful for browsing results, but if you want to build and share dashboards, you should switch to Grafana. See issue #1895.
  • Move some work currently done in ClickHouse (custom dictionaries, GeoIP and IP enrichment) back into the outlet service. This should give more flexibility for adding features like the one requested in issue #1030. See issue #2006.

I started working on splitting the inlet into two parts more than one year ago. I found more motivation in recent months, partly thanks to Claude Code, which I used as a rubber duck. Almost none of the produced code was kept:10 it is like an intern who does not learn.

  1. Many attempts were made to make the BMP component both performant and not blocking. See for example PR #254, PR #255, and PR #278. Despite these efforts, this component remained problematic for most users. See issue #1461 as an example.
  2. Some features have been pushed to ClickHouse to avoid the processing cost in the inlet. See for example PR #1059.
  3. This is the biggest commit:
    $ git show --shortstat ac68c5970e2c | tail -1
    231 files changed, 6474 insertions(+), 3877 deletions(-)
    
  4. Broadcom is known for its user-hostile moves. Look at what happened with VMWare.
  5. As a Debian developer, I dislike these mechanisms that circumvent the distribution's package manager. The final straw came when Go 1.25 spent one month in the Debian NEW queue, an arbitrary mechanism I don't like at all.
  6. In the early years of Go, channels were heavily promoted. Sarama was designed during this period. A few years later, a more nuanced approach emerged. See notably Go channels are bad and you should feel bad.
  7. This should be investigated further, but my theory is that the intern package uses 32-bit integers, while unique uses 64-bit pointers. See commit 74e5ac.
  8. This is also possible with npm. See commit dab2f7.
  9. An even faster alternative is Bun, but it is less available.
  10. The exceptions are part of the code for the admonition blocks, the code for collapsing the table of content, and part of the documentation.

26 April 2024

Russell Coker: Convergence vs Transference

I previously wrote a blog post titled Considering Convergence [1] about the possible ways of using a phone as a laptop. While I still believe what I wrote there, I'm now considering ease of movement of work in progress as a way of addressing some of the same issues.

Currently the expected use is that if you have web pages open in Chrome on Android, it's possible to instruct Chrome on the desktop to open the same pages, if both instances of Chrome are signed in to the same GMail account. It's also possible to view the Chrome history with CTRL-H, select tabs from other devices, and load things that were loaded on other devices some time ago. This is very minimal support for moving work between devices, and I think we can do better.

Firstly, for web browsing the Chrome functionality is barely adequate. It requires a heavyweight login process on all browsers that includes sharing stored passwords etc., which isn't desirable. There are many cases where moving work is desired without sharing such things; one example is using a personal device to research something for work. Also, the Chrome method of sending web pages is slow and unreliable, and the viewing-history method gets all closed tabs, when the common case is getting the currently open tabs from one browser window, without the dozens of web pages that turned out not to be interesting and were closed. This could be done with browser plugins allowing functionality similar to KDE Connect for sending tabs, and also the option of emailing a list of URLs or a JSON file that could be processed by a browser plugin on the receiving end. I can send email between my home and work addresses faster than the Chrome share-to-another-device function can send a URL.

For documents we need a way of transferring files. One possibility is to go the Chromebook route and have it all stored on the web. This means that you rely on a web-based document editing system, and the FOSS versions are difficult to manage.
Using Google Docs or SharePoint for everything is not something I consider an acceptable option. Also, for laptop use, being able to run without Internet access is a good thing.

There is a range of distributed filesystems that have been used for various purposes. I don't think any of them cater to the use case of having a phone/laptop and a desktop PC (or maybe multiple PCs) using the same files.

For a technical user it would be an option to have a script that connects to a peer system (i.e. another computer with the same accounts and access-control decisions), rsyncs a directory of working files and the shell history, and then opens a shell with the HISTFILE variable, current directory, and optionally some user environment variables set to match. But this wouldn't be the most convenient thing even for technical users.

For programs that are integrated into the desktop environment, it's possible for them to be restarted on login if they were active when the user logged out. The session tracking for that has about 1/4 of the functionality needed for requesting a list of open files from the application, closing the application, transferring the files, and opening it somewhere else. I think that this would be a good feature to add to the XDG setup.

The model of having programs and data attached to one computer or one network server, with terminals of some sort connecting to it, worked well when computers were big and expensive. But computers continue to get smaller and cheaper, so we need to think of a document-based use of computers that allows things to be easily transferred as convenient. Convenience is important, so hacks like rsync scripts that work for technical users won't work for most people.

6 June 2023

Russell Coker: PinePhonePro First Impression

Hardware

I received my PinePhone Pro [1] on Thursday; it seems in many ways better than the Purism Librem 5 [2] that I have previously written about. The PinePhone is thinner, lighter, and yet has a much longer battery life. A friend described the Librem 5 as the CyberTruck phone, and not in a good way.

In a test I had my PinePhone and my Librem 5 fully charged, left them for 4.5 hours without doing much with them, and then the PinePhone was at 85% and the Librem 5 was at 57%. So the Librem 5 will run out of battery after about 10 hours of not being used, while a PinePhone Pro can be expected to last about 30 hours. The PinePhone Pro isn't as good as some recent Android phones in this regard, but it shows the potential to be quite usable. For this test both phones were connected to a 2.4GHz WiFi network (which uses less power than 5GHz) and doing nothing much, with an out-of-the-box configuration. A phone that is checking email, social networking, and a couple of IM services will use the battery faster. But even if the PinePhone has its battery used twice as fast in a more realistic test, it will still be usable.

Here are the Passmark results from the PinePhone Pro [3], which got a CPU score of 888, compared to 507 for the Librem 5 and 678 for one of the slower laptops I've used. The results are excluded from the Passmark averages because they identified the CPU as only having 4 cores (expecting just 4*A72) while the PinePhone Pro has 6 cores (2*A72+4*A53). This phone definitely has the CPU power for convergence [4]!

Default OS

By default the PinePhone has a KDE-based GUI and the Librem 5 has a GNOME-based GUI. I don't like any iteration of GNOME (I have tried them all and disliked them all) and I like KDE, so I will tend to like anything that is KDE-based more than anything GNOME-based.
But in addition to that, the PinePhone has an interface that looks a lot like Android, with the three on-screen buttons at the bottom of the display and the slide-up tray for installed apps. Android is the most popular phone OS, and looking like the most common option is often a good idea for a new and different product; this seems like an objective criterion for determining that the default GUI on the PinePhone is a better choice (at least as the default).

When I first booted it and connected it to WiFi, the updates app said that there were 633 updates to apply, but it never applied them (I tried clicking on the update button, but to no avail) and didn't give any error message. For me, not being Debian is enough reason to dislike Manjaro, but if that wasn't enough then the failure to update would be a good start. When I ran pacman in a terminal window, it said that each package was corrupt and asked if I wanted to delete it. According to tar tvJf, the packages weren't corrupt. After downloading them again it said that they were corrupt again, so it seemed that pacman wasn't working correctly. When the screen is locked and a call comes in, it gives a window with Accept and Reject buttons, but neither of them works. The default country code for Spacebar (the SMS app) is +1 (US), even though I specified Australia at the initial login. It also doesn't get the APN, unlike Android phones, which seem to have some sort of list of APNs.

Upgrading to Debian

The Debian Wiki page about installing on the PinePhone Pro has the basic information [5]. The first thing it covers is installing the Tow-Boot bootloader, which is already installed by default in recent PinePhones (such as mine). You can recognise that Tow-Boot is installed by pressing the volume-up button in the early stages of boot (described as "before and during the second vibration"); the LED will then turn blue and the phone will act as a USB mass storage device, which makes it easy to do other install/recovery tasks.
The other TOW option is to press volume-down to boot from a MicroSD card (the default is to boot the OS on the eMMC). The images linked from the Debian wiki page are designed to be installed with bmaptool from the bmap-tools Debian package. After installing that package and downloading the pre-built Mobian image, I installed it with the command "bmaptool copy mobian-pinephonepro-phosh-bookworm-12.0-rc3.img.gz /dev/sdb", where /dev/sdb is the device where the USB-mapped PinePhone storage appeared. That took 6 minutes, and then I rebooted my PinePhone into Mobian! Unfortunately the default GUI for Mobian is GNOME/Phosh. Changing it to KDE is my next task.

29 May 2023

Russell Coker: Considering Convergence

What is Convergence In 2013 Kyle Rankin (at the time a Linux Journal columnist and CSO of Purism) wrote a Linux Journal article about Linux convergence [1] (which means using a phone and a dock to replace a desktop) featuring the Nokia N900 smart phone and a chroot environment on the Motorola Droid 4 Android phone. Both had very limited hardware even by the standards of the day, and neither was a system I'd consider using all the time. None of the Android phones I used at that time were at all comparable to any sort of desktop system I'd want to use. Hardware for Convergence Comparing a Phone to a Laptop The first hardware issue for convergence is docks and other accessories to attach a small computer to hardware designed for larger computers. Laptop docks have been around for decades, and for decades I haven't been using them because they have all been expensive and specific to a particular model of laptop. Having an expensive dock at home and an expensive dock at the office, and then replacing them both when the laptop is replaced, may work well for some people but wasn't something I wanted to do. The USB-C interface supports data, power, and DisplayPort video over the same cable; USB-C docks now start at about $20 on eBay, and dock functionality is built in to many new monitors. I can take a USB-C device to the office of any large company and know there's a good chance that there will be a USB-C dock ready for me to use. The fact that USB-C is a standard feature for phones gives obvious potential for convergence. The next issue is performance. The Passmark benchmark seems like a reasonable way to compare CPUs [2]. It may not be the best benchmark, but it has an excellent set of published results for Intel and AMD CPUs. I ran that benchmark on my Librem 5 [3] and got a result of 507 for the CPU score. At the end of 2017 I got a Thinkpad X301 [4], which rates 678 on Passmark.
So the Librem 5 has 3/4 the CPU power of a laptop that was OK for my use in 2018. Given that the X301 was about the minimum spec for a PC that I can use (for things other than serious compiles, running VMs, etc), the Librem 5 has 3/4 the CPU power, only 3G of RAM compared to 6G, and 32G of storage compared to 64G. Here is the Passmark page for my Librem 5 [5]. As an aside, my Librem 5 is apparently 25% faster than the other results for the same CPU; did the Purism people do something to make their device faster than most? For me the Librem 5 would be at the very low end of what I would consider a usable desktop system. A friend's N900 (like the one Kyle used) won't complete the Passmark test, apparently due to the Extended Instructions (NEON) test failing. But of the rest of the tests, most gave a result that was well below 10% of the result from the Librem 5, and only the Compression and CPU Single Threaded tests managed to exceed 1/4 the speed of the Librem 5. One thing to note when considering the specs of phones vs desktop systems is that the MicroSD cards designed for use in dashcams and other continuous-recording devices have TBW ratings that compare well to SSDs designed for use in PCs, so swap to a MicroSD card should work reasonably well and be significantly faster than the hard disks I was using for swap in 2013! In 2013 I was using a Thinkpad T420 as my main system [6]; it had 8G of RAM (the same as my current laptop), although I noted that 4G was slow but usable at the time. Basically it seems that the Librem 5 was about the sort of hardware I could have used for convergence in 2013. But by today's standards, and with the need to drive 4K monitors etc, it's not that great. The N900 hardware specs seem very similar to the Thinkpads I was using from 1998 to about 2003. However a device for convergence will usually do more things than a laptop (IE phone and camera functionality), and software became significantly more bloated over the 1998 to 2013 period.
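Setting up swap on a MicroSD card as described above is straightforward. A sketch, where a swap file under /tmp stands in for a dedicated partition on the card (on a real device you would use something like /dev/mmcblk0p2; the paths here are illustrative):

```shell
# Create and format 64MB of swap space; a file stands in for a
# dedicated MicroSD partition in this sketch:
dd if=/dev/zero of=/tmp/sd-swap bs=1M count=64 status=none
chmod 600 /tmp/sd-swap
mkswap /tmp/sd-swap
# Activating it requires root; a modest priority lets faster swap
# devices (e.g. zram) be preferred:
# sudo swapon --priority 10 /tmp/sd-swap
```

The high TBW rating of dashcam-grade cards is what makes this viable: swap is a write-heavy workload that would wear out a cheap card quickly.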
A Linux desktop system performed reasonably with 32MB of RAM in 1998, but by 2013 even 2G was limiting. Software Issues for Convergence Jeremiah Foster (Director of PureOS at Purism) wrote an interesting overview of some of the software issues of convergence [7]. One of the most obvious is that the best app design for a small screen is often very different from that for a large screen. Phone apps usually have a single window that shows a view of only one part of the data being worked on (EG an email program that shows a list of messages or the contents of a single message, but not both). Desktop apps of any complexity will either support multiple windows for different data (EG two messages displayed in different windows) or a single window with multiple different types of data (EG message list and a single message). What we ideally want is for all the important apps to support changing modes when the active display changes to one of a different size/resolution. The Purism people are doing some really good work in this regard, but it is a large project that needs to involve a huge range of apps. The next thing that needs to be addressed is the OS interface for managing apps and metadata. On a phone you swipe from one part of the screen to get a list of apps, while on a desktop you will probably have a small section of a large monitor reserved for showing a window list. On a desktop you will typically have an app to manage a list of items copied to the clipboard, while on Android and iOS there is AFAIK no standard way to do that (there is a selection of apps in the Google Play Store to do this sort of thing). Purism has a blog post by Sebastian Krzyszkowiak about some of the development of the OS to make it work better for convergence and the status of getting it into Debian [8]. The limitations in phone hardware force changes to the software. Software needs to use less memory because phone RAM can't be upgraded.
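The mode switching described above boils down to a policy choice driven by the active display's geometry. A toy sketch (the 720px breakpoint and the names are invented for illustration; real convergent toolkits such as libhandy fold and unfold panes automatically):

```python
def choose_layout(width_px: int) -> str:
    """Pick an app layout for the active display.

    Phones get a single pane showing one view at a time (EG a message
    list OR an open message); docked desktops get two panes side by
    side. The 720px breakpoint is an arbitrary illustration.
    """
    return "single-pane" if width_px < 720 else "two-pane"

print(choose_layout(360))    # phone display
print(choose_layout(1920))   # desktop monitor via USB-C dock
```

The hard part, as noted above, is not this policy but getting a huge range of apps to actually re-lay themselves out when the answer changes at dock/undock time.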
The OS needs to be configured for low RAM use, which includes technologies like the zram kernel memory-compression feature. Security When mobile phones first came out they were used for less secret data. Loss of a phone was annoying and expensive, but not a security problem. Now phone theft for the purpose of gaining access to resources stored on the phone is becoming a known crime; here is a news report about a thief stealing credit cards and phones to receive the SMS notifications from banks [9]. We should expect that trend to continue; stealing mobile devices for ssh keys, management tools for cloud services, etc is something we should expect to happen. A problem with mobile phones in current use is that they have one login used for all access, from trivial things done in low-security environments (EG paying for public transport) to sensitive things done in more secure environments (EG online banking and healthcare). Some applications take extra precautions for this, EG the Android app I use for online banking requires authentication before performing any operations. The Samsung version of Android has a system called Knox for running a separate secured workspace [10]. I don't think that the Knox approach would work well for a full Linux desktop environment, but something that provides some similar features would be a really good idea. Also running apps in containers as much as possible would be a good security feature; this is done by default in Android, and desktop OSs could benefit from it. The Linux desktop security model of logging in to a single account and getting access to everything has been outdated for a long time, probably ever since single-user Linux systems became popular. We need to change this for many reasons, and convergence just makes it more urgent. Conclusion I have become convinced that convergence is the way of the future.
It has the potential to make transporting computers easier, purchasing cheaper (buy just a phone, not desktop and laptop systems as well), and access to data more convenient. The Librem 5 doesn't seem up to the task for my use due to being slow and having short battery life; the PinePhone Pro has more powerful hardware and allegedly better battery life [11], so it might work for my needs. The PinePhone Pro probably won't meet the desktop computing needs of most people, but hardware keeps getting faster and cheaper, so eventually most people could have their computing needs satisfied with a phone. The current state of software for convergence and for Linux desktop security needs some improvement. I have some experience with Linux security, so this is something I can help work on. To work on improving this I asked Linux Australia for a grant for me and a friend to get PinePhone Pro devices and a selection of accessories to go with them. Having both a Librem 5 and a PinePhone Pro means that I can test software in different configurations, which will make developing software easier. Also having a friend who's working on similar things will help a lot, especially as he has some low-level hardware skills that I lack. Linux Australia awarded the grant and now the PinePhones are in transit. Hopefully I will have a PinePhone in a couple of weeks to start work on this.

12 November 2022

Wouter Verhelst: Day 3 of the Debian Videoteam Sprint in Cape Town

The Debian Videoteam has been sprinting in Cape Town, South Africa -- mostly because with Stefano here for a few months, four of us (Jonathan, Kyle, Stefano, and myself) actually are in the country on a regular basis. In addition to that, two more members of the team (Nicolas and Louis-Philippe) are joining the sprint remotely (from Paris and Montreal). Videoteam sprint (Kyle and Stefano working on things, with me behind the camera and Jonathan busy elsewhere.) We've made loads of progress! Some highlights: The sprint isn't over yet (we're continuing until Sunday), but loads of things have already happened. Stay tuned!

5 February 2021

Thorsten Alteholz: My Debian Activities in January 2021

FTP master This month I could increase my activities in NEW again and accepted 132 packages. Unfortunately I also had to reject 12 packages. The overall number of packages that got accepted was 374. Debian LTS This was my seventy-ninth month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 26h. During that time I did LTS and normal security uploads of: With the buster upload of highlight.js I could finish fixing CVE-2020-26237 in all releases. I also tried to fix one or the other CVE for golang packages, to be exact: golang-github-russellhaering-goxmldsig, golang-github-tidwall-match, golang-github-tidwall-gjson and golang-github-antchfx-xmlquery. The version in unstable is easily handled by uploading a new upstream version after checking with ratt that all reverse-build-dependencies still work. The next step will be to actually upload all reverse-build-dependencies that need a new build. As the number of reverse-build-dependencies might be rather large, this needs to be done automatically somehow. The problem I am struggling with at the moment is packages that need to be rebuilt but whose version in git has already increased. Another problem with golang packages is packages that are referenced by a Built-Using: line but whose sources are not yet available on security-master. If this happens, the uploaded package will be automatically rejected. Unfortunately the rejection email only contains the first missing package. So in order to reduce the hassle with such uploads, please send me the Built-Using: line before the upload and I will import everything. In December/January this affected the uploads of influxdb and snapd. Last but not least I did some days of frontdesk duties. Debian ELTS This month was the thirty-first ELTS month. During my allocated time I uploaded: Last but not least I did some days of frontdesk duties.
Other stuff This month I uploaded new upstream versions of: I improved packaging of: The golang packages here are basically ones with a missing source upload. For whatever reason maintainers tend to forget about this.

16 July 2020

Louis-Philippe Véronneau: DebConf Videoteam Sprint Report -- DebConf20@Home

DebConf20 starts in about 5 weeks, and as always, the DebConf Videoteam is working hard to make sure it'll be a success. As such, we held a sprint from July 9th to 13th to work on our new infrastructure. A remote sprint certainly ain't as fun as an in-person one, but we nonetheless managed to enjoy ourselves. Many thanks to those who participated, namely: We also wish to extend our thanks to Thomas Goirand and Infomaniak for providing us with virtual machines to experiment on and host the video infrastructure for DebConf20. Advice for presenters For DebConf20, we strongly encourage presenters to record their talks in advance and send us the resulting video. We understand this is more work, but we think it'll make for a more agreeable conference for everyone. Video conferencing is still pretty wonky and there is nothing worse than a talk ruined by a flaky internet connection or hardware failures. As such, if you are giving a talk at DebConf this year, we are asking you to read and follow our guide on how to record your presentation. Fear not: we are not getting rid of the Q&A period at the end of talks. Attendees will ask their questions either on IRC or on a collaborative pad and the Talkmeister will relay them to the speaker once the pre-recorded video has finished playing. New infrastructure, who dis? Organising a virtual DebConf implies migrating from our battle-tested on-premise workflow to a completely new remote one. One of the major changes this means for us is the addition of Jitsi Meet to our infrastructure. We normally have 3 different video sources in a room: two cameras and a slides grabber. With the new online workflow, directors will be able to play pre-recorded videos as a source, will get a feed from a Jitsi room and will see the audience questions as a third source. This might seem simple at first, but is in fact a very major change to our workflow and required a lot of work to implement.
               == On-premise ==                                == Online ==

              Camera 1                                           Jitsi
                 |                                                 |
                 v                                                 v
   Slides -> Voctomix -> Backend -+--> Frontend   Questions -> Voctomix -> Backend -+--> Frontend
                 ^                +--> Frontend                    ^                +--> Frontend
                 |                +--> Frontend                    |                +--> Frontend
              Camera 2                                    Pre-recorded video
In our tests, playing back pre-recorded videos to voctomix worked well, but was sometimes unreliable due to inconsistent encoding settings. Presenters will thus upload their pre-recorded talks to SReview so we can make sure there aren't any obvious errors. Videos will then be re-encoded to ensure a consistent encoding and to normalise audio levels. This process will also let us stitch the Q&As at the end of the pre-recorded videos more easily prior to publication. Reducing the stream latency One of the pitfalls of the streaming infrastructure we have been using since 2016 is high video latency. In a worst case scenario, remote attendees could get up to 45 seconds of latency, making participation in events like BoFs arduous. In preparation for DebConf20, we added a new way to stream our talks: RTMP. Attendees will thus have the option of using either an HLS stream with higher latency or an RTMP stream with lower latency. Here is a comparative table that can help you decide between the two protocols:
            HLS                                    RTMP
  Pros      Can be watched from a browser          Lower latency (~5s)
            Auto-selects a stream encoding
            Single URL to remember
  Cons      Higher latency (up to 45s)             Requires a dedicated video player (VLC, mpv)
                                                   Specific URLs for each encoding setting
Live mixing from home with VoctoWeb Since DebConf16, we have been using voctomix, a live video mixer developed by the CCC VOC. voctomix is conveniently divided in two: voctocore is the backend server, while voctogui is a GTK+ UI frontend directors can use to live-mix. Although voctogui can connect to a remote server, it was primarily designed to run either on the same machine as voctocore or on the same LAN. Trying to use voctogui from a machine at home to connect to a voctocore running in a datacenter proved unreliable, especially for high-latency and low-bandwidth connections. Inspired by the setup FOSDEM uses, we instead decided to go with a web frontend for voctocore. We initially used FOSDEM's code as a proof of concept, but quickly reimplemented it in Python, a language we are more familiar with as a team. Compared to the FOSDEM PHP implementation, voctoweb implements A/B source selection (akin to voctogui) as well as audio control, two very useful features. In the following screen captures, you can see the old PHP UI on the left and the new shiny Python one on the right. The old PHP voctoweb. The new Python3 voctoweb. Voctoweb is still under development and is likely to change quite a bit before DebConf20. Still, the current version seems to work well enough to be used in production if you ever need to. Python GeoIP redirector We run multiple geographically-distributed streaming frontend servers to minimize the load on our streaming backend and to reduce overall latency. Although users can connect to the frontends directly, we typically point them to live.debconf.org and redirect connections to the nearest server. Sadly, 6 months ago MaxMind decided to change the licence on their GeoLite2 database and left us scrambling. To fix this annoying issue, Stefano Rivera wrote a Python program that uses the new database and reworked our ansible frontend server role.
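The redirect logic boils down to mapping a client's location, looked up in the GeoLite2 database, to the nearest frontend. A pure-Python sketch with an invented server map (the real role performs the lookup against MaxMind's database; the URLs and continent keys here are illustrative):

```python
from typing import Optional

# Hypothetical frontend URLs keyed by continent code:
FRONTENDS = {
    "EU": "https://eu.stream.example.org",
    "NA": "https://na.stream.example.org",
    "SA": "https://sa.stream.example.org",
}
# Viewers whose location can't be determined get a sensible default:
FALLBACK = FRONTENDS["EU"]

def pick_frontend(continent_code: Optional[str]) -> str:
    """Return the redirect target for a viewer, given their GeoIP continent."""
    return FRONTENDS.get(continent_code or "", FALLBACK)

print(pick_frontend("NA"))   # nearest North American frontend
print(pick_frontend(None))   # unknown location falls back to the default
```

In production this would sit behind the live.debconf.org vhost and answer each request with an HTTP redirect to the chosen frontend.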
Since the new database cannot be redistributed freely, you'll have to get a (free) license key from MaxMind if you want to use this role. Ansible & CI improvements Infrastructure as code is a living process and needs constant care to fix bugs, follow changes in the DSL and implement new features. All that to say, a large part of the sprint was spent making our ansible roles and continuous integration setup more reliable, less buggy and more featureful. All in all, we merged 26 separate ansible-related merge requests during the sprint! As always, if you are good with ansible and wish to help, we accept merge requests on our ansible repository :)

2 March 2020

Jonathan Carter: Free Software activities for 2020-02

Belgians This month started off in Belgium for FOSDEM on 1-2 February. I attended FOSDEM in Brussels and wrote a separate blog entry for that. The month ended with Belgians at Tammy and Wouter's wedding. On Thursday we had Wouter's bachelor party, and then over the weekend I stayed over at their wedding venue. I thought that other Debianites might be interested, so I'm sharing some photos here with permission from Wouter. It was the only wedding I've been at where nearly everyone had questions about Debian! I first met Wouter on the bus during the daytrip at DebConf12 in Nicaragua; back then I'd been eagerly following the Debianites on Planet Debian for a while, so it was like meeting someone famous. Little did I know that 8 years later, I'd be at his wedding back in my part of the world. If you went to DebConf16 in South Africa, you might remember Tammy, who did a lot of work for DC16, including most of the artwork, a bunch of website work, design of the badges, bags, etc, and also did a lot of organisation for the day trips. Tammy and Wouter met while Tammy was reviewing the artwork in the video loops for the DebConf videos, and things developed from there. Wouter's Bachelors Wouter was blindfolded and kidnapped and taken to the city center, where we prepared to go on a bike tour of Cape Town, stopping for beer at a few places along the way. Wouter was given a list of tasks that he had to complete, or the wedding wouldn't be allowed to continue.

Wouter s tasks
Wouter s props, needed to complete his tasks
Bike tour leg at Cape Town Stadium.
Seeking out 29 year olds.
Wouter finishing his lemon and actually seemingly enjoying it.
Reciting South African national anthem notes and lyrics.
The national anthem, as performed by Wouter (I was actually impressed by how good his pitch was).
The Wedding Friday afternoon we arrived at the lodge for the weekend. I had some work to finish, but at least this was nicer than where I would have been working if it weren't for the wedding.
Accommodation at the lodge
When the wedding co-ordinators started setting up, I noticed that there were all these swirls that almost looked like Debian logos. I asked Wouter if that was on purpose or just a happy accident. He said "Hmm! I haven't even noticed that yet!". I didn't get a chance to ask Tammy yet, so it could still be her touch.
Debian swirls everywhere
I took a canoe ride on the river and look what I found, a paddatrapper!
Kyle and I weren't the only ones out on the river that day. When the wedding ceremony started, Tammy made a dramatic entrance, coming in on a boat, standing at the front with the breeze blowing her dress like a valkyrie.
A bit of digital zoomage of previous image.
Time to say the vows.
Just married. Thanks to Sue Fuller-Good for the photo.
Except for one character being out of place, this was a perfect fairy tale wedding, but I pointed Wouter to https://jonathancarter.org/how-to-spell-jonathan/ for future reference, so it's all good.
Congratulations again to both Tammy and Wouter. It was a great experience meeting both their families and friends and all the love that was swirling around all weekend.

Debian Package Uploads 2020-02-07: Upload package calamares (3.2.18-1) to Debian unstable. 2020-02-07: Upload package python-flask-restful (0.3.8-1) to Debian unstable. 2020-02-10: Upload package kpmcore (4.1.0-1) to Debian unstable. 2020-02-16: Upload package fracplanet (0.5.1-5.1) to Debian unstable (Closes: #946028). 2020-02-20: Upload package kpmcore (4.1.0-2) to Debian unstable. 2020-02-20: Upload package bluefish (2.2.11) to Debian unstable. 2020-02-20: Upload package gdisk (1.0.5-1) to Debian unstable. 2020-02-20: Accept MR#6 for gamemode. 2020-02-23: Upload package tanglet (1.5.5-1) to Debian unstable. 2020-02-23: Upload package gamemode (1.5-1) to Debian unstable. 2020-02-24: Upload package calamares (3.2.19-1) to Debian unstable. 2020-02-24: Upload package partitionmanager (4.1.0-1) to Debian unstable. 2020-02-24: Accept MR#7 for gamemode. 2020-02-24: Merge MR#1 for calcoo. 2020-02-24: Upload package calcoo (1.3.18-8) to Debian unstable. 2020-02-24: Merge MR#1 for flask-api. 2020-02-25: Upload package calamares (3.2.19.1-1) to Debian unstable. 2020-02-25: Upload package gnome-shell-extension-impatience (0.4.5-4) to Debian unstable. 2020-02-25: Upload package gnome-shell-extension-harddisk-led (19-2) to Debian unstable. 2020-02-25: Upload package gnome-shell-extension-no-annoyance (0+20170928-f21d09a-2) to Debian unstable. 2020-02-25: Upload package gnome-shell-extension-system-monitor (38-2) to Debian unstable. 2020-02-25: Upload package tuxpaint (0.9.24~git20190922-f7d30d-1~exp3) to Debian experimental.

Debian Mentoring 2020-02-10: Sponsor package python-marshmallow-polyfield (5.8-1) for Debian unstable (Python team request). 2020-02-10: Sponsor package geoalchemy2 (0.6.3-2) for Debian unstable (Python team request). 2020-02-13: Sponsor package python-tempura (2.2.1-1) for Debian unstable (Python team request). 2020-02-13: Sponsor package python-babel (2.8.0+dfsg.1-1) for Debian unstable (Python team request). 2020-02-13: Sponsor package python-pynvim (0.4.1-1) for Debian unstable (Python team request). 2020-02-13: Review package ledmon (0.94-1) (Needs some more work) (mentors.debian.net request). 2020-02-14: Sponsor package citeproc-py (0.3.0-6) for Debian unstable (Python team request). 2020-02-24: Review package python-suntime (1.2.5-1) (Needs some more work) (Python team request). 2020-02-24: Sponsor package python-babel (2.8.0+dfsg.1-2) for Debian unstable (Python team request). 2020-02-24: Sponsor package 2048 (0.0.0-1~exp1) for Debian experimental (mentors.debian.net request). 2020-02-24: Review package notcurses (1.1.8-1) (Needs some more work) (mentors.debian.net request). 2020-02-25: Sponsor package cloudpickle (1.3.0-1) for Debian unstable (Python team request).

Debian Misc 2020-02-12: Apply Planet Debian request and close MR#21. 2020-02-23: Accept MR#6 for ToeTally (DebConf Video team upstream). 2020-02-23: Accept MR#7 for ToeTally (DebConf Video team upstream).

24 July 2017

Jonathan Carter: Plans for DebCamp17

In a few days, I'll be attending DebCamp17 in Montréal, Canada. Here are a few things I plan to pay attention to:

8 March 2017

Antoine Beaupré: An update to GitHub's terms of service

On February 28th, GitHub published a brand new version of its Terms of Service (ToS). While the first draft announced earlier in February didn't generate much reaction, the new ToS raised concerns that they may break at least the spirit, if not the letter, of certain free-software licenses. Digging in further reveals that the situation is probably not as dire as some had feared. The first person to raise the alarm was probably Thorsten Glaser, a Debian developer, who stated that the "new GitHub Terms of Service require removing many Open Source works from it". His concerns are mainly about section D of the document, in particular section D.4 which states:
You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.
Section D.5 then goes on to say:
[...] You grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality

ToS versus GPL The concern here is that the ToS bypass the normal provisions of licenses like the GPL. Indeed, copyleft licenses are based on copyright law, which forbids users from doing anything with the content unless they comply with the license, which forces, among other things, "share alike" properties. By granting GitHub and its users rights to reproduce content without explicitly respecting the original license, the ToS may allow users to bypass the copyleft nature of the license. Indeed, as Joey Hess, author of git-annex, explained:
The new TOS is potentially very bad for copylefted Free Software. It potentially neuters it entirely, so GPL licensed software hosted on Github has an implicit BSD-like license
Hess has since removed all his content (mostly mirrors) from GitHub. Others disagree. In a well-reasoned blog post, Debian developer Jonathan McDowell explained the rationale behind the changes:
My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service.
This seems like a fair point to make: GitHub needs to protect its own rights to operate the service. McDowell then goes on to do a detailed rebuttal of the arguments made by Glaser, arguing specifically that section D.5 "does not grant [...] additional rights to reproduce outside of GitHub". However, specific problems arise when we consider that GitHub is a private corporation that users have no control over. The "Services" defined in the ToS explicitly "refers to the applications, software, products, and services provided by GitHub". The term "Services" is therefore not limited to the current set of services. This loophole may actually give GitHub the right to bypass certain provisions of licenses used on GitHub. As Hess detailed in a later blog post:
If Github tomorrow starts providing say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not? If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license that they want to apply to my software?
However, when asked on IRC, Bradley M. Kuhn of the Software Freedom Conservancy explained that "ultimately, failure to comply with a copyleft license is a copyright infringement" and that the ToS do outline a process to deal with such infringement. Some lawyers have also publicly expressed their disagreement with Glaser's assessment, with Richard Fontana from Red Hat saying that the analysis is "basically wrong". It all comes down to the intent of the ToS, as Kuhn (who is not a lawyer) explained:
any license can be abused or misused for an intent other than its original intent. It's why it matters to get every little detail right, and I hope Github will do that.
He went even further and said that "we should assume the ambiguity in their ToS as it stands is favorable to Free Software". The ToS are in effect since February 28th; users "can accept them by clicking the broadcast announcement on your dashboard or by continuing to use GitHub". The immediacy of the change is one of the reasons why certain people are rushing to remove content from GitHub: there are concerns that continuing to use the service may be interpreted as consent to bypass those licenses. Hess even hosted a separate copy of the ToS [PDF] for people to be able to read the document without implicitly consenting. It is, however, unclear how a user should remove their content from the GitHub servers without actually agreeing to the new ToS.

CLAs When I read the first draft, I initially thought there would be concerns about the mandatory Contributor License Agreement (CLA) in section D.5 of the draft:
[...] unless there is a Contributor License Agreement to the contrary, whenever you make a contribution to a repository containing notice of a license, you license your contribution under the same terms, and agree that you have the right to license your contribution under those terms.
I was concerned this would establish the controversial practice of forcing CLAs on every GitHub user. I managed to find a post from a lawyer, Kyle E. Mitchell, who commented on the draft and, specifically, on the CLA. He outlined issues with wording and definition problems in that section of the draft. In particular, he noted that "contributor license agreement is not a legal term of art, but an industry term" and "is a bit fuzzy". This was clarified in the final draft, in section D.6, by removing the use of the CLA term and by explicitly mentioning the widely accepted norm for licenses: "inbound=outbound". So it seems that section D.6 is not really a problem: contributors do not need to necessarily delegate copyright ownership (as some CLAs require) when they make a contribution, unless otherwise noted by a repository-specific CLA. An interesting concern he raised, however, was with how GitHub conducted the drafting process. A blog post announced the change on February 7th with a link to a form to provide feedback until the 21st, with a publishing deadline of February 28th. This gave little time for lawyers and developers to review the document and comment on it. Users then had to basically accept whatever came out of the process as-is. Unlike every software project hosted on GitHub, the ToS document is not part of a Git repository people can propose changes to or even collaboratively discuss. While Mitchell acknowledges that "GitHub are within their rights to update their terms, within very broad limits, more or less however they like, whenever they like", he sets higher standards for GitHub than for other corporations, considering the community it serves and the spirit it represents. He described the process as:
[...] consistent with the value of CYA, which is real, but not with the output-improving virtues of open process, which is also real, and a great deal more pleasant.
Mitchell also explained that, because of its position, GitHub can have a major impact on the free-software world.
And as the current forum of preference for a great many developers, the knock-on effects of their decisions throw big weight. While GitHub have the wheel, and they've certainly earned it for now, they can do real damage.
In particular, there have been some concerns that the ToS change may be an attempt to further the already diminishing adoption of the GPL for free-software projects; on GitHub, the GPL has been surpassed by the MIT license. But Kuhn believes that attitudes at GitHub have begun changing:
GitHub historically had an anti-copyleft culture, which was created in large part by their former and now ousted CEO, Preston-Werner. However, recently, I've seen people at GitHub truly reach out to me and others in the copyleft community to learn more and open their minds. I thus have a hard time believing that there was some anti-copyleft conspiracy in this ToS change.

GitHub response However, it seems that GitHub has actually been proactive in reaching out to the free software community. Kuhn noted that GitHub contacted the Conservancy to get its advice on the ToS changes. While he still thinks GitHub should fix the ambiguities quickly, he also noted that those issues "impact pretty much any non-trivial Open Source and Free Software license", not just copylefted material. When reached for comments, a GitHub spokesperson said:
While we are confident that these Terms serve the best needs of the community, we take our users' feedback very seriously and we are looking closely at ways to address their concerns.
Regardless, free-software enthusiasts have other concerns than the new ToS if they wish to use GitHub. First and foremost, most of the software running GitHub is proprietary, including the JavaScript served to your web browser. GitHub also created a centralized service out of a decentralized tool (Git). It has become the largest code hosting service in the world after only a few years and may well have become a single point of failure for free software collaboration in a way we have never seen before. Outages and policy changes at GitHub can have a major impact on not only the free-software world, but also the larger computing world that relies on its services for daily operation. There are now free-software alternatives to GitHub. GitLab.com, for example, does not seem to have similar licensing issues in its ToS and GitLab itself is free software, although based on the controversial open core business model. The GitLab hosting service still needs to get better than its grade of "C" in the GNU Ethical Repository Criteria Evaluations (and it is being worked on); other services like GitHub and SourceForge score an "F". In the end, all this controversy might have been avoided if GitHub had generally been more open about the ToS development process and had given more time for feedback and reviews by the community. Terms of service are notorious for being confusing and something of a legal gray area, especially for end users who generally click through without reading them. We should probably applaud the efforts made by GitHub to make its own ToS document more readable and hope that, with time, it will address the community's concerns.
Note: this article first appeared in the Linux Weekly News.

15 January 2016

Lars Wirzenius: Obnam 1.19 released (backup software)

I have just released version 1.19 of Obnam, the backup program. See the website at http://obnam.org for details on what the program does. The new version is available from git (see http://git.liw.fi) and as Debian packages from http://code.liw.fi/debian; it has also been uploaded to Debian and will soon be in unstable. The NEWS file extract below gives the highlights of what's new in this version. NOTE: Obnam has an EXPERIMENTAL repository format under development, called green-albatross. It is NOT meant for real use. It is likely to change in incompatible ways without warning. Do not use it unless you're willing to lose your backup. Version 1.19, released 2016-01-15 Bug fixes: Improvements to the manual: Improvements to functionality:

29 December 2013

Russ Allbery: Review: The Incrementalists

Review: The Incrementalists, by Steven Brust & Skyler White
Publisher: Tor
Copyright: September 2013
ISBN: 0-7653-3422-4
Format: Hardcover
Pages: 304
The Incrementalists are a secret society that has been in continuous existence for forty thousand years. The members are immortal... sort of (more on that in a moment). They are extremely good at determining what signals, triggers, and actions have the most impact on people, and adept at using those triggers to influence people's behavior. And they're making the world better. Not in any huge, noticeable way, just a little bit, here and there. Incrementally. Brust and White completely had me with this concept. There are several SF novels about immortals, but (apart from the vampire subgenre) they mostly stay underground and find ways to survive without trying to change the surrounding human society. I love the idea of small, incremental changes, and I was looking forward to reading a book about how that would work. How do they choose what to do? How do they find the right place to push? What long-term goals would such an organization pursue, and what governance structure would they use? I'm still wondering about most of those things, sadly, since that isn't what this novel is about at all. I should be fair: a few of those questions hover in the background. There are some political arguments (that parallel arguments on Brust's blog), and a tiny bit about governance. But mostly this is a sort of romance, a fight over identity, and an extensive exploration of the mental "garden" that the Incrementalists use and build to hold their memories and share information with each other. The story revolves around two Incrementalists: Phil, who is one of the leaders and has one of the longest continuous identities of any of the group, and Renee, a new recruit by Phil, who picks up the memories and capabilities (sort of) of the recently-deceased Incrementalist Celeste. Phil is the long-term insider, although a fairly laid-back one. Renee is the outsider, the one to whom the story is happening at the beginning, who is taught how the Incrementalists' system works.
The Incrementalists is told in alternating first-person viewpoint sections by Phil and Renee (making me wonder if the characters were each written by one of the authors). Once I got past my disappointment over this book not being quite what I wanted, the identity puzzles that Brust and White play with here caught my attention. The underlying "magic" of the Incrementalists is based on the "memory palace" concept of storing memories via visualizations, but takes it into something akin to personal alternate worlds. Their immortality isn't through physical immortality, but rather the ability to store their memories and personality in their garden, which is their term for the memory palace, and then have it imposed on someone else: a combination of reincarnation, mental takeover, and mental merging. This makes for complex interpersonal relationships and complex interactions between memory, belief, and identity, not to mention some major ethical issues, which come to a head in Phil and Renee's relationship and Celeste's lingering meddling. I think Brust and White are digging into some interesting questions of character here. There's a lot of emphasis in The Incrementalists on the ease with which people can be manipulated, and the Incrementalists themselves are far from immune. Choice and personal definition are both questionable concepts given how much influence other people have, how many vulnerabilities everyone carries with them, and how much one's opinions are governed by one's life history. That makes identity complex, and raises the question of whether one can truly define oneself. But, while all these ideas are raised, I think The Incrementalists dances around them more than engages them directly. It's similar to the background hook: yes, these people are slowly improving the world, and we see a little bit of that (and a little bit of them manipulating people for their own convenience), but the story doesn't fully engage with the idea. 
There's a romance, a few arguments, some tension, some world-building, but I don't feel like this book ever fully committed to any of them. One of the occasional failure modes of Brust for me is insufficient explanation and insufficient clarity, and I hit that here. My final impression of The Incrementalists is interesting, enjoyable, but vaguely disappointing. It's still a good story with some interesting characters and nice use of the memory palace concept, and I liked Renee throughout (although I think the development of the love story is disturbingly easy and a little weird). But I can't strongly recommend it, and I'm not sure if it's worth seeking out. Rating: 7 out of 10

10 October 2013

Russ Allbery: One more haul

Just a few more recently-released books that I had to pick up.

Sheila Bair: Bull by the Horns (non-fiction)
Elizabeth Bear: Book of Iron (sff)
Steven Brust & Skyler White: The Incrementalists (sff)
J.M. Coetzee: The Childhood of Jesus (mainstream)
Elizabeth Wein: Rose Under Fire (mainstream)
Walter Jon Williams: Knight Moves (sff)

Time to get back into the reading habit. That's the plan for the next couple of weeks.

2 July 2013

Ondřej Čertík: My impressions from the SciPy 2013 conference

I have attended the SciPy 2013 conference in Austin, Texas. Here are my impressions.

Number one is the fact that the IPython notebook was used by pretty much everyone. I use it a lot myself, but I didn't realize how ubiquitous it has become. It is quickly becoming the standard now. The IPython notebook uses Markdown, which in fact is better than reST. The way to remember the "[]()" syntax for links is that in regular text you put links into () parentheses, so you do the same in Markdown, and prepend [] with the text of the link. The other way to remember is that [] feels more serious and thus is used for the text of the link. I stressed several times to +Fernando Perez and +Brian Granger how awesome it would be to have interactive widgets in the notebook. Fortunately that was pretty much preaching to the choir, as that's one of the first things they plan to implement good foundations for, and I just can't wait to use that.
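As a quick illustration of that ordering (a minimal sketch of my own; the sample text and regex are not from the post):

```python
import re

# In Markdown, the link text goes in [] and the URL in (): [text](url)
sample = "see the [SciPy site](https://conference.scipy.org) for details"
m = re.search(r"\[([^\]]+)\]\(([^)]+)\)", sample)
print(m.group(1))  # the link text: SciPy site
print(m.group(2))  # the URL: https://conference.scipy.org
```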

It is now clear that the IPython notebook is the way to store computations that I want to share with other people, or to use it as a "lab notebook" for myself, so that I can remember what exactly I did to obtain the results (for example, how exactly I obtained some figures from raw data). In other words: instead of having sets of scripts and manual bash commands that have to be executed in a particular order to do what I want, just use the IPython notebook and put everything in there.

Number two is how big the conference has become since the last time I attended (a couple of years ago), yet it still has a friendly feeling. Unfortunately, I had to miss a lot of talks due to scheduling conflicts (there were three parallel sessions), so I look forward to seeing them on video.

+Aaron Meurer and I have done the SymPy tutorial (see the link for videos and other tutorial materials). It's been nice to finally meet +Matthew Rocklin (a very active SymPy contributor) in person. He also had an interesting presentation about symbolic matrices + Lapack code generation. +Jason Moore presented PyDy.
It's been a great pleasure for us to invite +David Li (still a high school student) to attend the conference and give a presentation about his work on sympygamma.com and live.sympy.org.

It was nice to meet the Julia guys, +Jeff Bezanson and +Stefan Karpinski. I contributed the Fortran benchmarks on Julia's website some time ago, but I had the feeling that a lot of them are quite artificial and not very meaningful. I think Jeff and Stefan confirmed my feeling. Julia seems to have a quite interesting type system and multiple dispatch, which SymPy should learn from.

I met the VTK guys +Matthew McCormick and +Pat Marion. One of the keynotes was given by +Will Schroeder from Kitware about publishing. I remember him stressing the need to manage dependencies well and to use a BSD-like license (as opposed to viral licenses like the GPL or LGPL), and noting that open source has pretty much won (i.e., it is now clear that that is the way to go).

I had great discussions with +Francesc Alted, +Andy Terrel, +Brett Murphy, +Jonathan Rocher, +Eric Jones, +Travis Oliphant, +Mark Wiebe, +Ilan Schnell, +Stéfan van der Walt, +David Cournapeau, +Anthony Scopatz, +Paul Ivanov, +Michael Droettboom, +Wes McKinney, +Jake Vanderplas, +Kurt Smith, +Aron Ahmadia, +Kyle Mandli, +Benjamin Root and others.


It's also been nice to have a chat with +Jason Vertrees and other guys from Schrödinger.

One other thing that I realized last week at the conference is that pretty much everyone agreed on the fact that NumPy should act as the default way to represent memory (no matter if the array was created in Fortran or other code) and allow manipulations on it. Faster libraries like Blaze or ODIN should then hook themselves up into NumPy using multiple dispatch. Also SymPy would then hook itself up so that it can be used with array operations natively. Currently SymPy does work with NumPy (see our tests for some examples what works), but the solution is a bit fragile (it is not possible to override NumPy behavior, but because NumPy supports general objects, we simply give it SymPy objects and things mostly work).
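The "NumPy supports general objects" point can be seen with any Python objects, not just SymPy ones. A minimal sketch (my own example, not from the post) using stdlib Fractions in a dtype=object array:

```python
from fractions import Fraction
import numpy as np

# dtype=object arrays defer arithmetic to the elements' own
# __add__/__mul__ methods, which is exactly why SymPy objects
# "mostly work" when handed to NumPy.
a = np.array([Fraction(1, 3), Fraction(1, 6)], dtype=object)
print(a.sum())     # 1/2
print((2 * a)[0])  # 2/3
```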

Similar to this, I would like to create multiple dispatch in SymPy core itself, so that other (faster) libraries for symbolic manipulation can hook themselves up, so that their own (faster) multiplication, expansion or series expansion would get called instead of the SymPy default one implemented in pure Python.
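A minimal sketch of what such a dispatch hook could look like (all names here are hypothetical illustrations, not actual SymPy API):

```python
# hypothetical multiple-dispatch registry keyed on argument types
_mul_registry = {}

def register_mul(*types):
    def decorator(fn):
        _mul_registry[types] = fn
        return fn
    return decorator

def multiply(a, b):
    # a faster library registers its own handler; otherwise fall
    # back to the default (pure Python) multiplication
    handler = _mul_registry.get((type(a), type(b)))
    return handler(a, b) if handler else a * b

class FastSymbol:
    """Stand-in for a symbol type from a faster external library."""
    def __init__(self, name):
        self.name = name

@register_mul(FastSymbol, FastSymbol)
def _fast_symbol_mul(a, b):
    return f"fast({a.name}*{b.name})"
```

With this in place, `multiply(FastSymbol("x"), FastSymbol("y"))` routes to the registered fast implementation, while `multiply(2, 3)` falls back to plain multiplication.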

Other blog posts from the conference:

18 February 2013

John Sullivan: SCALE

I will be speaking at the Southern California Linux Expo (and yes, given the topics covered, it's missing a GNU). My talk, "Four Freedoms for Freedom," is on Sunday, February 24, 2013 from 16:30 to 17:30.
The most obvious people affected by all four of the freedoms that define free software are the programmers. They are the ones who will likely want to -- and are able to -- modify software running on their computers. But free software is a movement to advance and defend freedom for anyone and everyone using any computing device, not just programmers. In many countries now, given the ubiquity of tablets, phones, laptops and desktops, "anyone and everyone using any computing device" means nearly all citizens. But new technological innovations in these areas keep coming with new restrictions, frustrating and controlling users even while creating a perception of empowerment. The Free Software Foundation wants to gain the support and protect the interests of everyone, not just programmers. How do we reach people who have no intention of ever modifying a program, and how do we help them?
Other presentations on my list to check out (in chronological order, some conflicting): If you will be there and want to meet up, drop me a line.

31 October 2011

Russell Coker: SE Linux Status in Debian 2011-10

Debian/Unstable Development

deb http://www.coker.com.au wheezy selinux

The above APT sources.list line has my repository for SE Linux packages that have been uploaded to Unstable and which will eventually go to testing and then the Wheezy release (if they aren't obsoleted first). I have created that repository for people who want to track SE Linux development without waiting for an Unstable mirror to update. In that repository I've included a new version of policycoreutils that now includes mcstrans and also has support for newer policy such that the latest selinux-policy-default package can be installed. The version that is currently in Testing supports upgrading policy on a running system but doesn't support installing the policy on a system that previously didn't run SE Linux. I have also uploaded SE Linux Policy packages from upstream release 20110726, compared to the previous packages which were from upstream release 20100524. As the numbers imply, there are 14 months of upstream policy development, which changes many things. Many of the patches from my Squeeze policy packages are not yet incorporated in the policy I have uploaded to Unstable. I won't guarantee that an Unstable system in Enforcing mode will do anything other than boot up and allow you to login via ssh. It's definitely not ready for production but it's also very suitable for development (10 years ago I did a lot of development on SE Linux systems that often denied login access, it wasn't fun). Kyle Moffett submitted a patch for libselinux which dramatically changed the build process. As Manoj (who wrote the previous build scripts) was not contactable I accepted Kyle's patch as provided. Thanks for the patch Kyle, and thanks for all your work over the years Manoj. Anyway the result of these changes should mean that it's easier to bootstrap Debian on a new architecture and easier to support multi-arch, but I haven't tested either of these.
Squeeze

The policy packages from Squeeze can't be compiled on Unstable. The newer policy compilation tool chain is more strict about how some things can be declared and used, thus some policy which was fairly dubious but usable is now invalid. While it wouldn't be difficult to fix those problems I don't plan to do so. There is no good reason for compiling Squeeze policy on Unstable now that I've uploaded a new upstream release.

deb http://www.coker.com.au squeeze selinux

I am still developing Squeeze policy and releasing it in the above APT repository. I will also get another policy release in a Squeeze update if possible to smooth the transition to Wheezy; the goal is that Squeeze policy will be usable on Wheezy even if it can't be compiled. Also note that the compilation failures only affect the Debian package; it should still be possible to make modules for local use on a Wheezy system with Squeeze policy.

MLS

On Wednesday I'm giving a lecture at my local LUG about MLS on SE Linux. I hope to have a MLS demonstration system available to LUG members by then. Ideally I will have a MLS system running on a virtual server somewhere that's accessible, as well as a Xen/KVM image on a USB stick that can be copied by anyone at the meeting. I don't expect to spend much time on any aspect of SE Linux unrelated to MLS for the rest of the week.

Version Control

I need to change the way that I develop SE Linux packages, particularly the refpolicy source package (source of selinux-policy-default among others). A 20,000 line single patch is difficult to work with! I will have to switch to using quilt; once I get it working well it should save me time on my own development as well as making it easier to send patches upstream. Also I need to setup a public version control system so I can access the source from my workstation, laptop, and netbook. While doing that I might as well make it public so any interested people can help out. Suggestions on what type of VCS to use are welcome.
How You Can Help

Sorting out the mess that is the refpolicy package, sending patches upstream, and migrating to a VCS is a fair bit of work. But there are lots of small parts. Sending patches upstream is a job that could be done in small pieces. Writing new policy is not something to do yet. There's not much point in doing that while I still haven't merged all the patches from Squeeze; maybe next week. However I can provide the missing patches to anyone who wants to review them and assist with the merging. I have a virtual server that has some spare capacity. One thing I would like to do is to have some virtual machines running Unstable with various configurations of server software. Then we could track Unstable on those images and use automated testing to ensure that nothing breaks. If anyone wants root access on a virtual server to install their favorite software then let me know. But such software needs to be maintained and tested!

16 September 2009

Benjamin Mako Hill: Ubuntu Books

As I am attempting to focus on writing projects that are more scholarly and academic on the one hand (i.e., work for my day job at MIT) and more geared toward communicating free software principles toward wider audiences on the other (e.g., Revealing Errors), I have little choice but to back away from technical writing. However, this last month has seen the culmination of a bunch of work that I've done previously: two book projects that have been ongoing for the last couple years or more have finally hit the shelves! The first is the fourth edition (!) of the bestselling Official Ubuntu Book. Much to my happiness, the book continues to do extremely well and continues to receive solid reviews. This edition brings the book up to date for Jaunty/9.04 and adds a few pieces here and there. Although I was less active in this update than I have been in the past, Corey Burger continued his great work and Matthew Helmke of Ubuntu Forums fame stepped up to take a leading role. As I plan to retreat into a more purely advisory role for the next edition of this book, I'm thrilled to know that the project will remain in such capable hands. I'm also thrilled that this edition of the book, like all previous editions, is released as a free cultural work under CC BY-SA. For years, I have heard people say that although they like the Official Ubuntu Book, it was a little too focused on desktops and on beginners for their tastes. The newly released Official Ubuntu Server Book is designed to address those concerns by providing an expanded guide to more "advanced" topics and is targeted at system administrators trying to get to know Ubuntu. Kyle Rankin planned and produced most of this book but I was thrilled to help poke it in places, chime in during the planning process, and to contribute a few chapters. Kyle is a knowledgeable sysadmin and has done a wonderful job with the book. I only wish I could take more credit.
The publisher has promised me that, at the very least, my chapters will be distributed under CC BY-SA. Many barriers to the adoption of free software are technical and a good book can, and often does, make a big difference. I enjoy being able to help address that problem. I also truly enjoy technical writing. I find it satisfying to share something I know well with others and it is great to know that I've helped someone with their problems. While I'm sure I'll be able to do things here and there, I'll miss technical writing as I attempt to "cut back."

23 April 2009

Adrian von Bidder: Let's kill KHTML

Reading Kyle's view on Konqueror and KHTML's current status: I couldn't agree more. I use konqueror instead of Firefox because I quite like its GUI, and its integration into KDE is obviously better than Firefox's. Issues with various websites prompt me to have an Iceweasel window open as well quite a large part of the time. Let's just switch to WebKit, so the market only has to care about Gecko and WebKit and can ignore one more marginal rendering engine. I see libqt-webkit 4.5 is in experimental, and a Google query on debian konqueror webkit at least shows an Ubuntu packaging effort of the Konqueror WebKit KPart, so the days of khtml on my desktop are certainly nearing their end. At this point: kudos to the KDE folks (Debian and upstream). KDE 4.2 is really, really usable; the remaining issues are really small. And, if I don't try to manually interfere like I did in my first attempt, migrating the KDE settings from ~/.kde4 to ~/.kde actually did work just fine on my netbook.

25 January 2009

Kyle McMartin: pcspkr? that's like flickr, right?

two words^W^Wone command.

# echo blacklist pcspkr > /etc/modprobe.d/blacklist-pcspkr

That is all.

Next.