Search Results: "dap"

4 November 2021

Russell Coker: USB Microphones

The Situation I bought myself some USB microphones on eBay. I couldn't see any with USB type A connectors (the original USB connectors) so I bought ones with USB-C connectors. I thought it would be good to have microphones that could work with recent mobile phones as well as with PCs, because surely it wouldn't be difficult to get an adaptor. I tested one of the microphones and it worked well on a phone. I bought a pair of adaptors from the USB-A ports on a PC or laptop to USB-C (here's the link to where I bought them). I used one of the adaptors with a USB-C HDMI device, which gave the following line from lsusb. I didn't try using an HDMI monitor on my laptop; having the device recognised was enough.
Bus 003 Device 002: ID 2109:0100 VIA Labs, Inc. USB 2.0 BILLBOARD
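As an aside (not from the original post): lsusb can also print the bus tree with the speed each device actually negotiated, which makes it easy to tell whether something enumerated over the USB 2 pins (480M) or the SuperSpeed pins (5000M and up):
# show the device tree with the negotiated speed per port
lsusb -t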
I tried connecting a USB-C microphone and Linux didn't recognise the existence of a USB device; I tried that on a PC and a laptop, on multiple ports. I wondered whether the description of the VIA BILLBOARD device as USB 2.0 was relevant to my problem. According to Big Mess o' Wires, USB-C has separate wires for USB 3.1 and USB 2 [1]. So someone could make a device that converts USB-A to USB-C with only the USB 2 wires in place. I tested the USB-A to USB-C adaptor with the HDMI device in a USB SuperSpeed (i.e. 3.x) port and it still identified as USB 2.0. I suspect that the USB-C HDMI device is using all the high speed wires for DisplayPort data (with a conversion to HDMI) and therefore looks like a USB 2.0 device. The Problem I want to install a microphone in my workstation for long Zoom training sessions (7 hours in a day) that otherwise require me to use multiple Android devices, as I don't have a device that will do 7 hours of Zoom without running out of battery. A new workstation with USB-C is unreasonably expensive. A PCIe USB-C card would give me the port at the back of the machine, but I can't have the back of the machine near the microphone because it's too noisy. If I could have a USB-C hub with reasonable length cables (the 1m cables typical of USB 2.0 hubs would be fine) connected to a USB-C port at the back of my workstation, that would work. But there seems to be a great lack of USB-C hubs. NewBeDev has an informative post about the lack of USB-C hubs that have multiple USB-C ports [2]. There also seems to be a lack of USB-C hubs with cables longer than 20cm. The Solution I ended up ordering a Sades Wand gaming headset [3], which has over-ear headphones and an attached microphone that connects to the computer via USB 2.0. I gave the URL for the sades.com.au web site for reference, but you will get a significantly better price by buying on eBay ($39+postage vs about $30 including postage). I guess I won't be using my new USB-C microphones for a while.

20 October 2021

Arturo Borrero González: Iterating on how we do NFS at Wikimedia Cloud Services

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez. NFS is a central piece of infrastructure that is essential to services like Toolforge. Recently, the Cloud Services team at Wikimedia had been reviewing how we do NFS. The current situation NFS is a central piece of technology for some of the services that the Wikimedia Cloud Services team offers to the community. We have several shares that power different use cases: Toolforge user home directories live on NFS, and Cloud VPS users can also access dumps using this protocol. The current setup involves several physical hardware servers, with about 20TB of storage, offering shares over 10G links to the cloud. To make the system more fault-tolerant, we duplicate each share for redundancy using DRBD. Running NFS on dedicated hardware servers has traditionally offered us advantages, mostly in terms of performance and capacity. As time has passed, we have been enumerating more and more reasons to review how we do NFS. For one, the current setup is in violation of some of our internal rules regarding realm separation. Additionally, we had been longing for additional flexibility in managing our servers: we wanted to use virtual machines managed by Openstack Nova. The DRBD-based high-availability system mostly required a hand-crafted procedure for failover/failback. There are also some scalability concerns, as NFS is easy to scale up but not to scale horizontally, and of course we have to be able to keep the tenancy setup while doing so, something NFS handles via LDAP/Unix users and which may also get complicated when growing. In general, the servers have become "too big to fail", which is clearly technical debt, and it has taken us years to decide to take on the task of rethinking the architecture. It's worth mentioning that in an ideal world we wouldn't depend on NFS, but the truth is that it will still be a central piece of infrastructure for years to come in services like Toolforge. Over a series of brainstorming meetings, the WMCS team evaluated the situation and sorted out the many moving parts. The team managed to boil the potential future of the service down to two competing options, and we decided to research both options in parallel. For a number of reasons, the evaluation was timeboxed to three weeks. Both ideas had a couple of points in common: the NFS data would be stored on our Ceph farm via Cinder volumes, and we would rely on Ceph's reliability to avoid using DRBD. Another open topic was how to back up data from Ceph, to store our important bits in more than one basket. We will get to the backup topic later. The manila experiment The Wikimedia Foundation was an early adopter of some Openstack components (Nova, Glance, Designate, Horizon), but Manila was never evaluated for usage until now. Our approach for this experiment was to closely follow the upstream guidelines. We read the documentation and tried to understand the different setups you can build with Manila. As we often feel with other Openstack components, the documentation doesn't perfectly describe how to introduce a given component in your particular local setup. Here we use an admin-controlled, flat-topology Neutron network. This network is shared by all tenants (or projects) in our Openstack deployment. Also, Manila can use many different driver backends, for things like NetApp or CephFS, which we don't use, yet. After some research, the generic driver was the one that seemed to best fit our use case.
The generic driver leverages Nova virtual machine instances plus Cinder volumes to create and manage the shares. In general, Manila supports two operational modes, depending on whether it should create/destroy the share servers (i.e., the virtual machine instances) or not. This option is called driver_handles_share_server (or DHSS) and takes a boolean value. We were interested in trying DHSS=true, to really benefit from the potential of the setup. (Manila diagram: NFS idea 6, original image on Wikitech.) So, after sorting out all these variables, we moved on with our initial testing. We built a PoC setup as depicted in the diagram above, with the manila-share component running in a virtual machine inside the cloud. The PoC led to us reporting several bugs upstream; in some cases we tried to address these bugs ourselves. It's worth mentioning that the upstream community was extra-welcoming to us, and we're thankful for that. However, at the end of our three-week period, our Manila setup still wasn't working as expected. Your experience may differ with other drivers, perhaps the ZFSonLinux or CephFS ones. In general, we were having trouble making the setup work as expected, so we decided to abandon this approach in favor of the other option we were considering at the beginning. Simple virtual machine serving NFS The alternative was to create a Nova virtual machine instance by hand and to configure it using puppet. We have been investing in an automation framework lately, so the idea is to not actually create the server by hand. Anyway, the data would be decoupled from the instance into Cinder volumes, which led us to the question we left for later: how should we back up those terabytes of important information? Just to be clear, the backup problem was independent of the above options; with Manila we would still have had to solve the same challenge. We would like to see our data backed up somewhere other than in Ceph. And that's exactly where we are right now. We've been exploring different backup strategies and will finally use the Cinder backup API. Conclusion The iteration will end with the dedicated NFS hardware servers being stopped, and the shares being served from within the cloud. The migration will take some time to happen because we will check and double-check that everything works as expected (including from the performance point of view) before making definitive changes. We already have some plans to make sure our users experience as little service impact as possible. The most troublesome shares will be those related to Toolforge. At some point we will need to disallow writes to the NFS share, rsync the data out of the hardware servers into the Cinder volumes, point the NFS clients to the new virtual machines, and then enable writes again. The main Toolforge share has about 8TB of data, so this will take a while. We will have more updates in the future. Who knows, perhaps our next-next iteration, in a couple of years, will see us adopting Openstack Manila for good. Featured image credit: File:(from break water) Manila Skyline panoramio.jpg, ewol, CC BY-SA 3.0 This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.
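As a rough illustration of the generic-driver and Cinder-backup workflow described above (a sketch only: the share name, size, and client range are made up, and these are the stock Manila and Cinder CLIs rather than the actual WMCS procedure):
# create an NFS share backed by the generic driver (DHSS=true)
manila create NFS 100 --name toolforge-home --share-type generic
# allow a client network to mount it
manila access-allow toolforge-home ip 172.16.0.0/21
# back up the underlying Cinder volume so the data also lives outside Ceph
openstack volume backup create --name toolforge-home-backup <volume-id>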

12 October 2021

Antonio Terceiro: Triaging Debian build failure logs with collab-qa-tools

The Ruby team is now working on transitioning to ruby 3.0. Even though most packages will work just fine, there is a substantial amount of packages that require some work to adapt. We have been doing test rebuilds for a while during transitions, but usually triaged the problems manually. This time I decided to try collab-qa-tools, a set of scripts Lucas Nussbaum uses when he does archive-wide rebuilds. I'm really glad that I did, because those tools save a lot of time when processing a large number of build failures. In this post, I will go through how to triage a set of build logs using collab-qa-tools. I have made some improvements to the code. Given that my last merge request is very recent and has not been merged yet, a few of the things I mention here may apply only to my own ruby3.0 branch. collab-qa-tools also contains a few tools to perform the builds in the cloud, but since we already had the builds done, I will not be mentioning that part and will write exclusively about the triaging tools. Installing collab-qa-tools The first step is to clone the git repository. Make sure you have the dependencies from debian/control installed (a few Ruby libraries). One of the patches I sent, which was already accepted, adds the ability to run it without the need to install:
source /path/to/collab-qa-tools/activate.sh
This will add the tools to your $PATH. Preparation The first thing you need to do is to get all your build logs into a directory. The tools assume a .log file extension, and the files can be named ${PACKAGE}_*.log or just ${PACKAGE}.log. Creating a TODO file
cqa-scanlogs | grep -v OK > todo
todo will contain one line for each log with a summary of the failure, if it's able to find one. collab-qa-tools has a large set of regular expressions for finding errors in the build logs. It's a good idea to split the TODO file into multiple ones. This can easily be done with split(1), and can be used to delimit triaging sessions and/or to split the triaging between multiple people. For example, this will split todo into todo00, todo01, ..., each containing 30 lines:
split --lines=30 --numeric-suffixes todo todo
Triaging You can now do the triaging. Let's say we split the TODO files, and will start with todo01. The first step is calling cqa-fetchbugs (it does what it says on the tin):
cqa-fetchbugs --TODO=todo01
Then, cqa-annotate will guide you through the logs and allow you to report bugs:
cqa-annotate --TODO=todo01
I wrote myself a process.sh wrapper script for cqa-fetchbugs and cqa-annotate that looks like this:
#!/bin/sh
set -eu
for todo in "$@"; do
  # force downloading bugs
  awk '{ print(".bugs." $1) }' "$todo" | xargs rm -f
  cqa-fetchbugs --TODO="$todo"
  cqa-annotate \
    --template=template.txt.jinja2 \
    --TODO="$todo"
done
The --template option is a recent contribution of mine. This is a template for the bug reports you will be sending. It uses Liquid templates, which are very similar to Jinja2 templates for Python. You will notice that I am even pretending it is Jinja2 to trick vim into doing syntax highlighting for me. The template I'm using looks like this:
From: {{ fullname }} <{{ email }}>
To: submit@bugs.debian.org
Subject: {{ package }}: FTBFS with ruby3.0: {{ summary }}
Source: {{ package }}
Version: {{ version | split:'+rebuild' | first }}
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: debian-ruby@lists.debian.org
Usertags: ruby3.0
Hi,
We are about to enable building against ruby3.0 on unstable. During a test
rebuild, {{ package }} was found to fail to build in that situation.
To reproduce this locally, you need to install ruby-all-dev from experimental
on an unstable system or build chroot.
Relevant part (hopefully):
{% for line in extract %}> {{ line }}
{% endfor %}
The full build log is available at
https://people.debian.org/~kanashiro/ruby3.0/round2/builds/3/{{ package }}/{{ filename | replace:".log",".build.txt" }}
The cqa-annotate loop cqa-annotate will parse each log file, display an extract of what it found as possibly being the relevant part, and wait for your input:
######## ruby-cocaine_0.5.8-1.1+rebuild1633376733_amd64.log ########
--------- Error:
     Failure/Error: undef_method :exitstatus
     FrozenError:
       can't modify frozen object: pid 2351759 exit 0
     # ./spec/support/unsetting_exitstatus.rb:4:in `undef_method'
     # ./spec/support/unsetting_exitstatus.rb:4:in `singleton class'
     # ./spec/support/unsetting_exitstatus.rb:3:in `assuming_no_processes_have_been_run'
     # ./spec/cocaine/errors_spec.rb:55:in `block (2 levels) in <top (required)>'
Deprecation Warnings:
Using `should` from rspec-expectations' old `:should` syntax without explicitly enabling the syntax is deprecated. Use the new `:expect` syntax or explicitly enable `:should` with `config.expect_with(:rspec) { |c| c.syntax = :should }` instead. Called from /<<PKGBUILDDIR>>/spec/cocaine/command_line/runners/backticks_runner_spec.rb:19:in `block (2 levels) in <top (required)>'.
If you need more of the backtrace for any of these deprecations to
identify where to make the necessary changes, you can configure
 config.raise_errors_for_deprecations! , and it will turn the
deprecation warnings into errors, giving you the full backtrace.
1 deprecation warning total
Finished in 6.87 seconds (files took 2.68 seconds to load)
67 examples, 1 failure
Failed examples:
rspec ./spec/cocaine/errors_spec.rb:54 # When an error happens does not blow up if running the command errored before execution
/usr/bin/ruby3.0 -I/usr/share/rubygems-integration/all/gems/rspec-support-3.9.3/lib:/usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/lib /usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/exe/rspec --pattern ./spec/\*\*/\*_spec.rb --format documentation failed
ERROR: Test "ruby3.0" failed:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s i r f]:
You can then choose one of the options. When there are existing bugs in the package, cqa-annotate will list them among the options. If you choose a bug number, the TODO file will be annotated with that bug number and new runs of cqa-annotate will not ask about that package anymore. For example, after I reported a bug for ruby-cocaine for the issue listed above, I aborted with a ctrl-c, and when I ran my process.sh script again I got this prompt:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
1: 996206 serious ruby-cocaine: FTBFS with ruby3.0: ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus  
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s i 1 r f]:
Choosing 1 will annotate the TODO file with the bug number, and I'm done with this package. Only a few hundred more to go.

6 September 2021

Vincent Bernat: Switching to the i3 window manager

I have been using the awesome window manager for 10 years. It is a tiling window manager, configurable and extendable with the Lua language. Using a general-purpose programming language to configure every aspect is a double-edged sword. Due to laziness and the apparent difficulty of adapting my configuration (about 3,000 lines) to newer releases, I was stuck with the 3.4 version, whose last release is from 2013. It was time for a rewrite. Instead, I have switched to the i3 window manager, lured by the possibility to migrate to Wayland and Sway later with minimal pain. Using an embedded interpreter for configuration is not as important to me as it was in the past: it brings both complexity and brittleness.
i3 dual screen setup
Dual screen desktop running i3, Emacs, some terminals, including a Quake console, Firefox, Polybar as the status bar, and Dunst as the notification daemon.
The window manager is only one part of a desktop environment. There are several options for the other components. I am also introducing them in this post.

i3: the window manager i3 aims to be a minimal tiling window manager. Its documentation can be read from top to bottom in less than an hour. i3 organizes windows in a tree. Each non-leaf node contains one or several windows and has an orientation and a layout. This information arbitrates the window positions. i3 features three layouts: split, stacking, and tabbed. They are demonstrated in the screenshot below:
Example of layouts
Demonstration of the layouts available in i3. The main container is split horizontally. The first child is split vertically. The second one is tabbed. The last one is stacking.
Tree representation of the previous screenshot
Tree representation of the previous screenshot.
Most of the other tiling window managers, including the awesome window manager, use predefined layouts. They usually feature a large area for the main window and another area divided among the remaining windows. These layouts can be tuned a bit, but you mostly stick to a couple of them. When a new window is added, the behavior is quite predictable. Moreover, you can cycle through the various windows without thinking too much as they are ordered. i3 is more flexible with its ability to build any layout on the fly, but it can feel quite overwhelming as you need to visualize the tree in your head. At first, it is not unusual to find yourself with a complex tree with many useless nested containers. Moreover, you have to navigate windows using directions. It takes some time to get used to. I set up a split layout for Emacs and a few terminals, but most of the other workspaces are using a tabbed layout. I don't use the stacking layout. You can find many scripts trying to emulate other tiling window managers, but I tried to keep my setup free of such attempts and give myself a chance to get familiar with i3. i3 can also save and restore layouts, which is quite a powerful feature. My configuration is quite similar to the default one and has less than 200 lines.
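The layouts described above can also be switched from a shell through i3's IPC, which is handy for experimenting (a quick sketch, not taken from the author's configuration):
# split the focused container vertically, then make the new container tabbed
i3-msg 'split v; layout tabbed'
# go up one level in the tree and stack that container
i3-msg 'focus parent; layout stacking'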

i3 companion: the missing bits i3's philosophy is to keep a minimal core and let users implement missing features using the IPC protocol:
Do not add further complexity when it can be avoided. We are generally happy with the feature set of i3 and instead focus on fixing bugs and maintaining it for stability. New features will therefore only be considered if the benefit outweighs the additional complexity, and we encourage users to implement features using the IPC whenever possible. (Introduction to the i3 window manager)
While this is not as powerful as an embedded language, it is enough for many cases. Moreover, as high-level features may be opinionated, delegating them to small, loosely coupled pieces of code keeps them more maintainable. Libraries exist for this purpose in several languages. Users have published many scripts to extend i3: automatic layout and window promotion to mimic the behavior of other tiling window managers, window swallowing to put a new app on top of the terminal launching it, and cycling between windows with Alt+Tab. Instead of maintaining a script for each feature, I have centralized everything into a single Python process, i3-companion, using asyncio and the i3ipc-python library. Each feature is self-contained in a function. It implements the following components:
make a workspace exclusive to an application
When a workspace contains Emacs or Firefox, I would like other applications to move to another workspace, except for the terminal which is allowed to intrude into any workspace. The workspace_exclusive() function monitors new windows and moves them if needed to an empty workspace or to one with the same application already running.
implement a Quake console
The quake_console() function implements a drop-down console available from any workspace. It can be toggled with Mod+ . This is implemented as a scratchpad window.
back and forth workspace switching on the same output
With the workspace back_and_forth command, we can ask i3 to switch to the previous workspace. However, this feature is not restricted to the current output. I prefer to have one keybinding to switch to the workspace on the next output and one keybinding to switch to the previous workspace on the same output. This behavior is implemented in the previous_workspace() function by keeping a per-output history of the focused workspaces.
create a new empty workspace or move a window to an empty workspace
To create a new empty workspace or move a window to an empty workspace, you have to locate a free slot and use workspace number 4 or move container to workspace number 4. The new_workspace() function finds a free number and uses it as the target workspace.
restart some services on output change
When adding or removing an output, some actions need to be executed: refresh the wallpaper, restart some components unable to adapt their configuration on their own, etc. i3 triggers an event for this purpose. The output_update() function also takes an extra step to coalesce multiple consecutive events and to check if there is a real change with the low-level library xcffib.
I will detail the other features as this post goes on. On the technical side, each function is decorated with the events it should react to:
@on(CommandEvent("previous-workspace"), I3Event.WORKSPACE_FOCUS)
async def previous_workspace(i3, event):
    """Go to previous workspace on the same output."""
The CommandEvent() event class is my way to send a command to the companion, using either i3-msg -t send_tick or binding a key to a nop command. The latter is used to avoid spawning a shell and an i3-msg process just to send a message. The companion listens to binding events and checks if this is a nop command.
bindsym $mod+Tab nop "previous-workspace"
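The same payload can also be delivered from a shell, which is convenient for testing a handler without touching the i3 configuration (previous-workspace is the command name used above):
# send a tick event carrying the command name to the companion
i3-msg -t send_tick "previous-workspace"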
There are other decorators to avoid code duplication: @debounce() to coalesce multiple consecutive calls, @static() to define a static variable, and @retry() to retry a function on failure. The whole script is a bit more than 1000 lines. I think this is worth a read as I am quite happy with the result.

dunst: the notification daemon Unlike the awesome window manager, i3 does not come with a built-in notification system. Dunst is a lightweight notification daemon. I am running a modified version with HiDPI support for X11 and recursive icon lookup. The i3 companion has a helper function, notify(), to send notifications using DBus. container_info() and workspace_info() use it to display information about the container or the tree for a workspace.
Notification showing i3 tree for a workspace
Notification showing i3's tree for a workspace

polybar: the status bar i3 bundles i3bar, a versatile status bar, but I have opted for Polybar. A wrapper script runs one instance for each monitor. The first module is the built-in support for i3 workspaces. To not have to remember which application is running in a workspace, the i3 companion renames workspaces to include an icon for each application. This is done in the workspace_rename() function. The icons are from the Font Awesome project. I maintain a mapping between applications and icons. This is a bit cumbersome but it looks great.
i3 workspaces in Polybar
i3 workspaces in Polybar
For CPU, memory, brightness, battery, disk, and audio volume, I am relying on the built-in modules. Polybar's wrapper script generates the list of filesystems to monitor, and they are only displayed when available space is low. The battery widget turns red and blinks slowly when running out of power. Check my Polybar configuration for more details.
Various modules for Polybar
Polybar displaying various information: CPU usage, memory usage, screen brightness, battery status, Bluetooth status (with a connected headset), network status (connected to a wireless network and to a VPN), notification status, and speaker volume.
For Bluetooth, network, and notification statuses, I am using Polybar's ipc module: the next version of Polybar can receive arbitrary text on an IPC socket. The module is defined with a single hook to be executed at the start to restore the latest status.
[module/network]
type = custom/ipc
hook-0 = cat $XDG_RUNTIME_DIR/i3/network.txt 2> /dev/null
initial = 1
It can be updated with polybar-msg action "#network.send.XXXX". In the i3 companion, the @polybar() decorator takes the string returned by a function and pushes the update through the IPC socket. The i3 companion reacts to DBus signals to update the Bluetooth and network icons. The @on() decorator accepts a DBusSignal() object:
@on(
    StartEvent,
    DBusSignal(
        path="/org/bluez",
        interface="org.freedesktop.DBus.Properties",
        member="PropertiesChanged",
        signature="sa{sv}as",
        onlyif=lambda args: (
            args[0] == "org.bluez.Device1"
            and "Connected" in args[1]
            or args[0] == "org.bluez.Adapter1"
            and "Powered" in args[1]
        ),
    ),
)
@retry(2)
@debounce(0.2)
@polybar("bluetooth")
async def bluetooth_status(i3, event, *args):
    """Update bluetooth status for Polybar."""
The middle of the bar is occupied by the date and a weather forecast. The latter also uses the IPC mechanism, but the source is a Python script triggered by a timer.
Date and weather in Polybar
Current date and weather forecast for the day in Polybar. The data is retrieved with the OpenWeather API.
I don't use the system tray integrated with Polybar. The embedded icons usually look horrible and they all behave differently. A few years back, Gnome removed the system tray. Most of the problems are fixed by the DBus-based Status Notifier Item protocol, also known as Application Indicators or Ayatana Indicators for GNOME. However, Polybar does not support this protocol. In the i3 companion, the implementation of the Bluetooth and network icons, including displaying notifications on change, takes about 200 lines. I got to learn a bit about how DBus works and I get exactly the info I want.
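For reference, such an ipc module can also be fed by hand, which is handy when debugging the companion (a sketch based on the module definition above; the status text is made up):
# seed the text that hook-0 restores at startup...
echo "wlan0: connected" > "$XDG_RUNTIME_DIR/i3/network.txt"
# ...and push the same text to the running bar immediately
polybar-msg action "#network.send.wlan0: connected"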

picom: the compositor I like having slightly transparent backgrounds for terminals and reducing the opacity of unfocused windows. This requires a compositor.1 picom is a lightweight compositor. It works well for me, but it may need some tweaking depending on your graphic card.2 Unlike the awesome window manager, i3 does not handle transparency, so the compositor needs to decide by itself the opacity of each window. Check my configuration for details.

systemd: the service manager I use systemd to start i3 and the various services around it. My xsession script only sets some environment variables and lets systemd handle everything else. Have a look at this article from Michał Góral for the rationale. Notably, each component can be easily restarted and their logs are not mangled inside the ~/.xsession-errors file.3 I am using a two-stage setup: i3.service depends on xsession.target to start services before i3:
[Unit]
Description=X session
BindsTo=graphical-session.target
Wants=autorandr.service
Wants=dunst.socket
Wants=inputplug.service
Wants=picom.service
Wants=pulseaudio.socket
Wants=policykit-agent.service
Wants=redshift.service
Wants=spotify-clean.timer
Wants=ssh-agent.service
Wants=xiccd.service
Wants=xsettingsd.service
Wants=xss-lock.service
Then, i3 executes the second stage by invoking the i3-session.target:
[Unit]
Description=i3 session
BindsTo=graphical-session.target
Wants=wallpaper.service
Wants=wallpaper.timer
Wants=polybar-weather.service
Wants=polybar-weather.timer
Wants=polybar.service
Wants=i3-companion.service
Wants=misc-x.service
Have a look at my configuration files for more details.
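Assuming the units above are managed as user units (which is what an xsession-started setup implies), the usual systemctl and journalctl commands make it easy to inspect the session; a small sketch:
# see what each stage pulls in
systemctl --user list-dependencies xsession.target
systemctl --user list-dependencies i3-session.target
# restart a single component without disturbing the rest of the session
systemctl --user restart polybar.service
# follow the logs of one service instead of digging in ~/.xsession-errors
journalctl --user -u i3-companion.service -f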

rofi: the application launcher Rofi is an application launcher. Its appearance can be customized through a CSS-like language and it comes with several themes. Have a look at my configuration for mine.
Rofi as an application launcher
Rofi as an application launcher
It can also act as a generic menu application. I have a script to control a media player and another one to select the wifi network. It is quite a flexible application.
Rofi as a wifi network selector
Rofi to select a wireless network

xss-lock and i3lock: the screen locker i3lock is a simple screen locker. xss-lock invokes it reliably on inactivity or before a system suspend. For inactivity, it uses the XScreenSaver events. The delay is configured using the xset s command. The locker can be invoked immediately with xset s activate. X11 applications know how to prevent the screen saver from running. I have also developed a small dimmer application that is executed 20 seconds before the locker to give me a chance to move the mouse if I am not away.4 Have a look at my configuration script.
Demonstration of xss-lock, xss-dimmer and i3lock with a 4× speedup.
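A minimal invocation tying these pieces together might look like this (the flags are standard xss-lock and i3lock options, not copied from the author's configuration):
# lock after 5 minutes of inactivity
xset s 300
# run the locker on idle and before suspend; keep it in the foreground so
# xss-lock can hold the sleep lock until the screen is covered
xss-lock --transfer-sleep-lock -- i3lock --nofork &
# lock right now
xset s activate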

The remaining components
  • autorandr is a tool to detect the connected display, match them against a set of profiles, and configure them with xrandr.
  • inputplug executes a script for each new mouse and keyboard plugged in. This is quite useful to load the appropriate keyboard map. See my configuration.
  • xsettingsd provides settings to X11 applications, not unlike xrdb but it notifies applications for changes. The main use is to configure the Gtk and DPI settings. See my article on HiDPI support on Linux with X11.
  • Redshift adjusts the color temperature of the screen according to the time of day.
  • maim is a utility to take screenshots. I use Prt Scn to trigger a screenshot of a window or a specific area and Mod+Prt Scn to capture the whole desktop to a file. Check the helper script for details.
  • I have a collection of wallpapers I rotate every hour. A script selects them using advanced machine learning algorithms and stitches them together on multi-screen setups. The selected wallpaper is reused by i3lock.

  1. Apart from the eye candy, a compositor also helps to get tear-free video playbacks.
  2. My configuration works with both Haswell (2014) and Whiskey Lake (2018) Intel GPUs. It also works with an AMD GPU based on the Polaris chipset (2017).
  3. You cannot manage two different displays this way, e.g. :0 and :1. In the first implementation, I did try to parametrize each service with the associated display, but this is useless: there is only one DBus user session and many services rely on it. For example, you cannot run two notification daemons.
  4. I have only discovered later that XSecureLock ships such a dimmer with a similar implementation. But mine has a cool countdown!

2 September 2021

Eddy Petrișor: Stretch to Buster upgrade issues: "Grub error: symbol grub_is_lockdown not found", missing RTL8111/8168/8411 Ethernet driver and RTL8821CE Wireless adapter on Linux Kernel 5.10 (and 4.19)

I have been running Debian Stretch on my HP Pavilion 14-ce0000nq laptop since buying it back in April 2019, just before my presence at Oxidizeconf where I presented "How to Rust When Standards Are Defined in C". Debian Buster (aka Debian 10) was released about 4 months later and I've been postponing the upgrade as my free time isn't what it used to be. I also tend to wait for the first or even second update of the release to avoid any sharp edges. As this laptop has a Realtek 8821CE wireless card that wasn't officially supported in the Linux kernel, I had to use an out-of-tree hacked driver to have the wireless work on Stretch kernels such as 4.19; it didn't even get along with DKMS, so I did all compilations and installations of it manually. More reason to wait for a newer release that would contain a driver inside the official kernel. I was waiting for the inevitable and dreading the wireless issues, but since Bullseye became stable in mid-August, turning Stretch into oldoldstable, I decided that I had to do the upgrade, at least to Buster. The Grub error and the fix
Everything went quite smooth, except that after the reboot, the laptop failed to boot with this Grub error:

error: symbol grub_is_lockdown not found
I looked for a solution and it seemed everyone was stuck or the solution was unclear. There is even a bug report in Debian about this error, bug #984760.
Adding my own confused solution to the pile of confusion: I tried supergrubdisk2/rescatux, which didn't work for me; it might have been a combination of me using LVM and grub-efi-amd64. I also tried booting the first Buster DVD in rescue mode (to avoid the need for network); I was able to enter the partition and mount the EFI partition too, but since I didn't want to mess up the setup even more or depend on an external USB stick, I didn't know where I should try to write the Grub EFI config - the root partition is on NVMe storage. When I bought the laptop it had FreeDOS and some HP rescue app installed on it, which I did not wipe when installing Debian. I even forgot where or how the EFI was installed on the disk, and EFI, even if it should be more reliable and simpler, is something I never got the hang of.
In the end, I realized that I could actually manually select via the BIOS which EFI executable should be booted, so with some manual intervention during boot I was able to boot into the regular system. I tried regenerating the grub configuration and installing it, and also tried restoring the default proper boot sequence (and I even installed refind in the system during my fumbling), but I think somewhere between the grub-efi-amd64 reconfiguration and its reinstallation I managed to do the right thing, as the default boot screen is the Grub one now. Hints for anyone reading this in the hope of fixing the same issue - hopefully it will make things better, not worse (see the text below): 1) regenerate the grub config:
update-grub2
2) reinstall grub-efi-amd64 and make Debian the default
dpkg-reconfigure -plow grub-efi-amd64
When reinstalling grub-efi-amd64 onto the disk, I think the scariest questions were these:

Force extra installation to the EFI removable media path?

Some EFI-based systems are buggy and do not handle new bootloaders correctly. If you force an extra installation of GRUB to the EFI removable media path, this should ensure that this system will boot Debian correctly despite such a problem. However, it may remove the ability to boot any other operating systems that also depend on this path. If so, you will need to make sure that GRUB is configured successfully to be able to boot any other OS installations correctly.

and
Update NVRAM variables to automatically boot into Debian?

GRUB can configure your platform's NVRAM variables so that it boots into Debian automatically when powered on. However, you may prefer to disable this behavior and avoid changes to your boot configuration. For example, if your NVRAM variables have been set up such that your system contacts a PXE server on every boot, this would preserve that behavior.

I think the first can be safely answered "No" if you don't plan on booting via a removable USB stick, while the second is the one that does the restoring. The second question is probably safe if you don't use PXE boot or another boot method, at least that's what I understand. But if you do, I suspect that by installing refind or by playing with the multiple efi*-named packages and tools you can restore that, or it might be that your BIOS allows that directly.
I just did a walk-through of these 2 steps again on my laptop and answered "No" to the removable media question, as it leads to errors when the media is not inserted (in my case the internal SD card reader), and "Yes" to making Debian the default. It seems that for me this broke the FreeDOS and HP utilities boot entries from Grub, but I can still boot via the BIOS options and my goal was to have Debian boot correctly by default.
Fixing the missing RTL8111/8168/8411 Ethernet card issue
As a side note for people with computers having a Realtek RTL8111/8168/8411 Gigabit Ethernet Controller who are upgrading to Buster or switching to a newer kernel: please note that you might get the unpleasant surprise of even your Ethernet card disappearing, because the r8169 driver is not loaded by default. I had to add it to /etc/modules so it is loaded by default:
eddy@aptonia:/ $ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
r8169
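To pick up the driver immediately instead of waiting for the next reboot, the module can also be loaded by hand (standard commands, not from the original post):
# load r8169 now and check that the Ethernet interface reappears
sudo modprobe r8169
ip -br link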

The 5.10-compatible driver for the RTL8821CE wireless adapter. After the upgrade to Buster, whose kernel 4.19 is now the oldstable one, the hacked version of the driver I had been using on Stretch with 4.9 kernels was no longer compatible - it failed to compile due to missing symbols. The fix for me was to switch to the DKMS-compatible driver from https://github.com/tomaspinho/rtl8821ce, as this seems to work for both the 4.19 and 5.10 kernels (the latter installed from backports).
I installed it via a modification of the manual install method only for the 4.19 and 5.10 kernels, leaving the legacy 4.9 kernels working with the hacked driver. You can do the same if, instead of running the provided script, you do its steps manually and install only for the kernel versions you want, instead of the default of installing for all. I looked inside the dkms-install.sh script to find the required steps: copy the driver and add it to the dkms set of known drivers:
DRV_NAME=rtl8821ce
DRV_VERSION=v5.5.2_34066.20200325

cp -r . /usr/src/${DRV_NAME}-${DRV_VERSION}

dkms add -m $DRV_NAME -v $DRV_VERSION
But you build and install them only for the kernel versions of your choice:
dkms build -m $DRV_NAME -v $DRV_VERSION -k 5.10.0-0.bpo.8-amd64
dkms install -m $DRV_NAME -v $DRV_VERSION -k 5.10.0-0.bpo.8-amd64
Or, without the variables:
dkms build rtl8821ce/v5.5.2_34066.20200325 -k 4.19.0-17-amd64
dkms install rtl8821ce/v5.5.2_34066.20200325 -k 4.19.0-17-amd64
dkms status should confirm everything is in place and I think you need to update grub2 again after this.
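A quick way to double-check the result (a sketch; the post only says the author thinks the grub update is needed):
# the module should now be listed as installed for the chosen kernels
dkms status
# regenerate the boot configuration, as suggested above
sudo update-grub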
Please note this driver is no longer maintained and the 5.10 tree should support the RTL8821CE wireless card with the rtw88 driver from the kernel, but for me it did not. I'll probably try this at a later time, or after I upgrade to the current Debian stable, Bullseye.

25 August 2021

Jelmer Vernooij: Thousands of Debian packages updated from their upstream Git repository

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor. Linux distributions like Debian fulfill an important function in the FOSS ecosystem - they are system integrators that take existing free and open source software projects and adapt them where necessary to work well together. They also make it possible for users to install more software in an easy and consistent way and with some degree of quality control and review. One of the consequences of this model is that the distribution package often lags behind upstream releases. This is especially true for distributions that have tighter integration and standardization (such as Debian), and often new upstream code is only imported irregularly because it is a manual process - both updating the package, but also making sure that it still works together well with the rest of the system. The process of importing a new upstream release used to be (well, back when I started working on Debian packages) fairly manual.

Ecosystem Improvements However, there have been developments over the last decade that make it easier to import new upstream releases into Debian packages.
Uscan and debian QA watch Uscan and debian/watch have been around for a while and make it possible to find upstream tarballs. A debian/watch file usually looks something like this:
version=4
http://somesite.com/dir/filenamewithversion.tar.gz
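Given such a file, uscan can be run from the unpacked source package to check for newer upstream versions without downloading anything (a standard invocation, not specific to the Janitor):
# report available upstream versions according to debian/watch
uscan --report --verbose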
The QA watch service regularly polls all watch locations in the archive and makes the information available, so it's possible to know which packages have changed without downloading each one of them.
Git Git is fairly ubiquitous nowadays, and most upstream projects and packages in Debian use it. There are still exceptions that do not use any version control system or that use a different version control system, but they are becoming increasingly rare. [1]
debian/upstream/metadata DEP-12 specifies a file format with metadata about the upstream project that a package was based on. Particularly relevant for our case is the fact that it has fields for the location of the upstream version control repository. debian/upstream/metadata files look something like this:
---
Repository: https://www.dulwich.io/code/dulwich/
Repository-Browse: https://www.dulwich.io/code/dulwich/
While DEP-12 is still a draft, it has already been widely adopted - there are about 10000 packages in Debian that ship a debian/upstream/metadata file with Repository information.
Autopkgtest The Autopkgtest standard and associated tooling provide a way to run a defined set of tests against an installed package. This makes it possible to verify that a package is working correctly as part of the system as a whole. ci.debian.net regularly runs these tests against Debian packages to detect regressions.
Vcs-Git headers The Vcs-Git headers in debian/control are the equivalent of the Repository field in debian/upstream/metadata, but for the packaging repositories (as opposed to the upstream ones). They've been around for a while and are widely adopted, as can be seen from zack's stats. The vcswatch service, which regularly polls packaging repositories to see whether they have changed, makes it a lot easier to consume this information in a usable way.
Debhelper adoption Over the last couple of years, Debian has slowly been converging on a single build tool - debhelper's dh interface. Being able to rely on a single build tool makes it easier to write code to update packaging when upstream changes require it.
Debhelper DWIM Debhelper (and its helpers) can increasingly figure out how to do the Right Thing in many cases without being explicitly configured. This makes packaging less effort, but also means that it's less likely that importing a new upstream version will require updates to the packaging. With all of these improvements in place, it actually becomes feasible in a lot of situations to update a Debian package to a new upstream version automatically. Of course, this requires that all of this information is available, so it won't work for all packages. In some cases, the packaging for the older upstream version might not apply to the newer upstream version. The Janitor has attempted to import a new upstream Git snapshot and a new upstream release for every package in the archive where a debian/watch file or debian/upstream/metadata file is present. These are the steps it uses (a rough manual equivalent is sketched after the list):
  • Find new upstream version
    • If release, use debian/watch - or maybe tagged in upstream repository
    • If snapshot, use debian/upstream/metadata's Repository field
    • If neither is available, use guess-upstream-metadata from upstream-ontologist to guess the upstream Repository
  • Merge upstream version into packaging repository, possibly importing tarballs using pristine-tar
  • Update the changelog file to mention the new upstream version
  • Run some checks to ensure there are no unintentional changes, e.g.:
    • Scan diff between old and new for surprising license changes
      • Today, abort if there are any - in the future, maybe update debian/copyright
    • Check for obvious compatibility breaks - e.g. sonames changing
  • Attempt to update the packaging to reflect upstream changes
    • Refresh patches
  • Attempt to build the package with deb-fix-build, to deal with any missing dependencies
  • Run the autopkgtests with deb-fix-build to deal with missing dependencies, and abort if any tests fail
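For comparison, a rough manual equivalent of these steps for a single package might look like the sketch below, assuming a gbp/pristine-tar layout (these are standard Debian packaging tools, not the Janitor's internal interfaces):
# fetch and merge the new upstream release declared in debian/watch
gbp import-orig --uscan --pristine-tar
# record the new upstream version in debian/changelog
gbp dch --auto
# check whether the packaging patches still apply
QUILT_PATCHES=debian/patches quilt push -a
# try to build the package and run its autopkgtests locally
dpkg-buildpackage -us -uc
autopkgtest . -- null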
Results When run over all packages in unstable (sid), this process works for a surprising number of them.
Fresh Releases For fresh-releases (aka imports of upstream releases), processing all packages maintained in Git for which QA watch reports new releases (about 11,000): that means about 2,300 packages updated and about 4,000 unchanged.
Fresh Snapshots For fresh-snapshots (aka imports of the latest Git commit from upstream), processing all packages maintained in Git (about 26,000): 5,100 packages updated and 2,100 for which there was nothing to do, i.e. no upstream commits since the last Debian upload. As can be seen, this works for a surprising fraction of packages. It's possible to get the numbers up even higher by improving the tooling, the autopkgtests, and the metadata provided by packages.
Using these packages All the packages that have been built can be accessed from the Janitor APT repository. More information can be found at https://janitor.debian.net/fresh, but in short - run:
echo deb "[arch=amd64 signed-by=/usr/share/keyrings/debian-janitor-archive-keyring.gpg]" \
    https://janitor.debian.net/ fresh-snapshots main | sudo tee /etc/apt/sources.list.d/fresh-snapshots.list
echo deb "[arch=amd64 signed-by=/usr/share/keyrings/debian-janitor-archive-keyring.gpg]" \
    https://janitor.debian.net/ fresh-releases main | sudo tee /etc/apt/sources.list.d/fresh-releases.list
sudo curl -o /usr/share/keyrings/debian-janitor-archive-keyring.gpg https://janitor.debian.net/pgp_keys
apt update
And then you can install packages from the fresh-snapshots (upstream git snapshots) or fresh-releases suites on a case-by-case basis by running something like:
apt install -t fresh-snapshots r-cran-roxygen2
Most packages are updated based on information provided by vcswatch and QA watch, but it's also possible for upstream repositories to call a web hook to trigger a refresh of a package. These packages were built against unstable, but should in almost all cases also work for testing.
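Before opting in, apt can show which repository a candidate version would come from; for example, with the package used above:
# list available versions and their origins
apt policy r-cran-roxygen2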
Caveats Of course, since these packages are built automatically without human supervision, it's likely that some of them will have bugs that would otherwise have been caught by the maintainer.
[1] I'm not saying that a monoculture is great here, but it does help distributions.

23 August 2021

Vincent Bernat: ThinkPad X1 Carbon (Gen 7): 2 years later

Two years ago, I replaced my ThinkPad X1 Carbon 2014 with the latest generation. The new configuration embeds an Intel Core i7-8565U, 16 GiB of RAM, a 1 TiB NVMe disk, and a WQHD display (2560×1440). I did not ask for a WWAN card. I think it is easier and more reliable to use the wifi hotspot feature of a phone instead: no unreliable firmware and unsupported drivers.1 Here is my opinion on this model.
ThinkPad X1 Carbon 7th Gen with the lid closed
ThinkPad X1 Carbon with its lid closed
While the second generation got a very odd keyboard, this one got a classic one with a full row of function keys. I don't know if my model was defective, but the keyboard skips one keypress from time to time. I have got used to it, but the space key still has a hard time registering when hitting it with my right thumb. The key travel is also shorter and it is less comfortable to type on than it was on the 2014 version. The trackpoint2 works well. The physical buttons are a welcome addition. I am only using the trackpad for scrolling with the two-finger gesture.
Keyboard of the X1 Carbon 7th Gen
Keyboard with an ANSI QWERTY layout (aka English EU for Lenovo). The LEDs on the speaker and microphone keys work out of the box on Linux.
The screen does not suffer from ghosting effects like the one in my previous ThinkPad. To avoid leaving marks on the screen, I use a piece of cloth on top of the keyboard when closing the laptop. The 720p webcam has a built-in mechanical cover. Its quality is not great, but it does the job.
X1 Carbon 7th Gen
ThinkPad X1 Carbon with its lid opened
Battery life effortlessly reaches about 8 hours on a full charge. This is the main reason this laptop feels like a solid upgrade compared to the previous one: no need to carry a power brick during the day. This is the first Lenovo model with a sound card requiring the Sound Open Firmware. Without the appropriate firmware and the related userspace components (ALSA UCM and PulseAudio), the microphone does not work. If everything is set up correctly, the speakers produce a very decent sound, better than the 2014 model. It should now work out-of-the-box with Debian Bullseye. Just install the firmware-sof-signed package.3 The BIOS can be updated directly from Linux, thanks to the Linux Vendor Firmware Service. I was using the ThinkPad USB-C Dock Gen 2 as a docking station. Everything works out-of-the-box. However, from time to time, I had issues reliably getting an image on the two screens. I was using a couple of 10-year-old LG monitors with DVI connectors, so I relied on DP to HDMI to DVI adapters. This may have been the source of some of the problems.
ThinkPad USB-C Dock Gen 2
ThinkPad USB-C Dock Gen 2 with network, keyboard, mouse, power and two screens plugged

In summary, this is a fine laptop plagued with a bad keyboard. I did appreciate the good battery life and the fact there is still one HDMI and two USB-A ports, so I didn't need to travel with dongles. I was disappointed by how small the performance gap was with my 5-year-older laptop. I am unsure whether I will get a Lenovo for the next one: HiDPI screens are mostly unavailable and current prices are high. There are other possibilities as well. Until then, I am using my previous ThinkPad.

  1. The seventh generation moved from Sierra Wireless to Fibocom. While Linux support for modems is good, thanks to ModemManager, they are usually driven over the USB bus. The Fibocom L850-GL can use either the USB bus or the PCI bus, but Lenovo's BIOS blacklists the former. It took quite some time to find out how to switch it to USB after booting. A PCI driver is in progress.
  2. I did replace the trackpoint with a low-profile one from SaotoTech.
  3. The package is in non-free, despite being open-source: many platforms require the firmware to be signed by Intel.

Leandro Doctors: Clojure CLI Tools in Debian - GSoC 2021 Final Report

NOTE: this blog post is based on my "Clojure CLI Tools in Debian" GSoC 2021 project Final Report.

Intro My name is Leandro Doctors (allentiak on IRC), and I've been the GSoC intern working with the Debian Clojure Team during 2021. This is my final report. You can also check my original proposal and my first report.

Summary Whereas the raw data may not sound very positive by itself, my personal conclusion is a positive one. That is, whereas I didn't fully finish the required deliverables envisioned in my original proposal, I do feel I am much closer to, eventually, becoming a Debian Developer. So, by all means, I consider this project to have had a positive outcome.

Project The goal of the Clojure Build Tools in Debian project was to provide Clojure Debian users with some of the latest advanced build tools and libraries the upstream Clojure developers have lately been working on. These include tools.deps.alpha, a library for dependency graph resolution and classpath building, and the CLI tool clj, for REPL interaction. If time permitted, I was also to improve the quality of both new and existing Clojure packages, and the overall Debian Clojure packaging process. My mentor was Louis-Philippe Véronneau, and my co-mentor was Utkarsh Gupta.

Motivation Why this project? On the one hand, if you're a Clojure lover like me, you may have noticed that the Clojure experience in Debian is, as of mid 2021, well... still quite limited. Additionally, this project aligned with my own background in Free Software community building and my research interest in Peer Production. When I mention how limited today's Clojure experience in Debian is, I can see two reasons for this, deeply intertwined. The first one is that there currently aren't many Clojure-specific packaging tools in Debian (such as a clojure-debian-helper). The second reason for which we currently only have a suboptimal Clojure experience in Debian, and probably the root of the previous one, is that many core build tools and libraries for the language simply have not been packaged yet. My project aimed to attack that seeming root cause. As I said, another reason for me choosing this project is my own experience as the co-founder and leader of probably the first Free Software community experience in my hometown of San Juan, Argentina. That interest in Free Software evolved into a first PhD attempt in what is now known as the field of Peer Production, a subject that has lived within me as a research interest during my day job at a university. Being a Clojure fan, it felt only logical to combine all those interests somehow. And this project seemed like the ideal combination.

The Debian Clojure Team I've been working with a small, yet very warm team. The current incarnation of the Debian Clojure Team exists thanks to the hard work of three people. Elana Hashman (aka the "Clojure necromancer") revived the team around three years ago. Later on, the team gained the invaluable presence of Louis-Philippe Véronneau and Utkarsh Gupta (my mentor and co-mentor, respectively). Together, these Three Musketeers have kept the team alive, allowing us, Debian users, to enjoy Clojure.

Status During the first part of my project, I mainly worked on learning the basics of Debian packaging, and got my first package uploaded. I have to thank Louis-Philippe, Utkarsh, and Elana for their immense patience and support during that part, as it took me quite some effort to grasp the basics of Debian packaging. During the second part of my project, I worked on my last packages and almost completed the originally required scope of the project. I only have to finish working on the transition from the currently provided set of packages (based on a Debian-specific clojure runner) to the newly provided upstream clojure and clj runners. Unfortunately, I didn't have much time left to start working on the opportunities for improvement already identified by the Debian Clojure Team and originally outlined in my proposal. Whereas I did update one older Clojure package not built using leiningen (tools-data-xml-clojure), I didn't write any Lintian tags to make Clojure packaging in Debian more robust, nor did I work towards the automation of Clojure unit tests in autopkgtests via autodep8.

Deliverables: Data vs. Conclusions If we are to talk about deliverables, we should start with the data. According to my original proposal, I was required to provide both new and updated Clojure packages accepted into Debian "unstable", and updated Clojure packaging documentation. Additionally, if time permitted, I was to also provide new Clojure Lintian tags merged by the Lintian maintainers, and new Clojure autodep8 scripts merged by the autodep8 maintainers. Whereas I partially accomplished both required tasks, I didn't manage to start working on any of the optional deliverables. When looked at in isolation, those numbers may look somewhat disappointing to some people. However, I can draw a much more positive conclusion. Why? Firstly, GSoC is supposed to be a learning experience. Moreover, as I said in my original proposal, I approach[ed] this project as "a great opportunity to, finally, start my journey towards becoming a Debian Developer". In that sense, I consider the time invested into this project fruitful. In this way, I have learned the basics of packaging, how to interact with the Debian Clojure Team, and already got my first packages accepted. Plus, I'm looking forward to continuing to work with the Debian Clojure Team so I can attain the original scope of the project. Therefore, all things considered, I can consider this experience a moderate success.

Lessons Learned

Technically speaking, if I have learned one thing during these weeks, it is that packaging, although easy to underestimate, is by no means a trivial process. As any Debian Developer surely knows, the onboarding process can take some time. Plus, what is easy for some people can be difficult for others. In my case, this was quite evident: whereas I can speak several languages and learning new ones takes me little effort, grasping the basics of packaging took me (literally!) blood, sweat and tears. Indeed, the packaging learning curve was quite steep for me. That being said, I did learn a thing or two about packaging. So, if I managed to get here, I'm sure many others can. It may take them more or less time than it took me, but learning (at least the basics of) packaging is an achievable goal. Technical skills aside, I value very highly the non-technical skills I have improved during this project. For instance, I learned that it can take some time to adapt to real-time online communication. Before this project, remote working meant either exchanging emails or getting into video or audio calls, with little emphasis on chat-based interaction. Early on, I realized that the Debian Clojure Team interacts almost exclusively via, well... chat! And those two approaches are very different indeed. It has taken me some time to adjust, but I've improved greatly in this aspect as well. Finally, improving my time management skills has also been a key part of this process. Whereas I had been working remotely for over a year and a half, my day job is not as interaction-dependent as this project was (especially in the beginning). So it took me some time to adapt to this way of working and to plan my workload so I could use those waiting moments to advance other parts of the project. There is still a lot to improve here, but I am improving nevertheless.

Acknowledgments

I first have to thank upstream; more specifically, Alex Miller, one of the upstream developers of the clojure-tools. Every time I needed specific information on what certain parts of the Clojure CLI tools codebase (tools.deps.alpha) do, he popped up a reply in a matter of hours. He has shown genuine interest in the success of this project by carefully replying to my emails with detailed explanations of code intent and form, both in private and in public conversations. Thank you for all that, Alex! Let's move on to the Debian Clojure Team. First, Elana. I thank Elana for her initial openness when I first contacted her about this idea. It was *her* who initially contacted Louis-Philippe so he would become my mentor. I wouldn't have started to work on this project if it weren't for her. Plus, she provided quite a bit of advice on more than one occasion. So, thank you very much for all that, Elana! I also thank Utkarsh, my co-mentor, for his overall technical advice, and a special mention for his initial help setting up my Matrix client for OFTC chat. At that moment, it was *him* who took the time to help me in real time so I could solve that problem. So, thank you very much for all that, Utkarsh! I finally have to thank Louis-Philippe, my mentor, for his patient guidance during the whole process. His dedication and hard work have been *instrumental* to my progress. And a special mention for his tolerance with respect to some unforeseen personal circumstances I had to endure during the first weeks. When one is playing the newbie, times abound when one depends on other people's feedback. And Debian is made of volunteers, who have a life outside it. Every time I asked, Louis-Philippe was there. I wouldn't have gotten here if it weren't for him. So, thank you *so* much for all that, Louis-Philippe!

Final Words

I would like to close this report with a reflection. I have been using Debian for many, many years now, and I had been looking for a way to contribute back to the project for some time. I even did some work on a non-packaging Debian project. That being said, I never managed to deliver much, really. So, the very existence of outreach programs such as this one is, in my humble opinion, crucial. In my case, the funding I got through the GSoC program was instrumental in being able to allocate time for this endeavor, and to finally get started contributing to Debian. Plus, it has had a very positive impact on me, in many ways, some of which I am only starting to discover now that the project is ending. When I put things into perspective, this project is very important to me. Actually, it is nothing but the first step within a long-term journey: becoming a Debian member. Hopefully, I will be able to apply for Debian membership by the end of this year.

Questions?

Thank you very much for your time reading this! I look forward to hearing (or reading) your feedback. Please come and meet the Debian Clojure Team. I will also be at the Clojure BoF at DebConf2021. And do not hesitate to send me an email.

Data

Task Status
  • Required Tasks:
    • T1: Setting up a full Debian packaging development environment and learning the basics of Debian packaging.
      • Successfully completed the first part during the Application period.
      • Successfully completed the second part during the Coding periods.
    • T2: Identifying and packaging the missing dependencies to package clojure-cli.
      • Successfully completed as of the end of Coding II.
    • T3: Packaging clojure-cli.
      • 90% done as of the end of Coding II.
    • T4: Updating clojure to use clojure-cli.
      • To be completed after GSoC.
    • T5: Updating the Clojure Packaging Guide with information on how to use the new clojure-cli scripts.
      • Improved existing documentation. To be completed after GSoC.
  • Optional Tasks:
    • T6: Writing Lintian tags to make Clojure packaging in Debian more robust.
      • To be completed after GSoC.
    • T7: Working to automate Clojure unit testing in autopkgtests using autodep8.
      • To be completed after GSoC.
    • T8: Updating older Clojure packages not built using leiningen or clojure-cli.
      • To be completed after GSoC.

Packages
  1. Updated packages:
  2. New packages:
    • tools-gitlibs-clojure -- Clojure API for programmatically accessing git libraries. ITP: #905543, in NEW.
    • ITP: tools-deps-alpha-clojure -- functional API for dependency management and classpath creation (https://bugs.debian.org/891136). Needs to be uploaded by Louis-Philippe.
  3. In-Progress packages:
    • ITP: clojure-cli -- upstream CLI entrypoints for Clojure https://bugs.debian.org/891141 90% done - Package completed. I only need to finish implementing the transition from existing clojure scripts. To be completed after GSoC.
  4. Opened ITPs:
  5. Reported bugs

Other
  1. Interactions with upstream in the Debian-Clojure mailing list:
    • Many productive interactions with one of the upstream developers, Alex Miller (June, August).
  2. Wiki page:
    1. https://wiki.debian.org/Clojure/Goals/ClojureCLIToolsInDebian

15 August 2021

Mike Gabriel: Chromium Policies Managed under Linux

For a customer project, I recently needed to take a closer look at the best strategies for deploying Chromium settings to thousands of client machines in a corporate network. Unfortunately, the information on how to deploy site-wide Chromium browser policies is a little scattered over the internet, and the intertwining of Chromium preferences and Chromium policies required deeper introspection. Here, I'd like to provide the result of that research, namely a list of references that were studied before setting up Chromium policies for the customer's proof-of-concept.

Difference between Preferences and Policies

Chromium can be controlled via preferences (mainly user preferences) and administratively rolled-out policy files. The difference between preferences and policies is explained here:
https://www.chromium.org/administrators/configuring-other-preferences

The site admin (or distro package maintainer) can pre-configure the user's Chromium experience via a master preferences file (/etc/chromium/master_preferences). This master preferences file is the template for the user's preferences file and gets copied over into the Chromium user profile folder on first browser start. Note: studying recent Chromium code revealed that /etc/chromium/master_preferences is the legacy filename of the initial preferences file; the new filename is /etc/chromium/initial_preferences. We will continue with master_preferences here, as most Linux distributions still provide the initial preferences via this file. Whereas the new filename is already supported by Chromium in openSUSE/SLES, it is not yet supported by Chromium in Debian/Ubuntu (see Debian bug #992178).

Difference between 'managed' and 'recommended' Policies

The difference between 'managed' and 'recommended' Chromium policies is explained here:
https://www.chromium.org/administrators/configuring-other-preferences

Quoting from the above URL (last visited 2021/08): "Policies that should be editable by the user are called 'recommended policies' and offer a better alternative than the master_preferences file. Their contents can be changed and are respected as long as the user has not modified the value of that preference themselves." So, policies of type 'managed' override user preferences (and also lock them in the Chromium settings UI). Those 'managed' policies are good for enforcing browser settings. They can also be blended in for existing browser user profiles, and policies ('managed' and 'recommended') even get blended in at browser run-time when modified. Use case: e.g. rolling out browser security settings that are required for enforcing a site-policy-compliant browser user configuration. Policies of type 'recommended' have an impact on the defaults of the Chromium browser. They apply to already existing browser profiles, if the user hasn't tweaked the to-be-recommended settings yet. Also, they get applied at browser run-time. However, if the user has already fiddled with such a to-be-recommended setting via the Chromium settings UI, the user's choice takes precedence over the recommended policy. Use case: policies of type 'recommended' are good for long-term adjustments to browser configuration options. Especially if users don't touch their browser settings much, 'recommended' policies are a good approach for fine-tuning site-wide browser settings on user machines. CAVEAT: While researching this topic, two problematic observations were made:
  1. All setting parameters put into the master preferences file (/etc/chromium/master_preferences) can't be superseded by 'recommended' Chromium policies. Pre-configured preferences are handled as if the user had already tinkered with those preferences in Chromium's settings UI. It was also discovered that distributors tend to overload /etc/chromium/master_preferences with their best-practice browser settings. Everything that is not required on first browser start should instead be provided as 'recommended' policies, ideally already in the distribution packages for Chromium.
  2. There does not seem to be an elegant way to override the package maintainer's choice of options in the /etc/chromium/master_preferences file via some drop-in replacement file (see Debian bug #992179). So, deploying Chromium involves post-install config file tinkering by hand, by script or by configuration management tools. There is room for improvement here.
Managing Chromium Policy with Files

Chromium supports 'managed' policies and 'recommended' policies. Policies get deployed as JSON files. For Linux, this is explained here:
https://www.chromium.org/administrators/linux-quick-start

Note that for Chromium, the policy files have to be placed into /etc/chromium; the example on the above web page shows where to place them for Google Chrome.

Good 'How to Get Started' Documentation for Chromium Policy Setups

This overview page provides good getting-started documentation on how to provision Chromium via policies:
https://www.chromium.org/administrators/configuring-policy-for-extensions

First-Run Preferences

It seems not every setting can be tweaked via a Chromium policy. Especially the first-run preferences are affected by this:
https://www.chromium.org/developers/design-documents/first-run-customiza...

So, for tweaking the first-run settings, one needs to adjust /etc/chromium/master_preferences (which is suboptimal; again, see Debian bug #992179 for a detailed explanation of why). The required adjustments to master_preferences can be achieved with the jq command-line tool; here is one example:
# Tweak chromium's /etc/chromium/master_preferences file.
# First change: drop everything that can be provisioned via Chromium Policies.
# Rest of the changes: adjust preferences for new users to our needs for all
# parameters that cannot be provisioned via Chromium Policies.
cat /etc/chromium/master_preferences | \
    jq 'del(.browser.show_home_button, .browser.check_default_browser, .homepage)' | \
    jq '.first_run_tabs=[ "https://first-run.example.com/", "https://your-admin-faq.example.com" ]' | \
    jq '.default_apps="noinstall"' | \
    jq '.credentials_enable_service=false | .credentials_enable_autosignin=false' | \
    jq '.search.suggest_enabled=false' | \
    jq '.distribution.import_bookmarks=false | .distribution.verbose_logging=false | .distribution.skip_first_run_ui=true' | \
    jq '.distribution.create_all_shortcuts=true | .distribution.suppress_first_run_default_browser_prompt=true' | \
    cat > /etc/chromium/master_preferences.adapted
if [ -s /etc/chromium/master_preferences.adapted ]; then
        mv /etc/chromium/master_preferences.adapted /etc/chromium/master_preferences
else
        echo "WARNING (chromium tweaks): The file /etc/chromium/master_preferences.adapted was empty after tweaking."
        echo "                           Leaving /etc/chromium/master_preferences untouched."
fi
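If you want to double-check what the jq pipeline actually changed before the adapted file replaces the original, a diff of the two files (run before the mv step above) works well. This is just an optional sanity check, not part of the original script; jq -S sorts the keys so the diff only shows real differences:
# Compare original and adapted preferences with normalized key order
diff <(jq -S . /etc/chromium/master_preferences) \
     <(jq -S . /etc/chromium/master_preferences.adapted)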
The list of available (first-run and other) initial preferences can be found in Chromium's pref_names.cc code file:
https://github.com/chromium/chromium/blob/main/chrome/common/pref_names.cc

List of Available Chromium Policies

The list of available Chromium policies used to be maintained in the Chromium wiki: https://www.chromium.org/administrators/policy-list-3 However, that page these days redirects to the Google Chrome Enterprise documentation:
https://chromeenterprise.google/policies/

Each policy variable has its own documentation page there. Please note the "Supported Features" section for each policy item; there you can see whether the policy supports being placed into "recommended" and/or "managed". This is an example /etc/chromium/policies/managed/50_browser-security.json file (note that all kinds of filenames are allowed, even files without a .json suffix):
{
  "HideWebStoreIcon": true,
  "DefaultBrowserSettingEnabled": false,
  "AlternateErrorPagesEnabled": false,
  "AutofillAddressEnabled": false,
  "AutofillCreditCardEnabled": false,
  "NetworkPredictionOptions": 2,
  "SafeBrowsingProtectionLevel": 0,
  "PaymentMethodQueryEnabled": false,
  "BrowserSignin": false
}
And this is an example /etc/chromium/policies/recommended/50_homepage.json file:
{
  "ShowHomeButton": true,
  "WelcomePageOnOSUpgradeEnabled": false,
  "HomepageLocation": "https://www.example.com"
}
And for defining a custom search provider, I use /etc/chromium/policies/recommended/60_searchprovider.json (here, I recommend not using DuckDuckGo as DefaultSearchProviderName, but some custom name; unfortunately, I did not find a policy parameter that simply selects an already existing search provider name as the default :-( ):
{
  "DefaultSearchProviderEnabled": true,
  "DefaultSearchProviderName": "DuckDuckGo used by Example.com",
  "DefaultSearchProviderIconURL": "https://duckduckgo.com/favicon.ico",
  "DefaultSearchProviderEncodings": ["UTF-8"],
  "DefaultSearchProviderSearchURL": "https://duckduckgo.com/?q={searchTerms}",
  "DefaultSearchProviderSuggestURL": "https://duckduckgo.com/ac/?q={searchTerms}&type=list",
  "DefaultSearchProviderNewTabURL": "https://duckduckgo.com/chrome_newtab"
}
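Because a malformed policy file can easily go unnoticed during a rollout, it may be worth sanity-checking the deployed files. Here is a minimal sketch of such a check (it assumes the policy directory layout used above; jq exits non-zero on files that are not valid JSON):
# Report any policy file that does not parse as valid JSON
for f in /etc/chromium/policies/managed/* /etc/chromium/policies/recommended/*; do
    jq empty "$f" || echo "Malformed policy file: $f"
done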
The Essence and Recommendations

On first startup, Chromium copies /etc/chromium/master_preferences to $HOME/.config/chromium/Default/Preferences. It does this only if the Chromium user profile hasn't been created yet. So, settings put into master_preferences by the distro and the site or device admin are one-shot preferences (a new user logs into a device, and the preferences get applied on the first start of Chromium). Chromium policy files, however, get continuously applied at browser runtime: Chromium watches its policy files, and you can observe Chromium settings change when policy files get modified. So, for continuously provisioning site-wide settings that should reliably trickle into the user's browser configuration, Chromium policies should definitely be preferred over master_preferences, and this should be the approach to take. When using Chromium policies, one needs to take into account that settings in /etc/chromium/master_preferences seem to take precedence over 'recommended' policies. So, settings that you want to deploy as recommended policies must be removed from /etc/chromium/master_preferences. Essentially, these are the recommendations extracted from all the above research for deploying Chromium at enterprise scale:
  1. Everything that's required at first-run should go into /etc/chromium/master_preferences.
  2. Everything that's not required at first-run should be removed from /etc/chromium/master_preferences.
  3. Everything that's deployable as a Chromium policy should be deployed as a policy (as you can influence existing browser sessions with that, also long-term)
  4. Chromium policy files should be split up into several files. Chromium parses those files in alpha-numerical order. If policies occur more than once, the last policy being parsed takes precedence.
Feedback

If you have any feedback or input on this post, I'd be happy to hear it. Please get in touch via the various channels where I am known as sunweaver (OFTC and libera.chat IRC, [matrix], Mastodon, E-Mail at debian.org, etc.). Looking forward to hearing from you. Thanks! light+love
Mike Gabriel (aka sunweaver)

13 July 2021

Debian XMPP Team: XMPP Novelties in Debian 11 Bullseye

This is not only the Year of the Ox, but also the year of Debian 11, code-named bullseye. The release lies ahead, and the full freeze starts this week. A good opportunity to take a look at what is new in bullseye. In this post, new programs and new software versions related to XMPP, also known as Jabber, are presented. XMPP has existed since 1999 and has a diverse and active developer community. It is a universal communication protocol, used for instant messaging, IoT, WebRTC, and social applications. You will probably encounter some oxen in this post. That's all for now. Enjoy Debian 11 bullseye and Happy Chatting!

12 July 2021

Chris Lamb: Saint Alethia? On Bodies of Light by Sarah Moss

How are you meant to write about an unfinished emancipation? Bodies of Light is a 2014 book by Glasgow-born Sarah Moss on the stirrings of women's suffrage in an arty clique in nineteenth-century England. Set in the intellectually smoggy cities of Manchester and London, we follow the studious and intelligent Alethia 'Ally' Moberly, who is struggling to gain the acceptance of herself, her mother and the General Medical Council. 'Alethia' may be the Greek goddess of truth, but our Ally is really searching for wisdom. Her strengths are her patience and bookish learning, and she acquires Latin as soon as she learns male doctors will use it to keep women away from the operating theatre. In fact, Ally's acquisition of language becomes a recurring leitmotif: replaying a suggestive dream involving a love interest, for instance, Ally thinks of 'dark, tumbling dreams for which she has a perfectly adequate vocabulary'. There are very few moments of sensuality in the book, and pairing them with Ally's understated wit achieves a wonderful effect. The amount we learn about a character is adapted for effect as well. There are few psychological insights about Ally's sister, for example, and she thus becomes a fey, mysterious and almost Pre-Raphaelite figure below the surface of a lake, to match the artistic movement being portrayed. By contrast, we get almost the complete origin story of Ally's mother, Elizabeth, who also constitutes one of those rare birds in literature: an entirely plausible Christian religious zealot. Nothing Ally does is ever enough for her, but unlike most modern portrayals of this dynamic, neither of them is aware of what is going on, and it is conveyed in a way that is chillingly... benevolent. This was brought home in the annual 'birthday letters' that Elizabeth writes to her daughter:
Last year's letter said that Ally was nervous, emotional and easily swayed, and that she should not allow her behaviour to be guided by feeling but remember always to assert her reason. Mamma would help her with early hours, plain food and plenty of exercise. Ally looks at the letter, plump in its cream envelope. She hopes Mamma wrote it before scolding her yesterday.
The book makes the implicit argument that it is a far more robust case against pervasive oppression to portray a character in, say, 'a comfortable house, a kind husband and a healthy child', who is nonetheless still deeply miserable, for reasons they can't quite put their finger on. And when we see Elizabeth perpetuating some generational trauma with her own children, it is telling that this pattern is not short-circuited by an improvement in their material conditions. Rather, it is arrested only by a kind of political consciousness: in Ally's case, the education she receives at school. In fact, if there is a real hero in Bodies of Light, it is the very concept of female education. There's genuine shading to the book's ideological villains, despite finding their apotheosis in the jibes about 'plump Tories'. These remarks first stuck out to me as cheap thrills by the author; easy and inexpensive potshots that are unbecoming of the pages around them. But they soon prove themselves to be moments of much-needed humour. Indeed, when passages like this are read in their proper context, the proclamations made by sundry Victorian worthies start to serve as deadpan satire:
We have much evidence that the great majority of your male colleagues regard you as an aberration against nature, a disgusting, unsexed creature and a danger to the public.
Funny as these remarks might be, however, these moments have a subtler and more profound purpose as well. Historical biography always has the risk of allowing readers to believe that the 'issue' has already been solved hence, perhaps, the enduring appeal of science fiction. But Moss providing these snippets from newspapers 150 years ago should make a clear connection to a near-identical moral panic today. On the other hand, setting your morality tale in the past has the advantage that you can show that progress is possible. And it can also demonstrate how that progress might come about as well. This book makes the argument for collective action and generally repudiates individualisation through ever-fallible martyrs. Ally always needs 'allies' not only does she rarely work alone, but she is helped in some way by almost everyone around her. This even includes her rather problematic mother, forestalling any simplistic proportioning of blame. (It might be ironic that Bodies of Light came out in 2014, the very same year that Sophia Amoruso popularised the term 'girl boss'.) Early on, Ally's schoolteacher is coded as the primary positive influence on her, but Ally's aunt later inherits this decisive role, continuing Ally's education on cultural issues and what appears to be the Victorian version of 'self-care'. Both the aunt and the schoolteacher are, of course, surrogate mother figures. After Ally arrives in the cut-throat capital, you often get the impression you are being shown discussions where each of the characters embodies a different school of thought within first-wave feminism. This can often be a fairly tedious device in fiction, the sort of thing you would find in a Sally Rooney novel, Pilgrim's Progress or some other ponderously polemical tract. Yet when Ally appears to 'win' an argument, it is only in the sense that the narrator continues to follow her, implicitly and lightly endorsing her point. Perhaps if I knew my history better, I might be able to associate names with the book's positions, but perhaps it is better (at least for the fiction-reading experience...) that I don't, as the baggage of real-world personalities can often get in the way. I'm reminded here of Regina King's One Night in Miami... (2020), where caricatures of Malcolm X, Muhammad Ali, Jim Brown and Sam Cooke awkwardly replay various arguments within an analogous emancipatory struggle. Yet none of the above will be the first thing a reader will notice. Each chapter begins with a description of an imaginary painting, providing a title and a date alongside a brief critical exegesis. The artworks serve a different purpose in each chapter: a puzzle to be unlocked, a fear to be confirmed, an unsolved enigma. The inclusion of (artificial) provenances is interesting as well, not simply because they add colour and detail to the chapter to come, but because their very inclusion feels reflective of how we see art today.
Ophelia (1852) by Sir John Everett Millais.
To continue the question this piece began, how should an author conclude a story about an as-yet-unfinished struggle for emancipation? How can they? Moss' approach dares you to believe the ending is saccharine or formulaic, but what else was she meant to do? Turn in yet another tale of struggle and suffering? After all, Thomas Hardy has already written Tess of the d'Urbervilles. All the same, it still feels slightly unsatisfying to end merely with Ally's muted, uncelebrated success. Nevertheless, I suspect many readers will dislike the introduction of a husband in the final pages, taking it as a betrayal of the preceding chapters. Yet Moss refuses to let us see the resolution as a Disney-style happy ending. True, Ally's husband turns out to be a rather dashing lighthouse builder, but isn't it Ally herself who is lighting the way in their relationship, warning other women away from running aground on the rocks of mental illness? And Tom feels more like a reflection of Ally's newly acquired self-acceptance than that missing piece she needed all along. We learn at one point that Tom's 'importance to her is frightening'; this is hardly something a Disney princess would say. In fact, it is easy to argue that a heroic ending for Ally might have been an even more egregious betrayal. The evil of saints is that you can never live up to them, for the concept of a 'saint' embodies an unreachable ideal that no human can begin to copy. By being taken as unimpeachable and uncorrectable as well, saints preclude novel political action, and are therefore undoubtedly agents of reaction. Appreciating historical figures as the (flawed) people that they really were is the first step if you wish to continue or adapt their political ideas. I had acquired Bodies of Light after enjoying Moss' Summerwater (2020), which had the dubious honour of being touted as the 'first lockdown novel', despite it being finished before Covid-19. There are countless ways one might contrast the two, so I will limit myself to the sole observation that the strengths of one are perhaps the weaknesses of the other. It's not that Bodies of Light ends with a whimper, of course, as it quietly succeeds in concert with Ally. But by contrast, the tighter arc of Summerwater (which is set during a single day, switches protagonist between chapters, features a closed-off community, etc.) can reach a higher high with its handful of narrative artifices. Summerwater is perhaps like Phil Collins' solo career: 'more satisfying, in a narrower way.'

31 May 2021

Russ Allbery: Review: The Relentless Moon

Review: The Relentless Moon, by Mary Robinette Kowal
Series: Lady Astronaut #3
Publisher: Tor
Copyright: 2020
ISBN: 1-250-23648-7
Format: Kindle
Pages: 542
Content note: Discussion of eating disorders in this review and portrayal of an eating disorder in the novel. The Relentless Moon is the third book of the Lady Astronaut series and the first one that doesn't feature Elma. It takes place simultaneously with The Fated Sky and tells the story of what happened on Earth, and the Moon, while Elma was in transit to Mars. It's meant to be read after The Fated Sky and would be a significant spoiler for that novel. The protagonist of this novel is Nicole Wargin: wife of the governor of Kansas (a more prestigious state in this universe since the seat of government for the United States was relocated to Kansas City after the Meteor), expert politician's wife, and another of the original group of female astronauts. Kenneth, her husband, is considering a run for president. Nicole is working as an astronaut, helping build out the permanent Moon base. But there are a lot of people on Earth who are not happy with the amount of money and political attention that the space program is getting. They have decided to move beyond protests and political opposition to active sabotage. Nicole was hoping to land an assignment piloting one of the larger rockets. What she gets instead is an assignment as secretary to the Lunar Colony Administrator, as cover. Her actual job is to watch for saboteurs that may or may not be operating on the Moon. Before she even leaves the planet, one of the chief engineers of the space program is poisoned. The pilot of the translunar shuttle falls ill during the flight to the Moon. And then the shuttle's controls fail during landing and disaster is only narrowly averted. The story from there is a cloak and dagger sabotage investigation mixed with Kowal's meticulously-researched speculation about a space program still running on 1950s technology but drastically accelerated by the upcoming climate collapse of Earth. Nicole has more skills for this type of mission than most around her realize due to very quiet work she did during the war, not to mention the feel for personalities and politics that she's honed as a governor's wife. But, like Elma, she's also fighting some personal battles. Elma's are against anxiety; Nicole's are against an eating disorder. I think my negative reaction to this aspect of the book is not the book's fault, but it was sufficiently strong that it substantially interfered with my enjoyment. The specific manifestation of Nicole's eating disorder is that she skips meals until she makes herself ill. My own anxious tendencies hyperfocus on prevention and on rule-following. The result is that once Kowal introduces the eating disorder subplot, my brain started anxiously monitoring everything that Nicole ate and keeping track of her last meal. This, in turn, felt horribly intrusive and uncomfortable. I did not want to monitor and police Nicole's eating, particularly when Nicole clearly was upset by anyone else doing exactly that, and yet I couldn't stop the background thread of my brain that did so. The result was a highly unsettling feeling that I was violating the privacy of the protagonist of the book that I was reading, mixed with anxiety and creeping dread about her calorie intake. Part of this may have been intentional to give the reader some sense of how this felt to Nicole. (The negative interaction with my own anxiety was likely not intentional.) Kowal did an exceptionally good job at invoking reader empathy (at least in me) for Elma's anxiety in The Calculating Stars. 
I didn't like the experience much this time, but that doesn't make it an invalid focus for a book. It may, however, make me a poor reviewer for this part of the reading experience. This was a major subplot, so it was hard to escape completely, but I quite enjoyed the rest of the book. It's not obvious who the saboteurs are or even how the sabotage is happening, and the acts of clear sabotage are complicated by other problems that may be more subtle sabotage, may be bad luck, or may be the inherent perils of trying to survive in space. Many of Nicole's suspicions do not pan out, which was a touch that I appreciated. She has to look for ulterior motives in everything, and in reality that means she'll be wrong most of the time, but fiction often unrealistically short-cuts that process. I also liked how Kowal handles the resolution, which avoids villain monologues and gives Nicole's opposition their own contingency plans, willingness to try to adapt to setbacks, and the intelligence to keep trying to manipulate the situation even when their plans fail. As with the rest of this series, there's a ton of sexism and racism, which the characters acknowledge and which Nicole tries to resist as much as she can, but which is so thoroughly baked into the society that it's mostly an obstacle that has to be endured. This is not the book, or series, to read if you're looking for triumph over discrimination or for women being allowed to be awesome without having to handle and soothe men's sexist feelings about their abilities. Nicole gets a clear victory arc, but it's a victory despite sexism rather than an end to it. The Relentless Moon did feel a bit long. There are a lot of scene-setting preliminaries before Nicole leaves for the Moon, and I'm not sure all of them were necessary at that length. Nicole also spends a lot of time being suspicious of everyone and second-guessing her theories, and at a few points I thought that dragged. But once things start properly happening, I thoroughly enjoyed the technological details and the thought that Kowal put into the mix of sabotage, accidents, and ill-advised human behavior that Nicole has to sort through. The last half of the book is the best, which is always a good property for a book to have. The eating disorder subplot made me extremely uncomfortable for reasons that are partly peculiar to me, but outside of that, this is a solid entry in the series and fills in some compelling details of what was happening on the other end of the intermittent radio messages Elma received. If you've enjoyed the series to date, you will probably enjoy this installment as well. But if you didn't like the handling of sexism and racism as deeply ingrained social forces that can at best be temporarily bypassed, be warned that The Relentless Moon continues the same theme. Also, if you're squeamish about medical conditions in your fiction, be aware that the specific details of polio feature significantly in the book. Rating: 7 out of 10

30 May 2021

Russell Coker: HP ML110 Gen9

I've just bought an HP ML110 Gen9 as a personal workstation; here are my notes about it and documentation on running Debian on it.

Why a Server?

I bought this because the ML350p Gen8 turned out to be too noisy for my taste [1]. I've just been editing my page about Memtest86+ RAM speeds [2]: over the course of 10 years (high-end laptop in 2001 to low-end server in 2011) RAM speed increased by a factor of 100. RAM speed has been increasing at a lower rate than CPU speed and is becoming an increasing bottleneck on system performance. So while I could get a faster white-box system, the cost of a second-hand server isn't that great and I'm getting a system that's 100 times faster than what was adequate for most tasks in 2001. HP makes some nice workstation-class machines with ECC RAM (think server without remote management, hot-swap disks, or redundant PSU but with sound hardware), but they are significantly more expensive on the second-hand market than servers. This server cost me $650 and came with 2*480G DC-grade SSDs (Intel but with HPE stickers). I hope that more than half of the purchase price will be recovered from selling the SSDs (I will use NVMe). Also, 64G of non-ECC RAM costs $370 from my local store. As I want lots of RAM for testing software on VMs, it will probably turn out that the server cost me less than the cost of new RAM once I've sold the SSDs!

Monitoring
wget -O /usr/local/hpePublicKey2048_key1.pub https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub
echo "# HP monitoring" >> /etc/apt/sources.list
echo "deb [signed-by=/usr/local/hpePublicKey2048_key1.pub] http://downloads.linux.hpe.com/SDR/downloads/MCP/Debian/ stretch/current-gen9 non-free" >> /etc/apt/sources.list
The above commands will make the management utilities installable on Debian/Buster. If using Bullseye (Testing at the moment) then you need to have Buster repositories in APT for dependencies; HP doesn't seem to have packaged all their utilities for Buster.
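Once the repository lines are in place, a quick way to confirm that APT can actually see the HPE management packages is something like the following (a small sketch; hp-health is the package discussed further below):
# Refresh package lists and check that hp-health is now available
apt-get update
apt-cache policy hp-health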
wget -r -np -A Contents-amd64.bz2 http://downloads.linux.hpe.com/SDR/repo/mcp/debian/dists
To find out which repositories had the programs I need I ran the above recursive wget and then uncompressed them for grep -R (as an aside it would be nice if bzgrep supported -R). I installed the hp-health package which has hpasmcli for viewing and setting many configuration options and hplog for viewing event log data and thermal data (among a few other things). I ve added a new monitor to etbemon hp-temp.monitor to monitor HP server temperatures, I haven t made a configuration option to change the thresholds for what is considered normal because I don t expect server class systems to be routinely running above the warning temperature. For the linux-temp.monitor script I added a command-line option for the percentage of the high temperature that is an error condition as well as an option for the number of CPU cores that need to be over-temperature, having one core permanently over the high temperature due to a web browser seems standard for white-box workstations nowadays. The hp-health package depends on libc6-i686 lib32gcc1 even though none of the programs it contains use lib32gcc1. Depending on lib32gcc1 instead of lib32gcc1 lib32gcc-s1 means that installing hp-health requires removing mesa-opencl-icd which probably means that BOINC can t use the GPU among other things. I solved this by editing /var/lib/dpkg/status and changing the package dependencies to what I desired. Note that this is not something for a novice to do, make a backup and make sure you know what you are doing! Issues The HPE Dynamic Smart Array B140i is a software RAID device. While it s convenient for some users that software RAID gets supported in the UEFI boot process, generally software RAID is a bad idea. Also my system has hot-swap drive caddies but the controller doesn t support hot-swap. So the first thing to do was to configure the array controller to run in AHCI mode and give up on using hot-swap drive caddies for hot-swap. I tested all the documented ways of scanning for new devices and nothing other than a reboot made the kernel recognise a new SATA disk. According to specs provided by Dell and HP the ML110 Gen9 makes less noise than the PowerEdge T320, according to my own observations the reverse is the case. I don t know if this is because of Dell being more conservative in their specs than HP or because of how dBA is measured vs my own personal annoyance thresholds for sounds. As the system makes more noise than I m comfortable with I plan to build a rubber enclosure for the rear of the system to reduce noise, that will be the subject of another post. For Australian readers Bunnings has some good deals on rubber floor mats that can be used to reduce server noise. The server doesn t have sound hardware, while one could argue that servers don t need sound there are some server uses for sound hardware such as using line input as a source of entropy. Also for a manufacturer it might be a benefit to use the same motherboard for workstations and servers. Fortunately a friend gave me a nice set of Logitech USB speakers a few years ago that I hadn t previously had a cause to use, so that will solve the problem for me (I don t need line-in on a workstation). UEFI and Memtest I decided to try UEFI boot for something new (in the past I d only used UEFI boot for a server that only had large disks). In the past I ve booted all my own systems with BIOS boot because I m familiar with it and they all have SSDs for booting which are less than 2TB in size (until recently 2TB SSDs weren t affordable for my personal use). 
The Debian UEFI wiki page is worth reading [3]. The Debian Wiki page about ProLiant servers [4] is worth reading too. Memtest86+ doesn t support EFI booting (just goes to a black screen) even though Debian/Buster puts in a GRUB entry for it (Debian bug #695246 was filed for this in 2012). Also on my ML110 Memtest86+ doesn t report the RAM speed (a known issue on Memtest86+). Comments on the net say that Memtest86+ hasn t been maintained for a long time and Memtest86 (the non-free version) has been updated more recently. So far I haven t seen a system with ECC RAM have a memory problem that could be detected by Memtest86+, the memory problems I ve seen on ECC systems have been things that prevent booting (RAM not being recognised correctly), that are detected by the BIOS as ECC errors before booting, or that are reported by the kernel as ECC errors at run time (happened years ago and I can t remember the details). Overall I m not a fan of EFI with the way it currently works in Debian. It seems to add some of the GRUB functionality into the BIOS and then use that to load GRUB. It seems that EFI can do everything you need and it would be better to just have a single boot loader not two of them chained. Power Supply There are a range of PSUs for the ML110, the one I have has the smallest available PSU (350W) and doesn t have a PCIe power cable (the one used for video cards). Here is the HP document which shows the cabling for the various ML110 Gen8 PSUs [5], I have the 350W PSU. One thing I ve considered is whether I could make an adaptor from the drive bay power to the PCIe connector. A quick web search indicates that 4 SAS disks when active can take up to 75W more power than a system with no disks. If that s the case then the 2 spare drive bay connectors which can each handle 4 disks should be able to supply 150W. As a 6 pin PCIe power cable (GPU power cable) is rated at 75W that should be fine in theory (here s a page with the pinouts for PCIe power connectors [6]). My video card is a Radeon R7 260X which apparently takes about 113W all up so should be taking less than 75W from the PCIe power cable. All I really want is YouTube, Netflix, and text editing at 4K resolution. So I don t need much in terms of 3D power. KDE uses some of the advanced features of modern video cards, but it doesn t compare to 3D gaming. According to the Wikipedia page for Radeon RX 500 series [7] the RX560 supports DisplayPort 1.4 and HDMI 2.0 (both of which do 4K@60Hz) and has a TDP of 75W. So a RX560 video card seems like a good option that will work in any system that doesn t have a spare PCIe power cable. I ve just ordered one of those for $246 so hopefully that will arrive in a week or so. PCI Fan The ML110 Gen9 has an optional PCIe fan and baffle to cool PCIe cards (part number 784580-B21). Extra cooling of PCIe cards is a good thing, but $400 list price (and about $50 ebay price) for the fan and baffle is unpleasant. When I boot the system with a PCIe dual-ethernet card and two PCIe NVMe cards it gives a BIOS warning on boot, when I add a video card it refuses to boot without the extra fan. It s nice that the system makes sure it doesn t get into a thermal overload situation, but it would be nicer if they just shipped all necessary fans with it instead of trying to get more money out of customers. I just bought a PCI fan and baffle kit for $60. 
Conclusion

In spite of the unexpected expense of a new video card and PCI fan, the overall cost of this system is still low, particularly considering that I'll find another use for the video card which needs an extra power connector. It is disappointing that HP didn't supply a more capable PSU and fit all the fans to all models; the expectation of a server is that you can just do server stuff, not have to buy extra bits before you can do server stuff. If you want to install Tesla GPUs or something then it's expected that you might need to do something unusual with a server, but the basic stuff should just work. A single-processor tower server should be designed to function as a deskside workstation and be able to handle an average video card. Generally it's a nice computer, and I look forward to getting the next deliveries of parts so I can make it work properly.

27 May 2021

Wouter Verhelst: SReview and pandemics

The pandemic was a bit of a mess for most FLOSS conferences. The two conferences that I help organize -- FOSDEM and DebConf -- are no exception. In both conferences, I do essentially the same work: as a member of both video teams, I manage the postprocessing of the video recordings of all the talks that happened at the respective conference(s). I do this by way of SReview, the online video review and transcode system that I wrote, which essentially crowdsources the manual work that needs to be done, and automates as much as possible of the workflow. The original version of SReview consisted of a database, a (very basic) Mojolicious-based webinterface, and a bunch of perl scripts which would build and execute ffmpeg command lines using string interpolation. As a quick hack that I needed to get working while writing it in my spare time in half a year, that approach was workable and resulted in successful postprocessing after FOSDEM 2017, and a significant improvement in time from the previous years. However, I did not end development with that, and since then I've replaced the string interpolation by an object oriented API for generating ffmpeg command lines, as well as modularized the webinterface. Additionally, I've had help reworking the user interface into a system that is somewhat easier to use than my original interface, and have slowly but surely added more features to the system so as to make it more flexible, as well as support more types of environments for the system to run in. One of the major issues that still remains with SReview is that the administrator's interface is pretty terrible. I had been planning on revamping that for 2020, but then massive amounts of people got sick, travel was banned, and both the conferences that I work on were converted to an online-only conference. These have some very specific requirements; e.g., both conferences allowed people to upload a prerecorded version of their talk, rather than doing the talk live; since preprocessing a video is, technically, very similar to postprocessing it, I adapted SReview to allow people to upload a video file that it would then validate (in terms of length, codec, and apparent resolution). This seems like easy to do, but I decided to implement this functionality so that it would also allow future use for in-person conferences, where occasionally a speaker requests that modifications would be made to the video file in a way that SReview is unable to do. This made it marginally more involved, but at least will mean that a feature which I had planned to implement some years down the line is now already implemented. The new feature works quite well, and I'm happy I've implemented it in the way that I have. In order for the "upload" processing and the "post-event" processing to not be confused, however, I decided to import the conference schedules twice: once as the conference itself, and once as a shadow version of that conference for the prerecordings. That way, I could track the progress through the system of the prerecording completely separately from the progress of the postprocessing of the video (which adds opening/closing credits, and transcodes to multiple variants of the same video). Schedule parsing was something that had not been implemented in a generic way yet, however; since that made doubling the schedule in that way rather complex, I decided to bite the bullet and (finally) implement schedule parsing in a generic way. 
Currently, schedule parsers exist for two formats (Pentabarf XML and the Wafer variant of that same format which is almost, but not quite, entirely the same). The API for that is quite flexible, and I'm happy with the way things have been implemented there. I've also implemented a set of "virtual" parsers, which allow mangling the schedule in various ways (by either filtering out talks that we don't want, or by generating the shadow version of the schedule that I talked about earlier). While the SReview settings have reasonable defaults, occasionally the output of SReview is not entirely acceptable, due to more complicated matters that then result in encoding artifacts. As a result, the DebConf video team has been doing a final review step, completely outside of SReview, to ensure that such encoding artifacts don't exist. That seemed suboptimal, so recently I've been working on integrating that into SReview as well. First tests have been run, and seem to be acceptable, but there's still a few loose ends to be finalized. As part of this, I've also reworked the way comments could be entered into the system. Previously the presence of a comment would signal that the video has some problems that an administrator needed to look at. Unfortunately, that was causing some confusion, with some people even thinking it's a good place to enter a "thank you for your work" style of comment... which it obviously isn't. Turning it into a "comment log" system instead fixes that, and also allows for better two-way communication between administrators and reviewers. Hopefully that'll improve things in that area as well. Finally, the audio normalization in SReview -- for which I've long used bs1770gain -- is having problems. First of all, bs1770gain will sometimes alter the timing of the video or audio file that it's passed, which is very problematic if I want to process it further. There is an ffmpeg loudnorm filter which implements the same algorithm, so that should make things easier to use. Secondly, the author of bs1770gain is a strange character that I'd rather not be involved with. Before I knew about the loudnorm filter I didn't really have a choice, but now I can just rip bs1770gain out and replace it by the loudnorm filter. That will fix various other bugs in SReview, too, because SReview relies on behaviour that isn't actually there (but which I didn't know at the time when I wrote it). All in all, the past year-and-a-bit has seen a lot of development for SReview, with multiple features being added and a number of long-standing problems being fixed. Now if only the pandemic would subside, allowing the whole "let's do everything online only" wave to cool down a bit, so that I can finally make time to implement the admin interface...
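For readers curious what the loudnorm-based normalization mentioned above might look like, it can be prototyped with a plain ffmpeg invocation. This is only a rough sketch with assumed loudness targets and file names, not SReview's actual pipeline:
# Normalize audio loudness (EBU R128 style) with ffmpeg's loudnorm filter,
# leaving the video stream untouched.
ffmpeg -i talk-input.mkv \
       -af loudnorm=I=-23:TP=-1.0:LRA=7 \
       -c:v copy -c:a aac \
       talk-normalized.mkv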

20 May 2021

Taowa: Video calling in Dino in experimental, oh my!

Dino, packaged as dino-im in Debian, is an XMPP chat client. Thanks to the hard work of its developers, Dino has been making progress on supporting video and audio calls. Building on the work of my co-maintainer, I've packaged the latest commits to dino-im in Debian experimental. Adapting to the changes in how Dino is built since the last release took a bit of effort, but I'm glad to say that experimental now has support for video calls in dino-im, and that they work quite well! I was able to test video calling for multiple hours, and while there were clearly still a few issues, the entire experience felt quite... comfortable. I'm confident that video calling will be in great shape for bookworm!

4 May 2021

Steve Kemp: Password store plugin: env

Like many I use pass for storing usernames and passwords. This gives me easy access to credentials in a secure manner. I don't like the way that the metadata (i.e. filenames) are public, but that aside it is a robust tool I've been using for several years. The last time I talked about pass was when I talked about showing the age of my credentials, via the integrated git support. That then became a pass-plugin:
  frodo ~ $ pass age
  6 years ago GPG/root@localhost.gpg
  6 years ago GPG/steve@steve.org.uk.OLD.gpg
  ..
  4 years, 8 months ago Domains/Domain.fi.gpg
  4 years, 7 months ago Mobile/dna.fi.gpg
  ..
  1 year, 3 months ago Websites/netlify.com.gpg
  1 year ago Financial/ukko.fi.gpg
  1 year ago Mobile/KiK.gpg
  4 days ago Enfuce/sre.tst.gpg
  ..
Anyway today's work involved writing another plugin, named env. I store my data in pass in a consistent form, each entry looks like this:
   username: steve
   password: secrit
   site: http://example.com/login/blah/
   # Extra data
The keys vary, sometimes I use "login", sometimes "username", other times "email", but I always label the fields in some way. Recently I was working with some CLI tooling that wants to have a username/password specified and I patched it to read from the environment instead. Now I can run this:
     $ pass env internal/cli/tool-name
     export username="steve"
     export password="secrit"
That's ideal, because now I can source that from within a shell:
   $ source <(pass env internal/cli/tool-name)
   $ echo $username
   steve
Or I could directly execute the tool I want:
   $ pass env --exec=$HOME/ldap/ldap.py internal/cli/tool-name
   you are steve
   ..
TLDR: If you store your password entries in "key: value" form you can process them to export $KEY=$value, and that allows them to be used without copying and pasting into command-line arguments (e.g. "~/ldap/ldap.py --username=steve --password=secrit")
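For anyone curious what that transformation boils down to, the same effect can be approximated with a one-liner. This is only a sketch of the idea, not the actual plugin (which may well be smarter about which keys it exports):
# Turn "key: value" lines of a pass entry into shell export statements,
# skipping comment lines such as "# Extra data".
pass show internal/cli/tool-name | \
    sed -n 's/^\([A-Za-z_][A-Za-z_0-9]*\): \(.*\)$/export \1="\2"/p'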

1 May 2021

Ingo Juergensmann: The Fediverse What About Resources?

Today is May 1st. In about two weeks, on May 15th, WhatsApp will put their changed Terms of Service into action, and if you don't accept their rules you won't be able to use WhatsApp any longer. Early this year there was already a strong movement away from WhatsApp towards other solutions, mainly to Signal, but other services like the Fediverse also gained some new users, and XMPP got its fair share of new users as well. So, what to do about the WhatsApp ToS change then? Shall we all go to Signal? Surely not. Signal is another vendor lock-in silo. It's centralistic, and recent development plans want to implement some crypto payment system; even Bruce Schneier thinks that this is a bad idea. Other alternatives often named include Matrix/Element or XMPP. Today, Don di Dislessia in the (German) Fediverse asked about the power consumption of the Fediverse, including Matrix and XMPP, and how much renewable energy is being used. Of course there is no easy answer to this question, but I tried my best, at least for my own server. Here are my findings and conclusions.

Power
Screenshot showing power consumption of the server
Currently my server in the colocation is using about 93W on average, with a 6-core Xeon E5-2630L, 128 GB RAM, 4x 2 TB WD Red and 1 Samsung 960 Pro NVMe. The server is 7 years old. When I started with that server the power consumption was about 75W, but back then there were far fewer users on the server. So, 20W more over the past years.

Users
I'm running my Friendica node on Nerdica.net since 2013. Over the years it became one of the largest Friendica servers in the Fediverse; for some time it was the largest one. It currently has about 700 total users and 180 monthly active users. My Mastodon instance on Nerdculture.de has about 1000 total users and about 300 monthly active users. Since last year I also run a Matrix-Synapse server. Although I invited my family, I'm in fact the only active user on that server and have joined some channels. My XMPP server is even older than my Friendica node. For a long time I had maybe 20 users. Since I set up a new website and added some domains like hookipa.net and xmpp.social, the user count increased, and currently I have about 130 users on those two domains and maybe 50 monthly active users. Also note that all my Friendica and Mastodon users can use XMPP with their accounts, but they won't be counted the same way as native users on ejabberd, because the auth backend is different. So, let's assume I have about 2000 total users and 500 monthly active users.

CPU, Database Sizes and Disk I/O
Let's have a look at how many resources are being used by those users, comparing database sizes and CPU times (according to xentop). Friendica uses the largest database and causes the most disk I/O on the NVMe, but it's difficult to differentiate the load between the web apps on the webserver. So, let's have a quick look at a simple metric: the number of lines in the webserver logfile. These metrics correlate to some degree with the database I/O load, at least for Friendica. If you take into account the number of users, things look quite different.

Conclusion
Overall, my personal impression is that Matrix is really bad in regard to resource usage. Given that I'm the only active user, it uses an exceptional amount of resources. When you also consider that Matrix is using a distributed database for its chat rooms, you can assume that the resource usage is multiplied across the network, making things even worse. Friendica is using a large database and many disk accesses, but has a fairly large user base, so it seems okay, though it should of course be improved. Mastodon seems to be quite good, considering the database size, the number of log lines and the user count. XMPP turns out to be the most efficient contestant in this comparison: it uses far fewer CPU cycles and much less database disk I/O. Of course, Mastodon/Friendica are different services than XMPP or Matrix. So, coming back to the initial question about alternatives to WhatsApp, the answer for me is: you should prefer XMPP over Matrix, if only for reasons of saving resources and thus reducing power consumption. Less power consumption also means a smaller ecological footprint and fewer CO2 emissions for your communication with your family and friends. XMPP is surely not the perfect replacement for WhatsApp, but I think it is the best thing to recommend. As said above, I don't think that Signal is a viable option. It's just another proprietary silo with all the problems that come with it. Matrix is a resource hog and not a messenger, but rather an MS Teams replacement. Element as the main Matrix client is laggy and not multi-account/multi-server capable.
Other Matrix clients do support multiple accounts but are not as feature-complete as Element. In the end the Matrix ecosystem will suffer from the same issues as XMPP did a decade ago, but XMPP has learned to deal with them. XMPP has also been progressing fast in the last few years and has solved many of the problems people still complain about. Sure, there are still some open issues. The situation on iOS is still not as good as on Android with Conversations, but it is fairly close. There are many efforts to improve XMPP. There is Quicksy IM, a service that uses your phone number as Jabber ID/JID and is thus comparable to Signal, which also uses phone numbers as unique identifiers; but Quicksy is compatible with the XMPP standards. Snikket is another new XMPP ecosystem aiming at smaller groups hosting their own server by simply installing a Docker container and setting up some basic SRV records in the DNS. Or there is Mailcow, a Docker-based mailserver setup that added an XMPP server as well, so you can have the same mail and XMPP address. Snikket even got EU-based funding for implementing XMPP account portability, which will improve decentralization even further. Additionally, XMPP helps with vaccination in Canada and the USA via vaxbot by Monal. Be smart and use eco-friendly infrastructure.
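For reference, the kind of quick metrics mentioned above (database sizes, webserver log lines, per-VM CPU time) can be gathered with simple one-liners like the following; the paths and log file names are assumptions for illustration, not taken from the author's setup:
   # Rough on-disk size of each service's database directory (paths are placeholders):
   $ du -sh /var/lib/mysql/friendica /var/lib/postgresql/11/main
   # Requests per web application, approximated by log lines per vhost logfile:
   $ wc -l /var/log/nginx/nerdica.net.access.log /var/log/nginx/nerdculture.de.access.log
   # Accumulated CPU time per Xen domain, in batch mode:
   $ xentop -b -i 1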

19 April 2021

Ritesh Raj Sarraf: Catching Up Your Sources

I've mostly preferred controlling my data rather than depending on someone else. That's one reason why I still believe email to be my most reliable medium for data storage, one that is not plagued/locked by a single entity. If I had the resources, I'd prefer all digital data to be broken down to its simplest form for storage, like the email format, and to empower the user with it, i.e. their data. Yes, there are free services that are indirectly forced upon common users, and many of us get attracted to them. Many of us do not think that the information, which is shared in return for the free service, is of much importance. Which may be fair, depending on the individual, given that they get certain services without paying any direct dime.

New age communication
So first, we had email and Usenet. As I mentioned above, email was designed with fine intentions, intentions that make it stand even today, independently. But not everything back then was that great either. For example, instant messaging was very closed and centralised then too. Things like ICQ, MSN, Yahoo Messenger; all were centralized. I wonder if people still have access to their ICQ logs. Not much has changed in the current day either. We now have domination by Facebook Messenger, Google (whatever the new marketing term they introduce), WhatsApp, Telegram, Signal etc. To my knowledge, they are all centralized. Over all this time, I'm yet to see a product come up with good (business) intentions, to really empower the end user. In this information age, the most invaluable data is user activity. That's the one piece of data everyone is after. If you decline to share that bit of free data in exchange for the free services, mind you, none of those free services (Facebook, Google, Instagram, WhatsApp, Truecaller, Twitter) would come to you at all. Try it out. So the reality is that while you may not be valuing the data you offer in exchange correctly, there's a lot that is reaped from it. But still, I think each user has (and should have) the freedom to opt in for these tech giants and give them their personal bit, for free services in return. That is a decent barter deal. And it is a choice that one is free to choose.

Retaining my data
I'm fond of keeping an archive folder in my mailbox, a folder that holds significant events, usually in the form of an email, if documented. Over the years, I chose to resort to the email format because I felt it was more reliable in the longer term than any other format. The next best would be plain text. In my lifetime, I have learnt a lot from the internet; so it is natural that my preference has been with it. Mailing lists, IRCs, HOWTOs, guides, blog posts; all have helped. And over the years, I've come across hundreds of such pieces of content that I'd always like to preserve. Now there are multiple ways of preserving data. Like, for example, the big tech giants. In most usual cases, your data should be fine with a tech giant for your lifetime. In some odd scenarios, you may be unlucky if you relied on a service provider that went bankrupt. But seriously, I think users should be fine if they host their data with Microsoft, Google etc., as long as they abide by their policies. There's also the catch of alignment. As the user, you should ensure you align (and transition) with the product offerings of your service provider. Otherwise, what may look constant and always reliable will vanish in the blink of an eye. I guess Google Plus would be a good example. There was some Google Feed service too. Maybe Google Photos in the coming decade, just like Google Picasa in the previous (or current) one.

History what is
On the topic of retaining information, let's take a small drift. I still admire our ancestors. I don't know what went through their minds when they were documenting events in the form of scriptures, carvings, temples, churches, mosques etc., but one thing's for sure: they were able to leave a fine means of communication. They are all gone, but a large number of those events are evident through the creations that they left. Some of those events have been strong enough that later rulers/invaders have had a tough time trying to wipe them out of history. Remember, history is usually not the truth, but the statement to be believed, as told by the teller. And the teller is usually the survivor, or the winner, you may say. But still, the information retention techniques were better. I haven't visited, but I admire whosoever built the Kailasa Temple, Ellora, without which we'd be made to believe who knows what by all the invaders and rulers of the region. The majestic standing of the temple is a fine example of the history and the events that have occurred in the past.
Ellora Temple - The majestic carving, believed to be carved out of a single stone
Dominance has the power to rewrite history, and unfortunately that's true and it has done its part. It is just that, within a mere human's lifetime, it is not possible to witness the transition from current to history and say: I was there then and I'm here now, and this is not the reality. And if not dominance, there's always the other bit, hearsay. With it, you can always put anything up for dispute, because there's no way one can go back in time and produce firm evidence. There's also a part about religion. Religion can be highly sentimental, and religion can be a solid way to get an agenda going. For example, in India, a country which today is constitutionally a secular country, there have been multiple attempts to discard the belief, claiming that the thing called the Ramayana never existed; that the Rama Setu, nicely reworded as Adam's Bridge by whosoever, is a mere result of science. Now Rama, or Hanumana, or Ravana, or Valmiki aren't going to come over and prove whether that is true or false. So such subjects serve as a solid base to get an agenda going. And probably we've even succeeded in proving and believing that there was never an event like the Ramayana or the Mahabharata, nor was there ever any empire other than the Moghul or the British Empire. But yes, whosoever made the Ellora Temple, or the many, many more such creations, did a fine job of making a dent for the future, to let it know what history could also possibly be.

Enough of the drift
So, in my opinion, having events documented is important. It'd be nice to have skills documented too, so that they can be passed over generations, but that's a debatable topic. But events, I believe, should be documented, and documented in the best possible ways so that their existence is not diminished. Documentation in the form of carvings on a rock is far better than links and posts shared on Facebook, Twitter, Reddit etc. For one, these are all corporate entities with vested interests and can make excuses in the name of compliance and conformance. So, for the helpless state and generation I am in, I felt email was the best possible independent form of data retention in today's age. If I really had the resources, I'd not rely on the digital age. This age has no guarantee of retaining and recording information in any reliable manner. Instead, it is mostly just junk, which is manipulative and conditionally changeable.

Email and RSS
So for my communication, I prefer emails over any other means. That doesn't mean I don't use the current trends; I do. But this blog is mostly about penning my desires, and the desire is to have communication in the email format. Such is the case that, for information I find useful on the internet, I crave to have it formatted as email for archival. RSS feeds are my most common mode of keeping track of information I care about. Not everything I care for is available as an RSS feed, but hey, such is life, and adaptability is okay. But my preference is still RSS. So I consume RSS feeds through a fine piece of software called feed2imap, a tool that fits my bill fairly well. feed2imap is:
  • An RSS feed news aggregator
  • Pulls and extracts news feeds in the form of an email
  • Can push the converted email over POP/IMAP
  • Can convert all image content to email MIME attachments
In a gist, it makes online content available to me offline in the most admired email format, in my mailbox. These days, my preferred email client is Evolution. It does a good job of dealing with such emails (RSS feed items). An image example of accessing an RSS feed item through it is below, and a minimal configuration sketch follows at the end of this post.
RSS News Item through Evolution
The good part is that my actual data is always independent of such MUAs. Tomorrow, as technology, trends and economics evolve, something new will come along as a replacement, but my data will still be mine.
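As a rough illustration of what such a setup looks like (not taken from Ritesh's actual configuration), feed2imap reads a small YAML file, conventionally ~/.feed2imaprc; the feed name, URL, server and the include-images option below are placeholders and assumptions, so check the feed2imap documentation for the exact keys your version supports:
   # Sketch only: deliver one feed into an IMAP folder as email.
   include-images: true
   feeds:
     - name: example-blog
       url: https://example.com/feed.xml
       target: imap://user%40example.com:PASSWORD@imap.example.com/INBOX.Feeds.example-blog
Running feed2imap against such a file (for example from cron) keeps dropping new feed items into that folder, where any IMAP client such as Evolution can read them:
   $ feed2imap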

11 April 2021

Vishal Gupta: Sikkim 101 for Backpackers

Host to Kanchenjunga, the world's third-highest mountain peak, and the endangered Red Panda, Sikkim is a state in northeastern India. Nestled between Nepal, Tibet (China), Bhutan and West Bengal (India), the state offers a smorgasbord of cultures and cuisines. It's hardly surprising, then, that the old spice route meanders through western Sikkim, connecting Lhasa with the ports of Bengal, although the latter could also be attributed to cardamom (kali elaichi), a perennial herb native to Sikkim, of which the state is the second-largest producer globally. Lastly, having been to and lived in India all my life, I can confidently say Sikkim is one of the cleanest and safest regions in India, making it ideal for first-time backpackers.

Brief History
  • 17th century: The Kingdom of Sikkim is founded by the Namgyal dynasty and ruled by Buddhist priest-kings known as the Chogyal.
  • 1890: Sikkim becomes a princely state of British India.
  • 1947: Sikkim continues its protectorate status with the Union of India, post-Indian-independence.
  • 1973: Anti-royalist riots take place in front of the Chogyal's palace, by Nepalis seeking greater representation.
  • 1975: Referendum leads to the deposition of the monarchy and Sikkim joins India as its 22nd state.
Languages
  • Official: English, Nepali, Sikkimese/Bhotia and Lepcha
  • Though Hindi and Nepali share the same script (Devanagari), they are not mutually intelligible. Yet, most people in Sikkim can understand and speak Hindi.
Ethnicity
  • Nepalis: Migrated in large numbers (from Nepal) and soon became the dominant community
  • Bhutias: People of Tibetan origin. Major inhabitants in Northern Sikkim.
  • Lepchas: Original inhabitants of Sikkim

Food
  • Tibetan/Nepali dishes (mostly consumed during winter)
    • Thukpa: Noodle soup, rich in spices and vegetables. Usually contains some form of meat. Common variations: Thenthuk and Gyathuk
    • Momos: Steamed or fried dumplings, usually with a meat filling.
    • Saadheko: Spicy marinated chicken salad.
    • Gundruk Soup: A soup made from Gundruk, a fermented leafy green vegetable.
    • Sinki: A fermented radish tap-root product, traditionally consumed as a base for soup and as a pickle. Eerily similar to kimchi.
  • While pork and beef are pretty common, finding vegetarian dishes is equally easy.
  • Staple: Dal-Bhat with Subzi. Rice is a lot more common than wheat, possibly due to greater carb content and proximity to West Bengal, India's largest producer of rice.
  • Good places to eat in Gangtok
    • Hamro Bhansa Ghar, Nimtho (Nepali)
    • Taste of Tibet
    • Dragon Wok (Chinese & Japanese)

Buddhism in Sikkim
  • Bayul Demojong (Sikkim) is the most sacred land in the Himalayas as per the belief of the Northern Buddhists and various religious texts.
  • Sikkim was blessed by Guru Padmasambhava, the great Buddhist saint who visited Sikkim in the 8th century and consecrated the land.
  • However, Buddhism is said to have reached Sikkim only in the 17th century with the arrival of three Tibetan monks viz. Rigdzin Goedki Demthruchen, Mon Kathok Sonam Gyaltshen & Rigdzin Legden Je at Yuksom. Together, they established a Buddhist monastery.
  • In 1642 they crowned Phuntsog Namgyal as the first monarch of Sikkim and gave him the title of Chogyal, or Dharma Raja.
  • The faith became popular through its royal patronage and soon many villages had their own monastery.
  • Today Sikkim has over 200 monasteries.

Major monasteries
  • Rumtek Monastery, 20Km from Gangtok
  • Lingdum/Ranka Monastery, 17Km from Gangtok
  • Phodong Monastery, 28Km from Gangtok
  • Ralang Monastery, 10Km from Ravangla
  • Tsuklakhang Monastery, Royal Palace, Gangtok
  • Enchey Monastery, Gangtok
  • Tashiding Monastery, 35Km from Ravangla


Reaching Sikkim
  • Gangtok, being the capital, is the easiest region to reach by public transport and shared cabs.
  • By Air:
    • Pakyong (PYG) :
      • Nearest airport from Gangtok (about 1 hour away)
      • Tabletop airport
      • Reserved cabs cost around INR 1200.
      • As of Apr 2021, the only flights to PYG are from IGI (Delhi) and CCU (Kolkata).
    • Bagdogra (IXB) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Larger airport with flights to most major Indian cities.
      • Reserved cabs cost about INR 3000. Shared cabs cost about INR 350.
  • By Train:
    • New Jalpaiguri (NJP) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Reserved cabs cost about INR 3000. Shared cabs from INR 350.
  • By Road:
    • NH10 connects Siliguri to Gangtok
    • If you can t find buses plying to Gangtok directly, reach Siliguri and then take a cab to Gangtok.
  • Sikkim Nationalised Transport Div. also runs hourly buses between Siliguri and Gangtok and daily buses on other common routes. They're cheaper than shared cabs.
  • Wizzride also operates shared cabs between Siliguri/Bagdogra/NJP, Gangtok and Darjeeling. They cost about the same as shared cabs but pack in half as many people in luxury cars (Innova, Xylo, etc.) and are hence more comfortable.

Gangtok
  • Time needed: 1D/1N
  • Places to visit:
    • Hanuman Tok
    • Ganesh Tok
    • Tashi View Point [6,800ft]
    • MG Marg
    • Sikkim Zoo
    • Gangtok Ropeway
    • Enchey Monastery
    • Tsuklakhang Palace & Monastery
  • Hostels: Tagalong Backpackers (would strongly recommend), Zostel Gangtok
  • Places to chill: Travel Cafe, Café Live & Loud and Gangtok Groove
  • Places to shop: Lal Market and MG Marg

Getting Around
  • Taxis operate on a reserved or shared basis. In case of the latter, you pool with other commuters, whom your taxi will pick up and drop en route.
  • Naturally shared taxis only operate on popular routes. The easiest way to get around Gangtok is to catch a shared cab from MG Marg.
  • Reserved taxis for Gangtok sightseeing cost around INR 1000-1500, depending upon the spots you'd like to see.
  • Key taxi/bus stands :
    • Deorali stand: For Darjeeling, Siliguri, Kalimpong
    • Vajra stand: For North & East Sikkim (Tsomgo Lake & Nathula)
    • Rumtek taxi: For Ravangla, Pelling, Namchi, Geyzing, Jorethang and Singtam.
Exploring Gangtok on an MTB

North Sikkim
  • The easiest & most economical way to explore North Sikkim is the 3D/2N package offered by shared-cab drivers.
  • This includes food, permits, cab rides and accommodation (1N in Lachen and 1N in Lachung)
  • The accommodation on both nights is at homestays with bare necessities, so keep your expectations low.
  • In the spirit of sustainable tourism, you'll be asked to discard single-use plastic bottles, so please carry a bottle that you can refill along the way.
  • Zero Point and Gurdongmer Lake are snow-capped throughout the year
3D/2N Shared-cab Package Itinerary
  • Day 1
    • Gangtok (10am) - Chungthang - Lachung (stay)
  • Day 2
    • Pre-lunch: Lachung (6am) - Yumthang Valley [12,139ft] - Zero Point [15,300ft] - Lachung
    • Post-lunch: Lachung - Chungthang - Lachen (stay)
  • Day 3
    • Pre-lunch: Lachen (5am) - Kala Patthar - Gurdongmer Lake [16,910ft] - Lachen
    • Post-lunch: Lachen - Chungthang - Gangtok (7pm)
  • This itinerary is idealized and depends on the level of snowfall.
  • Some drivers might switch up Day 2 and 3 itineraries by visiting Lachen and then Lachung, depending upon the weather.
  • Areas beyond Lachen & Lachung are heavily militarized since the Indo-China border is only a few miles away.

East Sikkim

Zuluk and Silk Route
  • Time needed: 2D/1N
  • Zuluk [9,400ft] is a small hamlet with an excellent view of the eastern Himalayan range including the Kanchenjunga.
  • Was once a transit point to the historic Silk Route from Tibet (Lhasa) to India (West Bengal).
  • The drive from Gangtok to Zuluk takes at least four hours. Hence, it makes sense to spend the night at a homestay and space out your trip to Zuluk

Tsomgo Lake and Nathula
  • Time Needed : 1D
  • A Protected Area Permit is required to visit these places, due to their proximity to the Chinese border
  • Tsomgo/Chhangu Lake [12,313ft]
    • Glacial lake, 40 km from Gangtok.
    • Remains frozen during the winter season.
    • You can also ride on the back of a Yak for INR 300
  • Baba Mandir
    • An old temple dedicated to Baba Harbhajan Singh, a Sepoy in the 23rd Regiment, who died in 1962 near the Nathu La during the Indo-China war.
  • Nathula Pass [14,450ft]
    • Located on the Indo-Tibetan border crossing of the Old Silk Route, it is one of the three open trading posts between India and China.
    • Plays a key role in the Sino-Indian Trade and also serves as an official Border Personnel Meeting(BPM) Point.
    • May get cordoned off by the Indian Army in event of heavy snowfall or for other security reasons.


West Sikkim
  • Time needed: 3D/2N
  • Hostels at Pelling: Mochilerro Ostillo

Itinerary

Day 1: Gangtok - Ravangla - Pelling
  • Leave Gangtok early for Ravangla, via the Temi Tea Estate route.
  • Spend some time at the tea garden and then visit Buddha Park at Ravangla
  • Head to Pelling from Ravangla

Day 2: Pelling sightseeing
  • Hire a cab and visit Skywalk, Pemayangtse Monastery, Rabdentse Ruins, Kecheopalri Lake, Kanchenjunga Falls.

Day 3: Pelling - Gangtok/Siliguri
  • Wake up early to catch a glimpse of Kanchenjunga at the Pelling Helipad around sunrise
  • Head back to Gangtok on a shared-cab
  • You could take a bus/taxi back to Siliguri if Pelling is your last stop.

Darjeeling
  • In my opinion, Darjeeling is lovely for a two-day detour on your way back to Bagdogra/Siliguri and not any longer (unless you're a Bengali couple on a honeymoon).
  • Once a part of Sikkim, Darjeeling was ceded to the East India Company after a series of wars, with Sikkim briefly receiving a grant from the EIC for gifting Darjeeling to the latter.
  • Post-independence, Darjeeling was merged with the state of West Bengal.

Itinerary

Day 1 :
  • Take a cab from Gangtok to Darjeeling (shared-cabs cost INR 300 per seat)
  • Reach Darjeeling by noon and check in to your Hostel. I stayed at Hideout.
  • Spend the evening visiting a monastery (or the Batasia Loop), Nehru Road and Mall Road.
  • Grab dinner at Glenary whilst listening to live music.

Day 2:
  • Wake up early to catch the sunrise and a glimpse of Kanchenjunga at Tiger Hill. Since Tiger Hill is 10km from Darjeeling and requires a permit, book your taxi in advance.
  • Alternatively, if you don't want to get up at 4am or shell out INR 1500 on the cab to Tiger Hill, walk to the Kanchenjunga View Point down Mall Road.
  • Next, queue up outside Keventers for breakfast with a view in a century-old cafe
  • Get a cab at Gandhi Road and visit a tea garden (Happy Valley is the closest) and the Ropeway. I was lucky to meet 6 other backpackers at my hostel and we ended up pooling the cab at INR 200 per person; the INR 1400 total was on the expensive side, but you could bargain.
  • Get lunch, buy some tea at Golden Tips, pack your bags and hop on a shared-cab back to Siliguri. It took us about 4hrs to reach Siliguri, with an hour to spare before my train.
  • If you've still got time on your hands, then check out the Peace Pagoda and the Darjeeling Himalayan Railway (Toy Train). At INR 1500, I found the latter to be too expensive and skipped it.


Tips and hacks
  • Download offline maps, especially when you're exploring Northern Sikkim.
  • Food and booze are the cheapest in Gangtok. Stash up before heading to other regions.
  • Keep your Aadhar/Passport handy since you need permits to travel to North & East Sikkim.
  • In rural areas and some cafes, you may get to try Rhododendron Wine, made from Rhododendron arboreum a.k.a Gurans. Its production is a little hush-hush since the flower is considered holy and is also the National Flower of Nepal.
  • If you don't want to invest in a new jacket, boots or a pair of gloves, you can always rent them at nominal rates from your hotel or little stores around tourist sites.
  • Check the weather of a region before heading there. Low visibility and precipitation can quite literally dampen your experience.
  • Keep your itinerary flexible to accommodate for rest and impromptu plans.
  • Shops and restaurants close by 8pm in Sikkim and Darjeeling. Plan for the same.

Carry
  • a couple of extra pairs of socks (woollen, if possible)
  • a pair of slippers to wear indoors
  • a reusable water bottle
  • an umbrella
  • a power bank
  • a couple of tablets of Diamox. Helps deal with altitude sickness
  • extra clothes and wet bags since you may not get a chance to wash/dry your clothes
  • a few passport size photographs

Shared-cab hacks
  • Intercity rides can be exhausting. If you can afford it, pay for an additional seat.
  • Call shotgun on the drives beyond Lachen and Lachung. The views are breathtaking.
  • Return cabs tend to be cheaper (West Bengal-registered cabs travelling back from Sikkim, and vice versa).

Cost
  • My median daily expenditure (back when I went to Sikkim in early March 2021) was INR 1350.
  • This includes stay (bunk bed), food, wine and transit (shared cabs)
  • In my defence, I splurged on food, wine and extra seats in shared cabs, but if you're on a budget, you could easily get by on INR 1 - 1.2k per day.
  • For a 9-day trip, I ended up shelling out nearly INR 15k, including 2AC trains to & from Kolkata
  • Note : Summer (March to May) and Autumn (October to December) are peak seasons, and thereby more expensive to travel around.

Souvenirs and things you should buy

Buddhist souvenirs :
  • Colourful Prayer Flags (great for tying on bikes or behind car windshields)
  • Miniature Prayer/Mani Wheels
  • Lucky Charms, Pendants and Key Chains
  • Cham Dance masks and robes
  • Singing Bowls
  • Common symbols: Om mani padme hum, Ashtamangala, Zodiac signs

Handicrafts & Handlooms
  • Tibetan Yak Wool shawls, scarfs and carpets
  • Sikkimese Ceramic cups
  • Thangka Paintings

Edibles
  • Darjeeling Tea (usually brewed and not boiled)
  • Wine (Arucha Peach & Rhododendron)
  • Dalle Khursani (Chilli) Paste and Pickle

Header Icon made by Freepik from www.flaticon.com is licensed by CC 3.0 BY
