Search Results: "lex"

21 July 2024

Mike Gabriel: Polis - a FLOSS Tool for Civic Participation -- Issues extending Polis and adjusting our Goals

Here comes the 3rd article of the 5-episode blog post series on Polis, written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH. Enjoy also this read on Guido's work on Polis,
Mike
Table of Contents of the Blog Post Series
  1. Introduction
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals (this article)
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap
Polis - Issues extending Polis and adjusting our Goals

After the initial implementation of limited branding support, user feedback and the involvement of a UX designer led to the conclusion that we needed more far-reaching changes to the user interface in order to reduce visual clutter, rearrange and improve UI elements, and provide better integration with the websites in which conversations are embedded.

Challenges when visualizing Data in Polis

Polis visualizes groups using a spatial projection of users based on similarities in voting behavior and places them in two to five groups using a clustering algorithm. During our testing and evaluation, users were rarely able to interpret the visualization and often intuitively made incorrect assumptions, e.g. by associating the filled area of a group with its significance or size. After consultation with a member of the Multi-Agent Systems (MAS) Group at the University of Groningen we chose to temporarily replace the visualization offered by Polis with simple bar charts representing agreement or disagreement with statements of a group or the majority. We intend to revisit this and explore different forms of visualization at a later point in time. The different factors playing into the weight attached to statements, which determines the pseudo-random order in which they are presented for voting ("comment routing"), proved difficult to explain to stakeholders and users, and the admission of the ad-hoc and heuristic nature of the algorithm used [1] by the Polis authors led to the decision to temporarily remove this feature. Instead, statements should be placed into three groups, namely
  1. metadata questions,
  2. seed statements,
  3. and participant statements
Statements should then be sorted by group but in a fully randomized order within each group, so that metadata questions would be presented before seed statements, which would be presented before participants' statements (a small illustrative sketch follows below). This simpler method was deemed sufficient for the scale of our pilot projects; however, we intend to revisit this decision and explore different methods of comment routing in cooperation with our scientific partners at a later point in time. An evaluation of the requirements for implementing mandatory authentication and adding support for additional authentication methods to Polis showed that significant changes to both the administration and participation frontend were needed due to the lack of an abstraction layer or extension mechanism and the current authentication providers being hardcoded in many parts of the code base.

A New Frontend is born: Particiapp

Based on the implementation details of the participation frontend, the invasive nature of the changes required, and the overhead of keeping up with active upstream development it became clear that a different, more flexible approach to development was needed. This ultimately led to the creation of Particiapp, a new Open Source project providing the building blocks and necessary abstraction layers for rapid prototyping and experimentation with different frontends which are compatible with but independent from Polis.
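As a rough illustration of the interim ordering described above, here is a minimal Python sketch (illustrative only, not actual Polis or Particiapp code; the statement records and group tags are assumptions):

import random

# Hypothetical statement records; in Polis these would come from the backend.
statements = [
    {"text": "Which region are you from?", "group": "metadata"},
    {"text": "The town square needs more greenery.", "group": "seed"},
    {"text": "Bike lanes should be better lit.", "group": "participant"},
]

GROUP_RANK = {"metadata": 0, "seed": 1, "participant": 2}

def order_statements(statements):
    # Shuffle first, then stable-sort by group rank: metadata questions
    # come before seed statements, which come before participant
    # statements, while order within each group stays fully randomized.
    shuffled = random.sample(statements, k=len(statements))
    return sorted(shuffled, key=lambda s: GROUP_RANK[s["group"]])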
  1. Small, Christopher T., Bjorkegren, Michael, Erkkilä, Timo, Shaw, Lynette and Megill, Colin (2021). Polis: Scaling deliberation by mapping high dimensional opinion spaces. Recerca. Revista de Pensament i Anàlisi, 26(2), pp. 1-26.

18 July 2024

Enrico Zini: meson, includedir, and current directory

Suppose you have a meson project like this: meson.build:
project('example', 'cpp', version: '1.0', license : ' ', default_options: ['warning_level=everything', 'cpp_std=c++17'])
subdir('example')
example/meson.build:
test_example = executable('example-test', ['main.cc'])
example/string.h:
/* This file intentionally left empty */
example/main.cc:
#include <cstring>
int main(int argc, const char* argv[])
{
    std::string foo("foo");
    return 0;
}
This builds fine with autotools and cmake, but not meson:
$ meson setup builddir
The Meson build system
Version: 1.0.1
Source dir: /home/enrico/dev/deb/wobble-repr
Build dir: /home/enrico/dev/deb/wobble-repr/builddir
Build type: native build
Project name: example
Project version: 1.0
C++ compiler for the host machine: ccache c++ (gcc 12.2.0 "c++ (Debian 12.2.0-14) 12.2.0")
C++ linker for the host machine: c++ ld.bfd 2.40
Host machine cpu family: x86_64
Host machine cpu: x86_64
Build targets in project: 1
Found ninja-1.11.1 at /usr/bin/ninja
$ ninja -C builddir
ninja: Entering directory `builddir'
[1/2] Compiling C++ object example/example-test.p/main.cc.o
FAILED: example/example-test.p/main.cc.o
ccache c++ -Iexample/example-test.p -Iexample -I../example -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Wpedantic -Wcast-qual -Wconversion -Wfloat-equal -Wformat=2 -Winline -Wmissing-declarations -Wredundant-decls -Wshadow -Wundef -Wuninitialized -Wwrite-strings -Wdisabled-optimization -Wpacked -Wpadded -Wmultichar -Wswitch-default -Wswitch-enum -Wunused-macros -Wmissing-include-dirs -Wunsafe-loop-optimizations -Wstack-protector -Wstrict-overflow=5 -Warray-bounds=2 -Wlogical-op -Wstrict-aliasing=3 -Wvla -Wdouble-promotion -Wsuggest-attribute=const -Wsuggest-attribute=noreturn -Wsuggest-attribute=pure -Wtrampolines -Wvector-operation-performance -Wsuggest-attribute=format -Wdate-time -Wformat-signedness -Wnormalized=nfc -Wduplicated-cond -Wnull-dereference -Wshift-negative-value -Wshift-overflow=2 -Wunused-const-variable=2 -Walloca -Walloc-zero -Wformat-overflow=2 -Wformat-truncation=2 -Wstringop-overflow=3 -Wduplicated-branches -Wattribute-alias=2 -Wcast-align=strict -Wsuggest-attribute=cold -Wsuggest-attribute=malloc -Wanalyzer-too-complex -Warith-conversion -Wbidi-chars=ucn -Wopenacc-parallelism -Wtrivial-auto-var-init -Wctor-dtor-privacy -Weffc++ -Wnon-virtual-dtor -Wold-style-cast -Woverloaded-virtual -Wsign-promo -Wstrict-null-sentinel -Wnoexcept -Wzero-as-null-pointer-constant -Wabi-tag -Wuseless-cast -Wconditionally-supported -Wsuggest-final-methods -Wsuggest-final-types -Wsuggest-override -Wmultiple-inheritance -Wplacement-new=2 -Wvirtual-inheritance -Waligned-new=all -Wnoexcept-type -Wregister -Wcatch-value=3 -Wextra-semi -Wdeprecated-copy-dtor -Wredundant-move -Wcomma-subscript -Wmismatched-tags -Wredundant-tags -Wvolatile -Wdeprecated-enum-enum-conversion -Wdeprecated-enum-float-conversion -Winvalid-imported-macros -std=c++17 -O0 -g -MD -MQ example/example-test.p/main.cc.o -MF example/example-test.p/main.cc.o.d -o example/example-test.p/main.cc.o -c ../example/main.cc
In file included from ../example/main.cc:1:
/usr/include/c++/12/cstring:77:11: error: ‘memchr’ has not been declared in ‘::’
   77 |   using ::memchr;
      |           ^~~~~~
/usr/include/c++/12/cstring:78:11: error: ‘memcmp’ has not been declared in ‘::’
   78 |   using ::memcmp;
      |           ^~~~~~
/usr/include/c++/12/cstring:79:11: error: ‘memcpy’ has not been declared in ‘::’
   79 |   using ::memcpy;
      |           ^~~~~~
/usr/include/c++/12/cstring:80:11: error: ‘memmove’ has not been declared in ‘::’
   80 |   using ::memmove;
      |           ^~~~~~~
It turns out that meson adds the current directory to the include path by default:
Another thing to note is that include_directories adds both the source directory and corresponding build directory to include path, so you don't have to care.
It seems that I have to care after all. Thankfully there is an implicit_include_directories setting that can turn this off if needed. Its documentation is not as easy to find as I'd like (kudos to Kangie on IRC), and hopefully this blog post will make it easier for me to find it in the future.
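For reference, a minimal sketch of how that setting could be applied to the build target in example/meson.build (assuming no other changes to the project):

test_example = executable('example-test', ['main.cc'],
    implicit_include_directories: false)

With this, meson no longer adds the target's own source directory to the include path, so the stray example/string.h stops shadowing the system header.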

17 July 2024

Gunnar Wolf: Script for weather reporting in Waybar

While I was living in Argentina, we (my family) found ourselves checking weather forecasts almost constantly; weather there can be quite unexpected, much more so than here in Mexico. So it took me a bit of tinkering to come up with a couple of simple scripts to show the weather forecast as part of my Waybar setup. I haven't cared to share them with anybody, as I believe them to be quite trivial and quite dirty. But today, Víctor was asking for some slightly-related things, so here I go. Please do remember I warned: Dirty.

Forecast

I am using OpenWeather's open API. I had to register to get an APPID, and it allows me up to 1,000 API calls per day, more than plenty for my uses, even if I am logged in at my desktops at three different computers (not an uncommon situation). Having that, I set up a file named /etc/get_weather, that currently reads:
# Home, Mexico City
LAT=19.3364
LONG=-99.1819
# # Home, Paraná, Argentina
# LAT=-31.7208
# LONG=-60.5317
# # PKNU, Busan, South Korea
# LAT=35.1339
# LONG=129.1055
APPID=SomeLongRandomStringIAmNotSharing
Then, I have a simple script, /usr/local/bin/get_weather, that fetches the current weather and the forecast, and stores them as /run/weather.json and /run/forecast.json:
#!/usr/bin/bash
CONF_FILE=/etc/get_weather
if [ -e "$CONF_FILE" ]; then
    . "$CONF_FILE"
else
    echo "Configuration file $CONF_FILE not found"
    exit 1
fi
if [ -z "$LAT" -o -z "$LONG" -o -z "$APPID" ]; then
    echo "Configuration file must declare latitude (LAT), longitude (LONG) "
    echo "and app ID (APPID)."
    exit 1
fi
CURRENT=/run/weather.json
FORECAST=/run/forecast.json
wget -q "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${CURRENT}"
wget -q "https://api.openweathermap.org/data/2.5/forecast?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${FORECAST}"
This script is called by the corresponding systemd service unit, found at /etc/systemd/system/get_weather.service:
[Unit]
Description=Get the current weather
[Service]
Type=oneshot
ExecStart=/usr/local/bin/get_weather
And it is run every 15 minutes via the following systemd timer unit, /etc/systemd/system/get_weather.timer:
[Unit]
Description=Get the current weather every 15 minutes
[Timer]
OnCalendar=*:00/15:00
Unit=get_weather.service
[Install]
WantedBy=multi-user.target
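As an aside (these commands are not part of the original post, just the usual way to activate such units):

sudo systemctl daemon-reload
sudo systemctl enable --now get_weather.timer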
(yes, it runs even if I'm not logged in, wasting some of my free API calls but within reason) Then, I declare a "custom/weather" module in the desired position of my ~/.config/waybar/waybar.config, and define it as:
"custom/weather":  
    "exec": "while true;do /home/gwolf/bin/parse_weather.rb;sleep 10; done",
"return-type": "json",
 ,
This script basically morphs a generic weather JSON description into another set of JSON bits that display my weather in the way I prefer to have it displayed as:
#!/usr/bin/ruby
require 'json'
Sources = { :weather => '/run/weather.json',
            :forecast => '/run/forecast.json'
          }
Icons = { '01d' => ' ', # d => day
          '01n' => ' ', # n => night
          '02d' => ' ',
          '02n' => ' ',
          '03d' => ' ',
          '03n' => ' ',
          '04d' => ' ',
          '04n' => ' ',
          '09d' => ' ',
          '10n' => ' ',
          '10d' => ' ',
          '13d' => ' ',
          '50d' => ' '
        }
ret = { 'text': nil, 'tooltip': nil, 'class': 'weather', 'percentage': 100 }
# Current weather report: Main text of the module
begin
  weather = JSON.parse(open(Sources[:weather],'r').read)
  loc_name = weather['name']
  icon = Icons[weather['weather'][0]['icon']] || ' ' + weather['weather'][0]['icon'] + weather['weather'][0]['main']
  temp = weather['main']['temp']
  sens = weather['main']['feels_like']
  hum = weather['main']['humidity']
  wind_vel = weather['wind']['speed']
  wind_dir = weather['wind']['deg']
  portions = {}
  portions[:loc] = loc_name
  portions[:temp] = '%s  %2.2f°C (%2.2f)' % [icon, temp, sens]
  portions[:hum] = '  %2d%%' % hum
  portions[:wind] = ' %2.2fm/s %d°' % [wind_vel, wind_dir]
  ret['text'] = [:loc, :temp, :hum, :wind].map { |p| portions[p] }.join(' ')
rescue => err
  ret['text'] = 'Could not process weather file (%s - %s: %s)' % [Sources[:weather], err.class, err.to_s]
end
# Weather prevision for the following hours/days
begin
  cast = []
  forecast = JSON.parse(open(Sources[:forecast], 'r').read)
  min = ''
  max = ''
  day=Time.now.strftime('%Y.%m.%d')
  by_day = {}
  forecast['list'].each_with_index do |f, i|
    by_day[day] ||= []
    time = Time.at(f['dt'])
    time_lbl = '%02d:%02d' % [time.hour, time.min]
    icon = Icons[f['weather'][0]['icon']] || ' ' + f['weather'][0]['icon'] + f['weather'][0]['main']
    by_day[day] << f['main']['temp']
    if time.hour == 0
      min = '%2.2f' % by_day[day].min
      max = '%2.2f' % by_day[day].max
      cast << '          min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
      day = time.strftime('%Y.%m.%d')
      cast << '        <b>%04d.%02d.%02d</b>  ' %
              [time.year, time.month, time.day]
    end
    cast << '%s   %2.2f°C    %2d%%   %s %s' % [time_lbl,
                                                f['main']['temp'],
                                                f['main']['humidity'],
                                                icon,
                                                f['weather'][0]['description']
                                               ]
  end
  cast << '          min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
  ret['tooltip'] = cast.join("\n")
  
rescue => err
  ret['tooltip'] = 'Could not process forecast file (%s - %s: %s)' % [Sources[:forecast], err.class, err.to_s]
end
# Print out the result for Waybar to process
puts ret.to_json
The end result? Nothing too stunning, but definitely something I find useful and even nicely laid out (screenshot). Do note that OpenWeather seems to return the name of the closest available meteorology station with (most?) recent data; for my home, I often get Ciudad Universitaria, but sometimes Coyoacán or even San Ángel Inn.

12 July 2024

Russ Allbery: Review: The Splinter in the Sky

Review: The Splinter in the Sky, by Kemi Ashing-Giwa
Publisher: Saga Press
Copyright: July 2023
ISBN: 1-6680-0849-1
Format: Kindle
Pages: 372
The Splinter in the Sky is a stand-alone science fiction political thriller. It is Kemi Ashing-Giwa's first novel. Enitan is from Koriko, a vegetation-heavy moon colonized by the Vaalbaran empire. She lives in the Ijebu community with her sibling Xiang and has an on-again, off-again relationship with Ajana, the Vaalbaran-appointed governor. Xiang is studying to be an architect, which requires passing stringent entrance exams to be allowed to attend an ancillary imperial school intended for "primitives." Enitan works as a scribe and translator, one of the few Korikese allowed to use the sacred Orin language of Vaalbara. In her free time, she grows and processes tea. When Xiang mysteriously disappears while she's at work, Enitan goes to Ajana for help. Then Ajana dies, supposedly from suicide. The Vaalbaran government demands a local hostage while the death is investigated, someone who will be held as a diplomatic "guest" on the home world and executed if there is any local unrest. This hostage is supposed to be the child of the local headwoman, a concept that the Korikese do not have. Seeing a chance to search for Xiang, Enitan volunteers, heading into the heart of imperial power with nothing but desperate determination and a tea set. The empire doesn't stand a chance. Admittedly, a lot of the reason why the empire doesn't stand a chance is because the author is thoroughly on Enitan's side. Before she even arrives on Gondwana, Vaalbara's home world, Enitan is recruited as a spy by the other Gondwana power and Vaalbara's long-standing enemy. Her arrival in the Splinter, the floating arcology that serves as the center of Vaalbaran government, is followed by a startlingly meteoric rise in access. Some of this is explained by being a cultural curiosity for bored nobles, and some is explained by political factors Enitan is not yet aware of, but one can see the author's thumb resting on the scales. This was the sort of book that was great fun to read, but whose political implausibility provoked "wait, that didn't make sense" thoughts afterwards. I think one has to assume that the total population of Vaalbara is much less than first comes to mind when considering an interplanetary empire, which would help explain the odd lack of bureaucracy. Enitan is also living in, effectively, the palace complex, for reasonably well-explained political reasons, and that could grant her a surprising amount of access. But there are other things that are harder to explain away: the lack of surveillance, the relative lack of guards, and the odd political structure that's required for the plot to work. It's tricky to talk about this without spoilers, but the plot rests heavily on a conspiratorial view of how government power is wielded that I think strains plausibility. I'm not naive enough to think that the true power structure of a society matches the formal power structure, but I don't think they diverge as much as people think they do. It's one thing to say that the true power brokers of society can be largely unknown to the general population. In a repressive society with a weak media, that's believable. It's quite another matter for the people inside the palace to be in the dark about who is running what. I thought that was the biggest problem with this book. Its greatest feature is the characters, and particularly the character relationships. Enitan is an excellent protagonist: fascinating, sympathetic, determined, and daring in ways that make her success more believable. 
Early in the book, she forms an uneasy partnership that becomes the heart of the book, and I loved everything about that relationship. The politics of her situation might be a bit too simple, but the emotions were extremely well-done. This is a book about colonialism. Specifically, it's a book about cultural looting, appropriation, and racist superiority. The Vaalbarans consider Enitan barely better than an animal, and in her home they're merciless and repressive. Taken out of that context into their imperial capital, they see her as a harmless curiosity and novelty. Enitan exploits this in ways that are entirely believable. She is also driven to incandescent fury in ways that are entirely believable, and which she only rarely can allow herself to act on. Ashing-Giwa drives home the sheer uselessness of even the more sympathetic Vaalbarans more forthrightly than science fiction is usually willing to be. It's not a subtle point, but it is an accurate one. The first two thirds of this book had me thoroughly engrossed and unable to put it down. The last third unfortunately turns into a Pokémon hunt of antagonists, which I found less satisfying and somewhat less believable. I wish there had been more need for Enitan to build political alliances and go deeper into the social maneuverings of the first part of the book, rather than gaining some deus ex machina allies who trivially solve some otherwise-tricky plot problems. The setup is amazing; the resolution felt a bit like escaping a maze by blasting through the walls, which I don't think played to the strengths of the characters and relationships that Ashing-Giwa had constructed. The advantage of that approach is that we do get a satisfying resolution and a standalone novel. The central relationship of the book is unfortunately too much of a spoiler to talk about in a review, but I thought it was the best part of the story. This is a political thriller on the surface, but I think its heart is an unexpected political alliance with a fascinatingly tricky balance of power. I was delighted that Ashing-Giwa never allows the tension in that relationship to collapse into one of the stock patterns it so easily could have become. The Splinter in the Sky reminded me a little of Arkady Martine's A Memory Called Empire. It's not as assured or as adroitly balanced as that book, and the characters are not quite as memorable, but that's a very high bar. The political point is even sharper, and it has some of the same appeal. I had so much fun reading this book. You may need to suspend your disbelief about some of the politics, and I wish the conclusion had been a bit less brute-force, but this is great stuff. Recommended when you're in the mood for a character story in the trappings of a political thriller. Rating: 8 out of 10

9 July 2024

Russ Allbery: Review: Raising Steam

Review: Raising Steam, by Terry Pratchett
Series: Discworld #40
Publisher: Anchor Books
Copyright: 2013
Printing: October 2014
ISBN: 0-8041-6920-9
Format: Trade paperback
Pages: 365
Raising Steam is the 40th Discworld novel and the third Moist von Lipwig novel, following Making Money. This is not a good place to start reading the series. Dick Simnel is a tinkerer from a line of tinkerers. He has been obsessed with mastering the power of steam since the age of ten, when his father died in a steam accident. That pursuit took him deeper into mathematics and precision, calculations and experiments, until he built Iron Girder: Discworld's first steam-powered locomotive. His early funding came from some convenient family pirate treasure, but turning his prototype into something more will require significantly more resources. That is how he ends up in the office of Harry King, Ankh-Morpork's sanitation magnate. Simnel's steam locomotive has the potential to solve some obvious logistical problems, such as getting fish from the docks of Quirm to the streets of Ankh-Morpork before it stops being vaguely edible. That's not what makes railways catch fire, however. As soon as Iron Girder is huffing and puffing its way around King's compound, it becomes the most popular attraction in the city. People stand in line for hours to ride it over and over again for reasons that they cannot entirely explain. There is something wild and uncontrollable going on. Vetinari is not sure he likes wild and uncontrollable, but he knows the lap into which such problems can be dumped: Moist von Lipwig, who is already getting bored with being a figurehead for the city's banking system. The setup for Raising Steam reminds me more of Moving Pictures than the other Moist von Lipwig novels. Simnel himself is a relentlessly practical engineer, but the trains themselves have tapped some sort of primal magic. Unlike Moving Pictures, Pratchett doesn't provide an explicit fantasy explanation involving intruding powers from another world. It might have been a more interesting book if he had. Instead, this book expects the reader to believe there is something inherently appealing and fascinating about trains, without providing much logic or underlying justification. I think some readers will be willing to go along with this, and others (myself included) will be left wishing the story had more world-building and fewer exclamation points. That's not the real problem with this book, though. Sadly, its true downfall is that Pratchett's writing ability had almost completely collapsed by the time he wrote it. As mentioned in my review of Snuff, we're now well into the period where Pratchett was suffering the effects of early-onset Alzheimer's. In that book, his health issues mostly affected the dialogue near the end of the novel. In this book, published two years later, it's pervasive and much worse. Here's a typical passage from early in the book:
It is said that a soft answer turneth away wrath, but this assertion has a lot to do with hope and was now turning out to be patently inaccurate, since even a well-spoken and thoughtful soft answer could actually drive the wrong kind of person into a state of fury if wrath was what they had in mind, and that was the state the elderly dwarf was now enjoying.
One of the best things about Discworld is Pratchett's ability to drop unexpected bits of wisdom in a sentence or two, or twist a verbal knife in an unexpected and surprising direction. Raising Steam still shows flashes of that ability, but it's buried in run-on sentences, drowned in clichés and repetition, and often left behind as the containing sentence meanders off into the weeds and sputters to a confused halt. The idea is still there; the delivery, sadly, is not. This is the first Discworld novel that I found mentally taxing to read. Sentences are often so overpacked that they require real effort to untangle, and the untangled meaning rarely feels worth the effort. The individual voice of the characters is almost gone. Vetinari's monologues, rather than being a rare event with dangerous layers, are frequent, rambling, and indecisive, often sounding like an entirely different character than the Vetinari we know. The constant repetition of the name any given character is speaking to was impossible for me to ignore. And the momentum of the story feels wrong; rather than constructing the events of the story in a way that sweeps the reader along, it felt like Pratchett was constantly pushing, trying to convince the reader that trains were the most exciting thing to ever happen to Discworld. The bones of a good story are here, including further development of dwarf politics from The Fifth Elephant and Thud! and the further fallout of the events of Snuff. There are also glimmers of Pratchett's typically sharp observations and turns of phrase that could have been unearthed and polished. But at the very least this book needed way more editing and a lot of rewriting. I suspect it could have dropped thirty pages just by tightening the dialogue and removing some of the repetition. I'm afraid I did not enjoy this. I am a bit of a hard sell for the magic fascination of trains: I love trains, but my model railroad days are behind me and I'm now more interested in them as part of urban transportation policy. Previous Discworld books on technology and social systems did more of the work of drawing the reader in, providing character hooks and additional complexity, and building a firmer foundation than "trains are awesome." The main problem, though, was the quality of the writing, particularly when compared to the previous novels with the same characters. I dragged myself through this book out of a sense of completionism and obligation, and was relieved when I finished it. This is the first Discworld novel that I don't recommend. I think the only reason to read it is if you want to have read all of Discworld. Otherwise, consider stopping with Snuff and letting it be the send-off for the Ankh-Morpork characters. Followed by The Shepherd's Crown, a Tiffany Aching story and the last Discworld novel. Rating: 3 out of 10

8 July 2024

Russ Allbery: Review: Beyond Control

Review: Beyond Control, by Kit Rocha
Series: Beyond #2
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 364
Beyond Control is science fiction erotica (dystopian erotic romance, per the marketing) and a direct sequel to Beyond Shame. These books shift protagonists with each volume and enough of the world background is explained that you could start here, but there are significant spoilers for the previous book. I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for. This is one of those reviews that I write because I'm stubborn about reviewing all the books I read, not because it's likely to be useful to anyone. There are also considerably more spoilers for the shape of the story than I normally include, so be warned. The Beyond series is erotica. Specifically, so far, consensual BDSM erotica with bisexuality but otherwise typical gender stereotypes. The authors (Kit Rocha is a pen name for Donna Herren and Bree Bridges) are women, so it's more female gaze than male gaze, but by erotica I don't mean romance with an above-average number of steamy scenes. I mean it felt like half the book by page count was descriptions of sex. This review is rather pointless because, one, I'm not going to review the sex that's the main point of the book, and two, I skimmed all the sex and read it for the story because I'm weird. Beyond Shame got me interested in these absurdly horny people and their post-apocalyptic survival struggles in the outskirts of a city run by a religious surveillance state, and I wanted to find out what happened next. Besides, this book promised to focus on my favorite character from the first novel, Lex, and I wanted to read more about her. Beyond Control uses a series pattern that I understand is common in romance but which is not often seen in SFF (my usual genre): each book focuses on a new couple adjacent to the previous couple, while the happily ever after of the previous couple plays out in the background. In this case, it also teases the protagonists of the next book. I can see why romance uses this structure: it's an excuse to provide satisfying interludes for the reader. In between Lex and Dallas's current relationship problems, one gets to enjoy how well everything worked out for Noelle and how much she's grown. In Beyond Shame, Lex was the sort-of partner of Dallas O'Kane, the leader of the street gang that is running Sector Four. (Picture a circle surrounding the rich-people-only city of Eden. That circle is divided into eight wedge-shaped sectors, which provide heavy industries, black-market pleasures, and slums for agricultural workers.) Dallas is an intensely possessive, personally charismatic semi-dictator who cultivates the image of a dangerous barbarian to everyone outside and most of the people inside Sector Four. Since he's supposed to be one of the good guys, this is more image than reality, but it's not entirely disconnected from reality. This book is about Lex and Dallas forming an actual relationship, instead of the fraught and complicated thing they had in the first book. I was hoping that this would involve Dallas becoming less of an asshole. It unfortunately does not, although some of what I attributed to malice may be adequately explained by stupidity. I'm not sure that's an improvement. Lex is great, just like she was in the first book. It's obvious by this point in the series that she does most of the emotional labor of keeping the gang running, and her support is central to Dallas's success. 
Like most of the people in this story, she has a nasty and abusive background that she's still dealing with in various ways. Dallas's possessiveness is intensely appealing to her, but she wants that possessiveness on different terms than Dallas may be willing to offer, or is even aware of. Lex was, I thought, exceptionally clear about what she wanted out of this relationship. Dallas thinks this is entirely about sex, and is, in general, dumber than a sack of hammers. That means fights. Also orgies, but, well, hopefully you knew what you were getting into if you picked up this book. I know, I know, it's erotica, that's the whole point, but these people have a truly absurd amount of sex. Eden puts birth control in the water supply, which is a neat way to simplify some of the in-story consequences of erotica. They must be putting aphrodisiacs in the water supply as well. There was a lot of sector politics in this book that I found way more interesting than it had any right to be. I really like most of these people, even Dallas when he manages to get his three brain cells connected for more than a few minutes. The events of the first book have a lot of significant fallout, Lex continues being a badass, the social dynamics between the women are very well-done (and pass the Bechdel test yet again even though this is mostly traditional-gender-role erotica), and if Dallas had managed to understand what he did wrong at a deeper-than-emotional level, I would have rather enjoyed the non-erotica story parts. Alas. I therefore wouldn't recommend this book even if I were willing to offer any recommendations about erotica (which I'm not). I was hoping it was going somewhere more rewarding than it did. But I still kind of want to read another one? I am weirdly fascinated with the lives of these people. The next book is about Six, who has the potential to turn into the sort of snarky, cynical character I love reading about. And it's not that hard to skim over the orgies. Maybe Dallas will get one additional brain cell per book? Followed by Beyond Pain. Rating: 5 out of 10

3 July 2024

Mike Gabriel: Polis - a FLOSS Tool for Civic Participation -- Initial Evaluation and Adaptation (episode 2/5)

Here comes the 2nd article of the 5-episode blog post series written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH. Enjoy also this read on Guido's work on Polis,
Mike
Table of Contents of the Blog Post Series
  1. Introduction
  2. Initial evaluation and adaptation (this article)
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap
Polis - Initial evaluation and adaptation

The Polis code base consists of a number of components: the administration and participation interfaces, a common web backend, and a statistics processing server. Both frontends and the backend are written in a mixture of JavaScript and TypeScript; only the statistics processing server is written in Clojure. In the case of self-hosting, the preferred method of deployment is via Docker containers, using Docker Compose or any other orchestrator. The participation frontend for conversations can either be used as a standalone web page or be embedded via an iframe. For our planned use case we initially defined a number of goals. After a preliminary evaluation of our own, and after consulting with Policy Lab UK, who were also evaluating and testing Polis and had already made a range of improvements related to self-hosting as well as bug fixes and modernization changes, we decided to take their work as a base for our adaptations, with the intent of submitting generally useful changes back to the Polis project. Subsequently, a number of changes were implemented, including the removal of hardcoded domain names, the elimination of unnecessary cookies and third-party requests, support for an alternative email sending service, and the option of disabling Facebook and X integration. For the branding, our approach was to add an option allowing websites which embed conversations in an iframe to load an alternative stylesheet overriding the native Polis branding. For this to be practical we intended to use CSS custom properties for defining branding-related styles such as colors and fonts (see the sketch below). That approach turned out to be problematic: although the Polis participation frontend stylesheet is generated via SCSS and some of the colors are parameterized, they are not used consistently throughout the SCSS stylesheets. In addition, the frontend templates contain a large amount of hardcoded style attributes. While we succeeded in implementing user-defined stylesheets, it took a disproportionate amount of development resources to parameterize all used colors and fonts via CSS custom properties, aggravated by the fact that the SCSS and template files are huge and contain many unused rules and code.
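To illustrate the intended mechanism (the property and selector names below are hypothetical, not the ones used in Polis), branding via CSS custom properties would look roughly like this, with an embedding website loading an alternative stylesheet that redefines only the :root properties:

/* Hypothetical names, for illustration only */
:root {
    --brand-color-primary: #005f87;
    --brand-font-family: "Source Sans Pro", sans-serif;
}
.vote-button {
    background-color: var(--brand-color-primary);
    font-family: var(--brand-font-family);
}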

Samuel Henrique: Announcing wcurl: a curl wrapper to download files

tl;dr Whenever you need to download files through the terminal and don't feel like using wget:
wcurl example.com/filename.txt
Manpage:
https://manpages.debian.org/unstable/curl/wcurl.1.en.html

Availability (comes installed with the curl package):
  • Debian unstable - Since 2024-07-02
  • Debian testing - Since 2024-07-18
  • Debian 12/bookworm backports - Expected by the end of August 2024.
  • Debian 12/bookworm - Depends on whether Debian's release team will approve it, it could be available in the next point release.
  • Debian derivatives - Rolling releases will get it by the time it's on Debian testing (e.g.: Kali Linux). Stable derivatives only in their next major release.
If you don't want to wait for the package update to arrive, you can always copy the script and place it in your /usr/bin, the code is here:
https://github.com/Debian/wcurl/blob/main/wcurl
https://salsa.debian.org/debian/wcurl/-/blob/main/wcurl?ref_type=heads

Smoother CLI experience

Starting with curl version 8.8.0-2, Debian's curl package now ships a wcurl executable. wcurl is the solution for those who just need to download files without having to remember curl's parameters for things like automatically naming the files. Some people, myself included, would fall back to using wget whenever there was a need to download a file, sometimes even installing wget just for that use case. After all, it's easier to remember "apt install wget" than "curl -L -O -C - ...". wcurl consists of a simple shell script that provides sane defaults for the curl invocation, for when the use case is to just download files. By default, wcurl will:
  • Encode whitespaces in URLs;
  • Download multiple URLs in parallel if the installed curl's version is >= 7.66.0;
  • Follow redirects;
  • Automatically choose a filename as output;
  • Avoid overwriting files if the installed curl's version is >= 7.83.0 (--no-clobber);
  • Perform retries;
  • Set the downloaded file timestamp to the value provided by the server, if available;
  • Default to https as the protocol if the URL doesn't specify one;
  • Disable curl's URL globbing parser so {} and [] characters in URLs are not treated specially.
Example to download a single file:
wcurl example.com/filename.txt
If you ever need to set a custom flag, you can make use of the --curl-options wcurl option; anything set there will be passed to the curl invocation. Just beware that if you need to set any custom flags, it's likely you will be better served by calling curl directly. The --curl-options option is there to allow for some flexibility in unforeseen circumstances.
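For example, a hypothetical invocation passing an extra flag through to curl:

wcurl --curl-options="--verbose" example.com/filename.txt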

The need for wcurl

I've always felt a bit ashamed of not remembering curl's parameters for downloading a file and automatically naming it, having resorted to wget most of the times this was needed (even installing wget when it wasn't there, just for this). I've spoken to a few other experienced people I know and confirmed what could be obvious to others: a lot of people struggle with this. Recently, the curl project released the results of 2024's curl survey, which also showed this is a much-needed feature, just look at some of the answers:

Q: Which curl command line option do you think needs improvement and how?
-O, I really want wget like functionality where I don't have to specify the name
Downloading a file (like wget) could be improved - with automatic naming of the file
downloading files - wget is much cleaner
I wish the default behaviour when GETting a binary was to drop it on disk. That's the only reason 'wget foo.tgz' is still ingrained in my muscle memory.
Maybe have a way to download without specifying something in -o (the only reason i used wget still)
--remote-time should be default
--remote-name-all could really use a short flag

Q: If you miss support for something, tell us what!
"Write the data to the file named in the URL (or in redirects if I'm feeling daring), and timestamp the file to the last-modified-date". This is the main reason I'm still using wget.
I can finally feel less bad about falling back to wget due to not remembering the parameters I want.

Idealization vs. reality

I don't believe curl will ever change its default behavior in such a way that would accommodate this need, as that would have a side-effect of breaking things which expect the current behavior (the blast radius is literally the solar system). This means a new executable needs to be shipped side-by-side with curl, an opportunity to start fresh and work with a more focused use case (to download files). Ideally, this new executable would be maintained by the curl project, make use of libcurl under-the-hood, and be available everywhere. Nobody wants to worry if their systems have the tool or not, it should always be there. Given I'm just a Debian Developer, with not as much free time as I wish, I've decided to write a simple shell script wrapper calling the curl CLI under-the-hood. wcurl will come installed with the curl package from now on, and I will check with the release team about shipping it on the current Debian stable as well. Shipping wcurl in other distros will be up to them (Debian-derivatives should pick it up automatically, though). We've tried to make it easy for anyone to ship this by using the curl license, keeping the script POSIX-compliant, and shipping a manpage. Maybe if there's enough interest across distributions, someone might sign up for implementing this in upstream curl and increase its reach. I would be happy with the curl project reusing the wcurl name when that happens. It's unlikely that wcurl would be shipped by curl upstream as it is, assuming they would prefer a solution that uses libcurl directly (more similar to curl the CLI, to maintain). In the worst case, wcurl becomes a Debian-specific tool that only a few people are aware of; in the best case, it becomes the new go-to CLI tool for simply downloading files. I would be happy if at least someone other than me finds it useful.

Naming is hard

When I started working on it, I was calling the new executable "curld" (stands for "curl download"), but then when discussing this in one of our weekly calls in the Debian Brasília community, it was mentioned that this could be confused for a daemon. We then settled for the name "wcurl", suggested by Carlos Henrique Lima Melara <charles>. It doesn't really stand for anything, but it's very easy to remember. You know... "it's that wget alternative for when you want to use curl instead" :)

Feedback

I'm hosting the code on GitHub and Debian's GitLab instance, feel free to open an issue to provide feedback.
https://salsa.debian.org/debian/wcurl
https://github.com/Debian/wcurl

We also have a Matrix room for the Debian curl maintainers:
https://matrix.to/#/#debian-curl-maintainers:matrix.org

Acknowledgments

The idea for wcurl came a few days before the curl-up conference 2024. I've been thinking a lot about developer productivity in the terminal lately, different tools and better defaults. Before curl-up, I was also thinking about packaging improvements for the curl package. I don't remember what exactly happened, but I likely had to download something and felt a bit ashamed of maintaining curl and not remembering the parameters to download files the way I wanted. I first discussed this idea in the conference, where I asked the participants about it and there were no concerns raised, and some people said I should give it a go. Participating in curl-up was a really great experience and I'm thankful for the interactions I've had there. On the Debian side, I've got reviews of the code and manpage by Sergio Durigan Junior <sergiodj>, Guilherme Puida Moreira <puida> and Carlos Henrique Lima Melara <charles>. Sergio ended up rewriting the tool to be POSIX-compliant (my version was written in bash), so he takes all the credit for the portability.

Changes since publication

2024-07-18
  • Update date of availability for Debian testing and expected date for bookworm backports.
  • Mention charles as the person who suggested "wcurl" as a name.
  • Update wcurl's -o/--opts options, it's now just --curl-options.
  • Remove mention of language spoken in the Matrix room, we are using English now.
  • Update list of features of wcurl.

2 July 2024

Mike Gabriel: Polis - a FLOSS Tool for Civic Participation -- Introduction (episode 1/5)

This is the first article of a 5-episode blog post series written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH. Thanks, Guido, for being on the Polis project. Enjoy the read on the work Guido has been doing over the past months,
Mike
A team led by Raoul Kramer/BetaBreak is currently adapting Polis for evaluation and testing by several Dutch provincial governments and central government ministries. Guido Berhörster (author of this article), who is an employee at Fre(i)e Software GmbH, has been involved in this project as the main software developer. This series of blog posts describes how and why Polis was initially modified and adapted, what issues the team ran into, and how this ultimately led them to start a new Open Source project called Particiapp for accelerating the development of alternative Polis frontends compatible with but independent from the upstream project.

Table of Contents of the Blog Post Series
  1. Introduction (this article)
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap
Polis - The Introduction

What is Polis?

Polis is a platform for participation which helps to gather, analyze and understand viewpoints of large groups of participants on complex issues. In practical terms, participants take part in conversations on a predefined topic by voting on statements or submitting their own statements (referred to as comments in Polis) for others to vote on [1]. Through statistical analysis including machine learning, participants are sorted into groups based on similarities in voting behavior. In addition, group-informed and overall consensus statements are identified and presented to participants in real-time. This allows participants to react to and refine statements and, either individually or through a predefined process, to come to an overall consensus. Furthermore, the order in which statements are presented to participants is influenced by a complex weighting system based on a number of factors such as variance, recency, and frequency of skipping. This so-called comment routing is intended to facilitate a meaningful contribution of participants without requiring them to vote on each of a potentially huge number of statements [2]. Polis' open-ended nature sets it apart from online surveys using pre-defined questions and allows its users to gather a more accurate picture of the public opinion. In contrast to a discussion forum or comment section where participants directly reply to each other, it discourages unproductive behavior such as provocations or personal attacks by not presenting statements in chronological order, in combination with voting. Finally, its comment routing is intended to provide scalability towards a large number of participants who generate a potentially large number of statements. The project was developed and is maintained by The Computational Democracy Project, a USA-based non-profit organization which provides a hosted version and offers related services. It is also released as Open Source software under the AGPL 3.0 license. Polis has been used in a variety of different contexts as part of broader political processes facilitating broader political participation and opinion-forming, and gathering feedback and creative input.

Use of Polis in Taiwan

One prominent use case of Polis is its adoption as part of the vTaiwan participatory governance project. Established by the g0v civic tech community in the wake of the 2014 mass protests by the Sunflower movement, the vTaiwan project enables consultations on proposed legislation among a broad range of stakeholders including government ministries, lawmakers, experts, interest groups, civil society as well as the broader public. Although the resulting recommendations are non-binding, they exert pressure on the government to take action, and recommendations have been adopted into legislation [3][4][5]. vTaiwan uses Polis for large-scale online deliberations as part of a structured participation process. These deliberations take place after identifying and involving stakeholders and experts and providing thorough information about the topic at hand to the public. Citizens are then given the opportunity to vote on statements or provide alternative proposals, which allows for the refinement of ideas and ideally leads to a consensus at the end. The results of these online deliberations are then curated and discussed in publicly broadcast face-to-face meetings which ultimately produce concrete policy recommendations.
vTaiwan has in numerous cases given impulses resulting in government action and provided significant input, e.g. on legislation regulating Uber or technological experiments by Fintech startups [3][5].

See also
  1. https://compdemocracy.org/Polis/
  2. https://compdemocracy.org/comment-routing/
  3. https://info.vtaiwan.tw/
  4. https://www.theguardian.com/world/2020/sep/27/taiwan-civic-hackers-polis-consensus-social-media-platform
  5. https://www.technologyreview.com/2018/08/21/240284/the-simple-but-ingenious-system-taiwan-uses-to-crowdsource-its-laws/

1 July 2024

Paul Wise: FLOSS Activities June 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Communication
  • Respond to queries from Debian users and contributors on IRC

Sponsors

All work was done on a volunteer basis.

Sahil Dhiman: Personal ASNs From India

The Internet and its workings are interesting and complex. We need an IP address to connect to the Internet. A group of IP addresses with a common routing policy is known as an Autonomous System (AS). Each AS has a globally unique Autonomous System Number (ASN) and is maintained by a single entity or individual(s). Your ISP would have an ASN. IP addresses/prefixes are advertised (announced) by an AS through the Border Gateway Protocol (BGP) to its peers (ASes which it connects to) to steer traffic in its direction or back. Take, for example, the Google DNS service at 8.8.8.8, owned and operated by AS15169 Google LLC. AS15169, through BGP announcements, lets all its peers know that traffic for the whole 8.8.8.0/24 prefix (including 8.8.8.8) should be sent to them. See the following screenshot response of mtr -zt 8.8.8.8 from my system. From my Internet Service Provider (ISP), AS133982 Excitel Broadband, traffic travels to AS15169 to reach 8.8.8.8 (dns.google) and returns via the same path. This inter-AS traffic makes the Internet tick. mtr from Excitel to Google. ASes come in different sizes and serve different purposes, like AS749 DoD Network Information Center, which holds 200 million+ IPv4 addresses for historical reasons, or AS23860 Alliance Broadband Services, which has 68 thousand+ IPv4 addresses for the purpose of providing consumer Internet. Similarly, some individuals also run their personal ASN, including a bunch of Indians. Most of these Indian ASNs are IPv6 (primary or only) networks run for hobby and educational purposes. I was interested in this data, so I compiled a list of active ones (visible in the global routing table) from BGP.Tools. Let me know if I'm missing someone.
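As an aside (not from the original post), a quick way to check which AS announces a given IP address is Team Cymru's IP-to-ASN whois service:

whois -h whois.cymru.com " -v 8.8.8.8"

For 8.8.8.8 this reports origin AS15169 (GOOGLE), matching the example above.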

Russell Coker: VoLTE in Australia

Introduction

In Australia the 3G mobile frequencies are to be reused, so the telcos are in the process of shutting down their 3G services. That means that everyone has to use VoLTE (Voice over LTE) for phone calls (including emergency calls). The shutdown time varies by telco; Kogan Mobile (one of the better services, which has good value for money and generally works well) shut down their 3G service in January. Aldi Mobile (another one of the good services, which is slightly more expensive but has included free calls to most first-world countries and uses the largest phone network) will shut theirs down at the end of August. For background there's a FOSDEM talk about OpenSIPS with VoLTE and VoNR [1]; it's more complex than you want to know. Also VoNR (Voice over New Radio) is the standard for 5G voice; it's different from VoLTE and has a fallback to VoLTE. Another good lecture for background information is the FOSDEM talk on VoLTE at the handset end [2].

The PinePhonePro

In October 2023 I tried using my PinePhonePro as my main phone but that only lasted a few days due to problems with calls and poor battery life [3]. Since then I went back to the Huawei Mate 10 Pro that I bought refurbished in June 2019 for $389. So that has been my main phone for 5 years now, giving a cost of $1.50 per week. I had tried using a Huawei Nova 7i running Android without Google Play as an experiment but that had failed, as I do many things that need Android apps [4]. I followed the PinePhone wiki to get my PinePhonePro working with VoLTE [5]. That worked fine for me; the only difference from the instructions is that I had to use device /dev/ttyUSB3, and that the modem kept resetting itself during the process, and when that happened I had to kill minicom and start again. After changing the setting and saving it, the PinePhonePro seemed to work well with VoLTE on a Kogan Mobile SIM (so definitely not using 3G). One issue I have found is that Plasma Mobile (my preferred FOSS phone GUI) appears to have a library issue that results in polling every 14ms even when the screen is locked [6]. If you have a few processes doing that (which means the most lightly used Plasma system) it really hurts battery use. The maintainer has quite reasonably deferred action on this bug report given the KDE 6 transition. Later on in the Trixie development cycle I hope to get this issue resolved; I don't expect it to suddenly make battery life good, but it might make battery life acceptable. I am now idly considering carrying around my PinePhonePro in a powered-off state for situations where I might need to do high security operations (root logins to servers or online banking) but for which carrying a laptop isn't convenient. It will do well for the "turn on, do 30 mins of work that needs security, and then turn off" scenario.

Huawei Mate 10 Pro and Redmi 9A

The Huawei Mate 10 Pro has been my main phone for 5 years and it has worked well, so it would be ideal if it could do VoLTE as the PinePhonePro isn't ready yet. All the web pages I've seen about the Mate 10 Pro say that it will either allow upgrading to a VoLTE configuration if run with the right SIM or only support it with the right SIM. I did a test with a Chinese SIM which gave an option of turning on VoLTE but didn't allow any firmware updates, and the VoLTE option went away when I put an Australian SIM in.
Some forum comments had led me to believe that it would either permanently enable VoLTE or allow upgrading the firmware to one that enables VoLTE if I used a Chinese SIM, but that is not the case. I didn't expect a high probability of success but I had to give it a go as it's a nice phone. I did some tests on a Redmi 9A (a terrible phone that has really bad latency on the UI in spite of having reasonably good hardware). The one I tested on didn't have VoLTE enabled when I got it; to test that I used the code *#*#4636#*#* in the dialler to get the menu of SIM information, and it showed that VoLTE was not provisioned. I then had to update to the latest release of Android for that phone and enter *#*#86583#*#* in the dialler to enable VoLTE; the message displayed after entering that magic number must end in "DISABLE". I get the impression that the code in question makes the phone not check certain aspects of whether the carrier is good for VoLTE and just do it. So apparently Kogan Mobile somehow gives the Redmi 9A the impression that VoLTE isn't supported, but if the phone just goes ahead and connects it will work. I don't plan to use a Redmi 9A myself as it's too slow, but I added it to my collection to offer to anyone else I know who needs a phone with VoLTE and doesn't use the phone seriously, or to someone who needs a known good phone for testing things.

Samsung Galaxy Note 9

I got some Samsung Galaxy Note 9 phones to run Droidian as an experiment [7]. But Droidian dropped support for the Note 9 and I couldn't figure out how to enable VoLTE via Droidian, which was very annoying after I had spent $109 on a test phone and $215 on a phone for real use (I have no plans to try Droidian again at this time). I tried installing LineageOS on one Note 9 [8], which was much easier than expected (especially after previously installing Droidian). But VoLTE wasn't an option. According to Reddit, LineageOS doesn't support VoLTE on Samsung devices; you can use a Magisk module or a VoLTE enabler module but those aren't supported by LineageOS either [9]. I downloaded an original image for the Note 9 from SamsMobile.com [10]. That image booted past the orange stage (where if you have problems then your phone is probably permanently useless) but didn't boot into the OS. A friend helped me out with that and it turned out that the Heimdall flash tool on Linux didn't do something it needed to do and that Odin on Windows was required. After using Odin everything was fine and I have a Note 9 with VoLTE running the latest Samsung firmware, which is security patch level 1st July 2022!!! So I have a choice between using a Note 9 for data and SMS while running a current version of LineageOS with all security fixes, or running a Samsung image with no security updates for 2 years which supports phone calls. So based on this I have to recommend Pixel as the phone of choice; it has a decent level of support from Google and long term support from LineageOS. According to the LineageOS web site you can run the current version of Lineage on the original Pixel phone from 2016! Of course getting VoLTE to work on it might be another saga, but it would probably be easier to do with LineageOS on a Pixel than on a Samsung phone.

Conclusion

The operation of the Note 9 for me is decent now, apart from the potential security issues. The same goes for selling one of the phones. The PinePhonePro still has potential to become my daily driver at some future time if I and others can optimise power use.
Also a complicating factor is that I want both Jabber and Matrix to be actually instant IM systems, not IM with a 5 minute delay, so suspend mode isn't a good option. Pixel phones will be a much higher priority when looking at phones to buy in future. The older Pixel phones go for as little as $100 on eBay and can still run the latest LineageOS. VoLTE seems needlessly complicated.
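Since I mentioned the wiki procedure above, here is roughly what it looks like. This is a minimal sketch rather than an authoritative recipe: it assumes the Quectel EG25-G modem in the PinePhonePro with its AT interface on /dev/ttyUSB3, and the exact commands and responses can vary between firmware versions.
$ minicom -D /dev/ttyUSB3
AT+QCFG="ims",1
OK
AT+CFUN=1,1
OK
AT+QCFG="ims",1 asks the modem to register with the IMS network (which is what carries VoLTE), and AT+CFUN=1,1 restarts the modem so the setting takes effect. If the modem resets itself mid-session, as described above, killing minicom and starting again is the workaround.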

Guido Günther: Free Software Activities June 2024

A short status update of what happened on my side last month. I was able to test our Cellbroadcast bits, feedbackd became more flexible regarding LEDs, Phosh 0.40 is out, and some more. Projects touched: Phosh, Phoc, gmobile, phosh-mobile-settings, phosh-wallpapers, Emacs, Debian, ModemManager, Feedbackd, Livi, Calls, Chatty, meta-phosh, Libhandy. If you want to support my work see donations.

29 June 2024

Russ Allbery: Review: Bitter Angels

Review: Bitter Angels, by C.L. Anderson
Publisher: Ballantine Books
Copyright: 2009
ISBN: 0-553-59217-3
Format: Mass market
Pages: 438
Bitter Angels is a stand-alone science fiction novel. It won the Philip K. Dick award for best SF original paperback in 2010. C.L. Anderson is a pen name for Sarah Zettel.

Terese was a Guardian, one of the agents of the Pax Solaris who find ways to keep the peace in troubled systems and high-stress situations with the help of an implanted Companion, an assistant AI. Forty years ago, on one of those missions, she was captured and her Companion was forcibly removed. She was rescued by her friend and mentor and retired afterwards, starting a new life and a new family, trying to leave the memories behind. Now, the woman who rescued her is dead. She was murdered on duty in the Erasmus system, a corporate hellhole that appears to be on the verge of exploding into a political hot spot. Bianca's last instructions asked for Terese to replace her. Terese's family is furious at her for even considering returning to the Guardians, but she can't say no. Duty, and Bianca's dying request, call too strongly.

Amerand is Security on Dazzle, one of the Erasmus stations. He is one of the refugees from Oblivion, the station that the First Bloods who rule the system let die. He keeps his head above water and tries to protect his father and find his mother without doing anything that the ever-present Clerks might find concerning. Keeping an eye on newly-arriving Solaris saints is a typical assignment, since the First Bloods don't trust the meddling do-gooders. But something is not quite right, and a cryptic warning from his Clerk makes him even more suspicious.

This is the second book by Sarah Zettel that I've read, and both of them have been tense, claustrophobic thrillers set in a world with harsh social inequality and little space for the characters to maneuver. In this case, the structure of her future universe reminded me a bit of Iain M. Banks's Culture, but with less advanced technology and only humans. The Pax Solaris has eliminated war within its borders and greatly extended lifespans. That peace is maintained by Guardians, who play a role similar to Special Circumstances but a bit more idealist and less lethal. They show up where there are problems and meddle, manipulating and pushing to try to defuse the problems before they reach the Pax Solaris.

Like a Culture novel, nearly all of the action takes place outside the Pax Solaris, in the Erasmus system. Erasmus is a corporate colony that has turned into a cross between a hereditary dictatorship and the Corporate Rim from Martha Wells's Murderbot series. Debt slavery is ubiquitous, economic inequality is inconceivably vast, and the Clerks are everywhere. Erasmus natives like Amerand have very little leeway and even fewer options. Survival is a matter of not drawing the attention of the wrong people. Terese and her fellow Guardians are appalled, but also keenly aware that destabilizing the local politics may make the situation even worse and get a lot of people killed.

Bitter Angels is structured like a mystery: who killed Bianca, and what was her plan when she was killed? Unlike a lot of books of this type, the villains are not idiots and their plan is both satisfyingly complex and still depressingly relevant. I don't think I'm giving anything away by saying that I have read recent news articles about people with very similar plans, albeit involving less science-fiction technology. Anderson starts with a tense situation and increases the pressure relentlessly, leaving the heroes one step behind the villains for almost the entire novel.
This is not happy or optimistic reading; at times the book is quite dark, but it certainly was engrossing. The one world-building quibble I had is that the Erasmus system is portrayed partly as a hydraulic empire, and while this is arguably feasible given that spaceship travel is strictly controlled, it seemed like a weird choice given the prevalence of water on the nearby moons. Water smuggling plays a significant role in the plot, and I wasn't entirely convinced of the politics and logistics behind it. If this sort of thing bugs you, there are some pieces that may require suspension of disbelief.

Bitter Angels is the sort of tense thriller where catastrophe is barely avoided and the cost of victory is too high, so you will want to be in the mood for that before you dive in. But if that's what you're looking for, I thought Anderson delivered a complex and satisfying story. Content warning: major character suicide. Rating: 7 out of 10

23 June 2024

Vincent Bernat: Why content providers need IPv6

IPv4 is an expensive resource. However, many content providers are still IPv4-only. The most common reason is that IPv4 is here to stay and IPv6 is an additional complexity.1 This mindset may seem selfish, but there are compelling reasons for a content provider to enable IPv6, even when they have enough IPv4 addresses available for their needs.

Disclaimer It's been a while since this article has been sitting in my drafts. I started it when I was working at Shadow, a content provider, whereas I now work for Free, an internet service provider.

Why ISPs need IPv6? Providing a public IPv4 address to each customer is quite costly when each IP address costs US$40 on the market. For fixed access, some consumer ISPs are still providing one IPv4 address per customer.2 Other ISPs provide, by default, an IPv4 address shared among several customers. For mobile access, most ISPs distribute a shared IPv4 address. There are several methods to share an IPv4 address:3
NAT44
The customer device is given a private IPv4 address, which is translated to a public one by a service provider device. This device needs to maintain a state for each translation.
464XLAT and DS-Lite
The customer device translates the private IPv4 address to an IPv6 address or encapsulates IPv4 traffic in IPv6 packets. The provider device then translates the IPv6 address to a public IPv4 address. It still needs to maintain a state for the NAT64 translation.
Lightweight IPv4 over IPv6, MAP-E, and MAP-T
The customer device encapsulates IPv4 in IPv6 packets or performs a stateless NAT46 translation. The provider device uses a binding table or an algorithmic rule to map IPv6 tunnels to IPv4 addresses and ports. It does not need to maintain a state.
Solutions to share an IPv4 address across several customers. Some of them require the ISP to keep state, some don't.
All these solutions require a translation device in the ISP's network. This device represents a non-negligible cost in terms of money and reliability. As half of the top 1000 websites support IPv6 and the biggest players can deliver most of their traffic using IPv6,4 ISPs have a clear path to reduce the cost of translation devices: provide IPv6 by default to their customers.
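Incidentally, it is easy to check whether you are behind one of these translation devices: compare the IPv4 address configured on your WAN interface with the address a remote service sees. A quick sketch, where eth0 is a placeholder for your WAN interface and ifconfig.me is just one example of a "what is my IP" service:
$ ip -4 addr show dev eth0      # the address your ISP assigned to the interface
$ curl -4 https://ifconfig.me   # the address the rest of the Internet sees
If the two differ, or if the local address falls within 100.64.0.0/10 (the range reserved for carrier-grade NAT by RFC 6598), your IPv4 traffic goes through a shared address.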

Why content providers need IPv6? Content providers should expose their services over IPv6 primarily to avoid going through the ISP's translation devices. This doesn't help users who don't have IPv6 or users with a non-shared IPv4 address, but it provides a better service for all the others. Why would the service be better delivered over IPv6 than over IPv4 when a translation device is in the path? There are two main reasons for that:5
  1. Translation devices introduce additional latency due to their geographical placement inside the network: it is easier and cheaper to only install these devices at a few points in the network instead of putting them close to the users.
  2. Translation devices are an additional point of failure in the path between the user and the content. They can become overloaded or malfunction. Moreover, as they are not used for the five most visited websites, which serve their traffic over IPv6, the ISPs may not be incentivized to ensure they perform as well as the native IPv6 path.
Looking at Google statistics, half of the users reach Google over IPv6. Moreover, their latency is lower.6 In the US, all the nationwide mobile providers have IPv6 enabled. For France, we can refer to the annual ARCEP report: in 2022, 72% of fixed users and 60% of mobile users had IPv6 enabled, with projections of 94% and 88% for 2025. Starting from this projection, since all mobile users go through a network translation device, content providers can deliver a better service for 88% of them by exposing their services over IPv6. If we exclude Orange, which has 40% of the market share on consumer fixed access, enabling IPv6 should positively impact more than 55% of fixed access users.
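As a quick way to verify that a given service is actually exposed over IPv6, one can check for an AAAA record and then force a connection over IPv6; a minimal sketch, with example.com standing in for the service being tested:
$ dig +short AAAA example.com       # does the name publish IPv6 addresses?
$ curl -6 -sI https://example.com   # force the transfer over IPv6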
In conclusion, content providers aiming for the best user experience should expose their services over IPv6. By avoiding translation devices, they can ensure fast and reliable content delivery. This is crucial for latency-sensitive applications, like live streaming, but also for websites in competitive markets, where even slight delays can lead to user disengagement.

  1. A way to limit this complexity is to build IPv6 services and only provide IPv4 through reverse proxies at the edge.
  2. In France, this includes non-profit ISPs, like FDN and Milkywan. Additionally, Orange, the previously state-owned telecom provider, supplies non-shared IPv4 addresses. Free also provides a dedicated IPv4 address for customers connected to the point-to-point FTTH access.
  3. I use the term NAT instead of the more correct term NAPT. Feel free to do a mental substitution. If you are curious, check RFC 2663. For a survey of the IPv6 transition technologies enumerated here, have a look at RFC 9313.
  4. For AS 12322, Google, Netflix, and Meta are delivering 85% of their traffic over IPv6. Also, more than half of our traffic is delivered over IPv6.
  5. An additional reason is for fighting abuse: blacklisting an IPv4 address may impact unrelated users who share the same IPv4 as the culprits.
  6. IPv6 may not be the sole reason the latency is lower: users with IPv6 generally have a better connection.

8 June 2024

Reproducible Builds: Reproducible Builds in May 2024

Welcome to the May 2024 report from the Reproducible Builds project! In these reports, we try to outline what we have been up to over the past month and highlight news items in software supply-chain security more broadly. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. A peek into build provenance for Homebrew
  2. Distribution news
  3. Mailing list news
  4. Miscellaneous news
  5. Two new academic papers
  6. diffoscope
  7. Website updates
  8. Upstream patches
  9. Reproducibility testing framework


A peek into build provenance for Homebrew Joe Sweeney and William Woodruff on the Trail of Bits blog wrote an extensive post about build provenance for Homebrew, the third-party package manager for macOS. Their post details how each bottle (i.e. each release):
[…] built by Homebrew will come with a cryptographically verifiable statement binding the bottle's content to the specific workflow and other build-time metadata that produced it. […] In effect, this injects greater transparency into the Homebrew build process, and diminishes the threat posed by a compromised or malicious insider by making it impossible to trick ordinary users into installing non-CI-built bottles.
The post also briefly touches on future work, including work on source provenance:
Homebrew's formulae already hash-pin their source artifacts, but we can go a step further and additionally assert that source artifacts are produced by the repository (or other signing identity) that's latent in their URL or otherwise embedded into the formula specification.

Distribution news In Debian this month, Johannes Schauer Marin Rodrigues (aka josch) noticed that the Debian binary package bash version 5.2.15-2+b3 was uploaded to the archive twice: once to bookworm and once to sid, but with differing content. This is a problem for reproducible builds in Debian due to its assumption that the package name, version and architecture triplet is unique. However, josch highlighted that:
This example with bash is especially problematic since bash is Essential:yes, so there will now be a large portion of .buildinfo files where it is not possible to figure out with which of the two differing bash packages the sources were compiled.
In response to this, Holger Levsen performed an analysis of all .buildinfo files and found that this needs almost 1,500 binNMUs to fix the fallout from this bug. Elsewhere in Debian, Vagrant Cascadian posted about a Non-Maintainer Upload (NMU) sprint to take place during early June, and it was announced that there is now a #debian-snapshot IRC channel on OFTC to discuss the creation of a new source code archiving service to, perhaps, replace snapshot.debian.org. Lastly, 11 reviews of Debian packages were added, 15 were updated and 48 were removed this month adding to our extensive knowledge about identified issues. A number of issue types have been updated by Chris Lamb as well. [ ][ ]
Elsewhere in the world of distributions, deep within a larger announcement from Colin Percival about the release of version 14.1-BETA2, it was mentioned that the FreeBSD kernels are now built reproducibly.
In Fedora, however, the change proposal mentioned in our report for April 2024 was approved, so, per the ReproduciblePackageBuilds wiki page, the add-determinism tool is now running in new builds for Fedora 41 ("rawhide"). The add-determinism tool is a Rust program which, as its name suggests, adds determinism to files that are given as input by attempting to "standardize metadata contained in binary or source files to ensure consistency and clamping to $SOURCE_DATE_EPOCH in all instances". This is essentially the Fedora version of Debian's strip-nondeterminism. However, strip-nondeterminism is written in Perl, and Fedora did not want to pull Perl into the buildroot for every package. The add-determinism tool eliminates many causes of non-determinism, and work is ongoing to extend the set of packages it can operate on.
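To illustrate the general idea behind this kind of timestamp clamping (a minimal shell sketch, not the actual implementation of either tool; it assumes GNU find and touch, and a hypothetical build/ output directory): any file newer than $SOURCE_DATE_EPOCH is reset to that time.
$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)   # e.g. the last commit time
$ find build/ -newermt "@${SOURCE_DATE_EPOCH}" -print0 |
    xargs -0r touch --date="@${SOURCE_DATE_EPOCH}"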

Mailing list news On our mailing list this month, regular contributor kpcyrd wrote to the list with an update on their source code indexing project, whatsrc.org. The whatsrc.org project, which was launched last month in response to the XZ Utils backdoor, now contains and indexes almost 250,000 unique source code archives. In their post, kpcyrd gives an example of its intended purpose, noting that it has shown that "whilst there seems to be consensus about [the] source code for zsh 5.9 in various Linux distributions, it does not align with the contents of the zsh Git repository". Holger Levsen also posted to the list with a pre-announcement of sorts for the 2024 Reproducible Builds summit. In particular:
[Whilst] the dates and location are not fixed yet, however if you don't help us with finding a suitable location soon, it is very likely that we'll meet again in Hamburg in the 2nd half of September 2024 […].
Lastly, Frederic-Emmanuel Picca wrote to the list asking for help understanding the non-reproducible status of the Debian silx package and received replies from both Vagrant Cascadian and Chris Lamb.

Miscellaneous news strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month strip-nondeterminism version 1.14.0-1 was uploaded to Debian unstable by Chris Lamb chiefly to incorporate a change from Alex Muntada to avoid a dependency on Sub::Override to perform monkey-patching and break circular dependencies related to debhelper [ ]. Elsewhere in our tooling, Jelle van der Waa modified reprotest because the pipes module will be removed in Python version 3.13 [ ].
It was also noticed that a new blog post by Daniel Stenberg detailing How to verify a Curl release mentions the SOURCE_DATE_EPOCH environment variable. This is because:
The [curl] release tools document also contains another key component: the exact time stamp at which the release was done using integer second resolution. In order to generate a correct tarball clone, you need to also generate the new version using the old version's timestamp. Because the modification date of all files in the produced tarball will be set to this timestamp.
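The commonly documented recipe for producing such a timestamp-clamped tarball with GNU tar looks roughly like this (file names here are hypothetical; gzip -n keeps the compressed stream free of its own timestamp):
$ export SOURCE_DATE_EPOCH=1717027200   # the release timestamp, in this example
$ tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" \
      --owner=0 --group=0 --numeric-owner -cf - curl-8.8.0/ |
    gzip -n > curl-8.8.0.tar.gz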

Furthermore, Fay Stegerman filed a bug against the Signal messenger app for Android to report that their reproducible builds cannot, in fact, be reproduced. However, Fay is quick to note that she has:
found zero evidence of any kind of compromise. Some differences are yet unexplained but everything I found seems to be benign. I am disappointed that Reproducible Builds have been broken for months but I have zero reason to doubt Signal's security in any way.

Lastly, it was observed that there was a concise and diagrammatic overview of supply chain threats on the SLSA website.

Two new academic papers Two new scholarly papers were published this month. Firstly, Mathieu Acher, Benoît Combemale, Georges Aaron Randrianaina and Jean-Marc Jézéquel of the University of Rennes published Embracing Deep Variability For Reproducibility & Replicability. The authors describe their approach as follows:
In this short [vision] paper we delve into the application of software engineering techniques, specifically variability management, to systematically identify and explicit points of variability that may give rise to reproducibility issues (e.g., language, libraries, compiler, virtual machine, OS, environment variables, etc.). The primary objectives are: i) gaining insights into the variability layers and their possible interactions, ii) capturing and documenting configurations for the sake of reproducibility, and iii) exploring diverse configurations to replicate, and hence validate and ensure the robustness of results. By adopting these methodologies, we aim to address the complexities associated with reproducibility and replicability in modern software systems and environments, facilitating a more comprehensive and nuanced perspective on these critical aspects.
(A PDF of this article is available.)
Secondly, Ludovic Courtès, Timothy Sample, Simon Tournier and Stefano Zacchiroli have collaborated to publish a paper on Source Code Archiving to the Rescue of Reproducible Deployment. Their paper is motivated as follows:
The ability to verify research results and to experiment with methodologies are core tenets of science. As research results are increasingly the outcome of computational processes, software plays a central role. GNU Guix is a software deployment tool that supports reproducible software deployment, making it a foundation for computational research workflows. To achieve reproducibility, we must first ensure the source code of software packages Guix deploys remains available.
(A PDF of this article is also available.)

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 266, 267, 268 and 269 to Debian, making the following changes:
  • New features:
    • Use xz --list to supplement output when comparing .xz archives; essential when metadata differs. (#1069329)
    • Include xz --verbose --verbose (i.e. double) output. (#1069329)
    • Strip the first line from the xz --list output. [ ]
    • Only include xz --list --verbose output if the xz has no other differences. [ ]
    • Actually append the xz --list after the container differences, as it simplifies a lot. [ ]
  • Testing improvements:
    • Allow Debian testing to fail right now. [ ]
    • Drop apktool from Build-Depends; we can still test APK functionality via autopkgtests. (#1071410)
    • Add a versioned dependency for at least version 5.4.5 for the xz tests as they fail under (at least) version 5.2.8. (#374)
    • Fix tests for 7zip 24.05. [ ][ ]
    • Fix all tests after addition of xz --list. [ ][ ]
  • Misc:
    • Update copyright years. [ ]
In addition, James Addison fixed an issue where the HTML output showed only the first difference in a file, while the text output shows all differences [ ][ ][ ], Sergei Trofimovich amended the 7zip version test for older 7z versions that include the string [64] [ ][ ] and Vagrant Cascadian relaxed the versioned dependency to allow version 5.4.1 for the xz tests [ ] and proposed updates to guix for versions 267, 268 and pushed version 269 to Guix. Furthermore, Eli Schwartz updated the diffoscope.org website in order to explain how to install diffoscope on Gentoo [ ].
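If you have never run it, comparing two archives with diffoscope is a single command; a minimal example with hypothetical file names:
$ diffoscope old.tar.xz new.tar.xz                      # differences on the terminal
$ diffoscope --html report.html old.tar.xz new.tar.xz   # or as an HTML report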

Website updates There were a number of improvements made to our website this month, including Chris Lamb making the print CSS stylesheet nicer [ ]. Fay Stegerman made a number of updates to the page about the SOURCE_DATE_EPOCH environment variable [ ][ ][ ] and Holger Levsen added some of their presentations to the Resources page. Furthermore, IOhannes zmölnig stipulated support for SOURCE_DATE_EPOCH in clang version 16.0.0+ [ ], Jan Zerebecki expanded the Formal definition page and fixed a number of typos on the Buy-in page [ ] and Simon Josefsson fixed the link to Trisquel GNU/Linux on the Projects page [ ].

Upstream patches This month, we wrote a number of patches to fix specific reproducibility issues, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In May, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Enable the rebuilder-snapshot API on osuosl4. [ ]
    • Schedule the i386 architecture a bit more often. [ ]
    • Adapt cleanup_nodes.sh to the new way of running our build services. [ ]
    • Add 8 more workers for the i386 architecture. [ ]
    • Update configuration now that the infom07 and infom08 nodes have been reinstalled as real i386 systems. [ ]
    • Make diffoscope timeouts more visible on the #debian-reproducible-changes IRC channel. [ ]
    • Mark the cbxi4a-armhf node as down. [ ][ ]
    • Only install the hdmi2usb-mode-switch package on Debian bookworm and earlier [ ] and only install the haskell-platform package on Debian bullseye [ ].
  • Misc:
    • Install the ntpdate utility as we need it later. [ ]
    • Document the progress on the i386 architecture nodes at Infomaniak. [ ]
    • Drop an outdated and unnoticed notice. [ ]
    • Add live_setup_schroot to the list of so-called zombie jobs. [ ]
In addition, Mattia Rizzolo reinstalled the infom07 and infom08 nodes [ ] and Vagrant Cascadian marked the cbxi4a node as online [ ].

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via IRC (#reproducible-builds on irc.oftc.net) or our mailing list.

7 June 2024

Freexian Collaborators: Debian Contributions: DebConf Bursaries, /usr-move, sbuild, and more! (by Stefano Rivera)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf Bursary updates, by Utkarsh Gupta Utkarsh is the bursaries team lead for DebConf 24. Bursary requests are dispatched to a team of volunteers to review. The results are collated, adjusted and merged to produce priority lists of requests to fund. Utkarsh assembled the team, coordinated the review, and issued bursaries to attendees.

/usr-move, by Helmut Grohne More and more, the /usr-move transition is being carried out by multiple contributors, many of whom performed around a hundred of the requested uploads. Of these, Helmut contributed five patches and two uploads. As a result, there are fewer than 350 packages left to be converted, and all of the non-trivial cases have patches. We started with three times that number. Thanks to everyone involved for supporting this effort. For people interested in background information on this transition, Helmut gave a presentation at MiniDebConf Berlin 2024 (slides).

sbuild, by Helmut Grohne While the unshare mode of sbuild has existed for quite a while, it is now getting significant use in Debian, and new problems are popping up. Helmut looked into an apparmor-related failure and provided a diagnosis. While the relevant code would detect the chroot nature of a schroot backend and skip apparmor tests, the unshare environment would be just good enough to run and fail the test. As sbuild exposes fewer special kernel filesystems, the tests will be skipped again. Another problem popped up when gobject-introspection added a dependency on the host architecture Python interpreter in a cross build environment. sbuild would prefer installing (and failing with) a host architecture Python to installing the qemu alternative. Attempts to fix this would result in systemd killing sbuild: ischroot, as used by libc6.postinst, would not classify the unshare environment as a chroot, so libc6.postinst would run telinit, which would kill the build process. This is a complex interaction problem that shall eventually be solved by providing triggers from libc6 to be implemented by affected init systems.

Salsa CI updates, by Santiago Ruano Rincón Several issues arose about Salsa CI last month, and it is probably worth mentioning part of the challenges of defining its framework in YAML. With the upcoming end of support of Debian 10 buster as LTS, armel was removed from deb.debian.org, making the jobs that build images for buster/armel fail. While the removal of buster/armel from the repositories is a natural change, it shed some light on the flaws in the Salsa CI design regarding the support of the different Debian releases. Currently, the images are defined like this (from .images-debian.yml):
.all-supported-releases: &all-supported-releases
  - stretch
  - stretch-backports
  - buster
  - bullseye
  - bullseye-backports
  - bookworm
  - bookworm-backports
  - trixie
  - sid
  - experimental
And from them, different images are built according to the different jobs and how they are supported, for example:
images-prod-arm:
  stage: build
  extends: .build_template
  tags:
    - $SALSA_CI_ARM_RUNNER_TAG
  parallel:
    matrix:
      # Base image, all releases, all arches
      - IMAGE_NAME: base
        ARCH:
          - arm32v5
          - arm32v7
          - arm64v8
        RELEASE: *all-supported-releases
The removal of buster/armel could be easily reflected as:
images-prod-arm:
  stage: build
  extends: .build_template
  tags:
    - $SALSA_CI_ARM_RUNNER_TAG
  parallel:
    matrix:
      # Base image, fully supported releases, all arches
      - IMAGE_NAME: base
        ARCH:
          - arm32v5
          - arm32v7
          - arm64v8
        RELEASE:
          - stretch
          - buster
          - bullseye
          - bullseye-backports
          - bookworm
          - bookworm-backports
          - trixie
          - sid
          - experimental
      # buster only supports armhf and arm64
      - IMAGE_NAME: base
        ARCH:
          - arm32v7
          - arm64v8
        RELEASE: buster
Evidently, this increases duplication of the release support data, which is of course not optimal and is error-prone when changing the data about supported releases. A better approach would be to have two different YAML lists, such as:
# releases that have partial support. E.g.: buster is transitioning to
# Debian LTS, and buster armel is no longer found in deb.debian.org
.old-releases: &old-releases
  - stretch
  - buster

.currently-supported-releases: &currently-supported-releases
  - bullseye
  - bullseye-backports
  - bookworm
  - bookworm-backports
  - trixie
  - sid
  - experimental
and then a unified list:
.all-supported-releases: &all-supported-releases
  - *old-releases
  - *currently-supported-releases
that could be used in the matrix of the jobs that build all the images available in the pipeline container registry. However, due to limitations in GitLab, it is not possible to expand variables or mapping values in a parallel:matrix context, at least not in an elegant fashion. This is the kind of issue that recently arose and that Santiago is currently working to solve, in the simplest possible way. Astute readers will notice that stretch is listed among the fully supported releases; there is no problem with stretch, because it is built from archive.debian.org. Otto has actually tried to fix the broken image build job by doing the same, but it is still incorrect, because the security repository is not (yet) available on archive.debian.org. Additionally, Santiago has also worked on other merge requests, such as:
  1. support branch/tags as target head in the test projects,
  2. build the autopkgtest image on top of stable,
  3. add .yamllint and make it happy in the autopkgtest-lxc project,
  4. enable FF_SCRIPT_SECTIONS to log multiline commands, among others.

Archiving DebConf Websites, by Stefano Rivera DebConf, the annual Debian conference, has its own new website every year. These are typically complex dynamic web applications (featuring registration, call for papers, scheduling, etc.). Once the conference is over, there is no need to keep maintaining these applications, so we archive the sites off as static HTML and serve them from Debian's static CDN. Stefano archived the websites for the last two DebConfs. The schedule system behind DebConf 14 and 15's websites was a derivative of Canonical's summit system. This was only used for a couple of years before migrating to wafer, the current system. Archiving summit content has been on the nice-to-have list for years, but nobody had ever tackled it. The machine that served the sites went away a couple of years ago. After much digging, a backup of the database was found, and Stefano got this code running on an ancient Python 2.7. Recently Stefano put this all together and hooked in an archive export to finally get this content preserved.

Python 3.x and pypy3 security bug triage, by Stefano Rivera Stefano Rivera triaged all the open security bugs against the Python 3.x and PyPy3 packages for Debian's stable and LTS releases. Several had been fixed, but this wasn't recorded in the security tracker.

Linux livepatching support for Debian, by Santiago Ruano Rincón In collaboration with Emmanuel Arias, Santiago filed ITP bug #1070494. As stated in the bug, more than an Intent to Package, it is an Intent to Design and Implement live patching support for the Linux kernel in Debian. For now, Emmanuel and Santiago have done exploratory work and they are working to understand the different possibilities to implement livepatching. One possible direction is to rely on kpatch, and the other is to package the modules using regular packaging tools. They also need to evaluate whether it is possible to distribute the modules via packages, or instead as a service, as some commercial distributions do.

Miscellaneous contributions
  • Thorsten Alteholz uploaded cups-bjnp to improve packaging.
  • Colin Watson tracked down a baffling CI issue in openssh to unblock several merge requests, removed the user_readenv=1 option from its PAM configuration, and started on the first stage of his plan to split out GSS-API key exchange support to separate packages.
  • Colin did his usual routine work on the Python team, upgrading 26 packages to new upstream versions, and cherry-picking an upstream PR to fix a pytest 8 incompatibility in ipywidgets.
  • Colin NMUed a couple of packages to reduce the need for explicit overrides in Packages-arch-specific, and removed some other obsolete entries from there.
  • Emilio managed various library transitions, and helped finish a few of the remaining t64 transitions.
  • Helmut sent a patch for enabling piuparts to work as a regular user building on earlier work.
  • Helmut sent patches for 7 cross build failures, 6 other debian bugs and fixed an infrastructure problem in crossqa.debian.net.
  • Nicholas worked on a sponsored package upload, and discovered the blhc tool for diagnosing build hardening.
  • Stefano Rivera started and completed the re2 transition. The release team suggested moving to a virtual package scheme that includes the absl ABI (as re2 now depends on it), which Stefano adopted.
  • Stefano continued to work on DebConf 24 planning.
  • Santiago continued to work on DebConf 24 Content tasks as well as DebConf 25 organisation.

5 June 2024

Alberto García: More ways to install software in SteamOS: Distrobox and Nix

Introduction

In my previous post I talked about how to use systemd-sysext to add software to the Steam Deck without modifying the root filesystem. In this post I will give a brief overview of two additional methods.

Distrobox

distrobox is a tool that uses containers to create a mutable environment on top of your OS.

Distrobox running in SteamOS

With distrobox you can open a terminal with your favorite Linux distro inside, with full access to the package manager and the ability to install additional software. Containers created by distrobox are integrated with the system, so apps running inside have normal access to the user's home directory and the Wayland/X11 session. Since these containers are not stored in the root filesystem they can survive an OS update and continue to work fine. For this reason they are particularly suited to systems with an immutable root filesystem such as Silverblue, Endless OS or SteamOS.

Starting from SteamOS 3.5 the system comes with distrobox (and podman) preinstalled and it can be used right out of the box without having to do any previous setup. For example, in order to create a Debian bookworm container simply open a terminal and run this:
$ distrobox create -i debian:bookworm debbox
Here debian:bookworm is the image that this container is created from (debian is the name and bookworm is the tag; see the list of supported tags here) and debbox is the name given to this new container. Once the container is created you can enter it:
$ distrobox enter debbox
Or from the Debian entry in the desktop menu -> Lost & Found. Once inside the container you can run your Debian commands normally:
$ sudo apt update
$ sudo apt install vim-gtk3
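Containers you no longer need can be listed and removed with the matching distrobox subcommands; for example, with debbox being the container created above:
$ distrobox list          # show existing containers and their status
$ distrobox stop debbox   # stop the container
$ distrobox rm debbox     # and delete it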
Nix

Nix is a package manager for Linux and other Unix-like systems. It has the property that it can be installed alongside the official package manager of any distribution, allowing the user to add software without affecting the rest of the system.

Nix running in SteamOS

Nix installs everything under the /nix directory, and packages are made available to the user through a new entry in the PATH and a ~/.nix-profile symlink stored in the home directory. Nix is more than that, including being the basis of the NixOS operating system, and explaining it in more detail is beyond the scope of this blog post. For SteamOS users, though, perhaps its most interesting property is that the only thing Nix needs from SteamOS is help to set up the /nix directory so its contents are not stored in the root filesystem. This is already happening starting from SteamOS 3.5, so you can install Nix right away in single-user mode:
$ sudo chown deck:deck /nix
$ wget https://nixos.org/nix/install
$ sh ./install --no-daemon
This installs Nix and adds a line to ~/.bash_profile to set up the necessary environment variables. After that you can log in again and start using it. Here's a very simple example (refer to the official documentation for more details):
# Install and run Midnight Commander
$ nix-env -iA nixpkgs.mc
$ mc
# List installed packages
$ nix-env -q
mc-4.8.31
nix-2.21.1
# Uninstall Midnight Commander
$ nix-env -e mc-4.8.31
What we have seen so far is how to install Nix in single-user mode, which is the simplest mode and probably good enough for a single-user machine like the Steam Deck. The Nix project however recommends a multi-user installation; see here for the reasons. Unfortunately the official multi-user installer does not work out of the box on the Steam Deck yet, but if you want to go the multi-user way you can use the Determinate Systems installer: https://github.com/DeterminateSystems/nix-installer

Conclusion

Distrobox and Nix are useful tools and they give SteamOS users the ability to add additional software to the system without having to modify the base operating system. While for graphical applications the recommended way to install third-party software is still Flatpak, Distrobox and Nix give the user additional flexibility and are particularly useful for installing command-line utilities and other system tools.

4 June 2024

Dirk Eddelbuettel: ulid 0.4.0 on CRAN: Extended to Milliseconds

A new version of the ulid package is now on CRAN. The package provides universally unique lexicographically sortable identifiers (see the spec at GitHub for details), which offer sorting that UUIDs lack. The R package provides access via the standard C++ library, had been put together by Bob Rudis, and is now maintained by me.

Mark Heckmann noticed that a ulid round trip of generating and unmarshalling swallowed subsecond information, and posted on a well-known site I no longer go to. Duncan Murdoch was kind enough to open an issue to make me aware, and in it included the nice minimally complete verifiable example by Mark. It turns out that this issue was known, documented upstream in two issues, and fixed in a fork by the author of those issues, Chris Bove. It replaces time_t as the value of record (constrained to second resolution) with a proper std::chrono object which offers milliseconds (and much more, yay Modern C++). So I switched the two main files of the library to his, and updated the wrapper code to interface from POSIXct to the std::chrono object. And with that we are in business. The original example creates five ulids 100 milliseconds apart, which are then unmarshalled and printed here as a data.table (as data.frame by default truncates to seconds):
> library(ulid)
> gen_ulid <- \(sleep) replicate(5, {Sys.sleep(sleep); generate()})
> u <- gen_ulid(.1)
> df <- unmarshal(u)
> data.table::data.table(df)
                        ts              rnd
                    <POSc>           <char>
1: 2024-05-30 16:38:28.588 CSQAJBPNX75R0G5A
2: 2024-05-30 16:38:28.688 XZX0TREDHD6PC1YR
3: 2024-05-30 16:38:28.789 0YK9GKZVTED27QMK
4: 2024-05-30 16:38:28.890 SC3M3G6KGPH7S50S
5: 2024-05-30 16:38:28.990 TSKCBWJ3TEKCPBY0
>
We updated the documentation accordingly, and added some new tests as well. The NEWS entry for this release follows.

Changes in version 0.4.0 (2024-06-03)
  • Switch two functions to fork by Chris Bove using std::chrono instead of time_t for consistent millisecond resolution (#3 fixing #2)
  • Updated documentation showing consistent millisecond functionality
  • Added unit tests for millisecond functionality

Courtesy of my CRANberries, there is also a diffstat report for this release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 May 2024

Antoine Beaupré: Playing with fonts again

I am getting increasingly frustrated by Fira Mono's lack of italic support so I am looking at alternative fonts again.

Commit Mono This time I seem to be settling on either Commit Mono or Space Mono. For now I'm using Commit Mono because it's a little more compressed than Fira and does have an italic version. I don't like how Space Mono's parentheses (()) are "squarish"; they feel visually ambiguous with the square brackets ([]), a big no-no for my primary use case (code). So here I am using a new font, again. It required changing a bunch of configuration files in my home directory (which is in a private repository, sorry) and my Emacs configuration (thankfully that's public!). One gotcha is I realized I didn't actually have a global font configuration in Emacs, as some Faces define their own font family, which overrides the frame defaults. This is what it looks like, before:
A dark terminal showing the test sheet in Fira Mono
After:
A dark terminal showing the test sheet in Commit Mono
(Notice how those screenshots are not sharp? I'm surprised too. The originals look sharp on my display; I suspect this has something to do with the Wayland transition. I've tried with both grim and flameshot, for what it's worth.) They are pretty similar! Commit Mono feels a bit more vertically compressed, maybe too much so, actually -- the line height feels too low. But it's heavily customizable so that's something that's relatively easy to fix, if it's really a problem. Its weight is also a little heavier and wider than Fira, which I find a little distracting right now, but maybe I'll get used to it. All characters seem properly distinguishable, although, if I really wanted to nitpick, I'd say the © and ® are too different, with the latter (REGISTERED SIGN) being way too small, basically unreadable here. Since I see this sign approximately never, it probably doesn't matter at all. I like how the ampersand (&) is more traditional, although I'll miss the exotic one Fira produced... I like how the back quotes (`, GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As I mentioned before, I like how the bar on the "f" aligns with the tops of the other letters, something in Fira Mono that really annoys me now that I've noticed it (it's not aligned!).
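If you are following along at home, fontconfig can confirm that a newly installed font is visible and what the generic monospace alias currently resolves to (the family name must match whatever fc-list reports for your font):
$ fc-list | grep -i "commit mono"   # is the font installed and indexed?
$ fc-match monospace                # which family "monospace" resolves to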

A UTF-8 test file Here's the test sheet I've made up to test various characters. I could have sworn I had a good one like this lying around somewhere but couldn't find it so here it is, I guess.
US keyboard coverage:
abcdefghijklmnopqrstuvwxyz 1234567890-=[]\;',./
ABCDEFGHIJKLMNOPQRSTUVWXYZ~!@#$%^&*()_+ :"<>?
latin1 coverage:  
EURO SIGN, TRADE MARK SIGN: € ™
ambiguity test:
e coC0ODQ iI71lL! 
b6G&0B83  []() /\. 
zs$S52Z%   '" 
all characters in a sentence, uppercase:
the quick fox jumps over the lazy dog
THE QUICK FOX JUMPS OVER THE LAZY DOG
same, in french:
Portez ce vieux whisky au juge blond qui fume.
dès noël, où un zéphyr haï me vêt de glaçons würmiens, je dîne
d'exquis rôtis de bœuf au kir, à l'aÿ d'âge mûr, &cætera.
DÈS NOËL, OÙ UN ZÉPHYR HAÏ ME VÊT DE GLAÇONS WÜRMIENS, JE DÎNE
D'EXQUIS RÔTIS DE BŒUF AU KIR, À L'AŸ D'ÂGE MÛR, &CÆTERA.
Ligatures test:
-<< -< -<- <-- <--- <<- <- -> ->> --> ---> ->- >- >>-
=<< =< =<= <== <=== <<= <= => =>> ==> ===> =>= >= >>=
<-> <--> <---> <----> <=> <==> <===> <====> :: ::: __
<~~ </ </> /> ~~> == != /= ~= <> === !== !=== =/= =!=
<: := *= *+ <* <*> *> <  < >  > <. <.> .> +* =* =: :>
(* *) /* */ [   ]     ++ +++ \/ /\  - -  <!-- <!---
Box drawing alignment tests:
Dashes alignment test:
HYPHEN-MINUS, MINUS SIGN, EN, EM DASH, HORIZONTAL BAR, LOW LINE
--------------------------------------------------
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
––––––––––––––––––––––––––––––––––––––––––––––––––
——————————————————————————————————————————————————
――――――――――――――――――――――――――――――――――――――――――――――――――
__________________________________________________
Update: here is another such sample sheet, it's pretty good and has support for more languages while still being relatively small. So there you have it, I got completely nerd-sniped by typography again. Now I can go back to writing a too-long proposal again. Sources and inspiration for the above:
  • the unicode(1) command, to look up individual characters to disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a math symbol); see the example after this list
  • searchable list of characters and their names - roughly equivalent to the unicode(1) command, but in one page, amazingly the /usr/share/unicode database doesn't have any one file like this
  • bits/UTF-8-Unicode-Test-Documents - full list of UTF-8 characters
  • UTF-8 encoded plain text file - nice examples of edge cases, curly quotes example and box drawing alignment test which, incidentally, showed me I needed specific faces customisation in Emacs to get the Markdown code areas to display properly, also the idea of comparing various dashes
  • sample sentences in many languages - unused, "Sentences that contain all letters commonly used in a language"
  • UTF-8 sampler - unused, similar
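As an example of the unicode(1) lookups mentioned in the first item above, disambiguating the two dash-like characters looks roughly like this (output abridged; the exact format depends on the version packaged in Debian):
$ unicode 002d
U+002D HYPHEN-MINUS
$ unicode 2212
U+2212 MINUS SIGN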

Other fonts In my previous blog post about fonts, I had a list of alternative fonts, but it seems people are not digging through this, so I figured I would redo the list here to preempt "but have you tried Jetbrains mono" kind of comments. My requirements are:
  • no ligatures: yes, in the previous post, I wanted ligatures but I have changed my mind. after testing this, I find them distracting, confusing, and they often break the monospace nature of the display
  • monospace: this is to display code
  • italics: often used when writing Markdown, where I do make use of italics... Emacs falls back to underlining text when lacking italics which is hard to read
  • free-ish, ultimately should be packaged in Debian
Here is the list of alternatives I have considered in the past and why I'm not using them:
  • agave: recommended by tarzeau, not sure I like the lowercase a, a bit too exotic, packaged as fonts-agave
  • Cascadia code: optional ligatures, multilingual, not liking the alignment, ambiguous parenthesis (look too much like square brackets), new default for Windows Terminal and Visual Studio, packaged as fonts-cascadia-code
  • Fira Code: ligatures, was using Fira Mono from which it is derived, lacking italics except for forks, interestingly, Fira Code succeeds the alignment test but Fira Mono fails to show the X signs properly! packaged as fonts-firacode
  • Hack: no ligatures, very similar to Fira, italics, good alternative, fails the X test in box alignment, packaged as fonts-hack
  • Hermit: no ligatures, smaller, alignment issues in box drawing and dashes, packaged as fonts-hermit somehow part of cool-retro-term
  • IBM Plex: irritating website, replaces Helvetica as the IBM corporate font, no ligatures by default, italics, proportional alternatives, serifs and sans, multiple languages, partial failure in box alignment test (X signs), fancy curly braces contrast perhaps too much with the rest of the font, packaged in Debian as fonts-ibm-plex
  • Inconsolata: no ligatures, maybe italics? more compressed than others, feels a little out of balance because of that, packaged in Debian as fonts-inconsolata
  • Intel One Mono: nice legibility, no ligatures, alignment issues in box drawing, not packaged in Debian
  • Iosevka: optional ligatures, italics, multilingual, good legibility, has a proportional option, serifs and sans, line height issue in box drawing, fails dash test, not in Debian
  • Jetbrains Mono: (mandatory?) ligatures, good coverage, originally rumored to be not DFSG-free (Debian Free Software Guidelines) but ultimately packaged in Debian as fonts-jetbrains-mono
  • Monoid: optional ligatures, feels much "thinner" than Jetbrains, not liking alignment or spacing on that one, ambiguous 2Z, problems rendering box drawing, packaged as fonts-monoid
  • Mononoki: no ligatures, looks good, good alternative, suggested by the Debian fonts team as part of fonts-recommended, problems rendering box drawing, em dash bigger than en dash, packaged as fonts-mononoki
  • Source Code Pro: italics, looks good, but dash metrics look whacky, not in Debian
  • spleen: bitmap font, old school, spacing issue in box drawing test, packaged as fonts-spleen
  • sudo: personal project, no ligatures, zero originally not dotted, relied on metrics for legibility, spacing issue in box drawing, not in Debian
So, if I get tired of Commit Mono, I might probably try, in order:
  1. Hack
  2. Jetbrains Mono
  3. IBM Plex Mono
Iosevka, Mononoki and Intel One Mono are also good options, but have alignment problems. Iosevka is particularly disappointing as the EM DASH metrics are just completely wrong (much too wide). This was tested using the Programming fonts site which has all the above fonts, which cannot be said of Font Squirrel or Google Fonts, amazingly. Other such tools:
