Search Results: "tina"

29 July 2020

Dirk Eddelbuettel: Installing and Running Ubuntu on a 2015-ish MacBook Air

So a few months ago kiddo one dropped an apparently fairly large cup of coffee onto her one and only trusted computer. With a few months (then) to graduation (which by now happened), and with the apparent genius bar verdict of "it's a goner", a new one was ordered. As it turns out, this supposedly dead one coped well enough with the coffee that after a few weeks of drying it booted again. But given the newer one, its apparent age and whatnot, it was deemed surplus. So I poked around a little on the interwebs and concluded that yes, this could work. Fast forward a few months: I finally got hold of it and had some time to play with it. First, a bootable USB stick was prepared, and once the machine's content was really (really, and check again: really) no longer needed, I got hold of it for good.

tl;dr: It works just fine. It is a little heavier than I thought (isn't "air" supposed to be weightless?). The ergonomics seem quite nice. The keyboard is decent. Screen resolution on this pre-retina simple Air is so-so at 1440 pixels. But battery life seems OK, and e.g. the camera is way better than what I have in my trusted Lenovo X1 or at my desktop. So just as a Zoom client it may make a lot of sense; otherwise just walking around with it as a quick portable machine seems perfect (especially as my Lenovo X1 still (ahem) suffers from one broken key I really need to fix).

Below are some lightly edited notes from the installation. Initial steps were quick: maybe an hour or less? Customizing a machine takes longer than I remembered; this took a few minutes here and there quite a few times, but always incremental.

Initial Steps
  • Download of the Ubuntu 20.04 LTS image: took a few moments, even on broadband; feels slower than normal (fast!) Ubuntu package updates, maybe a lesser CDN or bad luck
  • Startup Disk Creator using a so-far unused 8 GB USB drive
  • Plug into USB, recycle power, press Option on the macOS keyboard: voilà
  • After a quick hunch, no to live/test only and yes to install, whole disk
  • Install easy, very few questions, somehow skips wifi
  • So activate wifi manually, and everything pretty much works

Customization
  • First deal with the fn and ctrl key swap. Installed git and followed this GitHub repo, which worked just fine. Yay. First (manual) Linux kernel module build needed in half a decade? Longer?
  • Fire up Firefox, go to "download Chrome", install Chrome. Sign in. Turn on syncing. Sign into Pushbullet and Momentum.
  • syncthing, which is excellent. Initially via apt, later from their PPA. Spent some time remembering how to set up the mutual handshakes between devices. Now syncing desktop/server, Lenovo X1 laptop, Android phone and this new laptop
  • keepassx via apt, set up using the Sync/ folder. Now all (encrypted) passwords are synced.
  • Discovered synergy is no longer really free, so after a quick search found and installed barrier (via apt) to have one keyboard/mouse from the desktop reach the laptop.
  • Added emacs via apt; so far "empty", no config files yet
  • Added ssh via apt, need to propagate keys to github and gitlab
  • Added R via add-apt-repository --yes "ppa:marutter/rrutter4.0" and add-apt-repository --yes "ppa:c2d4u.team/c2d4u4.0+". Added littler and then RStudio
  • Added wajig (apt frontend) and byobu, both via apt
  • Created ssh key, shipped it to server and github + gitlab
  • Cloned (not-public) dotfiles repo and linked some dotfiles in
  • Cloned git repo for nord-theme for gnome terminal and installed it; also added it to RStudio via this repo
  • Emacs installed, activated dotfiles, then incrementally installed a few elpa-* packages and a few via M-x package-install, including nord-theme, of course
  • Installed JetBrains Mono font from my own local package; activated for Gnome Terminal and Emacs
  • Install gnome-tweak-tool via apt, adjusted a few settings
  • Ran gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'
  • Set up camera following this useful GH repo
  • At some point also added slack and zoom, because, well, it is 2020
  • STILL TODO:
    • docker
    • bother with email setup?
    • maybe atom/code/ ?
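The R-related steps above might be scripted roughly like this. This is a sketch based on the PPA names given in the post; RStudio itself ships as a separately downloaded .deb, so it only appears as a comment:

```shell
# Sketch of the R setup described above (PPA names from the post)
sudo add-apt-repository --yes "ppa:marutter/rrutter4.0"
sudo add-apt-repository --yes "ppa:c2d4u.team/c2d4u4.0+"
sudo apt update
# r-cran-littler is the Debian/Ubuntu package providing littler
sudo apt install --yes r-base r-cran-littler
# RStudio is installed separately from a .deb downloaded from rstudio.com
```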

28 July 2020

Russ Allbery: Review: The City in the Middle of the Night

Review: The City in the Middle of the Night, by Charlie Jane Anders
Publisher: Tor
Copyright: February 2019
Printing: February 2020
ISBN: 1-4668-7113-X
Format: Kindle
Pages: 366
January is a tidally-locked planet divided between permanent night and permanent day, an unfortunate destination for a colony starship. Now, humans cling to a precarious existence along the terminator, huddling in two wildly different cities and a handful of smaller settlements, connected by a road through the treacherous cold. The novel opens with Sophie, a shy university student from the dark side of the city of Xiosphant. She has an overwhelming crush on Bianca, her high-class, self-confident roommate and one of the few people in her life to have ever treated her with compassion and attention. That crush, and her almost non-existent self-esteem, lead her to take the blame for Bianca's petty theft, resulting in what should have been a death sentence. Sophie survives only because she makes first contact with a native intelligent species of January, one that the humans have been hunting for food and sport. Sadly, I think this is enough Anders for me. I've now bounced off two of her novels, both for structural reasons that I think go deeper than execution and indicate a fundamental mismatch between what Anders wants to do as an author and what I'm looking for as a reader. I'll talk more about what this book is doing in a moment, but I have to start with Bianca and Sophie. It's difficult for me to express how much I loathed this relationship and how little I wanted to read about it. It took me about five pages to peg Bianca as a malignant narcissist and Sophie's all-consuming crush as dangerous codependency. It took the entire book for Sophie to figure out how awful Bianca is to her, during which Bianca goes through the entire abusive partner playbook of gaslighting, trivializing, contingent affection, jealous rage, and controlling behavior. And meanwhile Sophie goes back to her again, and again, and again, and again. If I hadn't been reading this book on a Kindle, I think it would have physically hit a wall after their conversation in the junkyard. 
This is truly a matter of personal taste and preference. This is not an unrealistic relationship; this dynamic happens in life all too often. I'm sure there is someone for whom reading about Sophie's spectacularly poor choices is affirming or cathartic. I've not personally experienced this sort of relationship, which doubtless matters. But having empathy for someone who is making awful and self-destructive life decisions and trusting someone they should not be trusting and who is awful to them in every way is difficult work. Sophie is the victim of Bianca's abuse, but she does so many stupid and ill-conceived things in support of this twisted relationship that I found it very difficult to not get angry at her. Meanwhile, Anders writes Sophie as so clearly fragile and uncertain and devoid of a support network that getting angry at her is like kicking a puppy. The result for me was spending nearly an entire book in a deeply unpleasant state of emotional dissonance. I may be willing to go through that for a close friend, but in a work of fiction it's draining and awful and entirely not fun. The other viewpoint character had the opposite problem for me. Mouth starts the book as a traveling smuggler, the sole survivor of a group of religious travelers called the Citizens. She's practical, tough, and guarded. Beneath that, I think the intent was to show her as struggling to come to terms with the loss of her family and faith community. Her first goal in the book is to recover a recording of Citizen sacred scripture to preserve it and to reconnect with her past. This sounds interesting on the surface, but none of it gelled. Mouth never felt to me like someone from a faith community. She doesn't act on Citizen beliefs to any meaningful extent, she rarely talks about them, and when she does, her attitude is nostalgia without spirituality. When Mouth isn't pursuing goals that turn out to be meaningless, she aimlessly meanders through the story. 
Sophie at least has agency and makes some important and meaningful decisions. Mouth is just there, even when Anders does shattering things to her understanding of her past. Between Sophie and Bianca putting my shoulders up around my ears within the first few pages of the first chapter and failing to muster any enthusiasm for Mouth, I said the eight deadly words ("I don't care what happens to these people") about a hundred pages in and the book never recovered. There are parts of the world-building I did enjoy. The alien species that Sophie bonds with is not stunningly original, but it's a good (and detailed) take on one of the alternate cognitive and social models that science fiction has dreamed up. I was comparing the strangeness and dislocation unfavorably to China Miéville's Embassytown while I was reading it, but in retrospect Anders's treatment is more decolonialized. Xiosphant's turn to Circadianism as their manifestation of order is a nicely understated touch, a believable political overreaction to the lack of a day/night cycle. That touch is significantly enhanced by Sophie's time working in a salon whose business model is to help Xiosphant residents temporarily forget about time. And what glimmers we got of politics on the colony ship and their echoing influence on social and political structures were intriguing. Even with the world-building, though, I want the author to be interested in and willing to expand the same bits of world-building that I'm engaged with. Anders didn't seem to be. The reader gets two contrasting cities along a road, one authoritarian and one libertine, which makes concrete a metaphor for single-axis political classification. But then Anders does almost nothing with that setup; it's just the backdrop of petty warlord politics, and none of the political activism of Bianca's student group seems to have relevance or theoretical depth.
It's a similar shallowness as the religion of Mouth's Citizens: We get a few fragments of culture and religion, but without narrative exploration and without engagement from any of the characters. The way the crew of the Mothership was assembled seems to have led to a factional and racial caste system based on city of origin and technical expertise, but I couldn't tell you more than that because few of the characters seem to care. And so on. In short, the world-building that I wanted to add up to a coherent universe that was meaningful to the characters and to the plot seemed to be little more than window-dressing. Anders tosses in neat ideas, but they don't add up to anything. They're just background scenery for Bianca and Sophie's drama. The one thing that The City in the Middle of the Night does well is Sophie's nervous but excited embrace of the unknown. It was delightful to see the places where a typical protagonist would have to overcome a horror reaction or talk themselves through tradeoffs and where Sophie's reaction was instead "yes, of course, let's try." It provided an emotional strength to an extended first-contact exploration scene that made it liberating and heart-warming without losing the alienness. During that part of the book (in which, not coincidentally, Bianca does not appear), I was able to let my guard down and like Sophie for the first time, and I suspect that was intentional on Anders's part. But, overall, I think the conflict between Anders's story-telling approach and my preferences as a reader are mostly irreconcilable. She likes to write about people who make bad decisions and compound their own problems. In one of the chapters of her non-fiction book about writing that's being serialized on Tor.com she says "when we watch someone do something unforgivable, we're primed to root for them as they search desperately for an impossible forgiveness." 
This is absolutely not true for me; when I watch a character do something unforgivable, I want to see repudiation from the protagonists and ideally some clear consequences. When that doesn't happen, I want to stop reading about them and find something more enjoyable to do with my time. I certainly don't want to watch a viewpoint character insist that the person who is doing unforgivable things is the center of her life. If your preferences on character and story arc are closer to Anders's than mine, you may like this book. Certainly lots of people did; it was nominated for multiple awards and won the Locus Award for Best Science Fiction Novel. But despite the things it did well, I had a truly miserable time reading it and am not anxious to repeat the experience. Rating: 4 out of 10

12 July 2020

Enrico Zini: Police brutality links

I was a police officer for nearly ten years and I was a bastard. We all were.
As nationwide protests over the deaths of George Floyd and Breonna Taylor are met with police brutality, John Oliver discusses how the histories of policing ...
Stefano Cucchi's death occurred in Rome on 22 October 2009 while the young man was in pre-trial detention. The causes of his death and the responsibilities for it are the subject of judicial proceedings that have involved, on one side, the doctors of the Pertini hospital, and on the other continue to involve, in various capacities, several members of the Arma dei Carabinieri. The case attracted public attention following the publication of the autopsy photos, later picked up by Italian press agencies, newspapers and TV news. The story has also inspired documentaries and feature films.
Giuseppe Uva's death occurred on 14 June 2008 after he was stopped while drunk, during the night between 13 and 14 June, by two Carabinieri who took him to the barracks, from which he was then transferred, for compulsory medical treatment, to the hospital in Varese, where he died the following morning of cardiac arrest. According to the prosecution's theory, the death was caused by the physical restraint suffered during the arrest and by the subsequent violence and torture he endured in the barracks. The trial of the two Carabinieri who carried out the arrest and of six other police officers acquitted the defendants of the charges of unintentional homicide and kidnapping. The documentary Viva la sposa by Ascanio Celestini is dedicated to the story.
The Aldrovandi case is the judicial saga arising from the killing of Federico Aldrovandi, a student from Ferrara, on 25 September 2005 following a police stop. On 6 July 2009 the courts sentenced four police officers to 3 years and 6 months in prison for "culpable excess in the legitimate use of weapons"; on 21 June 2012 the Court of Cassation upheld the conviction. The inquiry to establish the cause of death was followed by others into alleged cover-ups and into the complaints between the parties involved. The case received great media attention and inspired a documentary, È stato morto un ragazzo.
Federico Aldrovandi (17 July 1987, Ferrara – 25 September 2005, Ferrara) was an Italian student who was killed by four policemen.
24 June 2020

15 June 2020

Russ Allbery: Radical haul

Along with the normal selection of science fiction and fantasy, a few radical publishers have done book giveaways due to the current political crisis in the United States. I've been feeling for a while like I've not done my homework on diverse political theory, so I downloaded those. (That's the easy part; making time to read them is the hard part, and we'll see how that goes.)
Yarimar Bonilla & Marisol LeBrón (ed.) Aftershocks of Disaster (non-fiction anthology)
Jordan T. Camp & Christina Heatherton (ed.) Policing the Planet (non-fiction anthology)
Zachary D. Carter The Price of Peace (non-fiction)
Justin Akers Chacón & Mike Davis No One is Illegal (non-fiction)
Grace Chang Disposable Domestics (non-fiction)
Suzanne Collins The Ballad of Songbirds and Snakes (sff)
Angela Y. Davis Freedom is a Constant Struggle (non-fiction)
Danny Katch Socialism... Seriously (non-fiction)
Naomi Klein The Battle for Paradise (non-fiction)
Naomi Klein No is Not Enough (non-fiction)
Naomi Kritzer Catfishing on CatNet (sff)
Derek Künsken The Quantum Magician (sff)
Rob Larson Bit Tyrants (non-fiction)
Michael Löwy Ecosocialism (non-fiction)
Joe Macaré, Maya Schenwar, et al. (ed.) Who Do You Serve, Who Do You Protect? (non-fiction anthology)
Tochi Onyebuchi Riot Baby (sff)
Sarah Pinsker A Song for a New Day (sff)
Lina Rather Sisters of the Vast Black (sff)
Marta Russell Capitalism and Disability (non-fiction)
Keeanga-Yamahtta Taylor From #BlackLivesMatter to Black Liberation (non-fiction)
Keeanga-Yamahtta Taylor (ed.) How We Get Free (non-fiction anthology)
Linda Tirado Hand to Mouth (non-fiction)
Alex S. Vitale The End of Policing (non-fiction)
C.M. Waggoner Unnatural Magic (sff)
Martha Wells Network Effect (sff)
Kai Ashante Wilson Sorcerer of the Wildeeps (sff)

2 June 2020

Lisandro Damián Nicanor Pérez Meyer: Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa) - Fighting COVID-19

I have been quite absent from Debian stuff lately, even more so since COVID-19 hit us. In this blog post I'll try to sketch what I have been doing to help fight COVID-19 these last few months.

In the beginning

When the pandemic reached Argentina the government started a quarantine. We engineers (like engineers around the world) started to think about how to put our abilities to use to help with the situation. Some worked toward providing more protective equipment for medical staff, some toward increasing the number of ventilators available. Another group of people started thinking of other ways to help. In Bahía Blanca arose the idea of monitoring some variables remotely and en masse.

Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa)

This is where the idea of remotely monitored devices came in, and MoSimPa (from the Spanish "monitoreo simplificado de pacientes en situación de internación masiva") started to take form. The idea is simple: oximetry (SpO2), heart rate and body temperature will be recorded and, instead of being shown on a display on the device itself, will be transmitted and monitored in one or more places. This way medical staff doesn't have to reach a patient constantly, and the same staff can monitor more patients at the same time. In-place monitoring can also happen using a cellphone or tablet.

The devices do not have a screen of their own and almost no buttons, making them cheaper to build and thus more in line with the current economic reality of Argentina.


This is where the project Para Ayudar was created. The project aims to produce the aforementioned non-invasive device for use in health institutions, hospitals, intra-hospital transport and homes.

It is worth noting that the system is designed as a complementary measure for continuous monitoring of a patient. Care should be taken to check that symptoms and overall patient status don't indicate an immediate life threat. In other words, it is NOT designed for ICUs.

All the above done with Free/Libre/Open Source software and hardware designs. Any manufacturing company can then use them for mass production.

The importance of early pneumonia detection


We were already working on MoSimPa when an NYTimes article caught our attention: "The Infection That's Silently Killing Coronavirus Patients". From the article:

A vast majority of Covid pneumonia patients I met had remarkably low oxygen saturations at triage, seemingly incompatible with life, but they were using their cellphones as we put them on monitors. Although breathing fast, they had relatively minimal apparent distress, despite dangerously low oxygen levels and terrible pneumonia on chest X-rays.

This greatly reinforced the idea we were on the right track.

The project from a technical standpoint


As the project is primarily designed for and by Argentinians, the current system design and software documentation is written in Spanish, but the source code (or at least most of it) is written in English. Should anyone need it in English, please do not hesitate to ask me.

General system description

System schema

The system is comprised of the devices, a main machine acting as a server (in our case, for small setups, a Raspberry Pi) and the possibility of accessing data through cell phones, tablets or other PCs on the network.

The hardware


As of today this is the only part for which I still can't provide schematics, but I'll update this blog post and the technical docs with them as soon as I get my hands on them.

Again, the design is due to be built in Argentina, where getting our hands on hardware is not easy. Moreover, it needs to be as cheap as possible, especially now that the Argentinian currency, the peso, depreciates further every day. So we decided on using an ESP32 as the main microprocessor and a set of Maxim sensor devices. Again, more info when I have them at hand.

The software


Here we have many more components to describe. Firstly, the ESP32 code is written with the Arduino SDK. This part of the stack will receive many updates as soon as the first hardware prototypes are out.

For the rest of the stack I decided to go ahead with whatever is available in Debian stable. Why? Well, Raspbian provides a Debian-stable-based image and I'm a Debian Developer, so things should come naturally to me on that front. Of course each component has its own packaging. I'm one of Debian's Qt maintainers, so using Qt will also be quite natural for me. Plots? Qwt, of course. And with that I have most of my necessities fulfilled. I chose PostgreSQL as the database server and Mosquitto as the MQTT broker.

Between the database and MQTT is mosimpa-datakeeper. The piece of software from which medical staff monitor patients is unsurprisingly called mosimpa-monitor.
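For what it's worth, the broker side of that pipeline can be poked at with the standard Mosquitto command-line clients. Note that the hostname, topic layout and payload format below are made-up examples, since the post does not document MoSimPa's actual MQTT topics:

```shell
# Watch everything published under a hypothetical mosimpa/ topic tree
# (host and topics are illustrative examples only)
mosquitto_sub -h raspberrypi.local -t 'mosimpa/#' -v

# Publish a fake reading to exercise the datakeeper path
# (payload format is invented for illustration)
mosquitto_pub -h raspberrypi.local -t 'mosimpa/device01/vitals' \
  -m '{"spo2": 97, "heart_rate": 72, "temp_c": 36.8}'
```

mosquitto_sub blocks and prints each message as it arrives, which makes it handy for confirming a device is publishing before debugging the datakeeper or the database side.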

mosimpa-monitor
MoSimPa's monitor main screen

mosimpa-monitor plots
Plots of a patient's data


mosimpa-monitor-alarms-setup
Alarm thresholds setup


And for managing patients, devices, locations and internments (CRUD anyone?) there is currently a Qt-based application called mosimpa-abm.

mosimpa-abm
ABM main screen


mosimpa-abm-internments
ABM internments view

The idea is to replace it with a web service so it doesn't need to be confined to the RPi or require installation on other machines. I considered using WebAssembly, but I would have to also build PostgreSQL in order to compile Qt's plugin.

Translations? Of course! As I have already mentioned, the code is written in English. Qt makes it easy to translate applications, so I keep a Spanish translation up to date as the code changes (we are primarily targeting Spanish-speaking people). But of course this also means it can be easily translated to whichever language is necessary.

Even though I am a packager, I still have some stuff to fix in the packaging itself, like letting datakeeper run under its own user. I just haven't gotten to it yet.



Certifications


We are working towards getting the system certified by ANMAT, the Argentinian equivalent of the US FDA.

Funding


While all the people involved are working ad honorem, funding is still required in order to buy materials, build the prototypes, etc. The project created payment links with Mercado Pago (in Spanish and Argentine pesos) and other bank methods (PDF, also in Spanish).

I repeat the links here with an approximate conversion to US$.

- 500 AR$ (less than 8 US$)
- 1000 AR$ (less than 15 US$)
- 2000 AR$ (less than 30 US$)
- 3000 AR$ (less than 45 US$)
- 5000 AR$ (less than 75 US$)

You can check the actual conversion rate at https://www.google.com/search?q=argentine+peso+to+us+dollars

The project was also presented at a funding call of the Argentinian Agencia de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación (Agencia I+D+i). More than 900 projects were presented and 64 were funded, MoSimPa among them.

3 May 2020

Evgeni Golov: Remote management for OpenWRT devices without opening inbound connections

Everyone is working from home these days and needs a decent Internet connection. That's especially true if you need to do video calls and the room you want to do them in has the worst WiFi coverage of the whole flat. Well, that's exactly what happened to my parents-in-law. When they moved in, we knew that at some point we'd have to fix the WiFi - the ISP-provided DSL/router/WiFi combo would not cut it, especially not with the shape of the flat and the elevator shaft in the middle of it: the flat is essentially a big C around said shaft. But it was good enough for email, so we postponed that. Until now. The flat has wired Ethernet, but the users' MacBook Air does not. That would have been too easy, right? So let's add another access point and hope the situation improves. Luckily I still had a TP-Link Archer C7 AC1750 in a drawer, which I could quickly flash with a fresh OpenWRT release, disable DHCPd on and configure with the same SSID and keys as the main/old WiFi. But I didn't know which channels would be best in the destination environment. Under normal circumstances, I'd just take the AP, drive to my parents-in-law and finish the configuration there. Nope, not gonna happen these days. So my plan was to finish the configuration here, put the AP in a box and on the porch where someone could pick it up. But this would leave me without a way to further configure the device once it had been deployed - I was not particularly interested in trying to get port forwarding configured via phone, and I was pretty sure UPnP was disabled in the ISP router. Installing a Tor hidden service for SSH was one possibility, setting up a VPN and making the AP a client another. Well, or just creating a reverse tunnel with SSH!

sshtunnel

Creating a tunnel with OpenSSH is easy: ssh -R127.0.0.1:2222:127.0.0.1:22 server.example.com will forward localhost:2222 on server.example.com to port 22 of the machine the SSH connection originated from. But what happens if the connection dies?
Adding a while true; do ; done loop around it might help, but I would really like not to reinvent the wheel here! Thankfully, somebody already invented that particular wheel, and OpenWRT comes with an sshtunnel package that takes care of setting up and keeping up such tunnels, plus documentation on how to do so. Just install the sshtunnel package, edit /etc/config/sshtunnel to contain a server stanza with hostname, port and username and a tunnelR stanza referring to said server plus the local and remote sides of the tunnel, and you're good to go.
config server home
  option user     user
  option hostname server.example.com
  option port     22
config tunnelR local_ssh
  option server         home
  option remoteaddress  127.0.0.1
  option remoteport     2222
  option localaddress   127.0.0.1
  option localport      22
The only caveat is that sshtunnel needs the OpenSSH client binary (and the package correctly depends on it) and OpenWRT does not ship the ssh-keygen tool from OpenSSH, only the equivalent for Dropbear. As OpenSSH can't read Dropbear keys (and vice versa), you'll have to generate the key somewhere else and deploy it to the OpenWRT box and the target system. Oh, and OpenWRT defaults to enabling password login via SSH, so please disable that if you expose the box to the Internet in any way!

Using the tunnel

After configuring and starting the service, you'll see the OpenWRT system logging in to the configured remote and opening the tunnel. For some reason that connection would not show up in the output of w -- probably because there was no shell started or something, but the logs show it clearly. Now it's just a matter of connecting to the newly open port and you're in. As the port is bound to 127.0.0.1, the connection is only possible from server.example.com or using it as a jump host via OpenSSH's ProxyJump option: ssh -J server.example.com -p 2222 root@localhost. Additionally, you can forward a local port over the tunneled connection to create a tunnel for the OpenWRT web interface: ssh -J server.example.com -p 2222 -L8080:localhost:80 root@localhost. Yes, that's a tunnel inside a tunnel, and all the network engineers will go brrr, but it works and you can access LuCI on http://localhost:8080 just fine. If you don't want to type that every time, create an entry in your .ssh/config:
Host openwrt
  ProxyJump server.example.com
  HostName localhost
  Port 2222
  User root
  LocalForward 8080 localhost:80
And we're done. Enjoy easy access to the newly deployed box and carry on.
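Since OpenWRT ships only Dropbear's key tooling but sshtunnel needs an OpenSSH key, generating the key on another machine might look like this sketch (hostnames and file paths are examples, not prescriptions):

```shell
# Generate an OpenSSH ed25519 keypair on a regular Linux machine
ssh-keygen -t ed25519 -f ./openwrt_tunnel -N '' -C 'openwrt-sshtunnel'

# Then deploy it (example commands, adjust to your setup):
#   copy the private key to the OpenWRT box:
#     scp ./openwrt_tunnel root@openwrt.lan:/root/.ssh/id_ed25519
#   and authorize the public key on the tunnel server:
#     ssh-copy-id -f -i ./openwrt_tunnel.pub user@server.example.com
```

An empty passphrase (-N '') is what lets the unattended sshtunnel service use the key; protect the private key file accordingly.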

1 May 2020

Paul Wise: FLOSS Activities April 2020

Changes

Issues

Review

Administration
  • myrepos: fix the forum
  • Debian: restart non-responsive tor daemon, restart processes due to OOM, apply debian.net changes for DD with expired key
  • Debian wiki: approve accounts
  • Debian QA services: deploy changes, auto-disable oldoldstable pockets

Communication

Sponsors The purple-discord work was sponsored by my employer. All other work was done on a volunteer basis.

30 April 2020

Jonathan McDowell: Let's talk about work/life balance in lock down

A SYNCNI article passed by on my Twitter feed this morning, talking about balancing work and life while working from home in these times of COVID-19 inspired lock down. The associated Twitter thread expressed an interest in some words of advice from men to other men (because of course the original article has the woman having to do all the balancing). This post does not contain the words of advice searched for, but it hopefully at least offers some reassurance that if you're finding all of this difficult you're not alone. From talking to others I don't think there's anything particularly special we're doing in this house; a colleague is taking roughly the same approach, and some of the other folk I've spoken to in the local tech scene seem to be doing likewise. First, the situation. Like many households both my wife and I work full time. We have a small child (not even a year and a half old yet). I work for a software startup; my wife is an HR business partner for a large multinational, dealing with employees all over the UK and Ireland. We're both luckily able to work from home easily - our day to day work machines are laptops, and our employers were already set up with the appropriate VPN / video conferencing etc. facilities. Neither of us has seen any decrease in workload since lock down; there are always more features and/or bugs to work on when it comes to a software product, and, as I'm sure you can imagine, there has been a lot more activity in the HR sphere over the past 6 weeks as companies try to work out what to do. On top of this our childcare arrangements, like everyone else's, are completely gone. Nursery understandably shut down around the same time as the schools (slightly later, but not by much) and contact with grandparents is obviously out (which they're finding hard). So we're left with trying to fit 2 full time jobs in with full time childcare, of a child who up until recently tried to go down stairs by continuing to crawl forward.
Make no mistake, this is hard. I know we are exceptionally lucky in our situation, but that doesn't mean we're finding it easy. We've adopted an approach of splitting the day up. I take the morning slot (previously I would have got up with our son anyway), getting him up and fed while my wife showers. She takes over for a bit while I shower and dress, then I take over again in time for her to take her daily 8am conference call. My morning is mostly taken up with childcare until nap time; I try to check in first thing to make sure there's nothing urgent, and get a handle on what I might have to work on later in the day. My local team mates know they're more likely to get me late morning and it's better to arrange meetings in the afternoon. Equally I work with a lot of folk on the US West coast, so shifting my hours to be a bit later is not a problem there. After nap time (which, if we're lucky, takes us to lunch) my wife takes over. As she deals with UK/Ireland folk she often ends up having to take calls even while looking after our son; generally important meetings can be moved to the morning, and meetings with folk who understand there might be a lot of pot banging going on in the background can happen in the afternoon. Having started late I generally work late - past the point where I'd normally get home; if I'm lucky I pop my head in for bath time, but sometimes it's only for a couple of minutes. We alternate cooking, usually based on work load + meetings. For example tonight I'm cooking while my wife catches up on some work after having put our son to bed. Last night I had a few meetings so my wife cooked. So what's worked for us? Splitting the day means we can plan with our co-workers. We always make sure we eat together in the evening, and that generally is the cut-off point for either of us doing any work. I'm less likely to be online in the evening because my study has become the place I work.
That means I'm not really doing any of my personal projects - this definitely isn't a case of being at home during lock down and having lots of time to achieve new things. It's much more a case of trying to find a sustainable way to get through the current situation, however long it might last.

26 April 2020

Enrico Zini: Some Italian women

Artemisia Gentileschi - Wikipedia
art history people archive.org
Artemisia Lomi or Artemisia Gentileschi (July 8, 1593 - c. 1656) was an Italian Baroque painter, now considered one of the most accomplished seventeenth-century artists working in the dramatic style of Caravaggio. In an era when women had few opportunities to pursue artistic training or work as professional artists, Artemisia was the first woman to become a member of the Accademia di Arte del Disegno in Florence and had an international clientele.
Maria Pellegrina Amoretti (1756-1787) was an Italian lawyer. She is referred to as the first woman to graduate in law in Italy, and the third woman to earn a degree.
Laura Maria Caterina Bassi (October 1711 - 20 February 1778) was an Italian physicist and academic. She received a doctoral degree in Philosophy from the University of Bologna in May 1732. She was the first woman to earn a professorship in physics at a university. She is recognized as the first woman in the world to be appointed a university chair in a scientific field of studies. Bassi contributed immensely to the field of science while also helping to spread the study of Newtonian mechanics through Italy.
Maria Gaetana Agnesi (16 May 1718 - 9 January 1799) was an Italian mathematician, philosopher, theologian, and humanitarian. She was the first woman to write a mathematics handbook and the first woman appointed as a mathematics professor at a university.
Elena Lucrezia Cornaro Piscopia or Elena Lucrezia Corner (5 June 1646 - 26 July 1684), also known in English as Helen Cornaro, was a Venetian philosopher of noble descent who in 1678 became one of the first women to receive an academic degree from a university, and the first to receive a Doctor of Philosophy degree.
Maria Tecla Artemisia Montessori (August 31, 1870 - May 6, 1952) was an Italian physician and educator best known for the philosophy of education that bears her name, and for her writing on scientific pedagogy. At an early age, Montessori broke gender barriers and expectations when she enrolled in classes at an all-boys technical school, with hopes of becoming an engineer. She soon had a change of heart and began medical school at the Sapienza University of Rome, where she graduated with honors in 1896. Her educational method is still in use today in many public and private schools throughout the world.
Rita Levi-Montalcini OMRI OMCA (22 April 1909 - 30 December 2012) was an Italian Nobel laureate, honored for her work in neurobiology. She was awarded the 1986 Nobel Prize in Physiology or Medicine jointly with colleague Stanley Cohen for the discovery of nerve growth factor (NGF). From 2001 until her death, she also served in the Italian Senate as a Senator for Life, an honor given for her significant scientific contributions. On 22 April 2009, she became the first Nobel laureate ever to reach the age of 100, and the event was feted with a party at Rome's City Hall. At the time of her death, she was the oldest living Nobel laureate.
Margherita Hack Knight Grand Cross OMRI (12 June 1922 - 29 June 2013) was an Italian astrophysicist and science communicator. The asteroid 8558 Hack, discovered in 1995, was named in her honour.
Samantha Cristoforetti (born 26 April 1977 in Milan) is an Italian European Space Agency astronaut, former Italian Air Force pilot, and engineer. She holds the record for the longest uninterrupted spaceflight by a European astronaut (199 days, 16 hours) and held the record for the longest single space flight by a woman until it was broken by Peggy Whitson in June 2017, and later by Christina Koch. She is also the first Italian woman in space, and the first person to brew an espresso in space.

25 April 2020

Reinhard Tartler: Building Packages with Buildah in Debian

1 Building Debian Packages with buildah
Building packages in Debian seems to be a solved problem. But is it? At bottom, installing the dpkg-dev package provides all the basic tools needed. Assuming that you have already created the necessary packaging metadata (i.e., debian/changelog, debian/control, debian/copyright, etc., and there are great helper tools for this, such as dh-make, dh-make-golang, etc.), it should be as simple as invoking the dpkg-buildpackage tool. So what's the big deal here?

The issue is that dpkg-buildpackage expects to be called with an appropriately set up build context, that is, it needs to be called in an environment that satisfies all build dependencies of the package. Let's say you are building a package for Debian unstable on your Debian stable system (this is the common scenario for the official Debian build machines): you would need your build to link against libraries in unstable, not stable. So how do you tell the package build process where to find its dependencies?

The answer (in Debian and many other Linux distributions) is: you do not at all. This is actually a somewhat surprising answer for software developers without a Linux distribution development background1. Instead, chroots "simulate" an environment that has all the dependencies we want to build against installed at the standard system locations, that is, /usr/lib, etc.

Chroots are basically full system installations in a subdirectory, including system and application libraries. In order to use them, a package build needs to use the chroot(2) system call, which is a privileged operation. Creating these system installations is also a somewhat finicky process. In Debian, we have tools that make this process easier; the most popular ones are probably pbuilder(1)2 and sbuild(1)3. Still, they are somewhat clumsy to use and add significant levels of abstraction, in the sense that they do quite a bit of magic behind the scenes to hide the fact that privileged operations are required (you need root to run chroot(2), etc.). They are also somewhat brittle: for instance, if a build process is aborted (SIGKILL, or a system crash), you may end up with temporary directories and files owned by userids other than your own, which again may require root privileges to clean up.

What if there was an easy way to do all of the above without any process running as root? Enter rootless buildah.

Modern Linux container technologies allow unprivileged users to "untie" a process from the system (cf. the unshare(2) system call). This means that a regular user may run a process (and its child processes) in an "environment" where system calls behave differently and provide a configurable amount of isolation. This article demonstrates a novel build tool, buildah, which is:
  • easy to use: build environments can be set up from the command line
  • secure: no process needs to run as root
  • simple in architecture: it requires no running daemon like, for example, docker
  • convenient: you can debug your debian/rules interactively
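The kind of unprivileged isolation that unshare(2) provides can be seen directly with the unshare(1) utility from util-linux; this is a sketch, assuming user namespaces are enabled on your kernel (see the sysctl step below):

```shell
# Enter a new user namespace, mapping our current uid to root inside it.
# Outside the namespace we remain an ordinary unprivileged user.
unshare --user --map-root-user id -u
```

Inside such a namespace, id -u reports 0, and the process may perform otherwise privileged operations like mount(2) against its own view of the system, which is exactly what rootless container tools build upon.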
Architecturally, buildah is written in golang and compiled as a (mostly) statically linked executable. It builds on top of a number of libraries written in golang, including github.com/containers/storage and github.com/containers/image. The overlay functionality is provided by the fuse-overlayfs(1) utility.

1.1 Preparation:
The kernels in Debian bullseye (and in buster, and recent Ubuntu kernels, to the best of my knowledge) support user namespaces, but leave them disabled by default. Here is how to enable them:
echo kernel.unprivileged_userns_clone=1 | sudo tee -a /etc/sysctl.d/containers.conf
sudo sysctl -p /etc/sysctl.d/containers.conf
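To confirm the setting took effect, you can read the value back; this is a hypothetical verification step, and note that the kernel.unprivileged_userns_clone key only exists on kernels carrying Debian's patch:

```shell
# Prints 1 if unprivileged user namespaces are enabled, 0 otherwise
sysctl -n kernel.unprivileged_userns_clone
```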
I have to admit that I'm not entirely sure why the Debian kernels don't enable user namespaces by default. I've found a reference on stackexchange4 that claims this disables some "hardening" features in the Debian kernel. I also understand this step is not necessary if you choose to compile and run a vanilla upstream kernel. I'd appreciate a better reference and am happy to update this text. With user namespaces enabled, we can create a container from the Debian sid image:
$ c=$(buildah from docker.io/debian:sid)
Getting image source signatures
Copying blob 2bbc6b8c460d done
Copying config 9b90abe801 done
Writing manifest to image destination
Storing signatures
This command downloads the image from docker.io and stores it locally in your home directory. Let's install essential utilities for building Debian packages:
1: buildah run $c apt-get update -qq
2: buildah run $c apt-get install dpkg-dev -y
3: buildah config --workingdir /src $c
4: buildah commit $c dpkg-dev
The commands on lines 1 and 2 install compilers and dpkg development tools such as dpkg-buildpackage. The buildah config command on line 3 arranges that whenever you start a shell in the container, the current working directory in the container is /src. Don't worry about this location not existing yet; we will make sources from your host system available there. The last command (line 4) creates an OCI image with the name dpkg-dev. By the way, the name you give the image in the commit command can also be used with podman (but not the container reference $c itself). See 5 and 6 for a comparison between podman and buildah.
buildah images -a
The output might look like this:
REPOSITORY                 TAG     IMAGE ID       CREATED          SIZE
localhost/dpkg-dev         latest  b85c34f95d3e   16 seconds ago   406 MB
docker.io/library/debian   sid     9b90abe801db   11 hours ago     124 MB

1.2 Running a package build
Now we have a working container with the reference in the variable $c. To use it conveniently with source packages that I have stored in /srv/scratch/packages/containers, let's introduce a shell alias r like this:
alias r='buildah run --tty -v /srv/scratch/packages/containers:/src $c '
This allows you to easily execute commands in that container:
r pwd
r bash
The last command will give you an interactive shell that we'll be using for building packages!
siretart@x1:~/scratch/packages/containers/libpod$ r bash

root@x1:/src# cd golang-github-openshift-imagebuilder

root@x1:/src/golang-github-openshift-imagebuilder# dpkg-checkbuilddeps
dpkg-checkbuilddeps: error: Unmet build dependencies: debhelper (>= 11) dh-golang golang-any golang-github-containers-storage-dev (>= 1.11) golang-github-docker-docker-dev (>= 18.09.3+dfsg1) golang-github-fsouza-go-dockerclient-dev golang-glog-dev

root@x1:/src/golang-github-openshift-imagebuilder# apt-get build-dep -y .
Note, using directory '.' to get the build dependencies
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
[...]

root@x1:/src/golang-github-openshift-imagebuilder# dpkg-checkbuilddeps

root@x1:/src/golang-github-openshift-imagebuilder#
Now we have an environment that has all build dependencies available, and we are ready to build the package:
root@x1:/src/golang-github-openshift-imagebuilder# dpkg-buildpackage -us -uc -b
Assuming the package builds, the build results are placed in /src inside the container and are visible at ~/scratch/packages/containers on the host. There you can inspect, extract, or even install them. The latter allows you to interactively rebuild packages against updated dependencies, without the need to set up an apt archive or similar.
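The whole workflow can be wrapped into a small helper; this is a sketch only, assuming the localhost/dpkg-dev image committed earlier, a host directory of unpacked source packages as the current directory, and a made-up function name:

```shell
# Hypothetical helper: build the named source subdirectory inside a fresh
# container based on the localhost/dpkg-dev image, then clean up.
# Usage: build_in_container <source-subdir>
build_in_container() {
    c=$(buildah from localhost/dpkg-dev) || return 1
    buildah run -v "$PWD":/src "$c" sh -c \
        "cd /src/$1 && apt-get build-dep -y . && dpkg-buildpackage -us -uc -b"
    rc=$?
    buildah rm "$c" >/dev/null   # remove the working container, rootless
    return $rc
}
```

Because everything runs rootless, an aborted build leaves at most a stale working container behind, which buildah rm can remove without root privileges.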

2 Availability
The buildah(1) tool has been available in debian/bullseye since 2019-12-17. Since it is a golang binary, it only links dynamically against system libraries such as libselinux and libseccomp, which are all available in buster-backports. I'd expect buildah to just work on a debian/buster system as well, provided you install those system libraries and possibly the backported Linux kernel.

Keeping the package at the latest upstream version is challenging because of its fast development pace that picks up new dependencies on golang libraries and new versions of existing libraries with every new upstream release. In general, this requires updating several source packages, and in many cases also uploading new source packages to the Debian archive that need to be processed by the Debian ftp-masters.

As an exercise, I suggest installing the buildah package from bullseye, cloning the packaging repository from https://salsa.debian.org/go-team/packages/golang-github-containers-buildah with git, and building the latest version yourself. Note, I would expect the above to work even on a Fedora laptop.

The Debian packages have not made their way into the Ubuntu distributions yet, but I expect them to be included in the Ubuntu 20.10 release. In the meantime, Ubuntu users can install the packages provided by the upstream maintainers in the "Project Atomic" PPA at https://launchpad.net/~projectatomic/+archive/ubuntu/ppa

3 Related Tools
The buildah tool is accompanied by two "sister" tools:

The skopeo package provides tooling to work with remote image registries. It allows you to download and upload images to remote registries and to convert container images between different formats. It has been available in Debian since 2020-04-20.

The podman tool is a 'docker' replacement. It provides a command-line interface that mimics the original docker command to such an extent that a user familiar with docker might want to place this in their ~/.bashrc file:
alias docker='podman'
Unfortunately, at the time of writing, podman is still being processed by the ftp-masters (since 01-03-2020). For now, I recommend building the package from our salsa repository at https://salsa.debian.org/debian/libpod

4 Conclusion
Building packages in the right build context is a fairly technical issue, and many tools have been written to address it. They come with different trade-offs when it comes to usability. Containers promise a secure alternative to the tried and proven chroot-based approaches, and buildah makes this technology very easy to use.

I'd love to get into a conversation with you about how these tools work for you, and would like to encourage participation and assistance with keeping this complicated software stack up-to-date in Debian (and, by extension, in derived distributions such as Ubuntu).

Footnotes:
1
At my day job, we build millions of lines of C++ code on Solaris 10 and AIX 6, where concepts such as "chroots" are restricted to the super user 'root' and are therefore not available to developers, not even through wrappers. Instead, libraries and headers are installed into "refroots", that is, subdirectories that mimic the structure of the "sysroot" directories used in the embedded Linux community for cross-compiling, and we use Makefiles that set include and linker flags (-I and -L) to tell the compiler and linker where to look.

21 April 2020

Gunnar Wolf: Fresh, daily built Debian images for your Raspberries (finally!)

So, after I took over (or at least said I would take over) maintainership of the Raspberry Pi build images originally produced by Michael Stapelberg in mid-2018, I pushed very few updates. Yes, there was some good and visible work: I migrated the image-spec repository from GitHub to Salsa, ironed out several outstanding issues, and, most important for me, managed to get an image built and running for all Raspberry models below the Raspberry Pi 4. Today, I am happy to announce a very good step in the right direction, so I am now

Announcing raspi.debian.net! I have registered and set up an (unofficial!) site, raspi.debian.net, that should serve as a go-grab-it destination for regularly autobuilt images for the Raspberry Pi family. Of course, today you will get an early view of the site. I plan to fix the lack of https (due to a mistake on my part with regard to the processes at my hosting provider), do some testing on the hardware I have handy (and bless + gpg-sign some images as tested), migrate and update the information in the Debian Wiki, etc. Oh! And I hope to get good news on the Raspberry Pi 4 front, as I know work is actively underway to achieve it.

6 April 2020

Russ Allbery: Review: Thick

Review: Thick, by Tressie McMillan Cottom
Publisher: The New Press
Copyright: 2019
ISBN: 1-62097-437-1
Format: Kindle
Pages: 247
Tressie McMillan Cottom is an associate professor of sociology at Virginia Commonwealth University. I first became aware of her via retweets and recommendations from other people I follow on Twitter, and she is indeed one of the best writers on that site. Thick: And Other Essays is an essay collection focused primarily on how American culture treats black women. I will be honest here, in part because I think much of the regular audience for my book reviews is similar to me (white, well-off from working in tech, and leftist but privileged) and therefore may identify with my experience. This is the sort of book that I always want to read and then struggle to start because I find it intimidating. It received a huge amount of praise on release, including being named as a finalist for the National Book Award, and that praise focused on its incisiveness, its truth-telling, and its depth and complexity. Complex and incisive books about racism are often hard for me to read; they're painful, depressing, and infuriating, and I have to fight my tendency to come away from them feeling more cynical and despairing. (Despite loving his essays, I'm still procrastinating reading Ta-Nehisi Coates's books.) I want to learn and understand but am not good at doing anything with the information, so this reading can feel like homework. If that's also your reaction, read this book. I regret having waited as long as I did. Thick is still, at times, painful, depressing, and infuriating. It's also brilliantly written in a way that makes the knowledge being conveyed easier to absorb. Rather than a relentless onslaught of bearing witness (for which, I should stress, there is an important place), it is a scalpel. Each essay lays open the heart of a subject in a few deft strokes, points out important features that the reader has previously missed, and then steps aside, leaving you alone with your thoughts to come to terms with what you've just learned. 
I needed this book to be an essay collection, with each thought just long enough to have an impact and not so long that I became numb. It's the type of collection that demands a pause at the end of each essay, a moment of mental readjustment, and perhaps a paging back through the essay again to remember the sharpest points. The essays often start with seeds of the personal, drawing directly on McMillan Cottom's own life to wrap context around their point. In the first essay, "Thick," she uses advice given her younger self against writing too many first-person essays to talk about the writing form, its critics, and how the backlash against it has become part of systematic discrimination because black women are not allowed to write any other sort of authoritative essay. She then draws a distinction between her own writing and personal essays, not because she thinks less of that genre but because that genre does not work for her as a writer. The essays in Thick do this repeatedly. They appear to head in one direction, then deepen and shift with the added context of precise sociological analysis, defying predictability and reaching a more interesting conclusion than the reader had expected. And, despite those shifts, McMillan Cottom never lost me in a turn. This is a book that is not only comfortable with complexity and nuance, but helps the reader become comfortable with that complexity as well. The second essay, "In the Name of Beauty," is perhaps my favorite of the book. Its spark was backlash against an essay McMillan Cottom wrote about Miley Cyrus, but the topic of the essay wasn't what sparked the backlash.
What many black women were angry about was how I located myself in what I'd written. I said, blithely as a matter of observable fact, that I am unattractive. Because I am unattractive, the argument went, I have a particular kind of experience of beauty, race, racism, and interacting with what we might call the white gaze. I thought nothing of it at the time I was writing it, which is unusual. I can usually pinpoint what I have said, written, or done that will piss people off and which people will be pissed off. I missed this one entirely.
What follows is one of the best essays on the social construction of beauty I've ever read. It barely pauses at the typical discussion of unrealistic beauty standards as a feminist issue, instead diving directly into beauty as whiteness, distinguishing between beauty standards that change with generations and the more lasting rules that instead police the bounds between white and not white. McMillan Cottom then goes on to explain how beauty is a form of capital, a poor and problematic one but nonetheless one of the few forms of capital women have access to, and therefore why black women have fought to be included in beauty despite all of the problems with judging people by beauty standards. And the essay deepens from there into a trenchant critique of both capitalism and white feminism that is both precise and illuminating.
When I say that I am unattractive or ugly, I am not internalizing the dominant culture's assessment of me. I am naming what has been done to me. And signaling who did it. I am glad that doing so unsettles folks, including the many white women who wrote to me with impassioned cases for how beautiful I am. They offered me neoliberal self-help nonsense that borders on the religious. They need me to believe beauty is both achievable and individual, because the alternative makes them vulnerable.
I could go on. Every essay in this book deserves similar attention. I want to quote from all of them. These essays are about racism, feminism, capitalism, and economics, all at the same time. They're about power, and how it functions in society, and what it does to people. There is an essay about Obama that contains the most concise explanation for his appeal to white voters that I've read. There is a fascinating essay about the difference between ethnic black and black-black in U.S. culture. There is so much more.
We do not share much in the U.S. culture of individualism except our delusions about meritocracy. God help my people, but I can talk to hundreds of black folks who have been systematically separated from their money, citizenship, and personhood and hear at least eighty stories about how no one is to blame but themselves. That is not about black people being black but about people being American. That is what we do. If my work is about anything it is about making plain precisely how prestige, money, and power structure our so-called democratic institutions so that most of us will always fail.
I, like many other people in my profession, was always more comfortable with the technical and scientific classes in college. I liked math and equations and rules, dreaded essay courses, and struggled to engage with the mandatory humanities courses. Something that I'm still learning, two decades later, is the extent to which this was because the humanities are harder work than the sciences and I wasn't yet up to the challenge of learning them properly. The problems are messier and more fluid. The context required is broader. It's harder to be clear and precise. And disciplines like sociology deal with our everyday lived experience, which means that we all think we're entitled to an opinion. Books like this, which can offer me a hand up and a grounding in the intellectual rigor while simultaneously being engaging and easy to read, are a treasure. They help me fill in the gaps in my education and help me recognize and appreciate the depth of thought in disciplines that don't come as naturally to me. This book was homework, but the good kind, the kind that exposes gaps in my understanding, introduces topics I hadn't considered, and makes the time fly until I come up for air, awed and thinking hard. Highly recommended. Rating: 9 out of 10

5 April 2020

Enrico Zini: Burnout links

Demystifying Burnout in Tech
burnout selfcare archive.org
How to save your soul from getting too callused
Mental health is becoming an increasingly important topic. For this talk Andrew will focus on one particular aspect of mental health, burnout. Including his own personal experiences of when it can get really bad and steps that could be taken to help catch it early.
Let's unpack society's general misunderstanding of the latest buzzword - burnout - shall we?
How to save your soul from getting too callused
Christina Maslach defines and explains burnout, in particular relating it to activism. She gives tips and lessons for avoiding it. Recorded at the Hero Round...
DOES19 London Burnout is a hot topic in today's workplace, given its high costs for both employees and organizations. What causes this problem? And what ca...

23 November 2017

Russ Allbery: Holiday haul

Catching up on accumulated book purchases. I'm going to get another burst of reading time over the holidays (and am really looking forward to it).
Alfred Bester The Stars My Destination (sff)
James Blish A Case of Conscience (sff)
Leigh Brackett The Long Tomorrow (sff)
Algis Budrys Who? (sff)
Frances Hardinge Fly By Night (sff)
Robert A. Heinlein Double Star (sff)
N.K. Jemisin The Obelisk Gate (sff)
N.K. Jemisin The Stone Sky (sff)
T. Kingfisher Clockwork Boys (sff)
Ursula K. Le Guin City of Illusions (sff)
Ursula K. Le Guin The Complete Orsinia (historical)
Ursula K. Le Guin The Dispossessed (sff)
Ursula K. Le Guin Five Ways to Forgiveness (sff)
Ursula K. Le Guin The Left Hand of Darkness (sff)
Ursula K. Le Guin Planet of Exile (sff)
Ursula K. Le Guin Rocannon's World (sff)
Ursula K. Le Guin The Telling (sff)
Ursula K. Le Guin The Word for World Is Forest (sff)
Fritz Leiber The Big Time (sff)
Melina Marchetta Saving Francesca (mainstream)
Richard Matheson The Shrinking Man (sff)
Foz Meadows An Accident of Stars (sff)
Dexter Palmer Version Control (sff)
Frederik Pohl & C.M. Kornbluth The Space Merchants (sff)
Adam Rex True Meaning of Smekday (sff)
John Scalzi The Dispatcher (sff)
Julia Spencer-Fleming In the Bleak Midwinter (mystery)
R.E. Stearns Barbary Station (sff)
Theodore Sturgeon More Than Human (sff)
I'm listing the individual components except for the Orsinia collection, but the Le Guin are from the Library of America Hainish Novels & Stories two-volume set. I had several of these already, but I have a hard time resisting a high-quality Library of America collection for an author I really like. Now I can donate a bunch of old paperbacks. Similarly, a whole bunch of the older SF novels are from the Library of America American Science Fiction two-volume set, which I finally bought since I was ordering Library of America sets anyway. The rest is a pretty random collection of stuff, although several of them are recommendations from Light. I was reading through her old reviews and getting inspired to read (and review) more.

9 October 2017

Gunnar Wolf: Achievement unlocked - Made with Creative Commons translated to Spanish! (Thanks, @xattack!)

I am very, very, very happy to report this And I cannot believe we have achieved this so fast: Back in June, I announced I'd start working on the translation of the Made with Creative Commons book into Spanish. Over the following few weeks, I worked out the most viable infrastructure, gathered input and commitments for help from a couple of friends, submitted my project for inclusion in the Hosted Weblate translations site (and got it approved!) Then, we quietly and slowly started working. Then, as it usually happens in late August, early September... The rush of the semester caught me in full, and I left this translation project for later For the next semester, perhaps... Today, I received a mail that surprised me. That stunned me. 99% of translated strings! Of course, it does not look as neat as "100%" would, but there are several strings not to be translated. So, yay for collaborative work! Oh, and FWIW Thanks to everybody who helped. And really, really, really, hats off to Luis Enrique Amaya, a friend whom I see way less than I should. A LIDSOL graduate, and a nice guy all around. Why to him specially? Well... This has several wrinkles to iron out, but, by number of translated lines: ...Need I say more? Luis, I hope you enjoyed reading the book :-] There is still a lot of work to do, and I'm asking the rest of the team some days so I can get my act together. From the mail I just sent, I need to:
  1. Review the Pandoc conversion process, to get the strings formatted again into a book; I had got this working somewhere in the process, but last I checked it broke. I expect this not to be too much of a hurdle, and it will help all other translations.
  2. Start the editorial process at my Institute. Once the book builds, I'll have to start again the stylistic correction process so the Institute agrees to print it out under its seal. This time, we have the hurdle that our correctors will probably hate us due to part of the work being done before we had actually agreed on some important Spanish language issues... which are different between Mexico, Argentina and Costa Rica (where translators are from). Anyway This sets the mood for a great start of the week. Yay!

8 October 2017

Daniel Pocock: A step change in managing your calendar, without social media

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention? Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions? Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors? In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and Wordpress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites like Agenda du Libre and GriCal who aggregate events from multiple communities where people can browse them. How can we take these concepts further and make a convenient, compelling and global solution? Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT? Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy? Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal? I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project.
It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo, hosted by the amazing Prishtina hackerspace community. Watch out for future events in Prishtina; the pizzas are huge, but that didn't stop them disappearing before we finished the photos:

13 September 2017

Vincent Bernat: Route-based IPsec VPN on Linux with strongSwan

A common way to establish an IPsec tunnel on Linux is to use an IKE daemon, like the one from the strongSwan project, with a minimal configuration1:
conn V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64
  authby      = psk
  auto        = route
The same configuration can be used on both sides. Each side will figure out if it is "left" or "right". The IPsec site-to-site tunnel endpoints are 2001:db8:1::1 and 2001:db8:2::1. The protected subnets are 2001:db8:a1::/64 and 2001:db8:a2::/64. As a result, strongSwan configures the following policies in the kernel:
$ ip xfrm policy
src 2001:db8:a1::/64 dst 2001:db8:a2::/64
        dir out priority 399999 ptype main
        tmpl src 2001:db8:1::1 dst 2001:db8:2::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir fwd priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir in priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
[…]
This kind of IPsec tunnel is a policy-based VPN: encapsulation and decapsulation are governed by these policies. Each of them combines a direction (out, in or fwd2), a selector (the protected source and destination subnets) and a template specifying the tunnel endpoints and the IPsec mode. When a matching policy is found, the kernel will look for a corresponding security association (using reqid and the endpoint source and destination addresses):
$ ip xfrm state
src 2001:db8:1::1 dst 2001:db8:2::1
        proto esp spi 0xc1890b6e reqid 4 mode tunnel
        replay-window 0 flag af-unspec
        auth-trunc hmac(sha256) 0x5b68…8ba2904 128
        enc cbc(aes) 0x8e0e377ad8fd91e8553648340ff0fa06
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[…]
If no security association is found, the packet is put on hold and the IKE daemon is asked to negotiate an appropriate one. Otherwise, the packet is encapsulated. The receiving end identifies the appropriate security association using the SPI in the header. Two security associations are needed to establish a bidirectional tunnel:
$ tcpdump -pni eth0 -c2 -s0 esp
13:07:30.871150 IP6 2001:db8:1::1 > 2001:db8:2::1: ESP(spi=0xc1890b6e,seq=0x222)
13:07:30.872297 IP6 2001:db8:2::1 > 2001:db8:1::1: ESP(spi=0xcf2426b6,seq=0x204)
All IPsec implementations are compatible with policy-based VPNs. However, some configurations are difficult to implement. For example, consider the following proposal for redundant site-to-site VPNs:

[Figure: Redundant VPNs between 3 sites]

A possible configuration between V1-1 and V2-1 could be:
conn V1-1-to-V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64,2001:db8:a6::cc:1/128,2001:db8:a6::cc:5/128
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64,2001:db8:a6::/64,2001:db8:a8::/64
  authby      = psk
  keyexchange = ikev2
  auto        = route
Each time a subnet is modified on one site, the configurations need to be updated on all sites. Moreover, overlapping subnets (2001:db8:a6::/64 on one side and 2001:db8:a6::cc:1/128 on the other) can also be problematic. The alternative is to use route-based VPNs: any packet traversing a pseudo-interface will be encapsulated using a security policy bound to the interface. This brings two features:
  1. Routing daemons can be used to distribute routes to be protected by the VPN. This decreases the administrative burden when many subnets are present on each side.
  2. Encapsulation and decapsulation can be executed in a different routing instance or namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).

Route-based VPN on Juniper

Before looking at how to achieve that on Linux, let's have a look at the way it works with a JunOS-based platform (like a Juniper vSRX). This platform has a long-standing history of supporting route-based VPNs (a feature already present in the Netscreen ISG platform). Let's assume we want to configure the IPsec VPN from V3-2 to V1-1. First, we need to configure the tunnel interface and bind it to the private routing instance containing only internal routes (with IPv4, these would have been RFC 1918 routes):
interfaces {
    st0 {
        unit 1 {
            family inet6 {
                address 2001:db8:ff::7/127;
            }
        }
    }
}
routing-instances {
    private {
        instance-type virtual-router;
        interface st0.1;
    }
}
The second step is to configure the VPN:
security {
    /* Phase 1 configuration */
    ike {
        proposal IKE-P1 {
            authentication-method pre-shared-keys;
            dh-group group20;
            encryption-algorithm aes-256-gcm;
        }
        policy IKE-V1-1 {
            mode main;
            proposals IKE-P1;
            pre-shared-key ascii-text "d8bdRxaY22oH1j89Z2nATeYyrXfP9ga6xC5mi0RG1uc";
        }
        gateway GW-V1-1 {
            ike-policy IKE-V1-1;
            address 2001:db8:1::1;
            external-interface lo0.1;
            general-ikeid;
            version v2-only;
        }
    }
    /* Phase 2 configuration */
    ipsec {
        proposal ESP-P2 {
            protocol esp;
            encryption-algorithm aes-256-gcm;
        }
        policy IPSEC-V1-1 {
            perfect-forward-secrecy keys group20;
            proposals ESP-P2;
        }
        vpn VPN-V1-1 {
            bind-interface st0.1;
            df-bit copy;
            ike {
                gateway GW-V1-1;
                ipsec-policy IPSEC-V1-1;
            }
            establish-tunnels on-traffic;
        }
    }
}
We get a route-based VPN because we bind the st0.1 interface to the VPN-V1-1 VPN. Once the VPN is up, any packet entering st0.1 will be encapsulated and sent to the 2001:db8:1::1 endpoint. The last step is to configure BGP in the private routing instance to exchange routes with the remote site:
routing-instances {
    private {
        routing-options {
            router-id 1.0.3.2;
            maximum-paths 16;
        }
        protocols {
            bgp {
                preference 140;
                log-updown;
                group v4-VPN {
                    type external;
                    local-as 65003;
                    hold-time 6;
                    neighbor 2001:db8:ff::6 peer-as 65001;
                    multipath;
                    export [ NEXT-HOP-SELF OUR-ROUTES NOTHING ];
                }
            }
        }
    }
}
The export filter OUR-ROUTES needs to select the routes to be advertised to the other peers. For example:
policy-options {
    policy-statement OUR-ROUTES {
        term 10 {
            from {
                protocol ospf3;
                route-type internal;
            }
            then {
                metric 0;
                accept;
            }
        }
    }
}
The configuration needs to be repeated for the other peers. The complete version is available on GitHub. Once the BGP sessions are up, we start learning routes from the other sites. For example, here is the route for 2001:db8:a1::/64:
> show route 2001:db8:a1::/64 protocol bgp table private.inet6.0 best-path
private.inet6.0: 15 destinations, 19 routes (15 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
2001:db8:a1::/64   *[BGP/140] 01:12:32, localpref 100, from 2001:db8:ff::6
                      AS path: 65001 I, validation-state: unverified
                      to 2001:db8:ff::6 via st0.1
                    > to 2001:db8:ff::14 via st0.2
It was learnt both from V1-1 (through st0.1) and V1-2 (through st0.2). The route is part of the private routing instance but encapsulated packets are sent/received in the public routing instance. No route leaking is needed for this configuration. The VPN cannot be used as a gateway from internal hosts to external hosts (or vice versa). This could also have been done with JunOS security policies (stateful firewall rules), but doing the separation with routing instances also ensures routes from different domains are not mixed, and a simple policy misconfiguration won't lead to a disaster.

Route-based VPN on Linux

Starting from Linux 3.15, a similar configuration is possible with the help of a virtual tunnel interface3. First, we create the private namespace:
# ip netns add private
# ip netns exec private sysctl -qw net.ipv6.conf.all.forwarding=1
Any private interface needs to be moved to this namespace (no IP is configured as we can use IPv6 link-local addresses):
# ip link set netns private dev eth1
# ip link set netns private dev eth2
# ip netns exec private ip link set up dev eth1
# ip netns exec private ip link set up dev eth2
Then, we create vti6, a tunnel interface (similar to st0.1 in the JunOS example):
# ip tunnel add vti6 \
   mode vti6 \
   local 2001:db8:1::1 \
   remote 2001:db8:3::2 \
   key 6
# ip link set netns private dev vti6
# ip netns exec private ip addr add 2001:db8:ff::6/127 dev vti6
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_policy=1
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_xfrm=1
# ip netns exec private ip link set vti6 mtu 1500
# ip netns exec private ip link set vti6 up
The tunnel interface is created in the initial namespace and moved to the private one. It will remember its original namespace where it will process encapsulated packets. Any packet entering the interface will temporarily get a firewall mark of 6 that will be used only to match the appropriate IPsec policy4 below. The kernel sets a low MTU on the interface to handle any possible combination of ciphers and protocols. We set it to 1500 and let PMTUD do its work. We can then configure strongSwan5:
conn V3-2
  left        = 2001:db8:1::1
  leftsubnet  = ::/0
  right       = 2001:db8:3::2
  rightsubnet = ::/0
  authby      = psk
  mark        = 6
  auto        = route
  keyexchange = ikev2
  keyingtries = %forever
  ike         = aes256gcm16-prfsha384-ecp384!
  esp         = aes256gcm16-prfsha384-ecp384!
  mobike      = no
The IKE daemon configures the following policies in the kernel:
$ ip xfrm policy
src ::/0 dst ::/0
        dir out priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:1::1 dst 2001:db8:3::2
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir fwd priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir in priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
[…]
Those policies are used for any source or destination as long as the firewall mark is equal to 6, which matches the mark configured for the tunnel interface. The last step is to configure BGP to exchange routes. We can use BIRD for this:
router id 1.0.1.1;
protocol device {
   scan time 10;
}
protocol kernel {
   persist;
   learn;
   import all;
   export all;
   merge paths yes;
}
protocol bgp IBGP_V3_2 {
   local 2001:db8:ff::6 as 65001;
   neighbor 2001:db8:ff::7 as 65003;
   import all;
   export where ifname ~ "eth*";
   preference 160;
   hold time 6;
}
Once BIRD is started in the private namespace, we can check routes are learned correctly:
$ ip netns exec private ip -6 route show 2001:db8:a3::/64
2001:db8:a3::/64 proto bird metric 1024
        nexthop via 2001:db8:ff::5  dev vti5 weight 1
        nexthop via 2001:db8:ff::7  dev vti6 weight 1
The above route was learnt from both V3-1 (through vti5) and V3-2 (through vti6). As with the JunOS version, there is no route leaking between the private namespace and the initial one. The VPN cannot be used as a gateway between the two namespaces, only for encapsulation. This also prevents a misconfiguration (for example, the IKE daemon not running) from allowing packets to leave the private network. As a bonus, unencrypted traffic can be observed with tcpdump on the tunnel interface:
$ ip netns exec private tcpdump -pni vti6 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti6, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:51:15.258708 IP6 2001:db8:a1::1 > 2001:db8:a3::1: ICMP6, echo request, seq 69
20:51:15.260874 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 69
You can find all the configuration files for this example on GitHub. The documentation of strongSwan also features a page about route-based VPNs.

  1. Everything in this post should work with Libreswan.
  2. fwd is for incoming packets on non-local addresses. It only makes sense in transport mode and is a Linux-only particularity.
  3. Virtual tunnel interfaces (VTI) were introduced in Linux 3.6 (for IPv4) and Linux 3.12 (for IPv6). Appropriate namespace support was added in 3.15. KLIPS, an alternative out-of-tree stack available since Linux 2.2, also features tunnel interfaces.
  4. The mark is set right before doing a policy lookup and restored after that. Consequently, it doesn't affect other possible uses (filtering, routing). However, as Netfilter can also set a mark, one should be careful about conflicts.
  5. The ciphers used here are the strongest ones currently possible while keeping compatibility with JunOS. The documentation for strongSwan contains a complete list of supported algorithms as well as security recommendations to choose them.

12 September 2017

Markus Koschany: My Free Software Activities in August 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, games and LTS topics, this might be interesting for you.

DebConf 17 in Montreal

I traveled to DebConf 17 in Montreal, Canada. I arrived on 4 August and met a lot of different people whom I only knew by name so far. I think this is definitely one of the best aspects of real-life meetings: putting names to faces and getting to know someone better. I totally enjoyed my stay and I would like to thank all the people who were involved in organizing this event. You rock! I also gave a talk about "The past, present and future of Debian Games", listened to numerous other talks and got a nice sunburn which luckily turned into a more brownish color when I returned home on 12 August.

The only negative experience was with the airline which was supposed to fly me home to Frankfurt. They decided to cancel the flight one hour before check-in for unknown reasons and just gave me a telephone number to sort things out. No support whatsoever. Fortunately (probably not for him) another DebConf attendee suffered the same fate, and together we could find another flight with Royal Air Maroc the same day. And so we made a short trip to Casablanca, Morocco and eventually arrived at our final destination in Frankfurt a few hours later. So which airline should you avoid at all costs (they still haven't responded to my refund claims)? It's WoW-Air from Iceland. (just wow)

Debian Games

Debian Java

Debian LTS

This was my eighteenth month as a paid contributor and I have been paid to work 20.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

Non-maintainer upload

Thanks for reading and see you next time.

5 September 2017

Kees Cook: security things in Linux v4.13

Previously: v4.12. Here's a short summary of some of the interesting security things in Sunday's v4.13 release of the Linux kernel:

security documentation ReSTification
The kernel has been switching to formatting documentation with ReST, and I noticed that none of the Documentation/security/ tree had been converted yet. I took the opportunity to take a few passes at formatting the existing documentation and, at Jon Corbet's recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they'll get some extra attention.

CONFIG_REFCOUNT_FULL
Since Peter Zijlstra implemented the refcount_t API in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions, with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an unchecked refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity's PAX_REFCOUNT).

CONFIG_FORTIFY_SOURCE
Daniel Micay created a version of glibc's FORTIFY_SOURCE compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can already build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. destination size is known at compile-time, but source size is only known at run-time), run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them. One interesting note about this protection is that it only examines the size of the whole object (via __builtin_object_size(..., 0)). If you have a string within a structure, CONFIG_FORTIFY_SOURCE as currently implemented will make sure only that you can't copy beyond the structure (but therefore, you can still overflow the string within the structure). The next step in enhancing this protection is to switch from 0 (above) to 1, which will use the closest surrounding subobject (e.g. the string). However, there are a lot of cases where the kernel intentionally copies across multiple structure fields, which means more fixes before this higher level can be enabled.

NULL-prefixed stack canary
Rik van Riel and Daniel Micay changed how the stack canary is defined on 64-bit systems to always make sure that the leading byte is zero. This provides a deterministic defense against overflowing string functions (e.g. strcpy), since they will either stop an overflowing read at the NULL byte, or be unable to write a NULL byte, thereby always triggering the canary check. This does reduce the entropy from 64 bits to 56 bits for overflow cases where NULL bytes can be written (e.g. memcpy), but the trade-off is worth it. (Besides, x86_64's canary was 32 bits until recently.)

IPC refactoring
Partially in support of allowing IPC structure layouts to be randomized by the randstruct plugin, Manfred Spraul and I reorganized the internal layout of how IPC is tracked in the kernel. The resulting allocations are smaller and much easier to deal with, even if I initially missed a few needed container_of() uses.

randstruct gcc plugin
I ported grsecurity's clever randstruct gcc plugin to upstream. This plugin allows structure layouts to be randomized on a per-build basis, providing a probabilistic defense against attacks that need to know the location of sensitive structure fields in kernel memory (which is most attacks). By moving things around in this fashion, attackers need to perform much more work to determine the resulting layout before they can mount a reliable attack. Unfortunately, due to the timing of the development cycle, only the manual mode of randstruct landed in upstream (i.e. marking structures with __randomize_layout). v4.14 will also have the automatic mode enabled, which randomizes all structures that contain only function pointers. A large number of fixes to support randstruct have been landing from v4.10 through v4.13, most of which were already identified and fixed by grsecurity, but many were novel, either in newly added drivers, as whitelisted cross-structure casts, refactorings (like IPC noted above), or in a corner case on ARM found during upstream testing.

lower ELF_ET_DYN_BASE
One of the issues identified from the Stack Clash set of vulnerabilities was that it was possible to collide stack memory with the highest portion of a PIE program's text memory since the default ELF_ET_DYN_BASE (the lowest possible random position of a PIE executable in memory) was already so high in the memory layout (specifically, 2/3rds of the way through the address space). Fixing this required teaching the ELF loader how to load interpreters as shared objects in the mmap region instead of as a PIE executable (to avoid potentially colliding with the binary it was loading). As a result, the PIE default could be moved down to ET_EXEC (0x400000) on 32-bit, entirely avoiding the subset of Stack Clash attacks. 64-bit could be moved to just above the 32-bit address space (0x100000000), leaving the entire 32-bit region open for VMs to do 32-bit addressing, but late in the cycle it was discovered that Address Sanitizer couldn't handle it moving. With most of the Stack Clash risk only applicable to 32-bit, fixing 64-bit has been deferred until there is a way to teach Address Sanitizer how to load itself as a shared object instead of as a PIE binary.

early device randomness
I noticed that early device randomness wasn't actually getting added to the kernel entropy pools, so I fixed that to improve the effectiveness of the latent_entropy gcc plugin. That's it for now; please let me know if I missed anything. As a side note, I was rather alarmed to discover that due to all my trivial ReSTification formatting, and tiny FORTIFY_SOURCE and randstruct fixes, I made it into the most active 4.13 developers list (by patch count) at LWN with 76 patches: a whopping 0.6% of the cycle's patches. ;) Anyway, the v4.14 merge window is open!

2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

1 September 2017

Russ Allbery: Review: Regeneration

Review: Regeneration, by Julie E. Czerneda
Series: Species Imperative #3
Publisher: DAW
Copyright: 2006
ISBN: 0-7564-0345-6
Format: Hardcover
Pages: 543
This is the third book of the Species Imperative trilogy, and this is the type of trilogy that's telling a single story in three books. You don't want to read this out of order, and I'll have to be cautious about aspects of the plot to not spoil the earlier books. Mac is still recovering from the effects of the first two books of the series, but she's primarily worried about a deeply injured friend. Worse, that friend is struggling to explain or process what's happened, and the gaps in her memory and her very ability to explain may point at frightening, lingering risks to humanity. As much as she wants to, Mac can't give her friend all of her focus, since she's also integral to the team trying to understand the broader implications of the events of Migration. Worse, some of the non-human species have their own contrary interpretations that, if acted on, Mac believes would be desperately risky for humanity and all the other species reachable through the transects. That set of competing priorities and motivations eventually sort themselves out into a tense and rewarding multi-species story, but they get off to an awkward start. The first 150 pages of Regeneration are long on worry, uncertainty, dread, and cryptic conversations, and short on enjoyable reading. Czerneda's recaps of the previous books are appreciated, but they weren't very smoothly integrated into the story. (I renew my occasional request for series authors to include a simple plot summary of the previous books as a prefix, without trying to weave it into the fiction.) I was looking forward to this book after the excellent previous volumes, but struggled to get into the story. That does change. It takes a bit too long, with a bit too much nameless dread, a bit too much of an irritating subplot between Fourteen and Oversight that I didn't think added anything to the book, and not enough of Mac barreling forward doing sensible things. 
But once Mac gets back into space, with a destination and a job and a collection of suspicious (or arrogant) humans and almost-incomprehensible aliens to juggle, Czerneda hits her stride. Czerneda doesn't entirely avoid Planet of the Hats problems with her aliens, but I think she does better than most of science fiction. Alien species in this series do tend to be a bit all of a type, and Mac does figure them out by drawing conclusions from biology, but those conclusions are unobvious and based on Mac's mix of biological and human social intuition. They refreshingly aren't as simple as biology completely shaping culture. (Czerneda's touch is more subtle than James White's Sector General, for example.) And Mac has a practical, determined, and selfless approach that's deeply likable and admirable. It's fun as a reader to watch her win people over by just being competent, thoughtful, observant, and unrelentingly ethical. But the best part of this book, by far, are the Sinzi. They first appeared in the second book, Migration, and seemed to follow the common SF trope of a wise elder alien race that can bring some order to the universe and that humanity can learn from. They, or more precisely the one Sinzi who appeared in Migration, was very good at that role. But Czerneda had something far more interesting planned, and in Regeneration they become truly alien in their own right, with their own nearly incomprehensible way of viewing the universe. There are so many ways that this twist can go wrong, and Czerneda avoids all of them. She doesn't undermine their gravitas, nor does she elevate them to the level of Arisians or other semi-angelic wise mentors of other series. Czerneda makes them different in profound ways that are both advantage and disadvantage, pulls that difference into the plot as a complicating element, and has Mac stumble yet again into a role that is accidentally far more influential than she intends. 
Mac is the perfect character to do that to: she has just the right mix of embarrassment, ethics, seat-of-the-pants blunt negotiation skills, and a strong moral compass. Given a lever and a place to stand, one can believe that Mac can move the world, and the Sinzi are an absolutely fascinating lever. There are also three separate, highly differentiated Sinzi in this story, with different goals, life experience, personalities, and levels of gravitas. Czerneda's aliens are good in general, but her focus is usually more on biology than individual differentiation. The Sinzi here combine the best of both types of character building. I think the ending of Regeneration didn't entirely work. After all the intense effort the characters put into understanding the complexity of the universe over the course of the series, the denouement has a mopping-up feel and a moral clarity that felt a bit too easy. But the climax has everything I was hoping for, there's a lot more of Mac being Mac, and I loved every moment of the Sinzi twist. Now I want a whole new series exploring the implications of the Sinzi's view of the universe on the whole history of galactic politics that sat underneath this story. But I'll settle for moments of revelation that sent shivers down my spine. This is a bit of an uneven book that falls short of its potential, but I'll remember it for a long time. Add it on to a deeply rewarding series, and I will recommend the whole package unreservedly. The Species Imperative is excellent science fiction that should be better-known than it is. I still think the romance subplot was unfortunate, and occasionally the aliens get too cartoony (Fourteen, in particular, goes a bit too far in that direction), but Czerneda never lingers too long on those elements. And the whole work is some of the best writing about working scientific research and small-group politics that I've read. Highly recommended, but read the whole series in order. Rating: 9 out of 10
