Search Results: "tina"

4 January 2021

John Goerzen: More Topics on Store-And-Forward (Possibly Airgapped) ZFS and Non-ZFS Backups with NNCP

Note: this is another article in my series on asynchronous communication in Linux with UUCP and NNCP. In my previous post, I introduced a way to use ZFS backups over NNCP. In this post, I'll expand on that and also explore non-ZFS backups. Use of nncp-file instead of nncp-exec The previous example used nncp-exec (like UUCP's uux), which lets you pipe stdin in, then queues up a request to run a given command with that input on a remote. I discussed that NNCP doesn't guarantee order of execution, but that for the ZFS use case, that was fine since zfs receive would just fail (causing NNCP to try again later). At present, nncp-exec stores the data piped to it in RAM before generating the outbound packet (the author plans to fix this shortly). That made it unusable for some of my backups, so I set it up another way: with nncp-file, the tool to transfer files to a remote machine. A cron job then picks them up and processes them. On the machine being backed up, we have to find a way to encode the dataset to be received. I chose to do that as part of the filename, so the updated simplesnap-queue could look like this:
#!/bin/bash
set -e
set -o pipefail
DEST=" echo $1   sed 's,^tank/simplesnap/,,' "
FILE="bakfsfmt2- date "+%s.%N".$$ _ echo "$DEST"   sed 's,/,@,g' "
echo "Processing $DEST to $FILE" >&2
# stdin piped to this
zstd -8 - \
  | gpg --compress-algo none --cipher-algo AES256 -e -r 012345...  \
  | su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:$FILE'" >&2
echo "Queued $DEST to $FILE" >&2
I've added compression and encryption here as well; more on that below. On the backup server, we would define a different incoming directory for each node in nncp.hjson. For instance:
host1: {
  ...
  incoming: "/var/local/nncp-backups-incoming/host1"
}
host2: {
  ...
  incoming: "/var/local/nncp-backups-incoming/host2"
}
I'll present the scanning script in a bit. Offsite Backup Rotation Most of the time, you don't want just a single drive to store the backups. You'd like to have a set. At minimum, one wouldn't be plugged in so lightning wouldn't ruin all your backups. But maybe you'd store a second drive at some other location you have access to (friend's house, bank box, etc.) There are several ways you could solve this: The third option can be helped with NNCP, too. One way is to create separate NNCP installations for each of the drives that you store data on. Then, whenever one is plugged in, the appropriate NNCP config will be loaded and appropriate packets received and processed. The neighbor machine (the spooler) would just store up packets for the offsite drive until it comes back onsite (or, perhaps, your airgapped USB transport would do this). Then when it's back onsite, all the queued up ZFS sends get replayed and the backups replicated. Now, how might you handle this with NNCP? The simple way would be to have each system generating backups send them to two destinations. For instance:
zstd -8 - | gpg --compress-algo none --cipher-algo AES256 -e -r 07D5794CD900FAF1D30B03AC3D13151E5039C9D5 \
  | tee >(su nncp -c "/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk1:$FILE'") \
        >(su nncp -c "/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk2:$FILE'") \
   > /dev/null
You could probably also more safely use pee(1) (from moreutils) to do this. This has an unfortunate result of doubling the network traffic from every machine being backed up. So an alternative option would be to queue the packets to the spooling machine, and run a distribution script from it; something like this, in part:
INCOMINGDIR="/var/local/nncp-bakfs-incoming"
LOCKFILE="$INCOMINGDIR/.lock"
printf -v EVAL_SAFE_LOCKFILE '%q' "$LOCKFILE"
if dotlockfile -r 0 -l -p "${LOCKFILE}"; then
  logit "Lock obtained at ${LOCKFILE} with dotlockfile"
  trap 'ECODE=$?; dotlockfile -u '"${EVAL_SAFE_LOCKFILE}"'; exit $ECODE' EXIT INT TERM
else
  logit "Could not obtain lock at $LOCKFILE; $0 likely already running."
  exit 0
fi
logit "Scanning queue directory..."
cd "$INCOMINGDIR"
for HOST in *; do
   cd "$INCOMINGDIR/$HOST"
   for FILE in bakfsfmt2-*; do
           if [ -f "$FILE" ]; then
                   for BAKFS in backupdisk1 backupdisk2; do
                           runcommand nncp-file -nice B+5 -noprogress "$FILE" "$BAKFS:$HOST/$FILE"
                   done
                   runcommand rm "$FILE"
           else
                   logit "$HOST: Skipping $FILE since it doesn't exist"
           fi
   done
done
logit "Scan complete."
Security Considerations You'll notice that in my example above, the encryption happens as the root user, but nncp is called under su. This means that even if there is a vulnerability in NNCP, the data would still be protected by GPG. I'll also note here that many sites run ssh as root unnecessarily; the same principles should apply there. (ssh has had vulnerabilities in the past as well). I could have used gpg's built-in compression, but zstd is faster and better, so we can get good performance by using fast compression and piping that to an algorithm that can use hardware acceleration for encryption. I strongly encourage considering transport, whether ssh or NNCP or UUCP, to be untrusted. Don't run it as root if you can avoid it. In my example, the nncp user, which all NNCP commands are run as, has no access to the backup data at all. So even if NNCP were compromised, my backup data wouldn't be. For even more security, I could also sign the backup stream with gpg and validate that on the receiving end. I should note, however, that this conversation assumes that a network- or USB-facing ssh or NNCP is more likely to have an exploitable vulnerability than is gpg (which here is just processing a stream). This is probably a safe assumption in general. If you believe gpg is more likely to have an exploitable vulnerability than ssh or NNCP, then obviously you wouldn't take this particular approach. On the zfs side, the use of -F with zfs receive is avoided; this could lead to a compromised backed-up machine generating a malicious rollback on the destination. Backup zpools should be imported with -R or -N to ensure that a malicious mountpoint property couldn't be used to cause an attack. I choose to use zfs receive -u -o readonly=on which is compatible with both unmounted backup datasets and zpools imported with -R (or both). To access the data in a backup dataset, you would normally clone it and access it there. The processing script So, put this all together and look at an example of a processing script that would run from cron as root and process the incoming ZFS data.
#!/bin/bash
set -e
set -o pipefail
# Log a message
logit () {
   logger -p info -t "`basename "$0"`[$$]" "$1"
}
# Log an error message
logerror () {
   logger -p err -t "`basename "$0"`[$$]" "$1"
}
# Log stdin with the given code.  Used normally to log stderr.
logstdin () {
   logger -p info -t "`basename "$0"`[$$/$1]"
}
# Run command, logging stderr and exit code
runcommand () {
   logit "Running $*"
   if "$@" 2> >(logstdin "$1") ; then
      logit "$1 exited successfully"
      return 0
   else
       RETVAL="$?"
       logerror "$1 exited with error $RETVAL"
       return "$RETVAL"
   fi
}
STORE=backups/simplesnap
INCOMINGDIR=/backups/nncp/incoming
if ! [ -d "$INCOMINGDIR" ]; then
        logerror "$INCOMINGDIR doesn't exist"
        exit 0
fi
LOCKFILE="/backups/nncp/.nncp-backups-zfs-scan.lock"
printf -v EVAL_SAFE_LOCKFILE '%q' "$LOCKFILE"
if dotlockfile -r 0 -l -p "${LOCKFILE}"; then
  logit "Lock obtained at ${LOCKFILE} with dotlockfile"
  trap 'ECODE=$?; dotlockfile -u '"${EVAL_SAFE_LOCKFILE}"'; exit $ECODE' EXIT INT TERM
else
  logit "Could not obtain lock at $LOCKFILE; $0 likely already running."
  exit 0
fi
EXITCODE=0
cd "$INCOMINGDIR"
logit "Scanning queue directory..."
for HOST in *; do
    HOSTPATH="$INCOMINGDIR/$HOST"
    # files like bakfsfmt2-134.13134_dest
    for FILE in "$HOSTPATH"/bakfsfmt2-[0-9]*_?*; do
        if [ ! -f "$FILE" ]; then
            logit "Skipping non-existent $FILE"
            continue
        fi
        # Now, $DEST will be HOST/DEST.  Strip off the @ also.
        DEST="`echo "$FILE" | sed -e 's/^.*bakfsfmt2[^_]*_//' -e 's,@,/,g'`"
        if [ -z "$DEST" ]; then
            logerror "Malformed dest in $FILE"
            continue
        fi
        HOST2="`echo "$DEST" | sed 's,/.*,,g'`"
        if [ -z "$HOST2" ]; then
            logerror "Malformed DEST $DEST in $FILE"
            continue
        fi
        if [ ! "$HOST" = "$HOST2" ]; then
            logerror "$FILE: $HOST doesn't match $HOST2"
            continue
        fi
        logit "Processing $FILE to $STORE/$DEST"
        if runcommand gpg -q -d < "$FILE" | runcommand zstdcat | runcommand zfs receive -u -o readonly=on "$STORE/$DEST"; then
            logit "Successfully processed $FILE to $STORE/$DEST"
            runcommand rm "$FILE"
        else
            logerror "FAILED to process $FILE to $STORE/$DEST"
            EXITCODE=15
        fi
    done
done
exit "$EXITCODE"
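As a companion to the security considerations above, here is a rough sketch of how you might get at the data on the backup server once it has been received; the pool, dataset, snapshot, and mountpoint names are all placeholders of mine, not part of the article's setup:
# Import the backup pool with an altroot and without mounting anything, so a
# malicious mountpoint property can't be abused.
zpool import -R /mnt/backuppool -N backuppool
# Clone a snapshot of the received (readonly, unmounted) dataset to inspect it.
zfs clone -o readonly=off -o mountpoint=/restore-home \
    backuppool/backups/simplesnap/host1/home@somesnapshot \
    backuppool/restore-home
# With the altroot above, the clone ends up mounted at /mnt/backuppool/restore-home.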
Applying These Ideas to Non-ZFS Backups ZFS backups made our job easier in a lot of ways: Some of these benefits you just won't get without ZFS (or something similar like btrfs), but let's see how we could apply these ideas to non-ZFS backups. I will explore the implementation of them in a future post. When I say "non-ZFS", I am being a bit vague as to whether the source, the destination, or both systems are running a non-ZFS filesystem. In general I'll assume that neither are ZFS. The first and most obvious answer is to just tar up the whole system and send that every day. This is, of course, only suitable for small datasets on a fast network. These tarballs could be unpacked on the destination and stored more efficiently via any number of methods (hardlink trees, a block-level deduplicator like borg or rdedup, or even just simply compressed tarballs). To make the network trip more efficient, something like rdiff or xdelta could be used. A signature file could be stored on the machine being backed up (generated via tee/pee at stream time), and the next run could simply send an rdiff delta over NNCP. This would be quite network-efficient, but still would require reading every byte of every file on every backup, and would also require quite a bit of temporary space on the receiving end (to apply the delta to the previous tarball and generate a new one). Alternatively, a program that generates incremental backup files such as rdup could be used. These could be transmitted over NNCP to the backup server, and unpacked there. While perhaps less efficient on the network -- every file with at least one modified byte would be retransmitted in its entirety -- it avoids the need to read every byte of unmodified files or to have enormous temporary space. I should note here that GNU tar claims to have an incremental mode, but it has a potential data loss bug. There are also some tools with algorithms that may apply well in this use case: syrep and fssync being the two most prominent examples, though rdedup (mentioned above) and the nascent asuran project may also be combinable with other tools to achieve this effect. I should, of course, conclude this section by mentioning btrfs. Every time I've tried it, I've run into serious bugs, and its status page indicates that only some of them have been resolved. I would not consider using it for something as important as backups. However, if you are comfortable with it, it is likely to be able to run in more constrained environments than ZFS and could probably be processed in much the same way as zfs streams.
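To make the rdiff idea above a little more concrete, here is a rough, untested sketch of mine; the paths, the node name backupsvr, and the file naming are assumptions, not anything from the article:
#!/bin/bash
# Hypothetical tar + rdiff queueing sketch; all paths and names are assumptions.
set -e
set -o pipefail
SIG=/var/local/bakstate/full.sig        # rdiff signature kept from the previous run
NEWSIG="$SIG.new"
TARBALL=$(mktemp)
trap 'rm -f "$TARBALL" "$NEWSIG"' EXIT
# Build a full tarball, then save a signature of it for the next run.
tar -C / --one-file-system -cpf - home > "$TARBALL"
rdiff signature "$TARBALL" "$NEWSIG"
if [ -f "$SIG" ]; then
    # Only the delta against the previous run's signature crosses the network.
    rdiff delta "$SIG" "$TARBALL" - \
      | su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:home-$(date +%s).rdelta'"
else
    # First run: ship the full tarball.
    su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:home-$(date +%s).tar'" < "$TARBALL"
fi
mv "$NEWSIG" "$SIG"
On the receiving end, rdiff patch would then be applied against the previously stored tarball to regenerate the new one, which is where the temporary-space cost described above comes in.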

30 December 2020

John Goerzen: Airgapped / Asynchronous Backups with ZFS over NNCP

In my previous articles in the series on asynchronous communication with the modern NNCP tool, I talked about its use for asynchronous, potentially airgapped, backups. The first article, How & Why To Use Airgapped Backups laid out the foundations for this. Now let's dig into the details. Today's post will cover ZFS, because it has a lot of features that make it very easy to support in this setup. Non-ZFS backups will be covered later. The setup is actually about as simple as it is for SSH, but since people are less familiar with this kind of communication, I'm going to try to go into more detail here. Assumptions I am assuming a setup where: Hardware Let's start with hardware for the machine to hold the backups. I initially considered a Raspberry Pi 4 with 8GB of RAM. That would probably have been a suitable machine, at least for smaller backup sets. However, none of the Raspberry Pi machines support hardware AES encryption acceleration, and my Pi4 benchmarks at about 60MB/s for AES encryption. I want my backups to be encrypted, and decided this would just be too slow for my purposes. Again, if you don't need encrypted backups or don't care that much about performance (many people probably fall into this category), you can have a fully-functional Raspberry Pi 4 system for under $100 that would make a fantastic backup server. I wound up purchasing a Qotom-Q355G4 micro PC with a Core i5 for about $315. It has USB 3 ports and is designed as a rugged, long-lasting system. I have been using one of their older Celeron-based models as my router/firewall for a number of years now and it's been quite reliable. For backup storage, you can get a USB 3 external drive. My own preference is to get a USB 3 toaster (device that lets me plug in SATA drives) so that I have more control over the underlying medium and can save the expense and hassle of a bunch of power supplies. In a future post, I will discuss drive rotation so you always have an offline drive. Then, there is the question of transport to the backup machine. A simple solution would be to have a heavily-firewalled backup system that has no incoming ports open but makes occasional outgoing connections to one specific NNCP daemon on the spooling machine. However, for airgapped operation, it would also be very simple to use nncp-xfer to transport the data across on a USB stick or some such. You could set up automounting for a specific USB stick: plug it in, all the spooled data is moved over, then plug it in to the backup system and it's processed, and any outbound email traffic or whatever is copied to the USB stick at that point too. The NNCP page has some more commentary about this kind of setup. Both are fairly easy to set up, and NNCP is designed to be transport-agnostic, so in this article I'm going to focus on how to integrate ZFS with NNCP. Operating System Of course, it should be no surprise that I set this up on Debian. As an added step, I did all the configuration in Ansible stored in a local git repo. This adds a lot of work, but it means that it is trivial to periodically wipe and reinstall if any security issue is suspected. The git repo can be copied off to another system for storage and takes the system from freshly-installed to ready-to-use state. Security There is, of course, nothing preventing you from running NNCP as root. The zfs commands, obviously, need to be run as root. However, from a privilege separation standpoint, I have chosen to run everything relating to NNCP as a nncp user.
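As an illustration of that privilege separation (my own sketch on Debian, with a made-up home directory, not a step from the article), the dedicated user might be created like this:
# Hypothetical: create an unprivileged nncp user that owns only the NNCP spool,
# not any of the backup data.
adduser --system --group --home /var/spool/nncp nncp
chown -R nncp:nncp /var/spool/nncp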
NNCP already does encryption, but if you prefer to have zero knowledge of the data even to NNCP, it's trivial to add gpg to the pipeline as well, and in fact I'll be demonstrating that in a future post for other reasons. Software Besides NNCP, there needs to be a system that generates the zfs send streams. For this project, I looked at quite a few. Most were designed to inspect the list of snapshots on a remote end, compare it to a list on the local end, and calculate a difference from there. This, of course, won't work for this situation. I realized my own simplesnap project was very close to being able to do this. It already used an algorithm of using specially-named snapshots on the machine being backed up, so never needed any communication about what snapshots were present where. All it needed was a few more options to permit sending to a stream instead of zfs receive. I made those changes and they are available in simplesnap 2.0.0 or above. That version has also been uploaded to sid, and will work fine as-is on buster as well. Preparing NNCP I'm going to assume three hosts in this setup: The basic NNCP workflow documentation covers the basic steps. You'll need to run nncp-cfgnew on each machine. This generates a basic configuration, along with public and private keys for that machine. You'll copy the public key sets to the configurations of the other machines as usual. On the laptop, you'll add a via line like this:
backupsvr: {
  id: ....
  exchpub: ...
  signpub: ...
  noisepub: ...
  via: ["spooler"]
}
This tells NNCP that data destined for backupsvr should always be sent via spooler first. You can then arrange for the nncp-daemon to run on the spooler, and nncp-caller or nncp-call on the backupsvr. Or, alternatively, airgapped between the two with nncp-xfer. Generating Backup Data Now, on the laptop, install simplesnap (2.0.0 or above). Although you won't be backing up to the local system, simplesnap still maintains a hostlock in ZFS. Prepare a dataset for it:
zfs create tank/simplesnap
zfs set org.complete.simplesnap:exclude=on tank/simplesnap
Then, create a script /usr/local/bin/runsimplesnap like this:
#!/bin/bash
set -e
simplesnap --store tank/simplesnap --setname backups --local --host `hostname` \
   --receivecmd /usr/local/bin/simplesnap-queue \
   --noreap
su nncp -c '/usr/local/nncp/bin/nncp-toss -noprogress -quiet'
if ip addr | grep -q 192.168.65.64; then
  su nncp -c '/usr/local/nncp/bin/nncp-call -noprogress -quiet -onlinedeadline 1 spooler'
fi
The call to simplesnap sets it up to send the data to simplesnap-queue, which we'll create in a moment. The receivecmd, plus noreap, sets it up to run without ZFS on the local system. The call to nncp-toss will process any previously-received inbound NNCP packets, if there are any. Then, in this example, we do a very basic check to see if we're on the LAN (checking 192.168.65.64), and if so, will establish a connection to the spooler to transmit the data. Of course, you could also do this over the Internet, with tor, or whatever, but in my case, I don't want to automatically do this in case I'm tethered to mobile. I figure if I want to send backups in that case, I can fire up nncp-call myself. You can also use nncp-caller to set up automated connections on other schedules; there are a lot of options. Now, here's what /usr/local/bin/simplesnap-queue looks like:
#!/bin/bash
set -e
set -o pipefail
DEST=" echo $1   sed 's,^tank/simplesnap/,,' "
echo "Processing $DEST" >&2
# stdin piped to this
su nncp -c "/usr/local/nncp/bin/nncp-exec -nice B -noprogress backupsvr zfsreceive '$DEST'" >&2
echo "Queued for $DEST" >&2
This is a pretty simple script. simplesnap will call it with a path based on the store, with the hostname after; so, for instance, tank/simplesnap/laptop/root or some such. This script strips off the leading tank/simplesnap (which is a local fragment), leaving the host and dataset paths. Then it just pipes it to nncp-exec. -nice B classifies it as low-priority bulk data (so if you have some more important interactive data, it would be sent first), then passes it to whatever the backupsvr defines as zfsreceive. Receiving ZFS backups In the NNCP configuration on the recipient's side, in the laptop section, we define what command it's allowed to run as zfsreceive:
      exec: {
        zfsreceive: ["/usr/bin/sudo", "-H", "/usr/local/bin/nncp-zfs-receive"]
      }
We authorize the nncp user to run this under sudo in /etc/sudoers.d/local nncp:
Defaults env_keep += "NNCP_SENDER"
nncp ALL=(root) NOPASSWD: /usr/local/bin/nncp-zfs-receive
The NNCP_SENDER is the public key ID of the sending node when nncp-toss processes the incoming data. We can use that for sanity checking later. Now, here's a basic nncp-zfs-receive script:
#!/bin/bash
set -e
set -o pipefail
STORE=backups/simplesnap
DEST="$1"
# now process stdin
runcommand zfs receive -o readonly=on -x mountpoint "$STORE/$DEST"
And there you have it: all the basics are in place. Update 2020-12-30: An earlier version of this article had zfs receive -F instead of zfs receive -o readonly=on -x mountpoint. These changed arguments are more robust.
Update 2021-01-04: I am now recommending zfs receive -u -o readonly=on; see my successor article for more. Enhancements You could enhance the nncp-zfs-receive script to improve logging and error handling. For instance:
#!/bin/bash
set -e
set -o pipefail
STORE=backups/simplesnap
# $1 will be the host/dataset
DEST="$1"
HOST=" echo "$1"   sed 's,/.*,,g' "
if [ -z "$HOST" ]; then
   echo "Malformed command line"
   exit 5
fi
# Log a message
logit () {
   logger -p info -t "`basename "$0"`[$$]" "$1"
}
# Log an error message
logerror () {
   logger -p err -t "`basename "$0"`[$$]" "$1"
}
# Log stdin with the given code.  Used normally to log stderr.
logstdin () {
   logger -p info -t "`basename "$0"`[$$/$1]"
}
# Run command, logging stderr and exit code
runcommand () {
   logit "Running $*"
   if "$@" 2> >(logstdin "$1") ; then
      logit "$1 exited successfully"
      return 0
   else
       RETVAL="$?"
       logerror "$1 exited with error $RETVAL"
       return "$RETVAL"
   fi
}
exiterror () {
   logerror "$1"
   echo "$1" 1>&2
   exit 10
}
# Sanity check
if [ "$HOST" = "laptop" ]; then
  if [ "$NNCP_SENDER" != "12345678" ]; then
    exiterror "Host $HOST doesn't match sender $NNCP_SENDER"
  fi
else
  exiterror "Unknown host $HOST"
fi
runcommand zfs receive -u -o readonly=on "$STORE/$DEST"
Now you'll capture the ZFS receive output in syslog in a friendly way, so you can look back later to see why things failed if they did. Further notes on NNCP nncp-toss will examine the exit code from an invocation. If it is nonzero, it will keep the command (and associated stdin) in the queue and retry it on the next invocation. NNCP does not guarantee order of execution, so it is possible in some cases that ZFS streams may be received in the wrong order. That is fine here; zfs receive will exit with an error, and nncp-toss will just run it again after the dependent snapshots have been received. For non-ZFS backups, a simple sequence number can handle this issue.
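As a rough illustration of that sequence-number idea (my own untested sketch; the paths, file naming, and state file are all assumptions, not part of the article):
#!/bin/bash
# Hypothetical in-order processing of numbered non-ZFS backup files.
set -e
set -o pipefail
STATEFILE=/var/local/nncp-backups/last-seq      # last sequence number processed
INCOMING=/var/local/nncp-backups-incoming/host1
LAST=$(cat "$STATEFILE" 2>/dev/null || echo 0)
NEXT=$((LAST + 1))
# Process whatever has arrived, but only in strict sequence; a gap stops us,
# and a later nncp-toss run plus the next invocation will fill it in.
while [ -f "$INCOMING/backup-$NEXT.tar.zst" ]; do
    zstdcat "$INCOMING/backup-$NEXT.tar.zst" | tar -C /backups/host1 -xpf -
    echo "$NEXT" > "$STATEFILE"
    rm "$INCOMING/backup-$NEXT.tar.zst"
    NEXT=$((NEXT + 1))
done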

27 December 2020

John Goerzen: Asynchronous Email: Exim over NNCP (or UUCP)

Following up to yesterday's article about how NNCP rehabilitates asynchronous communication with modern encryption and onion routing, here is the first of my posts showing how to put it into action. Email is a natural fit for async; in fact, much of early email was carried by UUCP. It is useful for an airgapped machine to be able to send back messages; errors from cron, results of handling incoming data, disk space alerts, etc. (Of course, this would apply to a non-airgapped machine also). The NNCP documentation already describes how to do this for Postfix. Here I will show how to do it for Exim. A quick detour to UUCP land When you encounter a system such as email that has instructions for doing something via UUCP, that should be an alert to you that here is some very relevant information for doing this same thing via NNCP. The syntax is different, but broadly, here's a table of similar NNCP commands:
Purpose | UUCP | NNCP
Connect to remote system | uucico -s, uupoll | nncp-call, nncp-caller
Receive connection (pipe, daemon, etc) | uucico (-l or similar) | nncp-daemon
Request remote execution, stdin piped in | uux | nncp-exec
Copy file to remote machine | uucp | nncp-file
Copy file from remote machine | uucp | nncp-freq
Process received requests | uuxqt | nncp-toss
Move outbound requests to dir (for USB stick, airgap, etc) | N/A | nncp-xfer
Create streaming package of outbound requests | N/A | nncp-bundle
If you used UUCP back in the day, you surely remember bang paths. I will not be using those here. NNCP handles routing itself, rather than making the MTA be aware of the network topology, so this simplifies things considerably. Sending from Exim to a smarthost One common use for async email is from a satellite system: one that doesn't receive mail, or have local mailboxes, but just needs to get email out to the Internet. This is a common situation even for conventionally-connected systems; in Exim speak, this is a satellite system that routes mail via a smarthost. That is, every outbound message goes to a specific target, which then is responsible for eventual delivery (over the Internet, LAN, whatever). This is fairly simple in Exim. We actually have two choices for how to do this: bsmtp or rmail mode. bsmtp (batch SMTP) is the more modern way, and is essentially a derivative of SMTP that explicitly can be queued asynchronously. Basically it's a set of SMTP commands that can be saved in a file. The alternative is rmail (which is just an alias for sendmail these days), where the data is piped to rmail/sendmail with the recipients given on the command line. Both can work with Exim and NNCP, but because we're doing shiny new things, we'll use bsmtp. These instructions are loosely based on the Using outgoing BSMTP with Exim HOWTO. Some of these may assume Debianness in the configuration, but should be easily enough extrapolated to other configs as well. First, configure Exim to use satellite mode with minimal DNS lookups (assuming that you may not have working DNS anyhow). Then, in the Exim primary router section for smarthost (router/200_exim4-config_primary in Debian split configurations), just change transport = remote_smtp_smarthost to transport = nncp. Now, define the NNCP transport. If you are on Debian, you might name this transports/40_exim4-config_local_nncp:
nncp:
  debug_print = "T: nncp transport for $local_part@$domain"
  driver = pipe
  user = nncp
  batch_max = 100
  use_bsmtp
  command = /usr/local/nncp/bin/nncp-exec -noprogress -quiet hostname_goes_here rsmtp
.ifdef REMOTE_SMTP_HEADERS_REWRITE
  headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE
.endif
.ifdef REMOTE_SMTP_RETURN_PATH
  return_path = REMOTE_SMTP_RETURN_PATH
.endif
This is pretty straightforward. We pipe to nncp-exec, run it as the nncp user. nncp-exec sends it to a target node and runs whatever that node has called rsmtp (the command to receive bsmtp data). When the target node processes the request, it will run the configured command and pipe the data in to it. More complicated: Routing to various NNCP nodes Perhaps you would like to be able to send mail directly to various NNCP nodes. There are a lot of ways to do that. Fundamentally, you will need a setup similar to the UUCP example in Exim's manualroute manual, which lets you define how to reach various hosts via UUCP/NNCP. Perhaps you have a star topology (every NNCP node exchanges email with a central hub). In the NNCP world, you have two choices of how you do this. You could, at the Exim level, make the central hub the smarthost for all the side nodes, and let it redistribute mail. That would work, but requires decrypting messages at the hub to let Exim process. The other alternative is to configure NNCP to just send to the destinations via the central hub; that takes advantage of onion routing and doesn't require any Exim processing at the central hub at all. Receiving mail from NNCP On the receiving side, first you need to configure NNCP to authorize the execution of a mail program. In the section of your receiving host where you set the permissions for the client, include something like this:
      exec: {
        rsmtp: ["/usr/sbin/sendmail", "-bS"]
      }
The -bS option is what tells Exim to receive BSMTP on stdin. Now, you need to tell Exim that nncp is a trusted user (able to set From headers arbitrarily). Assuming you are running NNCP as the nncp user, then add MAIN_TRUSTED_USERS = nncp to a file such as /etc/exim4/conf.d/main/01_exim4-config_local-nncp. That's it! Some hosts, of course, both send and receive mail via NNCP and will need configurations for both.
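For the direct-routing case mentioned above, a manualroute router might look something like the following. This is my own untested sketch in the style of the UUCP example from the Exim specification; the domains and NNCP node names are invented, and the nncp transport shown earlier would then need to use $host in its command rather than a hard-coded node name:
# Hypothetical router: route selected domains to named NNCP neighbors.
nncp_neighbors:
  debug_print = "R: nncp_neighbors for $local_part@$domain"
  driver = manualroute
  domains = node1.example.org : node2.example.org
  route_list = node1.example.org  node1 ; node2.example.org  node2
  transport = nncp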

23 December 2020

John Goerzen: How & Why To Use Airgapped Backups

A good backup strategy needs to consider various threats to the integrity of data. For instance: It's that last one that is of particular interest today. A lot of backup strategies are such that if a user (or administrator) has their local account or network compromised, their backups could very well be destroyed as well. For instance, do you ssh from the account being backed up to the system holding the backups? Or rsync using a keypair stored on it? Or access S3 buckets, etc? It is trivially easy in many of these schemes to totally ruin cloud-based backups, or even some other schemes. rsync can be run with delete (and often is, to prune remotes), S3 buckets can be deleted, etc. And even if you try to lock down an over-network backup to be append-only, still there are vectors for attack (ssh credentials, OpenSSL bugs, etc). In this post, I try to explore how we can protect against them and still retain some modern conveniences. A backup scheme also needs to make a balance between: My story so far About 20 years ago, I had an Exabyte tape drive, with the amazing capacity of 7GB per tape! Eventually as disk prices fell, I had external disks plugged in to a server, and would periodically rotate them offsite. I've also had various combinations of partial or complete offsite copies over the Internet as well. I have around 6TB of data to back up (after compression), a figure that is growing somewhat rapidly as I digitize some old family recordings and videos. Since I last wrote about backups 5 years ago, my scheme has been largely unchanged; at present I use ZFS for local and to-disk backups and borg for the copies over the Internet. Let's take a look at some options that could make this better. Tape The original airgapped backup. You back up to a tape, then you take the (fairly cheap) tape out of the drive and put in another one. In cost per GB, tape is probably the cheapest medium out there. But of course it has its drawbacks. Let's start with cost. To get a drive that can handle capacities of what I'd be needing, at least LTO-6 (2.5TB per tape) would be needed, if not LTO-7 (6TB). New, these drives cost several thousand dollars, plus they need LVD SCSI or Fibre Channel cards. You're not going to be hanging one off a Raspberry Pi; these things need a real server with enterprise-style connectivity. If you're particularly lucky, you might find an LTO-6 drive for as low as $500 on eBay. Then there are tapes. A 10-pack of LTO-6 tapes runs more than $200, and provides a total capacity of 25TB, sufficient for these needs (note that, of course, you need to have at least double the actual space of the data, to account for multiple full backups in a set). A 5-pack of LTO-7 tapes is a little more expensive, while providing more storage. So all-in, this is going to be, in the best possible scenario, nearly $1000, and possibly a lot more. For a large company with many TB of storage, the initial costs can be defrayed due to the cheaper media, but for a home user, not so much. Consider that 8TB hard drives can be found for $150-$200. A pair of them (for redundancy) would run $300-400, and then you have all the other benefits of disk (quicker access, etc.) Plus they can be driven by something as cheap as a Raspberry Pi. Fancier tape setups involve auto-changers, but then you're not really airgapped, are you? (If you leave all your tapes in the changer, they can generally be selected and overwritten, barring things like hardware WORM).
As useful as tape is, for this project, it would simply be way more expensive than disk-based options. Fundamentals of disk-based airgapping The fundamental thing we need to address with disk-based airgapping is that the machines being backed up have no real-time contact with the backup storage system. This rules out most solutions out there, that want to sync by comparing local state with remote state. If one is willing to throw storage efficiency out the window (maybe practical for very small data sets), one could just send a full backup daily. But in reality, what is more likely needed is a way to store a local proxy for the remote state. Then a runner device (a USB stick, disk, etc) could be plugged into the network, filled with queued data, then plugged into the backup system to have the data dequeued and processed. Some may be tempted to short-circuit this and just plug external disks into a backup system. I've done that for a long time. This is, however, a risk, because it makes those disks vulnerable to whatever may be attacking the local system (anything from lightning to ransomware). ZFS ZFS is, it should be no surprise, particularly well suited for this. zfs send/receive can send an incremental stream that represents a delta between two checkpoints (snapshots or bookmarks) on a filesystem. It can do this very efficiently, much more so than walking an entire filesystem tree. Additionally, with the recent addition of ZFS crypto to ZFS on Linux, the replication stream can optionally reflect the encrypted data. Yes, as long as you don't need to mount them, you can mostly work with ZFS datasets on an encrypted basis, and can directly tell zfs send to just send the encrypted data instead of the decrypted data. The downside of ZFS is the resource requirements at the destination, which in terms of RAM are higher than most of the older Raspberry Pi-style devices. Still, one could perhaps just save off zfs send streams and restore them later if need be, but that implies a periodic resend of a full stream, an inefficient operation. Deduplicating software such as borg could be used on those streams (though with less effectiveness if they're encrypted). Tar Perhaps surprisingly, tar in listed incremental mode can solve this problem for non-ZFS users. It will keep a local cache of the state of the filesystem as of the time of the last run of tar, and can generate new tarballs that reflect the changes since the previous run (even deletions). This can achieve a similar result to the ZFS send/receive, though in a much less elegant way. Bacula / Bareos Bacula (and its fork Bareos) both have support for a FIFO destination. Theoretically this could be used to queue up data for transfer to the airgapped machine. This support is very poorly documented in both and is rumored to have bitrotted, however. rdiff and xdelta rdiff and xdelta can be used as sort of a non-real-time rsync, at least on a per-file basis. Theoretically, one could generate a full backup (with tar, ZFS send, or whatever), take an rdiff signature, and send over the file while keeping the signature. On the next run, another full backup is piped into rdiff, and on the basis of the signature file of the old and the new data, it produces a binary patch that can be queued for the backup target to update its stored copy of the file. This leaves history preservation as an exercise to be undertaken on the backup target. It may not necessarily be easy and may not be efficient.
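To make two of those ideas slightly more concrete, here are hedged, illustrative commands of mine; the dataset, snapshot, and path names are invented:
# ZFS: send the raw (still-encrypted) incremental stream between two snapshots
# to a spool file that a runner device or NNCP could carry.
zfs send -w -i tank/home@2021-01-01 tank/home@2021-01-08 > /var/spool/outgoing/home.zfsinc

# GNU tar: listed-incremental mode keeps state in a .snar file and emits only
# what changed (including deletions) since the previous run with that file.
tar --listed-incremental=/var/lib/backup/home.snar \
    -C / -cpzf /var/spool/outgoing/home-$(date +%Y%m%d).tar.gz home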
rsync batches rsync can be used to compute a delta between two directory trees and express this as a single-file batch that can be processed by a remote rsync. Unfortunately this implies the sender must always keep an old tree around (barring a solution such as ZFS snapshots) in order to compute the delta, and of course it still implies the need for history processing on the remote. Getting the Data There OK, so you've got an airgapped system, some sort of runner device for your sneakernet (USB stick, hard drive, etc). Now what? Obviously you could just copy data on the runner and move it back off at the backup target. But a tool like NNCP (sort of a modernized UUCP) offers a lot of help in automating the process, returning error reports, etc. NNCP can be used online over TCP, over reliable serial links, over ssh, with offline onion routing via intermediaries or directly, etc. Imagine having an airgapped machine at a different location you go to frequently (workplace, friend, etc). Before leaving, you put a USB stick in your pocket. When you get there, you pop it in. It's despooled and processed while you wait, and return emails or whatever are queued up to be sent when you get back home. Not bad, eh? Future installment I'm going to try some of these approaches and report back on my experiences in the next few weeks.
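The rsync batch idea above might look roughly like this (an untested sketch of mine; the mirror path, batch file name, and destinations are assumptions):
# On the source: update a local mirror of the last-sent state while writing a
# batch file describing the changes, then queue that batch for the runner/NNCP.
rsync -a --write-batch=/var/spool/outgoing/home.batch /home/ /srv/last-sent/home/

# On the backup target: apply the received batch to its copy.
rsync -a --read-batch=/path/to/home.batch /backups/home/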

21 December 2020

Russ Allbery: Review: Behind the Throne

Review: Behind the Throne, by K.B. Wagers
Series: Indranan War #1
Publisher: Orbit
Copyright: August 2016
ISBN: 0-316-30859-5
Format: Kindle
Pages: 416
Hail is a gunrunner, an outlaw and criminal, someone who knows how to survive violence and navigate by personal loyalty. That world knows her as Cressen Stone. What her colleagues don't know is that she's also an Imperial Princess. Hailimi Mercedes Jaya Bristol left that world twenty years earlier in secret pursuit of her father's killer and had no intention of returning. But her sisters are dead, her mother's health is failing, and two Imperial Trackers have been sent to bring her back to her rightful position as heir. I'm going to warn up-front that the first half of this novel was rough to the point of being unreadable. Wagers tries much too hard to establish Hail as a reluctant heroine torn between her dislike of royal protocols and her grief and anger at the death of her sisters. The result is excessively melodramatic and, to be frank, badly written. There are a lot of passages like this:
His words slammed into me, burning like the ten thousand volts of a Solarian Conglomerate police Taser.
(no, there's no significance to the Solarian Conglomerate here), or, just three paragraphs later:
The air rushed out of my lungs. Added grief for a niece I'd never known. One more log on the pyre set to burn my freedom to ashes. The hope I'd had of getting out of this mess was lost in that instant, and I couldn't do anything but stare at Emmory in abject shock.
Given how much air rushes out of Hail's lungs and how often she's struck down with guilt or grief, it's hard to believe she doesn't have brain damage. Worse, Hail spends a great deal of the first third of the book whining, which given that the book is written in first person gets old very quickly. Every emotion is overwritten and overstressed as Hail rails against obvious narrative inescapability. It's blatantly telegraphed from the first few pages that Hail is going to drop into the imperial palace like a profane invasion force and shake everything up, but the reader has to endure far too long of Hail being dramatically self-pitying about the plot. I almost gave up on this book in irritation (and probably should have). And then it sort of grew on me, because the other thing Wagers is doing (also not subtly) is a story trope for which I have a particular weakness: The fish out of water who nonetheless turns out to be the person everyone needs because she's systematically and deliberately kind and thoughtful while not taking any shit. Hail left Pashati young and inexperienced, with a strained relationship with her mother and a habit of letting her temper interfere with her ability to negotiate palace politics. She still has the temper, but age, experience, and confidence mean that she's decisive and confident in a way she never was before. The second half of this book is about Hail building her power base and winning loyalty by being loyal and decent. It's still not great writing, but there's something there I enjoyed reading. Wagers's setting is intriguing, although it makes me a bit nervous. The Indranan Empire was settled by colonists of primarily Indian background. The court trappings, mythology, and gods referenced in Behind the Throne are Hindu-derived, and I suspect (although didn't confirm) that the funeral arrangements are as well. Formal wear (and casual wear) for women is a sari. There's a direct reference to the goddess Lakshimi (not Lakshmi, which Wikipedia seems to indicate is the correct spelling, although transliteration is always an adventure). I was happy to see this, since there are more than enough SF novels out there that seem to assume only western countries go into space. But I'm never sure whether the author did enough research or has enough personal knowledge to pull off the references correctly, and I personally wouldn't know the difference. The Indranan Empire is also matriarchal, and here Wagers goes for an inversion of sexism that puts men in roughly the position women were in the 1970s. They can, in theory, do most jobs, but there are many things they're expected not to do, there are some explicit gender lines in power structures, and the role of men in society is a point of political conflict. It's skillfully injected as social background, with a believable pattern of societal prejudice that doesn't necessarily apply to specific men in specific situations. I liked that Wagers did this without giving the Empire itself any feminine-coded characteristics. All admirals are women because the characters believe women are obviously better military leaders, not because of some claptrap about nurturing or caring or some other female-coded reason from our society. That said, this gender role inversion didn't feel that significant to the story. The obvious "sexism is bad, see what it would be like if men were subject to it" message ran parallel to the main plot and never felt that insightful to me. 
I'm therefore not sure it was successful or worth the injection of sexism into the reading experience, although it certainly is different from the normal fare of space empires. I can't recommend Behind the Throne because a lot of it just isn't very good. But I still kind of want to because I sincerely enjoyed the last third of the book, despite some lingering melodrama. Watching Hail succeed by being a decent, trustworthy, loyal, and intelligent person is satisfying, once she finally stops whining. The destination is probably not worth the journey, but now that I've finished the first book, I'm tempted to grab the second. Followed by After the Crown. Rating: 6 out of 10

26 October 2020

Marco d'Itri: RPKI validation with FORT Validator

This article documents how to install FORT Validator (an RPKI relying party software which also implements the RPKI to Router protocol in a single daemon) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings. The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:
cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END
cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release bullseye
Pin-Priority: 100
Package: fort-validator rpki-trust-anchors
Pin: release bullseye
Pin-Priority: 990
END
apt update
Before starting, make sure that curl (or wget) and the web PKI certificates are installed:
apt install curl ca-certificates
If you already know about the legal issues related to the ARIN TAL then you may instruct the package to automatically install it. If you skip this step then you will be asked at installation time about it, either way is fine.
echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
    | debconf-set-selections
Install the package as usual:
apt install fort-validator
You may also install rpki-client and gortr on Debian 10, or maybe cfrpki and gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the good packaging practices of Linux distributions.

Marco d'Itri: RPKI validation with OpenBSD's rpki-client and Cloudflare's gortr

This article documents how to install rpki-client (an RPKI relying party software, the actual validator) and gortr (which implements the RPKI to Router protocol) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings. The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:
cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END
cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release bullseye
Pin-Priority: 100
Package: gortr rpki-client rpki-trust-anchors
Pin: release bullseye
Pin-Priority: 990
END
apt update
Before starting, make sure that curl (or wget) and the web PKI certificates are installed:
apt install curl ca-certificates
If you already know about the legal issues related to the ARIN TAL then you may instruct the package to automatically install it. If you skip this step then you will be asked at installation time about it, either way is fine.
echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
    | debconf-set-selections
Install the packages as usual:
apt install rpki-client gortr
And then configure rpki-client to generate its output in the JSON format needed by gortr:
echo 'OPTIONS=-j' > /etc/default/rpki-client
You may manually start the service unit to immediately generate the data instead of waiting for the next timer run:
systemctl start rpki-client &
gortr too needs to be configured to use the JSON data generated by rpki-client:
echo 'GORTR_ARGS=-bind :323 -verify=false -checktime=false -cache /var/lib/rpki-client/json' > /etc/default/gortr
And then it needs to be restarted to use the new configuration:
systemctl restart gortr
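As a quick, generic sanity check of mine (not a step from the article), you can confirm that gortr is now listening on the RTR port configured above:
ss -ltn | grep -w 323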
You may also install FORT Validator on Debian 10, or maybe cfrpki with gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the packaging practices of Linux distributions.

3 October 2020

Steve Kemp: Writing an assembler.

Recently I've been writing a couple of simple compilers, which take input in a particular format and generate assembly language output. This output can then be piped through gcc to generate a native executable. Public examples include this trivial math compiler and my brainfuck compiler. Of course there's always the nagging thought that relying upon gcc (or nasm) is a bit of a cheat. So I wondered how hard is it to write an assembler? Something that would take assembly-language program and generate a native (ELF) binary? And the answer is "It isn't hard, it is just tedious". I found some code to generate an ELF binary, and after that assembling simple instructions was pretty simple. I remember from my assembly-language days that the encoding of instructions can be pretty much handled by tables, but I've not yet gone into that. (Specifically there are instructions like "add rax, rcx", and the encoding specifies the source/destination registers - with different forms for various sized immediates.) Anyway I hacked up a simple assembler, it can compile a.out from this input:
.hello   DB "Hello, world\n"
.goodbye DB "Goodbye, world\n"
        mov rdx, 13        ;; write this many characters
        mov rcx, hello     ;; starting at the string
        mov rbx, 1         ;; output is STDOUT
        mov rax, 4         ;; sys_write
        int 0x80           ;; syscall
        mov rdx, 15        ;; write this many characters
        mov rcx, goodbye   ;; starting at the string
        mov rax, 4         ;; sys_write
        mov rbx, 1         ;; output is STDOUT
        int 0x80           ;; syscall
        xor rbx, rbx       ;; exit-code is 0
        xor rax, rax       ;; syscall will be 1 - so set to zero, then increase
        inc rax            ;;
        int 0x80           ;; syscall
The obvious omission is support for "JMP", "JMP_NZ", etc. That's painful because jumps are encoded with relative offsets. For the moment if you want to jump:
        push foo     ; "jmp foo" - indirectly.
        ret
:bar
        nop          ; Nothing happens
        mov rbx,33   ; first syscall argument: exit code
        mov rax,1    ; system call number (sys_exit)
        int 0x80     ; call kernel
:foo
        push bar     ; "jmp bar" - indirectly.
        ret
I'll update to add some more instructions, and see if I can use it to handle the output I generate from a couple of other tools. If so that's a win, if not then it was a fun learning experience.

21 September 2020

Kees Cook: security things in Linux v5.7

Previously: v5.6 Linux v5.7 was released at the end of May. Here's my summary of various security things that caught my attention: arm64 kernel pointer authentication
While the ARMv8.3 CPU Pointer Authentication (PAC) feature landed for userspace already, Kristina Martsenko has now landed PAC support in kernel mode. The current implementation uses PACIASP which protects the saved stack pointer, similar to the existing CONFIG_STACKPROTECTOR feature, only faster. This also paves the way to sign and check pointers stored in the heap, as a way to defeat function pointer overwrites in those memory regions too. Since the behavior is different from the traditional stack protector, Amit Daniel Kachhap added an LKDTM test for PAC as well. BPF LSM
The kernel's Linux Security Module (LSM) API provides a way to write security modules that have traditionally implemented various Mandatory Access Control (MAC) systems like SELinux, AppArmor, etc. The LSM hooks are numerous and no one LSM uses them all, as some hooks are much more specialized (like those used by IMA, Yama, LoadPin, etc). There was not, however, any way to externally attach to these hooks (not even through a regular loadable kernel module) nor build fully dynamic security policy, until KP Singh landed the API for building LSM policy using BPF. With this, it is possible (for a privileged process) to write kernel LSM hooks in BPF, allowing for totally custom security policy (and reporting). execve() deadlock refactoring
There have been a number of long-standing races in the kernel's process launching code where ptrace could deadlock. Fixing these has been attempted several times over the last many years, but Eric W. Biederman and Bernd Edlinger decided to dive in, and successfully landed a series of refactorings, splitting up the problematic locking and refactoring their uses to remove the deadlocks. While he was at it, Eric also extended the exec_id counter to 64 bits to avoid the possibility of the counter wrapping and allowing an attacker to send arbitrary signals to processes they normally shouldn't be able to. slub freelist obfuscation improvements
After Silvio Cesare observed some weaknesses in the implementation of CONFIG_SLAB_FREELIST_HARDENED's freelist pointer content obfuscation, I improved their bit diffusion, which makes attacks require significantly more memory content exposures to defeat the obfuscation. As part of the conversation, Vitaly Nikolenko pointed out that the freelist pointer's location made it relatively easy to target too (for either disclosures or overwrites), so I moved it away from the edge of the slab, making it harder to reach through small-sized overflows (which usually target the freelist pointer). As it turns out, there were a few assumptions in the kernel about the location of the freelist pointer, which had to also get cleaned up. RISCV page table dumping
Following v5.6's generic page table dumping work, Zong Li landed the RISCV page dumping code. This means it's much easier to examine the kernel's page table layout when running a debug kernel (built with PTDUMP_DEBUGFS), visible in /sys/kernel/debug/kernel_page_tables. array index bounds checking
This is a pretty large area of work that touches a lot of overlapping elements (and history) in the Linux kernel. The short version is: C is bad at noticing when it uses an array index beyond the bounds of the declared array, and we need to fix that. For example, don't do this:
int foo[5];
...
foo[8] = bar;
The long version gets complicated by the evolution of flexible array structure members, so we'll pause for a moment and skim the surface of this topic. While things like CONFIG_FORTIFY_SOURCE try to catch these kinds of cases in the memcpy() and strcpy() family of functions, it doesn't catch it in open-coded array indexing, as seen in the code above. GCC has a warning (-Warray-bounds) for these cases, but it was disabled by Linus because of all the false positives seen due to fake flexible array members. Before flexible arrays were standardized, GNU C supported zero sized array members. And before that, C code would use a 1-element array. These were all designed so that some structure could be the header in front of some data blob that could be addressable through the last structure member:
/* 1-element array */
struct foo {
    ...
    char contents[1];
};
/* GNU C extension: 0-element array */
struct foo {
    ...
    char contents[0];
};
/* C standard: flexible array */
struct foo {
    ...
    char contents[];
};
instance = kmalloc(sizeof(struct foo) + content_size);
Converting all the zero- and one-element array members to flexible arrays is one of Gustavo A. R. Silva's goals, and hundreds of these changes started landing. Once fixed, -Warray-bounds can be re-enabled. Much more detail can be found in the kernel's deprecation docs. However, that will only catch the "visible at compile time" cases. For runtime checking, the Undefined Behavior Sanitizer has an option for adding runtime array bounds checking for catching things like this where the compiler cannot perform a static analysis of the index values:
int foo[5];
...
for (i = 0; i < some_argument; i++) {
    ...
    foo[i] = bar;
    ...
}
It was, however, not separate (via kernel Kconfig) until Elena Petrova and I split it out into CONFIG_UBSAN_BOUNDS, which is fast enough for production kernel use. With this enabled, it's now possible to instrument the kernel to catch these conditions, which seem to come up with some regularity in Wi-Fi and Bluetooth drivers for some reason. Since UBSAN (and the other Sanitizers) only WARN() by default, system owners need to set panic_on_warn=1 too if they want to defend against attacks targeting these kinds of flaws. Because of this, and to avoid bloating the kernel image with all the warning messages, I introduced CONFIG_UBSAN_TRAP which effectively turns these conditions into a BUG() without needing additional sysctl settings. Fixing "additive" snprintf() usage
A common idiom in C for building up strings is to use sprintf()'s return value to increment a pointer into a string, and build a string with more sprintf() calls:
/* safe if strlen(foo) + 1 < sizeof(string) */
wrote  = sprintf(string, "Foo: %s\n", foo);
/* overflows if strlen(foo) + strlen(bar) > sizeof(string) */
wrote += sprintf(string + wrote, "Bar: %s\n", bar);
/* writing way beyond the end of "string" now ... */
wrote += sprintf(string + wrote, "Baz: %s\n", baz);
The risk is that if these calls eventually walk off the end of the string buffer, it will start writing into other memory and create some bad situations. Switching these to snprintf() does not, however, make anything safer, since snprintf() returns how much it would have written:
/* safe, assuming available <= sizeof(string), and for this example
 * assume strlen(foo) < sizeof(string) */
wrote  = snprintf(string, available, "Foo: %s\n", foo);
/* if (strlen(bar) > available - wrote), this is still safe since the
 * write into "string" will be truncated, but now "wrote" has been
 * incremented by how much snprintf() *would* have written, so "wrote"
 * is now larger than "available". */
wrote += snprintf(string + wrote, available - wrote, "Bar: %s\n", bar);
/* string + wrote is beyond the end of string, and available - wrote wraps
 * around to a giant positive value, making the write effectively 
 * unbounded. */
wrote += snprintf(string + wrote, available - wrote, "Baz: %s\n", baz);
So while the first overflowing call would be safe, the next one would be targeting beyond the end of the array and the size calculation will have wrapped around to a giant limit. Replacing this idiom with scnprintf() solves the issue because it only reports what was actually written. To this end, Takashi Iwai has been landing a bunch of scnprintf() fixes. That's it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.8.

2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.

31 August 2020

Russ Allbery: Review: Men at Arms

Review: Men at Arms, by Terry Pratchett
Series: Discworld #15
Publisher: Harper
Copyright: 1993
Printing: November 2013
ISBN: 0-06-223740-3
Format: Mass market
Pages: 420
Men at Arms is the fifteenth Discworld novel and a direct plot sequel to Guards! Guards!. You could start here without missing too much, but starting with Guards! Guards! would make more sense. And of course there are cameos (and one major appearance) by other characters who are established in previous books. Carrot, the adopted dwarf who joined the watch in Guards! Guards!, has been promoted to corporal. He is now in charge of training new recruits, a role that is more important because of the Night Watch's new Patrician-ordered diversity initiative. The Watch must reflect the ethnic makeup of the city. That means admitting a troll, a dwarf... and a woman? Trolls and dwarfs hate each other because dwarfs mine precious things out of rock and trolls are composed of precious things embedded in rocks, so relations between the new recruits are tense. Captain Vimes is leaving the Watch, and no one is sure who would or could replace him. (The reason for this is a minor spoiler for Guards! Guards!) A magical weapon is stolen from the Assassin's Guild. And a string of murders begins, murders that Vimes is forbidden by Lord Vetinari from investigating and therefore clearly is going to investigate. This is an odd moment at which to read this book. The Night Watch are not precisely a police force, although they are moving in that direction. Their role in Ankh-Morpork is made much stranger by the guild system, in which the Thieves' Guild is responsible for theft and for dealing with people who steal outside of the quota of the guild. But Men at Arms is in part a story about ethics, about what it means to be a police officer, and about what it looks like when someone is very good at that job. Since I live in the United States, that makes it hard to avoid reading Men at Arms in the context of the current upheavals about police racism, use of force, and lack of accountability. Men at Arms can indeed be read that way; community relations, diversity in the police force, the merits of making two groups who hate each other work together, and the allure of violence are all themes Pratchett is working with in this novel. But they're from the perspective of a UK author writing in 1993 about a tiny city guard without any of the machinery of modern police, so I kept seeing a point of clear similarity and then being slightly wrong-footed by the details. It also felt odd to read a book where the cops are the heroes, much in the style of a detective show. This is in no way a problem with the book, and in a way it was helpful perspective, but it was a strange reading experience.
Cuddy had only been a guard for a few days but already he had absorbed one important and basic fact: it is almost impossible for anyone to be in a street without breaking the law.
Vimes and Carrot are both excellent police officers, but in entirely different ways. Vimes treats being a cop as a working-class job and is inclined towards glumness and depression, but is doggedly persistent and unable to leave a problem alone. His ethics are covered by a thick layer of world-weary cynicism. Carrot is his polar opposite in personality: bright, endlessly cheerful, effortlessly charismatic, and determined to get along with everyone. On first appearance, this contrast makes Vimes seem wise and Carrot seem a bit dim. That is exactly what Pratchett is playing with and undermining in Men at Arms. Beneath Vimes's cynicism, he's nearly as idealistic as Carrot, even though he arrives at his ideals through grim contrariness. Carrot, meanwhile, is nowhere near as dim as he appears to be. He's certain about how he wants to interact with others and is willing to stick with that approach no matter how bad of an idea it may appear to be, but he's more self-aware than he appears. He and Vimes are identical in the strength of their internal self-definition. Vimes shows it through the persistent, grumpy stubbornness of a man devoted to doing an often-unpleasant job, whereas Carrot verbally steamrolls people by refusing to believe they won't do the right thing.
Colon thought Carrot was simple. Carrot often struck people as simple. And he was. Where people went wrong was thinking that simple meant the same thing as stupid.
There's a lot going on in this book apart from the profiles of two very different models of cop. Alongside the mystery (which doubles as pointed commentary on the corrupting influence of violence and personal weaponry), there's a lot about dwarf/troll relations, a deeper look at the Ankh-Morpork guilds (including a horribly creepy clown guild), another look at how good Lord Vetinari is at running the city by anticipating how other people will react, a sarcastic dog named Gaspode (originally seen in Moving Pictures), and Pratchett's usual collection of memorable lines. It is also the origin of the now-rightfully-famous Vimes boots theory:
The reason that the rich were so rich, Vimes reasoned, was because they managed to spend less money. Take boots, for example. He earned thirty-eight dollars a month plus allowances. A really good pair of leather boots cost fifty dollars. But an affordable pair of boots, which were sort of OK for a season or two and then leaked like hell when the cardboard gave out, cost about ten dollars. Those were the kind of boots Vimes always bought, and wore until the soles were so thin that he could tell where he was in Ankh-Morpork on a foggy night by the feel of the cobbles. But the thing was that good boots lasted for years and years. A man who could afford fifty dollars had a pair of boots that'd still be keeping his feet dry in ten years' time, while the poor man who could only afford cheap boots would have spent a hundred dollars on boots in the same time and would still have wet feet. This was the Captain Samuel Vimes 'Boots' theory of socioeconomic unfairness.
Men at Arms regularly makes lists of the best Discworld novels, and I can see why. At this point in the series, Pratchett has hit his stride. The plots have gotten deeper and more complex without losing the funny moments, movie and book references, and glorious turns of phrase. There is also a lot of life philosophy and deep characterization when one pays close attention to the characters.
He was one of those people who would recoil from an assault on strength, but attack weakness without mercy.
My one complaint is that I found it a bit overstuffed with both characters and subplots, and as a result had a hard time following the details of the plot. I found myself wanting a timeline of the murders or a better recap from one of the characters. As always with Pratchett, the digressions are wonderful, but they do occasionally come at the cost of plot clarity. I'm not sure I recommend the present moment in the United States as the best time to read this book, although perhaps there is no better time for Carrot and Vimes to remind us what good cops look like. But regardless of when one reads it, it's an excellent book, one of the best in the Discworld series to this point. Followed, in publication order, by Soul Music. The next Watch book is Feet of Clay. Rating: 8 out of 10

30 August 2020

Bits from Debian: DebConf20 online closes

DebConf20 group photo

On Saturday 29 August 2020, the annual Debian Developers and Contributors Conference came to a close. DebConf20 has been held online for the first time, due to the coronavirus (COVID-19) disease pandemic. All of the sessions have been streamed, with a variety of ways of participating: via IRC messaging, online collaborative text documents, and video conferencing meeting rooms. With more than 850 attendees from 80 different countries and a total of over 100 event talks, discussion sessions, Birds of a Feather (BoF) gatherings and other activities, DebConf20 was a big success.

When it became clear that DebConf20 was going to be an online-only event, the DebConf video team spent much time over the next months to adapt, improve, and in some cases write from scratch, technology that would be required to make an online DebConf possible. After lessons learned from the MiniDebConfOnline in late May, some adjustments were made, and then eventually we came up with a setup involving Jitsi, OBS, Voctomix, SReview, nginx, Etherpad, and a newly written web-based frontend for voctomix as the various elements of the stack. All components of the video infrastructure are free software, and the whole setup is configured through their public ansible repository.

The DebConf20 schedule included two tracks in languages other than English: the Spanish language MiniConf, with eight talks in two days, and the Malayalam language MiniConf, with nine talks in three days. Ad-hoc activities, introduced by attendees over the course of the entire conference, have been possible too, streamed and recorded. There have also been several team gatherings to sprint on certain Debian development areas. Between talks, the video stream has been showing the usual sponsors in a loop, but also some additional clips including photos from previous DebConfs, fun facts about Debian and short shout-out videos sent by attendees to communicate with their Debian friends.

For those who were not able to participate, most of the talks and sessions are already available through the Debian meetings archive website, and the remaining ones will appear in the following days. The DebConf20 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events. Next year, DebConf21 is planned to be held in Haifa, Israel, in August or September.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Community team) have been available to help participants get the best experience at the conference, and to find solutions to any issue that might arise. See the web page about the Code of Conduct on the DebConf20 website for more details on this.

Debian thanks the numerous sponsors for their commitment to supporting DebConf20, particularly our Platinum Sponsors: Lenovo, Infomaniak, Google and Amazon Web Services (AWS).

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference.
In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/.

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

About Amazon Web Services (AWS)

Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

Contact Information

For further information, please visit the DebConf20 web page at https://debconf20.debconf.org/ or send mail to press@debian.org.

29 August 2020

Andrew Cater: Just coming to the end of Debconf 20 2020 - and a preview.

One more talk from Jon "maddog" Hall and then the closing wrap up. This has been a blast: I've enjoyed it a lot and it's made me more enthusiastic than I have been for a long time. So once more with thanks to the video team
It's almost the end of this year's DebConf dream
As we all break up, there's a favour to ask
Stay safe - wear a smile - and where needed, a mask

We'll not take you to task ; it's a favour we ask
Stay safe - wear a smile - and where needed, a mask

Haifa, Pristina or Kochi - we'll see how that lands
There's not much left to tidy up - wash your own hands
So now if you'll join us in virtual beer
We'll bring this to a close - and we'll meet up next year

So now if you'll join us - let us all raise a cheer
To Debconf21 - and we'll see you next year

24 August 2020

Arnaud Rebillout: Send emails from your terminal with msmtp

In this tutorial, we'll configure everything needed to send emails from the terminal. We'll use msmtp, a lightweight SMTP client. For the sake of the example, we'll use a GMail account, but any other email provider can do. Your OS is expected to be Debian, as usual on this blog, although it doesn't really matter. We will also see how to store the credentials for the email account in the system keyring. And finally, we'll go the extra mile, and see how to configure various command-line utilities so that they automatically use msmtp to send emails. Even better, we'll make msmtp the default email sender, to actually avoid configuring these utilities one by one.

Prerequisites

Strong prerequisites (if you don't recognize yourself here, you probably landed on the wrong page): Weak prerequisites (if your setup doesn't match those points exactly, that's fine, you can still read on):

GMail account setup

For a GMail account, there's a bit of configuration to do. For other email providers, I have no idea, maybe you can just skip this part, or maybe you will have to go through a similar procedure. If you want an external program (msmtp in this case) to talk to the GMail servers on your behalf, and send emails, you can't just use your usual GMail password. Instead, GMail requires you to generate so-called app passwords, one for each application that needs to access your GMail account. This approach has several advantages: So app passwords are a good idea, it just takes a bit of work to set up. Let's see what it takes. First, 2-Step Verification must be enabled on your GMail account. Visit https://myaccount.google.com/security, and if that's not the case, enable it. You'll need to authorize all of your devices (computer(s), phone(s) and so on), and it can be a bit tedious, granted. But you only have to do it once in a lifetime, and after it's done, you're left with a more secure account, so it's not that bad, right? Enabling the 2-Step Verification will unlock the feature we need: App passwords. Visit https://myaccount.google.com/apppasswords, and under "Signing in to Google", click "App passwords", and generate one. An app password is a 16-character string, something like qwertyuiopqwerty. It's supposed to be used from only one place, i.e. from ONE application that is installed on ONE device. That's why it's common to give it a name of the form application@device, so in our case it could be msmtp@laptop, but really it's free form, choose whatever name suits you, as long as it makes sense to you. So let's give a name to this app password, write it down for now, and we're done with the GMail config.

Send your first email

Time to get started with msmtp. First things first, installation is trivial:
sudo apt install msmtp
Let's try to send an email. At this point, we did not create any configuration file for msmtp yet, so we have to provide every details on the command line.
# Write a dummy email
cat << EOF > message.txt
From: YOUR_LOGIN@gmail.com
To: SOMEONE_ELSE@SOMEWHERE_ELSE.com
Subject: Cafe Sua Da
Iced-coffee with condensed milk
EOF
# Send it
cat message.txt | msmtp \
    --auth=on --tls=on \
    --host smtp.gmail.com \
    --port 587 \
    --user YOUR_LOGIN \
    --read-envelope-from \
    --read-recipients
# msmtp prompts you for your password:
# this is where the app password goes!
Obviously, in this example you should replace the uppercase words with the real thing, that is, your email login, and real email addresses. Also, let me insist, you must enter the app password that was generated previously, not your real GMail password. And it should work already, this email should have been sent and received by now. So let me explain quickly what happened here. In the file message.txt, we provided From: (the email address of the person sending the email) and To: (the destination email address). Then we asked msmtp to re-use those values to set the envelope of the email with --read-envelope-from and --read-recipients. What about the other parameters? For more details, you should refer to the msmtp documentation.

Write a configuration file

So we could send an email, that's cool already. However the command to do that was a bit long, and we don't want to juggle with all these arguments every time we send an email. So let's write down all of that into a configuration file. msmtp supports two locations: ~/.msmtprc and ~/.config/msmtp/config, at your preference. In this tutorial we'll use ~/.msmtprc for brevity:
cat << 'EOF' > ~/.msmtprc
defaults
tls on
account gmail
auth on
host smtp.gmail.com
port 587
user YOUR_LOGIN
from YOUR_LOGIN@gmail.com
account default : gmail
EOF
And for a quick explanation: the defaults section holds settings shared by every account, the account gmail section holds the GMail-specific settings, and the last line makes gmail the default account. All in all it's pretty simple, and it's becoming easier to send an email:
# Write a dummy email. Note that the
# header 'From:' is no longer needed,
# it's already in '~/.msmtprc'.
cat << 'EOF' > message.txt
To: SOMEONE_ELSE@SOMEWHERE_ELSE.com
Subject: Flat White
The milky way for coffee
EOF
# Send it
cat message.txt | msmtp \
    --account default \
    --read-recipients
Actually, --account default is not needed, as it's the default anyway if you don't provide a --account argument. Furthermore, --read-recipients can be shortened to -t. So we can make it real short now:
msmtp -t < message.txt
At this point, life is good! Except for one thing maybe: we still have to type the password every time we send an email. Surely it must be possible to avoid that annoyance...

Store your password in the system keyring

For this part, we'll make use of the libsecret tool to store the password in the system keyring via the Secret Service API. It means that your desktop environment should implement the Secret Service specification, which is the case for both GNOME and KDE. Note that GNOME provides Seahorse to have a look at your secrets, KDE has the KDE Wallet. There's also KeePassXC, which I have only heard of but never used. I guess it can be your password manager of choice if you use neither GNOME nor KDE. For those running an up-to-date Debian unstable, you should have msmtp >= 1.8.11-2, and you're all good to go. For those with an older version than that, however, you will have to install the package msmtp-gnome in order to have msmtp built with libsecret support. Note that this package depends on seahorse, hence it pulls in a good part of the GNOME stack when you install it. For those not running GNOME, that's unfortunate. All of this was discussed and fixed in #962689. Alright! So let's just make sure that the libsecret tools are installed:
sudo apt install libsecret-tools
And now we can store our password in the system keyring with this command:
secret-tool store --label msmtp \
    host smtp.gmail.com \
    service smtp \
    user YOUR_LOGIN
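To double-check that the password was stored (and can be found again by the same attributes), you can read it back; this simply prints the stored app password:
secret-tool lookup \
    host smtp.gmail.com \
    service smtp \
    user YOUR_LOGIN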
If this looks a bit too magic, and you want something more visual, you can actually fire up a GUI like seahorse (for GNOME users), or kwalletmanager5 (for KDE users), and then you will see what passwords are stored in there. Here's a screenshot of Seahorse with an msmtp password stored. Let's try to send an email again:
msmtp -t < message.txt
No need for a password anymore, msmtp got it from the system keyring! For more details on how msmtp handles the passwords, and to see what other methods are supported, refer to the extensive documentation.

Use-cases and integration

Let's go over a few use-cases, situations where you might end up sending emails from the command-line, and what configuration is required to make it work with msmtp.

Git Send-Email

Sending emails with git is a common workflow for some projects, like the Linux kernel. How does git send-email actually send emails? From the git-send-email manual page:
the built-in default is to search for sendmail in /usr/sbin, /usr/lib and $PATH if such program is available
It is possible to override this default though:
--smtp-server=
[...] Alternatively it can specify a full pathname of a sendmail-like program instead; the program must support the -i option.
So in order to use msmtp here, you'd add a snippet like that to your ~/.gitconfig file:
[sendemail]
    smtpserver = /usr/bin/msmtp
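With that in place, sending a patch works as usual; for example (the patch file and recipient here are of course made up):
# with the sendemail.smtpserver above, this goes out through msmtp
git send-email --to=someone@example.com 0001-fix-typo.patch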
For a full guide, you can also refer to https://git-send-email.io.

Debian developer tools

Tools like bts or reportbug are also good examples of command-line tools that need to send emails. From the bts manual page:
--sendmail=SENDMAILCMD
Specify the sendmail command [...] Default is /usr/sbin/sendmail.
So if you want bts to send emails with msmtp instead of sendmail, you must use bts --sendmail='/usr/bin/msmtp -t'. Note that bts also loads settings from the file /etc/devscripts.conf and ~/.devscripts, so you could also set BTS_SENDMAIL_COMMAND='/usr/bin/msmtp -t' in one of those files. From the reportbug manual page:
--mta=MTA
Specify an alternate MTA, instead of /usr/sbin/sendmail (the default).
In order to use msmtp here, you'd write reportbug --mta=/usr/bin/msmtp. Note that reportbug reads its settings from /etc/reportbug.conf and ~/.reportbugrc, so you could as well set mta /usr/bin/msmtp in one of those files.

So who is this sendmail again?

By now, you probably noticed that sendmail seems to be considered the default tool for the job, the "traditional" command that has been around for ages. Rather than configuring every tool to use something else than sendmail, wouldn't it be simpler to actually replace sendmail with msmtp? Like, create a symlink that points to msmtp, something like ln -sr /usr/bin/msmtp /usr/sbin/sendmail? So that msmtp acts as a drop-in replacement for sendmail, and there's nothing else to configure? The answer is yes, kind of. Actually, the first msmtp feature that is listed on the homepage is "Sendmail compatible interface (command line options and exit codes)". Meaning that msmtp is a drop-in replacement for sendmail, that seems to be the intent. However, you should refrain from creating or modifying anything in /usr, as it's the territory of the package manager, apt. Any change in /usr might be overwritten by apt the next time you run an upgrade or install new packages. In the case of msmtp, there is actually a package named msmtp-mta that will create this symlink for you. So if you really want a definitive replacement for sendmail, there you go:
sudo apt install msmtp-mta
From this point, sendmail is now a symlink from /usr/sbin/sendmail to /usr/bin/msmtp, and there's no need to configure git, bts, reportbug or any other tool that would rely on sendmail. Everything should work "out of the box".

Conclusion

I hope that you enjoyed reading this article! If you have any comment, feel free to send me a short email, preferably from your terminal!
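For instance, once msmtp-mta is installed, something as small as this (with a real recipient, of course) should end up in a mailbox:
printf 'To: SOMEONE_ELSE@SOMEWHERE_ELSE.com\nSubject: Hello\n\nSent through msmtp as sendmail.\n' \
    | /usr/sbin/sendmail -t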

8 August 2020

Holger Levsen: 20200808-debconf8

DebConf8 This t-shirt is 12 years old and from DebConf8. DebConf8 was my 6th DebConf and took place in Mar del Plata, Argentina. Also this is my 6th post in this series of posts about DebConfs, and for the last two days, for the first time, I failed my plan to do one post per day. And while two days ago I still planned to catch up on this by doing more than one post in a day, I have now decided to give in to realities, which mostly translates to sudden fantastic weather in Hamburg and other summer-related changes in life. So yeah, I still plan to do short posts about all the DebConfs I was lucky to attend, but there might be days without a blog post. Anyhow, Mar del Plata. When we held DebConf in Argentina it was winter there, meaning locals and other folks would wear jackets, scarves, probably gloves, while many Debian folks not so much. Andreas Tille freaked out and/or amazed local people by going swimming in the sea every morning. And when I told Stephen Gran that even I would find it a bit cold with just a t-shirt, he replied "na, the weather is fine, just like British summer", while it was 14 degrees Celsius and mildly raining. DebConf8 was the first time I met Valessio Brito, with whom I had worked since at least DebConf6. That meeting was really super nice, Valessio is such a lovely person. Back in 2008 however, there was just one problem: his spoken English was worse than his written one, and that was already hard to parse sometimes. Fast forward eleven years to Curitiba last year and boom, Valessio speaks really nice English now. And, you might wonder why I'm telling this, especially if you were exposed to my Spanish back then and also now. So my point in telling this story about Valessio is to illustrate two things: a.) one can contribute to Debian without speaking/writing much English, Valessio did lots of great artwork since DebConf6, and b.) one can learn English by doing Debian stuff. It worked for me too! During set up of the conference there was one very memorable moment, some time after the openssl maintainer, Kurt Roeckx, arrived at the venue: shortly before DebConf8 Luciano Bello, from Argentina no less, had found CVE-2008-0166, which basically compromised the security of sshd of all Debian and Ubuntu installations done in the last 4 years (IIRC two Debian releases were affected) and which was commented on heavily and noticed everywhere. So poor Kurt arrived and wondered whether we would all hate him, how many toilets he would have to clean and what not... And then, someone rather quickly noticed this, approached some people, and suddenly a bunch of people at DebConf were group-hugging Kurt, and then we were all smiling and continuing doing set up of the conference. That moment is one of my most joyful memories of all DebConfs and partly explains why I remember little about the conference itself; everything else pales in comparison, and most things pale over the years anyway. As I remember it, the conference ran very smoothly in the end, despite quite some organisational problems right before the start. But as usual, once the geeks arrive and are happily geeking, things start to run smoothly, also because Debian people are kind and smart and lend hands and brains where needed. And like other DebConfs, Mar del Plata also had moments which I want to share but will only hint about, so it's up to you to imagine the special leaves which were brought to that cheese and wine party!
;-) Update: added another xkcd link, spelled out Kurt's name after talking to him and added a link to a video of the group hug.

29 July 2020

Dirk Eddelbuettel: Installing and Running Ubuntu on a 2015-ish MacBook Air

So a few months ago kiddo one dropped an apparently fairly large cup of coffee onto her one and only trusted computer. With a few months (then) to graduation (which by now happened), and with the apparent Genius Bar verdict of "it's a goner", a new one was ordered. As it turns out this supposedly dead one coped well enough with the coffee so that after a few weeks of drying it booted again. But given the newer one, its apparent age and whatnot, it was deemed surplus. So I poked around a little on the interwebs and concluded that yes, this could work. Fast forward a few months and I finally got hold of it, and had some time to play with it. First, a bootable USB stick was prepared, and as the machine's content was really (really, and check again: really) no longer needed, I got hold of it for good. tl;dr: It works just fine. It is a little heavier than I thought (and isn't air supposed to be weightless?). The ergonomics seem quite nice. The keyboard is decent. Screen resolution on this pre-retina simple Air is so-so at 1440 pixels. But battery life seems ok and e.g. the camera is way better than what I have in my trusted Lenovo X1 or at my desktop. So just as a zoom client it may make a lot of sense; otherwise just walking around with it as a quick portable machine seems perfect (especially as my Lenovo X1 still (ahem) suffers from one broken key I really need to fix). Below are some lightly edited notes from the installation. Initial steps were quick: maybe an hour or less? Customizing a machine takes longer than I remembered; this took a few minutes here and there quite a few times, but always incremental.

Initial Steps
  • Download of Ubuntu 20.04 LTS image: took a few moments, even on broadband, feels slower than normal (fast!) Ubuntu package updates, maybe lesser CDN or bad luck
  • Startup Disk Creator using a so-far unused 8gb usb drive
  • Plug into USB, recycle power, press Option on macOS keyboard: voila
  • After a quick hunch no to live/test only and yes to install, whole disk
  • install easy, very few questions, somehow skips wifi
  • so activate wifi manually and everything pretty much works

Customization
  • First deal with the fn and ctrl key swap. Installed git and followed this github repo which worked just fine. Yay. First (manual) Linux kernel module build needed in half a decade? Longer?
  • Fire up firefox, go to "download chrome", install chrome. Sign in. Turn on syncing. Sign into Pushbullet and Momentum.
  • syncthing which is excellent. Initially via apt, later from their PPA. Spend some time remembering how to set up the mutual handshakes between devices. Now syncing desktop/server, lenovo x1 laptop, android phone and this new laptop
  • keepassx via apt and set up using Sync/ folder. Now all (encrypted) passwords synced.
  • Discovered synergy is no longer really free, so after a quick search found and installed barrier (via apt) to have one keyboard/mouse from the desktop reach the laptop.
  • Added emacs via apt, so far empty, no config files yet
  • Added ssh via apt, need to propagate keys to github and gitlab
  • Added R via add-apt-repository --yes "ppa:marutter/rrutter4.0" and add-apt-repository --yes "ppa:c2d4u.team/c2d4u4.0+". Added littler and then RStudio
  • Added wajig (apt frontend) and byobu, both via apt
  • Created ssh key, shipped it to server and github + gitlab
  • Cloned (not-public) dotfiles repo and linked some dotfiles in
  • Cloned git repo for nord-theme for gnome terminal and installed it; also added it to RStudio via this repo
  • Emacs installed, activated dotfiles, then incrementally install a few elpa-* packages and a few M-x package-install including nord-theme, of course
  • Installed JetBrains Mono font from my own local package; activated for Gnome Terminal and Emacs
  • Install gnome-tweak-tool via apt, adjusted a few settings
  • Ran gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'
  • Set up camera following this useful GH repo
  • At some point also added slack and zoom, because, well, it is 2020
  • STILL TODO:
    • docker
    • bother with email setup?,
    • maybe atom/code/ ?

28 July 2020

Russ Allbery: Review: The City in the Middle of the Night

Review: The City in the Middle of the Night, by Charlie Jane Anders
Publisher: Tor
Copyright: February 2019
Printing: February 2020
ISBN: 1-4668-7113-X
Format: Kindle
Pages: 366
January is a tidally-locked planet divided between permanent night and permanent day, an unfortunate destination for a colony starship. Now, humans cling to a precarious existence along the terminator, huddling in two wildly different cities and a handful of smaller settlements, connected by a road through the treacherous cold. The novel opens with Sophie, a shy university student from the dark side of the city of Xiosphant. She has an overwhelming crush on Bianca, her high-class, self-confident roommate and one of the few people in her life to have ever treated her with compassion and attention. That crush, and her almost non-existent self-esteem, lead her to take the blame for Bianca's petty theft, resulting in what should have been a death sentence. Sophie survives only because she makes first contact with a native intelligent species of January, one that the humans have been hunting for food and sport. Sadly, I think this is enough Anders for me. I've now bounced off two of her novels, both for structural reasons that I think go deeper than execution and indicate a fundamental mismatch between what Anders wants to do as an author and what I'm looking for as a reader. I'll talk more about what this book is doing in a moment, but I have to start with Bianca and Sophie. It's difficult for me to express how much I loathed this relationship and how little I wanted to read about it. It took me about five pages to peg Bianca as a malignant narcissist and Sophie's all-consuming crush as dangerous codependency. It took the entire book for Sophie to figure out how awful Bianca is to her, during which Bianca goes through the entire abusive partner playbook of gaslighting, trivializing, contingent affection, jealous rage, and controlling behavior. And meanwhile Sophie goes back to her again, and again, and again, and again. If I hadn't been reading this book on a Kindle, I think it would have physically hit a wall after their conversation in the junkyard. This is truly a matter of personal taste and preference. This is not an unrealistic relationship; this dynamic happens in life all too often. I'm sure there is someone for whom reading about Sophie's spectacularly poor choices is affirming or cathartic. I've not personally experienced this sort of relationship, which doubtless matters. But having empathy for someone who is making awful and self-destructive life decisions and trusting someone they should not be trusting and who is awful to them in every way is difficult work. Sophie is the victim of Bianca's abuse, but she does so many stupid and ill-conceived things in support of this twisted relationship that I found it very difficult to not get angry at her. Meanwhile, Anders writes Sophie as so clearly fragile and uncertain and devoid of a support network that getting angry at her is like kicking a puppy. The result for me was spending nearly an entire book in a deeply unpleasant state of emotional dissonance. I may be willing to go through that for a close friend, but in a work of fiction it's draining and awful and entirely not fun. The other viewpoint character had the opposite problem for me. Mouth starts the book as a traveling smuggler, the sole survivor of a group of religious travelers called the Citizens. She's practical, tough, and guarded. Beneath that, I think the intent was to show her as struggling to come to terms with the loss of her family and faith community. 
Her first goal in the book is to recover a recording of Citizen sacred scripture to preserve it and to reconnect with her past. This sounds interesting on the surface, but none of it gelled. Mouth never felt to me like someone from a faith community. She doesn't act on Citizen beliefs to any meaningful extent, she rarely talks about them, and when she does, her attitude is nostalgia without spirituality. When Mouth isn't pursuing goals that turn out to be meaningless, she aimlessly meanders through the story. Sophie at least has agency and makes some important and meaningful decisions. Mouth is just there, even when Anders does shattering things to her understanding of her past. Between Sophie and Bianca putting my shoulders up around my ears within the first few pages of the first chapter and failing to muster any enthusiasm for Mouth, I said the eight deadly words ("I don't care what happens to these people") about a hundred pages in and the book never recovered. There are parts of the world-building I did enjoy. The alien species that Sophie bonds with is not stunningly original, but it's a good (and detailed) take on one of the alternate cognitive and social models that science fiction has dreamed up. I was comparing the strangeness and dislocation unfavorably to China Miéville's Embassytown while I was reading it, but in retrospect Anders's treatment is more decolonialized. Xiosphant's turn to Circadianism as their manifestation of order is a nicely understated touch, a believable political overreaction to the lack of a day/night cycle. That touch is significantly enhanced by Sophie's time working in a salon whose business model is to help Xiosphant residents temporarily forget about time. And what glimmers we got of politics on the colony ship and their echoing influence on social and political structures were intriguing. Even with the world-building, though, I want the author to be interested in and willing to expand the same bits of world-building that I'm engaged with. Anders didn't seem to be. The reader gets two contrasting cities along a road, one authoritarian and one libertine, which makes concrete a metaphor for single-axis political classification. But then Anders does almost nothing with that setup; it's just the backdrop of petty warlord politics, and none of the political activism of Bianca's student group seems to have relevance or theoretical depth. It's a similar shallowness as the religion of Mouth's Citizens: We get a few fragments of culture and religion, but without narrative exploration and without engagement from any of the characters. The way the crew of the Mothership was assembled seems to have led to a factional and racial caste system based on city of origin and technical expertise, but I couldn't tell you more than that because few of the characters seem to care. And so on. In short, the world-building that I wanted to add up to a coherent universe that was meaningful to the characters and to the plot seemed to be little more than window-dressing. Anders tosses in neat ideas, but they don't add up to anything. They're just background scenery for Bianca and Sophie's drama. The one thing that The City in the Middle of the Night does well is Sophie's nervous but excited embrace of the unknown. It was delightful to see the places where a typical protagonist would have to overcome a horror reaction or talk themselves through tradeoffs and where Sophie's reaction was instead "yes, of course, let's try."
It provided an emotional strength to an extended first-contact exploration scene that made it liberating and heart-warming without losing the alienness. During that part of the book (in which, not coincidentally, Bianca does not appear), I was able to let my guard down and like Sophie for the first time, and I suspect that was intentional on Anders's part. But, overall, I think the conflict between Anders's story-telling approach and my preferences as a reader are mostly irreconcilable. She likes to write about people who make bad decisions and compound their own problems. In one of the chapters of her non-fiction book about writing that's being serialized on Tor.com she says "when we watch someone do something unforgivable, we're primed to root for them as they search desperately for an impossible forgiveness." This is absolutely not true for me; when I watch a character do something unforgivable, I want to see repudiation from the protagonists and ideally some clear consequences. When that doesn't happen, I want to stop reading about them and find something more enjoyable to do with my time. I certainly don't want to watch a viewpoint character insist that the person who is doing unforgivable things is the center of her life. If your preferences on character and story arc are closer to Anders's than mine, you may like this book. Certainly lots of people did; it was nominated for multiple awards and won the Locus Award for Best Science Fiction Novel. But despite the things it did well, I had a truly miserable time reading it and am not anxious to repeat the experience. Rating: 4 out of 10

12 July 2020

Enrico Zini: Police brutality links

I was a police officer for nearly ten years and I was a bastard. We all were.
As nationwide protests over the deaths of George Floyd and Breonna Taylor are met with police brutality, John Oliver discusses how the histories of policing ...
The death of Stefano Cucchi occurred in Rome on 22 October 2009 while the young man was being held in pre-trial detention. The causes of his death and the responsibility for it are the subject of judicial proceedings that have involved, on one side, the doctors of the Pertini hospital, and on the other continue to involve, in various capacities, several members of the Arma dei Carabinieri. The case attracted public attention following the publication of the autopsy photos, later picked up by Italian press agencies, newspapers and television news. The affair has also inspired documentaries and feature films.
The death of Giuseppe Uva occurred on 14 June 2008 after, during the night between 13 and 14 June, he was stopped while drunk by two Carabinieri, who took him to the barracks, from which he was then transferred, for compulsory medical treatment, to the hospital in Varese, where he died the following morning of cardiac arrest. According to the prosecution, his death was caused by the physical restraint he was subjected to during the arrest and by the subsequent violence and torture he suffered in the barracks. The trial of the two Carabinieri who carried out the arrest and of six other police officers acquitted the defendants of the charges of unintentional homicide and kidnapping. The documentary Viva la sposa by Ascanio Celestini is dedicated to the case.
The Aldrovandi case is the judicial affair arising from the killing of Federico Aldrovandi, a student from Ferrara, which took place on 25 September 2005 following a police stop. On 6 July 2009 the judicial proceedings sentenced four police officers to 3 years and 6 months of imprisonment for "culpable excess in the legitimate use of weapons"; on 21 June 2012 the Court of Cassation confirmed the sentence. The inquiry to establish the cause of death was followed by others into alleged cover-ups and into the complaints between the parties involved. The case received a great deal of media attention and inspired a documentary, È stato morto un ragazzo.
Federico Aldrovandi (17 July 1987, Ferrara – 25 September 2005, Ferrara) was an Italian student who was killed by four policemen.
24 June 2020

15 June 2020

Russ Allbery: Radical haul

Along with the normal selection of science fiction and fantasy, a few radical publishers have done book giveaways due to the current political crisis in the United States. I've been feeling for a while like I've not done my homework on diverse political theory, so I downloaded those. (That's the easy part; making time to read them is the hard part, and we'll see how that goes.)
Yarimar Bonilla & Marisol LeBrón (ed.) Aftershocks of Disaster (non-fiction anthology)
Jordan T. Camp & Christina Heatherton (ed.) Policing the Planet (non-fiction anthology)
Zachary D. Carter The Price of Peace (non-fiction)
Justin Akers Chacón & Mike Davis No One is Illegal (non-fiction)
Grace Chang Disposable Domestics (non-fiction)
Suzanne Collins The Ballad of Songbirds and Snakes (sff)
Angela Y. Davis Freedom is a Constant Struggle (non-fiction)
Danny Katch Socialism... Seriously (non-fiction)
Naomi Klein The Battle for Paradise (non-fiction)
Naomi Klein No is Not Enough (non-fiction)
Naomi Kritzer Catfishing on CatNet (sff)
Derek Künsken The Quantum Magician (sff)
Rob Larson Bit Tyrants (non-fiction)
Michael Löwy Ecosocialism (non-fiction)
Joe Macaré, Maya Schenwar, et al. (ed.) Who Do You Serve, Who Do You Protect? (non-fiction anthology)
Tochi Onyebuchi Riot Baby (sff)
Sarah Pinsker A Song for a New Day (sff)
Lina Rather Sisters of the Vast Black (sff)
Marta Russell Capitalism and Disability (non-fiction)
Keeanga-Yamahtta Taylor From #BlackLivesMatter to Black Liberation (non-fiction)
Keeanga-Yamahtta Taylor (ed.) How We Get Free (non-fiction anthology)
Linda Tirado Hand to Mouth (non-fiction)
Alex S. Vitale The End of Policing (non-fiction)
C.M. Waggoner Unnatural Magic (sff)
Martha Wells Network Effect (sff)
Kai Ashante Wilson Sorcerer of the Wildeeps (sff)

2 June 2020

Lisandro Damián Nicanor Pérez Meyer: Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa) - Fighting COVID-19

I have been quite absent from Debian stuff lately, and this has only increased since COVID-19 hit us. In this blog post I'll try to sketch what I have been doing to help fight COVID-19 these last few months.

In the beginning

When the pandemic reached Argentina the government started a quarantine. We engineers (like engineers around the world) started to think about how to put our abilities to use to help with the situation. Some worked toward providing more protective equipment to medical staff, some towards increasing the number of ventilators available. Another group of people started thinking about other ways of helping. In Bahía Blanca the idea arose of monitoring some variables remotely and en masse.

Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa)

This is where the idea of remotely monitored devices came in, and MoSimPa (from the Spanish "monitoreo simplificado de pacientes en situación de internación masiva") started to take form. The idea is simple: oximetry (SpO2), heart rate and body temperature are recorded and, instead of being shown on a display on the device itself, they are transmitted and monitored in one or more places. This way medical staff don't have to reach a patient constantly, and more patients can be monitored at the same time. In-place monitoring can also happen using a cellphone or tablet.

The devices have no screen of their own and almost no buttons, making them cheaper to build and thus more in line with the current economic reality of Argentina.


This is where the project Para Ayudar was created. The project aims to produce the aforementioned non-invasive device to be used in health institutions, hospitals, intra-hospital transports and homes.

It is worth noting that the system is designed as a complementary measure for continuous monitoring of a patient. Care should be taken to check that symptoms and overall patient status do not indicate an immediate threat to life. In other words, it is NOT designed for ICUs.

All of the above is done with Free/Libre/Open Source software and hardware designs. Any manufacturing company can then use them for mass production.

The importance of early pneumonia detection


We were already working on MoSimPa when an NYTimes article caught our attention: "The Infection That's Silently Killing Coronavirus Patients". From the article:

A vast majority of Covid pneumonia patients I met had remarkably low oxygen saturations at triage, seemingly incompatible with life, but they were using their cellphones as we put them on monitors. Although breathing fast, they had relatively minimal apparent distress, despite dangerously low oxygen levels and terrible pneumonia on chest X-rays.

This greatly reinforced the idea we were on the right track.

The project from a technical standpoint


As the project is primarily designed for and by Argentinians, the current system design and software documentation is written in Spanish, but the source code (or at least most of it) is written in English. Should anyone need it in English, please do not hesitate to ask me.

General system description

System schema

The system comprises the devices, a main machine acting as a server (in our case, for small setups, a Raspberry Pi) and the possibility of accessing the data through cell phones, tablets or other PCs on the network.

The hardware


As of today this is the only part for which I still can't provide schematics, but I'll update this blog post and the technical docs with them as soon as I get my hands on them.

Again, the design is due to be built in Argentina, where getting our hands on hardware is not easy. Moreover it needs to be as cheap as possible, especially now that the Argentinian currency, the peso, is more depreciated every day. So we decided on using an ESP32 as the main microprocessor and a set of Maxim sensor devices. Again, more info when I have them at hand.

The software


Here we have many more components to describe. Firstly, the ESP32 code is done with the Arduino SDK. This part of the stack will receive many updates as soon as the first hardware prototypes are out.

For the rest of the stack I decided to go ahead with whatever is available in Debian stable. Why? Well, Raspbian provides a Debian stable-based image and I'm a Debian Developer, so things should go quite naturally for me on that front. Of course each component has its own packaging. I'm one of Debian's Qt maintainers, so using Qt will also be quite natural for me. Plots? Qwt, of course. And with that I have most of my necessities fulfilled. I chose PostgreSQL as the database server and Mosquitto as the MQTT broker.

Between the database and MQTT is mosimpa-datakeeper. The piece of software from which medical staff monitor patients is unsurprisingly called mosimpa-monitor.

mosimpa-monitor
MoSimPa's monitor main screen

mosimpa-monitor plots
Plots of a patient's data


mosimpa-monitor-alarms-setup
Alarm thresholds setup


And for managing patients, devices, locations and internments (CRUD anyone?) there is currently a Qt-based application called mosimpa-abm.

mosimpa-abm
ABM main screen


mosimpa-abm-internments
ABM internments view

The idea is to replace it with a web service so it doesn't need to be confined to the RPi or require installation on other machines. I considered using WebAssembly, but I would also have to build PostgreSQL in order to compile Qt's plugin.

Translations? Of course! As I have already mentioned, the code is written in English. Qt makes it easy to translate applications, so I keep a Spanish translation up to date as the code changes (and we are primarily targeting Spanish-speaking people). But of course this also means it can be easily translated to whichever language is necessary.

Even though I am a packager, I still have some stuff to fix in the packaging itself, like letting datakeeper run with its own user. I just haven't got to it yet.



Certifications


We are working towards getting the system certified by ANMAT, which is the Argentinian equivalent of the USA's FDA.

Funding


While all the people involved are working ad honorem, funding is still required in order to buy materials, create the prototypes, etc. The project created payment links with Mercado Pago (in Spanish and in Argentinian pesos) and other bank methods (PDF, also in Spanish).

I repeat the links here with an approximation in US$.

- 500 AR$ (less than 8 US$)
- 1000 AR$ (less than 15 US$)
- 2000 AR$ (less than 30 US$)
- 3000 AR$ (less than 45 US$)
- 5000 AR$ (less than 75 US$)

You can check the actual conversion rate at https://www.google.com/search?q=argentine+peso+to+us+dollars

The project was also presented at a funding call of the Argentinian Agencia de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación (Agencia I+D+i). 900+ projects were presented and 64 were funded, MoSimPa among them.

3 May 2020

Evgeni Golov: Remote management for OpenWRT devices without opening inbound connections

Everyone is working from home these days and needs a decent Internet connection. That's especially true if you need to do video calls and the room you want to do them in has the worst WiFi coverage of the whole flat. Well, that's exactly what happened to my parents-in-law. When they moved in, we knew that at some point we'd have to fix the WiFi - the ISP-provided DSL/router/WiFi combo would not cut it, especially not with the shape of the flat and the elevator shaft in the middle of it: the flat is essentially a big C around said shaft. But it was good enough for email, so we postponed that. Until now. The flat has wired Ethernet, but the user's MacBook Air does not. That would have been too easy, right? So let's add another access point and hope the situation improves. Luckily I still had a TP-Link Archer C7 AC1750 in a drawer, which I could quickly flash with a fresh OpenWRT release, disable DHCPd and configure the same SSID and keys as the main/old WiFi. But I didn't know which channels would be best in the destination environment. Under normal circumstances, I'd just take the AP, drive to my parents-in-law and finish the configuration there. Nope, not gonna happen these days. So my plan was to finish configuration here, put the AP in a box and on the porch where someone can pick it up. But this would leave me without a way to further configure the device once it has been deployed - I was not particularly interested in trying to get port forwarding configured via phone, and I was pretty sure UPnP was disabled in the ISP router. Installing a Tor hidden service for SSH was one possibility, setting up a VPN and making the AP a client another. Well, or just creating a reverse tunnel with SSH!

sshtunnel

Creating a tunnel with OpenSSH is easy: ssh -R127.0.0.1:2222:127.0.0.1:22 server.example.com will forward localhost:2222 on server.example.com to port 22 of the machine the SSH connection originated from. But what happens if the connection dies? Adding a while true; do ...; done loop around it might help, but I would really like not to reinvent the wheel here! Thankfully, somebody already invented that particular wheel and OpenWRT comes with an sshtunnel package that takes care of setting up and keeping up such tunnels, plus documentation on how to do so. Just install the sshtunnel package, edit /etc/config/sshtunnel to contain a server stanza with hostname, port and username and a tunnelR stanza referring to said server plus the local and remote sides of the tunnel, and you're good to go.
config server home
  option user     user
  option hostname server.example.com
  option port     22
config tunnelR local_ssh
  option server         home
  option remoteaddress  127.0.0.1
  option remoteport     2222
  option localaddress   127.0.0.1
  option localport      22
The only caveat is that sshtunnel needs the OpenSSH client binary (and the package correctly depends on it) and OpenWRT does not ship the ssh-keygen tool from OpenSSH but only the equivalent for Dropbear. As OpenSSH can't read Dropbear keys (and vice versa) you'll have to generate the key somewhere else and deploy it to the OpenWRT box and the target system. Oh, and OpenWRT defaults to enabling password login via SSH, so please disable that if you expose the box to the Internet in any way!

Using the tunnel

After configuring and starting the service, you'll see the OpenWRT system logging in to the configured remote and opening the tunnel. For some reason that connection would not show up in the output of w -- probably because there was no shell started or something, but the logs show it clearly. Now it's just a matter of connecting to the newly opened port and you're in. As the port is bound to 127.0.0.1, the connection is only possible from server.example.com or by using it as a jump host via OpenSSH's ProxyJump option: ssh -J server.example.com -p 2222 root@localhost. Additionally, you can forward a local port over the tunneled connection to create a tunnel for the OpenWRT web interface: ssh -J server.example.com -p 2222 -L8080:localhost:80 root@localhost. Yes, that's a tunnel inside a tunnel, and all the network engineers will go brrr, but it works and you can access LuCI on http://localhost:8080 just fine. If you don't want to type that every time, create an entry in your .ssh/config:
Host openwrt
  ProxyJump server.example.com
  HostName localhost
  Port 2222
  User root
  LocalForward 8080 localhost:80
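With that entry in place, day-to-day use of the deployed box boils down to this (using the names configured above):
# shell on the OpenWRT box, via server.example.com and the reverse tunnel
ssh openwrt
# while that session is open, the LocalForward line also makes LuCI
# reachable at http://localhost:8080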
And we're done. Enjoy easy access to the newly deployed box and carry on.
