Russ Allbery: Review: Sweep of the Heart
Series: | Innkeeper Chronicles #6 |
Publisher: | NYLA Publishing |
Copyright: | 2022 |
ISBN: | 1-64197-239-4 |
Format: | Kindle |
Pages: | 440 |
The first file is devices.yaml. It contains the device list. The second file is classifier.yaml. It defines a scope for each device. A scope is a set of keys and values. It is used in templates and to look up data associated with a device.
    $ ./run-jerikan scope to1-p1.sk1.blade-group.net
    continent: apac
    environment: prod
    groups:
      - tor
      - tor-bgp
      - tor-bgp-compute
    host: to1-p1.sk1
    location: sk1
    member: '1'
    model: dell-s4048
    os: cumulus
    pod: '1'
    shorthost: to1-p1
For to1-p1.sk1.blade-group.net, the following subset of classifier.yaml defines its scope:
    matchers:
      - '^(([^.]*)\..*)\.blade-group\.net':
          environment: prod
          host: '\1'
          shorthost: '\2'
      - '\.(sk1)\.':
          location: '\1'
          continent: apac
      - '^to([12])-[as]?p(\d+)\.':
          member: '\1'
          pod: '\2'
      - '^to[12]-p\d+\.':
          groups:
            - tor
            - tor-bgp
            - tor-bgp-compute
      - '^to[12]-(p|ap)\d+\.sk1\.':
          os: cumulus
          model: dell-s4048
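The classifier's behavior can be approximated in a few lines of Python: each regular expression matching the FQDN contributes its keys to the scope, with backreferences expanded. This is a simplified sketch, not Jerikan's actual code, and the "first matcher wins per key" precedence rule here is an assumption:

```python
import re

# Simplified matcher table, mirroring the classifier.yaml excerpt above.
MATCHERS = [
    (r'^(([^.]*)\..*)\.blade-group\.net', {
        'environment': 'prod', 'host': r'\1', 'shorthost': r'\2'}),
    (r'\.(sk1)\.', {'location': r'\1', 'continent': 'apac'}),
    (r'^to([12])-[as]?p(\d+)\.', {'member': r'\1', 'pod': r'\2'}),
]

def classify(fqdn):
    """Build a scope by applying each matching regex in order."""
    scope = {}
    for pattern, keys in MATCHERS:
        m = re.search(pattern, fqdn)
        if not m:
            continue
        for key, value in keys.items():
            # Expand backreferences like \1 using the match object;
            # the first matcher keeping a key wins (assumed semantics).
            scope.setdefault(key, m.expand(value))
    return scope

scope = classify("to1-p1.sk1.blade-group.net")
```

Running this yields a scope with host, shorthost, location, continent, member, and pod keys, consistent with the `run-jerikan scope` output shown above.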
The last file is searchpaths.py. It describes which directories to search for a variable. A Python function provides a list of paths to look up in data/ for a given scope. Here is a simplified version:

    def searchpaths(scope):
        paths = [
            "host/{scope[location]}/{scope[shorthost]}",
            "location/{scope[location]}",
            "os/{scope[os]}-{scope[model]}",
            "os/{scope[os]}",
            "common"
        ]
        for idx in range(len(paths)):
            try:
                paths[idx] = paths[idx].format(scope=scope)
            except KeyError:
                paths[idx] = None
        return [path for path in paths if path]
With this scope, variables for to1-p1.sk1.blade-group.net are looked up in the following paths:

    $ ./run-jerikan scope to1-p1.sk1.blade-group.net
    […]
    Search paths:
      host/sk1/to1-p1
      location/sk1
      os/cumulus-dell-s4048
      os/cumulus
      common
Variables are organized in several namespaces:

- system for accounts, DNS, syslog servers,
- topology for ports, interfaces, IP addresses, subnets,
- bgp for BGP configuration,
- build for templates and validation scripts,
- apps for application variables.

When looking up a variable for to1-p1.sk1.blade-group.net in the bgp namespace, the following YAML files are processed: host/sk1/to1-p1/bgp.yaml, location/sk1/bgp.yaml, os/cumulus-dell-s4048/bgp.yaml, os/cumulus/bgp.yaml, and common/bgp.yaml. The search stops at the first match.
The schema.yaml file allows us to override this behavior by asking to merge dictionaries and arrays across all matching files. Here is an excerpt of this file for the system namespace:
    system:
      users:
        merge: hash
      sampling:
        merge: hash
      ansible-vars:
        merge: hash
      netbox:
        merge: hash
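The first-match rule and the merge: hash override can be sketched as follows. Here `data` maps each search path to its parsed YAML content; this structure and the function are hypothetical illustrations, not Jerikan's internals:

```python
def lookup(data, paths, key, merge=None):
    """Look up `key` across search paths, most specific path first."""
    matches = [data[p][key] for p in paths if p in data and key in data[p]]
    if not matches:
        raise KeyError(key)
    if merge != "hash":
        return matches[0]            # default: first match wins
    merged = {}
    for match in reversed(matches):  # apply least specific first...
        merged.update(match)         # ...so specific paths override
    return merged

# Toy data mirroring the netbox example below (structure is hypothetical).
data = {
    "groups/tor-bgp-compute": {"netbox": {"role": "net_tor_gpu_switch"}},
    "os/junos": {"netbox": {"manufacturer": "Juniper",
                            "model": "QFX5110-48S"}},
}
paths = ["groups/tor-bgp-compute", "os/junos", "common"]
merged = lookup(data, paths, "netbox", merge="hash")
```

With merge: hash, the result combines keys from both matching files, while the default behavior would only return the first match.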
Values prefixed with ~ are evaluated as Jinja2 templates:

    # In data/os/junos/system.yaml
    netbox:
      manufacturer: Juniper
      model: "~{{ model | upper }}"

    # In data/groups/tor-bgp-compute/system.yaml
    netbox:
      role: net_tor_gpu_switch
Looking up the key netbox in the system namespace for to1-p2.ussfo03.blade-group.net yields the following result:

    $ ./run-jerikan scope to1-p2.ussfo03.blade-group.net
    continent: us
    environment: prod
    groups:
      - tor
      - tor-bgp
      - tor-bgp-compute
    host: to1-p2.ussfo03
    location: ussfo03
    member: '1'
    model: qfx5110-48s
    os: junos
    pod: '2'
    shorthost: to1-p2
    […]
    Search paths:
    […]
      groups/tor-bgp-compute
    […]
      os/junos
      common
    $ ./run-jerikan lookup to1-p2.ussfo03.blade-group.net system netbox
    manufacturer: Juniper
    model: QFX5110-48S
    role: net_tor_gpu_switch
    # In groups/adm-gateway/topology.yaml
    interface-rescue:
      address: "~{{ lookup('topology', 'addresses').rescue }}"
      up:
        - "~ip route add default via {{ lookup('topology', 'addresses').rescue | ipaddr('first_usable') }} table rescue"
        - "~ip rule add from {{ lookup('topology', 'addresses').rescue | ipaddr('address') }} table rescue priority 10"

    # In groups/adm-gateway-sk1/topology.yaml
    interfaces:
      ens1f0: "~{{ lookup('topology', 'interface-rescue') }}"
    $ ./run-jerikan lookup gateway1.sk1.blade-group.net topology interfaces
    […]
    ens1f0:
      address: 121.78.242.10/29
      up:
        - ip route add default via 121.78.242.9 table rescue
        - ip rule add from 121.78.242.10 table rescue priority 10
    peers:
      transit:
        cogent:
          asn: 174
          remote:
            - 38.140.30.233
            - 2001:550:2:B::1F9:1
          specific-import:
            - name: ATT-US
              as-path: ".*7018$"
              lp-delta: 50
      ix-sfmix:
        rs-sfmix:
          monitored: true
          asn: 63055
          remote:
            - 206.197.187.253
            - 206.197.187.254
            - 2001:504:30::ba06:3055:1
            - 2001:504:30::ba06:3055:2
        blizzard:
          asn: 57976
          remote:
            - 206.197.187.42
            - 2001:504:30::ba05:7976:1
          irr: AS-BLIZZARD
The list of files to template for each device is stored in the build namespace:

    $ ./run-jerikan lookup edge1.ussfo03.blade-group.net build templates
    data.yaml: data.j2
    config.txt: junos/main.j2
    config-base.txt: junos/base.j2
    config-irr.txt: junos/irr.j2

    $ ./run-jerikan lookup to1-p1.ussfo03.blade-group.net build templates
    data.yaml: data.j2
    config.txt: cumulus/main.j2
    frr.conf: cumulus/frr.j2
    interfaces.conf: cumulus/interfaces.j2
    ports.conf: cumulus/ports.j2
    dhcpd.conf: cumulus/dhcp.j2
    default-isc-dhcp: cumulus/default-isc-dhcp.j2
    authorized_keys: cumulus/authorized-keys.j2
    motd: linux/motd.j2
    acl.rules: cumulus/acl.j2
    rsyslog.conf: cumulus/rsyslog.conf.j2
Templates can use custom filters, like ipaddr. Here is an excerpt of templates/junos/base.j2 to configure DNS and NTP servers on Juniper devices:

    system {
      ntp {
    {% for ntp in lookup("system", "ntp") %}
        server {{ ntp }};
    {% endfor %}
      }
      name-server {
    {% for dns in lookup("system", "dns") %}
        {{ dns }};
    {% endfor %}
      }
    }
    {% for dns in lookup('system', 'dns') %}
    domain vrf VRF-MANAGEMENT name-server {{ dns }}
    {% endfor %}
    !
    {% for syslog in lookup('system', 'syslog') %}
    logging {{ syslog }} vrf VRF-MANAGEMENT
    {% endfor %}
    !
Templates have access to several functions:

- devices() returns the list of devices matching a set of conditions on the scope. For example, devices("location==ussfo03", "groups==tor-bgp") returns the list of devices in San Francisco in the tor-bgp group. You can also omit the operator if you want the specified value to be equal to the one in the local scope. For example, devices("location") returns devices in the current location.
- lookup() does a key lookup. It takes the namespace, the key, and, optionally, a device name. If not provided, the current device is assumed.
- scope() returns the scope of the provided device.

Here is how these functions combine to configure iBGP sessions towards the edge devices in the same location:

    {% for neighbor in devices("location", "groups==edge") if neighbor != device %}
    {%   for address in lookup("topology", "addresses", neighbor).loopback | tolist %}
    protocols bgp group IPV{{ address | ipv }}-EDGES-IBGP {
      neighbor {{ address }} description "IPv{{ address | ipv }}: iBGP to {{ neighbor }}";
    }
    {%   endfor %}
    {% endfor %}
The store() function lets one template record a value that another template can retrieve later. Here it is used as a filter:

    interface Loopback0
     description 'Loopback:'
    {% for address in lookup('topology', 'addresses').loopback | tolist %}
     ip{{ address | ipv }} address {{ address | store('addresses', 'Loopback0') | ipaddr('cidr') }}
    {% endfor %}
    !
Another template can then build DNS records from the values recorded by store(), this time used as a function:

    {% for device, ip, interface in store('addresses') %}
    {%   set interface = interface | replace('/', '-') | replace('.', '-') | replace(':', '-') %}
    {%   set name = '{}.{}'.format(interface | lower, device) %}
    {{ name }}. IN {{ 'A' if ip | ipv4 else 'AAAA' }} {{ ip | ipaddr('address') }}
    {% endfor %}
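Outside of Jinja, the record-generation logic boils down to sanitizing the interface name and choosing A or AAAA by address family. A standalone sketch using the standard ipaddress module; the stored tuples below are hypothetical examples, not real inventory data:

```python
import ipaddress

# Hypothetical (device, ip, interface) tuples, as the store('addresses')
# table might contain.
stored = [
    ("edge1.sk1.blade-group.net", "203.0.113.1/32", "Loopback0"),
    ("edge1.sk1.blade-group.net", "2001:db8::1/128", "Loopback0"),
]

def dns_records(entries):
    records = []
    for device, ip, interface in entries:
        # Sanitize the interface name for use as a DNS label
        label = (interface.lower()
                 .replace('/', '-').replace('.', '-').replace(':', '-'))
        addr = ipaddress.ip_interface(ip).ip
        rtype = 'A' if addr.version == 4 else 'AAAA'
        records.append(f"{label}.{device}. IN {rtype} {addr}")
    return records

records = dns_records(stored)
```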
Configuration files are generated with ./run-jerikan build. The --limit argument restricts the devices to generate configuration files for. The build is not done in parallel because a template may depend on the data collected by another template. Currently, it takes 1 minute to compile around 3000 files spanning over 800 devices.

When an error occurs, a detailed traceback is displayed, including the template name, the line number, and the value of all visible variables. This is a major time-saver compared to Ansible!
    templates/opengear/config.j2:15: in top-level template code
        config.interfaces.{{ interface }}.netmask {{ adddress | ipaddr("netmask") }}
            continent   = 'us'
            device      = 'con1-ag2.ussfo03.blade-group.net'
            environment = 'prod'
            host        = 'con1-ag2.ussfo03'
            infos       = {'address': '172.30.24.19/21'}
            interface   = 'wan'
            location    = 'ussfo03'
            loop        = <LoopContext 1/2>
            member      = '2'
            model       = 'cm7132-2-dac'
            os          = 'opengear'
            shorthost   = 'con1-ag2'
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    value = JerkianUndefined, query = 'netmask', version = False, alias = 'ipaddr'
    […]
            # Check if value is a list and parse each element
            if isinstance(value, (list, tuple, types.GeneratorType)):
                _ret = [ipaddr(element, str(query), version) for element in value]
                return [item for item in _ret if item]
    >       elif not value or value is True:
    E       jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'adddress'
Custom filters and functions are defined in jerikan/jinja.py. Mastering Jinja2 is a good investment. Take time to browse through our templates, as some of them show interesting features.
Validation scripts sit in the checks/ directory. Jerikan looks up the key checks in the build namespace to know which checks to run:

    $ ./run-jerikan lookup edge1.ussfo03.blade-group.net build checks
    - description: Juniper configuration file syntax check
      script: checks/junoser
      cache:
        input: config.txt
        output: config-set.txt
    - description: check YAML data
      script: checks/data.yaml
      cache: data.yaml
In the example above, checks/junoser is executed if there is a change to the generated config.txt file. It also outputs a transformed version of the configuration file, which is easier to understand when using diff. Junoser checks a Junos configuration file using Juniper's XML schema definition for NETCONF. On error, Jerikan displays:
    jerikan/build.py:127: RuntimeError
    -------------- Captured syntax check with Junoser call --------------
    P: checks/junoser edge2.ussfo03.blade-group.net
    C: /app/jerikan
    O:
    E: Invalid syntax:  set system syslog archive size 10m files 10 word-readable
    S: 1
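The caching behavior described above (re-run a check only when its input file changed) can be sketched with a content hash. This is a minimal sketch under assumed semantics; the cache file layout and return values are inventions for illustration, not Jerikan's implementation:

```python
import hashlib
import json
import pathlib
import subprocess

def run_check(script, inputs, cache_file="check-cache.json"):
    """Run `script` only if one of its input files changed since last run."""
    digest = hashlib.sha256()
    for path in inputs:
        digest.update(pathlib.Path(path).read_bytes())
    fingerprint = digest.hexdigest()

    cache_path = pathlib.Path(cache_file)
    cache = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    if cache.get(script) == fingerprint:
        return "skipped"        # inputs unchanged: reuse the previous result

    # Inputs changed: run the check and record the new fingerprint
    subprocess.run([script, *inputs], check=True)
    cache[script] = fingerprint
    cache_path.write_text(json.dumps(cache))
    return "executed"
```

A second invocation with unchanged inputs returns without re-running the script; touching the input file triggers a new run.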
The CI pipeline is defined in the .gitlab-ci.yml file. When we need to make a change, we create a dedicated branch and a merge request. GitLab compiles the templates using the same environment we use on our laptops and stores them as artifacts. Before approving the merge request, another team member looks at the changes in data and templates, but also at the differences in the generated configuration files:
    ob1-n1.sk1.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
    ob2-n1.sk1.blade-group.net ansible_host=172.29.15.13 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
    ob1-n1.ussfo03.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
    none ansible_connection=local

    [oob]
    ob1-n1.sk1.blade-group.net
    ob2-n1.sk1.blade-group.net
    ob1-n1.ussfo03.blade-group.net

    [os-ios]
    ob1-n1.sk1.blade-group.net
    ob2-n1.sk1.blade-group.net
    ob1-n1.ussfo03.blade-group.net

    [model-c2960s]
    ob1-n1.sk1.blade-group.net
    ob2-n1.sk1.blade-group.net
    ob1-n1.ussfo03.blade-group.net

    [location-sk1]
    ob1-n1.sk1.blade-group.net
    ob2-n1.sk1.blade-group.net

    [location-ussfo03]
    ob1-n1.ussfo03.blade-group.net

    [in-sync]
    ob1-n1.sk1.blade-group.net
    ob2-n1.sk1.blade-group.net
    ob1-n1.ussfo03.blade-group.net
    none
in-sync is a special group for devices whose configuration should match the golden configuration. Ansible should be able to push configurations to this group daily and unattended. The mid-term goal is to cover all devices.
none
is a special device for tasks not related to a specific host.
This includes synchronizing NetBox, IRR objects, and the DNS,
updating the RPKI, and building the geofeed files.
The plays are defined in the ansible/playbooks/site.yaml file. Here is a shortened version:
    - hosts: adm-gateway:!done
      strategy: mitogen_linear
      roles:
        - blade.linux
        - blade.adm-gateway
        - done
    - hosts: os-linux:!done
      strategy: mitogen_linear
      roles:
        - blade.linux
        - done
    - hosts: os-junos:!done
      gather_facts: false
      roles:
        - blade.junos
        - done
    - hosts: os-opengear:!done
      gather_facts: false
      roles:
        - blade.opengear
        - done
    - hosts: none:!done
      gather_facts: false
      roles:
        - blade.none
        - done
For example, a Junos device is configured through the blade.junos role. Once a play has been executed, the device is added to the done group and the other plays are skipped.
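The hosts: <group>:!done pattern combined with the done role is essentially set subtraction: each play only targets hosts no earlier play has handled. A toy simulation of the skipping logic (the group memberships here are hypothetical):

```python
# Each play targets a group minus the "done" group.
plays = [
    ("adm-gateway", {"gateway1.sk1"}),               # runs blade.adm-gateway
    ("os-linux", {"gateway1.sk1", "server1.sk1"}),   # runs blade.linux
]

done = set()       # hosts already configured by an earlier play
executed = {}
for play, hosts in plays:
    targets = hosts - done    # equivalent of "hosts: <group>:!done"
    executed[play] = targets
    done |= targets           # the "done" role marks hosts as handled

# gateway1.sk1 matches both plays but is only configured by the first one.
```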
The playbook can be executed with the configuration files generated by
the GitLab CI using the ./run-ansible-gitlab
command. This is a
wrapper around Docker and the ansible-playbook
command and it
accepts the same arguments. To deploy the configuration on the edge
devices for the SK1 datacenter in check mode, we use:
    $ ./run-ansible-gitlab playbooks/site.yaml --limit='edge:&location-sk1' --diff --check
    […]
    PLAY RECAP *************************************************************
    edge1.sk1.blade-group.net  : ok=6  changed=0  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
    edge2.sk1.blade-group.net  : ok=5  changed=0  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
When writing roles, we have a few requirements:

- --check must detect if a change is needed;
- --diff must provide a visualization of the planned changes;
- --check and --diff must not display anything if there is nothing to change.

We also avoid relying on collections like cisco.iosxr. The quality of Ansible Galaxy collections is quite random and they are an additional maintenance burden. It seems better to write roles tailored to our needs. The collections we do use are listed in ci/ansible/ansible-galaxy.yaml. We use Mitogen to get a 10× speedup on Ansible executions on Linux hosts.
We also have a few playbooks for operational purposes: upgrading the OS version, isolating an edge router, etc. We were also planning to add operational checks to roles: are all the BGP sessions up? They could have been used to validate a deployment and roll it back if there is an issue.
Currently, our playbooks are run from our laptops. To keep tabs, we use ARA. A weekly dry-run on devices in the in-sync group also provides a dashboard showing which devices we need to run Ansible on.
    $ ansible --version
    ansible 2.10.8
    […]
    $ cat test.j2
    Hello {{ name }}!
    $ ansible all -i localhost, \
    >    --connection=local \
    >    -m template \
    >    -a "src=test.j2 dest=test.txt"
    localhost | FAILED! => {
        "changed": false,
        "msg": "AnsibleUndefinedVariable: 'name' is undefined"
    }
The custom filters and functions are defined in jerikan/jinja.py. This is a remnant of the fact that we do not maintain Jerikan as standalone software. For example, there is both a store() filter and a store() function. With Jinja2, filters and functions live in two different namespaces.
See, for example, ansible/roles/blade.linux/tasks/firewall.yaml and ansible/roles/blade.linux/tasks/interfaces.yaml. They are meant to be called when needed, using import_role.
remotes/private/attic/novena 822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot de09d82b quick review, add note and graph
remotes/private/attic/wireguard 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat 914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet 9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks dcf02676 config notes for http2
remotes/private/backlog/serverless 9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting 00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview 21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018 1edc5ec8 add series
remotes/private/fin/netconf 3f4b7ece publish the netconf articles
remotes/private/fin/netdev 6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline f841deed pgp offline branch ready for publication
remotes/private/fin/primes c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes 4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks 5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy 95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya 20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin 7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery 8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench 451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics 2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time f3c24a53 another way of telling time
remotes/private/ideas/wallabako 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench 8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work
Wow, what a mess! Let's see if I can make sense of this:
- novena: the project is ooold now, didn't seem to fit a LWN article. it was basically "how can i build my novena now" and "you guys rock!" it seems like the MNT Reform is the brain child of the Novena now, and I dare say it's even cooler!
- secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
- wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project
- dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
- packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
- performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
- serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.
- astropy: "Python in astronomy" - had a chat with saimn while writing about sigal, and it turns out he actually works on free software in astronomy, in Python... I actually expect LWN to cover this sooner than later, after Lee Phillips's introduction to SciPy
- avaneya: crowdfunded blade-runner-themed GPLv3 simcity-like simulator, i just have that link so far
- backups-benchmarks: review of backup software through performance and features, possibly based on those benchmarks, maybe based on this list from restic although they refused casync. benchmark articles are hard though, especially when you want to "cover them all"... I did write a silly Attic vs Bup back when those programs existed (2014), in a related note...
- ideas/cumin: review of the Cumin automation tool from WikiMedia Foundation... I ended up using the tool at work and writing service documentation for it
- ideas/future-of-distros: modern packaging problems and complex apps, starting from this discussion about the removal of Dolibarr from Debian, a summary of the thread from liw, and ideas from joeyh (now from the outside of Debian), then debates over the power of FTP masters - ugh, glad I didn't step in that rat's nest
- ideas/on-dying: "what happens when a hacker dies?" rather grim subject, but a more and more important one... joeyh has ideas again, phk as well, then there's a protocol for dying (really grim)... then there are site policies like GitHub, Facebook, etc... more in the branch, but that one I can't help but think about now that family has taken a bigger place in my life...
- ideas/openpgp-discovery: OpenPGP discovery mechanisms (WKD, etc), suggested by Jonas Meurer (somewhere?), only links to Mailveloppe, LEAP, WKD (or is it WKS?), another standard, probably would need to talk about OpenPGP CA now and how Debian and Tor manage their keyrings... pain in the back.
- ideas/password-bench: bruteforce estimates for various password patterns compared with RSA key sizes, spinoff of my smartcard article, in the crypto-bench, look at this shiny graph, surely that must mean an article, right?
- ideas/prometheus-openmetrics: "Evolving the Prometheus exposition format into a standard", seems like this happened
- ideas/telling-time: telling time to users is hard. xclock vs ttyclock, etc. maybe gameclock and undertime as well? syncing time is hard, but it turns out showing it is non trivial as well... basically turning this bug report into an article. for some reason I linked to this meme, derived from this meme, presumably a premonition of my stupid idea of writing undertime TIMEZONES!
- ideas/wallabako: "talk about wallabako, read-it-later + kobo hacking", that's it, not even a link to the project!
- stalled/bench-bench-bench: benchmarking http benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
- stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot (most?) Debian Developers were actually being paid by their job to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at Debconf 16. I had hopes that this study would show the "hunch" people have offered (that most DDs are paid to work on Debian) but it seems to show the reverse (only 36% of DDs, and 18% of all respondents paid). So I am still confused and worried about the sustainability of Debian.

Publisher: | Red Wombat Tea Company |
Copyright: | 2013 |
ASIN: | B00G9GSEXO |
Format: | Kindle |
Pages: | 140 |
Sings-to-Trees had hair the color of sunlight and ashes, delicately pointed ears, and eyes the translucent green of new leaves. His shirt was off, he had the sort of tanned muscle acquired from years of healthy outdoor living, and you could have sharpened a sword on his cheekbones. He was saved from being a young maiden's fantasy unless she was a very peculiar young maiden by the fact that he was buried up to the shoulder in the unpleasant end of a heavily pregnant unicorn.

Sings-to-Trees is the sort of elf who lives by himself, has a healthy appreciation for what nursing wild animals involves, and does it anyway because he truly loves animals. Despite that, he was not entirely prepared to deal with a skeleton deer with a broken limb, or at least with the implications of injured skeleton deer who are attracted by magical disturbances showing up in his yard. As one might expect, Sings-to-Trees and the goblins run into each other while having to sort out some problems that are even more dangerous than the war the goblins were unexpectedly removed from.

But the point of this novella is not a deep or complex plot. It pushes together a bunch of delightfully weird and occasionally grumpy characters, throws a challenge at them, and gives them space to act like fundamentally decent people working within their constraints and preconceptions. It is, in other words, an excellent vehicle for Ursula Vernon (writing as T. Kingfisher) to describe exasperated good-heartedness and stubbornly determined decency.
Sings-to-Trees gazed off in the middle distance with a vague, pleasant expression, the way that most people do when present at other people's minor domestic disputes, and after a moment, the stag had stopped rattling, and the doe had turned back and rested her chin trustingly on Sings-to-Trees' shoulder. This would have been a touching gesture, if her chin hadn't been made of painfully pointy blades of bone. It was like being snuggled by an affectionate plow.

It's not a book you read for the twists and revelations (the resolution is a bit of an anti-climax). Its strength is in the side moments of characterization, in the author's light-hearted style, and in descriptions like the above. Sings-to-Trees is among my favorite characters in all of Vernon's books, surpassed only by gnoles and a few characters in Digger.

The Kingfisher books I've read recently have involved humans and magic and romance and more standard fantasy plots. This book is from seven years ago and reminds me more of Digger. There is less expected plot machinery, more random asides, more narrator presence, inhuman characters, no romance, and a lot more focus on characters deciding moment to moment how to tackle the problem directly in front of them. I wouldn't call it a children's book (all of the characters are adults), but it has a bit of that simplicity and descriptive focus. If you like Kingfisher in descriptive mode, or enjoy Vernon's descriptions of D&D campaigns on Twitter, you are probably going to like this. If you don't, you may not. I thought it was slight but perfect for my mood at the time.

Rating: 7 out of 10
Public IRR servers like rr.ntt.net or whois.radb.net can be a bottleneck. Updating many filters may take several tens of minutes, depending on the load:

    $ time bgpq4 -h whois.radb.net AS-HURRICANE | wc -l
    909869
    1.96s user 0.15s system 2% cpu 1:17.64 total
    $ time bgpq4 -h rr.ntt.net AS-HURRICANE | wc -l
    927865
    1.86s user 0.08s system 12% cpu 14.098 total
    $ git clone https://github.com/vincentbernat/irrd-legacy.git -b blade/master
    $ cd irrd-legacy
    $ docker build . -t irrd-snapshot:latest
    […]
    Successfully built 58c3e83a1d18
    Successfully tagged irrd-snapshot:latest
    $ docker container run --rm --detach --publish=43:43 irrd-snapshot
    4879cfe7413075a0c217089dcac91ed356424c6b88808d8fcb01dc00eafcc8c7
    $ time bgpq4 -h localhost AS-HURRICANE | wc -l
    904137
    1.72s user 0.11s system 96% cpu 1.881 total
The snapshot mirrors the databases available on rr.ntt.net: NTTCOM, RADB, RIPE, ALTDB, BELL, LEVEL3, RGNET, APNIC, JPIRR, ARIN, BBOI, TC, AFRINIC, ARIN-WHOIS, and REGISTROBR. However, it misses RPKI. Feel free to adapt!
The image can be scheduled to be rebuilt daily or weekly, depending on
your needs. The repository includes a .gitlab-ci.yaml
file automating the build and triggering the compilation
of all filters by your CI/CD upon success.
RPKI data can be validated with rpki-client and transformed into RPSL objects to be imported in IRRd.
Two tools are helpful here: whois and bgpq4. The first one allows you to do a query with the WHOIS protocol:
    $ whois -BrG 2a0a:e805:400::/40
    […]
    inet6num:       2a0a:e805:400::/40
    netname:        FR-BLADE-CUSTOMERS-DE
    country:        DE
    geoloc:         50.1109 8.6821
    admin-c:        BN2763-RIPE
    tech-c:         BN2763-RIPE
    status:         ASSIGNED
    mnt-by:         fr-blade-1-mnt
    remarks:        synced with cmdb
    created:        2020-05-19T08:04:58Z
    last-modified:  2020-05-19T08:04:58Z
    source:         RIPE

    route6:         2a0a:e805:400::/40
    descr:          Blade IPv6 - AMS1
    origin:         AS64476
    mnt-by:         fr-blade-1-mnt
    remarks:        synced with cmdb
    created:        2019-10-01T08:19:34Z
    last-modified:  2020-05-19T08:05:00Z
    source:         RIPE
The second one builds a prefix list from IRR objects:

    $ bgpq4 -6 -S RIPE -b AS64476
    NN = [
        2a0a:e805::/40,
        2a0a:e805:100::/40,
        2a0a:e805:300::/40,
        2a0a:e805:400::/40,
        2a0a:e805:500::/40
    ];
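At its core, bgpq4 extracts route/route6 objects from IRR data and renders them into the requested format. A stdlib-only sketch of that extraction step, using a truncated whois output as input (the function and output format are simplified illustrations, not bgpq4's actual logic):

```python
import re

# Truncated whois-style output with two route6 objects.
WHOIS_OUTPUT = """\
route6:         2a0a:e805:400::/40
descr:          Blade IPv6 - AMS1
origin:         AS64476
source:         RIPE

route6:         2a0a:e805:300::/40
descr:          Blade IPv6 - PA1
origin:         AS64476
source:         RIPE
"""

def prefixes_for(asn, raw):
    """Collect route6 prefixes whose origin matches the given AS."""
    result = []
    for block in re.split(r"\n\n+", raw):
        route = re.search(r"^route6:\s+(\S+)", block, re.M)
        origin = re.search(r"^origin:\s+(\S+)", block, re.M)
        if route and origin and origin.group(1) == asn:
            result.append(route.group(1))
    return sorted(result)

prefixes = prefixes_for("AS64476", WHOIS_OUTPUT)
```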
I recommend that you read Writing a custom Ansible module as an introduction, as well as Syncing MySQL tables for a more instructive example.
    - name: prepare RIPE objects
      irr_sync:
        irr: RIPE
        mntner: fr-blade-1-mnt
        source: whois-ripe.txt
      register: irr
    - name: sign RIPE objects
      shell:
        cmd: gpg --batch --user noc@example.com --clearsign
        stdin: "{{ irr.objects }}"
      register: signed
      check_mode: false
      changed_when: false
    - name: update RIPE objects by email
      mail:
        subject: "NEW: update for RIPE"
        from: noc@example.com
        to: "auto-dbm@ripe.net"
        cc: noc@example.com
        host: smtp.example.com
        port: 25
        charset: us-ascii
        body: "{{ signed.stdout }}"
This requires creating a key-cert object and adding it as a valid authentication method for the corresponding mntner object:

    key-cert:  PGPKEY-A791AAAB
    certif:    -----BEGIN PGP PUBLIC KEY BLOCK-----
    certif:
    certif:    mQGNBF8TLY8BDADEwP3a6/vRhEERBIaPUAFnr23zKCNt5YhWRZyt50mKq1RmQBBY
    […]
    certif:    -----END PGP PUBLIC KEY BLOCK-----
    mnt-by:    fr-blade-1-mnt
    source:    RIPE

    mntner:    fr-blade-1-mnt
    […]
    auth:      PGPKEY-A791AAAB
    mnt-by:    fr-blade-1-mnt
    source:    RIPE
    module_args = dict(
        irr=dict(type='str', required=True),
        mntner=dict(type='str', required=True),
        source=dict(type='path', required=True),
    )

    result = dict(
        changed=False,
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )
The module starts by calling the whois command to retrieve all the objects from the provided maintainer.
    # Per-IRR variations:
    # - whois server
    whois = {
        'ARIN': 'rr.arin.net',
        'RIPE': 'whois.ripe.net',
        'APNIC': 'whois.apnic.net'
    }
    # - whois options
    options = {
        'ARIN': ['-r'],
        'RIPE': ['-BrG'],
        'APNIC': ['-BrG']
    }
    # - objects excluded from synchronization
    excluded = ["domain"]
    if irr == "ARIN":
        # ARIN does not return these objects
        excluded.extend([
            "key-cert",
            "mntner",
        ])

    # Grab existing objects
    args = ["-h", whois[irr],
            "-s", irr,
            *options[irr],
            "-i", "mnt-by",
            module.params['mntner']]
    proc = subprocess.run(["whois", *args], capture_output=True)
    if proc.returncode != 0:
        raise AnsibleError(
            f"unable to query whois: {args}")
    output = proc.stdout.decode('ascii')
    got = extract(output, excluded)
The first part defines some per-IRR variations: the whois server, the options for the whois command, and the objects to exclude from synchronization. The second part invokes the whois command, requesting all objects whose mnt-by field is the provided maintainer. Here is an example of output:
    $ whois -h whois.ripe.net -s RIPE -BrG -i mnt-by fr-blade-1-mnt
    […]
    inet6num:       2a0a:e805:300::/40
    netname:        FR-BLADE-CUSTOMERS-FR
    country:        FR
    geoloc:         48.8566 2.3522
    admin-c:        BN2763-RIPE
    tech-c:         BN2763-RIPE
    status:         ASSIGNED
    mnt-by:         fr-blade-1-mnt
    remarks:        synced with cmdb
    created:        2020-05-19T08:04:59Z
    last-modified:  2020-05-19T08:04:59Z
    source:         RIPE
    […]
    route6:         2a0a:e805:300::/40
    descr:          Blade IPv6 - PA1
    origin:         AS64476
    mnt-by:         fr-blade-1-mnt
    remarks:        synced with cmdb
    created:        2019-10-01T08:19:34Z
    last-modified:  2020-05-19T08:05:00Z
    source:         RIPE
    […]
The output is fed to the extract() function. It parses and normalizes the results into a dictionary mapping object names to objects. We store the result in the got variable.
    def extract(raw, excluded):
        """Extract objects."""
        # First step, remove comments and unwanted lines
        objects = "\n".join([obj
                             for obj in raw.split("\n")
                             if not obj.startswith((
                                     "#",
                                     "%",
                             ))])
        # Second step, split objects
        objects = [RPSLObject(obj.strip())
                   for obj in re.split(r"\n\n+", objects)
                   if obj.strip()
                   and not obj.startswith(
                       tuple(f"{x}:" for x in excluded))]
        # Last step, put objects in a dict
        objects = {repr(obj): obj
                   for obj in objects}
        return objects
RPSLObject()
is a class enabling normalization and comparison of
objects. Look at the module code for more details.
    >>> output = """
    ... inet6num:       2a0a:e805:300::/40
    ... […]
    ... """
    >>> pprint({k: str(v) for k, v in extract(output, excluded=[]).items()})
    {'<Object:inet6num:2a0a:e805:300::/40>':
         'inet6num:       2a0a:e805:300::/40\n'
         'netname:        FR-BLADE-CUSTOMERS-FR\n'
         'country:        FR\n'
         'geoloc:         48.8566 2.3522\n'
         'admin-c:        BN2763-RIPE\n'
         'tech-c:         BN2763-RIPE\n'
         'status:         ASSIGNED\n'
         'mnt-by:         fr-blade-1-mnt\n'
         'remarks:        synced with cmdb\n'
         'source:         RIPE',
     '<Object:route6:2a0a:e805:300::/40>':
         'route6:         2a0a:e805:300::/40\n'
         'descr:          Blade IPv6 - PA1\n'
         'origin:         AS64476\n'
         'mnt-by:         fr-blade-1-mnt\n'
         'remarks:        synced with cmdb\n'
         'source:         RIPE'}
The wanted dictionary is built with the same structure, thanks to the extract() function, which we can use verbatim:

    with open(module.params['source']) as f:
        source = f.read()
    wanted = extract(source, excluded)
We then compare got and wanted to build the diff object:

    if got != wanted:
        result['changed'] = True
        if module._diff:
            result['diff'] = [
                dict(before_header=k,
                     after_header=k,
                     before=str(got.get(k, "")),
                     after=str(wanted.get(k, "")))
                for k in set((*wanted.keys(), *got.keys()))
                if k not in wanted or k not in got
                or wanted[k] != got[k]]
The module sends the whole content of the source file (the source variable) and lets the IRR ignore unmodified objects. We also append the objects to be deleted by adding a delete: attribute to each of them.

    # We send all source objects and deleted objects.
    deleted_mark = f"{'delete:':16}deleted by CMDB"
    deleted = "\n\n".join([f"{got[k].raw}\n{deleted_mark}"
                           for k in got
                           if k not in wanted])
    result['objects'] = f"{source}\n\n{deleted}"
    module.exit_json(**result)
--diff
and --check
flags. It does not return
anything if no change is detected. It can work with APNIC, RIPE and
ARIN. It is not perfect: it may not detect some changes,3 it
is not able to modify objects not owned by the provided
maintainer,4 and some attributes cannot be modified, requiring
the object to be manually deleted and recreated.5 However,
this module should automate 95% of your needs.
key-cert
and mntner
objects
and therefore we cannot detect changes in them. It is also not
possible to detect changes to the auth mechanisms of a mntner
object.
inetnum
object requires
deleting and recreating the object.
virt-install --name armhf-vm --arch armv7l --memory 512 \
--disk /srv/armhf-vm.img,bus=virtio \
--filesystem /srv/armhf-vm-boot,virtio-boot,mode=mapped \
--boot=kernel=/tmp/vmlinuz,initrd=/tmp/initrd.gz,kernel_args="console=ttyAMA0,root=/dev/vda1"
Run through the install as you normally would. Towards the end
the installer will likely complain it can't figure out how to
install a bootloader, which is fine. Just before ending the
install/reboot, switch to the shell and copy the /boot/vmlinuz and
/boot/initrd.img from the target system to the host in some fashion
(e.g. chroot into /target and use scp from the installed system).
This is required because the installer doesn't support 9p; to
boot the system, an initramfs is needed with the modules
required to mount the root fs, which the installed initramfs
provides :).
Once that's all moved around, the installer can be finished.
Next, booting the installed system. For that adjust the libvirt
config (e.g. using virsh edit and tuning the xml) to use the kernel
and initramfs copied from the installer rather than the installer
ones. Spool up the VM again and it should happily boot into a
freshly installed Debian system.
To finalize on the guest side /boot should be moved onto the
shared 9pfs, the fstab entry for the new /boot should look
something like:
virtio-boot /boot 9p trans=virtio,version=9p2000.L,x-systemd.automount 0 0
With that setup, it's just a matter of shuffling the files in
/boot around to the new filesystem and the guest is done (make sure
vmlinuz/initrd.img stay symlinks). Kernel upgrades will work as
normal and visible to the host.
Now on the host side there is one extra hoop to jump through:
as the guest uses the 9p mapped security model, symlinks in the
guest will be normal files on the host containing the symlink
target. To resolve that, we've used libvirt's qemu hook support
to set up proper symlinks before the guest is started. Below is
the script we ended up using as an example (/etc/libvirt/hooks/qemu):
vm=$1
action=$2
bootdir=/srv/${vm}-boot

if [ ${action} != "prepare" ] ; then
  exit 0
fi

if [ ! -d ${bootdir} ] ; then
  exit 0
fi

ln -sf $(basename $(cat ${bootdir}/vmlinuz)) ${bootdir}/virtio-vmlinuz
ln -sf $(basename $(cat ${bootdir}/initrd.img)) ${bootdir}/virtio-initrd.img
With that in place, we can simply point the libvirt definition
to use /srv/${vm}-boot/virtio-{vmlinuz,initrd.img} as the
kernel/initramfs for the machine and it'll automatically get the
latest kernel/initramfs as installed by the guest when the VM is
started.
Just one final rough edge remains: when rebooting from inside
the VM, libvirt leaves qemu to handle the reboot rather than
restarting qemu. This unfortunately means a reboot won't pick up
a new kernel, if any; for now we've solved this by configuring
libvirt to stop the VM on reboot instead. As we typically only
reboot VMs on kernel (security) upgrades, this avoids rebooting
with an older kernel/initramfs than intended, even if it is a
bit tedious.
Before the 2.6 series, there was a stable branch (2.4) where only relatively minor and safe changes were merged, and an unstable branch (2.5), where bigger changes and cleanups were allowed. Both of these branches had been maintained by the same set of people, led by Torvalds. This meant that users would always have a well-tested 2.4 version with the latest security and bug fixes to use, though they would have to wait for the features which went into the 2.5 branch. The downside of this was that the stable kernel ended up so far behind that it no longer supported recent hardware and lacked needed features. In the late 2.5 kernel series, some maintainers elected to try backporting their changes to the stable kernel series, which resulted in bugs being introduced into the 2.4 kernel series. The 2.5 branch was then eventually declared stable and renamed to 2.6. But instead of opening an unstable 2.7 branch, the kernel developers decided to continue putting major changes into the 2.6 branch, which would then be released at a pace faster than 2.4.x but slower than 2.5.x. This had the desirable effect of making new features more quickly available and getting more testing of the new code, which was added in smaller batches and easier to test.

Then, in the Ruby community. In 2007, Ruby 1.8.6 was the stable version of Ruby. Ruby 1.9.0 was released on 2007-12-26, without being declared stable, as a snapshot from Ruby's trunk branch, and most of the development's attention moved to 1.9.x. On 2009-01-31, Ruby 1.9.1 was the first release of the 1.9 branch to be declared stable. But at the same time, the disruptive changes introduced in Ruby 1.9 made users stay with Ruby 1.8, as many libraries (gems) remained incompatible with Ruby 1.9.x. Debian provided packages for both branches of Ruby in Squeeze (2011) but only changed the default to 1.9 in 2012 (in a stable release with Wheezy 2013).

Finally, in the Python community.
Similarly to what happened with Ruby 1.9, Python 3.0 was released in December 2008. Releases from the 3.x branch have been shipped in Debian Squeeze (3.1), Wheezy (3.2), Jessie (3.4). But the python command still points to 2.7 (I don't think that there are plans to make it point to 3.x, making python 3.x essentially a different language), and there are talks about really getting rid of Python 2.7 in Buster (Stretch+1, Jessie+2).

In retrospect, and looking at what those projects have been doing in recent years, it is probably a better idea to break early, break often, and fix a constant stream of breakages, on a regular basis, even if that means temporarily exposing breakage to users, and spending more time seeking strategies to limit the damage caused by introducing breakage. What also changed since the time those branches were introduced is the increased popularity of automated testing and continuous integration, which makes it easier to measure breakage caused by disruptive changes. Distributions are in a good position to help here, by being able to provide early feedback to upstream projects about potentially disruptive changes. And distributions also have good motivations to help here, because it is usually not a great solution to ship two incompatible branches of the same project. (I wonder if there are other occurrences of the same pattern?)

Update: There's a discussion about this post on HN
% time ./bin/eierskap-dotty 958033540 > dagbladet.dot

real    0m2.841s
user    0m0.184s
sys     0m0.036s
%

The script accepts several organisation numbers on the command line, allowing a cluster of companies to be graphed in the same image. The resulting dot file for the example above looks like this. The edges are labeled with the ownership percentage, and the nodes use the organisation number as their name and the company name as the label:
digraph ownership {
  rankdir = LR;
  "Aller Holding A/s" -> "910119877" [label="100%"]
  "910119877" -> "998689015" [label="100%"]
  "998689015" -> "958033540" [label="99%"]
  "974530600" -> "958033540" [label="1%"]
  "958033540" [label="AS DAGBLADET"]
  "998689015" [label="Berner Media Holding AS"]
  "974530600" [label="Dagbladets Stiftelse"]
  "910119877" [label="Aller Media AS"]
}

To view the ownership graph, run "dotty dagbladet.dot" or convert it to a PNG using "dot -T png dagbladet.dot > dagbladet.png". The result can be seen below:

Note that I suspect the "Aller Holding A/S" entry to be incorrect data in the official ownership register, as that name is not registered in the official company register for Norway. The ownership register is sensitive to typos and there seems to be no strict checking of the ownership links. Let me know if you improve the script or find better data sources. The code is licensed according to GPL 2 or newer.

Update 2015-06-15: Since the initial post I've been told that "Aller Holding A/S" is a Danish company, which explains why it did not have a Norwegian organisation number. I've also been told that there is a web services API available from Brønnøysundregistrene, for those willing to accept the terms or pay the price.
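The post does not reproduce the script itself; as an illustration of the dot-generation step only, a minimal Python sketch that renders an equivalent graph from a list of (owner, owned, percentage) edges and a name table could look like this (the function name and data shape are made up, not the original eierskap-dotty code):

```python
def ownership_dot(edges, names):
    """Render a graphviz dot graph from ownership data.

    edges: list of (owner_id, owned_id, percentage) tuples
    names: dict mapping organisation number to company name
    """
    lines = ["digraph ownership {", "  rankdir = LR;"]
    # One labeled edge per ownership link
    for owner, owned, pct in edges:
        lines.append(f'  "{owner}" -> "{owned}" [label="{pct}%"]')
    # Label each known node with the company name
    for org, name in names.items():
        lines.append(f'  "{org}" [label="{name}"]')
    lines.append("}")
    return "\n".join(lines)


# Example with part of the Dagbladet data from the post:
edges = [("910119877", "998689015", 100),
         ("998689015", "958033540", 99),
         ("974530600", "958033540", 1)]
names = {"958033540": "AS DAGBLADET",
         "998689015": "Berner Media Holding AS"}
print(ownership_dot(edges, names))
```

The resulting text can be written to a .dot file and fed to dotty or dot exactly as described above.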