Amounts are to taste: [Stage one]
Chopped red onion
Chopped garlic
Chopped fresh ginger
Chopped big red chillies (mild)
Chopped birds eye chillies (red or green, quite hot)
Chopped scotch bonnets (hot)
[Fry onion in some olive oil. When getting translucent, add the rest of the ingredients. You may need to add some more oil. When the garlic is browning, move on to stage two.] [Stage two]
Some tins of chopped tomato
Some tomato puree
Some basil
Some thyme
Bay leaf optional
Some sliced mushroom
Some chopped capsicum pepper
Some kidney beans
Other beans optional (butter beans are nice)
Lentils optional (Pro tip: if adding lentils, especially red lentils, I recommend adding some garam masala as well. Lifts the flavour.)
Veggie mince optional
Pearled barley very optional
Stock (some reclaimed from swilling water around the tomato tins)
Water to keep topping up with if it gets too sticky or dry
Dash of red wine optional
Worcester sauce optional
Any other flavouring you feel like optional (I quite often add random herbs or spices)
[[Secret ingredient: a spoonful of Marmite]]
[Cook everything up together, but wait until there is enough fluid before you add the dry/sticky ingredients in.]
[Serve with carb of choice. I am currently fond of using Ryvita as a dipper instead of tortilla chips.]
[Also serve with a cooler such as natural yogurt, soured cream or something else.]
You want more than one type of chilli in there to broaden the flavour. I use all three, plus occasionally others as well. If you are feeling masochistic you can go hotter than scotch bonnets, but although you may get something of the same heat, I think you lose something in the flavour.
BTW if you get the chance, of all the tortilla chips, I think blue corn ones are the best. I only seem to find them in health food shops.
There you go. It's a Zen recipe, which is why I couldn't give you a link. You just do it until it looks right, feels right, tastes right. And with practice you get it better and better.
Looks like it is time to change all the passwords again. There's a tiny little
flaw in a CDN used everywhere, it seems.
Here's a quick hack for users of the pass password manager to quickly find the
domains affected. It is not perfect, but it is fast. :)
#!/bin/bash
# Stig Sandbeck Mathisen <ssm@fnord.no>
#
# Checks the content of "pass" against the list of sites using cloudflare.
# Expect false positives, and possibly false negatives.
#
# TODO: remove the left part of each hostname from pass, to check domains.

set -euo pipefail

tempdir=$(mktemp -d)
trap 'echo >&2 "removing $tempdir" ; rm -rf "$tempdir"' EXIT

git clone https://github.com/pirate/sites-using-cloudflare.git "$tempdir"

grep -F -x -f \
  <(pass git ls-files | sed -e 's,/, ,g' -e 's/\.gpg//' | xargs -n 1 | sort -u) \
  "$tempdir/sorted_unique_cf.txt" | sort -u
Update: The previous example used parallel. Actually, I didn't need that.
Turns out, using grep correctly is much faster than using grep the wrong way.
Lesson: Read the manual. :)
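As a toy illustration of the grep flags doing the heavy lifting there (-F fixed strings, -x whole-line match, -f patterns from a file; the file names below are throwaway examples):

```shell
# Build two small files and intersect them, the same way the script
# intersects the pass entries with the cloudflare hostname list.
printf 'example.com\nfnord.no\n' > patterns.txt
printf 'example.com\nexample.org\nfnord.no\n' > candidates.txt

# -F: no regex interpretation, -x: match entire lines, -f: patterns from file
grep -F -x -f patterns.txt candidates.txt   # prints example.com and fnord.no
```

Because every pattern is a fixed whole-line string, grep can match all of them in a single pass, which is why this is so much faster than looping over the list.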
Just been watching the video for Kate Bush The Red Shoes (I have actually seen the 1948 film). I came to a strange realisation. Activism, especially LGBTQ activism, is like the Red Shoes. When you put them on, you dance their dance, and you can never take them off.
I wonder how many other people have had this happen to them, and understand.
On a Linux system with desktop-file-utils installed, the default
application for opening a file with a file manager, from a web
browser, or using xdg-open on the command line is not static. The
last installed or upgraded application becomes the default.
For example: After installing gimp, that application will be used to
open any of the many types of files it supports. This lasts until
another application which can open those mime types is installed or
upgraded.
If I later install or upgrade mupdf, that application will be used for PDF,
until, etcetera.
There are several bug reports filed for this confusing behaviour:
Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=525077
Ubuntu: https://bugs.launchpad.net/ubuntu/+source/gimp/+bug/574342
Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=727422
Components
/usr/bin/update-desktop-database
is a command in the package desktop-file-utils
This command is run in the package postinst script, and triggers on
writes to /usr/share/applications where .desktop files are written.
/usr/share/applications
This directory contains a list of applications (files ending with
.desktop). These desktop files include mime types they are able to
work with.
The mupdf.desktop example shows it is able to work with (among
others) application/pdf.
However, I'm quite sure I do not want gimp to be the default viewer
for all those file types.
/usr/share/applications/mimeinfo.cache
This is a list of MIME types, with a list of applications able to open
them. The first entry in the list is the default application.
You may also have one of these in ~/.local/share/applications for
applications installed in the user's home directory.
Examples:
With gimp.desktop first, xdg-open test.pdf will use gimp
The order of .desktop files in mimeinfo.cache is the reverse of the
order they are added to that directory.
The last installed utility is first in that list.
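A quick way to see which application is currently first for a given MIME type is to pull the cache entry apart with sed. This is a sketch: the sample line below is hardcoded to mirror the mimeinfo.cache format described above, but would normally come from grepping the cache file itself.

```shell
# The first entry after "=" is the current default for the MIME type.
line='application/pdf=gimp.desktop;mupdf.desktop;evince.desktop;'
printf '%s\n' "$line" | sed -e 's/^[^=]*=//' -e 's/;.*//'   # prints gimp.desktop
```

On a real system, replace the hardcoded line with something like: grep '^application/pdf=' /usr/share/applications/mimeinfo.cache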
Application Trace
This was fun to dig into. I've just gotten some training which
included a better look at auditd. Auditd is a nice hammer, and this
problem was a good nail.
I ran the command under autrace, and then looked for the order of
reads from each run.
When mupdf is installed last, mupdf.desktop is read last, and placed
first in the list of applications:
Reinstalling gimp puts that first in the entry for application/pdf
root@laptop:~# apt install --reinstall gimp
[...]
Preparing to unpack .../gimp_2.8.18-1_amd64.deb ...
Unpacking gimp (2.8.18-1) over (2.8.18-1) ...
Processing triggers for mime-support (3.60) ...
Processing triggers for desktop-file-utils (0.23-1) ...
Setting up gimp (2.8.18-1) ...
Processing triggers for gnome-menus (3.13.3-8) ...
[...]
root@laptop:~# autrace /usr/bin/update-desktop-database
Waiting to execute: /usr/bin/update-desktop-database
Cleaning up...
Trace complete. You can locate the records with 'ausearch -i -p 15043'
root@laptop:~# ausearch -p 15043 | aureport --file | egrep 'gimp|mupdf'
389. 12/09/2016 17:39:53 /usr/share/applications/mupdf.desktop 4 yes /usr/bin/update-desktop-database 1000 9550
390. 12/09/2016 17:39:53 /usr/share/applications/mupdf.desktop 2 yes /usr/bin/update-desktop-database 1000 9551
391. 12/09/2016 17:39:53 /usr/share/applications/gimp.desktop 4 yes /usr/bin/update-desktop-database 1000 9556
392. 12/09/2016 17:39:53 /usr/share/applications/gimp.desktop 2 yes /usr/bin/update-desktop-database 1000 9557
root@laptop:~# grep application/pdf /usr/share/applications/mimeinfo.cache
application/pdf=gimp.desktop;mupdf.desktop;evince.desktop;libreoffice-draw.desktop;
Configuration
The solution is to add configuration so that something other than the
default is used. The xdg-mime command is your tool.
The various desktop environments often do this for you. However, if
you have a lightweight work environment, you may need to do this
yourself for the MIME types you care about.
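For example, running 'xdg-mime default mupdf.desktop application/pdf' pins mupdf as the PDF handler for your user, regardless of what was installed last. It records the choice in ~/.config/mimeapps.list as an entry like this (a sketch; the .desktop file name must match one installed under /usr/share/applications):

```
[Default Applications]
application/pdf=mupdf.desktop
```

You can check the result with 'xdg-mime query default application/pdf'.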
So, today at Cambridge MiniDebConf, I was scheduled to do a Birds of a Feather (BoF) about Diversity and Inclusion within Debian. I was expecting a handful of people in the breakout room. Instead it was a full blown workshop in the lecture theatre with me nominally facilitating. It went far, far better than I hoped (although a couple of other people and myself had to wrench us back on topic a few times).
There were lots of good ideas, and productive friendly debate (although we were pretty much all coming from the same ball park). There are three points I have taken away from it (others may have different views):
We are damned good at Inclusion, but have a long way to go on the Diversity (which is a problem of the entire tech sector).
Debian is a social project as well as a technical one; our immediately accessible documentation does not reflect this.
We are currently too reactive and passive when it comes to social issues and getting people involved. It is essential that we become more proactive.
Combined with the recent Diversity drive from Debconf 2016, I really believe we can do this. Thank you to all who attended, contributed, and approached me afterwards.
Edit: Video here Debian Diversity and Inclusion Workshop
Edit Edit: video link fixed.
From last month's toybox of distractions, I've spent time with GitLab CI,
Ansible, Prometheus and OpenShift.
GitLab CI is a lot like Travis
CI, and a little less like Jenkins. When a commit is pushed to the
repository in GitLab, and the branch contains a .gitlab-ci.yml file, a
GitLab CI runner will check out the repository, and follow the
instructions in that file. Useful for configuration syntax checks,
unit tests, and puppet environment deployments.
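A minimal .gitlab-ci.yml for the syntax-check use case might look like this (a sketch; the job name and script line are illustrative assumptions, not from the original setup):

```yaml
# One stage, one job: validate the puppet manifests on every push.
stages:
  - test

puppet-syntax:
  stage: test
  script:
    - puppet parser validate manifests/
```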
I've mostly used Ansible for
orchestration, performing tasks across a number of nodes. I've not
used it much for configuration management, but from what I see, it
can do that rather well, too. I've used Puppet in production for a
while (I committed revision 1 in the old puppet configuration management
repository at work on 2007-07-04). A new perspective on configuration
management is good.
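As a sketch of the orchestration use case (the host group and task below are illustrative assumptions), a minimal Ansible playbook looks like:

```yaml
# site.yml: run one task across every node in the "webservers" group.
- hosts: webservers
  tasks:
    - name: ensure ntp is installed
      package:
        name: ntp
        state: present
```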
Prometheus is a master-node stats gatherer
and presenter. It does a single HTTP GET to fetch all metrics in a
single request. I've used Munin for a long, long time, and while the
plugin ecosystem is far larger for Munin, the Prometheus master scales
much better (millions of metrics per minute on a modern machine). I
use Grafana to present graphs from Prometheus and logs from
Elasticsearch in the same dashboard. Prometheus can collect data from
a munin node, using a munin node exporter.
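That single-GET pull model is configured per target. A minimal prometheus.yml scrape section might look like this (hostname and port are assumptions; 9100 is the conventional node exporter port):

```yaml
scrape_configs:
  - job_name: node
    scrape_interval: 60s
    static_configs:
      - targets: ['server.example.com:9100']
```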
Last week I got
training
in OpenShift, which was an eye-opener.
I've used Docker for a good while, and planned to introduce
Kubernetes, as well as an imperial buttload of shell scripts to keep
it all automated. Thankfully, OpenShift Origin already includes
Kubernetes and does the required automation. An OpenShift cluster is
now being added to the core infrastructure to do the required care and
feeding of the herd of APIs and microservices written over the
years. Bunch it together behind an API Management Gateway, and you
should be able to label the whole thing "Microservice Architecture".
I'm not running out of fun things to do for a while.
So this morning, along with a few other members of staff, I was filmed for a Diversity and Inclusion video for Ada Lovelace Day at work. It was a very positive experience; I was wearing my rainbow chain mail necklace made by the wonderful Rosemary Warner, and a safety pin, which I had to explain the meaning of to the two peeps doing the filming. We all read the same script, and they are going to paste it together with each of us saying one sentence at a time. The script was not just about gender, it also mentioned age, skills, sexual orientation and physical ability among other things (I cannot remember the entire list). I was very happy and proud to take part.
As some of the world knows full well by now, I've been noodling with Go
for a few years, working through its pros, its cons, and thinking a lot
about how humans use code to express thoughts and ideas. Go's got a lot of
neat use cases, suited to particular problems, and used in the right place,
you can see some clear massive wins.
I've started writing Debian tooling in Go, because it's a pretty natural fit.
Go's fairly tight, and overhead shouldn't be taken up by your operating system.
After a while, I wound up hitting the usual blockers, and started to build up
abstractions. They became pretty darn useful, so, this blog post is announcing
(a still incomplete, year old and perhaps API changing) Debian package for Go.
The Go importable name is pault.ag/go/debian. This contains a lot of utilities
for dealing with Debian packages, and will become an edited down "toolbelt"
for working with or on Debian packages.
Module Overview
Currently, the package contains five major sub-packages. They're a changelog
parser, a control file parser, a deb file format parser, a dependency parser
and a version parser. Together, these are a set of powerful building blocks
which can be used together to create higher order systems with reliable
understandings of the world.
changelog
The first (and perhaps most incomplete and least tested) is a changelog file
parser. This provides the programmer with the ability to pull out the suite
being targeted in the changelog, when each upload happened, and the version
of each. For example, let's look at when all the uploads of Docker to sid
took place:
func main() {
	resp, err := http.Get("http://metadata.ftp-master.debian.org/changelogs/main/d/docker.io/unstable_changelog")
	if err != nil {
		panic(err)
	}
	allEntries, err := changelog.Parse(resp.Body)
	if err != nil {
		panic(err)
	}
	for _, entry := range allEntries {
		fmt.Printf("Version %s was uploaded on %s\n", entry.Version, entry.When)
	}
}
The output of which looks like:
Version 1.8.3~ds1-2 was uploaded on 2015-11-04 00:09:02 -0800 -0800
Version 1.8.3~ds1-1 was uploaded on 2015-10-29 19:40:51 -0700 -0700
Version 1.8.2~ds1-2 was uploaded on 2015-10-29 07:23:10 -0700 -0700
Version 1.8.2~ds1-1 was uploaded on 2015-10-28 14:21:00 -0700 -0700
Version 1.7.1~dfsg1-1 was uploaded on 2015-08-26 10:13:48 -0700 -0700
Version 1.6.2~dfsg1-2 was uploaded on 2015-07-01 07:45:19 -0600 -0600
Version 1.6.2~dfsg1-1 was uploaded on 2015-05-21 00:47:43 -0600 -0600
Version 1.6.1+dfsg1-2 was uploaded on 2015-05-10 13:02:54 -0400 EDT
Version 1.6.1+dfsg1-1 was uploaded on 2015-05-08 17:57:10 -0600 -0600
Version 1.6.0+dfsg1-1 was uploaded on 2015-05-05 15:10:49 -0600 -0600
Version 1.6.0+dfsg1-1~exp1 was uploaded on 2015-04-16 18:00:21 -0600 -0600
Version 1.6.0~rc7~dfsg1-1~exp1 was uploaded on 2015-04-15 19:35:46 -0600 -0600
Version 1.6.0~rc4~dfsg1-1 was uploaded on 2015-04-06 17:11:33 -0600 -0600
Version 1.5.0~dfsg1-1 was uploaded on 2015-03-10 22:58:49 -0600 -0600
Version 1.3.3~dfsg1-2 was uploaded on 2015-01-03 00:11:47 -0700 -0700
Version 1.3.3~dfsg1-1 was uploaded on 2014-12-18 21:54:12 -0700 -0700
Version 1.3.2~dfsg1-1 was uploaded on 2014-11-24 19:14:28 -0500 EST
Version 1.3.1~dfsg1-2 was uploaded on 2014-11-07 13:11:34 -0700 -0700
Version 1.3.1~dfsg1-1 was uploaded on 2014-11-03 08:26:29 -0700 -0700
Version 1.3.0~dfsg1-1 was uploaded on 2014-10-17 00:56:07 -0600 -0600
Version 1.2.0~dfsg1-2 was uploaded on 2014-10-09 00:08:11 +0000 +0000
Version 1.2.0~dfsg1-1 was uploaded on 2014-09-13 11:43:17 -0600 -0600
Version 1.0.0~dfsg1-1 was uploaded on 2014-06-13 21:04:53 -0400 EDT
Version 0.11.1~dfsg1-1 was uploaded on 2014-05-09 17:30:45 -0400 EDT
Version 0.9.1~dfsg1-2 was uploaded on 2014-04-08 23:19:08 -0400 EDT
Version 0.9.1~dfsg1-1 was uploaded on 2014-04-03 21:38:30 -0400 EDT
Version 0.9.0+dfsg1-1 was uploaded on 2014-03-11 22:24:31 -0400 EDT
Version 0.8.1+dfsg1-1 was uploaded on 2014-02-25 20:56:31 -0500 EST
Version 0.8.0+dfsg1-2 was uploaded on 2014-02-15 17:51:58 -0500 EST
Version 0.8.0+dfsg1-1 was uploaded on 2014-02-10 20:41:10 -0500 EST
Version 0.7.6+dfsg1-1 was uploaded on 2014-01-22 22:50:47 -0500 EST
Version 0.7.1+dfsg1-1 was uploaded on 2014-01-15 20:22:34 -0500 EST
Version 0.6.7+dfsg1-3 was uploaded on 2014-01-09 20:10:20 -0500 EST
Version 0.6.7+dfsg1-2 was uploaded on 2014-01-08 19:14:02 -0500 EST
Version 0.6.7+dfsg1-1 was uploaded on 2014-01-07 21:06:10 -0500 EST
control
Next is one of the most complex, and one of the oldest parts of go-debian,
which is the control file parser
(otherwise sometimes known as deb822). This module was inspired by the way
that the json module works in Go, allowing for files to be defined in code
with a struct. This tends to be a bit more declarative, but also winds up
putting logic into struct tags, which can be a nasty anti-pattern if used too
much.
The first primitive in this module is the concept of a Paragraph, a struct
containing two values, the order of keys seen, and a map of string to string.
All higher order functions dealing with control files will go through this
type, which is a helpful interchange format to be aware of. All parsing of
meaning from the Control file happens when the Paragraph is unpacked into
a struct using reflection.
The idea behind this strategy is that you define your struct, and let the
Control parser handle unpacking the data from the IO into your container,
letting you maintain type safety: since you never have to read and cast, the
conversion will handle this, and return an Unmarshaling error in the event
of failure.
Additionally, Structs that define an anonymous member of control.Paragraph
will have the raw Paragraph struct of the underlying file, allowing the
programmer to handle dynamic tags (such as X-Foo), or at least, letting
them survive the round-trip through Go.
The default decoder takes an argument: an OpenPGP keyring with which to
verify the input control file, exposed to the programmer through the
(*Decoder).Signer() function. If the passed argument is nil, it will not
check the input file signature (at all!), and if it has been passed, any
signed data must be found or an error will fall out of the NewDecoder call.
On the way out, the opposite happens, where the struct is introspected,
turned into a control.Paragraph, and then written out to the io.Writer.
Here's a quick (and VERY dirty) example showing the basics of reading and
writing Debian Control files with go-debian.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"

	"pault.ag/go/debian/control"
)

type AllowedPackage struct {
	Package     string
	Fingerprint string
}

func (a *AllowedPackage) UnmarshalControl(in string) error {
	in = strings.TrimSpace(in)
	chunks := strings.SplitN(in, " ", 2)
	if len(chunks) != 2 {
		return fmt.Errorf("Syntax sucks: '%s'", in)
	}
	a.Package = chunks[0]
	a.Fingerprint = chunks[1][1 : len(chunks[1])-1]
	return nil
}

type DMUA struct {
	Fingerprint     string
	Uid             string
	AllowedPackages []AllowedPackage `control:"Allow" delim:","`
}

func main() {
	resp, err := http.Get("http://metadata.ftp-master.debian.org/dm.txt")
	if err != nil {
		panic(err)
	}
	decoder, err := control.NewDecoder(resp.Body, nil)
	if err != nil {
		panic(err)
	}
	for {
		dmua := DMUA{}
		if err := decoder.Decode(&dmua); err != nil {
			if err == io.EOF {
				break
			}
			panic(err)
		}
		fmt.Printf("The DM %s is allowed to upload:\n", dmua.Uid)
		for _, allowedPackage := range dmua.AllowedPackages {
			fmt.Printf("   %s [granted by %s]\n", allowedPackage.Package, allowedPackage.Fingerprint)
		}
	}
}
Output (truncated!) looks a bit like:
...
The DM Allison Randal <allison@lohutok.net> is allowed to upload:
parrot [granted by A4F455C3414B10563FCC9244AFA51BD6CDE573CB]
...
The DM Benjamin Barenblat <bbaren@mit.edu> is allowed to upload:
boogie [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
dafny [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
transmission-remote-gtk [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
urweb [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
...
The DM <aelmahmoudy@sabily.org> is allowed to upload:
covered [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
dico [granted by 6ADD5093AC6D1072C9129000B1CCD97290267086]
drawtiming [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
fonts-hosny-amiri [granted by BD838A2BAAF9E3408BD9646833BE1A0A8C2ED8FF]
...
...
deb
Next up, we've got the deb module. This contains code to handle reading
Debian 2.0 .deb files. It contains a wrapper that will parse the control
member, and provide the data member through the
archive/tar interface.
Here's an example of how to read a .deb file, access some metadata, and
iterate over the tar archive, and print the filenames of each of the
entries.
dependency
The dependency package provides an interface to parse and compute
dependencies. This package is a bit odd in that, well, there's no other
library that does this. The issue is that there are actually two different
parsers that compute our Dependency lines, one in Perl (as part of dpkg-dev)
and another in C (in dpkg).
To date, this has resulted in me filing three different bugs.
I also found a broken package in the
archive,
which actually resulted in another bug being (totally accidentally)
already fixed.
I hope to continue to run the archive through my parser in hopes of finding
more bugs! This package is a bit complex, but it basically just returns what
amounts to be an AST
for our Dependency lines. I'm positive there are bugs, so file them!
func main() {
	dep, err := dependency.Parse("foo | bar, baz, foobar [amd64] | bazfoo [!sparc], fnord:armhf [gnu-linux-sparc]")
	if err != nil {
		panic(err)
	}
	anySparc, err := dependency.ParseArch("sparc")
	if err != nil {
		panic(err)
	}
	for _, possi := range dep.GetPossibilities(*anySparc) {
		fmt.Printf("%s (%s)\n", possi.Name, possi.Arch)
	}
}
Gives the output:
foo (<nil>)
baz (<nil>)
fnord (armhf)
version
Right off the bat, I'd like to thank
Michael Stapelberg for letting me graft this
out of dcs and into the go-debian package.
This was nearly entirely his work (with a one or two line function I added
later), and was amazingly helpful to have. Thank you!
This module implements Debian version comparisons and parsing, allowing for
sorting in lists, checking to see if it's native or not, and letting the
programmer implement smart(er!) logic based on upstream (or Debian)
version numbers.
This module is extremely easy to use and very straightforward, and not worth
writing an example for.
Final thoughts
This is more of a "Yeah, OK, this has been useful enough to me at this point
that I'm going to support this" rather than a "It's stable!" or even
"It's alive!" post. Hopefully folks can report bugs and help iterate on
this module until we have some really clean building blocks to build
solid higher level systems on top of. Being able to have multiple libraries
interoperate by relying on go-debian will be a massive help.
I'm in need of more documentation, and to finalize some parts of the older
sub package APIs, but I'm hoping to be at a "1.0" real soon now.
Puppet 4 has been uploaded to Debian unstable. This is a major upgrade
from Puppet 3.
If you are using Puppet, chances are that it is handling important
bits of your infrastructure, and you should upgrade with care.
Here are some points to consider.
Read Puppet's upgrade checklist
First, there are a number of changes in Puppet itself.
There is an
upgrade checklist
for Puppet (the software) published by Puppet (the company).
Please read this before you upgrade, and not after.
Using exported resources?
In Puppet 4, using
exported resources
requires PuppetDB, which is not packaged in Debian.
I've uploaded puppet 4.4.2-1 to Debian
experimental.
Please test with caution, and expect sharp corners. This is a new
major version of Puppet in Debian, with
many new features and potentially breaking changes,
as well as a big rewrite of the .deb packaging. Bug reports for
src:puppet are very welcome.
As previously described in #798636, the new package
names are:
puppet (all the software)
puppet-agent (package containing just the init script and systemd unit
for the puppet agent)
puppet-master (init script and systemd unit for starting a single
master)
puppet-master-passenger (This package depends on
apache2 and
libapache2-mod-passenger,
and configures a puppet master scaled for more than a handful of
puppet agents)
Lots of hugs to the authors, keepers and maintainers of
autopkgtest,
debci,
piuparts and
ruby-serverspec
for their software. They helped me figure out when I had reached "good
enough for experimental".
Some notes:
To use exported resources with puppet 4, you need a puppetdb
installation and a relevant puppetdb-terminus package on your puppet
master. This is not available in Debian, but is available from
Puppet's repositories.
Syntax highlighting for Emacs and Vim is no longer built from the
puppet package. Standalone packages will be made.
The packaged puppet modules need an overhaul of their dependencies
to install alongside this version of puppet. Testing would probably
also be great to see if they actually work.
On any given server, or workstation, knowing what is at the other end
of the network cable is often very useful.
There's a protocol for that: LLDP. This is a link layer protocol, so
it is not routed. Each end transmits information about itself
periodically.
You can typically see the type of equipment, the server or switch
name, and the network port name of the other end, although there are
lots of other bits of information available, too.
This is often used between switches and routers in a server centre,
but it is useful to enable on server hardware as well.
There are a few different packages available. I've looked at a few of
them available for the RedHat OS family (Red Hat Enterprise Linux,
CentOS, ...) as well as the Debian OS family (Debian, Ubuntu, ...).
(Updated 2016-04-29, added more recent information about lldpd, and
gathered the switch output at the end.)
ladvd
A simple daemon, with no configuration needed. This runs as a
privilege-separated daemon, and has a command line control
utility. You invoke it with a list of interfaces as command line
arguments to restrict the interfaces it should use.
ladvd is not available on RedHat, but is available on Debian.
Install the ladvd package, and run ladvdc to query the daemon for
information.
root@turbotape:~# ladvdc
Capability Codes:
r - Repeater, B - Bridge, H - Host, R - Router, S - Switch,
W - WLAN Access Point, C - DOCSIS Device, T - Telephone, O - Other
Device ID Local Intf Proto Hold-time Capability Port ID
office1-switch23 eno1 LLDP 98 B 42
Even better, it has an output mode that can be parsed for scripting.
lldpd
Another package is lldpd, which is also simple to configure and
use.
lldpd is not available on RedHat, but it is present on Debian.
It features a command line interface, lldpcli, which can show output
with different levels of detail, and in different formats, as well as
configure the running daemon.
root@turbotape:~# lldpcli show neighbors
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface: eno1, via: LLDP, RID: 1, Time: 0 day, 00:00:59
Chassis:
ChassisID: mac 00:11:22:33:44:55
SysName: office1-switch23
SysDescr: ProCurve J9280A Switch 2510G-48, revision Y.11.12, ROM N.10.02 (/sw/code/build/cod(cod11))
Capability: Bridge, on
Port:
PortID: local 42
PortDescr: 42
-------------------------------------------------------------------------------
Among the output formats is json, which is easy to re-use elsewhere.
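For the plain-text format, a field such as the remote system name can be pulled out with sed. A sketch: the sample line is hardcoded here, but would normally come from piping lldpcli show neighbors.

```shell
# Strip the "SysName:" label and surrounding whitespace from a detail line.
sample='    SysName:      office1-switch23'
printf '%s\n' "$sample" | sed -n 's/^ *SysName: *//p'   # prints office1-switch23
```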
lldpad
A much more featureful LLDP daemon, available for both the Debian and
RedHat OS families. This has lots of features, but is less trivial to
set up.
Configure lldp for each interface
#!/bin/sh

find /sys/class/net/ -maxdepth 1 -name 'en*' |
  while read device; do
    basename "$device"
  done |
  while read interface; do
    lldptool set-lldp -i "$interface" adminStatus=rxtx
    for item in sysName portDesc sysDesc sysCap mngAddr; do
      lldptool set-tlv -i "$interface" -V "$item" enableTx=yes |
        sed -e "s/^/$item /"
    done |
      sed -e "s/^/$interface /"
  done
[...]
enp3s0f0
Chassis ID TLV
MAC: 01:23:45:67:89:ab
Port ID TLV
Local: 588
Time to Live TLV
120
System Name TLV
site3-row2-rack1
System Description TLV
Juniper Networks, Inc. ex2200-48t-4g , version 12.3R12.4 Build date: 2016-01-20 05:03:06 UTC
System Capabilities TLV
System capabilities: Bridge, Router
Enabled capabilities: Bridge, Router
Management Address TLV
IPv4: 10.21.0.40
Ifindex: 36
OID: $
Port Description TLV
some important server, port 4
MAC/PHY Configuration Status TLV
Auto-negotiation supported and enabled
PMD auto-negotiation capabilities: 0x0001
MAU type: Unknown [0x0000]
Link Aggregation TLV
Aggregation capable
Currently aggregated
Aggregated Port ID: 600
Maximum Frame Size TLV
9216
Port VLAN ID TLV
PVID: 2000
VLAN Name TLV
VID 2000: Name bumblebee
VLAN Name TLV
VID 2001: Name stumblebee
VLAN Name TLV
VID 2002: Name fumblebee
LLDP-MED Capabilities TLV
Device Type: netcon
Capabilities: LLDP-MED, Network Policy, Location Identification, Extended Power via MDI-PSE
End of LLDPDU TLV
enp3s0f1
[...]
on the switch side
On the switch, it is a bit easier to see what's connected to each
interface:
office switch
On the switch side, this system looks like:
office1-switch23# show lldp info remote-device
LLDP Remote Devices Information
LocalPort ChassisId PortId PortDescr SysName
--------- + ------------------------- ------ --------- ----------------------
[...]
42 22 33 44 55 66 77 eno1 Intel ... turbotape.example.com
[...]
office1-switch23# show lldp info remote-device 42
LLDP Remote Device Information Detail
Local Port : 42
ChassisType : mac-address
ChassisId : 00 11 22 33 33 55
PortType : interface-name
PortId : eno1
SysName : turbotape.example.com
System Descr : Debian GNU/Linux testing (stretch) Linux 4.5.0-1-amd64 #1...
PortDescr : Intel Corporation Ethernet Connection I217-LM
System Capabilities Supported : bridge, router
System Capabilities Enabled : bridge, router
Remote Management Address
Type : ipv4
Address : 192.0.2.93
Type : ipv6
Address : 20 01 0d b8 00 00 00 00 00 00 00 00 00 00 00 01
Type : all802
Address : 22 33 44 55 66 77
datacenter switch
ssm@site3-row2-rack1> show lldp neighbors
Local Interface Parent Interface Chassis Id Port info System Name
[...]
ge-0/0/38.0 ae1.0 01:23:45:67:89:58 Interface 2 as enp3s0f0 server.example.com
ge-1/0/38.0 ae1.0 01:23:45:67:89:58 Interface 3 as enp3s0f1 server.example.com
[...]
ssm@site3-row2-rack1> show lldp neighbors interface ge-0/0/38
LLDP Neighbor Information:
Local Information:
Index: 157 Time to live: 120 Time mark: Fri Apr 29 13:00:19 2016 Age: 24 secs
Local Interface : ge-0/0/38.0
Parent Interface : ae1.0
Local Port ID : 588
Ageout Count : 0
Neighbour Information:
Chassis type : Mac address
Chassis ID : 01:23:45:67:89:58
Port type : Mac address
Port ID : 01:23:45:67:89:58
Port description : Interface 2 as enp3s0f0
System name : server.example.com
System Description : Linux server.example.com 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 4
System capabilities
Supported : Station Only
Enabled : Station Only
Management Info
Type : IPv6
Address : 2001:0db8:0000:0000:0000:dead:beef:cafe
Port ID : 2
Subtype : 2
Interface Subtype : ifIndex(2)
OID : 1.3.6.1.2.1.31.1.1.1.1.2
This is a fusion recipe, from a rather bland "just stuff it with ricotta" recipe I saw, David Scott's The Penniless Vegetarian, and my own mutations on those themes.
I can't give you exact quantities; just make a little more than will fill the hollowed mound (grin), and the rest will make an excellent pasta sauce.
Ingredients
For an average sized butternut squash, you will need:
1 onion (I prefer red)
3 cloves of garlic
1 capsicum pepper (I prefer green, my ex- preferred red)
Some red lentils
Optional green or brown lentils for texture and flavour. I used some Puy.
The lentil quantity is hard to estimate, but I use a ratio of 4 red to 1 optional.
Roughly one handful of chopped mushrooms (i.e. one handful once chopped)
1 tin of chopped tomatoes
Some tomato puree
A generous amount of garam masala (garam masala is what brings out the flavour in lentils)
Some paprika
Optional chilli (if using chilli, I recommend fresh, of course)
Optional Balsamic vinegar
Optional Marmite
Preparation of the Squash
1. Cut the butternut squash in half, lengthways. This is very hard; you will need a good large knife, and it may require jumping up and down into the air. This is the second hardest part of the procedure.
2. For each half, scoop out the seeds, and pare back the bowl till it is no longer overly fibrous. Discard this, or find a use for the seeds.
3. For each half, scoop a channel of the softer flesh from the basin up to near the top. This has to be done by feel, and is hard, thankless work; some experimentation is required. Reserve this flesh.
Preparation of the Filling
This is just basically a nice lentil sauce that can be used with pasta, rice, toast etc.
Important: this is not a stir fry, but a largish, heavy-bottomed pan is recommended.
1. Finely peel then chop the onions and the garlic. Chop the chillies if used (I am a chilli gal). Please observe Chilli Protocol[0].
2. Wash and chop the pepper and mushrooms. Not finely diced, but not crudite-sized slices. Remember that peppers shrivel down a little, mushrooms a lot.
3. Start frying the onions for a while in some oil (I prefer olive, but others are acceptible), until they just about to go translucent. Then add the garlic and optional chillis until the garlic is just cooking nicely.
4. Add the spices, turn over until all the containts of the pan are covered, and cook for another 30 seconds or so. Then add the tinned tomato, and then add half a can of cold water water which rinsed the tin out with. Stir this around, and make sure it is now at just at a simmer or pre-simmer.
5. Add the lentils. You want 0.5-1 cm of water above the lentils when you have added and stirred. Let these cook and expand for about 5 mins, stirring all the while, all the lentils will stick to the bottom.
6. Add the pepper, mushroom, reserved squash flesh, and optional dash of balsamic vinegar, and half a tea spoon of marmite. Cook and stir until the pepper goes soft. This is the hard part. Add boiling water if really too thick, or some tomato puree if too thin. There is no hard science to this, you want at the end of 10 minutes or so something resembling the thickness in texture of a stiff bolognaise sauce.
Assembly
1. Have a baking tray. Whether you prefer to grease, line with foil, or line with baking parchment is up to you. I prefer baking parchment.
2. Stuff those two halves of butternut squash with that sauce you made. It should make a mound about 1 cm above the level of the squash. If you feel extravagant, and are not vegan, sprinkle a little grated cheese on top.
3. Place in a pre-heated oven at 200°C. Cooking time should be about 20 mins, but larger ones take longer. The acid test is to briefly take them out, and prod the lower side with a fork. It should go through the skin with little resistance.
When ready, serve. It's really a dish in itself, but some people might like a bit of salad, or maybe a light green risotto.
It's been a while since I've looked at the puppet 4 packaging. During
this time, a lot of things have happened.
puppet
Puppet 4 is out. I've done some packaging tests, and they are
promising.
puppet-agent
Puppet Labs released
puppet-agent recently,
with instructions.
It looks like an umbrella project for building and bundling puppet,
facter, augeas, ruby, and everything else to install on puppet managed
nodes.
It downloads a lot of software over HTTP or git, and verifies at least
some of it with md5 checksums. I don't think this is needed for
Debian.
puppet-server
This really needs packaging. There are probably lots of dependencies
not packaged yet. I am not familiar with any of the build tools.
puppetdb
Needs packaging. There are probably lots of dependencies not packaged
yet. I am not familiar with any of the build tools.
vim-puppet
This has moved outside the puppet repository.
Both puppetlabs and
rodjek have vim modes for
puppet. I've grown very fond of rodjek's vim-puppet.
puppet-el
This has moved outside the puppet repository.
Both puppetlabs
and lunaryorn have emacs
modes for puppet.
puppet modules
There are a lot of puppet modules in the team VCS.
The Munin project is moving slowly closer to a
Munin 3 release. In parallel, the Debian packaging is changing, too.
The new web interface is looking much better than the
traditional web-1.0 interface normally associated with munin.
New package layout
perl libraries
All the Munin perl libraries are placed in libmunin-*-perl, and
split into separate packages, where the split is decided mostly on
dependencies.
If you don't want to monitor samba, or SNMP, or MySQL, there should be
no need to have those libraries installed. That does mean more
binary packages, on the other hand.
Munin master
Munin now runs as a standalone HTTPD, it no longer graphs from cron,
nor does it run as CGI or FastCGI scripts.
The user munin grants read-write access, while the group munin
grants read only access. The new web interface runs as the
munin-httpd user, which is member of the munin group.
There is a munin service. For now, it runs rrdcached for the munin
user and RRD directory.
munin node
The perl munin-node and the compiled munin-node-c should be
interchangeable, and be able to run the same plugins.
Munin node, and Munin async node, should be wholly separate from the
munin master. It should be possible to use the perl munin-node
package and the compiled munin-node-c package interchangeably.
munin plugins
The munin plugins are placed in separate packages named
munin-plugins-*. The split is based on monitoring subject, or
dependencies. They depend on the appropriate libmunin-plugin-*-perl
packages.
The munin-plugins-c package, which is built from the munin-node-c
source, contains a number of compiled plugins which should use less
resources than their shell, perl or python equivalents.
Plugins from sources other than munin must work similarly to the ones
from munin. More work on this is needed.
Testing
Late December 2015, I set up Jenkins, with
jenkins-debian-glue to build packages, test with autopkgtest
and update my development apt repository on each commit. That
helped developing and testing the new Munin packages.
The packages are not quite ready to upload to experimental, but they
are continuously deployed to weed out bugs. They can be found in
my packaging apt repo. (The usual non-guarantees apply, handle
with care, keep away from small children, etc.)
Comments
Munin developers, packagers and users hang out on
#munin on the OFTC network.
Please drop by if you have questions or comments.
On the 13th of May this year, I legally became Lucy Wayland. I'd been living as a woman full time for a couple of months before that, but that is when two dear friends witnessed my name change. I am going to post about the whole experience when it finally reaches the completion zone.
However, this last weekend just gone, I was helping out with the Cambridge (UK) MiniDebConf. I was mostly gophering and front-desk-helpering, with side orders of beverages, so I missed most of the talks. Which is not the point.
I met nearly everybody at the conference. Many of them knew me as Jon, a goateed man. I was there as Lucy, a woman. And nobody batted an eyelid.
Not a single person used my old name
Not a single person mis-gendered me
Not a single person referred to my transition
The only time I had to produce my Deed Poll was for keysigning, as I still do not have photo ID with my new name on it. I proffered it along with my passport, so there was no embarrassment.
I know other people within Debian have gone through the same process. However, I just have to say how wonderful it is, to be accepted just that way.
And hence the title of my article. Our differences bring us together. So many different people from so many different cultures came together, wanted to create, and my change of gender was just irrelevant.
And that's how it should be.
Let's Encrypt is now in beta.
Here's how to use the automated CA with Hitch TLS proxy, and
Varnish HTTP accelerator as a backend for Hitch.
Let's Encrypt
Let's Encrypt is a
free, automated and open Certificate Authority (CA), run for the
public's benefit.
Varnish
Varnish is a HTTP
accelerator. I use it as a web server, and it serves content
from various backends.
Varnish is used on a lot of high-traffic web servers, and
supports simple and complex web site configurations.
Hitch
Hitch is a TLS proxy, by
Varnish Software. It terminates TLS connections, and forwards
them to a backend, unencrypted.
The TLS proxy is simple to configure, and handles many thousands
of requests per second on commodity hardware.
Hitch was originally called stud and was written by Jamie Turner
at Bump.com.
Varnish plugin for Let's Encrypt
Let's
Encrypt Varnish Plugin is needed for the Let's Encrypt
client to authenticate against their service, for getting the
certificates.
You can skip this if you have the option of temporarily
disabling your web service, when registering and renewing your
certificates.
Hitch TLS proxy
Hitch must be configured to listen on the https port (TCP/443),
with a list of strong ciphers.
To get Forward
Secrecy, we need ciphers with EECDH or EDH. To use these, we
need to generate a set of DH
Parameters.
In /etc/hitch/hitch.conf:
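A rough sketch of such a frontend section, with an illustrative (not authoritative) cipher list:

```
# Listen for TLS on the https port on all addresses
frontend = "[*]:443"

# Restrict to ciphers capable of forward secrecy (EECDH/EDH);
# this list is illustrative, not a vetted recommendation
ciphers = "EECDH+AESGCM:EDH+AESGCM:EECDH+AES256:EDH+AES256:!aNULL:!MD5"
```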
Varnish as backend for Hitch
Hitch will forward traffic to Varnish on localhost:6086, using
the PROXY protocol. Here's a snippet for
/etc/hitch/hitch.conf to do this:
# Send traffic to the varnish backend
backend = "[::1]:6086"
write-proxy-v2 = on
For varnish, I've added a separate port for the "PROXY"
protocol, by adding an extra "-a" argument for port 6086. The
-a :80 handles the existing http traffic, while the
-a '[::1]:6086,PROXY' adds a listener on localhost,
which will be used by hitch.
/usr/sbin/varnishd -j unix,user=vcache -a :80 -a '[::1]:6086,PROXY' ...
Let's Encrypt
install letsencrypt
First, install the letsencrypt client. The beta program sends you a
mail with instructions. When it is generally available, I suspect
it'll be something like:
apt install letsencrypt
install varnish plugin
Then, get the let's
encrypt varnish plugin.
The plugin will extend the letsencrypt software to be able to
rewrite the VCL and reload Varnish to satisfy the authentication
challenge of the Let's Encrypt server.
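As a sketch of the kind of VCL such a rewrite amounts to (the backend name acme is an assumption for illustration, not necessarily what the plugin generates), the challenge URLs get routed to the letsencrypt client's responder:

```
sub vcl_recv {
    # Route ACME HTTP-01 challenge requests to the letsencrypt
    # client's responder (backend name "acme" is illustrative)
    if (req.url ~ "^/.well-known/acme-challenge/") {
        set req.backend_hint = acme;
    }
}
```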
Adding certificates to Hitch
Hitch requires one PEM file per domain we serve. Each .pem file
contains the private key, the signed certificate and any
required intermediates.
Create a file with DH parameters:
openssl dhparam -out /etc/hitch/dhparam.pem 2048
This file may take a long while to generate.
Combine the key, the signed certificate, and the dhparam file to
something hitch can use:
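Assuming the letsencrypt client's default live directory layout (paths and domain here are illustrative, not the author's original command):

```shell
# Concatenate key, certificate chain and DH parameters into one PEM
# (paths assume the letsencrypt default layout for example.org)
cat /etc/letsencrypt/live/example.org/privkey.pem \
    /etc/letsencrypt/live/example.org/fullchain.pem \
    /etc/hitch/dhparam.pem \
    > /etc/hitch/example.org.pem
```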
and configure hitch to use the file, by adding it to
/etc/hitch/hitch.conf:
# List of PEM files, each with key, certificates and dhparams
pem-file = "/etc/hitch/example.org.pem"
pem-file = "/etc/hitch/www.example.org.pem"
pem-file = "/etc/hitch/example.com.pem"
pem-file = "/etc/hitch/www.example.com.pem"
Munin
Working on making the munin master fit inside Mojolicious. The existing code
is not written to make this trivial, but all the pieces are there. Most of the
pieces need breaking up into smaller pieces to fit.
Debian
Packaging
New version of puppet-module-puppetlabs-apache (Closes:
#788124, #788125, #788127). I like it
when a new upstream version closes all bugs left in the bts for a
package.
A new package, the TLS proxy hitch, is currently
waiting in the queue.
Puppet
Lots of work on a new ceph puppet module.
I recently wrote about tools to handle archives conveniently. If you just have to handle
compressed text files, there are some widely known shortcut commands
to mimic common commands on files compressed with a specific
compression format.
         gzip     bzip2     lzma      xz
cat      zcat     bzcat     lzcat     xzcat
cmp      zcmp     bzcmp     lzcmp     xzcmp
diff     zdiff    bzdiff    lzdiff    xzdiff
grep     zgrep    bzgrep    lzgrep    xzgrep
egrep    zegrep   bzegrep   lzegrep   xzegrep
fgrep    zfgrep   bzfgrep   lzfgrep   xzfgrep
more     zmore    bzmore    lzmore    xzmore
less     zless    bzless    lzless    xzless
In Debian and derivatives, those tools are part of the corresponding
package for that compression utility, i.e. the zcat
command is part of the gzip package and the
xzfgrep command is part of the xz-utils package.
But although this matrix is quite easy to remember, the situation has a
few drawbacks:
Those tools can only handle the format they're written for (which
btw. means that all xz tools can also handle
lzma-compressed files, as lzma is
xz's predecessor).
zcat and the other cat variants can't even recognize
non-compressed files, and throw an error instead of just showing their
contents.
I always tend to think that lzcat and friends are for
lzip-based compression, as xzcat can handle
lzma-compressed files anyway.
This is where the zutils project comes in: zutils provides the
functionality of most of these utilities too, but with one big
difference: you don't have to remember, think about, or type which
compression method has been used for your data. Just use
zcat, zcmp, zdiff,
zgrep, zegrep, or zfgrep, and it
works independently of what compression method has been used,
if any, or if there are different compression types
mixed in the parameters to the same command:
Especially if you use logrotate and let
logrotate compress old logs, it's very convenient that
one command suffices to concatenate all the available logfiles,
including the current uncompressed one:
$ zcat /var/log/syslog*
Additionally, zutils versions of these tools also support
lzip-compressed files.
The zutils package is available in Debian starting with
Wheezy and in Ubuntu since Oneiric. When installed, it replaces
the original z* utilities from the gzip package
by diverting them away.
The only drawback so far is that there is neither a
zless nor a zmore utility from the
zutils project, so zless bla.txt fnord.gz hurz.xz quux.lz
bar.lzma will not work as expected even after
installing zutils: zless is still the one from the gzip
package, and hence it will show you just the first two files in
plain text, but not the remaining ones.
But if you have colour, why still have these hard-to-read wdiff
markers in the text?
There exists a tool named dwdiff which can do word diffs in colour without
textual markers and with even less to type (and without needing
git, unlike git diff --color-words ;-). Actually it looks like
git diff --color-words, just without the git:
$ dwdiff -c foobar.txt barfoo.txt
foo bar fnord
gnarz hurz quux
bla foo fasel
Another cool thing about dwdiff (and its name-giving feature) is that
you can define what you consider whitespace, i.e. which character(s)
delimit the words. So let's do the
example above again, but this time declare that f is considered
the only whitespace character:
$ dwdiff -W f -c foobar.txt barfoo.txt
foo bar bar fnord
gnarz hurz quux
bla foo fasel
dwdiff can also show line numbers:
$ dwdiff -c -L foobar.txt barfoo.txt
1:1 foo bar fnord
2:2 gnarz hurz quux
3:3 bla foo fasel
$ dwdiff -c -L foobar.txt quux.txt
1:1 foo bar fnord
1:2 foobar floedeldoe
2:3 gnarz hurz quux
3:4 bla foo fasel
After talking with some LinuxTag guys about which kind of talks are
still missing for the upcoming LinuxTag, I submitted another proposal
for a still only roughly sketched talk: KISS – Keep it simple and
stupid, also on the web.
KISS – “Keep it simple and stupid” is an old and successful principle
in the Unix world: small and simple programs, doing only one thing,
but doing it perfectly, fast and reliably. This principle can also
work on the web and make webservers or surf terminals out of already
discarded computers.
I planned to show “simple” (or at least “simple to use”) tools like
Blosxom or the Website Meta Language, a slimmer webserver than
Apache (e.g. fefe’s fnord or one of the ACME webservers thttpd, mini_httpd
or micro_httpd), slim web-browsers (e.g. like Dillo, Opera, glinks,
ViewML or Minimo) and one or more Linux distributions optimized for
low end PCs. While thinking about low end PCs, usually the following
distributions come to my mind: DeLi Linux, fli4l and Debian Woody.
But none of them seems to fit for my talk as perfectly as I would
like:
DeLi Linux is not a bad distribution, since it’s designed especially for
386 to Pentium I machines, but I have some strong disagreements with the
maintainer of DeLi Linux, since he sees a very small package list as a
necessary requirement for a distribution for old PCs. He states that
distributions for old PCs “don’t have that many harddisk
space” (beyond other, more realistic arguments — but it
seemed to be his main argument), while I see a rich package diversity
as a quality criterion. (One of the reasons why I like Debian and
dislike Ubuntu.) So I’m not sure if I should present a heavily
stripped-down DeLi Linux to the audience, just to make it fit my needs,
although I’m quite curious about his upcoming 0.7 release with the low
end, KHTML based ViewML webbrowser. (Apart from me seeing PHP5 and KDE
as a big no-no on old PCs…)
Although I still like Debian Woody very much (you know that old
story… ;-), it is just too old for a talk about how to make old PCs
usable again. Sarge would be fine, but it
was suggested to showcase an easy and fast way to get something ready
to run, and I can’t give the audience a list of all the Debian
packages with low resource consumption and therefore usable on low end
PCs.
I haven’t used it yet, but fli4l seems to be a very good distribution to
turn an old PC into an ISDN or DSL router, even without a harddisk. The
last time I had a look at fli4l, it used Apache as its (optional)
webserver, which wouldn’t fit into my scheme, since I would like to
show an alternative to Apache. But as I found out today, the recently
released version 3.0 of fli4l uses
the already mentioned ACME mini_httpd. Cool! They’re on the right track!
;-) Unfortunately it only seems to be used for serving information
pages about the fli4l status and not as a common webserver. (Please
correct me if this is wrong! I would appreciate it if I’m wrong at
this point. :-)
Since I first read about ViewML on the DeLi Linux page, I looked for
Debian packages of viewml today. apt-cache search found nothing on
Woody or Sarge, and packages.debian.org is still down, so I
used Google. I found out that there was a viewml package
in Debian since at least 2001, so I expect it just didn’t make it to
stable.
But I also found this interesting page on a webserver called
www.ubuntulite.org. Ubuntu Lite? That sounds very interesting,
since I don’t see Ubuntu as the worst idea (except for its horrible
resource hunger and only offering one package per application by
default ;-), but having an Ubuntu derivative prepackaged for low end
PCs and with several webbrowsers instead of only Epiphany (and
probably Firefox, don’t they?) would be perfect for my purpose.
So I’m currently downloading an Ubuntu Lite ISO and will give it a try
on one of my Pentium MMX boxes. Unfortunately it doesn’t seem to
support Pentium I or AMD K5 since Ubuntu itself only supports i686 and
upwards. :-/
But this also means that it’s not an option for my Pentium I Compaq
LTE 5100 (which I will probably name pony). But currently,
after Bartosz’ recent post on Planet Debian, it looks like
Debian GNU/kFreeBSD could also be an interesting OS, since it fits all
requirements perfectly: Free, Modern, Exotic and all conveniences of
Debian. ;-)
Now Playing: Jefferson Starship — We Built This City