Tollef Fog Heen: Pronoun support in userdir-ldap

echo "pronouns: he/him" gpg --clearsign mail changes@db.debian.org
I see that four people have already done so in the time I ve taken to
write this post.
echo "pronouns: he/him" gpg --clearsign mail changes@db.debian.org
I see that four people have already done so in the time I ve taken to
write this post.
OpenSSH has a configuration option called VerifyHostKeyDNS which, when
enabled, will pull SSH host keys from DNS, so you no longer need to
either trust on first use or copy host keys around out of band.
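To use it, publish SSHFP records for the host (ssh-keygen -r hostname
will print them in zone-file format) and enable the option
client-side. A minimal sketch for ssh_config, with the Host pattern
only as an example:

Host *.example.com
    VerifyHostKeyDNS yes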
Naturally, trusting unsecured DNS is a bit scary, so this requires the
record to be signed using DNSSEC. This has worked for a long time, but
then broke, seemingly out of the blue. Running ssh -vvv gave output
similar to:
debug1: found 4 insecure fingerprints in DNS
debug3: verify_host_key_dns: checking SSHFP type 1 fptype 2
debug3: verify_host_key_dns: checking SSHFP type 4 fptype 2
debug1: verify_host_key_dns: matched SSHFP type 4 fptype 2
debug3: verify_host_key_dns: checking SSHFP type 4 fptype 1
debug1: verify_host_key_dns: matched SSHFP type 4 fptype 1
debug3: verify_host_key_dns: checking SSHFP type 1 fptype 1
debug1: matching host key fingerprint found in DNS
This reported the fingerprints as "insecure" even though the zone was
signed, the resolver was validating the signature, and I had even
checked that the DNS response had the AD bit set.
The fix was to add options trust-ad to /etc/resolv.conf. Without this,
glibc will discard the AD bit from any upstream DNS servers. Note that
you should only add this if you actually have a trusted DNS resolver.
I run unbound on localhost, so if somebody can do a man-in-the-middle
attack on that traffic, I have other problems.
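For reference, a minimal sketch of the resulting /etc/resolv.conf,
assuming a validating resolver on localhost as described:

# local validating resolver (unbound); keep the AD bit it sets
nameserver 127.0.0.1
options trust-ad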
Toolchain fixes

The DEB_CHANGELOG_DATETIME variable is now used in the pom.properties
file embedded in the jar files.
dh_install, dh_installdocs, and dh_installexamples were made
reproducible. Patch by Niko Tyni.
SOURCE_DATE_EPOCH is now supported in libxslt upstream.
Packages fixed
The following packages have become reproducible due to changes in their
build dependencies:
antlr3/3.5.2-3,
clusterssh,
cme,
libdatetime-set-perl,
libgraphviz-perl,
liblingua-translit-perl,
libparse-cpan-packages-perl,
libsgmls-perl,
license-reconcile,
maven-bundle-plugin/2.4.0-2,
siggen,
stunnel4,
systemd,
x11proto-kb.
The following packages became reproducible after getting fixed:

reproducible.debian.net

A new armhf node using a Raspberry Pi 2 has been set up. It should
soon be added to the Jenkins infrastructure.
diffoscope development
diffoscope version 42 was released on November 20th. It adds a missing
dependency on python3-pkg-resources and, to prevent similar
regressions, another autopkgtest to ensure that the command line is
functional when Recommends are not installed. Two more encoding
related problems have been fixed (#804061, #805418). A missing
Build-Depends on binutils-multiarch has also been added to make the
test suite pass on architectures other than amd64.
Package reviews
180 reviews have been removed, 268 added and 59 updated this week.
70 new fail to build from source bugs have been reported by Chris West, Chris Lamb and Niko Tyni.
New issue this week:
randomness_in_ocaml_preprocessed_files.
Misc.
Jim MacArthur started to work on a system to rebuild and compare
packages built on reproducible.debian.net using .buildinfo and
snapshot.debian.org.
On December 1-3rd 2015, a meeting of about 40 participants from 18 different free software projects will be held in Athens, Greece with the intent of improving the collaboration between projects, helping new efforts to be started, and brainstorming on end-user aspects of reproducible builds.
The device tree overlay is compiled by copying the .dts into an
unpacked kernel tree and then running make dtbs.
Once this works, you need to compile the w1-gpio kernel module, since
Debian hasn't yet enabled that. Run make menuconfig, find it under
"Device drivers", "1-wire", "1-wire bus master", and build it as a
module. I then had to build a full kernel to get the symversions
right, then build the modules. I think there is or should be an easier
way to do that, but as I cross-built it on a fast AMD64 machine, I
didn't investigate too much.
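For the cross-build, the rough shape is something like this (a sketch,
assuming an armhf cross toolchain is installed; the exact targets
depend on your kernel tree):

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- menuconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage modules dtbs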
Insmod-ing w1-gpio then works, but for me, it failed to detect any
sensors. Reading the data sheet, it looked like a pull-up resistor on
the data line was needed. I had enabled the internal pull-up, but
apparently that wasn't enough, so I added a 4.7kOhm resistor between
pin 3 (VDD_3V3) on P9 and pin (GPIO_45) on P8. With that in place, my
sensors showed up in /sys/bus/w1/devices and you can read the values
using cat.
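For example (the 28- prefix is what DS18B20-family sensors use; your
device IDs will differ):

cat /sys/bus/w1/devices/28-*/w1_slave

The last line of the output ends with t=<value>, the temperature in
millidegrees Celsius.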
In my case, I wanted the data to go into collectd and then to
graphite. I first tried using an Exec plugin, but never got it to work
properly. Using a Python plugin worked much better and my graphite
installation is now showing me temperatures.
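The collectd side is just the stock python plugin configuration,
roughly like this (a sketch; the module name temp1wire and its path
stand in for whatever your plugin is called):

<LoadPlugin python>
  Globals true
</LoadPlugin>
<Plugin python>
  ModulePath "/usr/local/lib/collectd"
  Import "temp1wire"
</Plugin>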
Now I just need to add more probes around the house.
I'm hoping that we can all take a few minutes to gain empathy for
those who disagree with us. Then I'm hoping we can use that
understanding to reassure them that they are valued and respected, and
that their concerns are considered, even when we end up strongly
disagreeing with them or valuing different things.

I'd be lying if I said I never feel the urge to demonise my opponents
in discussions; to think that they're worse, as people, than I am.
However, it is imperative never to give in to this, since doing so
will diminish us as humans and make the entire project poorer. Civil
disagreements with reasonable discussions lead to better technical
outcomes, happier humans and a healthier project.
May 3 04:14:38 restaurant apache: 81.17.17.90, 127.0.0.1 - - [03/May/2014:04:14:38 +0000] "GET /mailman/subscribe/banshee-list?email=example@me.com&fullname=&pw=123456789&pw-conf=123456789&language=en&digest=0&email-button=Subscribe HTTP/1.1" 403 313 "http://spam/index2.html" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"

As you can see, the attackers were sending all the relevant details
needed for the subscription to go forward (specifically the full name,
the email, the digest option and the password for the target list). At
first we tried to stop the spam by banning the subnets the requests
were coming from; then, when it was obvious that more subnets were
being used and manual intervention was needed, we tried banning their
User-Agents. Again no luck: the spammers were smart enough to change
it every now and then, making it match an existing browser User-Agent
(with a good chance of causing a lot of false positives).

Now you might be wondering why such an attack caused a lot of issues
and pain. Well, the attackers made use of addresses found around the
web for their malicious subscription requests. That means we received
a lot of emails from people who had never heard about the GNOME
mailing lists but received around 10k subscription requests that were
seemingly being sent by themselves.

It was obvious we needed to look at a backup solution, and luckily
someone on our support channel pointed out that the freedesktop.org
sysadmins recently added CAPTCHA support to Mailman. I'm now sharing
the patch and providing a few more details on how to properly set it
up on either DEB or RPM based distributions. Credits for the patch
should be given to Debian Developer Tollef Fog Heen, who has been so
kind to share it with us.

Before patching your installation, make sure to install the
python-recaptcha package on DEB based distributions (tested on Debian
with Mailman 2.1.15) and python-recaptcha-client on RPM based
distributions (I personally tested it against Mailman release 2.1.15,
RHEL 6).

The Patch
diff --git a/Mailman/Cgi/listinfo.py b/Mailman/Cgi/listinfo.py
index 4a54517..d6417ca 100644
--- a/Mailman/Cgi/listinfo.py
+++ b/Mailman/Cgi/listinfo.py
@@ -22,6 +22,7 @@
 import os
 import cgi
+import sys
 
 from Mailman import mm_cfg
 from Mailman import Utils
@@ -30,6 +31,8 @@ from Mailman import Errors
 from Mailman import i18n
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+sys.path.append("/usr/share/pyshared")
+from recaptcha.client import captcha
 
 # Set up i18n
 _ = i18n._
@@ -200,6 +203,9 @@ def list_listinfo(mlist, lang):
     replacements['<mm-form-start>'] = mlist.FormatFormStart('listinfo')
     replacements['<mm-fullname-box>'] = mlist.FormatBox('fullname', size=30)
 
+    # Captcha
+    replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=False)
+
     # Do the expansion.
     doc.AddItem(mlist.ParseTags('listinfo.html', replacements, lang))
     print doc.Format()
diff --git a/Mailman/Cgi/subscribe.py b/Mailman/Cgi/subscribe.py
index 7b0b0e4..c1c7b8c 100644
--- a/Mailman/Cgi/subscribe.py
+++ b/Mailman/Cgi/subscribe.py
@@ -21,6 +21,8 @@
 import sys
 import os
 import cgi
 import signal
+sys.path.append("/usr/share/pyshared")
+from recaptcha.client import captcha
 
 from Mailman import mm_cfg
 from Mailman import Utils
@@ -132,6 +130,17 @@ def process_form(mlist, doc, cgidata, lang):
     remote = os.environ.get('REMOTE_HOST',
                             os.environ.get('REMOTE_ADDR',
                                            'unidentified origin'))
+
+    # recaptcha
+    captcha_response = captcha.submit(
+        cgidata.getvalue('recaptcha_challenge_field', ""),
+        cgidata.getvalue('recaptcha_response_field', ""),
+        mm_cfg.RECAPTCHA_PRIVATE_KEY,
+        remote,
+    )
+    if not captcha_response.is_valid:
+        results.append(_('Invalid captcha'))
+
     # Was an attempt made to subscribe the list to itself?
     if email == mlist.GetListEmail():
         syslog('mischief', 'Attempt to self subscribe %s: %s', email, remote)

Additional setup

Then on the /var/lib/mailman/templates/en/listinfo.html template
(right below <mm-digest-question-end>) add:
<tr>
  <td>Please fill out the following captcha</td>
  <td><mm-recaptcha-javascript></td>
</tr>

Also make sure to generate a public and private key at
https://www.google.com/recaptcha and add the following parameters to
your mm_cfg.py file:
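(A sketch of the missing snippet: the parameter names are the ones the
patched code reads; the values are placeholders for the keys you just
generated.)

RECAPTCHA_PUBLIC_KEY = 'your-public-key'
RECAPTCHA_PRIVATE_KEY = 'your-private-key'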
If the page is served over SSL, change the line you added to
listinfo.py from:

replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=False)

to:

replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=True)

EPEL 6 related details

A few additional details should be provided in case you are setting
this up against a RHEL 6 host (or any other machine using the EPEL 6
package python-recaptcha-client-1.0.5-3.1.el6). Importing the
recaptcha.client module will fail for some strange reason; importing
it correctly can be done this way:
ln -s /usr/lib/python2.6/site-packages/recaptcha/client /usr/lib/mailman/pythonlib/recaptcha

Then fix the imports, also making sure
sys.path.append("/usr/share/pyshared") is not there:
from recaptcha import captcha

That's not all: the package still won't work as expected, given the
API_SSL_SERVER, API_SERVER and VERIFY_SERVER variables in captcha.py
are outdated (filed as bug #1093855). Substitute them with the
following ones:
API_SSL_SERVER="https://www.google.com/recaptcha/api"
API_SERVER="http://www.google.com/recaptcha/api"
VERIFY_SERVER="www.google.com"

And then on line 76:
url = "https://%s/recaptcha/api/verify" % VERIFY_SERVER,That should be all! Enjoy!
Adjusting resolv.conf so you use the DHCP-provided resolver stops the
redirect loop, and you can then log in. Afterwards, you're free to
switch back to using your own local resolver.
#! /usr/bin/python
# -*- coding: utf-8 -*-
import time
import sys
# format is:
# [TIMESTAMP] COMMAND_NAME;argument1;argument2;...;argumentN
#
# For passive checks, we want PROCESS_SERVICE_CHECK_RESULT with the
# format:
#
# PROCESS_SERVICE_CHECK_RESULT;<host_name>;<service_description>;<return_code>;<plugin_output>
#
# return code is 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
#
# Read lines from stdin with the format:
# $HOSTNAME\t$SERVICE_NAME\t$RETURN_CODE\t$TEXT_OUTPUT
if len(sys.argv) != 2:
    print "Usage: {0} HOSTNAME".format(sys.argv[0])
    sys.exit(1)

HOSTNAME = sys.argv[1]
timestamp = int(time.time())
nagios_cmd = file("/var/lib/nagios3/rw/nagios.cmd", "w")
for line in sys.stdin:
    # first field is the submitting host's name, overridden by HOSTNAME;
    # strip the trailing newline so it doesn't end up in the output
    (_, service, return_code, text) = line.rstrip("\n").split("\t", 3)
    nagios_cmd.write(u"[{timestamp}] PROCESS_SERVICE_CHECK_RESULT;{hostname};{service};{return_code};{text}\n".format(
        timestamp = timestamp,
        hostname = HOSTNAME,
        service = service,
        return_code = return_code,
        text = text))
The reason for the hostname in the line (even though it's overridden)
is to be compatible with send_nsca's input format.
Machines submit check results over SSH using its excellent
ForceCommand capabilities; the Chef template for the authorized_keys
file looks like:
<% for host in @nodes %>
command="/usr/local/lib/nagios/nagios-passive-check-result <%= host[:hostname] %>",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa <%= host[:keys][:ssh][:host_rsa_public] %> <%= host[:hostname] %>
<% end %>
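Rendered, each node contributes one line of this shape (the host web01
and the truncated key are made up for illustration):

command="/usr/local/lib/nagios/nagios-passive-check-result web01",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza... web01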
The actual chef recipe looks like:
nodes = []
search(:node, "*:*") do |n|
  # Ignore not-yet-configured nodes
  next unless n[:hostname]
  next unless n[:nagios]
  next if n[:nagios].has_key?(:ignore)
  nodes << n
end
nodes.sort! { |a, b| a[:hostname] <=> b[:hostname] }
print nodes
template "/etc/ssh/userkeys/nagios" do
  source "authorized_keys.erb"
  mode 0400
  variables(
    :nodes => nodes
  )
end
cookbook_file "/usr/local/lib/nagios/nagios-passive-check-result" do
  mode 0555
end
user "nagios" do
  action :manage
  shell "/bin/sh"
end
To submit a check, hosts do:
printf "$HOSTNAME\t$SERVICE_NAME\t$RET\t$TEXT\n" ssh -i /etc/ssh/ssh_host_rsa_key -o BatchMode=yes -o StrictHostKeyChecking=no -T nagios@$NAGIOS_SERVER
apt-get update failed two days ago) and they're not aggregated.
I think we need a system that at its core has level and edge triggers
and some way of doing flap detection. A level trigger means "tell me
if a disk is full right now". An edge trigger means "tell me if the
checksums have changed, even if they now look OK". Flap detection
means "tell me if the nightly apt-get update fails more often than
once a week".
It would be useful if it could extrapolate some notifications too, so
it could tell me "your disk is going to be full in $period unless you
add more space".
The system needs to be able to take in input in a variety of formats:
syslog, unstructured output from cron scripts (including their exit
codes), snmp, nagios notifications, sockets and fifos and so on.
Based on those inputs and any correlations it can pull out of it, it
should try to reason about what's happening on the system. If the
conclusion there is "something is broken", it should see if it's
something that it can reasonably fix by itself. If so, fix it and
record it (so it can be used for notification if appropriate: I want
to be told if you restart apache every two minutes). If it can't fix
it, notify the admin.
It should also group similar messages so a single important message
doesn't drown in a million unimportant ones. Ideally, this aggregation
should work across hosts. It should be possible to escalate
notifications if they're not handled within some time period.
I'm not aware of such a tool. Maybe one could be rigged together by
careful application of logstash, nagios, munin/ganglia/something and
sentry. If anybody knows of such a tool, or is working on one, please
let me know.
Something else
I have been involved in Debian for a very long time now, first as a
Debian Maintainer and then as a Debian Developer, and I never thought
much about the work the Debian system administrators do. I didn't know
how dak worked, how Wanna-build handles the buildds, or what exactly
the ftpmasters have to do. By not knowing, I mean I knew the very
basic theory of what these people do. But this is something different
from experiencing how much work setting up and maintaining the
infrastructure is, and what an awesome job the people do for Debian,
keeping it all up and running and secure! Kudos for that, to all
people maintaining Debian infrastructure! You rock! (And I will never
ever complain about slow buildds or packages which stay in NEW for too
long.)
The setup involves a shared support user holding an SSH key: people
sudo to the support user and then run SSH. This has bugged me a fair
bit, since there was nothing stopping a person from making a copy of
the key onto their laptop, except policy. Thanks to a tip, I got
around to implementing a fix and figured writing up how to do it would
be useful.
First, you need a directory readable by root only; I use
/var/local/support-ssh here. The other bits you need are a small sudo
snippet and a profile.d script.
My sudo snippet looks like:
Defaults!/usr/bin/ssh-add env_keep += "SSH_AUTH_SOCK"
%support ALL=(root) NOPASSWD: /usr/bin/ssh-add /var/local/support-ssh/id_rsa
Everybody in group support can run ssh-add as root.
The profile.d script goes in /etc/profile.d/support.sh and looks like:
if [ -n "$(groups grep -E "(^ )support( $)")" ]; then
export SSH_AUTH_ENV="$HOME/.ssh/agent-env"
if [ -f "$SSH_AUTH_ENV" ]; then
. "$SSH_AUTH_ENV"
fi
ssh-add -l >/dev/null 2>&1
if [ $? = 2 ]; then
mkdir -p "$HOME/.ssh"
rm -f "$SSH_AUTH_ENV"
ssh-agent > "$SSH_AUTH_ENV"
. "$SSH_AUTH_ENV"
fi
sudo ssh-add /var/local/support-ssh/id_rsa
fi
The key is unavailable for the user in question because ssh-add is
sgid and so runs with group ssh, and the process is only debuggable
for root. The only thing missing is there's no way to have the agent
prompt to use a key, and I would like it to die or at least unload
keys when the last session for a user is closed, but that doesn't seem
trivial to do.
/var/lib/sbuild/build, which is exposed as /build once sbuild runs.
The other hard part was getting Varnish itself built. sbuild exposes
two hooks that could work: a pre-build hook and a chroot-setup hook.
Neither worked: pre-build is called before the chroot is set up, so we
can't build Varnish, and chroot-setup is run before the
build-dependencies are installed and runs as the user invoking sbuild,
so it can't install packages.
Sparc32 and similar architectures use the linux32 tool to set the
personality before building packages. I ended up abusing this: I set
HOME to a temporary directory where I create a .sbuildrc which sets
$build_env_cmnd to a script which in turn unpacks the Varnish source,
builds it and then chains to dpkg-buildpackage. Of course, the
build-dependencies for modules don't include all the
build-dependencies for Varnish itself, so I have to extract those from
the Varnish source package too.
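The generated .sbuildrc then only needs to point sbuild at the
wrapper; a sketch, with the wrapper path made up for illustration
(.sbuildrc is Perl):

# run our wrapper instead of dpkg-buildpackage directly; it unpacks
# Varnish, builds it, then chains to dpkg-buildpackage
$build_env_cmnd = "/tmp/varnish-build/wrapper";
1;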
No source available at this point, mostly because it's beyond ugly.
I'll see if I can get it cleaned up.
master in the doc/ subtree) or trivial ("admin can do anything").
Gitano is written by Daniel Silverstone, and I'd like to thank him
both for writing it and for holding my hand as I went stumbling
through my initial gitano setup.
Getting started with Gitano can be a bit tricky, as it's not yet
packaged and fairly undocumented. Until it is packaged, it's
install-from-source time. You need luxio, lace, supple, clod, gall and
gitano itself. luxio needs a make install LOCAL=1; the others will be
installed to /usr/local with just make install.
Once that is installed, create a user to hold the instance. I've named
mine git, but you're free to name it whatever you would like. As that
user, run gitano-setup and answer the prompts. I'll use
git.example.com as the host name and john as the user I'm setting this
up for.
To create users, run ssh git@git.example.com user add john
john@example.com "John Doe", then add their SSH key with ssh
git@git.example.com as john sshkey add workstation <
/tmp/john_id_rsa.pub.
To create a repository, run ssh git@git.example.com repo create
myrepo. Out of the box, this only allows the owner (typically "admin",
unless overridden) to do anything with it. To change ACLs, you'll want
to grab the refs/gitano/admin branch. This lives outside of the space
git usually uses for branches, so you can't just check it out. The
easiest way to check it out is to use git-admin-clone. Run it as
git-admin-clone git@git.example.com:myrepo ~/myrepo-admin and then
edit in ~/myrepo-admin. Use git to add, commit and push as normal from
there.
To change ACLs for a given repo, you'll want to edit the
rules/main.lace file. A real-world example can be found in the NetSurf
repository, and the lace syntax documentation might be useful. A lace
file consists of four types of lines, among them:

define name conditions
allow "reason" definition [definition ...]
deny "reason" definition [definition ...]

A condition can be, for instance, ref refs/heads/master to match
updates to the master branch. To create groupings, you can use the
anyof or allof verbs in a definition. Allows and denials are checked
against all the definitions listed, and if all of them match, the
appropriate action is taken.
Pay some attention to what conditions you group together, since a
basic operation (is_basic_op, aka op_read and op_write) happens before
git is even involved and you don't have a tree at that point, so rules
like:

define is_master ref refs/heads/master
allow "Devs can push" op_is_basic is_master

simply won't work. You'll want to use a group and check on that for
basic operations and then have a separate rule to restrict refs.
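Something along these lines should work instead (a sketch only: the
devs group is an assumption, and the exact predicate names should be
checked against the lace documentation mentioned above):

define is_dev group devs
define is_master ref refs/heads/master
allow "Devs can do basic operations" op_is_basic is_dev
allow "Devs can push to master" is_dev is_master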