DEB_CHANGELOG_DATETIME variable in the pom.properties file embedded in the jar files.
dh_installexamples reproducible. Patch by Niko Tyni.
SOURCE_DATE_EPOCH in libxslt upstream.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: antlr3/3.5.2-3, clusterssh, cme, libdatetime-set-perl, libgraphviz-perl, liblingua-translit-perl, libparse-cpan-packages-perl, libsgmls-perl, license-reconcile, maven-bundle-plugin/2.4.0-2, siggen, stunnel4, systemd, x11proto-kb. The following packages became reproducible after getting fixed:
armhf node using a Raspberry Pi 2. It should soon be added to the Jenkins infrastructure.

diffoscope development

diffoscope version 42 was released on November 20th. It adds a missing dependency on python3-pkg-resources and, to prevent similar regressions, another autopkgtest to ensure that the command line is functional when Recommends are not installed. Two more encoding-related problems have been fixed (#804061, #805418). A missing Build-Depends on binutils-multiarch has also been added to make the test suite pass on architectures other than amd64.

Package reviews

180 reviews have been removed, 268 added and 59 updated this week. 70 new fail-to-build-from-source bugs have been reported by Chris West, Chris Lamb and Niko Tyni. New issue this week: randomness_in_ocaml_preprocessed_files.

Misc.

Jim MacArthur started to work on a system to rebuild and compare packages built on reproducible.debian.net using .buildinfo and snapshot.debian.org. On December 1-3rd 2015, a meeting of about 40 participants from 18 different free software projects will be held in Athens, Greece with the intent of improving the collaboration between projects, helping new efforts to be started, and brainstorming on end-user aspects of reproducible builds.
.dts into an unpacked kernel tree and then running make dtbs. Once this works, you need to compile the w1-gpio kernel module, since Debian hasn't yet enabled that. Run make menuconfig, find it under "Device drivers", "1-wire", "1-wire bus master", and build it as a module. I then had to build a full kernel to get the symversions right, then build the modules. I think there is, or should be, an easier way to do that, but as I cross-built it on a fast AMD64 machine, I didn't investigate too much. Insmod-ing w1-gpio then works, but for me it failed to detect any sensors. Reading the data sheet, it looked like a pull-up resistor on the data line was needed. I had enabled the internal pull-up, but apparently that wasn't enough, so I added a 4.7 kOhm resistor between pin 3 (VDD_3V3) on P9 and the GPIO_45 pin on P8. With that in place, my sensors showed up in /sys/bus/w1/devices and you can read the values using cat. In my case, I wanted the data to go into collectd and then to graphite. I first tried using an Exec plugin, but never got it to work properly. Using a Python plugin worked much better and my graphite installation is now showing me temperatures. Now I just need to add more probes around the house. The most useful references were
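The plugin itself isn't shown in the post; purely as illustration, here is a minimal sketch of what a collectd Python plugin for these sensors might look like. It assumes the standard w1 sysfs layout (sensor directories starting with 28-, each with a w1_slave file containing a CRC line and a "t=" line), and the plugin and instance names are invented:

# Sketch of a collectd Python plugin for w1 temperature sensors; not the
# author's plugin. Assumes the standard w1 sysfs layout.
import glob

import collectd

def read_w1(data=None):
    for path in glob.glob("/sys/bus/w1/devices/28-*/w1_slave"):
        with open(path) as f:
            lines = f.read().splitlines()
        if len(lines) != 2 or not lines[0].endswith("YES"):
            continue  # CRC check failed; skip this reading
        # The second line ends with e.g. "t=21625", millidegrees Celsius.
        temp = int(lines[1].split("t=")[1]) / 1000.0
        val = collectd.Values(plugin="w1", type="temperature")
        val.plugin_instance = path.split("/")[-2]  # the sensor id, e.g. 28-...
        val.dispatch(values=[temp])

collectd.register_read(read_w1)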
I'm hoping that we can all take a few minutes to gain empathy for those who disagree with us. Then I'm hoping we can use that understanding to reassure them that they are valued and respected and their concerns considered even when we end up strongly disagreeing with them or valuing different things.

I'd be lying if I said I didn't ever feel the urge to demonise my opponents in discussions. That they're worse, as people, than I am. However, it is imperative to never give in to this, since doing so will diminish us as humans and make the entire project poorer. Civil disagreements with reasonable discussions lead to better technical outcomes, happier humans and a healthier project.
May 3 04:14:38 restaurant apache: 126.96.36.199, 127.0.0.1 - - [03/May/2014:04:14:38 +0000] "GET /firstname.lastname@example.org&fullname=&pw=123456789&pw-conf=123456789&language=en&digest=0&email-button=Subscribe HTTP/1.1" 403 313 "http://spam/index2.html" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"

As you can see, the attackers were sending all the relevant details needed for the subscription to go forward (specifically the full name, the email, the digest option and the password for the target list). At first we tried to stop the spam by banning the subnets where the requests were coming from; then, when it was obvious that more subnets were being used and manual intervention was needed, we tried banning their User-Agents. Again no luck: the spammers were smart enough to change it every now and then, making it match an existing browser User-Agent (with a good chance of producing a lot of false positives). Now you might be wondering why such an attack caused a lot of issues and pain. Well, the attackers made use of addresses found around the web for their malicious subscription requests. That means we received a lot of emails from people that had never heard about the GNOME mailing lists but received around 10k subscription requests that were seemingly being sent by themselves. It was obvious we needed to look at a backup solution, and luckily someone on our support channel suggested that the freedesktop.org sysadmins recently added CAPTCHA support to Mailman. I'm now sharing the patch and providing a few more details on how to properly set it up on either DEB or RPM based distributions. Credits for the patch should be given to Debian Developer Tollef Fog Heen, who has been so kind to share it with us. Before patching your installation make sure to install the python-recaptcha package on DEB based distributions (tested on Debian with Mailman 2.1.15) and python-recaptcha-client on RPM based distributions (I personally tested it against Mailman release 2.1.15, RHEL 6).

The Patch
diff --git a/Mailman/Cgi/listinfo.py b/Mailman/Cgi/listinfo.py
index 4a54517..d6417ca 100644
--- a/Mailman/Cgi/listinfo.py
+++ b/Mailman/Cgi/listinfo.py
@@ -22,6 +22,7 @@
 import os
 import cgi
+import sys

 from Mailman import mm_cfg
 from Mailman import Utils
@@ -30,6 +31,8 @@ from Mailman import Errors
 from Mailman import i18n
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+sys.path.append("/usr/share/pyshared")
+from recaptcha.client import captcha

 # Set up i18n
 _ = i18n._
@@ -200,6 +203,9 @@ def list_listinfo(mlist, lang):
     replacements['<mm-lang-form-start>'] = mlist.FormatFormStart('listinfo')
     replacements['<mm-fullname-box>'] = mlist.FormatBox('fullname', size=30)

+    # Captcha
+    replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=False)
+
     # Do the expansion.
     doc.AddItem(mlist.ParseTags('listinfo.html', replacements, lang))
     print doc.Format()
diff --git a/Mailman/Cgi/subscribe.py b/Mailman/Cgi/subscribe.py
index 7b0b0e4..c1c7b8c 100644
--- a/Mailman/Cgi/subscribe.py
+++ b/Mailman/Cgi/subscribe.py
@@ -21,6 +21,8 @@ import sys
 import os
 import cgi
 import signal
+sys.path.append("/usr/share/pyshared")
+from recaptcha.client import captcha

 from Mailman import mm_cfg
 from Mailman import Utils
@@ -132,6 +130,17 @@ def process_form(mlist, doc, cgidata, lang):
     remote = os.environ.get('REMOTE_HOST',
                             os.environ.get('REMOTE_ADDR',
                                            'unidentified origin'))
+
+    # recaptcha
+    captcha_response = captcha.submit(
+        cgidata.getvalue('recaptcha_challenge_field', ""),
+        cgidata.getvalue('recaptcha_response_field', ""),
+        mm_cfg.RECAPTCHA_PRIVATE_KEY,
+        remote,
+    )
+    if not captcha_response.is_valid:
+        results.append(_('Invalid captcha'))
+
     # Was an attempt made to subscribe the list to itself?
     if email == mlist.GetListEmail():
         syslog('mischief', 'Attempt to self subscribe %s: %s', email, remote)

Additional setup

Then on the /var/lib/mailman/templates/en/listinfo.html template (right below <mm-digest-question-end>) add the <mm-recaptcha-javascript> tag.
If your Mailman instance is served over SSL, change

replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=False)

to

replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=True)

EPEL 6 related details

A few additional details should be provided in case you are setting this up against a RHEL 6 host (or any other machine using the EPEL 6 package python-recaptcha-client-1.0.5-3.1.el6). Importing the recaptcha.client module will fail for some strange reason; importing it correctly can be done this way:
ln -s /usr/lib/python2.6/site-packages/recaptcha/client /usr/lib/mailman/pythonlib/recaptcha

and then fix the imports, also making sure sys.path.append("/usr/share/pyshared") is not there:
from recaptcha import captcha

That's not all: the package still won't work as expected, given that the API_SSL_SERVER, API_SERVER and VERIFY_SERVER variables in captcha.py are outdated (filed as bug #1093855). Substitute them with the following ones:
API_SSL_SERVER="https://www.google.com/recaptcha/api"
API_SERVER="http://www.google.com/recaptcha/api"
VERIFY_SERVER="www.google.com"

And then on line 76:
url = "https://%s/recaptcha/api/verify" % VERIFY_SERVER,That should be all! Enjoy!
resolv.conf so you use the DHCP-provided resolver stops the redirect loop, and you can then log in. Afterwards, you're free to switch back to using your own local resolver.
#! /usr/bin/python
# -*- coding: utf-8 -*-

import time
import sys

# format is:
# [TIMESTAMP] COMMAND_NAME;argument1;argument2;...;argumentN
#
# For passive checks, we want PROCESS_SERVICE_CHECK_RESULT with the
# format:
#
# PROCESS_SERVICE_CHECK_RESULT;<host_name>;<service_description>;<return_code>;<plugin_output>
#
# return code is 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
#
# Read lines from stdin with the format:
# $HOSTNAME\t$SERVICE_NAME\t$RETURN_CODE\t$TEXT_OUTPUT

if len(sys.argv) != 2:
    print "Usage: {0} HOSTNAME".format(sys.argv[0])
    sys.exit(1)

HOSTNAME = sys.argv[1]

timestamp = int(time.time())
nagios_cmd = file("/var/lib/nagios3/rw/nagios.cmd", "w")
for line in sys.stdin:
    (_, service, return_code, text) = line.split("\t", 3)
    nagios_cmd.write(u"[{timestamp}] PROCESS_SERVICE_CHECK_RESULT;{hostname};{service};{return_code};{text}\n".format(
        timestamp=timestamp,
        hostname=HOSTNAME,
        service=service,
        return_code=return_code,
        text=text))

The reason for the hostname in the line (even though it's overridden) is to be compatible with
send_nsca's input format. Machines submit check results over SSH using its excellent ForceCommand capabilities; the Chef template for the authorized_keys file looks like:
<% for host in @nodes %>
command="/usr/local/lib/nagios/nagios-passive-check-result <%= host[:hostname] %>",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa <%= host[:keys][:ssh][:host_rsa_public] %> <%= host[:hostname] %>
<% end %>

The actual chef recipe looks like:
nodes = []
search(:node, "*:*") do |n|
  # Ignore not-yet-configured nodes
  next unless n[:hostname]
  next unless n[:nagios]
  next if n[:nagios].has_key?(:ignore)
  nodes << n
end
nodes.sort! { |a, b| a[:hostname] <=> b[:hostname] }
print nodes

template "/etc/ssh/userkeys/nagios" do
  source "authorized_keys.erb"
  mode 0400
  variables(:nodes => nodes)
end

cookbook_file "/usr/local/lib/nagios/nagios-passive-check-result" do
  mode 0555
end

user "nagios" do
  action :manage
  shell "/bin/sh"
end

To submit a check, hosts do:
printf "$HOSTNAME\t$SERVICE_NAME\t$RET\t$TEXT\n" ssh -i /etc/ssh/ssh_host_rsa_key -o BatchMode=yes -o StrictHostKeyChecking=no -T nagios@$NAGIOS_SERVER
apt-get update failed two days ago) and they're not aggregated. I think we need a system that at its core has level and edge triggers and some way of doing flap detection. A level trigger means "tell me if a disk is full right now". An edge trigger means "tell me if the checksums have changed, even if they now look ok". Flap detection means "tell me if the nightly apt-get update fails more often than once a week". It would be useful if it could extrapolate some notifications too, so it could tell me "your disk is going to be full in $period unless you add more space".

The system needs to be able to take input in a variety of formats: syslog, unstructured output from cron scripts (including their exit codes), SNMP, Nagios notifications, sockets and FIFOs and so on. Based on those inputs and any correlations it can pull out of them, it should try to reason about what's happening on the system. If the conclusion is "something is broken", it should see if it's something it can reasonably fix by itself. If so, fix it and record it (so it can be used for notification if appropriate: I want to be told if you restart apache every two minutes). If it can't fix it, notify the admin. It should also group similar messages so a single important message doesn't drown in a million unimportant ones. Ideally, this should be cross-host aggregation. It should be possible to escalate notifications if they're not handled within some time period. I'm not aware of such a tool. Maybe one could be rigged together by careful application of logstash, nagios, munin/ganglia/something and sentry. If anybody knows of such a tool, let me know, or if you're working on one, also please let me know.
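As a toy illustration of that trigger vocabulary (this is not an existing tool; all names and thresholds here are invented):

# Toy sketch of level/edge/flap triggers; not an existing monitoring tool.
from collections import deque

class Trigger(object):
    """Evaluate one check's results as level, edge and flap events."""

    def __init__(self, window=7, max_failures=1):
        self.history = deque(maxlen=window)  # last `window` boolean results
        self.max_failures = max_failures

    def feed(self, ok):
        previous = self.history[-1] if self.history else None
        self.history.append(ok)
        events = []
        if not ok:
            events.append("level")   # bad right now: "the disk is full"
        if previous is not None and ok != previous:
            events.append("edge")    # state changed, even if it now looks ok
        if list(self.history).count(False) > self.max_failures:
            events.append("flap")    # failing more often than allowed
        return events

# e.g. a nightly apt-get update check that may fail at most once a week:
apt_update = Trigger(window=7, max_failures=1)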
Something else

I have been involved in Debian for a very long time now, first as Debian Maintainer and then as Debian Developer, and I never thought much about the work the Debian system administrators do. I didn't know how dak worked, or how Wanna-build handles the buildds, or what exactly the ftpmasters have to do. By not knowing, I mean I knew the very basic theory of what these people do. But that is something different from experiencing how much work setting up and maintaining the infrastructure is, and what an awesome job the people do for Debian, keeping it all up and running and secure! Kudos for that, to all people maintaining Debian infrastructure! You rock! (And I will never ever complain about slow buildds or packages which stay in NEW for too long.)
sudo to the support user and then run SSH. This has bugged me a fair bit, since there was nothing stopping a person from making a copy of the key onto their laptop, except policy. Thanks to a tip, I got around to implementing this and figured writing up how to do it would be useful. First, you need a directory readable by root only; I use /var/local/support-ssh here. The other bits you need are a small sudo snippet and a profile.d script. My sudo snippet looks like:
Defaults!/usr/bin/ssh-add env_keep += "SSH_AUTH_SOCK"
%support ALL=(root) NOPASSWD: /usr/bin/ssh-add /var/local/support-ssh/id_rsa

Everybody in group support can run ssh-add as root. The profile.d script lives in /etc/profile.d/support.sh and looks like:

if [ -n "$(groups | grep -E "(^| )support( |$)")" ]; then
    export SSH_AUTH_ENV="$HOME/.ssh/agent-env"
    if [ -f "$SSH_AUTH_ENV" ]; then
        . "$SSH_AUTH_ENV"
    fi
    ssh-add -l >/dev/null 2>&1
    if [ $? = 2 ]; then
        mkdir -p "$HOME/.ssh"
        rm -f "$SSH_AUTH_ENV"
        ssh-agent > "$SSH_AUTH_ENV"
        . "$SSH_AUTH_ENV"
    fi
    sudo ssh-add /var/local/support-ssh/id_rsa
fi

The key is unavailable for the user in question because ssh-add is sgid and so runs with group ssh, and the process is only debuggable for root. The only thing missing is there's no way to have the agent prompt before using a key, and I would like it to die or at least unload keys when the last session for a user is closed, but that doesn't seem trivial to do.
/var/lib/sbuild/build, which is exposed inside the chroot where sbuild runs. The other hard part was getting Varnish itself built.
sbuild exposes two hooks that could work: a pre-build hook and a chroot-setup hook. Neither worked: pre-build is called before the chroot is set up, so we can't build Varnish there. Chroot-setup is run before the build-dependencies are installed, and it runs as the user invoking sbuild, so it can't install packages. Sparc32 and similar architectures use the linux32 tool to set the personality before building packages. I ended up abusing this, so I set HOME to a temporary directory where I create a .sbuildrc setting $build_env_cmnd to a script which in turn unpacks the Varnish source, builds it and then chains to dpkg-buildpackage. Of course, the build-dependencies for modules don't include all the build-dependencies for Varnish itself, so I have to extract those from the Varnish source package too. No source available at this point, mostly because it's beyond ugly. I'll see if I can get it cleaned up.
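Purely as illustration, since the author's script isn't published: a hypothetical sketch of what such a $build_env_cmnd wrapper could look like. The .dsc path is invented, and it assumes sbuild prepends $build_env_cmnd to the dpkg-buildpackage invocation, linux32-style, so the real command arrives in argv:

#!/usr/bin/python
# Hypothetical $build_env_cmnd wrapper; not the author's script.
import os
import subprocess
import sys
import tempfile

workdir = tempfile.mkdtemp(prefix="varnish-build-")

# Unpack the Varnish source (path is an assumption) and build it, so the
# module's build can find the Varnish headers and libraries.
subprocess.check_call(["dpkg-source", "-x", "/tmp/varnish.dsc"], cwd=workdir)
srcdir = os.path.join(workdir, os.listdir(workdir)[0])
subprocess.check_call(["dpkg-buildpackage", "-us", "-uc", "-b"], cwd=srcdir)

# Chain to the command sbuild actually wanted to run.
os.execvp(sys.argv[1], sys.argv[1:])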
doc/subtree) or trivial ("admin can do anything"). Gitano is written by Daniel Silverstone, and I'd like to thank him both for writing it and for holding my hand as I went stumbling through my initial gitano setup. Getting started with Gitano can be a bit tricky, as it's not yet packaged and fairly undocumented. Until it is packaged, it's install-from-source time. You need luxio, lace, supple, clod, gall and gitano itself. Luxio is installed with make install LOCAL=1; the others will be installed to the default locations with make install.

Once that is installed, create a user to hold the instance. I've named mine git, but you're free to name it whatever you would like. As that user, run gitano-setup and answer the prompts. I'll use git.example.com as the host name and john as the user I'm setting this up for.

To create users, run ssh git@git.example.com user add john john@example.com John Doe, then add their SSH key with ssh git@git.example.com as john sshkey add workstation < /tmp/john_id_rsa.pub. To create a repository, run ssh git@git.example.com repo create myrepo. Out of the box, this only allows the owner (typically "admin", unless overridden) to do anything with it. To change ACLs, you'll want to grab the refs/gitano/admin branch. This lives outside of the space git usually uses for branches, so you can't just check it out. The easiest way to check it out is to use git-admin-clone. Run it as git-admin-clone git@git.example.com:myrepo ~/myrepo-admin and then edit in ~/myrepo-admin. Use git to add, commit and push as normal from there.

To change ACLs for a given repo, you'll want to edit the rules/main.lace file. A real-world example can be found in the NetSurf repository, and the lace syntax documentation might be useful. A lace file consists of four types of lines:
define name conditions
allow "reason" definition [definition ]
deny "reason" definition [definition ]
ref refs/heads/master to match updates to the master branch. To create groupings, you can use the anyof and allof verbs in a definition. Allows and denials are checked against all the definitions listed, and if all of them match, the appropriate action is taken. Pay some attention to what conditions you group together, since a basic operation (op_write) happens before git is even involved and you don't have a tree at that point, so rules like:
define is_master ref refs/heads/master
allow "Devs can push" op_is_basic is_master

simply won't work. You'll want to use a group and check on that for basic operations, and then have a separate rule to restrict refs.
Apparently, this is not legal, since we're trying to define v_rc as a macro with no body. It is, however, not possible to directly define it as an empty string which can later be tested on; you have to do something like:
%define v_rc
%define vd_rc %{?v_rc:-%{?v_rc}}
Now, this doesn't work correctly either.
%define v_rc %{nil}
%define vd_rc %{?v_rc:-%{?v_rc}}
%{?macro} tests if macro is defined, not whether it's an empty string, so instead of two lines, we have to write:
%define v_rc %{nil}
%if 0%{?v_rc} != 0
%define vd_rc %{?v_rc:-%{?v_rc}}
%endif
The 0%{?v_rc} != 0 workaround is there so that we don't accidentally end up with == 0, which would be a syntax error. I think having four lines like that is pretty ugly, so I looked for a workaround and figured that, ok, I'll just rewrite every use of %{?v_rc:-%{?v_rc}}. There are only a couple, so the damage is limited. Also, I'd then just comment out the v_rc definition, since that makes it clear what you should uncomment to have a release candidate version. In my naivety, I tried:
# %define v_rc ""
# is used as a comment character in spec files, but apparently not for defines. The define was still processed and the build process stopped pretty quickly. Luckily, doing # % define v_rc "" seems to work fine and is not processed. I have no idea how people put up with this or if I'm doing something very wrong. Feel free to point me at a better way of doing this, of course.
https://sugar/service/v2/rest.php. Usually a POST, but sometimes a GET; it's not documented what to use where. The POST parameters we send when logging in are:
method=>"login"
input_type=>"JSON"
response_type=>"JSON"
rest_data=>json($params)

$params is a hash as follows:

user_auth => {
    user_name => $USERNAME,
    password => $PW,
    version => "1.2",
},
application => "foo",

Nothing seems to actually care about the value of
application, nor about the user_auth.version value. The password is the md5 of the actual password, hex encoded. I'm not sure why it is, as this adds absolutely no security, but it is. This is also not properly documented. This gives us a JSON object back with a somewhat haphazard selection of attributes (reformatted here for readability):
{
  "id": "<hex session id>",
  "module_name": "Users",
  "name_value_list": {
    "user_id": {"name": "user_id", "value": "1"},
    "user_name": {"name": "user_name", "value": "<username>"},
    "user_language": {"name": "user_language", "value": "en_us"},
    "user_currency_id": {"name": "user_currency_id", "value": "-99"},
    "user_currency_name": {"name": "user_currency_name", "value": "Euro"}
  }
}

What is the
module_name? No real idea. In general, when you get back a module_name field, it tells you that the id is an object that exists in the context of the given module. Not here, since the session id is not a user. The worst part here is the name_value_list concept, which is used all over the REST interface. First, it's not a list, it's a hash. Secondly, I have no idea what would be wrong with just using keys directly in the top level object, so the object would have looked somewhat like:
{
  "id": "<hex session id>",
  "user_id": 1,
  "user_name": "<username>",
  "user_language": "en_us",
  "user_currency_id": "-99",
  "user_currency_name": "Euro"
}

Some people might argue that since you can have custom field names, this can cause clashes. Except it can't, since they're all suffixed with
_c. So we're now logged in and can fetch all opportunities. This we do by posting:
method=>"get_entry_list", input_type=>"JSON", response_type=>"JSON", rest_data=>to_json([ $sid, $module, $where, "", $next, $fields, $links, 1000 ])
$sid is our session id from the login. $module is "Opportunities" in our case. $where is an SQL fragment, in our case opportunities_cstm.rt_id_c IS NOT NULL. Yes, that's right: an SQL fragment, right there, and you have to know that you'll join the opportunities tables because we are using a custom field. I find this completely crazy.

$next starts out at 0 and we're limited to 1000 entries at a time. There is, apparently, no way to say "just give me all you have".

$fields is an array, in our case consisting of rt_status_c. To find out the field names, look at the database schema or poke around in the SugarCRM studio.

$links is to link records together. I still haven't been able to make this work properly and just do multiple queries. We get back something like:
"result_count" : 16, "relationship_list" : , "entry_list" : [ "name_value_list" : "rt_status_c" : "value" : "resolved", "name" : "rt_status_c" , [ ] , "module_name" : "Opportunities", "id" : "<entry_uuid>" , [ ] ], "next_offset" : 16
entry_list actually is a list here, which is good and all, but there's still the annoying name_value_list concept. Last, we want to update the record in Sugar. To do this we do:
method=>"set_entry", input_type=>"JSON", response_type=>"JSON", rest_data=>to_json([ $sid, "Opportunities", $fields ])
$fields is not a name_value_list, but instead is:

{
  "rt_status_c": "resolved",
  "id": "<entry_uuid>"
}

Why this works while my attempts at using a proper name_value_list didn't? I have no idea. I think that pretty much sums it up. I'm sure there are other problems in there (such as the over 100 lines of support code for the about 20 lines of actual code that does useful work), though.
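To tie the pieces together, here is a minimal Python sketch of the whole dance described above: login with the hex md5 password, get_entry_list, a helper that flattens the name_value_list structure, and set_entry. The requests library is assumed, and the URL, credentials and field names are the ones used above (credentials are placeholders):

# Sketch of the SugarCRM v2 REST flow described in this post; assumes the
# requests library and Python 2 (to match the rest of the post).
import hashlib
import json

import requests

URL = "https://sugar/service/v2/rest.php"

def call(method, rest_data):
    # Every call is the same POST with a JSON-encoded rest_data payload.
    resp = requests.post(URL, data={
        "method": method,
        "input_type": "JSON",
        "response_type": "JSON",
        "rest_data": json.dumps(rest_data),
    })
    return resp.json()

def flatten(name_value_list):
    # name_value_list is really a hash of {"name": ..., "value": ...} pairs.
    return dict((key, pair["value"]) for key, pair in name_value_list.items())

# Log in; the password is the hex md5 of the real password, as noted above.
session = call("login", {
    "user_auth": {
        "user_name": "john",
        "password": hashlib.md5("secret").hexdigest(),
        "version": "1.2",
    },
    "application": "foo",
})
sid = session["id"]

# Fetch up to 1000 opportunities that have a non-NULL rt_id_c.
result = call("get_entry_list", [
    sid, "Opportunities", "opportunities_cstm.rt_id_c IS NOT NULL",
    "", 0, ["rt_status_c"], [], 1000,
])
for entry in result["entry_list"]:
    print flatten(entry["name_value_list"])

# Update a record: a flat hash with the entry id, not a name_value_list.
call("set_entry", [sid, "Opportunities",
                   {"id": "<entry_uuid>", "rt_status_c": "resolved"}])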