I just went through a debugging exercise that was so ridiculous, I just had to
write it up. Some of this probably should go into a bug report instead of a
rant, but I'm tired. And clearly I don't care anymore.
OK, so I'm doing computer vision work. OpenCV has been providing basic functions
in this area, so I have been using them for a while. Just for really, really
basic stuff, like projection. The C API was kinda weird, and their error
handling is a bit ridiculous (if you give it arguments it doesn't like, it
asserts!), but it has been working fine for a while.
At some point (around OpenCV 3.0) somebody over there decided that they didn't
like their C API, and that this was now a C++ library. Except the docs still
documented the C API, and the website said it supported C, and the code wasn't
actually removed. They just kinda stopped testing it and thinking about it. So
it would mostly continue to work, except some poor saps would see weird
failures; like this and this, for instance. OpenCV 3.2 was the last version
where it was mostly possible to keep using the old C code, even when compiling
without optimizations. So I was doing that for years.
So now, in 2020, Debian is finally shipping a version of OpenCV that
definitively does not work with the old code, so I had to do something. Over
time I stopped using everything about OpenCV, except a few cvProjectPoints2()
calls. So I decided to just write a small C++ shim to call the new version of
that function, expose that with extern "C" to the rest of my world, and I'd be
done. And normally I would be, but this is OpenCV we're talking about. I wrote
the shim, and it didn't work. The code built and ran, but the results were
wrong. After some pointless debugging, I boiled the problem down to this test
Well that's no good. The answer is wrong, but it looks like it didn't even write
anything into the output array. Since this is supposed to be a thin shim to C
code, I want this thing to be filling in C arrays, which is what I'm doing here:
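The shape of that shim, roughly (all names here are hypothetical; the real one wraps these arrays in cv::Mat headers and calls cv::projectPoints(), which I elide so this sketch stands alone):

```cpp
#include <vector>

// hypothetical shim: plain-C interface, C++ machinery inside.
// the real version hands the arrays to OpenCV; a dummy divide-by-z
// projection stands in for that call here
extern "C" void project_shim(double*       q,  // output: 2*N doubles
                             const double* p,  // input:  3*N doubles
                             int           N)
{
    for (int i = 0; i < N; i++)
    {
        q[2*i + 0] = p[3*i + 0] / p[3*i + 2];
        q[2*i + 1] = p[3*i + 1] / p[3*i + 2];
    }
}
```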
This is how the C API has worked forever, and their C++ API works the same way,
I thought. Nothing barfed, not at build time, or run time. Fine. So I went to
figure this out. In the true spirit of C++, the new API is inscrutable. I'm
passing in cv::Mat, but the API wants cv::InputArray for some arguments and
cv::OutputArray for others. Clearly cv::Mat can be coerced into either of
those types (and that's what you're supposed to do), but the details are not
meant to be understood. You can read the snazzy C++-style documentation.
Clicking on "OutputArray" in the doxygen gets you here. Then I guess you're
supposed to click on "_OutputArray", and you get here. Understand what's going
on now? Me neither.
Stepping through the code revealed the problem. cv::projectPoints() looks like
I.e. they're allocating a new data buffer for the output, and giving it back to
me via the OutputArray object. This object already had a buffer, and that's
where I was expecting the output to go. Instead it went to the brand-new buffer
I didn't want. Issues:
- The OutputArray object knows it already has a buffer, and they could just
  use it instead of allocating a new one
- If for some reason my buffer smells bad, they could complain, to tell me
  they're ignoring it, and to save me the trouble of debugging and then
  bitching about it on the internet
- I think dynamic memory allocation smells bad
- Doing it this way means the new function isn't a drop-in replacement for
  the old one
Well that's just super. I can call the C++ function, copy the data into the
place it's supposed to go to, and then deallocate the extra buffer. Or I can
pull out the meat of the function I want into my project, and then I can drop
the OpenCV dependency entirely. Clearly that's the way to go.
So I go poking back into their code to grab what I need, and here's what I see:
Looks familiar? It should. Because this is the original C-API function they
replaced. So in their quest to move to C++, they left the original code intact,
C API and everything, un-exposed it so you couldn't call it anymore, and made
a new, shitty C++ wrapper for people to call instead. CvMat is still there. I
have no words.
Yes, this is a massive library, and maybe other parts of it indeed did make some
sort of non-token transition, but this thing is ridiculous. In the end, here's
the function I ended up with (licensed as OpenCV; see the comment)
// The implementation of project_opencv is based on opencv. The sources have
// been heavily modified, but the opencv logic remains. This function is a
// cut-down cvProjectPoints2Internal() to keep only the functionality I want and
// to use my interfaces. Putting this here allows me to drop the C dependency on
// opencv. Which is a good thing, since opencv dropped their C API
//
// from opencv-4.2.0+dfsg/modules/calib3d/src/calibration.cpp
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out
// of the use of this software, even if advised of the possibility of such damage.

typedef union
{
    struct { double x, y; };
    double xy[2];
} point2_t;

typedef union
{
    struct { double x, y, z; };
    double xyz[3];
} point3_t;

void project_opencv( // outputs
                     point2_t* q,
                     point3_t* dq_dp,                 // may be NULL
                     double*   dq_dintrinsics_nocore, // may be NULL

                     // inputs
                     const point3_t* p,
                     int N,
                     const double* intrinsics,
                     int Nintrinsics )
{
    const double fx = intrinsics[0];
    const double fy = intrinsics[1];
    const double cx = intrinsics[2];
    const double cy = intrinsics[3];

    double k[12] = {};
    for(int i=0; i<Nintrinsics-4; i++)
        k[i] = intrinsics[i+4];

    for( int i = 0; i < N; i++ )
    {
        double z_recip = 1./p[i].z;
        double x = p[i].x * z_recip;
        double y = p[i].y * z_recip;

        double r2 = x*x + y*y;
        double r4 = r2*r2;
        double r6 = r4*r2;
        double a1 = 2*x*y;
        double a2 = r2 + 2*x*x;
        double a3 = r2 + 2*y*y;
        double cdist   = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
        double icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
        double xd = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2 +k[9]*r4;
        double yd = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2+k[11]*r4;

        q[i].x = xd*fx + cx;
        q[i].y = yd*fy + cy;

        if( dq_dp )
        {
            double dx_dp[] = { z_recip, 0,       -x*z_recip };
            double dy_dp[] = { 0,       z_recip, -y*z_recip };
            for( int j = 0; j < 3; j++ )
            {
                double dr2_dp = 2*x*dx_dp[j] + 2*y*dy_dp[j];
                double dcdist_dp = k[0]*dr2_dp + 2*k[1]*r2*dr2_dp + 3*k[4]*r4*dr2_dp;
                double dicdist2_dp = -icdist2*icdist2*(k[5]*dr2_dp + 2*k[6]*r2*dr2_dp + 3*k[7]*r4*dr2_dp);
                double da1_dp = 2*(x*dy_dp[j] + y*dx_dp[j]);
                double dmx_dp = (dx_dp[j]*cdist*icdist2 + x*dcdist_dp*icdist2 + x*cdist*dicdist2_dp +
                                 k[2]*da1_dp + k[3]*(dr2_dp + 4*x*dx_dp[j]) + k[8]*dr2_dp + 2*r2*k[9]*dr2_dp);
                double dmy_dp = (dy_dp[j]*cdist*icdist2 + y*dcdist_dp*icdist2 + y*cdist*dicdist2_dp +
                                 k[2]*(dr2_dp + 4*y*dy_dp[j]) + k[3]*da1_dp + k[10]*dr2_dp + 2*r2*k[11]*dr2_dp);
                dq_dp[i*2 + 0].xyz[j] = fx*dmx_dp;
                dq_dp[i*2 + 1].xyz[j] = fy*dmy_dp;
            }
        }

        if( dq_dintrinsics_nocore )
        {
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 0] = fx*x*icdist2*r2;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 0] = fy*y*icdist2*r2;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 1] = fx*x*icdist2*r4;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 1] = fy*y*icdist2*r4;
            if( Nintrinsics-4 > 2 )
            {
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 2] = fx*a1;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 2] = fy*a3;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 3] = fx*a2;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 3] = fy*a1;
                if( Nintrinsics-4 > 4 )
                {
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 4] = fx*x*icdist2*r6;
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 4] = fy*y*icdist2*r6;
                    if( Nintrinsics-4 > 5 )
                    {
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 5] = fx*x*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 5] = fy*y*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 6] = fx*x*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 6] = fy*y*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 7] = fx*x*cdist*(-icdist2)*icdist2*r6;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 7] = fy*y*cdist*(-icdist2)*icdist2*r6;
                        if( Nintrinsics-4 > 8 )
                        {
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 8]  = fx*r2; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 8]  = fy*0;  //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 9]  = fx*r4; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 9]  = fy*0;  //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 10] = fx*0;  //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 10] = fy*r2; //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 11] = fx*0;  //s4
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 11] = fy*r4; //s4
                        }
                    }
                }
            }
        }
    }
}
This does only the stuff I need: projection only (no geometric
transformation), and gradients with respect to the point coordinates and the
distortions only. Gradients with respect to fxy and cxy are trivial, and I don't
bother reporting them.
So now I don't compile or link against OpenCV, my code builds and runs on
Debian/sid, and (surprisingly) it runs much faster than before. Apparently
there was a lot of pointless overhead happening.
Alright. Rant over.
After losing a fair bit of hair due to quality and reliability issues
with our home laser multifunctional (a Brother, which we bought after
checking it was meant to work on Linux. And it does, but with a very
buggy, proprietary driver. Besides, the printer itself is of quite low
quality), we decided it was time to survey the market again and get a
color inkjet printer. I was not much of an enthusiast of the idea,
until I found that all of the major manufacturers now offer refillable
ink tanks instead of the darn expensive cartridges of past decades.
Let's see how it goes!
Of course, with over 20 years of training, I did my homework. I was
about to buy an Epson printer, but decided for an HP Ink Tank 410
printer. The day it arrived, not wanting to fuss around too much before
getting to see the results, I connected it to my computer using the USB
cable. Everything ran smoothly and happily! No driver hunting needed,
and the print quality is superb. I hope that, years from now, we stay
with this impression.
The next day, I tried to print over WiFi. Of course, it requires
configuration. And, of course, the configuration strongly wants you to
do it from a Windows or MacOS machine, which I don't have. OK, fall
back to Android, for which an app download is required (and does not
thrill me, but what can I say). Oh, and the app needs location
services to even run. Why? Maybe because it interacts with the
wireless network in a WiFi Direct, non-authenticated way?
Anyway, things seem to work. But they don't. My computers can
identify and connect with the printer from CUPS, but nothing ever
comes out. "Printer paused", they say. Entering the printer's web
interface is somewhat ambiguous. Following the old HP practices, I
tried http://192.168.1.75:9100/ (no point in hiding my internal IP),
and got a partial webpage sometimes (and nothing at all
other times). Seeing the printer got detected over ipps://, my
immediate reaction was to try pointing the browser to port 631. Seems
to work! Got some odd messages, but it seemed I'd soon debug the
issue away. I am not a familiar meddler in the dark lands of cups,
our faithful print server, but I had to remember my toolkit...
Success in enabling, but it self-disabled right away. lpstat -t
was not more generous, reporting only that the printer was still paused.
Some hours later (mix in attending kids and whatnot), I finally
remembered to try cupsctl --debug-logging, and magically,
/var/log/cups/error_log turned from being quiet to being quite
chatty. And, of course, my first print job started being processed:
D [10/May/2020:23:07:20 -0500] Report: jobs-active=1
D [10/May/2020:23:07:25 -0500] [Job 174] Start rendering...
D [10/May/2020:23:07:25 -0500] [Job 174] STATE: -connecting-to-device
Everything looks fine and dandy so far! But, hey, given nothing came
out of the printer keep reading one more second of logs (a couple
D [10/May/2020:23:07:26 -0500] [Job 174] Connection is encrypted.
D [10/May/2020:23:07:26 -0500] [Job 174] Credentials are expired (Credentials have expired.)
D [10/May/2020:23:07:26 -0500] [Job 174] Printer credentials: HPC37468 / Thu, 01 Jan 1970 00:00:00 GMT / 28A59EF511A480A34798B6712DEEAE74
D [10/May/2020:23:07:26 -0500] [Job 174] No stored credentials.
D [10/May/2020:23:07:26 -0500] [Job 174] update_reasons(attr=0(), s=\"-cups-pki-invalid,cups-pki-changed,cups-pki-expired,cups-pki-unknown\")
D [10/May/2020:23:07:26 -0500] [Job 174] STATE: -cups-pki-expired
D [10/May/2020:23:08:00 -0500] [Job 174] envp="CUPS_ENCRYPTION=IfRequested"
D [10/May/2020:23:08:00 -0500] [Job 174] envp="PRINTER_STATE_REASONS=cups-pki-expired"
My first stabs were attempts to get CUPS not to care about expired
certificates, but that option seems to have been hidden or removed from
its usual place. Anyway, I was already frustrated.
WTF? Well, yes. It turns out that on the Web interface I had paid some
attention to this the first time around, but let it pass (which speaks
wonders about my security practices!):
So, the self-signed certificate the printer issued to itself expired
116 years before even being issued. (Is this maybe a Y2K38 bug?
Sounds like it!) Interestingly, my CUPS log mentions the printer
credentials as expiring at the beginning of the Unix Epoch
(01 Jan 1970 00:00:00 GMT).
OK, let's clickety-click away on the Web interface. It didn't take me
long to get to Network → Advanced settings → Certificates:
However, clicking on Configure leads me to the not very...
I don't remember what I did for the next couple of minutes. Kept
fuming... until I parsed again the output of lpstat -t, and found:
# lpstat -t
device for HP_Ink_Tank_Wireless_410_series_C37468_: ipps://HPF43909C37468.local:443/ipp/print
Hmmm, CUPS is connecting using good ol' port 443, as if it were a
Web thingy. What if I do the same?
Click on "New self-signed certificate", click on Next, a couple of
reloads... and a very nice color print came out of the printer, yay!
Now, it still baffles me (of course I checked!): The self-signed
certificate is now said to come from
Issuer : CN=HPC37468, L=Vancouver, ST=Washington, C=US, O=HP,OU=HP-IPG,
Alright, not that it matters (I can import a more meaningful one if I
really feel like it), but why is it Issued On: 2019-06-14 and set to
Expires On: 2029-06-11?
Anyway, print quality is quite nice. I hope to keep the printer long
enough to rant at the certificate being expired in the future!
Jeff Epler (Adafruit) 2020-05-11 20:39:17 -0500
why is it Issued On: 2019-06-14 and set to Expires On: 2029-06-11?
Because it's 3650 days.
Gunnar Wolf 2020-05-11 20:39:17 -0500
Nice catch! Thanks for doing the head-scratching for me.
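Jeff's arithmetic checks out: 3650 days is ten 365-day years, and the interval from 2019-06-14 contains three leap days (2020, 2024, 2028), so the expiry lands three days short of 2029-06-14. A quick sketch counting it out (using mktime's normalization of an overflowed day-of-month):

```cpp
#include <ctime>

// add a number of days to a calendar date; mktime() normalizes the
// out-of-range tm_mday into a proper year/month/day
std::tm expiry_after_days(int year, int month, int day, int days)
{
    std::tm t{};
    t.tm_year = year - 1900;
    t.tm_mon  = month - 1;
    t.tm_mday = day + days;   // overflowed on purpose
    t.tm_hour = 12;           // midday avoids DST edge cases
    t.tm_isdst = -1;
    std::mktime(&t);          // normalizes the fields in place
    return t;
}
```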
I recently stumbled upon chafa, a tool to display pictures, especially
color pictures, on your ANSI text terminal, e.g. inside an xterm.
And I occasionally use aha, the Ansi HTML Adapter, to convert colorful
terminal content into HTML, to show off terminal screenshots without
requiring a picture, so that it also works in e.g. text browsers or
for blind users.
Combining chafa and aha: Examples
A moment ago I had the thought: what would happen if I feed the output
of chafa into aha? I expected nothing really usable, but I was
surprised by the quality of the outcome.
A photo looks like this after chafa -w 9 -c full -s 160x50 DSCN4692.jpg:
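The whole pipeline is a one-liner; the filename and the chafa options are simply the ones from the example above, and the output filename is my choice:

```shell
# render the photo as colored terminal text, then convert the ANSI
# escape sequences to HTML (aha reads stdin, writes HTML to stdout)
chafa -w 9 -c full -s 160x50 DSCN4692.jpg | aha > DSCN4692.html
```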
Checking the Look in Text Browsers
It even looks not that bad in elinks which, as far as I know, is the
only text browser that supports CSS and styles:
In Lynx and Links 2, the text composing the image is displayed only in
black and white, but you can at least recognise the edges in the image.
Same Functionality in One Tool?
I knew there was a tool which did this in one step. Seems to have been
png2html. I tried to play around with it, too, but I neither really
understood how to use it (it seems to require a text file for the
characters to be used; why?) nor did I really get it working: it
always ran until I aborted it and never filled the target file with
any content.
Additionally, png2html insists on one character per pixel, requiring
you to first properly resize the image before converting it to HTML.
The Keyboard in the Pictures
Oh, and btw., the displayed keyboard is my Zlant. The Zlant is a 40% uniform staggered mechanical keyboard. Currently,
PCBs are available at 1UP Keyboards (USA), i.e. no
It is shown with the SA
Vilebloom key cap set, currently available at MechSupply (UK).
Confession time: I started making these posts (eons ago) because a close
friend did as well, and I enjoyed reading them. But the main reason why I
continue is because the primary way I have to keep track of the books I've
bought and avoid duplicates is, well, grep on these posts.
I should come up with a non-bullshit way of doing this, but time to do
more elegant things is in short supply, and, well, it's my blog. So I'm
boring all of you who read this in various places with my internal
bookkeeping. I do try to at least add a bit of commentary.
This one will be more tedious than most since it includes five separate
Humble Bundles, which
increases the volume a lot. (I just realized I'd forgotten to record
those purchases from the past several months.)
First, the individual books I bought directly:
Ilona Andrews Sweep in Peace (sff)
Ilona Andrews One Fell Sweep (sff)
Steven Brust Vallista (sff)
Nicky Drayden The Prey of Gods (sff)
Meg Elison The Book of the Unnamed Midwife (sff)
Pat Green Night Moves (nonfiction)
Ann Leckie Provenance (sff)
Seanan McGuire Once Broken Faith (sff)
Seanan McGuire The Brightest Fell (sff)
K. Arsenault Rivera The Tiger's Daughter (sff)
Matthew Walker Why We Sleep (nonfiction)
Some new books by favorite authors, a few new releases I heard good things
about, and two (Night Moves and Why We Sleep) from
references in on-line articles that impressed me.
The books from security bundles (this is mostly work reading, assuming
I'll get to any of it), including a blockchain bundle:
Wil Allsop Unauthorised Access (nonfiction)
Ross Anderson Security Engineering (nonfiction)
Chris Anley, et al. The Shellcoder's Handbook (nonfiction)
Conrad Barsky & Chris Wilmer Bitcoin for the Befuddled
Imran Bashir Mastering Blockchain (nonfiction)
Richard Bejtlich The Practice of Network Security
Kariappa Bheemaiah The Blockchain Alternative (nonfiction)
Violet Blue Smart Girl's Guide to Privacy (nonfiction)
Richard Caetano Learning Bitcoin (nonfiction)
Nick Cano Game Hacking (nonfiction)
Bruce Dang, et al. Practical Reverse Engineering (nonfiction)
Chris Dannen Introducing Ethereum and Solidity (nonfiction)
Daniel Drescher Blockchain Basics (nonfiction)
Chris Eagle The IDA Pro Book, 2nd Edition (nonfiction)
Nikolay Elenkov Android Security Internals (nonfiction)
Jon Erickson Hacking, 2nd Edition (nonfiction)
Pedro Franco Understanding Bitcoin (nonfiction)
Christopher Hadnagy Social Engineering (nonfiction)
Peter N.M. Hansteen The Book of PF (nonfiction)
Brian Kelly The Bitcoin Big Bang (nonfiction)
David Kennedy, et al. Metasploit (nonfiction)
Manul Laphroaig (ed.) PoC GTFO (nonfiction)
Michael Hale Ligh, et al. The Art of Memory Forensics
Michael Hale Ligh, et al. Malware Analyst's Cookbook
Michael W. Lucas Absolute OpenBSD, 2nd Edition (nonfiction)
Bruce Nikkel Practical Forensic Imaging (nonfiction)
Sean-Philip Oriyano CEHv9 (nonfiction)
Kevin D. Mitnick The Art of Deception (nonfiction)
Narayan Prusty Building Blockchain Projects (nonfiction)
Prypto Bitcoin for Dummies (nonfiction)
Chris Sanders Practical Packet Analysis, 3rd Edition
Bruce Schneier Applied Cryptography (nonfiction)
Adam Shostack Threat Modeling (nonfiction)
Craig Smith The Car Hacker's Handbook (nonfiction)
Dafydd Stuttard & Marcus Pinto The Web Application Hacker's
Albert Szmigielski Bitcoin Essentials (nonfiction)
David Thiel iOS Application Security (nonfiction)
Georgia Weidman Penetration Testing (nonfiction)
Finally, the two SF bundles:
Buzz Aldrin & John Barnes Encounter with Tiber (sff)
Poul Anderson Orion Shall Rise (sff)
Greg Bear The Forge of God (sff)
Octavia E. Butler Dawn (sff)
William C. Dietz Steelheart (sff)
J.L. Doty A Choice of Treasons (sff)
Harlan Ellison The City on the Edge of Forever (sff)
Toh Enjoe Self-Reference ENGINE (sff)
David Feintuch Midshipman's Hope (sff)
Alan Dean Foster Icerigger (sff)
Alan Dean Foster Mission to Moulokin (sff)
Alan Dean Foster The Deluge Drivers (sff)
Taiyo Fujii Orbital Cloud (sff)
Hideo Furukawa Belka, Why Don't You Bark? (sff)
Haikasoru (ed.) Saiensu Fikushon 2016 (sff anthology)
Joe Haldeman All My Sins Remembered (sff)
Jyouji Hayashi The Ouroboros Wave (sff)
Sergei Lukyanenko The Genome (sff)
Chohei Kambayashi Good Luck, Yukikaze (sff)
Chohei Kambayashi Yukikaze (sff)
Sakyo Komatsu Virus (sff)
Miyuki Miyabe The Book of Heroes (sff)
Kazuki Sakuraba Red Girls (sff)
Robert Silverberg Across a Billion Years (sff)
Allen Steele Orbital Decay (sff)
Bruce Sterling Schismatrix Plus (sff)
Michael Swanwick Vacuum Flowers (sff)
Yoshiki Tanaka Legend of the Galactic Heroes, Volume 1: Dawn
Yoshiki Tanaka Legend of the Galactic Heroes, Volume 2: Ambition
Yoshiki Tanaka Legend of the Galactic Heroes, Volume 3: Endurance
Tow Ubukata Mardock Scramble (sff)
Sayuri Ueda The Cage of Zeus (sff)
Sean Williams & Shane Dix Echoes of Earth (sff)
Hiroshi Yamamoto MM9 (sff)
Timothy Zahn Blackcollar (sff)
Phew. Okay, all caught up, and hopefully won't have to dump something
like this again in the near future. Also, more books than I have any
actual time to read, but what else is new.
I mean, it is easy to make fun of Trump; he is just too stupid, incapable, and uneducated. But what the French president Emmanuel Macron did on Bastille Day, in the presence of the usual Trumpies, was above the usual level of making fun of Trump: the French made Trump watch a French band playing a medley of Daft Punk. And as we know, Trump seemed to be very unimpressed, most probably because he doesn't have a clue.
I mean, normally you get this pathetic rubbish (look at the average US, Chinese, or North Korean parades), and here we have the celebration of an event much older than anything the US can put on the table, and they are playing Daft Punk!
France, thanks. You made my day; actually not only one!
rejected deletion (despite valid reason) of
borgbackup (no other screenshots)
rejected deletion of
synaptic (garbage in reason field)
discuss mail bounces with a hoster,
check perms of LE results,
add 1 user to a group,
re-sent some TLS cert expiry mail,
clean up mail bounce flood,
approve some debian.net TLS certs,
do the samhain dance thrice,
end 1 samhain mail flood,
diagnose/fix LDAP update issue,
relay DebConf cert expiry mails,
reboot 2 non-responsive VMs,
merged patches for debian.org-sources.debian.org meta-package,
osmose-emulator (6 screenshots of non-free games)
xserver-xorg-video-amdgpu (drivers have no UI),
eom (includes image of offensive historical figure),
synaptic (includes non-free background image),
osmose-emulator (6 screenshots of non-free games)
libomxil-bellagio0-components-xvideo (borked upload)
approved deletion of
xserver-xephyr (includes image of non-free OS),
perforate (image of the homepage)
openvas/lv2core (not screenshots)
rejected deletion of
vkeybd (junk in reason field)
quiet a logrotate warning,
investigate issue with DNSSEC and alioth,
deploy fix on our first stretch buildd,
restore alioth git repo after history rewrite,
investigate iptables segfaults on buildd
investigate time issues on a NAS
Debian derivatives census:
delete patches over 5 MiB,
re-enable the service
investigate some 403 errors,
fix alioth KGB config,
deploy theme changes,
close a bogus bug report,
ping 1 user with bouncing email,
whitelist 9 email addresses
whitelist 2 domains
deploy my changes
debug mailing list issue,
security upgrades and reboots
Remove old unused and not working website hosting sponsor functionality
use correct SSL CA cert store,
fix and https links to debian.ch and
fix HTML entity escaping in the mirrors list
apply a patch to userdir-ldap,
ask a local admin to reset a dead powerpc buildd,
remove dead SH4 porterboxen from LDAP,
fix perms on www.d.o OC static mirror,
report false positives in an automated abuse report,
redirect 1 student to FAQs/support/DebianEdu,
redirect 1 event organiser to partners/trademark/merchandise/DPL,
redirect 1 guest account seeker to NM,
redirect 1 @debian.org desirer to NM,
redirect 1 email bounce to an email@example.com user,
redirect 2 people to the listmasters,
redirect 1 person to Debian Pure Blends,
redirect 1 user to a service admin and
redirect 2 users to support
Debian packages site:
deploy my ports/cruft changes
poke at HP page history and advise a contributor,
whitelist 13 email addresses,
whitelist 1 domain,
check out history of a banned IP,
direct 1 hoster to DebConf17 sponsors team,
direct 1 user to OpenStack packaging,
direct 1 user to InstallingDebianOn and h-node.org,
direct 2 users to different ways to help Debian and
direct 1 emeritus DD on repository wiki page reorganisation
fix an issue with the PTS news,
remove some debugging cruft I left behind,
fix the usertags on a QA bug and
deploy some code fixes
Debian mentors: security upgrades and service restarts
What happened in the Reproducible
Builds effort between Sunday
December 4 and Saturday December 10 2016:
Toolchain development and fixes
Anders Kaseorg opened a pull
request to asciidoc
upstream, to make it generate reproducible documentation. (#782294)
Reviews of unreproducible packages
47 package reviews have been added, 84 have been updated and 3 have been
removed this week, adding to our knowledge about identified issues.
1 new issue type has been added: lessc_captures_build_path
Weekly QA work
During our reproducibility testing, some FTBFS bugs have been detected and
reported by:
Chris Lamb (8)
Chris Lamb fixed a division-by-zero in the progress bar, split out
trydiffoscope into a separate package, and made some performance enhancements.
Maria Glukhova fixed build issues with Python 3.4
Anders Kaseorg added support for .par files, by allowing them to be
treated as Zip archives; and Chris Lamb improved some documentation.
Ximin Luo added the ability to vary the build time using
faketime, as well as other code
quality improvements and cleanups.
He also discovered a little-known fact about faketime: it also modifies
filesystem timestamps by default. He submitted a PR
to libfaketime upstream to improve the documentation on this, which was quickly
accepted, and also disabled this feature in reprotest's own usage of faketime.
There was further work on buildinfo.debian.net code. Chris Lamb added support
for buildinfo format 0.2 and made rejection notices clearer; and Emanuel
Bronshtein fixed some links to use HTTPS.
This week's edition was written by Ximin Luo and reviewed by a bunch
of Reproducible Builds folks on IRC and via email.
Here are a bunch of security things I'm excited about in the newly released Linux v4.9:
Latent Entropy GCC plugin
Building on her earlier work to bring GCC plugin support to the Linux kernel, Emese Revfy ported PaX's Latent Entropy GCC plugin to upstream. This plugin is significantly more complex than the others that have already been ported, and performs extensive instrumentation of functions marked with __latent_entropy. These functions have their branches and loops adjusted to mix random values (selected at build time) into a global entropy-gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc, this provides some additional uncertainty to the kernel's entropy pool. Since the entropy actually gathered is hard to measure, no entropy is "credited", but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy.
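Conceptually, the instrumentation behaves something like this rough user-space sketch (this is an illustration of the idea, not the plugin's actual transform):

```cpp
#include <cstdint>

// each instrumented function mixes a build-time-chosen random constant
// into a global pool, so the pool's final value depends on which code
// paths actually executed
static uint64_t latent_entropy_pool = 0;

void instrumented_branch(uint64_t build_time_constant)
{
    latent_entropy_pool ^= build_time_constant;
    // rotate so that the *order* of calls, not just the set of calls,
    // perturbs the pool
    latent_entropy_pool = (latent_entropy_pool << 7) |
                          (latent_entropy_pool >> 57);
}
```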
vmapped kernel stack and thread_info relocation on x86
Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process's stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloced memory. Andy Lutomirski did a bunch of work to move to vmapped kernel stacks via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write.
Related to this, the kernel was storing thread_info (which contained sensitive values like addr_limit) at the bottom of the kernel stack, which was an easy target for attackers to hit. Between a combination of explicitly moving targets out of thread_info, removing needless fields, and entirely moving thread_info off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK for x86.
CONFIG_DEBUG_RODATA mandatory on arm64
As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there's no reason to make the protection optional.
Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening for things like the recent consolidation on arch_mmap_rnd() for ET_DYN and during the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int() (or similar) in and around arch_mmap_rnd() (which is used for mmap (and therefore shared library) and PIE ASLR), as well as in randomize_stack_top() (which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range() entirely and replacing it with the saner randomize_page(), making the per-architecture arch_randomize_brk() (responsible for brk ASLR) much easier to understand.
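The essential contract of a page-granular randomizer is easy to state: pick a page-aligned address within [start, start+range). A user-space sketch of that idea (this is not the kernel's actual implementation; rand() stands in for get_random_long(), and a 4 KiB page size is assumed):

```cpp
#include <cstdint>
#include <cstdlib>

constexpr uint64_t PAGE_SHIFT = 12;  // assumed 4 KiB pages

// return a page-aligned address in [start, start + range),
// assuming 'start' itself is page-aligned
uint64_t randomize_page_sketch(uint64_t start, uint64_t range)
{
    uint64_t pages = range >> PAGE_SHIFT;
    if (pages == 0) return start;
    uint64_t r = (uint64_t)rand() % pages;  // stand-in for get_random_long()
    return start + (r << PAGE_SHIFT);
}
```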
That's it for now! Let me know if there are other fun things to call attention to in v4.9.
Reviews of unreproducible packages
15 package reviews have been added, 4 have been updated and 26 have been removed in this week,
adding to our knowledge about identified issues.
2 issue types have been added:
Group all "done" and all "open" usertagged bugs together in the bugs graphs and move the "done bugs" from the bottom of these gaps.
Update list of packages installed on .debian.org machines.
Made the maintenance jobs run every 2h instead of 3h.
Various bug fixes and minor improvements.
After thorough review by Mattia, some patches by Valerie were merged in preparation for the switch from SQLite to PostgreSQL, most notably a conversion to the SQLAlchemy expression language.
Holger gave a talk at Profitbricks about how Debian is using 168 cores, 503 GB RAM and 5 TB storage to make jenkins.debian.net and tests.reproducible-builds.org run. Many thanks to Profitbricks for supporting jenkins.debian.net since August 2012!
Holger created a Jenkins job to build reprotest from git master branch.
Finally, the Jenkins Naginator plugin was installed to retry git cloning in case of Alioth/network failures, this will benefit all jobs using Git on jenkins.debian.net.
This week's edition was written by Chris Lamb, Valerie Young, Vagrant Cascadian, Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.
Previously: v4.5. The v4.6 Linux kernel release included a bunch of stuff, with much more of it under the KSPP umbrella.
seccomp support for parisc
Helge Deller added seccomp support for parisc, which included plumbing support for PTRACE_GETREGSET to get the self-tests working.
x86 32-bit mmap ASLR vs unlimited stack fixed
Hector Marco-Gisbert removed a long-standing limitation to mmap ASLR on 32-bit x86, where setting an unlimited stack (e.g. "ulimit -s unlimited") would turn off mmap ASLR (which provided a way to bypass ASLR when executing setuid processes). Given that ASLR entropy can now be controlled directly (see the v4.5 post), and that the cases where this created an actual problem are very rare, if a system sees collisions between unlimited stack and mmap ASLR, it can just adjust the 32-bit ASLR entropy instead.
x86 execute-only memory
Dave Hansen added Protection Key support for future x86 CPUs and, as part of this, implemented support for execute-only memory in user-space. On pkeys-supporting CPUs, using mmap(..., PROT_EXEC) (i.e. without PROT_READ) will mean that the memory can be executed but cannot be read (or written). This provides some mitigation against automated ROP gadget finding where an executable is read out of memory to find places that can be used to build a malicious execution path. Using this will require changing some linker behavior (to avoid putting data in executable areas), but seems to otherwise Just Work. I'm looking forward to either emulated QEMU support or access to one of these fancy CPUs.
CONFIG_DEBUG_RODATA enabled by default on arm and arm64, and mandatory on x86
Ard Biesheuvel (arm64) and I (arm) made the poorly-named CONFIG_DEBUG_RODATA enabled by default. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so making it on-by-default is required to start any kind of attack surface reduction within the kernel.
On x86 CONFIG_DEBUG_RODATA was already enabled by default, but, at Ingo Molnar's suggestion, I made it mandatory: CONFIG_DEBUG_RODATA cannot be turned off on x86. I expect we'll get there with arm and arm64 too, but the protection is still somewhat new on these architectures, so it's reasonable to continue to leave an out for developers that find themselves tripping over it.
arm64 KASLR text base offset
Ard Biesheuvel reworked a ton of arm64 infrastructure to support kernel relocation and, building on that, Kernel Address Space Layout Randomization of the kernel text base offset (and module base offset). As with x86 text base KASLR, this is a probabilistic defense that raises the bar for kernel attacks where finding the KASLR offset must be added to the chain of exploits used for a successful attack. One big difference from x86 is that the entropy for the KASLR must come either from Device Tree (in the /chosen/kaslr-seed property) or from UEFI (via EFI_RNG_PROTOCOL), so if you're building arm64 devices, make sure you have a strong source of early-boot entropy that you can expose through your boot-firmware or boot-loader.
zero-poison after free
Laura Abbott reworked a bunch of the kernel memory management debugging code to add zeroing of freed memory, similar to PaX/Grsecurity's PAX_MEMORY_SANITIZE feature. This feature means that memory is cleared at free, wiping any sensitive data so it doesn't have an opportunity to leak in various ways (e.g. accidentally uninitialized structures or padding), and that certain types of use-after-free flaws cannot be exploited since the memory has been wiped. To take things even a step further, the poisoning can be verified at allocation time to make sure that nothing wrote to it between free and allocation (called "sanity checking"), which can catch another small subset of flaws.
To understand the pieces of this, it's worth describing that the kernel's higher-level allocator, the page allocator (e.g. __get_free_pages()), is used by the finer-grained slab allocator (e.g. kmem_cache_alloc(), kmalloc()). Poisoning is handled separately in both allocators. The zero-poisoning happens at the page allocator level. Since the slab allocators tend to do their own allocation/freeing, their poisoning happens separately (since on slab free nothing has been freed up to the page allocator).
Only limited performance tuning has been done, so the penalty is rather high at the moment, at about 9% when doing a kernel build workload. Future work will include some exclusion of frequently-freed caches (similar to PAX_MEMORY_SANITIZE), and making the options entirely CONFIG controlled (right now both CONFIGs are needed to build in the code, and a kernel command line is needed to activate it). Performing the sanity checking (mentioned above) adds another roughly 3% penalty. In the general case (and once the performance of the poisoning is improved), the security value of the sanity checking isn't worth the performance trade-off.
Tests for the features can be found in lkdtm as READ_AFTER_FREE and READ_BUDDY_AFTER_FREE. If you're feeling especially paranoid and have enabled sanity-checking, WRITE_AFTER_FREE and WRITE_BUDDY_AFTER_FREE can test these as well.
To perform zero-poisoning of page allocations and (currently non-zero) poisoning of slab allocations, build with:
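(A plausible set of options, reconstructed from the option names referenced in the surrounding text; verify the exact names against your kernel version's Kconfig:)

```
CONFIG_PAGE_POISONING=y
CONFIG_PAGE_POISONING_NO_SANITY=y
CONFIG_PAGE_POISONING_ZERO=y
CONFIG_SLUB_DEBUG=y
```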
and enable the page allocator poisoning and slab allocator poisoning at boot with this on the kernel command line:
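(Again reconstructed from the parameter names referenced in the surrounding text, so double-check against your kernel's documentation:)

```
page_poison=on slub_debug=P
```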
To add sanity-checking, change to "PAGE_POISONING_NO_SANITY=n", and add "F" to slub_debug as "slub_debug=PF".
read-only after init
I added the infrastructure to support making certain kernel memory read-only after kernel initialization (inspired by a small part of PaX/Grsecurity's KERNEXEC functionality). The goal is to continue to reduce the attack surface within the kernel by making even more of the memory, especially function pointer tables, read-only (which depends on CONFIG_DEBUG_RODATA above).
Function pointer tables (and similar structures) are frequently targeted by attackers when redirecting execution. While many are already declared const in the kernel source code, making them read-only (and therefore unavailable to attackers) for their entire lifetime, there is a class of variables that get initialized during kernel (and module) start-up (i.e. written to during functions that are marked "__init") and then never (intentionally) written to again. Some examples are things like the VDSO, vector tables, arch-specific callbacks, etc.
As it turns out, most architectures with kernel memory protection already delay making their data read-only until after __init (see mark_rodata_ro()), so it's trivial to declare a new data section (".data..ro_after_init") and add it to the existing read-only data section (".rodata"). Kernel structures can be annotated with the new section (via the __ro_after_init macro), and they'll become read-only once boot has finished.
The next step for attack surface reduction infrastructure will be to create a kernel memory region that is passively read-only, but can be made temporarily writable (by a single un-preemptable CPU), for storing sensitive structures that are written to only very rarely. Once this is done, much more of the kernel s attack surface can be made read-only for the majority of its lifetime.
As people identify places where __ro_after_init can be used, we can grow the protection. A good place to start is to look through the PaX/Grsecurity patch to find uses of __read_only on variables that are only written to during __init functions. The rest are places that will need the temporarily-writable infrastructure (PaX/Grsecurity uses pax_open_kernel()/pax_close_kernel() for these).
That's it for v4.6, next up will be v4.7!
The Background Story
A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail, one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+).
A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son.
Some weeks ago, I remembered Rocrail and thought... Hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it got hidden from the web some time in 2015 and that the license obviously had been changed to some non-free license (I could not figure out what license, though).
This made me very sad! I thought I had found a piece of software that might be interesting for testing with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload it to Debian as one of the first steps. However, I strictly try to stay away from non-free software, so Rocrail became a non-option for me back in 2015.
I should have moved on from here on...
Proactively, I signed up with the Rocrail forum and asked the author(s) if they see any chance of re-licensing the Rocrail code under the GPL (or any other FLOSS license) again? When I encounter situations like this, I normally offer my expertise and help with such licensing stuff for free. My impression by this point already was that something strange must have happened in the past that made the software developers choose the GPL, later step back from that decision, and from then on hide the source code from the web entirely.
The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I was granted. Thanks for that.
Trivial Source Code Investigation...
So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:
Copyright (c) 2002 Robert Jan Versluis, Rocrail.net
All rights reserved.
Commercial usage needs permission.
Getting in touch again, still being really interested and wanting to help...
As I consider such a non-license as really dangerous when distributing any sort of software, be it Free or non-free Software, I posted the below text on the Rocrail forum:
I just stumbled over this post [link reference adapted for this
blog post], which probably is the one you have referred to above.
It seems that Rocrail contains features that require a key or such
for permanent activation. Basically, this is allowed and possible
even with the GPL-3+ (although Free Software activists will not
appreciate that). As the GPL states that people can share the source
code, programmers can easily deactivate license key checks (and
such) in the code and re-distribute that patchset as they like.
Furthermore, the current COPYING file is really non-protective at
all. It does not really protect you as copyright holder of the
code. Meaning, if people crash their trains with your software, you
could actually be legally prosecuted for that. In theory. Or in the
U.S. ( ;-) ). The main reason for having a long license text is to
protect you as the author in case your software causes trouble to
other people. You do not have any warranty disclaimer in your COPYING
file or elsewhere. Really not a good idea.
In that referenced post above, someone also writes about the nuisance
of license discussions in this forum. I have seen various cases
where people produced software and did not really care for
licensing. Some ended with a letter from a lawyer, some with some BIG
company using their code under their copyright holdership and their
own commercial licensing scheme. This is not paranoia, this is what
happens in the Free Software world from time to time.
A model that might be much more appropriate (and more protective to
you as the author), maybe, is a dual release scheme for the code. A
possible approach could be to split Rocrail into two editions:
Community Edition and Professional/Commercial Edition. The Community
Edition must be licensed in a way that it allows re-using the code
in a closed-source, non-free version of Rocrail (e.g. MIT/Expat
License or Apache2.0 License). Thus, the code base belonging to the
community edition would be licensed, say..., as Apache-2.0 and for
the extra features in the Commercial Edition, you may use any
non-free license you want (but please not that COPYING file you have
now, it really does not protect your copyright holdership).
The reason for releasing (a reduced set of features of a) software as
Free Software is to extend the user base. The honey jar effect, as
practised by many huge FLOSS projects (e.g. Owncloud, GitLab, etc.).
If people could install Rocrail from the Debian / Ubuntu archives
directly, I am sure that the user base of Rocrail will increase.
There may also be developers popping up showing an interest in
Rocrail (e.g. like me). However, I know many FLOSS developers (e.g.
like me) that won't waste their free time on working for a non-free
piece of software (without being paid).
If you follow (or want to follow) a business model with Rocrail, then
keep some interesting features in the Commercial Edition and don't
ship that source code. People with deep interest may opt for that.
Furthermore, another option could be dual licensing the code. As the
copyright holder of Rocrail you are free to juggle with licenses and
apply any license to a release you want. For example, this can be
interesting for a free-again Rocrail being shipped via Apple's iStore.
Last but not least, as you ship the complete source code with all
previous changes as a Git project to those who request GitBlit
access, it is possible to obtain all earlier versions of Rocrail. In
the mail I received with my GitBlit credentials, there was some text
that prohibits publishing the code. Fine. But: (in theory) it is
not forbidden to share the code with a friend, for local usage. This
friend finds the COPYING file, frowns and rewinds back to 2015 where
the license was still GPL-3+. GPL-3+ code can be shared with anyone
and also published, so this friend could upload the 2015-version of
Rocrail to Github or such and start to work on a free fork. You also
may not want this.
Thanks for working on this piece of software! It is highly
interesting, and I am still sad that it does not come with a free
license anymore. I won't continue this discussion and move on, unless
you are interested in any of the above information and ask for more
expertise. Ping me here or directly via mail, if needed. If the
expertise leads to parts of Rocrail becoming Free Software again, the
expertise is offered free of charge ;-).
Wow, the first time I got moderated somewhere... What an experience!
This experience now was really new. My post got immediately removed from the forum by the main author of Rocrail (with the forum's moderator's hat on). The new experience was: I got really angry when I discovered having been moderated. Wow! Really a powerful emotion. No harassment in my words, no secrets disclosed, and still... my free speech got suppressed by someone. That feels intense! And it only occurred in the virtual realm, not face to face. Wow!!! I did not expect such intensity...
The reason for wiping my post without any other communication was given as below and quite a statement to frown upon (this post has also been "moderately" removed from the forum thread a bit later today):
I think its not a good idea to point out a way to get the sources back to the GPL periode.
Therefore I deleted your posting.
(The phpBB forum software also allows moderators to edit posts, so the critical passage could have been removed instead, but immediately wiping the full message, well...). Also, just wiping my post to suppress my words, without otherwise replying with some apology, really is a no-go. And the reason for wiping the rest of the text... Any Git user can easily figure out how to get a FLOSS version of Rocrail and continue to work on that from then on. Really.
Now the political part of this blog post...
Fortunately, I still live in an area of the world where the right of free speech is still present. I found out: I really don't like being moderated!!! Esp. if what I share / propose is really noooo secret at all. Anyone who knows how to use Git can come to the same conclusion as I have come to this morning.
[Off-topic, not at all related to Rocrail: The last votes here in Germany indicate that some really stupid folks here yearn for another this time highly idiotic wind of change, where free speech may end up as a precious good.]
To other (Debian) Package Maintainers and Railroad Enthusiasts...
With this blog post I probably close the last option for Rocrail going FLOSS again. Personally, I think that gate was already closed before I got in touch.
Now really moving on...
Probably the best approach for my new train conductor hobby (as already recommended by the woman at my side some weeks back) is to leave the laptop lid closed when switching on the train control units. I should have listened to her much earlier.
I have finally removed the Rocrail source code from my computer again without building and testing the application. Nor have I shared the source code or the Git URL with anyone. I really think that FLOSS enthusiasts should stay away from this software for now. For my part, I have lost my interest in it completely...
cookiecutter, a project template generator, removing nondeterministic keyword arguments from appearing in the documentation. (#800).
pyicu, a Python wrapper for the IBM Unicode library. (#27).
Integrated a number of issues raised by @piotr1212 to python-fadvise, my Python interface to posix_fadvise(2), where the API was not being applied to open file descriptors (#1) and moving the .so to a module directory (#2).
Fixed a bug in django-slack, my library to easily post messages to the Slack group-messaging utility, correcting an EncodeError exception under Python 3 (#53) and updated the minimum required version of Django to 1.7 (#54).
Clarified the documentation for travis.debian.net, my hosted script to easily test and build Debian packages on the Travis CI continuous integration platform, regarding how to integrate with GitHub (#20).
Whilst anyone can inspect the source code of free software for malicious flaws, most Linux distributions provide binary (or "compiled") packages to end users.
The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising that identical binary packages are always generated from a given source.
I submitted the following patches to fix reproducibility-related toolchain issues:
Authored the patch & issued DLA 596-1 for extplorer, a web-based file manager, fixing an archive traversal exploit.
Issued DLA 598-1 for suckless-tools, fixing a segmentation fault in the slock screen locking tool.
Issued DLA 599-1 for cracklib2, a pro-active password checker library, fixing a stack-based buffer overflow when parsing large GECOS fields.
Improved the find-work internal tool adding optional colour highlighting and migrating it to Python 3.
Wrote an lts-missing-uploads tool to find mistakes where there was no corresponding package in the archive after an announcement.
Added optional colour highlighting to the lts-cve-triage tool.
redis 2:3.2.3-1 New upstream release, move to the DEP-5 debian/copyright format, ensure that we are running as root in LSB initscripts and add a README.Source regarding our local copies of redis.conf and sentinel.conf.
4.2.5-1 Take over package maintenance, completely overhauling the packaging with a new upstream version, move to virtual-mysql-server to support MariaDB, updating package names of dependencies and fix the outdated Apache configuration.
The second pre-conference day was dedicated to books and beers, with a visit to an exquisite print studio, and a beer tasting session at one of the craft breweries in Canada. In addition we could get a glimpse of the Canadian lifestyle by visiting Pavneet's beautiful house in the countryside, as well as enjoying traditional style pastries from a bakery.
In short, a perfect combination for us typography and beer savvy freaks!
This morning we somehow had an early start from the hotel. Soon the bus left downtown Toronto and entered countryside Ontario, large landscapes filled with huge (for my Japanese feeling) estates and houses, separated by fields, forests and wild landscape. Very beautiful and inviting to live there. On our way to the printing workshop we stopped at Pavneet's house for a very short visit of the exterior, which includes mathematics in the bricking. According to Pavneet, his kids hate to see math on the wall; I would be proud to have it.
A bit further on we entered Erin, where the Porcupine's Quill is located. A small building along the street, one could easily overlook that rare jewel! Especially considering that, according to the owners, Google Maps has a bad error which would lead you to a completely different location. This printing workshop, led by Tim and Elke Inkster, is producing books in a traditional style using an old Heidelberg offset printing machine.
Elke introduced us to the sewing of folded signatures together with a lovely old sewing machine. It was the first time I actually saw one in action.
Tim, the head master of the printing shop, first entertained us with stories about Chinese publishers visiting them in the old cold-war times, before diving into explanations of the actual machines around, like the Heidelberg offset printing machine.
In the back of the basement of the little studio there is also a huge folding machine, which cuts up the big signatures of 16 pages and folds them into bundles. An impressive example of tricky engineering.
Due to the small size of the printing studio, we were actually split into two groups, and while the other group got its guided tour, we grabbed coffee and traditional cookies and pastries from the nearby Holtom's bakery. Loads of nice pastries with various fillings, my favorite being the slightly salty cherry pie, and above all the rhubarb-raspberry pie.
To my absolute astonishment I also found there a Viennese "Kaisersemmel", called "Kaiser bun" here, keeping the shape and the idea (but unfortunately not the crispy cracky quality of the original in Vienna). Of course I got two of them, together with a nice jam from the region, and enjoyed this Viennese breakfast the next morning.
Leaving the Quill we headed for lunch in a nice pizzeria (I got a Pizza Toscana) which also served excellent local beer; how I would like to have something like this over in Japan! Our last stop on this excursion was Stone Hammer Brewery, one of the most famous craft breweries in Canada.
Although they won't win a prize for typography (besides one page of a coaster there that carried a nice pun), their beers are exquisite. We got five different beers to taste, plus extensive explanations on brewing methods and differences. Now I finally understand why most of the new craft breweries in Japan are making ales: ales don't need a long process and are ready for sale in rather short time, compared to e.g. lagers.
Also at the Stone Hammer Brewery I spotted this very nice poster on the wall of the toilet. And I cannot agree more: everything can easily be discussed over a good beer; it calms down aversions, makes even the worst enemies friends, and is healthy for both the mind and body.
Filled with excellent beer, some of us (notably an unnamed US TeXnician and politician) stocked up on beers to carry home. I was very tempted to get a huge batch, but putting cans into an airplane seems to be not the optimal idea. Since we are talking cans, I was surprised to hear that many craft beer brewers nowadays prefer cans due to their better protection of the beer from light and oxygen, both killers of good beer.
Before leaving we took a last look at the Periodic Table of Beer Types, which left me in awe about how much I don't know and probably never will know. In particular, I heard for the first time of a "Vienna style" beer; Vienna is not really famous for beer, or better to say, it is infamous. So maybe it is a different Vienna than my home town that is meant here.
Another two hour bus ride brought us back to Toronto, where we met with other participants at the reception in a restaurant of Mediterranean cuisine, where I could enjoy for the first time in years a good Tahina and Humus.
All around another excellent day; now I would only like to have two days of holidays. I guess I need to relax in the lectures starting from tomorrow.
Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.
If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.
With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.
Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.
For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.
Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.
Can you escape your mobile phone while on vacation?
Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.
Every time a company has asked me to use mobile phone authentication so far, I've opted out and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service, can you and your family members say the same thing?
What can be done?
Opt-out of mobile phone authentication schemes.
Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
If your bank provides a relationship manager or other personal contact, this
can also provide a higher level of security as they get to know you.
As some of the world knows full well by now, I've been noodling with Go
for a few years, working through its pros, its cons, and thinking a lot
about how humans use code to express thoughts and ideas. Go's got a lot of
neat use cases, suited to particular problems, and used in the right place,
you can see some clear massive wins.
I've started writing Debian tooling in Go, because it's a pretty natural fit.
Go's fairly tight, and overhead shouldn't be taken up by your operating system.
After a while, I wound up hitting the usual blockers, and started to build up
abstractions. They became pretty darn useful, so, this blog post is announcing
(a still incomplete, year old and perhaps API changing) Debian package for Go.
The Go importable name is pault.ag/go/debian. This contains a lot of utilities
for dealing with Debian packages, and will become an edited down "toolbelt"
for working with or on Debian packages.
Currently, the package contains five major sub-packages: a changelog
parser, a control file parser, a deb file format parser, a dependency parser,
and a version parser. Together, these are a set of powerful building blocks
which can be used together to create higher-order systems with a reliable
understanding of the world.
The first (and perhaps most incomplete and least tested) is the changelog file
parser. This provides the programmer with the ability to pull out the suite
being targeted in the changelog, when each upload was made, and the version of
each. For example, let's look at how we can pull out when all the uploads of
Docker to sid took place:
func main() {
    resp, err := http.Get("http://metadata.ftp-master.debian.org/changelogs/main/d/docker.io/unstable_changelog")
    if err != nil {
        panic(err)
    }
    allEntries, err := changelog.Parse(resp.Body)
    if err != nil {
        panic(err)
    }
    for _, entry := range allEntries {
        fmt.Printf("Version %s was uploaded on %s\n", entry.Version, entry.When)
    }
}
The output of which looks like:
Version 1.8.3~ds1-2 was uploaded on 2015-11-04 00:09:02 -0800 -0800
Version 1.8.3~ds1-1 was uploaded on 2015-10-29 19:40:51 -0700 -0700
Version 1.8.2~ds1-2 was uploaded on 2015-10-29 07:23:10 -0700 -0700
Version 1.8.2~ds1-1 was uploaded on 2015-10-28 14:21:00 -0700 -0700
Version 1.7.1~dfsg1-1 was uploaded on 2015-08-26 10:13:48 -0700 -0700
Version 1.6.2~dfsg1-2 was uploaded on 2015-07-01 07:45:19 -0600 -0600
Version 1.6.2~dfsg1-1 was uploaded on 2015-05-21 00:47:43 -0600 -0600
Version 1.6.1+dfsg1-2 was uploaded on 2015-05-10 13:02:54 -0400 EDT
Version 1.6.1+dfsg1-1 was uploaded on 2015-05-08 17:57:10 -0600 -0600
Version 1.6.0+dfsg1-1 was uploaded on 2015-05-05 15:10:49 -0600 -0600
Version 1.6.0+dfsg1-1~exp1 was uploaded on 2015-04-16 18:00:21 -0600 -0600
Version 1.6.0~rc7~dfsg1-1~exp1 was uploaded on 2015-04-15 19:35:46 -0600 -0600
Version 1.6.0~rc4~dfsg1-1 was uploaded on 2015-04-06 17:11:33 -0600 -0600
Version 1.5.0~dfsg1-1 was uploaded on 2015-03-10 22:58:49 -0600 -0600
Version 1.3.3~dfsg1-2 was uploaded on 2015-01-03 00:11:47 -0700 -0700
Version 1.3.3~dfsg1-1 was uploaded on 2014-12-18 21:54:12 -0700 -0700
Version 1.3.2~dfsg1-1 was uploaded on 2014-11-24 19:14:28 -0500 EST
Version 1.3.1~dfsg1-2 was uploaded on 2014-11-07 13:11:34 -0700 -0700
Version 1.3.1~dfsg1-1 was uploaded on 2014-11-03 08:26:29 -0700 -0700
Version 1.3.0~dfsg1-1 was uploaded on 2014-10-17 00:56:07 -0600 -0600
Version 1.2.0~dfsg1-2 was uploaded on 2014-10-09 00:08:11 +0000 +0000
Version 1.2.0~dfsg1-1 was uploaded on 2014-09-13 11:43:17 -0600 -0600
Version 1.0.0~dfsg1-1 was uploaded on 2014-06-13 21:04:53 -0400 EDT
Version 0.11.1~dfsg1-1 was uploaded on 2014-05-09 17:30:45 -0400 EDT
Version 0.9.1~dfsg1-2 was uploaded on 2014-04-08 23:19:08 -0400 EDT
Version 0.9.1~dfsg1-1 was uploaded on 2014-04-03 21:38:30 -0400 EDT
Version 0.9.0+dfsg1-1 was uploaded on 2014-03-11 22:24:31 -0400 EDT
Version 0.8.1+dfsg1-1 was uploaded on 2014-02-25 20:56:31 -0500 EST
Version 0.8.0+dfsg1-2 was uploaded on 2014-02-15 17:51:58 -0500 EST
Version 0.8.0+dfsg1-1 was uploaded on 2014-02-10 20:41:10 -0500 EST
Version 0.7.6+dfsg1-1 was uploaded on 2014-01-22 22:50:47 -0500 EST
Version 0.7.1+dfsg1-1 was uploaded on 2014-01-15 20:22:34 -0500 EST
Version 0.6.7+dfsg1-3 was uploaded on 2014-01-09 20:10:20 -0500 EST
Version 0.6.7+dfsg1-2 was uploaded on 2014-01-08 19:14:02 -0500 EST
Version 0.6.7+dfsg1-1 was uploaded on 2014-01-07 21:06:10 -0500 EST
Next is one of the most complex, and one of the oldest parts of go-debian,
which is the control file parser
(otherwise sometimes known as deb822). This module was inspired by the way
that the json module works in Go, allowing for files to be defined in code
with a struct. This tends to be a bit more declarative, but also winds up
putting logic into struct tags, which can be a nasty anti-pattern if used too heavily.
The first primitive in this module is the concept of a Paragraph, a struct
containing two values, the order of keys seen, and a map of string to string.
All higher order functions dealing with control files will go through this
type, which is a helpful interchange format to be aware of. All parsing of
meaning from the Control file happens when the Paragraph is unpacked into
a struct using reflection.
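As a rough illustration of that interchange type (field names are hypothetical; the real type lives in pault.ag/go/debian/control), a Paragraph amounts to an ordered list of keys plus a map from key to value:

```go
package main

import "fmt"

// Paragraph is a sketch of the interchange type described above: the
// order keys were first seen in, plus a string-to-string map of values.
// Keeping the order is what lets a file round-trip without its fields
// getting scrambled.
type Paragraph struct {
	Order  []string
	Values map[string]string
}

// Set records a key/value pair, remembering the first-seen order of keys.
func (p *Paragraph) Set(key, value string) {
	if _, seen := p.Values[key]; !seen {
		p.Order = append(p.Order, key)
	}
	p.Values[key] = value
}

func main() {
	p := Paragraph{Values: map[string]string{}}
	p.Set("Package", "docker.io")
	p.Set("Architecture", "amd64")
	for _, key := range p.Order {
		fmt.Printf("%s: %s\n", key, p.Values[key])
	}
	// prints:
	// Package: docker.io
	// Architecture: amd64
}
```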
The idea behind this strategy is that you define your struct and let the Control
parser handle unpacking the data from the IO into your container, letting you
maintain type safety: you never have to read and cast, since the conversion
handles this and returns an unmarshaling error in the event of failure.
Additionally, structs that define an anonymous member of control.Paragraph
will have the raw Paragraph struct of the underlying file, allowing the
programmer to handle dynamic tags (such as X-Foo), or at least letting
them survive the round-trip through Go.
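The reflection-driven unpacking can be sketched with nothing but the standard library. This is a toy version of the idea (string fields and a `control` tag only), not go-debian's actual decoder:

```go
package main

import (
	"fmt"
	"reflect"
)

// unpack copies values from a paragraph-style map into a struct, keyed
// by the `control` struct tag (falling back to the field name) -- the
// same general approach the text describes. A sketch, not the real code.
func unpack(values map[string]string, into interface{}) error {
	v := reflect.ValueOf(into).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		field := t.Field(i)
		key := field.Tag.Get("control")
		if key == "" {
			key = field.Name
		}
		raw, ok := values[key]
		if !ok {
			continue
		}
		if field.Type.Kind() != reflect.String {
			return fmt.Errorf("this sketch only handles string fields")
		}
		v.Field(i).SetString(raw)
	}
	return nil
}

// Package is a hypothetical target struct for the sketch above.
type Package struct {
	Name    string `control:"Package"`
	Version string
}

func main() {
	pkg := Package{}
	if err := unpack(map[string]string{"Package": "docker.io", "Version": "1.8.3~ds1-2"}, &pkg); err != nil {
		panic(err)
	}
	fmt.Printf("%s %s\n", pkg.Name, pkg.Version) // prints: docker.io 1.8.3~ds1-2
}
```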
The default decoder takes one argument: an OpenPGP keyring used to verify the
input control file, which is exposed to the programmer through the
(*Decoder).Signer() function. If the passed argument is nil, it will not
check the input file's signature (at all!); if a keyring has been passed, any
signed data must be found, or an error will fall out of the NewDecoder call.
On the way out, the opposite happens, where the struct is introspected,
turned into a control.Paragraph, and then written out to the io.Writer.
Here's a quick (and VERY dirty) example showing the basics of reading and
writing Debian Control files with go-debian.
package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"

    "pault.ag/go/debian/control"
)

type AllowedPackage struct {
    Package     string
    Fingerprint string
}

func (a *AllowedPackage) UnmarshalControl(in string) error {
    in = strings.TrimSpace(in)
    chunks := strings.SplitN(in, " ", 2)
    if len(chunks) != 2 {
        return fmt.Errorf("Syntax sucks: '%s'", in)
    }
    a.Package = chunks[0]
    a.Fingerprint = chunks[1][1 : len(chunks[1])-1]
    return nil
}

type DMUA struct {
    Fingerprint     string
    Uid             string
    AllowedPackages []AllowedPackage `control:"Allow" delim:","`
}

func main() {
    resp, err := http.Get("http://metadata.ftp-master.debian.org/dm.txt")
    if err != nil {
        panic(err)
    }
    decoder, err := control.NewDecoder(resp.Body, nil)
    if err != nil {
        panic(err)
    }
    for {
        dmua := DMUA{}
        if err := decoder.Decode(&dmua); err != nil {
            if err == io.EOF {
                break
            }
            panic(err)
        }
        fmt.Printf("The DM %s is allowed to upload:\n", dmua.Uid)
        for _, allowedPackage := range dmua.AllowedPackages {
            fmt.Printf("   %s [granted by %s]\n", allowedPackage.Package, allowedPackage.Fingerprint)
        }
    }
}
Output (truncated!) looks a bit like:
The DM Allison Randal <firstname.lastname@example.org> is allowed to upload:
parrot [granted by A4F455C3414B10563FCC9244AFA51BD6CDE573CB]
The DM Benjamin Barenblat <email@example.com> is allowed to upload:
boogie [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
dafny [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
transmission-remote-gtk [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
urweb [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
The DM <firstname.lastname@example.org> is allowed to upload:
covered [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
dico [granted by 6ADD5093AC6D1072C9129000B1CCD97290267086]
drawtiming [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
fonts-hosny-amiri [granted by BD838A2BAAF9E3408BD9646833BE1A0A8C2ED8FF]
Next up, we've got the deb module. This contains code to handle reading
Debian 2.0 .deb files. It contains a wrapper that will parse the control
member, and provide the data member through a tar interface, letting you
read a .deb file, access some metadata, and iterate over the tar archive,
printing the filename of each of the entries.
The dependency package provides an interface to parse and compute
dependencies. This package is a bit odd in that, well, there's no other
library that does this. The issue is that there are actually two different
parsers that compute our Dependency lines, one in Perl (as part of dpkg-dev)
and another in C (in dpkg).
To date, this has resulted in me filing a few bugs. I also found a broken
package in the archive, which actually resulted in another bug being
(totally accidentally) found.
I plan to continue to run the archive through my parser in hopes of finding
more bugs! This package is a bit complex, but it basically just returns what
amounts to an AST for our Dependency lines. I'm positive there are bugs, so
file them!
func main() {
    dep, err := dependency.Parse("foo | bar, baz, foobar [amd64], bazfoo [!sparc], fnord:armhf [gnu-linux-sparc]")
    if err != nil {
        panic(err)
    }
    anySparc, err := dependency.ParseArch("sparc")
    if err != nil {
        panic(err)
    }
    for _, possi := range dep.GetPossibilities(*anySparc) {
        fmt.Printf("%s (%s)\n", possi.Name, possi.Arch)
    }
}
This prints the name and architecture of each possibility that can be satisfied on sparc.
Right off the bat, I'd like to thank
Michael Stapelberg for letting me graft this
out of dcs and into the go-debian package.
This was nearly entirely his work (with a one or two line function I added
later), and was amazingly helpful to have. Thank you!
This module implements Debian version comparisons and parsing, allowing for
sorting in lists, checking to see whether a version is native or not, and
letting the programmer implement smart(er!) logic based on the upstream (or
Debian) version. This module is extremely easy to use and very
straightforward, and not worth writing an example for.
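The comparison rules the module has to implement (Debian Policy's version ordering) are subtle, though: versions split into alternating non-digit and digit runs, digit runs compare numerically, letters sort before non-letters, and a tilde sorts before everything, even the end of the string. Here's a stdlib-only sketch of the core run comparison, modeled on dpkg's verrevcmp; this is not go-debian's code:

```go
package main

import "fmt"

// charOrder implements the modified ASCII ordering from Debian Policy:
// tilde sorts before everything (even end of string, encoded as 0),
// letters sort before all other non-digit characters.
func charOrder(c byte) int {
	switch {
	case c >= '0' && c <= '9':
		return 0
	case (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z'):
		return int(c)
	case c == '~':
		return -1
	case c == 0:
		return 0
	default:
		return int(c) + 256
	}
}

func at(s string, i int) byte {
	if i < len(s) {
		return s[i]
	}
	return 0
}

func isDigit(c byte) bool { return c >= '0' && c <= '9' }

// verrevcmp compares two version fragments (an upstream version or a
// Debian revision) by alternating non-digit and digit runs, the same
// scheme dpkg uses. Negative means a < b, positive a > b, zero equal.
func verrevcmp(a, b string) int {
	i, j := 0, 0
	for i < len(a) || j < len(b) {
		firstDiff := 0
		// Compare the non-digit runs character by character.
		for (i < len(a) && !isDigit(a[i])) || (j < len(b) && !isDigit(b[j])) {
			if ac, bc := charOrder(at(a, i)), charOrder(at(b, j)); ac != bc {
				return ac - bc
			}
			i++
			j++
		}
		// Compare the digit runs numerically: skip leading zeros, then
		// the longer run wins; equal lengths fall back to first difference.
		for i < len(a) && a[i] == '0' {
			i++
		}
		for j < len(b) && b[j] == '0' {
			j++
		}
		for i < len(a) && j < len(b) && isDigit(a[i]) && isDigit(b[j]) {
			if firstDiff == 0 {
				firstDiff = int(a[i]) - int(b[j])
			}
			i++
			j++
		}
		if i < len(a) && isDigit(a[i]) {
			return 1
		}
		if j < len(b) && isDigit(b[j]) {
			return -1
		}
		if firstDiff != 0 {
			return firstDiff
		}
	}
	return 0
}

func main() {
	fmt.Println(verrevcmp("1.8.3~ds1", "1.8.3") < 0) // true: ~ sorts before end of string
	fmt.Println(verrevcmp("1.10", "1.9") > 0)        // true: digit runs compare numerically
}
```

A full comparator would first split off the numeric epoch and the final Debian revision (at the last hyphen) and apply verrevcmp to each part in turn.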
This is more of a "Yeah, OK, this has been useful enough to me at this point
that I'm going to support this" rather than a "It's stable!" or even
"It's alive!" post. Hopefully folks can report bugs and help iterate on
this module until we have some really clean building blocks to build
solid higher level systems on top of. Being able to have multiple libraries
interoperate by relying on go-debian will make things massively easier.
I'm in need of more documentation, and to finalize some parts of the older
sub package APIs, but I'm hoping to be at a "1.0" real soon now.
There has been some silence on the Gammu release front, and it's time to change that. Today, Gammu, python-gammu and Wammu have all been released. As you might guess, all are bugfix releases.
List of changes for Gammu 1.37.3: