I finally received a copy of the Norwegian Bokmål edition of "The Debian Administrator's Handbook". This test copy arrived in the mail a few days ago, and I am very happy to hold the result in my hand. We spent around one and a half years translating it. This paperback edition is available from lulu.com. If you buy it quickly, you save 25% on the list price. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and it can be read online as a web page. This is the second book I have published (the first was the book "Free Culture" by Lawrence Lessig in English, French and Norwegian Bokmål), and I am very excited to finally wrap up this project. I hope "Håndbok for Debian-administratoren" will be well received.
| Title / language       | Quantity 2016 jan-jun | Quantity 2016 jul-dec | Quantity 2017 jan-may |
| Culture Libre / French |                     3 |                     6 |                    15 |
| Fri kultur / Norwegian |                     7 |                     1 |                     0 |
| Free Culture / English |                    14 |                    27 |                    16 |
openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
      --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr

To verify the timestamp, you first need to download the public key of the trusted timestamp service, for example using this command:

wget -O ca-cert.txt \
  https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt

Note, the public key should be stored alongside the timestamps in the archive to make sure it is also available 100 years from now. It is probably a good idea to standardise how and where to store such public keys, to make it easier to find for those trying to verify documents 100 or 1000 years from now. :)

The verification itself is a simple openssl command:

openssl ts -verify -data $inputfile -in $sha256.tsr \
  -CAfile ca-cert.txt -text

Is there any reason this approach would not work? Is it somehow against the Noark 5 specification?
~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446

 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
~/src//noark5-tester$

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.

The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in the absence of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those who want to test the current code base. The API tester is implemented in Python.
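The HATEOAS traversal the tester performs can be sketched roughly like this in Python. This is only an illustration of the idea, not the actual noark5-tester code, and the base URL and the "_links" JSON layout are assumptions that may differ from the real Noark 5 Tjenestegrensesnitt payloads:

# Sketch of HATEOAS link traversal: follow every link the service exposes,
# starting from the entry point, and report unexpected HTTP status codes.
# Not the noark5-tester implementation; link layout and URL are assumptions.
import requests

def crawl(entrypoint):
    seen = set()
    queue = [entrypoint]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        response = requests.get(url)
        if response.status_code != 200:
            print("unexpected status %d for %s" % (response.status_code, url))
            continue
        try:
            body = response.json()
        except ValueError:
            continue  # not a JSON response, nothing to follow
        if not isinstance(body, dict):
            continue
        for link in body.get("_links", []):
            print("%s -> %s" % (link.get("rel"), link.get("href")))
            if "href" in link:
                queue.append(link["href"])

crawl("http://localhost:8092/noark5v4/")  # hypothetical local Nikita instance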
nfs: server nfsserver not responding, still trying

and when the server is reachable again, a line like this:

nfs: server nfsserver OK

It is hard to know if the hang is still going on, and it is hard to be sure looking in dmesg is going to work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed. While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect it. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux. The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount point for the machine. I assume this will not show lazily umounted NFS points, nor NFS mount points in a different process context (ie with a different filesystem view), but that does not worry me. The content for an NFS mount point looks similar to this:

[...]
device /dev/mapper/Debian-var mounted on /var with fstype ext3
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
 opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=220.127.116.11,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
 age: 7863311
 caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
 sec: flavor=1,pseudoflavor=1
 events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
 bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
 RPC iostats version: 1.0  p/v: 100003/3 (nfs)
 xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
 per-op statistics
         NULL: 0 0 0 0 0 0 0 0
      GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
      SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
       LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
       ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
     READLINK: 125 125 0 20472 18620 0 1112 1118
         READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
        WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
       CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
        MKDIR: 3680 3680 0 773980 993920 26 23990 24245
      SYMLINK: 903 903 0 233428 245488 6 5865 5917
        MKNOD: 80 80 0 20148 21760 0 299 304
       REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
        RMDIR: 3367 3367 0 645112 484848 22 5782 6002
       RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
         LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
      READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
  READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
       FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
       FSINFO: 2 2 0 232 328 0 1 1
     PATHCONF: 1 1 0 116 140 0 0 0
       COMMIT: 0 0 0 0 0 0 0 0
device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
[...]

The key number to look at is the third number in the per-op list. It is the number of NFS timeouts experienced per file system operation, here 22 write timeouts and 5 access timeouts. If these numbers are increasing, I believe the machine is experiencing an NFS hang. Unfortunately the timeout value does not start to increase right away. The NFS operations need to time out first, and this can take a while. The exact timeout value depends on the setup. For example the defaults for TCP and UDP mount points are quite different, and the timeout value is affected by the soft, hard, timeo and retrans NFS mount options.

The only way I have been able to get working on Debian and Red Hat Enterprise Linux for getting the timeout count is to peek in /proc/. But according to the Solaris 10 System Administration Guide: Network Services, the 'nfsstat -c' command can be used to get these timeout values. But this does not work on Linux, as far as I can tell. I asked Debian about this, but have not seen any replies yet.

Is there a better way to figure out if a Linux NFS client is experiencing NFS hangs? Is there a way to detect which processes are affected? Is there a way to get the NFS mount going quickly once the network problem causing the NFS hang has been cleared? I would very much welcome some clues, as we regularly run into NFS hangs.
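Since the interesting numbers are available in /proc/self/mountstats, watching them can be scripted. Here is a minimal sketch of the idea, not a finished monitoring tool: it sums the third number of every per-op statistics line per NFS mount point, and would need to be run periodically and compared against the previous run to spot increases.

# Minimal sketch: sum the NFS timeout counters (the third number in each
# per-op statistics line) per NFS mount point from /proc/self/mountstats.
def nfs_timeouts():
    timeouts = {}
    mountpoint = None
    in_per_op = False
    with open("/proc/self/mountstats") as mountstats:
        for line in mountstats:
            words = line.split()
            if line.startswith("device ") and "with fstype nfs" in line:
                mountpoint = words[4]
                timeouts[mountpoint] = 0
                in_per_op = False
            elif line.startswith("device "):
                mountpoint = None  # some other file system type
            elif mountpoint and line.strip() == "per-op statistics":
                in_per_op = True
            elif mountpoint and in_per_op and words and words[0].endswith(":"):
                timeouts[mountpoint] += int(words[3])
    return timeouts

if __name__ == "__main__":
    # If these numbers keep increasing between runs, the NFS client is
    # probably experiencing a hang.
    for mountpoint, count in sorted(nfs_timeouts().items()):
        print("%s %d" % (mountpoint, count))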
% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
    cat /proc/sys/kernel/random/entropy_avail; \
    sleep 1; \
  done
300
0+1 oppføringer inn
0+1 oppføringer ut
28 byte kopiert, 0,000264565 s, 106 kB/s
4
8
12
17
21
%

The entropy level increases by 3-4 every second. In such a case any application requiring random bits (like a HTTPS enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
    cat /proc/sys/kernel/random/entropy_avail; \
    sleep 1; \
  done
1079
0+1 oppføringer inn
0+1 oppføringer ut
104 byte kopiert, 0,000487647 s, 213 kB/s
433
1028
1031
1035
1038
%

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)

Update: The dongle was presented at Debconf last year. You might find the talk recording illuminating. It explains exactly what the source of randomness is, if you are unable to spot it from the schema drawing available from the ChaosKey web site linked at the start of this blog post.
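The same check is easy to script as well. Here is a minimal sketch of a poller along the lines of the shell loop above, warning when the pool drops below a threshold; the threshold value is just an example and should be tuned to the workload:

# Minimal sketch: warn when the kernel entropy pool runs low.
import time

THRESHOLD = 200  # example value, not a recommendation

while True:
    with open("/proc/sys/kernel/random/entropy_avail") as pool:
        available = int(pool.read())
    if available < THRESHOLD:
        print("low entropy: %d bits available" % available)
    time.sleep(1)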
On Wednesday, I spent the entire day in court in Follo Tingrett representing the member association NUUG, alongside the member association EFN and the DNS registrar IMC, challenging the seizure of the DNS name popcorn-time.no. It was interesting to sit in a court of law for the first time in my life. Our team can be seen in the picture above: attorney Ola Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil Eriksen and NUUG board member Petter Reinholdtsen.

The case at hand is that the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime (aka Økokrim) decided on its own to seize a DNS domain early last year, without following the official policy of the Norwegian DNS authority, which requires a court decision. The web site in question was a site covering Popcorn Time. And Popcorn Time is the name of a technology with both legal and illegal applications. Popcorn Time is a client combining searching a Bittorrent directory available on the Internet with downloading/distributing content via Bittorrent and playing the downloaded content on screen. It can be used illegally if it is used to distribute content against the will of the rights holder, but it can also be used legally to play a lot of content, for example the millions of movies available from the Internet Archive or the collection available from Vodo. We created a video demonstrating legal use of Popcorn Time and played it in court. It can of course be downloaded using Bittorrent.

I did not quite know what to expect from a day in court. The government held on to their version of the story and we held on to ours, and I hope the judge is able to make sense of it all. We will know in two weeks' time. Unfortunately I do not have high hopes, as the government has the upper hand here, with more knowledge about the case, better training in handling criminal law and in general higher standing in the courts than a fairly unknown DNS registrar and member associations. It is expensive to be right, also in Norway. So far the case has cost more than NOK 70 000,-. To help fund the case, NUUG and EFN have asked for donations, and managed to collect around NOK 25 000,- so far. Given the presentation from the government, I expect the government to appeal if the case goes our way. And if the case does not go our way, I hope we have enough funding to appeal.

From the other side came two people from Økokrim. On the benches, appearing to be part of the group from the government, were two people from the Simonsen Vogt Wiik lawyer office, and three others I am not quite sure who were. Økokrim had proposed to present two witnesses from The Motion Picture Association, but this was rejected because they did not speak Norwegian and it was a bit late to bring in a translator, but perhaps the two from the MPA were present anyway. All seven appeared to know each other. Good to see the case is taken seriously.

If you, like me, believe the courts should be involved before a DNS domain is hijacked by the government, or you believe the Popcorn Time technology has a lot of useful and legal applications, I suggest you too donate to the NUUG defense fund. Both Bitcoin and bank transfer are available. If NUUG gets more than we need for the legal action (very unlikely), the rest will be spent promoting free software, open standards and unix-like operating systems in Norway, so no matter what happens the money will be put to good use.
If you want to learn more about the case, I recommend you check out the blog posts from NUUG covering the case. They cover the legal arguments on both sides.
When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure would typically be placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.
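To illustrate how little is needed, here is a rough sketch of the approach, not the actual kmltraceroute code: it looks up each traceroute hop in a local GeoLite2 city database using the geoip2 Python module and emits one KML placemark per hop. The database path and the hop addresses are placeholders:

# Rough sketch: turn a list of traceroute hop IP addresses into KML
# placemarks using a local MaxMind GeoLite2 city database.
import geoip2.database
import geoip2.errors

def hops_to_kml(hops, dbpath="GeoLite2-City.mmdb"):
    placemarks = []
    with geoip2.database.Reader(dbpath) as reader:
        for ip in hops:
            try:
                city = reader.city(ip)
            except geoip2.errors.AddressNotFoundError:
                continue  # no location known for this hop
            if city.location.longitude is None:
                continue  # entry without coordinates
            placemarks.append(
                "<Placemark><name>%s</name><Point><coordinates>%f,%f"
                "</coordinates></Point></Placemark>" %
                (ip, city.location.longitude, city.location.latitude))
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>%s'
            '</Document></kml>' % "".join(placemarks))

# Placeholder hop addresses from a hypothetical traceroute run:
print(hops_to_kml(["192.0.2.1", "198.51.100.1"]))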
I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format, and gives a good indication of who controls the network equipment involved, but it does not include geolocation. This graph makes it possible to see that such information is available at least for UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.
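For reference, scapy can produce such a graph in a couple of lines. This is a sketch rather than the exact invocation behind the graph discussed above; it needs root privileges to send the probes and graphviz installed to render the SVG:

# Sketch: visual traceroute with AS grouping using scapy (run as root).
from scapy.all import traceroute

results, unanswered = traceroute(["www.stortinget.no"], maxttl=30)
results.graph(target="> stortinget-trace.svg")  # write the SVG graph to a file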
In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names and various location databases, and finally using latency times to rule out candidate locations, seemed to do a very good job of guessing correct geolocations. But it could only do one trace at a time, did not have a sensor in Norway and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he declined until he had time to clean it up), but he was interested in providing the geolocations in a machine readable format, and willing to set up a sensor in Norway. So since yesterday, it is possible to run traces from Norway in this service thanks to a sensor node set up by the NUUG association, and get the trace in KML format for further processing.
Here we can see that a lot of traffic passes through Sweden on its way to Denmark, Germany, Holland and Ireland. Plenty of places where, as the Snowden revelations confirmed, the traffic is read by various actors who do not have your best interests as their top priority. Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by www.stortinget.no, ask for the KML file from GeoTraceroute, and create a combined KML file with all the traces (unfortunately only one of the IP addresses behind each DNS name is traced this time; to get them all, one would have to request traces from GeoTraceroute using IP addresses instead of DNS names). That might be the next step in this project.

Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses for example the Swedish border, we can be sure Swedish Signal Intelligence (FRA) is listening, as GCHQ does in Britain and the NSA does in the USA and on cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: The KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
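As an appendix to the KML combining mentioned above, the manual text editor step can also be scripted. Here is a rough sketch that copies the Placemark elements from several KML files given on the command line into one combined document; the output file name is just an example:

# Rough sketch: merge Placemark elements from several KML files into one.
import sys
import xml.etree.ElementTree as ElementTree

KMLNS = "http://www.opengis.net/kml/2.2"
ElementTree.register_namespace("", KMLNS)

kml = ElementTree.Element("{%s}kml" % KMLNS)
document = ElementTree.SubElement(kml, "{%s}Document" % KMLNS)
for filename in sys.argv[1:]:
    for placemark in ElementTree.parse(filename).iter("{%s}Placemark" % KMLNS):
        document.append(placemark)
ElementTree.ElementTree(kml).write("combined.kml", encoding="UTF-8",
                                   xml_declaration=True)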
% ical-archiver t/2004-2016.ics
Found 3612 vevents
Found 6 vtodos
Found 2 vjournals
Writing t/2004-2016.ics-subset-vevent-2004.ics
Writing t/2004-2016.ics-subset-vevent-2005.ics
Writing t/2004-2016.ics-subset-vevent-2006.ics
Writing t/2004-2016.ics-subset-vevent-2007.ics
Writing t/2004-2016.ics-subset-vevent-2008.ics
Writing t/2004-2016.ics-subset-vevent-2009.ics
Writing t/2004-2016.ics-subset-vevent-2010.ics
Writing t/2004-2016.ics-subset-vevent-2011.ics
Writing t/2004-2016.ics-subset-vevent-2012.ics
Writing t/2004-2016.ics-subset-vevent-2013.ics
Writing t/2004-2016.ics-subset-vevent-2014.ics
Writing t/2004-2016.ics-subset-vjournal-2007.ics
Writing t/2004-2016.ics-subset-vjournal-2011.ics
Writing t/2004-2016.ics-subset-vtodo-2012.ics
Writing t/2004-2016.ics-remaining.ics
%

As you can see, the original file is untouched and new files are written with names derived from the original file. If you are happy with their content, the *-remaining.ics file can replace the original and the others can be archived or imported as historical calendar collections.

The script should probably be improved a bit. The error handling when discovering broken entries is not good, and I am not sure yet if it makes sense to split different entry types into separate files or not. The program is thus likely to change. If you find it interesting, please get in touch. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
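Returning to the splitting logic described above, the core idea fits in a few lines of Python on top of the vobject library. This is only a sketch of the idea, handling VEVENT entries and no error cases; it is not the actual ical-archiver implementation:

# Sketch: split the VEVENT entries of an iCalendar file into one file per
# year based on each event's DTSTART. VTODO/VJOURNAL handling, the
# -remaining.ics file and error handling are omitted.
import vobject

def split_by_year(filename):
    with open(filename) as source:
        calendar = vobject.readOne(source.read())
    per_year = {}
    for event in calendar.vevent_list:
        per_year.setdefault(event.dtstart.value.year, []).append(event)
    for year, events in sorted(per_year.items()):
        archive = vobject.iCalendar()
        for event in events:
            archive.add(event)
        with open("%s-subset-vevent-%d.ics" % (filename, year), "w") as target:
            target.write(archive.serialize())

split_by_year("t/2004-2016.ics")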
% appstreamcli what-provides modalias \
  usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
Identifier: pymissile [generic]
Name: pymissile
Summary: Control original Striker USB Missile Launcher
Package: pymissile
% appstreamcli what-provides modalias usb:v0694p0002d0000
Identifier: libnxt [generic]
Name: libnxt
Summary: utility library for talking to the LEGO Mindstorms NXT brick
Package: libnxt
---
Identifier: t2n [generic]
Name: t2n
Summary: Simple command-line tool for Lego NXT
Package: t2n
---
Identifier: python-nxt [generic]
Name: python-nxt
Summary: Python driver/interface/wrapper for the Lego Mindstorms NXT robot
Package: python-nxt
---
Identifier: nbc [generic]
Name: nbc
Summary: C compiler for LEGO Mindstorms NXT bricks
Package: nbc
%

A similar query can be done using the combined AppStream and Isenkram databases with the isenkram-lookup tool:
% isenkram-lookup usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
pymissile
% isenkram-lookup usb:v0694p0002d0000
libnxt
nbc
python-nxt
t2n
%

You can find modalias values relevant for your machine using cat $(find /sys/devices/ -name modalias).

If you want to make this system a success and help Debian users make the most of the hardware they have, please help add AppStream metadata for your package following the guidelines documented in the wiki. So far only 11 packages provide such information, among the several hundred hardware specific packages in Debian. The Isenkram database on the other hand contains 101 packages, mostly related to USB dongles. Most of the packages with hardware mapping in AppStream are LEGO Mindstorms related, because I have, as part of my involvement in the Debian LEGO team, given priority to making sure LEGO users get proposed the complete set of packages in Debian for that particular hardware.

The team also got a nice Christmas present today. The nxt-firmware package made it into Debian. With this package in place, it is now possible to use the LEGO Mindstorms NXT unit with only free software, as the nxt-firmware package contains the source and firmware binaries for the NXT brick.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
% isenkram-lookup
bluez
cheese
ethtool
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tlp
tp-smapi-dkms
tp-smapi-source
tpb
%

It can also list the firmware packages providing firmware requested by the loaded kernel modules, which in my case is an empty list because I have all the firmware my machine needs:
% /usr/sbin/isenkram-autoinstall-firmware -l
info: did not find any firmware files requested by loaded kernel modules. exiting
%

The last few days I had a look at several of the around 250 packages in Debian with udev rules. These seem like good candidates to install when a given hardware dongle is inserted, and I found several that should be proposed by isenkram. I have not had time to check all of them, but am happy to report that there are now 97 packages mapped to hardware by Isenkram. 11 of these packages provide hardware mapping using AppStream, while the rest are listed in the modaliases file provided in isenkram.

These are the packages with hardware mappings at the moment. The marked packages are also announcing their hardware support using AppStream, for everyone to use: air-quality-sensor, alsa-firmware-loaders, argyll, array-info, avarice, avrdude, b43-fwcutter, bit-babbler, bluez, bluez-firmware, brltty, broadcom-sta-dkms, calibre, cgminer, cheese, colord, colorhug-client, dahdi-firmware-nonfree, dahdi-linux, dfu-util, dolphin-emu, ekeyd, ethtool, firmware-ipw2x00, fprintd, fprintd-demo, galileo, gkrellm-thinkbat, gphoto2, gpsbabel, gpsbabel-gui, gpsman, gpstrans, gqrx-sdr, gr-fcdproplus, gr-osmosdr, gtkpod, hackrf, hdapsd, hdmi2usb-udev, hpijs-ppds, hplip, ipw3945-source, ipw3945d, kde-config-tablet, kinect-audio-setup, libnxt, libpam-fprintd, lomoco, madwimax, minidisc-utils, mkgmap, msi-keyboard, mtkbabel, nbc, nqc, nut-hal-drivers, ola, open-vm-toolbox, open-vm-tools, openambit, pcgminer, pcmciautils, pcscd, pidgin-blinklight, printer-driver-splix, pymissile, python-nxt, qlandkartegt, qlandkartegt-garmin, rosegarden, rt2x00-source, sispmctl, soapysdr-module-hackrf, solaar, squeak-plugins-scratch, sunxi-tools, t2n, thinkfan, thinkfinger-tools, tlp, tp-smapi-dkms, tp-smapi-source, tpb, tucnak, uhd-host, usbmuxd, viking, virtualbox-ose-guest-x11, w1retap, xawtv, xserver-xorg-input-vmmouse, xserver-xorg-input-wacom, xserver-xorg-video-qxl, xserver-xorg-video-vmware, yubikey-personalization and zd1211-firmware.

If you know of other packages, please let me know with a wishlist bug report against the isenkram-cli package, and ask the package maintainer to add AppStream metadata according to the guidelines to provide the information for everyone. In time, I hope to get rid of the isenkram-specific hardware mapping and depend exclusively on AppStream.

Note, the AppStream metadata for broadcom-sta-dkms matches too much hardware, and suggests that the package works with any ethernet card. See bug #838735 for the details. I hope the maintainer finds time to address it soon. In the meantime I provide an override in isenkram.
In my early years, I played the epic game Elite on my PC. I spent many months trading and fighting in space, and reached the 'elite' fighting status before I moved on. The original Elite game was available on the Commodore 64, and the IBM PC edition I played had a 64 KB executable. I am still impressed today that the authors managed to squeeze both a 3D engine and details about more than 2000 planet systems across 7 galaxies into a binary so small.

I have known about the free software game Oolite inspired by Elite for a while, but did not really have time to test it properly until a few days ago. It was great to discover that my old knowledge about trading routes was still valid. But my fighting and flying abilities were gone, so I had to retrain to be able to dock at a space station. And I am still not able to offer much resistance when I am attacked by pirates, so I bought and mounted the most powerful laser in the rear to be able to put up at least some resistance while fleeing for my life. :)

When playing Elite in the late eighties, I had to discover everything on my own, and I had long lists of prices seen on different planets to be able to decide where to trade what. This time I had the advantage of the Elite wiki, where information about each planet is easily available with common price ranges and suggested trading routes. This improved my ability to earn money, and I have been able to earn enough to buy a lot of useful equipment in a few days. I believe I originally played for months before I could get a docking computer, while now I could get it after less than a week.

If you like science fiction and dreamed of a life as a vagabond in space, you should try out Oolite. It is available for Linux, MacOSX and Windows, and is included in Debian and derivatives since 2011.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
preseed/early_command="anna-install eatmydata-udeb"

This should ask d-i to install the package inside the d-i environment early in the installation sequence. Having it installed in d-i in turn will make sure the relevant scripts are called just after debootstrap has filled /target/ with the freshly installed Debian system, to configure apt to run dpkg with eatmydata. This is enough to speed up the installation process. There is a proposal to extend the idea a bit further by using /etc/ld.so.preload instead of apt.conf, but I have not tested its impact.