Search Results: "acid"

24 September 2015

Joachim Breitner: The Incredible Proof Machine

In a few weeks, I will have the opportunity to offer a weekend workshop to selected and motivated high school students[1] on a topic of my choice. My idea is to tell them something about logic, proofs, the joy of searching for and finding proofs, and the gratification of irrevocable truths. While proving things on paper is already quite nice, it is much more fun to use an interactive theorem prover, such as Isabelle, Coq or Agda: you get immediate feedback, you can experiment and play around if you are stuck, and you get lots of small successes. Someone[2] once called interactive theorem proving "the world's geekiest videogame".

Unfortunately, I don't think one can get high school students without any prior knowledge of logic, programming, or fancy mathematical symbols to do something meaningful with a system like Isabelle, so I need something that is (much) easier to use. I always had this idea in the back of my head that proving is not so much about writing text (as in ordinary written proofs) or programs (as in Agda) or labeled statements (as in Hilbert-style proofs), but rather something involving the facts I have proven so far floating around freely, and ways to combine these facts into new facts, without the need to name them, or put them in a particular order or sequence. In a way, I'm looking for LabVIEW wrestled through the Curry-Howard isomorphism. Something like this:
A proof of implication currying

So I set out, rounded up a few contributors (thanks!), implemented this, and now I proudly present: The Incredible Proof Machine.[3]

This interactive theorem prover allows you to perform proofs purely by dragging blocks (representing proof steps) onto the paper and connecting them properly. There is no need to learn syntax, and hence no frustration about getting it wrong. Furthermore, it comes with a number of example tasks to experiment with, so you can simply see it as a challenging computer game and work through them one by one, learning something about the logical connectives and how they work as you go. For the actual workshop, my plan is to let the students first try to solve the tasks of one session on their own, let them draw their own conclusions and come up with an idea of what they just did, and then deliver an explanation of the logical meaning of what they did.

The implementation is heavily influenced by Isabelle: the software does not know anything about, say, conjunction (∧) and implication (→). At its core, everything is but an untyped lambda expression, and when two blocks are connected, it performs unification[4] of the propositions present on either side. This general framework is then instantiated by specifying the basic rules (or axioms) in a descriptive manner. It is quite feasible to implement other logics or formal systems on top of this as well. Another influence of Isabelle is the non-linear editing: you neither have to create the proof in a particular order nor manually manage a "proof focus". Instead, you can edit any bit of the proof at any time, and the system checks all of it continuously.

As always, I am keen on feedback. Also, if you want to use this for your own teaching or experimenting needs, let me know. We have a mailing list for the project, and the code is on GitHub, where you can also file bug reports and feature requests. Contributions are welcome!
All aspects of the logic are implemented in Haskell and compiled to JavaScript using GHCJS; the UI is plain hand-written and messy JavaScript code, using JointJS to handle the graph interaction.

Obviously, there is still plenty that can be done to improve the machine. In particular, the ability to create your own proof blocks, such as proof by contradiction, prove them to be valid and then use them in further proofs, is currently being worked on. And while the page will store your current progress, including all proofs you create, in your browser, it needs better ways to save, load and share tasks, blocks and proofs. Also, we'd like to add some gamification, i.e. achievements ("First proof by contradiction", "50 theorems proven"), statistics, maybe a "share theorem on Twitter" button. As the UI becomes more complicated, I'd like to investigate moving more of it into the Haskell world and using Functional Reactive Programming, i.e. Ryan Trinkle's Reflex, to stay sane.

Customers who liked The Incredible Proof Machine might also like these artifacts that I found while checking whether something like this already exists:

  1. Students with a migration background, supported by the START scholarship
  2. Does anyone know the reference?
  3. We almost named it "Proofcraft", which would be a name our current Minecraft-crazed youth would appreciate, but it is already taken by Gerwin Klein's blog. Also, the irony of a theorem prover being in-credible is worth something.
  4. Luckily, two decades ago, Tobias Nipkow published a nice implementation of higher-order pattern unification as ML code, which I transliterated to Haskell for this project.

23 July 2015

Antoine Beaupré: Is it safe to use open wireless access points?

I sometimes get questions when people use my wireless access point, which, for as long as I can remember, has been open to everyone, that is, without any form of password protection or encryption. I arguably don't use the access point much myself, as I prefer the wired connection for the higher bandwidth, security and reliability it provides. Apart from convenience for myself and visitors, the main reason why I leave my wireless access open is that I believe in a free (both as in beer and as in freedom) internet, built on principles of solidarity rather than exploitation and profitability. In these days of ubiquitous surveillance, freedom often goes hand in hand with anonymity, which implies providing free internet access to everyone. I also believe that, as more and more services get perniciously transferred to the global internet, access to the network is becoming a basic human right. This is therefore my small contribution to the struggle, now also part of the Réseau Libre project. So here was my friend's question, in essence:
My credit card info was stolen when I used a wifi hotspot in an airport... Should I use open wifi networks? Is it safe to use my credit card for shopping online?
Here is a modified version of an answer I sent to a friend recently, which I thought could be useful to the larger internet community. The short answer is "sorry about that", "it depends, you generally can, but be careful" and "your credit card company is supposed to protect you".

Sorry!

First off, sorry to hear that your credit card was stolen in an airport! That has to be annoying... Did the credit card company reimburse you? Normally, the whole point of credit cards is that they protect you in case of theft like this, and they are supposed to reimburse you if your credit card gets stolen or abused...

The complexity and unreliability of passwords

Now of course, securing every bit of your internet infrastructure helps in protecting against such attacks. However, there is a trade-off! First off, it does make it more complicated for people to join the network. You need to make up some silly password (which has its own security problems: passwords can be surprisingly easy to guess!) that you will post on the fridge or, worse, forget all the time! And if it's on the fridge, anyone with a view of that darn fridge, be it a one-time visitor or a sneaky neighbor, can find the password and steal your internet access (although, granted, that won't allow them to directly spy on your internet connection). In any case, if you choose to use a password, you should use the tricks I wrote up in the koumbit wiki to generate the password and avoid writing it on the fridge.
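For illustration, here is one simple way to generate a reasonably strong random password from the shell. This is a generic sketch, not necessarily the method described in the koumbit wiki:

```shell
# Draw 12 bytes from the kernel's random source and encode them as
# 16 printable characters (roughly 96 bits of entropy).
head -c 12 /dev/urandom | base64
```

A passphrase made of several randomly chosen dictionary words is often easier to memorize while remaining hard to guess, which matters more than raw length if you refuse to write the password down.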

The false sense of security of wireless encryption

Second, it can also give a false sense of security: just because a wifi access point appears "secure" (i.e. the communication between your computer and the wifi access point is encrypted) doesn't mean the whole connection is secure. In fact, one attack that can be performed against access points is exactly to masquerade as an existing access point, with no security at all. That way, instead of connecting to the real, secure and trusted access point, you connect to an evil one which spies on your connection. Most computers will happily connect to such a hotspot, even with degraded security, without warning. It may be what happened at the airport, in fact. Of course, this particular attack is less likely to happen if you live in the middle of the woods than in an airport, but it's an important distinction to keep in mind, because the same attack can be performed after the wireless access point, for example by your countryside internet access provider or someone attacking it. Your best protection for your banking details is to rely on good passwords (for your bank account) but also, and more importantly, on what we call end-to-end encryption. That is usually implemented using "HTTPS", shown with a padlock icon in your address bar. This ensures that the communication between your computer and the bank or credit card company is secure; that is, no wifi access point or attacker between your computer and them can intercept your credit card number.

The flaws of internet security

Now unfortunately, even the HTTPS protocol doesn't bring complete security. For example, one attack that can be performed is similar to the previous one: masquerade as a legitimate bank site, but either strip out the encryption or even fake the encryption. So you also need to look at the address of the website you are visiting. Attackers are often pretty clever and will use many tricks to hide the real address of the website in the address bar. To work around this, I always type my bank website's address (https://accesd.desjardins.com/ in my case) directly myself, instead of clicking on links, bookmarks or using a search engine to find my bank site. In the case of credit cards, it is much trickier, because when you buy stuff online, you end up putting that credit card number on different sites which you do not necessarily trust. There's no good solution other than complaining to your credit card company if you believe a website you used has stolen your credit card details. You can also use services like Paypal, Dwolla or Bitcoin that hide your credit card details from the seller, if they support the service. I usually try to avoid putting my credit card details on sites I do not trust, and limit myself to known parties (e.g. Via Rail, Air Canada, etc.). Also, in general, I try to assume the network connection between me and the website I visit is compromised. This forced me to get familiar with online security and the use of encryption. It is more accessible to me than trying to secure the infrastructure I am using, because I often do not control it at all (e.g. internet cafes...). Internet security is unfortunately a hard problem, and things are not getting easier as more things move online.
The burden is on us programmers and system administrators to create systems that are more secure and intuitive for our users. So, as I said earlier, sorry the internet sucks so much; we didn't think so many people would join the acid trip of the 70s. ;)

12 July 2015

Lunar: Reproducible builds: week 11 in Stretch cycle

"Debian is undertaking a huge effort to develop a reproducible builds system. I'd like to thank you for that. This could be Debian's most important project, with how badly computer security has been going."

    -- PerniciousPunk in Reddit's "Ask me anything!" to Neil McGovern, DPL.

What happened in the reproducible builds effort this week:

Toolchain fixes

More tools are getting patched to use the value of the SOURCE_DATE_EPOCH environment variable as the current time:

In the reproducible experimental toolchain which have been uploaded:

Johannes Schauer followed up on making sbuild build path deterministic with several ideas.

Packages fixed

The following 311 packages became reproducible due to changes in their build dependencies: 4ti2, alot, angband, appstream-glib, argvalidate, armada-backlight, ascii, ask, astroquery, atheist, aubio, autorevision, awesome-extra, bibtool, boot-info-script, bpython, brian, btrfs-tools, bugs-everywhere, capnproto, cbm, ccfits, cddlib, cflow, cfourcc, cgit, chaussette, checkbox-ng, cinnamon-settings-daemon, clfswm, clipper, compton, cppcheck, crmsh, cupt, cutechess, d-itg, dahdi-tools, dapl, darnwdl, dbusada, debian-security-support, debomatic, dime, dipy, dnsruby, doctrine, drmips, dsc-statistics, dune-common, dune-istl, dune-localfunctions, easytag, ent, epr-api, esajpip, eyed3, fastjet, fatresize, fflas-ffpack, flann, flex, flint, fltk1.3, fonts-dustin, fonts-play, fonts-uralic, freecontact, freedoom, gap-guava, gap-scscp, genometools, geogebra, git-reintegrate, git-remote-bzr, git-remote-hg, gitmagic, givaro, gnash, gocr, gorm.app, gprbuild, grapefruit, greed, gtkspellmm, gummiboot, gyp, heat-cfntools, herold, htp, httpfs2, i3status, imagetooth, imapcopy, imaprowl, irker, jansson, jmapviewer, jsdoc-toolkit, jwm, katarakt, khronos-opencl-man, khronos-opengl-man4, lastpass-cli, lava-coordinator, lava-tool, lavapdu, letterize, lhapdf, libam7xxx, libburn, libccrtp, libclaw, libcommoncpp2, libdaemon, libdbusmenu-qt, libdc0, libevhtp, libexosip2, libfreenect, libgwenhywfar, libhmsbeagle, libitpp, libldm, libmodbus, libmtp, libmwaw, libnfo, libpam-abl, libphysfs, libplayer, libqb, libsecret, libserial, libsidplayfp, libtime-y2038-perl, libxr, lift, linbox, linthesia, livestreamer, lizardfs, lmdb, log4c, logbook, lrslib, lvtk, m-tx, mailman-api, matroxset, miniupnpd, mknbi, monkeysign, mpi4py, mpmath, mpqc, mpris-remote, musicbrainzngs, network-manager, nifticlib, obfsproxy, ogre-1.9, opal, openchange, opensc, packaging-tutorial, padevchooser, pajeng, paprefs, pavumeter, pcl, pdmenu, pepper, perroquet, pgrouting, pixz, pngcheck, po4a, powerline, probabel, profitbricks-client, prosody, pstreams, pyacidobasic, pyepr, pymilter, pytest, python-amqp, python-apt, python-carrot, python-django, python-ethtool, python-mock, python-odf, python-pathtools, python-pskc, python-psutil, python-pypump, python-repoze.tm2, python-repoze.what, qdjango, qpid-proton, qsapecng, radare2, reclass, repsnapper, resource-agents, rgain, rttool, ruby-aggregate, ruby-albino, ruby-archive-tar-minitar, ruby-bcat, ruby-blankslate, ruby-coffee-script, ruby-colored, ruby-dbd-mysql, ruby-dbd-odbc, ruby-dbd-pg, ruby-dbd-sqlite3, ruby-dbi, ruby-dirty-memoize, ruby-encryptor, ruby-erubis, ruby-fast-xs, ruby-fusefs, ruby-gd, ruby-git, ruby-globalhotkeys, ruby-god, ruby-hike, ruby-hmac, ruby-integration, ruby-jnunemaker-matchy, ruby-memoize, ruby-merb-core, ruby-merb-haml, ruby-merb-helpers, ruby-metaid, ruby-mina, ruby-net-irc, ruby-net-netrc, ruby-odbc, ruby-ole, ruby-packet, ruby-parseconfig, ruby-platform, ruby-plist, ruby-popen4, ruby-rchardet, ruby-romkan, ruby-ronn, ruby-rubyforge, ruby-rubytorrent, ruby-samuel, ruby-shoulda-matchers, ruby-sourcify, ruby-test-spec, ruby-validatable, ruby-wirble, ruby-xml-simple, ruby-zoom, rumor, rurple-ng, ryu, sam2p, scikit-learn, serd, shellex, shorewall-doc, shunit2, simbody, simplejson, smcroute, soqt, sord, spacezero, spamassassin-heatu, spamprobe, sphinxcontrib-youtube, splitpatch, sratom, stompserver, syncevolution, tgt, ticgit, tinyproxy, tor, tox, transmissionrpc, tweeper, udpcast, units-filter, viennacl, visp, vite, vmfs-tools, waffle, waitress, wavtool-pl, webkit2pdf, wfmath, wit, wreport, x11proto-input, xbae, xdg-utils, xdotool, xsystem35, yapsy, yaz.

Please note that some packages in the above list are falsely reproducible.
In the experimental toolchain, debhelper exported TZ=UTC, and this made packages capturing the current date (without the time) reproducible in the current test environment.

The following packages became reproducible after getting fixed:

Ben Hutchings upstreamed several patches to fix Linux reproducibility issues, which were quickly merged.

Some uploads fixed some reproducibility issues but not all of them:

Uploads that should fix packages not in main:

Patches submitted which have not made their way to the archive yet:

reproducible.debian.net

A new package set has been added for lua maintainers. (h01ger) tracker.debian.org now only shows reproducibility issues for unstable. Holger and Mattia worked on several bugfixes and enhancements: finished initial test setup for NetBSD, rewriting more shell scripts in Python, saving UDD requests, and more.

debbindiff development

Reiner Herrmann fixed text comparison of files with different encodings.

Documentation update

Juan Picca added installation of the locales-all package to the commands needed for a local test chroot.

Package reviews

286 obsolete reviews have been removed, 278 added and 243 updated this week. 43 new bugs for packages failing to build from sources have been filed by Chris West (Faux), Mattia Rizzolo, and h01ger. The following new issues have been added: timestamps_in_manpages_generated_by_ronn, timestamps_in_documentation_generated_by_org_mode, and timestamps_in_pdf_generated_by_matplotlib.

Misc.

Reiner Herrmann has submitted patches for OpenWrt. Chris Lamb cleaned up some code and removed cruft in the misc.git repository. Mattia Rizzolo updated the prebuilder script to match what is currently done on reproducible.debian.net.
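To illustrate the SOURCE_DATE_EPOCH convention mentioned above: a patched tool reads the variable and uses its value in place of the wall clock, so repeated builds of the same source embed identical timestamps. A minimal sketch (the epoch value here is just an example, chosen to match this post's date):

```shell
# A build tool honouring SOURCE_DATE_EPOCH would do something like this:
# use the variable if set, fall back to the current time otherwise.
export SOURCE_DATE_EPOCH=1436659200   # example: 12 Jul 2015 00:00 UTC
build_date=$(date -u -d "@${SOURCE_DATE_EPOCH:-$(date +%s)}" +%Y-%m-%d)
echo "$build_date"   # 2015-07-12, no matter when the build actually runs
```

Because every rebuild sees the same timestamp, artifacts that embed a build date (manpages, PDFs, changelogs) come out byte-for-byte identical.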

5 July 2015

Robert Edmonds: Git packaging workflow for py-lmdb

Recently, I packaged the py-lmdb Python binding for the LMDB database library. This package is going to be team maintained by the pkg-db group, which is responsible for maintaining BerkeleyDB and LMDB packages. Below are my notes on (re-)Debianizing this package and how the Git repository for the source package is laid out.

The upstream py-lmdb developer has a Git-centric workflow. Development is done on the master branch, with regular releases done as fast-forward merges to the release branch. Release tags of the form py-lmdb_X.YZ are provided. The only tarballs provided are the ones that GitHub automatically generates from tags. Since these tarballs are synthetic and their content matches the content on the corresponding tag, we will ignore them in favor of using the release tags directly. (The --git-pristine-tar-commit option to gbp buildpackage will be used so that .orig.tar.gz files can be replicated and the Debian archive will accept subsequent uploads, but tarballs are otherwise irrelevant to our workflow.)

To make it clear that the release tags come from upstream's repository, they should be prefixed with upstream/, which would preferably result in a DEP-14 compliant scheme. (Unfortunately, since upstream's release tags begin with py-lmdb_, this doesn't quite match the pattern that DEP-14 recommends.)

Here is how the local packaging repository is initialized. Note that git clone isn't used, so that we can customize how the tags are fetched. Instead, we create an empty Git repository and add the upstream repository as the upstream remote. The --no-tags option is used so that git fetch does not import the remote's tags. However, we also add a custom fetch refspec refs/tags/*:refs/tags/upstream/* so that the remote's tags are explicitly fetched, but with the upstream/ prefix.
$ mkdir py-lmdb
$ cd py-lmdb
$ git init
Initialized empty Git repository in /home/edmonds/debian/py-lmdb/.git/
$ git remote add --no-tags upstream https://github.com/dw/py-lmdb
$ git config --add remote.upstream.fetch 'refs/tags/*:refs/tags/upstream/*'
$ git fetch upstream
remote: Counting objects: 3336, done.
remote: Total 3336 (delta 0), reused 0 (delta 0), pack-reused 3336
Receiving objects: 100% (3336/3336), 2.15 MiB | 0 bytes/s, done.
Resolving deltas: 100% (1958/1958), done.
From https://github.com/dw/py-lmdb
 * [new branch]      master     -> upstream/master
 * [new branch]      release    -> upstream/release
 * [new branch]      win32-sparse-patch -> upstream/win32-sparse-patch
 * [new tag]         last-cython-version -> upstream/last-cython-version
 * [new tag]         py-lmdb_0.1 -> upstream/py-lmdb_0.1
 * [new tag]         py-lmdb_0.2 -> upstream/py-lmdb_0.2
 * [new tag]         py-lmdb_0.3 -> upstream/py-lmdb_0.3
 * [new tag]         py-lmdb_0.4 -> upstream/py-lmdb_0.4
 * [new tag]         py-lmdb_0.5 -> upstream/py-lmdb_0.5
 * [new tag]         py-lmdb_0.51 -> upstream/py-lmdb_0.51
 * [new tag]         py-lmdb_0.52 -> upstream/py-lmdb_0.52
 * [new tag]         py-lmdb_0.53 -> upstream/py-lmdb_0.53
 * [new tag]         py-lmdb_0.54 -> upstream/py-lmdb_0.54
 * [new tag]         py-lmdb_0.56 -> upstream/py-lmdb_0.56
 * [new tag]         py-lmdb_0.57 -> upstream/py-lmdb_0.57
 * [new tag]         py-lmdb_0.58 -> upstream/py-lmdb_0.58
 * [new tag]         py-lmdb_0.59 -> upstream/py-lmdb_0.59
 * [new tag]         py-lmdb_0.60 -> upstream/py-lmdb_0.60
 * [new tag]         py-lmdb_0.61 -> upstream/py-lmdb_0.61
 * [new tag]         py-lmdb_0.62 -> upstream/py-lmdb_0.62
 * [new tag]         py-lmdb_0.63 -> upstream/py-lmdb_0.63
 * [new tag]         py-lmdb_0.64 -> upstream/py-lmdb_0.64
 * [new tag]         py-lmdb_0.65 -> upstream/py-lmdb_0.65
 * [new tag]         py-lmdb_0.66 -> upstream/py-lmdb_0.66
 * [new tag]         py-lmdb_0.67 -> upstream/py-lmdb_0.67
 * [new tag]         py-lmdb_0.68 -> upstream/py-lmdb_0.68
 * [new tag]         py-lmdb_0.69 -> upstream/py-lmdb_0.69
 * [new tag]         py-lmdb_0.70 -> upstream/py-lmdb_0.70
 * [new tag]         py-lmdb_0.71 -> upstream/py-lmdb_0.71
 * [new tag]         py-lmdb_0.72 -> upstream/py-lmdb_0.72
 * [new tag]         py-lmdb_0.73 -> upstream/py-lmdb_0.73
 * [new tag]         py-lmdb_0.74 -> upstream/py-lmdb_0.74
 * [new tag]         py-lmdb_0.75 -> upstream/py-lmdb_0.75
 * [new tag]         py-lmdb_0.76 -> upstream/py-lmdb_0.76
 * [new tag]         py-lmdb_0.77 -> upstream/py-lmdb_0.77
 * [new tag]         py-lmdb_0.78 -> upstream/py-lmdb_0.78
 * [new tag]         py-lmdb_0.79 -> upstream/py-lmdb_0.79
 * [new tag]         py-lmdb_0.80 -> upstream/py-lmdb_0.80
 * [new tag]         py-lmdb_0.81 -> upstream/py-lmdb_0.81
 * [new tag]         py-lmdb_0.82 -> upstream/py-lmdb_0.82
 * [new tag]         py-lmdb_0.83 -> upstream/py-lmdb_0.83
 * [new tag]         py-lmdb_0.84 -> upstream/py-lmdb_0.84
 * [new tag]         py-lmdb_0.85 -> upstream/py-lmdb_0.85
 * [new tag]         py-lmdb_0.86 -> upstream/py-lmdb_0.86
$
Note that at this point we have content from the upstream remote in our local repository, but we don't have any local branches:
$ git status
On branch master
Initial commit
nothing to commit (create/copy files and use "git add" to track)
$ git branch -a
  remotes/upstream/master
  remotes/upstream/release
  remotes/upstream/win32-sparse-patch
$
We will use the DEP-14 naming scheme for the packaging branches, so the branch for packages targeted at unstable will be called debian/sid. Since I already made an initial 0.84-1 upload, we need to start the debian/sid branch from the upstream 0.84 tag and import the original packaging content from that upload. The --no-track flag is passed to git checkout initially so that Git doesn't consider the upstream release tag upstream/py-lmdb_0.84 to be the upstream branch for our packaging branch.
$ git checkout --no-track -b debian/sid upstream/py-lmdb_0.84
Switched to a new branch 'debian/sid'
$
At this point I imported the original packaging content for 0.84-1 with git am. Then, I signed the debian/0.84-1 tag:
$ git tag -s -m 'Debian release 0.84-1' debian/0.84-1
$ git verify-tag debian/0.84-1
gpg: Signature made Sat 04 Jul 2015 02:49:42 PM EDT using RSA key ID AAF6CDAE
gpg: Good signature from "Robert Edmonds <edmonds@mycre.ws>" [ultimate]
gpg:                 aka "Robert Edmonds <edmonds@fsi.io>" [ultimate]
gpg:                 aka "Robert Edmonds <edmonds@debian.org>" [ultimate]
$
New upstream releases are integrated by fetching new upstream tags and non-fast-forward merging into the packaging branch. The latest release is 0.86, so we merge from the upstream/py-lmdb_0.86 tag.
$ git fetch upstream --dry-run
[...]
$ git fetch upstream
[...]
$ git checkout debian/sid
Already on 'debian/sid'
$ git merge --no-ff --no-edit upstream/py-lmdb_0.86
Merge made by the 'recursive' strategy.
 ChangeLog                        46 ++++++++++++++
 docs/index.rst                   46 +++++++++++++-
 docs/themes/acid/layout.html      4 +-
 examples/dirtybench-gdbm.py       6 ++
 examples/dirtybench.py           19 ++++++
 examples/nastybench.py           18 ++++--
 examples/parabench.py             6 ++
 lib/lmdb.h                       37 ++++++-----
 lib/mdb.c                       281 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------
 lib/midl.c                        2 +-
 lib/midl.h                        2 +-
 lib/py-lmdb/preload.h            48 ++++++++++++++
 lmdb/__init__.py                  2 +-
 lmdb/cffi.py                    120 ++++++++++++++++++++++++-----------
 lmdb/cpython.c                   86 +++++++++++++++++++------
 lmdb/tool.py                      5 +-
 misc/gdb.commands                21 ++++++
 misc/runtests-travisci.sh         3 +-
 misc/runtests-ubuntu-12-04.sh    28 ++++----
 setup.py                          2 +
 tests/crash_test.py              22 +++++++
 tests/cursor_test.py             37 +++++++++++
 tests/env_test.py                73 +++++++++++++++++++++
 tests/testlib.py                 14 +++-
 tests/txn_test.py                20 ++++++
 25 files changed, 773 insertions(+), 175 deletions(-)
 create mode 100644 lib/py-lmdb/preload.h
 create mode 100644 misc/gdb.commands
$
Here I did some additional development work like editing the debian/gbp.conf file and applying a fix for #790738 to make the package build reproducibly. The package is now ready for an 0.86-1 upload, so I ran the following gbp dch command:
$ gbp dch --release --auto --new-version=0.86-1 --commit
gbp:info: Found tag for topmost changelog version '6bdbb56c04571fe2d5d22aa0287ab0dc83959de5'
gbp:info: Continuing from commit '6bdbb56c04571fe2d5d22aa0287ab0dc83959de5'
gbp:info: Changelog has been committed for version 0.86-1
$
This automatically generates a changelog entry for 0.86-1, but it includes commit summaries for all of the upstream commits since the last release, which I had to edit out. Then, I used gbp buildpackage with BUILDER=pbuilder to build the package in a clean, up-to-date sid chroot. After checking the result, I signed the debian/0.86-1 tag:
$ git tag -s -m 'Debian release 0.86-1' debian/0.86-1
$
The package is now ready to be pushed to git.debian.org. First, a bare repository is initialized:
$ ssh git.debian.org
edmonds@moszumanska:~$ cd /srv/git.debian.org/git/pkg-db/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ umask 002
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ mkdir py-lmdb.git
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ cd py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ git --bare init --shared
Initialized empty shared Git repository in /srv/git.debian.org/git/pkg-db/py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ echo 'py-lmdb Debian packaging' > description
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ mv hooks/post-update.sample hooks/post-update
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ chmod a+x hooks/post-update
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ logout
Shared connection to git.debian.org closed.
Then, we add a new debian remote to our local packaging repository. Per our repository conventions, we need to ensure that only branch names matching debian/* and pristine-tar, and tag names matching debian/* and upstream/*, are pushed to the debian remote when we run git push debian, so we add a set of remote.debian.push refspecs that correspond to these conventions. We also add an explicit remote.debian.fetch refspec to fetch tags.
$ git remote add debian ssh://git.debian.org/git/pkg-db/py-lmdb.git
$ git config --add remote.debian.push 'refs/tags/debian/*'
$ git config --add remote.debian.push 'refs/tags/upstream/*'
$ git config --add remote.debian.push 'refs/heads/debian/*'
$ git config --add remote.debian.push 'refs/heads/pristine-tar'
$ git config --add remote.debian.fetch 'refs/tags/*:refs/tags/*'
We now run the initial push to the remote Git repository. The --set-upstream option is used so that our local branches will be configured to track the corresponding remote branches. Also note that the debian/* and upstream/* tags are pushed as well.
$ git push debian --set-upstream
Counting objects: 3333, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (1083/1083), done.
Writing objects: 100% (3333/3333), 1.37 MiB | 0 bytes/s, done.
Total 3333 (delta 2231), reused 3314 (delta 2218)
To ssh://git.debian.org/git/pkg-db/py-lmdb.git
 * [new branch]      pristine-tar -> pristine-tar
 * [new branch]      debian/sid -> debian/sid
 * [new tag]         debian/0.84-1 -> debian/0.84-1
 * [new tag]         debian/0.86-1 -> debian/0.86-1
 * [new tag]         upstream/last-cython-version -> upstream/last-cython-version
 * [new tag]         upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 * [new tag]         upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 * [new tag]         upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 * [new tag]         upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 * [new tag]         upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 * [new tag]         upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 * [new tag]         upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 * [new tag]         upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 * [new tag]         upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 * [new tag]         upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 * [new tag]         upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 * [new tag]         upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 * [new tag]         upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 * [new tag]         upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 * [new tag]         upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 * [new tag]         upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 * [new tag]         upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 * [new tag]         upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 * [new tag]         upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 * [new tag]         upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 * [new tag]         upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 * [new tag]         upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 * [new tag]         upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 * [new tag]         upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 * [new tag]         upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 * [new tag]         upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 * [new tag]         upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 * [new tag]         upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 * [new tag]         upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 * [new tag]         upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 * [new tag]         upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 * [new tag]         upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 * [new tag]         upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 * [new tag]         upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 * [new tag]         upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 * [new tag]         upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 * [new tag]         upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 * [new tag]         upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 * [new tag]         upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 * [new tag]         upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
Branch pristine-tar set up to track remote branch pristine-tar from debian.
Branch debian/sid set up to track remote branch debian/sid from debian.
$
After the initial push, we need to configure the remote repository so that clones will check out the debian/sid branch by default:
$ ssh git.debian.org
edmonds@moszumanska:~$ cd /srv/git.debian.org/git/pkg-db/py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ git symbolic-ref HEAD refs/heads/debian/sid
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ logout
Shared connection to git.debian.org closed.
We can check if there are any updates in upstream's Git repository with the following command:
$ git fetch upstream --dry-run -v
From https://github.com/dw/py-lmdb
 = [up to date]      master     -> upstream/master
 = [up to date]      release    -> upstream/release
 = [up to date]      win32-sparse-patch -> upstream/win32-sparse-patch
 = [up to date]      last-cython-version -> upstream/last-cython-version
 = [up to date]      py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      py-lmdb_0.86 -> upstream/py-lmdb_0.86
We can check if any co-maintainers have pushed updates to the git.debian.org repository with the following command:
$ git fetch debian --dry-run -v
From ssh://git.debian.org/git/pkg-db/py-lmdb
 = [up to date]      debian/sid -> debian/debian/sid
 = [up to date]      pristine-tar -> debian/pristine-tar
 = [up to date]      debian/0.84-1 -> debian/0.84-1
 = [up to date]      debian/0.86-1 -> debian/0.86-1
 = [up to date]      upstream/last-cython-version -> upstream/last-cython-version
 = [up to date]      upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
$
We can check if anything needs to be pushed from our local repository to the git.debian.org repository with the following command:
$ git push debian --dry-run -v
Pushing to ssh://git.debian.org/git/pkg-db/py-lmdb.git
To ssh://git.debian.org/git/pkg-db/py-lmdb.git
 = [up to date]      debian/sid -> debian/sid
 = [up to date]      pristine-tar -> pristine-tar
 = [up to date]      debian/0.84-1 -> debian/0.84-1
 = [up to date]      debian/0.86-1 -> debian/0.86-1
 = [up to date]      upstream/last-cython-version -> upstream/last-cython-version
 = [up to date]      upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
Everything up-to-date
Finally, in order to set up a fresh local clone of the git.debian.org repository that's configured like the local repository created above, we have to do the following:
$ git clone --origin debian ssh://git.debian.org/git/pkg-db/py-lmdb.git
Cloning into 'py-lmdb'...
remote: Counting objects: 3333, done.
remote: Compressing objects: 100% (1070/1070), done.
remote: Total 3333 (delta 2231), reused 3333 (delta 2231)
 Receiving objects: 100% (3333/3333), 1.37 MiB | 1.11 MiB/s, done.
Resolving deltas: 100% (2231/2231), done.
Checking connectivity... done.
$ cd py-lmdb
$ git remote add --no-tags upstream https://github.com/dw/py-lmdb
$ git config --add remote.upstream.fetch 'refs/tags/*:refs/tags/upstream/*'
$ git fetch upstream
remote: Counting objects: 56, done.
remote: Total 56 (delta 25), reused 25 (delta 25), pack-reused 31
Unpacking objects: 100% (56/56), done.
From https://github.com/dw/py-lmdb
 * [new branch]      master     -> upstream/master
 * [new branch]      release    -> upstream/release
 * [new branch]      win32-sparse-patch -> upstream/win32-sparse-patch
$ git branch --track pristine-tar debian/pristine-tar 
Branch pristine-tar set up to track remote branch pristine-tar from debian.
$ git config --add remote.debian.push 'refs/tags/debian/*'
$ git config --add remote.debian.push 'refs/tags/upstream/*'
$ git config --add remote.debian.push 'refs/heads/debian/*'
$ git config --add remote.debian.push 'refs/heads/pristine-tar'
$ git config --add remote.debian.fetch 'refs/tags/*:refs/tags/*'
$
This is a fair amount of effort beyond a simple git clone, though, so I wonder if anything can be done to optimize this.
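One low-tech way to cut that effort down is to wrap the setup steps above in a small script. The sketch below is purely illustrative: the script itself and the PKG variable are my own invention, while the individual commands mirror the py-lmdb example from the text.

```shell
#!/bin/sh
# Illustrative wrapper for the fresh-clone setup described above.
set -e

PKG=py-lmdb

git clone --origin debian "ssh://git.debian.org/git/pkg-db/${PKG}.git"
cd "${PKG}"

# Upstream remote: no automatic tag fetching, upstream tags namespaced.
git remote add --no-tags upstream "https://github.com/dw/${PKG}"
git config --add remote.upstream.fetch 'refs/tags/*:refs/tags/upstream/*'
git fetch upstream

# Track the pristine-tar branch from the Debian remote.
git branch --track pristine-tar debian/pristine-tar

# Push and fetch refspecs for the Debian remote.
for ref in 'refs/tags/debian/*' 'refs/tags/upstream/*' \
           'refs/heads/debian/*' 'refs/heads/pristine-tar'; do
    git config --add remote.debian.push "$ref"
done
git config --add remote.debian.fetch 'refs/tags/*:refs/tags/*'
```

Whether a private wrapper like this is worth maintaining, as opposed to better defaults in existing helpers such as gbp clone, is of course the open question.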

17 June 2015

Norbert Preining: Gaming: Portal

Ok, I have to admit, I sometimes do game, and recently I finished Portal. Quite old (released in 2007), but still lots of fun. I started playing it about one year ago, off and on, until I recently finished the last level. It took me about a year of playing to get through an actual playing time of about 10h; I guess you can see how much of an addict I am. I have never been a gamer, and I think there are only three sets of games I played for extended periods of time:
plus one more game, which got me hooked somehow. Hard-core board gamer that I am (I prefer playing real games with people, without a computer), I loved the Myst series for its crazy riddles, where solving them often needs a combination of logical thinking, recognizing patterns in images and sounds, and piecing together long lists of hints. This is something a normal board game cannot provide. From the Descent series I loved the complete freedom of movement: normal first-person shooters are just like humans running around, with a bit of jumping and crouching, but Descent gives you six degrees of freedom, which led to some people getting sick while watching me play. From the Civilization series I don't know what I liked in particular, but it got you involved and allowed you to play long rounds. After these sins of my youth, I hadn't played for a long, long time, until a happy coincidence (of being a Debian Developer) brought Steam onto my (Linux) machine together with a bunch of games I received for free. One of the games was Portal. Portal is in the style of the Myst games: one can place a pair of linked portals in various places, and by entering one of the portals, one leaves through the other. Using this, one has to solve loads of puzzles while evading being shot, dissolved in acid, crushed to death, etc. etc., with the only aim being to leave the underground station. Besides shooting these portals, there are some cubes that one can carry around and use for a variety of purposes, like putting them onto buttons, using them as stairs, or protecting yourself from being shot. But that's already all the tools one has. Despite this, the levels pose increasingly difficult problems, and one is surprised what strange things one can achieve with these limited abilities; and no, one cannot buy new power-ups, it's not WoW. Logical thinking, tactics, and a certain level of reaction suffice. While not as philosophical as Myst, it was still a lot of fun. The only thing I am a bit unclear about is where to go from here.
There are two possible successors: the logical one would be Portal 2. But I recently found a game that reminded me even more of the Myst series, combined with Portal: The Talos Principle, with stunning graphics. It is filled with riddles again, maybe not as involved as in the Myst series (I don't know by now), but still a bit more challenging than Portal's.
Difficult decision. If you have any other suggestions, please let me know!

4 May 2015

Lunar: Reproducible builds: first week in Stretch cycle

Debian Jessie was released on April 25th, 2015. This has opened the Stretch development cycle. Reactions to the idea of making Debian build reproducibly have been pretty enthusiastic. As the pace is now likely to be even faster, let's see if we can keep everyone up-to-date on the developments. Before the release of Jessie: The story goes back a long way, but a formal announcement to the project was only sent in February 2015. Since then, too much work has happened to make a complete report, but to give some highlights: Lunar did a pretty improvised lightning talk during the Mini-DebConf in Lyon. This past week: It seems changes were piling up behind the curtains, given the amount of activity that happened in just one week. Toolchain fixes: We also rebased the experimental version of debhelper twice to merge the latest set of changes. Lunar submitted a patch to add a -creation-date option to genisoimage. Reiner Herrmann opened #783938 to request making -notimestamp the default behavior for javadoc. Juan Picca submitted a patch to add a --use-date flag to texi2html.
Packages fixed The following packages became reproducible due to changes of their build dependencies: apport, batctl, cil, commons-math3, devscripts, disruptor, ehcache, ftphs, gtk2hs-buildtools, haskell-abstract-deque, haskell-abstract-par, haskell-acid-state, haskell-adjunctions, haskell-aeson, haskell-aeson-pretty, haskell-alut, haskell-ansi-terminal, haskell-async, haskell-attoparsec, haskell-augeas, haskell-auto-update, haskell-binary-conduit, haskell-hscurses, jsch, ledgersmb, libapache2-mod-auth-mellon, libarchive-tar-wrapper-perl, libbusiness-onlinepayment-payflowpro-perl, libcapture-tiny-perl, libchi-perl, libcommons-codec-java, libconfig-model-itself-perl, libconfig-model-tester-perl, libcpan-perl-releases-perl, libcrypt-unixcrypt-perl, libdatetime-timezone-perl, libdbd-firebird-perl, libdbix-class-resultset-recursiveupdate-perl, libdbix-profile-perl, libdevel-cover-perl, libdevel-ptkdb-perl, libfile-tail-perl, libfinance-quote-perl, libformat-human-bytes-perl, libgtk2-perl, libhibernate-validator-java, libimage-exiftool-perl, libjson-perl, liblinux-prctl-perl, liblog-any-perl, libmail-imapclient-perl, libmocked-perl, libmodule-build-xsutil-perl, libmodule-extractuse-perl, libmodule-signature-perl, libmoosex-simpleconfig-perl, libmoox-handlesvia-perl, libnet-frame-layer-ipv6-perl, libnet-openssh-perl, libnumber-format-perl, libobject-id-perl, libpackage-pkg-perl, libpdf-fdf-simple-perl, libpod-webserver-perl, libpoe-component-pubsub-perl, libregexp-grammars-perl, libreply-perl, libscalar-defer-perl, libsereal-encoder-perl, libspreadsheet-read-perl, libspring-java, libsql-abstract-more-perl, libsvn-class-perl, libtemplate-plugin-gravatar-perl, libterm-progressbar-perl, libterm-shellui-perl, libtest-dir-perl, libtest-log4perl-perl, libtext-context-eitherside-perl, libtime-warp-perl, libtree-simple-perl, libwww-shorten-simple-perl, libwx-perl-processstream-perl, libxml-filter-xslt-perl, libxml-writer-string-perl, libyaml-tiny-perl, mupen64plus-core, nmap, 
openssl, pkg-perl-tools, quodlibet, r-cran-rjags, r-cran-rjson, r-cran-sn, r-cran-statmod, ruby-nokogiri, sezpoz, skksearch, slurm-llnl, stellarium. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which did not make their way to the archive yet: Improvements to reproducible.debian.net: Mattia Rizzolo has been working on compressing logs using gzip to save disk space. The web server uncompresses them on the fly for clients which do not accept gzip content. Mattia Rizzolo worked on a new page listing various breakage: missing or bad debbindiff output, missing build logs, unavailable build dependencies. Holger Levsen added a new execution environment to run debbindiff using dependencies from testing. This is required for packages built with GHC, as the compiler only understands interfaces built by the same version. debbindiff development: Version 17 has been uploaded to unstable. It now supports comparing ISO9660 images and dictzip files, and should compare identical files much faster. Documentation update: Various small updates and fixes to the pages about PDF produced by LaTeX, DVI produced by LaTeX, static libraries, Javadoc, PE binaries, and Epydoc. Package reviews: Known issues have been tagged when known to be deterministic, as some might unfortunately not show up on every single build. For example, two new issues have been identified by building with one timezone in April and one in May: RD and help2man add the current month and year to the documentation they are producing. 1162 packages have been removed and 774 have been added in the past week. Most of them are the work of proper automated investigation done by Chris West. Summer of code: Finally, we learned that both akira and Dhole were accepted for this Google Summer of Code. Let's welcome them! They have until May 25th before coding officially begins.
Now is the good time to help them feel more comfortable by sharing all these little bits of knowledge on how Debian works.
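The month-and-year issue mentioned above is easy to reproduce in miniature. The toy build function below is purely an illustration, not any project's actual code: it stamps its output with the build date, and only becomes reproducible once that date is pinned via the SOURCE_DATE_EPOCH environment variable, the convention the reproducible-builds effort specified for exactly this purpose.

```shell
#!/bin/sh
# Toy "documentation build" that, like help2man, embeds the month and
# year; two unpinned builds in different months would therefore differ.
build() {
    # Honour SOURCE_DATE_EPOCH if set; fall back to the wall clock.
    epoch="${SOURCE_DATE_EPOCH:-$(date +%s)}"
    # GNU date: format a fixed epoch instead of "now".
    printf '.TH EXAMPLE 1 "%s"\n' "$(date -u -d "@${epoch}" '+%B %Y')"
}

# Pinning the date makes the output byte-for-byte identical.
SOURCE_DATE_EPOCH=1430438400   # 2015-05-01 00:00:00 UTC
export SOURCE_DATE_EPOCH
a="$(build)"
b="$(build)"
[ "$a" = "$b" ] && echo "reproducible"
```

The same idea, applied inside the real toolchain, is what patches like the texi2html --use-date flag above are after.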

28 April 2015

Gunnar Wolf: Bestest birthday ever

Bestest birthday ever
That's all I need to enjoy the bestest party ever. Oh! Shall I mention that we got a beautiful present for the kids from our very dear DebConf official Laminatrix! Photos not yet available, but will provide soon.

28 February 2015

Gunnar Wolf: Welcome to the world, little ones!

Welcome to the world, little ones!
Welcome, little babies! Yesterday night, we entered the hospital. Nervous, heavy, and... Well, would we ever be ready? As ready as we could be. A couple of hours later, Alan and Elena Wolf Daichman became individuals in their own right. As is often the case with twins, they were brought into this world after a relatively short preparation (34 weeks, that's about 7.5 months). At 1.820 and 1.980 kg, they are considerably smaller than either of the parents... But we will be working on that! Regina is recovering from the operation, and the babies are under observation. As far as we were told, they seem to be quite healthy, with just minor issues to work on during neonatal care. We are waiting for our doctors to come today and allow us to spend time with them. And as for us... It's a shocking change to finally see the so-long-expected babies. We are very, very, very happy... And the new reality is hard to grasp, to even begin to understand :) PS- Many people have told me that my blog often errors out under load. I expect it to happen today :) So, if you cannot do it here, there are many other ways to contact us. Use them! :)

9 December 2014

Joey Hess: podcasts that don't suck, 2014 edition

Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars. PS: A nice podcatcher for the technically inclined is git-annex importfeed, featuring a list of feeds in a text file and distributed podcatching!
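For the curious, the workflow Joey alludes to can be sketched roughly as follows; the file name feeds.txt and the repository path are my assumptions, not part of his setup.

```shell
#!/bin/sh
# Sketch: distributed podcatching with git-annex importfeed.
# Assumes an existing git-annex repository and a feeds.txt containing
# one podcast feed URL per line.
cd ~/podcasts
xargs git annex importfeed < feeds.txt
# On another machine, "git annex sync" propagates the metadata and
# "git annex get" fetches the episodes you want; that is the
# "distributed" part.
```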

20 November 2014

Gunnar Wolf: UNAM. Viva México, viva en paz.

UNAM. Viva México, viva en paz.
We have had terrible months in Mexico; I don't know how much has appeared about our country in the international media. The latest incidents started in the last days of September, when 43 students at a school for rural teachers were forcefully disappeared (in our Latin American countries, this means they were taken by force and no authority can yet prove whether they are alive or dead; forceful disappearance is one of the saddest and most recognized traits of the brutal military dictatorships South America had in the 1970s) in the Iguala region (Guerrero state, in the south of the country), and three were killed on site. An Army regiment was stationed a few blocks from there and refused to help. And yes, we live in a country where (incredibly) this news by itself would not seem so unheard of... But in this case, there is ample evidence they were taken by the local police forces, not by a gang of (assumed) wrongdoers. And they were handed over to a very violent gang afterwards. Several weeks later, with far from a thorough investigation, we were told they were killed, burnt, and thrown into a river. The Iguala city mayor ran away, and was later captured, but it's not clear why he was captured at two different places. The Guerrero state governor resigned and a new governor was appointed. But this was not the result of a single person behaving far from what their voters would expect: it's a symptom of a broken society where policemen will kill when so ordered, where military personnel will look away when pointed to the obvious, where the drug dealers have captured vast regions of the country in which they are stronger than the formal powers. And then, instead of dealing with the issue personally as everybody would expect, the president goes on a commercial mission to China. Oh, to fix some issues with a building company. That, coincidentally or not, was selling a super-luxury house to his wife.
A house that she, several days later, decided to sell because it was tarnishing her family's honor and image. And while the president is in China, the person who dealt with the social pressure and told us about the probable (but not proven!) horrible crime, where the "bad guys", for some strange and yet unknown reason (even with tens of them captured already), decided to kill and burn and dissolve and disappear 43 future rural teachers, presents his version, and finishes his speech saying that "I'm already tired of this topic". Of course, our University is known for its solidarity with social causes; students in our different schools are the first activists in many protests, and we have had a very tense time, as the protests are at home here at the university. This last weekend, supposed policemen entered our main campus with a stupid, unbelievable argument (they were looking for a phone reported as stolen three days earlier), got into an argument with some students, and ended up firing shots at them; one student was wounded in the leg. And the university is now almost under siege: there are policemen surrounding us. We are working as usual, and will most likely finish the semester normally, but the intimidation (in a country where seeing a policeman is practically never a good sign) is strong. And... Oh, I could go on a lot. Things feel really desperate and out of place. Today I will join probably tens or hundreds of thousands of Mexicans sick of this simulation, sick of this violence, in a demonstration downtown. What will this achieve? Very little, if anything at all. But we cannot just sit here watching how things go from bad to worse. I do not accept living in a state of exception. So, this picture is just right: a bit over a month ago, two dear friends from Guadalajara came to visit, and we had a nice walk around the University. Our national university is not only huge, it's also beautiful and loaded with sights.
And being so close to home, it's our favorite place to show friends around. This is a fragment of the beautiful mural in the Central Library. And, yes, the University stands for "Viva México". And the University stands for "Peace". And we need it all. Desperately.

21 August 2014

Gunnar Wolf: Walking without crutches

Walking without crutches
I still consider myself a newbie teacher; I'm just starting my fourth semester. And yes, I really enjoy it. Now, how did I come to teaching? Well, my training has been mostly on stages at different conferences, more technical, more social, whatever. I have been giving ~10 talks a year for ~15 years, and I must have learnt something from that. Some good things, some bad habits. When giving presentations, the most usual technique is to prepare a set of slides to follow and support the ideas. And yes, that's what I did for my classes: since my first semester, I prepared a nice set of slides, thematically split into 17 files, with ~30 to ~110 pages each (yes, huge variation). Given the course spans 32 classes (64 hours, 2 hours per class), each slide set lasts for about two classes. But, yes, this tends to make the class much less dynamic, much more scripted, rigid, and... boring. From my feedback, I understand the students don't think I am a bad teacher, but still, I want to improve! So, today I was to give the introduction to memory management. An easy topic, with few diagrams and numbers, mostly talking about the intuitive parts of a set of functions. I started scribbling and shortening the main points on a piece of paper (yes, the one in the picture). I am sure I can get down to more reduction, but this does feel like an improvement! The class was quite successful. I didn't present 100% of the material (which is one of the reasons I cling to my presentations: I don't want to skip important material), and at some point I do feel I was going a bit in circles. However, Operating Systems is a very intuitive subject, and getting the students to sketch by themselves the answers that describe the working of real operating systems was a very pleasant experience! Of course, when I use my slides I do try to make the class as interactive and collaborative as possible, but it is often unfeasible when I'm following a script.
Today I was able to go along with the group's questions and find my way back to the outline I prepared. I don't think I'll completely abandon my slides, especially for subjects which include many diagrams or pictures. But I'll try to keep this alternative closer to my mind.

15 July 2014

Mario Lang

Mixing vinyl again. The turntables have me back, after quite a long mixing break. I used to do straight 4-to-the-floor, mostly acid or hardtek. You can find an old mix of mine on SoundCloud; this one is actually from back in 2006. But currently I am more into drum and bass. It is an interesting mixing experience, since it is considerably harder. Here is a small but very recent minimix. Experts in the genre might notice that I am mostly spinning stuff from BlackOutMusicNL, admittedly my favourite label right now.

30 June 2014

Russ Allbery: Review: The Knowledge

Review: The Knowledge, by Lewis Dartnell
Publisher: Penguin
Copyright: 2014
ISBN: 0-698-15165-8
Format: Kindle
Pages: 328
The cover pitch for The Knowledge is that technological civilization has collapsed. A war, an asteroid, a pandemic, or some other catastrophic event has ended life as we know it. Survivors have recovered enough to stabilize the population, but all the industry and infrastructure that supports the modern world has been crippled and the knowledge lost. What do they need to know to rebuild the world? (The subtitle, How to Rebuild Our World from Scratch, is deceptive. Dartnell spends a lot of time talking about what could be scavenged from the ruins after an event like a pandemic that leaves buildings and tools intact.) This brief description may bring several incorrect ideas to mind, so a few clarifications are in order. First, this is not a book about how to survive an apocalypse, or how to stabilize society afterwards. Dartnell skips over anything related to basic survival or politics, and this book would only be of limited interest to preppers. Rather, it is focused on the key concepts and key processes that underlie much of what makes up our civilization, and how those processes can be bootstrapped and recreated, hopefully much more quickly than the thousands of years they took to develop the first time around. Second, as much as I think Dartnell would have liked it to be, The Knowledge is not an instruction manual. There's just too much involved in modern machinery, metallurgy, chemistry, and other necessary science and engineering. What it is instead is a tour of the most important ideas, basic processes, and approaches that can lead to creation of an industrial economy. This means it's also a tour of the most critical technology and discoveries that we rely on for day-to-day living. Each idea presented here is a bare sketch, and would require extensive research and experimentation to put into practice. The Knowledge is not a scientific encyclopedia. 
Rather, it's a collection of signposts and suggested directions along with some suggestions about sequencing, with special emphasis on the ideas that were unintuitive or surprising, or which took humans much longer to stumble across than would have been necessary. I will say up front that I am quite dubious about the stated goal of this book. I'm not sure if it would be of significant assistance to people who found themselves in the circumstances that Dartnell postulates, and I'm also not sure it would be as difficult as Dartnell supposes to reconstruct these details from a good library, although the point is well-taken that most modern industrial techniques would be unavailable to a civilization starting from scratch and many historic techniques are no longer practiced and therefore not well-documented. But this book does not have to succeed at its stated goal to be worth reading. The Knowledge provides a fascinating insight into the way our current capabilities developed, other paths that development could have followed, and what building blocks are vital for the industry that we rely on. If you are like me, the first thing The Knowledge will remind you of is Minecraft, particularly with some of the mods (such as TerraFirmaCraft) that add more realistic detail to the technological bootstrapping. It's a treasure trove for anyone who might be working on that sort of mod or game. Dartnell tries to cover as much of the spectrum of technology as possible, from farming through medicine, from basic materials to industrial chemistry, and from electrical power through transportation. Each chapter necessarily covers only the highlights, the most vital technologies and the early priorities, and many of those are relatively vague pointers. But that's enough for a fascinating orientation. 
It's also full of memorable trivia: the huge variety of uses to which wood can be put, the difficulty in re-establishing production of industrial acids, the shelf life of gasoline, the global seed vault within the Arctic Circle, and the importance of a long-threaded screw for a lathe, just to name a few examples. In short, The Knowledge may or may not be a practical aid in rebuilding the world, but it's a lot of fun to read and full of tidbits that are fun to think about or pass on to others. It's also a great orientation in applied technology. The Knowledge is particularly fascinating if you have any interest in the simplified recreation of technological and industrial processes, such as in Minecraft and similar games. For authors of Minecraft mods who are aiming for realism, it might even be required reading. I'd love to see more mods based heavily on the approaches described in this book. The Knowledge also conveys, better than any book or course I've seen, an intuitive feel for the amazing complexity and ingenuity underlying industrial technology. I pre-ordered a copy of this book as soon as I heard about it, and I was not disappointed. Despite my certainty that I would be one of the early casualties in any general social collapse, I thoroughly enjoyed reading it, and would like to read it again at some point. Recommended, albeit with the caveat that I do not have the background required to verify any of the facts in this book, and am assuming that Dartnell has done his research properly. Rating: 9 out of 10

2 June 2014

Russ Allbery: Review: Debt

Review: Debt, by David Graeber
Publisher: Melville House
Copyright: 2011, 2012
Printing: October 2012
ISBN: 1-61219-129-0
Format: Trade paperback
Pages: 453
You probably remember this story from introductory economics. Originally, people exchanged things through barter. I needed grain but had a cow. You had a farm but needed milk. I gave you milk for your grain. But this was tedious and awkward, and required finding the right combination of people who needed something the other person had on hand. Then, people invented money: small objects with an agreed-upon value that you could keep and use later when you needed something else. In other words, both a medium of exchange and a store of value. And that improved matters for a long time, although war-torn societies or societies that collapsed (the supposed "Dark Ages" example often raises its head) would sometimes "revert to barter" for some period of time until things were stable enough for currency to reappear. Following that was usually some story about the emergence of money based on credit: goldsmith shops that started a side business in storing other people's coins and giving them receipts, leading to the receipts trading like the coins since the shops were trustworthy, and from there into the Bank of England and eventually the fractional reserve banking system, fiat money, and all the modern machinery of finance. All of this was dated from the early Enlightenment and the development of modern finance in Europe. It's a very neat story, and it makes a great deal of intuitive sense. That barter system does feel like what you'd have to do without money, but one can immediately see how obnoxious and limiting it would be. These stories play into our intuitive sense of history: people started with emergent properties of the physical world (different people have different goods and different skills and want to exchange them) and then developed layers of abstraction on top of them, eventually leading to the sophistication of modern societies. However, David Graeber is not an economist. 
He's an anthropologist: the profession devoted to understanding how people actually do things rather than how we've reconstructed our history based on our modern perspective. And, as he points out, engagingly and comprehensively, in chapter two of Debt, there is no evidence that this story about economic history has anything whatsoever to do with reality. And quite a bit of evidence that it does not. As best as we've been able to determine, not only is there no society in the world that deals with routine, day-to-day needs like grain and milk through barter, there never has been such a society in human history. Instead, history and anthropology show that credit is, in a sense, older than currency: the earliest recorded economic transactions were built on a rich system of credit, but in the form of purchases from shopkeepers on credit, or credit clearinghouses through the local government or temple. Those credit records were often denominated in some standardized commodity (so many cattle, or so many bushels of grain), which could create the impression that the economy worked on barter. But the anthropological evidence indicates that this was more an accounting technique than a practical currency. People rarely brought the named commodity to the temple to pay off some debt. Rather, there was an agreed conversion from various other goods to that standardized commodity, and its primary purpose was consistent bookkeeping. Specie (minted coins) appears to come later, and to wax and wane throughout history depending on local circumstance. For example, Europe did not "revert to barter" after the collapse of the Roman Empire, but the inhabitants did stop using specie and returned to a system of credit and record-keeping... records that continued to be denominated in Roman currency, even though few people still used the actual coins. That's one fascinating observation with which Graeber begins this book. 
The other is the question of why we have such a strong social and moral belief that people must pay their debts. This moral belief is ever-present in discussions about the 2008 financial collapse, and more broadly in discussions about the modern economy, but it's not as obvious of a belief as it might appear on the surface. After all, the banking and investment system is founded on the principle that not everyone will repay their debts, and therefore lenders receive a risk premium based on the likelihood that the debt won't be repaid. But, despite building the possibility of non-repayment into the system, debt forgiveness or intentional default is almost unthinkable and considered a huge moral problem. To this, Graeber brings the perspective of historical anthropology: human societies have struggled with the problems of debt and repayment from the beginning of recorded history, and have attempted a wide variety of solutions to those problems, including massive debt forgiveness. The jubilee described in the Bible was not novel; rather, it reflected a common practice to keep abuses of debt under control in ancient Sumer. The subtitle of Debt is The First 5,000 Years, and this book is a historical survey. But Graeber puts off the history to first lay an intellectual groundwork for our understanding of debt, and I found that preliminary discussion extremely valuable. Most memorable was the way Graeber divides human economic relationships into communism (in the old sense, not the political sense), exchange, and hierarchy. Capitalism has consumed our economic analysis to such a degree that exchange is the only economic basis that gets much discussion, but the other two are both obvious and pervasive once Graeber points them out. Human civilization could not exist without all three. And Graeber also points out one aspect of exchange-based economics that had not previously occurred to me: it's the economic relationship that one creates with strangers. 
Debt has the unique characteristic that it can be discharged, at which point the relationship ceases. This has far more complex and far-reaching moral and social implications than one might initially realize, and Graeber did a wonderful job opening my eyes to some of the subtleties. Debt is clearly a scholarly work, but Graeber's writing is clear and engaging. I found most of this book to be surprisingly easy reading. The hardest going was Graeber's discussion of societies that use a form of currency to arrange relationships between people (marriages, births, and deaths, primarily), but not day-to-day economic transactions. I suspect this area is closer to Graeber's areas of personal research and field work, which resulted in more technical detail. I'm still not sure I completely grasp the principles that Graeber was trying to communicate. But I was struck by the observation of alienation's role in turning a human being into a commodity, and how that links with debt's role as the economic transaction one has with strangers. Graeber covers slavery only glancingly, but makes some memorable points about the use of violence to rip someone out of their social context, and how that is necessary in human cultures before humans can be reduced to a commodity. There's a lot here, and I've only scratched the surface. I haven't mentioned, for example, the fascinatingly elegant theory that coining money and then requiring taxes be paid in the same money is a simple and highly effective way of funding armies, an explanation for specie that is largely unproven but that I find more compelling than the ones I've previously heard. Approaching debt from an anthropological instead of economic perspective is surprisingly enlightening. 
Debt is primarily a historical and cultural discussion rather than a set of proposed solutions, but Graeber does effectively show that debt as a moral obligation is not an unquestionable moral stance, but rather has a long history as one side of a two-sided political debate. I also came away from this book more conscious of the social implications and costs of debt-structured interactions, and wanting to push more of the language of debt out of my day-to-day dealings. Graeber is well known as one of the supporters of Occupy Wall Street, and Debt, while a well-defended academic work, certainly does advocate a position. But the academic analysis is more prominent than the advocacy, and I found his positions well-defended and well-argued. I do need to give the caveat that I don't have the anthropological background to distinguish the statements from Graeber that are well-established common knowledge in anthropology from the ones that are more controversial, and I would be a bit leery of taking this book as the final word on the topic. But it fully deserves its popularity and reputation as a thought-provoking and valuable contribution to the conversation. It's another book that I want to re-read someday to digest further, and the sort of book whose observations keep occurring to me in subsequent discussions or news stories. If you're at all interested in the way in which we construct the morality around economics and debt, I think this is a book that you should read. It's thoughtful, challenging, and surprising, and it passes my acid test for books of this sort with flying colors: after reading it, you realize that many things are more complicated, more historical, and less novel than you had originally thought. Highly recommended. Rating: 9 out of 10

27 May 2014

Gunnar Wolf: On how tech enthusiasts become tech detractors

On how tech enthusiasts become tech detractors
As is often the case, the Saturday Morning Breakfast Cereal webcomic (http://smbc-comics.com/) gets it right. And I cannot help but share today's comic. The picture explains it much better than I ever could.

15 February 2014

Gunnar Wolf: Like a Lord and Lady, with my dearest passions...

Like a Lord and Lady, with my dearest passions...
For those of you who didn't yet know it: My mother is a painter. A serious, professional, respected painter. But she sometimes goes over to the funny side as well. Of course, with all due professionalism! So, she gave us this great gift: she took one of our pictures from DebConf12 (from the "Conference Dinner" night) and painted it. Real size, even! So, next time you come to our house, even if we are not around to greet you, we will be glad to welcome you to the Residence!

14 February 2014

Paul Tagliamonte: Introducing: Acid!

I've been hacking (on and off) on a small bit of code (written in Hy) called Acid. So, all of this is *really* fluid, and I'm going to change its API and code (it's currently just one massive hack), but I think the idea has kinda jelled enough for me to chat a bit about it. Acid is a DSL for writing event-driven Hy. Currently it's just using global events (that'll change soon), and it's a monumental hack (not sure how much I can do there), but it works pretty well. It's built on top of tulip / asyncio, which is an amazing new Python 3.4 standard library package for working with async code. Although asyncio is designed around network-based use cases, I've been trying to shoehorn an event system on top of it. It's going well so far. A basic Acid script looks something like:
(trip
  (on :startup
    (every 1 second (emit :clock-pulse nil)))
  (on :clock-pulse (print "."))
  (emit :startup nil))
which will print a dot (".") every second, forever. Something a bit more advanced (fetch the MBTA Red Line T information once a minute, forever):
(defn get-endpoint-url [line]
  (.format "http://developer.mbta.com/lib/rthr/{0}.json"
    (get {:red-line  "red"
          :blue-line "blue"} line)))
(trip
  ;; OK. Let's do some work with MBTA feeds.
  (on :update-feed
      ;; let's just update feeds on a cron.
      (print (.json (.get requests (get-endpoint-url event))))
      (emit :feed-updated event))
  (every 1 minute
         (emit :update-feed :red-line)))
Clearly, both of these examples are just as clear as their straight functional counterparts, but I'm keen to see what I can do with temporal recursion for long-running daemons. I think my first job will be porting snitch's codebase to Acid. Let me know if you have any ideas!

10 February 2014

Mario Lang: Neurofunkcasts

I have always loved Drum and Bass. In 2013 I rediscovered my love for Darkstep and Neurofunk, and found that these genres have developed quite a lot in the recent years. Some labels like Black Sun Empire and Evol Intent produce mixes/sets on a regular basis as podcasts these days. This article aggregates some neurofunk podcasts I like a lot, most recent first. Enjoy 33 hours and 57 minutes of fun with dark and energizing beats. Thanks to BSE Contrax and Evol Intent for providing such high quality sets. You can also see the Python source for the program that was used to generate this page.

2 February 2014

Gunnar Wolf: CuBox-i4Pro

CuBox-i4Pro
Somewhere back in August or September, I pre-ordered a CuBox-i: a nicely finished, completely hackable, and reasonably powerful ARM system, nicely packaged and meant to be hacked on. A sweet deal! There are four models (you can see the different models' specs here); I went for the top one and bought a CuBox-i4Pro. That means I have a nice little US$130 box with four ARMv7 cores, 2GB RAM, WiFi, and... well, all of its basic goodies and features. For some more details, look at the CuBox-i block diagram. I got it delivered by early January, and (with no real ARM experience on my side) I finally got to a point where I can, I believe, contribute something to its adoption/usage: how to get a basic Debian system installed and running on it. The ARM world is quite different from the x86 one: compatibility is much harder, the computing platform does not self-describe properly, and a kernel must first know the specifics of a subarchitecture before it can boot on it. Somewhere in the CuBox forums (or was it the IRC channel?) I learnt that the upstream Linux kernel does not yet boot on the i.MX6 chip (although support is rumored to be merged for the 3.14 release), so I am using both a kernel and a uBoot bootloader not built for (or by) Debian people. That aside, the result I will describe is a kosher Debian install. Yes, I know that my orthodox friends and family will say that 99% kosher is taref... but remember, I'm never ever that dogmatic. (Yeah, right!) Note that there is a prebuilt image you can run if you are so inclined: in the CuBox-i forums and wiki, you will find links to a pre-installed Debian image you can use... but I cannot advise doing so. First, it is IMO quite bloated (you need a 4GB card for a very basic Debian install? Seriously?). Second, it has a whole desktop environment (LXDE, if I recall correctly) and a whole set of packages I will probably not use in this little box. 
Third, there is a preinstalled user, and that's a no-no (user: debian, password: debian). But most importantly, fourth: it is a nightly build of the Testing (Jessie) suite... built back in December. So no: as a Debian Developer, it's not something we should recommend our users run! So, in the end, and after quite a bit of frustration due to my lack of knowledge, here is the list of steps I followed:
Using the CuBox
On the i2 and i4 models, you can use it either with a USB keyboard and an HDMI monitor, or via a serial console (the smaller models do not have a serial console). I don't have an HDMI monitor handy (only a projector), so I prefer to use the serial terminal. Important details to avoid frustration: the USB keyboard has to be connected to the lower USB port, or it will be ignored during the boot process. And make sure your serial terminal is configured not to use hardware flow control. Minicom is configured by default to use hardware flow control, so it was not sending any characters to the CuBox. ^A-O gets you to the Minicom configuration; select Serial port setup and disable it.
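If you'd rather bake these settings into Minicom's configuration than toggle them on every run, a defaults file can carry them. The following is a hedged sketch: the device name /dev/ttyUSB0 is an assumption for a typical USB-to-serial adapter, so adjust it to whatever your adapter shows up as.

```
# /etc/minicom/minirc.dfl (system-wide) or ~/.minirc.dfl (per-user)
pu port        /dev/ttyUSB0
pu baudrate    115200
pu bits        8
pu parity      N
pu rtscts      No
```

With `pu rtscts No`, hardware flow control is off from the start, and the serial console accepts keystrokes immediately.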
Set up the SD card
I created a 2GB partition, but much less can suffice; I'd make it at least 1GB for the base install, although it can be smaller once the system is set up (more on this later). Partition and format it using your usual tools (fdisk+mke2fs, gparted, or whatever suits your style).
Install the bootloader
I followed the instructions on this CuBox-i forums thread to get the SPL and uBoot bootloader running. In short: from this Google Drive folder, download the SPL-U-Boot.img.xz file, uncompress it (xz --decompress SPL-U-Boot.img.xz), and write it to the SD card just after the partition map. As root,
# dd if=SPL-U-Boot.img of=/dev/mmcblk0 bs=1024 seek=1.
Actually, to be honest: as I wanted something basic to debug from, I downloaded (from the same Google Drive) the busybox.img.xz file. That's a bit easier to install: xz --decompress busybox.img.xz, then just dump it onto the SD card from the very beginning (as it already includes a partition table):
# dd if=busybox.img of=/dev/mmcblk0
This card is already bootable and minimal, and allows you to debug some bits from the CuBox-i itself (as we will see shortly).
After this step, I created a second partition, as I said earlier. So my mmcblk0p1 partition holds Busybox, and the second will hold Debian. We are still working from the x86 system, so we mount the SD card's second partition at /media/mmcblk0p2.
Installing the base system
Without debian-installer to do the heavy lifting, I went for debootstrap. As I ran it from my PC, debootstrap's role in this first stage is only to download and do a very initial pre-unpacking of the files. Bootstrapping a foreign architecture means, of course, using the --foreign switch:
debootstrap --foreign --arch=armhf wheezy /media/mmcblk0p2 http://http.debian.net/debian
You can add some packages you often use by specifying --include=foo,bar,baz
So, take note: this board is capable of running the armhf architecture (HF for Hard Float, i.e. hardware floating point). It can also run armel, but I understand it is way slower.
First boot (with busybox)
So, once debootstrap finishes, you are good to go to the real hardware! Unmount the SD card, put it in the little guy, plug your favorite console in (I'm using the serial port), and plug the power in! You should immediately see something like:
U-Boot SPL 2013.10-rc4-gd05c5c7-dirty (Jan 12 2014 - 02:18:28)
Boot Device: SD1
reading u-boot.img
Load image from RAW...
U-Boot 2013.10-rc4-gd05c5c7-dirty (Jan 12 2014 - 02:18:28)
CPU: Freescale i.MX6Q rev1.2 at 792 MHz
Reset cause: POR
Board: MX6-CuBox-i
DRAM: 2 GiB
MMC: FSL_SDHC: 0
In: serial
Out: vga
Err: vga
Net: phydev = 0x0
Phy not found
PHY reset timed out
FEC
(Re)start USB...
USB0: USB EHCI 1.00
scanning bus 0 for devices... 1 USB Device(s) found
scanning usb for storage devices... 0 Storage Device(s) found
scanning usb for ethernet devices... 0 Ethernet Device(s) found
Hit any key to stop autoboot: 3

Let it boot (that means, don't stop autoboot), and you will soon see a familiar #, showing you are root in the busybox environment. Great! Now, mount the Debian partition:
# mount /dev/mmcblk0p2 /mnt
Finishing debootstrap's task
With everything in place, it's time for debootstrap to work. Chroot into the Debian partition:
# chroot /mnt
And ask Debootstrap to finish what it started:
# debootstrap --second-stage
Be patient, as this step takes quite a while to finish.
Some extra touches...
After this is done, your Debian system is almost ready to be booted into. Why almost? Because it still does not have any users, does not know its own name, and does not know I want to use it via a serial terminal. Three very simple tasks to fix. The first two:
# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
# echo cubox-i.gwolf.org > /etc/hostname

For the third one, add a line to /etc/inittab specifying the details of the serial console. You can just do this:
# echo 'T0:23:respawn:/sbin/getty -L ttymxc0 115200 vt100' >> /etc/inittab
Boot into Debian!
So, ready to boot Debian? Ok, first exit the chroot shell, to go back to the Busybox shell, unmount the Debian partition, and set the root partition read-only:
# exit
# umount /mnt
# mount / -o remount,ro

Disconnect and reconnect the power, and this time, do interrupt the boot process when you see the Hit any key to stop autoboot prompt. To see the configuration of uBoot, you can type printenv. We will only modify the parameters given to the kernel:
CuBox-i U-Boot > setenv root /dev/mmcblk0p2 rootfstype=ext3 ro rootwait
CuBox-i U-Boot > boot

So, the kernel will load, and a minimal Debian system will be initialized. In my case, I get the following output:
** File not found /boot/busyEnv.txt **
4703740 bytes read in 390 ms (11.5 MiB/s)
## Booting kernel from Legacy Image at 10000000 ...
Image Name: Linux-3.0.35-8
Image Type: ARM Linux Kernel Image (uncompressed)
Data Size: 4703676 Bytes = 4.5 MiB
Load Address: 10008000
Entry Point: 10008000
Verifying Checksum ... OK
Loading Kernel Image ... OK
Starting kernel ...
Unable to get enet.0 clock
pwm-backlight pwm-backlight.0: unable to request PWM for backlight
pwm-backlight pwm-backlight.1: unable to request PWM for backlight
_regulator_get: get() with no identifier
mxc_sdc_fb mxc_sdc_fb.2: NO mxc display driver found!
INIT: version 2.88 booting
[info] Using makefile-style concurrent boot in runlevel S.
[....] Starting the hotplug events dispatcher: udevd. ok
[....] Synthesizing the initial hotplug events...done.
[....] Waiting for /dev to be fully populated...done.
[....] Activating swap...done.
[....] Cleaning up temporary files... /tmp. ok
[....] Activating lvm and md swap...done.
[....] Checking file systems...fsck from util-linux 2.20.1
done.
[....] Mounting local filesystems...done.
[....] Activating swapfile swap...done.
[....] Cleaning up temporary files.... ok
[....] Setting kernel variables ...done.
[....] Configuring network interfaces...done.
[....] Cleaning up temporary files.... ok
[....] Setting up X socket directories... /tmp/.X11-unix /tmp/.ICE-unix. ok
INIT: Entering runlevel: 2
[info] Using makefile-style concurrent boot in runlevel 2.
[....] Starting enhanced syslogd: rsyslogd. ok
[....] Starting periodic command scheduler: cron. ok
Debian GNU/Linux 7 cubox-i.gwolf.org ttymxc0
cubox-i login:

And that's it, the system is live and ready for my commands!
So, how big is this minimal installed Debian system? I cheated a bit here, as I had already added emacs and screen to the system, so yours will be a bit smaller. But anyway, let's clear our cache of downloaded packages and look at the disk usage information:
root@cubox-i:~# apt-get clean
root@cubox-i:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs         1008M  347M  611M  37% /
/dev/root      1008M  347M  611M  37% /
devtmpfs        881M     0  881M   0% /dev
tmpfs           177M  132K  177M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           353M     0  353M   0% /run/shm

So, instead of a 4GB install, we have a 350MB one. Great improvement! Now, let's get it to do something useful, in a most Debianic way!

23 January 2014

Gunnar Wolf: Ligatured iceweasel

Ligatured iceweasel
I am not (yet?) reporting this as a bug, as it happened in a session that had been open for several days, and just while I was upgrading my Sid system after a long time without doing so (probably since before the vacations started... in December 2013). But I cannot avoid sharing this interesting screenshot. Of course, this does not happen in other browsers. And AFAICT it only happens while reading the Debian Policy (either online or locally, even after recoding it to UTF-8). Funniest thing, the Debian Policy specifies no Javascript, no stylesheets at all... (Hey, and FWIW... why is the online copy of the Debian Policy still in iso-8859-1? It's not 1995 anymore...) [update] Of course, it's the default font, not only the Debian Policy. Just as an example, the following text:
<html><body><p>Ufffiii flat different!</p></body></html>
Yields the following output: [update 2] And, of course, after finishing the update process... I got a new version of Iceweasel. Restarted it, and everything is back to normal :-)
