Search Results: "Vincent Sanders"

18 March 2017

Vincent Sanders: A rose by any other name would smell as sweet

Often I end up dealing with code that works but might not be of the highest quality. While quality is subjective, I like to use the idea of "code smell" to convey what I mean: a set of indicators that, taken together, help to identify code that might benefit from some improvement.

Such smells may include:
I am most certainly not alone in using this approach, and Fowler et al. have covered the subject in the literature much better than I can here. One point I will raise, though, is that some programmers dismiss code exhibiting these traits as "legacy" and immediately suggest a fresh implementation. Opinions on when a rewrite is the appropriate solution range from never to always, but in my experience making the old, working code smell nice is almost always less effort and risk than a rewrite.
Tests

When I come across smelly code and decide it is worthwhile improving, I often discover the biggest smell is a lack of test coverage. Do remember this is just one code smell and on its own might not be indicative; my experience is that smelly code seldom has effective test coverage while fresh code often does.

Test coverage is generally understood to be the percentage of source code lines and decision paths used when instrumented code is exercised by a set of tests. Like many metrics developer tools produce, "coverage percentage" is often misused by managers as a proxy for code quality. Both Fowler and Marick have written about this; suffice to say that for a developer test coverage is a useful tool but should not be misapplied.

Although refactoring without tests is possible, the chances of unintended consequences are proportionally higher. I often approach such a refactor by enumerating all the callers and constructing a description of the used interface beforehand, then checking that the interface is not broken by the refactor. At that point it is probably worth writing a unit test to automate the checks.

Because of this I have changed my approach to such refactoring to start by ensuring there is at least basic API code coverage. This may not yield the fashionable 85% coverage target but is useful and may be extended later if desired.

It is widely known and equally widely ignored that for maximum effectiveness unit tests must be run frequently and developers take action to rectify failures promptly. A test that is not being run or acted upon is a waste of resources both to implement and maintain which might be better spent elsewhere.

For projects I contribute to frequently I try to ensure that the CI system is running the coverage target, and hence the unit tests, which automatically ensures any test breaking changes will be highlighted promptly. I believe the slight extra overhead of executing the instrumented tests is repaid by having the coverage metrics available to the developers to aid in spotting areas with inadequate tests.
Example

A short example will help illustrate my point. When a web browser receives an object over HTTP the server can supply a MIME type in a Content-Type header that helps the browser interpret the resource. However this metadata is often problematic (sorry, that should read "a misleading lie") so the actual content must be examined to get a better answer for the user. This is known as MIME sniffing and of course there is a living specification.

The source code that provides this API (linked rather than included for brevity) has a few smells:
While some of these are obvious, spotting the non-use of the global string table and the unnecessary API complexity needed detailed knowledge of the codebase, which just highlights how subjective the sniff test can be. There is also one huge air freshener in all of this, which definitely comes from experience, and that is the module's author: their name at the top of this would ordinarily be cause for me to move on, but I needed an example!

The first thing to check is the API use:

$ git grep -i -e mimesniff_compute_effective_type --or -e mimesniff_init --or -e mimesniff_fini
content/hlcache.c: error = mimesniff_compute_effective_type(handle, NULL, 0,
content/hlcache.c: error = mimesniff_compute_effective_type(handle,
content/hlcache.c: error = mimesniff_compute_effective_type(handle,
content/mimesniff.c:nserror mimesniff_init(void)
content/mimesniff.c:void mimesniff_fini(void)
content/mimesniff.c:nserror mimesniff_compute_effective_type(llcache_handle *handle,
content/mimesniff.h:nserror mimesniff_compute_effective_type(struct llcache_handle *handle,
content/mimesniff.h:nserror mimesniff_init(void);
content/mimesniff.h:void mimesniff_fini(void);
desktop/netsurf.c: ret = mimesniff_init();
desktop/netsurf.c: mimesniff_fini();

This immediately shows me that this API is used in only a very small area, this is often not the case but the general approach still applies.

After a little investigation the usage is effectively that the mimesniff_init API must be called before the mimesniff_compute_effective_type API and the mimesniff_fini releases the initialised resources.

A simple test case was added to cover the API, exercising the behaviour both when the init was called before the computation and when it was not, along with some simple tests for a limited number of well-behaved inputs.
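
The sketch below shows the shape of such a test. It is illustrative rather than the actual NetSurf test harness and assumes the declarations from content/mimesniff.h and the NetSurf nserror header are available; calls exercising mimesniff_compute_effective_type() with real low-level cache handles are omitted for brevity.

/* Illustrative API ordering check, not the real NetSurf unit test. */
#include <assert.h>

#include "utils/errors.h"      /* nserror and NSERROR_OK */
#include "content/mimesniff.h" /* the API under test */

int main(void)
{
        /* initialisation must succeed before any computation */
        assert(mimesniff_init() == NSERROR_OK);

        /* calls to mimesniff_compute_effective_type() with known inputs
         * and expected effective types would be exercised here */

        /* finalisation releases the initialised resources */
        mimesniff_fini();
        return 0;
}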

By changing to using the global string table the initialisation and finalisation API can be removed altogether along with a large amount of global context and pre-processor macros. This single change removes a lot of smell from the module and raises test coverage both because the global string table already has good coverage and because there are now many fewer lines and conditionals to check in the mimesniff module.

I stopped the refactor at this point but were this more than an example I probably would have:
Conclusion

The approach examined here reduces the smell of code in an incremental, testable way, improving the codebase going forward. This is mainly necessary on larger, complex codebases where technical debt and bit-rot are real issues that can quickly overwhelm a project if not kept in check.

This technique is subjective but helps a programmer to quantify and examine a piece of code in a structured fashion. However it is only a tool and should not be over-applied nor used as a proxy metric for code quality.

13 February 2017

Vincent Sanders: The minority yields to the majority!

Deng Xiaoping (who succeeded Mao) expounded this view and obviously did not depend on a minority to succeed. In open source software projects we often find ourselves implementing features of interest to a minority of users to keep our software relevant to a larger audience.

As previously mentioned I contribute to the NetSurf project, a browser which natively supports numerous toolkits on numerous platforms. This produces many development challenges in exchange for the benefits of a more diverse user base. As part of the recent NetSurf developer weekend we took the opportunity to review all the frontends to make a decision on their future sustainability.

Each of the nine frontend toolkits was reviewed in turn and the results of that discussion published. This task was greatly eased because we were able to hold the discussion face to face; over time I have come to the conclusion that some tasks in open source projects greatly benefit from this form of interaction.

Netsurf running on windows showing this blog post
Coding and the day to day discussions around it can be easily accommodated via IRC and email, but decisions affecting a large area of code are much easier with the subtleties of direct interpersonal communication. An example of this is our decision to abandon the cocoa frontend (the toolkit used on Mac OS X) contrasted with the decision to keep the windows frontend.

The cocoa frontend was implemented by Sven Weidauer in 2011. Unfortunately Sven did not continue contributing to this frontend afterwards and it has become the responsibility of the core team to maintain. Because NetSurf has a comprehensive CI system that compiles the master branch on every commit, any changes that negatively affected the cocoa frontend were immediately obvious.

Thus issues with compilation were fixed promptly, but these fixes were only ever compile tested, and at some point the Mac OS X build environment changed, resulting in an application that crashes when used. Despite repeatedly asking for assistance to fix the cocoa frontend over the last eighteen months no one has come forward.

When the topic was discussed amongst the developers it quickly became apparent that no one had any objections to removing the cocoa support. In contrast we decided to keep the windows frontend, despite it having many similar issues to cocoa. There was almost immediate consensus on both decisions, despite no individual advocating any particular position prior to the discussion.

This was a single example but it highlights the benefits of a disparate development team having a physical meeting from time to time. However this was not the main point I wanted to discuss: the incident also highlights that supporting a feature useful only to a minority of users can have a disproportionate cost.

The cost of a feature for an open source project is usually a collection of several factors:
Developer time
Arguably the greatest resource of a project is the time its developers can devote to it. Unless it is a very large, well supported project like the Kernel or libreoffice almost all developer time is voluntary.
Developer focus
Any given developer is likely to work on an area of code that interests them in preference to one that does not. This means if a developer must do work which does not interest them they may lose focus and not work on the project at all.
Developer skillset
A given developer may not have the skillset necessary to work on a feature, this is especially acute when considering minority platforms which often have very, very few skilled developers available.
Developer access
It should be obvious that software that only requires commodity hardware and software to develop is much cheaper than that which requires special hardware and software. To use our earlier example, the cocoa frontend required an Apple computer running Mac OS X to compile and test; this resource was very limited and the project only had access to two such systems via remote desktop. These systems also had to serve as CI builders and required physical system administration as they could not be virtualized.
Support
Once a project releases useful software it generally gains users outside of the developers. Supporting users consumes developer time and generally causes them to focus on things other than code that interests them.

While most developers have enough pride in what they produce to fix bugs, users must always remember that the main freedom they get from OSS is that they received the code and can change it themselves; there is no requirement for a developer to do anything for them.
Resources
A project requires a website, code repository, wiki, CI systems etc. which must all be paid for. NetSurf for example is fortunate to have Pepperfish look after our website hosting at favorable rates, Mythic Beasts provide exceptionally good rates for the CI system virtual machine along with hardware donations (our Apple Macs were donated by them) and Collabora provide physical hosting for our virtual machine server.

Despite these incredibly good deals the project still spends around 200gbp (250usd) a year on overheads, these services obviously benefit the whole project including minority platforms but are generally donated by users of the more popular platforms.
The benefits of a feature are similarly varied:
Developer learning
A developer may implement a feature to allow them to learn a new technology or skill
Project diversity
A feature may mean the project gets built in a new environment which reveals issues or opportunities in unconnected code. For example the Debian OS is built on a variety of hardware platforms and sometimes reveals issues in software by compiling it on big endian systems. These issues are often underlying bugs that are causing errors which are simply not observed on a little endian platform.
More users
Gaining users of the software is often a benefit and although most OSS developers are contributing for personal reasons having their work appreciated by others is often a factor. This might be seen as the other side of the support cost.

In the end the maintainers of a project often have to consider all of these factors and more to arrive at a decision about a feature, especially those only useful to a minority of users. Such decisions are rarely taken lightly as they often remove another developers work and the question is often what would I think about my contributions being discarded?

As a postscript, if anyone is willing to pay the costs to maintain the NetSurf cocoa frontend I have not removed the code just yet.

23 October 2016

Vincent Sanders: Rabbit of Caerbannog

Subsequent to my previous use of American Fuzzy Lop (AFL) on the NetSurf bitmap image library I applied it to the gif library which, after fixing the test runner, failed to produce any crashes but did result in a better test corpus improving coverage above 90%

I then turned my attention to the SVG processing library. This was different to the bitmap libraries in that it required parsing a much lower density text format and performing operations on the resulting tree representation.

The test program for the SVG library needed some improvement but is very basic in operation. It takes the test SVG, parses it using libsvgtiny and then uses the parsed output to write out an imagemagick mvg file.

The SVG processing uses the NetSurf DOM library, which in turn uses an expat binding to parse the SVG XML text. Processing this with AFL required instrumenting not only the SVG library but also the DOM library. I did not initially understand this and my first run resulted in a "map coverage" message indicating an issue. Helpfully the AFL docs do cover this so it was straightforward to rectify.

Once the test program was written and environment set up an AFL run was started and left to run. The next day I was somewhat alarmed to discover the fuzzer had made almost no progress and was running very slowly. I asked for help on the AFL mailing list and got a polite and helpful response, basically I needed to RTFM.

I must thank the members of the AFL mailing list for being so helpful and tolerating someone who ought to know better asking dumb questions.

After reading the fine manual I understood I needed to ensure all my test cases were as small as possible and further that the fuzzer needed a dictionary as a hint to the file format because the text file was of such low data density compared to binary formats.

Rabbit of Caerbannog. Death awaits you with pointy teeth
I crafted an SVG dictionary based on the XML one, ensured all the seed SVG files were as small as possible and tried again. The immediate result was thousands of crashes, nothing like being savaged by a rabbit to cause a surprise.
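
For illustration, an AFL dictionary is a plain text file of name="value" entries; the entries below show the kind of SVG tokens such a dictionary might contain rather than the exact dictionary I used.

# illustrative SVG dictionary entries in AFL's name="value" format
tag_svg="<svg"
tag_svg_close="</svg>"
tag_path="<path"
tag_rect="<rect"
attr_width=" width=\""
attr_height=" height=\""
attr_d=" d=\""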

Not being in possession of the appropriate holy hand grenade I resorted instead to GDB and electric fence. Unlike the bitmap library crashes, memory bounds issues simply did not feature here. Instead the crashes mainly centered around actual logic errors when constructing and traversing the data structures.

For example Daniel Silverstone fixed an interesting bug where the XML parser binding would try and go "above" the root node in the tree if the source closed more tags than it opened which resulted in wild pointers and NULL references.

I found and squashed several others including dealing with SVG which has no valid root element and division by zero errors when things like colour gradients have no points.

I find it interesting that the type and texture of the crashes completely changed between the SVG and binary formats. Perhaps it is just the nature of the textual formats that causes this, although it might be due to the techniques used to parse the formats.

Once all the immediately reproducible crashes were dealt with I performed a longer run. I used my monster system as previously described and ran the fuzzer for a whole week.

Summary stats
=============

Fuzzers alive : 10
Total run time : 68 days, 7 hours
Total execs : 9268 million
Cumulative speed : 15698 execs/sec
Pending paths : 0 faves, 2501 total
Pending per fuzzer : 0 faves, 250 total (on average)
Crashes found : 9 locally unique

After burning almost seventy days of processor time AFL found me another nine crashes and possibly more importantly a test corpus that generates over 90% coverage.

A useful tool that AFL provides is afl-cmin. This reduces the number of test files in a corpus to only those that are required to exercise all the code paths reached by the test set. In this case it reduced the number of files from 8242 to 2612

afl-cmin -i queue_all/ -o queue_cmin -- test_decode_svg @@ 1.0 /dev/null
corpus minimization tool for afl-fuzz by

[+] OK, 1447 tuples recorded.
[*] Obtaining traces for input files in 'queue_all/'...
Processing file 8242/8242...
[*] Sorting trace sets (this may take a while)...
[+] Found 23812 unique tuples across 8242 files.
[*] Finding best candidates for each tuple...
Processing file 8242/8242...
[*] Sorting candidate list (be patient)...
[*] Processing candidates and writing output files...
Processing tuple 23812/23812...
[+] Narrowed down to 2612 files, saved in 'queue_cmin'.

Additionally the actual information within the test files can be minimised with the afl-tmin tool. This must be run on each file individually and can take a relatively long time. Fortunately with GNU parallel one can run many of these jobs simultaneously which merely required another three days of CPU time to process. The resulting test corpus weighs in at a svelte 15 Megabytes or so against the 25 Megabytes before minimisation.
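
The invocation was along these lines, reusing the paths from the afl-cmin run above (the exact command may have differed):

$ mkdir queue_tmin
$ ls queue_cmin/ | parallel afl-tmin -i queue_cmin/{} -o queue_tmin/{} \
      -- ./test_decode_svg @@ 1.0 /dev/null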

The result is yet another NetSurf library significantly improved by the use of AFL both from finding and squashing crashing bugs and from having a greatly improved test corpus to allow future library changes with a high confidence there will not be any regressions.

11 October 2016

Vincent Sanders: The pine stays green in winter... wisdom in hardship.

In December 2015 I saw the kickstarter for the Pine64. The project seemed to have a viable hardware design and after my experience with the hikey I decided it could not be a great deal worse.

Pine64 board in my case design
The system I acquired comprises:
Hardware-based Kickstarter projects are susceptible to several issues and the usual suspects occurred, causing delays:
My personal view is that PINE 64 inc. handled it pretty well, much better than several other projects I have backed and as my Norman Douglas quotation suggests I think they have gained some wisdom from this.

I received my hardware at the beginning of April, only a couple of months after the initial estimated shipping date, which as these things go is not a huge delay. I understand some people who had slightly more complex orders were only just receiving them in late June, which is perhaps unfortunate but still well within kickstarter project norms.

As an aside: I fear that many people simply misunderstand the crowdfunding model for hardware projects and fail to understand that they are not buying a finished product, on the other side of the debate I think many projects need to learn expectation management much better than they do. Hyping the product to get interest is obviously the point of the crowdfunding platform, but over promising and under delivering always causes unhappy customers.

Pine64 board dimensions
Despite the delays in production and shipping the information available for the board was (and sadly remains) inadequate. As usual I wanted to case my board and there were no useful dimension drawings so I had to make my own from direct measurements together with a STL 3D model.

Also a mental sigh for "yet another poor form factor decision", meaning yet another special-case size and design. After putting together a design and fabricating it with the laser cutter I moved on to the software.

This is where, once again, the story turns bleak. We find a very pretty website but no obvious link to the software (hint: scroll to the bottom and find the "support" wiki link). Once you find the wiki you will eventually discover that the provided software is either an Android 5.1.1 image (which failed to start on my board) or relies on some random guy from the forums who has put together his own OS images using a hacked-up Allwinner Board Support Package (BSP) kernel.

Now please do not misunderstand me, I think the work by Simon Eisenmann (longsleep) to get a working kernel and Lenny Raposo to get viable OS images is outstanding and useful. I just feel that Allwinner and vendors like Pine64 Inc. should have provided something much, much better than they have. Even the efforts to get mainline support for this hardware are all completely volunteer community efforts and are making slow progress as a result.

Assuming I wanted to run a useful OS on this hardware and not just use it as a modern work of art I installed a basic Debian arm64 using Lenny Raposo's pine64 pro site downloads. I was going to use the system for compiling and builds so used the "Debian Base" image to get a minimal setup. After generating unique ssh keys, renaming the default user and checking all the passwords and permissions I convinced myself the system was reasonably trustworthy.

The standard Debian Jessie OS runs as expected with few surprises. The main concern I have is that there are a number of unpackaged scripts installed (prefixed with pine64_) which perform several operations from reporting system health (using sysfs entries) to upgrading the kernel and bootloader.

While I understand these scripts have been provided for novice users to reduce the support burden, doing even more of the vendor's job, I would much rather have had proper packages for these scripts, kernel and bootloader which apt could manage. This would have reduced image creation to a simple debootstrap, giving much greater confidence in the image's provenance.

The 3.10 based kernel is three years old at the time of writing and lacks a great number of features for the aarch64 ARM processors introduced since release. However I was pleasantly surprised at kvm apparently being available.

# dmesg | grep -i kvm
[ 7.592896] kvm [1]: Using HYP init bounce page @b87c4000
[ 7.593361] kvm [1]: interrupt-controller@1c84000 IRQ25
[ 7.593778] kvm [1]: timer IRQ27
[ 7.593801] kvm [1]: Hyp mode initialized successfully

I installed the libvirt packages (and hence all their dependencies like qemu) and created a bridge ready for the virtual machines.
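
On Debian Jessie that amounts to a stanza in /etc/network/interfaces along these lines (the physical interface name here is an assumption and the bridge-utils package is required):

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0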

I needed access to storage for the host disc images and while I could have gone the route of using USB attached SATA as with the hikey I decided to try and use network attached storage instead. Initially I investigated iSCSI but it seems the Linux target (iSCSI uses initiator for client and target for server) support is either old, broken or unpackaged.

I turned to network block device (nbd) which is packaged and seems to have reasonable stability out of the box on modern distributions. This appeared to work well, indeed over the gigabit Ethernet interface I managed to get a sustained 40 megabytes a second read and write rate in basic testing. This is better performance than a USB 2.0 attached SSD on the hikey
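
As a sketch of the setup (the export name, image path and hostname here are illustrative, not my actual configuration), the server side is a section in /etc/nbd-server/config and the client attaches the export to a block device:

# /etc/nbd-server/config on the storage host
[generic]
[pine64-guest]
    exportname = /srv/nbd/pine64-guest.img

# on the Pine64, attach the export as /dev/nbd0
$ nbd-client -N pine64-guest storage-host /dev/nbd0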

I fired up the guest and perhaps I should have known better than to expect a 3.10 vendor kernel to cope. The immediate hard crashes despite tuning many variables convinced me that virtualisation was not viable with this kernel.

So abandoning that approach I attempted to run the CI workload directly on the system. To my dismay this also proved problematic. The processor has the bad habit of throttling due to thermal issues (despite a substantial heatsink) and because the storage is network attached throttling the CPU also massively impacts I/O.

The limitations meant that the workload caused the system to move between high performance and almost no progress on a roughly ten second cycle. This caused a simple NetSurf recompile CI job to take over fifteen minutes. For comparison the same task takes the armhf builder (CubieTruck) four minutes, and a 64 bit x86 builder around a minute.

If the workload is tuned to a single core, which does not trip the thermal throttling, the build took seven minutes, which is almost identical to the existing single core virtual machine instance running on the hikey.

In conclusion the Pine64 is an interesting bit of hardware with a fatally flawed software offering. Without Simon and Lenny providing their builds to the community the device would be practically useless rather than just performing poorly. There appears to have been no progress whatsoever on the software offering from Pine64 in the six months since I received the device and no prospect of mainline Allwinner support for the SoC either.

Effectively I have spent around 50usd (40 for the board and 10 for the enclosure) on a failed experiment. Perhaps in the future the software will improve sufficiently for it to become useful but I do not hold out much hope that this will come from Pine64 themselves.

1 October 2016

Vincent Sanders: Paul Hollywood and the pistoris stone

There has been a great deal of comment among my friends recently about a particularly British cookery program called "The Great British Bake Off". There has been some controversy as the program is moving from the BBC to a commercial broadcaster.

Part of this discussion comes from all the presenters, excepting Paul Hollywood, declining to sign with the new broadcaster, and part from speculation that the BBC might continue with a similar format show under a new name.

Rob Kendrick provided the start to this conversation by passing on a satirical link suggesting Samuel L Jackson might host "cakes on a plane"

This caused a large number of suggestions for alternate names which I will be reporting, but Rob Kendrick, Vivek Das Mohapatra, Colin Watson, Jonathan McDowell, Oki Kuma, Dan Alderman, Dagfinn Ilmari Mannsåker, Lesley Mitchell and Daniel Silverstone are the ones to blame.




So that is our list, anyone else got better ideas?

25 September 2016

Vincent Sanders: I'll huff, and I'll puff, and I'll blow your house in

Sometimes it really helps to have a different view on a problem and after my recent writings on my Public Suffix List (PSL) library I was fortunate to receive a suggestion from my friend Enrico Zini.

I had asked for suggestions on reducing the size of the library further and Enrico simply suggested Huffman coding. This was a technique I had learned about long ago in connection with data compression and the intervening years had made all the details fuzzy which explains why it had not immediately sprung to mind.

A small subset of the Public Suffix List as stored within libnspsl

Huffman coding, named for David A. Huffman, is an algorithm that enables a very efficient representation of data. In a normal array of characters every character takes the same eight bits to represent, which is the best we can do when any of the 256 possible values is equally likely. If your data is not evenly distributed this is not the case; for example if the data were English text then a value is fifteen times more likely to be that for e than for k.

Every step of the huffman encoding tree build for the example string table

So if we have some data with a non-uniform distribution of probabilities we need a way for the data to be encoded with fewer bits for the common values and more bits for the rarer values. To be efficient we would need some way of having variable length representations without storing the length separately. The term for this data representation is a prefix code and there are several ways to generate them.

Such is the influence of Huffman on the area of prefix codes that they are often called Huffman codes even if they were not created using his algorithm. One can dream of becoming immortalised like this; to join the ranks of those whose names are given to units or whole ideas in a field must be immensely rewarding. However, given that Huffman invented his algorithm and proved it to be optimal to answer a question on a term paper in his early twenties, I fear I may already be a bit too late.

The algorithm itself is relatively straightforward. First a frequency analysis is performed, a fancy way of saying count how many of each character is in the input data. Next a binary tree is created by using a priority queue initialised with the nodes sorted by frequency.

The resulting huffman tree and the binary representation of the input symbols
The counts of the two least frequent items are summed together and a node placed in the tree with the two original entries as child nodes. This step is repeated until a single node exists with a count value equal to the length of the input.

To encode data one simply walks the tree, outputting a 0 for a left node or a 1 for a right node until reaching the original value. This generates a mapping of values to bit output; the input is then simply converted value by value to the bit output. To decode, the data is used bit by bit to walk the tree and arrive back at values.
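
As a minimal sketch of that decode walk (illustrative only, not the representation used in libnspsl), each node either holds two child indices or, at a leaf, the decoded symbol:

#include <stddef.h>
#include <stdint.h>

struct hnode {
        int16_t left;   /* node taken for a 0 bit, or -1 when this is a leaf */
        int16_t right;  /* node taken for a 1 bit */
        char    symbol; /* decoded value when this node is a leaf */
};

/* decode a single symbol by walking the tree bit by bit from the root */
static char huff_decode_one(const struct hnode *tree,
                            const uint8_t *bits, size_t *bitpos)
{
        int n = 0; /* start at the root */
        while (tree[n].left != -1) {
                int bit = (bits[*bitpos >> 3] >> (7 - (*bitpos & 7))) & 1;
                n = bit ? tree[n].right : tree[n].left;
                (*bitpos) += 1;
        }
        return tree[n].symbol;
}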

If we perform this algorithm on the example string table *!asiabvcomcoopitamazonawsarsaves-the-whalescomputebasilicata we can reduce the 488 bits (61 * 8 bit characters) to 282 bits or 40% reduction. Obviously in a real application the huffman tree would need to be stored which would probably exceed this saving but for larger data sets it is probable this technique would yield excellent results on this kind of data.

Once I proved this to myself I implemented the encoder within the existing conversion program. Although my perl encoder is not very efficient it can process the entire PSL string table (around six thousand labels using 40KB or so) in less than a second, so unless the table grows massively an inelegant approach will suffice.

The resulting bits were packed into 32bit values to improve decode performance (most systems prefer to deal with larger memory fetches less frequently) and resulted in 18KB of output or 47% of the original size. This is a great improvement in size and means the statically linked test program is now 59KB and is actually smaller than the gzipped source data.

$ ls -alh test_nspsl
-rwxr-xr-x 1 vince vince 59K Sep 25 23:58 test_nspsl
$ ls -al public_suffix_list.dat.gz
-rw-r--r-- 1 vince vince 62K Sep 1 08:52 public_suffix_list.dat.gz

To be clear the statically linked program can determine if a domain is in the PSL with no additional heap allocations and includes the entire PSL ordered tree, the domain label string table and the huffman decode table to read it.

An unexpected side effect is that because the decode loop is small it sits in the processor cache. This appears to cause the performance of the string comparison function huffcasecmp() (which is not locale dependent because we know the data will be limited ASCII) to be close to that of strcasecmp(); indeed on ARM32 systems there is a very modest improvement in performance.

I think this is as much work as I am willing to put into this library but I am pleased to have achieved a result which is on par with the best of breed (libpsl still has a data representation 20KB smaller than libnspsl but requires additional libraries for additional functionality) and I got to (re)learn an important algorithm too.

20 September 2016

Vincent Sanders: If I see an ending, I can work backward.

Now while I am sure Arthur Miller was referring to writing a play when he said those words they have an oddly appropriate resonance for my topic.

In the early nineties Lou Montulli applied the idea of magic cookies to HTTP to make the web stateful, I imagine he had no idea of the issues he was going to introduce for the future. Like most of the web technology it was a solution to an immediate problem which it has never been possible to subsequently improve.

Chocolate chip cookies are much tastier than HTTP cookies

The HTTP cookie is simply a way for a website to identify a connecting browser session so that state can be kept between retrieving pages. Due to shortcomings in the design of cookies and implementation details in browsers this has led to a selection of unwanted side effects. The specific issue that I am talking about here is the supercookie, where the super prefix in this context has similar connotations as when applied to the word villain.

Whenever the browser requests a resource (web page, image, etc.) the server may return a cookie along with the resource that your browser remembers. The cookie has a domain name associated with it and when your browser requests additional resources if the cookie domain matches the requested resources domain name the cookie is sent along with the request.

As an example the first time you visit a page on www.example.foo.invalid you might receive a cookie with the domain example.foo.invalid so next time you visit a page on www.example.foo.invalid your browser will send the cookie along. Indeed it will also send it along for any page on another.example.foo.invalid

A supercookie is simply one where instead of being limited to one sub-domain (example.foo.invalid) the cookie is set for a top level domain (foo.invalid) so visiting any such domain (I used the invalid name in my examples but one could substitute com or co.uk) your web browser gives out the cookie. Hackers would love to be able to set up such cookies and potentially control and hijack many sites at a time.

This problem was noted early on and browsers were not allowed to set cookie domains with fewer than two parts so example.invalid or example.com were allowed but invalid or com on their own were not. This works fine for top level domains like .com, .org and .mil but not for countries where the domain registrar had rules about second levels like the uk domain (uk domains must have a second level like .co.uk).

NetSurf cookie manager showing a supercookie

There is no way to generate the correct set of top level domains with an algorithm, so a database is required; it is called the Public Suffix List (PSL). This database is a simple text formatted list with wildcard and inversion syntax and is at the time of writing around 180Kb of text including comments, which compresses down to 60Kb or so with deflate.
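
A handful of entries illustrate the format; the wildcard and exception rules below are the canonical examples from the PSL documentation:

// comment lines start with a double slash
com
co.uk
// a wildcard rule: every label directly under ck is a public suffix
*.ck
// an exception rule: www.ck is registrable despite the wildcard above
!www.ck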

A few years ago with ICANN allowing the great expansion of top level domains the existing NetSurf supercookie handling was found to be wanting and I decided to implement a solution using the PSL. At this point in time the database was only 100Kb source or 40Kb compressed.

I started by looking at limited existing libraries. In fact only the regdom library was adequate but used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf heap usage significantly (we still have users on 8Mb systems). Because of this and the need to run PHP script to generate the pre-processed input it was decided the library was not suitable.

Lacking other choices I came up with my own implementation which used a perl script to construct a tree of domains from the PSL in a static array with the label strings in a separate table. At the time my implementation added 70Kb of read only data which I thought reasonable and allowed for direct lookup of answers from the database.

This solution still required a pre-processing step to generate the C source code but perl is much more readily available, is a language already used by our tooling and we could always simply ship the generated file. As long as the generated file was updated at release time as we already do for our fallback SSL certificate root set this would be acceptable.

Wireshark session showing NetSurf sending a co.uk supercookie to bbc.co.uk
I put the solution into NetSurf, was pleased no-one seemed to notice, and moved on to other issues. Recently, while fixing a completely unrelated issue in the display of session cookies in the management interface, I realised I had some test supercookies present in the display. After the initial "that's odd" I realised with horror there might be a deeper issue.

It quickly became evident the PSL generation was broken and had been for a long time, even worse somewhere along the line the "redundant" empty generated source file had been removed and the ancient fallback code path was all that had been used.

This issue had escalated somewhat from a trivial display problem. I took a moment to assess the situation a bit more broadly and came to the conclusion there were a number of interconnected causes, centered around the lack of automated testing, which could be solved by extracting the PSL handling into a "support" library.

NetSurf has several of these support libraries which could be used separately to the main browser project but are principally oriented towards it. These libraries are shipped and built in releases alongside the main browser codebase and mainly serve to make API more obvious and modular. In this case my main aim was to have the functionality segregated into a separate module which could be tested, updated and monitored directly by our CI system meaning the embarrassing failure I had found can never occur again.

Before creating my own library I did consider libpsl, a library which had been created since I wrote my original implementation. Initially I was very interested in using this library given it managed a data representation within a mere 32Kb.

Unfortunately the library integrates a great deal of IDN and punycode handling which was not required in this use case. NetSurf already has to handle IDN and punycode translations and uses punycode encoded domain names internally only translating to unicode representations for display so duplicating this functionality using other libraries requires a great deal of resource above the raw data representation.

I put the library together based on the existing code generator Perl program and integrated the test set that comes along with the PSL. I was a little alarmed to discover that the PSL had almost doubled in size since the implementation was originally written and now the trivial test program of the library was weighing in at a hefty 120Kb.

This stemmed from two main causes:
  1. there were now many more domain label strings to be stored
  2. there were now many, many more nodes in the tree.
To address the first cause, the length of each domain label string was moved into the unused padding space within its tree node, removing a byte from each domain label and saving 6Kb. Next it occurred to me that, while building the domain label string table, if the label to be added already existed as a substring within the table it could be elided.

The domain labels were sorted from longest to shortest and added in order, searching for substring matches as the table was built; this saved another 6Kb. I am sure there are ways to reduce this further that I have missed (if you see them let me know!) but a 25% saving (47Kb to 35Kb) was a good start.
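
The idea can be sketched like this (illustrative C rather than the actual Perl generator): with the labels added longest first, a later, shorter label is often already present as a substring and can simply reuse those bytes.

#include <stddef.h>
#include <string.h>

/* append label to the table unless it already appears as a substring,
 * returning its offset; the table must have room for label plus a NUL */
static size_t add_label(char *table, size_t *table_len, const char *label)
{
        size_t len = strlen(label);
        char *found;

        table[*table_len] = '\0'; /* keep the table searchable as a string */
        found = strstr(table, label);
        if (found != NULL)
                return (size_t)(found - table);

        memcpy(table + *table_len, label, len);
        *table_len += len;
        return *table_len - len;
}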

The second cause was a little harder to address. The structure representing nodes in the tree I started with was at first look reasonable.

struct pnode {
        uint16_t label_index; /* index into string table of label */
        uint16_t label_length; /* length of label */
        uint16_t child_node_index; /* index of first child node */
        uint16_t child_node_count; /* number of child nodes */
};

I examined the generated table and observed that the majority of nodes were leaf nodes (had no children), which makes sense given the type of data being represented. By allowing two types of node, one for labels and a second for the child node information, the node size would be halved in most cases, requiring only a modest change to the tree traversal code.

The only issue with this would be finding a way to indicate that a node has child information. It was realised that domain labels have a maximum length of 63 characters, meaning their length can be represented in six bits, so a uint16_t was excessive. The space was split into two uint8_t parts, one for the length and one for a flag to indicate a child data node follows.

union pnode {
        struct {
                uint16_t index; /* index into string table of label */
                uint8_t length; /* length of label */
                uint8_t has_children; /* the next table entry is a child node */
        } label;
        struct {
                uint16_t node_index; /* index of first child node */
                uint16_t node_count; /* number of child nodes */
        } child;
};

static const union pnode pnodes[8580] = {
        /* root entry */
        { .label = { 0, 0, 1 } }, { .child = { 2, 1553 } },
        /* entries 2 to 1794 */
        { .label = { 37, 2, 1 } }, { .child = { 1795, 6 } },

        ...

        /* entries 8577 to 8578 */
        { .label = { 31820, 6, 1 } }, { .child = { 8579, 1 } },
        /* entry 8579 */
        { .label = { 0, 1, 0 } },

};

This change reduced the node array size from 63Kb to 33Kb almost a 50% saving. I considered using bitfields to try and reduce the label length and has_children flag into a single byte but such packing will not reduce the length of a node below 32bits because it is unioned with the child structure.

A possibility of using the spare uint8_t derived by bitfield packing to store an additional label node in three other nodes was considered but added a great deal of complexity to node lookup and table construction for saving around 4Kb so was not incorporated.

With the changes incorporated the test program was a much more acceptable 75Kb, reasonably close to the size of the compressed source but with the benefit of direct lookup. Integrating the library's single API call into NetSurf was straightforward and resulted in correct operation when tested.

This episode just reminded me of the dangers of code that can fail silently. It exposed our users to a security problem that we thought had been addressed almost six years ago and squandered the limited resources of the project. Hopefully it is a lesson we will not have to learn again any time soon. If there is a positive to take away it is that the new implementation is more space efficient, automatically built and, importantly, tested.

22 August 2016

Vincent Sanders: Down the rabbit hole

My descent began with a user reporting a bug and I fear I am still on my way down.

Like Alice I headed down the hole. https://commons.wikimedia.org/wiki/File:Rabbit_burrow_entrance.jpg
The bug was simple enough, a windows bitmap file caused NetSurf to crash. Pretty quickly this was tracked down to the libnsbmp library attempting to decode the file. As to why we have a heavily used library for bitmaps? I am afraid they are part of every icon file and many websites still have favicons using that format.

Some time with a hex editor and the file format specification soon showed that the image in question was malformed and had a bad offset header entry. So I was faced with two issues, firstly that the decoder crashed when presented with badly encoded data and secondly that it failed to deal with incorrect header data.

This is typical of bug reports from real users: the obvious issues have already been encountered by the developers and unit tests written to prevent them; what remains is harder to produce. After a debugging session with Valgrind and electric fence I discovered the crash was actually caused by running off the front of an allocated block due to an incorrect bounds check. Fixing the bounds check was simple enough, as was working around the bad header value, and after adding a unit test for the issue I almost moved on.

Almost...

american fuzzy lop are almost as cute as cats https://commons.wikimedia.org/wiki/File:Rabbit_american_fuzzy_lop_buck_white.jpg
We already used the bitmap test suite of images to check the library decode, which was giving us a good 75% or so line coverage (I long ago added coverage testing to our CI system), but I wondered if there was a test set that might increase the coverage and perhaps exercise some more of the bounds checking code. A bit of searching turned up the american fuzzy lop (AFL) project's synthetic corpora of bmp and ico images.

After checking with the AFL authors that the images were usable in our project I added them to our test corpus and discovered a whole heap of trouble. After fixing more bounds checks and signed issues I finally had a library I was pretty sure was solid with over 85% test coverage.

Then I had the idea of actually running AFL on the library. I had been avoiding this because my previous experimentation with other fuzzing utilities had been utter frustration and very poor return on investment of time. Following the quick start guide looked straightforward enough so I thought I would spend a short amount of time and maybe I would learn a useful tool.

I downloaded the AFL source and built it with a simple make which was an encouraging start. The library was compiled in debug mode with AFL instrumentation simply by changing the compiler and linker environment variables.

$ LD=afl-gcc CC=afl-gcc AFL_HARDEN=1 make VARIANT=debug test
afl-cc 2.32b by <lcamtuf@google.com>
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: src/libnsbmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 751 locations (64-bit, hardened mode, ratio 100%).
AR: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/libnsbmp.a
COMPILE: test/decode_bmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 52 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: test/decode_ico.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 65 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_ico
afl-cc 2.32b by <lcamtuf@google.com>
Test bitmap decode
Tests:606 Pass:606 Error:0
Test icon decode
Tests:392 Pass:392 Error:0
TEST: Testing complete

I stuffed the AFL build directory on the end of my PATH, created a directory for the output and ran afl-fuzz

afl-fuzz -i test/bmp -o findings_dir -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null

The result was immediate and not a little worrying, within seconds there were crashes and lots of them! Over the next couple of hours I watched as the unique crash total climbed into the triple digits.

I was forced to abort the run at this point as, despite clear warnings in the AFL documentation of the demands of the tool, my laptop was clearly not cut out to do this kind of work and had become distressingly hot.

AFL has a visualisation tool so you can see what kind of progress it is making, which produced a graph showing just how fast it managed to produce crashes and how much the return plateaus after just a few cycles, although it was still finding a new unique crash every ten minutes or so when aborted.

I dove in to analyse the crashes and it immediately became obvious the main issue was caused when the test tool attempted allocations of absurdly large bitmaps. The browser itself uses a heuristic to determine the maximum image size based on used memory and several other values. I simply applied an upper bound of 48 megabytes per decoded image which fits easily within the fuzzer's default heap limit of 50 megabytes.
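
The check amounts to something like the following in the test tool's bitmap allocation callback (the names and exact placement here are hypothetical, not libnsbmp's API):

#include <stdint.h>
#include <stdlib.h>

#define MAX_IMAGE_BYTES (48u * 1024 * 1024) /* 48 megabyte decode limit */

static void *test_bitmap_create(int width, int height)
{
        /* refuse absurdly large decode buffers rather than letting the
         * fuzzer trip the memory limit */
        if ((uint64_t)width * (uint64_t)height * 4 > MAX_IMAGE_BYTES)
                return NULL;
        return calloc((size_t)width * (size_t)height, 4);
}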

The main source of "hangs" also came from large allocations, so once the test was fixed afl-fuzz was re-run with a timeout parameter set to 100ms. This time, after several minutes, no crashes and only a single hang were found, which came as a great relief; at which point my laptop had a hard shutdown due to a thermal event!

Once the laptop cooled down I spooled up a more appropriate system to perform this kind of work: a 24-way 2.1GHz Xeon system. A Debian Jessie guest VM with 20 processors and 20 gigabytes of memory was created and the build replicated and instrumented.

AFL master node display
To fully utilise this system the next test run would utilise AFL in parallel mode. In this mode there is a single "master" running all the deterministic checks and many "secondary" instances performing random tweaks.
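
Starting the herd looks like this; the instance names are arbitrary and the target path is abbreviated from the earlier build:

$ afl-fuzz -i test/bmp -o sync_dir -M fuzzer01 -- ./test_decode_bmp @@ /dev/null
$ afl-fuzz -i test/bmp -o sync_dir -S fuzzer02 -- ./test_decode_bmp @@ /dev/null
$ afl-fuzz -i test/bmp -o sync_dir -S fuzzer03 -- ./test_decode_bmp @@ /dev/null
  ... and so on for the remaining secondary instances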

If I have one tiny annoyance with AFL, it is that breeding and feeding a herd of rabbits by hand is tedious and something I would like to see a convenience utility for.

The warren was left overnight with 19 instances and by morning had generated crashes again. This time though the crashes actually appeared to be real failures.

$ afl-whatsup sync_dir/
Summary stats
=============

Fuzzers alive : 19
Total run time : 5 days, 12 hours
Total execs : 214 million
Cumulative speed : 8317 execs/sec
Pending paths : 0 faves, 542 total
Pending per fuzzer : 0 faves, 28 total (on average)
Crashes found : 554 locally unique

All the crashing test cases are available, and a simple file command immediately showed that the crashing test files had one thing in common: the height of the image was -2147483648. This seemingly odd number is actually meaningful to a programmer; it is the largest negative number which can be stored in a 32bit integer (INT32_MIN). I immediately examined the source code that processes the height in the image header.

if ((width <= 0) || (height == 0))
        return BMP_DATA_ERROR;

if (height < 0) {
        bmp->reversed = true;
        height = -height;
}

The bug is where the height is made a positive number: for this value it results in the height becoming 0 after the existing check for zero, causing a crash later in execution. A simple fix was applied and a test case added, removing the crash and any possible future failure due to this.
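
One way to close the hole is shown below; this is an illustrative sketch rather than the actual committed fix, rejecting a height of INT32_MIN (whose negation cannot be represented) before the sign is flipped.

if ((width <= 0) || (height == 0) || (height == INT32_MIN))
        return BMP_DATA_ERROR;

if (height < 0) {
        bmp->reversed = true;
        height = -height;
}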

Another AFL run has been started and after a few hours has yet to find a crash or a non-false-positive hang, so it looks like any remaining crashes are much harder to uncover.

Main lessons learned are:
I will of course be debugging any new crashes that occur and perhaps turning my sights to all the project's other unit tested libraries. I will also be investigating the generation of our own custom test corpus from AFL to replace the demo set, which will hopefully increase our unit test coverage even further.

Overall this has been my first successful use of a fuzzing tool and a very positive experience. I would wholeheartedly recommend using AFL to find errors and perhaps even integrate as part of a CI system.

13 March 2016

Vincent Sanders: I changed my mind, Erase and rewind

My recent rack design turned out to simply not be practical. It did not hold all the SBCs I needed it to and, most troubling, accessing connectors was impractical. I was forced to remove the enclosure from the rack and go back to piles of SBCs on a shelf.

View of the acrylic being laser cut through the heavily tinted window
This sent me back to the beginning of the design process. The requirement for easy access to connectors had been compromised on in my first solution because I wanted a compact 1U size. This time I returned to my initial toast rack layout but retained the SBCs inside their clip cases.

By facing the connectors downwards and providing basic cable management the design should be much more practical.

My design process is to use the QCAD package to create layered 2D outlines which are then converted from DXF into toolpaths with Lasercut CAM software. The toolpaths are then uploaded to the laser cutter directly from the PC running Lasercut.

Assembled sub rack enclosure

Despite the laser cutters being professional grade systems, the Lasercut software is a continuous cause of issues for many users; it is the only closed source piece of software in the production process and it has a pretty poor user interface. On this occasion my main issue was that my design was quite large at 700mm by 400mm, which caused the software to crash repeatedly. I broke the design down into two halves and this allowed me to continue.

Once I defeated the software the design was laser cut from 3mm clear extruded acrylic. The assembly is secured with 72 off M3 nuts and bolts. The resulting construction is very strong and probably contains much more material than necessary.

One interesting thing I discovered is that in going from a 1U enclosure holding 5 units to a 2U design holding 11 units I had increased the final weight from 320g to 980g and when all 11 SBC are installed that goes up to a whopping 2300g. Fortunately this is within the mechanical capabilities of the material but it is the heaviest thing I have ever constructed from 3mm acrylic.

Bolted into the rack and operating

Once installed in the rack with all SBCs inserted and connected this finally works and provides a practical solution. The shelf is finally clear of SBCs and has enough space for all the other systems I need to accommodate for various projects.

As usual the design files are all freely available though I really cannot see anyone else needing to replicate this.

1 March 2016

Vincent Sanders: Hope is tomorrow's veneer over today's disappointment.

Recently I have been very hopeful about the 96boards Hikey SBC and as Evan Esar predicted I have therefore been very disappointed. I was given a Hikey after a Linaro connect event some time ago by another developer who could not get the system working usefully and this is the tale of what followed.
The Standard Design
This board design was presented as Linaro creating a standard for the 64bit Single Board Computer (SBC) market. So I had expected that a project with such lofty goals would have considered many factors and provided at least as good a solution as the existing 32bit boards.

The lamentable hubris of creating a completely new form factor unfortunately sets a pattern for the whole enterprise. Given the aim towards "makers" I would have accepted that the system would not be a ATX PC size motherboard, but mini/micro/nano and pico ITX have been available for several years.

If opting for a smaller "credit card" form factor why not use one of the common ones that have already been defined by systems such as the Raspberry Pi B+? Instead now every 96Board requires special cases and different expansion boards.

Not content with defining their own form factor, the design also uses an 8-18V supply; this is the only SBC I own that is not fed from a 5V supply. I understand that a system might require more current than a micro USB connector can provide, but for example the Banana Pi manages with a DC barrel jack readily capable of delivering 25W, which would seem more than enough.

The new form factor forced the I/O connectors to be placed differently to other SBCs. Given the opportunity to concentrate all connectors on one edge (like ATX designs) and avoid the issues where two or three sides are used, the 96Boards design instead puts connectors on two edges, removing any possible benefit this might have given.

The actual I/O connectors specified are rather strange. There is a mandate for HDMI, removing the possibility of future display technology changes. The odd USB arrangement of two single sockets instead of a stacked pair seems to be an attempt to keep height down, but the expansion headers and CPU heatsink mean this is largely moot.

The biggest issue though is mandating WIFI but not Ethernet (even as an option), everything else in the design I could logically understand but this makes no sense. It means the design is not useful for many applications without adding USB devices.

Expansion is presented as a 2mm pitch DIL socket for "low speed" signals and a high density connector for "high speed" signals. The positioning and arrangement of these connectors proffered an opportunity to improve upon previous SBC designs which was not taken. The use of 2mm pitch and low voltage signals instead of the more traditional 2.54mm pitch 3.3v signals means that most maker type applications will need adapting from the popular Raspberry Pi and Arduino style designs.

In summary the design appears to have been a Linaro project to favour one of their members, which took a Hisilicon Android phone reference design and put it onto a board with no actual thought beyond getting it done, then afterwards attempted to turn that into a specification. This has simply not worked as an approach.

My personal opinion is that this specification is fatally flawed and is a direct cause of the bizarre situation where the "consumer" specification exists alongside the "enterprise" edition, which itself has an option of a microATX form factor anyhow!
The Implementation

If we ignore the specification appearing to be nothing more than a codification of the original HiKey design we can look at the HiKey as an implementation.

Initially the board required modifying to add headers to attach a USB to 1.8V LVTTL serial adaptor on the UART0 serial port. Once Andy Simpkins had made this change for me I was able to work through the instructions and attempt to install a bootloader and OS image.

The initial software was essentially HiSilicon vendor code using the Android fastboot system to configure booting. There was no source and the Ubuntu OS images were an obvious afterthought to the Android images. Just getting these images installed required a great deal of effort, repetition and debugging. It was such a dreadful experience that it signalled the commencement of one of the repeated hiatuses throughout this project; the allure of 64 bit ARM computing has its limits even for me.

When I returned to the project I attempted to use the system from the on-board eMMC but the pre-built binary-only kernel and OS image were very limited. Building a replacement kernel, or even modules for the existing one, proved fruitless and the system was dreadfully unstable.

I wanted to use the system as a builder for some Open Source projects but the system instability ruled this out. I considered attempting to use virtualisation, which would also give better isolation for builder systems. By using KVM to run a modern kernel and OS as a guest this would also avoid issues with the host system's limitations. At which point I discovered the system had no virtualisation enabled, apparently because the bootloader lacked support.
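
For anyone checking whether KVM is actually usable on a board like this, the quickest test I know is simply to look for the kernel's KVM messages and the device node it exposes to userspace; this is a generic sketch rather than anything HiKey specific:

dmesg | grep -i kvm        # the kernel reports whether KVM initialised (it needs to be entered in HYP/EL2 mode)
ls -l /dev/kvm             # the device node qemu and libvirt use; absent if virtualisation is unavailable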

In addition to these software issues there were hardware problems. Despite forcing the use of USB for all additional connectivity, the USB implementation was dreadful. For a start all USB peripherals have to run at the same speed! One cannot mix full speed (12Mbit) and high speed (480Mbit) devices, which makes adding a USB Ethernet and SATA device challenging when you cannot use a keyboard.

And, because I needed more challenges, only one of the USB root hubs was functional. In effect this made the console serial port critical, as it was the only reliable way to reconfigure the system without a keyboard or network link (and the WIFI was not reliable either).

After another long pause in proceedings I decided that I should house all the components together and that perhaps being out on my bench might be the cause of some instability. I purchased a powered Amazon basics USB 2 hub, an Ethernet adaptor and a USB 2 SATA interface in the hope of accessing some larger mass storage.

The USB hub power supply was 12V DC, which matched the HiKey requirements, so I worked out I could use a single 4A-capable supply and run a 3.5inch SATA hard drive too. I designed a laser cut enclosure and mounted all the components. As it turned out I only had a 2.5inch hard drive handy so the enclosure is a little oversize. If I were redoing this design I would attempt to make it fit in 1U of height and be mountable in a 19inch rack; instead it is an 83mm high (under 2U) box.

A new software release had also become available which purported to use a UEFI bootloader. I struggled, unsuccessfully, to install this version, a task made somewhat more problematic by the undocumented change from UART0 (unpopulated header) to UART3 on the low speed 2mm pitch header. The system seemed to start the kernel, which then panicked and hung whether booting from eMMC or SD card. Once again the project went on hold after tens of hours spent trying to make progress.
Third time's a charm

As the year rolled to a close I was once again persuaded to look at the HiKey. I followed the much improved instructions and installed the shiny new November software release, which appears to have been made for the re-release of the HiKey through LeMaker. This time I obtained a Debian "jessie" system that booted from the eMMC.

Having a booted system meant I could finally try and use it. I had decided to try and use the system to host virtual machines used as builders within the NetSurf CI system.

The basic OS uses a mixture of normal Debian packages with some replacements from Linaro repositories. I would have preferred to see more use of Debian packages, even if they were from the backports repositories, but on the positive side it is good to see the use of Debian instead of Ubuntu.

The kernel is a heavily patched 3.18 built in a predominantly monolithic (without modules) manner, with the usual exceptions such as the Mali and WIFI drivers (both of which appear to be binary blobs). The use of an SoC without mainline support means the standard generic distribution kernels cannot be used, and unless Linaro choose to distribute a kernel with the required features built in it is necessary to compile your own from sources.
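
For what it is worth, building a replacement kernel natively on an arm64 Debian system follows the usual pattern; the tree and configuration needed for the HiKey are of course the Linaro ones rather than mainline, so treat this only as a sketch of the general shape:

# from the top of the kernel source tree, building natively on arm64
make defconfig                      # or the board specific defconfig shipped with the tree
make -j4 Image modules dtbs
sudo make modules_install           # installing the Image and dtb is specific to the board's boot arrangement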

The default install has a linaro user, which I renamed to my own user, and I ensured all the ssh keys and passwords on the system were changed. This is an important step when using pre-supplied images like this, as a freshly booted system is often identical to every other copy.
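
The exact commands vary, but the shape of the change is roughly as follows; the new user name is whatever you choose, and these are my generic steps rather than anything from the Linaro instructions:

# rename the default account and move its home directory (run as root from another session)
usermod -l vince -d /home/vince -m linaro
groupmod -n vince linaro
passwd vince                        # set a fresh password

# regenerate the ssh host keys so this board is not identical to every other copy of the image
rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server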

To access mass storage my only option was via USB; indeed for any additional expansion that is the only choice. The first issue here is that the USB host support is compiled in, so when the host ports are initialised it is not possible to select a speed other than 12Mbit. The speed is changed to 480Mbit by using a user space application found in the user's home directory (why this is not a tool provided by a package and held in sbin I do not know).

When the usb_speed tool is run there is a chance that the previously enumerated devices will be rescanned and what was /dev/sda becomes /dev/sdb. If this happens there is a high probability that the system must be rebooted to prevent random crashes caused by the "zombie" device.

Because the speed change operation is unreliable it cannot safely be placed in the boot sequence, so it must be executed by hand on each boot to gain access to the mass storage.
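
My manual ritual on each boot amounted to something like the following; the path to the vendor tool is wherever the image left it, and the checks afterwards are simply there to catch the rescan problem before anything gets mounted:

~/usb_speed                         # vendor supplied tool, switches the host ports to 480Mbit
lsusb -t                            # confirm the devices re-enumerated at the higher speed
ls -l /dev/disk/by-id/              # check the SATA bridge kept the expected device name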

The NetSurf project already uses an x86_64 virtual host system which has an LVM physical volume from which we allocate logical volumes for each VM. I initially hoped to do the same with the HiKey, but as soon as I tried to use a logical volume with a VM the system locked up with nothing shown on the console. I did not really try very hard to discover why and instead simply used files on disc for the virtual drives, which seemed to work.
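
Roughly speaking the two approaches look like this; the volume group name, sizes and paths are illustrative rather than the real NetSurf ones:

# the usual approach on the x86_64 host: carve a logical volume out of the volume group
lvcreate -L 20G -n ciworker13 vg0

# the fallback that worked on the HiKey: a file backed disc image instead
qemu-img create -f qcow2 /srv/vm/ciworker13.qcow2 20G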

To provide reliable network access I used a USB attached Ethernet device. This, like the mass storage, suffered from unreliable enumeration and for similar reasons could not be automated, so the serial console had to be used manually to start the system.

Once the system was started I needed to install the guest VM. I had hoped I might be able to install locally from Debian install media using the libvirt tools, as I do for x86. After a great deal of trial and error I was finally forced to abandon this approach when I discovered the Linaro kernel lacks iso9660 support, so installing from standard media was not possible.
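
The missing iso9660 support is easy to confirm once you know to look for it; with a packaged kernel the configuration is usually installed under /boot, otherwise /proc/config.gz may be available:

grep -i iso9660 /boot/config-$(uname -r)    # CONFIG_ISO9660_FS should be =y or =m
modprobe isofs                              # fails if the filesystem was never built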

Instead I used the instructions provided by Leif Lindholm to create a virtual machine image on my PC and copied the result across. These instructions are great, except that I used version 2.5 of QEMU instead of 2.2, which had no negative effect. I also installed the Debian backports for Jessie to get an up to date 4.3 kernel.

After copying the image to the HiKey I started it by hand from the command line as a four core virtual machine and was successfully able to log in. The guest would operate for up to a day before stopping with output such as:

$
Message from syslogd@ciworker13 at Jan 29 07:45:28 ...
kernel:[68903.702501] BUG: soft lockup - CPU#0 stuck for 27s! [mv:24089]

Message from syslogd@ciworker13 at Jan 29 07:45:28 ...
kernel:[68976.958028] BUG: soft lockup - CPU#2 stuck for 74s! [swapper/2:0]

Message from syslogd@ciworker13 at Jan 29 07:47:39 ...
kernel:[69103.199724] BUG: soft lockup - CPU#3 stuck for 99s! [swapper/3:0]

Message from syslogd@ciworker13 at Jan 29 07:53:21 ...
kernel:[69140.321145] BUG: soft lockup - CPU#3 stuck for 30s! [rs:main Q:Reg:505]

Message from syslogd@ciworker13 at Jan 29 07:53:21 ...
kernel:[69192.880804] BUG: soft lockup - CPU#0 stuck for 21s! [jbd2/vda3-8:107]

Message from syslogd@ciworker13 at Jan 29 07:53:21 ...
kernel:[69444.805235] BUG: soft lockup - CPU#3 stuck for 22s! [swapper/3:0]

Message from syslogd@ciworker13 at Jan 29 07:55:21 ...
kernel:[69570.177600] BUG: soft lockup - CPU#1 stuck for 112s! [systemd:1]

Timeout, server 192.168.7.211 not responding.

After this output the host system would not respond and had to be power cycled, never mind the guest!

Once I changed to single core operation the system would run for some time until the host suffered from the dreaded kernel OOM killer. I was at a loss as to why the OOM killer was running, as the VM was only allocated half the physical memory (512MB), allowing the host what I presumed to be an adequate amount.

By adding a 512MB swapfile the host was able to push out the few hundred kilobytes it wanted to swap and the system became stable! The swapfile of course has to be enabled by hand, as the external storage is unreliable and unavailable at boot.
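
Setting up the swapfile is nothing unusual, the only wrinkle being that it has to wait until the USB attached storage is available; the path here is illustrative:

dd if=/dev/zero of=/srv/swapfile bs=1M count=512
chmod 600 /srv/swapfile
mkswap /srv/swapfile
swapon /srv/swapfile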

I converted the qemu command line to a libvirt configuration using the virsh tool:
virsh domxml-from-native qemu-argv cmdln.args
The converted configuration required manual editing to get a working system, but now I have a libvirt based VM guest I can control, along with all my other VMs, using the virt-manager tool.
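
For completeness, the round trip from the hand built command line to a managed guest looks roughly like this; ciworker13 is the guest seen in the logs above and the file names are simply my own:

virsh domxml-from-native qemu-argv cmdln.args > ciworker13.xml
# hand edit ciworker13.xml to fix up the devices, then register and start the guest
virsh define ciworker13.xml
virsh start ciworker13
virsh console ciworker13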

This system is now stable and has been in production use for a month at time of writing. The one guest VM is a single core 512MB aarch64 system which takes over 1100 seconds (19 minutes) to do what a Banana Pi 2 dual core 1GB memory 32bit native ARM system manages in 300 seconds.

It seems the single core limited memory system with USB SATA attached storage is very, very slow.

I briefly attempted to run the CI system job natively within the host system, but within minutes it crashed hard and required a power cycle to recover; it had also broken the UEFI boot. I must thank Leif for walking me through recovering the system, otherwise I would have needed to start over.
Conclusions

I must stress that these conclusions and observations are my own and do not represent those of anyone else.

My main conclusions are:

This project has taken almost a year to get to its final state and has been one of the least enjoyable I have undertaken in that time. The only reason I have a running system at the end of it is sheer bloody-mindedness: after spending hundreds of hours of my free time I was not prepared to see it all go to waste.

To be fair, the road I travelled is now much smoother and, if the application is suited to having a mobile phone without a screen, then the HiKey probably works as a solution. For me, however, the HiKey product with its current hardware and software limitations is not something I would recommend in preference to other options.

21 February 2016

Vincent Sanders: Stack 'em, pack 'em and rack 'em.

As you may be aware I have a bit of a problem with Single Board Computers in that I have a lot of them. Keeping them organised has turned into a bit of a problem.

cluttered shelf of SBC
I designed clip cases for many of these systems giving me a higher storage density on my rack shelves and built a power supply to reduce the cabling complexity. These helped but I still ended up with a cluttered shelf full of SBC.

I decided I would make a rack enclosure to hold the SBC. I was restricted to material I could easily CNC machine, which limited me to acrylic plastics or wood.

laser cutting the design, viewed through heavily tinted filter
Initially I started with the idea of housing the individual boards in a toast rack arrangement. This would mean that the enclosure would have to be at least 2U high to fit the boards and all the existing cases would have to be discarded. This approach was dropped when the drawbacks of having no flexibility and only being able to fit the units selected at design time (connector cutouts and mounting hole placement) became apparent.

Instead I changed course to try and incorporate the existing cases, which already solved the differing connector and mounting placement problem and gave me a uniform size to consider. Once I had settled on this approach the design came pretty quickly. I used a tube girder construction 1U in height to get as much strength as possible from the 3mm acrylic plastic I would use.

laser cut pieces arranged for assembly still with protective film on
The design was simply laser cut from sheet stock and fastened together with M3 nuts and bolts. Once I corrected the initial design errors (I managed to get almost every important dimension wrong on the first attempt) the result was a success.

working prototype resting on initial version
The prototype is a variety of colours because Makespace ran out of suitably sized clear acrylic stock, but the colouring has no effect on the result other than aesthetics. The structure gives a great deal of rigidity and there is no sagging or warping; indeed testing on the prototype got to almost 50kg loading without a failure (one end clamped and the other end loaded at 350mm distance).

I added some simple rotating latches at the front which keep the modules held in place and allow units to be removed quickly if necessary.

rack slots installed and in use
Overall this project was successful and I can now pack five SBC per U neatly. It does limit me to using systems cased in my "slimline" designs (68x30x97mm) which currently means the Raspberry Pi B+ style and the Orange Pi PC.

One small drawback is access to the I/O and power connectors. These need to be right angled and must be unplugged before unit removal, which can be a little fiddly. Perhaps a toast rack design of cases would have given easier connector access but I am happy with this trade-off of space for density.

As usual the design files are freely available, perhaps they could be useful as a basis for other laser cut rack enclosure designs.

31 January 2016

Daniel Silverstone: The Beer'o'Meter project

As some of you may know, I have been working on a small hardware project called the Beer'o'Meter whose purpose is to allow us to extend Ye Olde Vic's beer board to indicate the approximate fullness of each cask. For some time now, we've been operating an electronic beer board at the Vic which you may see tweeted out from time to time. The pumpotron has become very popular with the visitors to the pub, especially since it can be viewed online in a basic textual form.

Of course, as many of you who visit pubs know only too well, that a beer is "on" is no indication of whether or not you need to get there sharpish to have a pint, or if you can take your time and have a curry first. As a result, some of us have noticed a particular beer on, come to the pub after dinner, and then been very sad that if only we'd come 30 minutes previously, we'd have had a chance at the very beer we were excited about.

Combine this kind of sadness with a two week break at Christmas, and I started to develop a Beer'o'Meter to extend the pumpotron with an indication of how much of a given beer had already been served. Recently my boards came back from Elecrow along with various bits and bobs, and I have spent some time today building one up for test purposes.

As always, it's important to start with some prep work to collect all the necessary components. I like to use cake cases as you may have noticed on the posting yesterday about the oscilloscope I built.

Component prep for the Beer'o'Meter
Naturally, after prep comes the various stages of assembly. You start with the lowest-height components, so here's the board after I fitted the ceramic capacitors:

Step 1, ceramic capacitors
And here's after I fitted the lying-down electrolytic decoupling capacitor for the 3.3 volt line:

Step 2, capacitors which lie down
Next I should have fitted the six transistors from the middle cake case, but I discovered that I'd used the wrong pinout for them. Even after weeks of verification by myself and others, I'd made a mistake. My good friend Vincent Sanders recently posted about how creativity is allowing yourself to make mistakes and here I had made a doozy I hadn't spotted until I tried to assemble the board. Fortunately TO-92 transistors have nice long legs and I have a pair of tweezers and some electrical tape. As such I soon had six transistors doing the river dance:

Transistors doing the river-dance
With that done, I noticed that the transistors now stood taller than the pins (previously I had been intending to fit the transistors before the pins) so I had to shuffle things around and fit all my 0.1" pins and sockets next:

Step 3, pins and sockets
Then I could fit my dancing transistors:

Step 4, transistors
We're almost finished now, just one more capacitor to provide some input decoupling on the 9v power supply:

Finished -- decoupling fitted
Of course, it wouldn't be complete without the ESP8266 Huzzah I acquired from Adafruit, though I have to say that I'm unlikely to use these again; rather I might design in the surface-mount version of the module instead.

Fitted with the module
And since this is the very first Beer'o'Meter to be made, I had to go and put a 1 on the serial-number space on the back of the board. I then tried to sign my name in the box, made a hash of it, so scribbled in the gap :-)

The back of the finished module
Finally I got to fit all six of my flow meters ready for some testing.

I may post again about testing the unit, but for now, here's a big spider of a flow meter for beer:

The Beer'o'Meter spider
This has been quite a learning experience for me, and I hope in the future to be able to share more of my hardware projects, perhaps from an earlier stage. I have plans for a DAC board, and perhaps some other things.

26 January 2016

Vincent Sanders: Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.

It seems Scott Adams' insights sometimes reach beyond his disturbingly accurate satire. I have written before about my iterative approach to designing the things I make, such as my attempts at furniture and more recently enclosures for various projects.

Raspberry Pi B case design with the failures
In the workshop today I had a selection of freshly laser cut, completed cases for several single board computers out on the desk. I was asked by a new member of the space how I was able to produce these with no failures.

I was slightly taken aback at the question and had to explain that the designs I was happily running off on the laser cutter are all the result of mistakes, lots of them. The person I was talking to was surprised when I revealed that I was not simply generating fully formed working designs first time.

We chatted for a while and it became apparent that they had been holding themselves back from actually making something because they were afraid the result would be wrong. I went to my box and retrieved the failures from my most recent case design for a Raspberry Pi model B to put alongside the successful end product to try and encourage them.

I explained that my process was fairly iterative: sure, I attempted to get it right first time by reusing existing working solutions as a basis, but when the cost of iterating is relatively small it is sometimes worthwhile to just accept the failures.

For example in this latest enclosure:

My collection of failed cases
Generally I do not need this many iterations and get it right second time; however, experience caused me to use offcuts and scrap material for the initial versions, expecting to have issues. The point is that I was willing to make the iterations and not see them as failures.

The person I was talking to could not get past the possibility of having a pile of scrap material; it was wasteful in their view and my expectation of failure was unfathomable. They left with somewhat of a bad view of me and my approach.

I pondered this turn of events for a time and they did have a point, in that I have a collection of thirty or so failures from all my various designs, most of which are unusable. I then realised I have produced over fifty copies of those designs, not just for myself but for other people, and published them for anyone else to replicate, so on balance I think I am doing OK on wastage.

The stronger argument for me personally is that I have made something. I love making things, be that software, electronics or physical designs. It may not always be the best solution but I usually end up with something that works.

That Makespace member may not like my approach but, in the final reckoning, I have made something; their idea is still just an idea. So, Scott, I may not be an artist but I am at least creative, and that is halfway there.

14 January 2016

Vincent Sanders: Ampere was the Newton of Electricity.

I think Maxwell was probably right; certainly the unit of current to which Ampere gives his name has been a concern of mine recently.

Regular readers may have noticed my unhealthy obsession with single board computers. I have recently rehomed all the systems into my rack, which threw up a small issue of powering them all. I had been using an ad-hoc selection of USB wall warts and adapters, but this ended up needing nine mains sockets and, short of purchasing a very expensive PDU for the rack, would have needed a lot of space.

Additionally, having nine separate convertors from mains AC to low voltage DC was consuming over 60W for 20W of load! The majority of these supplies were simply delivering 5V, either via micro USB or DC barrel jack.

Initially I considered using a ten port powered USB hub, but this seemed expensive given I was not going to use the data connections. It also had a limit of 5W per port and some of my systems could potentially use more power than that, so I decided to build my own supply.

PSU module from ebay
A quick look on ebay revealed that a 150W (30A at 5V) switching supply could be had from a UK vendor for £9.99, which seemed about right. An enclosure, fused and switched IEC inlet, ammeter/voltmeter with shunt and suitable cables were acquired for another £15.

Top view of the supply all wired up
A little careful drilling and cutting of the enclosure made openings for the inlets, cables and display. These were then wired together with crimped and insulated spade and ring connectors. I wanted this build to be safe and reliable so care was taken to get the neatest layout I could manage with good separation between the low and high voltage cabling.

Completed supply with all twelve outputs wired up
The result is a neat supply with twelve outputs which I can easily extend to eighteen if needed. I was pleasantly surprised to discover that even with twelve SBC connected generating a 20W load the power drawn by the supply was 25W, or about 80% efficiency, instead of the 33% previously achieved.

The inbuilt meter allows me to easily see the load on the supply, which so far has not risen above 5A even at peak draw, despite the Cubietruck and Banana Pi having spinning rust hard drives attached, so there is plenty of room for my SBC addiction to grow (I have already pledged for a Pine64).

Supply installed in the rack with some of the SBC connected
Overall I am pleased with how this turned out and, while there are no detailed design files for this project, it should be easy to follow if you want to repeat it. One note of caution though: this project has mains wiring and, while I am confident in my own capabilities dealing with potentially lethal voltages, I cannot be responsible for anyone else, so caveat emptor!

27 December 2015

Vincent Sanders: The only pleasure I get from moving house is stumbling across books I had forgotten I owned

I have to agree with John Burnside on that statement; after having recently moved house again, rediscovering our book collection has been a salve for an otherwise exhausting undertaking. I returned to Cambridge four years ago, initially on my own, with the family subsequently moving down to be with me.

We rented a house but, with two growing teenagers, the accommodation was becoming a little crowded. Melodie and I decided the relocation was permanent and started looking for our own property, eventually finding something to our liking in Cottenham village.

Melodie took the opportunity to have the house cleaned and decorated while it was empty, thanks to the overlap with our rental property. This meant we had to be a little careful while moving in as there was still wet paint in places.

Some of our books
Moving weekend was made bearable by Steve, Jonathan and Jo lending a hand especially on the trips to Yorkshire to retrieve, amongst other things, the aforementioned book collection. We were also fortunate to have Andy and Jane doing many other important jobs around the place while the rest of us were messing about in vans.

The desk in the study
The seemingly obligatory trip to IKEA to acquire furniture was made much more fun by trying to park a luton van which was only possible because Steve and Jonathan helped me. Though it turns out IKEA ship mattresses rolled up so tight they can be moved in an estate car so taking the van was unnecessary.

Alex under his loft bed
Having moved in, it seems like every weekend is filled with a never ending "todo" list of jobs, from clearing gutters to building a desk in the study. Eight weeks on the list seems to be slowly shrinking, meaning I can even do some lower priority things like the server rack, which was actually a fun project.


Joshua in his completed room
The holidays this year afforded me some time to finish the boys' bedrooms. They both got loft beds with a substantial area underneath. This allows them both to have double beds along with a desk and plenty of storage. Completing the rooms required the construction of some flat pack furniture which, rather than simply do myself, I supervised the boys doing themselves.

Alexander building flat pack furniture
Teaching them by letting them get on with it was surprisingly effective and both of them got the hang of the construction method pretty quickly. There were only a couple of errors, from which they learned immediately and did not repeat (drawer bottoms have a finished side, and front becomes back when you are constructing upside down).

Joshua assembling flat pack furniture
The house is starting to feel like home and soon all the problems will fade from memory while the good will remain. Certainly our first holiday season has been comfortable here and I look forward to many more re-reading our books.

8 December 2015

Vincent Sanders: I said it was wired like a Christmas tree

I have recently acquired a 27U high 19 inch rack in which I hope to consolidate all the computing systems in my home that do not interact well with humans.

My main issue is that modern systems are just plain noisy, often with multiple small fans whining away. I have worked to reduce this noise by using quieter components as replacements but in the end it is simply better to be able to put these systems in a box out of the way.

The rack was generously given to me by Andy Simpkins and, aside from being a little dirty after having been stored for some time, was in excellent condition. While the proverbs "never look a gift horse in the mouth" and "beggars cannot be choosers" are very firmly at the front of my mind, there were a few minor obstacles to overcome to make it fit in its new role with a very small budget.

The new home for the rack was to be a space under the stairs where, after careful measurement, I determined it would just fit. After an hour or two attempting to manoeuvre a very heavy chunk of steel into place I determined it was simply not possible while it was assembled. So I ended up disassembling and rebuilding the whole rack in a confined space.

The rack is an 800mm wide IMRAK 1400 rather than the more common 600mm width, which means it employs "cable reducing channels" to allow the mounting of standard width rack units. Most racks these days come with four posts in the corners to allow for longer kit to be supported front and back. This particular rack was not fitted with the rear posts and a brief call to the supplier indicated that any spares from them would be eyewateringly expensive (almost twice the cost of purchasing a new rack from a different supplier), so I had to get creative.

Shelves that did not require the rear rails were relatively straightforward and I bought two of the 500mm deep cantilever type from Orion (I have no affiliation with them beyond being a satisfied customer).

I took a trip to the local hardware store and purchased some angle brackets and 16mm steel square tube. From this I made support rails which means the racked kit has support to its rear rather than relying solely on being supported by its rack ears.

The next problem was the huge hole in the bottom of the rack where I was hoping to put the UPS and power switching. This hole is intended for use with raised flooring where cables enter from below, when not required it is filled in with a "bottom gland plate". Once again the correct spares for the unit were not within my budget.

Around a year ago I built several systems for open source projects from parts generously donated by Mythic Beasts (yes I did recycle servers used to build a fort). I still had some leftover casework from one of those servers so ten minutes with an angle grinder and a drill and I made myself a suitable plate.

The final problem I faced is that it is pretty dark under the stairs and while putting kit in the rack I could not see what I was doing. After some brief Googling I decided that all real rack lighting solutions were pretty expensive and not terribly effective.

At this point I was interrupted by my youngest son trying to assemble the Christmas tree and the traditional "none of the lights work", so we went off to the local supermarket to buy some bulbs. Instead we bought a 240 LED string for £10 ($15) in the vague hope that next year they will not be broken.

I immediately had a light bulb moment and thought how a large number of efficient LED bulbs at a low price would be ideal for lighting a rack. So my rack is indeed both wired like and as a Christmas tree!

Now I just have to finish putting all the systems in there and I will be able to call the project a success.

1 December 2015

Vincent Sanders: HTTP to screen

I recently presented a talk at the Debian miniconf in Cambridge. This was a new talk explaining what goes on in a web browser to get a web page on screen.

The presentation was filmed and my slides are also available. I think it went over pretty well, despite the venue's lighting adding a strobe ambience to part of the proceedings.

I thought the conference was a great success overall and enjoyed participating. I should like to thank Cosworth for allowing me time to attend and for providing some sponsorship.

4 November 2015

Vincent Sanders: I am not a number I am a free man

Once more the NetSurf developers tried to escape from a mysterious village by writing web browser code.

Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders at NetSurf Developer workshop
The sixth developer workshop was an opportunity for us to gather together in person to contribute to NetSurf.

We were hosted by Codethink in their Manchester offices which provided a comfortable and pleasant space to work in.

Four developers managed to attend in person from around the UK: Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders.

The main focus of the weekend's activities was to work on improving our JavaScript implementation. At the previous workshop we had laid the groundwork for a shift to the Duktape JavaScript engine and since then have put several hundred hours of time into completing this transition.

During this weekend Daniel built upon this previous work and managed to get DOM events working. This was a major missing piece of implementation which will mean NetSurf will be capable of interpreting JavaScript based web content in a more complete fashion. This work revealed several issues with our DOM library which were also resolved.

We were also able to merge several improvements provided by the Duktape upstream maintainer Sami Vaarala which addressed performance problems with regular expressions which were causing reports of "hangs" on slow processors.

The responsiveness of Sami and the Duktape project has been a pleasant surprise, making our switch to the library look like an increasingly worthwhile effort.

Overall some good solid progress was made on JavaScript support. Around half of the DOM interfaces in the specifications have now been implemented leaving around fifteen hundred methods and properties remaining. There is an aim to have this under the thousand mark before the new year which should result in a generally useful implementation of the basic interfaces.

Once the DOM interfaces have been addressed our focus will move onto the dynamic layout engine necessary to allow rendering of the changing content.

The 3.4 release is proposed to occur sometime early in the new year and depends on getting the JavaScript work to a suitable stable state.

Dave joined us for the first time; he was principally concerned with dealing with bugs and the bug tracker. It was agreeable to have a new face at the meeting and some enthusiasm for the RISC OS port, which has been lacking an active maintainer for some time.

The turnout for this workshop was the same as the previous one and the issues raised then still hold true. We still have a very small active core team who can commit only limited time, which is making progress very slow, and we are lacking significant maintenance for several frontends.

Overall we managed to pack 16 hours of work into the weekend and addressed several significant problems.

24 October 2015

Vincent Sanders: It takes courage to sit on a jury. How many of us want to decide the fate of another person's life or freedom?

I think Regina Brett has a point although having now experienced being a juror in a British crown court I have a much better understanding of both the process and effectiveness of the jury system.

The actual process of becoming a juror on a case is something I had not been aware of previously. You simply receive a letter telling you to be at the court on a specific date and that you are required to be available for at least ten days, possibly more. The only qualification to receive the letter is to be on the electoral roll and it is an invitation with few options to refuse without serious repercussions.

When you arrive at the court you are directed to the Jury lounge (practical hint: take a book) where you notice there are over forty people. This would seem odd until you realise there are three courtrooms and they each need a jury; even then there is an excess of people, which is because of the selection process.

The process of jury selection is fairly simple, an usher for a court comes and calls fourteen names which forms the jury in waiting. The group is taken up to the court waiting room (this room gets terribly familiar over the forthcoming weeks) and then twelve names are called.

As each person is called they enter the jury box in an order which persists for the entire trial (practical hint: remember your juror number). Before each person is sworn or affirmed there is the possibility they will be found unsuitable and will be replaced by one of the previously unselected jurors. Any unselected jurors are then sent back to the jury lounge and become available for forming another jury in waiting.

Anyone unselected at the end of the process has to remain available to return to the court to form a jury in waiting when a previous trial ends until they have exhausted their duty. There were a few of these unfortunate people who were kept in a state of limbo for several days and I am relieved this did not happen to me.

Being a juror, from a purely practical perspective, felt like working an office job for ten days, the duties consisting of attending a series of meetings with strange rules and the typically understated British approach to mentioning the result of breaking them.

I participated in two cases, both of which were (almost by definition) unpleasant happenings: a case of grievous bodily harm and an offence under section 7 of the Sexual Offences Act 2003.

Both cases were challenging in their own ways, the first because of the way the case was presented and the second because of its subject matter. One of the most important rules is "Do not discuss anything with anyone as it might be perjury", so going along with that I will not be discussing any details. Because I cannot be specific this post has become a little impersonal; you will have to forgive me, as I found I had to remove a great deal of content which was not appropriate.

An important thing to note is that the trials bore no resemblance to TV courtroom drama. The trials proceed in a professional manner with very little theatrics. The prosecution barrister commences outlining the case against the accused, calling witnesses and reading into evidence uncontested material. The defence then gets to present their case, again calling witnesses and placing documents into evidence.

One of the striking things about this process is that if the barristers do not call a witness or present evidence that would seem to be pertinent, the jury must not draw inference from that omission, which is especially bizarre when a central witness referred to by almost everyone involved with the case is not called.

Once the case is presented the jury is sequestered in a room and must come to a unanimous decision on each of the charges. This was, for me, the most challenging part of the whole process. Twelve people with unique views on the information presented have to attempt to discuss the evidence and not simply go with their first impressions based on their preconceptions.

The jury is allowed to ask for some evidence to be repeated and if deliberations take some time the jury may be instructed that a majority of 10 to 2 may be accepted. I imagine at some point the jury will run out of time to make a decision and something else will happen but I did not experience this.

Overall the experience was enlightening if not enjoyable. I understand the process a lot more, am happy to have discharged my duty and am equally glad the responsibility will not come around again for at least a few years.

16 October 2015

Vincent Sanders: Raspberries are not the only fruit

I have worked with ARM based systems for longer than I care to admit to myself. From the Acorn Archimedes 305 in 1987 through to modern 64bit systems I have seen many many changes in the ARM community. One big change has been the rise of the inexpensive single board computer (SBC).

Arguably the Raspberry Pi (RPi) was responsible for starting this trend. Before the RPi there were small development boards available, and I was even involved in producing some of them, but none of these really became a big thing; they were principally vehicles for silicon vendors to showcase their SoC in an accessible way. When I say accessible I mean for silicon vendors who were previously used to charging many thousands of pounds for their development boards now only charging hundreds.

Raspberry Pi 2 B+ in my case
In my opinion the RPi was a complete disruption to the SBC market. In early 2012 a complete ARM computer system could now be purchased for $35 (£25), which was substantially cheaper than the best contemporary competitor, the BeagleBone at $89 (£60).

To be clear, the reason the RPi succeeded (millions sold, household name) was not on price alone but also the large amount of good supporting software and how easy it was made for teachers and makers to use.

There were several areas where the RPi managed to change perceived issues into opportunities, either by using what was already available or by leaving them for third parties to provide. There were also several issues raised about the original RPi:
Some of these have been addressed since release as the foundation now sells cases, hardware revisions with much more powerful processors, pre-configured storage, cameras and displays. The important thing to note here though is the RPi has made the bare SBC a much more widely accepted product where all the non critical parts are considered "extras" and a lot is forgiven because of the price.

With that acceptance there have been many, many new SBC coming to market with better peripherals and increasingly competitive pricing. These are technically not clones as none of them use the Broadcom processor of the RPi but they often share many features and possibly even a compatible expansion header.

Banana pi in my case with a 2.5inch drive bay
The Banana Pi was one of these copycats, which I acquired for a similar price to an RPi in 2014. The main processor of this system was a 1GHz dual core Allwinner A20 (a considerable advance on the 700MHz single core of the original RPi) coupled to a gigabyte of memory. Additionally the board benefited from having SATA and a gigabit Ethernet MAC, which made for a much more versatile system. Various third parties filled in the missing peripherals, including my own attempt at a case.

I acquired a Cubietruck for the NetSurf project to use as a build node in their CI system. This is again based on the A20 but with more memory and somewhat better peripheral support, though at a substantial cost over the Banana Pi.

The most recent addition to this form factor is the introduction by Xunlong Software of the Orange Pi PC. This little board is the same footprint as the RPi2 B+ design but with differing connector placement. The processor is a quad core 1.6GHz Allwinner H3 with a gigabyte of memory and the board has Ethernet and USB but no SATA.

Pile of Orange Pi PC in my cases
The big news about this board though is the price: at $15 (£10) it resets the price expectations just as the RPi did before it. I was initially sceptical of the quality of the product (or whether it would arrive at all) but I have acquired five of these boards and every one of them came well packaged, boxed and in a static bag, just like the RPi does, and they all worked.

I created a case design based on my RPi slimline case so they would be protected when piled up with all the other boards. The use of a DC barrel jack instead of micro USB for power is better in that the connector is more robust and intended for higher current draws, but it does mean additional leads are needed. There are, however, two flies in the ointment; neither is a showstopper but both make the board a little more difficult to use.

Orange Pi in my case with heatsink
The first is the simple necessity of a substantial heatsink on the H3 processor. Initially I used a small 20 x 20mm copper heatsink (around 800 square mm of surface area) but this was insufficient under full load. I did not want to have to use a fan, so I milled some slots into copper round bar, then cut off sections and faced them on a lathe. The completed design had more than 2200 square mm of surface area and cost around $2.50 (£1.50) in material (and a couple of hours in the workshop, but that was fun and I made something).

The second issue and arguably much more serious is that of software. Let me be honest, it is dreadful, I mean very bad indeed. The images provided from the Orange Pi website are some of the worst examples of "do something quick" I have experienced.

Fortunately a user on the forums named loboris decided to create scripts that generate Debian (and Ubuntu and Fedora) distribution images that can be installed from SD card. He relies on a somewhat patched 3.4 kernel full of Allwinner vendor changes and the inevitable binary blobs for the Mali GPU, but the result does work.

I have had a few units acting as distcc compiler slaves for two weeks now at 100% CPU loading and they are still running. The processor does not get overly warm with the heatsink installed and Debian behaves just fine. The main caveat is that it is definitely not going to work if you try and update the kernel through packaging.
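
For anyone wanting to repeat the distcc arrangement, the setup is the stock one; the host names and network range below are placeholders for my own:

# on each Orange Pi acting as a compile slave
apt-get install distcc
distccd --daemon --allow 192.168.0.0/24 --jobs 4

# on the machine driving the builds
export DISTCC_HOSTS="orangepi1/4 orangepi2/4 orangepi3/4"
make -j12 CC="distcc gcc"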

My ARM build farm as a pile of SBC
The state of software support and Xunlong Software relying on a forum user to complete their product does tarnish an otherwise impressive and possibly market changing SBC.

Perhaps I expect too much for fifteen bucks? I guess when the cost of the case, heatsink, cables and memory storage card is similar to the rest of the computer there is simply no margin left for anything else.

In conclusion hopefully this brief overview has provided some insight into what is available in this market and that the Raspberry Pi is indeed not the only option.

I finish with an image of my ARM build farm, consisting of every SBC I mention here (including a couple of RPi).
