Search Results: "servo"

26 August 2022

Antoine Beaupré: How to nationalize the internet in Canada

Rogers had a catastrophic failure in July 2022. It affected emergency services (as in: people couldn't call 911, but also some 911 services themselves failed), hospitals (which couldn't access prescriptions), banks and payment systems (as payment terminals stopped working), and regular users as well. The outage lasted almost a full day, and Rogers took days to give any technical explanation on the outage, and even when they did, details were sparse. So far the only detailed account is from outside actors like Cloudflare, which seem to point at an internal BGP failure. Its impact on the economy has yet to be measured, but it probably cost millions of dollars in wasted time and possibly led to life-threatening situations. Apart from holding Rogers (criminally?) responsible for this, what should be done in the future to avoid such problems? It's not the first time something like this has happened: it happened to Bell Canada as well. The Rogers outage is also strangely similar to the Facebook outage last year, but, to its credit, Facebook did post a fairly detailed explanation only a day later. The internet is designed to be decentralised, and having large companies like Rogers hold so much power is a crucial mistake that should be reversed. The question is how. Some critics were quick to point out that we need more ISP diversity and competition, but I think that's missing the point. Others have suggested that the internet should be a public good or even straight out nationalized. I believe the solution to the problem of large, private, centralised telcos and ISPs is to replace them with smaller, public, decentralised service providers. The only way to ensure that works is to make sure that public money ends up creating infrastructure controlled by the public, which means treating ISPs as a public utility. This has been implemented elsewhere: it works, it's cheaper, and provides better service.

A modest proposal Global wireless services (like phone services) and home internet inevitably grow into monopolies. They are public utilities, just like water, power, railways, and roads. The question of how they should be managed is therefore inherently political, yet people don't seem to question the idea that only the market (i.e. "competition") can solve this problem. I disagree. 10 years ago (in French), I suggested we, in Québec, should nationalize large telcos and internet service providers. I no longer believe this is a realistic approach: most of those companies have crap copper-based networks (at least for the last mile), yet are worth billions of dollars. It would be prohibitive, and a waste, to buy them out. Back then, I called this idea "Réseau-Québec", a reference to the already nationalized power company, Hydro-Québec. (This idea, incidentally, made it into the plan of a political party.) Now, I think we should instead build our own, public internet. Start setting up municipal internet services, fiber to the home in all cities, progressively. Then interconnect cities with fiber, and build peering agreements with other providers. This also includes a bid on wireless spectrum to start competing with phone providers. And while that sounds really ambitious, I think it's possible to take this one step at a time.

Municipal broadband In many parts of the world, municipal broadband is an elegant solution to the problem, with solutions ranging from Stockholm's city-owned fiber network (dark fiber, layer 1) to Utah's UTOPIA network (fiber to the premises, layer 2) and municipal wireless networks like Guifi.net, which connects about 40,000 nodes in Catalonia. A good first step would be for cities to start providing broadband services to their residents, directly. Cities normally own sewage and water systems that interconnect most residences and therefore have direct physical access everywhere. In Montréal, in particular, there is an ongoing project to replace a lot of old lead-based plumbing which would give an opportunity to lay down a wired fiber network across the city. This is a wild guess, but I suspect this would be much less expensive than one would think. Some people agree with me and quote this as low as 1000$ per household. There are about 800,000 households in the city of Montréal, so we're talking about an 800 million dollar investment here, to connect every household in Montréal with fiber and incidentally a quarter of the province's population. And this is not an up-front cost: this can be built progressively, with expenses amortized over many years. (We should not, however, connect Montréal first: it's used as an example here because it's a large number of households to connect.) Such a network should be built with a redundant topology. I leave it as an open question whether we should adopt Stockholm's more minimalist approach or provide direct IP connectivity. I would tend to favor the latter, because then you can immediately start to offer the service to households and generate revenues to compensate for the capital expenditures. Given the ridiculous profit margins telcos currently have (8 billion $CAD net income for BCE in 2019, 2 billion $CAD for Rogers in 2020), I also believe this would actually turn into a profitable revenue stream for the city, the same way Hydro-Québec is more and more considered as a revenue stream for the state. (I personally believe that's actually wrong and we should treat those resources as human rights and not cash cows, but I digress. The point is: this is not a cost point, it's a revenue.) The other major challenge here is that the city will need competent engineers to drive this project forward. But this is not different from the way other public utilities run: we have electrical engineers at Hydro, sewer and water engineers at the city, this is just another profession. If anything, the computing science sector might be more at fault than the city here in its failure to provide competent and accountable engineers to society... Right now, most of the network in Canada is copper: we are hitting the limits of that technology with DSL, and while cable has some life left to it (DOCSIS 4.0 does 4Gbps), that is nowhere near the capacity of fiber. Take the town of Chattanooga, Tennessee: in 2010, the city-owned ISP EPB finished deploying a fiber network to the entire town and provided gigabit internet to everyone. Now, 12 years later, they are using this same network to provide the mind-boggling speed of 25 gigabit to the home. To give you an idea, Chattanooga is roughly the size and density of Sherbrooke.

Provincial public internet As part of building a municipal network, the question of getting access to "the internet" will immediately come up. Naturally, this will first be solved by using already existing commercial providers to hook up residents to the rest of the global network. But eventually, networks should inter-connect: Montréal should connect with Laval, and then Trois-Rivières, then Québec City. This will require long haul fiber runs, but those links are not actually that expensive, and many of those already exist as a public resource at RISQ and CANARIE, which cross-connect universities and colleges across the province and the country. Those networks might not have the capacity to cover the needs of the entire province right now, but that is a router upgrade away, thanks to the amazing capacity of fiber. There are two crucial mistakes to avoid at this point. First, the network needs to remain decentralised. Long haul links should be IP links with BGP sessions, and each city (or MRC) should have its own independent network, to avoid Rogers-class catastrophic failures. Second, skill needs to remain in-house: RISQ has already made that mistake, to a certain extent, by selling its neutral datacenter. Tellingly, MetroOptic, probably the largest commercial dark fiber provider in the province, now operates the QIX, the second largest "public" internet exchange in Canada. Still, we have a lot of infrastructure we can leverage here. If RISQ or CANARIE cannot be up to the task, Hydro-Québec has power lines running into every house in the province, with high voltage power lines running hundreds of kilometers far north. The logistics of long distance maintenance are already solved by that institution. In fact, Hydro already has fiber all over the province, but it is a private network, separate from the internet for security reasons (and that should probably remain so). But this only shows they already have the expertise to lay down fiber: they would just need to lay down a parallel network to the existing one. In that architecture, Hydro would be a "dark fiber" provider.

International public internet None of the above solves the problem for the entire population of Québec, which is notoriously dispersed, with an area three times the size of France, but with only an eighth of its population (8 million vs 67). More specifically, Canada was originally a French colony, a land violently stolen from native people who have lived here for thousands of years. Some of those people now live in reservations, sometimes far from urban centers (but definitely not always). So the idea of leveraging the Hydro-Québec infrastructure doesn't always work to solve this, because while Hydro will happily flood a traditional hunting territory for an electric dam, they don't bother running power lines to the village they forcibly moved, powering it instead with noisy and polluting diesel generators. So before giving me fiber to the home, we should give power (and potable water, for that matter), to those communities first. So we need to discuss international connectivity. (How else could we consider those communities than peer nations anyways?) Québec has virtually zero international links. Even in Montréal, which likes to style itself a major player in gaming, AI, and technology, most peering goes through either Toronto or New York. That's a problem that we must fix, regardless of the other problems stated here. Looking at the submarine cable map, we see very few international links actually landing in Canada. There is the Greenland Connect which connects Newfoundland to Iceland through Greenland. There's the EXA which lands in Ireland, the UK and the US, and Google has the Topaz link on the west coast. That's about it, and none of those land anywhere near any major urban center in Québec. We should have a cable running from France up to Saint-Félicien. There should be a cable from Vancouver to China. Heck, there should be a fiber cable running all the way from the end of the great lakes through Québec, then up around the northern passage and back down to British Columbia. Those cables are expensive, and the idea might sound ludicrous, but Russia is actually planning such a project for 2026. The US has cables running all the way up (and around!) Alaska, neatly bypassing all of Canada in the process. We just look ridiculous on that map. (Addendum: I somehow forgot to talk about Teleglobe here, which was founded as a publicly owned company in 1950, growing international phone and (later) data links all over the world. It was privatized by the conservatives in 1984, along with rails and other "crown corporations". So that's one major risk to any effort to make public utilities work properly: some government might be elected and promptly sell it out to its friends for peanuts.)

Wireless networks I know most people will have rolled their eyes so far back their heads have exploded. But I'm not done yet. I want wireless too. And by wireless, I don't mean a bunch of geeks setting up OpenWRT routers on rooftops. I tried that, and while it was fun and educational, it didn't scale. A public networking utility wouldn't be complete without providing cellular phone service. This involves bidding for frequencies at the federal level, and deploying a rather large amount of infrastructure, but it could be a later phase, when the engineers and politicians have proven their worth. At least part of the Rogers fiasco would have been averted if such a decentralized network backend existed. One might even want to argue that a separate institution should be set up to provide phone services, independently from the regular wired networking, if only for reliability. Because remember here: the problem we're trying to solve is not just technical, it's about political boundaries, centralisation, and automation. If everything is run by this one organisation again, we will have failed. However, I must admit that phone service is where my ideas fall a little short. I can't help but think it's also an accessible goal (maybe starting with a virtual operator), but it seems slightly less so than the others, especially considering how closed the phone ecosystem is.

Counter points In debating these ideas while writing this article, the following objections came up.

I don't want the state to control my internet One legitimate concern I have about the idea of the state running the internet is the potential it would have to censor or control the content running over the wires. But I don't think there is necessarily a direct relationship between resource ownership and control of content. Sure, China has strong censorship in place, partly implemented through state-controlled businesses. But Russia also has strong censorship in place, based on regulatory tools: they force private service providers to install back-doors in their networks to control content and surveil their users. Besides, the USA have been doing warrantless wiretapping since at least 2003 (and yes, that's 10 years before the Snowden revelations) so a commercial internet is no assurance that we have a free internet. Quite the contrary in fact: if anything, the commercial internet goes hand in hand with the neo-colonial internet, just like businesses did in the "good old colonial days". Large media companies are the primary censors of content here. In Canada, the media cartel requested the first site-blocking order in 2018. The plaintiffs (including Québecor, Rogers, and Bell Canada) are both content providers and internet service providers, an obvious conflict of interest. Nevertheless, there are some strong arguments against having a centralised, state-owned monopoly on internet service providers. FDN makes a good point on this. But this is not what I am suggesting: at the provincial level, the network would be purely physical, and regional entities (which could include private companies) would peer over that physical network, ensuring decentralization. Delegating the management of that infrastructure to an independent non-profit or cooperative (but owned by the state) would also ensure some level of independence.

Isn't the government incompetent and corrupt? Also known as "private enterprise is better skilled at handling this, the state can't do anything right". I don't think this is a "fait accompli". If anything, I have found publicly run utilities to be spectacularly reliable here. I rarely have trouble with sewage, water, or power, and keep in mind I live in a city where we receive about 2 meters of snow a year, which tends to create lots of trouble with power lines. Unless there's a major weather event, power just runs here. I think the same can happen with an internet service provider. But it would certainly need to have higher standards than what we're used to, because frankly the Internet is kind of janky.

A single monopoly will be less reliable I actually agree with that, but that is not what I am proposing anyways. Current commercial or non-profit entities will be free to offer their services on top of the public network. And besides, the current "ha! diversity is great" approach is exactly what we have now, and it's not working. The pretense that we can have competition over a single network is what led the US into the ridiculous situation where they also pretend to have competition over the power utility market. This led to massive forest fires in California and major power outages in Texas. It doesn't work.

Wouldn't this create an isolated network? One theory is that this new network would be so hostile to incumbent telcos and ISPs that they would simply refuse to network with the public utility. And while it is true that the telcos currently do also act as a kind of "tier one" provider in some places, I strongly feel this is also a problem that needs to be solved, regardless of ownership of networking infrastructure. Right now, telcos often hold both ends of the stick: they are the gateway to users, the "last mile", but they also provide peering to the larger internet in some locations. In at least one datacenter in downtown Montréal, I've seen traffic go through Bell Canada that was not directly targeted at Bell customers. So in effect, they are in a position of charging twice for the same traffic, and that's not only ridiculous, it should just be plain illegal. And besides, this is not a big problem: there are other providers out there. As bad as the market is in Québec, there is still some diversity in tier one providers that could allow for some exits to the wider network (e.g. yes, Cogent is here too).

What about Google and Facebook? Nationalization of other service providers like Google and Facebook is out of scope of this discussion. That said, I am not sure the state should get into the business of organising the web or providing content services. However, I will point out it already does do some of that through its own websites. It should probably keep itself to this, and also consider providing normal services for people who don't or can't access the internet. (And I would also be ready to argue that Google and Facebook already act as extensions of the state: certainly if Facebook didn't exist, the CIA or the NSA would like to create it at this point. And Google has lucrative business with the US department of defense.)

What does not work So we've seen one thing that could work. Maybe it's too expensive. Maybe the political will isn't there. Maybe it will fail. We don't know yet. But we know what does not work, and it's what we've been doing ever since the internet went commercial.

Subsidies The absurd price we pay for data does not actually mean everyone gets high speed internet at home. Large swathes of the Québec countryside don't get broadband at all, and it can be difficult or expensive, even in large urban centers like Montréal, to get high speed internet. That is despite having a series of subsidies that all avoided investing in our own infrastructure. We had the "fonds de l'autoroute de l'information" ("information highway fund", site dead since 2003, archive.org link) and "branchez les familles" ("connecting families", site dead since 2003, archive.org link) which subsidized the development of a copper network. In 2014, more of the same: the federal government poured hundreds of millions of dollars into a program called Connecting Canadians to connect 280 000 households to "high speed internet". And now, the federal and provincial governments are proudly announcing that "everyone is now connected to high speed internet", after pouring more than 1.1 billion dollars to connect, guess what, another 380 000 homes, right in time for the provincial election. Of course, technically, the deadline won't actually be met until 2023. Québec is a big area to cover, and you can guess what happens next: the telcos threw up their hands and said some areas just can't be connected. (Or they connect their CEO but not the poor folks across the lake.) The story then takes the predictable twist of giving more money out to billionaires, now subsidizing Musk's Starlink system to connect those remote areas. To give a concrete example: a friend who lives about 1000km away from Montréal, 4km from a small village of 2,500 inhabitants, recently got symmetric 100 mbps fiber at home from Telus, thanks to those subsidies. But I can't get that service in Montréal at all, presumably because Telus and Bell colluded to split that market. Bell doesn't provide me with such a service either: they tell me they have "fiber to my neighborhood", and only offer me a 25/10 mbps ADSL service. (There is Vidéotron offering 400mbps, but that's copper cable, again a dead technology, and asymmetric.)

Conclusion Remember Chattanooga? Back in 2010, they funded the development of a fiber network, and now they have deployed a network roughly a thousand times faster than what we have just funded with a billion dollars. In 2010, I was paying Bell Canada 60$/mth for 20mbps and a 125GB cap, and now, I'm still (indirectly) paying Bell for roughly the same speed (25mbps). Back then, Bell was throttling their competitors' networks until 2009, when they were forced by the CRTC to stop throttling. Both Bell and Vidéotron still explicitly forbid you from running your own servers at home, and Vidéotron charges prohibitive prices which make it near impossible for resellers to sell uncapped services. Those companies are not spurring innovation: they are blocking it. We have spent all this money for the private sector to build us a private internet, over decades, without any assurance of quality, equity or reliability. And while in some locations, ISPs did deploy fiber to the home, they certainly didn't upgrade their entire network to follow suit, and even less allowed resellers to compete on that network. In 10 years, when 100mbps will be laughable, I bet those service providers will again punt the ball into the public courtyard and tell us they don't have the money to upgrade everyone's equipment. We got screwed. It's time to try something new.

Updates There was a discussion about this article on Hacker News which was surprisingly productive. Trigger warning: Hacker News is kind of right-wing, in case you didn't know. Since this article was written, at least two more major acquisitions happened, just in Québec. In the latter case, vMedia was explicitly saying it couldn't grow because of "lack of access to capital". So basically, we have given those companies a billion dollars, and they are now using that very money to buy out their competition. At least we could have given that money to small players to even out the playing field. But this is not how that works at all. Also, in a bizarre twist, an "analyst" believes the acquisition is likely to help Rogers acquire Shaw. Also, since this article was written, the Washington Post published a review of a book bringing similar ideas: Internet for the People: The Fight for Our Digital Future, by Ben Tarnoff, at Verso books. It's short, but even more ambitious than what I am suggesting in this article, arguing that all big tech companies should be broken up and better regulated:
He pulls from Ethan Zuckerman's idea of a web that is plural in purpose: that just as pool halls, libraries and churches each have different norms, purposes and designs, so too should different places on the internet. To achieve this, Tarnoff wants governments to pass laws that would make the big platforms unprofitable and, in their place, fund small-scale, local experiments in social media design. Instead of having platforms ruled by engagement-maximizing algorithms, Tarnoff imagines public platforms run by local librarians that include content from public media.
(Links mine: the Washington Post obviously prefers not to link to the real web, and instead doesn't link to Zuckerman's site at all and suggests Amazon for the book, in a cynical example.) And in another example of how the private sector has failed us, there was recently a fluke in the AMBER alert system where the entire province was warned about a loose shooter in Saint-Elzéar except the people in the town, because they have spotty cell phone coverage. In other words, millions of people received a strongly toned, "life-threatening" alert for a city sometimes hours away, except the people most vulnerable to the alert. Not missing a beat, the CAQ party is promising more of the same medicine again and giving more money to telcos to fix the problem, suggesting to spend three billion dollars on private infrastructure.

16 July 2022

Petter Reinholdtsen: Automatic LinuxCNC servo PID tuning?

While working on a CNC with servo motors controlled by the LinuxCNC PID controller, I recently had to learn how to tune the collection of values that control the mathematical machinery of a PID controller. It proved to be a lot harder than I hoped, and I still have not succeeded in getting the Z PID controller to successfully defy gravity, nor X and Y to move accurately and reliably. But while climbing up this rather steep learning curve, I discovered that some motor control systems are able to tune their PID controllers. I got the impression from the documentation that LinuxCNC was not. This proved not to be true. The LinuxCNC pid component is the recommended PID controller to use. It uses eight constants (Pgain, Igain, Dgain, bias, FF0, FF1, FF2 and FF3) to calculate the output value based on the current and wanted state, and all of these need to have a sensible value for the controller to behave properly. Note, there are even more values involved; these are just the most important ones. In my case I need the X, Y and Z axes to follow the requested path with little error. This has proved quite a challenge for someone who has never tuned a PID controller before, but there is at least some help to be found. I discovered that included in LinuxCNC was an old PID component, at_pid, claiming to have auto tuning capabilities. Sadly it had been neglected since 2011, and could not be used as a plug-in replacement for the default pid component. One would have to rewrite the LinuxCNC HAL setup to test at_pid. This was rather sad, as I wanted to quickly test auto tuning to see if it did a better job than me at figuring out good P, I and D values to use. I decided to have a look at whether the situation could be improved. This involved trying to understand the code and history of the pid and at_pid components. Apparently they had a common ancestor, as code structure, comments and variable names were quite close to each other. Sadly this was not reflected in the git history, making it hard to figure out what really happened. My guess is that the author of at_pid.c took a version of pid.c, rewrote it to follow the structure he wished pid.c to have, then added support for auto tuning and finally got it included into the LinuxCNC repository. The restructuring and lack of early history made it harder to figure out which parts of the code were relevant to the auto tuning, and which parts of the code needed to be updated to work the same way as the current pid.c implementation. I started by trying to isolate relevant changes in pid.c, and applying them to at_pid.c. My aim was to make sure the at_pid component could replace the pid component with a simple change in the HAL setup loadrt line, without having to "rewire" the rest of the HAL configuration. After a few hours following this approach, I had learned quite a lot about the code structure of both components, while concluding I was heading down the wrong rabbit hole, and should get back to the surface and find a different path. For the second attempt, I decided to throw away all the PID control related parts of the original at_pid.c, and instead isolate and lift the auto tuning part of the code and inject it into a copy of pid.c. This ensured compatibility with the current pid component, while adding auto tuning as a run time option. To make it easier to identify the relevant parts in the future, I wrapped all the auto tuning code with '#ifdef AUTO_TUNER'.
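For readers who have never met these constants before, here is a rough, hypothetical sketch (in C, the language the LinuxCNC components are written in) of how a controller like this could combine them each servo period. It is not the actual pid.c code: FF3 and the various limiting and deadband pins are left out, and the structure and names are mine.

/* Illustrative only: roughly how a PID component combines the constants
 * named above. Not the real LinuxCNC pid.c implementation. */
typedef struct {
    double pgain, igain, dgain;   /* proportional, integral, derivative gains */
    double bias;                  /* constant offset added to the output */
    double ff0, ff1, ff2;         /* feed-forward on command, its velocity and acceleration */
    double error_i;               /* integrated error (state) */
    double prev_error;            /* previous error (state) */
    double prev_cmd, prev_cmd_d;  /* previous command and command velocity (state) */
} pid_state;

double pid_update(pid_state *s, double command, double feedback, double dt)
{
    double error   = command - feedback;
    double error_d = (error - s->prev_error) / dt;   /* derivative of the error */
    double cmd_d   = (command - s->prev_cmd) / dt;   /* command velocity */
    double cmd_dd  = (cmd_d - s->prev_cmd_d) / dt;   /* command acceleration */

    s->error_i += error * dt;                        /* integrate the error */

    double out = s->bias
               + s->pgain * error
               + s->igain * s->error_i
               + s->dgain * error_d
               + s->ff0 * command
               + s->ff1 * cmd_d
               + s->ff2 * cmd_dd;

    s->prev_error = error;
    s->prev_cmd   = command;
    s->prev_cmd_d = cmd_d;
    return out;  /* here, the voltage request sent to the motor controller */
}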
The end result behaves just like the current pid component by default, as that part of the code is identical. The end result entered the LinuxCNC master branch a few days ago. To enable auto tuning, one needs to set a few HAL pins in the PID component. The most important ones are tune-effort, tune-mode and tune-start. But let's take a step back, and see what the auto tuning code will do. I do not know the mathematical foundation of the at_pid algorithm, but from observation I can tell that the algorithm will, when enabled, produce a square wave pattern centered around the bias value on the output pin of the PID controller. This can be seen using the HAL Scope provided by LinuxCNC. In my case, this is translated into voltage (+-10V) sent to the motor controller, which in turn is translated into motor speed. So at_pid will ask the motor to move the axis back and forth. The number of cycles in the pattern is controlled by the tune-cycles pin, and the extremes of the wave pattern are controlled by the tune-effort pin. Of course, trying to change the direction of a physical object instantly (as in going directly from a positive voltage to the equivalent negative voltage) does not change velocity instantly, and it takes some time for the object to slow down and move in the opposite direction. This results in a smoother movement wave form, as the axis in question vibrates back and forth. When the axis reaches the target speed in the opposing direction, the auto tuner changes direction again. After several of these changes, the average time delay between the 'peaks' and 'valleys' of this movement graph is then used to calculate proposed values for Pgain, Igain and Dgain, and insert them into the HAL model to be used by the pid controller. The auto tuned settings are not great, but they work a lot better than the values I had been able to cook up on my own, at least for the horizontal X and Y axes. But I had to use very small tune-effort values, as my motor controllers error out if the voltage changes too quickly. I've been less lucky with the Z axis, which is moving a heavy object up and down, and seems to confuse the algorithm. The Z axis movement became a lot better when I introduced a bias value to counter the gravitational drag, but I will have to work a lot more on the Z axis PID values. Armed with this knowledge, it is time to look at how to do the tuning. Let's say the HAL configuration in question loads the PID component for X, Y and Z like this:
loadrt pid names=pid.x,pid.y,pid.z
Armed with the new and improved at_pid component, the new line will look like this:
loadrt at_pid names=pid.x,pid.y,pid.z
The rest of the HAL setup can stay the same. This works because the components are referenced by name. If the component had used count=3 instead, all use of pid.# would have to be changed to at_pid.#. To start tuning the X axis, move the axis to the middle of its range, to make sure it does not hit anything when it starts moving back and forth. Next, set the tune-effort pin to a low number in the output range. I used 0.1 as my initial value. Next, assign 1 to the tune-mode value. Note, this will disable the pid controlling part and feed 0 to the output pin, which in my case initially caused a lot of drift. In my case it proved to be a good idea with X and Y to tune the motor driver to make sure 0 voltage stopped the motor rotation. On the other hand, for the Z axis this proved to be a bad idea, so it will depend on your setup. It might help to set the bias value to an output value that reduces or eliminates the axis drift. Finally, after setting tune-mode, set tune-start to 1 to activate the auto tuning. If all goes well, your axis will vibrate for a few seconds and when it is done, new values for Pgain, Igain and Dgain will be active. To test them, change tune-mode back to 0. Note that this might cause the machine to suddenly jerk as it brings the axis back to its commanded position, which it might have drifted away from during tuning. To summarize with some halcmd lines:
setp pid.x.tune-effort 0.1
setp pid.x.tune-mode 1
setp pid.x.tune-start 1
# wait for the tuning to complete
setp pid.x.tune-mode 0
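The post above does not spell out the formula at_pid uses to turn the measured oscillation into gains. Purely as an illustration of how this kind of relay ("bang-bang") experiment is classically converted into PID values, here is a Ziegler-Nichols style calculation; the function and variable names are mine, and this is not necessarily what at_pid actually computes.

/* Hypothetical relay tuning math, Ziegler-Nichols style.
 * Not necessarily the formula used by LinuxCNC's at_pid.
 * d  = relay output amplitude (think tune-effort)
 * a  = measured amplitude of the resulting position oscillation
 * tu = measured oscillation period (average peak-to-peak time) */
#include <math.h>

void relay_tune(double d, double a, double tu,
                double *pgain, double *igain, double *dgain)
{
    double ku = 4.0 * d / (M_PI * a);  /* estimate of the ultimate gain */

    /* Classic Ziegler-Nichols PID rules from ultimate gain and period: */
    *pgain = 0.6 * ku;
    *igain = 1.2 * ku / tu;            /* = pgain / (tu / 2) */
    *dgain = 0.075 * ku * tu;          /* = pgain * (tu / 8) */
}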
After doing this task quite a few times while trying to figure out how to properly tune the PID controllers on the machine, I decided to figure out if this process could be automated, and wrote a script to do the entire tuning process from power on. The end result will ensure the machine is powered on and ready to run, home all axes if it is not already done, check that the extra tuning pins are available, move the axis to its mid point, run the auto tuning and re-enable the pid controller when it is done. It can be run several times. Check out the run-auto-pid-tuner script on github if you want to learn how it is done. My hope is that this little adventure can inspire someone who knows more about motor PID controller tuning to implement even better algorithms for automatic PID tuning in LinuxCNC, making life easier for both me and all the others that want to use LinuxCNC but lack the in-depth knowledge needed to tune PID controllers well. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

16 January 2022

Chris Lamb: Favourite films of 2021

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' novels that I enjoyed reading the most in 2021. But in the very last of my 2021 roundup posts, I'll be going over some of my favourite movies. (Saying that, these are perhaps less my 'favourite films' than the ones worth remarking on; after all, nobody needs to hear that The Godfather is a good movie.) It's probably helpful to remark that I took a self-directed course in film history in 2021, based around the first volume of Roger Ebert's The Great Movies. This collection of 100-odd movie essays aims to make a tour of the landmarks of the first century of cinema, and I watched all but a handful before the year was out. I am slowly making my way through volume two in 2022. This tome was tremendously useful, and not simply due to the background context that Ebert added to each film: it also brought me into contact with films I would hardly have come across through other means. Would I have ever discovered the sly comedy of Trouble in Paradise (1932) or the touching proto-realism of L'Atalante (1934) any other way? It also helped me to 'get around' to watching films I may have put off watching forever: the influential Battleship Potemkin (1925), for instance, and the ur-epic Lawrence of Arabia (1962) spring to mind here. Choosing a 'worst' film is perhaps more difficult than choosing the best. There are first those that left me completely dry (Ready or Not, Written on the Wind, etc.), and those that were simply poorly executed. And there are those that failed to meet their own high opinions of themselves, such as the 'made for Reddit' Tenet (2020) or the inscrutable Vanilla Sky (2001), the latter being an almost perfect example of late-20th century cultural exhaustion. But I must save my most severe judgement for those films where I took a visceral dislike to how their subjects were portrayed. The sexually problematic Sixteen Candles (1984) and the pseudo-Catholic vigilantism of The Boondock Saints (1999) both spring to mind here, the latter of which combines so many things I dislike into such a short running time I'd need an entire essay to adequately express how much I disliked it.

Dogtooth (2009) A father, a mother, a brother and two sisters live in a large and affluent house behind a very high wall and an always-locked gate. Only the father ever leaves the property, driving to the factory that he happens to own. Dogtooth goes far beyond any allusion to Josef Fritzl's cellar, though, as the children's education is a grotesque parody of home-schooling. Here, the parents deliberately teach their children the wrong meaning of words (e.g. a yellow flower is called a 'zombie'), all of which renders the outside world utterly meaningless and unreadable, and completely mystifies its very existence. It is this creepy strangeness within a 'regular' family unit in Dogtooth that is both socially and epistemically horrific, and I'll say nothing here of its sexual elements as well. Despite its cold, inscrutable and deadpan surreality, Dogtooth invites all manner of potential interpretations. Is this film about the artificiality of the nuclear family that the West insists is the benchmark of normality? Or is it, as I prefer to believe, something more visceral altogether: an allegory for the various forms of ontological violence wrought by fascism, as well as a sobering nod towards some of fascism's inherent appeals? (Perhaps it is both. In 1972, French poststructuralists Gilles Deleuze and Félix Guattari wrote Anti-Oedipus, which plays with the idea of the family unit as a metaphor for the authoritarian state.) The Greek-language Dogtooth, elegantly shot, thankfully provides no easy answers.

Holy Motors (2012) There is an infamous scene in Un Chien Andalou, the 1929 film collaboration between Luis Buñuel and famed artist Salvador Dalí. A young woman is cornered in her own apartment by a threatening man, and she reaches for a tennis racquet in self-defence. But the man suddenly picks up two nearby ropes and drags into the frame two large grand pianos... each laden with a dead donkey, a stone tablet, a pumpkin and a bewildered priest. This bizarre sketch serves as a better introduction to Leos Carax's Holy Motors than any elementary outline of its plot, which ostensibly follows 24 hours in the life of a man who must play a number of extremely diverse roles around Paris... all for no apparent reason. (And is he even a man?) Surrealism as an art movement gets a pretty bad rap these days, and perhaps justifiably so. But Holy Motors and Un Chien Andalou serve as a good reminder that surrealism can be, well, 'good, actually'. And if not quite high art, Holy Motors at least demonstrates that surrealism can still be unnerving and hilariously funny. Indeed, recalling the whimsy of the plot to a close friend, the tears of laughter came unbidden to my eyes once again. ("And then the limousines...!") Still, it is unclear how Holy Motors truly refreshes surrealism for the twenty-first century. Surrealism was, in part, a reaction to the mechanical and unfeeling brutality of World War I and ultimately sought to release the creative potential of the unconscious mind. Holy Motors cannot be responding to another continental conflagration, and so it appears to me to be some kind of commentary on the roles we exhibit in an era of 'post-postmodernity': a sketch on our age of performative authenticity, perhaps, or an idle doodle on the psychosocial function of work. Or perhaps not. After all, this film was produced in a time that offers the near-universal availability of mind-altering substances, and this certainly changes the context in which this film was both created and, how can I put it, intended to be watched.

Manchester by the Sea (2016) An absolutely devastating portrayal of a character who is unable to forgive himself and is hesitant to engage with anyone ever again. It features a near-ideal balance between portraying unrecoverable anguish and tender warmth, and is paradoxically grandiose in its subtle intimacy. The mechanics of life led me to watch this lying on a bed in a chain hotel by Heathrow Airport, and if this colourless circumstance blunted the film's emotional impact on me, I am probably thankful for it. Indeed, I find myself reduced in this review to fatuously recalling my favourite interactions instead of providing any real commentary. You could write a whole essay about one particular incident: its surfaces, subtexts and angles... all despite nothing of any substance ever being communicated. Truly stunning.

McCabe & Mrs. Miller (1971) Roger Ebert called this movie "one of the saddest films I have ever seen, filled with a yearning for love and home that will not ever come." But whilst it is difficult to disagree with his sentiment, Ebert's choice of sad is somehow not quite the right word. Indeed, I've long regretted that our dictionaries don't have more nuanced blends of tragedy and sadness; perhaps the Ancient Greeks can loan us some. Nevertheless, the plot of this film is of a gambler and a prostitute who become business partners in a new and remote mining town called Presbyterian Church. However, as their town and enterprise booms, it comes to the attention of a large mining corporation who want to bully or buy their way into the action. What makes this film stand out is not the plot itself, however, but its mood and tone: the town and its inhabitants seem to be thrown together out of raw lumber, covered alternatively in mud or frozen ice, and their days (and their personalities) are both short and dark in equal measure. As a brief aside, if you haven't seen a Robert Altman film before, this has all the trappings of being a good introduction. As Ebert went on to observe: "This is not the kind of movie where the characters are introduced. They are all already here." Furthermore, we can see some of Altman's trademark conversations that overlap, a superb handling of ensemble casts, and a quietly subversive view of the tyranny of 'genre'... and the latter in a time when the appetite for revisionist portrayals of the West was not very strong. All of these 'Altmanian' trademarks can be ordered in much stronger measures in his later films: in particular, his comedy-drama Nashville (1975) has 24 main characters, and my jejune interpretation of Gosford Park (2001) is that it is purposefully designed to poke fun at those who take a reductionist view of 'genre', or at least at the audience's expectations. (In this case, an Edwardian-era English murder mystery in the style of Agatha Christie, but where no real murder or detection really takes place.) On the other hand, McCabe & Mrs. Miller is actually a poor introduction to Altman. The story is told in a suitably deliberate and slow tempo, and the two stars of the film are shown thoroughly defrocked of any 'star status', in both the visual and moral dimensions. All of these traits are, however, this film's strength, adding up to a credible, fascinating and riveting portrayal of the old West.

Detour (1945) Detour was filmed in less than a week, and it's difficult to decide out of the actors and the screenplay which is its weakest point... Yet it still somehow seemed to drag me in. The plot revolves around luckless Al who is hitchhiking to California. Al gets a lift from a man called Haskell who quickly falls down dead from a heart attack. Al quickly buries the body and takes Haskell's money, car and identification, believing that the police will believe Al murdered him. An unstable element is soon introduced in the guise of Vera, who, through a set of coincidences that stretches credulity, knows that this 'new' Haskell (i.e. Al pretending to be him) is not who he seems. Vera then attaches herself to Al in order to blackmail him, and the world starts to spin out of his control. It must be understood that none of this is executed very well. Rather, what makes Detour so interesting to watch is that its 'errors' lend a distinctively creepy and unnatural hue to the film. Indeed, in the early twentieth century, Sigmund Freud used the word unheimlich to describe the experience of something that is not simply mysterious, but something creepy in a strangely familiar way. This is almost the perfect description of watching Detour: its eerie nature means that we are not only frequently second-guessing where the film is going, but are often uncertain whether we are watching the usual objective perspective offered by cinema. In particular, are all the ham-fisted segues, stilted dialogue and inscrutable character motivations actually a product of Al inventing a story for the viewer? Did he murder Haskell after all, despite the film 'showing' us that Haskell died of natural causes? In other words, are we watching what Al wants us to believe? Regardless of the answers to these questions, the film succeeds precisely because of its accidental or inadvertent choices, so it is an implicit reminder that seeking the director's original intention in any piece of art is a complete mirage. Detour is certainly not a good film, but it just might be a great one. (It is a short film too, and, out of copyright, it is available online for free.)

Safe (1995) Safe is a subtly disturbing film about an upper-middle-class housewife who begins to complain about vague symptoms of illness. Initially claiming that she doesn't feel right, Carol starts to have unexplained headaches, a dry cough and nosebleeds, and eventually begins to have trouble breathing. Carol's family doctor treats her concerns with little care, and suggests to her husband that she sees a psychiatrist. Yet Carol's episodes soon escalate. For example, as a 'homemaker' and with nothing else to occupy her, Carol orders a new couch for a party. But when the store delivers the wrong one (although it is not altogether clear that they did), Carol has a near breakdown. Unsure where to turn, an 'allergist' tells Carol she has "Environmental Illness," and so Carol eventually checks herself into a new-age commune filled with alternative therapies. On the surface, Safe is thus a film about the increasing amount of pesticides and chemicals in our lives, something that was clearly felt far more viscerally in the 1990s. But it is also a film about how the lack of genuine healthcare for women must be seen as a critical factor in the rise of crank medicine. (Indeed, it made for something of an uncomfortable watch during the coronavirus lockdown.) More interestingly, however, Safe gently-yet-critically examines the psychosocial causes that may be aggravating Carol's illnesses, including her vacant marriage, her hollow friends and the 'empty calorie' stimulus of suburbia. None of this should be especially new to anyone: the gendered Victorian term 'hysterical' is often all but spoken throughout this film, and perhaps from the very invention of modern medicine, women's symptoms have often been minimised or outright dismissed. (Hilary Mantel's 2003 memoir, Giving Up the Ghost, is especially harrowing on this.) As I noted in opening this review, the film is subtle in its messaging. Just to take one example from many, the sound of the cars is always just a fraction too loud: there's a scene where a group is eating dinner with a road in the background, and the total effect can be seen as representing the toxic fumes of modernity invading our social lives and health. I won't spoil the conclusion of this quietly devastating film, but don't expect a happy ending.

The Driver (1978) Critics grossly misunderstood The Driver when it was first released. They interpreted the cold and unemotional affect of the characters as a lack of developmental depth, instead of as representing their dissociation from the society around them. This reading was encouraged by the fact that the principal actors aren't given real names and are instead known simply by their archetypes: 'The Driver', 'The Detective', 'The Player' and so on. This sort of quasi-Jungian erudition is common in many crime films today (Reservoir Dogs, Kill Bill, Layer Cake, Fight Club), so the critics' misconceptions were entirely reasonable in 1978. The plot of The Driver involves the eponymous Driver, a noted getaway driver for robberies in Los Angeles. His exceptional talent has prevented him from being captured thus far, so the Detective attempts to catch the Driver by pardoning another gang if they help convict the Driver via a set-up robbery. To give himself an edge, however, The Driver seeks help from the femme fatale 'Player' in order to mislead the Detective. If this all sounds eerily familiar, you would not be far wrong. The film was essentially remade by Nicolas Winding Refn as Drive (2011) and in Edgar Wright's 2017 Baby Driver. Yet The Driver offers something that these neon-noir variants do not. In particular, the car chases around Los Angeles are some of the most captivating I've seen: they aren't thrilling in the sense of tyre squeals, explosions and flying boxes, but rather the vehicles come across like wild animals hunting one another. This feels especially so when the police are hunting The Driver, which feels less like a low-stakes game of cat and mouse than a pack of feral animals working together: a gang who will tear apart their prey if they find him. In contrast to the undercar neon glow of the Fast & Furious franchise, the urban realism backdrop of The Driver's LA metropolis contributes to a sincere feeling of artistic fidelity as well. To be sure, most of this is present in the truly-excellent Drive, where the chase scenes do really communicate a credible sense of stakes. But the substitution of The Driver's grit with Drive's soft neon tilts it slightly towards that common affliction of crime movies: style over substance. Nevertheless, I can highly recommend watching The Driver and Drive together, as it can tell you a lot about the disconnected socioeconomic practices of the 1980s compared to the 2010s. More than that, however, the pseudo-1980s synthwave soundtrack of Drive captures something crucial to analysing the world of today. In particular, these 'sounds from the past filtered through the present' bring to mind the increasing role of nostalgia for lost futures in the culture of today, where temporality and pop culture references are almost-exclusively citational and commemorational.

The Souvenir (2019) The ostensible outline of this quietly understated film follows a shy but ambitious film student who falls into an emotionally fraught relationship with a charismatic but untrustworthy older man. But that doesn't quite cover the plot at all, for not only is The Souvenir a film about a young artist who is inspired, derailed and ultimately strengthened by a toxic relationship, it is also partly a coming-of-age drama, a subtle portrait of class and, finally, a film about the making of a film. Still, one of the geniuses of this truly heartbreaking movie is that none of these many elements crowds out the other. It never, ever feels rushed. Indeed, there are many scenes where the camera simply 'sits there' and quietly observes what is going on. Other films might smother themselves through references to 18th-century oil paintings, but The Souvenir somehow evades this too. And there's a certain ring of credibility to the story as well, no doubt in part due to the fact it is based on director Joanna Hogg's own experiences at film school. A beautifully observed and multi-layered film; I'll be happy if the sequel is one-half as good.

The Wrestler (2008) Randy 'The Ram' Robinson is long past his prime, but he is still rarin' to go in the local pro-wrestling circuit. Yet after a brutal beating that seriously threatens his health, Randy hangs up his tights and pursues a serious relationship... and even tries to reconnect with his estranged daughter. But Randy can't resist the lure of the ring, and readies himself for a comeback. The stage is thus set for Darren Aronofsky's The Wrestler, which is essentially about what drives Randy back to the ring. To be sure, Randy derives much of his money from wrestling, as well as his 'fitness', self-image, self-esteem and self-worth. Oh, it's no use insisting that wrestling is fake, for the sport is, needless to say, Randy's identity; it's not for nothing that this film is called The Wrestler. In a number of ways, The Sound of Metal (2019) is both a reaction to (and a quiet remake of) The Wrestler, if only because both movies utilise 'cool' professions to explore such questions of identity. But perhaps simply the time when The Wrestler was produced makes it the superior film. Indeed, the role of time feels very important for The Wrestler. In the first instance, time is clearly taking its toll on Randy's body, but I felt it more strongly in the sense that this was very much a pre-2008 film, released on the cliff-edge of the global financial crisis, and the concomitant precarity of the 2010s. Indeed, it is curious to consider that you couldn't make The Wrestler today, although not because the relationship to work has changed in any fundamental way. (Indeed, isn't it somewhat depressing to realise that, since the start of the pandemic, and putting the 'work from home' trend to one side, we now require even more people to wreck their bodies and mental health to cover their bills?) No, what I mean to say here is that, post-2016, you cannot portray wrestling on-screen without, how can I put it, unwelcome connotations. All of which then reminds me of Minari's notorious red hat... But I digress. The Wrestler is a grittily stark, darkly humorous look into the life of a desperate man and a sorrowful world, all through one tragic profession.

Thief (1981) Frank is an expert professional safecracker and specialises in high-profile diamond heists. He plans to use his ill-gotten gains to retire from crime and build a life for himself with a wife and kids, so he signs on with a top gangster for one last big score. This, of course, could be the plot to any number of heist movies, but Thief does something different. Similar to The Wrestler and The Driver (see above) and a number of other films that I watched this year, Thief seems to be saying something about our relationship to work and family in modernity and postmodernity. Indeed, the 'heist film', we are told, is an understudied genre, but part of the pleasure of watching these films is said to arise from how they portray our desired relationship to work. In particular, Frank's desire to pull off that last big job feels less about the money it would bring him than a displacement from (or proxy for) fulfilling some deep-down desire to have a family or indeed any relationship at all. Because in theory, of course, Frank could enter into a fulfilling long-term relationship right away, without stealing millions of dollars in diamonds... but that's kinda the entire point: Frank needing just one more theft is an excuse to not pursue a relationship and put it off indefinitely in favour of 'work'. (And, being Federal crimes, his heists also mean Frank cannot put down meaningful roots in a community.) All this is communicated extremely subtly in the justly-lauded low-key diner scene, by far the best scene in the movie. The visual aesthetic of Thief is as if you set The Warriors (1979) in a similarly-filthy Chicago, with the Xenophon-inspired plot of The Warriors replaced with an almost deliberate lack of plot development... and the allure of The Warriors' fantastical criminal gangs (with their alluringly well-defined social identities) substituted by a bunch of amoral individuals with no solidarity beyond the immediate moment. A tale of our time, perhaps. I should warn you that the ending of Thief is famously weak, but this is a gritty, intelligent and strangely credible heist movie before you get there.

Uncut Gems (2019) The most exhausting film I've seen in years; the cinematic equivalent of four cups of double espresso, I didn't even bother trying to sleep after downing Uncut Gems late one night. Directed by the Safdie brothers, it often felt like I was watching two films that had been made at the same time. (Or do I mean two films at 2X speed?) No, whatever clumsy metaphor you choose to adopt, the unavoidable effect of this film's finely-tuned chaos is an uncompromising and anxiety-inducing piece of cinema. The plot follows Howard as a man lost to his countless vices, mostly gambling with a significant side hustle in adultery, but you get the distinct impression he would be happy with anything that will give him another high. A true junkie's junkie, you might say. You know right from the beginning it's going to end in some kind of disaster, the only question remaining is precisely how and what. Portrayed by an (almost unrecognisable) Adam Sandler, there's an uncanny sense of distance in the emotional chasm between 'Sandler-as-junkie' and 'Sandler-as-regular-star-of-goofy-comedies'. Yet instead of being distracting and reducing the film's affect, this possibly-deliberate intertextuality somehow adds to the masterfully-controlled mayhem. My heart races just at the memory. Oof.

Woman in the Dunes (1964) I ended up watching three films that feature sand this year: Denis Villeneuve's Dune (2021), Lawrence of Arabia (1962) and Woman in the Dunes. But it is this last 1964 film by Hiroshi Teshigahara that will stick in my mind in the years to come. Sure, there is none of the Medician intrigue of Dune or the Super Panavision-70 of Lawrence of Arabia (or its quasi-orientalist score, itself likely stolen from Anton Bruckner's 6th Symphony), but Woman in the Dunes doesn't have to assert its confidence so boldly, and it reveals the enormity of its plot slowly and deliberately instead. Woman in the Dunes never rushes to get to the film's central dilemma, and it uncovers its terror in little hints and insights, all whilst establishing the daily rhythm of life. Woman in the Dunes has something of the same uncanny horror as Dogtooth (see above), as well as its broad range of potential interpretations. Both films permit a wide array of readings, without resorting to being deliberately obscurantist or just plain random; it is perhaps for this reason that I enjoyed them so much. It is true that asking 'So what does the sand mean?' sounds tediously sophomoric shorn of any context, but it somehow applies to this thoughtfully self-contained piece of cinema.

A Quiet Place (2018) Although A Quiet Place was not actually one of the best films I saw this year, I'm including it here as it is certainly one of the better 'mainstream' Hollywood franchises I came across. Not only is the film very ably constructed, it engages on a visceral level: it is rare that I can empathise with the peril of conventional horror movies (I usually prefer to focus on their cultural and political aesthetics), but I did here. The conceit of this particular post-apocalyptic world is that a family is forced to live in almost complete silence while hiding from creatures that hunt by sound alone. Still, A Quiet Place engages on an intellectual level too, and this probably works in tandem with the pure 'horrorific' elements and makes it stick in your mind. In particular, and to my mind at least, A Quiet Place is a deeply American conservative film below the surface: it exalts the family structure and a certain kind of sacrifice for your family. (The music often had a passacaglia-like strain too, forming a tombeau for America.) Moreover, you survive in this dystopia by staying quiet, that is to say, by staying stoic, suggesting that in the wake of any conflict that might beset the world, the best thing to do is to keep quiet. Even communicating with your loved ones can be deadly to both of you, so do not emote, acquiesce quietly to your fate, and don't, whatever you do, speak up. (Or join a union.) I could go on, but A Quiet Place is more than this. It's taut and brief, and despite cinema being an increasingly visual medium, it encourages its audience to develop a new relationship with sound.

16 October 2017

Gustavo Noronha Silva: Who knew we still had low-hanging fruits?

Earlier this month I had the pleasure of attending the Web Engines Hackfest, hosted by Igalia at their offices in A Coruña, and also sponsored by my employer, Collabora, as well as Google and Mozilla. It has grown a lot and we had many new people this year. Fun fact: I am one of the 3 or 4 people who have attended all of the editions of the hackfest since its inception in 2009, when it was called the WebKitGTK+ hackfest \o/ It was a great get-together where I met many friends and made some new ones. I had plenty of discussions, mainly with Antonio Gomes and Google's Robert Kroeger, about the way forward for Chromium on Wayland. We had the opportunity of explaining how we at Collabora cooperated with Igalians to implement and optimise a Wayland nested compositor for WebKit2 to share buffers between processes in an efficient way, even on broken drivers. Most of the discussions and some of the work that led to this were done in previous hackfests, by the way! The idea seems to have been mostly welcomed, the only concern being that Wayland's interfaces would need to be tested for security (fuzzed). So we may end up going that same route with Chromium for allowing process separation between the UI and GPU (currently being renamed Viz) processes. On another note, and going back to the title of the post, at Collabora we have recently adopted Mattermost to replace our internal IRC server. Many Collaborans have decided to use Mattermost through an Epiphany Web Application or through a simple Python application that just shows a GTK+ window wrapping a WebKitGTK+ WebView. Some people noticed that when the connection was lost Mattermost would take a very long time to notice and reconnect: its web sockets were taking a long, long time to time out, according to our colleague Andrew Shadura. I did some quick searching in the codebase and noticed WebCore has a NetworkStateNotifier interface that it uses to get notified when the connection changes. That was not implemented for WebKitGTK+, so it was likely what caused stuff to linger when a connection hiccup happened. Given we have GNetworkMonitor, implementation of the missing interfaces required only 3 lines of actual code (plus the necessary boilerplate)! I was surprised to still find such low-hanging fruit in WebKitGTK+, so I decided to look for more. It turns out WebCore also has a notifier for low-power situations, which was implemented only by the iOS port, and which causes the engine to throttle some timers and avoid some expensive checks it would do in normal situations. This required a few more lines to implement using upower-glib, but not that many either! That was the fun I had during the hackfest in terms of coding. Mostly I had fun just lurking in break-out sessions discussing the past, present and future of tech such as WebRTC, Servo, Rust, WebKit, Chromium, WebVR, and more. I also beat a few challengers in Street Fighter 2, as usual. I'd like to say thanks to Collabora, Igalia, Google, and Mozilla for sponsoring and attending the hackfest. Thanks to Igalia for hosting and to Collabora for sponsoring my attendance along with two other Collaborans. It was a great hackfest and I'm looking forward to the next one! See you in 2018 =)
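For anyone curious what the GNetworkMonitor side of that looks like, here is a tiny sketch of the same idea from Python; this is an illustration only, not the actual WebKitGTK+ patch (which is C++ inside WebCore):
# Illustration only: watch connectivity changes with GNetworkMonitor, the
# same GLib API the new NetworkStateNotifier backend relies on.
import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

def on_network_changed(monitor, available):
    # WebCore would forward this to NetworkStateNotifier; here we just print it.
    print("network available:", available)

monitor = Gio.NetworkMonitor.get_default()
monitor.connect("network-changed", on_network_changed)
print("initial state:", monitor.get_network_available())
GLib.MainLoop().run()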

1 April 2017

Russ Allbery: Review: Two Serpents Rise

Review: Two Serpents Rise, by Max Gladstone
Series: Craft #2
Publisher: Tor
Copyright: October 2013
ISBN: 1-4668-0204-9
Format: Mobi
Pages: 350
This is the second book in the Craft Sequence, coming after Three Parts Dead, but it's not a sequel. The only thing shared between the books is the same universe and magical system. Events in Two Serpents Rise were sufficiently distant from the events of the first book that it wasn't obvious (nor did it matter) where it fit chronologically. Caleb is a gambler and an investigator for Red King Consolidated, the vast firm that controls the water supply, and everything else, in the desert city of Dresediel Lex. He has a fairly steady and comfortable job in a city that's not comfortable for many, one of sharp divisions between rich and poor and which is constantly one water disturbance away from riot. His corporate work life frustrates his notorious father, a legendary priest of the old gods who were defeated by the Red King and who continues to fight an ongoing terrorist resistance to the new corporate order. But Caleb has as little as possible to do with that. Two Serpents Rise opens with an infiltration of the Bright Mirror Reservoir, one of the key components of Dresediel Lex's water supply. It's been infested with Tzimet: demon-like creatures that, were they to get into the city's water supply, would flow from faucets and feed on humans. Red King Incorporated discovered this one and sealed the reservoir before the worst could happen, but it's an unsettling attack. And while Caleb is attempting to determine what happened, he has an unexpected encounter with a cliff runner: a daredevil parkour enthusiast with an unexpected amulet of old Craft that would keep her invisible from most without the magical legacy Caleb is blessed (or cursed) with. He doesn't think her presence is related to the attack, but he can't be sure, particularly with the muddling fact that he finds her personally fascinating. Like Three Parts Dead, you could call Two Serpents Rise an urban fantasy in that it's a fantasy that largely takes place in cities and is concerned with such things as infrastructure, politics, and the machinery of civilization. However, unlike Three Parts Dead, it takes itself much more seriously and has less of the banter and delightful absurdity of the previous book. The identification of magic with contracts and legalities is less amusingly creative here and more darkly sinister. Partly this is because the past of Dresediel Lex is full of bloodthirsty gods and human sacrifice, and while Red King Consolidated has put an end to that practice, it lurks beneath the surface and is constantly brought to mind by some grisly artifacts. I seem to always struggle with fantasy novels based loosely on central American mythology. An emphasis on sacrifice and terror always seems to emerge from that background, and it verges too close to horror for me. It also seems prone to clashes of divine power and whim instead of thoughtful human analysis. That's certainly the case here: instead of Tara's creative sleuthing and analysis, Caleb's story is more about uncertainty, obsession, gambling, and shattering revelations. Magical rituals are described more in terms of their emotional impact than their world-building magical theory. I think this is mostly a matter of taste, and it's possible others would like Two Serpents Rise better than the previous book, but it wasn't as much my thing. The characters are a mixed bag. Caleb was a bit too passive to me, blown about by his father and his employer and slow to make concrete decisions. 
Mal was the highlight of the book for me, but I felt at odds with the author over that, which made the end of the book somewhat frustrating. Caleb has some interesting friends, but this is one of those books where I would have preferred one of the supporting cast to be the protagonist. That said, it's not a bad book. There are some very impressive set pieces, the supporting cast is quite good, and I am wholeheartedly in favor of fantasy novels that are built around the difficulties of water supply to a large, arid city. This sort of thing has far more to do with human life than the never-ending magical wars over world domination that most fantasy novels focus on, and it's not at all boring when told properly. Gladstone is a good writer, and despite the focus of this book not being as much my cup of tea, I'll keep reading this series. Followed by Full Fathom Five. Rating: 7 out of 10

5 October 2016

Gustavo Noronha Silva: Web Engines Hackfest 2016!

I had a great time last week at the web engines hackfest! It was the 7th web hackfest hosted by Igalia and the 7th hackfest I attended. I'm almost a local Galician already. Brazilian Portuguese being so close to Galician certainly helps! Collabora co-sponsored the event and it was great that two colleagues of mine managed to join me in attendance. There were great talks that will eventually end up in videos uploaded to the web site. We were amazed at the progress being made on Servo, including some performance results that blew our minds. We also discussed the next steps for WebKitGTK+, WebKit for Wayland (or WPE), our own Clutter wrapper to WebKitGTK+ which is used for the Apertis project, and much more.
Zan giving his talk on WPE (former WebKitForWayland)
One thing that drew my attention was how many Dell laptops there were. Many Collaborans (myself included) and Igalians are now using Dells, it seems. Sure, there were ThinkPads and MacBooks, but there were plenty of Inspirons and XPSes as well. It's interesting how the brand make-up has shifted over the years since 2009, when the hackfest could easily have been mistaken for a ThinkPad shop. Back to the actual hackfest: with the recent release of GNOME 3.22 (and Fedora 25 nearing release), my main focus was on dealing with some regressions users experienced after a change in how the final rendering, composited by the nested Wayland compositor we have inside WebKitGTK+, is put onto the GTK+ widget so it is shown on the screen. One of the main problems people reported was applications that use WebKitGTK+ not showing anything where the content was supposed to appear. It turns out the problem was caused by GTK+ not being able to create a GL context. If the system was simply not able to use GL there would be no problem: WebKit would then just disable accelerated compositing and things would work, albeit slower. The problem was WebKit being able to use an older GL version than the minimum required by GTK+. We fixed it by testing that GTK+ is able to create GL contexts before using the fast path, falling back to the slow glReadPixels codepath if not. This way we keep accelerated compositing working inside WebKit, which gives us nice 3D transforms and less repainting, but take the performance hit in the final blit.
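Out of curiosity, the probing idea can be sketched from Python with PyGObject; this only illustrates the GDK side of the check (the actual fix lives in WebKitGTK+'s C++ code) and it assumes a realized GTK+ widget:
# Sketch only: probe whether GDK can create and realize a GL context for a
# widget's window, so a caller could fall back to a slower path if it cannot.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, GLib

def can_use_gl(widget):
    window = widget.get_window()   # requires a realized widget
    if window is None:
        return False
    try:
        context = window.create_gl_context()
        context.realize()          # raises GLib.Error if GL is unusable
        return True
    except GLib.Error:
        return False

# Hypothetical usage: pick the GL fast path only if the probe succeeds,
# otherwise stick with the glReadPixels-style slow path.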
Introducing "WebKitClutterGTK+"Introducing WebKitClutterGTK+
Another issue we hit was GTK+ not properly updating its knowledge of the window's opaque region when painting a frame with GL, which led to some really interesting issues, like a shadow appearing when you tried to shrink the window. There was also an issue where the window would not use all of the screen when fullscreen, which was likely related. Both were fixed. André Magalhães also worked on a couple of patches we wrote for customer projects and are now pushing upstream. One enables the use of more than one frontend to connect to a remote web inspector server at once. This can be used to, for instance, show the regular web inspector on a browser window and also use IDE integration for setting breakpoints and so on. The other patch was cooked by Philip Withnall and helped us deal with some performance bottlenecks we were hitting. It improves the performance of painting scrollbars. WebKitGTK+ does its own painting of scrollbars (we do not use the GTK+ widgets for various reasons). It turns out painting scrollbars can be quite a hit when the page is being scrolled fast, if not done efficiently. Emanuele Aina had a great time learning more about Meson to figure out a build issue we had when a more recent GStreamer was added to our jhbuild environment. He came out of the experience rather sane, which makes me think Meson might indeed be much better than autotools.
Igalia 15 years cake
It was a great hackfest, great seeing everyone face to face. We were happy to celebrate Igalia's 15 years with them. Hope to see everyone again next year =)

10 June 2016

Jaminy Prabaharan: Weekly Report for GSoC16-week 1 and week2

After introducing ourselves to the community, we start contributing to free and open source software. Since this is the first week, I have gone through the theory that will help me with the coding. Before coding, it is always safer to refer to the theory so that we don't need to spend extra time debugging. The following were my week 1-4 plans:

These two weeks I have completed the first two tasks in the list. I have committed and pushed my work to GitHub: https://github.com/Jaminy/GSoC

Successfully logged into the email account using IMAP and examined each folder. Examined the inbox to extract the To, From and CC of the last message received. Now trying to extract all message details to put into a CSV file.
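(For illustration only, not the actual GSoC code: a minimal Python sketch of that task using imaplib, email and csv. The host and credentials are placeholders.)
# Sketch of the task described above: log in over IMAP, fetch the newest
# message in INBOX and write its To/From/Cc headers to a CSV file.
import csv
import imaplib
from email import message_from_bytes

HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "app-password"  # placeholders

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "ALL")
    last_num = data[0].split()[-1]                  # newest message number
    _, msg_data = imap.fetch(last_num, "(RFC822)")
    msg = message_from_bytes(msg_data[0][1])
    with open("last_message.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["To", "From", "Cc"])
        writer.writerow([msg.get("To", ""), msg.get("From", ""), msg.get("Cc", "")])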


13 April 2016

Mike Gabriel: IPv6: Broken by Design; Digital Ocean - How are we doing?

Recently, Digital Ocean (which I am a customer of) asked me "how they were doing". Well, yet another survey... Let's ignore this one for now... I thought some days ago. And then yesterday, I added IPv6 support to my main mail server (which runs at Hetzner, Germany). All my hosted/rented/whatever systems report back to this main mailserver. Now that that main mail server finally has its AAAA record and its own IPv6 address, all associated systems try to reach this main mail server via IPv6. Of course. Crippling IPv6 support by adding Port Blocks But, then, I see messages like these in my syslog files on droplets hosted at Digital Ocean:
Apr 13 10:10:59 <do-droplet> postfix/smtp[23469]: connect to mail.<mydomain>[<ipv6-address>]:25: Connection timed out
After some more research [1], I realized that the folks at DO really do apply port blocking to IPv6 networks, but not to IPv4 networks. Pardon me? From my DO droplets, I can nmap any port on my mail server (25, 80, 143, 443, 465, 587, etc.) via the IPv4 connection, but not over the IPv6 connection. Wait, not fully true: ports 80 and 443 are not blocked, but the other aforementioned ports are definitely blocked. Is Digital Ocean a professional ISP or a WiFi hotspot provider at my nearest coffee place? (This really makes me scratch my head...). Routing only the first 16 addresses of allocated /64 prefixes The above was the second IPv6 brokenness at DO that I learned about recently. An earlier issue with DO's IPv6 support I encountered while deploying an IPv6-capable OpenVPN internet gateway on a droplet hosted at DO. Digital Ocean assigns full IPv6 /64 prefixes to each individual droplet (which is great), but only properly routes the first 16 IP addresses of such a /64 prefix [2]. Urgh? I had to work around this flaw by adding an IPv6-over-IPv4 tunnel and attaching an IPv6 /56 prefix obtained from Hurricane Electric's tunnel broker service [3] to the OpenVPN server. Thanks, Digital Ocean, for reminding me about giving feedback So, today, I luckily received a reminder mail about DO's yet-another-survey survey. My opportunity!!! Here is the feedback I gave:
DO service is basically good.
BUT: You as a provider SUCK when it comes to IPv6.
(1) http://pixelschatten.net/blocked-ipv6-ports/
-> SMTP/IMAP traffic blocked over IPv6, but not over IPv4... WTF?). I normally have all my systems report back to my main mail server. I expect this to work as it is the default on all Linux hosts nowadays, and that is: prefer IPv6 over IPv4.
(2) https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestion...
-> Droplets get a full /64 prefix assigned, but only the first 16 addresses (or such) get routed properly. WTF?
Please do your homework on IPv6 and don't cripple your service by offering crippled IPv6 support.
I tell people, DO is great, but their IPv6 support is broken-by-design. Let me know, once this is about to change.
Mike Gabriel (aka sunweaver at debian dot org, Debian Developer)
Apology for the tone of the wording Now, reading the feedback again, I realize that my tone was quite impolite. I am sorry about this. However, the IPv6 issues I experienced are indeed annoying. So please excuse me for having expressed my annoyance with such harsh words. And... I am still annoyed about paying an ISP for such crippled IPv6 support. (I need to consider migrating the VMs to another hoster, unless some movement becomes observable in the near future.) @Digital Ocean: Keep up the good work that you do in the realm of VM hosting. Evolve and grow up in the realm of IPv6 networking. Thank you! light+love
Mike [1] http://pixelschatten.net/blocked-ipv6-ports/
[2] https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestion...
[3] https://tunnelbroker.net/
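For reference, the port-blocking symptom described above is easy to double-check even without nmap. A small Python sketch (the hostname is a placeholder) that tries the same TCP port first over IPv4 and then over IPv6:
# Sketch: compare TCP reachability of the same port over IPv4 and IPv6.
# A blocked IPv6 path typically shows up as a connection timeout.
import socket

HOST, PORT, TIMEOUT = "mail.example.org", 25, 5  # placeholder hostname

for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
    try:
        sockaddr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4]
        with socket.socket(family, socket.SOCK_STREAM) as sock:
            sock.settimeout(TIMEOUT)
            sock.connect(sockaddr)
        print(f"{label}: port {PORT} reachable")
    except OSError as exc:
        print(f"{label}: port {PORT} NOT reachable ({exc})")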

11 January 2016

Keith Packard: Altos1.6.2

AltOS 1.6.2 TeleMega v2.0 support, bug fixes and documentation updates Bdale and I are pleased to announce the release of AltOS version 1.6.2. AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STMF042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software. This is a minor release of AltOS, including support for our new TeleMega v2.0 board, a small selection of bug fixes and a major update of the documentation AltOS Firmware TeleMega v2.0 added The updated six-channel flight computer, TeleMega v2.0, has a few changes from the v1.0 design: None of these change the basic functionality of the device, but they do change the firmware a bit so there's a new package. AltOS Bug Fixes We also worked around a ground station limitation in the firmware: AltosUI and TeleGPS applications A few minor new features are in this release Documentation I spent a good number of hours completely reformatting and restructuring the Altus Metrum documentation.

15 December 2014

Gustavo Noronha Silva: Web Engines Hackfest 2014

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest. It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL, a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more. With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME's web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play YouTube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that. One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications, show and close them. To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes. GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though. The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find that. Besides making this a pain to test (our test browser would need a .desktop file to be installed), that would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.
For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being. Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D. I'd like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!
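As a rough illustration of what this means for an application embedding WebKitGTK+ (this is not the patch itself, and it assumes the WebKit2 GTK+ introspection bindings as they later shipped), the only thing the embedder needs to do is answer the notification permission request; everything else falls back to the default libnotify-based handlers. For example, from Python:
# Rough sketch, assuming the WebKit2 GTK+ 4.0 GObject introspection bindings.
import gi
gi.require_version("Gtk", "3.0")
gi.require_version("WebKit2", "4.0")
from gi.repository import Gtk, WebKit2

def on_permission_request(web_view, request):
    if isinstance(request, WebKit2.NotificationPermissionRequest):
        request.allow()      # or request.deny()
        return True          # we handled this request
    return False             # let WebKit deal with other request types

window = Gtk.Window(title="notification demo")
view = WebKit2.WebView()
view.connect("permission-request", on_permission_request)
window.add(view)
window.connect("destroy", Gtk.main_quit)
window.show_all()
view.load_uri("https://example.org/")   # placeholder page
Gtk.main()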
Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia


28 September 2014

Ean Schuessler: RoboJuggy at JavaOne

A few months ago I was showing my friend Bruno Souza the work I had been doing with my childhood friend and robotics genius, David Hanson. I had been watching what David was going through in his process of creating life-like robots with the limited industrial software available for motor control. I had suggested to David that binding motors to Blender control structures was a genuinely viable possibility. David talked with his forward-looking CEO, Jong Lee, and they were gracious enough to invite me to Hong Kong to make this exciting idea a reality. Working closely with the HRI team (Vytas, Gabrielos, Fabien and Davide) and with David's friends and collaborators at OpenCog (Ben Goertzel, Mandeep, David, Jamie, Alex and Samuel), a month-long creative hack-fest yielded pretty amazing results. Bruno is an avid puppeteer, a global organizer of Java user groups and creator of Juggy the Java Finch, mascot of Java users and user groups everywhere. We started talking about how cool it would be to have a robot version of Juggy. When I was in China I had spent a little time playing with Mark Tilden's RSMedia and various versions of David's hobby-servo-based emotive heads. Bruno and I did a little research into the ROS Java bindings for the Robot Operating System and decided that if we could make that part of the picture we had a great and fun idea for a JavaOne talk. Hunting and gathering I tracked down a fairly priced RSMedia in Alaska, Bruno put a pair of rubber Juggy puppet heads in the mail and we were on our way.
We had decided that we wanted RoboJuggy to be able to run about untethered, and the new Raspberry Pi B+ seemed like the perfect low-power brain to make that happen. I like the Debian-based Raspbian distributions but had lately started using the netinst Pi images. These get your Pi up and running in about 15 minutes with a nicely minimalistic install instead of a pile of dependencies you probably don't need. I'd recommend anyone interested in duplicating our work to start their journey there: Raspbian UA Net Installer Robots seem like an embedded application, but ROS only ships packages for Ubuntu. I was pleasantly surprised that there are very good instructions for building ROS from source on the Pi. I ended up following these instructions: Setting up ROS Hydro on the Raspberry Pi Building from source means that your whole install ends up being isolated (in ROS speak) and your file locations and build instructions end up being subtly different. As explained in the linked article, this process is also very time consuming. One thing I would recommend once you get past this step is to use the UNIX dd command to back up your entire SD card to a desktop. This way, if you make a mess of things in later steps, you can restore your install to a pristine Raspbian+ROS install. If your SD drive was on /dev/sdb you might use something like this to do the job:
sudo dd bs=4M if=/dev/sdb | gzip > /home/your_username/image`date +%d%m%y`.gz
Getting Java in the mix Once you have your Pi all set up with minimal Raspbian and ROS you are going to want a Java VM. The Pi runs an ARM CPU so you need the corresponding version of Java. I tried getting things going initially with OpenJDK and I had some issues with that. I will work on resolving that in the future because I would like to have a 100% Free Software kit for this, but since this was for JavaOne I also wanted JDK8, which isn't available in Debian yet. So, I downloaded the Oracle JDK8 package for ARM. Java 8 JDK for ARM At this point you are ready to start installing the ROS Java packages. I'm pretty sure the way I did this initially is wrong, but I was trying to reconcile the two install procedures for ROS Java and ROS Hydro for Raspberry Pi. I started by following these directions for ROS Java, but with a few exceptions (you have to click the 'install from source' link in the page to see the right stuff): Installing ROS Java on Hydro Now these instructions are good, but this is a Pi running Debian and not an Ubuntu install. You won't run the apt-get package commands because those tools were already installed in your earlier steps. Also, this creates its own workspace and we really want these packages all in one workspace. You can apparently chain workspaces in ROS but I didn't understand this well enough to get it working, so what I did was this:
> mkdir -p ~/rosjava 
> wstool init -j4 ~/rosjava/src https://raw.github.com/rosjava/rosjava/hydro/rosjava.rosinstall
> source ~/ros_catkin_ws/install_isolated/setup.bash
> cd ~/rosjava
# Make sure we've got all rosdeps and msg packages.
> rosdep update 
> rosdep install --from-paths src -i -y
and then copied the sources installed into ~/rosjava/src into my main ~/ros_catkin_ws/src. Once those were copied over I was able to run a standard build.
> catkin_make_isolated --install
Like the main ROS install, this process will take a little while. The Java gradle builds take an especially long time. One thing I would recommend to speed up your workflow is to have an x86 Debian install (native desktop, QEMU instance, docker, whatever) and do these same build-from-source installs there. This will let you try your steps out on a much faster system before you try them out on the Pi. That can be a big time saver. Putting together the pieces Around this time my RSMedia had finally shown up from Alaska. At first I thought I had a broken unit because it would power up, complain about not passing system tests and then shut back down. It turns out that if you just put the D batteries in and miss the four AAs, it will kind of pretend to be working, so watch out for that mistake. Here is a picture of the RSMedia when it first came out of the box. Other parts were starting to roll in as well. The rubber puppet heads had made their way through Brazilian customs and my Pololu Mini Maestro 24 had also shown up, as well as my servo motors and pan-and-tilt camera rig. I had previously bought a set of 10 motors for goofing around, so I bought the pan-and-tilt rig by itself for about $5(!), but you can buy a complete set for around $25 from a number of EBay stores. Complete pan and tilt rig with motors for $25 A bit more about the Pololu. This astonishing little motor controller costs about $25 and gives you control of 24 motors with an easy-to-use, high-level serial API (a tiny sketch of that protocol appears at the end of this post). It is probably also possible to control these servos directly from the Pi and eliminate this board, but that would be genuinely difficult because of the real-time timing issues. For $25 this thing is a real gem and you won't regret buying it. Now it was time to start dissecting the RSMedia and getting control of its brain. Unfortunately a lot of great information about the RSMedia has floated away since it was in its heyday 5 years ago, but there is still some solid information out there that we need to round up and preserve. A great resource is the SourceForge-based website at http://rsmediadevkit.sourceforge.net. That site has links to a number of useful sites. You will definitely want to check out their wiki. To disassemble the RSMedia I followed their instructions. I will say, it would be smart to take more pictures as you are going, because they don't take as many as they should. I took pictures of each board and its associated connections as I dismantled the unit, and that helped me get things back together later. Another important note is that if all you want to do is solder onto the control board and not replace the head, then it's feasible to solder the board in place without completely disassembling the unit. Here are some photos of the disassembly. Now I also had to start adjusting the puppet head, building an armature for the motors to control it and hooking it into the robot. I need to take some more photos of the actual armature. I like to use cardboard for this kind of stuff because it is so fast to work with and relatively strong. One trick I have also learned about cardboard is that if you get something going with it and you need it to be a little more production strength, you can paint it down with fiberglass resin from your local auto store. Once it dries it becomes incredibly tough because it soaks through the fibers of the cardboard and hardens around them.
You will want to do this in a well-ventilated area, but it's a great way to build super-tough prototypes. Another prototyping trick I can suggest is using a combination of Velcro and zipties to hook things together. The result is surprisingly strong and still easy to take apart if things aren't working out. Velcro self-adhesive pads stick to rubber like magic and that is actually how I hooked the jaw servo onto the mask. You can see me torturing its initial connection here: Since the puppet head had come all the way from Brazil, I decided to cook some chicken hearts in the churrascaria style while I worked on them in the garage. This may sound gross but I'm telling you, you need to try it! I soaked mine in soy sauce, Sriracha and Chinese cooking wine. Delicious, but I digress. As I was eating my chicken hearts I was also connecting the pan-and-tilt armature onto the puppet's jaw and eye assembly. It took me most of the evening to get all this going, but by about one in the morning things were starting to look good! I only had a few days left to hack things together before JavaOne and things were starting to get tight. I had so much to do and had also started to run into some nasty surprises with the ROS Java control software. It turns out that ROS Java is less than friendly with ROS message structures that are not 'built in'. I had tried to follow the provided instructions but have not been able (then or since) to get that working: Using unofficial messages with ROS Java I still needed to get control of the RSMedia. Doing that required the delicate operation of soldering to its control board. On the board there is a set of pins that provide a serial interface to the ARM-based embedded Linux computer that controls the robot. To do that I followed these excellent instructions: Connecting to the RSMedia Linux Console Port After some sweaty time bent over a magnifying glass I had success. I had previously purchased the USB-TTL232 accessory described in the article from Dallas' awesome Tanner Electronics store. If you are a geek I would recommend that you go there and say hi to its proprietor (and walking encyclopedia of electronics knowledge) Jim Tanner. It was very gratifying when I started a copy of minicom, set it to 115200, N, 8, 1, plugged the serial widget into the RSMedia and booted it up. I was greeted with a clearly recognizable Linux startup and console prompt. At first I thought I had done something wrong because I couldn't get it to respond to commands, but I quickly realized I had flow control turned on. Once that was turned off I was able to navigate around the file system, execute commands and have some fun. A little research and I found this useful resource, which let me get all kinds of body movements going: A collection of useful commands for the RSMedia At this point I had a usable set of controls for the body as well as the neck armature. I had a controller running the industry's latest and greatest robotics framework that could run on the RSMedia without being tethered to power, and I had most of a connection to Java going. Now I just had to get all those pieces working together. The only problem was that time was running out: I only had a couple of days until my talk and still had to pack and square things away at work. The last day was spent doing things that I wouldn't be able to do on the road. My brother Erik (a fantastic artist) came over to help paint up the Juggy head and fix the eyeball armature.
He used a mix of oil paint and rubber cement, which stuck to the mask beautifully. I bought battery packs for the USB Pi power and the 6V motor control and integrated them into a box that could sit below the neck armature. I fixed up a cloth neck sleeve that could cover everything. Luckily, during all this my beautiful and ever so supportive girlfriend Becca had helped me get packed, or I probably wouldn't have made it out the door. Welcome to San Francisco THIS ARTICLE IS STILL BEING WRITTEN
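Coming back to the Pololu Mini Maestro mentioned above, here is a tiny sketch of what driving a single servo over its serial interface can look like. It assumes the board's documented compact protocol, pyserial, and a placeholder device path; it is not the RoboJuggy code:
# Sketch only: move one servo channel on a Pololu Maestro via its compact
# serial protocol (command 0x84). Device path and channel are placeholders.
import serial  # pyserial

def set_target(port, channel, target_us):
    # The Maestro expects the target in quarter-microseconds, split into
    # two 7-bit chunks after the 0x84 'set target' command byte.
    quarter_us = int(target_us * 4)
    port.write(bytes([0x84, channel,
                      quarter_us & 0x7F,            # low 7 bits
                      (quarter_us >> 7) & 0x7F]))   # high 7 bits

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as maestro:
    set_target(maestro, channel=0, target_us=1500)  # roughly centre position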

1 September 2013

Eddy Petrișor: Integrating Beyond Compare with Semanticmerge

Note: This post will probably not be on the liking of those who think free software is always preferable to closed source software, so if you are such a person, please take this article as an invitation to implement better open source alternatives that can realistically compete with the closed source applications I am mentioning here. I am not going to mention here where the open source alternatives are not up to the same level as the commercial tools, I'll leave that for the readers or for another article.



Semanticmerge is a merge tool that attempts to do the right thing when it comes to merging source code. It is language aware and it currently supports Java and C#. Just today the creators of the software have started working on the support for C.

Recently they added Debian packages, so I installed it on my system. For open source development Codice Software, the creators of Semanticmerge, offers free licenses, so I decided to ask for one today, and, although it is Sunday, I received an answer and I will get my license on Monday.

When a method is moved from one place to another and changed in a conflicting way in two parallel development lines, Semanticmerge can isolate the offending method and can pass all its incarnations (base, source and destination or, if you prefer, base, mine and theirs) to a text based merge tool to allow the developer to decide how to resolve the merge. On Linux, the Semanticmerge samples are using kdiff3 as the text-based merge tool, which is nice, but I don't use kdiff3, I use Meld, another open source visual tool for merges and comparisons.


OTOH, Beyond Compare is a merge and compare tool made by Scooter Software which provides a very good text-based 3-way merge with a 3 sources + 1 result pane, and can compare both files and directories. Two of its killer features are the ability to split differences into important and unimportant ones according to the syntax of the compared/merged files, and the ability to easily change or add to the syntax rules in a very user-friendly way. This makes it easy to ignore changes in comments, but also basic refactoring such as variable renaming or other trivial code-wide changes, which allows the developer to focus on the important changes/differences during merges or code reviews.

Syntax support for usual file formats like C, Java, shell, Perl etc. is built in (but can be modified, which is a good thing) and new file types with their syntaxes can be added via the GUI from scratch or based on existing rules.

I evaluated Beyond Compare at my workplace and we decided it would be a good investment to purchase licenses for it for the people in our department.


Having these two pieces of software separate is good, but having them integrated with each other would be even better. So I decided to try to see how it could be done. I installed Beyond Compare on my system, too, and looked through the examples.


The first thing I discovered is that the main assumption of the Semanticmerge developers was that the application would be called via the SCM when merges are to be done, so passing lots of parameters would not be a problem. I realised that when I saw how one of the samples' startup scripts invoked Semanticmerge:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="kdiff3 \"#sourcefile\" \"#destinationfile\"" -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\"" -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""
Can you see the problem? It seems Semanticmerge has no persistent knowledge of the user preferences with regards to the text-based merge tool and exports the issue to the SCM, at the price of overcomplicating the command line. I already mentioned this issue in my license request mail and added the issue and my fix suggestion in their voting system of features to be implemented.

The upside was that by comparing the command line for kdiff3 invocations, the kdiff3 documentation and, by comparison, the Beyond Compare SCM integration information, I could deduce what is the command line necessary for Semanticmerge to use Beyond Compare as an external merge and diff tool.

The -edt, -emt and -e2mt options are the ones which specify how the external diff tool, external 3-way merge tool and external 2-way merge tool is to be called. Once I understood that, I split the problem in its obvious parts, each invocation had to be mapped, from kdiff3 options to beyond compare options, adding the occasional bell and whistle, if possible.

The parts to figure out, ordered by complexity, were:

  1. -edt="kdiff3 \"#sourcefile\" \"#destinationfile\"

  2. -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""

  3. -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\""

Semantic merge integrates with kdiff3 in diff mode via the -edt option. This was easy to map to Beyond Compare, I just replaced kdiff3 with bcompare:
-edt="bcompare \"#sourcefile\" \"#destinationfile\""
Integration for 2-way merges was also quite easy, the mapping to Beyond Compare was:
-e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\""
For the 3-way merge I was a little confused because the Beyond Compare documentation and options were inconsistent between Windows and Linux. On Windows, for some of the SCMs, the options that set the titles for the panes are '/title1', '/title2', '/title3' and '/title4' (way too descriptive for my taste /sarcasm), but for some others are '/lefttitle', '/centertitle', '/righttitle', '/outputtitle', while on Linux the options are the more explicit kind, but with a '-' instead of a '/'.

The basic things were easy: ordering the parameters as 'source, destination, base, output' instead of kdiff3's 'base, source, destination, -o output'. Then I wanted to add the bells and whistles, since it really makes more sense for the developer to see something like 'Destination: [method] readOptions' instead of '/tmp/tmp4327687242.tmp', and because that's exactly what is necessary for Semanticmerge when merging methods, since on conflicts the various versions of the functions are placed in temporary files which don't mean anything.

So, after some digging into the examples from Beyond Compare and kdiff3 documentation, I ended up with:
-emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'"

Sadly, I wasn't able to identify the symbolic name for the output, so I added the hard-coded 'merge result'. If the Codice people would like to help with this information (or if it exists), I would be more than willing to update the information and make the necessary changes.

Then I added the bells and whistles for the -edt and -e2mt options, so I ended up with an even more complicated command line. The end result was this monstrosity:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="bcompare \"#sourcefile\" \"#destinationfile\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'" -emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'" -e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"
So when I 3-way merge a function I get something like this (sorry for high resolution, lower resolutions don't do justice to the tools):



I don't expect this post to remain relevant for too much time, because after sending my feedback to Codice, they were open to my suggestion to have persistent settings for the external tool integration, so, in the future, the command line could probably be as simple as:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java
And the integration could be done via the GUI, while the command line can become a way to override the defaults.

14 July 2013

Dirk Eddelbuettel: Go home, Feedly, you're drunk!

With the demise of Google Reader, many of us have scrambled to find alternatives which cover many/most of the features we were used to. For me, synchronised access from web and mobile counts quite high, and I want the (Android) mobile experience to be pretty pleasant too. So with that, and like many other folks, I had shifted over to Feedly. And before I carry on, let me say that Feedly actually does pretty well. They handled the onslaught of new users; they also listened and changed their UI a little to be more compact and Reader-alike. And most importantly, it mostly just works. Until it doesn't. Around the time Feedly shifted folks to their own cloud-based backend, posts from my very own CRANberries aggregator of CRAN package changes for R started to show up truncated. See this screenshot from reading Feedly: Feedly truncates the CRANberries post and compare it with this one from TheOldReader: OldReader shows the full CRANberries post which contains the diffstat output, nicely marked up and all. Now, it so happens that I am the creator of the very RSS feed I am consuming here, and I wrote it so that I could read the very diffstat output that is now missing. And I know full well that neither the code, nor the hosting (on my own box), nor any other aspect changed. Before confirming this via TheOldReader, I looked at the RSS output, and I tried other frontends and apps --- and it became clear that the bug seems to be at the Feedly cloud storage level. And trying to be a good sport, I submitted a bug report / suggestion, but to no avail (apart from two others chiming in that they see this elsewhere too). Given that I am the coder behind the feed that is displayed in a truncated manner, I am aware that I am using a code stack that is stale (the original Blosxom static web generator) yet which has not posed another problem anywhere in the decade I have used it. Nor does the CRANberries feed pose a problem when aggregated and viewed via the TheOldReader code path, or for that matter via Planet R which also carries (parts of) CRANberries. So for now, I have to either pivot out of the CRANberries RSS reading in Feedly and go directly to the webpage (fine, but cumbersome and in need of a working connection) or use a second subscription mechanism. I appreciate what the TheOldReader folks are doing---presumably with a minor fraction of the resources available to Feedly---and may hang with them for now. But it would be awfully nice if Feedly could sleep off its hangover and fix this. In which case I'd be much happier recommending its service.

16 June 2013

Joey Hess: little disasters

Interesting times.. While the big disasters are ongoing, little ones have been spicing up my life lately. A pleasant week by the beach ended with a tropical storm passing over the beach house. I've never experienced this before, and though Andrea was diminished by passing over land, it was still more wind than I've ever seen. I love wind, and this was thrilling, right on the edge of danger but not quite there. At least, if you have the sense to stay out of the water. Leaving the beach, I heard of someone who tried to go surfing that day, and drowned. The night before last, I was startled to find nearly an inch of water seeping up from underneath the tile floor of the kitchen. Probably it has something to do with the pressure tank pumping system, which was repaired while I was away, and means I actually have indoor running water here. (Overrated.) This saw me scrambling to close every water valve, and out with a flashlight at 2 am closing the cutoff at the 1000 gallon water reservoir before it all drained into the house. While sopping up dozens of gallons of water from the floor at 3 am probably doesn't sound like fun, I found myself going through the motions elatedly.. Because this means I finally am coming to understand the source of the damp that infests the most earth-sheltered corner of this house. It's not condensation. It's bad plumbing! Then yesterday, I went out to try a dip in the river, stopped by the neighborhood eatery and bait shop, and ended up sitting out on the back deck eating ribs and listening to a band with "possum playboys" in their name (which makes the full name fairly irrelevant), while looking out over the river and the old-timey green metal bridge. Which was unexpected fun, and the kind of thing you have to take in when it happens, but getting stuck in a newly installed hole in my driveway was not. My car was spinning, and I gave up and called it a night. Here's the thing. I could feel my brain working on this stupid "underpowered car is stuck in a small rut" issue all night long. Same mental pathways activating that chew over bugs and design issues. Got up this morning with a set of plans and contingency plans all ready to go. The first one, of jacking it up and putting something under the tire, was stymied; it seems I am missing a jack. But the second, of digging out all around the tire, and then filling in with gravel and cat litter (a tip from some offroading website I blearily surfed last night), and then riding the gas while releasing the brake, worked great. All of which is to say, bring 'em on! But I still prefer my disasters in the form of software bugs.

26 May 2013

Andrew Pollock: [life] Maker Fair 2013 virtual trip report

The Maker Faire is one of the Bay Area things that I'll really miss. Zoe had a ball last year (for weeks afterwards every outing was "Maker Faire!" regardless of what or where it was). Last year I didn't really get to cover it very well, so I ended up doing a bit of a virtual tour via the exhibitor list on their website, and so this year I thought I'd be there in spirit by doing the same thing again. Here are my picks:

10 August 2010

Tim Retout: Sunny Southampton

On my last night in New York, I didn't sleep much. At 6am, I said farewell to Central Park by running round the reservoir, which I hadn't yet done. There was a very nice red sunrise to be seen from the west side. Unfortunately I didn't sleep much on the flight home either. The British accents sounded quite unusual when we landed in Heathrow, and it was quite confusing not being able to find a Starbucks. Once I was back home, I crashed, and woke up at 10pm. I spent last night clearing the pkg-perl review queue - gregoa is taking a short break after DebConf. Then I went running at sunrise again. This is quite a different experience to Central Park - first, you have to run 2.5km just to get to Southampton Common, and secondly it is raining quite heavily. I dug out some winter gear that had turned out to be completely inappropriate for New York.

14 June 2010

John Goerzen: Camping with 2 Boys

Friday I took the day off for us to go camping. Due to the birth of Oliver, our last time camping was in 2008. Back then, Terah had asked me what I wanted for father s day, and I suggested camping. Except when she was pregnant last year (and then Oliver was little), we have sort of had a standing plan to go camping twice a year: around father s day and my birthday. She says that s way more camping in a year than anyone needs. Anyhow, Jacob doesn t remember the last time we went camping; he was not quite 2 at the time. I explained that we would sleep in tents, and he d get one just his size. He laughed at that. Then I explained that he would get a bag to sleep in, and he thought that was hilarious. He kept talking about tents and even more so about sleeping bags until he actually got to be in one. It seems like each time we go camping, we wind up somewhere else. We ve camped at Kanoplis Lake and Marion Park and Lake in the past few years. But this time we went to Cedar Valley Reservoir near Garnett, KS. It was a nice place to be, with some wilderness camping areas far from any RV hookups (though right next to a gravel driveway where we could park). Almost as soon as we got out of the car, Jacob found a partially-burned stick left in a fire ring. So of course his hands were black pretty quick. He kept track of that stick the whole time we were there, and used it for everything he could think of. His favorite use, though, was at the lake to splash water: 2010-06-11 15.20.21.jpg The lake really was his favorite feature. He loved to throw rocks and sticks in it with me, to splash himself, and to stick his feet in. Oh, and to splash me. IMG_4057.JPG The pier was also a great thing for him he at first thought it was a boat, then really wanted to go. I made him hold my hand the whole time, and he liked walking on it. But he was frustrated that he couldn t reach the water. Somehow, Jacob acted as if he could sit in the hot sun for hours, just making small splashes with a stick. I, on the other hand, felt the 90-degree heat strongly, and eventually we found a shady spot to play in the lake. Oliver enjoyed crawling around on a blanket or on the grass. Though Terah put a stop to that after she noticed that he came across a pile of fish heads and was about to reach for one and put it in his mouth. We cooked over a campfire Friday night and Saturday morning. It took me a lot longer to start a fire than normal. Part of the problem was that there had been a lot of rain in the area, and it was humid, so there was little kindling to be found. Plenty of larger pieces of wood, so once the fire got going, it burned nicely. We cooked brats, brought along some homemade bread, had grilled foil-wrapped potatoes, and had stir-fry vegetables. Oliver got to eat with us too, of course: IMG_4022.JPG Terah is a big believer in smores, so after supper we made some smores Jacob predictably got sticky from the marshmallows and chocolate. Jacob started asking when I d set up the tent almost as soon as he found his stick. All afternoon he kept asking. Finally we got it set up, and of course he was so excited that he played inside it for an hour before he fell asleep. Terah had bought a $4 lantern at Walmart to give to him to use in his tent. I told him that there would be a surprise lantern in his tent for him to use. And he loved his lantern. It could turn on, off, and flash red. At one point he asked me, Dad, do you have a surprise lantern too? No, I will just use my flashlight. He looked a bit sad about that. 
For about 5 seconds. Then he turned on his lantern again. Here he is with it, still playing happily about 2 hours after his bedtime: IMG_4100.JPG And in the morning, of course he was still wanting that, but also wanted to play with his digital camera (a kid s version that also has a simple game or two on it). Without leaving the tent, of course. IMG_4105.JPG Oliver was getting hungry while we worked on breakfast, so I gave him some bread to munch on. IMG_4119.JPG We used our cast iron skillet to fry some bacon for breakfast. Then Terah made some pancakes in the skillet (with the bacon grease). Greasiest and best pancakes ever. After that, we had some fried and scrambled eggs. After that, we had more smores (I wasn t kidding about Terah being a big believer in them.) Jacob decided that he wanted to play in the other tent. He enjoyed playing with Terah, Oliver, and the air mattress (that Terah appreciated as much as I thought it took up too much space in the car). There s a little window on the back that can open to a screen or open completely. Jacob and Oliver enjoyed looking out of it while open. Or rather, Jacob enjoyed that, while Oliver enjoyed licking it. IMG_4161.JPG I had my Droid with me to keep an eye on the weather. The forecast for several days had called for a 20% chance of storms on Saturday. I checked Saturday morning at 7, and it still called for 20% chance of storms after 1PM. We decided we d pack up our tents after breakfast so we could stay as long as we liked or leave on short notice if we preferred. Clouds seemed to be building, though, so I checked the forecast again at maybe 9. Now it called for 60% chance of thunderstorms and heavy rain before 1PM. Guess I shouldn t have been surprised; it is Kansas after all. As I had the first tent about 90% put away, it started to rain. We rushed to get the rest of the stuff in the car, and by the time we did, it was raining heavily. We had planned to spend at least part of the afternoon there, but decided we d look for another place to stop on the way home. We stopped first at Garnett s North Lake, as the rain had let up for awhile. We spotted a fishing pier that Jacob wanted to walk on, but the bridge out to it was underwater, so we passed on that. While Terah fed Oliver in a shelter, Jacob and I played on some swings. Then we walked down to that lake. Jacob carefully came to an abrupt stop and looked both ways at each disused road between us and the lake, saying OK when it was clear to go. The rain picked up again, so we started heading towards home. It got quite heavy, with thunder and everything, which Jacob didn t care for at all. Both boys went to sleep, though. We stopped at Marion Reservoir on the way home. Jacob hadn t been able to play in the lake at Garnett (the place we stopped had a bank that was too steep) and so we played there. He waded into the water at the swimming area, and predictably enjoyed yes throwing rocks into the lake. When we got home, I set out the wet tent and tarps (it s not good to store that stuff wet) to dry there hadn t been rain at home yet. Camping s a lot of work, but it was a good family activity. I m looking forward to our next time, and hope we can choose a weekend a little less hot and damp.

5 May 2010

Alastair McKinstry: EGU 2010 - Monday afternoon

The EGU conference is packed, and this blog isn't anything remotely like real-time, but some interesting talks came up on Monday evening. Simon Kattenhorn gave a neat presentation on icy moon tectonics, specifically on Europa and Enceladus. He showed the cycloid tracks on Europa, and how these are probably generated by tidally-driven ice tectonics. The presence of cycloids of different ages shows the existence of a global ocean underneath, and a decoupled ice shell on top. More on this can be found in Greenberg's book on Europa; but he also showed further follow-on work on Enceladus. Enceladus is small enough that people thought it had cooled and could not support an ocean, explaining the geysers and plumes by "small reservoirs". He showed instead that the 'tiger stripes' at the south pole are also tidally-created stresses, and moreover that older generations of stripes, at an angle to the current ones, are also present: the plumes occur at the intersections. This shows free rotation of the ice shell, and almost certainly an ocean. As to how to heat it: tidal heating is the primary candidate. I'm looking forward to "Tidal heating and orbital evolution of Enceladus" on Thursday.

4 April 2009

Adrian von Bidder: Supernatural

Long time since my last movies posting ... Just discovered the U.S. TV series Supernatural, in an episode showing the two heroes being introduced for the first time to the writer of the book series "Supernatural", which contains the life of the two heroes. (Strangely, it showed up when I searched for Tarantino's Reservoir Dogs on a torrent search engine.) I'm not quite sure what to think about the series, judging from that episode. I read a lot of fantasy and I like the real-world / fantasy-world crossover every now and then, but Supernatural is quite cheaply made. Watching this one episode was fun, but I have no idea if I would actually watch it regularly.

(I find, generally, that fantasy is difficult in movies. Lord of the Rings is very well executed and mostly presents the world as just "a normal world", and I'm forever thankful to Jackson that he shares my opinion that magic with smoke and flashes mostly ends up in major silliness; I think he says as much in the bonus materials on one of the DVDs. I have neither read nor watched anything of the Harry Potter epic, so I can't comment on that.)

While I can't comment on Reservoir Dogs yet, I finally got around to watching From Dusk Till Dawn by director Robert Rodriguez, with Tarantino (also co-writer of the movie and, if you watch the making-of, even co-director, though it's not in the titles) and Clooney as two of the main characters, together with a wonderful Juliette Lewis. That's certainly one of the movies I'll watch again, several times. Great soundtrack, too.

Yesterday, I was a bit disappointed after I watched Time Bandits; Gilliam's later films got much better. It takes something to make a stop-motion movie in today's time of cheap and easy computer animation ... so I really liked Tim Burton's Corpse Bride, not so much for the rather straightforward story as for the look and the atmosphere. The movie is also more a musical than a film, which I like very much as well.

17 July 2008

Thijs Kinkhorst: FEE error on Nikon DSLR - fixed

Recently my Nikon D70s, when using a new Sigma lens, displayed the following error in the aperture display: fEE. As it took me some time to find out the cause and fix it, I'll explain it here, perhaps for the benefit of others.

What does it mean? Some lenses require that the aperture ring is set to the smallest aperture (the largest f-number; this is usually coloured orange) when they are connected to the body. fEE is indicated when the lens is connected wrongly, and the camera refuses to operate until the lens is reconnected.

(pictures: lens and body)

If, like me, you still get the fEE even though you've connected the lens correctly, then obviously something is broken. The camera "knows" whether the aperture ring is set to the right value thanks to a notch on the lens (rightmost picture) and a switch on the body ("EE Servo Coupling Post", left picture). In my case the switch on the body had broken off. You can of course send your camera in for repair, but for me it was easily repaired by sticking a hairpin in the switch. A little piece of plastic and some superglue could work as well.
