Search Results: "wart"

14 January 2016

Vincent Sanders: Ampere was the Newton of Electricity.

I think Maxwell was probably right; certainly the unit of current to which Ampère gives his name has been a concern of mine recently.

Regular readers may have noticed my unhealthy obsession with single board computers. I have recently rehomed all the systems into my rack, which threw up a small issue of powering them all. I had been using an ad-hoc selection of USB wall warts and adapters, but this ended up needing nine mains sockets and, short of purchasing a very expensive PDU for the rack, would have needed a lot of space.

Additionally, having nine separate converters from mains AC to low-voltage DC was consuming over 60 watts for 20W of load! The majority of these supplies were simply delivering 5V, either via micro USB or a DC barrel jack.

Initially I considered using a ten port powered USB hub, but this seemed expensive as I was not going to use the data connections. It also had a limit of 5W per port, and some of my systems could potentially use more power than that, so I decided to build my own supply.

PSU module from ebay
A quick look on eBay revealed that a 150W (30A at 5V) switching supply could be had from a UK vendor for £9.99, which seemed about right. An enclosure, fused and switched IEC inlet, ammeter/voltmeter with shunt and suitable cables were acquired for another £15.

Top view of the supply all wired up
A little careful drilling and cutting of the enclosure made openings for the inlets, cables and display. These were then wired together with crimped and insulated spade and ring connectors. I wanted this build to be safe and reliable so care was taken to get the neatest layout I could manage with good separation between the low and high voltage cabling.

Completed supply with all twelve outputs wired up
The result is a neat supply with twelve outputs which I can easily extend to eighteen if needed. I was pleasantly surprised to discover that even with twelve SBCs connected generating a 20W load, the power drawn by the supply was 25W, or about 80% efficiency, instead of the 33% previously achieved.
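
For anyone wanting to double-check those numbers, the arithmetic is easy to reproduce. The short Python sketch below simply restates the wattages quoted in this post (20W of load, roughly 60W drawn by the nine wall warts, 25W drawn by the new supply); it is an illustration, not a measurement.

# Back-of-the-envelope check of the efficiency figures quoted above,
# using only the wattages mentioned in the post.

def efficiency(load_w, drawn_w):
    # Conversion efficiency as a fraction: DC power delivered / AC power drawn.
    return load_w / drawn_w

wall_warts = efficiency(load_w=20, drawn_w=60)   # nine separate adapters
single_psu = efficiency(load_w=20, drawn_w=25)   # one 150W switching supply

print("nine wall warts: {:.0%}".format(wall_warts))   # about 33%
print("single supply:   {:.0%}".format(single_psu))   # 80%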

The inbuilt meter allows me to easily see the load on the supply, which so far has not risen above 5A even at peak draw, despite the Cubietruck and Banana Pi having spinning rust hard drives attached, so there is plenty of room for my SBC addiction to grow (I already pledged for a Pine64).

Supply installed in the rack with some of the SBC connected
Overall I am pleased with how this turned out, and while there are no detailed design files for this project it should be easy to follow if you want to repeat it. One note of caution though: this project has mains wiring, and while I am confident in my own capabilities dealing with potentially lethal voltages, I cannot be responsible for anyone else, so caveat emptor!

5 January 2016

Benjamin Mako Hill: Celebrate Aaron Swartz in Seattle (or Atlanta, Chicago, Dallas, NYC, SF)

I'm organizing an event at the University of Washington in Seattle that involves a reading, the screening of a documentary film, and a Q&A about Aaron Swartz. The event coincides with the third anniversary of Aaron's death and the release of a new book of Swartz's writing that I contributed to. The event is free and open to the public and details are below:

WHEN: Wednesday, January 13 at 6:30-9:30 p.m.

WHERE: Communications Building (CMU) 120, University of Washington

We invite you to celebrate the life and activism efforts of Aaron Swartz, hosted by UW Communication professor Benjamin Mako Hill. The event is next week and will consist of a short book reading, a screening of a documentary about Aaron's life, and a Q&A with Mako, who knew Aaron well; details are below. No RSVP required; we hope you can join us.

Aaron Swartz was a programming prodigy, entrepreneur, and information activist who contributed to the core Internet protocol RSS and co-founded Reddit, among other groundbreaking work. However, it was his efforts in social justice and political organizing combined with his aggressive approach to promoting increased access to information that entangled him in a two-year legal nightmare that ended with the taking of his own life at the age of 26.

January 11, 2016 marks the third anniversary of his death. Join us two days later for a reading from a new posthumous collection of Swartz's writing published by The New Press, a showing of The Internet's Own Boy (a documentary about his life), and a Q&A with UW Communication professor Benjamin Mako Hill, a former roommate and friend of Swartz and a contributor to and co-editor of the first section of the new book. If you're not in Seattle, there are events with similar programs being organized in Atlanta, Chicago, Dallas, New York, and San Francisco. All of these other events will be on Monday, January 11, and registration is required for all of them. I will be speaking at the event in San Francisco.

4 January 2016

Benjamin Mako Hill: The Boy Who Could Change the World: The Writings of Aaron Swartz

The New Press has published a new collection of Aaron Swartz's writing called The Boy Who Could Change the World: The Writings of Aaron Swartz. I worked with Seth Schoen to introduce and help edit the opening section of the book that includes Aaron's writings on free culture, access to information and knowledge, and copyright. Seth and I have put our introduction online under an appropriately free license (CC BY-SA). Over the last week, I've read the whole book again. I think the book really is a wonderful snapshot of Aaron's thought and personality. It's got bits that make me roll my eyes, bits that make me want to shout in support, and bits that continue to challenge me. It all makes me miss Aaron terribly. I strongly recommend the book. Because the publication is posthumous, it's meant that folks like me are doing media work for the book. In honor of naming the book their progressive pick of the week, Truthout has also published an interview with me about Aaron and the book. Other folks who introduced and/or edited topical sections in the book are David Auerbach (Computers), David Segal (Politics), Cory Doctorow (Media), James Grimmelmann (Books and Culture), and Astra Taylor (Unschool). The book is introduced by Larry Lessig.

2 January 2016

Daniel Pocock: The great life of Ian Murdock and police brutality in context

Over the last week, people have been saying a lot about the wonderful life of Ian Murdock and his contributions to Debian and the world of free software. According to one news site, a San Francisco police officer, Grace Gatpandan, has been doing the opposite, starting a PR spin operation, leaking snippets of information about what may have happened during Ian's final 24 hours. Sadly, these things are now starting to be regurgitated without proper scrutiny by the mainstream press (note the erroneous reference to SFGate with link to SFBay.ca, this is British tabloid media at its best).
The report talks about somebody (no suggestion that it was even Ian) "trying to break into a residence". Let's translate that from the spin-doctor-speak back to English: it is the silly season, when many people have a couple of extra drinks and do silly things like losing their keys. "a residence", or just their own home perhaps? Maybe some AirBNB guest arriving late to the irritation of annoyed neighbours? Doesn't the choice of words make the motive sound so much more sinister? Nobody knows the full story and nobody knows if this was Ian, so snippets of information like this are inappropriate, especially when somebody is deceased.
Did they really mean to leave people with the impression that one of the greatest visionaries of the Linux world was also a cat burglar? That somebody who spent his life giving selflessly and generously for the benefit of the whole world (his legacy is far greater than Steve Jobs, as Debian comes with no strings attached) spends the Christmas weekend taking things from other people's houses in the dark of the night? The report doesn't mention any evidence of a break-in or any charges for breaking-in. If having a few drinks and losing your keys in December is such a sorry state to be in, many of us could potentially be framed in the same terms at some point in our lives.
That is one of the reasons I feel so compelled to write this: somebody else could be going through exactly the same experience at the moment you are reading this. Any of us could end up facing an assault as unpleasant as the tweets imply at some point in the future. At least I can console myself that as a privileged white male, the risk to myself is much lower than for those with mental illness, the homeless, transgender, Muslim or black people but as the tweets suggest, it could be any of us.
The story reports that officers didn't actually come across Ian breaking in to anything, they encountered him at a nearby street corner. If he had weapons or drugs or he was known to police that would have almost certainly been emphasized. Is it right to rush in and deprive somebody of their liberties without first giving them an opportunity to identify themselves and possibly confirm if they had a reason to be there?
The report goes on, "he was belligerent", "he became violent", "banging his head" all by himself. How often do you see intelligent and successful people like Ian Murdock spontaneously harming themselves in that way? Can you find anything like that in any of the 4,390 Ian Murdock videos on YouTube? How much more frequently do you see reports that somebody "banged their head", all by themselves of course, during some encounter with law enforcement? Do police never make mistakes like other human beings?
If any person was genuinely trying to spontaneously inflict a head injury on himself, as the police have suggested, why wouldn't the police leave them in the hospital or other suitable care? Do they really think that when people are displaying signs of self-harm, rounding them up and taking them to jail will be in their best interests?
Now, I'm not suggesting this started out with some sort of conspiracy. Police may have been at the end of a long shift (and it is a disgrace that many US police are not paid for their overtime) or just had a rough experience with somebody far more sinister. On the other hand, there may have been a mistake, gaps in police training or an inappropriate use of a procedure that is not always justified, like a strip search, that causes profound suffering for many victims.
A select number of US police forces have been shamed around the world for a series of incidents of extreme violence in recent times, including the death of Michael Brown in Ferguson, shooting Walter Scott in the back, death of Freddie Gray in Baltimore and the attempts of Chicago's police to run an on-shore version of Guantanamo Bay. Beyond those highly violent incidents, the world has also seen the abuse of Ahmed Mohamed, the Muslim schoolboy arrested for his interest in electronics and in 2013, the suicide of Aaron Swartz which appears to be a direct consequence of the "Justice" department's obsession with him. What have the police learned from all this bad publicity? Are they changing their methods, or just hiring more spin doctors? If that is their response, then doesn't it leave them with a cruel advantage over those people who were deceased? Isn't it standard practice for some police to simply round up anybody who is a bit lost and write up a charge sheet for resisting arrest or assaulting an officer as insurance against questions about their own excessive use of force?
When British police executed Jean Charles de Menezes on a crowded tube train and realized they had just done something incredibly outrageous, their PR office went to great lengths to try and protect their image, even photoshopping images of Menezes to make him look more like some other suspect in a wanted poster. To this day, they continue to refer to Menezes as a victim of the terrorists, could they be any more arrogant? While nobody believes the police woke up that morning thinking "let's kill some random guy on the tube", it is clear they made a mistake and like many people (not just police), they immediately prioritized protecting their reputation over protecting the truth.
Nobody else knows exactly what Ian was doing and exactly what the police did to him. We may never know. However, any disparaging or irrelevant comments from the police should be viewed with some caution.
The horrors of incarceration
It would be hard for any of us to understand everything that an innocent person goes through when detained by the police. The recently released movie about The Stanford Prison Experiment may be an interesting place to start; a German version produced in 2001, Das Experiment, is also very highly respected. The United States has the largest prison population in the world and the second-highest per-capita incarceration rate. Many, including some on death row, are actually innocent, in the wrong place at the wrong time, without the funds to hire an attorney. The system, and the police and prison officers who operate it, treat these people as packages on a conveyor belt, without even the most basic human dignity.
Whether their encounter lasts for just a few hours or decades, is it any surprise that something dies inside them when they discover this cruel side of American society? Worldwide, there is an increasing trend to make incarceration as degrading as possible. People may be innocent until proven guilty, but this hasn't stopped police in the UK from locking up and strip-searching over 4,500 children in a five year period; would these children go away feeling any different than if they had an encounter with Jimmy Savile or Rolf Harris? One can only wonder what they do to adults. What all this boils down to is that people shouldn't really be incarcerated unless it is clear the danger they pose to society is greater than the danger they may face in a prison.
What can people do for Ian and for justice?
Now that these unfortunate smears have appeared, it would be great to try and fill the Internet with stories of the great things Ian has done for the world. Write whatever you feel about Ian's work and your own experience of Debian. While the circumstances of the final tweets from his Twitter account are confusing, the tweets appear to be consistent with many other complaints about US law enforcement.
Are there positive things that people can do in their community to help reduce the harm? Sending books to prisoners (the UK tried to ban this) can make a difference. Treat them like humans, even if the system doesn't. Recording incidents of police activities can also make a huge difference, such as the video of the shooting of Walter Scott or the UK police making a brutal unprovoked attack on a newspaper vendor. Don't just walk past a situation and assume everything is under control. People making recordings may find themselves in danger; it is recommended to use software that automatically duplicates each recording, preferably to the cloud, so that if the police ask you to delete such evidence, you can let them watch you delete it and still have a copy.
Can anybody think of awards that Ian Murdock should be nominated for, either in free software, computing or engineering in general? Some, like the prestigious Queen Elizabeth Prize for Engineering, can't be awarded posthumously but others may be within reach. Come and share your ideas on the debian-project mailing list, there are already some here.
Best of all, Ian didn't just build software, he built an organization, Debian. Debian's principles have helped to unite many people from otherwise different backgrounds and carry on those principles even when Ian is no longer among us. Find out more, install it on your computer or even look for ways to participate in the project.

30 December 2015

Bits from Debian: Debian mourns the passing of Ian Murdock

Ian Murdock With a heavy heart Debian mourns the passing of Ian Murdock, stalwart proponent of Free Open Source Software, Father, Son, and the 'ian' in Debian. Ian started the Debian project in August of 1993, releasing the first versions of Debian later that same year. Debian would go on to become the world's Universal Operating System, running on everything from embedded devices to the space station. Ian's sharp focus was on creating a Distribution and community culture that did the right thing, be it ethically, or technically. Releases went out when they were ready, and the project's staunch stance on Software Freedom are the gold standards in the Free and Open Source world. Ian's devotion to the right thing guided his work, both in Debian and in the subsequent years, always working towards the best possible future. Ian's dream has lived on, the Debian community remains incredibly active, with thousands of developers working untold hours to bring the world a reliable and secure operating system. The thoughts of the Debian Community are with Ian's family in this hard time. His family has asked for privacy during this difficult time and we very much wish to respect that. Within our Debian and the larger Linux community condolences may be sent to in-memoriam-ian@debian.org where they will be kept and archived.

15 November 2015

Manuel A. Fernandez Montecelo: Work on aptitude

Midsummer for me is also known as Noite do Lume Novo (literally "New Fire Night"), one of the big calendar events of the year, marking the end of the school year and the beginning of summer. On this day, there are celebrations not very unlike the bonfires of Guy Fawkes Night in England or Britain [1]. It is a bit different in that it is not a single event for the masses, more of a friends and neighbours thing, and that it lasts for a big chunk of the night (sometimes until morning). Perhaps for some people, or outside bigger towns or cities, Guy Fawkes Night is also celebrated in that way and that's why during the first days of November there are fireworks rocketing and cracking in the neighbourhoods all around. Like many other celebrations around the world involving bonfires, many of them also happening around the summer solstice, it is supposed to be a time of renewal of cycles, purification and keeping the evil spirits away; with rituals to that effect like jumping over the fire when the flames are not high and it is safe enough.
So it was fitting that, in the middle of June (almost Midsummer in the northern hemisphere), I learnt that I was about to leave my now-previous job, which is a pretty big signal and precursor for renewal (and it might have something to do with purifying and keeping the evil away as well ;-) ). Whatever... But what does all of this have to do with aptitude or Debian, anyway? For one, it was a question of timing. While looking for a new job (and I am still at it), I had more spare time than usual. DebConf 15 @ Heidelberg was within sight, and for the first time circumstances allowed me to attend this event. It also coincided with the time when I re-gained access to commit to aptitude, on the 19th of June. Which means Renewal.
End of June was also the time of the announcement of the colossal GCC-5/C++11 ABI transition in Debian, that was scheduled to start on the 1st of August, just before the DebConf. Between 2 and 3 thousand source packages in Debian were affected by this transition, which a few months later is not yet finished (although the most important parts were completed by mid-to-end September). aptitude itself is written in C++, and depends on several libraries written in C++, like Boost, Xapian and SigC++. All of them had to be compiled with the new C++11 ABI of GCC-5, in unison and in a particular order, for aptitude to continue to work (and for minimal breakage). aptitude and some dependencies did not even compile straight away, so this transition meant that aptitude needed attention just to keep working.
Having recently been awarded again with the Aptitude Hat, attending DebConf for the first time and sailing towards the Transition Maelstrom, it was a clear sign that Something Had to Be Done (to avoid the sideways looks and consequent shame at DebConf, if nothing else). Happily (or a bit unhappily for me, but let's pretend...), with the unexpected free time in my hands, I changed the plans that I had before re-gaining the Aptitude Hat (some of them involving Debian, but in other ways; maybe I will post about that soon). In July I worked to fix the problems before the transition started, so aptitude would be (mostly) ready, or in the worst case broken only for a few days, while the chain of dependencies was rebuilt.
But apart from the changes needed for the new GCC-5, it was decided at the last minute that Boost 1.55 would not be rebuilt with the new ABI, and that the only version with the new ABI would be 1.58 (which caused further breakage in aptitude, was added to experimental only a few days before, and was moved to unstable after the transition had started). Later, in the first days of the transition, aptitude was affected for a few days by breakage in the dependencies, due to not being compiled in sequence according to the transition levels (so with a mix of old and new ABI). With the critical intervention of Axel Beckert (abe / XTaran), things were not so bad as they could have been. He was busy testing and uploading in the critical days when I was enjoying a small holiday on my way to DebConf, with minimal internet access and communicating almost exclusively with him; and he promptly tended to the complaints arriving in the Bug Tracking System and asked for rebuilds of the dependencies with the new ABI. He also brought the packaging up to shape, which had decayed a bit in the last few years.
Gruesome Challenges
But not all was solved yet; more storms were brewing and started to appear on the horizon, in the form of clouds of fire coming from nearby realms. The APT Deities, which had long ago spilled out their secret, inner challenge (just the initial paragraphs), were relentless. Moreover, they were present at Heidelberg in full force, in or close to their home grounds, and they were Marching Decidedly towards Victory:
apt BTS Graph, 2015-11-15
In the talk @ DebConf "This APT has Super Cow Powers" (video available), by David Kalnischkies, they told us about the niceties of apt 1.1 (still in experimental but hopefully coming to unstable soon), and they boasted about getting the lead in our arms race (should I say bugs race?) by a few open bug reports. This act of provocation further escalated the tensions. The fierce competition which had been going on for some time gained new heights. So much so that the APT Deities and our team had to sit together in the outdoor areas of the venue and have many a weissbier together, while discussing and fixing bugs. But beneath the calm on the surface, and while pretending to keep good diplomatic relations, I knew that Something Had to Be Done, again. So I could only do one thing: jump over the bonfire and Keep the Evil away, be that Keep Evil bugs Away or Keep Evil APT Deities Away from winning the challenge, or both.
After returning from DebConf I continued to dedicate time to the project, more than a full time job in some weeks, and this is what happened in the last few months, summarised in another graph, showing the evolution of the BTS for aptitude:
aptitude BTS Graph, 2015-11-15
The numbers for apt right now (15th November 2015) are:
The numbers for aptitude right now are:
The Aftermath
As we can see, for the time being I could keep the Evil at bay, both in terms of bugs themselves and re-gaining the lead in the bugs race; the Evil APT Deities were thwarted again in their efforts. ... More seriously, as most of you suspected, the graph above is not the whole truth, so I don't want to boast too much. A big part of the reduction in the number of bugs is because of merging duplicates, closing obsolete bugs, applying translations coming from multiple contributors, or simple fixes like typos and useful suggestions needing minor changes.
Many of the remaining problems are comparatively more difficult or time-consuming than the ones addressed so far (except perhaps avoiding the immediate breakage of the transition, which took weeks to solve), and there are many important problems still there; chief among those is aptitude offering very poor solutions to resolve conflicts. Still, even the simplest of the changes takes effort, and triaging hundreds of bugs is not fun at all and mostly a thankless effort, although there is the occasional kind soul that thanks you for handling a decade-old bug. If being subjected to the rigours of the BTS and reading and solving hundreds of bug reports is not Purification, I don't know what it is. Apart from the triaging, there were 118 bugs closed (or pending) due to changes made in the upstream part or the packaging in the last few months, and there are many changes that are not reflected in bugs closed (like most of the changes needed due to the C++11 ABI transition, bugs and problems fixed that had no report, and general rejuvenation or improvement of some parts of the code). How long this will last, I cannot know. I hope to find a job at some point, which obviously will reduce the time available to work on this. But in the meantime, for all aptitude users: Enjoy the fixes and new features!
Notes
[1] Some visitors of the recent mini-DebConf @ Cambridge perhaps thought that the fireworks and throngs gathered were in honour of our mighty Universal Operating System, but sadly they were not. They might be, some day. In any case, the reports say that the visitors enjoyed the fireworks.

7 November 2015

Mehdi Dogguy: 3rd annual Aaron Swartz Day, November 7-8

Aaron Swartz Day is being organized across the world this weekend. There are events organized in many cities and video streams available. It is important that we remember Aaron's projects and fights. If you want to know more about Aaron Swartz, you may start by watching the excellent documentary The Internet's Own Boy: The Story of Aaron Swartz. His work was very inspirational and should not be forgotten!

8 October 2015

Petter Reinholdtsen: The Story of Aaron Swartz - Let us all weep!

The movie "The Internet's Own Boy: The Story of Aaron Swartz" is both inspiring and depressing at the same time. The work of Aaron Swartz has inspired me in my work, and I am grateful for all the improvements he was able to initiate or complete. I wish I were able to do as much good in my life as he did in his. Every minute of this 1:45-long movie is inspiring in documenting how much impact a single person can have on improving society and this world. And it is depressing in documenting how the law enforcement of the USA (and other countries) is corrupted to a point where they can push a bright kid to his death for downloading too many scientific articles. Aaron is dead. Let us all weep. The movie is also available on YouTube. I wish there were Norwegian subtitles available, so I could show it to my parents.

2 May 2015

Andreas Metzler: balance sheet snowboarding season 2014/15

A very late start into the season, with a nice ending. We had about zero snow until after Christmas, and not just down in the valley, but also at 2000m in the mountains. My first run was therefore very late, on January 1st, followed by two short excursions (8:45 - 11:20) due to too many people on January 5th and 6th. After that we had more than enough snow, which allowed me to go to Diedamskopf most of the time (basically only natural snow there, which makes better slopes). I went there almost exclusively until they closed on Easter Sunday after heavy snowfall in Holy Week, and had 5 more days thereafter in Damüls and Warth/Schröcken. Last run was on April 19. Here is the balance sheet:
                         2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13 2013/14 2014/15
number of (partial) days      25      17      29      37      30      30      25      23      30      24
Damüls                        10      10       5      10      16      23      10       4      29       9
Diedamskopf                   15       4      24      23      13       4      14      19       1      13
Warth/Schröcken                0       3       0       4       1       3       1       0       0       2
total meters of altitude  124634   74096  219936  226774  202089  203918  228588  203562  274706  224909
highscore                 10247m   8321m  12108m  11272m  11888m  10976m  13076m  13885m  12848m  13278m
# of runs                    309     189     503     551     462     449     516     468     597     530
What does not show up here is the number of times I walked (with and without snowshoes) up the mountain and used the lift down and obviously also tobogganing.

26 April 2015

Erich Schubert: Your big data toolchain is a big security risk!

This post is a follow-up to my earlier post on the "sad state of sysadmin in the age of containers". While I was drafting this post, that story got picked up by HackerNews, Reddit and Twitter, sending a lot of comments and emails my way. Surprisingly, many of the comments are supportive of my impression - I would have expected to see many more insults along the lines of "you just don't like my-favorite-tool, so you rant against using it". But a lot of people seem to share my concerns. Thanks, you surprised me!
Here is the new rant post, in the slightly different context of big data:

Everybody is doing "big data" these days. Or at least, pretending to do so to upper management. A lot of the time, there is no big data. People do more data analysis than before, and therefore stick the "big data" label on their projects to promote themselves and get a green light from management, don't they?
"Big data" is not a technical term. It is a business term, referring to any attempt to get more value out of your business by analyzing data you did not use before. From this point of view, most such projects are indeed "big data" as in "data-driven revenue generation" projects. It may be unsatisfactory to those interested in the challenges of volume and the other "V's", but this is the reality of how the term is used.
But even in those cases where the volume and complexity of the data would warrant the use of all the new tools, people overlook a major problem: the security of their systems and of their data.

The currently offered "big data technology stack" is anything but secure. Sure, companies try to earn money with security add-ons such as Kerberos authentication to sell multi-tenancy, and by offering their own version of Hadoop (their "Hadoop distribution").
The security problem is deep inside the "stack". It comes from the way this world ticks: the world of people that constantly follow the latest tool-of-the-day. In many of the projects, you no longer have mostly Linux developers that co-function as system administrators, but you see a lot of Apple iFanboys now. They live in a world where technology is outdated after half a year, so you will not need to support a product longer than that. They love reinstalling their development environment frequently - because each time, they get to change something. They also live in a world where you would simply get a new model if your machine breaks down at some point. (Note that this will not work well for your big data project, restarting it from scratch every half year...)
And while Mac users have recently been surprisingly unaffected by various attacks (and unconcerned about e.g. GoToFail, or the failure to fix the rootpipe exploit), the operating system is not considered to be very secure. Combining this with users who do not care is an explosive mixture...
This type of developer, who is good at getting a prototype website for a startup up and kicking in a short amount of time, rolling out new features every day to beta test on the live users, is what currently makes the Dotcom 2.0 bubble grow. It's also this type of user that mainstream products aim at - he has already forgotten what existed half a year ago, but is looking for the next tech product to be announced soon, and is willing to buy it as soon as it is available...
This attitude causes a problem at the very heart of the stack: in the way packages are built, upgrades (and security updates) are handled, etc. - nobody is interested in consistency or reproducibility anymore.
Someone commented on my blog that all these tools "seem to be written by 20 year old kids". He probably is right. It wouldn't be so bad if we had some experienced sysadmins with a cluebat around. People that have experience in how to build systems that can be maintained for 10 years, and securely deployed automatically, instead of relying on puppet hacks, wget and unzipping of unsigned binary code.
I know that a lot of people don't want to hear this, but:
Your Hadoop system contains unsigned binary code in a number of places, that people downloaded, uploaded and redownloaded a countless number of times. There is no guarantee that .jar ever was what people think it is.
Hadoop has a huge set of dependencies, and little of this has been seriously audited for security - and in particular not in a way that would allow you to check that your binaries are built from this audited code anyway.
There might be functionality hidden in the code that just sits there and waits for a system with a hostname somewhat like "yourcompany.com" to start looking for its command and control server to steal some key data from your company. The way your systems are built they probably do not have much of a firewall guarding against such. Much of the software may be constantly calling home, and your DevOps would not notice (nor would they care, anyway).
The mentality of "big data stacks" these days is that of Windows Shareware in the 90s. People downloading random binaries from the Internet, not adequately checked for security (ever heard of anybody running an AntiVirus on his Hadoop cluster?) and installing them everywhere.
And worse: not even keeping track of what they installed over time, or how. Because the tools change every year. But what if that developer leaves? You may never be able to get his stuff running properly again!
Fire-and-forget.
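
To make the earlier point about unsigned, unverified binaries a little more concrete, here is a minimal Python sketch of the kind of check that is usually missing: refusing to deploy an artifact whose SHA-256 digest does not match a published value. The file name and expected digest are placeholders rather than real Hadoop release data, and a real pipeline would also verify a GPG signature and obtain the checksum over a trusted channel.

import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so large .jar or .tar.gz artifacts do not
    # have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifact = "hadoop-example.jar"                    # placeholder file name
expected = "put-the-published-sha256-digest-here"  # placeholder checksum

actual = sha256_of(artifact)
if actual != expected:
    sys.exit("checksum mismatch for %s: refusing to deploy" % artifact)
print("%s: checksum OK" % artifact)
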
I predict that within the next 5 years, we will have a number of security incidents in various major companies. This is industrial espionage heaven. A lot of companies will cover it up, but some leaks will reach mass media, and there will be a major backlash against this hipster way of stringing together random components.
There is a big "Hadoop bubble" growing, that will eventually burst.
In order to get into a trustworthy state, the big data toolchain needs to:
  • Consolidate. There are too many tools for every job. There are even too many tools to manage your too many tools, and frontends for your frontends.
  • Lose weight. Every project depends on way too many other projects, each of which only contributes a tiny fragment for a very specific use case. Get rid of most dependencies!
  • Modularize. If you can't get rid of a dependency, but it is still only of interest to a small group of users, make it an optional extension module that the user only has to install if he needs this particular functionality.
  • Buildable. Make sure that everybody can build everything from scratch, without having to rely on Maven or Ivy or SBT downloading something automagically in the background. Test your builds offline, with a clean build directory, and document them! Everything must be rebuildable by any sysadmin in a reproducible way, so he can ensure a bug fix is really applied.
  • Distribute. Do not rely on binary downloads from your CDN as sole distribution channel. Instead, encourage and support alternate means of distribution, such as the proper integration in existing and trusted Linux distributions.
  • Maintain compatibility. Successful big data projects will not be fire-and-forget. Eventually, they will need to go into production and then it will be necessary to run them over years. It will be necessary to migrate them to newer, larger clusters. And you must not lose all the data while doing so.
  • Sign. Code needs to be signed, end-of-story.
  • Authenticate. All downloads need to come with a way of checking the downloaded files agree with what you uploaded.
  • Integrate. The key feature that makes Linux systems so very good at servers is the all-round integrated software management. When you tell the system to update, it covers almost all software on your system - and you have different update channels available, such as a more conservative "stable/LTS" channel, a channel that gets you the latest version after basic QA, and a channel that gives you the latest versions shortly after their upload to help with QA. So it does not matter whether the security fix is in your kernel, web server, library, auxiliary service, extension module, scripting language etc. - it will pull this fix and update you in no time.
Now you may argue that Hortonworks, Cloudera, Bigtop etc. already provide packages. Well ... they provide crap. They have something they call a "package", but it fails by any quality standards. Technically, a Wartburg is a car, but not one that would pass today's safety regulations...
For example, they only support Ubuntu 12.04 - a three year old Ubuntu is the latest version they support... Furthermore, these packages are roughly the same. Cloudera eventually handed over their efforts to "the community" (in other words, they gave up on doing it themselves, and hoped that someone else would clean up their mess); and Hortonworks HDP (and maybe Pivotal HD, too) is derived from these efforts, too. Much of what they do is offering some extra documentation and training for the packages they built using Bigtop with minimal effort.
The "spark" .deb packages of Bigtop, for example, are empty. They forgot to include the .jars in the package. Do I really need to give more examples of bad packaging decisions? All bigtop packages now depend on their own version of groovy - for a single script. Instead of rewriting this script in an already required language - or in a way that it would run on the distribution-provided groovy version - they decided to make yet another package, bigtop-groovy.
When I read about Hortonworks and IBM announcing their "Open Data Platform", I could not care less. As far as I can tell, they are only sticking their label on the existing tools anyway. Thus, I'm also not surprised that Cloudera and MapR do not join this rebranding effort - given the low divergence of Hadoop, who would need such a label anyway?
So why does this matter? Essentially, if anything does not work, you are currently toast. Say there is a bug in Hadoop that makes it fail to process your data. Your business is belly-up because of that, no data is processed anymore, you are a vegetable. Who is going to fix it? All these "distributions" are built from the same, messy, branch. There are probably only a dozen people around the world who have figured this out well enough to be able to fully build this toolchain. Apparently, none of the "Hadoop" companies are able to support a newer Ubuntu than 12.04 - are you sure they have really understood what they are selling? I have doubts. All the freelancers out there, they know how to download and use Hadoop. But can they get that business-critical bug fix into the toolchain to get you up and running again? This is much worse than with Linux distributions. They have build daemons - servers that continuously check that they can compile all the software that is there. You need to type two well-documented lines to rebuild a typical Linux package from scratch on your workstation - any experienced developer can follow the manual, and get a fix into the package. There are even people who try to recompile complete distributions with a different compiler to discover compatibility issues early that may arise in the future.
In other words, the "Hadoop distribution" they are selling you is not code they compiled themselves. It is mostly .jar files they downloaded from unsigned, unencrypted, unverified sources on the internet. They have no idea how to rebuild these parts, who compiled them, or how they were built. At most, they know this for the very last layer. You can figure out how to recompile the Hadoop .jar. But when doing so, your computer will download a lot of binaries. It will not warn you of that, and they are included in the Hadoop distributions, too.
As is, I cannot recommend trusting Hadoop with your business data.
It is probably okay to copy the data into HDFS and play with it - in particular if you keep your cluster and development machines isolated with strong firewalls - but be prepared to toss everything and restart from scratch. It's not ready yet for prime time, and as they keep on adding more and more unneeded cruft, it does not look like it will be ready anytime soon.

One more example of the immaturity of the toolchain:
The scala package from scala-lang.org cannot be cleanly installed as an upgrade to the old scala package that already exists in Ubuntu and Debian (and the distributions seem to have given up on compiling a newer Scala due to a stupid Catch-22 build process, making it very hacky to bootstrap scala and sbt compilation).
And the "upstream" package also cannot be easily fixed, because it is not built with standard packaging tools, but with an automagic sbt helper that lacks important functionality (in particular, access to the Replaces: field, or even cleaner: a way of splitting the package properly into components) instead - obviously written by someone with 0 experience in packaging for Ubuntu or Debian; and instead of using the proven tools, he decided to hack some wrapper that tries to automatically do things the wrong way...

I'm convinced that most "big data" projects will turn out to be a miserable failure. Either due to overmanagement or undermanagement, and due to lack of experience with the data, tools, and project management... Except that - of course - nobody will be willing to admit these failures. Since all these projects are political projects, they by definition must be successful, even if they never go into production, and never earn a single dollar.

13 March 2015

Dirk Eddelbuettel: Why Drat? A Guest Post by Steven Pav

Editorial Note: The following post was kindly contributed by Steven Pav.

Why Drat? After playing around with drat for a few days now, my impressions of it are best captured by Dirk's quote:
It just works.

Demo
To get some idea of what I mean by this, suppose you are a happy consumer of R packages, but want access to, say, the latest, greatest releases of my distribution package, sadists. You can simply add the following to your .Rprofile file:
drat::add("shabbychef")
After this, you instantly have access to new releases in the github/shabbychef drat store via the package tools you already know and tolerate. You can use
install.packages('sadists')
to install the sadists package from the drat store, for example. Similarly, if you issue
update.packages(ask=FALSE)
all the drat stores you have added will be checked for package updates, along with their dependencies which may well come from other repositories including CRAN.

Use cases
The most obvious use cases are:
  1. Micro releases. For package authors, this provides a means to get feedback from the early adopters, but also allows one to push small changes and bug fixes without burning through your CRAN karma (if you have any left). My personal drat store tends to be a few minor releases ahead of my CRAN releases.
  2. Local repositories. In my professional life, I write and maintain proprietary packages. Pushing package updates used to involve saving the package .tar.gz to a NAS, then calling something like R CMD INSTALL package_name_0.3.1.9001.tar.gz. This is not something I wanted to ask of my colleagues. With drat, they can instead add the following stanza to .Rprofile: drat:::addRepo('localRepo','file:///mnt/NAS/r/local/drat'), and then rely on update.packages to do the rest.
I suspect that in the future, drat might be (ab)used in the following ways:
  1. Rolling your own vanilla CRAN mirror, though I suspect there are better existing ways to accomplish this.
  2. Patching CRAN. Suppose you found a bug in a package on CRAN (inconceivable!). As it stands now, you email the maintainer, and wait for a fix. Maybe the patch is trivial, but suppose it is never delivered. Now, you can simply make the patch yourself, pick a higher revision number, and stash it in your drat store. The only downside is that eventually the package maintainer might bump their revision number without pushing a fix, and you are stuck in an arms race of version numbers.
  3. Forgoing CRAN altogether. While some package maintainers might find this attractive, I think I would prefer a single huge repository, warts and all, to a landscape of a million microrepos. Perhaps some enterprising group will set up a CRAN-like drat store on github, and accept packages by pull request (whether github CDN can or will support the traffic that CRAN does is another matter), but this seems a bit too futuristic for me now.

My wish list
In exchange for writing this blog post, I get to lobby Dirk for some features in drat:
  1. I shudder at the thought of hundreds of tiny drat stores. Perhaps there should be a way to aggregate addRepo commands in some way. This would allow curators to publish their suggested lists of repos.
  2. Drat stores are served in the gh-pages branch of a github repo. I wish there were some way to keep the index.html file in that directory reflect the packages present in the sources. Maybe this could be achieved with some canonical RMarkdown code that most people use.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

9 March 2015

Joey Hess: 7drl 2015 day 3 movement at last

Got the player moving in the map! And, got the map to be deadly in its own special way.
        HeadCrush -> do
                showMessage "You die."
                endThread
Even winning the game is implemented. The game has a beginning, a middle, and an end. I left the player movement mostly unconstrained, today, while I was working on things to do with the end of the game, since that makes it easier to play through and test them. Tomorrow, I will turn on fully constrained movement (an easy change), implement inventory (which is very connected to movement constraints in Scroll), and hope to start on the spell system too.
At this point, Scroll is 622 lines of code, including content. Of which, I notice, fully 119 are types and type classes. Only 4 days left! Eep! I'm very glad that scroll's central antagonist is already written. I don't plan to add other creatures, which will save some time.
Last night as I was drifting off to sleep, a way to implement my own threading system for my roguelike came to me. Since time in a roguelike happens in discrete ticks, as the player takes each action, normal OS threads are not suitable. And in my case, I'm doing everything in pure code anyway and certainly cannot fork off a thread for some background job. But, since I'm using continuation passing style, I can just write my own fork, that takes two continuations and combines them, causing both to be run on each tick, and recursing to handle combining the resulting continuations. It was really quite simple to implement. Typechecked on the first try even!
fork :: M NextStep -> M NextStep -> M NextStep
fork job rest = do
        jn <- job
        rn <- rest
        runthread jn rn
  where
        runthread (NextStep _ (Just contjob)) (NextStep v (Just contr)) =
                return $ NextStep v $ Just $ \i -> do
                        jn <- contjob i
                        rn <- contr i
                        runthread jn rn
        runthread (NextStep _ Nothing) (NextStep v (Just contr)) =
                return $ NextStep v (Just contr)
        runthread _ (NextStep v Nothing) =
                return $ NextStep v Nothing
endThread :: M NextStep
endThread = nextStep Nothing
background :: M NextStep -> M NextStep
background job = fork job continue
demo :: M NextStep
demo = do
    showMessage "foo"
    background $ next $ const $
        clearMessage >> endThread
That has some warts, but it's good enough for my purposes, and pretty awesome for a threading system in 66 LOC.

17 February 2015

John Goerzen: Has Linux lost its way? comments prompt a Debian developer to revisit FreeBSD after 20 years

I'll admit it. I have a soft spot for FreeBSD. FreeBSD was the first Unix I ran, and it was somewhere around 20 years ago that I did so, before I switched to Debian. Even then, I still used some of the FreeBSD Handbook to learn Linux, because Debian didn't have the great Reference that it does now. Anyhow, some comments in my recent posts ("Has modern Linux lost its way?" and "Reactions to that, and the value of simplicity"), plus a latent desire to see how ZFS fares in FreeBSD, caused me to try it out. I installed it both in VirtualBox under Debian, and on an old 64-bit Thinkpad sitting in my basement that previously ran Debian.
The results? A mixture of amazing and disappointing. I will say that I am quite glad that both exist; there is plenty of innovation happening everywhere and neat features exist everywhere, too. But I can also come right out and say that the statement that FreeBSD doesn't have issues like Linux does is false and misleading. In many cases, it's running the exact same stack. In others, it's better, but there are also others where it's worse. Perhaps this article might dispel a bit of the FUD surrounding jessie, while also showing off some of the nice things FreeBSD does. My conclusion: Both jessie and FreeBSD 10.1 are awesome Free operating systems, but both have their warts. This article is more about FreeBSD than Debian, but it will discuss a few of Debian's warts as well.
The experience
My initial reaction to FreeBSD was: wow, this feels so familiar. It reminds me of a commercial Unix, or maybe of Linux from a few years ago. A minimal, well-documented base system, everything pretty much in logical places in the filesystem, and solid memory management. I felt right at home. It was almost reassuring, even. Putting together a FreeBSD box is a lot of package installing and config file editing. The FreeBSD Handbook, describing how to install X, talks about editing this or that file for this or that feature. I like being able to learn directly how things fit together by doing this.
But then you start remembering the reasons you didn't like Linux a few years ago, or the commercial Unixes: maybe it's that programs like apache are still not as well supported, or maybe it's that the default vi has this tendency to corrupt the terminal periodically, or perhaps it's that root's default shell is csh. Or perhaps it's that I have to do a lot of package installing and config file editing. It is not quite the learning experience it once was, either; now there are things like "paste this XML file into some obscure polkit location to make your mouse work" or something.
Overall, there are some areas where FreeBSD kills it in a way no other OS does. It is unquestionably awesome in several areas. But there are a whole bunch of areas where it's about 80% as good as Linux, a number of areas (even polkit, dbus, and hal) where it's using the exact same stack Linux is (so all these comments about FreeBSD being so differently put together strike me as hollow), and frankly some areas that need a lot of work and make it hard to manage systems in a secure and stable way.
The amazing
Let's get this out there: I've used ZFS too much to use any OS that doesn't support it or something like it. Right now, I'm not aware of anything like ZFS that is generally stable and doesn't cost a fortune, so pretty much: if your Unix doesn't do ZFS, I'm not interested. (btrfs isn't there yet, but will be awesome when it is.) That's why I picked FreeBSD for this, rather than NetBSD or OpenBSD.
ZFS on FreeBSD is simply awesome. They have integrated it extremely well. The installer supports root on zfs, even encrypted root on zfs (though neither is a default). top on a FreeBSD system shows a line of ZFS ARC (cache) stats right alongside everything else. The ZFS defaults for maximum cache size, readahead, etc. auto-tune themselves at boot (unless overridden) based on the amount of RAM in a system and the system type. Seriously, these folks have thought of everything and it just reeks of solid. I haven't seen ZFS this well integrated outside the Solaris-type OSs.
I have been using ZFSOnLinux for some time now, but it is just not as mature as ZFS on FreeBSD. ZoL, for instance, still has some memory tuning issues, and is not really suggested for 32-bit machines. FreeBSD just nails it. ZFS on FreeBSD even supports TRIM, which is not available in ZoL and I think fairly unique even among OpenZFS platforms. It also supports delegated administration of the filesystem, both to users and to jails on the system, seemingly very similar to Solaris zones. FreeBSD also supports beadm, which is like a similar tool on Solaris. This lets you basically use ZFS snapshots to make lightweight "boot environments", so you can select which to boot into. This is useful, say, before doing upgrades.
Then there are jails. Linux has tried so hard to get this right, and fallen on its face so many times, a person just wants to take pity sometimes. We've had linux-vserver, openvz, lxc, and still none of them match what FreeBSD jails have done for a long time. Linux's current jail-du-jour is LXC, though it is extremely difficult to configure in a secure way. Even its author comments that "you won't hear any of the LXC maintainers tell you that LXC is secure" and that it pretty much requires AppArmor profiles to achieve reasonable security. These are still rather in flux, as I found out last time I tried LXC a few months ago. My confidence in LXC being as secure as, say, KVM or FreeBSD is simply very low. FreeBSD's jails are simple and well-documented where LXC is complex and hard to figure out. Their security is fairly transparent and easy to control and they just work well. I do think LXC is moving in the right direction and might even get there in a couple years, but I am quite skeptical that even Docker is getting the security completely right.
The simply different
People have been throwing around the word "distribution" with respect to FreeBSD, PC-BSD, etc. in recent years. There is an analogy there, but it's not perfect. In the Linux ecosystem, there is a kernel project, a libc project, a coreutils project, a udev project, a systemd/sysvinit/whatever project, etc. You get the idea. In FreeBSD, there is a base system project. This one project covers the kernel and the base userland. Some of what they use in the base system is code pulled in from elsewhere but maintained in their tree (ssh), some is completely homegrown (kernel), etc. But in the end, they have a nicely-integrated base system that always gets upgraded in sync. In the Linux world, the distribution makers are responsible for integrating the bits from everywhere into a coherent whole.
FreeBSD is something of a toolkit to build up your system. Gentoo might be an analogy on the Linux side. On the other end of the spectrum, Ubuntu is a "just install it and it works, tweak later" sort of setup. Debian straddles the middle ground, offering both approaches in many cases. There are pros and cons to each approach. Generally, I don't think either one is better.
They are just different.
The not-quite-there
I said that there are a lot of things in FreeBSD that are about 80% of where Linux is. Let me touch on them here.
Its laptop support leaves something to be desired. I installed it on a few-years-old Thinkpad, basically the best possible platform for working suspend in a Free OS. It has worked perfectly out of the box in Debian for years. In FreeBSD, suspend only works if it's in text mode. If X is running, the video gets corrupted and the system hangs. I have not tried to debug it further, but would also note that suspend on closed lid is not automatic in FreeBSD; the somewhat obscure instructions tell you what policykit pkla file to edit to make suspend work in XFCE. (Incidentally, it also says what policykit file to edit to make the shutdown/restart options work).
Its storage subsystem also has some surprising misses. Its rough version of LVM, LUKS, and md-raid is called GEOM. GEOM, however, supports only RAID0, RAID1, and RAID3. It does not support RAID5 or RAID6 in software RAID configurations! Linux's md-raid, by comparison, supports RAID0, RAID1, RAID4, RAID5, RAID6, etc. There seems to be a highly experimental RAID5 patchset floating around for many years, but it is certainly not integrated into the latest release kernel. The current documentation makes no mention of RAID5, although it seems that a dated logical volume manager supported it. In any case, RAID5 does not seem to be well-supported in software like it is in Linux. ZFS does have its raidz1 level, which is roughly the same as RAID5. However, that requires full use of ZFS. ZFS also does not support some common operations, like adding a single disk to an existing RAID5 group (which is possible with md-raid and many other implementations). This is a ZFS limitation on all platforms.
FreeBSD's filesystem support is rather a miss. They once had support for Linux ext* filesystems using the actual Linux code, but ripped it out because it was in GPL and rewrote it so it had a BSD license. The resulting driver really only works with ext2 filesystems, as it doesn't work with ext3/ext4 in many situations. Frankly I don't see why they bothered; they now have something that is BSD-licensed but only works with a filesystem so old nobody uses it anymore. There are only two FreeBSD filesystems that are really usable: UFS2 and ZFS.
Virtualization under FreeBSD is also not all that present. Although it does support the VirtualBox Open Source Edition, this is not really a full-featured or fast enough virtualization environment for a server. Its other option is bhyve, which looks to be something of a Xen clone. bhyve, however, does not support Windows guests, and requires some hoops to even boot Linux guest installers. It will be several years at least before it reaches feature-parity with where KVM is today, I suspect. One can run FreeBSD as a guest under a number of different virtualization systems, but their instructions for making the mouse work best under VirtualBox did not work. There may have been some X.Org reshuffle in FreeBSD that wasn't taken into account.
The installer can be nice and fast in some situations, but one wonders a little bit about QA. I had it lock up on me twice. Turns out this is a known bug reported 2 months ago with no activity, in which the installer attempts to use a package manager that it hasn't set up yet to install optional docs. I guess the devs aren't installing the docs in testing.
There is nothing like Dropbox for FreeBSD.
Apparently this is because FreeBSD has nothing like Linux's inotify. The Linux Dropbox client does not work in FreeBSD's Linux mode. There are sketchy reports of people getting an OwnCloud client to work, but in something more akin to rsync rather than instant-sync mode, if they get it working at all. Some run Dropbox under wine, apparently.

The desktop environments tend to need a lot more configuration work to get them going than on Linux. There's a lot of editing of polkit, hal, dbus, etc. config files mentioned in various places. So, not only does FreeBSD use a lot of the same components that cause confusion in Linux, it doesn't really configure them for you as much out of the box.

FreeBSD doesn't support as many platforms as Linux. FreeBSD has only two platforms that are fully supported: i386 and amd64. You'll see people refer to a list of other platforms that are "supported", but those don't have security support, official releases, or even built packages. They include arm, ia64, powerpc, and sparc64.

The bad: package management

Roughly 20 years ago, this was one of the things that pulled me to Debian. Perhaps I am spoiled from running the distribution that has been the gold standard for package management for so long, but I find FreeBSD's package management (even pkg-ng in 10.1-RELEASE) to be lacking in a number of important ways.

To start with, FreeBSD actually has two different package management systems: one for the base system, and one for what they call the ports/packages collection ("ports" being the way to install from source, and "packages" being the way to install from binaries, but both relating to the same tree). For the base system, there is freebsd-update, which can install patches and major upgrades. It also has a cron option to automate this. Sadly, it has no way of automatically indicating to a calling script whether a reboot is necessary.

freebsd-update really manages fewer than a dozen packages though. The rest are managed by pkg. And pkg, it turns out, has a number of issues.

The biggest: it can take a week to get security updates. The FreeBSD handbook explains pkg audit -F, which will look at your installed packages (but NOT the ones in the base system) and alert you to packages that need to be updated, similar to a stripped-down version of Debian's debsecan. I discovered this myself, when pkg audit -F showed a vulnerability in xorg, but pkg upgrade showed my system was up to date. It is not documented in the Handbook, but people on the mailing list explained it to me. There are workarounds, but they can be laborious.

If that's not bad enough, FreeBSD has no way to automatically install security patches for things in the packages collection. Debian has several (unattended-upgrades, cron-apt, etc.). There is "pkg upgrade", but it upgrades everything on the system, which may be quite a bit more than you want to be upgraded. So: if you want to run Apache with PHP, and want it to just always apply security patches, FreeBSD packages are not up to the job like Debian's are.
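A minimal sketch of the sort of cron-able check that is missing here, in Python. It assumes pkg audit exits non-zero when it finds vulnerable packages (worth verifying on your release), and it only covers the ports/packages side, not the base system managed by freebsd-update:

#!/usr/bin/env python3
"""Minimal sketch: a cron-able FreeBSD "are we vulnerable?" check.

Assumes pkg(8) is installed and that 'pkg audit' exits non-zero when
it finds vulnerable packages; covers only the ports/packages
collection, not the base system.
"""
import subprocess
import sys

def main() -> int:
    # -F fetches a fresh copy of the vulnerability database first.
    result = subprocess.run(["pkg", "audit", "-F"],
                            capture_output=True, text=True)
    if result.returncode == 0:
        print("pkg audit: no known vulnerabilities in installed packages")
        return 0
    # Anything non-zero: report the findings so a human (or a mail hook)
    # can decide what to upgrade selectively.
    sys.stderr.write(result.stdout)
    sys.stderr.write(result.stderr)
    return 1

if __name__ == "__main__":
    sys.exit(main())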
The pkg tool doesn't have very good error handling. In fact, its error handling seems to be nonexistent at times. I noticed that some packages had failures during install time, but pkg ignored them and marked the packages as correctly installed. I only noticed there was a problem because I happened to glance at the screen at the right moment during messages about hundreds of packages. In Debian, by contrast, if there are any failures, at the end of the run you get a nice report of which packages failed, and an exit status to use in scripts.

It also has another issue that Debian resolved about a decade ago: package scripts displaying messages that are important for the administrator, but showing so many of them that they scroll off the screen and are never seen. I submitted a bug report for this one also.

Some of these things just make me question the design of pkg. If I can't trust it to accurately report whether the installation succeeded, or to show me the important info I need to see, then to what extent can I trust it?

Then there is the question of testing of the ports/packages. It seems that, automated tests aside, basically everyone is running off the master branch of the ports/packages. That's like running Debian unstable on your servers. I am distinctly uncomfortable with this notion, though it seems FreeBSD people report it mostly works well.

There are some other issues, too. FreeBSD ports make no distinction between development and runtime files like Debian's packages do. So, just by virtue of wanting to run a graphical desktop, you get all of the static libraries, include files, build scripts, etc. for X.Org installed.

For a project as concerned about licensing as FreeBSD, the packages collection does not have separate sections like Debian's main, contrib, and non-free. It's all in one big pot: BSD-licensed, GPL-licensed, and proprietary-without-source licenses. There is /usr/local/share/licenses where you can look up a license for each package, but there is no way with FreeBSD, like there is with Debian, to say "never even show me packages that aren't DFSG-free". This is useful, for instance, when running in a company to make sure you never install packages that are for personal use only or something.

The bad: ABI stability

I'm used to being able to run binaries I compiled years ago on a modern system. This is generally possible in Linux, assuming you have the correct shared libraries available. In FreeBSD, this is explicitly NOT possible. After every major version upgrade, you must reinstall or recompile every binary on your system. This is not necessarily a showstopper for me, but it is a hassle for a lot of people.

Update 2015-02-17: Some people in the comments are pointing out compat packages in the ports that may help with this situation. My comment was based on advice in the FreeBSD Handbook stating "After a major version upgrade, all installed packages and ports need to be upgraded". I have not directly tried this, so if the Handbook is overstating the need, then this point may be in error.

Conclusions

As I said above, I found little validation for the comments that the Debian ecosystem is noticeably worse than the FreeBSD one. Debian has its warts too, particularly with keeping software up to date. You can see that the two projects are designed around a different passion: FreeBSD's around the base system, and Debian's around an integrated whole system. It would be wrong to say that either of those is always better. FreeBSD's approach clearly produces some leading features, especially jails and ZFS integration. Yet Debian's approach also produces some leading features in the way of package management and security maintainability beyond the small base.

My criticism of excessive complexity in the polkit/cgmanager/dbus area still stands.
But to those people commenting that FreeBSD hasn't lost its way like Linux has, I would point out that FreeBSD mostly uses these same components also, and FreeBSD has excessive complexity in its ports/package system and system management tools. I think it's a draw. You pick the best for your use case.

If you're looking for a platform to run a single custom app, then perhaps all of the Debian package management benefits don't apply to you (you may not even need FreeBSD's packages, or just a few). The FreeBSD ZFS support or jails may well appeal. If you're looking to run a desktop environment, or a server with some application that needs a ton of PHP, Python, Perl, or C libraries, then Debian's package management and security handling may well be attractive.

I am disappointed that Debian GNU/kFreeBSD will not be a release architecture in jessie. That project had the promise to provide a "best of both worlds" for those that want jails or tight ZFS integration.

13 November 2014

Joey Hess: on leaving

I left Debian. I don't really have a lot to say about why, but I do want to clear one thing up right away. It's not about systemd. As far as systemd goes, I agree with my friend John Goerzen:
I promise you 18 years from now, it will not matter what init Debian chose in 2014. It will probably barely matter in 3 years.
read the rest And with Jonathan Corbet:
However things turn out, if it becomes clear that there is a better solution than systemd available, we will be able to move to it.
read the rest I have no problem with trying out a piece of Free Software that might have abrasive authors, all kinds of technical warts, a debatable design, scope creep, etc. None of that stopped me from giving Linux a try in 1995, and I'm glad I jumped in with both feet.

It's important to be unafraid to make a decision, try it out, and if it doesn't work, be unafraid to iterate, rethink, or throw a bad choice out. That's how progress happens. Free Software empowers us to do this.

Debian used to be a lot better at that than it is now. This seems to have less to do with the size of the project, and more to do with the project having aged, ossified, and become comfortable with increasing layers of complexity around how it makes decisions. To the point that I no longer feel I can understand the decision-making process at all ... or at least, that I'd rather be spending those scarce brain cycles on understanding something equally hard but more useful, like category theory.

It's been a long time since Debian was my main focus; I feel much more useful when I'm working in a small nimble project, making fast and loose decisions and iterating on them. Recent events brought it to a head, but this is not a new feeling. I've been less and less involved in Debian since 2007, when I dropped maintaining any packages I wasn't the upstream author of, and took a year of mostly ignoring the larger project.

Now I've made the shift from being a Debian developer to being an upstream author of stuff in Debian (and other distros). It seems best to make a clean break rather than hang around and risk being sucked back in.

My mailbox has been amazing over the past week by the way. I've heard from so many friends, and it's been very sad but also beautiful.

30 September 2014

Gunnar Wolf: Diego Gómez: Imprisoned for sharing

I got word via the Electronic Frontier Foundation about an act of injustice happening to a person for doing... not only what I do day to day, but what I promote and believe to be right: sharing academic articles.

Diego is a Colombian, working towards his Master's degree in conservation and biodiversity in Costa Rica. He is now facing up to eight years' imprisonment for... sharing a scholarly article he did not author on Scribd.

Many people lack the knowledge and skills to properly set up a venue to share their articles with people they know. Many people will hope for the best and expect academic publishers to be fundamentally good, not to send legal threats just for the simple, noncommercial act of sharing knowledge. Sharing knowledge is fundamental for science to grow, for knowledge to rise. Besides, most scholarly studies are funded by public money, and, as the saying goes, they should benefit the public. And the public is everybody, is all of us.

And yes, if this sounds in any way like what drove Aaron Swartz to his sad suicide early this year... It is exactly the same thing. Thankfully (although, sadly, after the sad fact), thousands of people strongly stood on Aaron's side on that demand.

Please sign the EFF petition to help Diego, share this, and try to spread the word on the real-world need for Open Access mandates for academics! Some links with further information:

23 May 2014

Mike Gabriel: X2Go on FLOSS Weekly

On May 21st 2014, the two Mikes (Gabriel and DePaulo) from the X2Go core developer team were interviewed about X2Go by the famous Randal L. Schwartz (merlyn) and the equally famous Randi Harper (freebsdgirl) on the FLOSS Weekly Netcast [1]. If you're having trouble watching the embedded video on that page, try one of the alternatives below: HD Video [2]
SD Video, large [3]
SD Video, small [4]
Audio only [5] light+love,
Mike

[1] http://twit.tv/floss295

13 April 2014

Andreas Metzler: balance sheet snowboarding season 2013/14

Little snow, but an above-average season. The macro weather situation was very stable this year: very high snowfall in Austria's south (Eastern Tyrol and Carinthia), and long periods of warm and sunny weather with little precipitation on the northern side of the Alps (i.e. us). This had me going snowboarding a lot, but almost exclusively in Damüls, since it is characterized by a) grassy terrain (no stones) and b) huge numbers of snow cannons.

I started early (December 7) with another 6 days on piste in December. If there had been more snow the season would have been a long one, too. Season's end depends on the timing of Easter (because of the holidays), which would have been late. However, I again stopped rather early; the last day was March 30.

In addition to the days listed below I had an early season's opening at the glacier in Pitztal. I attended a Pureboarding event in November (21st to 23rd). Looking back at the season, I am not quite satisfied with my progress; I just have not managed to implement and practise the technique I should have learned there. It is next to impossible when the slopes are full, and when they aren't one likes to give it a run. ;-)

Here is the balance sheet:
                           2005/06  2006/07  2007/08  2008/09  2009/10  2010/11  2011/12  2012/13  2013/14
number of (partial) days        25       17       29       37       30       30       25       23       30
Damüls                          10       10        5       10       16       23       10        4       29
Diedamskopf                     15        4       24       23       13        4       14       19        1
Warth/Schröcken                  0        3        0        4        1        3        1        0        0
total meters of altitude    124634    74096   219936   226774   202089   203918   228588   203562   274706
highscore                   10247m    8321m   12108m   11272m   11888m   10976m   13076m   13885m   12848m
# of runs                      309      189      503      551      462      449      516      468      597

12 April 2014

Mario Lang: Emacs Chess

Between 2001 and 2004, John Wiegley wrote emacs-chess, a rather complete chess library for Emacs. I found it around 2004 and was immediately hooked. Why? Because Emacs is configurable, and I was hoping that I could customize the chessboard display much more than with any other console-based chess program I have ever seen. And I was right. One of the four chessboard display types is exactly what I was looking for, chess-plain.el:
  
8 tSlDjLsT 
7 XxXxXxXx 
6         
5         
4         
3         
2 pPpPpPpP 
1 RnBqKbNr 
  
  abcdefgh
This might look confusing at first, but I have to admit that I grew rather fond of this way of displaying chess positions as ASCII diagrams. In this configuration, initial letters for (mostly) German chess piece names are used for the black pieces, and English chess piece names are used for the white pieces. Uppercase is used to indicate if a piece is on a black square and braille dot 7 is used to indicate an empty black square. chess-plain is completely configurable though, so you can have more classic diagrams like this as well:
  
8 rnbqkbnr 
7 pppppppp 
6  + + + + 
5 + + + +  
4  + + + + 
3 + + + +  
2 PPPPPPPP 
1 RNBQKBNR 
  
  abcdefgh
Here, upper-case letters indicate white pieces, and lower-case letters black pieces. Black squares are indicated with a plus sign. However, as with many Free Software projects, Emacs Chess was rather dormant for the last 10 years. For some reason that I cannot even remember right now, my interest in Emacs Chess was reignited roughly 5 weeks ago.
Universal Chess Interface

It all began when I did a casual apt-cache search for chess engines, only to discover that a number of free chess engines had been developed and packaged for Debian in the last 10 years. In 2004 there was basically only GNUChess, Phalanx and Crafty. These days, a number of UCI-based chess engines have been added, like Stockfish, Glaurung, Fruit or Toga2. So I started by learning how the new chess engine communication protocol, UCI, actually works. After a bit of playing around, I had a basic engine module for Emacs Chess that could play against Stockfish. After I had developed a thin layer for all things that UCI engines have in common (chess-uci.el), it was actually very easy to implement support for Stockfish, Glaurung and Fruit in Emacs Chess. Good, three new free engines supported.
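The protocol itself is a simple line-based exchange over the engine's stdin and stdout. A minimal sketch in Python (not the emacs-chess code; it assumes a UCI engine binary such as "stockfish" is on $PATH):

#!/usr/bin/env python3
"""Minimal sketch of a UCI handshake and search request.

Assumes a UCI engine binary (here "stockfish") is on $PATH; any UCI
engine such as Fruit or Glaurung speaks the same protocol.
"""
import subprocess

def uci_bestmove(engine="stockfish", moves="e2e4 e7e5", movetime_ms=1000):
    p = subprocess.Popen([engine], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, text=True, bufsize=1)

    def send(cmd):
        p.stdin.write(cmd + "\n")
        p.stdin.flush()

    def wait_for(token):
        # Read engine output line by line until a line starts with `token`.
        for line in p.stdout:
            if line.startswith(token):
                return line.strip()

    send("uci")                 # engine answers with id/option lines, then "uciok"
    wait_for("uciok")
    send("isready")
    wait_for("readyok")
    send("position startpos moves " + moves)
    send(f"go movetime {movetime_ms}")   # engine emits "info ..." lines, then "bestmove ..."
    best = wait_for("bestmove")
    send("quit")
    p.wait()
    return best.split()[1]

if __name__ == "__main__":
    print(uci_bestmove())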
Opening books

When I learnt about the UCI protocol, I discovered that most UCI engines these days do not do their own book handling. In fact, it is sort of expected from the GUI to do opening book moves. And here one thing led to another. There is quite good documentation about the Polyglot chess opening book binary format on the net. And since I absolutely love to write binary data decoders in Emacs Lisp (don't ask, I don't know why) I immediately started to write Polyglot book handling code in Emacs Lisp, see chess-polyglot.el. It turns out that it is relatively simple and actually performs very well. Even a lookup in an opening book bigger than 100 megabytes happens more or less instantaneously, so you do not notice the time required to find moves in an opening book. Binary search is just great. And binary searching binary data in Emacs Lisp is really fun :-).

So Emacs Chess can now load and use Polyglot opening book files. I integrated this functionality into the common UCI engine module, so Emacs Chess, when fed with a Polyglot opening book, can now choose moves from that book instead of consulting the engine to calculate a move. Very neat! Note that you can create your own opening books from PGN collections, or just download a Polyglot book made by someone else.
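Why binary search works so well here: a Polyglot book is just a sorted array of fixed-size 16-byte big-endian records (position key, move, weight, learn value). A minimal lookup sketch in Python, assuming that standard layout and that the key (a Zobrist hash of the position) is computed elsewhere:

#!/usr/bin/env python3
"""Minimal sketch of a Polyglot opening-book lookup.

Assumes the usual Polyglot layout: a file of 16-byte big-endian records
(uint64 Zobrist key, uint16 move, uint16 weight, uint32 learn), sorted
by key. Computing the key for a position is assumed to happen elsewhere.
"""
import struct

ENTRY = struct.Struct(">QHHI")   # key, move, weight, learn
SIZE = ENTRY.size                # 16 bytes per record

def book_moves(path, key):
    """Return (raw_move, weight) pairs stored for `key`, best weight first."""
    with open(path, "rb") as f:
        f.seek(0, 2)
        n = f.tell() // SIZE

        def key_at(i):
            f.seek(i * SIZE)
            return ENTRY.unpack(f.read(SIZE))[0]

        # Binary search for the first record whose key is >= the wanted key.
        lo, hi = 0, n
        while lo < hi:
            mid = (lo + hi) // 2
            if key_at(mid) < key:
                lo = mid + 1
            else:
                hi = mid

        out = []
        i = lo
        while i < n:
            f.seek(i * SIZE)
            k, move, weight, _learn = ENTRY.unpack(f.read(SIZE))
            if k != key:
                break
            out.append((move, weight))
            i += 1
        return sorted(out, key=lambda mw: -mw[1])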
Internet Chess Servers

Later I reworked the internet chess server backend of Emacs Chess a bit (sought games are now displayed with tabulated-list-mode), and found and fixed some (rather unexpected) bugs in the way legal moves are calculated (if we take the opponent's rook, their ability to castle needs to be cleared). Emacs Chess supports two of the most well-known internet chess servers: the Free Internet Chess Server (FICS) and chessclub.com (ICC).
A Chess engine written in Emacs Lisp

And then I rediscovered my own little chess engine implemented in Emacs Lisp. I wrote it back in 2004, but never really finished it. After I finally found a small (but important) bug in the static position evaluation function, I was motivated enough to fix my native Emacs Lisp chess engine. I implemented quiescence search so that capture combinations are actually evaluated and not just pruned at a hard limit. This made the engine quite a bit slower, but it actually results in relatively good play. Since the thinking time went up, I implemented a small progress bar so one can actually watch what the engine is doing right now. chess-ai.el is a very small Lisp implementation of a chess engine. Static evaluation, alpha-beta and quiescence search included. It covers the basics, so to speak. So if you don't have any of the above mentioned external engines installed, you can even play a game of Chess against Emacs directly.
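For readers who have not met it, the quiescence idea is small: when the nominal search depth runs out, keep searching capture sequences only, so the static evaluation happens in a quiet position rather than in the middle of an exchange. A language-neutral sketch in Python (the evaluate, legal_moves, captures and make names are hypothetical placeholders, not functions from chess-ai.el):

"""Minimal sketch of negamax alpha-beta with a quiescence extension.

`evaluate`, `legal_moves`, `captures` and `make` are placeholders for a
real position interface; `evaluate` is assumed to score from the side
to move's point of view, as negamax requires. Mate/stalemate handling
is omitted to keep the sketch short.
"""

def quiesce(pos, alpha, beta, evaluate, captures, make):
    # "Stand pat": the side to move may decline all remaining captures.
    score = evaluate(pos)
    if score >= beta:
        return beta
    alpha = max(alpha, score)
    # Only capture sequences are searched further.
    for move in captures(pos):
        score = -quiesce(make(pos, move), -beta, -alpha, evaluate, captures, make)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def alphabeta(pos, depth, alpha, beta, evaluate, legal_moves, captures, make):
    if depth == 0:
        return quiesce(pos, alpha, beta, evaluate, captures, make)
    for move in legal_moves(pos):
        score = -alphabeta(make(pos, move), depth - 1, -beta, -alpha,
                           evaluate, legal_moves, captures, make)
        if score >= beta:
            return beta          # fail-hard beta cut-off
        alpha = max(alpha, score)
    return alpha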
Other features

The feature list of Emacs Chess is rather impressive. You can not only play a game of Chess against an engine, you can also play against another human (either via ICS or directly from Emacs to Emacs), view and edit PGN files, solve chess puzzles, and much much more. Emacs Chess is really a universal chess interface for Emacs.
Emacs-chess 2.0

In 2004, John and I were already planning to get emacs-chess 2.0 out the door. Well, 10 years have passed, and both of us have forgotten about this wonderful codebase. I am trying to change this. I am in development/maintenance mode for emacs-chess again. John has also promised to find a bit of time to work on a final 2.0 release. If you are an Emacs user who knows and likes to play Chess, please give emacs-chess a whirl. If you find any problems, please file an issue on GitHub, or better yet, send us a pull request. There is an emacs-chess Debian package which has not been updated in a while. If you want to test the new code, be sure to grab it from GitHub directly. Once we reach a state that at least feels like stable, I am going to update the Debian package of course.

31 January 2014

Andrew Pollock: [life] Day 4, Brazilian jiu jitsu, Science Friday and the Lunar New Year

I want Zoe to do one "extra curricular" activity per term this year. Something dance-related, something gymnastics-related and maybe some other form of sport (I'm thinking soccer). My girlfriend and I were wandering through Westfield Carindale on Saturday, and we happened upon a guy from Infinity Martial Arts touting his wares. I thought I'd suss it out, and they had an introductory offer where the sign-up fee and uniform fee were significantly reduced, and they had a pretty flexible timetable. They were just starting up their East Brisbane location, and the close proximity to home along with the reduced price sealed the deal.

Long-term, I'd like for Zoe to learn Tae Kwon Do, for self-defense, but BJJ seemed as good as anything to get her introduced to the idea of martial arts. The class for 2-4 year olds was billed as "Fun and Fitness 4 Kids" so it's really a combination of listening to the instructor, some basic gymnastics-style stuff and a little bit of martial arts.

We biked over this morning (going up Hawthorne Road is a slog) and got there in about 15 minutes via the direct route. It's in the upstairs of a gym in the middle of an industrial area, but it was pretty easily accessible by bike. They're still waiting on some of the equipment, so the space was a little spartan. It was just Zoe and me and a mother of two not-quite-2-year-old twin girls taking a trial class.

First up, we got her uniform, and she looked so cute. There were pants and a jacket and a belt. I'm going to have to video the instructor tying up the belt next week so I can learn how to do it the right way.

The class started with the kids standing on these flat coloured circles ("mushrooms") and effectively playing "Simon Says" ("instructor says") without being caught out. It was a pretty sneaky way of doing a bunch of warm up exercises like rotating the knees and ankles. Zoe did very well, but there were a few that she just point blank refused to do.

Next the instructor set up a bunch of "stations" around the room. The first station involved me crouching in a fetal position with a football, and Zoe had to try and tip me over to get the football. That was a load of fun. The next station was a few steps and a foam-filled vinyl ramp, and Zoe just had to do a somersault down that. The next station was pretty much the same but taller, and Zoe had to do a "sausage roll" on her side down that. The next station was just a small exercise ball and Zoe had to do some "donkey kicks" on it. The final station involved me waving a couple of cut-off pool noodles at arm's length, and Zoe had to run in covering her head and give me a bear hug. We did a few rotations of these stations. It was heaps of fun.

Next, the instructor got a whole bunch of ball pit balls of different colours, scattered them over the floor, and put a basket at each end of the room. The kids were then instructed to retrieve specific colours as fast as possible. Zoe started out trying to get as many as possible in her arms before returning to the basket, but the idea was to do it one ball at a time. A lot of running back and forth.

Finally, we did some actual BJJ (I think). It was called the "sleeping crocodile hold" or something like that. I had to lie on my back, and Zoe had to sneak up to me from my side and grapple me with one arm behind my head and the other around my waist and a knee in my side.

I have no interest in Zoe learning mixed martial arts, but this class was so much fun.
I was feeling a bit tired this morning before we headed out, but by the end of it I was so pumped. It was just the right combination of daddy/daughter rough and tumble, with a bit of gymnastics and following instruction. I'm pretty certain Zoe enjoyed it. I liked that the instructor stopped for a water break between each activity, so the kids were kept well hydrated throughout.

We took the "scenic route" home, because we had no particular time constraints. It was more like 25 minutes and involved the Norman Park Greenway. I was so glad we went that way. It was a beautiful ride that I didn't know existed. Very indirect, it involved going through Woolloongabba, Coorparoo and Norman Park, around the back of Coorparoo State High School along the side of Norman Creek. It was semi-wetland conditions. The only part that was a bit annoying was where Norman Avenue met Wynnum Road. It was quite steep and the green light didn't last very long.

I had wanted to do story time at Bulimba Library at 10:30am on Fridays, but I'd rather bike home via the Norman Park Greenway instead, because it's a nice ride. I've since decided that I'll just use the story time at the library during wet weather, when we'd be driving to BJJ anyway, and be able to make it to the library in time after class. I've also got to figure out where I'm going to fit doing some Science into the schedule. Fridays are going to be busy, I think.

I managed to get Zoe down for a nap by a bit after 12:30pm today. She was pretty knackered after the class (as was I from biking home) so I let her watch a bit of TV while I prepared lunch. She was funny; she saw how sweaty I was when we got home, and suggested I take a shower while she watched some TV.

I was a bit unprepared for my first Science Friday, though. I'd been considering going to the Sir Thomas Brisbane Planetarium after BJJ, but I discovered it's closed until February 7 for maintenance and an upgrade, so that cunning plan was thwarted. I used her nap time to order 365 Science Experiments and 50 Dangerous Things (You Should Let Your Children Do). I should get plenty of inspiration out of those two.

After she woke up, we did the old "vinegar and sodium bicarbonate" trick as our science experiment. I've finally got a use for my Google Labs Founders' Award lab coat. I need to try and find some child-sized safety glasses. The adult ones barely stay on her little nose.

After that, we went for a walk to our local toy store to see if they sold child-sized safety glasses (they didn't), and then Zoe watched a little bit of TV, and then we walked to the CityCat to go to Teneriffe to catch a bus to Chinatown. I couldn't have timed it better if I tried. Just as we got to Chinatown, they started doing a thing with the Chinese dragons, and I hoisted Zoe up on my shoulders so she could watch. I wasn't sure how she was going to take it, with all the noise from the drums and the dragons themselves, but she was enthralled.

We watched other acts and then my girlfriend joined us, and we went to a Chinese restaurant for dinner. I bought Zoe these training chopsticks a while ago, and she's taken to them like a duck to water. I brought them with us so she could use them at the restaurant, and she ate the biggest dinner I've ever seen her eat. After dinner, we caught the tail end of a procession and had some ice cream. We were starting to leave, and there was another dragon, and Zoe was brave enough to go and touch it on her own, for good luck. We then made our way back to the bus.
By the time the bus arrived and got us back to Teneriffe, and a CityCat finally arrived, it was very late, but Zoe was pretty good the whole time. It was probably the latest night she's ever had with me while we've been out on the go, and she only got a bit ratty once we were home. It didn't help that she'd forgotten that she'd put Cowie in a cupboard this morning and we burned some more time tracking her down. Today was a fantastic (and very full) day. I enjoyed it a lot, and I think Zoe did too. Fortunately, Fridays won't always be this full on.
