Search Results: "hector"

31 December 2021

Chris Lamb: Favourite books of 2021: Fiction

In my two most recent posts, I listed the memoirs and biographies and followed this up with the non-fiction I enjoyed the most in 2021. I'll leave my roundup of 'classic' fiction until tomorrow, but today I'll be going over my favourite fiction. Books that just miss the cut here include Kingsley Amis' comic Lucky Jim, Cormac McCarthy's The Road (although see below for McCarthy's Blood Meridian) and the Complete Adventures of Tintin by Hergé, the latter forming an inadvertently incisive portrait of the first half of the 20th century. As ever, there were a handful of books that didn't live up to prior expectations. Despite all of the hype, Emily St. John Mandel's post-pandemic dystopia Station Eleven didn't match her superb The Glass Hotel (one of my favourite books of 2020). The same could be said of John le Carré's The Spy Who Came in from the Cold, which felt significantly shallower compared to Tinker Tailor Soldier Spy, again a favourite of last year. The strangest book (and the most difficult to classify at all) was undoubtedly Patrick Süskind's Perfume: The Story of a Murderer, and the novel I disliked the most was almost certainly Beartown by Fredrik Backman. Two other mild disappointments were actually film adaptations. Specifically, the original source for Vertigo by Pierre Boileau and Thomas Narcejac didn't match Alfred Hitchcock's 1958 masterpiece, and likewise James Sallis' Drive fell short of the superb 2011 neon-noir it became in the hands of director Nicolas Winding Refn. These two films thus defy the usual trend and are 'better than the book', but that's a post for another day.

A Wizard of Earthsea (1971) Ursula K. Le Guin

How did it come to be that Harry Potter is the publishing sensation of the century, yet Ursula K. Le Guin's Earthsea is only a popular cult novel? Indeed, the comparisons and unintentional intertextuality with Harry Potter are entirely unavoidable when reading this book, and, in almost every respect, Ursula K. Le Guin's universe comes out the victor. In particular, the wizarding world that Le Guin portrays feels a lot more generous and humble than the class-ridden world of Hogwarts School of Witchcraft and Wizardry. Just to take one example from many, in Earthsea, magic turns out to be nurtured in a bottom-up manner within small village communities, in almost complete contrast to J. K. Rowling's concept of benevolent government departments and NGO-like institutions, which now seems far too New Labour for me. Indeed, imagine an entire world imbued with the kindly benevolence of Dumbledore, and you've got some of the moral palette of Earthsea. The gently moralising tone that runs through A Wizard of Earthsea may put some people off:
Vetch had been three years at the School and soon would be made Sorcerer; he thought no more of performing the lesser arts of magic than a bird thinks of flying. Yet a greater, unlearned skill he possessed, which was the art of kindness.
Still, these parables aimed directly at the reader are fairly rare, and, for me, remain on the right side of being mawkish or hectoring. I'm thus looking forward to reading the next two books in the series soon.

Blood Meridian (1985) Cormac McCarthy

Blood Meridian follows a band of American bounty hunters who are roaming the Mexican-American borderlands in the late 1840s. Far from being remotely swashbuckling, though, the group are collecting scalps for money and killing anyone who crosses their path. It is the most unsparing treatment of American genocide and moral depravity I have ever come across, an anti-Western that flouts every convention of the genre. Blood Meridian thus has a family resemblance to that other great anti-Western, Once Upon a Time in the West: after making a number of gun-toting films that venerate the American West (i.e. his Dollars Trilogy), Sergio Leone turned his cynical eye to the genre. Yet my previous paragraph actually euphemises just how violent Blood Meridian is. Indeed, I would need to be a much better writer (perhaps McCarthy himself) to adequately outline the tone of this book. In a certain sense, it's less that you read this book in a conventional sense, and more that you are forced to witness successive chapters of grotesque violence... all occurring for no obvious reason. It is often said that books 'subvert' a genre and, indeed, I implied as much above. But the term subvert implies a kind of Puck-like mischievousness, or brings to mind court jesters licensed to poke fun at the courtiers. By contrast, however, Blood Meridian isn't funny in the slightest. There isn't animal cruelty per se, but rather wanton negligence of another kind entirely. In fact, recalling a particular passage involving an injured horse makes me feel physically ill. McCarthy's prose is at once both baroque in its language and thrifty in its presentation. As Philip Connors wrote back in 2007, McCarthy has spent forty years writing as if he were trying to expand the Old Testament, and learning that McCarthy grew up around the Church therefore came as no real surprise. As an example of his textual frugality, I often looked for greater precision in the text, finding myself asking who a particular 'he' is, or to which side of a fight two men belonged. Yet we must always remember that there is no precision to be found in a gunfight, so this infidelity is turned into a virtue. It's not that these are fair fights anyway, or even 'murder': Blood Meridian is just slaughter; pure butchery. Murder is a gross understatement for what this book is, and at many points we are grateful that McCarthy spares us precision. At others, however, we can be thankful for his exactitude. There is no ambiguity regarding the morality of the puppy-drowning Judge, for example: a Colonel Kurtz who has been given free license over the entire American south. There is, thank God, no danger of Hollywood mythologising him into a badass hero. Indeed, we must all be thankful that it is impossible to film this ultra-violent book... and the broader idea of 'adapting' anything from this world is beyond sick. An absolutely brutal read; I cannot recommend it highly enough.

Bodies of Light (2014) Sarah Moss

Bodies of Light is a 2014 book by Glasgow-born Sarah Moss on the stirrings of women's suffrage within an arty clique in nineteenth-century England. Set in the intellectually smoggy cities of Manchester and London, this poignant book follows the studiously intelligent Alethia 'Ally' Moberly, who is struggling to gain the acceptance of herself, her mother and the General Medical Council. You can read my full review from July.

House of Leaves (2000) Mark Z. Danielewski

House of Leaves is a remarkably difficult book to explain. Although the plot refers to a fictional documentary about a family whose house is somehow larger on the inside than the outside, this quotidian horror premise doesn't explain the complex meta-commentary that Danielewski adds on top. For instance, the book contains a large number of pseudo-academic footnotes (many of which contain footnotes themselves), with references to scholarly papers, books, films and other articles. Most of these references are obviously fictional, but it's the kind of book where the joke is that some of them are not. The format, structure and typography of the book are highly unconventional too, with extremely unusual page layouts and styles. It's the sort of book and idea that should be a tired gimmick but somehow isn't. This is particularly so when you realise it seems specifically designed to create a fandom around it and to manufacture its own 'cult' status, something that should be extremely tedious. But not only does this not happen, House of Leaves seems to have survived through two exhausting decades of found footage: The Blair Witch Project and Paranormal Activity are, to an admittedly lesser degree, doing much of the same thing as House of Leaves. House of Leaves might have its origins in Nabokov's Pale Fire or even Derrida's Glas, but it seems to have more in common with the claustrophobic horror of Cube (1997). And like all of these works, House of Leaves has an extremely strange effect on the reader or viewer, something quite unlike reading a conventional book. It wasn't so much what I got out of the book itself, but how it added a glow to everything else I read, watched or saw at the time. An experience.

Milkman (2018) Anna Burns

This quietly dazzling novel from Irish author Anna Burns is full of intellectual whimsy and oddball incident. Incongruously set in 1970s Belfast during the Troubles, Milkman's 18-year-old narrator (known only as 'middle sister') is the kind of dreamer who walks down the street with a Victorian-era novel in her hand. It's usually an error for a book to specifically mention other books, if only because inviting comparisons to great novels is grossly ill-advised. But it is a credit to Burns' writing that the references here actually add to the text and don't feel like a kind of literary paint-by-numbers. Our humble narrator has a boyfriend of sorts, but the figure who looms the largest in her life is a creepy milkman: an older, married man who is deeply integrated into the local paramilitary tribalism. And when gossip about the narrator and the milkman surfaces, the milkman begins to invade her life to a suffocating degree. Yet this milkman is not even a milkman at all. Indeed, it's precisely this kind of oblique irony that runs through this daring but darkly compelling book.

The First Fifteen Lives of Harry August (2014) Claire North

Harry August is born, lives a relatively unremarkable life and finally dies a relatively unremarkable death. Not worth writing a novel about, I suppose. But then Harry finds himself born again in the very same circumstances, and as he grows from infancy into childhood again, he starts to remember his previous lives. This loop naturally drives Harry insane at first, but after finding that suicide doesn't stop the quasi-reincarnation, he becomes somewhat acclimatised to his fate. He prospers much better at school the next time around and is ultimately able to make better decisions about his life, especially when he just happens to know how to stay out of trouble during the Second World War. Yet what caught my attention in this 'soft' sci-fi book was not necessarily the book's core idea but rather the way its connotations were so intelligently thought through. Just as in a musical theme and variations, the success of any concept-driven book is far more a product of how the implications of the key idea are played out than of how clever the central idea was to begin with. Otherwise, you just have another neat Borges short story: satisfying, to be sure, but in a narrower way. From her relatively simple premise, for example, North has divined that if there were a community of people who could remember their past lives, this would actually allow messages and knowledge to be passed backwards and forwards in time. Ah, of course! Indeed, this very mechanism drives the plot: news comes back from the future that the progress of history is being interfered with, and, because of this, the end of the world is slowly coming. Through the lives that follow, Harry sets out to find out who is passing on technology before its time, and to work out how to stop them. With its gently-moralising romp through the salient historical touchpoints of the twentieth century, I sometimes got a whiff of Forrest Gump. But it must be stressed that this book is far less certain of its 'right-on' liberal credentials than Robert Zemeckis' badly-aged film. And whilst we're on the topic of other media, if you liked the underlying conceit behind Stuart Turton's The Seven Deaths of Evelyn Hardcastle yet didn't enjoy the 'variations' of that particular tale, then I'd definitely give The First Fifteen Lives a try. At the very least, 15 is bigger than 7. More seriously, though, The First Fifteen Lives appears to reflect anxieties about technology, particularly around modern technological accelerationism. At no point does it seriously suggest that if we could somehow possess the technology from a decade in the future then our lives would be improved in any meaningful way. Indeed, precisely the opposite is invariably implied. To me, at least, homo sapiens often seems to be merely marking time until we can blow each other up: destroying the climate whilst sleepwalking into some crisis that might precipitate a thermonuclear genocide sometimes seems to be built into our DNA. In an era of cli-fi novels and non-fiction newspaper headlines to match, to label North's insight as 'prescience' might perhaps be overstating it, but perhaps that is the point: this destructive and negative streak is universal to all periods of our violent, insecure species.

The Goldfinch (2013) Donna Tartt

After Breaking Bad, the second biggest runaway success of 2014 was probably Donna Tartt's doorstop of a novel, The Goldfinch. Yet upon its release and popular reception, it got a significant number of bad reviews in the literary press with, of course, an equal number of predictable think pieces claiming this was sour grapes on the part of the cognoscenti. Ah, to be in 2014 again, when our arguments were so much more trivial. For the uninitiated, The Goldfinch is a sprawling bildungsroman that centres on Theo Decker, a 13-year-old whose world is turned upside down when a terrorist bomb goes off whilst he is visiting the Metropolitan Museum of Art, killing his mother among other bystanders. Perhaps more importantly, he makes off with a painting in order to fulfil a promise to a dying old man: Carel Fabritius' 1654 masterpiece The Goldfinch. For the next 14 years (and almost 800 pages), the painting becomes the only connection to his lost mother as he's flung, almost entirely rudderless, around the Western world, encountering an array of eccentric characters. Whatever the critics claimed, Tartt's near-perfect evocation of scenes, from the everyday to the unimaginable, is difficult to summarise. I wouldn't label it 'cinematic' due to her evocation of the interiority of the characters. Take, for example: "Even the suggestion that my father had close friends conveyed a misunderstanding of his personality that I didn't know how to respond to." It's precisely this kind of relatable inner subjectivity that cannot be easily conveyed by film, and it is likely one of the main reasons why the 2019 film adaptation was such a damp squib. Tartt's writing is definitely not 'impressionistic' either: there are many near-perfect evocations of scenes, even ones we hope we cannot recognise from real life. In particular, some of the drug-taking scenes feel so credibly authentic that I sometimes worried about the author herself. Almost eight months on from first reading this novel, what I remember most was what a joy this was to read. I do worry that it won't stand up to a more critical re-reading (the character named Xandra even sounds like the pharmaceuticals she is taking), but I think I'll always treasure the first days I spent with this often-beautiful novel.

Beyond Black (2005) Hilary Mantel

Published about five years before the hyperfamous Wolf Hall (2009), Hilary Mantel's Beyond Black is a deeply disturbing book about spiritualism and the nature of Hell, somewhat incongruously set in modern-day England. Alison Harte is a middle-aged psychic medium who works in the various towns of the London orbital motorway. She is accompanied by her stuffy assistant, Colette, and her spirit guide, Morris, who is invisible to everyone but Alison. However, this is no gentle and musk-smelling world of the clairvoyant and mystic, for Alison is plagued by spirits from her past who infiltrate her physical world, becoming stronger and nastier every day. Alison's smiling and rotund persona thus conceals a truly desperate woman: she knows beyond doubt the terrors of the next life, yet must studiously conceal them from her credulous clients. Beyond Black would be worth reading for its dark atmosphere alone, but it offers much more than a chilling and creepy tale. Indeed, it is extraordinarily observant as well as unsettlingly funny about a particular tranche of British middle-class life. Still, it is the book's unnerving nature that sticks in the mind, and reading it noticeably changed my mood for days afterwards, and not necessarily for the best.

The Wall (2019) John Lanchester

The Wall tells the story of a young man called Kavanagh, one of the thousands of Defenders standing guard around a solid fortress that envelops the British Isles. A national service of sorts, it is Kavanagh's job to stop the so-called Others getting in. Lanchester is frank about what his wall provides to those who stand guard: the Defenders of the Wall are conscripted for two years on the Wall, with no exceptions, giving everyone in society a life plan and a story. But whilst The Wall is ostensibly about a physical wall, it works even better as a story about the walls in our mind. In fact, the book blends together some of the most important issues of our time: climate change, increasing isolation, Brexit and other widening societal divisions. If you liked P. D. James' The Children of Men you'll undoubtedly recognise much of the same intellectual atmosphere, although the sterility of John Lanchester's dystopia is definitely figurative and textual rather than literal. Despite the final chapters perhaps not living up to the world-building of the opening, The Wall features a taut and engrossing narrative, and it undoubtedly rewards even the most cursory glance at its symbolism. I've yet to read something by Lanchester I haven't enjoyed (even his short essay on cheating in sports, for example) and will definitely be reading more from him in 2022.

The Only Story (2018) Julian Barnes

The Only Story is the story of Paul, a 19-year-old boy who falls in love with 42-year-old Susan, a married woman with two daughters who are about Paul's age. The book begins with how Paul meets Susan in happy (albeit complicated) circumstances, but as the story unfolds, the novel becomes significantly more tragic and moving. Whilst the story begins in the first person, midway through the book it shifts into the second person, and, later, into the third as well. Both of these narrative changes suggested to me an attempt on the part of Paul the narrator (if not Barnes himself) to distance himself emotionally from the events taking place. This effect is a lot more subtle than it sounds, however: far more prominent and devastating is the underlying and deeply moving story about where the relationship ends up. Throughout this touching book, Barnes uses his mastery of language and observation to avoid the saccharine and the maudlin, and ends up with a heart-wrenching and emotive narrative. Without a doubt, this is the saddest book I read this year.

11 April 2017

Riku Voipio: Deploying OBS

Open Build Service from SuSE is a web service for building deb/rpm packages. It has recently been added to Debian, so finally there is a relatively easy way to set up PPA-style repositories in Debian. Relative as in "there is a learning curve, but nowhere near the complexity of replicating Debian's internal infrastructure". OBS will give you both repositories and build infrastructure, with a clickety web UI and a command line client (osc) to manage them. See Hector's blog for quickstart instructions.

Things learned while setting up OBS

With me coming from a Debian background, and OBS coming from the SuSE/RPM world, there are some quirks that can take you by surprise.

Well done packaging. Usually web services are a tough fit for distros: the cascade of weird dependencies and build systems means the only practical way to build an "open source" web service is by replicating the upstream CI scripts. Not in the case of OBS. Being done by distro people shows.

OBS does automatic rebuilds of reverse dependencies. Aka automatic binNMUs when you update a library. This however means you need lots of build power around. OBS has its own dependency resolver on the server that recalculates what packages need rebuilding and when; workers just get a list of packages to install for build-depends. This is a major divergence from Debian, where sbuild handles dependencies client-side. The OBS dependency handler doesn't handle virtual packages* / alternative build-deps like Debian does - you may have to add a specific "Prefer: foo-dev" to the OBS project config to resolve alternative choices.

OBS server and worker do http requests in both directions. On startup, workers connect to the OBS server, open a TCP port and wait for requests coming from OBS. Having connections in both directions is a bit of a hassle firewall-wise. On the bright side, no need to set up uploads via FTP here.

Signing repositories is complicated. With Debian 9.0 making signed repositories pretty much mandatory, OBS makes signing rather complicated. obs-signd isn't included in Debian, since it depends on a gnupg patch that hasn't been upstreamed. Fortunately I found a workaround: OBS signs release files with /usr/bin/sign -d /path/to/release, and replacing the obs-signd provided sign command with your own script is easy ;)

Git integration is rather bolted-on than integrated. OBS provides a method to integrate with git using services. There is no clickety UI to link to a git repo; instead you make an xml file called _service with osc (see the sketch at the end of this post). There is no way to have the debian/ tree in git.

The upstream community is friendly. Including the happiest thanks from an upstream I've seen recently.

Summary. All in all, rather satisfied with OBS. If you have a home-grown jenkins etc. based solution for building DEB/RPM packages, you should definitely consider OBS. For simpler uses, there is no need to install OBS yourself: the public openSUSE OBS will happily build Debian packages for you.

*How useful are virtual packages anymore? "foo-defaults" packages seem to be the go-to solution for most real use cases anyway.
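As an illustration of the "Prefer:" project-config tweak and the _service file mentioned above, a minimal sketch; the project name, repository URL and "foo-dev" are placeholders:

# edit the project configuration and add a line such as: Prefer: foo-dev
osc meta prjconf -e home:example

# _service, created next to the package with osc, fetches sources from git
# via the standard tar_scm source service:
<services>
  <service name="tar_scm">
    <param name="scm">git</param>
    <param name="url">https://git.example.com/foo.git</param>
    <param name="revision">master</param>
  </service>
</services>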

7 December 2016

Jonas Meurer: On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration

On November 4, I was made aware of a security vulnerability in the integration of cryptsetup into initramfs. The vulnerability was discovered by security researchers Hector Marco and Ismael Ripoll of the CyberSecurity UPV Research Group and got CVE-2016-4484 assigned. In this post I'll try to reflect a bit on the whole issue.

What CVE-2016-4484 is all about

Basically, the vulnerability is about two separate but related issues:

1. Initramfs rescue shell considered harmful

The main topic that Hector Marco and Ismael Ripoll address in their publication is that Debian exits into a rescue shell in case of failure during initramfs, and that this can be triggered by entering a wrong password ~93 times in a row. Indeed, the Debian initramfs implementation as provided by initramfs-tools exits into a rescue shell (usually a busybox shell) after a defined number of failed attempts to make the root filesystem available. The loop in question is in local_device_setup() at the local initramfs script. In general, this behaviour is considered a feature: if the root device hasn't shown up after 30 rounds, the rescue shell is spawned to provide the local user/admin with a way to debug and fix things herself. Hector Marco and Ismael Ripoll argue that in special environments, e.g. on public computers with password-protected BIOS/UEFI and bootloader, this opens an attack vector and needs to be regarded as a security vulnerability:
It is common to assume that once the attacker has physical access to the computer, the game is over. The attackers can do whatever they want. And although this was true 30 years ago, today it is not. There are many "levels" of physical access. [...] In order to protect the computer in these scenarios: the BIOS/UEFI has one or two passwords to protect the booting or the configuration menu; the GRUB also has the possibility to use multiple passwords to protect unauthorized operations. And in the case of an encrypted system, the initrd shall block the maximum number of password trials and prevent the access to the computer in that case.
While Hector and Ismael have a valid point in that the rescue shell might open an additional attack vector in special setups, this is not true for the vast majority of Debian systems out there: in most cases a local attacker can alter the boot order, replace or add boot devices, modify boot options in the (GNU GRUB) bootloader menu or modify/replace arbitrary hardware parts. The scenario required to make the initramfs rescue shell an additional attack vector is indeed very special: at the least, locked-down hardware, a password-protected BIOS and bootloader, and yet still local keyboard (or serial console) access are all required. Hector and Ismael argue that the default should be changed for enhanced security:
[...] But then Linux is used in more hostile environments, this helpful (but naive) recovery services shall not be the default option.
For the reasons explained above, I tend to disagree with Hector's and Ismael's opinion here. And after discussing this topic with several people I find my opinion reconfirmed: the Debian Security Team disputes the security impact of the issue and others agree. But leaving the disputable opinion on a sane default aside, I don't think that the cryptsetup package is the right place to change the default, if at all. If you want added security from a locked-down initramfs (i.e. no rescue shell spawned), then at least the bootloader (GNU GRUB) needs to be locked down by default as well. To make it clear: if one wants to lock down the boot process, bootloader and initramfs should be locked down together. And the right place to do this would be the configurable behaviour of grub-mkconfig. Here, one can set a password for GRUB and the boot parameter 'panic=1', which disables the spawning of a rescue shell in initramfs. But as mentioned, I don't agree that these would be sane defaults. The vast majority of Debian systems out there don't gain any security from a locked-down bootloader and initramfs, and the benefit of a rescue shell for debugging purposes clearly outweighs the minor security impact in my opinion. For the few setups which require the added security of a locked-down bootloader and initramfs, we already have the relevant options documented in the Securing Debian Manual. After discussing the topic with the initramfs-tools maintainers today, Guilhem and I (the cryptsetup maintainers) finally decided not to change any defaults and just add a 'sleep 60' after the maximum allowed attempts have been reached.

2. tries=n option ignored, local brute-force slightly cheaper

Apart from the issue of a rescue shell being spawned, Hector and Ismael also discovered a programming bug in the cryptsetup initramfs integration. This bug in the cryptroot initramfs local-top script allowed endless retries of passphrase input, ignoring the tries=n option of crypttab (and the default of 3). As a result, theoretically unlimited attempts to unlock encrypted disks were possible when processed during the initramfs stage. The attack vector here was that local brute-force attacks become a bit cheaper: instead of having to reboot after the maximum number of tries was reached, one could keep trying passwords. Even though efficient brute-force attacks are mitigated by the PBKDF2 implementation in cryptsetup, this clearly is a real bug. The reason for the bug was twofold:
  • First, the condition in setup_mapping() responsible for making the function fail when the maximum number of allowed attempts is reached was never met:
    setup_mapping()
     
      [...]
      # Try to get a satisfactory password $crypttries times
      count=0
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          export CRYPTTAB_TRIED="$count"
          count=$(( $count + 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -gt $crypttries ]; then
          message "cryptsetup: maximum number of tries exceeded for $crypttarget"
          return 1
      fi
      [...]
    As one can see, the while loop only runs for as long as $count -lt $crypttries holds (once $crypttries is positive), so after the loop $count can at most equal $crypttries. Thus the second condition $count -gt $crypttries is never met. This can easily be fixed by decreasing $count by one in case of a successful unlock attempt, along with changing the second condition to $count -ge $crypttries:
    setup_mapping()
     
      [...]
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          [...]
          # decrease $count by 1, apparently last try was successful.
          count=$(( $count - 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then
          [...]
      fi
      [...]
     
    
    Christian Lamparter already spotted this bug back in October 2011 and provided an (incomplete) patch, but back then I even managed to merge the patch in an improper way, making it even more useless: the patch by Christian forgot to decrease $count by one in case of a successful unlock attempt, resulting in warnings about maximum tries exceeded even for successful attempts in some circumstances. But instead of adding the decrease myself and keeping the (almost correct) condition $count -eq $crypttries for detection of exceeded maximum tries, I changed the condition back to the wrong original $count -gt $crypttries, which again was never met. Apparently I didn't test the fix properly back then. I definitely should do better in future!
  • Second, back in December 2013, I added a cryptroot initramfs local-block script, as suggested by Goswin von Brederlow, in order to fix bug #678692. The purpose of the cryptroot initramfs local-block script is to invoke the cryptroot initramfs local-top script again and again in a loop. This is required to support complex block device stacks. In fact, the countless variations of stacked block devices are one of the biggest and most inglorious reasons that the cryptsetup initramfs integration scripts became so complex over the years. After all, we need to support setups like rootfs on top of LVM with two separate encrypted PVs, or rootfs on top of LVM on top of dm-crypt on top of MD raid. The problem with the local-block script is that exiting the setup_mapping() function merely triggers a new invocation of the very same function. The researchers who discovered the bug suggested a simple and good solution to it (sketched below): when the maximum number of attempts is detected (by the second condition from above), the script sleeps for 60 seconds. This mitigates the brute-force attack options for local attackers - even rebooting after max attempts would be faster.
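    A minimal sketch of the agreed mitigation, reusing the variable and helper names from the snippets above (the exact placement within the shipped initramfs scripts may differ):

      # after the unlock loop: stall local brute-forcing once the limit is hit
      if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then
          message "cryptsetup: maximum number of tries exceeded for $crypttarget"
          sleep 60
          return 1
      fi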

About disclosure, wording and clickbaiting

I'm happy that Hector and Ismael brought up the topic and made their argument about the security impact of an initramfs rescue shell, even though I have to admit that I was rather astonished that they got a CVE assigned. Nevertheless I'm very happy that they informed the Security Teams of Debian and Ubuntu prior to publishing their findings, which put me in the loop in turn. Also, Hector and Ismael were open and responsive when it came to discussing their proposed fixes. But unfortunately the way they advertised their finding was not very helpful. They announced a speech about this topic at DeepSec 2016 in Vienna with the headline "Abusing LUKS to Hack the System". Honestly, this headline is misleading - if not wrong - in several ways:
  • First, the whole issue is not about LUKS, nor is it about cryptsetup itself. It's about Debian's integration of cryptsetup into the initramfs, which is a completely different story.
  • Second, the term 'hack the system' suggests that an exploit to break into the system is revealed. This is not true. The device encryption is not endangered at all.
  • Third - as shown above - very special prerequisites need to be met in order to make the mere existence of a LUKS encrypted device the relevant factor in being able to spawn a rescue shell during initramfs.
Unfortunately, the way this issue was published led to even worse articles in the tech news press. Headlines like "Major security hole found in Cryptsetup script for LUKS disk encryption" or "Linux Flaw allows Root Shell During Boot-Up for LUKS Disk-Encrypted Systems" suggest that a major security vulnerability was revealed, one that compromised the protection that cryptsetup and LUKS offer. If these articles did anything at all, it was to cause damage to the cryptsetup project, which is not affected by the whole issue at all. After the cat was out of the bag, Marco and Ismael agreed that the way the news picked up the issue was suboptimal, but I cannot fight the feeling that the over-exaggeration was partly intended and that clickbaiting is taking place here. That's a bit sad.

1 October 2016

Kees Cook: security things in Linux v4.6

Previously: v4.5. The v4.6 Linux kernel release included a bunch of stuff, with much more of it under the KSPP umbrella.

seccomp support for parisc: Helge Deller added seccomp support for parisc, which included plumbing support for PTRACE_GETREGSET to get the self-tests working.

x86 32-bit mmap ASLR vs unlimited stack fixed: Hector Marco-Gisbert removed a long-standing limitation to mmap ASLR on 32-bit x86, where setting an unlimited stack (e.g. "ulimit -s unlimited") would turn off mmap ASLR (which provided a way to bypass ASLR when executing setuid processes). Given that ASLR entropy can now be controlled directly (see the v4.5 post), and that the cases where this created an actual problem are very rare, if a system sees collisions between unlimited stack and mmap ASLR, it can just adjust the 32-bit ASLR entropy instead.

x86 execute-only memory: Dave Hansen added Protection Key support for future x86 CPUs and, as part of this, implemented support for execute-only memory in user-space. On pkeys-supporting CPUs, using mmap(..., PROT_EXEC) (i.e. without PROT_READ) will mean that the memory can be executed but cannot be read (or written). This provides some mitigation against automated ROP gadget finding where an executable is read out of memory to find places that can be used to build a malicious execution path. Using this will require changing some linker behavior (to avoid putting data in executable areas), but seems to otherwise Just Work. I'm looking forward to either emulated QEmu support or access to one of these fancy CPUs.

CONFIG_DEBUG_RODATA enabled by default on arm and arm64, and mandatory on x86: Ard Biesheuvel (arm64) and I (arm) made the poorly-named CONFIG_DEBUG_RODATA enabled by default. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so making it on-by-default is required to start any kind of attack surface reduction within the kernel. On x86 CONFIG_DEBUG_RODATA was already enabled by default, but, at Ingo Molnar's suggestion, I made it mandatory: CONFIG_DEBUG_RODATA cannot be turned off on x86. I expect we'll get there with arm and arm64 too, but the protection is still somewhat new on these architectures, so it's reasonable to continue to leave an out for developers that find themselves tripping over it.

arm64 KASLR text base offset: Ard Biesheuvel reworked a ton of arm64 infrastructure to support kernel relocation and, building on that, Kernel Address Space Layout Randomization of the kernel text base offset (and module base offset). As with x86 text base KASLR, this is a probabilistic defense that raises the bar for kernel attacks where finding the KASLR offset must be added to the chain of exploits used for a successful attack. One big difference from x86 is that the entropy for the KASLR must come either from Device Tree (in the /chosen/kaslr-seed property) or from UEFI (via EFI_RNG_PROTOCOL), so if you're building arm64 devices, make sure you have a strong source of early-boot entropy that you can expose through your boot-firmware or boot-loader.

zero-poison after free: Laura Abbott reworked a bunch of the kernel memory management debugging code to add zeroing of freed memory, similar to PaX/Grsecurity's PAX_MEMORY_SANITIZE feature.
This feature means that memory is cleared at free, wiping any sensitive data so it doesn't have an opportunity to leak in various ways (e.g. accidentally uninitialized structures or padding), and that certain types of use-after-free flaws cannot be exploited since the memory has been wiped. To take things even a step further, the poisoning can be verified at allocation time to make sure that nothing wrote to it between free and allocation (called "sanity checking"), which can catch another small subset of flaws. To understand the pieces of this, it's worth describing that the kernel's higher level allocator, the page allocator (e.g. __get_free_pages()), is used by the finer-grained slab allocator (e.g. kmem_cache_alloc(), kmalloc()). Poisoning is handled separately in both allocators. The zero-poisoning happens at the page allocator level. Since the slab allocators tend to do their own allocation/freeing, their poisoning happens separately (since on slab free nothing has been freed up to the page allocator). Only limited performance tuning has been done, so the penalty is rather high at the moment, at about 9% when doing a kernel build workload. Future work will include some exclusion of frequently-freed caches (similar to PAX_MEMORY_SANITIZE), and making the options entirely CONFIG controlled (right now both CONFIGs are needed to build in the code, and a kernel command line option is needed to activate it). Performing the sanity checking (mentioned above) adds another roughly 3% penalty. In the general case (and once the performance of the poisoning is improved), the security value of the sanity checking isn't worth the performance trade-off. Tests for the features can be found in lkdtm as READ_AFTER_FREE and READ_BUDDY_AFTER_FREE. If you're feeling especially paranoid and have enabled sanity-checking, WRITE_AFTER_FREE and WRITE_BUDDY_AFTER_FREE can test these as well. To perform zero-poisoning of page allocations and (currently non-zero) poisoning of slab allocations, build with:
CONFIG_DEBUG_PAGEALLOC=n
CONFIG_PAGE_POISONING=y
CONFIG_PAGE_POISONING_NO_SANITY=y
CONFIG_PAGE_POISONING_ZERO=y
CONFIG_SLUB_DEBUG=y
and enable the page allocator poisoning and slab allocator poisoning at boot with this on the kernel command line:
page_poison=on slub_debug=P
To add sanity-checking, change PAGE_POISONING_NO_SANITY=n, and add F to slub_debug as slub_debug=PF.

read-only after init

I added the infrastructure to support making certain kernel memory read-only after kernel initialization (inspired by a small part of PaX/Grsecurity's KERNEXEC functionality). The goal is to continue to reduce the attack surface within the kernel by making even more of the memory, especially function pointer tables, read-only (which depends on CONFIG_DEBUG_RODATA above). Function pointer tables (and similar structures) are frequently targeted by attackers when redirecting execution. While many are already declared const in the kernel source code, making them read-only (and therefore unavailable to attackers) for their entire lifetime, there is a class of variables that get initialized during kernel (and module) start-up (i.e. written to during functions that are marked __init) and then never (intentionally) written to again. Some examples are things like the VDSO, vector tables, arch-specific callbacks, etc. As it turns out, most architectures with kernel memory protection already delay making their data read-only until after __init (see mark_rodata_ro()), so it's trivial to declare a new data section (.data..ro_after_init) and add it to the existing read-only data section (.rodata). Kernel structures can be annotated with the new section (via the __ro_after_init macro), and they'll become read-only once boot has finished. The next step for attack surface reduction infrastructure will be to create a kernel memory region that is passively read-only, but can be made temporarily writable (by a single un-preemptable CPU), for storing sensitive structures that are written to only very rarely. Once this is done, much more of the kernel's attack surface can be made read-only for the majority of its lifetime. As people identify places where __ro_after_init can be used, we can grow the protection. A good place to start is to look through the PaX/Grsecurity patch to find uses of __read_only on variables that are only written to during __init functions. The rest are places that will need the temporarily-writable infrastructure (PaX/Grsecurity uses pax_open_kernel()/pax_close_kernel() for these). That's it for v4.6, next up will be v4.7!
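To illustrate the __ro_after_init annotation described above, a hypothetical kernel-code sketch (the structure and function names are invented):

#include <linux/cache.h>
#include <linux/init.h>

/* A callback table that is written exactly once during boot. */
static const struct my_ops *active_ops __ro_after_init;

static int __init my_subsys_init(void)
{
        /* Last legal write: we are still inside an __init function. */
        active_ops = &default_my_ops;
        return 0;
}
/* Once boot finishes (see mark_rodata_ro()), any write to active_ops faults. */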

2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

10 November 2014

Neil Williams: On getting NEW packages into stable

There's a lot of discussion / moaning / arguing at this time, so I thought I'd post something about how LAVA got into Debian Jessie, the work involved and the lessons I've learnt. Hopefully, it will help someone avoid the disappointment of having their package miss the migration into a future stable release. This was going to be a talk at the Minidebconf-uk in Cambridge, but I decided to put this out as a permanent blog entry in the hope that it will be a useful reference for the future, not just Jessie.

Context

LAVA relies on a number of dependencies which were (at the time all this started) NEW to Debian, as well as many others already in Debian. I'd been running LAVA using packages on my own system for a few months before the packages were ready for use on the main servers (I never actually installed LAVA using the old virtualenv method on my own systems, except in a VM). I did do quite a lot of this on my own but I also had a team supporting the effort and valuing the benefits of moving to a packaged system. At the time, LAVA was based on Ubuntu (12.04 LTS Precise Pangolin) and a new Ubuntu LTS was close (Trusty Tahr 14.04), but I started work on this in 2013. By the time my packages were ready for general usage, it was winter 2013 and much too close to get anything into Ubuntu in time for Trusty. So I started a local repo using space provided by Linaro. At the same time, I started uploading the dependencies to Debian. json-schema-validator, django-testscenarios and others arrived in April and May 2014. (Trusty was released in April.) LAVA arrived in NEW in May, being accepted into unstable at the end of June. LAVA arrived in testing for the first time in July 2014. Upstream development continued apace and a regular monthly upload, with some hotfixes in between, continued until close to the freeze. At this point, note that although upstream is a medium-sized team, the Debian packaging also has a team, but all the uploads were made by me. I planned ahead. I knew that I would be going to Macau for Linaro Connect in February, a critical stage in the finalisation of the packages and the migration of existing instances from the old methods. I knew that I would be on vacation from August through to the end of September 2014, including at least two weeks with absolutely no connectivity of any kind. Right at this time, Django 1.7 arrived in experimental with the intent to go into unstable and hence into Jessie. This was a headache for me; I initially sought to delay the migration until after Jessie. However, we discussed it upstream, allocated time within the busy schedule and also sought help from within Debian with the RFH tag. Raphaël Hertzog contributed patches for Django 1.7 support and we worked on those patches upstream, once I was back from vacation. (The final week of my vacation was a work conference, so we had everyone together at one hacking table.) Still there was more to do: the Django 1.7 patches allowed the unit tests to complete but broke other parts of the lava-server package and needed subsequent tweaks and fixes. Even with all this, the auto-removal from testing for packages affected by RC bugs in their dependencies became very important to monitor (it still is). It would be useful if some packages had less complex dependency chains (I'm looking at you, uwsgi) as the auto-removal also covers build-depends. This led to some more headaches with libmatheval.
I'm not good with functional programming languages; I did have some exposure to Scheme when working on Gnucash upstream but it wasn't pleasant. The thought of fixing a Scheme problem in the test suite of libmatheval was daunting. Again though, asking for help, I found people in the upstream team who wanted to refresh their use of Scheme and were able to help out. The fix migrated into testing in October. Just for added complications, lava-server gained a few RC bugs of its own during that time too, fixed upstream but awkward nonetheless.

Achievement unlocked

So that's how a complex package like lava-server gets into stable: with a lot of help. The main problem with top-level packages like this is the sheer weight of the dependency chain. Something seemingly unrelated (like libmatheval) can seriously derail the migrations. The package doesn't use the matheval support provided by uwsgi. The bug in matheval wasn't in the parts of matheval used by uwsgi. It wasn't in a language I am at all comfortable fixing, but it's my name on the changelog of the NMU. That happened because I asked for help. OK, when Django 1.7 was scheduled to arrive in Debian unstable and I knew that lava was not ready, I reacted out of fear and anxiety. However, I sought help, help was provided and that help was enough to get upstream to a point where the side-effects of the required changes could be fixed. Maintaining a top-level package in Debian is becoming more like maintaining a core package in Debian, and that is a good thing. When your package has a lot of dependencies, those dependencies become part of the maintenance workload of your package. It doesn't matter if those are install-time dependencies, build dependencies or reverse dependencies. It doesn't actually matter if the issues in those packages are in languages you would personally wish to be expunged from the archive. It becomes your problem, but not yours alone. Debian has a lot of flames right now and Enrico encouraged us to look at what else is actually happening in Debian besides those arguments. Well, on top of all this with lava, I also did what I could to help the arm64 port along and I'm very happy that this has been accepted into Jessie as an official release architecture. That's a much bigger story than LAVA, yet LAVA was and remains instrumental in how arm64 gained the support in the kernel and various upstreams which allowed patches to be accepted and fixes to be incorporated into Debian packages. So, a roll call of helpers who may otherwise not have been recognised via changelogs, in no particular order: Also general thanks to the Debian FTP and Release teams.

Lessons learnt
  1. Allow time! None of the deadlines or timings involved in this entire process were hidden or unexpected. NEW always takes a finite but fairly lengthy amount of time, but that was the only timeframe with any amount of uncertainty. That is actually a benefit: it reminds you that this entire process is going to take a significant amount of time, and the only loser if you try to rush it is going to be you and your package. Plan for the time and be sceptical about how much time is actually required.
  2. Ask for help! Everyone in Debian is a volunteer. Yes, the upstream for this project is a team of developers paid to work on this code (and largely only this code), but the upstream also has priorities, requirements, objectives and deadlines. It's no good expecting upstream to do everything. It's no good leaving upstream insufficient time to fit the required work into the existing upstream schedules. So ask for help within upstream and within Debian; ask for help wherever you can. You don't know who may be able to help you until you ask. Be clear when asking for help: how would someone test their proposed fix? Exactly what are you asking for help doing? (Hint: "everything" is not a good answer.)
  3. Keep on top of announcements and changes. The release team in Debian have made the timetable strict and have published regular updates, guidelines and status notes. As maintainer, it is your responsibility to keep up with those changes and make others in the upstream team aware of the changes and the implications. Upstream will rely on you to provide accurate information about these requirements. This is almost more important than actually providing the uploads or fixes. Without keeping people informed, even asking for help can turn out to be counter-productive. Communicate within Debian too: talk to the teams, send status updates to bugs (even if the status is "tag 123456 + help").
  4. Be realistic! Life happens around us, things change, personal timetables get torn up. Time for voluntary activity can appear and disappear (it tends to disappear far more often than it extends, so take that into account too).
  5. Do not expect others to do the work for you: asking for help is one thing, leaving the work to others is quite another. No complaining to the release team that they are blocking your work, and avoid pleading or arguing when a decision is made. The policies and procedures within Debian are generally clear, and there are quite enough arguments without adding more. Read the policies, read the guidelines, watch how other packages and other maintainers are handled and avoid those mistakes. Make it easy for others to help deliver what you want.
  6. Get to know your dependency chain: follow the links on the packages.debian.org pages and get a handle on which packages are relevant to your package. Subscribe to the bug pages for some of the more high-risk packages. There are tools to help, as shown below. rc-alert can help you spot problems with runtime dependencies (you do have your own package installed on a system running unstable? If not, get that running NOW). Watching build-dependencies is more difficult, especially build-dependencies of a runtime dependency, so watch the RC bug lists for packages in your dependency chain.
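For example (rc-alert ships in the devscripts package):

$ apt-get install devscripts
$ rc-alert    # list installed packages affected by release-critical bugs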
Above all else, remember why you and upstream want the packages in Debian in the first place. Debian is a respected distribution and has an acknowledged reputation for stability and portability. The very qualities that you and your upstream desire from having your package in Debian have direct implications for the amount of work and the amount of time that will be required to get your packages into Debian and keep them there. Having your package in Debian will bring considerable benefits, but you will be required to invest a considerable amount of time. It is this contribution which is valuable to Debian and it is this work which will deliver the benefits you seek. Being an expert in the one package is wildly inadequate. Debian is about the system, the whole distribution, and sooner or later you as the maintainer will be absolutely required to handle something which is so far out of your comfort zone it's untrue. The reality is that you are not expected to fix that problem: you are expected to handle that problem, and that includes seeking and acknowledging the help of others. The story isn't over until release day. Having your package in testing the day before the freeze is one step. It may be a large step, but it is only one. The status of that package still needs monitoring. That long dependency chain can still come back and bite. Don't wait for problems to surprise you.

Finally

One thing I do ask is that other upstream teams and maintainers think about the dependency chain they are creating. It may sound nice to have bindings for every interpreted language possible when releasing your compiled library, but it does not help people using that library. Yes, it is more work releasing the bindings separately, because a stable API is going to be needed to allow version 1.2.3 to work alongside 1.2.2 and 1.3.0, or the entire effort is pointless. Consider how your upstream package migrates. Consider how adding yet another build-dependency for an optional component makes things exponentially harder for those who need to rely on that upstream. If it is truly optional, release it separately and keep backwards compatibility on each side. It is more work, but in reality all that is happening is that the work is being transferred from the distribution (where it helps only that one distribution and causes duplication into other distributions) into the upstream (where it helps all distributions). Think carefully about what constitutes core functionality and release the rest separately. Combining bindings for php, ruby, python, java, lua and xslt into a single upstream release tarball is a complete nonsense. It simply means that the package gets blocked from new uploads by the constant churn of being involved in every transition that occurs in the distribution. There is a very real risk that the package will miss a stable release simply by having fingers in too many pies. That hurts not only this upstream but every upstream trying to use any part of your code. Every developer likes to think that people are using and benefiting from their effort. It's not nice to directly harm the interests of other developers trying to use your code. It is not enough for the binary packages to be discrete: migrations happen by source package, and the released tarball needs to not include the optional bindings. It must be this way because it is the source package which determines whether version 1.2.3 of plugin foo can work with version 1.2.0 of the library as well as with version 1.3.0.
Maintainers regularly deal with these issues, so talk to your upstream teams and explain why this is important to that particular team. Help other maintainers use your code and help make it easier to make a stable release of Debian. The quicker the freeze & release process becomes, the quicker new upstream versions can be uploaded and backported.

7 November 2014

Riku Voipio: Adventures in setting up local lava service

Linaro uses LAVA as a tool to test a variety of devices. So far I had not installed it myself, mostly due to assuming it to be enormously complex to set up. But thanks to Neil Williams' work on packaging, installation has got a lot easier. Follow the Official Install Doc and Official install to Debian Doc, roughly looking like:

1. Install Jessie into kvm

kvm -m 2048 -drive file=lava2.img,if=virtio -cdrom debian-testing-amd64-netinst.iso
2. Install lava-server

apt-get update; apt-get install -y postgresql nfs-kernel-server apache2
apt-get install lava-server
# answer debconf questions
a2dissite 000-default && a2ensite lava-server.conf
service apache2 reload
lava-server manage createsuperuser --username default --email=foo.bar@example.com
$EDITOR /etc/lava-dispatcher/lava-dispatcher.conf # make sure LAVA_SERVER_IP is right
That's the generic setup. Now you can point your browser to the IP address of the kvm machine, and log in with the default user and the password you made.

3 ... 1000

Each LAVA instance is site-customized for the boards, network, serial ports, etc. In this example, I now add a single arndale board.

cp /usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types/arndale.conf /etc/lava-dispatcher/device-types/
sudo /usr/share/lava-server/add_device.py -s arndale arndale-01 -t 7001
This generates an almost usable config for the arndale. Now for the site specifics: I have usb-to-serial adapters, and outside kvm I provide access to the serial ports using the following ser2net config:

7001:telnet:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
7002:telnet:0:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT
TODO: make ser2net not run as root and ensure usb2serial devices always get the same name... For automatic power reset, I wanted something cheap, yet something that wouldn't require much soldering (I'm not a real embedded engineer... I prefer the software side ;). Discussed with Hector, who hinted about prebuilt relay boxes. Chose one from Ebay, a kmtronic 8-port USB Relay. So now I have this cute boxed nonsense hack. The USB relay is driven with a short script, hard-reset-1:

stty -F /dev/ttyACM0 9600
echo -e '\xFF\x01\x00' > /dev/ttyACM0
sleep 1
echo -e '\xFF\x01\x01' > /dev/ttyACM0
Sidenote: if you don't have or want an automated power relay for lava, you can always replace this script with something along the lines of "mpg123 puny_human_press_the_power_button_now.mp3". Both the serial port and the reset script are on a server with the dns name aimless. So we take the /etc/lava-dispatcher/devices/arndale-01.conf that add_device.py created and make it look like:

device_type = arndale
hostname = arndale-01
connection_command = telnet aimless 7001
hard_reset_command = slogin lava@aimless -i /etc/lava-dispatcher/id_rsa /home/lava/hard-reset-1
Since in my case I'm only going to test with tftp/nfs boot, the arndale board only needs to be set up with a u-boot bootloader ready at power-on. Now everything is ready for a test job. I have a locally built kernel and device tree, and I export the directory using the httpd available by default in Debian... Python!

cd out/
python -m SimpleHTTPServer
Go to the lava web server, select api->tokens and create a new token. Next we add the token and use it to submit a job

$ sudo apt-get install lava-tool
$ lava-tool auth-add http://default@lava-server/RPC2/
$ lava-tool submit-job http://default@lava-server/RPC2/ lava_test.json
submitted as job id: 1
$
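For reference, a stripped-down lava_test.json for this kind of tftp/nfs job could look roughly like the following - an illustrative sketch only, with invented names and URLs; the device type, timeout and file locations all need adjusting to the local setup:

{
    "job_name": "arndale-kernel-boot",
    "device_type": "arndale",
    "timeout": 900,
    "actions": [
        {
            "command": "deploy_linaro_kernel",
            "parameters": {
                "kernel": "http://<http-server>:8000/uImage",
                "dtb": "http://<http-server>:8000/board.dtb"
            }
        },
        {
            "command": "boot_linaro_image"
        }
    ]
}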
The first job should now be visible in the lava web frontend, in the scheduler -> jobs part. If everything goes fine, the relay will click in a moment and the job will finish in a few minutes.

8 May 2014

Riku Voipio: Arm builder updates

Debian has recently received a donation of 8 build machines from Marvell. The new machines come with quad-core MV78460 Armada XP CPUs, DDR3 DIMM slots so we can plug in more memory, and speedy SATA ports. They replace the Marvell MV78200 based builders that have served us well, building Debian armel since 2009. We are planning a more detailed announcement, but I'll provide a quick summary: the speed increase provided by the MV78460 can be seen by comparing build times on selected builds since early April: Qemu build times. We can now build Qemu in 2h instead of 16h - 8x faster than before! Certainly a substantial improvement, and impressive kit from Marvell! But not all packages gain this amount of speedup: webkitgtk build times. This example, webkitgtk, builds barely 3x faster. The explanation is found in the debian/rules of webkitgtk:

# Parallel builds are unstable, see #714072 and #722520
# ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# MAKEARGUMENTS += -j$(NUMJOBS)
# endif
The old builders are single-core[1], so regardless of parallel building you can easily max out the CPU. The new builders will use only 1 of 4 cores without parallel build support in debian/rules. In this buildd CPU usage graph, we can see that most of the time only one CPU is in use. So, for fast package build times, make sure your package supports parallel building (see the snippet at the end of this post). For developers, abel.debian.org is a porter machine with an Armada XP. It has schroots for both armel and armhf. Set "DEB_BUILD_OPTIONS=parallel=4" and off you go. Finally I'd like to thank Thomas Petazzoni, Maen Suleiman, Hector Oron, Steve McIntyre, Adam Conrad and Jon Ward for making the upgrade happen. Meanwhile, we have unrelated trouble - a bunch of disks have failed within a few days of each other. I guess the warranty just ran out... [1] Only from Linux's point of view - the MV78200 actually has 2 cores, just not SMP or cache-coherent. You could run an RTOS on one core while running Linux on the other.
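Returning to parallel builds: for reference, the snippet commented out above is the conventional way for debian/rules to honour them. Uncommented, it looks like this, with MAKEARGUMENTS then passed to the $(MAKE) invocations in the build targets:

# honour DEB_BUILD_OPTIONS=parallel=N by passing -jN to make
ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
MAKEARGUMENTS += -j$(NUMJOBS)
endif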

26 December 2013

Ian Campbell: Running ARM Grub on U-boot on Qemu

At the Mini-DebConf in Cambridge back in November there was an ARM sprint (which Hector wrote up as a Bits from ARM porters mail). During this there was a brief discussion about using GRUB as a standard bootloader, particularly for ARM server devices. This has the advantage of providing a more "normal" (which in practice means "x86 server-like") and more flexible solution compared with the existing flash-kernel tool which is often used on ARM. On ARMv7 devices this will more than likely involve chain-loading from the U-Boot supplied by the manufacturer. For test and development it would be useful to be able to set up a similar configuration using Qemu. Cross-compilers Although this can be built and run on an ARM system, I am using a cross-compiler here. I'm using gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux from Linaro, which can be downloaded from the linaro-toolchain-binaries page on Launchpad. (It looks like 2013.10 is the latest available right now; I can't see any reason why that wouldn't be fine.) Once the cross-compiler has been downloaded, unpack it somewhere; I will refer to the resulting gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux directory as $CROSSROOT. Make sure $CROSSROOT/bin (which contains arm-linux-gnueabihf-gcc etc.) is in your $PATH. Qemu I'm using the version packaged in Jessie, which is 1.7.0+dfsg-2. We need both qemu-system-arm for running the final system and qemu-user to run some of the tools. I'd previously tried an older version of qemu (1.6.x?) and had some troubles, although they may have been of my own making... Das U-boot for Qemu The first thing to do is to build a suitable u-boot for use in the qemu-emulated environment. Since we need to make some configuration changes we need to build from scratch. Start by cloning the upstream git tree:
$ git clone git://git.denx.de/u-boot.git
$ cd u-boot
I am working on top of e03c76c30342 "powerpc/mpc85xx: Update CONFIG_SYS_FSL_TBCLK_DIV for T1040" dated Wed Dec 11 12:49:13 2013 +0530. We are going to use the Versatile Express Cortex-A9 u-boot but first we need to enable some additional configuration options: You can add all these to include/configs/vexpress_common.h:
#define CONFIG_API
#define CONFIG_SYS_MMC_MAX_DEVICE   1
#define CONFIG_CMD_EXT2
#define CONFIG_CMD_ECHO
Or you can apply the patch which I sent upstream:
$ wget -O - http://patchwork.ozlabs.org/patch/304786/raw | git apply --index
$ git commit -m "Additional options for grub-on-uboot"
Finally we can build u-boot:
$ make CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
$ make CROSS_COMPILE=arm-linux-gnueabihf-
The result is a u-boot binary which we can load with qemu. GRUB for ARM Next we can build grub. Start by cloning the upstream git tree:
$ git clone git://git.sv.gnu.org/grub.git
$ cd grub
By default grub is built for systems which have RAM at address 0x00000000. However the Versatile Express platform which we are targeting has RAM starting from 0x60000000 so we need to make a couple of modifications. First in grub-core/Makefile.core.def we need to change arm_uboot_ldflags, from:
-Wl,-Ttext=0x08000000
to
-Wl,-Ttext=0x68000000
and second we need to make a similar change to include/grub/offsets.h, changing GRUB_KERNEL_ARM_UBOOT_LINK_ADDR from 0x08000000 to 0x68000000. Now we are ready to build grub:
$ ./autogen.sh
$ ./configure --host arm-linux-gnueabihf
$ make
Now we need to build the final grub "kernel" image. Normally this would be taken care of by grub-install, but because we are cross-building grub we cannot use this and have to use grub-mkimage directly. However, the version we have just built is for the ARM target and not for the host we are building things on. I've not yet figured out how to build grub for ARM while building the tools for the host system (I'm sure it is possible somehow...). Luckily we can use qemu to run the ARM binary:
$ cat load.cfg
set prefix=(hd0)
$ qemu-arm -r 3.11 -L $CROSSROOT/arm-linux-gnueabihf/libc \
    ./grub-mkimage -c load.cfg -O arm-uboot -o core.img -d grub-core/ \
    fat ext2 probe terminal scsi ls linux elf msdospart normal help echo
Here we create load.cfg, which is the setup script built into the grub kernel; our version just sets the root device so that grub can find the rest of its configuration. Then we use qemu-arm to invoke grub-mkimage. The "-r 3.11" option tells qemu to pretend to be a 3.11 kernel (which is required by the libc used by our cross-compiler; without this you will get a fatal: kernel too old message) and "-L $CROSSROOT/..." tells it where to find the basic libraries, such as the dynamic linker (luckily grub-mkimage doesn't need much in the way of libraries, so we don't need a full cross library environment). The grub-mkimage command passes in the load.cfg and requests an output kernel targeting arm-uboot; core.img is the output file and the modules are in grub-core (because we didn't actually install grub in the target system; normally these would be found in /boot/grub). Lastly we pass in a list of default modules to build into the kernel, including filesystem drivers (fat, ext2), disk drivers (scsi), partition handling (msdospart), loaders (linux, elf), the menu system (normal) and various other bits and bobs. So after all that we now have our grub kernel in core.img. Putting it all together Before we can launch qemu we need to create various disk images. Firstly we need some images for the two 64M flash devices:
$ dd if=/dev/zero of=pflash0.img bs=1M count=64
$ dd if=/dev/zero of=pflash1.img bs=1M count=64
We will initialise these later from the u-boot command line. Secondly we need an image for the root filesystem on an MMC device. I'm using a FAT formatted image here simply for the convenience of using mtools to update the images during development.
$ dd if=/dev/zero of=mmc.img bs=1M count=16
$ /sbin/mkfs.vfat mmc.img
Thirdly we need a kernel, device tree and grub configuration on our root filesystem. For the first two I extracted them from the standard armmp kernel flavour package. I used the backports.org 3.11-0.bpo.2-armmp version and extracted /boot/vmlinuz-3.11-0.bpo.2-armmp as vmlinuz and /usr/lib/linux-image-3.11-0.bpo.2-armmp/vexpress-v2p-ca9.dtb as dtb. Then I hand-coded a simple grub.cfg:
menuentry 'Linux' {
        echo "Loading vmlinuz"
        set root='hd0'
        linux /vmlinuz console=ttyAMA0 ro debug
        devicetree /dtb
}
In a real system the kernel and dtb would be provided by the kernel packages and grub.cfg would be generated by update-grub. Now that we have all the bits we need, copy them into the root of mmc.img. Since we are using a FAT-formatted image we can use mcopy from the mtools package.
$ mcopy -v -o -n -i mmc.img core.img dtb vmlinuz grub.cfg ::
Finally after all that we can run qemu passing it our u-boot binary and the mmc and flash images and requesting a Cortex-A9 based Versatile Express system with 1GB of RAM:
$ qemu-system-arm -M vexpress-a9 -kernel u-boot -m 1024m -sd mmc.img \
    -nographic -pflash pflash0.img -pflash pflash1.img
Then at the VExpress# prompt we can configure the default bootcmd to load grub and save the environment to the flash images. The backslash escapes (\$ and \;) should be included as written here, so that the variables are only evaluated when bootcmd is run rather than immediately when it is set, and so that the bootm becomes part of bootcmd instead of being executed immediately:
VExpress# setenv bootcmd fatload mmc 0:0 \${loadaddr} core.img \; bootm \${loadaddr}
VExpress# saveenv
Now whenever we boot the system it will automatically load grub from the MMC and launch it. Grub in turn will load the Linux binary and DTB and launch those. I haven't actually configured Linux with a root filesystem here, so it will eventually panic after failing to find root. Future work The most pressing issue is the hard-coded load address built into the grub kernel image. This is something which needs to be discussed with the upstream grub maintainers as well as the Debian package maintainers. Now that the ARM packages have hit Debian (in experimental in the 2.02~beta2-1 package) I also plan to start looking at debian-installer integration as well as updating flash-kernel to set up the chain-loading of grub instead of loading a kernel directly.

10 July 2012

Gunnar Wolf: From DebCamp to DebConf through cheese, wine and an intro track

One week. One long week. One beautiful week. One of the two major weeks of the year has passed since my previous post. Surely, we are in the middle of the two Major Weeks of the year, in the yearly schedule I have upheld for almost(!) ten years: DebConf+DebCamp. Yesterday, DebConf officially started. For the first time ever, we had a DebConf track targeted at the local communities (for a wide definition of local: all of the Central American countries), which I chaired. We had the following talk lineup during this track: I believe it was a great success, and I hope the talks are useful in the future. They will be put online soon thanks to the tireless work of our video team. Today we sadly lost the presence of our DPL due to very happy circumstances he will surely announce himself. But DebConf will continue nevertheless - and proof of that is our annual, great, fun and inviting Cheese and Wine Party!

After a series of organizational hiccups I hope nobody notices (oops, was I supposed not to say this?), today we had a beautiful, fun and most successful cheese and wine party, as we have had year after year since 2005. As many other people, we made our humble contribution towards this party being the success it deserves to be. There were lots of great cheeses, great wines, and much other great stuff, for which we have to thank each of the individuals who made this C&W party the success it was. Yes, it might be among the least academic parts of our conference, but at the same time, it's one of its most cherished - and successful - traditions. And above all else, we have to thank our Great Leader^W^WCheeseMaster (whom we still need to convince to play by our Great Leader's mandates - and no, I don't mean Zack here!). Hugs and thanks to my good and dear friend Christian Perrier for giving form to one of DebConf's social traditions, one that makes it so unique, so different from every other academic or community conference I have ever been part of.

We still have most of the week to go. And if you are not in Managua (and are not coming soon), you can follow our activities via our video streams. Remember, debian/rules, now more than ever! And even given the (perpetual) heat in Managua: Wheezy is frozen, whee! [ all photos here taken by regina ]

3 June 2012

Johannes Schauer: cross-compilable and bootstrappable Debian

When packaging software for Debian, there exist two important assumptions:
  1. Compilation is done natively
  2. Potentially all of Debian is available at compile time
Both assumptions make the life of a package maintainer much easier and they do not create any problem unless you are one of the unlucky few who want to run Debian on an architecture for which it does not yet exist. You will then have to use other distributions like OpenEmbedded or Gentoo, which you compiled (or retrieved otherwise) for that new architecture, to hack on a core of Debian source packages until they build a minimal Debian system that you can chroot into and continue natively building the rest of it. But even if you manage to get that far you will continue to be plagued by cyclic build and runtime dependencies. So you start to hack source packages so that they drop some dependencies and you can break enough cycles to advance step by step. The Debian ports page lists 24 ports of Debian, so despite its unpleasant nature, porting is not a rare undertaking. The process as laid out above has a number of drawbacks: If Debian provided a set of core packages that are cross-compilable and which suffice for a minimal foreign build system, and if it also had enough source packages with a reduced build dependency set so that all dependency cycles can be broken, building Debian for a yet unknown architecture could be mostly automated. The benefits would be: With three of this year's GSoC projects, this dream seems to come within reach. There is the "Multiarch Cross-Toolchains" project by Thibaut Girka, mentored by Hector Oron and Marcin Juszkiewicz. Cross-compiling toolchains need packages from the foreign architecture to be installed alongside the native libraries. Cross-compiler packages have been available through the emdebian repositories but were always more of a hack. With multiarch, it is now possible to install packages from multiple architectures at once, so that cross-compilation toolchains can be realized in a proper manner and can therefore also enter the main archives. Besides creating multiarch-enabled toolchains, he will also be responsible for making them build on the Debian buildd system, as cross-architecture dependencies are not yet supported there. There is also the "Bootstrappable Debian" project by Patrick "P. J." McDermott, mentored by Wookey and Jonathan Austin. He will make a small set of source packages multiarch cross-compilable (using cross-compilers provided by Thibaut Girka) and add a Build-Depends-StageN header to critical packages so that they can be built with reduced build dependencies for breaking dependency cycles (see the sketch after the list below). He will also patch tools as necessary to recognize the new control header. And then there is my project: "Port bootstrap build-ordering tool" (Application). It is mentored by Wookey and Pietro Abate. In contrast to the other two, my output will be more on the meta-level as I will not modify any actual Debian package or patch Debian tools with more functionality. Instead the goal of this project is threefold:
  1. find the minimal set of source packages that have to be cross compiled
  2. help the user to find packages that are good candidates for breaking build dependency cycles through added staged build dependencies or by making them cross-compilable
  3. develop a tool that takes the information about packages that can be cross compiled or have staged build dependencies to output an ordering with which packages must be built to go from nothing to a full archive
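To make the staged-build idea concrete, here is a sketch of what such a header might look like in a debian/control file; the exact field name and semantics were still being designed at the time, and the package names are invented:

Source: libfoo
Build-Depends: debhelper, libbar-dev, python-dev, default-jdk
# Stage1: drop the optional binding toolchains so libfoo can be built
# before python and java exist on the new architecture
Build-Depends-Stage1: debhelper, libbar-dev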
More on that project in my follow-up post.

6 March 2011

Neil Williams: Checking build-dependencies

I've been nagged for a while (by Wookey mainly) about the unhelpfulness of dpkg-checkbuilddeps when it outputs the versions and alternatives alongside the names of the missing packages, making it harder to pass the list to the package manager. apt-get build-dep isn't particularly helpful either - it doesn't look at the modified package at all.
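To illustrate the problem (the output below is approximated, with invented package names and versions):

$ dpkg-checkbuilddeps
dpkg-checkbuilddeps: Unmet build dependencies: debhelper (>= 7) libfoo-dev (>= 1.2) | libbar-dev

The version constraints and the "|" alternative mean this line cannot simply be handed to the package manager as-is.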

Of course, once the output is in a suitable format, a new script might as well make it possible to simply pass that output to said package manager. Once it can do that, it can then pass the output to the cross-dependency tools, like xapt.

So, I've refactored the embuilddeps script in the xapt package to do just this.

It's gained a few features in the process:

  1. Support for Build-Conflicts resolution

  2. Support for virtual packages, swapping the virtual for the first package to provide it (borrowed some code from Dpkg::Deps for that one).

  3. Support for Build-Depends alternatives (currently using the buildd default of "first alternative gets first chance")

  4. Reads data from debian/control, not the apt cache - to help with the package you're building instead of the one you've already uploaded.

  5. Handles cross dependencies (which are always assumed to not currently be installed) and native dependencies. This support is transitory until such time as enough packages are Multi-Arch compatible that Cross-Multi-Arch becomes trivial.

  6. Support for being used as a build-dependency resolver in pbuilder, including cross-architecture dependencies with pdebuild-cross.

  7. Can locate a debian/control file in a specified directory without needing to be called from that directory

  8. Checks your apt-cache policy to see if the required version of a package is available from your current apt sources. Fails completely if not. (The pdebuild-cross usage will need that to be extended a touch to look at the apt-cache policy from within the chroot.)

  9. Hector Oron has also been asking me to get embuilddeps working with sbuild, so I'm working on that feature too.

  10. verbose and quiet support (so use -q inside other scripts; see the usage sketch after this list)

  11. most output is already translated - more translations are welcome, especially for the documentation, but hold on until this version has actually been uploaded.
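
As a usage illustration (a hypothetical invocation; of the options shown, only -q is mentioned above):

$ embuilddeps        # resolve the build-dependencies of the package in the current directory
$ embuilddeps -q     # quiet mode, suitable for calling from other scripts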



More testing is needed, particularly to confirm that the extensive refactoring hasn't broken the pbuilder resolution support, and then to look at what still needs to be done for sbuild support. The new script is in Emdebian SVN.

The first-choice method for virtuals and alternatives may well bear extension to explicit management via command-line options. I'm unsure yet whether it needs to be a configuration file setting. It could simply be a recursive - try one, move on - model.

27 September 2010

Steve McIntyre: Armel buildds and porter box hosted at ARM

One of the nice things that I've been involved with since starting to work at ARM in Cambridge is setting up newer, faster machines to help with the armel port. We have six machines hosted in the machine room here now: arne

All of these machines are Marvell DB-78x00-BP development boards, each configured with a 1GHz Feroceon processor (ARM v5t), 1.5GB of RAM and a 250GB drive attached via SATA. They're nice machines, reasonably powerful, yet (as with many ARM-based machines) they draw very, very little electrical power even when working hard. These very boards were used for a while by the folks at Canonical to help build the Ubuntu armel port, but now we've got them.

In terms of configuration, these machines are not quite fully supported in Debian yet, though. The kernels we're using are locally built, based on the Debian linux-source-2.6.32 package but with a .config (marvell.config) that's tweaked slightly to add support for these boards. There aren't any source changes needed, so I'm hoping to get support added directly in Debian, either as a new kernel flavour or (preferred) as a patch to an existing flavour. I've had conflicting advice about whether the latter is possible, so I'm going to have to experiment and find out for myself. UPDATE 2010-09-28: I've tested, and it seems that the boards will need a new flavour after all, as the config is incompatible with the closest other config (kirkwood). Ah well...

I had no end of trouble trying to get make-kpkg to do the right thing, so on advice from Ben I built the kernel using "make deb-pkg", a standard target in the Linux kernel's build system:

fakeroot make -j2 deb-pkg DEBEMAIL=93sam@debian.org DEBFULLNAME="Steve McIntyre" KDEB_PKGVERSION=buildd23

Annoyingly, that wouldn't work when cross-compiling either, so I had to build the kernel natively. To make the resulting kernel image package install properly (and, just as importantly, allow for future easy upgrades for the DSA folks), I also needed the following tweaks to the Debian system: Finally, I've tweaked the uboot config on the machines to use the uImage and uInitrd files that are generated:
Marvell>> setenv IDE ide reset
Marvell>> setenv loadkernel ext2load ide 0:1 0x2000000 /uImage
Marvell>> setenv loadinitramfs ext2load ide 0:1 0x3000000 /uInitrd
Marvell>> setenv bootboth bootm 0x2000000 0x3000000
Marvell>> setenv bootcmd setenv bootargs \$\(bootargs\)\;$(IDE)\;$(loadkernel)\;$(loadinitramfs)\;$(bootboth)
Marvell>> saveenv
And that's it, as far as I can see. I'll now wait for people to tell me what I've got wrong above... :-)

28 March 2010

Clint Adams: DPL Campaign Questions 010

Hector Oron:
Secondly, I was wondering how Debian could make it easier for people to contribute than other (derivative and non-derivative) distributions do. I came up with a really nice draft HOWTO[1] which I would like to bring to your attention, so the basic question would be: what do you think you could do to make contribution paths easier (taking into account that not all teams and groups follow such community guidelines)?
Human beings are not computers, so I find applying HOWTOs to them misguided at best. As for enforcing a set of arbitrary community guidelines, I would not do that. Frank Lin PIAT:
planet.d.o has become one of the most visible media for Debian, if not the most visible one. Do you think it is a good thing?
I consider it neutral. If people start thinking of blogs or microblogging as important though, I think we might have a problem. I am in favor of our informal policy to never post anything important on blogs unless it is merely a copy of something communicated properly on an official channel. This includes anything that would go into our packages. Consequently I do not think it a valuable goal to respect copyright law by enforcing DFSG-free licenses on blogs, whether they are aggregated or not. Kumar Appaiah:
One of the questions which I've not yet seen exactly in the discussions is on the transparency of the maintenance of non-core but "important" packages, such as python, wherein the maintenance of the package and policy (till a short while ago) has been poor at best, and we've had near-zero communication from the maintainer(s) for over a year. This has led some parts of the "community" (Debian Python, in this case) to knock on the doors of the tech-ctte[1] (recommended reading, unless you have done so already).
I think it a failure of the entire community that the situation has even come to this point. In other analogous situations, the package would have been hijacked by now and the situation resolved. I think it would be appropriate for the DPL either to step in and mediate situations like this before they get anywhere near the tech-ctte, or to delegate someone to do the same. I think it would be equally appropriate for the people involved to resolve this themselves. Of course all of this should be open.

12 January 2010

John Goerzen: Sing to Me, Muse

(a review of Homer's Iliad)
Here, therefore, huge and mighty warrior though you be, here shall you die. - Homer (The Iliad)
And with that formidable quote, I begin my review of The Iliad. I shall not exhaust you with a rehashing of the plot; that you can find on Wikipedia. Nor shall I spend page upon page analyzing the beautiful imagery, the implications for our understanding of fate and destiny, or all the other things that compel English majors to write page after page on the topic. Nor even shall I try to decipher whether it is a piece of history or a piece of legend. Rather, I intend to talk about why I read it: it's a really good story.
Sing to me, O goddess Muse, the wrath of Achilles, son of Peleus, which brought countless ills upon the Achaeans.
As I'd be reading it, I'd think to myself: Ah ha! Now here we finally have a section of the story that doesn't speak to modern life. Perhaps it was a section about the fear of being enslaved or butchered by conquerors, or about pride leading both sides to fight a war they need not have, or a graphic description of a spear going clear through someone's head or neck. And then, I'd have to sit and think. Are we really so far removed from that? Here we are, in a supposedly civilized world. We use remotely-operated drones to drop bombs on people, and think it naught but regrettable collateral damage when the brains and intestines of dozens of innocent victims are scattered about, killing them, all for the chance of killing one enemy. But we aren't the only ones to blame; we live in a world in which killing the innocent is often the goal. People fly airplanes into skyscrapers, drop atomic weapons to flatten entire cities, and kill noncombatants in terrible numbers. And for what? A little power, some prestige, some riches, some vengeance, some wounded pride? Slavery is not dead, either. We that can happily afford Internet access most likely abhor the thought. What, though, can we make of the fact that people are starving in this world in record numbers? That our actions may literally wipe some nations off the map? Are those people as free as we are? Do our actions repress them? I suspect you can't read the Iliad without being at least a bit introspective.
Fear, O Achilles, the wrath of heaven; think on your own father and have compassion upon me, who am the more pitiable
One of the great things about reading the Iliad is that I can learn about the culture of the ancient civilizations. Now, of course, one can read about this in history books and Wikipedia. But reading some dry sentence is different from reading a powerful story written by and for them. I know that the Iliad isn't exactly a work of history, but it sure shows what made people tick: their religion, their morals, their behavior, and what they valued. I feel that I finally have some sort of insight into their society, and I am glad of it.
The day that robs a child of his parents severs him from his own kind; his head is bowed, his cheeks are wet with tears, and he will go about destitute among the friends of his father, plucking one by the cloak and another by the shirt. Some one or other of these may so far pity him as to hold the cup for a moment towards him and let him moisten his lips, but he must not drink enough to wet the roof of his mouth; then one whose parents are alive will drive him from the table with blows and angry words.
There are gory battle scenes in the Iliad, but then there are also heart-wrenching tender moments: when Hector leaves his wife to go fight, for instance, and worries about the future of his child if he should die. That's not to say I found the entire story riveting. I would have liked it to be 1/3 shorter. The battle just dragged on and on. And yet, I will grant that there was some purpose served by that: it fatigued me as a reader, which perhaps gave me a small sense of the fatigue felt by the participants in the story.
Why, pray, must the Argives needs fight the Trojans? What made the son of Atreus gather the host and bring them?
A question that was never answered, for either side: why are we so foolish that we must go to war? One tragedy among many is that neither side got smart about it until way too late, if ever they did at all. I read The Iliad in the Butler translation, which overall I liked. Some of these quotes, however, use the Lattimore one. This completes the first item on my 2010 reading list. I leave you with this powerful hope for the future, written almost three thousand years ago. Despite my criticisms above, we have made a lot of progress, haven't we? What a beautiful ideal it is, and what a long ways we have yet to walk.
I wish that strife would vanish away from among gods and mortals, and gall, which makes a man grow angry for all his great mind, that gall of anger that swarms like smoke inside of a man's heart and becomes a thing sweeter to him by far than the dripping of honey.
Sing to me, O goddess, the anger of Achilles, that one day his story may speak less to our hearts, for then will we have outgrown it.

6 January 2010

John Goerzen: It's Time (or, how the ancient Greeks interfered with American sanitation)

Had you been on our yard tonight, you would have seen a strange sight. Me, dressed warmly (but not quite warmly enough), jumping on the icy and snowy hood of my pickup. Some maniacal fit? An obscure part of my 3-day-long taking-out-the-trash routine? Not exactly. Before I tell you why I was doing such a strange thing - and almost in the dark - I feel compelled to take a detour through ancient Greek poetry. You heard right. I've been reading ancient Greek poetry lately. Those Greeks were forever running around receiving signs and portents from the gods. If ever a bird flew by carrying a live snake, and you saw the snake bite the bird while looking over your left shoulder, you knew Zeus was telling you not to attack the enemy that day. Or something. And if you, fool that you are, attacked the enemy that day anyway, you should be not in the least surprised to find a spear through your shoulder in just a few pages' time. If ever someone received a sign from Zeus in modern times, surely it was I, tonight. I say that because, as I was trying to close the hood on my pickup after charging its battery, the hood nearly snapped in two. The hood is supported by some spring-loaded brackets in the back, so to close it, you have to push down on the front. I did so, and was surprised that it was going down so easily. Then I realized, apparently in the nick of time, that only the front was going down. The back was staying put, aiming up in the air as ever. So I tried to straighten the hood back out. I then tried to get it down by pulling at it halfway back, but just couldn't get enough leverage to make it budge. So what next? Why, jump on it of course. I got it most of the way closed that way, then jumped/slid down, latched it in front, then crawled back up there to push down the sort of rusty mountaintop that had formed halfway down the hood where it was buckling. OK, sorted. So to recap: by this point, I've had the battery charging for 24 hours, inflated the tire, removed the bungee cord around the brake pedal, removed the battery charger, coerced the hood shut. Time to start it. I hop in, pump the accelerator a few times, and turn the key to start. I heard the sickening aaaaaaaaRRRRRRRRRRRRuuuuumphhh of a weak battery. So it wasn't charged enough to crank fast enough to have any hope of starting in winter. So I go get the car, pull it up close, being careful to leave myself enough room to maneuver between it and the electric fence. I get out the jumper cables. Then it occurs to me: I have to open the hood again. DOH! I open it just a bit, and hook up the jumper cables. It turns over a bit faster. For just a bit. I get a small sputtering of life from it, but it fades faster than an enemy of Hector at the battle for Troy. So I have Terah come out and put her foot on the accelerator of the car while I try to start the pickup. This too proves futile. So I disconnect the jumper cables, close the pickup's hood with significantly less care than a 31-year-old rusty and nearly-broken hood deserves, roll the window back up, and go for plan B: 4 trips to the road with stuff in my car trunk, and leave the giant cardboard boxes that are covered with snow in the pickup bed for next month. Obviously I have displeased the gods. Forgotten to offer up a sacrifice to Penzoil, feared goddess of internal combustion, perhaps? Or perhaps it was just a sign that I need a new pickup? I'm so convinced of the latter that I didn't even bother to replace the bungee cord.
Postscript: Terah walked in and said, "Ah, I thought I heard you writing a blog post." "Yep. It's about pickups and the Greek gods." <very slight pause> "Oh, of course it is."

31 December 2009

Debian News: New Debian Developers (December 2009)

The following developers got their Debian accounts in the last month: Congratulations!

8 April 2009

Brett Parker: Another day, another fab evening!

So, I was supposed to end up in Hector's House playing pool with various people that I know, but all of 'em were out last night (erm, half of 'em with me) and didn't quite manage to make it... So after an hour of waiting for people to turn up I gave up and considered going back to my "local" (the Hop Poles) to have a swift beer or three... instead, it being a Wednesday, and knowing that Band Aids was on at the Pav Tav, I wandered there... managed to get there before the first band played - they were Vier and were absolutely bloody fantastic. Stayed and watched/danced to the other 3 bands and all was well with the world. Now it's time for the sleep and the getting everything else ready for the big move on Friday (yay for working Good Friday... or something!)

4 January 2009

Neil Williams: Installing Emdebian Grip with D-I

A few experiments have led to a partial route to pre-seeding Emdebian Grip from a standard Debian Lenny ISO. A few issues arise:



Overall, it is possible to install Emdebian Grip - with a little help - using Debian Installer. A few more changes should make it possible to select a standard Emdebian Grip installation without X and an Emdebian Grip XFCE desktop.
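As a sketch of the kind of pre-seeding involved - the repository URL and keys here are my assumptions, not taken from the post - an extra apt source for Grip could be preseeded like this:

# hypothetical preseed fragment adding an Emdebian Grip apt source
d-i apt-setup/local0/repository string http://www.emdebian.org/grip lenny main
d-i apt-setup/local0/comment string Emdebian Grip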

Remaining issues include:


A bespoke config package can be arranged for this purpose - it won't be compliant with Debian Policy, at least as Policy stands for Lenny.

Lots more to do, but Lenny is still in progress so there's time to get a usable installation before FOSDEM.

15 June 2008

Neil Williams: Emdebian cross-buildd running for ARM

http://www.emdebian.org/buildd/

The scripts are in emdebian-tools 1.1.6 and with a few more fixes and tweaks, that will be the main body of v1.2.0 for Debian unstable.

There are a few issues with ssh and sudo passphrase caching versus running buildd scripts as root - I'll need to improve the documentation of the process before any other buildds can be set up. The need to have emdebian-tools and subversion installed inside the buildd chroot makes things a little more complicated than a standard wanna-build implementation - as does the need to download, build, install and clean up cross dependencies.

This autobuilder is the Emdebian target package buildd - Hector Oron has another to build the cross-building toolchains. The toolchain autobuilder has fewer packages, but each package is larger and is built for more architectures. Adding more architectures to the target autobuilder is quite difficult - packages that use cache files will need new cache files with different values for certain cache variables. See the wiki for info on how to fix cache files. Each autobuilder solves a different problem within the overall design of any buildd. In each case, instead of listening to email from incoming.debian.org, each buildd picks up the incoming data from the apt cache - a useful little time delay that gives me time to head off certain problematic updates.

The target package autobuilder runs incrementally - a package is repeatedly rebuilt until it works and is then left alone until the next Debian release. There's lots of work available on the frontend if anyone fancies helping with the PHP. (See Emdebian SVN).

The advantage now is that as more packages fix their cross-building bugs, more packages will autobuild without intervention, which frees up more of my time to fix the more complicated problems. Equally, I can refer to the buildd logs in bug reports, helping everyone see what is going on.

5 June 2007

Martin-Éric Racine: Good news for those who care about Debian

Via Hector:
Debian, Ubuntu and DCCA joint efforts for improving laptop support and for Linux kernel and X11 packaging on Debian-derivative software distributions.
This kind of group effort is precisely what makes the Free Software development model so great and, as this story demonstrates eloquently, it applies remarkably well to corporate software development, even among vendors that are competing for the same market segment. Awesome, isn't it?
