Search Results: "mafm"

1 November 2022

Jonathan Dowland: Halloween playlist 2022

I hope you had a nice Halloween! I've collected some songs that I've enjoyed over the last couple of years that loosely fit a theme: ambient, instrumental, experimental, industrial, dark, disconcerting, etc. I've prepared a Spotify playlist of most of them, but not all. The list is inline below as well, with many (but not all) tracks linking to Bandcamp, if I could find them there. This is a bit late, sorry. If anyone listens to something here and has any feedback I'd love to hear it. (If you are reading this on an aggregation site, it's possible the embeds won't work. If so, click through to my main site.)

Spotify playlist: https://open.spotify.com/playlist/3bEvEguRnf9U1RFrNbv5fk?si=9084cbf78c364ac8

The list, with Bandcamp embeds where possible:

Some sources
  1. Via Stuart Maconie's Freak Zone
  2. Via Mary Anne Hobbs
  3. Via Lose yourself with
  4. Soma FM - Doomed (Halloween Special)

22 April 2017

Manuel A. Fernandez Montecelo: Debian GNU/Linux port for RISC-V 64-bit (riscv64)

This is a post describing my involvement with the Debian GNU/Linux port for RISC-V (unofficial and not endorsed by Debian at the moment) and announcing the availability of the repository (still very much WIP) with packages built for this architecture. If you are not interested in the story and just want to check the repository, jump to the bottom.

Roots

A while ago, mostly during 2014, I was involved in the Debian port for OpenRISC (or1k), about which I posted (by coincidence) exactly 2 years ago. The two of us working on the port stopped in August or September of that year, after learning that the copyright of the code adding support for this architecture in GCC would not be assigned to the FSF, so it would never be added to GCC upstream unless the original authors changed their mind (which they didn't) or there was a clean-room reimplementation (which hasn't happened so far). But a few other key things contributed to the decision to stop working on that port, and they bear a direct relationship to this story. One thing that particularly influenced me to stop working on it was a sense of lack of purpose, all things considered, for the OpenRISC port that we were working on. For example, these chips are sometimes used as part of bigger devices by Samsung to control or wake up other chips; but it was not clear whether there would ever be devices with OpenRISC as the main chip, especially devices powerful enough to run Linux or similar kernels, and Debian on top. One can use FPGAs to synthesise OpenRISC or1k, but these are slow, and expensive when using lots of memory. Without prospects of having hardware easily available to users, there's not much point in having a whole Debian port ready to run on hardware that never comes to be. Yeah, sure, it's fun to create such a port, but it's tons of work to maintain and keep up to date forever, and with close to zero users it's very unrewarding.
Another thing that contributed to the decision to stop is that, at least in my opinion, 32-bit was not future-proof enough for general-purpose computing, especially for new devices and ports starting to take off at that time. There was some incipient work to create another OpenRISC design for 64 bits, but it was still in an early phase. My secret hope and ultimate goal was to be able to run as free(-ish) a computer as possible as my main system. Still today many people are buying and using 32-bit devices, like small boards; but very few use them as their main computers, or as servers for demanding workloads or complex services. So for me, even if feasible for someone very austere and dedicated, OpenRISC or1k failed that test. And lastly, another thing happened at the time...

Enter RISC-V

In August 2014, at the point when we were fully acknowledging the problem of upstreaming (or rather, the lack thereof) the support for OpenRISC in GCC, RISC-V was announced to the world, bringing along papers with suggestive titles such as Instruction Sets Should Be Free: The Case For RISC-V (pdf) and articles like RISC-V: An Open Standard for SoCs - The case for an open ISA in EE Times. RISC-V (like the previous RISC-N designs) had been designed (or rather, was being designed, because it was and is an as-yet-unfinished standard) by people from UC Berkeley, including David Patterson, the pioneer of RISC computer designs and co-author of the seminal book Computer Architecture: A Quantitative Approach. Other very capable people are also leading the project, doing the design and legwork to make it happen; see the list of contributors. But, apart from the big names, the project has many other merits.
Similarly to OpenRISC, RISC-V is an open instruction set architecture (ISA), but with the advantage of being designed in more recent times (thus avoiding some mistakes and optimising for problems discovered more recently, as technology evolves); with more resources; with support for register widths of 32, 64 and even 128 bits; with a clean set of standard but optional extensions (atomics, float, double, quad, vector, ...); and with reserved space to add new extensions in ordered and compatible ways. In the view of some people in the OpenRISC community, this unexpected development in a way made the ongoing 64-bit update of OpenRISC irrelevant, and from what I know, for better or worse, all efforts on that front stopped. Also interesting (if nothing else, for my purposes of running as free a system as possible) was that the people behind RISC-V had strong intentions to make it useful for creating modern hardware, and were pushing heavily towards that from the beginning. And together with this announcement, or soon after, came the promise of free-ish hardware in the form of the lowRISC project. Although still today it seems to be a long way from actually shipping hardware, at least there was some prospect of getting it some day. On top of all that, regarding the freedom aspect, both the Berkeley and lowRISC teams engaged with the free hardware community from very early on, including attending and presenting at OpenRISC events; and lowRISC intended to have as much free hardware as possible in their planned SoC.

Starting to work on the Debian port, this time for RISC-V

So in late 2014 I slowly started to prepare the Debian port, switching on and off. The Userland spec had been frozen in 2014, just before the announcement; the Privileged one is still not frozen today, so there was no need to rush.
There were plans to upstream the support for RISC-V in the toolchain (GNU binutils, glibc and GCC; Linux and other useful software like QEMU) in 2015, but sadly these did not materialise until late 2016 and 2017. One of the main reasons for the delay was the slowness in sorting out the copyright assignment of the code to the FSF (again). Still today, only binutils and GCC are upstreamed; Linux and glibc depend on the Privileged spec being finished, so it will take a while. After the experience with OpenRISC and the support in GCC, I didn't want to invest too much time, lest it all become another dead end due to lack of upstreaming. So I was just cross-compiling here and there, testing QEMU (which still today is very limited for this architecture, e.g. no network support and very limited character and block devices), trying to find and report bugs in the implementations, and sending patches (although I did not contribute much in the end).

Incompatible changes in the toolchain

In terms of compiling packages and building up a repository, things were complicated, and less mature and stable than the OpenRISC ones were even back in 2014. In theory, with the Userland spec frozen, regular programs (below the operating-system level) compiled at any time should still run today; but in practice there were several rounds of profound, or at least disruptive, changes in the toolchain before and while being upstreamed, which made the binary packages that I had built not work at all (changes in the dynamic loader, in the registers where arguments are stored when calling functions, etc.). These major breakages happened several times, and somewhat unexpectedly, at least for people not heavily involved in the implementation.
When the different pieces are upstreamed it is expected that these breakages won't happen; but there's still at least the fundamental bit of glibc, which will probably change things once again in incompatible ways before or while being upstreamed. Outside Debian but within the FOSS / Linux world, the main project that I know of is a port that some people from Fedora started in mid-2016. They made great advances, but from what I know they put the project in the freezer in late 2016 until all such problems are resolved; they don't want to spend time rebootstrapping again and again.

What happened recently on the Debian front

In early 2016 I created the page for RISC-V in the Debian wiki, expecting that things would at last be fully stable and the important bits of the toolchain upstreamed during that year. I was too optimistic. Some other people (including Debian folks) have been contributing for a while, in the wiki, mailing lists and IRC channels, and in the RISC-V project mailing lists you will see their names everywhere. However, the combination of lack of hardware, software not yet upstreamed, and shortcomings of emulators (chiefly QEMU) makes contributions hard and very tedious, so nothing much visible to the outside world happened recently in terms of software.

The private repository-in-the-making

In late 2015 and the beginning of 2016, having some free time on my hands and expecting that all things would coalesce quickly, I started to build a repository of binary packages in a more systematic way, with most of the basic software that one can expect in a basic Debian system (including things common to all Linux systems, and also specific Debian software like dpkg or apt, and even aptitude!). After that I also built many others outside the basic system (more than 1000 source packages and 2000 or 3000 arch-dependent binary packages in total), especially popular libraries (e.g.
boost, gtk+ versions 2 and 3), interpreters (several versions of lua, perl and python, both versions 2 and 3) and, in general, packages that are needed to build many other packages (like doxygen). Unfortunately, some of the most interesting packages do not compile cleanly (due more to obscure or silly errors than to proper porting issues), so they are not included at the moment. I intentionally avoided trying to compile the thousands of packages in the archive which would be of no use to anybody at this point; but many more could be compiled without much effort. As for the how: initially I started cross-compiling and using rebootstrap, which was of great help in the beginning. Some of the packages that I cross-compiled had bugs that I did not know how to debug without a live and native (within emulators) system, so I tried to switch to natively built packages very early on. For that I needed many packages built natively (like doxygen or cmake), which would have been unnecessary had I kept cross-compiling (the host tools would be used in that case). But this also forced me to eat my own dog food which, even if much slower and more tedious, was on the whole a more instructive experience; and above all, it helped to test and make sure that the tools and the whole stack were working well enough to build hundreds of packages.

Why the repository-in-the-making was private

Until now I did not attempt to make the repository available on-line, for several reasons. First, because it would be kind of useless to publish files that were not working, or would soon stop working due to the incompatible changes in the toolchain, rendering many or most of the packages built useless. And because, for many months now, I have expected things to stabilise really soon now, but this hasn't happened yet.
Second, because of lack of resources and time since mid-2016, and because I got some packages to compile only thanks to (mostly small and unimportant, but undocumented and unsaved) hacks, often working around temporary bugs and thus not worth sending upstream; and I couldn't share the binaries without sharing the full source, to fulfil licenses like the GNU GPL. I did a new round of clean rebuilds in the last few weeks, just finished; the result is close to 1000 arch-dependent packages. And third, because of lack of demand. This changed in the last few weeks, when other people started to ask me to share the results even if incomplete or not working properly (I had one request in the past, but couldn't oblige in time).

Finally, the work so far: repository now on-line

So finally, with great help from Kurt Keville of MIT, and with Bytemark sponsoring a machine where most of the packages were built, here we have the repository. The lines for /etc/apt/sources.list are:
 deb [ arch=riscv64 signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main
 deb-src [ signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main
The repository is signed with my key as Debian Developer, contained in the file /usr/share/keyrings/debian-keyring.gpg, which is part of the package debian-keyring (available from Debian and derivatives). WARNING!! This repository is very much WIP, incomplete (some package dependencies cannot be fulfilled, and it covers only a small percentage of the Debian archive, not trying to be comprehensive at the moment) and probably does not work at all on your system at this point. The combination of all these shortcomings is especially unfortunate, because without glibc provided it will be difficult to get the binaries to run at all; but some packages that are arch-dependent yet not too tied to libc or the dynamic loader will not be affected. At least you can try one of the few static packages present in Debian, like the one in the package bash-static. Once moving parts like the dynamic loader and libc are removed, and since the basic machine instructions have been stable for several years now, it should work; but I wouldn't rule out some dark magic that prevents even static binaries from working. Still, I hope that the repository in its current state is useful to some people, at least for those who requested it. If one has the environment set up, it's easy to unpack the contents of the .deb files and try out the software (which often is not trivial, or is very slow to compile, or needs lots of dependencies to be built first). ... and finally, even if not useful at all for most people at the moment, by doing this I also hope that efforts like this spark your interest to contribute to free software, free hardware, or both! :-)
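A side note on unpacking those .deb files: the usual tool is dpkg-deb -x, which extracts a package's contents without installing it. The container format itself is just a Unix ar archive, simple enough to inspect by hand; the sketch below is a toy illustration of that format (not how dpkg works internally, and the filename in the final comment is a hypothetical placeholder):

```python
# A .deb is an "ar" archive holding debian-binary, control.tar.* and
# data.tar.*. This toy parser lists the members of such an archive.

def deb_members(data: bytes):
    """Yield (name, payload) for each member of an ar archive."""
    if data[:8] != b"!<arch>\n":
        raise ValueError("not an ar archive")
    off = 8
    while off + 60 <= len(data):
        hdr = data[off:off + 60]              # fixed-size 60-byte member header
        name = hdr[0:16].decode("ascii").rstrip(" /")
        size = int(hdr[48:58])                # decimal size field
        yield name, data[off + 60:off + 60 + size]
        off += 60 + size + (size & 1)         # members are 2-byte aligned

# Hypothetical use:
#   names = [n for n, _ in deb_members(open("some-package.deb", "rb").read())]
```

For real use, `dpkg-deb -x pkg.deb somedir` remains the right tool; the point here is only that there is nothing magic about the container.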

8 February 2017

Manuel A. Fernandez Montecelo: FOSDEM 2017: People, RISC-V and ChaosKey

This year, for the first time, I attended FOSDEM. There I met...

People

... including: I met new people in: ... from the first hour to the last hour of my stay in Brussels. In summary, lots of people around. I had also hoped to meet or spend some (more) time with a few people, but in the end I didn't catch them, or could not spend as much time with them as I would have wished. For somebody like me who enjoys quiet time by himself, it was a bit too intensive in terms of interacting with people. But overall it was a nice winter break, definitely worth attending, and an even much better experience than I had expected.

Talks / Events

Of course, I also attended a few talks, some of which were very interesting; although the event is so (sometimes uncomfortably) crowded that the rooms were full more often than not, in which case it was not possible to enter (the doors were closed) or there were very long queues. And with so many talks crammed into a weekend, I had so many schedule clashes with the talks that I had pre-selected as interesting that I ended up missing most of them. In terms of technical stuff, I especially enjoyed the talk by Arun Thomas, RISC-V -- Open Hardware for Your Open Source Software, and some conversations related to toolchain and other upstream matters, as well as on the Debian port for RISC-V. The talk Resurrecting dinosaurs, what can possibly go wrong? -- How Containerised Applications could eat our users, by Richard Brown, was also very good.

ChaosKey

Apart from that, I witnessed a shady cash transaction in exchange for hardware, in a bus from the city centre to FOSDEM, not very unlike what I had read about only days before. So I could not help but get involved in a subsequent transaction myself, to lay my hands on a ChaosKey.

18 June 2016

Manuel A. Fernandez Montecelo: More work on aptitude

The last few months have been a bit of a crazy period of ups and downs, with a tempest of events beneath the apparent and deceptively calm surface waters of being unemployed (still at it).

The daily grind

Chief activities are, of course, those related to the daily grind of job-hunting, sending applications, and preparing and attending interviews. It is demoralising when one searches for many days or weeks without seeing anything suitable for one's skills or interests, or other more general life expectations. And it takes a lot of time and effort to put one's best into the applications for positions that one is really, really interested in; and even for the ones which are meh, for a variety of reasons (e.g. one is not very suitable for what the offer demands). After that, not being invited to interviews (or doing very badly in them) is bad, of course, but quick and not very painful. A swift, merciful end to the process. But it's all the more draining to wait for many weeks, if not a few months, with the uncertainty of not knowing if one is going to be lucky enough to be summoned for an interview; harbouring some hope (one has to appear enthusiastic in the interviews, after all) while trying to keep it contained lest it grow too much; then, in the interview, hearing good words and some praise, getting the impression that one will fit in, that one did nicely and that chances are good, letting the hope grow again; starting to think about the life changes that the job will require, to be able to make a quick decision should the offer finally arrive; perhaps making some choices and compromises based on the uncertain result; then waiting for a week or two after the interview to know the result... ... only to end up being unsuccessful. All the effort and hopes finally get squashed with a cold, short email or automatic response, or more often than not complete radio silence from prospective employers, as an end to a multi-month-long process.
An emotional roller coaster [1], which happened to me several times in the last few months.

All in a day's work

The months of preparing and waiting for a new job often imply an impasse that puts many other things that one cares about on hold, and one makes plans that will never come to pass. All in a day's (half-year's?) work for an unemployed poor soul. But not all is bad. This period was also a busy time: making some plans about life, mid- and long-term; the usual (and some really unusual!) family events; visits to and from friends, old and new; attending nice little local Debian gatherings or the bigger gathering of Debian SunCamp2016; and other work for side projects or for other events that will happen soon... And amidst all that, I managed to get some work done on aptitude.

Two pictures worth (less than) a thousand bugs

To be precise, worth 709 bugs: 488 in the first graph, plus 221 in the second. On 2015-11-15 (link to the post Work on aptitude): aptitude BTS Graph, 2015-11-15 On 2016-06-18: aptitude BTS Graph, 2016-06-18

Numbers

The BTS numbers for aptitude right now are:

Highlights

Beyond graphs and stats, I am especially happy about two achievements in the last year:
  1. To have aptitude working today, first and foremost. Apart from the abandonment it suffered in previous years, I mean specifically the critical step of getting it through the troubles of last summer, with the GCC-5/C++11 transition in parallel with a transition of the Boost library (explained in more detail in Work on aptitude). Without that, aptitude would possibly not have survived until today.
  2. Improvements to the suggestions of the resolver. In version 0.8 there were a lot of changes related to improving the order of the suggestions from the resolver, when it finds conflicts or other problems with the planned actions. Historically, but especially in the last few years, there have been many complaints about nonsensical or dangerous suggestions from the resolver. The first solution offered by the resolver was very often regarded as highly undesirable (for example, removal of many packages), with preferable solutions like upgrades of one or only a handful of packages being offered only after many removals, and keeps only as a last resort.
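The ordering idea can be illustrated with a toy cost model. This is only a sketch of the concept, not aptitude's actual resolver code, and the penalty values are invented for the example: if each action type carries a penalty, keeps and single upgrades naturally sort ahead of mass removals.

```python
# Toy illustration of ranking dependency-resolver solutions by a cost
# function; actions and penalty weights are invented, not aptitude's.
PENALTY = {"keep": 1, "upgrade": 2, "install": 3, "remove": 10}

def cost(solution):
    """A solution is a list of (action, package) pairs; lower cost is offered first."""
    return sum(PENALTY[action] for action, _pkg in solution)

solutions = [
    [("remove", "libfoo"), ("remove", "foo-utils"), ("remove", "foo-doc")],  # drastic
    [("upgrade", "libfoo")],                                                 # mild
    [("keep", "foo-utils")],                                                 # mildest
]
for s in sorted(solutions, key=cost):
    print(cost(s), s)
```

With this scheme the three-removal solution scores 30 and sinks to the bottom, while the single keep (cost 1) and single upgrade (cost 2) surface first, which is the behaviour users were asking for.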
Perhaps these changes don't get a lot of attention, given that in the first case it's just that things keep working (with few people realising that it could have collapsed on the spot if left unattended), and the second can probably go unnoticed, because "it just works" or "it started to work more smoothly" doesn't get as much immediate attention as "it suddenly broke!". Still, I wanted to mention them, because I am quite proud of those.

Thanks

Even if I put a lot of work into aptitude in the last year, the results in the graphs and numbers have not been achieved solely by me. Special thanks go to Axel Beckert (abe / XTaran) and the apt team, David Kalnischkies and Julian Andres Klode, who, despite the claim on that page, does not mostly work on python-apt anymore... but also on the main tools. They helped by fixing some of the issues directly, changing things in apt that benefit aptitude, testing changes, triaging bugs or commenting on them, patiently explaining to me why something in libapt doesn't do what I think it does, and being good company in general. Not least, for holding impromptu BTS group therapy / support meetings, for those cases when prolonged exposure to BTS activity starts to induce very bad feelings. Thanks also to the people who sent translation updates, notified me of corrections, sent or tested patches, submitted bugs, or tried to help in other ways. See the change logs for details.

Notes

[1] ^ It's even an example on the Cambridge Dictionaries Online website, for the entry roller coaster:
He was on an emotional roller coaster for a while when he lost his job.

1 February 2016

Stein Magnus Jodal: January contributions

The following is a short summary of my open source work in January, just like I did back in November and December.

Debian

Mopidy
  • PR #1381: Made lookup() ignore tracks without URI.
  • Updated docs for Raspberry Pi installation to match Raspbian jessie.
  • Rewrote docs for running Mopidy as a service, more focused on systemd than Debian specifics to also cater for Arch users.
  • PR #1397: Added missing MPD volume command.
  • Merged a bunch of contributed fixes and released Mopidy 1.1.2.
  • Updated all extensions hosted under the Mopidy GitHub organization with either the name of the primary maintainer or a call for a new maintainer. The extensions in need of a new maintainer are: If you're a user of any of these and want to contribute, please step up. Instructions can be found in the README of any of these projects.
  • The feature/gst1 branch is complete as far as I know. There are no known regressions from Mopidy 1.1.2. PR #1419 is hopefully the last iteration of the pull request and GStreamer 1.x support will land in Mopidy 1.2.
  • Wrapping up the 1.2 release is now the focus. We might want to include this in Debian/Ubuntu before the Ubuntu 16.04 import freeze on February 18, depending on feedback over the next week or two.
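The lookup() change from PR #1381 above boils down to filtering out results that lack a URI before returning them. A minimal sketch of that idea follows; the Track type and field names here are simplified stand-ins for illustration, not Mopidy's actual model classes:

```python
from collections import namedtuple

# Simplified stand-in for a track model; the real Mopidy Track class differs.
Track = namedtuple("Track", ["uri", "name"])

def lookup(tracks):
    """Return only tracks that actually carry a URI, dropping the rest."""
    return [t for t in tracks if t.uri]

results = lookup([
    Track("local:track:a.flac", "A"),
    Track(None, "B"),   # no URI: dropped
    Track("", "C"),     # empty URI: dropped
])
print([t.name for t in results])
```

The design point is that downstream consumers (e.g. an MPD frontend) can then rely on every returned track being addressable by URI.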

Comics
  • Added one new crawler.
  • Released comics 2.4.2.
  • Merged one new crawler and a crawler update.

15 November 2015

Manuel A. Fernandez Montecelo: Work on aptitude

Midsummer for me is also known as Noite do Lume Novo (literally 'New Fire Night'), one of the big calendar events of the year, marking the end of the school year and the beginning of summer. On this day there are celebrations not very unlike the bonfires on Guy Fawkes Night in England or Britain [1]. It is a bit different in that it is not a single event for the masses, more of a friends-and-neighbours thing, and in that it lasts for a big chunk of the night (sometimes until morning). Perhaps for some people, or outside bigger towns or cities, Guy Fawkes Night is also celebrated in that way, and that's why during the first days of November there are fireworks rocketing and cracking in the neighbourhoods all around. Like many other celebrations around the world involving bonfires, many of them also happening around the summer solstice, it is supposed to be a time of renewal of cycles, purification and keeping the evil spirits away, with rituals to that effect, like jumping over the fire when the flames are not high and it is safe enough. So it was fitting that, in the middle of June (almost Midsummer in the northern hemisphere), I learnt that I was about to leave my now-previous job, which is a pretty big signal of and precursor to renewal (and it might have something to do with purifying and keeping the evil away as well ;-) ). Whatever... But what does all of this have to do with aptitude or Debian, anyway? For one, it was a question of timing. While looking for a new job (and I am still at it), I had more spare time than usual. DebConf 15 @ Heidelberg was within sight, and for the first time circumstances allowed me to attend this event. It also coincided with the time when I re-gained access to commit to aptitude, on the 19th of June. Which means Renewal. The end of June was also the time of the announcement of the colossal GCC-5/C++11 ABI transition in Debian, scheduled to start on the 1st of August, just before DebConf.
Between 2 and 3 thousand source packages in Debian were affected by this transition, which a few months later is not yet finished (although the most important parts were completed by mid-to-late September). aptitude itself is written in C++, and depends on several libraries written in C++, like Boost, Xapian and SigC++. All of them had to be compiled with the new C++11 ABI of GCC-5, in unison and in a particular order, for aptitude to continue to work (and for minimal breakage). aptitude and some dependencies did not even compile straight away, so this transition meant that aptitude needed attention just to keep working. Having recently been awarded the Aptitude Hat again, attending DebConf for the first time and sailing towards the Transition Maelstrom, it was a clear sign that Something Had to Be Done (to avoid the sideways looks and consequent shame at DebConf, if nothing else). Happily (or a bit unhappily for me, but let's pretend...), with the unexpected free time on my hands, I changed the plans that I had before re-gaining the Aptitude Hat (some of them involving Debian, but in other ways; maybe I will post about that soon). In July I worked to fix the problems before the transition started, so aptitude would be (mostly) ready, or in the worst case broken only for a few days, while the chain of dependencies was rebuilt. But apart from the changes needed for the new GCC-5, it was decided at the last minute that Boost 1.55 would not be rebuilt with the new ABI, and that the only version with the new ABI would be 1.58 (which caused further breakage in aptitude, was added to experimental only a few days before, and was moved to unstable after the transition had started). Later, in the first days of the transition, aptitude was affected for a few days by breakage in the dependencies, due to their not being compiled in sequence according to the transition levels (so with a mix of old and new ABI).
With the critical intervention of Axel Beckert (abe / XTaran), things were not as bad as they could have been. He was busy testing and uploading in the critical days when I was enjoying a small holiday on my way to DebConf, with minimal internet access and communicating almost exclusively with him; and he promptly tended to the complaints arriving in the Bug Tracking System and asked for rebuilds of the dependencies with the new ABI. He also brought the packaging up to shape, which had decayed a bit in the last few years.

Gruesome Challenges

But not all was solved yet; more storms were brewing and started to appear on the horizon, in the form of clouds of fire coming from nearby realms. The APT Deities, who had long ago spilled out their secret, inner challenge (just the initial paragraphs), were relentless. Moreover, they were present at Heidelberg in full force, in or close to their home grounds, and they were Marching Decidedly towards Victory: apt BTS Graph, 2015-11-15 In the talk @ DebConf This APT has Super Cow Powers (video available), by David Kalnischkies, they told us about the niceties of apt 1.1 (still in experimental but hopefully coming to unstable soon), and they boasted about taking the lead in our arms race (should I say bugs race?) by a few open bug reports. This act of provocation further escalated the tensions. The fierce competition which had been going on for some time reached new heights. So much so that the APT Deities and our team had to sit together in the outdoor areas of the venue and have many a weissbier together, while discussing and fixing bugs. But beneath the calm on the surface, and while pretending to keep up good diplomatic relations, I knew that Something Had to Be Done, again. So I could only do one thing: jump over the bonfire and Keep the Evil Away, be that Keep Evil Bugs Away or Keep Evil APT Deities Away from winning the challenge, or both.
After returning from DebConf I continued to dedicate time to the project, more than a full-time job in some weeks, and this is what happened in the last few months, summarised in another graph showing the evolution of the BTS for aptitude: aptitude BTS Graph, 2015-11-15 The numbers for apt right now (15th November 2015) are: The numbers for aptitude right now are:

The Aftermath

As we can see, for the time being I could keep the Evil at bay, both in terms of bugs themselves and in re-gaining the lead in the bugs race; the Evil APT Deities were thwarted again in their efforts. ... More seriously, as most of you suspected, the graph above is not the whole truth, so I don't want to boast too much. A big part of the reduction in the number of bugs comes from merging duplicates, closing obsolete bugs, applying translations coming from multiple contributors, or simple fixes like typos and useful suggestions needing minor changes. Many of the remaining problems are comparatively more difficult or time-consuming than the ones addressed so far (except perhaps avoiding the immediate breakage of the transition, which took weeks to solve), and there are many important problems still there, chief among them aptitude offering very poor solutions to resolve conflicts. Still, even the simplest of the changes takes effort, and triaging hundreds of bugs is not fun at all and is mostly a thankless effort, although there is the occasional kind soul that thanks you for handling a decade-old bug. If being subjected to the rigours of the BTS and reading and solving hundreds of bug reports is not Purification, I don't know what is.
Apart from the triaging, there were 118 bugs closed (or pending) due to changes made in the upstream part or the packaging in the last few months, and there are many changes that are not reflected in bugs closed (like most of the changes needed due to the C++11 ABI transition, bugs and problems fixed that had no report, and general rejuvenation or improvement of some parts of the code). How long this will last, I cannot know. I hope to find a job at some point, which obviously will reduce the time available to work on this. But in the meantime, for all aptitude users: Enjoy the fixes and new features!

Notes

[1] ^ Some visitors to the recent mini-DebConf @ Cambridge perhaps thought that the fireworks and throngs gathered were in honour of our mighty Universal Operating System, but sadly they were not. They might be, some day. In any case, the reports say that the visitors enjoyed the fireworks.

21 April 2015

Manuel A. Fernandez Montecelo: About the Debian GNU/Linux port for OpenRISC or1k

In my previous post I mentioned my involvement with the OpenRISC or1k port. It was the technical activity in which I spent the most time during 2014 (Debian and otherwise, day job aside). I thought that it would be nice to talk a bit about the port for people who don't know about it, and give an update for those who do know and care. So this post explains a bit how it came to be, details about its development, and finally the current status. It is going to be written as a rather personal account, for that matter, since I did not get involved enough in the OpenRISC community at large to learn much about its internal workings and the aspects that I was not directly involved with. There is not much information about all of this elsewhere, only bits and pieces scattered here and there, but especially not much public information at all about the development of the Debian port. There is an OpenRISC entry in the Debian wiki, but it does not contain much information yet. Hopefully, this piece will help a bit to preserve history and give some insight to future porters. First Things First I imagine that most people reading this post will be familiar with the terminology, but just in case: to create a new Debian port means to get a Debian system (GNU/Linux variant, in this case) to run on the OpenRISC or1k computer architecture. Setting to one side all differences between hardware and software, and as described on their site:
The aim of the OpenRISC project is to create free and open source computing platforms
It is therefore a good match for the purposes of Debian and the Free Software world in general. The processor has not been produced in silicon, or at least is not available to the masses. People with the necessary know-how can download the hardware description (Verilog) and synthesise it in an FPGA, or otherwise use simulators. It is not some piece of hardware that people can purchase yet, and there are no plans to mass-produce it in the near future either. The two people (including me) involved in this Debian port did not have the hardware, so we created the port entirely through cross-compiling from other architectures, and then compiling inside Qemu. In a sense, we were creating a Debian port for hardware that "does not [physically] exist". The software that we built was tested by people who had the hardware available in FPGA, though, so it was at least usable. I understand that people working on the arm64 port had to work similarly in the initial phases, working in the dark without access to real hardware to compile or test. The Spark The first time that I heard about the initiative to create the port was in late February of 2014, in a post which appeared in Linux Weekly News (sent by Paul Wise) and Slashdot. The original post announcing it was actually from late January, from Christian Svensson (blueCmd):
Some people know that I've been working on porting Glibc and doing some toolchain work. My evil master plan was to make a Debian port, and today I'm a happy hacker indeed! Below is a link to a screencast of me installing Debian for OpenRISC, installing python2.7 via apt-get (which you shouldn't do in or1ksim, it takes ages! (but it works!)) and running a small Python script. http://asciinema.org/a/7362
So, now, what can a Debian Hacker do when reading this? (Even if one's Hackery Level is not that high, as is my case.) And well, How Hard Can It Be? I mean, Really? Well, in my own defence, I knew that the answer to the last two questions would be a resounding Very. But for some reason the idea grabbed me and I couldn't help but think that it would be a Really Exciting Project, and that somehow I would like to get involved. So I wrote to Christian offering my help after considering it for a few days, around mid March, and he welcomed me aboard. The Ball Was Already Rolling Christian had already been in contact with the people behind DebianBootstrap, and he had already created the repository http://openrisc.debian.net/ with many packages of the base system and beyond (read: packages name_version_or1k.deb available to download and install). Even now the packages are not signed with proper keys, though, so use your judgement if you want to try them. After a few weeks, I got up to speed with the status of the project and got my system working with the necessary tools. This meant basically sbuild/schroot to compile new packages, with the base system that Christian had already got working installed in a chroot, probably with the help of debootstrap, and qemu-system-or1k to simulate the system. Only a few of the packages were different from the versions in Debian, like gcc, binutils or glibc -- their or1k support had not been upstreamed yet. sbuild ran through qemu-system-or1k, so the compilation of new packages could happen "natively" (running inside Qemu) rather than cross-compiling the packages, pulling _or1k.deb packages for dependencies from the repository that he had prepared, and _all.deb packages from snapshots.debian.org. I started by trying to get the packages that I [co-]maintain in Debian compiled for this architecture, creating the corresponding _or1k.deb. For most of them, though, I needed many dependencies compiled before I could even compile my packages. 
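For readers unfamiliar with the tooling, the workflow described above looks roughly like the following sketch. All names here are illustrative (the chroot path, package name and exact invocations are my assumptions, not a record of the actual setup); the flags themselves are the standard debootstrap/sbuild ones:

```
# 1. Seed an or1k base system in a chroot; --foreign does only the
#    first stage, since the host CPU cannot run or1k binaries directly:
sudo debootstrap --arch=or1k --foreign unstable /srv/chroot/or1k-sid

# 2. Boot the chroot under the or1k system emulator and finish the
#    bootstrap from inside it:
#      (inside qemu-system-or1k)  /debootstrap/debootstrap --second-stage

# 3. Build packages "natively" with sbuild against that chroot:
sbuild --arch=or1k --dist=unstable mypackage_1.0-1.dsc
```

The point of building natively under emulation, rather than cross-compiling everything, is that packages behave as they would on real hardware, at the cost of emulation speed.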
The GNU autotools / autoreconf Problem From very early on, many packages failed to build with messages such as:
Invalid configuration 'or1k-linux-gnu': machine 'or1k' not recognized
configure: error: /bin/bash ../config.sub or1k-linux-gnu failed
This means that software packages based on GNU autotools and using configure scripts need recent versions of the files config.sub and config.guess, which they ship in their root directory, to be able to detect the architecture and generate the code accordingly. This is counter-intuitive, taking into account that GNU autotools were designed to help with portability. Yet, in the case of creating new Debian ports, it meant that unless upstream shipped very recent versions of config.{guess,sub}, the package would not compile straight away on the new architectures -- even though invoking gcc without ado would have worked without problems in most cases for native compilation. Of course this did not only affect or1k, and there was already the autoreconf effort underway as a way to update these files automatically when building Debian packages, pushed by people porting Debian to the new architectures added in 2013/2014 (mips64el, arm64, ppc64el), which had encountered the same roadblock. This affected around a thousand source packages in unstable. A Royal Pain. Also, all of their reverse dependencies (packages that depended on those to be built) could not be compiled straight away. The bugs were not Release Critical, though (none of these architectures were officially accepted at the time), so for people not concerned with the new ports there was no big incentive to get them fixed. This problem, which conceptually is easily solvable, prevented new ports from even attempting to compile vast portions of the archive straight away (cleanly, without modifications to the package or to the host system), for weeks or months. The GNU autotools / autoreconf Solution We tackled this problem mainly in two ways. The first, more useful for Debian in general, was to do as other porters were doing and submit bug reports and patches to Debian packages requesting them to use autoreconf, and to NMU packages (uploading changes to the archive without the official maintainers' intervention). 
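For packages using the dh sequencer, the fix being requested was small: enabling the autoreconf add-on (from the dh-autoreconf package) regenerates ./configure together with config.{guess,sub} at build time, so the new architecture is recognised without patching upstream files. A minimal debian/rules of this form (the general shape, not any specific package's file):

```make
#!/usr/bin/make -f
# Minimal debian/rules using the dh sequencer with the autoreconf
# add-on, which refreshes config.{guess,sub} and regenerates the
# configure script before the build.
%:
	dh $@ --with autoreconf
```

(In later debhelper versions, compat level 10 and up, autoreconf became the default and the explicit `--with autoreconf` is no longer needed.)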
A few NMUs were made for packages which had had bug reports with patches available for a while, that were in the critical path to get many other packages compiled, and that were orphaned or had almost no maintainer activity. The people working on the other new ports, and mainly Ubuntu people who helped with some of those ports and wanted to support them, had submitted a large number of requests since late 2013, so there was no shortage of NMUs to be made. Some porters, not being Debian Developers, could not easily get the changes applied; so I also helped a bit the porters of other architectures, especially later on before the freeze of Jessie, to get as many packages compiled on those architectures as possible. The second way was to create dpkg-buildpackage hooks that unconditionally updated config.{guess,sub} before attempting to build the package on the local build system. This local and temporary solution allowed us in the or1k port to get many _or1k.deb packages into the experimental repository, which in turn would allow many more packages to compile. After a few weeks, I set up many sbuilds on a multi-core machine, continuously attempting to build packages that had not been built before and whose dependencies were available. Every now and then (typically several times per day in peak times) I pushed the resulting _or1k.deb files to the repository, so more packages would have the necessary dependencies ready to attempt to build. Christian was doing something similar, and by April, at peak times, between the two of us we were compiling more than a hundred packages on some days -- a huge number of packages did not need any change other than up-to-date config.{guess,sub} files. At some point in late April, Christian set up wanna-build on a few hosts to do this more elegantly and smartly than my method, and more effectively as well. 
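The core of such a hook can be sketched as a small helper like the one below. The function name is hypothetical; the idea is simply that Debian's autotools-dev package ships up-to-date copies of both files under /usr/share/misc, and the hook copies them over every stale copy in the unpacked source tree before the build starts:

```shell
# refresh_config_files SRC REF -- hypothetical helper, the core of the
# dpkg-buildpackage hook described above: overwrite every config.guess
# and config.sub found under the unpacked source tree SRC with the
# up-to-date copies in REF (on a Debian system, /usr/share/misc, as
# shipped by the autotools-dev package).
refresh_config_files() {
    src=$1
    ref=$2
    for f in config.guess config.sub; do
        # replace each stale copy in place, wherever it sits in the tree
        find "$src" -name "$f" -exec cp "$ref/$f" {} \;
    done
}
```

Running this unconditionally on the local build machine is exactly why the solution was "local and temporary": the source package itself is left unfixed, so the proper fix still had to land in the archive via autoreconf bugs and NMUs.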
Ugly Hacks, Bugs and Shortcomings in the Toolchain and Qemu Some packages are extremely important because many other packages need them to compile (like cmake, Qt or GTK+), and they are themselves very complex and have dependency loops. They had deeper problems than the autoreconf issue and needed some seriously dirty hacking to get them built. To try to get as many packages compiled as possible, we sometimes compiled these important packages with some functionality disabled, disabling some binary packages (e.g. Java bindings) or especially disabling documentation (using DEB_BUILD_OPTIONS=nodoc when possible, and more aggressively when needed by removing chunks of debian/rules). I tried to use the more aggressive methods in as few packages as possible, though, about a dozen in total. We also used DEB_BUILD_OPTIONS=nocheck to speed up compilation and avoid build failures -- many packages' tests failed due to qemu-system-or1k not supporting multi-threading, which we could do nothing about at the time, but otherwise the packages mostly passed their tests fine. Due to bugs and shortcomings in Qemu and the toolchain -- like the compiler lacking atomics, missing functionality in glibc, Qemu entering endless loops, or programs segfaulting (especially gettext, used by many packages, causing those packages to fail to build) -- we had to resort to some very creative ways or time-consuming dull work to edit debian/rules, or to create wrappers around the real programs avoiding or forcing certain options (like gcc -O0, since -O2 made buggy binaries too often). To avoid having a mix of cleanly compiled and hacked packages in the same repository, Christian set up a two-tiered repository system -- the clean one and the dirty one. In the dirty one we dumped all of the packages that we got built, no matter how. The packages in the clean one could use packages from the dirty one to build, but they themselves were compiled without any hackery. 
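The compiler-wrapper trick mentioned above can be sketched as follows. The function is a hypothetical reconstruction, not the actual wrapper we used: it rewrites the argument list so that any optimisation level becomes -O0, printing one argument per line; a real wrapper script placed ahead of gcc in $PATH would instead exec the real compiler with the rewritten list.

```shell
# rewrite_opt_flags -- hypothetical core of a gcc wrapper: emit the
# given compiler arguments with any optimisation level downgraded to
# -O0 (since -O2 produced buggy or1k binaries too often), one argument
# per line. A real wrapper would collect these and then
# `exec /path/to/real/gcc "$@"` with the rewritten list.
rewrite_opt_flags() {
    for a in "$@"; do
        case $a in
            -O[123s]|-Ofast) a=-O0 ;;  # downgrade any optimisation flag
        esac
        printf '%s\n' "$a"
    done
}
```

The same wrapper approach works for any misbehaving tool in the build path: intercept the command line, strip or force the problematic options, and delegate to the real program.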
Of course this was not a completely airtight solution, since packages in the clean repository could still contain code injected at build time from the dirty one (e.g. by static linking), and perhaps other quirks. We hoped to get rid of these problems later by rebuilding all packages against clean builds of all their dependencies. In addition, Christian also spent significant amounts of time working inside the OpenRISC community, debugging problems, testing, and recompiling special versions of the toolchain that we could use to advance our compilation of the whole archive. There were other people in the OpenRISC community implementing the necessary bits in the toolchain, but I don't know the details. Good Progress Olof Kindgren wrote the OpenRISC health report April 2014 (actually posted in May), explaining the status at the time of projects in the broad OpenRISC community, and talking about the software side, Debian port included. Sadly, I think that there have been no more "health reports" since then. There was also a new post on Slashdot entitled OpenRISC Gains Atomic Operations and Multicore Support shortly thereafter. On the side of the Debian port, from time to time new versions of packages entered unstable and we started to use those newer versions. Some of them had nice fixes, like the autoreconf updates, so they did not require local modifications. In other cases, the new versions failed to build when old ones had worked (e.g. because the newer versions added support for, and dependencies on, new versions of gnutls, systemd or other packages not yet available for or1k), and we had to repeat or create more nasty hacks to get the packages built again. But in general, progress was very good. There were about 10k arch-dependent packages in Debian at the time, and we got about half of them compiled by the beginning of May, counting clean and dirty. 
And, if I recall correctly, there were around the same number of arch=all packages (which can be installed on any architecture, after the package is built on one of them). Counting both, it meant that systems using or1k got about 15k packages available, or 75% of the whole Debian archive (at least "main"; we excluded "contrib" and "non-free"). Not bad. By the middle to end of May, we had about 6k arch-dependent packages compiled, and 4k to go. The count of packages eventually reached ~6.6k at its peak (I think in June/July). Many had been built with hacks and not rebuilt cleanly yet, but everything was fine until the number of packages built plateaued. Plateauing There were multiple reasons for that. One of them is that after we had fixed the autoreconf issue locally in some packages, new versions were uploaded to Debian without fixing that problem (in many cases there was no bug report or patch yet, so it was understandable; in other cases the requests were ignored). The wanna-build for the clean repository set up by Christian rightly considered the package out-of-date and prepared to build the more recent version, which failed. Then, other packages entering the unstable archive and build-depending on newer versions of those could not be built ("BD-Uninstallable") until we built the newer versions of the dependencies in the dirty repository with local hacks. Consequently, the count of cleanly built packages went back and forth, when not backwards. More challenging was the fact that when creating a new port, language compilers which are written in that same language need to be built for that architecture first. Sometimes it is not the compiler itself, but the compile-time or run-time support for modules of the language that is not ported yet. Obviously, as with other dependencies, large numbers of packages written in those languages are bound to remain uncompiled for a long time. 
As Colin Watson explained in porting Haskell's GHC to arm64 and ppc64el, untangling some of the chicken-and-egg problems of language compilers for new ports is extremely challenging. Perl and Python are pretty much a pre-requisite of the base Debian system, and Christian got them working early on. But, for example, in May, 247 packages depended on r-base-dev (GNU R) for building, and 736 on ghc, and we did not have these dependencies compiled. Just counting those two, 1k source packages of the remaining 4k to 5k to be compiled for the new architecture would have to wait for a long time. Then there was Java, Mono, etc... Even more worrying were the pending issues with the toolchain, like atomics in glibc, or make check failing for some packages in the clean repository built with wanna-build. Christian continued to work on the toolchain and liaise with the rest of the OpenRISC community, while I continued to request more changes to the Debian archive through a few requests to use autoreconf, and by pushing a few more NMUs. Though many requests were attended to, I soon got negative replies/reactions and backed off a bit. In the meantime, the porters of the other new architectures at the time were mostly submitting requests to support them and not NMUing much either. Upstreaming Things continued more or less in the same state until the end of the summer. The new ports needed as many packages built as possible before the evaluation of which official ports to accept (in early September, I think, with the final decision around the time of the freeze). Porters of the other new architectures (and maintainers, and other helpful Debian Developers) were by then more active in pushing for changes, especially the remaining autoreconf issues, many of which benefited or1k. As I said before, I also kept pushing NMUs now and then, especially during the summer, for packages which were not of immediate benefit for our port but helped the others (e.g. 
ppc64el needed updates to ltmain.sh for libtool which were not necessary for or1k, in addition to config.{guess,sub}). In parallel, in the or1k camp, there were patches that needed to be sent upstream, like for example Python's NumPy, which I submitted in May to the Debian package and upstream, and which was uploaded to Debian in September with a new upstream release. Similar paths were followed between May and September for packages such as jemalloc, ocaml, gstreamer0.10, libgc, mesa, X.org's cf module and cmake (patch created by Christian). In April, Christian had reached the amazing milestone of tracking down all of the contributors of the port of GNU binutils and getting them to assign copyright to the Free Software Foundation (FSF); all of the work was refreshed and upstreamed. In July or August, he started to gather information about the contributors of the GCC port, which had started more than a decade ago. After that, nothing much happened (from the outside) until the end of the year, when Christian sent a message about the status of upstreaming GCC to the OpenRISC community. There was only one person remaining to assign copyright to the FSF, but it was a blocker. In addition, there was the need to find one or more maintainers to liaise with upstream, review the patches, fix the remaining failures in the test suite and keep the port in good shape. A few months after that, and from what I could gather, the status remains the same. Current Status, and The Future? In terms of the Debian port, there have not been huge visible changes since the end of the summer, and not only because of the Jessie freeze. It seems that for this effort to keep going forward and be sustainable, sorting out the issues with GCC and Glibc is essential. That means having the toolchain completely pushed upstream and in good shape, and particularly completing the copyright assignment. 
Debian will not accept private forks of those essential packages without a very good reason, even in unofficially supported ports; and from the point of view of porters, working on the remaining not-yet-built packages with continuing problems in the toolchain is very frustrating and time-consuming. Other than that, there is already a significant amount of software available that could run on an or1k system, so I think that overall the project has achieved a significant amount of success. Granted, KDE and LibreOffice are not available yet, and neither are the tools depending on Haskell or Java. But a lot of software is available (including things high in the stack, like XFCE), and in many aspects it should provide a much more functional system than the one available in Linux (or other free software) systems in the late 1990s. If the blocking issues are sorted out in the near future, the effort needed to get a very functional port, on par with the unofficial Debian ports, should not be that big. In my opinion, and looking at the big picture, that is not bad at all for an architecture whose hardware implementation is not easy to come by, and for which the port was created almost solely with simulators. That it has been possible to get this far with such meagre resources is an amazing feat of Free Software, and of Debian in particular. As for the future, time will tell, as usual. I will try to keep you posted if there is any significant change in the future.

30 January 2015

Manuel A. Fernandez Montecelo: Hallo, Planet Debian!

Hi! I just created this blog and asked to get it aggregated on Planet Debian, so first things first -- in this initial post I wanted to say Hallooooo! to the people in the community and to talk about my work and interests in Debian. Probably most of you never interacted with or even heard of me before. I have been a contributor/Maintainer since ~2010 and a Developer for only ~2 years, and most of the things that I worked on or packages that I maintain are "low profile" or unintrusive, except perhaps for work on aptitude and the SDL libraries, if you happen to be interested in these packages or others that I [co-]maintain. More recently, during 2014, I was helping to bootstrap and bring to life a new architecture, OpenRISC or1k. Perhaps I should devote a future post to explaining a bit more about the history, status and news --or, lately, lack thereof-- about this port. This is one of the main reasons why I thought that it would be useful to have a blog -- to record activities for which it is difficult to get information by other means. Apart from or1k, as often happens with porting efforts in Debian, the work done also helped indirectly the other architectures added last year (mips64el, arm64, ppc64el). Additionally, a few of my NMUs were directly targeted at helping ppc64el to get ready in time for the architecture evaluation. None of the porters working on ppc64el were Debian Maintainers/Developers, and some of the patches that they created did not directly benefit the other ports, so for packages without active maintainers the requests were not getting much attention in the crucial time before the evaluation. In some cases, these packages were in the critical path to build many other packages or to support important use cases. So I was very pleased when I learnt that arm64 and ppc64el will be in the next stable release --Jessie-- as officially supported architectures. 
They came without much noise or ceremonious celebrations, but I think that this is a great success story for these architectures and for Debian, and even for the computing world in general. Time will tell. In the meantime, congratulations to all people involved! In addition to porting packages within Debian, I also sent some patches upstream to get the packages that I maintain compiling in the new architectures, and sent upstream patches needed to support or1k specifically (jemalloc, nspr, libgc, cmake, components of X.org...). Enough for the first post. Just to finish, let me say that after about a decade without a personal website or blog (the previous ones not about Debian or even computing), here I am again. Let's see how it goes and I hope to have enough interesting things to tell you to keep the blog alive. Or, in other words... To infinity and beyond!

1 February 2012

Dominique Dumont: SDL packaging team update: most SDL packages are now up-to-date

Hello Thanks to the effort of Felix Geyer (debfx) and Manuel Montecelo (mafm), most of the SDL packages in Debian unstable are now up-to-date. These guys did a really great job: most of the time, I could upload the packages they prepared without modification. These guys should become DDs, they have the skills. Ok, debfx has started the process (congrats) and mafm is still thinking about it. Anyway, here is the message for all those who gave up packaging new SDL games for Debian because of stale SDL packages: the SDL team is alive, the core SDL packages are now up-to-date, the wait is over. This is true also for Perl SDL games: yesterday I uploaded the latest Perl bindings to SDL. Unfortunately, this upload broke some games like frozen-bubble, pangzero and dizzy. But new versions compatible with the new SDL perl are available upstream, so these games won't be broken for long. All the best

3 October 2007

Ross Burton: Roku SoundBridge

Yesterday my new NAS arrived, to replace my aging and failing hacked Linkstation. As part of the bundle I also received a Roku SoundBridge, which was a nice surprise. Basically, it's a consumer-orientated device which plays music from iTunes or Internet radio, which you would plug into a hifi or powered speakers. I'd heard of these before but I've been using my old ThinkPad X22 for this duty for a while now, and MPD has served me well. I thought I'd give it a go, and I'm actually really impressed with it. Physically the SoundBridge is pretty good looking: a silver and black ten-inch cylinder about two inches in diameter, with a large LCD panel on the front. When turned on it found my wireless network, asked for the WEP key, and promptly upgraded its firmware. Once all that was done, it let me select from two libraries: Vicky's Music or Internet Radio. Vicky was running iTunes on her laptop which exports the library over DAAP, so I listened to Tori Amos whilst I explored the Internet Radio options. Then I listened to the most excellent Groove Salad on SomaFM (apparently the #4 station on the Roku Radio charts). At this point I discovered that there was a SoundBridge link in Epiphany: the SoundBridge uses mDNS to publish its web control panel, a useful application of clue from Roku. Then it just got better. The SoundBridge will stream from DAAP and UPnP servers (they pimp mt-daapd and SlimServer), and announces the web interface over mDNS and UPnP. There is a web site which indexes Internet radio streams; currently it has over 5000 entries. This site uses a Java applet (currently only tested on Windows though, I haven't installed Java yet) to talk to your SoundBridge so it can show the currently playing station and tell it to play another station. Then I discovered this in the manual.
Geeks - read this. The M-bridge has a command line interface that you can telnet to for piddling about. You will need to telnet to port 4444. Type "?" at the command prompt to see a list of commands. ... M-bridge has a built-in UPnP AV "media renderer". This protocol can be used to control the M-bridge from your own software.
The SoundBridge supports both a custom protocol (documented in a 200-page PDF) and the standard UPnP protocol for controlling it. They even documented the signals the remote control uses. This is probably one of the most hackable "consumer" devices I've seen for a long time, short of the N800. Well done Roku, you've created a damn neat product which actually does just work out of the box. NP: theJazz, Internet radio

4 April 2007

Romain Francoise: Internet radio in danger

SomaFM reports that a new royalty system will require them to pay approximately $1 million in fees in 2007, instead of a fixed 12% rate as before; this will probably drive them out of business. They're asking U.S. citizens to write to their local Congress representative. I often listen to their indie channel.

19 January 2007

Biella Coleman: Internet Radio Under Threat

Probably one of my favorite things about the Internet (and high speed connections) is Internet Radio. My favorite stations are: Space Station, Secret Agent and Tag’s Trance Trip on soma fm out of SF, John in the (weekday) morning on kexp out of Seattle and the The Terrordome and
Asiko Africa Phantom Pyramid
hosted by the simply amazing (really, check out his shows) Minister Faust out of Edmonton. But thanks to witless senators, all this awesomeness is under threat (as most of you probably already know). So support the effort to stop the shenanigans and let’s keep Internet Radio alive… And if you know of any other awesome radio shows or music, please do pass along. I am looking for a good salsa station, classical music and more dance music.

2 March 2006

Ross Burton: Another Tutorial

I just found Yet Another Devil's Pie Tutorial online. That puts the count of unofficial tutorials up to three... obviously people out there want to document Devil's Pie but everyone is doing it in their own corner of the world, so I've just added a skeleton structure to the Devil's Pie wiki page. If anyone out there has a passion for Devil's Pie and wants to start adding documentation for the configuration file format, feel free! NP: Groove Salad, SomaFM