Search Results: "mvo"

14 December 2016

Antoine Beaupré: Debian considering automated upgrades

The Debian project is looking at possibly making automatic minor upgrades to installed packages the default for newly installed systems. While Debian has a reliable and stable package update system that has been an inspiration for multiple operating systems (the venerable APT), upgrades are usually a manual process on Debian for most users. The proposal was brought up during the Debian Cloud sprint in November by longtime Debian Developer Steve McIntyre. The rationale was to make sure that users installing Debian in the cloud have a "secure" experience by default, by installing and configuring the unattended-upgrades package within the images. The unattended-upgrades package contains a Python program that automatically performs any pending upgrade and is designed to run unattended. It is roughly the equivalent of doing apt-get update; apt-get upgrade in a cron job, but has special code to handle error conditions, warn about reboots, and selectively upgrade packages. The package was originally written for Ubuntu by Michael Vogt, a longtime Debian developer and Canonical employee. Since there was a concern that Debian cloud images would be different from normal Debian installs, McIntyre suggested installing unattended-upgrades by default on all Debian installs, so that people have a consistent experience inside and outside of the cloud. The discussion that followed was interesting as it brought up key issues one would have when deploying automated upgrade tools, outlining both the benefits and downsides to such systems.
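The cron-like behaviour described above is conventionally switched on through APT's periodic options; a minimal sketch of the usual configuration file (the file name follows the common Debian/Ubuntu convention, values are illustrative):

```
# /etc/apt/apt.conf.d/20auto-upgrades -- minimal sketch
APT::Periodic::Update-Package-Lists "1";   // refresh package lists daily
APT::Periodic::Unattended-Upgrade "1";     // run unattended-upgrade daily
```

With both options set, the apt.systemd.daily (or cron) machinery will refresh the package lists and hand off pending security upgrades to unattended-upgrades.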

Problems with automated upgrades An issue raised in the following discussion is that automated upgrades may create unscheduled downtime for critical services. For example, certain sites may not be willing to tolerate a master MySQL server rebooting in conditions not controlled by the administrators. The consensus seems to be that experienced administrators will be able to solve this issue on their own, or are already doing so. For example, Noah Meyerhans, a Debian developer, argued that "any reasonably well managed production host is going to be driven by some kind of configuration management system" where competent administrators can override the defaults. Debian, for example, provides the policy-rc.d mechanism to disable service restarts on certain packages out of the box. unattended-upgrades also features a way to disable upgrades on specific packages that administrators would consider too sensitive to restart automatically and would want to schedule during maintenance windows. Reboots were another issue discussed: how and when to deploy kernel upgrades? Automating kernel upgrades may mean data loss if the reboot happens during a critical operation. On Debian systems, the kernel upgrade mechanisms already provide a /var/run/reboot-required flag file that tools can monitor to notify users of the required reboot. For example, some desktop environments will pop up a warning prompting users to reboot when the file exists. Debian doesn't currently feature an equivalent warning for command-line operation: Vogt suggested that the warning could be shown along with the usual /etc/motd announcement. The ideal solution here, of course, is reboot-less kernel upgrades, which is also known as "live patching" the kernel. Unfortunately, this area is still in development in the kernel (as was previously discussed here).
Canonical deployed the feature for the Ubuntu 16.04 LTS release, but Debian doesn't yet have such capability, since it requires extra infrastructure among other issues. Furthermore, system reboots are only one part of the problem. Currently, upgrading packages only replaces the code and restarts the primary service shipped with a given package. On library upgrades, however, dependent services may not necessarily notice and will keep running with older, possibly vulnerable, libraries. While libc6, in Debian, has special code to restart dependent services, other libraries like libssl do not notify dependent services that they need to restart to benefit from potentially critical security fixes. One solution to this is the needrestart package which inspects all running processes and restarts services as necessary. It also covers interpreted code, specifically Ruby, Python, and Perl. In my experience, however, it can take up to a minute to inspect all processes, which degrades the interactivity of the usually satisfying apt-get install process. Nevertheless, it seems like needrestart is a key component of a properly deployed automated upgrade system.
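The /var/run/reboot-required flag mentioned above is easy to act on from a cron job. Here is a small sketch; the helper name and message texts are mine, only the flag-file path comes from the article, and an administrator could mail the result to root or chain it with needrestart to pick up services still running old libraries:

```shell
#!/bin/sh
# check_reboot: report whether the kernel-upgrade flag file exists.
# Takes the flag path as an argument so it can be exercised against a
# temporary file; defaults to the path Debian tools use.
check_reboot() {
    flag="${1:-/var/run/reboot-required}"
    if [ -f "$flag" ]; then
        echo "reboot required"
    else
        echo "no reboot pending"
    fi
}
```

A nightly cron entry could then do something like `check_reboot | mail -s "$(hostname): upgrade status" root`.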

Benefits of automated upgrades One thing that was less discussed is the actual benefit of automating upgrades. It is merely described as "secure by default" by McIntyre in the proposal, but no one actually expanded on this much. For me, however, it is now obvious that any out-of-date system will be systematically attacked by automated probes and may be taken over to the detriment of the whole internet community, as we are seeing with Internet of Things devices. As Debian Developer Lars Wirzenius said:
The ecosystem-wide security benefits of having Debian systems keep up to date with security updates by default outweigh any inconvenience of having to tweak system configuration on hosts where the automatic updates are problematic.
One could compare automated upgrades with backups: if they are not automated, they do not exist and you will run into trouble without them. (Wirzenius, coincidentally, also works on the Obnam backup software.) Another benefit that may be less obvious is the acceleration of the feedback loop between developers and users: developers like to know quickly when an update creates a regression. Automation does create the risk of a bad update affecting more users, but this issue is already present, to a lesser extent, with manual updates. And the same solution applies: have a staging area for security upgrades, the same way updates to Debian stable are first proposed before shipping a point release. This doesn't have to be limited to stable security updates either: more adventurous users could follow rolling distributions like Debian testing or unstable with unattended upgrades as well, with all the risks and benefits that implies.

Possible non-issues That there was no backlash against the proposal surprised me: I expected the privacy-sensitive Debian community to react negatively to another "phone home" system as it did with the Django proposal. This, however, is different from a phone-home system: it merely leaks package lists, and one has to leak that information to get the updated packages anyway. Furthermore, privacy-sensitive administrators can use APT over Tor to fetch packages. In addition, the diversity of the mirror infrastructure makes it difficult for a single entity to profile users. Automated upgrades do imply a culture change, however: administrators approve changes only a posteriori, as opposed to deliberately deciding to upgrade the parts they chose. I remember a time when I had to maintain proprietary operating systems and was reluctant to enable automated upgrades: such changes could mean degraded functionality or additional spyware. However, this is the free-software world, and upgrades generally come with bug fixes and new features, not additional restrictions.
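For the record, fetching packages over Tor uses the apt-transport-tor package and a tor+http scheme in sources.list; a sketch (the scheme is real, but the onion address below is a placeholder, not a real mirror):

```
# /etc/apt/sources.list entry, assuming apt-transport-tor is installed
deb tor+http://exampleonionmirror.onion/debian jessie main
```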

Automating major upgrades? While automating minor upgrades is one part of the solution to the problem of security maintenance, the other is how to deal with major upgrades. Once a release becomes unsupported, security issues may come up and affect older software. While Debian LTS extends release lifetimes significantly, it merely delays the inevitable major upgrades. In the grand scheme of things, the lifetimes of Linux systems (Debian: 3-5 years, Ubuntu: 1-5 years) versus other operating systems (Solaris: 10-15 years, Windows: 10+ years) are fairly short, which makes major upgrades especially critical. While major upgrades are not currently automated in Debian, they are usually pretty simple: edit sources.list then:
    # apt-get update && apt-get dist-upgrade
But the actual upgrade process is really much more complex. If you run into problems with the above commands, you will quickly learn that you should have followed the release notes, a whopping 20,000-word, ten-section document that outlines all the gory details of the release. This is a real issue for large deployments and for users unfamiliar with the command line. The solution most administrators seem to use right now is to roll their own automated upgrade process. For example, the Debian.org system administrators have their own process for the "jessie" (8.0) upgrade. I have also written a specification for how major upgrades could be automated that attempts to take into account the wide variety of corner cases that occur during major upgrades, but it is currently at the design stage. This problem space is therefore generally unaddressed in Debian: Ubuntu does have a do-release-upgrade command, but it is Ubuntu-specific and would need significant changes in order to work on Debian.
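The "edit sources.list, then dist-upgrade" recipe can be sketched as a script, though any real implementation must also handle the corner cases the release notes describe (conffile prompts, third-party repositories, held packages). The helper name and release names here are illustrative, not an existing tool:

```shell
# switch_release: rewrite a release codename in a sources file.
# usage: switch_release /etc/apt/sources.list jessie stretch
switch_release() {
    sed -i "s/\b$2\b/$3/g" "$1"
}

# The remaining steps, shown as comments because they modify the system:
# switch_release /etc/apt/sources.list jessie stretch
# apt-get update
# DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
```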

Future work Ubuntu currently defaults to "no automation" but, on install, invites users to enable unattended-upgrades or Landscape, a proprietary system-management service from Canonical. According to Vogt, the company supports both projects equally as they differ in scope: unattended-upgrades just upgrades packages, while Landscape aims at maintaining thousands of machines and handles user management, release upgrades, statistics, and aggregation. It appears that Debian will enable unattended-upgrades by default on the images built for the cloud. For regular installs, the consensus that has emerged points at the Debian installer asking users whether they want to disable the feature. One reason why this was not enabled before is that unattended-upgrades had serious bugs in the past that made it less attractive. For example, it would simply fail to follow security updates, a major bug that was fortunately promptly fixed by the maintainer. In any case, it is important to distribute security and major upgrades to Debian machines in a timely manner. In my long experience of professionally administering Unix server farms, I have found upgrade work to be a critical but time-consuming part of the job. During that time, I successfully deployed an automated upgrade system going all the way back to Debian woody, using the simpler cron-apt. This approach is, unfortunately, a little brittle and non-standard, and it doesn't address the need for automating major upgrades, for which I had to resort to tools like cluster-ssh or more specialized configuration management tools like Puppet. I therefore encourage any effort toward improving that process for the whole community. More information about the configuration of unattended-upgrades can be found in the Ubuntu documentation or the Debian wiki.
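For reference, cron-apt is driven by a small configuration file; a sketch of the kind of setup described above (the key names are from the cron-apt documentation as I remember them, so treat them as assumptions):

```
# /etc/cron-apt/config -- sketch
MAILTO="root"
MAILON="upgrade"    # only send mail when something was actually upgraded
```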
Note: this article first appeared in the Linux Weekly News.

31 October 2016

Antoine Beaupré: My free software activities, October 2016

Debian Long Term Support (LTS) This is my 7th month working on Debian LTS, the project started by Raphael Hertzog at Freexian, and my first month back after a long pause during the summer. I have worked on the following packages and CVEs: I have also helped review work on the following packages:
  • imagemagick: reviewed BenH's work to figure out what was done. Unfortunately, I forgot to officially take on the package and Roberto started working on it in the meantime. I nevertheless took the time to review Roberto's work and outline possible issues with the originally suggested patchset
  • tiff: reviewed Raphael's work on the hairy TIFFTAG_* issues; all the gory details are in this email
The work on ImageMagick and GraphicsMagick was particularly intriguing. Looking at the source of those programs makes me wonder why we are still using them at all: it's a tangled mess of C code that is bound to bring up more and more vulnerabilities, time after time. It seems there's always a "Magick" vulnerability waiting to be fixed out there... I somehow hoped that the fork would bring more stability and reliability, but it seems they are suffering from similar issues because, fundamentally, they haven't rewritten ImageMagick... It looks like this is something that affects all image programs. The review I have done on the tiff suite gives me the same shivering sensation as reviewing the "Magick" code. It feels like all image libraries are poorly implemented and thus bound to be exploited somehow... Nevertheless, if I had to use a library of that sort in my software, I would stay away from the "Magick" forks and try something like imlib2 first... Finally, I also did some minor work on the user and developer LTS documentation and some triage work on samba, xen, and libass. I also looked at the dreaded CVE-2016-7117 vulnerability in the Linux kernel to verify its impact on wheezy users. I also looked at implementing a --lts flag for dch (see bug #762715). It was difficult to get back to work after such a long pause, but I am happy I was able to contribute a significant number of hours. It's a bit difficult to find work sometimes in LTS-land, even though there's actually always a lot of work to be done. For example, I used to be one of the people doing frontdesk work, but those duties are now assigned until the end of the year, so it's unlikely I will be doing any of that for the foreseeable future. Similarly, a lot of packages were already assigned when I started looking at the available packages.
There was an interesting discussion on the internal mailing list regarding unlocking package ownership, because some people had packages locked for weeks, sometimes months, without significant activity. Hopefully that situation will improve after that discussion. Another interesting discussion I participated in is the question of whether the LTS team should wait for unstable to be fixed before publishing fixes in oldstable. The consensus right now seems to be that it shouldn't be mandatory to fix issues in unstable before we fix security issues in oldstable and stable. After all, security support for testing and unstable is limited. And I was happy to learn that working on brand-new patches is part of our mandate in LTS work. I did work on such a patch for tar, which ended up being adopted by the original reporter, although upstream ended up implementing our recommendation in a better way. Coincidentally, it's the first time since I started working on LTS that I didn't get the number of requested hours, which means that there are more people working on LTS. That is a good thing, but I am worried it may also mean people are more spread out and less capable of focusing for longer periods of time on more difficult problems. It also means that the team is growing faster than the funding, which is unfortunate: now is as good a time as any to remind you to see if you can make your company fund the LTS project if you are still running Debian wheezy.

Other free software work It seems like forever since I last did such a report, and, what with my vacation, a lot has happened since the last one.

Monkeysign I have done extensive work on Monkeysign, trying to bring it kicking and screaming into the new world of GnuPG 2.1. This was the objective of the 2.1 release, which collected about two years of work and patches, including arbitrary MUA support (e.g. Thunderbird), config file support, and a release on PyPI. I have had to make about 4 more releases to try to fix the build chain, ship the test suite with the program, and add a primitive preferences panel. The 2.2 release also finally features Tor support! I am also happy to have moved more documentation to Read the Docs, part of which I mentioned in a previous article. The git repositories and issues were also moved to a GitLab instance, which will hopefully improve the collaboration workflow, although we still have issues streamlining the merge-request workflow. All in all, I am happy to be working on Monkeysign, but it has been a frustrating experience. In the last years, I have been maintaining the project largely on my own: although there are about 20 contributors to Monkeysign, I have made over 90% of the commits. New contributors recently showed up, and I hope this will relieve some of the pressure on me as the sole maintainer, but I am not sure how viable the project is.

Funding free software work More and more, I wonder how to sustain my contributions to free software. As a previous article has shown, I work a lot on the computer, even when I am not at a full-time job. Monkeysign has been a significant time drain in the last months, and I have done this work on a completely volunteer basis. I wouldn't mind so much except that it is far from the only work I do on a volunteer basis. This means that I sometimes must prioritize paid consulting work at the expense of those volunteer projects. While most of my paid work usually revolves around free software, the benefits of paid work are not always immediately obvious, as the primary objective is to deliver to the customer, and the community as a whole is somewhat of a side effect. I have watched with interest joeyh's adventures in crowdfunding, which seem to be working pretty well for him. Unfortunately, I cannot claim the incredible (and well-deserved) reputation Joey has, and even if I could, I can't live on $500 a month. I would love to hear whether people would be interested in funding my work in such a way. I am hesitant to launch a crowdfunding campaign because it is difficult to identify what exactly I am working on from one month to the next. Looking back at earlier reports shows that I am all over the place: one month I'll work on a Perl wiki (Ikiwiki), the next I'll be hacking on a multimedia home cinema (Kodi). I can hardly think of how to fund those things short of "just give me money to work on anything I feel like", which I can hardly ask of anyone. Even worse, it feels like the audience here is either friends or colleagues. It would make little sense for me to seek funding from those people: colleagues have the same funding problems I do, and I don't want to impoverish my friends... So far I have taken the approach of trying to get funding for work I am doing, bit by bit.
For example, I have recently been told that LWN actually pays for contributed articles and have started running articles by them before publishing them here. This is looking good: they will publish an article I wrote about the Omnia router I recently received. I give them exclusive rights on the article for two weeks, but I otherwise retain full ownership and will publish it here after the exclusive period. Hopefully, I will be able to find more such projects that pay for the work I do on a day-to-day basis.

OpenStreetMap editing I have ramped up my OpenStreetMap contributions, having (temporarily) moved to a different location. There are lots of things to map here: trails, gas stations, and lots of other things are missing from the map. Sometimes the effort looks a bit ridiculous, reminding me of my early days of editing OSM. I have registered for OSM Live, a project to fund OSM editors that, I must admit, doesn't help much with funding my work: for the hundreds of edits I did in October, I received the equivalent of $1.80 CAD in Bitcoin. This may be the lowest hourly salary I have ever received, probably working out to a rate of about 10¢ per hour! Still, it's interesting to be able to point people to the project if someone wants to contribute to OSM mappers. But mappers should have no illusions about getting a decent salary from this effort, I am sorry to say.

Bounties I feel this is similar to the "bounty" model used by the Borg project: I claimed around $80 USD in that project for what probably amounts to tens of hours of work, yet another salary that would qualify as "poor". Another example is a feature I would like to implement in Borg: support for protocols other than SSH. There is currently no bounty on this, but a similar feature, S3 support, has one of the largest bounties Borg has ever seen: $225 USD. And the claimant for the bounty hasn't actually implemented the feature: instead of backing up to S3, the patch (to a third-party tool) actually enables support for Amazon Cloud Drive, a completely different API. Even at $225, I wouldn't be able to complete any of those features and earn a decent salary. As well explained by the Snowdrift reviews, bounties just don't work at all... The ludicrous 10% fee charged by Bountysource made sure I would never do business with them again anyway.

Other work There are probably more things I did recently, but I am having difficulty keeping track of the last 5 months of on-and-off work, so you will forgive me for not being as exhaustive as I usually am.

30 November 2015

Michael Vogt: APT 1.1 released

After 1.5 years of work we released APT 1.1 this week! I'm very excited about this milestone. The new 1.1 has some nice new features, but it also improves a lot of stuff under the hood. With APT 1.0 we added a lot of UI improvements; this time the focus is on the reliability of the acquire system and the library. Some of the UI highlights include:
Under the hood:
What's also very nice is that apt is now the exact same version on Ubuntu and Debian (no more delta between the packages)! If you want to know more, there is a nice video of David Kalnischkies' DebConf 2015 talk about apt at https://summit.debconf.org/debconf15/meeting/216/this-apt-has-super-cow-powers/. Julian Andres Klode also wrote about the new apt some weeks ago here. The (impressive) full changelog is available at http://metadata.ftp-master.debian.org/changelogs/main/a/apt/apt_1.1.3_changelog. And git has an even more detailed log if you are even more curious :) Enjoy the new apt!

20 March 2015

Zlatan Todorić: My journey into Debian

Notice: There were several requests for me to elaborate more on my path to Debian and its impact on my life, so here it is. It's going to be a bit long, so anyone who isn't interested in my personal Debian journey should skip it. :) In 2007 I enrolled in the Faculty of Mechanical Engineering (at first in the Department of Industrial Management, later transferring to the Department of Mechatronics - this was possible because the first 3 semesters are the same for both departments). By the end of that same year I was finishing my tasks (consisting primarily of calculations, some small graphical designs, and write-ups) when a famous virus, called "RECYCLER" by its victims, sent my Windows XP machine into oblivion. Not only did it take control of the machine and spawn so many processes that the system would crash itself, it actually deleted everything from the hard disk before it killed the system entirely. I raged - my month-old work, full of precise calculations and a lot of design details, was just gone. I started cursing, which always ended in weeping: "Why isn't there an OS that can withstand all these viruses, even if it looks like old DOS!" At that time, my roommate was my cousin, who had used Kubuntu in the past and currently had SUSE dual-booted on his laptop. He called me over and started talking about this thing called Linux and how it's different but de facto has no viruses. Well, show me this Linux - my thought was that it was probably so ancient and little-used that it would look like something from the pre-Windows 3.1 era. But when SUSE booted up, it had a much more beautiful UI (it was KDE, and compared to XP it looked like the most professional OS ever). So I was thrilled; I installed openSUSE, found some rough edges (I knew immediately that my work with professional CAD systems would not be possible on Linux machines), but overall I was sold. After that he even talked to me about distros. Wait, WTF, distros?! So, he showed me distrowatch.com. I was amazed.
Not only was there a better OS than Windows - there were dozens, hundreds of them. After some poking around I installed Debian KDE - and it felt great, working better than openSUSE, but now I was, like most newbies, on fire to try more distros. So I went around with Fedora, Mandriva, CentOS, Ubuntu, Mint, and PCLinuxOS, and at the beginning of 2008 I stumbled upon the Debian docs that talked about GNU and the GNU Manifesto. To be clear, as a high-school kid I was always very much attached to the idea of freedom, but I had started losing faith by the time I reached the faculty (the Internet still wasn't taking up too much of our time here; youth still spent most of the day outside). So the GNU Manifesto was really a big thing for me, and Debian is a social bastion of freedom. Debian (now with GNOME 2) was installed on my machine. With all that hackerdom around in Debian, I started trying to dig into some code. I had never read a book on coding (to this day I still haven't started and finished one), so after a few days I decided to code Tetris in C++, thinking I would finish it in two days at most (that feeling that you are a powerful and very bright person) - I finished it after one month, in much pain. So instead I learned about keeping a Debian system running and explored some new packages. I got thrilled over radiotray and slimvolley (I even held a tournament in my dorm room), started helping on #debian, was very active in conversations with others about Debian, and even installed it on a few laptops (I became the de facto technical support for the users of those laptops :D ). Then came 2010, which, together with the negative flow from the second half of 2009, started to crush me badly. I had been promised a trip to Norway to continue my studies in robotics, and the professor lied (that same professor is still on the faculty, even after he was caught in a big corruption scandal over buying robots - he bought 15-year-old robots from the UK, although he got money from Norway to buy new ones).
My relationship came to a hard end and had a big emotional impact on me. I lost a year at the faculty. My father stopped financing me and stopped talking to me. My depression came back. Alcohol took over me. I was drunk every day just not to feel anything. Then came the end of 2010, when I somehow got hold of the information that DebConf would be in Banja Luka. WHAT?! DebConf in the city where I live. I got into #debconf, and in December 2010/January 2011 I became part of the famous "local local organizers". I was still getting hammered by alcohol, but at least I was getting out of depression. IIRC I met Holger and Moray in May and had a great day (a drop of rakia that was too much for all of us), and there was something strange about the way they behaved. Beautiful but strange. Both were sending out a unique energy of liberty, although I am not sure they were aware of it. Later, during DebConf, I felt that energy from almost all Debian people, which I can't explain. I don't feel it today - not because it's not there, but because I think I have integrated so much into the Debian community that it's now a natural feeling; people here who are close to me say they feel it when I talk about Debian.
DebConf time in Banja Luka was awesome. First I met Phil Hands and Andrew McMillan, who were a crazy team, while the local local team was working hard (I even threw up during the work in Banski Dvor, because of all the heat and probably a lack of sleep due to excitement). I also met the crazy Mexican Gunnar (aren't all Mexicans crazy?), played Mao (never again, thank you), and hung around smart but crazy people (love you all), among whom I must mention Nattie (a bastion of positive energy), Christian Perrier (who coordinated our Serbian translation effort), Steve Langasek (who asked me to find a physiotherapist for his co-worker Mathias Klose, IIRC), Zach (not at all an important guy at that time), Luca Capello (who gifted me a swirl on my birthday), and so many others that just naming them would be a post in itself. During DebConf there were also some hard times: my grandfather died on 6 July and I couldn't attend the funeral, so I still carried that sadness in my heart, and Darjan Prtic, a local team member who came from Vienna, committed suicide on my birthday (23 July). But DebConf as a conference was great, and more importantly the Debian community felt like a family - and Meike Reichle told me that it was one. The night it finished, Vedran Novakovic and I cried. A lot. Even days after, I would get up in the morning with the feeling that I needed to do something for DebConf. After a long time, I felt alive. By the end of the year, I adopted a package from Clint Adams, and Moray became my sponsor. In the last quarter of 2011 and the beginning of 2012, I (as part of the LUG) gave talks about Linux, held a Linux installation in the Computer Center for the first time ever, and installed Debian on more machines. Now fast-forwarding with some details: I was also at DebConf13 in Switzerland, where I met some great new friends such as Tincho and Santiago (and many, many more); Santiago was also my roommate in Portland at the previous DebConf. In Switzerland I had a really great and awesome time.
Year 2014 - I was also at DebConf14, now maintain a few more packages, and have applied to become a DD; I met some new friends, among whom I must single out Apollon Oikonomopoulos and Costas Drogos, whose friendship is already deep after such a short time, and I already know they are life-long friends. Also thanks to Steve Langasek, because without his help I wouldn't have been in Portland with my family, and he also gave me an Arduino. :) 2015 - I am currently at my village residence, have 5 years of working experience as a developer thanks to Debian, and still have a lot to go, learn, and do, but my love for the Debian community is an order of magnitude bigger than when I thought I loved it most. I am also going through my personal evolution, and people from Debian showed me how to fight for what you care about, so I plan to do so. I can't write everything or name all the people I met, but believe me when I say that I remember most of it, and all of you impacted my life, for which I am eternally grateful. Debian and its community literally saved my life, sprang new energy into me, and changed me for the better. Debian's social impact is far bigger than its technical one, and when you know that Debian is a bastion of technical excellence, you can maybe picture the greatness of Debian. Some of the greatest minds are in Debian, but the most important thing isn't the sheer amount of knowledge; it's the enormous empathy. I just hope that in the future I can show more people what Debian is, and find lost souls like me, to give them hope, to show them that we can make the world a better place and that everyone is capable of living and doing what they love. P.S. To this day I am still hoping and waiting to see Bdale write a book about Debian's history - a book in which I think many of us would admire the work done by project members, laugh about many situations, and have fun reading about a project that had every reason to fail and yet stands stronger than ever, with roots deep in our minds.

4 April 2014

Michael Vogt: apt 1.0

APT 1.0 was released on 1 April 2014 [0]! The first APT version was announced on 1 April exactly 16 years ago [1]. The big news for this version is that we included a new apt binary that combines the most commonly used commands from apt-get and apt-cache. The commands are the same as their apt-get/apt-cache counterparts but with slightly different configuration options. Currently the apt binary supports the following commands:
Here is what the new progress looks like in 1.0:
[screenshot: apt-progress]
You can enable/disable the install progress via:
# echo 'Dpkg::Progress-Fancy "1";' > /etc/apt/apt.conf.d/99progressbar
If you have further suggestions or bugreport about APT, get in touch and most importantly, have fun!
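The announcement does not reproduce the command list here; as a rough guide, the combined binary mirrors its apt-get/apt-cache counterparts (this summary is mine, not from the release notes, so the exact set at 1.0 may differ):

```
apt list            # listing packages (new-style output)
apt search <term>   # like apt-cache search
apt show <pkg>      # like apt-cache show
apt install <pkg>   # like apt-get install
apt remove <pkg>    # like apt-get remove
apt update          # like apt-get update
apt upgrade         # like apt-get upgrade
```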

12 October 2013

Michael Vogt: apt 0.9.12

The recently released apt 0.9.12 contains a bunch of good stuff, bugfixes, and cleanups. But there are two new features I particularly like. The first is the new --with-new-pkgs parameter for the upgrade command:
# apt-get upgrade --with-new-pkgs
that will install new dependencies on the upgrade but never remove
packages. A typical use-case is a stable system that gets a kernel
with a new kernel ABI package. The second is show-progress for
install/remove/upgrade/dist-upgrade which will show inline progress
when dpkg is running to indicate the global progress.
# apt-get install --show-progress tea
...
Selecting previously unselected package tea-data.
(Reading database ... 380116 files and directories currently installed.)
Unpacking tea-data (from .../tea-data_33.1.0-1_all.deb) ...
Progress: [ 10%]
Progress: [ 20%]
Progress: [ 30%]
Selecting previously unselected package tea.
Unpacking tea (from .../tea_33.1.0-1_amd64.deb) ...
Progress: [ 40%]
Progress: [ 50%]
Progress: [ 60%]
Processing triggers for doc-base ...
Processing 2 added doc-base files...
Registering documents with scrollkeeper...
...
Processing triggers for man-db ...
Setting up tea-data (33.1.0-1) ...
Progress: [ 70%]
Progress: [ 80%]
Setting up tea (33.1.0-1) ...
Progress: [ 90%]
Progress: [100%]
For the install progress, there is also a new experimental option
Dpkg::Progress-Fancy. It will display a persistent progress status bar in the last terminal line. It works like this:
# apt-get -o Dpkg::Progress-Fancy=true install tea
This kind of information is obviously most useful for complex operations like big installs or (release) upgrades.

22 May 2013

Lisandro Damián Nicanor Pérez Meyer: Debian/Ubuntu packages caching and mobile workstations

Not so long ago I read Dmitrijs' blog post on how to configure apt-cacher-ng to advertise its service using Avahi. As I normally use my laptop at home and at work, and both networks have apt-cacher-ng running, I decided to give it a try.

I have been administering apt-cacher-ng for three networks so far, and I find it a really useful tool. Then, thanks to the aforementioned blog post, I discovered squid-deb-proxy. I don't use squid, so it doesn't fit my normal use case, but some people will surely find it interesting.

But I found its client package really interesting. It will discover any service providing _apt_proxy._tcp through Avahi and let apt use it. But the package wasn't available in Debian, so I contacted Michael Vogt to see if he was interested in putting at least the client into Debian's archive. He took the opportunity to upload the full squid-deb-proxy, so thanks a lot Michael :-)

I then filed a wishlist bug against apt-cacher-ng asking it to provide the Avahi configuration for publishing the service, which Eduard included in its latest version. So thanks a lot Eduard too!

tl;dr
You now only need apt-cacher-ng >= 0.7.13-1 and avahi-daemon installed on your server, and your mobile users just need squid-deb-proxy-client. Then the proxy autoconfiguration for apt will just work.
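For reference, the server side of this boils down to an Avahi service file. The following is a sketch only: the exact file name, service name and layout that apt-cacher-ng >= 0.7.13-1 actually ships may differ, though 3142 is apt-cacher-ng's default port and _apt_proxy._tcp is the type squid-deb-proxy-client browses for:

```xml
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- %h expands to the hostname, so clients can see which box offers the proxy -->
  <name replace-wildcards="yes">apt-cacher-ng on %h</name>
  <service>
    <!-- the service type that squid-deb-proxy-client discovers -->
    <type>_apt_proxy._tcp</type>
    <port>3142</port>
  </service>
</service-group>
```

Dropped into /etc/avahi/services/, avahi-daemon picks a file like this up automatically; squid-deb-proxy-client on the laptop then resolves the advertised host and port and hands them to apt as a proxy.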

Once again, thanks a lot to the respective maintainers for allowing this into Jessie :-)

Gotchas
Yes, there are still some rough edges. On one of the networks I'm behind a proxy. When configuring my machine to use apt-cacher-ng's service as a proxy through apt.conf, apt-listbugs would just work. But now, using the service as discovered by squid-deb-proxy-client, apt-listbugs just times out. Maybe I need to file another bug yet...

16 May 2013

Michael Vogt: git fast-import apt

Due to popular demand I moved the Debian apt and python-apt repositories from bzr to git today. Moving was pretty painless:
$ git init
$ bzr fast-export --export-marks=marks.bzr -b debian/sid /path/to/debian-sid | git fast-import --export-marks=marks.git
And then a fast-import for the debian-wheezy and debian-experimental branches too. Then a
$ git gc --aggressive
(thanks to Guillem Jover for pointing this out) and that was it. The branches are available at:

17 December 2012

Dominique Dumont: Be wary of optimised for devices

Hello. This story began with linphone behaving weirdly: every now and then, the mouse pointer would become stuck: the pointer moves, but clicking always activates the same widget. Since this always happened in the middle of a phone conference for work, it was infuriating. Long story short, I finally found that my Plantronics USB headset was responsible for this weird behavior. This headset is connected to the USB bus and is seen by the computer as a USB sound card. The bug was triggered by pressing the mute button on the headset. Suspicious of this USB device, I used lsusb to find the features provided by the headset:
$ lsusb -v -s 1:3 2>&1 | grep InterfaceClass
bInterfaceClass 1 Audio
bInterfaceClass 1 Audio
bInterfaceClass 1 Audio
bInterfaceClass 1 Audio
bInterfaceClass 1 Audio
bInterfaceClass 3 Human Interface Device
Audio devices were expected, but why a HID device? This device is a headset, not a keyboard or a mouse. Testing further, I also saw some numbers popping up on my screen whenever I pressed the mute button. The only way to get the mouse back was to unplug the headset. A quick Google search gave a solution: set up X11 to ignore the headset. Problem solved. But this did not answer the question regarding the HID device. Then I got a hint in the form of a sticker glued to the headset's USB plug: "Optimized for Microsoft Lync". This page gave the answer: an "optimized for Lync" device provides "mute/unmute across PC and device". I can only guess that when the mute button is pressed, some data is sent from the HID interface. Unfortunately, the window manager does not like to be on the receiving end of this data. The moral of this story is: "optimized for" something actually means "not standard".
OK, there's a bug somewhere. The comments on this blog have convinced me that I went too far with the moral of this story. It is now struck through instead of plainly removed, so the comments still make sense. Thanks everybody for the constructive comments.
All the best
Tagged: wtf

29 March 2011

Steve Langasek: Multiarch Monomania

So the other day, I was able to do this in an Ubuntu natty amd64 chroot for the first time.
# cat > /etc/apt/apt.conf.d/multiarch-me
APT::Architectures { "amd64"; "i386"; };
^D
# cat >> /etc/dpkg/dpkg.cfg
foreign-architecture i386
^D
# apt-get update
# apt-get install flashplugin-installer:i386
Reading package lists... Done
Building dependency tree       
Reading state information... Done
[...]
The following NEW packages will be installed:
  flashplugin-installer:i386 flashplugin-nonfree:i386 gcc-4.5-base:i386
  libatk1.0-0:i386 libavahi-client3:i386 libavahi-common-data:i386
  libavahi-common3:i386 libc6:i386 libcairo2:i386 libcomerr2:i386
  libcups2:i386 libdatrie1:i386 libdbus-1-3:i386 libdrm2:i386
  libegl1-mesa:i386 libexpat1:i386 libfontconfig1:i386 libfreetype6:i386
  libgcc1:i386 libgcrypt11:i386 libgdk-pixbuf2.0-0:i386 libgl1-mesa-glx:i386
  libglib2.0-0:i386 libgnutls26:i386 libgpg-error0:i386 libgssapi-krb5-2:i386
  libgtk2.0-0:i386 libgtk2.0-common libice6:i386 libjasper1:i386
  libjpeg62:i386 libk5crypto3:i386 libkeyutils1:i386 libkrb5-3:i386
  libkrb5support0:i386 libnspr4:i386 libnspr4-0d:i386 libnss3:i386
  libnss3-1d:i386 libpango1.0-0:i386 libpcre3:i386 libpixman-1-0:i386
  libpng12-0:i386 libselinux1:i386 libsm6:i386 libsqlite3-0:i386
  libstdc++6:i386 libtasn1-3:i386 libthai-data libthai0:i386 libtiff4:i386
  libudev0:i386 libuuid1:i386 libx11-6:i386 libx11-data libx11-xcb1:i386
  libxau6:i386 libxcb-dri2-0:i386 libxcb-render0:i386 libxcb-shape0:i386
  libxcb-shm0:i386 libxcb-xfixes0:i386 libxcb1:i386 libxcomposite1:i386
  libxcursor1:i386 libxdamage1:i386 libxdmcp6:i386 libxext6:i386
  libxfixes3:i386 libxft2:i386 libxi6:i386 libxinerama1:i386 libxrandr2:i386
  libxrender1:i386 libxt6:i386 libxxf86vm1:i386 x11-common zlib1g:i386
0 upgraded, 78 newly installed, 0 to remove and 3 not upgraded.
Need to get 15.1 MB/15.6 MB of archives.
After this operation, 48.9 MB of additional disk space will be used.
Do you want to continue [Y/n]?
It is a truly heady experience, after so many years of talking about the need to properly support multiarch in Debian and Ubuntu, to see support for cross-installation of packages come to fruition. If you've talked to me any time in the past couple of weeks and noticed it's a little hard to get me to change the subject, well, that's why. Many who have grown accustomed to Debian and Ubuntu's lack of support for installing i386 packages on amd64 (or vice versa) may wonder what the fuss is about. (Whereas others who are well versed in distributions such as Red Hat and SuSE may laugh and wonder what took us so long...) So maybe a few words of explanation are in order. If you've ever installed ia32-libs on an amd64 machine anywhere; if you've ever noticed a bug where ia32-libs didn't work right because of wrong system paths, or had to file a request for another library to be added to ia32-libs because it wasn't included in the set of libraries Debian decided to package up in this grotesque, all-in-one 32-bit compatibility bundle; if you've ever decided not to install a 64-bit OS on your perfectly 64-bit-capable hardware out of concern that you wouldn't be able to run $random_32bit_only_application; multiarch is for you. If you've gotten stuck maintaining a lib32foo "biarch" package in Debian due to popular demand, multiarch is definitely for you. :) If you've ever cross-compiled software for a different Debian architecture than the one you were running, multiarch is for you. If you've ever wanted to run binaries for a different architecture under emulation, and found it awkward to set up the library dependencies, multiarch is for you, too. Because although the .deb world may be a little late to the party, we're also naturally taking things much further than anyone's done with rpm. Multiarch won't just give you the ability to install 32-bit libs on 64-bit systems; it'll give you the ability to install libs for any known architecture on any system.
And a whole lot of pain just falls out of the equation in the process. A cross-compiling environment looks the same as a native-compiling environment. An emulated system looks the same as a native system. We can start to seriously consider cross-grading systems from one architecture to another. And all this is happening now. The groundwork is there in Ubuntu natty. Wheezy will be the release that brings multiarch to Debian. When dpkg 1.16.0 is uploaded to unstable real soon now, the bootstrapping will begin. I am immensely grateful to everyone who's helped make multiarch a reality - to Tollef, Matt and others for seeding the vision; Aurelien, Matthias and Arthur for their work to ready the toolchain; David and Michael for the apt implementation; Guillem and Raphael for the dpkg implementation, and Linaro's support to help make this possible; and the many other developers who've helped to refine this design over the years in numerous other BoFs, sessions and mailing list threads. I'm excited to find out what the Debian community will do with multiarch now that it's upon us. Christian, maybe you should start a pool for how long it will take before all the libraries shipped in ia32-libs have been converted to multiarch and we can drop ia32-libs from the archive?

13 August 2010

Julian Andres Klode: APT2 is now UPS

APT2 is now called UPS (Universal Package System). The name is inspired by the company that delivers packages in those brown trucks, by the Image Packaging System (IPS) of OpenSolaris, and by mvo writing "ups" after I proposed "upt" (über package tool) on IRC. It's definitely better than my first thought, which was moo (and libmoo). Update: OK, let's cancel this rename insanity.
Filed under: APT2

21 October 2009

MJ Ray: Royal Mail Rub Our Noses in it

So after Royal Mail shut down useful community websites, causing MP comments on the idiocy of Royal Mail, I was rather surprised to get this little thing in the post today: a postmark advert for "Celebrating 50 years of POSTCODES 1959-2009". So this is what Royal Mail does with some of the money it makes from its claimed monopoly on postcode databases: it spends it on ink to celebrate postcodes in the bit where they can't sell adverts. After the postcode-takedown, I suggested deleting postcodes from all our co-op's websites. Instead, another member has persuaded me to contribute to something like Free the Postcode, which I first saw on the CycleStreets blog. As well as slapping its customers, Royal Mail is also currently taking on its workers, who are campaigning for sustainable jobs and against the recent increase in bullying and harassment cases. I already send most of my letters, invoices and so on electronically, since our three nearest post offices closed last year. I've noticed Edinburgh Bicycle Co-op switching to DPD and Terry Lane suggesting more online use. Are those good approaches? How are you adapting to the postal delays? Have you put your postcode into Free the Postcode or a similar site?

27 May 2009

Stephan Peijnik: update-manager weekly update #0

It has been more than a month since I last wrote about my work on update-manager during this year's Google Summer of Code, and I am somewhat ashamed I wasn't able to provide you with updates more regularly. So first of all: yes, I did do some work, and yes, there has been quite some progress. Basically, both private and university matters have kept me from writing, and that's why I'd like to start this series of weekly updates today.
This series is meant to summarize what has happened during a week of writing code and give you an overview of what's going on. This first issue, however, will sum up the past month. So let me begin explaining what has happened since my last post.

update-manager bazaar repository: All code I have written so far is available through a public bazaar branch on launchpad.net. My branch's page can be found here and provides you with its history and, of course, instructions on how to obtain the code. The location is only temporary though, as I am going to move hosting over to alioth.debian.org. This is on my task list for next week.

modular design: I have ripped apart nearly all of update-manager and put it back together in a more modular way, which should make implementing new frontends or backends easier, whilst also simplifying code maintenance. The new design consists of four major parts.

backend implementation: When I started working on update-manager it relied heavily on synaptic and used it to do the dirty work. However, together with my mentor, mvo, I decided to drop synaptic support and instead concentrate on using python-apt. This means that the only backend implementation right now is a python-apt backend. The python-apt backend is currently a work in progress, but already includes some basic functionality. Right now it can (re-)load the package cache and package lists and is able to provide the frontend with a list of upgradable packages.
Whilst implementing these functions I noticed some shortcomings of python-apt itself and fixed those; mvo has included the fixes in his python-apt branch at Launchpad.

frontend implementation: I started re-implementing the Gtk frontend provided by the current update-manager; right now it visualizes the package-cache reloading process and presents users with a list of upgradable packages. However, that's pretty much all of the functionality it includes at the moment, which is why implementing more functions is at the top of my todo list. Additionally, I have ported the text frontend, as included in Ubuntu, to the new modular system, and this frontend's code really shows how easy adding a frontend to the new modular design is. It contains the same functionality as the Gtk frontend.

distribution-specific code: The core described above no longer includes any distribution-specific code, which is the main focus of this project. The distribution-specific implementations contain classifiers for update categories for both Debian and Ubuntu, though I have focused on getting things right for Debian for now. These classifiers allow the frontend to let users know which kind of update they are about to install: a security update, a recommended upgrade or a third-party (unofficial) upgrade.

documentation: As update-manager was poorly (read: hardly at all) documented, I have started documenting the API using sphinx. Right now the generated documentation cannot be found anywhere yet; this should change as soon as an alioth project for update-manager has been created.

next week's tasks: I would also like to provide you with my task list for the coming week. The list, ordered by priority, is: As you can see, this list is rather short.
This can mainly be attributed to a few university assignments; instead of providing a long list of tasks which I probably won't be able to finish, I'd rather keep the list short and hopefully also get things done that aren't on it.
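The backend/frontend split described above can be sketched in a few lines. This is a hypothetical illustration, not the real update-manager API: the class and method names are made up, and a stub stands in for the python-apt backend so the sketch runs anywhere.

```python
# Illustrative sketch of the modular design: backends supply package data,
# frontends only visualize it. Names here are assumptions, not the real API.

class Backend:
    """Distribution-neutral package operations; in update-manager the
    python-apt backend would implement these on top of apt."""

    def reload_cache(self):
        raise NotImplementedError

    def get_upgradable_packages(self):
        raise NotImplementedError


class TextFrontend:
    """A frontend pulls data from whichever Backend it is given."""

    def __init__(self, backend):
        self.backend = backend

    def show_updates(self):
        self.backend.reload_cache()
        return ["upgradable: %s" % p
                for p in self.backend.get_upgradable_packages()]


class StubBackend(Backend):
    """Stand-in for the python-apt backend, for demonstration only."""

    def reload_cache(self):
        pass  # a real backend would re-read the apt cache here

    def get_upgradable_packages(self):
        return ["bash", "libc6"]


print("\n".join(TextFrontend(StubBackend()).show_updates()))
```

The point of the design is visible in the last line: adding a new frontend (Gtk, text, ...) means writing one small class against `Backend`, without touching any apt-specific code.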

29 November 2008

Joost Yervante Damad: Linux, Debian & Bluetooth

I was getting sick of all the wires on my desk, and I needed a new keyboard anyway,
so I bought a Logitech Bluetooth keyboard and mouse (MX 5000). It's supposed to work just fine.

The keyboard comes with a Bluetooth dongle, but it's rather silly not to use the Bluetooth built into my laptop, so I never tried the dongle.

I was running linux-image-2.6.26-1-amd64 on my laptop and it had serious issues with Bluetooth. It was very hard to get the device to pair; it involved a lot of manual probing/forcing.

This morning I upgraded the kernel to 2.6.27.7 from kernel.org and it all started working flawlessly...

P.S.: it might be fun to see if I can find a way to make its LCD display work in Linux ;-)

16 September 2008

Christoph Haas: MySQL to PostgreSQL - a Bacula conversion odyssey

Why is it that the seemingly most simple things always turn out to be the most annoying? This time I “just” wanted to get rid of one of my last MySQL databases and move it over to PostgreSQL: the Bacula catalog that records which file I backed up when, and to which storage medium. I tried MySQL’s “mysqldump” and its PostgreSQL compatibility option - but apparently MySQL developers know nothing about PostgreSQL. Then I tried “sqlfairy” - and found myself hard-booting my system after it ate 2 GB of swap and died while converting 500 MB of data. So finally I asked in #bacula and was told to try CSV (comma-separated values) as an intermediate format. Yuck… that satan-spawned format that reminds me of my dark past? Okay. First dump the catalog from MySQL:
for table in BaseFiles CDImages Client Counters Device File Filename FileSet Job JobMedia Location LocationLog Log Media MediaType Path Pool Status Storage UnsavedFiles Version; do
    mysqldump -u root -pmypassword -T. bacula $table
done
(Okay, okay, this is not comma- but tab-separated. But that’s even better for running the COPY FROM command later.) This creates an ‘.sql’ (the schema) and a ‘.txt’ (the rows/records) file for each table in the current directory. Just don’t try to apply the schema to PostgreSQL. Instead, better create a new schema; Bacula ships with a script for that purpose. Unless you already have the PostgreSQL database for Bacula ready, you should run something like…
/usr/share/bacula-director/make_postgresql_tables -h localhost -U bacula
…and…
/usr/share/bacula-director/grant_postgresql_privileges -h localhost -U bacula
Now on to reading the tab-delimited data into PostgreSQL. The import via the COPY command must be done with administrator privileges! And it’s important to explicitly state which columns correspond to which table columns (see the respective ‘.sql’ files), or otherwise you’ll get chaos. Of course this only has to be done for .txt files larger than 0 bytes.
Oh, and the filename has to be absolute. Example: Unfortunately my “Job.txt” and “Media.txt” contained datestamp entries like “0000-00-00 00:00:00”, which are not valid for PostgreSQL. So I went into Vim and replaced them: s/0000-00-00 00:00:00/1970-01-01 00:00:00/g. Clear the table (DELETE FROM job) and import again. And finally it’s important to get the sequence numbers right, as described in the Bacula manual. Moral: spend two extra minutes to start with PostgreSQL right away instead of bothering with conversions later. And never assume converting from one database to another will work - just because both have “SQL” in their names.
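The Vim substitution above is easy to script for larger dumps. A minimal sketch (the sample row is made up; a real Job.txt row has many more tab-separated fields):

```python
def fix_zero_dates(text):
    # PostgreSQL rejects MySQL's zero datestamps; map them to the Unix
    # epoch, the same substitution as the Vim command above.
    return text.replace("0000-00-00 00:00:00", "1970-01-01 00:00:00")

# Hypothetical tab-separated row as produced by mysqldump -T:
row = "42\tBackupJob\t0000-00-00 00:00:00\tF"
print(fix_zero_dates(row))
```

Run over the whole file (read, fix, write back) before the COPY import, and valid timestamps pass through untouched.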

7 June 2008

Lucas Nussbaum: Datamining Launchpad bugs

One thing that is really annoying about Launchpad is its lack of interfaces to the outside world. No SOAP interface (well, I think work is being done on this), no easy way to export all bugs. The only way to get all the bug data in a machine-parseable form is to first fetch this URL, and then, for each bug number listed there, make another request for https://launchpad.net/bugs/$bug/+text. I filed a bug a few weeks ago, asking for a simpler way to get all the data. A Launchpad dev suggested doing what I just described (fetch all the numbers, then fetch the data for each bug). I originally dismissed the idea because it just sounded too dirty/aggressive/whatever, but since I needed to practice Python, I gave it a try. And actually, it works: I was able to get all the data in less than an hour (but that probably put some load on Launchpad ;-)). That made it possible to write cool SQL queries. Bugs with the most subscribers:
select bugs.bug, title, count(*) as subscribers
from bugs, subscribers
where bugs.bug = subscribers.bug
group by bugs.bug, title
order by subscribers desc
limit 10;
bug | title | subscribers
188540 | firefox-3.0 crashed with SIGSEGV in g_slice_alloc() | 291
154697 | package update-manager 1:0.81 failed to install/upgrade: ErrorMessage: SystemError in cache.commit(): E:Sub-process /tmp/tmpjP6Bsx/backports/usr/bin/dpkg returned an error code (1), E:Sub-process /tmp/tmpjP6Bsx/backports/usr/bin/dpkg returned an error code (1), E:Sub-process /tmp/tmpjP6Bsx/backports/usr/bin/dpkg returned an error code (1), E:Sub-process /tmp/tmpjP6Bsx/backports/usr/bin/dpkg returned an error code (1) | 278
141613 | npviewer.bin crashed with SIGSEGV | 262
59695 | High frequency of load/unload cycles on some hard disks may shorten lifetime | 182
215005 | jockey-gtk crashed with AttributeError in enables_composite() | 171
216043 | compiz.real crashed with SIGSEGV | 168
121653 | [gutsy] fglrx breaks over suspend/resume | 144
1 | Microsoft has a majority market share | 142
145360 | compiz.real crashed with SIGSEGV | 134
23369 | firefox(-gnome-support) should get proxy from gconf | 126
Bugs where someone is subscribed twice:
select bug, subscriber_login
from subscribers
group by bug, subscriber_login
having count(*) > 1;
bug | subscriber
33065 | mvo
48262 | mvo
144628 | skyguy
158126 | benekal
213741 | sandro-grundmann
216043 | jotacayul
221630 | kami911
(Yes, that forced me to change a primary key) Packages with the most bugs:
select package, count(distinct bug) as cnt
from tasks
group by package
order by cnt desc
limit 10;
package | number
ubuntu | 5392
linux | 1464
linux-source-2.6.20 | 1034
update-manager | 826
linux-source-2.6.22 | 724
firefox | 684
kdebase | 673
firefox-3.0 | 668
ubiquity | 590
openoffice.org | 566
Bugs with the shortest titles:
select bug, title, length(title) as len
from bugs
order by len asc
limit 5;
bug | title | length
190560 | - | 1
160381 | uh | 2
224350 | css | 3
133621 | gnus | 4
138052 | pbe5 | 4
If you want to play too, you can fetch the SQLite3 DB (5.8M, lzma-compressed), the DB creation script, and the script that fetches the bugs and imports them into the DB. Comments about my code would be very much appreciated (stuff like “oh, there’s a better way to do that in Python!”), as I’m not very confident about my pythonic skills. :-) Update: apparently, I’m not really fetching all the bugs. I’m getting the same results as when you just press “Search” on https://launchpad.net/ubuntu/+bugs. But if you click on “Advanced search”, then select all the bug statuses, and click search, you get a lot more bugs (154066 vs 49031). If someone knows which bugs are excluded by the default search, I’m interested! Update 2: Got it. Apparently the default search doesn’t list bugs that have all their “tasks” marked “Won’t fix”, “Fix Released”, or “Invalid”.
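The fetch-and-import loop described above hinges on parsing the +text pages, which are roughly "key: value" lines. A minimal sketch of such a parser; note the field names in the sample are illustrative, and real +text output has more structure (per-task blocks, subscriber lists) that a full importer must handle:

```python
# Parse one Launchpad "+text" bug page into a dict of top-level fields.
# Indented lines (continuations, subscriber entries) are skipped here.

def parse_bug_text(text):
    bug = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            bug[key.strip()] = value.strip()
    return bug

# Hypothetical sample of the format:
sample = """bug: 59695
title: High frequency of load/unload cycles on some hard disks
tags: hardware"""
print(parse_bug_text(sample)["title"])
```

Feeding each parsed dict into the SQLite tables (bugs, tasks, subscribers) then gives the database the queries above run against.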

11 February 2008

Andreas Schuldei: apt https transport for etch?

Over the weekend Michael Vogt (mvo) and I produced a backport of the apt https transport to etch. In lenny and sid that functionality is in a separate package (apt-transport-https), but for etch it seemed easier to just put it into the existing apt package and rebuild that. The advantage of not backporting all of testing's apt is, of course, avoiding the horrible intrusiveness of the apt ABI changes and all the reverse dependencies they would drag in.

Since this can't go into backports.org (it is not a straight backport from testing to etch), I wonder if it is worthwhile to make it publicly accessible in an alternative way.

An https transport is useful in settings where sensitive packages (e.g. from a private repository) need to be accessed across insecure networks.

Please tell me if you want to use this so I (and mvo, for Ubuntu perhaps?) can put the package up somewhere. Is there a standard place for non-standard backports yet?

29 May 2007

Ian Haken: GSoC, vlosuts, and other acronyms

Summer of Code officially started today, and as such it was time to start hacking out some code. For the past couple of weeks I've been casually looking over an upgrade testing suite for Ubuntu (website, bzr repo). It's fairly sophisticated and does a lot of things that I'd like to bring to Debian. However, after considering it for a while, I decided that I would prefer to start the Debian version from scratch. This decision came about for a few reasons:

At any rate, I will probably end up reusing many pieces of the code from that tester.

Updates posted here as progress continues.

16 April 2007

Jonathan McDowell: In search of a HDTV PVR

As mentioned in the past I got a new TV this year. It's an LG 32LCD2DB with a native panel resolution of 1366x768. As well as the usual SCART/component inputs it has VGA and HDMI. Currently my homebrew PVR outputs to the TV via a full featured DVB card with onboard MPEG2 decoder, but this limits me to 720x576 output and a basic OSD. My current box is a PIII-800 with a basic onboard SiS graphics chipset, so it's not got the grunt to drive the TV directly itself. I'm thus on the lookout for an upgrade. Let's start with my constraints. I have a Silverstone LC-02 case. I'd previously planned to upgrade to something bigger, but the discovery that LinITX can provide a variety of riser cards to suit PCI-E/PCI as needed means I don't have to. The LC-02 will take a MicroATX board and provide me with 2 slots, one of which is taken up by the PCI DVB card. This box lives in my living room, so it needs to be quiet - the current incarnation is a bit too loud but I believe that's mainly due to using a standard heatsink+fan assembly, so I'm hoping that something specifically designed to be quiet will be better. An lirc dongle and wireless keyboard/mouse are already sorted. I'd like to expand the use of the PVR to be a more general PC as well; up until now it's just been a PVR, but being able to check email/IRC would be nice, as would the ability to play games. Up until now I've run VDR, which takes full advantage of the MPEG2 decoder, but I'd like to give MythTV a decent try. All of this points to wanting more than just a HD hardware MPEG2/MPEG4 decoder. First lets start with the graphics chipset options. The contenders seem to be ATI, Intel, nVidia and Via. From my reading ATI lack any form of XvMC support in both their Free and binary blob drivers, so we can discount them. nVidia have support, but only in their binary drivers. The Nouveau project have support on their TODO list, but I can't see any sign of it yet. 
I'd really rather avoid non-free drivers, so that leaves us with just Intel and Via. I really like the look of the Intel G965/X3000. They've opened up their driver development and it seems to be quite active from reading the Xorg mailing lists. However they don't have any XvMC support; Keith Packard on the xorg list in February says "For media purposes, the current drivers aren't taking full advantage of the hardware yet", though later on does say "XvMC is on several of our lists; I don't know when someone will pick it up and implement it". Unfortunately I get the feeling it's not a high priority; I've seen/heard several comments about the fact that any processor that'll get hooked up to this chipset will be more than fast enough to cope without the assistance. That may be true, but I don't want my CPU to be spending all its time trying to decode MPEG2 or MPEG4 at 1920x1080i while the GPU sits mostly unused. I'd much rather the GPU was used to the full giving me spare CPU cycles, or perhaps the ability to clock it a bit slower for common use cases, reducing power and heat. I'm keeping Intel on my list because I hope they do get round to proper support sooner rather than later. They've also talked about the fact the GPU is programmable which would make it perfectly possible to add MPEG4 acceleration to the chipset; that would be pretty sweet. It's a bit odd to end up with Via on the list. I mainly think of them as associated with MiniITX boards, but they do also produce various motherboards for Intel/AMD CPUs with integrated graphics. They managed to shoot themselves in the foot for a period of time by limiting the maximum accelerated resolution to 1024x1024, but recently they've released the CX700M /P4M900 chipsets which appear to do 2048x2048. The OpenChrome project have these up and running to a basic level from what I can tell, but work on XvMC support appears to be progressing with various patches and reports flying about on the list. 
So currently it looks like Via has the best option (albeit through the Free software community, rather than a concerted effort on their part), while Intel has something that could be very appealing if the driver were more complete. The other factor is the CPU. Three contenders this time: AMD, Intel and Via. My current PIII-800 is rated at 20.8W TDP. Modern processors are in general a lot hotter, but I need to keep in mind that I'm looking for a quiet machine, and that the case doesn't have a huge amount of room for big cooling systems. AMD's dual core chips all start at 89W or more, though some of their single core lower end CPUs are more like 67W. Of course the Intel graphics won't work with it, and I'm not 100% sure Via has a suitable Athlon chipset at present either (the 2 I found were for Intel/Via). Intel's P4 offerings aren't much better for power consumption, with many of the highest end chips being over 100W. Core 2 Duo looks promising: 65W, so more than the current CPU, but a lot more grunt. Via comes out tops here: 20W for their 2GHz top end processor. However this is only single core and compares poorly to the equivalent clock speed of Core 2 Duo. A MiniITX board with a 2GHz C7 and a CX700M would probably make a fine PVR, but I'm not convinced about there being enough grunt for anything else. The Via chipsets appear to have more active in-progress support for media acceleration, which suggests a P4M900 + Core 2 Duo. That's contrasted with what looks to be the more powerful X3000, which actually has a manufacturer that seems to want to get full Free support out there. I'm probably a couple of months off actually making a decision; partly because I want to watch how it all plays out in the hope that the Intel chipset will get some more support, or that the Via chipset will be proven to be reliable in operation. And assuming I do make a video chipset/CPU decision I then get the joys of trying to find a motherboard with DVI + SP/DIF.
I know the G965 can do DVI out with an ADD2 card in the PCI-E slot, but I don't know if the P4M900 supports the same thing. I can't see an Intel MicroATX 965 board with SP/DIF out - there's an HD audio header, but this doesn't seem to be a direct mapping. Can I get a converter?

16 October 2006

James Morrison: Sadists

For those that don't know, Katelyn has become a sadist. Her idea of a slow 4 mile run.

Other recent events: I got learned about San Pablo. The fuzzy vodka party with some hot tub time was worth the confusion though. I wasn't as silly cycling home, though.

I hiked up Half-dome the other weekend. It was an amazing night hike. Too bad I wussed out on the cables. I think the world may be telling me to get a car, because this is the second time I've gone camping where my ride has left a day early without room for me. The last time I split my stuff up and hitched a ride home, this time I split my stuff up again, and went in one vehicle until the first BART station, then took the BART home.

Next.